3346954
https://en.wikipedia.org/wiki/Network%20calculus
Network calculus
Network calculus is "a set of mathematical results which give insights into man-made systems such as concurrent programs, digital circuits and communication networks." Network calculus gives a theoretical framework for analysing performance guarantees in computer networks. As traffic flows through a network it is subject to constraints imposed by the system components, for example: link capacity traffic shapers (leaky buckets) congestion control background traffic These constraints can be expressed and analysed with network calculus methods. Constraint curves can be combined using convolution under min-plus algebra. Network calculus can also be used to express traffic arrival and departure functions as well as service curves. The calculus uses "alternate algebras ... to transform complex non-linear network systems into analytically tractable linear systems." Currently, there exists two branches in network calculus: one handling deterministic bounded, and one handling stochastic bounds . System modelling Modelling flow and server In network calculus, a flow is modelled as cumulative functions , where represents the amount of data (number of bits for example) send by the flow in the interval . Such functions are non-negative and non-decreasing. The time domain is often the set of non negative reals. A server can be a link, a scheduler, a traffic shaper, or a whole network. It is simply modelled as a relation between some arrival cumulative curve and some departure cumulative curve . It is required that , to model the fact that the departure of some data can not occur before its arrival. Modelling backlog and delay Given some arrival and departure curve and , the backlog at any instant , denoted can be defined as the difference between and . The delay at , is defined as the minimal amount of time such that the departure function reached the arrival function. When considering the whole flows, the supremum of these values is used. In general, the flows are not exactly known, and only some constraints on flows and servers are known (like the maximal number of packet sent on some period, the maximal size of packets, the minimal link bandwidth). The aim of network calculus is to compute upper bounds on delay and backlog, based on these constraints. To do so, network calculus uses the min-plus algebra. Min-plus algebra In filter theory and linear systems theory the convolution of two functions and is defined as In min-plus algebra the sum is replaced by the minimum respectively infimum operator and the product is replaced by the sum. So the min-plus convolution of two functions and becomes e.g. see the definition of service curves. Convolution and min-plus convolution share many algebraic properties. In particular both are commutative and associative. A so-called min-plus de-convolution operation is defined as e.g. as used in the definition of traffic envelopes. The vertical and horizontal deviations can be expressed in terms of min-plus operators. Traffic envelopes Cumulative curves are real behaviours, unknown at design time. What is known is some constraint. Network calculus uses the notion of traffic envelope, also known as arrival curves. A cumulative function is said to conform to an envelope (or arrival curve) , if for all it holds that Two equivalent definitions can be given Thus, places an upper constraint on flow . Such function can be seen as an envelope that specifies an upper bound on the number of bits of flow seen in any interval of length starting at an arbitrary , cf. eq. (). 
Service curves
In order to provide performance guarantees to traffic flows it is necessary to specify some minimal performance of the server (depending on reservations in the network, or scheduling policy, etc.). Service curves provide a means of expressing resource availability. Several kinds of service curves exist, like weakly strict, variable capacity node, etc. See the surveys listed in the references for an overview.

Minimal service
Let $A$ be an arrival flow, arriving at the ingress of a server, and $D$ be the flow departing at the egress. The system is said to provide a simple minimal service curve $\beta$ to the pair $(A, D)$, if for all $t \geq 0$ it holds that
$$D(t) \geq (A \otimes \beta)(t) = \inf_{0 \leq s \leq t} \{ A(s) + \beta(t - s) \}.$$

Strict minimal service
Let $A$ be an arrival flow, arriving at the ingress of a server, and $D$ be the flow departing at the egress. A backlog period is an interval $I$ such that, at any $t \in I$, $A(t) > D(t)$. The system is said to provide a strict minimal service curve $\beta$ to the pair $(A, D)$ iff, for all $s \leq t$, if $(s, t]$ is a backlog period, then $D(t) - D(s) \geq \beta(t - s)$. If a server offers a strict minimal service of curve $\beta$, it also offers a simple minimal service of curve $\beta$.

Basic results: Performance bounds and envelope propagation
From the traffic envelope and service curves, some bounds on the delay and backlog, and an envelope on the departure flow, can be computed. Let $A$ be an arrival flow, arriving at the ingress of a server, and $D$ be the flow departing at the egress. If the flow has a traffic envelope $\alpha$, and the server provides a minimal service of curve $\beta$, then the backlog and delay can be bounded by the vertical and horizontal deviations between $\alpha$ and $\beta$:
$$b \leq v(\alpha, \beta) = \sup_{t \geq 0} \{ \alpha(t) - \beta(t) \}, \qquad d \leq h(\alpha, \beta) = \sup_{t \geq 0} \inf \{ \tau \geq 0 : \alpha(t) \leq \beta(t + \tau) \}.$$
Moreover, the departure curve has envelope $\alpha' = \alpha \oslash \beta$. Moreover, these bounds are tight, i.e. given some $\alpha$ and $\beta$, one may build an arrival and a departure such that the backlog equals $v(\alpha, \beta)$ and the delay equals $h(\alpha, \beta)$.

Concatenation / PBOO
Consider a sequence of two servers, where the output of the first one is the input of the second one. This sequence can be seen as a new server, built as the concatenation of the two other ones. Then, if the first (resp. second) server offers a simple minimal service $\beta_1$ (resp. $\beta_2$), the concatenation of both offers a simple minimal service $\beta_1 \otimes \beta_2$. The proof iteratively applies the definition of service curves, $D \geq A \otimes \beta$, together with the isotonicity and associativity of min-plus convolution. The interest of this result is that the end-to-end delay bound is not greater than the sum of local delays: $h(\alpha, \beta_1 \otimes \beta_2) \leq h(\alpha, \beta_1) + h(\alpha \oslash \beta_1, \beta_2)$. This result is known as Pay burst only once (PBOO); a numerical sketch of these bounds and of the PBOO effect is given after the list of tools below.

Tools
There are several tools based on network calculus. A comparison can be found in the literature.
The DiscoDNC is an academic Java implementation of the network calculus framework.
The RTC Toolbox is an academic Java/MATLAB implementation of the Real-Time Calculus framework, a theory quasi-equivalent to network calculus.
The CyNC tool is an academic MATLAB/Simulink toolbox, built on top of the RTC Toolbox. The tool was developed in 2004-2008 and is currently used for teaching at Aalborg University.
RTaW-PEGASE is an industrial tool devoted to the timing analysis of switched Ethernet networks (AFDX, industrial and automotive Ethernet), based on network calculus.
The Network calculus interpreter is an on-line (min,+) interpreter.
WOPANets is an academic tool combining network calculus based analysis and optimization analysis.
The DelayLyzer is an industrial tool designed to compute bounds for Profinet networks.
DEBORAH is an academic tool devoted to FIFO networks.
NetCalBounds is an academic tool devoted to blind & FIFO tandem networks.
NCBounds is a network calculus tool in Python, published under the BSD 3-Clause License. It considers rate-latency servers and token-bucket arrival curves. It handles any topology, including cyclic ones.
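As a numerical illustration of the performance bounds and of the Pay-Burst-Only-Once effect described above, the following minimal Python sketch compares the sum of per-hop delay bounds with the end-to-end bound for a token-bucket flow crossing two rate-latency servers. The parameters are invented for the example; this is an illustration, not one of the tools listed above.

```python
# Sketch (illustrative parameters, not from the article): delay bounds for a
# token-bucket flow alpha(t) = b + r*t crossing two rate-latency servers
# beta_i(t) = R_i * max(0, t - T_i), showing the Pay-Burst-Only-Once effect.

b, r = 5.0, 1.0                                # token bucket: burst and rate
(R1, T1), (R2, T2) = (2.0, 3.0), (2.5, 1.0)    # two rate-latency servers

def delay_bound(burst, rate, R, T):
    """Horizontal deviation h(alpha, beta) for token bucket vs rate-latency (rate <= R)."""
    return T + burst / R

def output_burst(burst, rate, R, T):
    """Burst of the departure envelope alpha' = alpha (/) beta, i.e. burst + rate*T."""
    return burst + rate * T

# Per-hop analysis: bound each hop separately, feeding hop 2 with the
# departure envelope of hop 1.
d_hop1 = delay_bound(b, r, R1, T1)
d_hop2 = delay_bound(output_burst(b, r, R1, T1), r, R2, T2)
sum_of_hops = d_hop1 + d_hop2

# End-to-end analysis: beta1 (x) beta2 is again rate-latency, with rate
# min(R1, R2) and latency T1 + T2, so the burst term is "paid" only once.
d_e2e = delay_bound(b, r, min(R1, R2), T1 + T2)

print(f"sum of per-hop delay bounds:   {sum_of_hops:.2f}")
print(f"end-to-end (PBOO) delay bound: {d_e2e:.2f}")   # never larger than the sum
```

With these numbers the end-to-end bound (6.5 time units) is smaller than the sum of the two per-hop bounds (9.7), because the burst term is paid only once in the concatenated server.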
The Siemens Network Planner (SINETPLAN) also uses network calculus (among other methods) to help with the design of a PROFINET network.

Events
The WoNeCa workshop is a Workshop on Network Calculus. It is organized every two years to bring together researchers with an interest in the theory of network calculus as well as those who want to apply existing results to new applications. The workshop also serves to promote the network calculus theory to researchers with an interest in applied queueing models.
WoNeCa6, hosted by EPFL, is scheduled for September 8th and 9th, 2022 in Lausanne, Switzerland.
WoNeCa5 was held virtually, due to the COVID-19 pandemic, on October 9th, 2020.
WoNeCa4 was organized in conjunction with the 19th International GI/ITG Conference on Measurement, Modelling and Evaluation of Computing Systems (MMB2018) on February 28th, 2018 in Erlangen, Germany.
WoNeCa3 was held as a part of the MMB & DFT 2016 conference on April 6th, 2016 in Münster, Germany.
WoNeCa2 was held within the MMB & DFT 2014 conference on March 19th, 2014 in Bamberg, Germany.
WoNeCa1 was hosted by the University of Kaiserslautern and was held as a part of MMB2012 on March 21st, 2012 in Kaiserslautern, Germany.
In 2018, the International Workshop on Network Calculus and Applications (NetCal 2018) was held in Vienna, Austria as a part of the 30th International Teletraffic Congress (ITC 30).

References

Books, Surveys, and Tutorials on Network Calculus
C.-S. Chang: Performance Guarantees in Communication Networks, Springer, 2000. J.-Y. Le Boudec and P. Thiran: Network Calculus: A Theory of Deterministic Queuing Systems for the Internet, Springer, LNCS, 2001 (available online). A. Bouillard, M. Boyer, E. Le Corronc: Deterministic Network Calculus: From Theory to Practical Implementation, Wiley-ISTE, 2018. Y. Jiang and Y. Liu: Stochastic Network Calculus, Springer, 2008. A. Kumar, D. Manjunath, and J. Kuri: Communication Networking: An Analytical Approach, Elsevier, 2004. S. Mao and S. Panwar: A survey of envelope processes and their applications in quality of service provisioning, IEEE Communications Surveys and Tutorials, 8(3):2-20, July 2006. M. Fidler: Survey of deterministic and stochastic service curve models in the network calculus, IEEE Communications Surveys and Tutorials, 12(1):59-86, January 2010. C. Lin, Y. Deng, and Y. Jiang: On applying stochastic network calculus, Frontiers of Computer Science, 7(6): 924-942, 2013. M. Fidler and A. Rizk: A guide to the stochastic network calculus, IEEE Communications Surveys and Tutorials, 17(1):92-105, March 2015. L. Maile, K. Hielscher and R. German: Network Calculus Results for TSN: An Introduction, IEEE Information Communication Technologies Conference(1): 131-140, May 2020.

Related books on the max-plus algebra or on convex minimization
R. T. Rockafellar: Convex Analysis, Princeton University Press, 1972. F. Baccelli, G. Cohen, G. J. Olsder, and J.-P. Quadrat: Synchronization and Linearity: An Algebra for Discrete Event Systems, Wiley, 1992. V. N. Kolokol'tsov and V. P. Maslov: Idempotent Analysis and Its Applications, Springer, 1997.

Deterministic network calculus
R. L. Cruz: A Calculus for Network Delay, Part I: Network Elements in Isolation, and Part II: Network Analysis, IEEE Transactions on Information Theory, 37(1):114-141, Jan. 1991. A. K. Parekh and R. G. Gallager: A Generalized Processor Sharing Approach to Flow Control: The Multiple Node Case, IEEE Transactions on Networking, 2(2):137-150, April 1994. C.-S.
Chang: Stability, Queue Length and Delay of Deterministic and Stochastic Queueing Networks, IEEE Transactions on Automatic Control, 39(5):913-931, May 1994. D. E. Wrege, E. W. Knightly, H. Zhang, and J. Liebeherr: Deterministic delay bounds for VBR video in packet-switching networks: Fundamental limits and practical tradeoffs, IEEE/ACM Transactions on Networking, 4(3):352-362, Jun. 1996. R. L. Cruz: SCED+: Efficient Management of Quality of Service Guarantees, IEEE INFOCOM, pp. 625–634, Mar. 1998. J.-Y. Le Boudec: Application of Network Calculus to Guaranteed Service Networks, IEEE Transactions on Information Theory, 44(3):1087-1096, May 1998. C.-S. Chang: On Deterministic Traffic Regulation and Service Guarantees: A Systematic Approach by Filtering, IEEE Transactions on Information Theory, 44(3):1097-1110, May 1998. R. Agrawal, R. L. Cruz, C. Okino, and R. Rajan: Performance Bounds for Flow Control Protocols, IEEE/ACM Transactions on Networking, 7(3):310-323, Jun. 1999. J.-Y. Le Boudec: Some properties of variable length packet shapers, IEEE/ACM Transactions on Networking, 10(3):329-337, Jun. 2002. C.-S. Chang, R. L. Cruz, J.-Y. Le Boudec, and P. Thiran: A Min, + System Theory for Constrained Traffic Regulation and Dynamic Service Guarantees, IEEE/ACM Transactions on Networking, 10(6):805-817, Dec. 2002. Y. Jiang: Relationship between guaranteed rate server and latency rate server, Computer Networks 43(3): 307-315, 2003. M. Fidler and S. Recker: Conjugate network calculus: A dual approach applying the Legendre transform, Computer Networks, 50(8):1026-1039, Jun. 2006. Eitan Altman, Kostya Avrachenkov, and Chadi Barakat: TCP network calculus: The case of large bandwidth-delay product, In proceedings of IEEE INFOCOM, NY, June 2002. J. Liebeherr: Duality of the Max-Plus and Min-Plus Network Calculus, Foundations and Trends in Networking 11(3-4): 139-282, 2017. Network topologies, feed-forward networks A. Charny and J.-Y. Le Boudec: Delay Bounds in a Network with Aggregate Scheduling, QoFIS, pp. 1–13, Sep. 2000. D. Starobinski, M. Karpovsky, and L. Zakrevski: Application of Network Calculus to General Topologies using Turn-Prohibition, IEEE/ACM Transactions on Networking, 11(3):411-421, Jun. 2003. M. Fidler: A parameter based admission control for differentiated services networks, Computer Networks, 44(4):463-479, March 2004. L. Lenzini, L. Martorini, E. Mingozzi, and G. Stea: Tight end-to-end per-flow delay bounds in FIFO multiplexing sink-tree networks, Performance Evaluation, 63(9-10):956-987, October 2006. J. Schmitt, F. Zdarsky, and M. Fidler: Delay bounds under arbitrary multiplexing: when network calculus leaves you in the lurch ..., Prof. IEEE Infocom, April 2008. A. Bouillard, L. Jouhet, and E. Thierry: Tight performance bounds in the worst-case analysis of feed-forward networks, Proc. IEEE Infocom, April 2010. Measurement-based system identification C. Cetinkaya, V. Kanodia, and E.W. Knightly: Scalable services via egress admission control, IEEE Transactions on Multimedia, 3(1):69-81, March 2001. S. Valaee, and B. Li: Distributed call admission control for ad hoc networks, Proc. of IEEE VTC, pp. 1244–1248, 2002. A. Undheim, Y. Jiang, and P. J. Emstad. Network Calculus Approach to Router Modeling with External Measurements, Proc. of IEEE Second International Conference on Communications and Networking in China (Chinacom), August 2007. J. Liebeherr, M. Fidler, and S. 
Valaee: A system-theoretic approach to bandwidth estimation, IEEE Transactions on Networking, 18(4):1040-1053, August 2010. M. Bredel, Z. Bozakov, and Y. Jiang: Analyzing router performance using network calculus with external measurements, Proc. IEEE IWQoS, June 2010. R. Lubben, M. Fidler, and J. Liebeherr: Stochastic bandwidth estimation in networks with random service, IEEE Transactions on Networking, 22(2):484-497, April 2014. Stochastic network calculus O. Yaron and M. Sidi: Performance and Stability of Communication Networks via Robust Exponential Bounds, IEEE/ACM Transactions on Networking, 1(3):372-385, Jun. 1993. D. Starobinski and M. Sidi: Stochastically Bounded Burstiness for Communication Networks, IEEE Transactions on Information Theory, 46(1):206-212, Jan. 2000. C.-S. Chang: Stability, Queue Length and Delay of Deterministic and Stochastic Queueing Networks, IEEE Transactions on Automatic Control, 39(5):913-931, May 1994. R.-R. Boorstyn, A. Burchard, J. Liebeherr, and C. Oottamakorn: Statistical Service Assurances for Traffic Scheduling Algorithms, IEEE Journal on Selected Areas in Communications, 18(12):2651-2664, Dec. 2000. Q. Yin, Y. Jiang, S. Jiang, and P. Y. Kong: Analysis of Generalized Stochastically Bounded Bursty Traffic for Communication Networks, IEEE LCN, pp. 141–149, Nov. 2002. C. Li, A. Burchard, and J. Liebeherr: A Network Calculus with Effective Bandwidth, University of Virginia, Technical Report CS-2003-20, Nov. 2003. Y. Jiang: A basic stochastic network calculus, ACM SIGCOMM 2006. A. Burchard, J. Liebeherr, and S. D. Patek: A Min-Plus Calculus for End-to-end Statistical Service Guarantees, IEEE Transactions on Information Theory, 52(9):4105–4114, Sep. 2006. F. Ciucu, A. Burchard, and J. Liebeherr: A Network Service Curve Approach for the Stochastic Analysis of Networks, IEEE/ACM Transactions on Networking, 52(6):2300–2312, Jun. 2006. M. Fidler: An End-to-End Probabilistic Network Calculus with Moment Generating Functions, IEEE IWQoS, Jun. 2006. Y. Liu, C.-K. Tham, and Y. Jiang: A calculus for stochastic QoS analysis, Performance Evaluation, 64(6): 547-572, 2007. Y. Jiang and Y. Liu: Stochastic Network Calculus, Springer, 2008. Wireless network calculus M. Fidler: A Network Calculus Approach to Probabilistic Quality of Service Analysis of Fading Channels, Proc. IEEE Globecom, November 2006. K. Mahmood, A. Rizk, and Y. Jiang: On the Flow-Level Delay of a Spatial Multiplexing MIMO Wireless Channel, Proc. IEEE ICC, June 2011. K. Mahmood, M. Vehkaperä, and Y. Jiang: Delay Constrained Throughput Analysis of a Correlated MIMO Wireless Channel, Proc. IEEE ICCCN, 2011. K. Mahmood, M. Vehkaperä, and Y. Jiang: Delay constrained throughput analysis of CDMA using stochastic network calculus, Proc. IEEE ICON, 2011. K. Mahmood, M. Vehkaperä, and Y. Jiang: Performance of multiuser CDMA receivers with bursty traffic and delay constraints, Proc. ICNC, 2012. Y. Zhang and Y. Jiang: Performance of data transmission over a Gaussian channel with dispersion, Proc. ISWCS, 2012. H. Al-Zubaidy, J. Liebeherr, and A. Burchard: A (min, ×) network calculus for multi-hop fading channels, Proc. IEEE Infocom, pp. 1833–1841, April 2013. K. Zheng, F. Liu, L. Lei, C. Lin, and Y. Jiang: Stochastic Performance Analysis of a Wireless Finite-State Markov Channel, IEEE Trans. Wireless Communications 12(2): 782-793, 2013. J.-w. Cho and Y. Jiang: Fundamentals of the Backoff Process in 802.11: Dichotomy of the Aggregation, IEEE Trans. Information Theory 61(4): 1687-1701, 2015. M. Fidler, R. 
Lubben, and N. Becker: Capacity–Delay–Error Boundaries: A Composable Model of Sources and Systems, Transactions on Wireless Communications, 14(3):1280-1294, March 2015. F. Sun and Y. Jiang: A Statistical Property of Wireless Channel Capacity: Theory and Application, Proc. IFIP Performance, 2017. Network performance Computer network analysis
33656853
https://en.wikipedia.org/wiki/Information%20history
Information history
Information history may refer to the history of each of the categories listed below (or to combinations of them). It should be recognized that the understanding of, for example, libraries as information systems only goes back to about 1950. The application of the term information for earlier systems or societies is a retronym.

The word and concept "information"
The Latin roots and Greek origins of the word "information" are presented by Capurro & Hjørland (2003). References on "formation or molding of the mind or character, training, instruction, teaching" date from the 14th century in both English (according to the Oxford English Dictionary) and other European languages. In the transition from the Middle Ages to Modernity the use of the concept of information reflected a fundamental turn in epistemological basis – from "giving a (substantial) form to matter" to "communicating something to someone". Peters (1988, pp. 12–13) concludes:

Information was readily deployed in empiricist psychology (though it played a less important role than other words such as impression or idea) because it seemed to describe the mechanics of sensation: objects in the world inform the senses. But sensation is entirely different from "form" – the one is sensual, the other intellectual; the one is subjective, the other objective. My sensation of things is fleeting, elusive, and idiosyncratic [sic]. For Hume, especially, sensory experience is a swirl of impressions cut off from any sure link to the real world... In any case, the empiricist problematic was how the mind is informed by sensations of the world. At first informed meant shaped by; later it came to mean received reports from. As its site of action drifted from cosmos to consciousness, the term's sense shifted from unities (Aristotle's forms) to units (of sensation). Information came less and less to refer to internal ordering or formation, since empiricism allowed for no preexisting intellectual forms outside of sensation itself. Instead, information came to refer to the fragmentary, fluctuating, haphazard stuff of sense. Information, like the early modern worldview in general, shifted from a divinely ordered cosmos to a system governed by the motion of corpuscles. Under the tutelage of empiricism, information gradually moved from structure to stuff, from form to substance, from intellectual order to sensory impulses.

In the modern era, the most important influence on the concept of information is derived from the information theory developed by Claude Shannon and others. This theory, however, reflects a fundamental contradiction. Northrup (1993) wrote:

Thus, actually two conflicting metaphors are being used: The well-known metaphor of information as a quantity, like water in the water-pipe, is at work, but so is a second metaphor, that of information as a choice, a choice made by an information provider, and a forced choice made by an information receiver. Actually, the second metaphor implies that the information sent isn't necessarily equal to the information received, because any choice implies a comparison with a list of possibilities, i.e., a list of possible meanings. Here, meaning is involved, thus spoiling the idea of information as a pure "Ding an sich." Thus, much of the confusion regarding the concept of information seems to be related to the basic confusion of metaphors in Shannon's theory: is information an autonomous quantity, or is information always per se information to an observer?
Actually, I don't think that Shannon himself chose one of the two definitions. Logically speaking, his theory implied information as a subjective phenomenon. But this had such wide-ranging epistemological impacts that Shannon didn't seem to fully realize this logical fact. Consequently, he continued to use metaphors about information as if it were an objective substance. This is the basic, inherent contradiction in Shannon's information theory." (Northrup, 1993, p. 5)

In their seminal book The Study of Information: Interdisciplinary Messages, Machlup and Mansfield (1983) collected key views on the interdisciplinary controversy in computer science, artificial intelligence, library and information science, linguistics, psychology, and physics, as well as in the social sciences. Machlup (1983, p. 660) himself disagrees with the use of the concept of information in the context of signal transmission, the basic senses of information in his view all referring "to telling something or to the something that is being told. Information is addressed to human minds and is received by human minds." All other senses, including its use with regard to nonhuman organisms as well as to society as a whole, are, according to Machlup, metaphoric and, as in the case of cybernetics, anthropomorphic. Hjørland (2007) describes the fundamental difference between objective and subjective views of information and argues that the subjective view has been supported by, among others, Bateson, Yovits, Spang-Hanssen, Brier, Buckland, Goguen, and Hjørland. Hjørland provided the following example:

A stone on a field could contain different information for different people (or from one situation to another). It is not possible for information systems to map all the stone's possible information for every individual. Nor is any one mapping the one "true" mapping. But people have different educational backgrounds and play different roles in the division of labor in society. A stone in a field typically represents one kind of information for the geologist, another for the archaeologist. The information from the stone can be mapped into different collective knowledge structures produced by e.g. geology and archaeology. Information can be identified, described, represented in information systems for different domains of knowledge. Of course, there is much uncertainty and there are many difficult problems in determining whether a thing is informative or not for a domain. Some domains have a high degree of consensus and rather explicit criteria of relevance. Other domains have different, conflicting paradigms, each containing its own more or less implicit view of the informativeness of different kinds of information sources. (Hjørland, 1997, p. 111, emphasis in original).

Academic discipline
Information history is an emerging discipline related to, but broader than, library history. An important introduction and review was made by Alistair Black (2006). Toni Weller is also a prolific scholar in this field, for example Weller (2007, 2008, 2010a and 2010b). As part of her work Toni Weller has argued that there are important links between the modern information age and its historical precedents. A description from Russia is Volodin (2000). Alistair Black (2006, p.
445) wrote: "This chapter explores issues of discipline definition and legitimacy by segmenting information history into its various components: The history of print and written culture, including relatively long-established areas such as the histories of libraries and librarianship, book history, publishing history, and the history of reading. The history of more recent information disciplines and practice, that is to say, the history of information management, information systems, and information science. The history of contiguous areas, such as the history of the information society and information infrastructure, necessarily enveloping communication history (including telecommunications history) and the history of information policy. The history of information as social history, with emphasis on the importance of informal information networks." "Bodies influential in the field include the American Library Association’s Round Table on Library History, the Library History Section of the International Federation of Library Associations and Institutions (IFLA), and, in the U.K., the Library and Information History Group of the Chartered Institute of Library and Information Professionals (CILIP). Each of these bodies has been busy in recent years, running conferences and seminars, and initiating scholarly projects. Active library history groups function in many other countries, including Germany (The Wolfenbuttel Round Table on Library History, the History of the Book and the History of Media, located at the Herzog August Bibliothek), Denmark (The Danish Society for Library History, located at the Royal School of Library and Information Science), Finland (The Library History Research Group, University of Tamepere), and Norway (The Norwegian Society for Book and Library History). Sweden has no official group dedicated to the subject, but interest is generated by the existence of a museum of librarianship in Bods, established by the Library Museum Society and directed by Magnus Torstensson. Activity in Argentina, where, as in Europe and the U.S., a "new library history" has developed, is described by Parada (2004)." (Black (2006, p. 447). Journals Information & Culture (previously Libraries & the Cultural Record, Libraries & Culture) Library & Information History (until 2008: Library History; until 1967: Library Association. Library History Group. Newsletter) Information technology (IT) The term IT is ambiguous although mostly synonym with computer technology. Haigh (2011, pp. 432-433) wrote "In fact, the great majority of references to information technology have always been concerned with computers, although the exact meaning has shifted over time (Kline, 2006). The phrase received its first prominent usage in a Harvard Business Review article (Haigh, 2001b; Leavitt & Whisler, 1958) intended to promote a technocratic vision for the future of business management. Its initial definition was at the conjunction of computers, operations research methods, and simulation techniques. Having failed initially to gain much traction (unlike related terms of a similar vintage such as information systems, information processing, and information science) it was revived in policy and economic circles in the 1970s with a new meaning. 
Information technology now described the expected convergence of the computing, media, and telecommunications industries (and their technologies), understood within the broader context of a wave of enthusiasm for the computer revolution, post-industrial society, information society (Webster, 1995), and other fashionable expressions of the belief that new electronic technologies were bringing a profound rupture with the past. As it spread broadly during the 1980s, IT increasingly lost its association with communications (and, alas, any vestigial connection to the idea of anybody actually being informed of anything) to become a new and more pretentious way of saying "computer". The final step in this process is the recent surge in references to "information and communication technologies" or ICTs, a coinage that makes sense only if one assumes that a technology can inform without communicating". Some people use the term information technology about technologies used before the development of the computer. This is however to use the term as a retronym. See also History of computer and video games History of computing hardware (1960s-present) History of computing hardware History of operating systems History of software engineering History of programming languages History of artificial intelligence History of the graphical user interface History of the Internet History of the World Wide Web IT History Society Timeline of computing Information society "It is said that we live in an "Age of Information," but it is an open scandal that there is no theory, nor even definition, of information that is both broad and precise enough to make such an assertion meaningful." (Goguen, 1997). The Danish Internet researcher Niels Ole Finnemann (2001) developed a general history of media. He wrote: "A society cannot exist in which the production and exchange of information are of only minor significance. For this reason one cannot compare industrial societies to information societies in any consistent way. Industrial societies are necessarily also information societies, and information societies may also be industrial societies." He suggested the following media matrix: Oral cultures based mainly on speech. Literate cultures: speech + writing (primary alphabets and number systems). Print cultures: speech + written texts + print. Mass-media cultures: speech + written texts + print + analogue electric media. Second-order alphabetic cultures: speech + written texts + print + analogue electric media + digital media. Information science Many information science historians cite Paul Otlet and Henri La Fontaine as the fathers of information science with the founding of the International Institute of Bibliography (IIB) in 1895 Institutionally, information science emerged in the last part of the 19th century as documentation science which in general shifted name to information science in the 1960s. Heting Chu (2010) classified the history and development of information representation and retrieval (IRR) in four phases. "The history of IRR is not long. 
A retrospective look at the field identifies increased demand, rapid growth, the demystification phase, and the networked era as the four major stages IRR has experienced in its development:" Increased Demand (1940s–early 1950s) (Information explosion) Rapid Growth (1950s–1980s) (the emergence of computers and systems such as Dialog (online database)) Demystification Phase (1980s–1990s) (systems developed for end-user searching) The Networked Era (1990s–Present) (search engines such as AltaVista and Google) References Further reading Cortada, James W. All the Facts: A History of Information in the United States since 1870 (Oxford UP, 2016). xx, 636 pp External links Pioneers of Information Science in North America Chronology of Information Science and Technology History of Information Science and Technology Library and Information History Group (CILIP) Fields of history
2155746
https://en.wikipedia.org/wiki/Mobile%20phone%20features
Mobile phone features
The features of mobile phones are the set of capabilities, services and applications that they offer to their users. Mobile phones offering only basic telephony are often referred to as feature phones. Handsets with more advanced computing ability through the use of native code try to differentiate their own products by implementing additional functions to make them more attractive to consumers. This has led to great innovation in mobile phone development over the past 20 years.

The common components found on all phones are:
A number of metal–oxide–semiconductor (MOS) integrated circuit (IC) chips.
A battery (typically a lithium-ion battery), providing the power source for the phone functions.
An input mechanism to allow the user to interact with the phone. The most common input mechanism is a keypad, but touch screens are also found in smartphones.
Basic mobile phone services to allow users to make calls and send text messages.
All GSM phones use a SIM card to allow an account to be swapped among devices. Some CDMA devices also have a similar card called a R-UIM.
Individual GSM, WCDMA, IDEN and some satellite phone devices are uniquely identified by an International Mobile Equipment Identity (IMEI) number.

All mobile phones are designed to work on cellular networks and contain a standard set of services that allow phones of different types and in different countries to communicate with each other. However, they can also support other features added by various manufacturers over the years:
Roaming, which permits the same phone to be used in multiple countries, provided that the operators of both countries have a roaming agreement.
Sending and receiving data and faxes (if a computer is attached), accessing WAP services, and providing full Internet access using technologies such as GPRS.
Applications like a clock, alarm, calendar, contacts, and calculator, and a few games.
Sending and receiving pictures and videos (without using the internet) through MMS, and over short distances with e.g. Bluetooth, which is a common and important feature in multimedia phones.
GPS receivers integrated into or connected (e.g. using Bluetooth) to cell phones, primarily to aid in dispatching emergency responders and road tow truck services. This feature is generally referred to as E911.
Push to talk, available on some mobile phones, a feature that allows the user to be heard only while the talk button is held, similar to a walkie-talkie.
A hardware notification LED on some phones.

MOS integrated circuit chips
A typical smartphone contains a number of metal–oxide–semiconductor (MOS) integrated circuit (IC) chips, which in turn contain billions of tiny MOS field-effect transistors (MOSFETs). A typical smartphone contains the following MOS IC chips:
Application processor (CMOS system-on-a-chip)
Flash memory (floating-gate MOS memory)
Cellular modem (baseband RF CMOS)
RF transceiver (RF CMOS)
Phone camera image sensor (CMOS image sensor)
Power management integrated circuit (power MOSFETs)
Display driver (LCD or LED driver)
Wireless communication chips (Wi-Fi, Bluetooth, GPS receiver)
Sound chip (audio codec and power amplifier)
Gyroscope
Capacitive touchscreen controller (ASIC and DSP)
RF power amplifier (LDMOS)

User interface
Besides the number keypad and buttons for accepting and declining calls (typically from left to right and coloured green and red respectively), button mobile phones commonly feature two option keys, one to the left and one to the right, and a four-directional D-pad which may feature a center button acting much like an "Enter" or "OK" button. A pushable scroll wheel was implemented in the late 1990s on the Nokia 7110.

Software, applications and services
In the early stages, every mobile phone company had its own user interface, which can be considered a "closed" operating system, since it offered minimal configurability. A limited variety of basic applications (usually games and accessories like a calculator or conversion tool) was usually included with the phone, and these were not available otherwise. Early mobile phones included a basic web browser for reading simple WAP pages. Handhelds (personal digital assistants like the Palm, running Palm OS) were more sophisticated and also included a more advanced browser and a touch screen (for use with a stylus), but these were not broadly used compared to standard phones. Other capabilities, like pulling and pushing email or working with a calendar, were also made more accessible, but they usually required physical (rather than wireless) syncing. The BlackBerry 850, an email pager released January 19, 1999, was the first device to integrate email. A major step towards a more "open" mobile OS was the Symbian S60 OS, which could be expanded by downloading software (written in C++, Java or Python) and whose appearance was more configurable. In July 2008, Apple introduced its App Store, which made downloading mobile applications more accessible. In October 2008, the HTC Dream was the first commercially released device to use the Linux-based Android OS, which was purchased and further developed by Google and the Open Handset Alliance to create an open competitor to the other major smartphone platforms of the time (mainly the Symbian operating system, BlackBerry OS, and iOS). The operating system offered a customizable graphical user interface and a notification system showing a list of recent messages pushed from apps. The most commonly used data application on mobile phones is SMS text messaging. The first SMS text message was sent from a computer to a mobile phone in 1992 in the UK, while the first person-to-person SMS from phone to phone was sent in Finland in 1993. The first mobile news service, delivered via SMS, was launched in Finland in 2000. Mobile news services are expanding, with many organizations providing "on-demand" news services by SMS. Some also provide "instant" news pushed out by SMS. Mobile payments were first trialled in Finland in 1998 when two Coca-Cola vending machines in Espoo were enabled to work with SMS payments. Eventually, the idea spread, and in 1999 the Philippines launched the first commercial mobile payment systems with the mobile operators Globe and Smart.
Today, mobile payments ranging from mobile banking to mobile credit cards to mobile commerce are very widely used in Asia and Africa, and in selected European markets. Usually, the SMS services utilize short code. Some network operators have utilized USSD for information, entertainment or finance services (e.g. M-Pesa). Other non-SMS data services used on mobile phones include mobile music, downloadable logos and pictures, gaming, gambling, adult entertainment and advertising. The first downloadable mobile content was sold to a mobile phone in Finland in 1998, when Radiolinja (now Elisa) introduced the downloadable ringtone service. In 1999, Japanese mobile operator NTT DoCoMo introduced its mobile Internet service, i-Mode, which today is the world's largest mobile Internet service. Even after the appearance of smartphones, network operators have continued to offer information services, although in some places, those services have become less common. Power supply Mobile phones generally obtain power from rechargeable batteries. There are a variety of ways used to charge cell phones, including USB, portable batteries, mains power (using an AC adapter), cigarette lighters (using an adapter), or a dynamo. In 2009, the first wireless charger was released for consumer use. Some manufacturers have been experimenting with alternative power sources, including solar cells. Various initiatives, such as the EU Common External Power Supply have been announced to standardize the interface to the charger, and to promote energy efficiency of mains-operated chargers. A star rating system is promoted by some manufacturers, where the most efficient chargers consume less than 0.03 watts and obtain a five-star rating. Battery Most modern mobile phones use a lithium-ion battery. A popular early mobile phone battery was the nickel metal-hydride (NiMH) type, due to its relatively small size and low weight. Lithium-ion batteries later became commonly used, as they are lighter and do not have the voltage depression due to long-term over-charging that nickel metal-hydride batteries do. Many mobile phone manufacturers use lithium–polymer batteries as opposed to the older lithium-ion, the main advantages being even lower weight and the possibility to make the battery a shape other than strict cuboid. SIM card GSM mobile phones require a small microchip called a Subscriber Identity Module or SIM card, to function. The SIM card is approximately the size of a small postage stamp and is usually placed underneath the battery in the rear of the unit. The SIM securely stores the service-subscriber key (IMSI) used to identify a subscriber on mobile telephony devices (such as mobile phones and computers). The SIM card allows users to change phones by simply removing the SIM card from one mobile phone and inserting it into another mobile phone or broadband telephony device. A SIM card contains its unique serial number, internationally unique number of the mobile user (IMSI), security authentication and ciphering information, temporary information related to the local network, a list of the services the user has access to and two passwords (PIN for usual use and PUK for unlocking). SIM cards are available in three standard sizes. The first is the size of a credit card (85.60 mm × 53.98 mm x 0.76 mm, defined by ISO/IEC 7810 as ID-1). 
The newer, most popular miniature version has the same thickness but a length of 25 mm and a width of 15 mm (ISO/IEC 7810 ID-000), and has one of its corners truncated (chamfered) to prevent misinsertion. The newest incarnation, known as the 3FF or micro-SIM, has dimensions of 15 mm × 12 mm. Most cards of the two smaller sizes are supplied as a full-sized card with the smaller card held in place by a few plastic links; it can easily be broken off to be used in a device that uses the smaller SIM. The first SIM card was made in 1991 by Munich smart card maker Giesecke & Devrient for the Finnish wireless network operator Radiolinja. Giesecke & Devrient sold the first 300 SIM cards to Elisa (formerly Radiolinja). Those cell phones that do not use a SIM card have the data programmed into their memory. This data is accessed by using a special digit sequence to access the "NAM" (number assignment module) programming menu. From there, information can be added, including a new number for the phone, new Service Provider numbers, new emergency numbers, a new Authentication Key or A-Key code, and a Preferred Roaming List or PRL. However, to prevent the phone being accidentally disabled or removed from the network, the Service Provider typically locks this data with a Master Subsidy Lock (MSL). The MSL also locks the device to a particular carrier when it is sold as a loss leader. The MSL applies only to the SIM, so once the contract has expired, the MSL still applies to the SIM. The phone, however, is also initially locked by the manufacturer into the Service Provider's MSL. This lock may be disabled so that the phone can use other Service Providers' SIM cards. Most phones purchased outside the U.S. are unlocked phones because there are numerous Service Providers that are close to one another or have overlapping coverage. The cost to unlock a phone varies but is usually very cheap, and the service is sometimes provided by independent phone vendors. A similar module, called a Removable User Identity Module or RUIM card, is present in some CDMA networks, notably in China and Indonesia.

Multi-card hybrid phones
A hybrid mobile phone can take more than one SIM card, even of different types. SIM and RUIM cards can be mixed together, and some phones also support three or four SIMs. From 2010 onwards such phones became popular in India, Indonesia and other emerging markets, attributed to the desire to obtain the lowest on-net calling rate. In Q3 2011, Nokia shipped 18 million of its low-cost dual-SIM phone range in an attempt to make up lost ground in the higher-end smartphone market.

Display
Mobile phones have a display device, some of which are also touch screens. The screen size varies greatly by model and is usually specified either as width and height in pixels or as the diagonal measured in inches. Some mobiles have more than one display, for example the Kyocera Echo, an Android smartphone with a dual 3.5 inch screen. The two screens can also be combined into a single 4.7 inch tablet-style display.

Central processing unit
Mobile phones have central processing units (CPUs), similar to those in computers, but optimised to operate in low-power environments. In smartphones, the CPU is typically integrated in a system-on-a-chip (SoC) application processor. Mobile CPU performance depends not only on the clock rate (generally given in multiples of hertz) but also on the memory hierarchy, which greatly affects overall performance.
Because of these factors, the performance of mobile phone CPUs is often more appropriately given by scores derived from various standardized tests that measure the real effective performance in commonly used applications.

Miscellaneous features
Other features that may be found on mobile phones include GPS navigation, music (MP3) and video (MP4) playback, an RDS radio receiver, a built-in projector, vibration and other "silent" ring options, alarms, memo recording, personal digital assistant functions, the ability to watch streaming video, video download, video calling, built-in cameras (1.0+ Mpx) and camcorders (video recording) with autofocus and flash, ringtones, games, PTT, a memory card reader (SD), USB (2.0), dual line support, infrared, Bluetooth (2.0) and WiFi connectivity, NFC, instant messaging, Internet e-mail and browsing, and serving as a wireless modem. The first smartphone was the Nokia 9000 Communicator in 1996, which added PDA functionality to the basic mobile phone of the time. As miniaturization and the increased processing power of microchips have enabled ever more features to be added to phones, the concept of the smartphone has evolved, and what was a high-end smartphone five years ago is a standard phone today. Several phone series have been introduced to address a given market segment, such as the RIM BlackBerry focusing on enterprise/corporate customer email needs; the Sony Ericsson Walkman series of music phones and Cyber-shot series of camera phones; the Nokia Nseries of multimedia phones; and the Palm Pre, the HTC Dream and the Apple iPhone. Nokia and the University of Cambridge demonstrated a bendable cell phone called the Morph. Some phones have an electromechanical transducer on the back which changes the electrical voice signal into mechanical vibrations. The vibrations flow through the cheek bones or forehead, allowing the user to hear the conversation. This is useful in noisy situations or if the user is hard of hearing. As of 2018, there are smartphones that offer reverse wireless charging.

Multi-mode and multi-band mobile phones
Most mobile phone networks are digital and use the GSM, CDMA or iDEN standards, which operate at various radio frequencies. These phones can only be used with a service plan from the same company; for example, a Verizon phone cannot be used with a T-Mobile service, and vice versa. A multi-mode phone operates across different standards, whereas a multi-band phone (also known more specifically as dual-, tri- or quad-band) is designed to work on more than one radio frequency. Some multi-mode phones can operate on analog networks as well (for example, dual-band, tri-mode: AMPS 800 / CDMA 800 / CDMA 1900). For a GSM phone, dual-band usually means 850 / 1900 MHz in the United States and Canada, or 900 / 1800 MHz in Europe and most other countries. Tri-band means 850 / 1800 / 1900 MHz or 900 / 1800 / 1900 MHz. Quad-band means 850 / 900 / 1800 / 1900 MHz, also called a world phone, since it can work on any GSM network. Multi-band phones have been valuable to enable roaming, whereas multi-mode phones helped to introduce WCDMA features without customers having to give up the wide coverage of GSM. Almost every single true 3G phone sold is actually a WCDMA/GSM dual-mode mobile. This is also true of 2.75G phones such as those based on CDMA-2000 or EDGE.

Challenges in producing multi-mode phones
The special challenge involved in producing a multi-mode mobile is in finding ways to share the components between the different standards.
Obviously, the phone keypad and display should be shared, otherwise it would be hard to treat the device as one phone. Beyond that, though, there are challenges at each level of integration. How difficult these challenges are depends on the differences between systems. When talking about IS-95/GSM multi-mode phones, for example, or AMPS/IS-95 phones, the baseband processing is very different from system to system. This leads to real difficulties in component integration and so to larger phones. An interesting special case of multi-mode phones is the WCDMA/GSM phone. The radio interfaces are very different from each other, but mobile-to-core-network messaging has strong similarities, meaning that software sharing is quite easy. Probably more importantly, the WCDMA air interface has been designed with GSM compatibility in mind. It has a special mode of operation, known as punctured mode, in which, instead of transmitting continuously, the mobile is able to stop sending for a short period and try searching for GSM carriers in the area. This mode allows for safe inter-frequency handovers with channel measurements which can only be approximated using "pilot signals" in other CDMA-based systems. A final interesting case is that of mobiles covering the DS-WCDMA and MC-CDMA 3G variants of CDMA. Initially, the chip rate of these phones was incompatible. As part of the negotiations related to patents, it was agreed to use compatible chip rates. This should mean that, despite the fact that the air and system interfaces are quite different, even on a philosophical level, much of the hardware for each system inside a phone should be common, with differences being mostly confined to software.

Data communications
Mobile phones are now heavily used for data communications such as SMS messages, browsing mobile web sites, and even streaming audio and video files. The main limiting factors are the size of the screen, the lack of a keyboard, processing power and connection speed. Most cellphones that support data communications can be used as wireless modems (via cable or Bluetooth) to connect a computer to the internet. Such an access method is slow and expensive, but it can be available in very remote areas. With newer smartphones, screen resolution and processing power have become bigger and better. Some new phone CPUs run at over 1 GHz. Many complex programs are now available for the various smartphone platforms, such as Symbian and Windows Mobile. Connection speed is based on network support. Originally, data transfers over GSM networks were possible only over CSD (circuit-switched data), which has a bandwidth of 9600 bit/s and is usually billed by connection time (from the network's point of view, it does not differ much from a voice call). Later, an improved version of CSD was introduced: HSCSD (high-speed CSD), which can use multiple time slots for the downlink, improving speed. The maximum speed for HSCSD is ~42 kbit/s, and it is also billed by time. GPRS (general packet radio service) was introduced later, and it operates on a completely different principle. It can also use multiple time slots for transfer, but it does not tie up radio resources when not transferring data (as opposed to CSD and the like). GPRS is usually given lower priority than voice and CSD, so latencies are large and variable. Later, GPRS was upgraded to EDGE, which differs mainly by radio modulation, squeezing more data capacity into the same radio bandwidth. GPRS and EDGE are usually billed by data traffic volume.
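As a quick worked comparison of the GSM data rates just mentioned, the following minimal Python sketch computes how long a download would take over CSD and HSCSD. The 1 MiB file size is an arbitrary illustration, not a figure from the article; only the two rates come from the text.

```python
# Sketch: how long a 1 MiB download takes at the GSM data rates mentioned above.
# The file size is an arbitrary illustration; the rates are those from the text.

FILE_SIZE_BITS = 1 * 1024 * 1024 * 8   # 1 MiB expressed in bits

rates_bits_per_s = {
    "CSD": 9600,       # circuit-switched data
    "HSCSD": 42_000,   # ~42 kbit/s with multiple downlink time slots
}

for name, rate in rates_bits_per_s.items():
    seconds = FILE_SIZE_BITS / rate
    print(f"{name:6s}: {seconds / 60:.1f} minutes for 1 MiB")
```

At 9600 bit/s a single mebibyte takes roughly a quarter of an hour, which is part of why packet-switched GPRS and EDGE, billed by volume rather than by connection time, were such an improvement.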
Some phones also feature full QWERTY keyboards, such as the LG enV. As of April 2006, several models, such as the Nokia 6680, support 3G communications. Such phones have access to the Web via a free download of the Opera web browser. Verizon Wireless models come with Internet Explorer pre-loaded onto the phone.

Vulnerability to viruses
As more complex features are added to phones, they become more vulnerable to viruses which exploit weaknesses in these features. Even text messages can be used in attacks by worms and viruses. Advanced phones capable of e-mail can be susceptible to viruses that can multiply by sending messages through a phone's address book. In some phone models, USSD codes were exploited to induce a factory reset, clearing the data and resetting the user settings. A virus may allow unauthorized users to access a phone to find passwords or corporate data stored on the device. Moreover, viruses can be used to commandeer the phone to make calls or send messages at the owner's expense. Mobile phones used to have proprietary operating systems unique to the manufacturer, which had the beneficial effect of making it harder to design a mass attack. However, the rise of software platforms and operating systems shared by many manufacturers, such as Java, Microsoft operating systems, Linux, or Symbian OS, may increase the spread of viruses in the future. Bluetooth is a feature now found in many higher-end phones, and the virus Cabir (also known as Caribe) hijacked this function, making Bluetooth phones infect other Bluetooth phones running the Symbian OS. In early November 2004, several web sites began offering a specific piece of software promising ringtones and screensavers for certain phones. Those who downloaded the software found that it turned each icon on the phone's screen into a skull-and-crossbones and disabled their phones, so they could no longer send or receive text messages or access contact lists or calendars. The virus has since been dubbed "Skulls" by security experts. The Commwarrior-A virus was identified in March 2005, and it attempts to replicate itself through MMS to others on the phone's contact list. Like Cabir, Commwarrior-A also tries to communicate via Bluetooth wireless connections with other devices, which can eventually lead to draining the battery. The virus requires user intervention for propagation, however. Bluetooth phones are also subject to bluejacking, which, although not a virus, does allow for the transmission of unwanted messages from anonymous Bluetooth users.

Cameras
Most current phones also have a built-in digital camera (see camera phone), which can have resolutions as high as 108 megapixels. This gives rise to some concern about privacy, in view of possible voyeurism, for example in swimming pools. South Korea has ordered manufacturers to ensure that all new handsets emit a beep whenever a picture is taken. Sound recording and video recording are often also possible. Most people do not walk around with a video camera, but do carry a phone. The arrival of video camera phones is transforming the availability of video to consumers, and helps fuel citizen journalism.

See also
Mobile game
Ringtone
Smartphone
Mobile phone form factor
Mobile phone
Mobile phones
13225486
https://en.wikipedia.org/wiki/Private%20VLAN
Private VLAN
Private VLAN, also known as port isolation, is a technique in computer networking where a VLAN contains switch ports that are restricted such that they can only communicate with a given uplink. The restricted ports are called private ports. Each private VLAN typically contains many private ports and a single uplink. The uplink will typically be a port (or link aggregation group) connected to a router, firewall, server, provider network, or similar central resource. This concept was primarily introduced because the number of network segments (the number of VLANs) in a network switch is generally restricted to a specific number, and all of these resources could be used up in highly scaled scenarios. Hence, there was a requirement to create many segregated networks with minimal resources. The switch forwards all frames received from a private port to the uplink port, regardless of VLAN ID or destination MAC address. Frames received from an uplink port are forwarded in the normal way (i.e. to the port hosting the destination MAC address, or to all ports of the VLAN for broadcast frames or for unknown destination MAC addresses). As a result, direct peer-to-peer traffic between peers through the switch is blocked, and any such communication must go through the uplink. While private VLANs provide isolation between peers at the data link layer, communication at higher layers may still be possible depending on further network configuration. A typical application for a private VLAN is a hotel or an Ethernet-to-the-home network where each room or apartment has a port for Internet access. Similar port isolation is used in Ethernet-based ADSL DSLAMs. Allowing direct data link layer communication between customer nodes would expose the local network to various security attacks, such as ARP spoofing, as well as increasing the potential for damage due to misconfiguration. Another application of private VLANs is to simplify IP address assignment. Ports can be isolated from each other at the data link layer (for security, performance, or other reasons), while belonging to the same IP subnet. In such a case, direct communication between the IP hosts on the protected ports is only possible through the uplink connection by using MAC-Forced Forwarding or a similar Proxy ARP based solution.

Overview
A private VLAN divides a VLAN (Primary) into sub-VLANs (Secondary) while keeping the existing IP subnet and layer 3 configuration. A regular VLAN is a single broadcast domain, while a private VLAN partitions one broadcast domain into multiple smaller broadcast subdomains.
Primary VLAN: Simply the original VLAN. This type of VLAN is used to forward frames downstream to all Secondary VLANs.
Secondary VLAN: A Secondary VLAN is configured with one of the following types:
Isolated: Any switch ports associated with an Isolated VLAN can reach the primary VLAN, but not any other Secondary VLAN. In addition, hosts associated with the same Isolated VLAN cannot reach each other. There can be multiple Isolated VLANs in one Private VLAN domain (which may be useful if the VLANs need to use distinct paths for security reasons); the ports remain isolated from each other within each VLAN.
Community: Any switch ports associated with a common community VLAN can communicate with each other and with the primary VLAN but not with any other secondary VLAN. There can be multiple distinct community VLANs within one Private VLAN domain.
There are mainly two types of ports in a Private VLAN: the Promiscuous port (P-Port) and the Host port.
Host ports are further divided into two types: Isolated ports (I-Ports) and Community ports (C-Ports). Promiscuous port (P-Port): The switch port connects to a router, firewall or other common gateway device. This port can communicate with anything else connected to the primary or any secondary VLAN. In other words, it is a port that is allowed to send and receive frames from any other port on the VLAN. Host Ports: Isolated Port (I-Port): Connects to a regular host that resides on the isolated VLAN. This port communicates only with P-Ports. Community Port (C-Port): Connects to a regular host that resides on a community VLAN. This port communicates with P-Ports and ports on the same community VLAN. Example scenario: a switch with VLAN 100, converted into a Private VLAN with one P-Port, two I-Ports in Isolated VLAN 101 (Secondary) and two community VLANs 102 and 103 (Secondary), with two ports in each. The switch has one uplink port (trunk), connected to another switch. The diagram shows this configuration graphically. The following table shows the traffic which can flow between all these ports. Traffic from an Uplink port to an Isolated port will be denied if it is in the Isolated VLAN. Traffic from an Uplink port to an Isolated port will be permitted if it is in the primary VLAN. Use cases Network segregation Private VLANs are used for network segregation when: Moving from a flat network to a segregated network without changing the IP addressing of the hosts. A firewall can replace a router, and then hosts can be slowly moved to their secondary VLAN assignment without changing their IP addresses. There is a need for a firewall with many tens, hundreds or even thousands of interfaces. Using Private VLANs, the firewall needs only one interface for all the segregated networks. There is a need to preserve IP addressing. With Private VLANs, all Secondary VLANs can share the same IP subnet. There is a need to avoid license fees tied to the number of supported VLANs per firewall. There is a need for more than 4095 segregated networks. With an Isolated VLAN, there is effectively no limit on the number of segregated networks. Secure hosting Private VLANs in hosting operations allow segregation between customers with the following benefits: No need for a separate IP subnet for each customer. Using an Isolated VLAN, there is no limit on the number of customers. No need to change the firewall's interface configuration to extend the number of configured VLANs. Secure VDI An Isolated VLAN can be used to segregate VDI desktops from each other, allowing filtering and inspection of desktop-to-desktop communication. Using non-isolated VLANs would require a different VLAN and subnet for each VDI desktop. Backup network On a backup network, there is no need for hosts to reach each other. Hosts should only reach their backup destination. Backup clients can be placed in one Isolated VLAN and the backup servers can be placed as promiscuous on the Primary VLAN; this allows hosts to communicate only with the backup servers. 
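The reachability rules above (promiscuous, isolated, and community ports within one private VLAN domain) can be captured in a few lines of code. The following Python sketch is purely illustrative: the function and port-type names are invented for the example and do not correspond to any vendor's configuration syntax.

```python
# Minimal sketch of private VLAN reachability rules (illustrative only).
# Port kinds: "promiscuous" (P-Port), "isolated" (I-Port), "community" (C-Port).
# Community ports also carry the ID of their community (secondary) VLAN.

from dataclasses import dataclass

@dataclass(frozen=True)
class Port:
    name: str
    kind: str                 # "promiscuous", "isolated", or "community"
    community_id: int = 0     # only meaningful for community ports

def can_forward(src: Port, dst: Port) -> bool:
    """Return True if a frame from src may be delivered to dst
    within one private VLAN domain."""
    if src.kind == "promiscuous" or dst.kind == "promiscuous":
        return True                      # P-Ports talk to everything
    if src.kind == "isolated" or dst.kind == "isolated":
        return False                     # I-Ports talk only to P-Ports
    # Both are community ports: allowed only within the same community VLAN.
    return src.community_id == dst.community_id

# Example matching the scenario above: one P-Port, two isolated ports,
# and two community VLANs (102 and 103).
uplink = Port("p1", "promiscuous")
i1, i2 = Port("i1", "isolated"), Port("i2", "isolated")
c1 = Port("c1", "community", 102)
c2 = Port("c2", "community", 103)

assert can_forward(i1, uplink) and not can_forward(i1, i2)
assert can_forward(c1, uplink) and not can_forward(c1, c2)
```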
Vendor support Hardware switches Alcatel-Lucent Enterprise OmniSwitch series Arista Networks Data Center Switching Brocade BigIron, TurboIron and FastIron switches Cisco Systems Catalyst 2960-XR, 3560 and higher product lines switches Extreme Networks XOS based switches FortiNet FortiOS based switches Juniper Networks EX switches Hewlett-Packard Enterprise Aruba Access Switches 2920 series and higher product lines switches Lenovo CNOS based switches MICROSENS G6 switch family MikroTik All models (routers/switches) with switch chips since RouterOS v6.43 TP-Link T2600G series, T3700G series TRENDnet many models Ubiquiti Networks EdgeSwitch series, Unifi series Software switches Cisco Systems Nexus 1000V Microsoft HyperV 2012 Oracle Oracle VM Server for SPARC 3.1.1.1 VMware vDS switch Other private VLAN–aware products Cisco Systems Firewall Services Module Marathon Networks PVTD Private VLAN deployment and operation appliance See also Ethernet Broadcast domain VLAN hopping Related RFCs – Cisco Systems' Private VLANs: Scalable Security in a Multi-Client Environment References "Configuring Private VLAN" TP-Link Configuration Guide. CCNP BCMSN Official exam certification guide, by David Hucaby. Notes Local area networks Network architecture
1448702
https://en.wikipedia.org/wiki/Iterated%20function
Iterated function
In mathematics, an iterated function is a function (that is, a function from some set to itself) which is obtained by composing another function with itself a certain number of times. The process of repeatedly applying the same function is called iteration. In this process, starting from some initial number, the result of applying a given function is fed again in the function as input, and this process is repeated. Iterated functions are objects of study in computer science, fractals, dynamical systems, mathematics and renormalization group physics. Definition The formal definition of an iterated function on a set X follows. Let be a set and be a function. Defining as the n-th iterate of (a notation introduced by Hans Heinrich Bürmann and John Frederick William Herschel), where n is a non-negative integer, by: and where is the identity function on and denotes function composition. That is, , always associative. Because the notation may refer to both iteration (composition) of the function or exponentiation of the function (the latter is commonly used in trigonometry), some mathematicians choose to use to denote the compositional meaning, writing for the -th iterate of the function , as in, for example, meaning . For the same purpose, was used by Benjamin Peirce whereas Alfred Pringsheim and Jules Molk suggested instead. Abelian property and iteration sequences In general, the following identity holds for all non-negative integers and , This is structurally identical to the property of exponentiation that , i.e. the special case . In general, for arbitrary general (negative, non-integer, etc.) indices and , this relation is called the translation functional equation, cf. Schröder's equation and Abel equation. On a logarithmic scale, this reduces to the nesting property of Chebyshev polynomials, , since . The relation also holds, analogous to the property of exponentiation that . The sequence of functions is called a Picard sequence, named after Charles Émile Picard. For a given in , the sequence of values is called the orbit of . If for some integer , the orbit is called a periodic orbit. The smallest such value of for a given is called the period of the orbit. The point itself is called a periodic point. The cycle detection problem in computer science is the algorithmic problem of finding the first periodic point in an orbit, and the period of the orbit. Fixed points If for some in (that is, the period of the orbit of is ), then is called a fixed point of the iterated sequence. The set of fixed points is often denoted as . There exist a number of fixed-point theorems that guarantee the existence of fixed points in various situations, including the Banach fixed point theorem and the Brouwer fixed point theorem. There are several techniques for convergence acceleration of the sequences produced by fixed point iteration. For example, the Aitken method applied to an iterated fixed point is known as Steffensen's method, and produces quadratic convergence. Limiting behaviour Upon iteration, one may find that there are sets that shrink and converge towards a single point. In such a case, the point that is converged to is known as an attractive fixed point. Conversely, iteration may give the appearance of points diverging away from a single point; this would be the case for an unstable fixed point. When the points of the orbit converge to one or more limits, the set of accumulation points of the orbit is known as the limit set or the ω-limit set. 
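As an added illustration of orbits, periodic points, and the cycle detection problem mentioned above, the following Python sketch iterates a function from a starting point and finds, by brute force, how many steps precede the cycle and how long the cycle is. The specific function (squaring modulo 1000) is an arbitrary example, and constant-memory alternatives such as Floyd's or Brent's algorithms solve the same problem.

```python
# Iterate f from x0 and report the orbit's pre-period (tail length)
# and period, assuming the orbit eventually cycles (it must on a finite set).

def orbit_period(f, x0):
    seen = {}          # value -> index at which it first appeared
    x, n = x0, 0
    while x not in seen:
        seen[x] = n
        x = f(x)
        n += 1
    tail = seen[x]     # pre-period: steps taken before entering the cycle
    period = n - tail  # length of the periodic cycle
    return tail, period

# Example: f(x) = x^2 mod 1000 starting from 7.
f = lambda x: (x * x) % 1000
print(orbit_period(f, 7))   # prints (2, 4): a cycle of length 4 is entered after 2 steps
```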
The ideas of attraction and repulsion generalize similarly; one may categorize iterates into stable sets and unstable sets, according to the behaviour of small neighborhoods under iteration. (Also see Infinite compositions of analytic functions.) Other limiting behaviours are possible; for example, wandering points are points that move away, and never come back even close to where they started. Invariant measure If one considers the evolution of a density distribution, rather than that of individual point dynamics, then the limiting behavior is given by the invariant measure. It can be visualized as the behavior of a point-cloud or dust-cloud under repeated iteration. The invariant measure is an eigenstate of the Ruelle-Frobenius-Perron operator or transfer operator, corresponding to an eigenvalue of 1. Smaller eigenvalues correspond to unstable, decaying states. In general, because repeated iteration corresponds to a shift, the transfer operator, and its adjoint, the Koopman operator can both be interpreted as shift operators action on a shift space. The theory of subshifts of finite type provides general insight into many iterated functions, especially those leading to chaos. Fractional iterates and flows, and negative iterates The notion must be used with care when the equation has multiple solutions, which is normally the case, as in Babbage's equation of the functional roots of the identity map. For example, for and , both and are solutions; so the expression doesn't denote a unique function, just as numbers have multiple algebraic roots. The issue is quite similar to the expression "0/0" in arithmetic. A trivial root of f can always be obtained if fs domain can be extended sufficiently, cf. picture. The roots chosen are normally the ones belonging to the orbit under study. Fractional iteration of a function can be defined: for instance, a half iterate of a function is a function such that . This function can be written using the index notation as . Similarly, is the function defined such that , while may be defined as equal to , and so forth, all based on the principle, mentioned earlier, that . This idea can be generalized so that the iteration count becomes a continuous parameter, a sort of continuous "time" of a continuous orbit. In such cases, one refers to the system as a flow. (cf. Section on conjugacy below.) Negative iterates correspond to function inverses and their compositions. For example, is the normal inverse of , while is the inverse composed with itself, i.e. . Fractional negative iterates are defined analogously to fractional positive ones; for example, is defined such that , or, equivalently, such that . Some formulas for fractional iteration One of several methods of finding a series formula for fractional iteration, making use of a fixed point, is as follows. First determine a fixed point for the function such that . Define for all n belonging to the reals. This, in some ways, is the most natural extra condition to place upon the fractional iterates. Expand around the fixed point a as a Taylor series, Expand out Substitute in for , for any k, Make use of the geometric progression to simplify terms, There is a special case when , This can be carried on indefinitely, although inefficiently, as the latter terms become increasingly complicated. A more systematic procedure is outlined in the following section on Conjugacy. Example 1 For example, setting gives the fixed point , so the above formula terminates to just which is trivial to check. 
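As a concrete check of the fixed-point approach just outlined, the affine map f(x) = ax + b (with a not equal to 1 and fixed point b/(1 - a)) has iterates in closed form; the following worked equations are an added illustration rather than part of the original text.

```latex
% Fractional iteration of f(x) = ax + b, a \neq 1, with fixed point p = b/(1-a).
% The n-th iterate in closed form (valid for non-integer n as well):
%   f^n(x) = a^n x + b\,\frac{a^n - 1}{a - 1}.
% Half-iterate check: composing f^{1/2} with itself recovers f:
\[
  f^{1/2}(x) = \sqrt{a}\,x + b\,\frac{\sqrt{a}-1}{a-1},
  \qquad
  f^{1/2}\!\bigl(f^{1/2}(x)\bigr)
  = a x + b\,\frac{(\sqrt{a}-1)(\sqrt{a}+1)}{a-1}
  = a x + b = f(x).
\]
```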
Example 2 Find the value of where this is done n times (and possibly the interpolated values when n is not an integer). We have . A fixed point is . So set and expanded around the fixed point value of 2 is then an infinite series, which, taking just the first three terms, is correct to the first decimal place when n is positive–cf. Tetration: . (Using the other fixed point causes the series to diverge.) For , the series computes the inverse function . Example 3 With the function , expand around the fixed point 1 to get the series which is simply the Taylor series of x(bn ) expanded around 1. Conjugacy If and are two iterated functions, and there exists a homeomorphism such that , then and are said to be topologically conjugate. Clearly, topological conjugacy is preserved under iteration, as . Thus, if one can solve for one iterated function system, one also has solutions for all topologically conjugate systems. For example, the tent map is topologically conjugate to the logistic map. As a special case, taking , one has the iteration of as ,   for any function . Making the substitution yields ,   a form known as the Abel equation. Even in the absence of a strict homeomorphism, near a fixed point, here taken to be at = 0, (0) = 0, one may often solve Schröder's equation for a function Ψ, which makes locally conjugate to a mere dilation, , that is . Thus, its iteration orbit, or flow, under suitable provisions (e.g., ), amounts to the conjugate of the orbit of the monomial, , where in this expression serves as a plain exponent: functional iteration has been reduced to multiplication! Here, however, the exponent no longer needs be integer or positive, and is a continuous "time" of evolution for the full orbit: the monoid of the Picard sequence (cf. transformation semigroup) has generalized to a full continuous group. This method (perturbative determination of the principal eigenfunction Ψ, cf. Carleman matrix) is equivalent to the algorithm of the preceding section, albeit, in practice, more powerful and systematic. Markov chains If the function is linear and can be described by a stochastic matrix, that is, a matrix whose rows or columns sum to one, then the iterated system is known as a Markov chain. Examples There are many chaotic maps. Well-known iterated functions include the Mandelbrot set and iterated function systems. Ernst Schröder, in 1870, worked out special cases of the logistic map, such as the chaotic case , so that , hence . A nonchaotic case Schröder also illustrated with his method, , yielded , and hence . If is the action of a group element on a set, then the iterated function corresponds to a free group. Most functions do not have explicit general closed-form expressions for the n-th iterate. The table below lists some that do. Note that all these expressions are valid even for non-integer and negative n, as well as non-negative integer n. Note: these two special cases of are the only cases that have a closed-form solution. Choosing b = 2 = –a and b = 4 = –a, respectively, further reduces them to the nonchaotic and chaotic logistic cases discussed prior to the table. Some of these examples are related among themselves by simple conjugacies. A few further examples, essentially amounting to simple conjugacies of Schröder's examples can be found in ref. Means of study Iterated functions can be studied with the Artin–Mazur zeta function and with transfer operators. 
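The Schröder logistic-map examples cited above have explicit n-th iterates, and the claim can be checked numerically. The following Python sketch is an added illustration: for f(x) = 4x(1 - x) the n-th iterate is sin^2(2^n arcsin(sqrt(x))), and for f(x) = 2x(1 - x) it is (1 - (1 - 2x)^(2^n))/2.

```python
import math

def iterate(f, x, n):
    """Apply f to x a total of n times."""
    for _ in range(n):
        x = f(x)
    return x

# Chaotic case: f(x) = 4x(1-x)  ->  f^n(x) = sin^2(2^n * arcsin(sqrt(x)))
f_chaotic = lambda x: 4 * x * (1 - x)
closed_chaotic = lambda x, n: math.sin(2**n * math.asin(math.sqrt(x)))**2

# Nonchaotic case: f(x) = 2x(1-x)  ->  f^n(x) = (1 - (1 - 2x)**(2**n)) / 2
f_tame = lambda x: 2 * x * (1 - x)
closed_tame = lambda x, n: (1 - (1 - 2 * x)**(2**n)) / 2

x, n = 0.3, 5
assert abs(iterate(f_chaotic, x, n) - closed_chaotic(x, n)) < 1e-9
assert abs(iterate(f_tame, x, n) - closed_tame(x, n)) < 1e-9
```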
In computer science In computer science, iterated functions occur as a special case of recursive functions, which in turn anchor the study of such broad topics as lambda calculus, or narrower ones, such as the denotational semantics of computer programs. Definitions in terms of iterated functions Two important functionals can be defined in terms of iterated functions. These are summation: and the equivalent product: Functional derivative The functional derivative of an iterated function is given by the recursive formula: Lie's data transport equation Iterated functions crop up in the series expansion of combined functions, such as . Given the iteration velocity, or beta function (physics), for the th iterate of the function , we have For example, for rigid advection, if , then . Consequently, , action by a plain shift operator. Conversely, one may specify given an arbitrary , through the generic Abel equation discussed above, where This is evident by noting that For continuous iteration index , then, now written as a subscript, this amounts to Lie's celebrated exponential realization of a continuous group, The initial flow velocity suffices to determine the entire flow, given this exponential realization which automatically provides the general solution to the translation functional equation, See also Irrational rotation Iterated function system Iterative method Rotation number Sarkovskii's theorem Fractional calculus Recurrence relation Schröder's equation Functional square root Abel function Infinite compositions of analytic functions Flow (mathematics) Tetration Notes References Dynamical systems Fractals Sequences and series Fixed points (mathematics) Functions and mappings Functional equations
5122880
https://en.wikipedia.org/wiki/Innovative%20Interfaces
Innovative Interfaces
Innovative Interfaces, Inc. (abbreviated III and called "Innovative" or "Triple I" in the industry) is a software company specializing in integrated systems for library management. Its key products include Sierra, Polaris, Millennium, and Virtua, with customers in 66 countries. Innovative joined ProQuest in January 2020. On December 1, 2021, Clarivate completed its acquisition of ProQuest and, by extension, Innovative. The company's software is used by various types of libraries including academic, public, school, medical, law, and special libraries as well as consortia. In September 2014 Sierra was installed at 1494 libraries (with 3435 facilities), Polaris at 1339 (with 2808 facilities), Millennium at 1316 (with 2640 facilities), and Virtua at 224 (with 490 facilities). The company was founded in 1978 by Jerry Kline and Steve Silberstein in Berkeley, California; its initial product was a system to interface OCLC data with a library's cataloging system. Huntsman Gay Global Capital and JMI Equity invested in the company in 2012, the same year Kim Massana, formerly president of Thomson Reuters Elite, was appointed CEO. The equity firms purchased the company outright the next year. The company also made several acquisitions within the next two years: SkyRiver Technology Solutions (which maintains partnerships with 3M, EBSCO Information Services, OverDrive, Inc., and Bibliotheca), Polaris Library Systems, and VTLS Inc. Bert Winemiller took over as CEO for a brief period in 2015 before the company named James Tallman as their new CEO in January 2016. During May 2019, Shaheen Javadizadeh was appointed CEO, and Jim Tallman was appointed to the role of Executive Chairman. In December 2019, Innovative was acquired by Ex Libris. When Innovative was acquired by ProQuest/Ex Libris, Yariv Kursh was named General Manager. Vega Platform In April 2019, Innovative launched a new platform, Inspire. The first product on this platform was Discovery. It was architected on a cloud-based platform. MARC records are converted to the BIBFRAME model and made searchable in a proprietary Context Engine. Search results are then shown in a Context Wheel that represents the relationships between resources, people, and concepts. The platform was later renamed Vega and made publicly available in December 2020. Innovative Users Group Formed in 1991 as an independent organization, the Innovative Users Group serves the libraries that use the company's software. The Innovative Users Group organizes an annual conference, organizes ballots for user-submitted enhancements, and maintains the IUG Clearinghouse for users to share tutorials, scripts, guides, and other resources created to better use the software. References External links Innovative Interfaces, Inc. Home Page Home Page of the Innovative Users Group, an independent organization of users of the Innovative software List of current Encore libraries “Automation Marketplace 2013: The Rush to Innovate.” Library Journal. Retrieved September 18, 2014. “Automation Marketplace 2012: Agents of Change.” Library Journal. Retrieved September 18, 2014. “Investing in The Future: Automation Marketplace 2009.” Library Journal. Retrieved September 21, 2011. “Automation System Marketplace 2008: Opportunity Out of Turmoil.” Library Journal. Retrieved September 21, 2011. Library automation Library-related organizations Software companies established in 1978 Companies based in Emeryville, California 1978 establishments in California
20800509
https://en.wikipedia.org/wiki/Toy%20Story%20%28franchise%29
Toy Story (franchise)
Toy Story (franchise) infobox summary: created by John Lasseter, Pete Docter, Andrew Stanton, and Joe Ranft; owned by Disney Enterprises, Inc.; years 1995–present; origin Toy Story (1995); media include the four main films, the spin-off film Lightyear (2022), the direct-to-video film Buzz Lightyear of Star Command: The Adventure Begins, Toy Story: The Musical (2008–16), comics, video games, and Lego Toy Story toys. Toy Story is a Disney media franchise that started in 1995 with the release of the animated feature film of the same name, produced by Pixar Animation Studios and released by Walt Disney Pictures. It is the first computer-animated franchise. The franchise is based on the anthropomorphic concept that all toys, unknown to humans, are secretly alive and the films focus on a diverse group of toys featuring a classic cowboy doll named Sheriff Woody and a modern spaceman action figure named Buzz Lightyear, principally voiced by Tom Hanks and Tim Allen. The group unexpectedly embark on adventures that challenge and change them. The franchise consists mainly of four CGI animated films: Toy Story (1995), Toy Story 2 (1999), Toy Story 3 (2010), and Toy Story 4 (2019), with a spin-off prequel film, Lightyear (2022), slated for release. The first Toy Story was the first feature-length film to be made entirely using computer-generated imagery. The first two films were directed by John Lasseter, the third film by Lee Unkrich, who acted as the co-director of the second film (together with Ash Brannon), the fourth film by Josh Cooley, and Lightyear by Angus MacLane. Produced on a total budget of $520 million, the Toy Story films have grossed more than $3 billion worldwide, becoming the 20th-highest-grossing franchise worldwide and the fourth-highest-grossing animated franchise. Each film set box office records, with the third and fourth included in the top 50 all-time worldwide films. The franchise has received acclaim from both critics and audiences. The first two films were re-released in theaters as a Disney Digital 3-D "double feature" for at least 2 weeks in October 2009 as a promotion for the then-upcoming third film. Main films Toy Story (1995) Toy Story, the first film in the franchise, was released on November 22, 1995. It was the first feature-length film created entirely by CGI and was directed by John Lasseter. The plot involves Andy (voiced by John Morris), an imaginative young boy, getting a new Buzz Lightyear (Tim Allen) action figure for his birthday, causing Sheriff Woody (Tom Hanks), a vintage cowboy doll, to think that he has been replaced as Andy's favorite toy. In competing for Andy's attention, Woody accidentally knocks Buzz out of a window, leading the other toys to believe he tried to murder Buzz. Determined to set things right, Woody tries to save Buzz and both must escape from the house of the next-door neighbor Sid Phillips (voiced by Erik von Detten), who likes to torture and destroy toys. In addition to Hanks and Allen, the film featured the voices of Jim Varney, Don Rickles, John Ratzenberger, Wallace Shawn, and Annie Potts. The film was critically and financially successful, grossing over $373 million worldwide. 
The film was later re-released in Disney Digital 3-D as part of a double feature, along with Toy Story 2, for a two-week run, which was later extended due to its financial success. Toy Story 2 (1999) Toy Story 2, the second film in the franchise, was released on November 24, 1999. Lasseter reprised his role as director. The plot involves Woody getting stolen by a greedy toy collector named Al McWhiggin (voiced by Wayne Knight). Buzz and several of Andy's toys set off to attempt to free Woody, who meanwhile has discovered his origins as a historic television star. In addition to the returning cast, Toy Story 2 included voice acting from Joan Cusack, Kelsey Grammer, Estelle Harris, and Jodi Benson. Toy Story 2 was not originally intended for release in theaters, but as a direct-to-video sequel to the original Toy Story, with a 60-minute running time. However, Disney's executives were impressed by the high quality of the in-work imagery for the sequel, and were also pressured by the main characters' voice actors Hanks and Allen, so they decided to convert Toy Story 2 into a theatrical film. It turned out to be an even greater success than the original Toy Story, grossing over $497 million worldwide. The film was re-released in Disney Digital 3-D as part of a double feature, along with Toy Story, on October 2, 2009. Toy Story 3 (2010) Toy Story 3, the third film in the franchise, was released on June 18, 2010, nearly 11 years after Toy Story 2. The plot focuses on the toys being accidentally dropped off at a daycare center while their owner, Andy, is getting ready to go to college. The toys discover that all of the toys are ruled by Lotso (voiced by Ned Beatty), a sinister teddy bear, while Woody finds potential hope for a new home in the hands of Bonnie, a toddler that takes great care of her toys. Blake Clark replaced Varney after Varney's death in 2000, while other new cast members included Michael Keaton, Timothy Dalton, Jeff Garlin, Kristen Schaal, and Bonnie Hunt. It was the first Toy Story film not to be directed by Lasseter (although he remained involved in the film as executive producer), but by Lee Unkrich, who edited the first two films and co-directed the second. It was Pixar's highest-grossing film of all time both domestically, surpassing Finding Nemo, until it was surpassed by Finding Dory in 2016 and worldwide, also surpassing Finding Nemo, until it was surpassed by Incredibles 2 in 2018. Toy Story 3 grossed more than the first and second films combined, making it the first animated film to have crossed the $1 billion mark. In August 2010, it surpassed Shrek 2, becoming the highest-grossing animated film of all time until it was surpassed by Frozen, another Disney production, in March 2014. Toy Story 3 was released on DVD and Blu-ray on November 2, 2010. Toy Story 4 (2019) Toy Story 4, the fourth feature film in the franchise, was released on June 21, 2019. Taking place not long after Toy Story 3, the story involves Woody, Buzz, and the other toys living well with their new owner Bonnie. On her first day of kindergarten, Bonnie creates a toy spork, named Forky (voiced by Tony Hale), out of garbage. Woody, having been neglected by Bonnie lately, personally takes it upon himself to keep Forky out of harm's way. 
During a road trip with Bonnie's family, Woody, to his delight encounters his old friend and former fellow toy Bo Peep (Annie Potts), who he had been separated from in the interim period between Toy Story 2 and Toy Story 3 and has to deal with fears of becoming a "lost toy". Rickles had died in 2017 prior to the production of the film, but Pixar used archival recordings from him to continue his voice work for the film. Additional new cast members include Keegan-Michael Key, Jordan Peele, Keanu Reeves, Ally Maki, and Christina Hendricks. The film had been originally announced on November 6, 2014 during an investor's call with Lasseter to direct, Galyn Susman to produce, with the screenplay written by Rashida Jones and Will McCormack based on the story developed by Lasseter, Andrew Stanton, Pete Docter, and Lee Unkrich. However, during production, Lasseter stepped down from his position at Pixar in 2017, though remained to consult for the film; Josh Cooley was named as the film's director, with Jonas Rivera replacing Susman as producer. The film underwent a major revision following the departures of Jones and McCormack later in 2017, with Stephany Folsom replacing them as screenwriter. Much of the original script by Jones and McCormack had to be dropped, delaying the release of the film. Possible fifth film In February 2019, Tim Allen, who voiced Lightyear in the films, expressed interest in doing another film as he "did not see any reason why they would not do it". On The Ellen DeGeneres Show that May, Hanks said that Toy Story 4 would be the final installment in the franchise, but producer Mark Nielsen disclosed a possibility of a fifth film, as Pixar was not ruling out that possibility. Spin-off film Lightyear (2022) At Disney's 2020 Investor Day meeting in December, Lightyear was announced as a spin-off film depicting the in-universe origin story of the fictional human Buzz Lightyear character who inspired the toy featured in the main films, with Chris Evans cast in the title role. Directed by Angus MacLane, the film will be released on June 17, 2022. Television series Toy Story Treats In 1996, a series of shorts known as Toy Story Treats were created as interstitials on ABC Saturday Morning, the predecessor to Disney's One Saturday Morning and ABC Kids. They did not necessarily follow the continuity from Toy Story, taking place before, during and after the events of the first film. They were aired roughly around the time of Toy Story release to home video. The shorts also appeared as bonus features on both "The Ultimate Toy Box" and as Easter eggs on the "10th Anniversary Edition" DVD menu of the first film, they were also restored in HD in a 1.33:1 aspect ratio and presented in the special features of the 2010 Blu-ray release of the film. John Ratzenberger, Wallace Shawn, and Jeff Pidgeon reprise their roles from the film as Hamm, Rex, and the Aliens, respectively, with Jim Hanks and Pat Fraley voicing Sheriff Woody and Buzz Lightyear. Toy Story Toons In 2011, Pixar started releasing short animated films to supplement the Toy Story films, called Toy Story Toons. The shorts pick up where Toy Story 3 has left off, with Woody, Buzz, and Andy's other toys finding a new home at Bonnie's. So far, three shorts have been released; Hawaiian Vacation, Small Fry, and Partysaurus Rex. Another short, titled Mythic Rock, was in development in 2013 but was never released. 
Forky Asks a Question A series of shorts named Forky Asks a Question for Disney+, with the new character Forky from Toy Story 4 (voiced by Tony Hale), was released on the launch date of the service on November 12, 2019. Television specials Pixar has also developed two 22-minute Toy Story television specials for ABC. They also air them on Disney Junior. Toy Story of Terror! The first was a Halloween-themed special titled Toy Story of Terror!, aired on October 16, 2013. Toy Story That Time Forgot The second was a Christmas-themed special titled Toy Story That Time Forgot, aired on December 2, 2014. Short films Lamp Life Lamp Life is a short film revealing Bo Peep's whereabouts between the events of Toy Story 2 and Toy Story 4, where she was used as a night light for first one and then two children before being donated to the antique shop, where she and her sheep eventually abandoned their home lamp and were reunited with Woody. It was released on Disney+ on January 31, 2020. Valerie LaPointe, who was a story supervisor on Toy Story 4, wrote and directed the short. Annie Potts and Ali Maki returned as Bo and Giggle McDimples. However, Woody is voiced by Jim Hanks, Tom Hanks' brother. Reception Box office performance Toy Storys first five days of domestic release (on Thanksgiving weekend), earned the film $39.1 million. The film placed first in the weekend's box office with $29.1 million, and maintained its number one position at the domestic box office for the following two weekends. It was the highest-grossing domestic film in 1995, and the third-highest-grossing animated film at the time. Toy Story 2 opened at No. 1 over the Thanksgiving Day weekend, with a three-day tally of $57.4 million from 3,236 theaters. It averaged $17,734 per theater over three days during that weekend, and stayed at No. 1 for the next two weekends. It was the third-highest-grossing film of 1999. Toy Story 3 had a strong debut, opening in 4,028 theaters and grossing $41.1 million at the box office on its opening day. In addition, Toy Story 3 had the highest opening-day gross for an animated film on record. During its opening weekend, the film grossed $110.3 million, making it #1 for the weekend; it was the biggest opening weekend ever for any Pixar film. Toy Story 3 stayed at the #1 spot for the next weekend. The film had the second-highest opening ever for an animated film at the time. It was the highest-grossing film of 2010, both domestically and worldwide. Toy Story 3 grossed over $1 billion, making it the seventh film in history, the second Disney film in 2010, and the first animated film to do so. Critical and public response Films According to Rotten Tomatoes, the Toy Story franchise is the most critically acclaimed franchise of all time. The first two films received a 100% "Certified Fresh" rating, while the third and fourth earned 98% and 97% "Certified Fresh" ratings. According to the site, no other franchise has had all of its films so highly rated—the Before trilogy comes closest with 98%, and the Dollars trilogy and The Lord of the Rings trilogy come after with average ratings of 95% and 94%, respectively, while the Toy Story franchise has an average of 99%. According to Metacritic, the Toy Story franchise is the second most critically acclaimed franchise of all time, after The Lord of the Rings trilogy, having an average rounded score of 90 out of 100. 
According to CinemaScore, polls conducted during the opening weekend, cinema audiences gave the first, third and fourth installments of the series an average grade of "A," while the second earned an "A+," on an A+ to F scale. Television specials Accolades Toy Story was nominated for three Academy Awards, including Best Original Screenplay, Best Original Score and Best Original Song for Randy Newman's "You've Got a Friend in Me." John Lasseter, the director of the film, also received a Special Achievement Award for "the development and inspired application of techniques that have made possible the first feature-length computer-animated film." Toy Story was also the first animated film to be nominated for the Academy Award for Best Original Screenplay. At the 53rd Golden Globe Awards, Toy Story earned two Golden Globe nominations—Best Motion Picture – Musical or Comedy and Best Original Song. It was also nominated for Best Special Visual Effects at the 50th British Academy Film Awards. Toy Story 2 won a Golden Globe for Best Motion Picture – Musical or Comedy and earned a single Academy Award nomination for the song "When She Loved Me," performed by Sarah McLachlan. The Academy Award for Best Animated Feature was introduced in 2001 after the first two Toy Story installments. Toy Story 3 won two Academy Awards – Best Animated Feature and Best Original Song for "We Belong Together". It earned three other nominations, including Best Picture, Best Adapted Screenplay, and Best Sound Editing. It was the third animated film in history to be nominated for Best Picture, after Beauty and the Beast and Up. Toy Story 3 also won the Golden Globe for Best Animated Feature Film and the award for Best Animated Film at the British Academy Film Awards. Toy Story 4 won the Academy Award for Best Animated Feature and was also nominated for Best Original Song for Newman's "I Can't Let You Throw Yourself Away." It is the first animated franchise to win Best Animated Feature award twice. It's also the first animated franchise to have every film nominated in the same category (Original Song). It was also nominated to the Golden Globe for Best Animated Feature Film (but lost against Missing Link) and nominated for Best Animated Film at the British Academy Film Awards. Cast and characters Crew Other spin-offs Buzz Lightyear of Star Command: The Adventure Begins Buzz Lightyear of Star Command: The Adventure Begins is a 2000 traditionally animated direct-to-video television film produced by Walt Disney Television Animation and by Pixar Animation Studios as a co-production that serves as a spin-off of the Toy Story franchise. The film was released on August 8, 2000 and features Tim Allen as the voice of Buzz Lightyear. The film follows Buzz Lightyear as a space ranger who fights against the evil Emperor Zurg, showing the inspiration for the Buzz Lightyear toyline that exists in the Toy Story series. The film later led to the television series, Buzz Lightyear of Star Command. Although the film was criticized for not using the same animation in Toy Story and Toy Story 2, it sold three million VHS and DVDs in its first week of release. 
Buzz Lightyear of Star Command Buzz Lightyear of Star Command is an traditionally animated television series produced by Walt Disney Television Animation and co-produced by Pixar Animation Studios that is a spin off of the Toy Story franchise, and was led from the direct-to-video film Buzz Lightyear of Star Command: The Adventure Begins, depicting the in-universe Toy Story series on which the Buzz Lightyear toy is based. The series takes place in the far future, featuring Buzz Lightyear voiced by Patrick Warburton (replacing Tim Allen), a famous, experienced Space Ranger who takes a crew of rookies under his wing as he investigates criminal activity across the galaxy and attempts to bring down Evil Emperor Zurg once and for all. It aired on UPN from October 2, 2000 to November 29, 2000 and on ABC from October 14, 2000 to January 13, 2001. Other media Comic books A 4-issue limited series Toy Story: Mysterious Stranger was published by Boom! Entertainment from May to August 2009. This was followed by an 8-issue ongoing series, starting with #0 in November 2009. Two Buzz Lightyear one-shots were released in 2010, for Free Comic Book Day and Halloween. A second 4-issue limited series, Toy Story: Toy Overboard was published by Boom! Entertainment from July to October 2010. A 4-issue limited series by Marvel Comics Toy Story: Tales from the Toy Chest was published from May to August 2012. Toy Story magazine was first released on July 21, 2010. Each edition was 24 pages in length, apart from the launch edition, which was 28 pages. A one-shot anthology comic book by Dark Horse Comics was released to tie in with Toy Story 4 in 2019. The comic picks up just after the events of the film, also exploring the backstories of Duke Caboom, Ducky, Bunny, Bo Peep and Giggle McDimples during their exploits as a band of lost toys. Video games Toy Story (1995) (Sega Genesis, Super Nintendo Entertainment System, Microsoft Windows, and Game Boy) Disney's Activity Center: Toy Story (1996) (Microsoft Windows) Disney's Animated Storybook: Toy Story (1996) (Microsoft Windows and macOS) Disney's Activity Center: Toy Story 2 (2000) (Microsoft Windows) Toy Story 2: Buzz Lightyear to the Rescue (1999) (Dreamcast, PlayStation, Nintendo 64, Microsoft Windows, macOS, and Game Boy Color) Toy Story 2: Woody Sousaku Daisakusen!! (2000) (Sega Pico) – released only in Japan Buzz Lightyear of Star Command (2000) (Game Boy Color, PlayStation, and Microsoft Windows) Jessie's Wild West Rodeo (2001) (Microsoft Windows and macOS) Toy Story Racer (2001) (PlayStation and Game Boy Color) Disney Hotshots: Toy Story 2 (2003) (Microsoft Windows) Toy Story 2: Operation Rescue Woody! (2005) (V.Smile) Toy Story Mania! (2009) (Wii, Microsoft Windows, Xbox 360, and PlayStation 3). Disney•Pixar Toy Story 3 (2010) (LeapPad, LeapPad2, LeapPad3, LeapPad Platinum, LeapPad Ultra, LeapPad Jr., Leapster Explorer, and LeapsterGS Explorer) Toy Story 3: The Video Game (2010) (PlayStation 2, PlayStation 3, Xbox 360, Wii, PlayStation Portable, Nintendo DS, Microsoft Windows, macOS, and iOS) Shooting Beena: Toy Story 3 – Woody to Buzz no Daibōken! (2010) (Advanced Pico Beena) – released only in Japan Toy Story: Smash It! (2013) (iOS and Android) Toy Story Drop! 
(2019) (iOS and Android) Games featuring Toy Story characters Disney Learning: 1st Grade (2000) (Microsoft Windows and macOS) Disney Learning: 2nd Grade (2000) (Microsoft Windows and macOS) Disney•Pixar Learning: 1st Grade (2002) (Microsoft Windows and macOS) Disney•Pixar Learning: 2nd and 3rd Grade (2002) (Microsoft Windows and macOS) Disney's Extreme Skate Adventure (2003) (Game Boy Advance, PlayStation 2, Xbox, and GameCube) LittleBigPlanet 2 (2011) (PlayStation 3) Disney•Pixar Pixar Pals (2011) (LeapPad, LeapPad2, LeapPad3, LeapPad Platinum, LeapPad Ultra, LeapPad Jr., Leapster Explorer, and LeapsterGS Explorer) Kinect: Disneyland Adventures (2011) (Xbox 360, Xbox One, and Microsoft Windows) Kinect Rush: A Disney•Pixar Adventure (2012) (Xbox 360, Xbox One, and Microsoft Windows) Disney Infinity (2013) (PlayStation 3, Xbox 360, Wii, Wii U, Nintendo 3DS, Microsoft Windows, iOS, and Apple TV) Lego The Incredibles (2018) (PlayStation 4, Xbox One, Nintendo Switch, Microsoft Windows, and macOS) Kingdom Hearts III (2019) (PlayStation 4 and Xbox One) Pixar created some original animations for the games, including fully animated sequences for PC titles. Woody and Buzz Lightyear were originally going to appear in the Final Mix version of the Disney/Square Enix video game Kingdom Hearts II. They were omitted from the final product, but their models appear in the game's coding, without textures. The director of the Kingdom Hearts series, Tetsuya Nomura, stated that he would like to include Pixar property in future Kingdom Hearts games, given Disney's purchase of Pixar. This eventually came true, as a stage based on Toy Story made its debut appearance in the series in Kingdom Hearts III, marking the first time that Pixar-based content appears in the series, along with Monsters, Inc. and Ratatouille. Merchandising and software Toy Story had a large promotion before its release, leading to numerous tie-ins with the film including images on food packaging. A variety of merchandise was released during the film's theatrical run and its initial VHS release including toys, clothing, and shoes, among other things. When action figures for Buzz Lightyear and Sheriff Woody were created, they were initially ignored by retailers. However, after over 250,000 figures were sold for each character before the film's release, demand continued to expand, eventually reaching over 25 million units sold by 2007. Also, Disney's Animated Storybook: Toy Story and Disney's Activity Center: Toy Story were released for Windows and Mac. Disney's Animated Storybook: Toy Story was the best selling software title of 1996, selling over 500,000 copies. Theme park attractions Buzz Lightyear attractions in many Disney Parks. Toy Story Midway Mania! at Disney's Hollywood Studios at the Walt Disney World Resort, Disney California Adventure at the Disneyland Resort and Tokyo DisneySea at Tokyo Disney Resort. Toy Story Land themed lands at Walt Disney Studios Park, Hong Kong Disneyland, Shanghai Disneyland and Disney's Hollywood Studios. Toy Story: The Musical on Disney Cruise Line's ship Disney Wonder. Totally Toy Story, an "instant theme park" then a theme area in Tomorrowland at Disneyland. Totally Toy Story Totally Toy Story was an instant theme park and a promotional event for the Toy Story film premiere held at El Capitan Theatre and Masonic Convention Hall. 
For the November 18, 1995 Toy Story premiere at El Capitan Theatre, Disney rented the Masonic Convention Hall, the next door building, for Totally Toy Story, an instant theme park and a promotional event for the movie. Movie goers paid an additional fee for the pop up park. The promotional event had pre-sales over $1 million and remained opened until January 1, 1996. The Toy Story Funhouse part was moved to Disneyland's Tomorrowland and opened there on January 27, 1996 and closed on May 27, 1996. Totally Toy Story, while in Hollywood, consisted of "Toy Story Art of Animation" exhibit in El Capitan's basement and the Toy Story Funhouse at the convention hall. The fun house consisted of 30,000 square feet of various attractions. These attractions continue the story of the movie with the toys life-size. Attractions Toy Story Funhouse attractions: Hamm's Theater – "Hamm's All-Doll Revue" has energetic dancing and original songs lasted 20 minutes Buzz's Galaxy - "Buzz & the Buzz Lites" show included music from Frank Sinatra two arcade-style games, "Whack-A-Alien" a motion-simulator ride Woody's Roundup dance hall, live musicians and country line-dancing lessons Pizza Planet restaurant Green Army Men's obstacle course, participants strap on foot base to tackle the course Mr. Potato Head's Playroom, contained Etch-a-Sketches and other dexterity games had a floor made up of old game boards Totally Interactive Room, had Sega and Nintendo Toy Story games souvenir shop Impact Toy Story innovative computer animation had a large impact on the film industry. After the film's debut, various industries were interested in the technology used for the film. Graphics chip makers desired to compute imagery similar to the film's animation for personal computers; game developers wanted to learn how to replicate the animation for video games; and robotics researchers were interested in building artificial intelligence into their machines that compared to the lifelike characters in the film. Various authors have also compared the film to an interpretation of Don Quixote as well as humanism. The free and open-source Linux distribution Debian takes its codenames from Toy Story characters, the tradition of which came about as Bruce Perens was involved in the early development of Debian while working at Pixar. Gromit Unleashed In 2013, Pixar designed a "Gromit Lightyear" sculpture based on the Aardman Animations character Gromit from Wallace and Gromit for Gromit Unleashed which sold for £65,000. To infinity and beyond! Buzz Lightyear's classic line "To infinity and beyond!" has seen usage not only on T-shirts, but among philosophers and mathematical theorists as well. Lucia Hall of The Humanist linked the film's plot to an interpretation of humanism. She compared the phrase to "All this and heaven, too!", indicating one who is happy with a life on Earth as well as having an afterlife. In 2008, during STS-124, astronauts took an action figure of Buzz Lightyear into space on the Discovery Space Shuttle as part of an educational experience for students that also stressed the catchphrase. The action figure was used for experiments in zero-g. Also, in 2008, the phrase made international news when it was reported that a father and son had continually repeated the phrase to help them keep track of each other while treading water for 15 hours in the Atlantic Ocean. 
Notes References External links at Disney Film series introduced in 1995 Animated film series Pixar franchises Mass media franchises Children's film series Films about sentient toys Children's comedy-drama films American children's animated adventure films
9873493
https://en.wikipedia.org/wiki/MindManager
MindManager
MindManager is a commercial mind mapping software application developed by Mindjet. The software provides ways for users to visualize information in mind maps and flowcharts. MindManager can be used to manage projects, organize information, and brainstorm. Mindjet had approximately two million users, including notable customers such as Coca Cola, Disney, IBM, and Wal-Mart. Features MindManager provides ways for users to visualize information using mind maps and, with the release of MindManager 2016 for Windows, also includes flowchart and concept map creation tools. The digital mind maps can be used as a "virtual whiteboard" for brainstorming, managing and planning projects, compiling research, organizing large amounts of information, and for strategic planning. MindManager also has features that allow budget calculations and formulas, Gantt chart views of project timelines, and guided brainstorming. Documents can be attached to mind map topics and viewed within the MindManager application. Links, images, and notes can also be added to mind map topics and viewed and searched in a side panel. Development The software that became MindManager was originally developed by Mike Jetter in the mid-1990s while he was recovering from a bone marrow transplant to treat leukemia. Jetter's goal was to develop a program that would overcome the limitations of creating mind maps with pen and paper, such as the inability to easily move items around. Following his release from hospital, Jetter decided to sell the software. The software's mind maps were initially based on the method created by Tony Buzan. Over time, however, Mindjet has developed its own style of mind mapping. The software was originally marketed under the name "MindMan — The Creative MindManager". In 1999, it was rebranded as MindManager. Originally only available for Windows, MindManager expanded to Mac OS X in 2006. With the release of version 7, the Windows version of MindManager adopted the ribbon interface first seen in Microsoft Office 2007 and introduced support for Office Open XML. In 2011, mobile versions of MindManager were released for both iOS and Android. Later that year, the company acquired Thinking Space, an Android-based information mapping application, and Cohuman, a social task management service, which the company developed into a collaborative, cloud-based service to complement MindManager called Mindjet Connect or Project Director. In September 2012, the Mindjet company combined all of its software, including MindManager, Mindjet Connect, and its mobile offerings into a single product, also called Mindjet. Mindjet moved away from the single-product offering in mid-2013. The stand-alone mind mapping product was again named MindManager, with a more expansive version tailored to large enterprise adoptions called MindManager Enterprise released in 2014. MindManager Enterprise added sharing options including viewing/editing within Microsoft SharePoint. A MindManager mind map viewer also became available with MindManager Enterprise 2016. On August 9, 2016, Corel announced that they had acquired the Mindjet MindManager business. Reception and awards MindManager has received generally positive notice from reviewers. MindManager 2016 for Windows took first place in Biggerplate's MindMapper's Choice poll. MindManager 8 received four out of five stars from TechRadar, while MindManager 9 received 3.5 out of 5 stars from PC Magazine and 4 out of 5 stars from Macworld. 
MindManager was chosen as one of the top 5 best mind mapping tools. MindManager also received a number of awards, including "Collaboration Product of the Year" for 2008 by Intranet Journal, a Jolt Productivity award for Design and Modeling tools from Dr. Dobb's Journal, and "Best of CeBIT" in the Personal Software category in 2004. See also Brainstorming List of concept- and mind-mapping software References External links Mind-mapping software Windows software Classic Mac OS software 1994 software
21469550
https://en.wikipedia.org/wiki/Virtual%20network%20interface
Virtual network interface
A virtual network interface (VIF) is an abstract virtualized representation of a computer network interface that may or may not correspond directly to a network interface controller. Operating system level It is common for the operating system kernel to maintain a table of virtual network interfaces in memory. This may allow the system to store and operate on such information independently of the physical interface involved (or even whether it is a direct physical interface or for instance a tunnel or a bridged interface). It may also allow processes on the system to interact concerning network connections in a more granular fashion than simply to assume a single amorphous "Internet" (of unknown capacity or performance). W. Richard Stevens, in volume 2 of his treatise entitled TCP/IP Illustrated, refers to the kernel's Virtual Interface Table in his discussion of multicast routing. For example, a multicast router may operate differently on interfaces that represent tunnels than on physical interfaces (e.g. it may only need to collect membership information for physical interfaces). Thus the virtual interface may need to divulge some specifics to the user, such as whether or not it represents a physical interface directly. In addition to allowing user space applications to refer to abstract network interface connections, in some systems a virtual interface framework may allow processes to better coordinate the sharing of a given physical interface (beyond the default operating system behavior) by hierarchically subdividing it into abstract interfaces with specified bandwidth limits and queueing models. This can imply restriction of the process, e.g. by inheriting a limited branch of such a hierarchy from which it may not stray. This extra layer of network abstraction is often unnecessary, and may have a minor performance penalty. However, it is also possible to use such a layer of abstraction to work around a performance bottleneck, indeed even to bypass the kernel for optimization purposes. Application level The term VIF has also been applied when the application virtualizes or abstracts network interfaces. Since most software need not concern itself with the particulars of network interfaces, and since the desired abstraction may already be available through the operating system, this usage is rare. See also Loopback Network virtualization Virtual Interface Architecture References External links Linux Network Interfaces Computer networks
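As an added illustration of hierarchically subdividing a physical interface into abstract interfaces with bandwidth limits, the following Python sketch models such a hierarchy. It is purely illustrative: the class and field names are invented for the example and do not correspond to any operating system's actual API.

```python
# Illustrative model of a virtual-interface hierarchy: each child interface
# reserves part of its parent's bandwidth, and a process handed a child
# cannot exceed that branch's limit.

class VirtualInterface:
    def __init__(self, name, bandwidth_kbps, parent=None):
        self.name = name
        self.bandwidth_kbps = bandwidth_kbps
        self.parent = parent
        self.children = []
        if parent is not None:
            allocated = sum(c.bandwidth_kbps for c in parent.children)
            if allocated + bandwidth_kbps > parent.bandwidth_kbps:
                raise ValueError("parent bandwidth exhausted")
            parent.children.append(self)

    def is_physical(self):
        # In this toy model, only the root corresponds to a real NIC.
        return self.parent is None

# Physical 1 Gbit/s interface subdivided into two abstract interfaces.
eth0 = VirtualInterface("eth0", 1_000_000)
bulk = VirtualInterface("eth0:bulk", 800_000, parent=eth0)
interactive = VirtualInterface("eth0:interactive", 200_000, parent=eth0)
```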
27579990
https://en.wikipedia.org/wiki/WolfSSL
WolfSSL
wolfSSL is a small, portable, embedded SSL/TLS library targeted for use by embedded systems developers. It is an open source implementation of TLS (SSL 3.0, TLS 1.0, 1.1, 1.2, 1.3, and DTLS 1.0, 1.2, and 1.3) written in the C programming language. It includes SSL/TLS client libraries and an SSL/TLS server implementation as well as support for multiple APIs, including those defined by SSL and TLS. wolfSSL also includes an OpenSSL compatibility interface with the most commonly used OpenSSL functions. A predecessor of wolfSSL, yaSSL is a C++ based SSL library for embedded environments and real time operating systems with constrained resources. Platforms wolfSSL is currently available for Win32/64, Linux, macOS, Solaris, Threadx, VxWorks, FreeBSD, NetBSD, OpenBSD, embedded Linux, Yocto Project, OpenEmbedded, WinCE, Haiku, OpenWrt, iPhone, Android, Nintendo Wii and Gamecube through DevKitPro support, QNX, MontaVista, Tron variants, NonStop OS, OpenCL, Micrium's MicroC/OS-II, FreeRTOS, SafeRTOS, Freescale MQX, Nucleus, TinyOS, TI-RTOS, HP-UX, uTasker, uT-kernel, embOS, INtime, mbed, RIOT, CMSIS-RTOS, FROSTED, Green Hills INTEGRITY, Keil RTX, TOPPERS, PetaLinux, Apache Mynewt, and PikeOS. History The genesis of yaSSL, or yet another SSL, dates to 2004. OpenSSL was available at the time, and was dual licensed under the OpenSSL License and the SSLeay license. yaSSL, alternatively, was developed and dual-licensed under both a commercial license and the GPL. yaSSL offered a more modern API, commercial style developer support and was complete with an OpenSSL compatibility layer. The first major user of wolfSSL/CyaSSL/yaSSL was MySQL. Through bundling with MySQL, yaSSL has achieved extremely high distribution volumes in the millions. In February 2019, Daniel Stenberg, the creator of cURL, joined the wolfSSL project. Protocols The wolfSSL lightweight SSL library implements the following protocols: SSL 3.0, TLS 1.0, TLS 1.1, TLS 1.2, TLS 1.3 DTLS 1.0, DTLS 1.2, DTLS 1.3 Protocol Notes: SSL 2.0 - SSL 2.0 was deprecated (prohibited) in 2011 by RFC 6176. wolfSSL does not support it. SSL 3.0 - SSL 3.0 was deprecated (prohibited) in 2015 by RFC 7568. In response to the POODLE attack, SSL 3.0 has been disabled by default since wolfSSL 3.6.6, but can be enabled with a compile-time option. Algorithms wolfSSL uses the following cryptography libraries: wolfCrypt By default, wolfSSL uses the cryptographic services provided by wolfCrypt. wolfCrypt Provides RSA, ECC, DSS, Diffie–Hellman, EDH, NTRU, DES, Triple DES, AES (CBC, CTR, CCM, GCM), Camellia, IDEA, ARC4, HC-128, ChaCha20, MD2, MD4, MD5, SHA-1, SHA-2, SHA-3, BLAKE2, RIPEMD-160, Poly1305, Random Number Generation, Large Integer support, and base 16/64 encoding/decoding. An experimental cipher called Rabbit, a public domain software stream cipher from the EU's eSTREAM project, is also included. Rabbit is potentially useful to those encrypting streaming media in high performance, high demand environments. wolfCrypt also includes support for the recent Curve25519 and Ed25519 algorithms. wolfCrypt acts as a back-end crypto implementation for several popular software packages and libraries, including MIT Kerberos (where it can be enabled using a build option). NTRU CyaSSL+ includes NTRU public key encryption. The addition of NTRU in CyaSSL+ was a result of the partnership between yaSSL and Security Innovation. NTRU works well in mobile and embedded environments due to the reduced bit size needed to provide the same security as other public key systems. 
In addition, it's not known to be vulnerable to quantum attacks. Several cipher suites utilizing NTRU are available with CyaSSL+ including AES-256, RC4, and HC-128. SGX wolfSSL supports use of Intel SGX (Software Guard Extensions). Intel SGX allows for a smaller attack surface area and has been shown to provide a higher level of security for executing code without a significant negative impact on performance. Hardware Acceleration Platforms Supported Supported trusted elements Currently, wolfSSL has the following as supported trusted elements: STSAFE ATECC508A Hardware encryption support The following tables list wolfSSL's support for using various devices' hardware encryption with various algorithms. - "All" denotes 128, 192, and 256-bit supported block sizes Licensing wolfSSL is Free Software, licensed under the GPL-2.0-or-later license. See also Transport Layer Security Comparison of TLS implementations Comparison of cryptography libraries GnuTLS Network Security Services OpenSSL References External links wolfSSL/CyaSSL Homepage wolfSSL Now With ChaCha20 and Poly1305 C (programming language) libraries Cryptographic software Transport Layer Security implementation
33104468
https://en.wikipedia.org/wiki/Trello
Trello
Trello is a web-based, Kanban-style, list-making application and is developed by Trello Enterprise, a subsidiary of Atlassian. Created in 2011 by Fog Creek Software (now Glitch), it was spun out to form the basis of a separate company in New York City in 2014 and sold to Atlassian in January 2017. History The name Trello is derived from the word "trellis" which had been a code name for the project at its early stages. Trello was released at a TechCrunch event by Fog Creek founder Joel Spolsky. In September 2011 Wired magazine named the application one of "The 7 Coolest Startups You Haven't Heard of Yet". Lifehacker said "it makes project collaboration simple and kind of enjoyable". In 2014, it raised US$10.3 million in funding from Index Ventures and Spark Capital. Prior to its acquisition, Trello had sold 22% of its shares to investors, with the remaining shares held by founders Michael Pryor and Joel Spolsky. In May 2016, Trello claimed it had more than 1.1 million daily active users and 14 million total signups. On January 9, 2017, Atlassian announced its intent to acquire Trello for $425 million. The transaction was made with $360 million in cash and $65 million in shares and options. In December 2018, Trello Enterprise announced its acquisition of Butler, a company that developed a "Power-Up" for automating tasks within a Trello board. Trello announced 35 million users in March 2019 and 50 million users in October 2019. Uses Users can create their task boards with different columns and move the tasks between them. Typically columns include task statuses such as To Do, In Progress, Done. The tool can be used for personal and business purposes including real estate management, software project management, school bulletin boards, lesson planning, accounting, web design, gaming, and law office case management. Architecture According to a Fog Creek blog post in January 2012, the client was a thin web layer which downloads the main app, written in CoffeeScript and compiled to minified JavaScript, using Backbone.js, HTML5 .pushState(), and the Mustache templating language. The server was built on top of MongoDB, Node.js and a modified version of Socket.io. Reception On January 26, 2017, PC Magazine gave Trello a 3.5 / 5 rating, calling it "flexible" and saying that "you can get rather creative", while noting that "it may require some experimentation to figure out how to best use it for your team and the workload you manage." See also Comparison of Scrum software Comparison of project management software Kanban board List of collaborative software Tech companies in the New York metropolitan area References External links Android (operating system) software Wear OS software Atlassian products Internet properties established in 2011 IOS software Project management software Task management software Windows software Web applications 2017 mergers and acquisitions
255039
https://en.wikipedia.org/wiki/Luwian%20language
Luwian language
Luwian (), sometimes known as Luvian or Luish, is an ancient language, or group of languages, within the Anatolian branch of the Indo-European language family. The ethnonym Luwian comes from Luwiya (also spelled Luwia or Luvia) – the name of the region in which the Luwians lived. Luwiya is attested, for example, in the Hittite laws. The two varieties of Proto-Luwian or Luwian (in the narrow sense of these names) are known after the scripts in which they were written: Cuneiform Luwian (CLuwian) and Hieroglyphic Luwian (HLuwian). There is no consensus as to whether these were a single language or two closely related languages. Classification Several other Anatolian languages – particularly Carian, Lycian, Lydian and Milyan (also known as Lycian B or Lycian II) – are now usually identified as related to Luwian – and as mutually connected more closely than other constituents of the Anatolian branch. This suggests that these languages formed a sub-branch within Anatolian. Some linguists follow Craig Melchert in referring to this broader group as Luwic, whereas others refer to the "Luwian group" (and, in that sense, "Luwian" may mean several distinct languages). Likewise, Proto-Luwian may mean the common ancestor of the whole group, or just the ancestor of Luwian (normally, under tree-naming conventions, were the branch to be called Luwic, its ancestor should be known as Proto-Luwic or Common Luwic; in practice, such names are seldom used). Luwic or Luwian (in the broad sense of the term), is one of three major sub-branches of Anatolian, alongside Hittite and Palaic. As Luwian has numerous archaisms, it is regarded as important to the study of Indo-European languages (IE) in general, the other Anatolian languages, and the Bronze Age Aegean. These archaisms are often regarded as supporting the view that the Proto-Indo-European language (PIE) had three distinct sets of velar consonants: plain velars, palatovelars, and labiovelars. For Melchert, PIE → Luwian z (probably ); → k; and → ku (probably ). Luwian has also been enlisted for its verb kalut(t)i(ya)-, which means "make the rounds of" and is probably derived from *kalutta/i- "circle". It has been argued that this derives from a proto-Anatolian word for "wheel", which in turn would have derived from the common word for "wheel" found in all other Indo-European families. The wheel was invented in the 5th millennium BC and, if kaluti does derive from it, then the Anatolian branch left PIE after its invention (so validating the Kurgan hypothesis as applicable to Anatolian). However, kaluti need not imply a wheel and so need not have been derived from a PIE word with that meaning. The IE words for a wheel may well have arisen in those other IE languages after the Anatolian split. Geographic and chronological distribution Luwian was among the languages spoken during the 2nd and 1st millennia BC by groups in central and western Anatolia and northern Syria. The earliest Luwian texts in cuneiform transmission are attested in connection with the Kingdom of Kizzuwatna in southeastern Anatolia, as well as a number of locations in central Anatolia. Beginning in the 14th century BC, Luwian-speakers came to constitute the majority in the Hittite capital Hattusa. It appears that by the time of the collapse of the Hittite Empire ca. 1180 BC, the Hittite king and royal family were fully bilingual in Luwian. 
Long after the extinction of the Hittite language, Luwian continued to be spoken in the Neo-Hittite states of Syria, such as Milid and Carchemish, as well as in the central Anatolian kingdom of Tabal that flourished in the 8th century BC. A number of scholars in the past attempted to argue for the Luwian homeland in western Anatolia. According to James Mellaart, the earliest Indo-Europeans in northwest Anatolia were the horse-riders who came to this region from the north and founded Demircihöyük (Eskisehir Province) in Phrygia c. 3000 BC. They were allegedly ancestors of the Luwians who inhabited Troy II, and spread widely in the Anatolian peninsula. He cited the distribution of a new type of wheel-made pottery, Red Slip Wares, as some of the best evidence for his theory. According to Mellaart, the proto-Luwian migrations to Anatolia came in several distinct waves over many centuries. The recent detailed review of Mellaart's claims suggests that his ethnolinguistic conclusions cannot be substantiated on archaeological grounds. Other arguments were advanced for the extensive Luwian presence in western Anatolia in the late second millennium BC. In the Old Hittite version of the Hittite Code, some, if not all, of the Luwian-speaking areas were called Luwiya. Widmer (2007) has argued that the Mycenaean term ru-wa-ni-jo, attested in Linear B, refers to the same area. but the stem *Luwan- was recently shown to be non-existent. In a corrupt late copy of the Hittite Code the geographical term Luwiya is replaced with Arzawa a western Anatolian kingdom corresponding roughly with Mira and the Seha River Land. Therefore, several scholars shared the view that Luwian was spoken—to varying degrees—across a large portion of western Anatolia, including Troy (Wilusa), the Seha River Land (Sēḫa ~ Sēḫariya, i.e., the Greek Hermos river and Kaikos valley), and the Mira-Kuwaliya kingdom with its core being the Maeander valley. In a number of recent publications, however, the geographic identity between Luwiya and Arzawa was rejected or doubted. In the post-Hittite era, the region of Arzawa came to be known as Lydia (Assyrian Luddu, Greek Λυδία), where the Lydian language was in use. The name Lydia has been derived from the name Luwiya (Lydian *lūda- < *luw(i)da- < luwiya-, with regular Lydian sound change y > d). The Lydian language, however, cannot be regarded as the direct descendant of Luwian and probably does not even belong to the Luwic group (see Anatolian languages). Therefore, none of the arguments in favour of the Luwian linguistic dominance in Western Asia Minor can be regarded as compelling, although the issue continues to be debated. Script and dialects Luwian was split into many dialects, which were written in two different writing systems. One of these was the Cuneiform Luwian which used the form of Old Babylonian cuneiform that had been adapted for the Hittite language. The other was Hieroglyphic Luwian, which was written in a unique native hieroglyphic script. The differences between the dialects are minor, but they affect vocabulary, style, and grammar. The different orthographies of the two writing systems may also hide some differences. Cuneiform Luwian Cuneiform Luwian is the corpus of Luwian texts attested in the tablet archives of Hattusa; it is essentially the same cuneiform writing system used in Hittite. In Laroche's Catalog of Hittite Texts, the corpus of Hittite cuneiform texts with Luwian insertions runs from CTH 757–773, mostly comprising rituals. 
Cuneiform Luwian texts are written in several dialects, of which the most easily identifiable are Kizzuwatna Luwian, Ištanuwa Luwian, and Empire Luwian. The last dialect represents the vernacular of Hattusan scribes of the 14th–13th centuries BC and is mainly attested through Glossenkeil words in Hittite texts. Compared to cuneiform Hittite, logograms (signs with a set symbolic value) are rare. Instead, most writing is done with the syllabic characters, where a single symbol stands for a vowel, or a consonant-vowel pair (either VC or CV). A striking feature is the consistent use of 'full-writing' to indicate long vowels, even at the beginning of words. In this system a long vowel is indicated by writing it twice. For example, īdi "he goes" is written i-i-ti rather than i-ti, and ānda "in" is written a-an-ta rather than an-ta. Hieroglyphic Luwian Hieroglyphic Luwian is the corpus of Luwian texts written in a native script, known as Anatolian hieroglyphs. Once thought to be a variety of the Hittite language, "Hieroglyphic Hittite" was formerly used to refer to the language of the same inscriptions, but this term is now obsolete. The dialect of Luwian hieroglyphic inscriptions appears to be either Empire Luwian or its descendant, Iron Age Luwian. The first report of a monumental inscription dates to 1850, when an inhabitant of Nevşehir reported the relief at Fraktin. In 1870, antiquarian travellers in Aleppo found another inscription built into the south wall of the Al-Qaiqan Mosque. In 1884, Polish scholar discovered an inscription near Köylütolu, in western Turkey. The largest known inscription was excavated in 1970 in Yalburt, northwest of Konya. Luwian hieroglyphic texts contain a limited number of lexical borrowings from Hittite, Akkadian, and Northwest Semitic; the lexical borrowings from Greek are limited to proper nouns, although common nouns borrowed in the opposite direction do exist. Phonology The reconstruction of the Luwian phoneme inventory is based mainly on the written texts and comparisons with the known development of other Indo-European languages. Two series of stops can be identified, transliterated as geminate in the cuneiform script. These fortis and lenis stops may have been distinguished by either voicing or gemination. The contrast was lost initially and finally, suggesting that any voicing only appeared intervocalically. The following table provides a minimal consonant inventory, as can be reconstructed from the script. The existence of other consonants, which were not differentiated in writing, is possible. There are only three vowels, a, i, and u, which could be short or long. Vowel length is not stable but changes with the stress and word position. For example, annan occurs alone as an adverb as ānnan ('underneath') but as a preposition, it becomes annān pātanza ('under the feet'). The characters that are transliterated as -h- and -hh- have often been interpreted as pharyngeal fricatives and . However, they may have instead been uvular and or velar fricatives and . In loans to Ugaritic, these sounds are transcribed with <ḫ> and <ġ>, while in Egyptian they are transcribed with ḫ and g. As both of these languages had pharyngeal consonants, the Luwian sounds are unlikely to have been pharyngeal. In transcriptions of Luwian cuneiform, š is traditionally distinguished from s, since they were originally distinct signs for two different sounds, but in Luwian, both signs probably represented the same s sound. 
A noteworthy phonological development in Luwian is rhotacism; in some cases, d, l, and n become r. For example, *īdi ('he goes') becomes īri and wala- ('die') becomes wara-. Additionally, a d in word-final position can be dropped, and an s may be added between two dental consonants, so *ad-tuwari becomes aztuwari ('you all eat') (ds and z are phonetically identical). Morphology Nouns There were two grammatical genders: animate and inanimate/neuter. There are two grammatical numbers: singular and plural. Some animate nouns could also take a collective plural in addition to the regular numerical plural. Luwian had six cases: nominative, genitive, dative/locative, accusative, ablative/instrumental, and vocative. The vocative case occurs rarely in surviving texts and only in the singular. In the animate gender, an -i- is inserted between the stem and the case ending. In hieroglyphic Luwian, the particle -sa/-za is added to the nominative/accusative inanimate case ending. In the genitive case, cuneiform and hieroglyphic Luwian differ sharply from each other. In cuneiform Luwian the possessive suffix -assa is used for the genitive singular and -assanz- is used for the genitive plural. In hieroglyphic Luwian, as in Hittite, the classical Indo-European suffixes -as for the genitive singular and -an for the plural are used. The special form of possessive adjectives with a plural possessor is restricted to Kizzuwatna Luwian and probably represents a calque from Hurrian. Because of the prevalence of -assa place names and words scattered around all sides of the Aegean Sea, the possessive suffix was sometimes considered evidence of a shared non-Indo-European language or an Aegean Sprachbund preceding the arrivals of Luwians and Greeks. It is, however, possible to account for the Luwian possessive construction as a result of case attraction in the Indo-European noun phrase. Adjective Adjectives agree with nouns in number and gender. Forms for the nominative and the accusative differ only in the animate gender, and even then only in the singular. For the sake of clarity, the table includes only the endings beginning with -a, but endings can also begin with an -i. The forms are largely derived from the forms of the nominal declension, with an -as- before the case ending that would be expected for nouns. Pronouns In addition to the personal pronouns typical of Anatolian languages, Luwian also has demonstrative pronouns, which are formed from apa- and za-/zi-. The case endings are similar to those of Hittite, but not all cases are attested for the personal pronouns. In the third person, the demonstrative pronoun apa- occurs instead of the personal pronoun. Possessive pronouns and demonstrative pronouns in apa- are declined as adjectives. All known forms of the personal pronouns are given, but it is not clear how their meanings differed or how they changed for different cases. In addition to the forms given in the table, Luwian also had a demonstrative pronoun formed from the stem za-/zi-, but not all cases are known, and also a relative pronoun, which was declined regularly: kwis (nominative singular animate), kwin (accusative singular animate), kwinzi (nominative/accusative plural animate), kwati (ablative/instrumental singular), kwanza (dative/locative plural), kwaya (nominative/accusative plural inanimate). Some indefinite pronouns whose meanings are not entirely clear are also transmitted. Verbs As in many other Indo-European languages, two numbers (singular and plural) and three persons are distinguished. 
There are two moods: indicative and imperative but no subjunctive. Only the active voice has been attested, but the existence of a mediopassive is assumed. There are two tenses: the present, which is used to express future events as well, and the preterite. The conjugation is very similar to the Hittite ḫḫi conjugation. A single participle can be formed with the suffix -a(i)mma. It has a passive sense for transitive verbs and a stative sense for intransitive verbs. The infinitive ends in -una. Syntax The usual word order is subject-object-verb, but words can be moved to the front of the sentence for stress or to start a clause. Relative clauses are normally before the antecedent, but they sometimes follow the antecedent. Dependent words and adjectives are normally before their head word. Enclitic particles are often attached to the first word or conjunction. Various conjunctions with temporal or conditional meaning are used to link clauses. There is no coordinating conjunction, but main clauses can be coordinated with the enclitic -ha, which is attached to the first word of the following clause. In narratives, clauses are linked by using the prosecutive conjunctions: a- before the first word of the following clause means 'and then', and pā, can be an independent conjunction at the start of a clause and the enclitic -pa indicates contrast or a change of theme. The following example sentence demonstrates several common features of Luwian: a final verb, the particle chain headed by the conjunction a-, the quotative clitic -wa, and the preverb sarra adding directionality to the main verb awiha. Vocabulary and texts The known Luwian vocabulary consists mostly of words inherited from Proto-Indo-European. Loan words for various technical and religious concepts derive mainly from Hurrian, and were often subsequently passed on through Luwian to Hittite. The surviving corpus of Luwian texts consists principally of cuneiform ritual texts from the 16th and 15th centuries BC and monumental inscriptions in hieroglyphs. There are also some letters and economic documents. The majority of the hieroglyphic inscriptions derive from the 12th to 7th centuries BC, after the fall of the Hittite empire. Another source of Luwian are the hieroglyphic seals which date from the 16th to the 7th centuries BC. Seals from the time of the Hittite empire are often digraphic, written in both cuneiform and hieroglyphics. However, the seals nearly always are limited to logograms. The absence of the syllabic symbols from the seals makes it impossible to determine the pronunciation of names and titles that appear on them, or even to make a certain attribution of the text to a specific language. History of research After the decipherment of Hittite, cuneiform Luwian was recognised as a separate, but related language by Emil Forrer in 1919. Further progress in the understanding of the language came after the Second World War, with the publication and analysis of a larger number of texts. Important work in this period was produced by Bernhard Rosenkranz, Heinrich Otten and Emmanuel Laroche. An important advance came in 1985 with the reorganisation of the whole text-corpus by Frank Starke. The decipherment and classification of Hieroglyphic Luwian was much more difficult. In the 1920s, there were a number of failed attempts. In the 1930s some individual logograms and syllabic signs were correctly identified. 
At this point the classification of the language was not yet clear and, since it was believed to be a form of Hittite, it was referred to as Hieroglyphic Hittite. After a break in research due to the Second World War, there was breakthrough in 1947 with the discovery and publication of a Phoenician-Hieroglyphic Luwian bilingual text by Helmuth Theodor Bossert. The reading of several syllabic signs was still faulty, however, and as a result it was not realised that the cuneiform and hieroglyphic texts recorded the same language. In the 1970s, as a result of a fundamental revision of the readings of a large number of hieroglyphs by John David Hawkins, Anna Morpurgo Davies, and Günter Neumann, it became clear that both cuneiform and hieroglyphic texts recorded the same Luwian language. This revision resulted from a discovery outside the area of Luwian settlement, namely the annotations on Urartian pots, written in the Urartian language using the hieroglyphic Luwian script. The sign , which had hitherto been read as ī was shown to be being used to indicate the sound za, which triggered a chain reaction resulting in an entirely new system of readings. Since that time, research has concentrated on better understanding the relationship between the two different forms of Luwian, in order to gain a clearer understanding of Luwian as a whole. Trojan hypothesis Luwian has been deduced as one of the likely candidates for the language spoken by the Trojans. After the 1995 finding of a Luwian biconvex seal at Troy VII, there has been a heated discussion over the language that was spoken in Homeric Troy. Frank Starke of the University of Tübingen demonstrated that the name of Priam, king of Troy at the time of the Trojan War, is connected to the Luwian compound Priimuua, which means "exceptionally courageous". "The certainty is growing that Wilusa/Troy belonged to the greater Luwian-speaking community," but it is not entirely clear whether Luwian was primarily the official language or it was in daily colloquial use. See also Pre-Greek substrate Notes Sources Beekes, R. S. P. "Luwians and Lydians", Kadmos 42 (2003): 47–9. Gander, Max. "Asia, Ionia, Maeonia und Luwiya? Bemerkungen zu den neuen Toponymen aus Kom el-Hettan (Theben-West) mit Exkursen zu Westkleinasien in der Spätbronzezeit". Klio 97/2 (2015): 443-502. Gander, Max "The West: Philology". Hittite Landscape and Geography, M. Weeden and L. Z. Ullmann (eds.). Leiden: Brill, 2017. pp. 262–280. Hawkins, J. D. "Tarkasnawa King of Mira: 'Tarkendemos', Boğazköy Sealings, and Karabel", Anatolian Studies 48 (1998): 1–31. Hawkins, J. D. "The Arzawa letters in recent perspective", British Museum Studies in Ancient Egypt and Sudan 14 (2009): 73–83. Hawkins, J. D. "A New Look at the Luwian Language". Kadmos 52/1 (2013): 1-18. Laroche, Emmanuel. Catalogue des textes hittites. Paris: Klincksieck, 1971. Matessi, A. "The Making of Hittite Imperial Landscapes: Territoriality and Balance of Power in South-Central Anatolia during the Late Bronze Age". Journal of Ancient Near Eastern History, AoP (2017). Melchert H. Craig. "Greek mólybdos as a loanword from Lydian", in Anatolian Interfaces: Hittites, Greeks and their Neighbours, eds. B. J. Collins et al. Oxford: Oxbow Books, 2008, pp. 153–7. Melchert, H. Craig. 'Lycian', in The Ancient Languages of Asia Minor, ed. R. D. Woodard. Cambridge: Cambridge University Press, 2008, pp. 46–55, esp. 46. Melchert, H. Craig, ed. The Luwians. Boston: Brill, 2003. . Melchert, H. Craig. Anatolian Historical Phonology. 
Amsterdam: Rodopi, 1994. Melchert, H. Craig. Cuneiform Luvian Lexicon. Chapel Hill: self-published, 1993. Melchert, H. Craig. "PIE velars in Luvian", in Studies in memory of Warren Cowgill (1929–1985): Papers from the Fourth East Coast Indo-European Conference, Cornell University, June 6–9, 1985, ed. C. Watkins. Berlin: Walter de Gruyter, 1987, pp. 182–204. Otten, Heinrich. Zur grammatikalischen und lexikalischen Bestimmung des Luvischen. Berlin: Akademie-Verlag, 1953. Rieken, Elisabeth. "Luwier, Lykier, Lyder—alle vom selben Stamm?", in Die Ausbreitung des Indogermanischen: Thesen aus Sprachwissenschaft, Archäologie und Genetik; Akten der Arbeitstagung der Indogermanischen Gesellschaft, Würzburg, 24–26 September 2009, ed. H. Hettrich & S. Ziegler. Wiesbaden: Reichert, 2012. Rosenkranz, Bernhard. Beiträge zur Erforschung des Luvischen. Wiesbaden: Harrassowitz, 1952. Sasseville, David. Anatolian Verbal Stem Formation. Leiden / New-York: Brill, 2021. Singer, I. 2005. 'On Luwians and Hittites.' Bibliotheca Orientalis 62:430–51. (Review article of Melchert 2003). Starke, Frank. 'Troia im Kontext des historisch-politischen und sprachlichen Umfeldes Kleinasiens im 2. Jahrtausend. Studia Troica 7:446–87. Starke, Frank. Die keilschrift-luwischen Texte in Umschrift (StBoT 30, 1985) Starke, Frank. Untersuchungen zur Stammbildung des keilschrift-luwischen Nomens (StBoT 30, 1990) Watkins, C. 1995. How to Kill a Dragon: Aspects of Indo-European Poetics. New York and Oxford. Watkins, C.1994. 'The Language of the Trojans.' In Selected Writings, ed. L. Oliver et al., vol. 2. 700–717. Innsbruck. = Troy and the Trojan War. A Symposium held at Bryn Mawr College, October 1984, ed. M. Mellink, 45–62. Bryn Mawr. Widmer, P. 2006. 'Mykenisch ru-wa-ni-jo, "Luwier".' Kadmos 45:82–84. Woudhuizen, Fred. The Language of the Sea Peoples. Amsterdam: Najade Pres, 1992. Yakubovich, Ilya. Sociolinguistics of the Luvian Language. Leiden: Brill, 2010 Yakubovich, Ilya. "The Origin of Luwian Possessive Adjectives". In Proceedings of the 19th Annual UCLA Indo-European Conference, Los Angeles, November 3–4, 2007, ed. K. Jones-Bley et al., Washington: Institute for the Study of Man, 2008. Luwian Identities: Culture, Language and Religion between Anatolia and the Aegean. Brill, 2013. (Hardback) (e-Book) External links Luwian Swadesh list of basic vocabulary words (from Wiktionary's Swadesh list appendix) Arzawa, to the west, throws light on Hittites Alekseev Manuscript Hieroglyphic Luwian Phonetic Signs Catalog of Hittite Texts: texts in other languages Genitive Case and Possessive Adjective in Anatolian Melchert's homepage on the UCLA website
37160229
https://en.wikipedia.org/wiki/Join-pattern
Join-pattern
Join-patterns provide a way to write concurrent, parallel and distributed computer programs by message passing. Compared to the use of threads and locks, this is a high-level programming model that uses communication constructs to abstract away the complexity of the concurrent environment and to allow scalability. Its focus is on the execution of a chord between messages atomically consumed from a group of channels. The pattern is based on the join-calculus and uses pattern matching. Concretely, it allows the joint definition of several functions and/or channels by matching concurrent calls and message patterns. It is a type of concurrency pattern because it makes it easier and more flexible for these entities to communicate and to deal with the multi-threaded programming paradigm. Description The join-pattern (or a chord in Cω) is like a super pipeline with synchronisation and matching. The idea is to match and join a set of messages available from different message queues, and then handle them all simultaneously with one handler. It can be expressed with a When clause to specify the first expected communication, And clauses to join (pair) further channels, and a Do clause to run a task with the collected messages. A constructed join pattern typically takes this form:

j.When(a1).And(a2). ... .And(an).Do(d)

The argument of When may be a synchronous or asynchronous channel or an array of asynchronous channels. Each subsequent argument to And must be an asynchronous channel. More precisely, when a message matches a chain of linked patterns, its handler runs (in a new thread if it is in an asynchronous context); otherwise the message is queued until one of its patterns is enabled, and if there are several matches, an unspecified pattern is selected. Unlike an event handler, which services one of several alternative events at a time, in conjunction with all other handlers on that event, a join pattern waits for a conjunction of channels and competes for execution with any other enabled pattern. Because a join-pattern is defined over a set of pi-calculus channels that support two different operations, sending and receiving, two join-calculus names are needed to implement it: a channel name for sending (a message), and a function name for receiving a value (a request). The meaning of the join definition is that a call to the function returns a value that was sent on the channel. Each time the function and the channel are invoked concurrently, the definition triggers the return process and synchronizes with the other joins.

J ::=              //join patterns
    | x<y>         //message send pattern
    | x(y)         //function call pattern
    | J | JBIS     //synchronization

"From a client's perspective, a channel just declares a method of the same name and signature. The client posts a message or issues a request by invoking the channel as a method. A continuation method must wait until/unless a single request or message has arrived on each of the channels following the continuation's When clause. If the continuation gets to run, the arguments of each channel invocation are dequeued (thus consumed) and transferred (atomically) to the continuation's parameters."

In most cases, the order of synchronous calls is not guaranteed, for performance reasons. Finally, during the match, the messages available in the queue can be stolen by some intervening thread; indeed, the awakened thread may have to wait again. 
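To make the When/And/Do reading above concrete, the following is a minimal sketch in plain Scala, deliberately not tied to any particular joins library; the names MiniChord, put and get are invented for the example. It simulates a single chord joining a synchronous get() with an asynchronous put(msg): get() blocks until a message is available, consumes it atomically, and runs the handler, which is roughly what j.When(Get).And(Put).Do(body) expresses.

import scala.collection.mutable

// A toy chord: a synchronous get() joined with one asynchronous put(msg).
// get() blocks until a put message is available; the handler then runs
// with the consumed message, in the caller's thread.
class MiniChord[T](handler: T => T) {
  private val putQueue = mutable.Queue.empty[T]

  // Asynchronous channel: enqueue the message and return immediately.
  def put(msg: T): Unit = synchronized {
    putQueue.enqueue(msg)
    notifyAll() // a pending get() may now be able to fire the chord
  }

  // Synchronous channel: wait until the join is complete (a message exists),
  // atomically consume the message, and run the handler.
  def get(): T = synchronized {
    while (putQueue.isEmpty) wait()
    handler(putQueue.dequeue())
  }
}

object MiniChordDemo extends App {
  val buffer = new MiniChord[String](identity)
  new Thread(new Runnable { def run(): Unit = buffer.put("hello") }).start() // asynchronous sender
  println(buffer.get()) // synchronous receiver prints "hello"
}

A real join-pattern implementation generalises this to many chords over overlapping sets of channels and must arbitrate when several patterns are enabled at once; the sketch has exactly one pattern, so no such arbitration is needed.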
History π-calculus – 1992 The π-calculus belongs to the family of process calculi. It provides a mathematical formalism for describing and analyzing properties of concurrent computation in which channel names can themselves be communicated along channels, and in this way it can describe concurrent computations whose network configuration may change during the computation. Join-Calculus – 1993 Join patterns first appeared in Fournet and Gonthier's foundational join-calculus, an asynchronous process algebra designed for efficient implementation in a distributed setting. The join-calculus is a process calculus as expressive as the full π-calculus. It was developed to provide a formal basis for the design of distributed programming languages, and therefore intentionally avoids communication constructs found in other process calculi, such as rendezvous communication. Distributed Join-Calculus – 1996 The join-calculus is both a name-passing calculus and a core language for concurrent and distributed programming. For this reason the Distributed Join-Calculus, which extends the join-calculus with distributed programming, was created in 1996. This work uses mobile agents, where agents are not only programs but core images of running processes together with their communication capabilities. JoCaml, Funnel and Join Java – 2000 JoCaml and Funnel are functional languages supporting declarative join patterns. They present ideas for directly implementing a process calculus in a functional setting. Another extension to (non-generic) Java, Join Java, was independently proposed by von Itzstein and Kearney. Polyphonic C# – 2002 Cardelli, Benton and Fournet proposed an object-oriented version of join patterns for C# called Polyphonic C#. Cω – 2003 Cω is an adaptation of the join-calculus to an object-oriented setting. This variant of Polyphonic C# was included in the public release of Cω (a.k.a. Comega) in 2004. Scala Joins – 2007 Scala Joins is a library for using join patterns with Scala in the context of extensible pattern matching, in order to integrate joins into an existing actor-based concurrency framework. JErlang – 2009 Erlang is a language that natively supports the concurrent, real-time and distributed paradigm, but synchronisation between processes remained complex; that is why JErlang (the J stands for Join), an extension of Erlang based on the join-calculus, was built. Join-pattern in classic programming literature "Join-patterns can be used to easily encode related concurrency idioms like actors and active objects." 
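The claim above, that join patterns can encode actors and active objects, can be illustrated with a small, hedged sketch in plain Scala (no joins library); the names ActiveCounter, increment and getCount are invented for the example. The object's state is carried as a message on an internal channel, and each public operation behaves like a chord that joins the request with that state message.

import scala.collection.mutable

// Illustrative sketch only: an "active object" whose state lives in a
// single message on an internal channel. Each public method acts like a
// chord joining (request & state).
class ActiveCounter {
  private val state = mutable.Queue[Int](0) // one pending state message

  // chord: increment() & state(n) -> state(n + 1)
  def increment(): Unit = synchronized {
    while (state.isEmpty) wait()  // wait for the state message
    val n = state.dequeue()       // consume it
    state.enqueue(n + 1)          // re-emit the updated state
    notifyAll()
  }

  // chord: getCount() & state(n) -> state(n), reply n to the caller
  def getCount(): Int = synchronized {
    while (state.isEmpty) wait()
    val n = state.dequeue()
    state.enqueue(n)
    notifyAll()
    n
  }
}

A full join-pattern implementation would generalise this to arbitrary chords over several channels; here the matching is trivial because there is exactly one pattern per method. The examples from the literature that follow show the same idioms expressed directly with join patterns.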
Barriers

class SymmetricBarrier {
  public readonly Synchronous.Channel Arrive;
  public SymmetricBarrier(int n) {
    // create j and init channels (elided)
    var pat = j.When(Arrive);
    for (int i = 1; i < n; i++) pat = pat.And(Arrive);
    pat.Do(() => { });
  }
}

Dining philosophers problem

var j = Join.Create();
Synchronous.Channel[] hungry;
Asynchronous.Channel[] chopstick;
j.Init(out hungry, n);
j.Init(out chopstick, n);
for (int i = 0; i < n; i++) {
  var left = chopstick[i];
  var right = chopstick[(i+1) % n];
  j.When(hungry[i]).And(left).And(right).Do(() => {
    eat();
    left(); right(); // replace chopsticks
  });
}

Mutual exclusion

class Lock {
  public readonly Synchronous.Channel Acquire;
  public readonly Asynchronous.Channel Release;
  public Lock() {
    // create j and init channels (elided)
    j.When(Acquire).And(Release).Do(() => { });
    Release(); // initially free
  }
}

Producers/Consumers

class Buffer<T> {
  public readonly Asynchronous.Channel<T> Put;
  public readonly Synchronous<T>.Channel Get;
  public Buffer() {
    Join j = Join.Create();  // allocate a Join object
    j.Init(out Put);         // bind its channels
    j.Init(out Get);
    j.When(Get).And(Put).Do  // register chord
      (t => { return t; });
  }
}

Reader-writer locking

class ReaderWriterLock {
  private readonly Asynchronous.Channel idle;
  private readonly Asynchronous.Channel<int> shared;
  public readonly Synchronous.Channel AcqR, AcqW, RelR, RelW;
  public ReaderWriterLock() {
    // create j and init channels (elided)
    j.When(AcqR).And(idle).Do(() => shared(1));
    j.When(AcqR).And(shared).Do(n => shared(n+1));
    j.When(RelR).And(shared).Do(n => { if (n == 1) idle(); else shared(n-1); });
    j.When(AcqW).And(idle).Do(() => { });
    j.When(RelW).Do(() => idle());
    idle(); // initially free
  }
}

Semaphores

class Semaphore {
  public readonly Synchronous.Channel Acquire;
  public readonly Asynchronous.Channel Release;
  public Semaphore(int n) {
    // create j and init channels (elided)
    j.When(Acquire).And(Release).Do(() => { });
    for (; n > 0; n--) Release(); // initially n free
  }
}

Fundamental features and concepts
Join-calculus : The join pattern first appeared with this process calculus.
Message passing : Join patterns work on top of a message-passing system, for the sake of parallelism.
Channel : Channels are used to synchronize and pass messages between concurrently executing threads. In general, a channel may be involved in more than one join pattern; each pattern defines a different continuation that may run when the channel is invoked.
Synchronous : A join pattern can use a synchronous channel, which returns a result. The continuation of a synchronous pattern runs in the thread of the synchronous sender.
Asynchronous : It can also use an asynchronous channel, which returns no result but takes arguments. The continuation of an asynchronous pattern runs in a newly spawned thread. A join pattern may be purely asynchronous, provided its continuation is a subroutine and its When clause only lists asynchronous channels.
Combine synchronous and asynchronous : Merging the declarations of a synchronous and an asynchronous buffer yields a module that supports both types of communication with consumers.
Scheduler : Join patterns are scheduled against one another (e.g. by a round-robin scheduler or a first-match scheduler).
Design patterns : The join pattern is first of all a behavioural and a concurrency pattern.
Concurrent programming : It executes concurrently.
Pattern matching : The join pattern works by matching tasks.
Parallel programming : It performs tasks in parallel.
Distributed programming : Jobs can be scattered over different agents and environments with this pattern.
Software transactional memory : Software transactional memory (STM) is one possible implementation of the communication between joins.
Overlapping : The implementation may allow patterns declared over overlapping sets of channels.
Application domain Mobile agent A mobile agent is an autonomous software agent with a certain social ability and, most importantly, mobility. It is composed of software and data that can move between different computers automatically while continuing its execution. Mobile agents can be used to combine concurrency and distribution if one uses the join-calculus. That is why a new concept named the "distributed join-calculus" was created; it is an extension of the join-calculus with locations and primitives to describe mobility. This approach models agents as running processes together with their communication capabilities, and introduces the notion of location, a physical site expressing the actual position of the agent. Thanks to the join-calculus, one location can be moved atomically to another site. The processes of an agent are specified as a set which defines its functionality, including asynchronous emission of messages and migration to another location. Consequently, locations are organized in a tree to represent the movement of the agent more easily. A benefit of this representation is the possibility of creating a simple failure model. Usually a crash of a physical site causes the permanent failure of all its locations, but with the join-calculus a problem with a location can be detected at any other running location, allowing error recovery. In 2007, an extension of the basic join-calculus with methods that make agents proactive appeared. The agents can observe an environment shared between them. With this environment, it is possible to define variables shared by all agents (e.g. a naming service through which agents discover each other). Compilation Join-languages are built on top of the join-calculus taken as a core language. The calculus is analysed in terms of asynchronous processes, and the join pattern provides a model for synchronizing the results. To do this, two compilers exist: Join Compiler: a compiler for a language named "join language", which was created only for the join-calculus. JoCaml Compiler: a compiler for an extension of Objective Caml created to use the join-calculus. The two compilers work with the same underlying mechanism, an automaton.

let A(n) | B() = P(n)
and A(n) | C() = Q(n)
;;

The automaton represents the consumption of the messages arriving at a completed join pattern. Each state is a possible step in the code execution, and each transition is the reception of a message that moves between two steps. When all messages have been grabbed, the compiler executes the join body corresponding to the completed pattern. In the join-calculus, the basic values are names, such as A, B or C in the example, and the two compilers represent these values in two different ways. The join compiler uses a vector with two slots, the first for the name itself and the second for a queue of pending messages. JoCaml uses a name as a pointer to its definition; the definition stores the pointers to the other names, together with a status field and a matching data structure per message. 
The fundamental difference lies in when the guard process is executed: the first compiler verifies that all the names have pending messages ready, whereas the second uses only one variable and consults the others to know whether the pattern is complete. Recent research describes the compilation scheme as the combination of two basic steps: dispatching and forwarding. The design and correctness of the dispatcher essentially stem from pattern-matching theory, while inserting an internal forwarding step in communications is a natural idea which intuitively does not change process behaviour. The authors observed that a direct implementation of extended join-pattern matching at the runtime level would significantly complicate the management of message queues, which would then need to be scanned in search of matching messages before consuming them. Implementations and libraries Join patterns have been used with many different languages. Some languages use join patterns as the basis of their implementation, for example Polyphonic C# or MC#, whereas other languages integrate join patterns through a library, like Scala Joins for Scala or the Joins library for VB. Moreover, some languages, such as Scheme, have been used to extend the join pattern itself. Join Java Join Java is a language based on the Java programming language allowing the use of the join-calculus. It introduces three new language constructs: Join methods are defined by two or more Join fragments. A Join method will execute once all the fragments of the Join pattern have been called. If the return type is a standard Java type then the leading fragment will block the caller until the Join pattern is complete and the method has executed. If the return type is of type signal then the leading fragment will return immediately. All trailing fragments are asynchronous, so they will not block the caller. Example:

class JoinExample {
  int fragment1() & fragment2(int x) {
    // Will return value of x to caller of fragment1
    return x;
  }
}

Asynchronous methods are defined by using the signal return type. This has the same characteristics as the void type, except that the method will return immediately. When an asynchronous method is called, a new thread is created to execute the body of the method. Example:

class ThreadExample {
  signal thread(SomeObject x) {
    // This code will execute in a new thread
  }
}

Ordering modifiers Join fragments can be repeated in multiple Join patterns, so there can be a case where multiple Join patterns are completed when a fragment is called. Such a case could occur in the example below if B(), C() and D(), then A(), are called. The final A() fragment completes three of the patterns, so there are three possible methods that may be called. The ordered class modifier is used here to determine which Join method will be called. The default, and the behaviour when using the unordered class modifier, is to pick one of the methods at random. With the ordered modifier the methods are prioritised according to the order in which they are declared. Example:

class ordered SimpleJoinPattern {
  void A() & B() { }
  void A() & C() { }
  void A() & D() { }
  signal D() & E() { }
}

The closest related language is Polyphonic C#. JErlang In Erlang, coding synchronisation between multiple processes is not straightforward. To overcome this limitation JErlang, a join-calculus-inspired extension of Erlang, was implemented (the J stands for Join). 
The features of this language are: Joins allow first-match semantics and the possibility of having multiple patterns, with preservation of the order of the messages.

operation() ->
    receive
        {ok, sum} and {val, X} and {val, Y} -> {sum, X + Y};
        {ok, mult} and {val, X} and {val, Y} -> {mult, X * Y};
        {ok, sub} and {val, X} and {val, Y} -> {sub, X - Y};
    end
end

Guards provide additional filtering that is not expressed in terms of patterns; only a limited number of expressions without side effects are allowed.

receive
    {Transaction, M} and {limit, Lower, Upper}
        when (Lower <= M and M <= Upper) ->
    commit_transaction(M, Transaction)
end

With non-linear patterns, messages can match multiple joins.

receive
    {get, X} and {set, X} -> {found, 2, X}
end
...
receive
    {Pin, id} and {auth, Pin} and {commit, Id} ->
        perform_transaction(Pin, Id)
end

Propagation allows matching messages to be copied instead of removed.

receive
    prop({session, Id}) and {act, Action, Id} ->
        perform_action(Action, Id);
    {session, Id} and {logout, Id} ->
        logout_user(Id)
end
...
receive
    {Pin, id} and {auth, Pin} and {commit, Id} ->
        perform_transaction(Pin, Id)
end

Synchronous calls are also supported.

receive
    {accept, Pid1} and {asynchronous, Value} and {accept, Pid2} ->
        Pid1 ! {ok, Value},
        Pid2 ! {ok, Value}
end

C++ Yigong Liu has written some classes for the join pattern, including all the useful tools such as asynchronous and synchronous channels, chords, etc. It is integrated in the Boost C++ project.

template <typename V>
class buffer : public joint {
public:
  async<V> put;
  synch<V, void> get;
  buffer() {
    chord(get, put, &buffer::chord_body);
  }
  V chord_body(void_t g, V p) {
    return p;
  }
};

This example shows a thread-safe buffer and message queue with the basic operations put and get. C# Polyphonic C# Polyphonic C# is an extension of the C# programming language. It introduces a new concurrency model with synchronous and asynchronous methods (the latter return control to the caller) and chords (also known as 'synchronization patterns' or 'join patterns').

public class Buffer {
  public String get() & public async put(String s) {
    return s;
  }
}

This is a simple buffer example. MC# The MC# language is an adaptation of the Polyphonic C# language to the case of concurrent distributed computations.

public handler Get2 long () & channel c1 (long x) & channel c2 (long y) {
  return (x + y);
}

This example demonstrates the use of chords as a synchronization tool. Parallel C# Parallel C# is based on Polyphonic C# and adds some new concepts such as movable methods and higher-order functions.

using System;

class Test13 {
  int Receive() & async Send(int x) {
    return x * x;
  }

  public static void Main(string[] args) {
    Test13 t = new Test13();
    t.Send(2);
    Console.WriteLine(t.Receive());
  }
}

This example demonstrates how to use joins. Cω Cω adds new language features to support concurrent programming (based on the earlier Polyphonic C#). The Joins Concurrency Library for C# and other .NET languages is derived from this project. Scalable Join Patterns This is an easy-to-use, declarative and scalable join-pattern library. In contrast to the Russo library, it has no global lock; instead it works with compare-and-swap (CAS) and atomic messages. The library brings three improvements to the join pattern: message stealing for unused resources (allowing barging); a lazy queue, which saves both on allocation and potentially on interprocessor communication by avoiding allocation or enqueueing on an optimistic fast path; and a "WOKEN" status, which ensures that a blocked synchronous caller is woken only once. 
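The "WOKEN" idea can be illustrated with a small, hedged sketch in plain Scala using only the JDK's atomics; the names WaitingCaller, await and tryWake are invented for the example and do not correspond to the library's actual API. The compare-and-swap guarantees that, even if several threads race to complete a join involving the same blocked caller, exactly one of them performs the wake-up.

import java.util.concurrent.atomic.AtomicBoolean
import java.util.concurrent.locks.LockSupport

// Illustrative sketch (not the library's actual code): a blocked synchronous
// caller is resumed at most once, because only the thread that wins the
// compare-and-swap performs the unpark.
final class WaitingCaller {
  private val woken = new AtomicBoolean(false)
  private val owner = Thread.currentThread() // the blocked caller's thread

  // Called by the blocked caller after registering itself in a wait queue.
  def await(): Unit = {
    while (!woken.get()) LockSupport.park(this) // loop guards against spurious wake-ups
  }

  // Called by any thread that completes a matching join; returns true only
  // for the single thread that actually performed the wake-up.
  def tryWake(): Boolean = {
    val first = woken.compareAndSet(false, true)
    if (first) LockSupport.unpark(owner)
    first
  }
}

A full implementation would combine such per-caller flags with lock-free message queues, so that a woken caller re-checks whether its message was stolen in the meantime and, if so, blocks again, as described at the end of the Description section.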
JoCaml JoCaml is the first language where the join-pattern was implemented. Indeed, at the beginning all the different implementation was compiled with the JoCaml Compiler. JoCaml language is an extension of the OCaml language. It extends OCaml with support for concurrency and synchronization, the distributed execution of programs, and the dynamic relocation of active program fragments during execution. type coins = Nickel | Dime and drinks = Coffee | Tea and buttons = BCoffee | BTea | BCancel;; (* def defines a Join-pattern set clause * "&" in the left side of = means join (channel synchronism) * "&" in the right hand side means: parallel process * synchronous_reply :== "reply" [x] "to" channel_name * synchronous channels have function-like types (`a -> `b) * asynchronous channels have types (`a Join.chan) * only the last statement in a pattern rhs expression can be an asynchronous message * 0 in an asynchronous message position means STOP ("no sent message" in CSP terminology). *) def put(s) = print_endline s ; 0 (* STOP *) ;; (* put: string Join.chan *) def serve(drink) = match drink with Coffee -> put("Cofee") | Tea -> put("Tea") ;; (* serve: drinks Join.chan *) def refund(v) = let s = Printf.sprintf "Refund %d" v in put(s) ;; (* refund: int Join.chan *) let new_vending serve refund = let vend (cost:int) (credit:int) = if credit >= cost then (true, credit - cost) else (false, credit) in def coin(Nickel) & value(v) = value(v+5) & reply () to coin or coin(Dime) & value(v) = value(v+10) & reply () to coin or button(BCoffee) & value(v) = let should_serve, remainder = vend 10 v in (if should_serve then serve(Coffee) else 0 (* STOP *)) & value(remainder) & reply () to button or button(BTea) & value(v) = let should_serve, remainder = vend 5 v in (if should_serve then serve(Tea) else 0 (* STOP *)) & value(remainder) & reply () to button or button(BCancel) & value(v) = refund( v) & value(0) & reply () to button in spawn value(0) ; coin, button (* coin, button: int -> unit *) ;; (* new_vending: drink Join.chan -> int Join.chan -> (int->unit)*(int->unit) *) let ccoin, cbutton = new_vending serve refund in ccoin(Nickel); ccoin(Nickel); ccoin(Dime); Unix.sleep(1); cbutton(BCoffee); Unix.sleep(1); cbutton(BTea); Unix.sleep(1); cbutton(BCancel); Unix.sleep(1) (* let the last message show up *) ;; gives Coffee Tea Refund 5 Hume Hume is a strict, strongly typed functional language for limited resources platforms, with concurrency based on asynchronous message passing, dataflow programming, and a Haskell like syntax. Hume does not provide synchronous messaging. It wraps a join-pattern set with a channel in common as a box, listing all channels in an in tuple and specifying all possible outputs in an out tuple. Every join-pattern in the set must conform to the box input tuple type specifying a '*' for non required channels, giving an expression whose type conform to the output tuple, marking '*' the non fed outputs. A wire clause specifies a tuple of corresponding input origins or sources and optionally start values a tuple of output destinations, being channels or sinks (stdout, ..). A box can specify exception handlers with expressions conforming to the output tuple. 
data Coins = Nickel | Dime; data Drinks = Coffee | Tea; data Buttons = BCoffee | BTea | BCancel; type Int = int 32 ; type String = string ; show u = u as string ; box coffee in ( coin :: Coins, button :: Buttons, value :: Int ) -- input channels out ( drink_outp :: String, value’ :: Int, refund_outp :: String) -- named outputs match -- * wildcards for unfilled outputs, and unconsumed inputs ( Nickel, *, v) -> ( *, v + 5, *) | ( Dime, *, v) -> ( *, v + 10, *) | ( *, BCoffee, v) -> vend Coffee 10 v | ( *, BTea, v) -> vend Tea 5 v | ( *, BCancel, v) -> let refund u = "Refund " ++ show u ++ "\n" in ( *, 0, refund v) ; vend drink cost credit = if credit >= cost then ( serve drink, credit - cost, *) else ( *, credit, *); serve drink = case drink of Coffee -> "Cofee\n" Tea -> "Tea\n" ; box control in (c :: char) out (coin :: Coins, button:: Buttons) match 'n' -> (Nickel, *) | 'd' -> (Dime, *) | 'c' -> (*, BCoffee) | 't' -> (*, BTea) | 'x' -> (*, BCancel) | _ -> (*, *) ; stream console_outp to "std_out" ; stream console_inp from "std_in" ; -- dataflow wiring wire cofee -- inputs (channel origins) (control.coin, control.button, coffee.value’ initially 0) -- outputs destinations (console_outp, coffee.value, console_outp) ; wire control (console_inp) (coffee.coin, coffee.button) ; Visual Basic Concurrent Basic – CB An extension of Visual Basic 9.0 with asynchronous concurrency constructs, called Concurrent Basic (for short CB), offer the join patterns. CB (builds on earlier work on Polyphonic C#, Cω and the Joins Library) adopts a simple event-like syntax familiar to VB programmers, allows one to declare generic concurrency abstractions and provides more natural support for inheritance, enabling a subclass to augment the set of patterns. CB class can declare method to execute when communication has occurred on a particular set of local channels asynchronous and synchronous, forming a join pattern. Module Buffer Public Asynchronous Put(ByVal s As String) Public Synchronous Take() As String Private Function CaseTakeAndPut(ByVal s As String) As String _ When Take, Put Return s End Function End Module This example shows all new keywords used by Concurrent Basic: Asynchronous, Synchronous and When. Joins library (C# and VB) This library is a high-level abstractions of the Join Pattern using objects and generics. Channels are special delegate values from some common Join object (instead of methods). class Buffer { public readonly Asynchronous.Channel<string> Put; public readonly Synchronous<string>.Channel Get; public Buffer() { Join join = Join.Create(); join.Initialize(out Put); join.Initialize(out Get); join.When(Get).And(Put).Do(delegate(string s) { return s; }); } } This example shows how to use methods of the Join object. Scala There is a library in Scala called "Scala Joins" Scala Joins to use the Join-Pattern, it proposes to use pattern matching Pattern Matching as a tool for creating models of joins. You can find examples of the use of the join pattern in scala here: Join definitions in Scala. The pattern matching facilities of this language have been generalized to allow representation independence for objects used in pattern matching. So now it's possible to use a new type of abstraction in libraries. The advantage of join patterns is that they allow a declarative specification of the synchronization between different threads. Often, the join patterns corresponds closely to a finite state machine that specifies the valid states of the object. 
In Scala, it is possible to solve many problems with pattern matching and Scala Joins, for example the reader–writer lock.

class ReaderWriterLock extends Joins {
  private val Sharing = new AsyncEvent[Int]
  val Exclusive, ReleaseExclusive = new NullarySyncEvent
  val Shared, ReleaseShared = new NullarySyncEvent
  join {
    case Exclusive() & Sharing(0) => Exclusive reply
    case ReleaseExclusive() => { Sharing(0); ReleaseExclusive reply }
    case Shared() & Sharing(n) => { Sharing(n+1); Shared reply }
    case ReleaseShared() & Sharing(1) => { Sharing(0); ReleaseShared reply }
    case ReleaseShared() & Sharing(n) => { Sharing(n-1); ReleaseShared reply }
  }
  Sharing(0)
}

Within a class, events are declared as regular fields, and the join construct enables pattern matching via a list of case declarations. Each declaration is written with =>: the left-hand side is a join pattern describing the combination of asynchronous and synchronous events, and the right-hand side is the join body, which is executed when the join pattern is complete. In Scala, it is also possible to use Scala's actor library together with the join pattern. For example, an unbounded buffer:

val Put = new Join1[Int]
val Get = new Join

class Buffer extends JoinActor {
  def act() {
    receive {
      case Get() & Put(x) => Get reply x
    }
  }
}

Actor-based concurrency is supported by means of a library, and join patterns are provided as a library extension as well, so there is the opportunity to combine join patterns with the event-driven concurrency model offered by actors. As the example shows, join patterns are used with actors in the same way: a list of case declarations inside the receive method states when the pattern is complete. Practically the same tools are available in F# for using join patterns. Scala Join and Chymyst are newer implementations of the join pattern, improving upon Dr. Philipp Haller's Scala Joins. Haskell Join Language is an implementation of the join pattern in Haskell. Scheme Join patterns allow a new programming style, especially for the multi-core architectures available in many programming situations, with a high level of abstraction. It is based on guards and propagation, and an example of this extension has been implemented in Scheme. Guards are essential to guarantee that only data with a matching key is updated or retrieved. Propagation can cancel an item, read its contents and put an item back into the store; of course, the item is also in the store during the reading. The guards are expressed with shared variables, and the novelty is that the join pattern can now contain propagated and simplified parts: in Scheme, the part before / is propagated and the part after / is removed. The goal-based approach divides the work into many tasks and joins all the results at the end with the join pattern. A system named "MiniJoin" has been implemented to use intermediate results to solve other tasks when possible; if this is not possible, it waits for the solutions of other tasks in order to solve itself. A concurrent join-pattern application executed in parallel on a multi-core architecture does not by itself guarantee that parallel execution is free of conflicts. To guarantee this, together with a high degree of parallelism, a software transactional memory (STM) within a highly tuned concurrent data structure based on atomic compare-and-swap (CAS) is used. This allows many concurrent operations to be run in parallel on multi-core architectures. 
Moreover, atomic execution is used to prevent "false conflicts" between CAS and STM. Other similar design patterns The join pattern is not the only pattern for performing multiple tasks, but it is the only one that allows communication between resources, synchronization and the joining of different processes. Sequence pattern: consists of waiting for one task to complete before switching to another (the classic implementation). Split pattern (parallel split): performs several tasks in parallel at the same time (e.g. MapReduce). See also Join Java – Join Java is a programming language that extends the standard Java programming language Joins (concurrency library) – Joins is an asynchronous concurrent computing API from Microsoft Research for the .NET Framework. Join-calculus – The join-calculus was developed to provide a formal basis for the design of distributed programming languages. References Notes External links Concurrent Basic Scalable Joins The Joins Concurrency Library INRIA, Join Calculus homepage Distributed computing Software design patterns
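Underneath the language-level syntax shown in the examples above, a join pattern amounts to "block until a message is pending on every channel named in the pattern, then consume those messages and run the body atomically". The following is a minimal sketch, in C with POSIX threads, of the asynchronous Put / synchronous Get buffer used in several of the examples; it illustrates the idea only and is not the internal mechanism of any of the libraries above (names such as buffer_put are invented for the sketch, and overflow handling is omitted).

#include <pthread.h>
#include <stdio.h>

/* One asynchronous channel (Put) holding queued strings, and one
 * synchronous channel (Get) whose callers block until the pattern
 * "Get() & Put(s)" can fire. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  nonempty;
    const char     *items[64];
    int             count;
} buffer_t;

void buffer_init(buffer_t *b) {
    pthread_mutex_init(&b->lock, NULL);
    pthread_cond_init(&b->nonempty, NULL);
    b->count = 0;
}

/* Asynchronous Put: enqueue the message and return immediately. */
void buffer_put(buffer_t *b, const char *s) {
    pthread_mutex_lock(&b->lock);
    b->items[b->count++] = s;
    pthread_cond_signal(&b->nonempty);   /* a join pattern may now fire */
    pthread_mutex_unlock(&b->lock);
}

/* Synchronous Get: block until some Put message is pending, then run
 * the join body (here it simply returns the message). */
const char *buffer_get(buffer_t *b) {
    pthread_mutex_lock(&b->lock);
    while (b->count == 0)                /* pattern not yet complete */
        pthread_cond_wait(&b->nonempty, &b->lock);
    const char *s = b->items[--b->count];
    pthread_mutex_unlock(&b->lock);
    return s;
}

static void *producer(void *arg) {
    buffer_put((buffer_t *)arg, "hello from Put");
    return NULL;
}

int main(void) {
    buffer_t b;
    pthread_t t;
    buffer_init(&b);
    pthread_create(&t, NULL, producer, &b);
    printf("Get received: %s\n", buffer_get(&b));   /* blocks until a Put arrives */
    pthread_join(t, NULL);
    return 0;
}

Compiled with the -pthread flag, the program prints the string handed to Put. A real join library generalizes this: each declared pattern watches several channels at once, and the first pattern whose channels all have pending messages consumes one message from each and runs its body.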
57973941
https://en.wikipedia.org/wiki/Casebook%20PBC
Casebook PBC
Casebook PBC is a US cloud computing public-benefit corporation headquartered in New York City. Incubated by the Annie E. Casey Foundation, the company initially developed child welfare solutions and has since expanded to provide a SaaS platform serving the whole of human services. History Casebook started as a project of the Annie E. Casey Foundation under the leadership of Kathleen Feely, who was Vice-President for Innovation at the foundation. After early success with private service providers, the software was spun out into its own non-profit organization, known as Case Commons. In 2012, Indiana's Department of Child Services was the first state agency to implement Casebook as a mobile and web-based solution for its child welfare caseworkers. This effort led to the organization receiving the Design for Experience Award in 2014 and a Code for America Technology Award in 2015. In 2017, the organization helped the state of California's Child Welfare Digital Services agency learn how to build and ship software. That same year, under the leadership of a new CEO, Tristan Louis, Casebook PBC entered into a national partnership with KPMG, allowing KPMG to leverage the Casebook platform as its exclusive solution for the child welfare vertical. In late 2018, Casebook PBC moved from non-profit status to a public benefit corporation model and rebranded from Case Commons to Casebook PBC. In mid-2020, the company started offering Case Management and Provider Management software aimed at not-for-profit organizations in social services. Products The company offers the Casebook Platform, a set of core components that can be used for a variety of human services software developments. In 2019, the company launched provider management software aimed at small and medium-sized providers in human services. Because of its historical background in the child welfare space, the company also offers a suite of applications that, when put together, can allow states to assemble a CCWIS-ready child welfare system. Awards and recognitions Casebook PBC was recognized as a govtech 100 company by Government Technology Magazine in 2019, 2020, 2021, and 2022. The company was also the recipient of the 2019 Stevie Awards for Innovation and Startup of the Year, and was named as one of the "2019 Best for the World" companies by B Lab. In 2020, Casebook received a mention at Fast Company's 2020 World Changing Ideas Awards. References Software companies based in New York City Companies based in Manhattan American companies established in 2017 Software companies established in 2017 Benefit corporations Public benefit corporations based in the United States Privately held companies of the United States Privately held companies based in New York City Cloud applications Cloud computing providers Software companies of the United States
10123059
https://en.wikipedia.org/wiki/Routing%20protocol
Routing protocol
A routing protocol specifies how routers communicate with each other to distribute information that enables them to select routes between nodes on a computer network. Routers perform the traffic directing functions on the Internet; data packets are forwarded through the networks of the Internet from router to router until they reach their destination computer. Routing algorithms determine the specific choice of route. Each router has prior knowledge only of the networks directly attached to it. A routing protocol shares this information first among immediate neighbors, and then throughout the network. This way, routers gain knowledge of the topology of the network. The ability of routing protocols to dynamically adjust to changing conditions such as disabled connections and components and route data around obstructions is what gives the Internet its fault tolerance and high availability. The specific characteristics of routing protocols include the manner in which they avoid routing loops, the manner in which they select preferred routes using information about hop costs, the time they require to reach routing convergence, their scalability, and other factors. Many routing protocols are defined in technical standards documents called RFCs. Types Although there are many types of routing protocols, three major classes are in widespread use on IP networks: Interior gateway protocols type 1, link-state routing protocols, such as OSPF and IS-IS Interior gateway protocols type 2, distance-vector routing protocols, such as Routing Information Protocol, RIPv2, IGRP (a minimal sketch of the distance-vector update rule is given at the end of this article). Exterior gateway protocols are routing protocols used on the Internet for exchanging routing information between Autonomous Systems, such as Border Gateway Protocol (BGP), a path-vector routing protocol. Exterior gateway protocols should not be confused with Exterior Gateway Protocol (EGP), an obsolete routing protocol. OSI layer designation Routing protocols, according to the OSI routing framework, are layer management protocols for the network layer, regardless of their transport mechanism: IS-IS runs on the data link layer (Layer 2) Open Shortest Path First (OSPF) is encapsulated in IP, but runs only on the IPv4 subnet, while the IPv6 version runs on the link using only link-local addressing. IGRP and EIGRP are directly encapsulated in IP. EIGRP uses its own reliable transmission mechanism, while IGRP assumed an unreliable transport. Routing Information Protocol (RIP) runs over the User Datagram Protocol (UDP). Version 1 operates in broadcast mode, while version 2 uses multicast addressing. BGP runs over the Transmission Control Protocol (TCP). Interior gateway protocols Interior gateway protocols (IGPs) exchange routing information within a single routing domain. Examples of IGPs include: Open Shortest Path First (OSPF) Routing Information Protocol (RIP) Intermediate System to Intermediate System (IS-IS) Enhanced Interior Gateway Routing Protocol (EIGRP) Exterior gateway protocols Exterior gateway protocols exchange routing information between autonomous systems. 
Examples include: Exterior Gateway Protocol (EGP) Border Gateway Protocol (BGP) Routing software Many software implementations exist for most of the common routing protocols. Examples of open-source applications are Bird Internet routing daemon, Quagga, GNU Zebra, OpenBGPD, OpenOSPFD, and XORP. Routed protocols Some network certification courses distinguish between routing protocols and routed protocols. A routed protocol is used to deliver application traffic. It provides appropriate addressing information in its internet layer or network layer to allow a packet to be forwarded from one network to another. Examples of routed protocols are the Internet Protocol (IP) and Internetwork Packet Exchange (IPX). See also Static routing Dynamic routing Hierarchical state routing Optimized Link State Routing Protocol B.A.T.M.A.N. Notes References Further reading Chapter "Routing Basics" in the Cisco "Internetworking Technology Handbook" Computer networking routing
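The distance-vector update rule mentioned under Types above can be illustrated with a toy, single-process simulation: each router repeatedly merges the distance tables advertised by its neighbours, adopting a route whenever going via a neighbour is cheaper than what it currently knows. The sketch below uses an invented four-node topology and is not the packet format, timers, or loop-prevention machinery of RIP or any other real protocol.

#include <stdio.h>

#define N   4          /* toy topology with four routers: A, B, C, D */
#define INF 9999       /* "no route known" */

/* Direct link costs between neighbours (INF = no direct link). */
static const int cost[N][N] = {
    /* A    B    C    D  */
    {   0,   1, INF,   7 },   /* A */
    {   1,   0,   2, INF },   /* B */
    { INF,   2,   0,   1 },   /* C */
    {   7, INF,   1,   0 },   /* D */
};

int main(void) {
    /* Each router starts knowing only its directly attached links. */
    int dist[N][N];
    for (int r = 0; r < N; r++)
        for (int d = 0; d < N; d++)
            dist[r][d] = cost[r][d];

    /* Periodically, every router offers its table to its neighbours; a
     * neighbour adopts a route if reaching the destination via the
     * advertiser is cheaper.  N-1 rounds suffice for this toy network
     * to converge. */
    for (int round = 0; round < N - 1; round++)
        for (int r = 0; r < N; r++)
            for (int nb = 0; nb < N; nb++) {
                if (r == nb || cost[r][nb] == INF) continue;
                for (int d = 0; d < N; d++)
                    if (dist[nb][d] != INF &&
                        cost[r][nb] + dist[nb][d] < dist[r][d])
                        dist[r][d] = cost[r][nb] + dist[nb][d];
            }

    for (int r = 0; r < N; r++) {
        printf("router %c:", 'A' + r);
        for (int d = 0; d < N; d++)
            printf("  to %c = %d", 'A' + d, dist[r][d]);
        printf("\n");
    }
    return 0;
}

Real distance-vector protocols such as RIP exchange these tables in periodic messages between neighbours and add safeguards (hop-count limits, split horizon, hold-down timers) against the routing loops discussed above; link-state protocols such as OSPF instead flood the raw topology and let every router compute shortest paths itself.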
1270922
https://en.wikipedia.org/wiki/Repton%20%28video%20game%29
Repton (video game)
Repton is a computer game originally developed by 16-year-old Briton Tim Tyler for the BBC Micro and Acorn Electron and released by Superior Software in 1985. The game spawned a series of follow up games which were released throughout the 1980s. The series sold around 125,000 copies between 1985 and 1990 with Repton 2 selling 35,000 itself. The games have since been remade for several modern systems, including iRepton for the iPhone / iPod Touch in 2010; Android Repton 1, Android Repton 2 and Android Repton 3 from 2016 to 2018; and Repton's Journeys in 2018. The author was inspired by a review of the recently released Boulder Dash, but had never played the game. Compared with Boulder Dash, Repton is a much more calm and organized playing experience with the emphasis on puzzle-solving as opposed to arcade-style improvisation. This remained true as more types of object were added in the sequels. Series overview Repton Repton, the titular protagonist, is moved around an underground maze in a quest to find all the diamonds (some are held in safes, their release being triggered by finding and collecting a key) within a time limit for each of several levels, while avoiding being trapped or killed by falling rocks and monsters hatched from eggs. The original Repton game was released in the summer of 1985 and has 12 levels, with passwords making it possible to jump directly to later levels. If passwords are employed, on completion of the final level the displayed message challenges the player to complete the game without using them. The new versions of Repton for the PC, iOS and Android introduce additional levels; and new Repton levels are also featured in Repton Spectacular, Repton's Mystic and Challenge (for Repton 1) and Repton's Journeys (for Repton 2). Repton 2 The sequel to the game, Repton 2, released for Christmas 1985 (release date 14 November 1985 ) is much bigger. It introduces several new features: spirits (that follow walls and objects to their left and must be guided into cages, turning them into diamonds) and skulls, both of which are fatal to Repton on collision. There are also jigsaw puzzle pieces to collect, which eventually spell out the message "Repton 2 is ended". There are no levels as such in Repton 2: instead transporters move Repton between different screens which, subject to some restrictions, can be completed in any order desired. The entire game is in effect one very large level without passwords, meaning that it must be completed in one attempt. Finally, some screens also contain an exposed 'roof', where meteors (predictably fatal to Repton) fall from the sky. Repton's requirements in Repton 2 are challenging: Repton must not only collect all diamonds (including those held in safes and behind cages), but also collect all earth, kill all the monsters, collect all puzzle pieces and use all transporters. Once these substantial tasks have been accomplished, Repton must then negotiate the 'roof' of the entire width of the final screen, avoiding meteors falling from the sky in order to reach the starport and thus complete the game. This part is particularly tricky, since the meteors fall in a random fashion, making it difficult for the gamer to guide Repton to safety. This long list of requirements, coupled with the fact that the game must be completed in one attempt, is unique among the Repton series and makes Repton 2 by far the hardest Repton game to successfully complete. 
Unfortunately a bug in the original version of Repton 2 meant that the game contained one diamond fewer than the stipulated number needed to finish the game, so completion of this first version is impossible. When Repton 2 was re-written for the PC, it introduced a 'save game' feature making it considerably easier to complete. In addition, brand new scenarios were included, effectively new games. Repton 3 Repton 3, released 5 November 1986, was developed by Matthew Atkinson at Superior's invitation since Tim Tyler was not interested in programming it—although he did design some of the levels for the new game. While the first two games had only taken a month each to program, Repton 3 took eight months. It reverts to the form of a series of time-limited, password-protected levels. A few new features were introduced: fungus (a substance that gradually spreads wherever it finds space and kills Repton on contact), time capsules (resetting the current level's time limit each time one is collected), crowns and a timebomb which must be defused to complete each level. The inclusion of the timebomb means that, as well as collecting all of the diamonds and crowns, the user has to plan their route so as to finish up at the timebomb at the end of the level. Repton 3 includes a map editor along with the game, so that data files can be created with new maps and graphics for the levels. Three themed sets of such files were released as continuations of Repton 3, with the titles Around the World in 40 Screens (1987), The Life of Repton (1987) and Repton Thru Time (1988). These three titles use a slightly modified game engine, in which the algorithm for deciding on the direction spirits first move at the start of a level is improved. They all come with the same game editor as Repton 3. Repton Infinity Repton Infinity was released in 1988, by which time the BBC Micro's popularity as a games platform was beginning to wane. It was developed by Dave Acton and Dave Lawrence (who wrote the user-submitted program section *INFO in Acorn User magazine). It supplements the map editor and graphics editor with a powerful game logic editor which made it possible to alter the way all game objects behaved using a purpose-designed language called Reptol. Four different example games are included to demonstrate its flexibility: Repton 3 - Take 2, with a couple of small technical differences in gameplay from Repton 3; Repton 4, with imaginative new objects including photocopiers and moving jewels; Robbo, “a crazy robot in a strange topsy-turvy world”, according to the game inlay; and Trakker, a chaotic game in which a bulldozer-driving protagonist must destroy various monsters by pushing fruit at them, and all scores are multiples of 17. There was a long-running problem, infamous amongst owners of the B+, the updated BBC B with 64k, when the newly released Repton Infinity ran on the original BBC B but refused to load on the updated B+. A string of unsuccessful replacements were issued before one that was compatible with both was eventually released. EGO: Repton 4 A game marketed as EGO: Repton 4, was released for the Acorn Archimedes in 1992. It was designed by Richard Hanson and programmed by Gary Partis. This was actually an Archimedes conversion of an earlier Amiga/Atari ST game called Personality Crisis with the character of Repton added in. 
The game bears little relation to the rest of the Repton series, particularly in that contrary to the spirit of the original it relies on "secret" traps and passages which can only be discovered by walking onto them. (The objects and objectives in all the previous Repton games are visible and there are no hidden secrets to be discovered, although in some advanced episodes - notably "OAP" in Life of Repton, "Oceans" in Around the World and "Future" in Repton Thru Time - some objects and enemies are invisible or appear very similar to desirable or innocuous objects.) Repton: The Lost Realms In 1988, teenage programmer Paras Sidapara submitted a game he called Repton 4 to Superior Software. As Superior were already working on Repton Infinity, it was not published, and was forgotten until 2008, when a copy was rediscovered. The game was re-programmed by Tom Walker and rechristened The Lost Realms, to avoid confusion with the Repton 4 game included in Repton Infinity. It was launched in November 2010 by Retro Software, with Superior's permission. The game is similar in style to Repton 3, retaining the structure of separate, password-protected levels and the map and graphics editors. New features include balloons, "absorbency" doors (which can be opened when an "absorbalene" pill is collected, but each pill only allows a certain number of doors to be opened) and ice crystals (which, when collected, freeze any monsters on the level). Music The music for Repton is Black and White Rag, by George Botsford, which has been well known in Britain at least since its 1969 popularization as the theme tune to the long-running TV snooker programme Pot Black. The Scott Joplin intermezzo The Chrysanthemum is the music for Repton 2. The music for Repton 3 was composed specifically for the game by Paul Hughes and Peter Clarke. Repton Infinity features in-game music, on pressing the 'T' key, although it does not play at the same time as the sound effects during the game, and is turned off by default. It was composed especially for the game by David Acton. Repton ports, clones and derivatives Ports The Repton games were closely associated with the BBC Micro and Acorn Electron but versions were released for other 8-bit computers. Superior Software had planned to launch Repton 3 with ports for the Commodore 64 and Amstrad CPC (as shown in pre-release press advertisements). The Amstrad version was never released but the C64 port did arrive in 1987. Ports of the first 3 Repton games were later developed for the ZX Spectrum and Repton and Repton 2 were released together as Repton Mania in 1989 (published using the joint Superior/Alligata name). This was not a success and the Spectrum Repton 3 was not released. In 1989 a version of Repton 3 featuring all expansion packs was also released for the BBC Micro's replacement, the 32-bit Acorn Archimedes. Its programmer, John Wallace, also produced a slightly expanded version of Repton 2 for the Acorn Archimedes which was released on the 1993 Play It Again Sam 2 compilation (which also included Zarch, Master Break and Arcpinball). None of these ports achieved the sales of the BBC originals. In the late 1990s, Superior sub-licensed the games to ProAction who released a number of RISC OS ports for the Acorn Archimedes and RiscPC. John Wallace created new ports of Repton, Repton 2 and Repton 3. ProAction also released Desktop Repton which includes the 3 games (including the expansion games for Repton 3). 
These games were built from scratch by Darren Salt, and developed to run in a multi tasking window on the desktop. There was also Desktop Repton Plus, with new PC graphics and extra levels included for Repton 1. There have been numerous ports of Repton 3, including a free version for Linux. Under the name Superior Interactive, the original publishers re-released versions of Repton 1 (2003), Repton 2 (2004) and Repton 3, including all of the expansion games (2005), for Microsoft Windows. They also released a large pack of new levels for all three modern Repton ports named Repton Spectacular in 2010. Also in 2010, iRepton was released for the Apple iPhone / iPod Touch (ESZ Consulting / Superior Interactive). This has retro and enhanced graphics and sounds and new screens. On 7 October 2014 iRepton 3 was released for the iPad and iPhone, featuring the same level of graphic enhancements as iRepton 1, which also had an overhaul at the same time. Clones A deliberate clone called Ripton, extremely faithful apart from different level design and several humorous digs at the original game, was written by Kenton Price and submitted to A&B Computing but the magazine never dared to publish it. It is, however, now available at BBC software Internet sites. There was also a PD clone for the ZX Spectrum called Riptoff, which included a level editor. It was developed by Rick O'Neill and Craig Hunter, and was released exclusively on a 1991 Your Sinclair covertape. Derivatives Because of Repton'''s ubiquity on the platform it became impossible not to compare to it any later commercial scrolling-map game for the BBC/Electron. Later puzzle-based games such as Bonecruncher and Clogger might justifiably be said to be derivative of Repton, but this perception also encompassed arcade adventure/role-playing games presented in the four-way-scrolling format (the notable ones being Ravenskull and Pipeline) despite their different style involving unique objects and encounters and unexpected traps. A non-scrolling 5-level type-in called Pitfall Pete written by Jonathan Temple was also described as "'Repton' style" when it was published by BEEBUG in 1986 and expanded to 15 levels in 1991.Repton's original author has written a public domain Java rocks-and-diamonds game, Rockz, which features elements in the vein of both Repton 2 and Boulder Dash. A game called Mole Miner was released for Android devices in 2009 by Little Fluffy Toys Ltd. The game was designed by Kenton Price, who also wrote Ripton (see above). It builds on the rocks-and-diamonds genre typified by Repton, extending it with features such as explosives, sliding ice and wraparound, and includes a community level designer. Mole Miner contains 60 levels created by Michael S. Repton, the author of many levels in later Repton'' series games. References External links Superior Interactive Repton author's personal website Repton Resource Page 1985 video games Rocks-and-diamonds games Superior Software games Acorn Archimedes games BBC Micro and Acorn Electron games Commodore 64 games IOS games Video games about reptiles Video games developed in the United Kingdom Windows games ZX Spectrum games Linux games
587698
https://en.wikipedia.org/wiki/Secure%20copy%20protocol
Secure copy protocol
Secure copy protocol (SCP) is a means of securely transferring computer files between a local host and a remote host or between two remote hosts. It is based on the Secure Shell (SSH) protocol. "SCP" commonly refers to both the Secure Copy Protocol and the program itself. According to OpenSSH developers in April 2019, SCP is outdated, inflexible and not readily fixed; they recommend the use of more modern protocols like sftp and rsync for file transfer. A near-future release of OpenSSH will switch scp from using the legacy scp/rcp protocol to using SFTP by default. Secure Copy Protocol The SCP is a network protocol, based on the BSD RCP protocol, which supports file transfers between hosts on a network. SCP uses Secure Shell (SSH) for data transfer and uses the same mechanisms for authentication, thereby ensuring the authenticity and confidentiality of the data in transit. A client can send (upload) files to a server, optionally including their basic attributes (permissions, timestamps). Clients can also request files or directories from a server (download). SCP runs over TCP port 22 by default. Like RCP, there is no RFC that defines the specifics of the protocol. Function Normally, a client initiates an SSH connection to the remote host, and requests an SCP process to be started on the remote server. The remote SCP process can operate in one of two modes: source mode, which reads files (usually from disk) and sends them back to the client, or sink mode, which accepts the files sent by the client and writes them (usually to disk) on the remote host. For most SCP clients, source mode is generally triggered with the -f flag (from), while sink mode is triggered with -t (to). These flags are used internally and are not documented outside the SCP source code. Remote to remote mode In the past, in remote-to-remote secure copy, the SCP client opens an SSH connection to the source host and requests that it, in turn, open an SCP connection to the destination. (Remote-to-remote mode did not support opening two SCP connections and using the originating client as an intermediary). It is important to note that SCP cannot be used to remotely copy from the source to the destination when operating in password or keyboard-interactive authentication mode, as this would reveal the destination server's authentication credentials to the source. It is, however, possible with key-based or GSSAPI methods that do not require user input. Recently, remote-to-remote mode supports routing traffic through the client which originated the transfer, even though it is a 3rd party to the transfer. This way, authorization credentials must reside only on the originating client, the 3rd party. Issues using talkative shell profiles SCP does not expect text communicating with the ssh login shell. Text transmitted due to the ssh profile (e.g. echo "Welcome" in the .bashrc file) is interpreted as an error message, and a null line (echo "") causes scp to deadlock waiting for the error message to complete. Secure Copy (remote file copy program) The SCP program is a software tool implementing the SCP protocol as a service daemon or client. It is a program to perform secure copying. Perhaps the most widely used SCP program is the OpenSSH command line scp program, which is provided in most SSH implementations. The scp program is the secure analog of the rcp command. The scp program must be part of all SSH servers that want to provide SCP service, as scp functions as SCP server too. 
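The source/sink exchange described under Function above can be observed by driving a remote scp in source mode directly over ssh. The following C program is a rough, heavily simplified sketch of the sink side of the legacy protocol for a single regular file, as the exchange is commonly described: every control message is answered with a zero byte, a regular file is announced with a line of the form C<mode> <size> <name>, and the file data is followed by a status byte. It assumes an ssh client on the PATH, key-based authentication and an error-free transfer; real implementations also handle T, D and E messages, permissions, timestamps and failures, so treat this as an illustration rather than a reference implementation.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

static void ack(int fd) {                 /* every message is acknowledged */
    if (write(fd, "", 1) != 1)            /* with a single '\0' byte       */
        exit(1);
}

int main(int argc, char **argv) {
    if (argc != 4) {
        fprintf(stderr, "usage: %s host remote_path local_path\n", argv[0]);
        return 1;
    }
    int to_remote[2], from_remote[2];
    if (pipe(to_remote) || pipe(from_remote)) return 1;

    pid_t pid = fork();
    if (pid == 0) {                       /* child: ssh <host> scp -f <remote_path> */
        dup2(to_remote[0], 0);
        dup2(from_remote[1], 1);
        close(to_remote[1]); close(from_remote[0]);
        execlp("ssh", "ssh", argv[1], "scp", "-f", argv[2], (char *)NULL);
        _exit(127);
    }
    close(to_remote[0]); close(from_remote[1]);
    int in = from_remote[0], out = to_remote[1];

    ack(out);                             /* the sink speaks first */

    char header[1024], ch;                /* read "C<mode> <size> <name>\n" */
    size_t h = 0;
    while (read(in, &ch, 1) == 1 && ch != '\n' && h < sizeof header - 1)
        header[h++] = ch;
    header[h] = '\0';
    if (header[0] != 'C') {
        fprintf(stderr, "unexpected scp message: %s\n", header);
        return 1;
    }
    long size = 0;
    sscanf(header + 1, "%*o %ld", &size); /* skip the mode, keep the size */
    ack(out);                             /* acknowledge the announcement */

    FILE *f = fopen(argv[3], "wb");
    if (!f) { perror("fopen"); return 1; }
    char buf[65536];
    while (size > 0) {
        ssize_t n = read(in, buf,
                         (size_t)(size < (long)sizeof buf ? size : (long)sizeof buf));
        if (n <= 0) { fprintf(stderr, "short read\n"); return 1; }
        fwrite(buf, 1, (size_t)n, f);
        size -= n;
    }
    fclose(f);

    char status = 1;
    if (read(in, &status, 1) != 1)        /* source's status byte: '\0' == OK */
        status = 1;
    ack(out);
    close(out);
    waitpid(pid, NULL, 0);
    return status == '\0' ? 0 : 1;
}

This is also why a "talkative" shell profile breaks scp, as noted above: a greeting printed by the remote shell arrives where the sink expects a control message or a zero-byte acknowledgement, so the exchange stalls or aborts.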
Some SSH implementations provide the scp2 program, which uses the SFTP protocol instead of SCP, but provides the very same command line interface as scp. scp is then typically a symbolic link to scp2. Syntax Typically, a syntax of scp program is like the syntax of cp (copy): Copying local file to a remote host: scp LocalSourceFile user@remotehost:directory/TargetFile Copying file from remote host and copying folder from remote host (with -r switch): scp user@remotehost:directory/SourceFile LocalTargetFile scp -r user@host:directory/SourceFolder LocalTargetFolder Note that if the remote host uses a port other than the default of 22, it can be specified in the command. For example, copying a file from host: scp -P 2222 user@host:directory/SourceFile TargetFile Other clients As the Secure Copy Protocol implements file transfers only, GUI SCP clients are rare, as implementing it requires additional functionality (directory listing at least). For example, WinSCP defaults to the SFTP protocol. Even when operating in SCP mode, clients like WinSCP are typically not pure SCP clients, as they must use other means to implement the additional functionality (like the ls command). This in turn brings platform-dependency problems. More comprehensive tools for managing files over SSH are SFTP clients. Security In 2019 vulnerability was announced related to the openssh SCP tool and protocol allowing users to overwrite arbitrary files in the SCP client target directory. See also References Cryptographic software Cryptographic protocols Network file transfer protocols
36856931
https://en.wikipedia.org/wiki/James%20Olson%20%28author%29
James Olson (author)
James Olson (born Chet Myles Olson; June 27, 1943) is an American philosopher and author. A generalist focused on psychological aspects of the brain, neuropsychology, Olson explores the brain's role in influencing the nature of human consciousness, thought, and behavior. In particular, he seeks to understand how genetic dominance and functional lateralization combine to create a series of inheritable default brain-operating systems to help guide perception and response. Olson is the author of How Whole Brain Thinking Can Save the Future (Origin Press, 2017), a book that seeks to explain human behavior by focusing on functional differences in the brain's hemispheres in terms of how they view and manage the information that generates our thoughts, feelings, and actions; and The Whole-Brain Path to Peace (Origin Press, 2011), which promotes peace though whole-brain thinking. To explore the brain's role in our decision-making and behavior, Olson studies the character of the macro management systems that oversee the brain's operation, explaining how the various systems help shape our viewpoint, bias our perception, and divide us. As shared in Olson’s speaking engagements and radio interviews,<ref>[http://lifemasteryradio.net/?p=3132 Life Mastery Radio] interview with Todd Allen, August 7, 2012.</ref> his passion is to help bring a greater measure of peace to this planet by reducing the level of conflict created by dysfunctional polarizations. Olson believes that mental conflict is initiated in the brain/mind complex, the result of profound differences in the left and right hemisphere as expressed though the unique viewpoints and ideas that they generate. According to Olson, the perceptual and ideological conflicts that arise and divide us can often be consciously eliminated by recognizing the two hemispheres and their operating systems as a complementary whole rather than as parts. Early life Olson was born in Kansas City, Missouri. He grew up on a farm near Waynoka, Oklahoma, and was an active 4-H member.Oklahoma 4-H Hall of Fame, 1961. A former church deacon,Mayflower Congregational Church, Oklahoma City, OK, 1994–1997. Olson began with a core of conservative Christian values, which were later complemented by the study of other religions and the acquisition of more liberal values as a result of having lived in Paris, France, Vienna, Austria, Murnau, Germany, Schwäbisch Hall, Germany, and Iserlohn, Germany. After attending Oklahoma State University, Stillwater from 1961-1963, and the University of Vienna, Austria in 1963, Olson graduated with a Bachelor of Business Administration degree from the University of Oklahoma, Norman. In 1967 he attended graduate school at the University of Missouri-Kansas City. Current, ongoing work Olson currently promotes whole-brain thinking and works to reduce social and political polarization in order to create a more peaceful world. The scientific foundation of his published works starts with the split-brain research of Roger Walcott Sperry and his then student Michael S. Gazzaniga, and includes the work of Ned Herrmann, Iain McGilchrist, Robert Ornstein, and Jill Bolte Taylor. Olson's approach is interdisciplinary, focused on fundamentals, and inclusive of physical, mental, and spiritual values. To explain the brain's role in feeding consciousness, he describes sixteen variations in how the brain gathers and processes information and informs our response. 
Focused on practical aspects of brain science, Olson works to understand how we can consciously interact with and influence this activity. The foundation of Olson's philosophical and scientific research starts with a knowledge of the complementary systems that manage the operation of the two hemispheres, one a dual system (as expressed by ontological dualism), the other a nondual system. He explores how the two hemispheric systems relate to one another, seeking to understand the nature of the mental viewpoints they give us, and why they elicit the responses they do. Having studied each hemisphere's viewpoint (the view it shows us) and its typical response (its default reaction to what it perceives), Olson then set out to explain in detail the differences between left-brain and right-brain consciousness, resulting in his latest book How Whole Brain Thinking Can Save the Future (2017) which outlines his four- and sixteen-variation models of consciousness. Four-part brain-management model In researching for his 2017 book, Olson found that the dual consciousness discovered by Roger Sperry (for which Sperry won a Nobel Prize in 1981) [9], is a consequence of genetic dominance, specifically, genetic complete dominance, which produces the left-hemisphere- or right-hemisphere-dominant model of consciousness advocated by Sperry. Based on patterns common to genetic dominance, Olson hypothesizes that genetic co-dominance causes the two hemisphere to work as an integrated team to inform consciousness, and genetic incomplete dominance causes the two hemispheres to integrate into a hybrid system of operation. Brain operating systems and gender Olson views gender as a consequence of systems behavior. The specialized management systems that operate the hemispheres, like all management systems, can be described based on their operational characteristics. All systems, he points out, are characterized by their purpose and scope, and by the values they adhere to and promote, among other things. Based on a broad body of research detailing the brain's operation, Olson believes that in most people the left hemisphere uses a dual system of operation and the right hemisphere uses a nondual system. The brain’s dual and nondual systems engage in a variety of different tasks and in doing so produce different feelings, and gender is, in part, felt. Dualistic consciousness is aggressive, materialistic, selfish, and fearful. It feels masculine. Holistic nondual consciousness is passive, people oriented, selfless, and loving in character. It feels feminine. Gender's four variations Considering common genetic dominance patterns of behavior, Olson believes that some of us experience a combination gender. Whereas genetic complete dominance produces either masculine or feminine gender, genetic co-dominance can be expected to create a team-based operating system that, by default, produces a bisexual gender experience; and genetic incomplete dominance can be expected to give us a hybrid operating system that defaults to produce a hybrid gender variation that produces polysexual behavior. Sixteen variations in consciousness Olson claims to have resolved a major conflict between lateralization of brain function studies and handedness studies. 
The body of behavioral studies that form the foundation of functional lateralization suggest that most women are right-brain dominant; however, handedness studies indicate that most women are left-brain dominant since most women are right-handed, and right-handedness is widely accepted to be a reflection of left-brain dominance. To explain this apparent inconsistency, Olson contends that our information processes in two stages, first as brain input, then as brain output, and that each stage is independently subject to the regulation of one of the three types of genetic dominance. Thus, in accordance with most functional lateralization studies, it is common for women to be informed of their world though a dominant right hemisphere, and yet respond to this information from a dominant left hemisphere—as indicated by handedness studies. Furthermore, Olson claims that any one of the four responses described above (right-hemisphere dominance, left-hemisphere dominance, right-left hybrid, or right and left team), may dominate the processing of information input and any one of the four may dominate the processing of its output. Consequently, Olson believes that current brain science supports sixteen different systems of brain operation that produce sixteen variations in consciousness. In July, 2012, Olson published a research paper on the topic of sexual orientation with a table showing 32 variations in sexual orientation, “The Role of Brain Dominance in Sexual Orientation”, that has garnered critical media recognition in traditional and LGBTQ"The Brain's Role in Sexual Orientation" Guest Opinion by James Olson in The Bay Area Reporter Online, July 26, 2012. outlets, based on the prevailing controversies around the topic of sexual orientation. Following the unifying guidelines of philosophy and drawing on his wide-ranging education, Olson has stated that his mission is to help bring the planet’s masculine and feminine energies into greater balance, and therefore into a more peaceful state, through his advocacy of whole-brain thinking. Previous career From 1968 – 1987, James Olson worked in agriculture in Woods County, Oklahoma, managing a farm producing wheat and alfalfa. AwardsHow Whole Brain Thinking Can Save the Future (2017) is the recipient of several book awards: Montaigne Medal 2017 Foreword Reviews Book of the Year, Body, Mind, Spirit—Gold, 2016 Nautilus Book Award, Science, Cosmology, & Expanding Consciousness—Silver, 2016 Independent Publisher Book Award, Psychology/Mental Health—Silver, 2017 Best Book Awards, Science—Gold, 2017 Best Book Awards, LGBTQ Non-Fiction—Finalist, 2017 Living Now Evergreen Book Medals, World Peace—Gold, 2019 Awards for The Whole-Brain Path to Peace (2011) include: Foreword Reviews Book of the Year Awards, Philosophy—Gold, 2011 Works "Using Brain Science to Enhance Creativity", article in Common Ground, May, 2017 "The Role of Sacred Geometry in Informing Us", article in Spirituality & Health, March 27, 2017 "The Divine Feminine - As Above So Below", article in Spirituality & Health, March 5, 2017 "How the Split Brain Affects Our Political Observations", article in Spirituality & Health, Jan 29, 2017 "How Whole Brain Thinking Can Save the Future", article in Om Times, Jan 14, 2017 "Evolve Into Your Ultimate Self With Whole Brain Thinking". 
article in Spirituality & Health, Jan 8, 2017 How Whole Brain Thinking Can Save the Future: Why Left Hemisphere Dominance Has Brought Humanity to the Brink of Disaster and How We Can Think Our Way to Peace and Healing (), Origin Press, 2017 "Politics with Half a Brain", article in Whole Life Times, August 7, 2016 The Whole-Brain Path to Peace: The Role of Left- and Right-Brain Dominance in the Polarization and Reunification of America (), Origin Press, 2011 “The Role of Brain Dominance in Sexual Orientation”, article, July, 2012 “The Holistic Perspective is the Path to Peace”, article in Wisdom Magazine, Summer 2011 “The Brain's Role in Sexual Orientation”, Guest Opinion in The Bay Area Reporter Online, July 26, 2012 “Our Brains on Peace”, article in Light of Consciousness'' magazine, Spring 2012 Jesus One: The Life and Wisdom of Jesus in Scripture (), Spiritwarrior Publishing Company, 1981 Jesus Two: The Life and Wisdom of Jesus (), Spiritwarrior Publishing Company, 1982 References External links Official website, The Whole Brain Path to Peace YouTube video explanation of Olson's book How Whole Brain Thinking Can Save the Future YouTube video explains how genetic dominance and functional lateralization generate a series of four fundamental brain-operating systems YouTube video explains how genetic dominance and functional lateralization generate 16 brain-operating-system combinations YouTube video explains how genetic dominance and functional lateralization produce four genders American philosophers 1943 births Living people
34551820
https://en.wikipedia.org/wiki/Baltimore%20Hackerspace
Baltimore Hackerspace
Baltimore Hackerspace is a hackerspace, sometimes called a makerspace, located in Baltimore, Maryland. Its creation was inspired by, and modeled after, the many other hackerspaces around the United States and Europe. About Harford Hackerspace was founded in January 2009 to fill the need for a hackerspace in the Baltimore area. At the time, there wasn't another hackerspace within a reasonable distance. Because the group originally set out to be in Harford County, Maryland, the name Harford Hackerspace was chosen. After the first few meetings, it was realized that the majority of the founding members resided within Baltimore, Maryland. Therefore, within its first month, the hackerspace was relocated to Baltimore, where it became the city's first hackerspace. Although residing in Baltimore, the group was still incorporated under the name Harford Hackerspace. Beginning in 2013, the name problem was rectified by filing paperwork to register the group officially as Baltimore Hackerspace, the name under which the group now operates. With the help of Nick Farr of former HacDC fame, the group sorted through the paperwork and became one of the first registered non-profit 501(c)(3) charitable organization hackerspaces. Their mission is to create an environment where people can collaborate on ideas, share resources and talents, and learn from each other, all within a social environment. Notable Mentions Do It Yourself CNC featured in the book "Hack This: 24 Incredible Hackerspace Projects from the DIY Movement" by John Baichtal. Red Bull Creation 2011 finalist. Winner of "Best Hardware Prototype - Group" project in Baltimore Hackathon 2010. Project featured on MSDN Channel 9. Local Involvement Regular exhibitor at RobotFest and Electronica Fest, both of which take place at the National Electronics Museum. Participation in the first Betascape, which was part of Artscape at the time. The group currently operates a 1,250 sq ft warehouse facility. The facility is open to the community and provides classes and access to tools, small fabrication machinery, and workspace. See also Open Works References Hacker groups Computer clubs Hackerspaces
22628771
https://en.wikipedia.org/wiki/Frama-C
Frama-C
Frama-C stands for Framework for Modular Analysis of C programs. Frama-C is a set of interoperable program analyzers for C programs. Frama-C has been developed by the French Commissariat à l'Énergie Atomique et aux Énergies Alternatives (CEA-List) and Inria. It has also received funding from the Core Infrastructure Initiative. Frama-C, as a static analyzer, inspects programs without executing them. Despite its name, the software is not related to the French project Framasoft. Architecture Frama-C has a modular plugin architecture comparable to that of Eclipse (software) or GIMP. Frama-C relies on CIL (C Intermediate Language) to generate an abstract syntax tree. The abstract syntax tree supports annotations written in ANSI/ISO C Specification Language (ACSL). Several modules can manipulate the abstract syntax tree to add ANSI/ISO C Specification Language (ACSL) annotations. Among frequently used plugins are: Value analysis computes a value or a set of possible values for each variable in a program. This plugin uses abstract interpretation techniques and many other plugins make use of its results. Jessie verifies properties in a deductive manner. Jessie relies on the Why or Why3 back-end to enable proof obligations to be sent to automatic theorem provers like Z3, Simplify, Alt-Ergo or interactive theorem provers like Coq or Why. Using Jessie, an implementation of bubble-sort or a toy e-voting system can be proved to satisfy their respective specifications. It uses a separation memory model inspired by separation logic. WP (Weakest Precondition) similar to Jessie, verifies properties in a deductive manner. Unlike Jessie, it focuses on parameterization with regards to the memory model. WP is designed to cooperate with other Frama-C plugins such as the value analysis plug-in, unlike Jessie that compiles the C program directly into the Why language. WP can optionally use the Why3 platform to invoke many other automated and interactive provers. Impact analysis highlights the impacts of a modification in the C source code. Slicing enables slicing of a program. It enables generation of a smaller new C program that preserves some given properties. Spare code removes useless code from a C program. Other plugins are: Dominators computes dominators and postdominators of statements. From analysis computes functional dependencies. Features Frama-C can be used for the following purposes: To understand C code which you have not written. In particular, Frama-C enables one to observe a set of values, slice the program into shorter programs, and navigate in the program. To prove formal properties on the code. Using specifications written in ANSI/ISO C Specification Language enables it to ensure properties of the code for any possible behavior. Frama-C handles floating point numbers. To enforce coding standards or code conventions on C source code, by means of custom plugin(s) To instrument C code against some security flaws See also SPARK (programming language) References External links Frama-C discussion list Frama-C Bug Tracking System OCaml software C programming language family Formal methods tools Software testing tools Static program analysis tools Software using the LGPL license Science software that uses GTK Software that uses Cairo (graphics) Linux software
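As a small illustration of the ACSL annotations and deductive plugins described above, the following function carries a contract and loop annotations that a deductive plugin such as WP or Jessie can try to discharge with automatic provers. It is a hedged, hypothetical sketch written for this article, not an example taken from the Frama-C distribution; the function name array_max and the file name used below are invented.

#include <stddef.h>

/*@ requires n > 0;
  @ requires \valid_read(a + (0 .. n - 1));
  @ assigns \nothing;
  @ ensures \forall integer k; 0 <= k < n ==> \result >= a[k];
  @ ensures \exists integer k; 0 <= k < n && \result == a[k];
  @*/
int array_max(const int *a, size_t n)
{
    int m = a[0];
    size_t i;
    /*@ loop invariant 1 <= i <= n;
      @ loop invariant \forall integer k; 0 <= k < i ==> m >= a[k];
      @ loop invariant \exists integer k; 0 <= k < i && m == a[k];
      @ loop assigns i, m;
      @ loop variant n - i;
      @*/
    for (i = 1; i < n; i++) {
        if (a[i] > m)
            m = a[i];     /* keep the largest element seen so far */
    }
    return m;
}

A typical invocation would be along the lines of frama-c -wp array_max.c, which generates proof obligations for the contract and loop invariants and passes them to the configured provers. The value analysis plugin can be run on the same source without any annotations to compute, by abstract interpretation, the sets of possible values of m and to check for run-time errors such as out-of-bounds reads.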
9271656
https://en.wikipedia.org/wiki/Glitch%20%28company%29
Glitch (company)
Glitch (previously known as Fog Creek Software) is a software company specializing in project management tools. Its products include project management, content management, and code review tools. History Based in New York City, Fog Creek was founded in 2000 as a consulting company by Joel Spolsky and Michael Pryor. As the consulting market started to dry up due to the collapse of the Dot-com bubble, Fog Creek moved to a product-based business. In December 2016, Anil Dash was appointed CEO. Fog Creek's offices are located in the Financial District of Manhattan. On September 25, 2018, the company was officially renamed Glitch after its flagship product. Glitch staff announced intentions to unionize with the Communications Workers of America in early 2020 as part of the Campaign to Organize Digital Employees. The company voluntarily recognized their union. Around the same time, the company laid off a third of its staff during the COVID-19 pandemic. In February 2021, Glitch workers signed a collective bargaining agreement with the company. According to the Communications Workers of America (CWA), this is the first such agreement signed by white-collar tech workers in the United States. Products Glitch (application) The Glitch web application launched in the spring of 2017 as a place for people to build simple web applications using JavaScript. While JavaScript is the only officially supported language, other languages can be used unofficially. Pitched as a "view source" tool that lets users "recombine code in useful ways", Glitch is an online IDE for JavaScript and Node.js that includes instant hosting, automated deployment and live help from community members. IDE features include live editing, hosting, sharing, automatic source versioning, and Git integration. Glitch focuses on being a friendly, accessible community; since its launch over a million people have used the site to make web applications. The Glitch site is self-hosting (except for the editor and API), allowing users to view or remix the site's source code. In December 2018, Mozilla announced that it would retire Thimble (Mozilla's browser-based, educational code editor) and asked users to migrate all of their projects to Glitch. Thimble was shut down in December 2019 and its projects were migrated to Glitch. In early 2020, Glitch released a paid plan, known as "boosted apps". Users can pay 8 dollars a month to have projects with more RAM, more storage, and no wake-up screen. Stack Overflow In 2008, Jeff Atwood and Joel Spolsky created Stack Overflow, a question-and-answer Web site for computer programming questions, which they described as an alternative to the programmer forum Experts-Exchange. Stack Overflow serves as a platform for users to ask and answer questions, and, through membership and active participation, to vote questions and answers up or down and edit questions and answers in a fashion similar to a wiki or Digg. Users of Stack Overflow can earn reputation points and "badges" when another user votes up a question or answer they provided. Stack Overflow has over 12,000,000 registered users and more than 20,100,000 questions. Based on the type of tags assigned to questions, the top ten most discussed topics on the site are: JavaScript, Java, Python, C#, PHP, Android, HTML, jQuery, C++, and CSS. Following the success of Stack Overflow, they started additional sites in 2009 based on the Stack Overflow model: Server Fault for questions related to system administration and Super User for questions from computer "power users". 
In June 2021, Prosus acquired Stack Overflow for $1.8 billion. Stack Exchange In September 2009, Fog Creek Software released a beta version of the Stack Exchange 1.0 platform as a way for third parties to create their own communities based on the software behind Stack Overflow, with monthly fees. This white label service was not successful, with few customers and slowly growing communities. In May 2010, Stack Overflow was spun-off as its own new company, Stack Exchange Inc., and raised $6 million in venture capital from Union Square Ventures and other investors, and it switched its focus to developing new sites for answering questions on specific subjects. Trello In 2011, Fog Creek released Trello, a collaborative project management hosted web application that operated under a freemium business model. Trello was cross-subsidized by the company's other products. A basic service is provided free of charge, and a Business Class paid-for service was launched in 2013. In July 2014, Fog Creek Software spun off Trello as its own company operating under the name of Trello, Inc. Trello Inc. raised $10.3 million in funding from Index Ventures and Spark Capital. In January 2017, Atlassian announced it was acquiring Trello for $425 million. FogBugz FogBugz is an integrated web-based project management system featuring bug and issue tracking, discussion forums, wikis, customer relationship management, and evidence-based scheduling developed by Fog Creek Software. It was briefly rebranded as Manuscript in 2017, which was acquired in 2018 and was renamed back to FogBugz. Copilot Fog Creek Copilot was a remote assistance service offered by Fog Creek Software. It launched on August 8, 2005. Originally known as Project Aardvark, Fog Creek Copilot was developed by a group of summer interns at Fog Creek Software. Fog Creek's founder, Joel Spolsky, wanted to give his interns the experience of taking a project through its entire lifecycle from inception, to mature released product. The interns set up a blog, called Project Aardvark, where they posted updates on the progress of their project, to the world even though at that time the details of what they were working on was still a secret. On July 1, 2005 the Project Aardvark team revealed that they were working on a remote assistance system for consumer use. Fog Creek Copilot uses a heavily modified version of TightVNC, a variant of Virtual Network Computing (VNC), as its core protocol. On November 7, 2005 they released a documentary on the interns' summer, titled Aardvark'd: 12 Weeks with Geeks, produced by Lerone D. Wilson of Boondoggle Films. In 2014 Fog Creek restructured, spinning Copilot out as a separate company. CityDesk CityDesk was a website management software package. The backend of the system ran as a desktop application written on Windows in Visual Basic 6.0 with all data stored in a Microsoft Jet database. It was one of FogBugz's first products, first announced in 2001. See also Comparison of remote desktop software Tech companies in the New York metropolitan area References External links Business software companies Privately held companies based in New York City Software companies based in New York City Software companies established in 2000 Software companies of the United States
54406327
https://en.wikipedia.org/wiki/Petya%20and%20NotPetya
Petya and NotPetya
Petya is a family of encrypting malware that was first discovered in 2016. The malware targets Microsoft Windows–based systems, infecting the master boot record to execute a payload that encrypts a hard drive's file system table and prevents Windows from booting. It subsequently demands that the user make a payment in Bitcoin in order to regain access to the system. The Petya malware had infected millions of computers during its first year of its release. Variants of Petya were first seen in March 2016, which propagated via infected e-mail attachments. In June 2017, a new variant of Petya was used for a global cyberattack, primarily targeting Ukraine. The new variant propagates via the EternalBlue exploit, which is generally believed to have been developed by the U.S. National Security Agency (NSA), and was used earlier in the year by the WannaCry ransomware. Kaspersky Lab referred to this new version as NotPetya to distinguish it from the 2016 variants, due to these differences in operation. In addition, although it purports to be ransomware, this variant was modified so that it is unable to actually revert its own changes. The NotPetya attacks have been blamed on the Russian government, specifically the Sandworm hacking group within the GRU Russian military intelligence organization, by security researchers, Google, and several governments. History Petya was discovered in March 2016; Check Point noted that while it had achieved fewer infections than other ransomware active in early 2016, such as CryptoWall, it contained notable differences in operation that caused it to be "immediately flagged as the next step in ransomware evolution". Another variant of Petya discovered in May 2016 contained a secondary payload used if the malware cannot achieve administrator-level access. The name "Petya" is a reference to the 1995 James Bond film GoldenEye, wherein Petya is one of the two Soviet weapon satellites which carry a "Goldeneye"—an atomic bomb detonated in low Earth orbit to produce an electromagnetic pulse. A Twitter account that Heise suggested may have belonged to the author of the malware, named "Janus Cybercrime Solutions" after Alec Trevelyan's crime group in GoldenEye, had an avatar with an image of GoldenEye character Boris Grishenko, a Russian hacker and antagonist in the film played by Scottish actor Alan Cumming. On 30 August 2018, a regional court in Nikopol in the Dnipropetrovsk Oblast of Ukraine convicted an unnamed Ukrainian citizen to one year in prison after pleading guilty to having spread a version of Petya online. 2017 cyberattack On 27 June 2017, a major global cyberattack began (Ukrainian companies were among the first to state they were being attacked), utilizing a new variant of Petya. On that day, Kaspersky Lab reported infections in France, Germany, Italy, Poland, the United Kingdom, and the United States, but that the majority of infections targeted Russia and Ukraine, where more than 80 companies were initially attacked, including the National Bank of Ukraine. ESET estimated on 28 June 2017 that 80% of all infections were in Ukraine, with Germany second hardest hit with about 9%. Russian president Vladimir Putin's press secretary, Dmitry Peskov, stated that the attack had caused no serious damage in Russia. Experts believed this was a politically-motivated attack against Ukraine, since it occurred on the eve of the Ukrainian holiday Constitution Day. 
Kaspersky dubbed this variant "NotPetya", as it has major differences in its operations in comparison to earlier variants. McAfee engineer Christiaan Beek stated that this variant was designed to spread quickly, and that it had been targeting "complete energy companies, the power grid, bus stations, gas stations, the airport, and banks". It was believed that the software update mechanism of —a Ukrainian tax preparation program that, according to F-Secure analyst Mikko Hyppönen, "appears to be de facto" among companies doing business in the country—had been compromised to spread the malware. Analysis by ESET found that a backdoor had been present in the update system for at least six weeks prior to the attack, describing it as a "thoroughly well-planned and well-executed operation". The developers of M.E.Doc denied that they were entirely responsible for the cyberattack, stating that they too were victims. On 4 July 2017, Ukraine's cybercrime unit seized the company's servers after detecting "new activity" that it believed would result in "uncontrolled proliferation" of malware. Ukraine police advised M.E.Doc users to stop using the software, as it presumed that the backdoor was still present. Analysis of the seized servers showed that software updates had not been applied since 2013, there was evidence of Russian presence, and an employee's account on the servers had been compromised; the head of the units warned that M.E.Doc could be found criminally responsible for enabling the attack because of its negligence in maintaining the security of their servers. Operation Petya's payload infects the computer's master boot record (MBR), overwrites the Windows bootloader, and triggers a restart. Upon startup, the payload encrypts the Master File Table of the NTFS file system, and then displays the ransom message demanding a payment made in Bitcoin. Meanwhile, the computer's screen displays text purportedly output by chkdsk, Windows' file system scanner, suggesting that the hard drive's sectors are being repaired. The original payload required the user to grant it administrative privileges; one variant of Petya was bundled with a second payload, Mischa, which activated if Petya failed to install. Mischa is a more conventional ransomware payload that encrypts user documents, as well as executable files, and does not require administrative privileges to execute. The earlier versions of Petya disguised their payload as a PDF file, attached to an e-mail. United States Computer Emergency Response Team (US-CERT) and National Cybersecurity and Communications Integration Center (NCCIC) released Malware Initial Findings Report (MIFR) about Petya on 30 June 2017. The "NotPetya" variant used in the 2017 attack uses EternalBlue, an exploit that takes advantage of a vulnerability in Windows' Server Message Block (SMB) protocol. EternalBlue is generally believed to have been developed by the U.S. National Security Agency (NSA); it was leaked in April 2017 and was also used by WannaCry. The malware harvests passwords (using tweaked build of open-source Mimikatz) and uses other techniques to spread to other computers on the same network, and uses those passwords in conjunction with PSExec to run code on other local computers. Additionally, although it still purports to be ransomware, the encryption routine was modified so that the malware could not technically revert its changes. 
This characteristic, along with other unusual signs in comparison to WannaCry (including the relatively low unlock fee of US$300, and using a single, fixed Bitcoin wallet to collect ransom payments rather than generating a unique ID for each specific infection for tracking purposes), prompted researchers to speculate that this attack was not intended to be a profit-generating venture, but to damage devices quickly, and ride off the media attention WannaCry received by claiming to be ransomware. Mitigation It was found that it may be possible to stop the encryption process if an infected computer is immediately shut down when the fictitious chkdsk screen appears, and a security analyst proposed that creating read-only files named perf.c and/or perfc.dat in the Windows installation directory could prevent the payload of the current strain from executing. The email address listed on the ransom screen was suspended by its provider, Posteo, for being a violation of its terms of use. As a result, infected users could not actually send the required payment confirmation to the perpetrator. Additionally, if the computer's filesystem was FAT based, the MFT encryption sequence was skipped, and only the ransomware's message was displayed, allowing data to be recovered trivially. Microsoft had already released patches for supported versions of Windows in March 2017 to address the EternalBlue vulnerability. This was followed by patches for unsupported versions of Windows (such as Windows XP) in May 2017, in the direct wake of WannaCry. Wired believed that "based on the extent of damage Petya has caused so far, though, it appears that many companies have put off patching, despite the clear and potentially devastating threat of a similar ransomware spread." Some enterprises may consider it too disruptive to install updates on certain systems, either due to possible downtime or compatibility concerns, which can be problematic in some environments. Impact In a report published by Wired, a White House assessment pegged the total damages brought about by NotPetya to more than $10 billion. This was confirmed by former Homeland Security adviser Tom Bossert, who at the time of the attack was the most senior cybersecurity focused official in the US government. During the attack initiated on 27 June 2017, the radiation monitoring system at Ukraine's Chernobyl Nuclear Power Plant went offline. Several Ukrainian ministries, banks and metro systems were also affected. It is said to have been the most destructive cyberattack ever. Among those affected elsewhere included British advertising company WPP, Maersk Line, American pharmaceutical company Merck & Co., Russian oil company Rosneft (its oil production was unaffected), multinational law firm DLA Piper, French construction company Saint-Gobain and its retail and subsidiary outlets in Estonia, British consumer goods company Reckitt Benckiser, German personal care company Beiersdorf, German logistics company DHL, United States food company Mondelez International, and American hospital operator Heritage Valley Health System. The Cadbury's Chocolate Factory in Hobart, Tasmania, is the first company in Australia to be affected by Petya. On 28 June 2017, JNPT, India's largest container port, had reportedly been affected, with all operations coming to a standstill. Princeton Community Hospital in rural West Virginia will scrap and replace its entire computer network on its path to recovery. 
The business interruption to Maersk, the world's largest container ship and supply vessel operator, was estimated between $200m and $300m in lost revenues. The business impact on FedEx is estimated to be $400m in 2018, according to the company's 2019 annual report. Jens Stoltenberg, NATO Secretary-General, pressed the alliance to strengthen its cyber defenses, saying that a cyberattack could trigger the Article 5 principle of collective defense. Mondelez International's insurance carrier, Zurich American Insurance Company, has refused to pay out a claim for cleaning up damage from a Notpetya infection, on the grounds that Notpetya is an "act of war" that is not covered by the policy. Mondelez is suing Zurich American for $100 million. Reaction Europol said it was aware of and urgently responding to reports of a cyber attack in member states of the European Union. The United States Department of Homeland Security was involved and coordinating with its international and local partners. In a letter to the NSA, Democratic Congressman Ted Lieu asked the agency to collaborate more actively with technology companies to notify them of software vulnerabilities and help them prevent future attacks based on malware created by the NSA. On 15 February 2018, the Trump administration blamed Russia for the attack and warned that there would be "international consequences". The United Kingdom and the Australian government also issued similar statements. In October 2020 the DOJ named further GRU officers in an indictment. At the same time, the UK government blamed GRU's Sandworm also for attacks on the 2020 Summer Games. Other notable low-level malware CIH (1998) Stuxnet (2010) WannaCry (2017) See also References Further reading 2017 in computing 2017 in Ukraine Cyberattacks Cybercrime Hacking in the 2010s June 2017 crimes Ransomware
38748108
https://en.wikipedia.org/wiki/Kamakura%20Corporation
Kamakura Corporation
Kamakura Corporation is a global financial software company headquartered in Honolulu, Hawaii. It specializes in software and data for risk management for banking, insurance and investment businesses. The company was founded in 1990 by its current CEO and Chairman Dr. Donald R. van Deventer, and as of 2019 Kamakura had served more than 330 clients in 47 countries. Cornell professor Robert A. Jarrow, co-creator of the Heath–Jarrow–Morton framework for pricing interest rate derivatives and the reduced-form Jarrow–Turnbull credit risk models employed for pricing credit derivatives, serves as the company's Director of Research.
Products and services
The company has two primary products. Kamakura Risk Manager (KRM) is an enterprise risk management system integrating credit risk management (including IFRS 9 and CECL), market risk management, asset liability management, Basel II and Basel III and other capital allocation technologies, transfer pricing, and performance measurement. Kamakura Risk Information Services (KRIS) is a risk portal providing data for quantitative credit risk measures such as default probabilities, bond spreads, implied spreads and implied ratings for corporate, sovereign and bank counterparties. It also allows users to stress portfolios through Macro Factor Sensitivities and Portfolio Management tools. The Kamakura Troubled Company Index measures the percentage of 39,000 public firms in 76 countries that have an annualized one-month default risk of over one percent. In January 2018 the company released its Troubled Bank Index.
History
Kamakura Corporation was founded in Tokyo in 1990. Kamakura Risk Manager (KRM) was first sold commercially in 1993. It was the first credit model published with random interest rates and the first stochastic interest rate term structure model-based valuation software. In 1995 they hired Robert A. Jarrow as their Director of Research. The first closed-form non-maturity deposit valuation model was implemented in KRM in 1996. TD Bank Financial Group started using KRM during that year. Kamakura relocated to Honolulu and qualified for the State research and development subsidy. The Jarrow–Lando–Turnbull Markov model for the term structure of credit risk spreads was published in 1997 and began to spread. Stochastic multi-period net income simulation was added to KRM in 1998. The first implementation of a reduced-form credit risk model was made in 2000. Kamakura was the first vendor to offer integrated credit and market risk in their risk management products. In 2002 they launched the KRIS default probability service for 20,000 listed firms. They completed their first Basel II client implementation in 2003. Insurer MetLife and pension fund OTPP became clients during that year. Pair-wise default correlations were added to KRIS in 2004. Implied Ratings and Implied CDS Spreads were added to KRIS in 2006. KRIS-CDO was launched in 2007. In 2008 Kamakura was named one of the top three worldwide financial information vendors in a Risk Technology 2008 survey. They launched a Basel II-compliant default probability service for sovereigns in 2008 as well. They were named the world's number 1 asset and liability management vendor and number 1 liquidity risk vendor in a Risk Technology 2009 survey. In 2009 the U.S. Office of the Comptroller of the Currency signed up for KRIS public firm default models, KRIS sovereign default models and KRIS credit portfolio manager. In 2017 Hong Leong Finance signed up for Kamakura Corporation's risk management software.
Kamakura was named for the second consecutive year to the World Finance 100 in 2018, and released version 10 of the Kamakura Risk Manager in March of that year. Awards 2018 Kamakura Corporation recognized as a Category Leader in Credit by Chartis Research in its report "Technology Solutions for Credit Risk 2.0 2018" World Finance 100 2017, 2016, 2012 *Credit Technology Innovation Awards 2010 winner: Thomson Reuters (Kamakura default probability service) Credit Technology Innovation Awards 2010 winner: Fiserv (Kamakura Risk Manager) Publications Advanced Financial Risk Management: Tools and Techniques for Integrated Credit Risk and Interest Rate Risk Management, 2nd Edition, Wiley & Sons, 2013, Advanced Financial Risk Management: Tools and Techniques for Integrated Credit Risk and Interest Rate Risk Management, 1st Edition, Wiley & Sons, 2005, Asset and Liability Management: A Synthesis of New Methodologies, RISK Books, 1998, Financial Risk Analytics: A Term Structure Model Approach for Banking, Insurance & Investment Management, 1997, IRWIN Professional Publishing, Risk Management in Banking: The Theory & Application of Asset & Liability Management, 1993, McGraw-Hill, References External links Kamakura Corporation Official site Kamakura Risk Information Services (KRIS) Software companies based in Hawaii Companies based in Honolulu Software companies established in 1990 Financial software companies Banking software companies 1990 establishments in Hawaii American brands Software companies of the United States
41420580
https://en.wikipedia.org/wiki/Hemiodoecus
Hemiodoecus
Hemiodoecus is a genus of moss bugs. It was first described by William Edward China in 1924 from a specimen collected in northwestern Tasmania, and Hemiodoecus leai became the type species. In 1982, Evans concluded that Hemiodoecus was, of the known genera, the earliest-evolved Australian genus of the Peloridiidae, based upon morphology and distribution. He further suggested that it gave rise to the genera Hemiowoodwardia and Hackeriella, both of which he had originally classified as Hemiodoecus.
Species
Accepted species
Hemiodoecus crassus Burckhardt, 2009 Australia
Hemiodoecus acutus Burckhardt, 2009 Australia
Hemiodoecus leai China, 1924 Tasmania
Other species named
Hemiodoecus donnae Woodward, 1956 is a junior synonym of Hemiodoecellus fidelis
Hemiodoecus fidelis Evans is now Hemiodoecellus fidelis
Hemiodoecus veitchi Hacker, 1932 is now Hackeriella veitchi
Hemiodoecus wilsoni Evans, 1936 is now Hemiowoodwardia wilsoni
References
Coleorrhyncha genera
Peloridiidae
664004
https://en.wikipedia.org/wiki/Animusic
Animusic
Animusic, LLC is a dormant animation company specializing in the 3D visualization of MIDI-based music. Founded by Wayne Lytle, it is currently a registered limited liability company in New York, and had offices in Texas and California during its active stages. The initial name of the company was Visual Music, but was changed to Animusic in 1995. The company is known for its Animusic compilations of computer-generated animations, based on MIDI events processed to simultaneously drive the music and on-screen action, leading to and corresponding to every sound. The animated short "Pipe Dream," showed at SIGGRAPH's Electronic Theater in 2001, details the use of this specific sequencing. Unlike many other music visualizations, Animusic uses MIDI information to drive the animation, while other software programs, such as Blender, animate figures or characters to the music. Any animated models in Animusic are created first, and are then programmed to follow what the music, or MIDI information, instructs them to do. 'Solo cams' featured on the Animusic DVD shows how each instrument plays through a piece of music from beginning to end. Many of the instruments appear to be robotic or play themselves using seemingly curious methods to produce and visualize the original compositions. The animations typically feature dramatically-lit rooms or landscapes in rustic and/or futuristic locales. The music in Animusic is principally pop-rock based, consisting of straightforward sequences of triggered samples and digital patches mostly played "dry" (with few effects). There are no lyrics or voices, save for the occasional chorus synthesizer. According to the director's comments on Animusic 2, most instrument sounds are generated with software synthesizers on a music workstation (see Software Programs for more info). Many sounds resemble stock patches available on digital keyboards, subjected to some manipulation, such as pitch or playback speed, to enhance the appeal of their timbre. Compilations , 3 video albums have been released, one album short of the company's aspirations. Animusic: A Computer Animation Video Album (VHS/DVD (2001), CD (2002), Special Edition DVD (2004)) Animusic 2: A New Computer Animation Video Album (DVD (2005), CD (2006)) Animusic HD: Stunning Computer-Animated Music (Blu-Ray (2010)) All Animusic DVDs are set to Region 0, meaning they are playable in all DVD players worldwide. Animusic was released in 2001 on VHS, and later DVD, with a special edition DVD being released later, in 2004. This special edition included extra material, such as Animusic's first animation, "Beyond the Walls". A second album, Animusic 2, was released in the United States in 2005. Later, in 2008, this volume was released in Japan through a distribution deal with Japanese company Jorudan, Co. Ltd. In a company newsletter, it was announced that the Animusic company would also be producing a high-definition version of Animusic 2 on Blu-ray, to be released sometime before their third major album, Animusic 3. This HD compilation was eventually released in November 2010, featuring all of the animations featured in Animusic 2, as well as the animation "Pipe Dream" from Animusic encoded at a high bitrate. In a later newsletter, the working titles of three animations in Animusic 3, “Sonic Warfare”, “Paddle Ball” and “Super Pipe Dream”, were revealed. In 2012, a Kickstarter campaign for Animusic 3 was successfully funded. 
"The Sound of Twelve," a music-only album made using similar harmonics as Animusic, was released in March 2015. Animusic 3 was never finished or released (see Animusic 3 for more info). Publicity Animusic has been promoted at SIGGRAPH since 1990, and has been promoted on Public Broadcasting Service and other television networks such as CNN. Wayne Lytle and his works have also been featured on Fox News and over 30 other local stations in January 2007. Animusic's "Pipe Dream" was released as a real-time demo for ATI's Radeon 9700 series graphics cards. Animusic also rendered "Resonant Chamber" and "Starship Groove" in HD resolution for Apple's QuickTime HD Gallery. A popular tourist destination located in Fredericksburg, Texas, the Rockbox Theater, has often played the Animusic DVDs either before shows or during an intermission. There was an internet rumor that suggested that the "Pipe Dream" video was actually a machine created at the University of Iowa from farm machinery parts. Although this has been proven false, the rumor is still considered "pretty amusing" to the Animusic staff. Intel later commissioned a version of the machine to be built which was demonstrated at IDF 2011. Software programs According to the company's FAQ, animation is created procedurally with their own proprietary MIDImotion engine. Discreet 3D Studio Max was used for modeling, lighting, cameras, and rendering. Maps were painted with Corel Painter, Deep Paint 3D, and Photoshop. They have also created their own software called ANIMUSIC|studio that is based on scene-graph technology. According to an August 2015 newsletter, Animusic was using Unreal Engine 4 for the production of Animusic 3. Animusic 3 Animusic, in its former state with multiple employees, was working on the third volume of the Animusic series for over 10 years. It was initially intended to be released sometime in 2010, featuring animations such as "Sonic Warfare", "Paddle Ball", and "Super Pipe Dream". However, this release date passed with no definitive word regarding the volume's progress. A year later in November 2010, Animusic attributed this delay to a complete restructuring of their modeling and rendering software, ultimately yielding the creation of the Animusic|Studio software program. On August 6, 2012, the company began a Kickstarter campaign aimed at raising $200,000 USD to fund the completion of the Animusic 3 DVD and/or Blu-ray. This campaign was featured on several websites such as Animation World Network. A rough mix from the newly revealed music album The Sound of 12, titled "Glarpedge," was released online on August 28, 2012. This album has been described by the company as "the soul of Animusic 3." On August 31, 2012, two more mixes were released: "Emoticondria" and "EchoKrunch." The Kickstarter page was later updated to confirm that a Blu-ray edition of Animusic 3 would be released shortly following the DVD's completion. On September 5, 2012, the Kickstarter campaign ended successfully, with a final backing amount of $223,123, surpassing the objective and presumably financing Animusic 3's "final production stages". Animusic posted expected shipping dates of October 2013 for the DVD, and February 2014 for the Blu-ray disc. However, both dates passed without either product being released. 
In a Kickstarter update in August 2015, Wayne Lytle announced several other factors that had delayed the project, including Dave Crognale's departure from the project and relocation to California, personal struggles, physical stress and injury (including but not limited to Bell's palsy), and the distribution of supplemental prizes for backers. Lytle noted in this update that his financial situation had become so dire that he, his mother and father, and his wife were unable to pay the shipping postage for the remainder of the DVD/Blu-ray inventory, which can still be purchased via Amazon.com. However, Lytle insisted on his determination to finish the project, expressing his excitement about the capabilities of the newly implemented Unreal Engine 4 and his gratitude to those who had invested in him. Lytle stated that he had withheld from posting an update until he had a completion date, but did not give one in the update. Since the 2015 Kickstarter update, no further updates have been posted. Many Kickstarter backers have expressed frustration with the company and the continued lack of communication about the project, inferring that it suffered from having too few personnel on its production team, along with other issues including, but not limited to, lawsuit threats and legal probes. Kickstarter backers, Animusic enthusiasts (via Reddit, YouTube, and Kickstarter backer forums) and general fans have theorised that the company and its constituents failed because of an unnecessary, costly, and impulsive redesign of the software used to create MIDI-driven music animations (i.e. Animusic|Studio). Other users have pointed to a lack of business leadership in the financial sector amongst the Animusic LLC administration. In August 2017, the Animusic headquarters was sold to RP Solutions, Inc. The Lytle residence was sold to a separate buyer. This information was obtained using New York State property-search tools, including but not limited to ArcGIS, and was not directly confirmed via an update from Wayne Lytle or his counterparts. In August 2019, the Animusic website was briefly taken down and replaced by a generic template with a short explanation that the site was undergoing a redesign. A quote from the new site states that "The ANIMUSIC website is being rebuilt from scratch, using Squarespace. Our previous website was ancient, dating back to the early days of ANIMUSIC." As of August 9, 2019, the new website contains a collection of screenshots from Animusic 2, along with one picture from Animusic 1 and three work-in-progress images that appear to be from Animusic 3. The website's "About" page describes the site as "A Fresh Start" for Animusic. As of November 2021, the registrar of the website is Network Solutions, LLC, while the expiry date of https://www.animusic.com is listed as June 27, 2028. Renewal plans have not been announced and, given the circumstances described above, are not expected. According to the New York State Division of Corporations, State Records, and UCC, Animusic is still an active domestic limited liability company registered to an address in Cortland, NY. The status of Wayne Lytle is publicly unknown; however, the Cortland, NY property that the limited liability company is registered under is for sale as of November 2021.
Although the release of Animusic 3 has been continually postponed, others have created and released tribute animations and fan-made versions of the Animusic concept through YouTube. Over time, hundreds of these homemade animations have been produced and shared. The bulk of these have been created since shortly after the release of Animusic 2. References External links Software companies based in New York (state) Computer animation Music visualization PBS original programming 1990s video albums 2000s video albums Visual music Software companies of the United States
8786058
https://en.wikipedia.org/wiki/Dynamic%20pricing
Dynamic pricing
Dynamic pricing, also referred to as surge pricing, demand pricing, or time-based pricing is a pricing strategy in which businesses set flexible prices for products or services based on current market demands. Businesses are able to change prices based on algorithms that take into account competitor pricing, supply and demand, and other external factors in the market. Dynamic pricing is a common practice in several industries such as hospitality, tourism, entertainment, retail, electricity, and public transport. Each industry takes a slightly different approach to dynamic pricing based on its individual needs and the demand for the product. History of dynamic pricing Dynamic pricing has been the norm for most of human history. Traditionally, two parties would negotiate a price for a product based on a variety of factors, including who was involved, stock levels, time of day, and more. Store owners relied heavily on experienced shopkeepers to manage this process, and these shopkeepers would negotiate the price for every single product in a store. Shopkeepers needed to know everything they could about a product, including the purchase price, stock levels, market demand, and more, to succeed in their jobs and bring profit to the store. As retail expanded in the Industrial Revolution, storeowners faced the challenge of scaling this traditional haggling system. As assortments expanded and the number of stores grew, it quickly became impossible for shopkeepers to keep up with the store. The negotiation model quickly proved inefficient within an economy of scale. The invention of the price tag in the 1870s presented a solution: one price for every person. This idea harkened back to a traditional Quaker idea of fairness: Quaker store owners had long employed a fixed-price system in the name of egalitarianism. By charging the same price of all shoppers, Quakers created a system that was fair for all, regardless of shoppers' wealth or status. Unlike the Quakers, who used fixed pricing as a way to maintain fairness, retailers used fixed pricing to reduce the need for highly skilled shopkeepers and smooth out the shopping experience within a store. The price tag made it easier to train shopkeepers, reduced wait time at checkout, and improved the overall customer experience. This fixed-price model with price tags would dominate retail and commerce for years to come. Dynamic pricing (as we know it today) would re-emerge in the 1980s, aided by technological innovation. Dynamic pricing in air transportation Dynamic pricing re-appeared in the market at large in the 1980s airline industry in the United States. Before the 1980s, the airline industry's seat prices were heavily regulated by the United States government, but change in legislation during the decade gave airlines control over their prices. Companies invested millions of dollars to develop computer programs that would adjust prices automatically based on known variables like departure time, destination, season, and more. After seeing the success of dynamic pricing in selling airline seats, many other verticals within the travel and tourism industry adopted the practice. Dynamic pricing is now the norm for hotels, car rentals, and more, and consumers have largely accepted the practice as commonplace. The practice is now moving beyond the travel and tourism industry into other fields. 
Dynamic pricing in rideshare services The most recent innovation in dynamic pricing—and the one felt most by consumers—is the rise of dynamic pricing in rideshare apps like Uber. Uber's “Surge Pricing” model, where riders pay more for a trip during peak travel times, began as a way to incentivize drivers to stay out later in Boston, according to Bill Gurley, former board member of Uber. The incentive worked, and the number of drivers on the road in the early morning hours increased by 70%-80%, and the number of unfilled Uber requests plummeted. Dynamic pricing today Dynamic pricing has become commonplace in many industries for a variety of reasons. Hospitality Time-based pricing is the standard method of pricing in the tourism industry. Higher prices are charged during the peak season, or during special-event periods. In the off-season, hotels may charge only the operating costs of the establishment, whereas investments and any profit are gained during the high season (this is the basic principle of long-run marginal cost pricing: see also long run and short run). Hotels and other players in the hospitality industry use dynamic pricing to adjust the cost of rooms and packages based on the supply and demand needs at a particular moment. The goal of dynamic pricing in this industry is to find the highest price that consumers are willing to pay. Another name for dynamic pricing in the industry is demand pricing. This form of price discrimination is used to try to maximize revenue based on the willingness to pay of different market segments. It features price increases when demand is high and decreases to stimulate demand when it is low. Having a variety of prices based on the demand at each point in the day makes it possible for hotels to generate more revenue by bringing in customers at the different price points they are willing to pay. Transportation Airlines change prices often depending on the day of the week, time of day, and number of days before the flight. For airlines, dynamic pricing factors in different components such as: how many seats a flight has, departure time, and average cancellations on similar flights. Congestion pricing is often used in public transportation and road pricing, where a higher price at peak periods is used to encourage more efficient use of the service or time-shifting to cheaper or free off-peak travel. For example, the San Francisco Bay Bridge charges a higher toll during rush hour and on the weekend, when drivers are more likely to be travelling. This is an effective way to boost revenue when demand is high, while also managing demand since drivers unwilling to pay the premium will avoid those times. The London congestion charge discourages automobile travel to Central London during peak periods. The Washington Metro and Long Island Rail Road charge higher fares at peak times. The tolls on the Custis Memorial Parkway vary automatically according to the actual number of cars on the roadway, and at times of severe congestion can reach almost $50. Dynamic pricing is also used by Uber and Lyft. Uber's system for "dynamically adjusting prices for service" measures supply (Uber drivers) and demand (passengers hailing rides by use of smartphones), and prices fares accordingly. Professional sports Some professional sports teams use dynamic pricing structures to boost revenue. Dynamic pricing is particularly important in baseball because MLB teams play around twice as many games as some other sports and in much larger venues. 
Sports that are outdoors have to factor weather into pricing strategy, in addition to the date of the game, the date of purchase, and the opponent. Tickets for a game during inclement weather will sell better at a lower price; conversely, when a team is on a winning streak, fans will be willing to pay more. Dynamic pricing was first introduced to sports by Qcue, a start-up software company from Austin, Texas, and Major League Baseball club the San Francisco Giants. The San Francisco Giants implemented a pilot of 2,000 seats in the View Reserved and Bleachers and moved on to dynamically pricing the entire venue for the 2010 season. Qcue currently works with two-thirds of Major League Baseball franchises, not all of which have implemented a full dynamic pricing structure, and for the 2012 postseason, the San Francisco Giants, Oakland Athletics, and St. Louis Cardinals became the first teams to dynamically price postseason tickets. While behind baseball in terms of adoption, the National Basketball Association, National Hockey League, and NCAA have also seen teams implement dynamic pricing. Outside of the U.S., it has since been adopted on a trial basis by some clubs in the Football League. Scottish Premier League club Heart of Midlothian introduced dynamic pricing for the sale of their season tickets in 2012, but supporters complained that they were being charged significantly more than the advertised price.
Retail
Retail is the next frontier for dynamic pricing. As e-commerce grows in importance and the size of assortments expands, retailers are turning to software to help track product prices and make pricing updates. Retailers, and online retailers in particular, adjust the price of their products according to competitors, time, traffic, conversion rates, and sales goals. Dynamic pricing is quickly becoming a best practice within the retail industry to help stores manage these factors in a fast-paced market. Dynamic pricing software allows retailers to understand at a glance what is happening in their assortments and to act proactively on market changes. Some retailers build their own dynamic pricing software, but many more outsource to a software vendor. Retailers in all categories use dynamic pricing software, including sporting goods, beauty, fashion, do-it-yourself and hardware, baby and family, auto parts, home care, fast-moving consumer goods (FMCGs) and more. Dynamic pricing can even be used by brick-and-mortar stores with the help of electronic shelf labels (ESLs).
Theme parks
Theme parks have also recently adopted this pricing model in hopes of boosting sales. Disneyland and Disney World adopted this practice in 2016, and Universal Studios followed suit. This pricing model resembles price discrimination more than dynamic pricing, but it is included here for the sake of uniformity. Since the supply of parks is limited and new rides cannot be added based on a surge in demand, the model followed by theme parks with regard to dynamic pricing resembles that followed by the hotel industry. During summer, when demand is rather inelastic, the parks charge high prices due to the holiday season, whereas during off-peak times such as winter, low prices are charged. Off-peak pricing makes the term 'cheap holiday' come to life, as it encourages ticket sales at times when these parks experience a fall in demand, resulting in a win-win situation for both parties involved.
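As a rough illustration of the peak and off-peak logic described above for hotels and theme parks, the following Python sketch applies a seasonal multiplier to a base price. The peak months and multipliers are illustrative assumptions, not any operator's actual values.

from datetime import date

# Illustrative, assumed peak periods and multipliers -- not any real
# hotel's or theme park's actual pricing rules.
PEAK_MONTHS = {6, 7, 8, 12}          # assumed summer and December holiday peak
PEAK_MULTIPLIER = 1.5                # assumed premium during peak season
OFF_PEAK_MULTIPLIER = 0.8            # assumed discount to stimulate off-peak demand

def seasonal_price(base_price: float, visit_date: date) -> float:
    """Return a time-based price: higher in peak season, lower off-peak."""
    if visit_date.month in PEAK_MONTHS:
        return round(base_price * PEAK_MULTIPLIER, 2)
    return round(base_price * OFF_PEAK_MULTIPLIER, 2)

if __name__ == "__main__":
    base = 100.00
    print(seasonal_price(base, date(2023, 7, 15)))   # peak season -> 150.0
    print(seasonal_price(base, date(2023, 2, 15)))   # off-peak    -> 80.0

In practice, operators layer many more signals, such as booking lead time, occupancy forecasts and local events, on top of a simple calendar rule like this.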
Brands and dynamic pricing
In recent years, more brands have launched direct-to-consumer sales channels to capture more consumer data and control brand perception. Many brands turn to dynamic pricing to help manage this sales channel and follow the market. With dynamic pricing, brands can more easily control their market perception and create a direct relationship with consumers. However, the most significant benefit of a direct-to-consumer strategy is the market data that brands can collect on their customers. Some third-party sellers in the Amazon Marketplace use software to change prices more frequently than would be feasible for people to do, in order to compete for business on price.
Dynamic pricing methods
There are a number of ways to execute a pricing strategy with dynamic pricing software, and they can all be combined to match any commercial strategy. This section details some of the most well-known and popular pricing methods and explains how they change in a dynamic pricing engine. These pricing mechanisms are described from the seller's point of view rather than the consumer's, meaning that the seller plays an active role in price setting due to the assumption of high bargaining power on the seller's side.
Cost-plus pricing
Cost-plus pricing is the most basic method of pricing. A store simply charges consumers the cost required to produce a product plus a predetermined amount of profit. Cost-plus pricing is simple to execute, but it only considers internal information when setting the price and does not factor in external influences like market reactions, the weather, or changes in consumer value. A dynamic pricing tool can make it easier to update prices, but it will not make the updates often if the user does not account for external information like competitor market prices. Due to its simplicity, this is the most widely used method of pricing, with around 74% of companies in the United States employing it. Although widely used, its usage is skewed: companies facing a high degree of competition use this strategy the most, while manufacturing companies tend to use it the least.
Pricing based on competitors
Businesses that want to price competitively will monitor their competitors' prices and adjust accordingly. This is called competitor-based pricing. In retail, the competitor that many companies watch is Amazon, which changes prices frequently throughout the day. Amazon is a market leader in retail that changes prices often, which encourages other retailers to alter their prices to stay competitive. Such online retailers use price-matching mechanisms: the retailer gives the end user a price-match option, and upon selecting it, an online bot searches for the lowest price across various websites and offers a price lower than the lowest one found. Such pricing behaviour depends on market conditions as well as a firm's planning. Although a firm existing within a highly competitive market is compelled to cut prices, that is not always the case. In the case of high competition, a stable market, and a long-term view, firms are predicted to cooperate on price rather than undercut each other. All three conditions are necessary for firms to forgo competitive pricing.
Pricing based on value or elasticity
Ideally, companies should ask a price for a product equal to the value a consumer attaches to it.
This is called value-based pricing. As this value can differ from person to person, it is difficult to uncover the perfect value and have a differentiated price for every person. However, consumers' willingness to pay can be used as a proxy for the perceived value. With the price elasticity of products, companies can calculate how many consumers are willing to pay for the product at each price point. Products with high elasticities are highly sensitive to changes in price, while products with low elasticities are less sensitive to price changes (ceteris paribus). Consequently, products with low elasticity are typically valued more by consumers, everything else being equal. The dynamic aspect of this pricing method is that elasticities change with respect to product, category, time, location and retailer. Combining the price elasticity of a product with its margin, retailers can use this method in their pricing strategy to aim for volume, revenue or profit maximization.
Bundle pricing
There are two types of bundle pricing strategies: one from the consumer's point of view and one from the seller's point of view. To maintain consistency, this section focuses on the bundle pricing used by sellers. Under this dynamic pricing approach, the price of the end product depends on whether or not it is bundled with something else and, if so, which bundle it belongs to, sometimes also depending in part on which customers it is offered to. This strategy is adopted by print media houses and other subscription-based services. An example is The Wall Street Journal, which offers a standalone price if its electronic mode of delivery is purchased, but offers a discount if online delivery is bundled with physical print delivery. As for target customers, music streaming sites such as Spotify offer student discounts to those who are eligible as part of their bundle pricing tactics.
Time-based dynamic pricing
Time-based dynamic pricing is popular in several different industries where demand changes throughout the day or where suppliers want to offer an incentive for customers to use a product at a certain time of day.
Time-based retail pricing
Many industries change prices depending on the time of day, especially online retailers. Most retail customers shop during weekday office hours, between 9 AM and 5 PM, so many retailers will raise prices during the morning and afternoon, then lower prices during the evening.
Time-based utility pricing
Time-based pricing of services such as the provision of electric power includes, but is not limited to:
Time-of-use pricing (TOU pricing), whereby electricity prices are set for a specific time period on an advance or forward basis, typically not changing more often than twice a year. Prices paid for energy consumed during these periods are pre-established and known to consumers in advance, allowing them to vary their usage in response to such prices and manage their energy costs by shifting usage to a lower-cost period or reducing their consumption overall (demand response)
Critical peak pricing, whereby time-of-use prices are in effect except for certain peak days, when prices may reflect the costs of generating and/or purchasing electricity at the wholesale level
Real-time pricing, whereby electricity prices may change as often as hourly (exceptionally more often).
A price signal is provided to the user on an advance or forward basis, reflecting the utility's cost of generating and/or purchasing electricity at the wholesale level; and
Peak load reduction credits for consumers with large loads who enter into pre-established peak load reduction agreements that reduce a utility's planned capacity obligations.
A utility with regulated prices may develop a time-based pricing schedule based on an analysis of its long-run costs, including both operating and investment costs. For a utility operating in a market environment, where electricity (or another service) is auctioned on a competitive market, time-based pricing will typically reflect the price variations on that market. Such variations include regular oscillations due to the demand pattern of users and to supply issues (such as the availability of intermittent natural resources: water flow, wind), as well as occasional exceptional price peaks. Price peaks reflect strained conditions on the market (possibly augmented by market manipulation, as during the California electricity crisis) and may indicate a lack of investment. Extreme events include the default by Griddy after the 2021 Texas power crisis.
Conversion rate pricing
Conversion rates measure how many of a website's visitors turn into buyers. When the conversion rate of viewers to buyers is low, dropping the price to increase conversions is standard practice in a dynamic pricing strategy.
Controversy
Some critics of dynamic pricing, also known as 'surge pricing', say it is a form of price gouging. Dynamic pricing is unpopular among some consumers, who feel it tends to favour the rich. While surge pricing is usually driven by demand-supply dynamics rather than an intent to favour the rich, some instances may suggest otherwise. Some internet giants have received severe backlash for their dynamic pricing practices, including the following.
Amazon.com
Amazon.com engaged in price discrimination for some customers in the year 2000, showing different prices at the same time for the same item to different customers, potentially violating the Robinson–Patman Act. When this incident became public knowledge, Amazon issued an apology, but its pricing controversies did not stop there. During the COVID-19 pandemic, prices of certain items in high demand were reported to rise to quadruple their original price, garnering negative attention. Amazon denied claims of any such manipulation and blamed a few sellers for raising prices on essentials such as sanitisers and masks. However, prices of essential products 'sold by Amazon' had also risen sharply; it was not determined whether this was intentional or, as Amazon claimed, the result of a software malfunction.
Uber
Uber's surge pricing has also created its own share of controversy. One of the most notable incidents occurred in 2013 when, with New York in the midst of a storm, Uber users saw fares rise to eight times the usual level. The incident attracted public backlash, even from celebrities, with Salman Rushdie among others going online to criticise the move. After this incident, the company began placing caps on how high surge pricing can go during emergencies, from 2015 onwards. Drivers have been known to hold off on accepting rides in an area until surge pricing forces fares up to a level satisfactory to them.
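To make the mechanics of surge pricing concrete, here is a minimal Python sketch in which the multiplier rises with the ratio of ride requests to available drivers and is capped, loosely mirroring the post-2015 emergency caps mentioned above. The thresholds and the cap are illustrative assumptions, not Uber's or Lyft's actual algorithm.

# Illustrative surge-pricing sketch: the multiplier grows with the ratio of
# ride requests to available drivers and is capped. All numbers are assumed.

def surge_multiplier(requests: int, drivers: int, cap: float = 2.5) -> float:
    """Map a demand/supply ratio to a capped surge multiplier."""
    if drivers <= 0:
        return cap
    ratio = requests / drivers
    multiplier = max(1.0, ratio)     # never price below the base fare
    return round(min(multiplier, cap), 2)

def fare(base_fare: float, requests: int, drivers: int) -> float:
    return round(base_fare * surge_multiplier(requests, drivers), 2)

if __name__ == "__main__":
    print(fare(10.0, requests=80, drivers=100))   # low demand  -> 10.0
    print(fare(10.0, requests=180, drivers=100))  # high demand -> 18.0
    print(fare(10.0, requests=900, drivers=100))  # capped      -> 25.0

Real systems typically estimate demand and supply per small geographic zone and update the multiplier continuously rather than from a single snapshot.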
See also Hedonic regression Pay what you want Price discrimination Price gouging Variable pricing Demand shaping References External links In Praise of Efficient Price Gouging (2014-08-19), MIT Technology Review Pricing Economics of regulation Economics and time
4513331
https://en.wikipedia.org/wiki/Luciano%20Floridi
Luciano Floridi
Luciano Floridi (; born 16 November 1964) is an Italian philosopher who is currently Professor of Philosophy and Ethics of Information and Director of the Digital Ethics Lab at the University of Oxford, Oxford Internet Institute, Professorial Fellow of Exeter College, Oxford, Senior Member of the Faculty of Philosophy, Research Associate and Fellow in Information Policy at the Department of Computer Science, University of Oxford, and Distinguished Research Fellow of the Oxford Uehiro Centre for Practical Ethics. He is also Adjunct Professor ("Distinguished Scholar in Residence"), Department of Economics, American University, Washington D.C. He is Turing Fellow of the Alan Turing Institute, where he was chair of the Data Ethics Group (DEG). Floridi is best known for his work on two areas of philosophical research: the philosophy of information and information ethics. Between 2008 and 2013, he held the Research Chair in philosophy of information and the UNESCO Chair in Information and Computer Ethics at the University of Hertfordshire. He was the founder and director of the IEG, an interdepartmental research group on the philosophy of information at the University of Oxford, and of the GPI the research Group in Philosophy of Information at the University of Hertfordshire. He was the founder and director of the SWIF, the Italian e-journal of philosophy (1995–2008). He is a former Governing Body Fellow of St Cross College, Oxford. Early life and education Floridi was born in Rome in 1964, and studied at Rome University La Sapienza (laurea, first class with distinction, 1988), where he was originally educated as a historian of philosophy. He soon became interested in analytic philosophy and wrote his tesi di laurea (roughly equivalent to an M.A. thesis) in philosophy of logic, on Michael Dummett's anti-realism. He obtained his Master of Philosophy (1989) and PhD degree (1990) from the University of Warwick, working in epistemology and philosophy of logic with Susan Haack (who was his PhD supervisor) and Michael Dummett. Floridi's early student years are partly recounted in the non-fiction book The Lost Painting: The Quest for a Caravaggio Masterpiece, where he was "Luciano". During his graduate and postdoctoral years, he covered the standard topics in analytic philosophy in search of a new methodology. He sought to approach contemporary problems from a heuristically powerful and intellectually enriching perspective when dealing with lively philosophical issues. During his graduate studies, he began to distance himself from classical analytic philosophy. In his view, the analytic movement had lost its way. For this reason, he worked on pragmatism (especially Peirce) and foundationalist issues in epistemology and philosophy of logic. Academic career He was a lecturer in Philosophy at the University of Warwick in 1990–1991. He joined the Faculty of Philosophy of the University of Oxford in 1990 and the OUCL (Oxford's Department of Computer Science) in 1999. He was Junior Research Fellow in Philosophy at Wolfson College, Oxford University (1990-1994), Frances Yates Fellow in the History of Ideas at the Warburg Institute, University of London (1994–1995) and Research Fellow in Philosophy at Wolfson College, Oxford University (1994-2001). During these years in Oxford, he held lectureships in different Colleges. Between 1994 and 1996, he also held a post-doctoral research scholarship at the Department of Philosophy, University of Turin. 
Between 2001 and 2006, he was Markle Foundation Senior Research Fellow in Information Policy at the Programme in Comparative Media Law and Policy, Oxford University. Between 2002 and 2008, he was Associate Professor of Logic at the Università degli Studi di Bari. In 2006, he became Fellow by Special Election of St Cross College, Oxford University, where he played for the squash team. In 2008, he was appointed full professor of philosophy at the University of Hertfordshire, to hold the newly established research chair in philosophy of information and, since 2009, the UNESCO Chair in Information and Computer Ethics. In his first book, Scepticism and the Foundation of Epistemology, Floridi was already looking for a concept of subject-independent knowledge close to what he now identifies as semantic information. During his postdoctoral studies, as a Junior Research Fellow of Wolfson College, Oxford University, he began to embrace a more Neo-Kantian philosophy, which led him to spend one academic year in Marburg, where he focused on Ernst Cassirer's version of Neo-Kantianism. He began working exclusively on what is now known as the philosophy of information during his years as Research Fellow, still at Wolfson College. Philosophy According to Floridi, it is necessary to develop a constructionist philosophy, where design, modelling and implementation replace analysis and dissection. Shifting from one set of tasks to the other, philosophy could then stop retreating into the increasingly small corner of its self-sustaining investigations and hence reacquire a wider view about what really matters. Slowly, Floridi has come to characterise his constructionist philosophy as an innovative field, now known as the philosophy of information, the new area of research that has emerged from the computational/informational turn. Floridi approaches the philosophy of information from the perspectives of logic and epistemology (theoretical), and computer science, IT and Humanities Computing (theoretical). For example, in the Preface of Philosophy and Computing, published in 1999, he wrote that the book was meant for philosophy students who need IT literacy to use computers efficiently or indispensable background knowledge for the critical understanding of our digital age. The latter provides a basis for the would-be branch of philosophy, the philosophy of information. PI, or PCI (Philosophy of Computing and Information), became his major research interest. Floridi's perspective is that there is a need for a broader concept of information, which includes computation, but not only computation. This new framework provides a theoretical framework within which to make sense of various lines of research that have emerged since the fifties. The second advantage is PI's perspective on the development of philosophy through time. In his view, PI gives us a much wider and more profound perspective on what philosophy might have actually been doing throughout the centuries. Currently, Floridi is working on two areas of research: computer ethics (see the entry information ethics) and the concept of information. Key to this area of work is the claim that ICT (Information and Communications Technology) is radically re-engineering or re-ontologizing the infosphere. Recognitions and awards 2007 Fellow by Special Election of St Cross College, University of Oxford 2008 Ethics and Information Technology, Springer, published a special issue in two numbers dedicated to his work. 
First philosopher to be awarded the Gauss Professorship by the Göttingen Academy of Sciences.
2009
Winner of the American Philosophical Association's Barwise Prize.
The APA's Newsletter dedicates two issues to his work.
Elected Fellow of the Society for the Study of Artificial Intelligence and the Simulation of Behaviour (SSAISB).
Appointed UNESCO Chair in Information and Computer Ethics.
2010
Appointed Editor-in-chief of a new Springer journal.
Metaphilosophy, Blackwell-Wiley, published a special issue dedicated to his work.
University of Hertfordshire, Vice Chancellor Award 2010: "Highly commended for research supporting engagement with business, the profession and partner organisations".
Elected Fellow of the Center for Information Policy Research, University of Wisconsin–Milwaukee.
Knowledge, Technology and Policy, Springer, published a special issue dedicated to his work.
2011
Laurea honoris causa in Philosophy, University of Suceava, Romania, for "foundational research on the philosophy of information".
2012
The AISB/IACAP World Congress (in honour of Alan Turing, 1912–1954) dedicated its "Author Meets Critics" session to Luciano Floridi's The Philosophy of Information.
Winner of the International Association for Computing and Philosophy's (IACAP) Covey Award for "outstanding research in computing and philosophy".
Philosophy of Technology and Engineering, Springer, published a collection of essays dedicated to "Luciano Floridi's Philosophy of Technology - Critical Reflections".
2013
Winner of the International Society for Ethics and Information Technology's (INSEIT) Weizenbaum Award for "significant contribution to the field of information and computer ethics, through his or her research, service, and vision."
Elected member of the Académie Internationale de Philosophie des Sciences.
Minds and Machines, Springer, is preparing a special issue dedicated to his work, entitled Philosophy in the Age of Information: A Symposium on Luciano Floridi's The Philosophy of Information (Oxford, 2011).
Journal of Experimental and Theoretical Artificial Intelligence, Taylor & Francis, is preparing a special issue dedicated to his work, entitled Inforgs and the Infosphere: Themes from Luciano Floridi's Philosophy of Artificial Intelligence.
2014
Cátedras de Excelencia Prize by the University Carlos III of Madrid.
2015
Fernand Braudel Senior Fellowship of the European University Institute.
2016
Copernicus Scientist Award by the Institute of Advanced Studies of the University of Ferrara.
Walter J. Ong Award by the Media Ecology Association for the book The Fourth Revolution.
Malpensa Prize, by the city of Guarcino, Italy.
2017
Fellow of the Academy of Social Sciences (FAcSS).
2018
Ryle Lectures 2018.
IBM's Thinker Award 2018.
CRUI's Premio Conoscenza 2018.
Books
Augmented Intelligence - A Guide to IT for Philosophers. (in Italian) Rome: Armando, 1996.
Scepticism and the Foundation of Epistemology - A Study in the Metalogical Fallacies. Leiden: Brill Publishers, 1996.
Internet - An Epistemological Essay. (in Italian and in French) Milan: Il Saggiatore, 1997.
Philosophy and Computing: An Introduction. London/New York: Routledge, 1999.
Sextus Empiricus, The Recovery and Transmission of Pyrrhonism. Oxford: Oxford University Press, 2002.
The Blackwell Guide to the Philosophy of Computing and Information. (editor) Oxford: Blackwell, 2003.
Philosophy of Computing and Information: 5 Questions. (editor) Automatic Press / VIP, 2008.
Information. Oxford: Oxford University Press, 2010. A volume for the Very Short Introduction series.
The Cambridge Handbook of Information and Computer Ethics. (editor) Cambridge: Cambridge University Press, 2010. The Philosophy of Information. Oxford: Oxford University Press, 2011. The Ethics of Information. Oxford: Oxford University Press, 2013. "The Fourth Revolution: How the Infosphere is Reshaping Human Reality". Oxford: Oxford University Press, 2014. The Logic of Information Oxford: Oxford University Press, 2019. Podcasts and videos A full collection is available on the website of the Oxford Internet Institute, University of Oxford The Fourth Revolution, a TED presentation, April 4, 2011. Relevant Information, the SIRLS/Thomson Scientific ISI Samuel Lazarow Memorial lecture, University of Arizona, USA, February 8, 2007. A Look into the Future of ICT North American Computing and Philosophy Conference, August 10–12, 2006, Rensselaer Polytechnic Institute, USA. Where are we in the philosophy of information?, June 21, 2006, University of Bergen, Norway. The Logic of Information, presentation, discussion, Télé-université (Université du Québec), 11 May 2005, Montréal, Canada. From Augmented Intelligence to Augmented Responsibility, North American Computing and Philosophy Conference, January 24, 2002, Oregon State University, USA. Artificial Evil and the Foundation of Computer Ethics, presentation, discussion, CEPE2000 Computer Ethics: Philosophical Enquiry, July 14–16, 2000, Dartmouth College, USA. Interview for SuchThatCast - Philosophers' Podcast (August 27, 2012). "Luciano Floridi on The Fourth Revolution" – podcast of an interview with Nigel Warburton (June 29, 2009). See also Digital physics Information theory Logic of information Philosophy of artificial intelligence Philosophy of technology Philosophy of information Notes External links Home page and articles online "We dislike the truth and love to be fooled" - Interview of Luciano Floridi on Cyceon, 21 November 2016 Interview for RAI International, Taccuino Italiano, 5 March 2008 (in Italian) Interview for the American Philosophical Association — Philosophy And Computing Newsletter Biography, in English 1964 births 20th-century Italian philosophers 21st-century Italian philosophers Academics of the University of Hertfordshire Alumni of the University of Warwick Analytic philosophers Cultural critics Epistemologists Fellows of St Cross College, Oxford Fellows of the SSAISB Fellows of Wolfson College, Oxford Historians of philosophy Italian ethicists Artificial intelligence ethicists Italian logicians Living people Members of the Department of Computer Science, University of Oxford Members of the International Academy of Philosophy of Science Moral philosophers Philosophers of culture Philosophers of education Philosophers of ethics and morality Philosophers of logic Philosophers of science Philosophers of technology Sapienza University of Rome alumni Italian social commentators Social critics Social philosophers
3383921
https://en.wikipedia.org/wiki/Simoeis
Simoeis
Simoeis or Simois (Simóeis) was a river of the Trojan plain, now called the Dümruk Su (Dümrek Çayı), and the name of its god in Greek mythology.
River
The Simoeis was a small river of the ancient Troad which had its source in Mount Ida, or more precisely in Mount Cotylus, flowed past Troy, and joined the Scamander River below that city. The river is frequently mentioned in the Iliad, where it is described as a rapid mountain torrent. It is also noted by the ancient geographers Strabo, Ptolemy, Stephanus of Byzantium, Pomponius Mela, and Pliny the Elder. Its present course is so altered that it is no longer a tributary of the Scamander, but flows directly into the Hellespont.
Family
Like other river-gods, Simoeis was the son of Oceanus and Tethys. Simoeis had two daughters who married into the Trojan royal family: Astyoche, who was married to Erichthonius, and Hieromneme, who was the wife of Assaracus.
Mythology
When the gods took sides in the Trojan War, Simoeis supported the Trojans. Scamander, another river god who also supported the Trojans, called upon Simoeis for help in his battle against Achilles: "Come to my aid with all speed, fill your streams with water from your springs, stir up all your torrents, stand high in a great wave, and rouse a mighty roar of timbers and rocks, so we can stop this savage man who in his strength is raging like the gods." (Iliad, 21.311-15). Before Simoeis could respond, Hephaestus was able to save Achilles by subduing Scamander with flame.
Trojan descendants
References
March, J. Cassell's Dictionary of Classical Mythology. London, 1999.
Ancient Greek geography
Potamoi
Locations in the Iliad
2056861
https://en.wikipedia.org/wiki/Internet%20Locator%20Server
Internet Locator Server
An Internet Locator Server (abbreviated ILS) is a server that acts as a directory for Microsoft NetMeeting clients. An ILS is not necessary within a local area network and some wide area networks on the Internet, because one participant can type in the IP address of the other participant's host and call them directly. An ILS becomes necessary when one participant is trying to contact a host that has a private IP address internal to a local area network that is inaccessible to the outside world, or when the host is blocked by a firewall. An ILS is also useful when a participant has a different IP address during each session, e.g., assigned by the Dynamic Host Configuration Protocol. There are two main approaches to using Internet Locator Servers: use a public server on the Internet, or run and use a private server. Private Internet Locator Server The machine running an Internet Locator Server must have a public IP address. If the network running an Internet Locator Server has a firewall, it is usually necessary to run the server in the demilitarized zone of the network. Microsoft Windows includes an Internet Locator Server. It can be installed in the Control Panel using Add/Remove Windows Components, under "Networking Services" (Site Server ILS Services). The Internet Locator Server (ILS) included in Microsoft Windows 2000 offers service on port 1002, while the latest version of NetMeeting requests service from port 389. The choice of 1002 was made to avoid conflict with Windows 2000's domain controllers, which use LDAP and Active Directory on port 389, as well as Microsoft Exchange Server 2000, which also uses port 389. If the server is running neither Active Directory nor Microsoft Exchange Server, the Internet Locator Server's port can be changed to 389 using the following command at a command prompt: ILSCFG [servername] /port 389 Additional firewall issues Internet Locator Servers do not address two other issues with using NetMeeting behind a firewall. First, although a participant can join the directory from an external IP address, the participant cannot join a meeting unless the internal host manually adds the participant to the meeting from the directory. Second, while this approach is fine for data conferencing, audio or video conferencing requires opening a wide range of ports on the firewall. In this case, it may be desirable to use a gateway. See also User Location Service LDAP Windows communication and services
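Because an ILS is essentially an LDAP-style directory (which is why the port 389/1002 conflict described above matters), its lookup behaviour can be pictured as an ordinary LDAP search. The following Go sketch uses the third-party go-ldap library purely for illustration; the host name, base DN, search filter, and attribute name are hypothetical placeholders and do not reflect the actual ILS schema.

package main

import (
	"fmt"
	"log"

	"github.com/go-ldap/ldap/v3"
)

func main() {
	// Hypothetical ILS host; the Windows 2000 ILS listened on port 1002.
	conn, err := ldap.DialURL("ldap://ils.example.com:1002")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Placeholder base DN, filter, and attribute – the real ILS schema differs.
	req := ldap.NewSearchRequest(
		"ou=dynamic,o=intranet", // base DN (illustrative only)
		ldap.ScopeWholeSubtree, ldap.NeverDerefAliases, 0, 0, false,
		"(cn=*)",        // match all registered clients
		[]string{"cn"},  // requested attributes
		nil,
	)

	res, err := conn.Search(req)
	if err != nil {
		log.Fatal(err)
	}
	for _, entry := range res.Entries {
		fmt.Println("registered client:", entry.GetAttributeValue("cn"))
	}
}

In a NetMeeting deployment the clients themselves register with and query the directory; the point of the sketch is only that the lookup is a directory search on the port discussed above.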
7072152
https://en.wikipedia.org/wiki/Build%20automation
Build automation
Build automation is the process of automating the creation of a software build and the associated processes, including: compiling computer source code into binary code, packaging binary code, and running automated tests. Overview Historically, build automation was accomplished through makefiles. Today, there are two general categories of tools: Build-automation utility This includes utilities like Make, Rake, CMake, MSBuild, Ant, Maven or Gradle (Java), etc. Their primary purpose is to generate build artifacts through activities like compiling and linking source code. Build-automation servers These are generally web-based tools that execute build-automation utilities on a scheduled or triggered basis; a continuous integration server is a type of build-automation server. Depending on the level of automation, the following classification is possible: Makefile-level Make-based tools Non-Make-based tools Build script (or Makefile) generation tools Continuous-integration tools Configuration-management tools Meta-build tools or package managers Other A software list for each can be found in list of build automation software. Build-automation utilities Build-automation utilities allow the automation of simple, repeatable tasks. Given a goal, the tool calculates which tasks are needed to reach it and runs them in the correct order. Build tools can be task-oriented or product-oriented: task-oriented tools describe the dependency network in terms of a specific set of tasks, while product-oriented tools describe dependencies in terms of the products they generate. Build-automation servers Although build servers existed long before continuous-integration servers, they are generally synonymous with continuous-integration servers; however, a build server may also be incorporated into an ARA tool or ALM tool. Server types On-demand automation such as a user running a script at the command line Scheduled automation such as a continuous integration server running a nightly build Triggered automation such as a continuous integration server running a build on every commit to a version-control system. Distributed build automation Automation is achieved through the use of a compile farm for either distributed compilation or the execution of the utility step. The distributed build process must understand the source-code dependencies in order to execute the distributed build. Relationship to continuous delivery and continuous integration Build automation is considered the first step in moving toward implementing a culture of continuous delivery and DevOps. Build automation combined with continuous integration, deployment, application-release automation, and many other processes helps move an organization forward in establishing software-delivery best practices. Advantages The advantages of build automation to software development projects include: A necessary pre-condition for continuous integration and continuous testing Improved product quality Accelerated compile and link processing Elimination of redundant tasks Minimized "bad builds" Elimination of dependencies on key personnel A history of builds and releases in order to investigate issues Savings in time and money, because of the reasons listed above. See also References Types of tools used in software development
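The build-automation utilities described above work by deriving an execution order from a declared dependency graph of tasks. The following Go sketch illustrates that core idea with a simple depth-first topological sort; the task names and the runTask placeholder are illustrative assumptions and are not tied to Make, Gradle, or any other real tool.

package main

import "fmt"

// deps maps each build task to the tasks it depends on.
// The task names are illustrative placeholders.
var deps = map[string][]string{
	"package":  {"compile", "test"},
	"test":     {"compile"},
	"compile":  {"generate"},
	"generate": {},
}

// order appends to out the tasks needed for goal, in dependency order,
// using a depth-first topological sort (cycle detection omitted for brevity).
func order(goal string, done map[string]bool, out *[]string) {
	if done[goal] {
		return
	}
	done[goal] = true
	for _, d := range deps[goal] {
		order(d, done, out)
	}
	*out = append(*out, goal)
}

func runTask(name string) {
	// A real tool would invoke a compiler, packager, or test runner here.
	fmt.Println("running", name)
}

func main() {
	var plan []string
	order("package", map[string]bool{}, &plan)
	for _, task := range plan {
		runTask(task)
	}
	// Prints: generate, compile, test, package – each task runs after its dependencies.
}

A real utility would additionally skip tasks whose outputs are already up to date, which is what Make's timestamp comparison provides.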
3468166
https://en.wikipedia.org/wiki/Killobyte
Killobyte
Killobyte is a 1993 novel by Piers Anthony. This book explores a virtual reality world in the context of the Internet. The game Killobyte is a "second generation" virtual reality game that puts players into a three-dimensional, fully sensory environment. Users are hooked up to a machine that not only simulates a range of sensations, from pain to sex, but responds to brain signals to move a player's character. The only way to exit the game and return to the real world is by selecting that option from a menu that appears within the virtual world. The game takes place in many different settings, as players face a series of increasing challenges and accumulate points. In the tradition of role-playing games, players get some choice over their characters' appearance and abilities, and they must use logic and ingenuity to overcome each obstacle, often involving riddles. When encountering another character, it is not always easy to tell whether the person is a fellow player or a part of the program. As is implied by the name (a pun on the word kilobyte), the game enables users to kill or be killed. Violence is quite graphic. Players who die receive electric shocks and the feeling of being buried in a coffin, and each death is longer and more unpleasant than the last one. Plot summary The novel cuts between Walter Toland, a former police officer, and Baal Curran, an angst-ridden teenage girl. Both are playing Killobyte from their own home, hooked to the network through a telephone modem. Walter notices Baal's name on a list and initially assumes she is a man. Indeed, each of them first poses as the other sex. Walter is learning the game as he goes, having neglected to read the instruction manual. He narrowly survives attacks by gunslingers, snakes, and runaway vehicles. Each time he destroys an enemy, he receives a point, and a door to a new setting appears. Eventually he must solve a more complicated problem when he finds himself in a women's prison, evading execution and a possible mole. In the meantime, Baal enters a fantasy setting in which a knight must rescue a princess from an evil sorcerer in a castle guarded by a dragon. She first goes through the adventure as the knight and fails. When she tries again, this time in the role of the princess, Walter has entered the setting as the sorcerer. He captures Baal under the ruse that he is the hero, but when she makes sexual advances at him, he tells her the truth, too honourable to take advantage of her even if it is only within a game. They begin telling each other about their real lives. In his days as a cop, he had an affair with a battered woman he was protecting, and the jealous husband ran him down with a car, leaving him paralysed from the waist down. He still has sexual feelings but is unable to perform. Baal, a plain girl despite her voluptuous appearance in the game, has type I diabetes and is depressed from having recently broken up with her boyfriend, who couldn't handle her disease. She pursued the game as a way of flirting with suicide. They discover that he may be capable of sex within the game, but they are interrupted by a hacker who has infiltrated the software. Calling himself Phreak, the hacker targets specific individuals and locks them in the game so that he can harass them. Walter receives an error message every time he attempts to return to the real world. Aided by his police training, he remains calm and talks to Phreak, even though he knows his real body is in danger of eventual dehydration. 
Baal temporarily quits the game after agreeing to meet Walter later in the game's next section, where they would use signals to identify each other, since they would have a different appearance. After unsuccessfully trying to get the police involved, she contacts the game's company, who want Walter to stay in the game as bait so that they can capture Phreak, who has eluded authorities for years. They give Baal a patch that will lock Phreak in the game along with Walter, so that they can force him to give the code that will free Walter. We learn that Phreak is a 15-year-old boy whose father was part of a snake handling sect and died of a rattlesnake bite. The mother eventually died, and Phreak is convinced that she was also killed by snakes, which he believes lurk in the shadows waiting to pounce on him. He lives in his aunt's house, secretly using his own telephone line to hack into games, but he avoids experiencing the games directly for fear of being traced, despite the temptations of online sex. Baal reenters the game world. Unfortunately, the next section is especially violent and unpredictable, modelled after Beirut. She poses as an Israeli spy, he as a Druze, and after several dangerous adventures they find each other, but not before Phreak catches up with them. Baal successfully sets the patch on Phreak, locking him in the game, though he has now locked her inside as well. They all end up in a prison together, and Walter tries to force the information out of Phreak, while Baal makes motions to seduce him, but he resists their methods. Walter bombs the prison, causing their virtual deaths (so that they will no longer be imprisoned when they reappear). Walter believes that if his character dies again, he will die for real, for the electric shocks are interfering with his pacemaker and causing heart palpitations. Baal, meanwhile, is in danger of insulin shock if she does not exit the game soon. Phreak is traumatised by the game's simulated death and is terrified of experiencing it again, but he will not volunteer that information to Walter, whom he decides to kill. They all end up in a special section called Potpourri, which mixes elements of various other sections. Baal is able to track the approximate locations of Walter and Phreak. Walter and Baal decide they are in love and want to marry if they manage to survive their current ordeal and meet in the real world. They chase Phreak across Potpourri, evading various obstacles he places in their path. Baal goes into insulin shock and her game body becomes still. Walter finally corners Phreak on a train and threatens to encase him in a box with snakes. Phreak finally relents. Baal wakes up in a hospital, recovering from the insulin shock. She tries calling Walter, whose number she has memorised, but she gets no answer. She has her ex-boyfriend drive her across the country, and it gives him a chance to assuage his guilty conscience as he is comforted that she has found love again. Phreak has manipulated police records so that there is a phony arrest warrant on Walter, but the friends he met in Killobyte show up and refute the charges. A small party is held where Walter and Baal meet face-to-face at last. Author's note Anthony writes about the development of the novel and the research it required. He got the first germ of the idea in 1981, before virtual reality was invented. He was influenced by the page-turning qualities of Robert A. Heinlein's 1951 novel The Puppet Masters. He did considerable research into gaming and diabetes. 
His description of Baal's condition was based on several real-life cases he investigated. (Anthony himself has been diagnosed with Type II diabetes, though the diagnosis was later called into question, as some doctors believe he may instead have chronic fatigue syndrome.) Additionally, Phreak was formed as a composite of several real-life hackers. Because of the approaching deadline, he had an assistant do the research on Beirut. In 1991, he believed the technology described in the book would be created in the following decade. External links 1993 American novels American science fiction novels Novels by Piers Anthony Video Games 1993 science fiction novels
8771245
https://en.wikipedia.org/wiki/SchoolTool
SchoolTool
SchoolTool is a GPL licensed, free student information system for schools around the world. The goals of the project are to create a simple turnkey student information system, including demographics, gradebook, attendance, calendaring and reporting for primary and secondary schools, as well as a framework for building customized applications and configurations for individual schools or states. SchoolTool is built as a free software/open source software stack, licensed under the GNU General Public License, Version 2, written in Python using the Zope 3 framework. The sub-projects of School Tool are as follows: The SchoolTool Calendar and SchoolBell are calendar and resource management tools for schools available as part of the Edubuntu Linux distribution. A SchoolTool student information system is being developed and tested in collaboration with schools CanDo is a SchoolTool-based skills tracking program developed by Virginia students and teachers to track which skills students are acquiring in their classes and at what level of competency. SchoolTool is configured by default to act as what is often called a student information system or SIS. The focus is on tracking information related to students: demographics, enrollment grades, attendance, reporting. It is a subset of a complete "management information system" (MIS) for schools, which might also cover systems like accounting. SchoolTool is not a learning management system, or LMS, such as Moodle, although they share some overlapping feature sets, such as a gradebook. SchoolTool does not contain curriculum or learning objects. A post on the product news page in October 2016 titled "The Future of SIELibre and SchoolTool" indicates that the primary SchoolTool developers of have moved on to other things. This was accompanied by a google document explaining the decision and thanking contributors for their efforts. SchoolTool Features Customizable demographics; Student contact management; Calendars for the school, groups, and individuals; Resource booking; Teacher gradebooks; Class attendance; Report card generation. See also OpenEMIS Enterprise Application Integration Open Knowledge Initiative Web services FET References External links Website 2004 software Edubuntu Educational software Cross-platform free software Free educational software Free content management systems Free software programmed in Python School-administration software
371033
https://en.wikipedia.org/wiki/Wget
Wget
GNU Wget (or just Wget, formerly Geturl, also written as its package name, wget) is a computer program that retrieves content from web servers. It is part of the GNU Project. Its name derives from "World Wide Web" and "get." It supports downloading via HTTP, HTTPS, and FTP. Its features include recursive download, conversion of links for offline viewing of local HTML, and support for proxies. It appeared in 1996, coinciding with the boom of popularity of the Web, causing its wide use among Unix users and distribution with most major Linux distributions. Written in portable C, Wget can be easily installed on any Unix-like system. Wget has been ported to Microsoft Windows, macOS, OpenVMS, HP-UX, AmigaOS, MorphOS and Solaris. Since version 1.14 Wget has been able to save its output in the web archiving standard WARC format. It has been used as the basis for graphical programs such as GWget for the GNOME Desktop. History Wget descends from an earlier program named Geturl by the same author, the development of which commenced in late 1995. The name changed to Wget after the author became aware of an earlier Amiga program named GetURL, written by James Burton in AREXX. Wget filled a gap in the inconsistent web-downloading software available in the mid-1990s. No single program could reliably use both HTTP and FTP to download files. Existing programs either supported FTP (such as NcFTP and dl) or were written in Perl, which was not yet ubiquitous. While Wget was inspired by features of some of the existing programs, it supported both HTTP and FTP and could be built using only the standard development tools found on every Unix system. At that time many Unix users struggled behind extremely slow university and dial-up Internet connections, leading to a growing need for a downloading agent that could deal with transient network failures without assistance from the human operator. In 2010, Chelsea Manning used Wget to download 250,000 U.S. diplomatic cables and 500,000 Army reports that came to be known as the Iraq War logs and Afghan War logs sent to WikiLeaks. Features Robustness Wget has been designed for robustness over slow or unstable network connections. If a download does not complete due to a network problem, Wget will automatically try to continue the download from where it left off, and repeat this until the whole file has been retrieved. It was one of the first clients to make use of the then-new Range HTTP header to support this feature. Recursive download Wget can optionally work like a web crawler by extracting resources linked from HTML pages and downloading them in sequence, repeating the process recursively until all the pages have been downloaded or a maximum recursion depth specified by the user has been reached. The downloaded pages are saved in a directory structure resembling that on the remote server. This "recursive download" enables partial or complete mirroring of web sites via HTTP. Links in downloaded HTML pages can be adjusted to point to locally downloaded material for offline viewing. When performing this kind of automatic mirroring of web sites, Wget supports the Robots Exclusion Standard (unless the option -e robots=off is used). Recursive download works with FTP as well, where Wget issues the LIST command to find which additional files to download, repeating this process for directories and files under the one specified in the top URL. Shell-like wildcards are supported when the download of FTP URLs is requested. 
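The Robustness section above notes that Wget resumes an interrupted transfer by requesting only the missing bytes with the HTTP Range header. The Go sketch below illustrates that resume technique under the assumption of a cooperative server; it is not Wget's actual implementation (which is written in C), and the URL and file name are placeholders.

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
)

// resume appends the remainder of url to path, asking the server to
// start at the byte offset already present on disk.
func resume(url, path string) error {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0644)
	if err != nil {
		return err
	}
	defer f.Close()

	info, err := f.Stat()
	if err != nil {
		return err
	}
	offset := info.Size()

	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return err
	}
	// Ask only for the bytes that are missing, as a continuing download does.
	req.Header.Set("Range", fmt.Sprintf("bytes=%d-", offset))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	// 206 Partial Content means the server honoured the Range request.
	if resp.StatusCode != http.StatusPartialContent {
		return fmt.Errorf("server did not honour Range request: %s", resp.Status)
	}

	_, err = io.Copy(f, resp.Body)
	return err
}

func main() {
	// Placeholder URL and file name.
	if err := resume("https://example.com/big.iso", "big.iso"); err != nil {
		log.Fatal(err)
	}
}

A server that ignores the Range header replies with 200 and the whole file, which is why the sketch checks for 206 Partial Content before appending to the local file.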
When downloading recursively over either HTTP or FTP, Wget can be instructed to inspect the timestamps of local and remote files, and download only the remote files newer than the corresponding local ones. This allows easy mirroring of HTTP and FTP sites, but is considered inefficient and more error-prone when compared to programs designed for mirroring from the ground up, such as rsync. On the other hand, Wget doesn't require special server-side software for this task. Non-interactiveness Wget is non-interactive in the sense that, once started, it does not require user interaction and does not need to control a TTY, being able to log its progress to a separate file for later inspection. Users can start Wget and log off, leaving the program unattended. By contrast, most graphical or text user interface web browsers require the user to remain logged in and to manually restart failed downloads, which can be a great hindrance when transferring a lot of data. Portability Written in a highly portable style of C with minimal dependencies on third-party libraries, Wget requires little more than a C compiler and a BSD-like interface to TCP/IP networking. Designed as a Unix program invoked from the Unix shell, the program has been ported to numerous Unix-like environments and systems, including Microsoft Windows via Cygwin, and macOS. It is also available as a native Microsoft Windows program as one of the GnuWin packages. Other features Wget supports download through proxies, which are widely deployed to provide web access inside company firewalls and to cache and quickly deliver frequently accessed content. It makes use of persistent HTTP connections where available. IPv6 is supported on systems that include the appropriate interfaces. SSL/TLS is supported for encrypted downloads using the OpenSSL or GnuTLS library. Files larger than 2 GiB are supported on 32-bit systems that include the appropriate interfaces. Download speed may be throttled to avoid using up all of the available bandwidth. Can save its output in the web archiving standard WARC format, deduplicating from an associated CDX file as required. Authors and copyright GNU Wget was written by Hrvoje Nikšić with contributions by many other people, including Dan Harkless, Ian Abbott, and Mauro Tortonesi. Significant contributions are credited in the AUTHORS file included in the distribution, and all remaining ones are documented in the changelogs, also included with the program. Wget is currently maintained by Giuseppe Scrivano, Tim Rühsen and Darshit Shah. The copyright to Wget belongs to the Free Software Foundation, whose policy is to require copyright assignments for all non-trivial contributions to GNU software. License GNU Wget is distributed under the terms of the GNU General Public License, version 3 or later, with a special exception that allows distribution of binaries linked against the OpenSSL library. The text of the exception follows: Additional permission under GNU GPL version 3 section 7 If you modify this program, or any covered work, by linking or combining it with the OpenSSL project's OpenSSL library (or a modified version of that library), containing parts covered by the terms of the OpenSSL or SSLeay licenses, the Free Software Foundation grants you additional permission to convey the resulting work. Corresponding Source for a non-source form of such a combination shall include the source code for the parts of OpenSSL used as well as that of the covered work. 
It is expected that the exception clause will be removed once Wget is modified to also link with the GnuTLS library. Wget's documentation, in the form of a Texinfo reference manual, is distributed under the terms of the GNU Free Documentation License, version 1.2 or later. The man page usually distributed on Unix-like systems is automatically generated from a subset of the Texinfo manual and falls under the terms of the same license. Development Wget is developed in an open fashion, most of the design decisions typically being discussed on the public mailing list followed by users and developers. Bug reports and patches are relayed to the same list. Source contribution The preferred method of contributing to Wget's code and documentation is through source updates in the form of textual patches generated by the diff utility. Patches intended for inclusion in Wget are submitted to the mailing list where they are reviewed by the maintainers. Patches that pass the maintainers' scrutiny are installed in the sources. Instructions on patch creation as well as style guidelines are outlined on the project's wiki. The source code can also be tracked via a remote version control repository that hosts revision history beginning with the 1.5.3 release. The repository is currently running Git. Prior to that, the source code had been hosted on (in reverse order): Bazaar, Mercurial, Subversion, and via CVS. Release When a sufficient number of features or bug fixes accumulate during development, Wget is released to the general public via the GNU FTP site and its mirrors. Being entirely run by volunteers, there is no external pressure to issue a release nor are there enforceable release deadlines. Releases are numbered as versions of the form of major.minor[.revision], such as Wget 1.11 or Wget 1.8.2. An increase of the major version number represents large and possibly incompatible changes in Wget's behavior or a radical redesign of the code base. An increase of the minor version number designates addition of new features and bug fixes. A new revision indicates a release that, compared to the previous revision, only contains bug fixes. Revision zero is omitted, meaning that for example Wget 1.11 is the same as 1.11.0. Wget does not use the odd-even release number convention popularized by Linux. Popular references Wget makes an appearance in the 2010 Columbia Pictures motion picture release, The Social Network. The lead character, loosely based on Facebook co-founder Mark Zuckerberg, uses Wget to aggregate student photos from various Harvard University housing-facility directories. Notable releases The following releases represent notable milestones in Wget's development. Features listed next to each release are edited for brevity and do not constitute comprehensive information about the release, which is available in the NEWS file distributed with Wget. Geturl 1.0, released January 1996, was the first publicly available release. The first English-language announcement can be traced to a Usenet news posting, which probably refers to Geturl 1.3.4 released in June. Wget 1.4.0, released November 1996, was the first version to use the name Wget. It was also the first release distributed under the terms of the GNU GPL, Geturl having been distributed under an ad hoc no-warranty license. Wget 1.4.3, released February 1997, was the first version released as part of the GNU project with the copyright assigned to the FSF. Wget 1.5.3, released September 1998, was a milestone in the program's popularity. 
This version was bundled with many Linux based distributions, which exposed the program to a much wider audience. Wget 1.6, released December 1999, incorporated many bug fixes for the (by then stale) 1.5.3 release, largely thanks to the effort of Dan Harkless. Wget 1.7, released June 2001, introduced SSL support, cookies, and persistent connections. Wget 1.8, released December 2001, added bandwidth throttling, new progress indicators, and the breadth-first traversal of the hyperlink graph. Wget 1.9, released October 2003, included experimental IPv6 support, and ability to POST data to HTTP servers. Wget 1.10, released June 2005, introduced large file support, IPv6 support on dual-family systems, NTLM authorization, and SSL improvements. The maintainership was picked up by Mauro Tortonesi. Wget 1.11, released January 2008, moved to version 3 of the GNU General Public License, and added preliminary support for the Content-Disposition header, which is often used by CGI scripts to indicate the name of a file for downloading. Security-related improvements were also made to the HTTP authentication code. Micah Cowan took over maintainership of the project. Wget 1.12, released September 2009, added support for parsing URLs from CSS content on the web, and for handling Internationalized Resource Identifiers. Wget 1.13, released August 2011, supports HTTP/1.1, fixed some portability issues, and used the GnuTLS library by default for secure connections. Wget 1.14, released August 2012, improved support for TLS and added support for Digest Access Authentication. Wget 1.15, released January 2014, added—https-only and support for Perfect-Forward Secrecy. Wget 1.16, released October 2014, changed the default progress bar output, closed , added support for libpsl to verify cookie domains, and introduced—start-pos to allow starting downloads from a specified position. Wget 1.17, released November 2015, removed FTP passive to active fallback due to privacy concerns, added support for FTPS and for—if-modified-since. Wget 1.18, released June 2016, resolved the issue, and added the "--bind-dns-address" and "--dns-servers" options. Wget 1.19, released February 2017, added new options for processing a Metalink file; version 1.19.1 added the—retry-on-http-error option to retry a download if the Web server responds with a given HTTP status code. Wget 1.20, released November 2018, added --retry-on-host-error for more reliability and --accept-regex, --reject-regex options for recursive FTP retrievals. Wget2 GNU Wget2 2.0.0 was released on the 26 September 2021. It is licensed under the GPL-3.0-or-later license, and is wrapped around Libwget which is under the LGPL-3.0-or-later license. It has many improvements in comparison to Wget, particularly, in many cases Wget2 downloads much faster than Wget1.x due to support of the following protocols and technologies: HTTP/2, HTTP compression, parallel connections, use of If-Modified-Since HTTP header, TCP Fast Open. Related works GWget GWget is a free software graphical user interface for Wget. It is developed by David Sedeño Fernández and is part of the GNOME project. GWget supports all of the main features that Wget does, as well as parallel downloads. Cliget Cliget is an open source Firefox addon downloader that uses Curl, Wget and Aria2. It is developed by Zaid Abdulla. 
Clones For embeded systems characteristically limited size of storage and typically they are using clones of GNU Wget that have only basic options, usually downloading only: OpenWrt uclient-fetch BusyBox wget ToyBox wget See also cURL HTTrack lftp Web crawler PowerShell iwr Invoke-WebRequest command Notes References External links 1996 software Command-line software Cross-platform free software Download managers Free FTP clients Free web crawlers GNU Project software Hypertext Transfer Protocol clients Portable software Text mode Web archiving Web scraping
13941848
https://en.wikipedia.org/wiki/Comparison%20of%20programming%20languages%20%28associative%20array%29
Comparison of programming languages (associative array)
This Comparison of programming languages (associative arrays) compares the features of associative array data structures or array-lookup processing for over 40 computer programming languages. Language support The following is a comparison of associative arrays (also "mapping", "hash", and "dictionary") in various programming languages. Awk Awk has built-in, language-level support for associative arrays. For example: phonebook["Sally Smart"] = "555-9999" phonebook["John Doe"] = "555-1212" phonebook["J. Random Hacker"] = "555-1337" The following code loops through an associated array and prints its contents: for (name in phonebook) { print name, " ", phonebook[name] } The user can search for elements in an associative array, and delete elements from the array. The following shows how multi-dimensional associative arrays can be simulated in standard Awk using concatenation and the built-in string-separator variable SUBSEP: { # for every input line multi[$1 SUBSEP $2]++; } # END { for (x in multi) { split(x, arr, SUBSEP); print arr[1], arr[2], multi[x]; } } C There is no standard implementation of associative arrays in C, but a 3rd-party library, C Hash Table, with BSD license, is available. Another 3rd-party library, uthash, also creates associative arrays from C structures. A structure represents a value, and one of the structure fields serves as the key. Finally, the GLib library also supports associative arrays, along with many other advanced data types and is the recommended implementation of the GNU Project. Similar to GLib, Apple's cross-platform Core Foundation framework provides several basic data types. In particular, there are reference-counted CFDictionary and CFMutableDictionary. C# C# uses the collection classes provided by the .NET Framework. The most commonly used associative array type is System.Collections.Generic.Dictionary<TKey, TValue>, which is implemented as a mutable hash table. The relatively new System.Collections.Immutable package, available in .NET Framework versions 4.5 and above, and in all versions of .NET Core, also includes the System.Collections.Immutable.Dictionary<TKey, TValue> type, which is implemented using an AVL tree. The methods that would normally mutate the object in-place instead return a new object that represents the state of the original object after mutation. Creation The following demonstrates three means of populating a mutable dictionary: the Add method, which adds a key and value and throws an exception if the key already exists in the dictionary; assigning to the indexer, which overwrites any existing value, if present; and assigning to the backing property of the indexer, for which the indexer is syntactic sugar (not applicable to C#, see F# or VB.NET examples). Dictionary<string, string> dic = new Dictionary<string, string>(); dic.Add("Sally Smart", "555-9999"); dic["John Doe"] = "555-1212"; // Not allowed in C#. // dic.Item("J. Random Hacker") = "553-1337"; dic["J. Random Hacker"] = "553-1337"; The dictionary can also be initialized during construction using a "collection initializer", which compiles to repeated calls to Add. var dic = new Dictionary<string, string> { { "Sally Smart", "555-9999" }, { "John Doe", "555-1212" }, { "J. Random Hacker", "553-1337" } }; Access by key Values are primarily retrieved using the indexer (which throws an exception if the key does not exist) and the TryGetValue method, which has an output parameter for the sought value and a Boolean return-value indicating whether the key was found. 
var sallyNumber = dic["Sally Smart"]; var sallyNumber = dic.TryGetValue("Sally Smart", out var result) ? result : "n/a"; In this example, the sallyNumber value will now contain the string "555-9999". Enumeration A dictionary can be viewed as a sequence of keys, sequence of values, or sequence of pairs of keys and values represented by instances of the KeyValuePair<TKey, TValue> type, although there is no guarantee of order. For a sorted dictionary, the programmer could choose to use a SortedDictionary<TKey, TValue> or use the OrderBy LINQ extension method when enumerating. The following demonstrates enumeration using a foreach loop: // loop through the collection and display each entry. foreach(KeyValuePair<string,string> kvp in dic) { Console.WriteLine("Phone number for {0} is {1}", kvp.Key, kvp.Value); } C++ C++ has a form of associative array called std::map (see Standard Template Library#Containers). One could create a phone-book map with the following code in C++: #include <map> #include <string> #include <utility> int main() { std::map<std::string, std::string> phone_book; phone_book.insert(std::make_pair("Sally Smart", "555-9999")); phone_book.insert(std::make_pair("John Doe", "555-1212")); phone_book.insert(std::make_pair("J. Random Hacker", "553-1337")); } Or less efficiently, as this creates temporary std::string values: #include <map> #include <string> int main() { std::map<std::string, std::string> phone_book; phone_book["Sally Smart"] = "555-9999"; phone_book["John Doe"] = "555-1212"; phone_book["J. Random Hacker"] = "553-1337"; } With the extension of initialization lists in C++11, entries can be added during a map's construction as shown below: #include <map> #include <string> int main() { std::map<std::string, std::string> phone_book { {"Sally Smart", "555-9999"}, {"John Doe", "555-1212"}, {"J. Random Hacker", "553-1337"} }; } You can iterate through the map with the following code (C++03): std::map<std::string, std::string>::iterator curr, end; for(curr = phone_book.begin(), end = phone_book.end(); curr != end; ++curr) std::cout << curr->first << " = " << curr->second << std::endl; The same task in C++11: for(const auto& curr : phone_book) std::cout << curr.first << " = " << curr.second << std::endl; Using the structured binding available in C++17: for (const auto& [name, number] : phone_book) { std::cout << name << " = " << number << std::endl; } In C++, the std::map class is templated, which allows the data types of keys and values to be different for different map instances. For a given instance of the map class the keys must be of the same base type. The same must be true for all of the values. Although std::map is typically implemented using a self-balancing binary search tree, C++11 defines a second map called std::unordered_map, which has the algorithmic characteristics of a hash table. This is a common vendor extension to the Standard Template Library (STL) as well, usually called hash_map, available from such implementations as SGI and STLPort. CFML A structure in CFML is equivalent to an associative array: dynamicKeyName = "John Doe"; phoneBook = { "Sally Smart" = "555-9999", "#dynamicKeyName#" = "555-4321", "J. Random Hacker" = "555-1337", UnknownComic = "???" }; writeOutput(phoneBook.UnknownComic); // ???
writeDump(phoneBook); // entire struct Cobra Initializing an empty dictionary and adding items in Cobra: Alternatively, a dictionary can be initialized with all items during construction: The dictionary can be enumerated by a for-loop, but there is no guaranteed order: D D offers direct support for associative arrays in the core language; such arrays are implemented as a chaining hash table with binary trees. The equivalent example would be: int main() { string[ string ] phone_book; phone_book["Sally Smart"] = "555-9999"; phone_book["John Doe"] = "555-1212"; phone_book["J. Random Hacker"] = "553-1337"; return 0; } Keys and values can be any types, but all the keys in an associative array must be of the same type, and the same goes for dependent values. Looping through all properties and associated values, and printing them, can be coded as follows: foreach (key, value; phone_book) { writeln("Number for " ~ key ~ ": " ~ value ); } A property can be removed as follows: phone_book.remove("Sally Smart"); Delphi Delphi supports several standard containers, including TDictionary<TKey, TValue>: uses SysUtils, Generics.Collections; var PhoneBook: TDictionary<string, string>; Entry: TPair<string, string>; begin PhoneBook := TDictionary<string, string>.Create; PhoneBook.Add('Sally Smart', '555-9999'); PhoneBook.Add('John Doe', '555-1212'); PhoneBook.Add('J. Random Hacker', '553-1337'); for Entry in PhoneBook do Writeln(Format('Number for %s: %s',[Entry.Key, Entry.Value])); end. Versions of Delphi prior to 2009 do not offer direct support for associative arrays. However, associative arrays can be simulated using the TStrings class: procedure TForm1.Button1Click(Sender: TObject); var DataField: TStrings; i: Integer; begin DataField := TStringList.Create; DataField.Values['Sally Smart'] := '555-9999'; DataField.Values['John Doe'] := '555-1212'; DataField.Values['J. Random Hacker'] := '553-1337'; // access an entry and display it in a message box ShowMessage(DataField.Values['Sally Smart']); // loop through the associative array for i := 0 to DataField.Count - 1 do begin ShowMessage('Number for ' + DataField.Names[i] + ': ' + DataField.ValueFromIndex[i]); end; DataField.Free; end; Erlang Erlang offers many ways to represent mappings; three of the most common in the standard library are keylists, dictionaries, and maps. Keylists Keylists are lists of tuples, where the first element of each tuple is a key, and the second is a value. Functions for operating on keylists are provided in the lists module. PhoneBook = [{"Sally Smith", "555-9999"}, {"John Doe", "555-1212"}, {"J. Random Hacker", "553-1337"}]. Accessing an element of the keylist can be done with the lists:keyfind/3 function: {_, Phone} = lists:keyfind("Sally Smith", 1, PhoneBook), io:format("Phone number: ~s~n", [Phone]). Dictionaries Dictionaries are implemented in the dict module of the standard library. A new dictionary is created using the dict:new/0 function and new key/value pairs are stored using the dict:store/3 function: PhoneBook1 = dict:new(), PhoneBook2 = dict:store("Sally Smith", "555-9999", PhoneBook1), PhoneBook3 = dict:store("John Doe", "555-1212", PhoneBook2), PhoneBook = dict:store("J. Random Hacker", "553-1337", PhoneBook3). Such a serial initialization would be more idiomatically represented in Erlang with the appropriate function: PhoneBook = dict:from_list([{"Sally Smith", "555-9999"}, {"John Doe", "555-1212"}, {"J. Random Hacker", "553-1337"}]).
The dictionary can be accessed using the dict:find/2 function: {ok, Phone} = dict:find("Sally Smith", PhoneBook), io:format("Phone: ~s~n", [Phone]). In both cases, any Erlang term can be used as the key. Variations include the orddict module, implementing ordered dictionaries, and gb_trees, implementing general balanced trees. Maps Maps were introduced in OTP 17.0, and combine the strengths of keylists and dictionaries. A map is defined using the syntax #{ K1 => V1, ... Kn => Vn }: PhoneBook = #{"Sally Smith" => "555-9999", "John Doe" => "555-1212", "J. Random Hacker" => "553-1337"}. Basic functions to interact with maps are available from the maps module. For example, the maps:find/2 function returns the value associated with a key: {ok, Phone} = maps:find("Sally Smith", PhoneBook), io:format("Phone: ~s~n", [Phone]). Unlike dictionaries, maps can be pattern matched upon: #{"Sally Smith" := Phone} = PhoneBook, io:format("Phone: ~s~n", [Phone]). Erlang also provides syntax sugar for functional updates—creating a new map based on an existing one, but with modified values or additional keys: PhoneBook2 = PhoneBook#{ % the `:=` operator updates the value associated with an existing key "J. Random Hacker" := "355-7331", % the `=>` operator adds a new key-value pair, potentially replacing an existing one "Alice Wonderland" => "555-1865" } F# Map<'Key,'Value> At runtime, F# provides the Collections.Map<'Key,'Value> type, which is an immutable AVL tree. Creation The following example calls the Map constructor, which operates on a list (a semicolon delimited sequence of elements enclosed in square brackets) of tuples (which in F# are comma-delimited sequences of elements). let numbers = [ "Sally Smart", "555-9999"; "John Doe", "555-1212"; "J. Random Hacker", "555-1337" ] |> Map Access by key Values can be looked up via one of the Map members, such as its indexer or Item property (which throw an exception if the key does not exist) or the TryFind function, which returns an option type with a value of Some <result>, for a successful lookup, or None, for an unsuccessful one. Pattern matching can then be used to extract the raw value from the result, or a default value can be set. let sallyNumber = numbers.["Sally Smart"] // or let sallyNumber = numbers.Item("Sally Smart") let sallyNumber = match numbers.TryFind("Sally Smart") with | Some(number) -> number | None -> "n/a" In both examples above, the sallyNumber value would contain the string "555-9999". Dictionary<'TKey,'TValue> Because F# is a .NET language, it also has access to features of the .NET Framework, including the System.Collections.Generic.Dictionary<'TKey,'TValue> type (which is implemented as a hash table), which is the primary associative array type used in C# and Visual Basic. This type may be preferred when writing code that is intended to operate with other languages on the .NET Framework, or when the performance characteristics of a hash table are preferred over those of an AVL tree. Creation The dict function provides a means of conveniently creating a .NET dictionary that is not intended to be mutated; it accepts a sequence of tuples and returns an immutable object that implements IDictionary<'TKey,'TValue>. let numbers = [ "Sally Smart", "555-9999"; "John Doe", "555-1212"; "J. Random Hacker", "555-1337" ] |> dict When a mutable dictionary is needed, the constructor of System.Collections.Generic.Dictionary<'TKey,'TValue> can be called directly. See the C# example on this page for additional information.
let numbers = System.Collections.Generic.Dictionary<string, string>() numbers.Add("Sally Smart", "555-9999") numbers.["John Doe"] <- "555-1212" numbers.Item("J. Random Hacker") <- "555-1337" Access by key IDictionary instances have an indexer that is used in the same way as Map, although the equivalent to TryFind is TryGetValue, which has an output parameter for the sought value and a Boolean return value indicating whether the key was found. let sallyNumber = let mutable result = "" if numbers.TryGetValue("Sally Smart", &result) then result else "n/a" F# also allows the function to be called as if it had no output parameter and instead returned a tuple containing its regular return value and the value assigned to the output parameter: let sallyNumber = match numbers.TryGetValue("Sally Smart") with | true, number -> number | _ -> "n/a" Enumeration A dictionary or map can be enumerated using Seq.map. // loop through the collection and display each entry. numbers |> Seq.map (fun kvp -> printfn "Phone number for %O is %O" kvp.Key kvp.Value) FoxPro Visual FoxPro implements mapping with the Collection Class. mapping = NEWOBJECT("Collection") mapping.Add("Daffodils", "flower2") && Add(object, key) – key must be character index = mapping.GetKey("flower2") && returns the index value 1 object = mapping("flower2") && returns "Daffodils" (retrieve by key) object = mapping(1) && returns "Daffodils" (retrieve by index) GetKey returns 0 if the key is not found. Go Go has built-in, language-level support for associative arrays, called "maps". A map's key type may only be a boolean, numeric, string, array, struct, pointer, interface, or channel type. A map type is written: map[keytype]valuetype Adding elements one at a time: phone_book := make(map[string] string) // make an empty map phone_book["Sally Smart"] = "555-9999" phone_book["John Doe"] = "555-1212" phone_book["J. Random Hacker"] = "553-1337" A map literal: phone_book := map[string] string { "Sally Smart": "555-9999", "John Doe": "555-1212", "J. Random Hacker": "553-1337", } Iterating through a map: // over both keys and values for key, value := range phone_book { fmt.Printf("Number for %s: %s\n", key, value) } // over just keys for key := range phone_book { fmt.Printf("Name: %s\n", key) } Haskell The Haskell programming language provides only one kind of associative container – a list of pairs: m = [("Sally Smart", "555-9999"), ("John Doe", "555-1212"), ("J. Random Hacker", "553-1337")] main = print (lookup "John Doe" m) output: Just "555-1212" Note that the lookup function returns a "Maybe" value, which is "Nothing" if not found, or "Just 'result when found. GHC, the most commonly used implementation of Haskell, provides two more types of associative containers. Other implementations might also provide these. One is polymorphic functional maps (represented as immutable balanced binary trees): import qualified Data.Map as M m = M.insert "Sally Smart" "555-9999" M.empty m' = M.insert "John Doe" "555-1212" m m'' = M.insert "J. Random Hacker" "553-1337" m' main = print (M.lookup "John Doe" m'' :: Maybe String) output: Just "555-1212" A specialized version for integer keys also exists as Data.IntMap. Finally, a polymorphic hash table: import qualified Data.HashTable as H main = do m <- H.new (==) H.hashString H.insert m "Sally Smart" "555-9999" H.insert m "John Doe" "555-1212" H.insert m "J. 
Random Hacker" "553-1337" foo <- H.lookup m "John Doe" print foo output: Just "555-1212" Lists of pairs and functional maps both provide a purely functional interface, which is more idiomatic in Haskell. In contrast, hash tables provide an imperative interface in the IO monad. Java In Java associative arrays are implemented as "maps", which are part of the Java collections framework. Since J2SE 5.0 and the introduction of generics into Java, collections can have a type specified; for example, an associative array that maps strings to strings might be specified as follows: Map<String, String> phoneBook = new HashMap<String, String>(); phoneBook.put("Sally Smart", "555-9999"); phoneBook.put("John Doe", "555-1212"); phoneBook.put("J. Random Hacker", "555-1337"); The method is used to access a key; for example, the value of the expression phoneBook.get("Sally Smart") is "555-9999". This code uses a hash map to store the associative array, by calling the constructor of the class. However, since the code only uses methods common to the interface , a self-balancing binary tree could be used by calling the constructor of the class (which implements the subinterface ), without changing the definition of the phoneBook variable, or the rest of the code, or using other underlying data structures that implement the Map interface. The hash function in Java, used by HashMap and HashSet, is provided by the method. Since every class in Java inherits from , every object has a hash function. A class can override the default implementation of hashCode() to provide a custom hash function more in accordance with the properties of the object. The Object class also contains the method, which tests an object for equality with another object. Hashed data structures in Java rely on objects maintaining the following contract between their hashCode() and equals() methods: For two objects a and b, a.equals(b) == b.equals(a) if a.equals(b), then a.hashCode() == b.hashCode() In order to maintain this contract, a class that overrides equals() must also override hashCode(), and vice versa, so that hashCode() is based on the same properties (or a subset of the properties) as equals(). A further contract that a hashed data structure has with the object is that the results of the hashCode() and equals() methods will not change once the object has been inserted into the map. For this reason, it is generally a good practice to base the hash function on immutable properties of the object. Analogously, TreeMap, and other sorted data structures, require that an ordering be defined on the data type. Either the data type must already have defined its own ordering, by implementing the interface; or a custom must be provided at the time the map is constructed. As with HashMap above, the relative ordering of keys in a TreeMap should not change once they have been inserted into the map. JavaScript JavaScript (and its standardized version, ECMAScript) is a prototype-based object-oriented language. Map and WeakMap Modern JavaScript handles associative arrays, using the Map and WeakMap classes. A map does not contain any keys by default; it only contains what is explicitly put into it. The keys and values can be any type (including functions, objects, or any primitive). Creation A map can be initialized with all items during construction: const phoneBook = new Map([ ["Sally Smart", "555-9999"], ["John Doe", "555-1212"], ["J. 
Random Hacker", "553-1337"], ]); Alternatively, you can initialize an empty map and then add items: const phoneBook = new Map(); phoneBook.set("Sally Smart", "555-9999"); phoneBook.set("John Doe", "555-1212"); phoneBook.set("J. Random Hacker", "553-1337"); Access by key Accessing an element of the map can be done with the get method: const sallyNumber = phoneBook.get("Sally Smart"); In this example, the value sallyNumber will now contain the string "555-9999". Enumeration The keys in a map are ordered. Thus, when iterating through it, a map object returns keys in order of insertion. The following demonstrates enumeration using a for-loop: // loop through the collection and display each entry. for (const [name, number] of phoneBook) { console.log(`Phone number for ${name} is ${number}`); } A key can be removed as follows: phoneBook.delete("Sally Smart"); Object An object is similar to a map—both let you set keys to values, retrieve those values, delete keys, and detect whether a value is stored at a key. For this reason (and because there were no built-in alternatives), objects historically have been used as maps. However, there are important differences that make a map preferable in certain cases. In JavaScript an object is a mapping from property names to values—that is, an associative array with one caveat: the keys of an object must be either a string or a symbol (native objects and primitives implicitly converted to a string keys are allowed). Objects also include one feature unrelated to associative arrays: an object has a prototype, so it contains default keys that could conflict with user-defined keys. So, doing a lookup for a property will point the lookup to the prototype's definition if the object does not define the property. An object literal is written as { property1: value1, property2: value2, ... }. For example: const myObject = { "Sally Smart": "555-9999", "John Doe": "555-1212", "J. Random Hacker": "553-1337", }; To prevent the lookup from using the prototype's properties, you can use the Object.setPrototypeOf function: Object.setPrototypeOf(myObject, null); As of ECMAScript 5 (ES5), the prototype can also be bypassed by using Object.create(null): const myObject = Object.create(null); Object.assign(myObject, { "Sally Smart": "555-9999", "John Doe": "555-1212", "J. Random Hacker": "553-1337", }); If the property name is a valid identifier, the quotes can be omitted, e.g.: const myOtherObject = { foo: 42, bar: false }; Lookup is written using property-access notation, either square brackets, which always work, or dot notation, which only works for identifier keys: myObject["John Doe"] myOtherObject.foo You can also loop through all enumerable properties and associated values as follows (a for-in loop): for (const property in myObject) { const value = myObject[property]; console.log(`myObject[${property}] = ${value}`); } Or (a for-of loop): for (const [property, value] of Object.entries(myObject)) { console.log(`${property} = ${value}`); } A property can be removed as follows: delete myObject["Sally Smart"]; As mentioned before, properties are strings and symbols. Since every native object and primitive can be implicitly converted to a string, you can do: myObject[1] // key is "1"; note that myObject[1] == myObject["1"] myObject[["a", "b"]] // key is "a,b" myObject[{ toString() { return "hello world"; } }] // key is "hello world" In modern JavaScript it's considered bad form to use the Array type as an associative array. 
Consensus is that the Object type and Map/WeakMap classes are best for this purpose. The reasoning behind this is that if Array is extended via prototype and Object is kept pristine, for and for-in loops will work as expected on associative 'arrays'. This issue has been brought to the fore by the popularity of JavaScript frameworks that make heavy and sometimes indiscriminate use of prototypes to extend JavaScript's inbuilt types. See JavaScript Array And Object Prototype Awareness Day for more information on the issue. Julia In Julia, the following operations manage associative arrays. Declare dictionary: phonebook = Dict( "Sally Smart" => "555-9999", "John Doe" => "555-1212", "J. Random Hacker" => "555-1337" ) Access element: phonebook["Sally Smart"] Add element: phonebook["New Contact"] = "555-2222" Delete element: delete!(phonebook, "Sally Smart") Get keys and values as iterables: keys(phonebook) values(phonebook) KornShell 93, and compliant shells In KornShell 93, and compliant shells (ksh93, bash4...), the following operations can be used with associative arrays. Definition: typeset -A phonebook; # ksh93 declare -A phonebook; # bash4 phonebook=(["Sally Smart"]="555-9999" ["John Doe"]="555-1212" ["J. Random Hacker"]="555-1337"); Dereference: ${phonebook["John Doe"]}; Lisp Lisp was originally conceived as a "LISt Processing" language, and one of its most important data types is the linked list, which can be treated as an association list ("alist"). '(("Sally Smart" . "555-9999") ("John Doe" . "555-1212") ("J. Random Hacker" . "553-1337")) The syntax (x . y) is used to indicate a consed pair. Keys and values need not be the same type within an alist. Lisp and Scheme provide operators such as assoc to manipulate alists in ways similar to associative arrays. A set of operations specific to the handling of association lists exists for Common Lisp, each of these working non-destructively. To add an entry the acons function is employed, creating and returning a new association list. An association list in Common Lisp mimics a stack, that is, adheres to the last-in-first-out (LIFO) principle, and hence prepends to the list head. (let ((phone-book NIL)) (setf phone-book (acons "Sally Smart" "555-9999" phone-book)) (setf phone-book (acons "John Doe" "555-1212" phone-book)) (setf phone-book (acons "J. Random Hacker" "555-1337" phone-book))) This function can be construed as an accommodation for cons operations. ;; The effect of ;; (cons (cons KEY VALUE) ALIST) ;; is equivalent to ;; (acons KEY VALUE ALIST) (let ((phone-book '(("Sally Smart" . "555-9999") ("John Doe" . "555-1212")))) (cons (cons "J. Random Hacker" "555-1337") phone-book)) Of course, the destructive push operation also allows inserting entries into an association list, an entry having to constitute a key-value cons in order to retain the mapping's validity. (push (cons "Dummy" "123-4567") phone-book) Searching for an entry by its key is performed via assoc, which might be configured for the test predicate and direction, especially searching the association list from its end to its front. The result, if positive, returns the entire entry cons, not only its value. Failure to obtain a matching key leads to a return of the NIL value. (assoc "John Doe" phone-book :test #'string=) Two generalizations of assoc exist: assoc-if expects a predicate function that tests each entry's key, returning the first entry for which the predicate produces a non-NIL value upon invocation.
assoc-if-not inverts the logic, accepting the same arguments, but returning the first entry generating NIL. ;; Find the first entry whose key equals "John Doe". (assoc-if #'(lambda (key) (string= key "John Doe")) phone-book) ;; Finds the first entry whose key is neither "Sally Smart" nor "John Doe" (assoc-if-not #'(lambda (key) (member key '("Sally Smart" "John Doe") :test #'string=)) phone-book) The inverse process, the detection of an entry by its value, utilizes rassoc. ;; Find the first entry with a value of "555-9999". ;; We test the entry string values with the "string=" predicate. (rassoc "555-9999" phone-book :test #'string=) The corresponding generalizations rassoc-if and rassoc-if-not exist. ;; Finds the first entry whose value is "555-9999". (rassoc-if #'(lambda (value) (string= value "555-9999")) phone-book) ;; Finds the first entry whose value is not "555-9999". (rassoc-if-not #'(lambda (value) (string= value "555-9999")) phone-book) All of the previous entry search functions can be replaced by general list-centric variants, such as find, find-if, find-if-not, as well as pertinent functions like position and its derivates. ;; Find an entry with the key "John Doe" and the value "555-1212". (find (cons "John Doe" "555-1212") phone-book :test #'equal) Deletion, lacking a specific counterpart, is based upon the list facilities, including destructive ones. ;; Create and return an alist without any entry whose key equals "John Doe". (remove-if #'(lambda (entry) (string= (car entry) "John Doe")) phone-book) Iteration is accomplished with the aid of any function that expects a list. ;; Iterate via "map". (map NIL #'(lambda (entry) (destructuring-bind (key . value) entry (format T "~&~s => ~s" key value))) phone-book) ;; Iterate via "dolist". (dolist (entry phone-book) (destructuring-bind (key . value) entry (format T "~&~s => ~s" key value))) These being structured lists, processing and transformation operations can be applied without constraints. ;; Return a vector of the "phone-book" values. (map 'vector #'cdr phone-book) ;; Destructively modify the "phone-book" via "map-into". (map-into phone-book #'(lambda (entry) (destructuring-bind (key . value) entry (cons (reverse key) (reverse value)))) phone-book) Because of their linear nature, alists are used for relatively small sets of data. Common Lisp also supports a hash table data type, and for Scheme they are implemented in SRFI 69. Hash tables have greater overhead than alists, but provide much faster access when there are many elements. A further characteristic is the fact that Common Lisp hash tables do not, as opposed to association lists, maintain the order of entry insertion. Common Lisp hash tables are constructed via the make-hash-table function, whose arguments encompass, among other configurations, a predicate to test the entry key. While tolerating arbitrary objects, even heterogeneity within a single hash table instance, the specification of this key :test function is confined to distinguishable entities: the Common Lisp standard only mandates the support of eq, eql, equal, and equalp, yet designating additional or custom operations as permissive for concrete implementations. (let ((phone-book (make-hash-table :test #'equal))) (setf (gethash "Sally Smart" phone-book) "555-9999") (setf (gethash "John Doe" phone-book) "555-1212") (setf (gethash "J. Random Hacker" phone-book) "553-1337")) The gethash function permits obtaining the value associated with a key. 
(gethash "John Doe" phone-book) Additionally, a default value for the case of an absent key may be specified. (gethash "Incognito" phone-book 'no-such-key) An invocation of gethash actually returns two values: the value or substitute value for the key and a boolean indicator, returning T if the hash table contains the key and NIL to signal its absence. (multiple-value-bind (value contains-key) (gethash "Sally Smart" phone-book) (if contains-key (format T "~&The associated value is: ~s" value) (format T "~&The key could not be found."))) Use remhash for deleting the entry associated with a key. (remhash "J. Random Hacker" phone-book) clrhash completely empties the hash table. (clrhash phone-book) The dedicated maphash function specializes in iterating hash tables. (maphash #'(lambda (key value) (format T "~&~s => ~s" key value)) phone-book) Alternatively, the loop construct makes provisions for iterations, through keys, values, or conjunctions of both. ;; Iterate the keys and values of the hash table. (loop for key being the hash-keys of phone-book using (hash-value value) do (format T "~&~s => ~s" key value)) ;; Iterate the values of the hash table. (loop for value being the hash-values of phone-book do (print value)) A further option invokes with-hash-table-iterator, an iterator-creating macro, the processing of which is intended to be driven by the caller. (with-hash-table-iterator (entry-generator phone-book) (loop do (multiple-value-bind (has-entry key value) (entry-generator) (if has-entry (format T "~&~s => ~s" key value) (loop-finish))))) It is easy to construct composite abstract data types in Lisp, using structures or object-oriented programming features, in conjunction with lists, arrays, and hash tables. LPC LPC implements associative arrays as a fundamental type known as either "map" or "mapping", depending on the driver. The keys and values can be of any type. A mapping literal is written as ([ key_1 : value_1, key_2 : value_2 ]). Procedural code looks like: mapping phone_book = ([]); phone_book["Sally Smart"] = "555-9999"; phone_book["John Doe"] = "555-1212"; phone_book["J. Random Hacker"] = "555-1337"; Mappings are accessed for reading using the indexing operator in the same way as they are for writing, as shown above. So phone_book["Sally Smart"] would return the string "555-9999", and phone_book["John Smith"] would return 0. Testing for presence is done using the function member(), e.g. if(member(phone_book, "John Smith")) write("John Smith is listed.\n"); Deletion is accomplished using a function called either m_delete() or map_delete(), depending on the driver: m_delete(phone_book, "Sally Smart"); LPC drivers of the Amylaar family implement multivalued mappings using a secondary, numeric index (other drivers of the MudOS family do not support multivalued mappings.) Example syntax: mapping phone_book = ([:2]); phone_book["Sally Smart", 0] = "555-9999"; phone_book["Sally Smart", 1] = "99 Sharp Way"; phone_book["John Doe", 0] = "555-1212"; phone_book["John Doe", 1] = "3 Nigma Drive"; phone_book["J. Random Hacker", 0] = "555-1337"; phone_book["J. Random Hacker", 1] = "77 Massachusetts Avenue"; LPC drivers modern enough to support a foreach() construct use it to iterate through their mapping types. Lua In Lua, "table" is a fundamental type that can be used either as an array (numerical index, fast) or as an associative array. The keys and values can be of any type, except nil. The following focuses on non-numerical indexes. 
A table literal is written as { value, key = value, [index] = value, ["non id string"] = value }. For example: phone_book = { ["Sally Smart"] = "555-9999", ["John Doe"] = "555-1212", ["J. Random Hacker"] = "553-1337", -- Trailing comma is OK } aTable = { -- Table as value subTable = { 5, 7.5, k = true }, -- key is "subTable" -- Function as value ['John Doe'] = function (age) if age < 18 then return "Young" else return "Old!" end end, -- Table and function (and other types) can also be used as keys } If the key is a valid identifier (not a reserved word), the quotes can be omitted. Identifiers are case sensitive. Lookup is written using either square brackets, which always works, or dot notation, which only works for identifier keys: print(aTable["John Doe"](45)) x = aTable.subTable.k You can also loop through all keys and associated values with iterators or for-loops: simple = { [true] = 1, [false] = 0, [3.14] = math.pi, x = 'x', ["!"] = 42 } function FormatElement(key, value) return "[" .. tostring(key) .. "] = " .. value .. ", " end -- Iterate on all keys table.foreach(simple, function (k, v) io.write(FormatElement(k, v)) end) print"" for k, v in pairs(simple) do io.write(FormatElement(k, v)) end print"" k= nil repeat k, v = next(simple, k) if k ~= nil then io.write(FormatElement(k, v)) end until k == nil print"" An entry can be removed by setting it to nil: simple.x = nil Likewise, you can overwrite values or add them: simple['%'] = "percent" simple['!'] = 111 Mathematica and Wolfram Language Mathematica and Wolfram Language use the Association expression to represent associative arrays. phonebook = <| "Sally Smart" -> "555-9999", "John Doe" -> "555-1212", "J. Random Hacker" -> "553-1337" |>; To access: phonebook[[Key["Sally Smart"]]] If the keys are strings, the Key keyword is not necessary, so: phonebook[["Sally Smart"]] To list keys: and values Keys[phonebook] Values[phonebook] MUMPS In MUMPS every array is an associative array. The built-in, language-level, direct support for associative arrays applies to private, process-specific arrays stored in memory called "locals" as well as to the permanent, shared, global arrays stored on disk which are available concurrently to multiple jobs. The name for globals is preceded by the circumflex "^" to distinguish them from local variables. SET ^phonebook("Sally Smart")="555-9999" ;; storing permanent data SET phonebook("John Doe")="555-1212" ;; storing temporary data SET phonebook("J. Random Hacker")="553-1337" ;; storing temporary data MERGE ^phonebook=phonebook ;; copying temporary data into permanent data Accessing the value of an element simply requires using the name with the subscript: WRITE "Phone Number :",^phonebook("Sally Smart"),! You can also loop through an associated array as follows: SET NAME="" FOR S NAME=$ORDER(^phonebook(NAME)) QUIT:NAME="" WRITE NAME," Phone Number :",^phonebook(NAME),! Objective-C (Cocoa/GNUstep) Cocoa and GNUstep, written in Objective-C, handle associative arrays using NSMutableDictionary (a mutable version of NSDictionary) class cluster. This class allows assignments between any two objects. A copy of the key object is made before it is inserted into NSMutableDictionary, therefore the keys must conform to the NSCopying protocol. When being inserted to a dictionary, the value object receives a retain message to increase its reference count. 
The value object will receive the release message when it will be deleted from the dictionary (either explicitly or by adding to the dictionary a different object with the same key). NSMutableDictionary *aDictionary = [[NSMutableDictionary alloc] init]; [aDictionary setObject:@"555-9999" forKey:@"Sally Smart"]; [aDictionary setObject:@"555-1212" forKey:@"John Doe"]; [aDictionary setObject:@"553-1337" forKey:@"Random Hacker"]; To access assigned objects, this command may be used: id anObject = [aDictionary objectForKey:@"Sally Smart"]; All keys or values can be enumerated using NSEnumerator: NSEnumerator *keyEnumerator = [aDictionary keyEnumerator]; id key; while ((key = [keyEnumerator nextObject])) { // ... process it here ... } In Mac OS X 10.5+ and iPhone OS, dictionary keys can be enumerated more concisely using the NSFastEnumeration construct: for (id key in aDictionary) { // ... process it here ... } What is even more practical, structured data graphs may be easily created using Cocoa, especially NSDictionary (NSMutableDictionary). This can be illustrated with this compact example: NSDictionary *aDictionary = [NSDictionary dictionaryWithObjectsAndKeys: [NSDictionary dictionaryWithObjectsAndKeys: @"555-9999", @"Sally Smart", @"555-1212", @"John Doe", nil], @"students", [NSDictionary dictionaryWithObjectsAndKeys: @"553-1337", @"Random Hacker", nil], @"hackers", nil]; Relevant fields can be quickly accessed using key paths: id anObject = [aDictionary valueForKeyPath:@"students.Sally Smart"]; OCaml The OCaml programming language provides three different associative containers. The simplest is a list of pairs: # let m = [ "Sally Smart", "555-9999"; "John Doe", "555-1212"; "J. Random Hacker", "553-1337"];; val m : (string * string) list = [ ("Sally Smart", "555-9999"); ("John Doe", "555-1212"); ("J. Random Hacker", "553-1337") ] # List.assoc "John Doe" m;; - : string = "555-1212" The second is a polymorphic hash table: # let m = Hashtbl.create 3;; val m : ('_a, '_b) Hashtbl.t = <abstr> # Hashtbl.add m "Sally Smart" "555-9999"; Hashtbl.add m "John Doe" "555-1212"; Hashtbl.add m "J. Random Hacker" "553-1337";; - : unit = () # Hashtbl.find m "John Doe";; - : string = "555-1212" The code above uses OCaml's default hash function Hashtbl.hash, which is defined automatically for all types. To use a modified hash function, use the functor interface Hashtbl.Make to create a module, such as with Map. Finally, functional maps (represented as immutable balanced binary trees): # module StringMap = Map.Make(String);; ... # let m = StringMap.add "Sally Smart" "555-9999" StringMap.empty let m = StringMap.add "John Doe" "555-1212" m let m = StringMap.add "J. Random Hacker" "553-1337" m;; val m : string StringMap.t = <abstr> # StringMap.find "John Doe" m;; - : string = "555-1212" Note that in order to use Map, you have to provide the functor Map.Make with a module which defines the key type and the comparison function. The third-party library ExtLib provides a polymorphic version of functional maps, called PMap, which is given a comparison function upon creation. Lists of pairs and functional maps both provide a purely functional interface. By contrast, hash tables provide an imperative interface. For many operations, hash tables are significantly faster than lists of pairs and functional maps. OptimJ The OptimJ programming language is an extension of Java 5. As does Java, Optimj provides maps; but OptimJ also provides true associative arrays. 
Java arrays are indexed with non-negative integers; associative arrays are indexed with any type of key. String[String] phoneBook = { "Sally Smart" -> "555-9999", "John Doe" -> "555-1212", "J. Random Hacker" -> "553-1337" }; // String[String] is not a java type but an optimj type: // associative array of strings indexed by strings. // iterate over the values for(String number : phoneBook) { System.out.println(number); } // The previous statement prints: "555-9999" "555-1212" "553-1337" // iterate over the keys for(String name : phoneBook.keys) { System.out.println(name + " -> " + phoneBook[name]); } // phoneBook[name] access a value by a key (it looks like java array access) // i.e. phoneBook["John Doe"] returns "555-1212" Of course, it is possible to define multi-dimensional arrays, to mix Java arrays and associative arrays, to mix maps and associative arrays. int[String][][double] a; java.util.Map<String[Object], Integer> b; Perl 5 Perl 5 has built-in, language-level support for associative arrays. Modern Perl refers to associative arrays as hashes; the term associative array is found in older documentation but is considered somewhat archaic. Perl 5 hashes are flat: keys are strings and values are scalars. However, values may be references to arrays or other hashes, and the standard Perl 5 module Tie::RefHash enables hashes to be used with reference keys. A hash variable is marked by a % sigil, to distinguish it from scalar, array, and other data types. A hash literal is a key-value list, with the preferred form using Perl's => token, which is semantically mostly identical to the comma and makes the key-value association clearer: my %phone_book = ( 'Sally Smart' => '555-9999', 'John Doe' => '555-1212', 'J. Random Hacker' => '553-1337', ); Accessing a hash element uses the syntax $hash_name{$key} – the key is surrounded by curly braces and the hash name is prefixed by a $, indicating that the hash element itself is a scalar value, even though it is part of a hash. The value of $phone_book{'John Doe'} is '555-1212'. The % sigil is only used when referring to the hash as a whole, such as when asking for keys %phone_book. The list of keys and values can be extracted using the built-in functions keys and values, respectively. So, for example, to print all the keys of a hash: foreach $name (keys %phone_book) { print $name, "\n"; } One can iterate through (key, value) pairs using the each function: while (($name, $number) = each %phone_book) { print 'Number for ', $name, ': ', $number, "\n"; } A hash "reference", which is a scalar value that points to a hash, is specified in literal form using curly braces as delimiters, with syntax otherwise similar to specifying a hash literal: my $phone_book = { 'Sally Smart' => '555-9999', 'John Doe' => '555-1212', 'J. Random Hacker' => '553-1337', }; Values in a hash reference are accessed using the dereferencing operator: print $phone_book->{'Sally Smart'}; When the hash contained in the hash reference needs to be referred to as a whole, as with the keys function, the syntax is as follows: foreach $name (keys %{$phone_book}) { print 'Number for ', $name, ': ', $phone_book->{$name}, "\n"; } Perl 6 (Raku) Perl 6, renamed as "Raku", also has built-in, language-level support for associative arrays, which are referred to as hashes or as objects performing the "associative" role. As in Perl 5, Perl 6 default hashes are flat: keys are strings and values are scalars. 
One can define a hash to not coerce all keys to strings automatically: these are referred to as "object hashes", because the keys of such hashes remain the original object rather than a stringification thereof. A hash variable is typically marked by a % sigil, to visually distinguish it from scalar, array, and other data types, and to define its behaviour towards iteration. A hash literal is a key-value list, with the preferred form using Perl's => token, which makes the key-value association clearer: my %phone-book = 'Sally Smart' => '555-9999', 'John Doe' => '555-1212', 'J. Random Hacker' => '553-1337', ; Accessing a hash element uses the syntax %hash_name{$key} – the key is surrounded by curly braces and the hash name (note that the sigil does not change, contrary to Perl 5). The value of %phone-book{'John Doe'} is '555-1212'. The list of keys and values can be extracted using the built-in functions keys and values, respectively. So, for example, to print all the keys of a hash: for %phone-book.keys -> $name { say $name; } By default, when iterating through a hash, one gets key–value pairs. for %phone-book -> $entry { say "Number for $entry.key(): $entry.value()"; # using extended interpolation features } It is also possible to get alternating key values and value values by using the kv method: for %phone-book.kv -> $name, $number { say "Number for $name: $number"; } Raku doesn't have any references. Hashes can be passed as single parameters that are not flattened. If you want to make sure that a subroutine only accepts hashes, use the % sigil in the Signature. sub list-phone-book(%pb) { for %pb.kv -> $name, $number { say "Number for $name: $number"; } } list-phone-book(%phone-book); In compliance with gradual typing, hashes may be subjected to type constraints, confining a set of valid keys to a certain type. # Define a hash whose keys may only be integer numbers ("Int" type). my %numbersWithNames{Int}; # Keys must be integer numbers, as in this case. %numbersWithNames.push(1 => "one"); # This will cause an error, as strings as keys are invalid. %numbersWithNames.push("key" => "two"); PHP PHP's built-in array type is, in reality, an associative array. Even when using numerical indexes, PHP internally stores arrays as associative arrays. So, PHP can have non-consecutively numerically-indexed arrays. The keys have to be of integer (floating point numbers are truncated to integer) or string type, while values can be of arbitrary types, including other arrays and objects. The arrays are heterogeneous: a single array can have keys of different types. PHP's associative arrays can be used to represent trees, lists, stacks, queues, and other common data structures not built into PHP. An associative array can be declared using the following syntax: $phonebook = array(); $phonebook['Sally Smart'] = '555-9999'; $phonebook['John Doe'] = '555-1212'; $phonebook['J. Random Hacker'] = '555-1337'; // or $phonebook = array( 'Sally Smart' => '555-9999', 'John Doe' => '555-1212', 'J. Random Hacker' => '555-1337', ); // or, as of PHP 5.4 $phonebook = [ 'Sally Smart' => '555-9999', 'John Doe' => '555-1212', 'J. Random Hacker' => '555-1337', ]; // or $phonebook['contacts']['Sally Smart']['number'] = '555-9999'; $phonebook['contacts']['John Doe']['number'] = '555-1212'; $phonebook['contacts']['J. 
Random Hacker']['number'] = '555-1337'; PHP can loop through an associative array as follows: foreach ($phonebook as $name => $number) { echo 'Number for ', $name, ': ', $number, "\n"; } // For the last array example it is used like this foreach ($phonebook['contacts'] as $name => $num) { echo 'Name: ', $name, ', number: ', $num['number'], "\n"; } PHP has an extensive set of functions to operate on arrays. Associative arrays that can use objects as keys, instead of strings and integers, can be implemented with the SplObjectStorage class from the Standard PHP Library (SPL). Pike Pike has built-in support for associative arrays, which are referred to as mappings. Mappings are created as follows: mapping(string:string) phonebook = ([ "Sally Smart":"555-9999", "John Doe":"555-1212", "J. Random Hacker":"555-1337" ]); Accessing and testing for presence in mappings is done using the indexing operator. So phonebook["Sally Smart"] would return the string "555-9999", and phonebook["John Smith"] would return 0. Iterating through a mapping can be done using foreach: foreach(phonebook; string key; string value) { write("%s:%s\n", key, value); } Or using an iterator object: Mapping.Iterator i = get_iterator(phonebook); while (i->index()) { write("%s:%s\n", i->index(), i->value()); i->next(); } Elements of a mapping can be removed using m_delete, which returns the value of the removed index: string sallys_number = m_delete(phonebook, "Sally Smart"); PostScript In PostScript, associative arrays are called dictionaries. In Level 1 PostScript they must be created explicitly, but Level 2 introduced direct declaration using a double-angled-bracket syntax: % Level 1 declaration 3 dict dup begin /red (rouge) def /green (vert) def /blue (bleu) def end % Level 2 declaration << /red (rot) /green (gruen) /blue (blau) >> % Both methods leave the dictionary on the operand stack Dictionaries can be accessed directly, using get, or implicitly, by placing the dictionary on the dictionary stack using begin: % With the previous two dictionaries still on the operand stack /red get print % outputs 'rot' begin green print % outputs 'vert' end Dictionary contents can be iterated through using forall, though not in any particular order: % Level 2 example << /This 1 /That 2 /Other 3 >> {exch =print ( is ) print ==} forall Which may output: That is 2 This is 1 Other is 3 Dictionaries can be augmented (up to their defined size only in Level 1) or altered using put, and entries can be removed using undef: % define a dictionary for easy reuse: /MyDict << /rouge (red) /vert (gruen) >> def % add to it MyDict /bleu (blue) put % change it MyDict /vert (green) put % remove something MyDict /rouge undef Prolog Some versions of Prolog include dictionary ("dict") utilities. Python In Python, associative arrays are called "dictionaries". Dictionary literals are delimited by curly braces: phonebook = { "Sally Smart": "555-9999", "John Doe": "555-1212", "J. Random Hacker": "553-1337", } To access an entry in Python simply use the array indexing operator: >>> phonebook["Sally Smart"] '555-9999' Loop iterating through all the keys of the dictionary: >>> for key in phonebook: ... print(key, phonebook[key]) Sally Smart 555-9999 J. Random Hacker 553-1337 John Doe 555-1212 Iterating through (key, value) tuples: >>> for key, value in phonebook.items(): ... print(key, value) Sally Smart 555-9999 J. Random Hacker 553-1337 John Doe 555-1212 Dictionary keys can be individually deleted using the del statement. 
The corresponding value can be returned before the key-value pair is deleted using the "pop" method of "dict" type: >>> del phonebook["John Doe"] >>> val = phonebook.pop("Sally Smart") >>> phonebook.keys() # Only one key left ['J. Random Hacker'] Python 2.7 and 3.x also support dictionary list comprehension, a compact syntax for generating a dictionary from any iterator: >>> square_dict = {i: i*i for i in range(5)} >>> square_dict {0: 0, 1: 1, 2: 4, 3: 9, 4: 16} >>> {key: value for key, value in phonebook.items() if "J" in key} {'J. Random Hacker': '553-1337', 'John Doe': '555-1212'} Strictly speaking, a dictionary is a super-set of an associative array, since neither the keys or values are limited to a single datatype. One could think of a dictionary as an "associative list" using the nomenclature of Python. For example, the following is also legitimate: phonebook = { "Sally Smart": "555-9999", "John Doe": None, "J. Random Hacker": -3.32, 14: "555-3322", } The dictionary keys must be of an immutable data type. In Python, strings are immutable due to their method of implementation. Red In Red the built-in map! datatype provides an associative array that maps values of word, string, and scalar key types to values of any type. A hash table is used internally for lookup. A map can be written as a literal, such as #(key1 value1 key2 value2 ...), or can be created using make map! [key1 value1 key2 value2 ...]: Red [Title:"My map"] my-map: make map! [ "Sally Smart" "555-9999" "John Doe" "555-1212" "J. Random Hacker" "553-1337" ] ; Red preserves case for both keys and values, however lookups are case insensitive by default; it is possible to force case sensitivity using the <code>/case</code> refinement for <code>select</code> and <code>put</code>. ; It is of course possible to use <code>word!</code> values as keys, in which case it is generally preferred to use <code>set-word!</code> values when creating the map, but any word type can be used for lookup or creation. my-other-map: make map! [foo: 42 bar: false] ; Notice that the block is not reduced or evaluated in any way, therefore in the above example the key <code>bar</code> is associated with the <code>word!</code> <code>false</code> rather than the <code>logic!</code> value false; literal syntax can be used if the latter is desired: my-other-map: make map! [foo: 42 bar: #[false]] ; or keys can be added after creation: my-other-map: make map! [foo: 42] my-other-map/bar: false ; Lookup can be written using <code>path!</code> notation or using the <code>select</code> action: select my-map "Sally Smart" my-other-map/foo ; You can also loop through all keys and values with <code>foreach</code>: foreach [key value] my-map [ print [key "is associated to" value] ] ; A key can be removed using <code>remove/key</code>: remove/key my-map "Sally Smart" REXX In REXX, associative arrays are called "stem variables" or "Compound variables". KEY = 'Sally Smart' PHONEBOOK.KEY = '555-9999' KEY = 'John Doe' PHONEBOOK.KEY = '555-1212' KEY = 'J. Random Hacker' PHONEBOOK.KEY = '553-1337' Stem variables with numeric keys typically start at 1 and go up from there. The 0-key stem variable by convention contains the total number of items in the stem: NAME.1 = 'Sally Smart' NAME.2 = 'John Doe' NAME.3 = 'J. Random Hacker' NAME.0 = 3 REXX has no easy way of automatically accessing the keys of a stem variable; and typically the keys are stored in a separate associative array, with numeric keys. 
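The NAME. stem above can serve as exactly such a key list. The following is a minimal sketch, reusing the PHONEBOOK. and NAME. stems from the examples above (the loop variable I is arbitrary): it walks the numeric keys 1 through NAME.0 and uses each stored name as the tail for a lookup in PHONEBOOK.

/* Iterate over PHONEBOOK. using the key list kept in NAME. */
DO I = 1 TO NAME.0
  KEY = NAME.I
  SAY KEY 'Phone Number :' PHONEBOOK.KEY
END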
Ruby In Ruby a hash table is used as follows: irb(main):001:0> phonebook = { irb(main):002:1* 'Sally Smart' => '555-9999', irb(main):003:1* 'John Doe' => '555-1212', irb(main):004:1* 'J. Random Hacker' => '553-1337' irb(main):005:1> } => {"Sally Smart"=>"555-9999", "John Doe"=>"555-1212", "J. Random Hacker"=>"553-1337"} irb(main):006:0> phonebook['John Doe'] => "555-1212" Ruby supports hash looping and iteration with the following syntax: irb(main):007:0> ### iterate over keys and values irb(main):008:0* phonebook.each {|key, value| puts key + " => " + value} Sally Smart => 555-9999 John Doe => 555-1212 J. Random Hacker => 553-1337 => {"Sally Smart"=>"555-9999", "John Doe"=>"555-1212", "J. Random Hacker"=>"553-1337"} irb(main):009:0> ### iterate keys only irb(main):010:0* phonebook.each_key {|key| puts key} Sally Smart John Doe J. Random Hacker => {"Sally Smart"=>"555-9999", "John Doe"=>"555-1212", "J. Random Hacker"=>"553-1337"} irb(main):011:0> ### iterate values only irb(main):012:0* phonebook.each_value {|value| puts value} 555-9999 555-1212 553-1337 => {"Sally Smart"=>"555-9999", "John Doe"=>"555-1212", "J. Random Hacker"=>"553-1337"} Ruby also supports many other useful operations on hashes, such as merging hashes, selecting or rejecting elements that meet some criteria, inverting (swapping the keys and values), and flattening a hash into an array. Rust The Rust standard library provides a hash map (std::collections::HashMap) and a B-tree map (std::collections::BTreeMap). They share several methods with the same names, but have different requirements for the types of keys that can be inserted. The HashMap requires keys to implement the Eq (equivalence relation) and Hash (hashability) traits and it stores entries in an unspecified order, and the BTreeMap requires the Ord (total order) trait for its keys and it stores entries in an order defined by the key type. The order is reflected by the default iterators. use std::collections::HashMap; let mut phone_book = HashMap::new(); phone_book.insert("Sally Smart", "555-9999"); phone_book.insert("John Doe", "555-1212"); phone_book.insert("J. Random Hacker", "555-1337"); The default iterators visit all entries as tuples. The HashMap iterators visit entries in an unspecified order and the BTreeMap iterator visits entries in the order defined by the key type. for (name, number) in &phone_book { println!("{} {}", name, number); } There is also an iterator for keys: for name in phone_book.keys() { println!("{}", name); } S-Lang S-Lang has an associative array type: phonebook = Assoc_Type[]; phonebook["Sally Smart"] = "555-9999" phonebook["John Doe"] = "555-1212" phonebook["J. Random Hacker"] = "555-1337" You can also loop through an associated array in a number of ways: foreach name (phonebook) { vmessage ("%s %s", name, phonebook[name]); } To print a sorted-list, it is better to take advantage of S-lang's strong support for standard arrays: keys = assoc_get_keys(phonebook); i = array_sort(keys); vals = assoc_get_values(phonebook); array_map (Void_Type, &vmessage, "%s %s", keys[i], vals[i]); Scala Scala provides an immutable Map class as part of the scala.collection framework: val phonebook = Map("Sally Smart" -> "555-9999", "John Doe" -> "555-1212", "J. Random Hacker" -> "553-1337") Scala's type inference will decide that this is a Map[String, String]. To access the array: phonebook.get("Sally Smart") This returns an Option type, Scala's equivalent of the Maybe monad in Haskell. 
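Because a lookup yields an Option, the caller chooses how a missing key is handled. A small sketch follows; the fallback string "n/a" and the absent key "John Smith" are only illustrative:

// Supply a default value when the key may be absent
val sallyNumber = phonebook.getOrElse("Sally Smart", "n/a")

// Or pattern-match on the Option explicitly
phonebook.get("John Smith") match {
  case Some(number) => println(s"Found: $number")
  case None         => println("No such entry")
}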
Smalltalk In Smalltalk a Dictionary is used: phonebook := Dictionary new. phonebook at: 'Sally Smart' put: '555-9999'. phonebook at: 'John Doe' put: '555-1212'. phonebook at: 'J. Random Hacker' put: '553-1337'. To access an entry the message #at: is sent to the dictionary object: phonebook at: 'Sally Smart' Which gives: '555-9999' A dictionary hashes, or compares, based on equality and marks both key and value as strong references. Variants exist in which hash/compare on identity (IdentityDictionary) or keep weak references (WeakKeyDictionary / WeakValueDictionary). Because every object implements #hash, any object can be used as key (and of course also as value). SNOBOL SNOBOL is one of the first (if not the first) programming languages to use associative arrays. Associative arrays in SNOBOL are called Tables. PHONEBOOK = TABLE() PHONEBOOK['Sally Smart'] = '555-9999' PHONEBOOK['John Doe'] = '555-1212' PHONEBOOK['J. Random Hacker'] = '553-1337' Standard ML The SML'97 standard of the Standard ML programming language does not provide any associative containers. However, various implementations of Standard ML do provide associative containers. The library of the popular Standard ML of New Jersey (SML/NJ) implementation provides a signature (somewhat like an "interface"), ORD_MAP, which defines a common interface for ordered functional (immutable) associative arrays. There are several general functors—BinaryMapFn, ListMapFn, RedBlackMapFn, and SplayMapFn—that allow you to create the corresponding type of ordered map (the types are a self-balancing binary search tree, sorted association list, red–black tree, and splay tree, respectively) using a user-provided structure to describe the key type and comparator. The functor returns a structure in accordance with the ORD_MAP interface. In addition, there are two pre-defined modules for associative arrays that employ integer keys: IntBinaryMap and IntListMap. - structure StringMap = BinaryMapFn (struct type ord_key = string val compare = String.compare end); structure StringMap : ORD_MAP - val m = StringMap.insert (StringMap.empty, "Sally Smart", "555-9999") val m = StringMap.insert (m, "John Doe", "555-1212") val m = StringMap.insert (m, "J. Random Hacker", "553-1337"); val m = T {cnt=3,key="John Doe", left=T {cnt=1,key="J. Random Hacker",left=E,right=E,value="553-1337"}, right=T {cnt=1,key="Sally Smart",left=E,right=E,value="555-9999"}, value="555-1212"} : string StringMap.map - StringMap.find (m, "John Doe"); val it = SOME "555-1212" : string option SML/NJ also provides a polymorphic hash table: - exception NotFound; exception NotFound - val m : (string, string) HashTable.hash_table = HashTable.mkTable (HashString.hashString, op=) (3, NotFound); val m = HT {eq_pred=fn,hash_fn=fn,n_items=ref 0,not_found=NotFound(-), table=ref [|NIL,NIL,NIL,NIL,NIL,NIL,NIL,NIL,NIL,NIL,NIL,NIL,...|]} : (string,string) HashTable.hash_table - HashTable.insert m ("Sally Smart", "555-9999"); val it = () : unit - HashTable.insert m ("John Doe", "555-1212"); val it = () : unit - HashTable.insert m ("J. Random Hacker", "553-1337"); val it = () : unit HashTable.find m "John Doe"; (* returns NONE if not found *) val it = SOME "555-1212" : string option - HashTable.lookup m "John Doe"; (* raises the exception if not found *) val it = "555-1212" : string Monomorphic hash tables are also supported, using the HashTableFn functor. Another Standard ML implementation, Moscow ML, also provides some associative containers. 
First, it provides polymorphic hash tables in the Polyhash structure. Also, some functional maps from the SML/NJ library above are available as Binarymap, Splaymap, and Intmap structures. Tcl There are two Tcl facilities that support associative-array semantics. An "array" is a collection of variables. A "dict" is a full implementation of associative arrays. array set {phonebook(Sally Smart)} 555-9999 set john {John Doe} set phonebook($john) 555-1212 set {phonebook(J. Random Hacker)} 553-1337 If there is a space character in the variable name, the name must be grouped using either curly brackets (no substitution performed) or double quotes (substitution is performed). Alternatively, several array elements can be set by a single command, by presenting their mappings as a list (words containing whitespace are braced): array set phonebook [list {Sally Smart} 555-9999 {John Doe} 555-1212 {J. Random Hacker} 553-1337] To access one array entry and put it to standard output: puts $phonebook(Sally\ Smart) Which returns this result: 555-9999 To retrieve the entire array as a dictionary: array get phonebook The result can be (order of keys is unspecified, not because the dictionary is unordered, but because the array is): {Sally Smart} 555-9999 {J. Random Hacker} 553-1337 {John Doe} 555-1212 dict set phonebook [dict create {Sally Smart} 555-9999 {John Doe} 555-1212 {J. Random Hacker} 553-1337] To look up an item: dict get $phonebook {John Doe} To iterate through a dict: foreach {name number} $phonebook { puts "name: $name\nnumber: $number" } Visual Basic Visual Basic can use the Dictionary class from the Microsoft Scripting Runtime (which is shipped with Visual Basic 6). There is no standard implementation common to all versions: ' Requires a reference to SCRRUN.DLL in Project Properties Dim phoneBook As New Dictionary phoneBook.Add "Sally Smart", "555-9999" phoneBook.Item("John Doe") = "555-1212" phoneBook("J. Random Hacker") = "553-1337" For Each name In phoneBook MsgBox name & " = " & phoneBook(name) Next Visual Basic .NET Visual Basic .NET uses the collection classes provided by the .NET Framework. Creation The following code demonstrates the creation and population of a dictionary (see the C# example on this page for additional information): Dim dic As New System.Collections.Generic.Dictionary(Of String, String) dic.Add("Sally Smart", "555-9999") dic("John Doe") = "555-1212" dic.Item("J. Random Hacker") = "553-1337" An alternate syntax would be to use a collection initializer, which compiles down to individual calls to Add: Dim dic As New System.Collections.Dictionary(Of String, String) From { {"Sally Smart", "555-9999"}, {"John Doe", "555-1212"}, {"J. Random Hacker", "553-1337"} } Access by key Example demonstrating access (see C# access): Dim sallyNumber = dic("Sally Smart") ' or Dim sallyNumber = dic.Item("Sally Smart") Dim result As String = Nothing Dim sallyNumber = If(dic.TryGetValue("Sally Smart", result), result, "n/a") Enumeration Example demonstrating enumeration (see #C# enumeration): ' loop through the collection and display each entry. For Each kvp As KeyValuePair(Of String, String) In dic Console.WriteLine("Phone number for {0} is {1}", kvp.Key, kvp.Value) Next Windows PowerShell Unlike many other command line interpreters, Windows PowerShell has built-in, language-level support for defining associative arrays: $phonebook = @{ 'Sally Smart' = '555-9999'; 'John Doe' = '555-1212'; 'J. 
Random Hacker' = '553-1337' } As in JavaScript, if the property name is a valid identifier, the quotes can be omitted: $myOtherObject = @{ foo = 42; bar = $false } Entries can be separated by either a semicolon or a newline: $myOtherObject = @{ foo = 42 bar = $false ; zaz = 3 } Keys and values can be any .NET object type: $now = [DateTime]::Now $tomorrow = $now.AddDays(1) $ProcessDeletionSchedule = @{ (Get-Process notepad) = $now (Get-Process calc) = $tomorrow } It is also possible to create an empty associative array and add single entries, or even other associative arrays, to it later on: $phonebook = @{} $phonebook += @{ 'Sally Smart' = '555-9999' } $phonebook += @{ 'John Doe' = '555-1212'; 'J. Random Hacker' = '553-1337' } New entries can also be added by using the array index operator, the property operator, or the Add() method of the underlying .NET object: $phonebook = @{} $phonebook['Sally Smart'] = '555-9999' $phonebook.'John Doe' = '555-1212' $phonebook.Add('J. Random Hacker', '553-1337') To dereference assigned objects, the array index operator, the property operator, or the parameterized property Item() of the .NET object can be used: $phonebook['Sally Smart'] $phonebook.'John Doe' $phonebook.Item('J. Random Hacker') You can loop through an associative array as follows: $phonebook.Keys | foreach { "Number for {0}: {1}" -f $_,$phonebook.$_ } An entry can be removed using the Remove() method of the underlying .NET object: $phonebook.Remove('Sally Smart') Hash tables can be added: $hash1 = @{ a=1; b=2 } $hash2 = @{ c=3; d=4 } $hash3 = $hash1 + $hash2 Data serialization formats support Many data serialization formats also support associative arrays (see this table) JSON In JSON, associative arrays are also referred to as objects. Keys can only be strings. { "Sally Smart": "555-9999", "John Doe": "555-1212", "J. Random Hacker": "555-1337" } YAML YAML associative arrays are also called map elements or key-value pairs. YAML places no restrictions on the types of keys; in particular, they are not restricted to being scalar or string values. Sally Smart: 555-9999 John Doe: 555-1212 J. Random Hacker: 555-1337 References Programming language comparison Mapping Articles with example Julia code
Windows Script Host
The Microsoft Windows Script Host (WSH) (formerly named Windows Scripting Host) is an automation technology for Microsoft Windows operating systems that provides scripting abilities comparable to batch files, but with a wider range of supported features. The tool was first provided on the Windows 95 installation discs after Build 950a, as an optional component configurable and installable by means of the Control Panel; it then became a standard component of Windows 98 (Build 1111) and subsequent releases, and of Windows NT 4.0 Build 1381 by means of Service Pack 4. The WSH is also a means of automation for Internet Explorer via the installed WSH engines from IE version 3.0 onwards; at this time VBScript became the means of automation for Microsoft Outlook 97. The WSH is also available as an optional install, with a VBScript and JScript engine, for Windows CE 3.0 and later, and some third-party engines, including Rexx and other forms of Basic, are available as well. It is language-independent in that it can make use of different Active Scripting language engines. By default, it interprets and runs plain-text JScript (.JS and files) and VBScript (.VBS and files). Users can install different scripting engines to enable them to script in other languages, for instance PerlScript. The language-independent filename extension WSF can also be used. The advantage of the Windows Script File (.WSF) is that it allows multiple scripts ("jobs"), as well as a combination of scripting languages, within a single file. WSH engines include various implementations for the Rexx, BASIC, Perl, Ruby, Tcl, PHP, JavaScript, Delphi, Python, XSLT, and other languages. Windows Script Host is distributed and installed by default on Windows 98 and later versions of Windows. It is also installed if Internet Explorer 5 (or a later version) is installed. Beginning with Windows 2000, the Windows Script Host became available for use with user login scripts. Usage Windows Script Host may be used for a variety of purposes, including logon scripts, administration and general automation. Microsoft describes it as an administration tool. WSH provides an environment for scripts to run – it invokes the appropriate script engine and provides a set of services and objects for the script to work with. These scripts may be run in GUI mode (WScript.exe) or command-line mode (CScript.exe), or from a COM object (wshom.ocx), offering flexibility to the user for interactive or non-interactive scripts. Windows Management Instrumentation is also scriptable by this means. The WSH, the engines, and related functionality are also listed as objects which can be accessed, scripted and queried by means of the VBA and Visual Studio object explorers and those of similar tools, such as the various script debuggers (e.g. Microsoft Script Debugger) and editors. WSH implements an object model which exposes a set of Component Object Model (COM) interfaces. So, in addition to ASP, IIS, Internet Explorer, CScript and WScript, the WSH can be used to automate and communicate with any Windows application with COM and other exposed objects: for example, using PerlScript to query Microsoft Access by various means, including ODBC engines and SQL; ooRexxScript to create what are in effect Rexx macros in Microsoft Excel, Quattro Pro, Microsoft Word, Lotus Notes and the like; or the XLNT script language to get environment variables and print them in a new TextPad document; and so on.
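As a concrete illustration of this COM-based automation, the following VBScript sketch drives Microsoft Excel from a WSH script. It assumes Excel is installed; the variable names and the workbook path are placeholders chosen for the example:

' Automate Excel through its COM object model from a WSH script
Set objExcel = CreateObject("Excel.Application")
objExcel.Visible = False
objExcel.DisplayAlerts = False
Set objWorkbook = objExcel.Workbooks.Add()
objWorkbook.Worksheets(1).Cells(1, 1).Value = "Hello from the Windows Script Host"
objWorkbook.SaveAs "C:\Temp\wsh-demo.xlsx"   ' placeholder path
objWorkbook.Close
objExcel.Quit
WScript.Echo "Workbook written."

Run with either WScript.exe or CScript.exe; any other application that exposes COM objects can be scripted in the same CreateObject style.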
The VBA functionality of Microsoft Office, Open Office(as well as Python and other installable macro languages) and Corel WordPerfect Office is separate from WSH engines although Outlook 97 uses VBScript rather than VBA as its macro language. Python in the form of ActiveState PythonScript can be used to automate and query the data in SecureCRT, as with other languages with installed engines, e.g. PerlScript, ooRexxScript, PHPScript, RubyScript, LuaScript, XLNT and so on. One notable exception is Paint Shop Pro, which can be automated in Python by means of a macro interpreter within the PSP programme itself rather than using the PythonScript WSH engine or an external Python implementation such as Python interpreters supplied with Unix emulation and integration software suites or other standalone Python implementations et al. as an intermediate and indeed can be programmed like this even in the absence of any third-party Python installation; the same goes for the Rexx-programmable terminal emulator Passport. The SecureCRT terminal emulator, SecureFX FTP client, and related client and server programmes from Van Dyke are as of the current versions automated by means of the WSH so any language with an installed engine may be used; the software comes with VBScript, JScript, and PerlScript examples. As of the most recent releases and going back a number of versions now, the programmability of 4NT / Take Command in the latest implementations (by means of "@REXX" and similar for Perl, Python, Tcl, Ruby, Lua, VBScript, JScript and the like and so on) generally uses the WSH engine. The ZOC terminal emulator gets its ability to be programmed in Rexx by means of an external interpreter, one of which is supplied with the programme, and alternate Rexx interpreters can be specified in the configuration of the programme. The MKS Toolkit provides PScript, a WSH engine in addition to the standard Perl interpreter perl.exe which comes with the package. VBScript, JScript, and some third-party engines have the ability to create and execute scripts in an encoded format which prevents editing with a text editor; the file extensions for these encoded scripts is and and others of that type. Unless otherwise specified, any WSH scripting engine can be used with the various Windows server software packages to provide CGI scripting. The current versions of the default WSH engines and all or most of the third party engines have socket abilities as well; as a CGI script or otherwise, PerlScript is the choice of many programmers for this purpose and the VBScript and various Rexx-based engines are also rated as sufficiently powerful in connectivity and text-processing abilities to also be useful. This also goes for file access and processing—the earliest WSH engines for VBScript and JScript do not since the base language did not, whilst PerlScript, ooRexxScript, and the others have this from the beginning. WinWrap Basic, SaxBasic and others are similar to Visual Basic for Applications, These tools are used to add scripting and macro abilities to software being developed and can be found in earlier versions of Host Explorer for example. Many other languages can also be used in this fashion. Other languages used for scripting of programmes include Rexx, Tcl, Perl, Python, Ruby, and others which come with methods to control objects in the operating system and the spreadsheet and database programmes. 
One exception is that the Zoc terminal emulator is controlled by a Rexx interpreter supplied with the package or another interpreter specified by the user; this is also the case with the Passport emulator. VBScript is the macro language in Microsoft Outlook 97, whilst WordBasic is used for Word up to 6, Powerpoint and other tools. Excel to 5.0 uses Visual Basic 5.0. In Office 2000 forward, true Visual Basic for Applications 6.0 is used for all components. Other components use Visual Basic for Applications. OpenOffice uses Visual Basic, Python, and several others as macro languages and others can be added. LotusScript is very closely related to VBA and used for Lotus Notes and Lotus SmartSuite, which includes Lotus Word Pro (the current descendant of Ami Pro), Lotus Approach, Lotus FastSite, Lotus 1-2-3, &c, and pure VBA, licensed from Microsoft, is used in Corel products such as WordPerfect, Paradox, Quattro Pro &c. Any scripting language installed under Windows can be accessed by external means of PerlScript, PythonScript, VBScript and the other engines available can be used to access databases (Lotus Notes, Microsoft Access, Oracle Database, Paradox) and spreadsheets (Microsoft Excel, Lotus 1-2-3, Quattro Pro) and other tools like word processors, terminal emulators, command shells and so on. This can be accomplished by means of the WSH, so any language can be used if there is an installed engine. In recent versions of the Take Command enhanced command prompt and tools, the "script" command typed at the shell prompt will produce a list of the currently installed engines, one to a line and therefore CR-LF delimited. Examples The first example is very simple; it shows some VBScript which uses the root WSH COM object "WScript" to display a message with an 'OK' button. Upon launching this script the CScript or WScript engine would be called and the runtime environment provided. Content of a file hello0.vbs WScript.Echo "Hello world" WScript.Quit WSH programming can also use the JScript language. Content of a file hello1.js WSH.Echo("Hello world"); WSH.Quit(); Or, code can be mixed in one WSF file, such as VBScript and JScript, or any other: Content of a file hello2.wsf <job> <script language="VBScript"> MsgBox "hello world (from vb)" </script> <script language="JScript"> WSH.echo("hello world (from js)"); </script> </job> Security concerns Windows applications and processes may be automated using a script in Windows Script Host. Viruses and malware could be written to exploit this ability. Thus, some suggest disabling it for security reasons. Alternatively, antivirus programs may offer features to control .vbs and other scripts which run in the WSH environment. Since version 5.6 of WSH, scripts can be digitally signed programmatically using the Scripting.Signer object in a script itself, provided a valid certificate is present on the system. Alternatively, the signcode tool from the Platform SDK, which has been extended to support WSH filetypes, may be used at the command line. By using Software Restriction Policies introduced with Windows XP, a system may be configured to execute only those scripts which are stored in trusted locations, have a known MD5 hash, or have been digitally signed by a trusted publisher, thus preventing the execution of untrusted scripts. 
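A minimal sketch of such programmatic signing with the Scripting.Signer object follows; it assumes a code-signing certificate named "MyCompany Code Signing" is already installed in the certificate store, and both the certificate name and the script path are placeholders:

' Sign and verify a script file with Scripting.Signer (WSH 5.6 and later)
Set objSigner = CreateObject("Scripting.Signer")
objSigner.SignFile "C:\Scripts\hello0.vbs", "MyCompany Code Signing"
If objSigner.VerifyFile("C:\Scripts\hello0.vbs", False) Then
    WScript.Echo "Signature verified."
End If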
Available scripting engines Note: By definition, all of these scripting engines can be utilised in CGI programming under Windows with any number of programmes and set up, meaning that the source code files for a script used on a server for CGI purposes could bear other file extensions such as .cgi and so on. The aforementioned ability of the Windows Script Host to run a script with multiple languages in it in files with a extension. Extended Html and XML also add to the additional possibilities when working with scripts for network use, as do Active Server Pages and so forth. Moreover, Windows shell scripts and scripts written in shells with enhanced capabilities like TCC, 4NT, etc. and Unix shells under interoperability software like the MKS Toolkit can have scripts embedded in them as well. There have been suggestions of creating engines for other languages, such as LotusScript, SaxBasic, BasicScript, KiXtart, awk, bash, csh and other Unix shells, 4NT, cmd.exe (the Windows NT shell), Windows PowerShell, DCL, C, C++, Fortran and others. The XLNT language is based on DCL and provides a very large subset of the language along with additional commands and statements and the software can be used in three ways: the WSH engine (*.xcs), the console interpreter (*.xlnt) and as a server and client side CGI engine (*.xgi). When a server implementing CGI such as the Windows Internet Information Server, ports of Apache and others, all or most of the engines can be used; the most commonly used are VBScript, JScript, PythonScript, PerlScript, ActivePHPScript, and ooRexxScript. The MKS Toolkit PScript programme also runs Perl. Command shells like cmd.exe, 4NT, ksh, and scripting languages with string processing and preferably socket functionality are also able to be used for CGI scripting; compiled languages like C++, Visual Basic, and Java can also be used like this. All Perl interpreters, ooRexx, PHP, and more recent versions of VBScript and JScript can use sockets for TCP/IP and usually UDP and other protocols for this. Version history The redistributable version of WSH version 5.6 can be installed on Windows 95/98/Me and Windows NT 4.0/2000. WSH 5.7 is downloadable for Windows 2000, Windows XP and Windows Server 2003. Recently , redistributable versions for older operating systems (Windows 9x and Windows NT 4.0) are no longer available from the Microsoft Download Center. Since Windows XP Service Pack 3, release 5.7 is not needed as it is included, with newer revisions being included in newer versions of Windows since. See also JScript .NET References External links Is VBScript Dead?, isvbscriptdead.com WSH Primer on Microsoft TechNet – Get started with WSH WSH home at MSDN WSH Reference Windows Script 5.6 Documentation Release notes for Windows Script 5.7 Console WSH Shell – a third-party shell for WSH and VBScript Internet Explorer Windows administration Windows components
Information Technology Park, Nepal
The Information Technology Park (commonly known as Info Tech Park or IT Park), Nepal's only information technology park, is situated between Banepa and Panauti in Kavrepalanchowk District, Nepal, although it comes under Panauti Municipality. It lies about northeast of the capital city Kathmandu. History The Information Technology Park was completed in April 2003 with a total investment of NRs. 270 million (~ million USD). Operations at the Park were formally initiated by the American company IBM, but it left after nine months. Javra Software Company from the Netherlands also started work there, but it too left, citing technical reasons, and shifted to Kathmandu. As of July 2014, the Park had not come into operation, as a result of which the Government of Nepal was missing out on millions of rupees in revenue. The government has also formed the Department of Information Technology to pursue the socio-economic transformation needed to become a developed country. Aims and objectives The IT Park was established under the Ministry of Industry for the development and promotion of information technology and services in the country. It is expected to provide employment opportunities to about 15,000 technicians once the Park is fully operational. It also aims to support the delivery of government services by maximizing the use of information technology. Land and distribution The Park covers an area of 235 ropanis (), consisting of a commercial building, an administrative building, four residential buildings and a security building, built at a cost of NRs. 500 million (~ million USD). References External links Information technology in Nepal Buildings and structures in Kavrepalanchok District Science parks in Nepal 2003 establishments in Nepal
FeatherPad
FeatherPad is a free software text editor available under the GPL-3.0-or-later license. It is developed by Pedram Pourang (aka Tsu Jan) of Iran, written in Qt, and runs on FreeBSD, Linux, Haiku OS and macOS. It has few dependencies and is independent of any desktop environment. FeatherPad has been the default text editor in Lubuntu, since it switched to the LXQt desktop with Lubuntu 18.10. Prior to that Lubuntu used the Leafpad text editor as part of its GTK-based LXDE desktop. FeatherPad is also included in the Debian and Ubuntu package repositories. Development Pourang started the project to fill a perceived gap in available text editors. He identified that many feature-rich text editors are RAM-intensive and even then lack key features. Development of FeatherPad started in 2016, with the first public release version 0.5.8. The first version included syntax highlighting and was written in GTK. With the introduction of GTK 3 the application was rewritten, but Pourang found Qt more flexible and it was rewritten in C++ and ported to Qt starting with version 0.6 in April 2017. FeatherPad added spell checking using Hunspell, starting with version 0.11.0, released in August 2019. FeatherPad has been translated into 21 different languages in addition to English. Haiku OS support was written by Khallebal at GitHub and support for macOS was added by Pavel Shlyak. Future development goals for FeatherPad include syntax highlighting color customization, virtual desktop awareness and tab drag-and-drop under Wayland. Features FeatherPad includes text drag and drop support, search, search and replace, optional line numbering, automatic detection of text encoding, syntax highlighting for many common programming languages, ability to open URLs in a browser, optional side-pane or tabbed page navigation and spell-checking. The text editor is highly customizable and by default has a wide range of keyboard shortcuts defined. There is an unofficial Snap package available for FeatherPad. Reception A review in Full Circle in August 2019 noted, "FeatherPad has obviously been designed for software developers, but it is also a good text editor for any general user to write plain text documents or web pages on." The review noted its relatively low RAM use compared to more full-featured text editors like jEdit and gedit. It also praised its extensive, if non-standard keyboard shortcuts, noting, "the keyboard shortcuts are all nicely explained in the menus, however, and, once learned, FeatherPad becomes very fast to use." Scott Nesbitt, writing in March 2020, on Red Hat's opensource.com noted, "when you first fire it up, FeatherPad doesn't look much different from most text editors out there. It does launch quickly, though. FeatherPad's features include automatic syntax highlighting of markup and coding languages, automatically closing brackets (again, useful when working with markup and coding languages), and an extensive set of keyboard shortcuts. One feature that grew on me was the ability to position document tabs. In most text editors that open documents in separate tabs, those tabs appear along the top of the editor window. With FeatherPad, you can put tabs at the top, bottom, left, or right. I've found that putting the tabs on the left reduces visual clutter and distractions." See also List of text editors Comparison of text editors References External links Free text editors MacOS text editors 2016 software Free HTML editors Linux text editors Software using the GPL license
22370013
https://en.wikipedia.org/wiki/2d%20Command%20and%20Control%20Squadron
2d Command and Control Squadron
The United States Air Force's 2d Command and Control Squadron (2 CACS) was an Air Force Space Command command and control unit located at Falcon AFB (later Schriever AFB), Colorado. The 2 CACS commanded passive surveillance systems supporting USSPACECOM and theater warfighters’ requirements through continuous all-weather, day-night surveillance of on-orbit satellites. Mission The 2 CACS was responsible for planning, assessing, and developing execution orders for passive surveillance missions around the world, at locations such as Misawa AB, Japan, Osan AB, Republic of Korea, RAF Feltwell, United Kingdom, and RAF Edzell, United Kingdom. Information from the sites' Low Altitude Surveillance System (LASS) and Deep Space Tracking System (DSTS) was fed to 2 CACS, which then forwarded it to the space surveillance center at Cheyenne Mountain AFS, Colorado. The center used this data, along with data from other sensors, to maintain a catalog of man-made objects in space. Assignments Major Command Air Force Space Command (???- ???) Previous designations 2d Command and Control Squadron (???) Commanders Lt Col James E. Mackin (c. 1996) Lt Col T. Clark (c. 1995) Bases stationed Schriever AFB, Colorado (???-???) Equipment Commanded Low Altitude Surveillance System (???-???) Detachment 1, 3d SSS - Osan AB, Republic of Korea 5th SSS - RAF Feltwell, United Kingdom 17 SSS - RAF Edzell, United Kingdom Deep Space Tracking System (???-???) 3d SSS - Misawa AB, Japan 5th SSS - RAF Feltwell, United Kingdom Decorations Air Force Outstanding Unit Award 1 January 1998 – 31 December 1998 1 October 1997 – 30 September 1999 1 October 1995 – 30 September 1997 See also 3d Space Surveillance Squadron 5th Space Surveillance Squadron 17th Space Surveillance Squadron References External links Western States Legal Foundation: National Security Space Road Map, 1998 Globalsecurity.org: 21st Space Wing Military units and formations in Colorado Command and Control 0002
5235162
https://en.wikipedia.org/wiki/Pitivi
Pitivi
Pitivi (originally spelled PiTiVi) is a free and open-source non-linear video editor for Linux, developed by various contributors from the free software community and the GNOME project, with support also available from Collabora. Pitivi is designed to be the default video editing software for the GNOME desktop environment. It is licensed under the terms of the GNU Lesser General Public License. History Edward Hervey started working on PiTiVi in December 2003 as an end-of-studies project at the EPITECH engineering school in Paris. Initially written in C, the PiTiVi codebase was first checked into version control in May 2004 and was rewritten in Python a year later. After his graduation, Hervey was hired by Fluendo to work on GStreamer for the following two years, after which Hervey co-founded Collabora's Multimedia division in order to improve Pitivi, GStreamer and the GNonlin plugins from 2008 to 2010. In the past there have been several video editors available for Linux, but they were considered difficult to use. Ubuntu Community Manager Jono Bacon stated, "Back in 2006, the video editing situation was looking far more exciting. Michael Dominik was working on the hugely exciting Diva project and Edward Hervey was working on PiTiVi. Both combined exciting technologies, being built on the formidable foundations of GTK, GNOME, GStreamer, and Cairo. Diva was developed using Mono, and PiTiVi using Python. With the video buzz in the air, Michael and Edward both demoed their projects at the Villanova GUADEC to rapturous applause". Bacon also noted that Pitivi has taken a long time to mature: "For Edward to have created the first incarnation of PiTiVi he needed to ensure that GStreamer and GNonLin were mature and stable enough to use for his application". Inclusion in the default set of Ubuntu applications In April 2010, with the launch of Ubuntu 10.04 Lucid Lynx, PiTiVi version 0.13.4 became the first default movie editor offered as part of the Ubuntu ISO CD. In May 2011, it was announced that Pitivi would no longer be part of the Ubuntu ISO, starting with Ubuntu 11.10 Oneiric Ocelot's release in October 2011. The reasons given for removing it included "poor user reception, lack of fit with the default user-case for Ubuntu, lack of polish and the application's lack of development maturity". PiTiVi would not be replaced on the ISO with another video editor and would remain available to users for installation from the Ubuntu repositories. In response to this, Jeff Fortin, one of the project developers, raised concerns regarding the reasons given for removing Pitivi from the set of default applications and voiced disappointment in Canonical/Ubuntu not supporting the application as they would have been expected to. Rework Edward Hervey announced the availability of GStreamer Editing Services (GES) at the end of 2009. Further confirmations of intentions to migrate Pitivi to GES came at the MeeGo conference in 2011, but it was not until the 0.15 release in September 2011 that Thibault Saunier officially announced that the next Pitivi release would be based upon GES. The first version using GES was 0.91 "Charming Defects", released in October 2013. Due to the new engine, a lot of old code could be removed and the Pitivi codebase underwent massive reorganization, cleanup and refactoring.
Multiple architectural changes occurred during the time between the 0.15 and the 0.91 release, including three intertwined technological migrations: Porting the user interface to GTK 3 Porting from static Python bindings PyGTK to PyGObject, which uses GObject Introspection Porting from GStreamer 0.10 to GStreamer 1.0 During the final stages of these changes leading to the 0.91 release, the timeline was also ported from the Canvas (scene graph) "GooCanvas" to Clutter. Re-branding as Pitivi With the release of 0.91, PiTiVi was renamed Pitivi, without the "T" and "V" capitalized. Fundraising In February 2014 the project announced that it was seeking €100,000 for further development. The money was to be allocated, as follows: Phase 1 - €35,000 to improve stability for a version 1.0 release. Phase 2 - Improving features, €1,925 for adding a magnetic time-line, €4,400 for interfaces for multi-camera editing, €4,766 for porting to Mac OS X. The fundraising was conducted through the GNOME Foundation. The fundraiser did not meet its targeted amount, reaching slightly above €23,000 , allowing for partially funded development. Features Pitivi inherits its capabilities for importing and exporting (rendering) media from the GStreamer framework, or plugins for the GStreamer framework. Pitivi supports simple media editing capabilities such as trimming, snapping, splitting and cutting of clips. Audio mixing is supported by curves, visualised as line segments drawn over an audio waveform. Pitivi has the ability to step through a piece of media using scrubbers or keyboard shortcuts. Audio and video clips can be linked together, and treated as a single clip. Initial support for video mixing (compositing and transitions) has been added in late 2009 but is still under heavy work. A more exhaustive list of features can be found on the Pitivi website. Jean-François Fortin Tam gave a talk at Libre Graphics Meeting 2009, discussing how usability became a major focus for the Pitivi project, and how design considerations impacted PiTiVi's user-interface, with examples such as the use of subtle gradients in timeline objects, drag and drop importing and direct manipulation, native theme integration, and reducing complexity by carefully evaluating the need (or lack thereof) to impose preference choices onto users. Another talk, focused on the economics of open source video editors, was given by Jean-François at Libre Graphics Meeting 2011. The Pitivi project also has a user manual that covers the usage of the application. Pitivi has been translated and localized for several languages by the GNOME i18n teams. Through GStreamer, Pitivi is the first open source video editor to support the Material Exchange Format (MXF). As part of a Google Summer of Code project to "Permit Pitivi users to add effects to the videos they are editing", Thibault Saunier implemented video effects in the development version of Pitivi. This work was initially anticipated to be included starting with PiTiVi 0.13.5, but was announced as being deferred to the 0.13.6 release. These features were finally released as version 0.14.0-2 on 1 June 2011. Aside from improved and expanded effects this version included a new welcome screen, a redesigned project settings dialog box and a simplified rendering dialog. In reviewing this version for OMG! Ubuntu! writer Joey Sneddon said of the new rendering that it "totally wipes the floor with its competition: it is so incredibly simple to use." 
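The 0.91 rework described above moved the core editing logic into GStreamer Editing Services, which Pitivi drives from Python through GObject Introspection. As a rough, illustrative sketch (not code from Pitivi itself, with a placeholder file URI and durations, and with API details that can vary between GES versions), building a one-clip timeline with GES looks roughly like this:

```python
# Illustrative sketch: a minimal GES timeline built from Python.
# Not Pitivi code; "file:///tmp/clip.ogv" and the durations are placeholders.
import gi
gi.require_version("Gst", "1.0")
gi.require_version("GES", "1.0")
from gi.repository import Gst, GES

Gst.init(None)
GES.init()

timeline = GES.Timeline.new_audio_video()   # timeline with one audio and one video track
layer = timeline.append_layer()

# Load a media file as an asset and place 5 seconds of it at the start of the layer.
asset = GES.UriClipAsset.request_sync("file:///tmp/clip.ogv")
layer.add_asset(asset,
                0,                # start position on the timeline
                0,                # in-point inside the source clip
                5 * Gst.SECOND,   # duration
                GES.TrackType.UNKNOWN)

# A GES.Pipeline can then be used to preview or render the timeline.
pipeline = GES.Pipeline()
pipeline.set_timeline(timeline)
```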
Sponsorship Throughout the years, development has been funded through the Google Summer of Code program, donations and paid developer time. At the end of 2008, Collabora Multimedia decided to fund the development of Pitivi throughout 2009 by assigning Edward Hervey, Brandon Lewis and Alessandro Decina to improve Pitivi and GNonlin. After this two-year effort, as Collabora's direct involvement gradually came to an end, a new team of contributors from the community took over the maintainership of the project, including former GSoC student Thibault Saunier. In 2014, a public fundraiser was run through the GNOME Foundation to allow two maintainers, Mathieu Duponchelle and Thibault Saunier, to work for a year on bringing Pitivi to "1.0 quality". Reception In an interview with gnomedesktop.org in 2009 Edward Hervey discussed the state of Pitivi and Linux Video editing; at one stage Hervey noted that "there's a total lack of cohesion between all the various multimedia applications/libraries/device-support on Linux which is IMHO the reason why we're not yet the reference platform for multimedia creation". This point of view is further expanded in another article that showed that Hervey believed that "if the Linux desktop was going to have a nice and easy to use video editor any time soon, we needed to do something to increase the pace of development significantly". In a review of Pitivi 0.94 in January 2015 Red Hat Senior Systems Engineer Chris Long said: "It looked great and professional-esque, almost Avid/premiere like. So, I brought in a video clip... and CRASH! I opened it again, brought in a clip, no crash, so that's great. I added another video track... and CRASH! I tried at least 15 more times before giving up on it. And it's a shame, because it looks like it has potential to be simple to use and not overly garish." The release of the next version, 0.95 improved stability. See also Comparison of video editing software List of video editing software References External links User manual Film and video technology Free multilingual software Free software programmed in Python Free video software Linux audio video-related software Linux-only free software Multimedia software Software that uses GStreamer Software that uses Meson Software that uses PyGObject Video editing software for Linux Video editing software that uses GTK Collabora
1607215
https://en.wikipedia.org/wiki/HP%202640
HP 2640
The HP 2640A and other HP 264X models were block-mode "smart" and intelligent ASCII standard serial terminals produced by Hewlett-Packard using the Intel 8008 and 8080 microprocessors. History The HP 2640A was introduced in November 1974 at a list price of US$3000. Based on the Intel 8008 CPU, it had 8 KB of ROM firmware and came standard with 1 KB of RAM, expandable up to 8 KB (two 4 KB semiconductor RAM cards). In September 1975 Hewlett-Packard introduced the HP 2644A, which was an HP 2640A with mass storage (two mini-tape cartridges, 110 KB each), for US$5000. HP followed up in 1976 with the 2640B, an updated, cost-reduced version of the 2640A with a list price of US$2600, along with three international versions: the Cyrillic-oriented 2640C, the Swedish/Finnish-oriented 2640S, and the Danish/Norwegian-oriented 2640N. All of these early members of the 2640 series had the relatively slow 8008 CPU running at 700 kHz, and they were thus limited to speeds of 2400 baud. The 2640A and 2644A were discontinued in February 1977, but the 2640B remained in production until August 1981. In September 1976, HP introduced the 2645A, which could handle speeds up to 9600 baud and had a number of advanced features, including as an option the mini-tape cartridge storage of the 2644A. The introductory list price was US$3500, or US$5100 with the cartridge storage option. The 2645A was the first terminal in the 2640 series to use the Intel 8080A, rather than the 8008, as its CPU. Almost all subsequent 2640-family terminals would have 8080A CPUs, all running at 2.5 MHz. The 2645A was followed in November 1976 by the 2641A, a 2645A derivative designed for the APL programming language, and in April 1977 by the 2645R, a 2645 which supported right-to-left Arabic text as well as left-to-right text in Roman letters. In July 1977, Hewlett-Packard introduced the 2648A graphics terminal, a 2645A derivative which added 720×360 black-and-white raster graphics in a separate graphics page that could overlay the main text memory. This was joined in May 1978 by the 2647A programmable graphics terminal, which included its own BASIC interpreter. In October 1980, HP introduced the 2642A, which was like the 2645A, but instead of optional tape cartridges it had a standard 5.25-inch floppy disk drive storing 270 KB per diskette. The ultimate and final model in the 2640 series was the 2647F programmable graphics terminal introduced in June 1982, an improved replacement for the 2647A with the 2642A's floppy drive. Unlike the preceding terminals in the 264X family that had 8080A CPUs, the 2647F used the faster Intel 8085A running at 4.9 MHz. HP kept the 264X family in production until early 1985. Model number The HP catalogs usually refer to the terminal model as simply "2640A", and infrequently as "HP 2640", or "HP 2640A" (both with a blank after the "HP"), or "2640". The incorrect "HP2640" and "HP2640A" are often seen outside of HP. Functionality The functionality defined by the HP 264X series changed little over its long run as the preferred terminal family for the HP 1000 and HP 3000 series computers. They never achieved the fame of the VT100 among programmers, but included sophisticated features not found in the VT100, such as offline forms, multiple pages of display memory, and (in some models) local storage. The styling looked vaguely like a microwave or toaster oven. It was boxy, with a "widescreen" aspect ratio chosen so that a display line held the same number of characters as a punched card. This is still seen in the modern command window.
HP had determined that the combination of a standard 4:3 aspect ratio with the 25 line by 80 character display that was the standard of the time required the characters to have a very high profile. HP's response was to specify a CRT with an aspect ratio designed around the desired character shape instead of the other way around. Of course, this also mandated rather high manufacturing costs as standard parts could not be used. HP took pains to further improve the rendering of displayed characters via half-pixel positioning of individual lines within each character. Although the character cell was only 7 horizontal by 9 vertical dots, half-pixel positioning effectively doubled the horizontal resolution to 14 dots, giving the characters very smooth outlines. (The initial sales literature referred to it as using a 7×9 matrix generated in a 9×15 dot character cell). All of this resulted in an extremely easy-to-read display, with the dot-matrix nature and the scan lines almost invisible. The keyboard had flat key tops, similar to those on the HP 9800 series desktop computers, rather than the curved contours now considered to be ergonomic. It featured three keypad areas: alphabetic, numeric, and an array of cursor positioning and editing keys somewhat similar to modern PC keyboard layouts. There were also a number of smaller function and feature control keys arrayed in two rows above the normal keypad areas. The keyboard chassis was separate from the main body, connected via a thick cable. The keyboard used a bit-paired layout (similar to that on a teleprinter machine) rather than the typewriter-paired arrangement on DEC's VT100. Although it was large, users loved the keyboard because "it had a key for everything". Similar to the HP desktop computers, it had a number of F-keys (F1 through F8) placed close to the screen. Paper templates were available for some application programs which placed legends for these keys on the keyboard. Later models arranged these across the top row, and provided for screen labels close to their respective keys. Terminal configuration in the 262X series was done entirely through the screen-labeled function keys rather than dedicated keys and through escape sequences sent from the host computer. The on-screen labeling of the eight function keys, pioneered by the HP 300 ("Amigo") computer, was one of the first applications of a hierarchical menu which allows accessing many functions with a small number of keys. This arrangement is now common on TI graphic calculators, and automated teller and gas pump machines, though no longer used in graphical user interfaces. Internally, the electronics used a motherboard with plug-in daughter cards. The microprocessor, memory, serial interface card, and various optional functions were each on separate cards. This permitted easy field maintenance, upgrades, and reconfiguration. For example, more memory (providing larger scrollback capability) could be easily added, the serial interface could be changed from RS-232 to current loop, etc. The optional tape drives of the 2645 model were interfaced via another plug-in card. The plug-in card capability strongly resembled the later Apple II expansion architecture. In fact, the Apple I was derived from a plug-in daughter board inside the 2645 terminal sponsored by Hewlett-Packard. The manufacturing area was across from R&D cubicles in the Data Terminals Division in Cupertino. The testing area was dubbed "beepland" because it had racks of 500 terminals, with each terminal's test ending in a beep.
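The half-dot trick described above can be illustrated with a small sketch. This is not HP's character-generator logic and the glyph data is invented; it only shows how shifting a row by half a dot interleaves two 7-dot rows on a roughly 14-position grid:

```python
# Simplified illustration of half-dot positioning, as described above.
# Not HP's actual character-generator logic; the glyph rows are made up.
def expand_row(dots, half_shift):
    """Map one 7-dot glyph row onto a grid of half-dot positions.

    Each dot covers two adjacent half-dot positions; a row flagged with
    half_shift is offset by one half-dot, which lets diagonal strokes be
    smoothed without adding real horizontal resolution.
    """
    row = [0] * 15  # 14 half-dot positions, plus one so a shifted last dot still fits
    offset = 1 if half_shift else 0
    for i, dot in enumerate(dots):
        if dot:
            row[2 * i + offset] = 1
            row[2 * i + offset + 1] = 1
    return row

# Two rows of a hypothetical diagonal stroke: the second row is half-shifted,
# so its dots land between the columns of the first row.
print(expand_row([1, 0, 0, 0, 0, 0, 0], half_shift=False))
print(expand_row([1, 0, 0, 0, 0, 0, 0], half_shift=True))
```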
The HP 2640 introduced "block mode", similar to the IBM 3270 (although the IBM 3270 did not work for ASCII standard serial communications). The escape sequences Esc-[ and Esc-] defined unprotected areas, but they did not have to take up a visible space. It acted much like a web page, disconnected from the host until the SEND key was pressed. The fields could screen for alphabetic or numeric characters, a feature beyond Windows Forms today. This would be supported by programs such as DEL/3000 and VIEW/3000 which would map form data into runtime variables and databases. It also supported teletype character mode like a standard ASCII terminal, and did not need specialized communications like IBM. The hardware was radically different from most "dumb" terminals in that the characters were not stored in a simple data array. To save memory, the display data, which could extend over several pages, was stored as linked lists of dynamically allocated blocks. Display enhancements such as dim or underline were encoded as embedded bytes in the stream. Software enhancements which did not affect the appearance, such as the markers for protected and unprotected fields, were also coded with embedded bytes. The display hardware was capable of reading this unusual data structure. When the cost of memory came down by the 262X series, this was changed to a "parallel" structure with one bit for each enhancement code, but the logic required to emulate previous behaviors was complex. Inserting a code for underline would "propagate" to the next display enhancement, while deleting such a code would also have to be propagated to the next display byte, or a cursor jump sequence was issued to jump several bytes. It was also possible to turn off enhancements completely, as well as to provide protected-only field enhancements. This data structure would inspire the sparse matrix data structure for the Twin spreadsheet. The HP 2640 also introduced multiple pages of memory, much like the DOS box in Microsoft Windows today, and the page up and page down keys which appear on PC keyboards. Users learned to use the offline key to take the terminal offline, edit a line in the display buffer, and then retransmit it. This gave the effect of command line recall and editing even if the operating system did not support it. For example, when working at an operating system's command prompt, an erroneous command could quickly be corrected and re-sent without having to retype the entire line. This was possible in many terminals of the day, but the HP 2640 was smart enough to only retransmit the line from the first character typed by the user, omitting, for example, the operating system's command prompt. This was later implemented as "line mode". Another method was to paint a formatted screen in character mode with protected fields and place it into local edit mode, similar to the above, but without the user knowing. This meant that the characters entered by the user would not be transmitted to the host until a 'special' key was pressed, typically the enter key; other keys, such as control-Y and the function keys, were also deemed special (i.e. causing an immediate interrupt of the host).
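As a purely illustrative sketch of the forms idea described above, based only on the Esc-[ / Esc-] markers mentioned in this article and not on the full HP 264X escape-sequence set, a host program might compose a protected form with two fillable fields like this; the helper name and field layout are invented for the example:

```python
# Hypothetical sketch of composing a block-mode form, based only on the
# behaviour described above: Esc-[ ... Esc-] bracket an unprotected field,
# and everything outside those markers is protected text. The byte sequences
# a real HP 264X expects are more involved than this.
import sys

ESC = b"\x1b"
START_FIELD = ESC + b"["   # begin unprotected (fillable) area
END_FIELD   = ESC + b"]"   # end unprotected area

def form_line(label, field_width):
    """Return one form line: a protected label followed by a blank field."""
    return label.encode("ascii") + START_FIELD + b" " * field_width + END_FIELD

form = b"\r\n".join([
    form_line("NAME:    ", 20),
    form_line("ACCOUNT: ", 10),
])

# The form would be written once to the terminal (here simply to stdout);
# the operator then fills the unprotected fields locally and presses SEND,
# at which point only the entered data comes back to the host.
sys.stdout.buffer.write(form + b"\r\n")
```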
Only the data within the unprotected areas would be transferred in this way, using a semi block mode mechanism, a sort of half way house between block mode and normal character mode. Formatted fields also meant forms could be stored in memory (tested for and recalled locally, or repainted from the host if not present); just the unprotected data areas needed to be sent, thereby removing the need to repaint or issue direct cursor placements in order to update the screen (TIM/3000, Air Call Computer Systems). The PCL language was PCL level 3 in an HP645/7, which was later implemented to drive Hewlett-Packard's first LaserJet printer. HP Printer Control Language shares a common non-ANSI escape sequence grammar and common sequences with HP terminals. In-house developers ported TinyBASIC to the HP 2645A, as well as developing several games in assembler (most notably "Keep On Drivin'", Tennis and Reversi). Plotters could also be driven from TinyBASIC using HP/GL 2. Models The HP 264X series included several models beyond the HP 2640A. The HP 2644A introduced 3M mini cartridge tape drives which could be used to upload or download data, as opposed to the slow paper tapes of the time. Another later model used floppy disks, and supported drawing forms etch-a-sketch style and would compute intersections. Also notable was the use of paper-labeled function keys on the upper left. These would always get lost, so users would scroll lock the top 2 lines of the screen and use these for labels. These were built into the next generation of terminals. The values of these keys could be programmed. The HP 2648 was a graphics terminal which featured hardware zoom, and "autoplot". It utilized separate memory for graphics and text, allowing the user to turn off either type of display at will. The HP 2647 had a variant of Microsoft BASIC with AGL (HP's standard for plotting) built in, and perhaps the first real business charting for a microcomputer, complete with 3D cross-hatched pie charts. 02647-13301 Graphics: 2647 Multiplot and Slide Software. Multiplot was the model for the PC based Chartman by the Cambridge company that also produced the Twin spreadsheet 1-2-3 clone which introduced HP 2640 style forms to PC applications. 13257B Graphics: 2647 Graphics Presentation Resource Pac 13257D Statistics/Mathematics: 2647 Statistical Analysis Resource Pac 13257C Statistics/Mathematics: 2647 Mathematics Analysis Resources Pac 13257F Business: 2647: Project Management Analysis Resource Pac 13257K General/Utilities: 2647 2647/1351 Basic The HP262X series introduced the "periscope" look, "soft" key labels along with a 4 + 4 key display at the bottom of the screen, a hierarchical setup tree, 12" screen and an optional internal thermal printer. The HP-125 45500A Dual Z80 CP/M used the form factor and terminal emulation of the HP 2621 terminal. The HP-150 had the terminal capabilities of the HP 2623 graphics terminal in a smaller package (9" screen). The HP2382 "munchkin" repackaged the HP 2622 in a 9" screen package. The HP-120 45600A packaged the HP-125 into the HP2382 form factor. The "Therminal" was an unusual implementation of a screen-less printing terminal which used the thermal print mechanism. It was one of the first projects of the Vancouver division. It even supported tape cartridge local storage, but it was not successful. The great over-reach was a color graphics terminal that cost more than the HP 2647 monochrome graphics workstation; it sold very few units but took a huge effort to develop.
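To make the semi block mode idea concrete: the host ultimately only needs the contents of the unprotected areas back, not the whole screen. The conceptual sketch below reuses the hypothetical Esc-[ / Esc-] markers from the previous example (the real transfer format differed) and simply pulls the field contents out of a locally edited screen image:

```python
# Conceptual sketch only: given a locally edited screen image in which
# unprotected fields are still bracketed by Esc-[ and Esc-] (as in the
# previous example), return just the field contents -- the only data the
# semi block mode transfer described above actually needs to send.
import re

ESC = "\x1b"
FIELD = re.compile(re.escape(ESC + "[") + r"(.*?)" + re.escape(ESC + "]"), re.DOTALL)

def unprotected_data(screen):
    """Extract the text of each unprotected field, trimmed of padding."""
    return [m.strip() for m in FIELD.findall(screen)]

edited_screen = (
    "NAME:    " + ESC + "[" + "ADA LOVELACE        " + ESC + "]" + "\r\n"
    "ACCOUNT: " + ESC + "[" + "1815      " + ESC + "]"
)
print(unprotected_data(edited_screen))   # ['ADA LOVELACE', '1815']
```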
Eventually, HP ended up selling essentially a low-cost version of the HP 2640. Today, terminal emulators still implement the late 1970s feature set of these terminals on common PCs. See also List of HP 26xx terminals (introduction, price, discontinuation) References External links HP 2640A on the terminals wiki Reflection (Attachmate) User's manual Service manual, preliminary 2640 Block-oriented terminal Character-oriented terminal Computer-related introductions in 1975
42965868
https://en.wikipedia.org/wiki/Cleo%20%28company%29
Cleo (company)
Cleo is an ecosystem integration software company that provides business-to-business (B2B), application-to-application (A2A), cloud integration and data movement and transformation solutions. The privately held company, formally known as Cleo Communications LLC, was founded in 1976, but goes by "Cleo." History Cleo originally began as a division of Phone 1 Inc., a voice data gathering systems manufacturer, and built data concentrators and terminal emulators — multi-bus computers, modems, and terminals to interface with IBM mainframes via bisynchronous communications. The company then began developing mainframe middleware in the 1980s, and with the rise of the PC, moved into B2B data communications and secure file transfer software. Since being acquired in 2012 the company’s offerings have evolved into Cleo Integration Cloud, a platform for enterprise business integration. Business Based in Rockford, Illinois (USA), with offices in Chicago, London, and Bangalore, Cleo has about 300 employees and more than 4,000 direct customers. The company's flagship offering, Cleo Integration Cloud, provides both on-premise and cloud-based integration technologies and comprises solutions for B2B/EDI, application integration and data movement and transformation. Previous products now incorporated into the Cleo Integration Cloud platform include Cleo Harmony, Cleo Clarify, and Cleo Jetsonic. Cleo solutions span a variety of industries, including manufacturing, logistics and supply chain, retail, third-party logistics, warehouse management and transportation management, healthcare, financial services and government. The U.S. Department of Veterans Affairs adopted Cleo's fax technology, Cleo Streem, in 2013 when in need of FIPS 140-2-compliant technology to protect information, and the City of Atlanta has used Cleo Streem for network and desktop faxing since 2006. Cleo also serves U.S. transportation logistics company MercuryGate International and SaaS-based food logistics organization ArrowStream, powers the architecture for several major supply chain companies, such as Blue Yonder and SAP, and integrates the pharmaceutical supply chain for such companies as Octapharma. Notable manufacturing customers include Duraflame, Inc., Sauder Woodworking, and Camira Fabrics Ltd. Key partners include FourKites and ClientsFirst, among many others. Expansion In June 2014, Cleo opened an office in Chicago for members of its support and engineering teams. The company in 2014 hired Jorge Rodriguez as senior vice president of product development and John Thielens as vice president of technology. Cleo hired Dave Brunswick as vice president of solutions for North America in 2015, and Cleo hired Ken Lyons to lead global sales in 2016. More recent additions to the company's leadership team include Drew Skarupa, CFO, Vipin Mittal, vice president, Vidya Chadaga, vice president, Products, and Tushar Patel, CMO. Cleo opened its Center of Innovation product development facility in Bengaluru, India, in 2015 and expanded its hybrid cloud integration teams into a new office there in 2017. The company also opened a London office in 2016 and expanded its network of channel partners in EMEA. In 2016, Cleo acquired EXTOL International, a Pottsville, Pa.-based business and EDI integration and data transformation company for an undisclosed amount. In 2017, the company moved its headquarters from Loves Park, Illinois, to Rockford. In 2021 the company received a significant growth investment from H.I.G. Capital. 
Certification Cleo regularly submits its products to Drummond Group's interoperability software testing for AS2, AS3 and ebMS 2.0. In January 2020, Cleo announced that its new application connector for Acumatica ERP had been recognized as an Acumatica-Certified Application (ACA). The company also holds SOC 2, Type 2 certification. Awards Cleo received a Xerox partner of the year award for five years, from 2009 to 2014. The Cleo Streem solution integrates with Xerox multi-function products, providing customers with solutions for network fax and interactive messaging needs. Cleo was named to Food Logistics’ FL100+ Top Software and Technology Providers Lists in 2016, 2017, 2019 and 2020. References EDI software companies Software companies based in Illinois Network management Managed file transfer File transfer protocols Data management Software companies of the United States
21347364
https://en.wikipedia.org/wiki/Unix
Unix
Unix (; trademarked as UNIX) is a family of multitasking, multiuser computer operating systems that derive from the original AT&T Unix, whose development started in 1969 at the Bell Labs research center by Ken Thompson, Dennis Ritchie, and others. Initially intended for use inside the Bell System, AT&T licensed Unix to outside parties in the late 1970s, leading to a variety of both academic and commercial Unix variants from vendors including University of California, Berkeley (BSD), Microsoft (Xenix), Sun Microsystems (SunOS/Solaris), HP/HPE (HP-UX), and IBM (AIX). In the early 1990s, AT&T sold its rights in Unix to Novell, which then sold its Unix business to the Santa Cruz Operation (SCO) in 1995. The UNIX trademark passed to The Open Group, an industry consortium founded in 1996, which allows the use of the mark for certified operating systems that comply with the Single UNIX Specification (SUS). However, Novell continues to own the Unix copyrights, which the SCO Group, Inc. v. Novell, Inc. court case (2010) confirmed. Unix systems are characterized by a modular design that is sometimes called the "Unix philosophy". According to this philosophy, the operating system should provide a set of simple tools, each of which performs a limited, well-defined function. A unified and inode-based filesystem (the Unix filesystem) and an inter-process communication mechanism known as "pipes" serve as the main means of communication, and a shell scripting and command language (the Unix shell) is used to combine the tools to perform complex workflows. Unix distinguishes itself from its predecessors as the first portable operating system: almost the entire operating system is written in the C programming language, which allows Unix to operate on numerous platforms. Overview Unix was originally meant to be a convenient platform for programmers developing software to be run on it and on other systems, rather than for non-programmers. The system grew larger as the operating system started spreading in academic circles, and as users added their own tools to the system and shared them with colleagues. At first, Unix was not designed to be portable or for multi-tasking. Later, Unix gradually gained portability, multi-tasking and multi-user capabilities in a time-sharing configuration. Unix systems are characterized by various concepts: the use of plain text for storing data; a hierarchical file system; treating devices and certain types of inter-process communication (IPC) as files; and the use of a large number of software tools, small programs that can be strung together through a command-line interpreter using pipes, as opposed to using a single monolithic program that includes all of the same functionality. These concepts are collectively known as the "Unix philosophy". Brian Kernighan and Rob Pike summarize this in The Unix Programming Environment as "the idea that the power of a system comes more from the relationships among programs than from the programs themselves". By the early 1980s, users began seeing Unix as a potential universal operating system, suitable for computers of all sizes. The Unix environment and the client–server program model were essential elements in the development of the Internet and the reshaping of computing as centered in networks rather than in individual computers. 
Both Unix and the C programming language were developed by AT&T and distributed to government and academic institutions, which led to both being ported to a wider variety of machine families than any other operating system. The Unix operating system consists of many libraries and utilities along with the master control program, the kernel. The kernel provides services to start and stop programs, handles the file system and other common "low-level" tasks that most programs share, and schedules access to avoid conflicts when programs try to access the same resource or device simultaneously. To mediate such access, the kernel has special rights, reflected in the distinction of kernel space from user space, the latter being a priority realm where most application programs operate. History The origins of Unix date back to the mid-1960s when the Massachusetts Institute of Technology, Bell Labs, and General Electric were developing Multics, a time-sharing operating system for the GE-645 mainframe computer. Multics featured several innovations, but also presented severe problems. Frustrated by the size and complexity of Multics, but not by its goals, individual researchers at Bell Labs started withdrawing from the project. The last to leave were Ken Thompson, Dennis Ritchie, Douglas McIlroy, and Joe Ossanna, who decided to reimplement their experiences in a new project of smaller scale. This new operating system was initially without organizational backing, and also without a name. The new operating system was a single-tasking system. In 1970, the group coined the name Unics for Uniplexed Information and Computing Service as a pun on Multics, which stood for Multiplexed Information and Computer Services. Brian Kernighan takes credit for the idea, but adds that "no one can remember" the origin of the final spelling Unix. Dennis Ritchie, Doug McIlroy, and Peter G. Neumann also credit Kernighan. The operating system was originally written in assembly language, but in 1973, Version 4 Unix was rewritten in C. Version 4 Unix, however, still had many PDP-11 dependent codes, and was not suitable for porting. The first port to another platform was made five years later (1978) for the Interdata 8/32. In 1974, Ken Robinson of the Department of Computer Science at University of New South Wales (UNSW) in Australia requested a copy of Unix for their PDP-11/40 minicomputer from Dennis Ritchie at Bell Labs. This 1975 installation made UNSW the first university outside the United States to run Unix. Bell Labs produced several versions of Unix that are collectively referred to as Research Unix. In 1975, the first source license for UNIX was sold to Donald B. Gillies at the University of Illinois Urbana–Champaign Department of Computer Science (UIUC). UIUC graduate student Greg Chesson, who had worked on the Unix kernel at Bell Labs, was instrumental in negotiating the terms of the license. During the late 1970s and early 1980s, the influence of Unix in academic circles led to large-scale adoption of Unix (BSD and System V) by commercial startups, which in turn led to Unix fragmenting into multiple, similar but often slightly mutually-incompatible systems including DYNIX, HP-UX, SunOS/Solaris, AIX, and Xenix. In the late 1980s, AT&T Unix System Laboratories and Sun Microsystems developed System V Release 4 (SVR4), which was subsequently adopted by many commercial Unix vendors. 
In the 1990s, Unix and Unix-like systems grew in popularity and became the operating system of choice for over 90% of the world's top 500 fastest supercomputers, as BSD and Linux distributions were developed through collaboration by a worldwide network of programmers. In 2000, Apple released Darwin, also a Unix system, which became the core of the Mac OS X operating system, later renamed macOS. Unix operating systems are widely used in modern servers, workstations, and mobile devices. Standards In the late 1980s, an open operating system standardization effort now known as POSIX provided a common baseline for all operating systems; IEEE based POSIX around the common structure of the major competing variants of the Unix system, publishing the first POSIX standard in 1988. In the early 1990s, a separate but very similar effort was started by an industry consortium, the Common Open Software Environment (COSE) initiative, which eventually became the Single UNIX Specification (SUS) administered by The Open Group. Starting in 1998, the Open Group and IEEE started the Austin Group, to provide a common definition of POSIX and the Single UNIX Specification, which, by 2008, had become the Open Group Base Specification. In 1999, in an effort towards compatibility, several Unix system vendors agreed on SVR4's Executable and Linkable Format (ELF) as the standard for binary and object code files. The common format allows substantial binary compatibility among different Unix systems operating on the same CPU architecture. The Filesystem Hierarchy Standard was created to provide a reference directory layout for Unix-like operating systems; it has mainly been used in Linux. Components The Unix system is composed of several components that were originally packaged together. By including the development environment, libraries, documents and the portable, modifiable source code for all of these components, in addition to the kernel of an operating system, Unix was a self-contained software system. This was one of the key reasons it emerged as an important teaching and learning tool and has had such a broad influence. The inclusion of these components did not make the system large the original V7 UNIX distribution, consisting of copies of all of the compiled binaries plus all of the source code and documentation occupied less than 10 MB and arrived on a single nine-track magnetic tape. The printed documentation, typeset from the online sources, was contained in two volumes. The names and filesystem locations of the Unix components have changed substantially across the history of the system. Nonetheless, the V7 implementation is considered by many to have the canonical early structure: Kernel source code in /usr/sys, composed of several sub-components: conf configuration and machine-dependent parts, including boot code dev device drivers for control of hardware (and some pseudo-hardware) sys operating system "kernel", handling memory management, process scheduling, system calls, etc. h header files, defining key structures within the system and important system-specific invariables Development environment early versions of Unix contained a development environment sufficient to recreate the entire system from source code: ed text editor, for creating source code files cc C language compiler (first appeared in V3 Unix) as machine-language assembler for the machine ld linker, for combining object files lib object-code libraries (installed in /lib or /usr/lib). 
libc, the system library with C run-time support, was the primary library, but there have always been additional libraries for things such as mathematical functions (libm) or database access. V7 Unix introduced the first version of the modern "Standard I/O" library stdio as part of the system library. Later implementations increased the number of libraries significantly. make build manager (introduced in PWB/UNIX), for effectively automating the build process include header files for software development, defining standard interfaces and system invariants Other languages V7 Unix contained a Fortran-77 compiler, a programmable arbitrary-precision calculator (bc, dc), and the awk scripting language; later versions and implementations contain many other language compilers and toolsets. Early BSD releases included Pascal tools, and many modern Unix systems also include the GNU Compiler Collection as well as or instead of a proprietary compiler system. Other tools including an object-code archive manager (ar), symbol-table lister (nm), compiler-development tools (e.g. lex & yacc), and debugging tools. Commands Unix makes little distinction between commands (user-level programs) for system operation and maintenance (e.g. cron), commands of general utility (e.g. grep), and more general-purpose applications such as the text formatting and typesetting package. Nonetheless, some major categories are: sh the "shell" programmable command-line interpreter, the primary user interface on Unix before window systems appeared, and even afterward (within a "command window"). Utilities the core toolkit of the Unix command set, including cp, ls, grep, find and many others. Subcategories include: System utilities administrative tools such as mkfs, fsck, and many others. User utilities environment management tools such as passwd, kill, and others. Document formatting Unix systems were used from the outset for document preparation and typesetting systems, and included many related programs such as nroff, troff, tbl, eqn, refer, and pic. Some modern Unix systems also include packages such as TeX and Ghostscript. Graphics the plot subsystem provided facilities for producing simple vector plots in a device-independent format, with device-specific interpreters to display such files. Modern Unix systems also generally include X11 as a standard windowing system and GUI, and many support OpenGL. Communications early Unix systems contained no inter-system communication, but did include the inter-user communication programs mail and write. V7 introduced the early inter-system communication system UUCP, and systems beginning with BSD release 4.1c included TCP/IP utilities. Documentation Unix was the first operating system to include all of its documentation online in machine-readable form. The documentation included: man manual pages for each command, library component, system call, header file, etc. doc longer documents detailing major subsystems, such as the C language and troff Impact The Unix system had a significant impact on other operating systems. It achieved its reputation by its interactivity, by providing the software at a nominal fee for educational use, by running on inexpensive hardware, and by being easy to adapt and move to different machines. Unix was originally written in assembly language, but was soon rewritten in C, a high-level programming language. Although this followed the lead of CTSS, Multics and Burroughs MCP, it was Unix that popularized the idea. 
Unix had a drastically simplified file model compared to many contemporary operating systems: treating all kinds of files as simple byte arrays. The file system hierarchy contained machine services and devices (such as printers, terminals, or disk drives), providing a uniform interface, but at the expense of occasionally requiring additional mechanisms such as ioctl and mode flags to access features of the hardware that did not fit the simple "stream of bytes" model. The Plan 9 operating system pushed this model even further and eliminated the need for additional mechanisms. Unix also popularized the hierarchical file system with arbitrarily nested subdirectories, originally introduced by Multics. Other common operating systems of the era had ways to divide a storage device into multiple directories or sections, but they had a fixed number of levels, often only one level. Several major proprietary operating systems eventually added recursive subdirectory capabilities also patterned after Multics. DEC's RSX-11M's "group, user" hierarchy evolved into OpenVMS directories, CP/M's volumes evolved into MS-DOS 2.0+ subdirectories, and HP's MPE group.account hierarchy and IBM's SSP and OS/400 library systems were folded into broader POSIX file systems. Making the command interpreter an ordinary user-level program, with additional commands provided as separate programs, was another Multics innovation popularized by Unix. The Unix shell used the same language for interactive commands as for scripting (shell scripts – there was no separate job control language like IBM's JCL). Since the shell and OS commands were "just another program", the user could choose (or even write) their own shell. New commands could be added without changing the shell itself. Unix's innovative command-line syntax for creating modular chains of producer-consumer processes (pipelines) made a powerful programming paradigm (coroutines) widely available. Many later command-line interpreters have been inspired by the Unix shell. A fundamental simplifying assumption of Unix was its focus on newline-delimited text for nearly all file formats. There were no "binary" editors in the original version of Unix – the entire system was configured using textual shell command scripts. The common denominator in the I/O system was the byte – unlike "record-based" file systems. The focus on text for representing nearly everything made Unix pipes especially useful and encouraged the development of simple, general tools that could be easily combined to perform more complicated ad hoc tasks. The focus on text and bytes made the system far more scalable and portable than other systems. Over time, text-based applications have also proven popular in application areas, such as printing languages (PostScript, ODF), and at the application layer of the Internet protocols, e.g., FTP, SMTP, HTTP, SOAP, and SIP. Unix popularized a syntax for regular expressions that found widespread use. The Unix programming interface became the basis for a widely implemented operating system interface standard (POSIX, see above). The C programming language soon spread beyond Unix, and is now ubiquitous in systems and applications programming. Early Unix developers were important in bringing the concepts of modularity and reusability into software engineering practice, spawning a "software tools" movement. 
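The pipeline model described above can be made concrete with a small example. The following sketch wires two standard Unix tools together the same way the shell pipeline who | wc -l would, with the byte stream produced by one process feeding the next:

```python
# A sketch of the producer-consumer pipeline described above, equivalent to
# the shell pipeline `who | wc -l`: the output (a byte stream) of one small
# tool becomes the input of the next.
import subprocess

producer = subprocess.Popen(["who"], stdout=subprocess.PIPE)
consumer = subprocess.Popen(["wc", "-l"],
                            stdin=producer.stdout,
                            stdout=subprocess.PIPE)
producer.stdout.close()          # let the producer receive SIGPIPE if the consumer exits
output, _ = consumer.communicate()
print(output.decode().strip())   # number of logged-in sessions
```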
Over time, the leading developers of Unix (and programs that ran on it) established a set of cultural norms for developing software, norms which became as important and influential as the technology of Unix itself; this has been termed the Unix philosophy. The TCP/IP networking protocols were quickly implemented on the Unix versions widely used on relatively inexpensive computers, which contributed to the Internet explosion of worldwide real-time connectivity, and which formed the basis for implementations on many other platforms. The Unix policy of extensive on-line documentation and (for many years) ready access to all system source code raised programmer expectations, and contributed to the launch of the free software movement in 1983. Free Unix and Unix-like variants In 1983, Richard Stallman announced the GNU (short for "GNU's Not Unix") project, an ambitious effort to create a free software Unix-like system; "free" in the sense that everyone who received a copy would be free to use, study, modify, and redistribute it. The GNU project's own kernel development project, GNU Hurd, had not yet produced a working kernel, but in 1991 Linus Torvalds released the Linux kernel as free software under the GNU General Public License. In addition to their use in the GNU operating system, many GNU packages – such as the GNU Compiler Collection (and the rest of the GNU toolchain), the GNU C library and the GNU core utilities – have gone on to play central roles in other free Unix systems as well. Linux distributions, consisting of the Linux kernel and large collections of compatible software have become popular both with individual users and in business. Popular distributions include Red Hat Enterprise Linux, Fedora, SUSE Linux Enterprise, openSUSE, Debian, Ubuntu, Linux Mint, Mandriva Linux, Slackware Linux, Arch Linux and Gentoo. A free derivative of BSD Unix, 386BSD, was released in 1992 and led to the NetBSD and FreeBSD projects. With the 1994 settlement of a lawsuit brought against the University of California and Berkeley Software Design Inc. (USL v. BSDi) by Unix System Laboratories, it was clarified that Berkeley had the right to distribute BSD Unix for free if it so desired. Since then, BSD Unix has been developed in several different product branches, including OpenBSD and DragonFly BSD. Linux and BSD are increasingly filling the market needs traditionally served by proprietary Unix operating systems, as well as expanding into new markets such as the consumer desktop and mobile and embedded devices. Because of the modular design of the Unix model, sharing components is relatively common; consequently, most or all Unix and Unix-like systems include at least some BSD code, and some systems also include GNU utilities in their distributions. In a 1999 interview, Dennis Ritchie voiced his opinion that Linux and BSD operating systems are a continuation of the basis of the Unix design, and are derivatives of Unix: In the same interview, he states that he views both Unix and Linux as "the continuation of ideas that were started by Ken and me and many others, many years ago". OpenSolaris was the free software counterpart to Solaris developed by Sun Microsystems, which included a CDDL-licensed kernel and a primarily GNU userland. However, Oracle discontinued the project upon their acquisition of Sun, which prompted a group of former Sun employees and members of the OpenSolaris community to fork OpenSolaris into the illumos kernel. 
As of 2014, illumos remains the only active open-source System V derivative. ARPANET In May 1975, RFC 681 described the development of Network Unix by the Center for Advanced Computation at the University of Illinois Urbana-Champaign. The Unix system was said to "present several interesting capabilities as an ARPANET mini-host". At the time, Unix required a license from Bell Telephone Laboratories that cost US$20,000 for non-university institutions, while universities could obtain a license for a nominal fee of $150. It was noted that Bell was "open to suggestions" for an ARPANET-wide license. The RFC specifically mentions that Unix "offers powerful local processing facilities in terms of user programs, several compilers, an editor based on QED, a versatile document preparation system, and an efficient file system featuring sophisticated access control, mountable and de-mountable volumes, and a unified treatment of peripherals as special files." The latter permitted the Network Control Program (NCP) to be integrated within the Unix file system, treating network connections as special files that could be accessed through standard Unix I/O calls, which included the added benefit of closing all connections on program exit, should the user neglect to do so. The modular design of Unix allowed them "to minimize the amount of code added to the basic Unix kernel", with much of the NCP code in a swappable user process, running only when needed. Branding In October 1993, Novell, the company that owned the rights to the Unix System V source at the time, transferred the trademarks of Unix to the X/Open Company (now The Open Group), and in 1995 sold the related business operations to Santa Cruz Operation (SCO). Whether Novell also sold the copyrights to the actual software was the subject of a federal lawsuit in 2006, SCO v. Novell, which Novell won. The case was appealed, but on August 30, 2011, the United States Court of Appeals for the Tenth Circuit affirmed the trial decisions, closing the case. Unix vendor SCO Group Inc. accused Novell of slander of title. The present owner of the trademark UNIX is The Open Group, an industry standards consortium. Only systems fully compliant with and certified to the Single UNIX Specification qualify as "UNIX" (others are called "Unix-like"). By decree of The Open Group, the term "UNIX" refers more to a class of operating systems than to a specific implementation of an operating system; those operating systems which meet The Open Group's Single UNIX Specification should be able to bear the UNIX 98 or UNIX 03 trademarks today, after the operating system's vendor pays a substantial certification fee and annual trademark royalties to The Open Group. Systems that have been licensed to use the UNIX trademark include AIX, EulerOS, HP-UX, Inspur K-UX, IRIX, macOS, Solaris, Tru64 UNIX (formerly "Digital UNIX", or OSF/1), and z/OS. Notably, EulerOS and Inspur K-UX are Linux distributions certified as UNIX 03 compliant. Sometimes a representation like Un*x, *NIX, or *N?X is used to indicate all operating systems similar to Unix. This comes from the use of the asterisk (*) and the question mark characters as wildcard indicators in many utilities. This notation is also used to describe other Unix-like systems that have not met the requirements for UNIX branding from the Open Group. The Open Group requests that UNIX is always used as an adjective followed by a generic term such as system to help avoid the creation of a genericized trademark. 
Unix was the original formatting, but the usage of UNIX remains widespread because it was once typeset in small caps (Unix). According to Dennis Ritchie, when presenting the original Unix paper to the third Operating Systems Symposium of the American Association for Computing Machinery (ACM), "we had a new typesetter and troff had just been invented and we were intoxicated by being able to produce small caps". Many of the operating system's predecessors and contemporaries used all-uppercase lettering, so many people wrote the name in upper case due to force of habit. It is not an acronym. Trademark names can be registered by different entities in different countries and trademark laws in some countries allow the same trademark name to be controlled by two different entities if each entity uses the trademark in easily distinguishable categories. The result is that Unix has been used as a brand name for various products including bookshelves, ink pens, bottled glue, diapers, hair driers and food containers. Several plural forms of Unix are used casually to refer to multiple brands of Unix and Unix-like systems. Most common is the conventional Unixes, but Unices, treating Unix as a Latin noun of the third declension, is also popular. The pseudo-Anglo-Saxon plural form Unixen is not common, although occasionally seen. Sun Microsystems, developer of the Solaris variant, has asserted that the term Unix is itself plural, referencing its many implementations. See also Comparison of operating systems and free and proprietary software List of operating systems, Unix systems, and Unix commands Market share of operating systems Timeline of operating systems Plan 9 from Bell Labs Unix time Year 2038 problem References Further reading General Lions, John: Lions' with Source Code, Peer-to-Peer Communications, 1996; Books Salus, Peter H.: A Quarter Century of UNIX, Addison Wesley, June 1, 1994; Television Computer Chronicles (1985). "UNIX". Computer Chronicles (1989). "Unix". Talks External links The UNIX Standard, at The Open Group. The Unix Tree: files from historic releases Unix History Repository — a git repository representing a reconstructed version of the Unix history The Unix 1st Edition Manual 1st Edition manual rendered to HTML (film about Unix featuring Dennis Ritchie, Ken Thompson, Brian Kernighan, Alfred Aho, and more) (complementary film to the preceding "Making Computers More Productive") audio bsdtalk170 - Marshall Kirk McKusick at DCBSDCon -- on history of tcp/ip (in BSD) -- abridgement of the three lectures on the history of BSD. A History of UNIX before Berkeley: UNIX Evolution: 1975-1984 BYTE Magazine, September 1986: UNIX and the MC68000 a software perspective on the MC68000 CPU architecture and UNIX compatibility 1969 software Operating system families Time-sharing operating systems
22741394
https://en.wikipedia.org/wiki/Phatch
Phatch
Phatch (PHoto & bATCH) is a raster graphics editor used to batch process digital graphics and photographs. Phatch can be used on the desktop as a GUI program or on the server as a console program. Operation Typical actions include resizing, rotating, cropping, applying shadows, rounded corners, perspective and reflection, and converting between different image formats. Phatch can also be used to rename or copy image files based on the Exif or IPTC Information Interchange Model tags. The image inspector can be used to explore the metadata tags stored in images. The tags can be passed to any action, which is especially useful for renaming or copying files, but also for stamping data such as date, time, aperture or shutter speed onto the picture. Multiple inspectors can be opened at once to compare tag values with a preview of the image. Phatch can turn itself into a droplet, which stays as a small graphic on top of the other windows and processes any images which are dragged and dropped onto it. Phatch has a built-in interactive Python console for exploring the internals of the program. Development Phatch is being developed on Linux (Ubuntu) by Stani Michiels. The logo, mascot and some icons are designed by Admiror Design Studio. The other icons are taken from the Open Clip Art Library. The image processing of Phatch is done with the Python Imaging Library. Phatch uses Bazaar in combination with Launchpad for coordinating its development and translations. Phatch has a Python (wxPython) API and is extensible through Python. Limitations Phatch does not provide a live preview of the image manipulation and has no built-in support for remote file systems. Although Phatch runs from source on Windows and Mac OS X, there are no final binary installers available for these platforms, although a pre-release binary installer for OS X was made available by the developers in May 2010. Distribution The source code of Phatch is released on its homepage. Binary packages are available in the repositories of the major Linux distributions such as Debian, Ubuntu, Arch Linux, Fedora and openSUSE. Phatch requires Python, the Python Imaging Library and wxPython (2.6 or later) for the GUI. Users can install pyexiv2 for better Exif and IPTC IIM support. Currently the website is down, and Phatch and its Python dependencies are no longer downloadable from it. Critical reception Softpedia's editor's review of Phatch 0.0.bzr157 awarded 4 stars overall, highlighting the clean and simple interface, program stability, and batch processing actions. Allowing the user to use Python to create additional batch processes was also seen as an advantage over similar products. Criticisms included the lack of a help file or preferences menu. Some processes such as "Convert Mode" caused errors that were not reported to the error log. It was the opinion of Linux Pratique magazine that Phatch 0.1.3 filled the gap between GIMP and ImageMagick, with a user-friendly interface saving a lot of time for batch processing. Overall it was felt that it offered fewer features than these two programs. Phatch 0.1.6 was featured together with GIMP on the front page of Linux+ magazine, May 2009. References External links Free software programmed in Python Free raster graphics editors Free photo software Raster graphics editors for Linux Digital photography IRIX software Macintosh graphics software MacOS graphics software Windows graphics-related software Software that uses wxPython
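Phatch performs its image processing with the Python Imaging Library, so the kind of per-image step it chains together can be sketched roughly as follows; the folder names and target size are illustrative, and this is not Phatch's own action API:

```python
import os
from PIL import Image  # Python Imaging Library (Pillow)

SRC, DST, SIZE = "photos", "photos_small", (800, 800)  # illustrative values

os.makedirs(DST, exist_ok=True)
for name in os.listdir(SRC):
    if not name.lower().endswith((".jpg", ".jpeg", ".png")):
        continue  # skip non-image files
    with Image.open(os.path.join(SRC, name)) as im:
        out = im.convert("RGB")   # drop alpha so the result can be saved as JPEG
        out.thumbnail(SIZE)       # resize in place, preserving aspect ratio
        root, _ = os.path.splitext(name)
        out.save(os.path.join(DST, root + ".jpg"))  # format conversion happens on save
```

In Phatch itself the same idea is expressed declaratively as a list of actions rather than hand-written code, with the metadata tags available as variables for renaming and stamping.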
55647366
https://en.wikipedia.org/wiki/Orion-128
Orion-128
The Orion-128 is a DIY computer designed in the Soviet Union. It was featured in Radio magazine in 1990, and further materials for the computer were published until 1996. It was the last Intel 8080-based DIY computer in Russia. Overview The Orion-128 used the same concepts as the Specialist and had similar specifications, with both advances and flaws. It gained more popularity because it was supported by a more popular magazine. In the early 1990s the computer was produced industrially at the Livny pilot plant of computer graphics equipment in Oryol Oblast. Much of the software for the Orion-128 was ported by hobbyists from the Specialist and the ZX Spectrum. Technical specifications CPU: KR580VM80A (Intel 8080A clone) clocked at 2.5 MHz. RAM: 128 KiB in the original version, expandable to 256 KiB using a bank-switching scheme. ROM: 2 KiB, containing the Monitor firmware. Video: three graphics modes, all with an image resolution of 384 × 256 pixels. Text can be displayed as 64 columns × 25 rows of characters; glyphs for the upper-case Cyrillic and Latin characters in KOI-7 N2 encoding are built into the Monitor ROM. The graphics modes are: monochrome mode (two color palettes available: black and green, or yellow and blue) 4-color mode (each pixel has its own color, two palettes available) 16-color mode (each group of 8 horizontal pixels can use one of 16 foreground colors and one of 16 background colors) Storage media: cassette tape and a ROM drive (a special board containing a set of ROM chips); in later years a floppy disk controller and an ATA hard disk controller were developed. Keyboard: 67 keys. The keyboard matrix is attached via a KR580VV55 programmable peripheral interface chip (Intel 8255 clone) and scanned by the CPU. Peculiarities The Orion is partially compatible with the Radio-86RK in terms of keyboard, standard ROM subroutines and cassette data format, and with another amateur radio computer, the Specialist, in terms of graphic screen format. It also apparently borrowed the idea of a RAM disk from another domestic computer with 128 KiB of RAM, the Okean-240. The Orion developers reportedly set themselves the task of creating an inexpensive, simple and affordable consumer PC with good graphics capabilities, and they succeeded. In the minimum configuration (without color, with 64 KiB of RAM) the Orion contains only 42 integrated circuits, and in the standard configuration (128 KiB) only 59; no expensive or scarce components are required, and obsolete chip series can be used. For the same reasons the KR580VM80A was chosen as the CPU, being the cheapest and most widely available. Moreover, the Orion circuitry lets the processor run at its maximum frequency of 2.5 MHz without any delays, using the same idea of transparent access to video RAM that had previously been applied in the Specialist and its clones. Other domestic machines used WAIT cycles to synchronize the processor with the video circuitry, which reduced performance by 25%. This made the Orion, along with the Corvette, the fastest domestic home computer based on this processor. For example, the Vector-06Ts, despite its higher clock rate of 3 MHz, is slower than the Orion because its video controller stalls the processor.
The Orion has high graphics capability for this class of machine: the 384 × 256 resolution allows good graphics in games, although it is still insufficient for serious text processing. A full color mode gives each pixel its own color (analogous to CGA, but with a different memory organization), with 4 colors selected from two palettes; visually the number of colors can be increased with a mosaic of colored dots, as was done in CGA games. This mode is typical of many Western computers of this level, although it was almost never used by programs, since it was not needed for text and no graphics editor existed for creating games with it. For games and text there is a convenient 16-color mode, in which only two colors are possible within each screen byte. The organization of the Orion screen is linear and very convenient for the programmer: the low byte of the address specifies the vertical position of a screen byte, and the high byte its horizontal position. This simplified and accelerated drawing on the screen (a similar screen organization is used in the Specialist, Vector and Okean). In 16-color mode the screen consists of two planes, a graphics plane and a color plane. For text in a single-color window this speeds up output and scrolling: the window is painted over before output, which halves the number of bytes written per character (relative to CGA), and within the window the color plane does not need to be changed at all. In all video modes the Orion also provides up to 4 software-switchable screen buffers, so a program can draw to a currently invisible screen and then switch to it instantly. This eliminates flickering sprites in dynamic games and the need to work around flicker with interrupts, as on the ZX Spectrum; even large sprites can be moved across the Orion screen without flickering. For the Orion-128 its developers initially created their own operating system, ORDOS, designed to work not with a disk drive but with a ROM disk (external ROM read through the programmable peripheral interface), RAM disks (the second and subsequent 60-kilobyte pages of RAM) and a tape recorder. ORDOS made it possible to work comfortably without disk drives, which were unavailable at the time; the Okean-240, produced in small numbers, had a similar built-in ROM-resident CP/M running on a RAM disk, and among serially produced home computers the Junior FV-6506, which also used CP/M, had something comparable. The main relative shortcoming of the Orion is its non-optimal screen resolution of 384 × 256 at a 10 MHz video clock. This forces the use of an unattractive and, more importantly, non-byte-aligned 6 × 10 font, which (because of the masking required) is drawn 2.5 times slower than a byte-aligned 8 × 10 font. The Corvette, Okean and Vector use a 512 × 256 screen, so even with a lower CPU speed and a larger screen buffer their text output is faster and better looking, and the raster fills the whole screen, whereas on the Orion it fills only part of it. The lack of a hardware sound generator is also sometimes cited as a disadvantage (sound is generated purely in software, at a heavy processor load); the designers accepted this because the gaming niche in the country was already occupied by ZX Spectrum clones.
The lack of hardware screen scrolling, contrary to some reviews, is not really a disadvantage: thanks to the vertically linear organization of the screen, vertical scrolling performed through the stack is fast enough, and horizontal scrolling is simply not needed. References Soviet computer systems
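A minimal sketch of the linear screen addressing described in the Peculiarities section above, assuming the layout in which the low address byte selects the scan line and the high byte selects the horizontal byte column; the base address used here is illustrative, not taken from the article:

```python
SCREEN_BASE = 0xC000        # illustrative base address for one screen buffer
BYTES_PER_ROW = 384 // 8    # 48 byte columns, 8 pixels per screen byte
ROWS = 256

def screen_address(x: int, y: int) -> int:
    """Address of the screen byte holding pixel column x on scan line y.

    Low address byte = vertical position, high address byte = horizontal
    byte column, as described for the Orion-128 above.
    """
    byte_column = x // 8
    assert 0 <= byte_column < BYTES_PER_ROW and 0 <= y < ROWS
    return SCREEN_BASE + (byte_column << 8) + y

# Consecutive scan lines in the same column differ only in the low address
# byte, which is why vertical fills and stack-based vertical scrolling are
# cheap on an 8080-class CPU.
```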
1155559
https://en.wikipedia.org/wiki/Business%20process%20modeling
Business process modeling
Business process modeling (BPM) in business process management and systems engineering is the activity of representing processes of an enterprise, so that the current business processes may be analyzed, improved, and automated. BPM is typically performed by business analysts, who provide expertise in the modeling discipline; by subject matter experts, who have specialized knowledge of the processes being modeled; or more commonly by a team comprising both. Alternatively, the process model can be derived directly from event logs using process mining tools. The business objective is often to increase process speed or reduce cycle time; to increase quality; or to reduce costs, such as labor, materials, scrap, or capital costs. In practice, a management decision to invest in business process modeling is often motivated by the need to document requirements for an information technology project. Change management programs are typically involved to put any improved business processes into practice. With advances in software design, the vision of BPM models becoming fully executable (and capable of simulations and round-trip engineering) is coming closer to reality. History Techniques to model business processes such as the flow chart, functional flow block diagram, control flow diagram, Gantt chart, PERT diagram, and IDEF have emerged since the beginning of the 20th century. The Gantt charts were among the first to arrive around 1899, the flow charts in the 1920s, Functional Flow Block Diagram and PERT in the 1950s, Data Flow Diagrams and IDEF in the 1970s. Among the modern methods are Unified Modeling Language and Business Process Model and Notation. Still, these represent just a fraction of the methodologies used over the years to document business processes. The term 'business process modeling' was coined in the 1960s in the field of systems engineering by S. Williams in his 1967 article 'Business Process Modelling Improves Administrative Control'. His idea was that techniques for obtaining a better understanding of physical control systems could be used in a similar way for business processes. It was not until the 1990s that the term became popular. In the 1990s the term 'process' became a new productivity paradigm. Companies were encouraged to think in processes instead of functions and procedures. Process thinking looks at the chain of events in the company from purchase to supply, from order retrieval to sales, etc. The traditional modeling tools were developed to illustrate time and cost, while modern tools focus on cross-functional activities. These cross-functional activities have increased significantly in number and importance, due to the growth of complexity and dependence. New methodologies include business process redesign, business process innovation, business process management, integrated business planning, among others, all "aiming at improving processes across the traditional functions that comprise a company". In the field of software engineering, the term 'business process modeling' was contrasted with the more common software process modeling, aiming to focus more on the state of the practice during software development (Brian C. Warboys (1994), Software Process Technology: Third European Workshop EWSPT'94, Villard de Lans, France, February 7–9, 1994: Proceedings, p. 252). At that time (the early 1990s) all existing and new modeling techniques to illustrate business processes were consolidated as 'business process modeling languages'.
In the Object Oriented approach, it was considered to be an essential step in the specification of business application systems. Business process modeling became the base of new methodologies, for instance, those that supported data collection, data flow analysis, process flow diagrams, and reporting facilities. Around 1995, the first visually oriented tools for business process modeling and implementation were being presented. Topics Business model A business model is a framework for creating economic, social, and/or other forms of value. The term 'business model' is thus used for a broad range of informal and formal descriptions to represent core aspects of a business, including purpose, offerings, strategies, infrastructure, organizational structures, trading practices, and operational processes and policies. In the most basic sense, a business model is a method of doing business by which a company can sustain itself. That is, generate revenue. The business model spells-out how a company makes money by specifying where it is positioned in the value chain. Business process A business process is a collection of related, structured activities or tasks that produce a specific service or product (serve a particular goal) for a particular customer or customers. There are three main types of business processes: Management processes, that govern the operation of a system. Typical management processes include corporate governance and strategic management. Operational processes, that constitute the core business and create the primary value stream. Typical operational processes are purchasing, manufacturing, marketing, and sales. Supporting processes, that support the core processes. Examples include accounting, recruitment, and technical support. A business process can be decomposed into several sub-processes, which have their own attributes but also contribute to achieving the goal of the super-process. The analysis of business processes typically includes the mapping of processes and sub-processes down to activity level. A business process model is a model of one or more business processes and defines the ways in which operations are carried out to accomplish the intended objectives of an organization. Such a model remains an abstraction and depends on the intended use of the model. It can describe the workflow or the integration between business processes. It can be constructed in multiple levels. A workflow is a depiction of a sequence of operations, declared as work of a person, of a simple or complex mechanism, of a group of persons, of an organization of staff, or of machines. The workflow may be seen as any abstraction of real work, segregated into workshare, work split or other types of ordering. For control purposes, the workflow may be a view of real work under a chosen aspect. Artifact-centric business process The artifact-centric business process model has emerged as a holistic approach for modeling business processes, as it provides a highly flexible solution to capture operational specifications of business processes. It particularly focuses on describing the data of business processes, known as "artifacts", by characterizing business-relevant data objects, their life-cycles, and related services. The artifact-centric process modelling approach fosters the automation of the business operations and supports the flexibility of the workflow enactment and evolution. 
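The notion of a business process model described above, a set of named activities and the flow relations between them, can be sketched with a small data structure; this is only an illustration, not any standard notation such as BPMN:

```python
from dataclasses import dataclass, field

@dataclass
class ProcessModel:
    """A toy business process model: named activities plus directed flows."""
    name: str
    activities: set = field(default_factory=set)
    flows: list = field(default_factory=list)  # (source activity, target activity)

    def add_flow(self, source: str, target: str) -> None:
        self.activities.update((source, target))
        self.flows.append((source, target))

# An "as is" order-handling sketch; the activity names are invented for illustration.
order = ProcessModel("Order handling")
order.add_flow("Receive order", "Check credit")
order.add_flow("Check credit", "Ship goods")
order.add_flow("Ship goods", "Invoice customer")
```

An "as is" model and a "to be" model built this way can then be compared activity by activity, which is the comparison discussed under Business process integration below.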
Tools Business process modelling tools provide business users with the ability to model their business processes, implement and execute those models, and refine the models based on as-executed data. As a result, business process modelling tools can provide transparency into business processes, as well as the centralization of corporate business process models and execution metrics. Modelling tools may also enable collaborative modelling of complex processes by users working in teams, where users can share and simulate models collaboratively. Business process modelling tools should not be confused with business process automation systems: both practices share modelling the process as the initial step, but process automation produces an executable diagram, which is drastically different from the output of traditional graphical business process modelling tools. Modelling and simulation Modelling and simulation functionality allows for pre-execution "what-if" modelling and simulation. Post-execution optimization is available based on the analysis of actual as-performed metrics. Some business process modelling techniques are: Use case diagrams, created by Ivar Jacobson, 1992 (integrated in UML) Activity diagrams (also adopted by UML) Business Process Model and Notation (BPMN) Life-cycle Modelling Language (LML) Subject-oriented business process management (S-BPM) Cognition enhanced Natural language Information Analysis Method (CogNIAM) Extended Business Modelling Language (xBML) Event-driven process chain (EPC) ICAM DEFinition (IDEF0) Unified Modelling Language (UML), extensions for business process Formalized Administrative Notation (FAN) Harbarian process modeling (HPM) Programming language tools BPM suite software provides programming interfaces (web services, application program interfaces (APIs)) which allow enterprise applications to be built to leverage the BPM engine. This component is often referenced as the engine of the BPM suite. Programming languages that are being introduced for BPM include the Business Process Execution Language (BPEL), the Web Services Choreography Description Language (WS-CDL) and the XML Process Definition Language (XPDL), as well as some vendor-specific languages: Architecture of Integrated Information Systems (ARIS), which supports EPC, and the Java Process Definition Language (JBPM). Other technologies related to business process modelling include model-driven architecture and service-oriented architecture. See also Business reference model A business reference model is a reference model, concentrating on the functional and organizational aspects of an enterprise, service organization or government agency. In general a reference model is a model that embodies the basic goal or idea of something and can then be looked at as a reference for various purposes. A business reference model is a means to describe the business operations of an organization, independent of the organizational structure that performs them. Other types of business reference model can also depict the relationship between the business processes, business functions, and the business area's business reference model. These reference models can be constructed in layers, and offer a foundation for the analysis of service components, technology, data, and performance. The most familiar business reference model is the Business Reference Model of the US federal government.
That model is a function-driven framework for describing the business operations of the federal government independent of the agencies that perform them. The Business Reference Model provides an organized, hierarchical construct for describing the day-to-day business operations of the federal government. While many models exist for describing organizations – organizational charts, location maps, etc. – this model presents the business using a functionally driven approach. Business process integration A business model, which may be considered an elaboration of a business process model, typically shows business data and business organizations as well as business processes. By showing business processes and their information flows, a business model allows business stakeholders to define, understand, and validate their business enterprise. The data model part of the business model shows how business information is stored, which is useful for developing software code. See the figure on the right for an example of the interaction between business process models and data models. Usually a business model is created after conducting an interview, which is part of the business analysis process. The interview consists of a facilitator asking a series of questions to extract information about the subject business process. The interviewer is referred to as a facilitator to emphasize that it is the participants, not the facilitator, who provide the business process information. Although the facilitator should have some knowledge of the subject business process, but this is not as important as the mastery of a pragmatic and rigorous method interviewing business experts. The method is important because for most enterprises a team of facilitators is needed to collect information across the enterprise, and the findings of all the interviewers must be compiled and integrated once completed. Business models are developed as defining either the current state of the process, in which case the final product is called the "as is" snapshot model, or a concept of what the process should become, resulting in a "to be" model. By comparing and contrasting "as is" and "to be" models the business analysts can determine if the existing business processes and information systems are sound and only need minor modifications, or if reengineering is required to correct problems or improve efficiency. Consequently, business process modeling and subsequent analysis can be used to fundamentally reshape the way an enterprise conducts its operations. Business process re-engineering Business process reengineering (BPR) aims to improve the efficiency and effectiveness of the processes that exist within and across organizations. It examines business processes from a "clean slate" perspective to determine how best to construct them. Business process re-engineering (BPR) began as a private sector technique to help organizations fundamentally rethink how they do their work. A key stimulus for re-engineering has been the development and deployment of sophisticated information systems and networks. Leading organizations use this technology to support innovative business processes, rather than refining current ways of doing work. Business process management Business process management is a field of management focused on aligning organizations with the wants and needs of clients. 
It is a holistic management approach that promotes business effectiveness and efficiency while striving for innovation, flexibility and integration with technology. As organizations strive for attainment of their objectives, business process management attempts to continuously improve processes – the process of defining, measuring and improving an organization's processes, a "process optimization" process. See also Artifact-centric business process model Business architecture Business Model Canvas Business plan Business process mapping Business Process Model and Notation Capability Maturity Model Integration Drakon-chart Generalised Enterprise Reference Architecture and Methodology Model Driven Engineering Value Stream Mapping PinpointBPS References Further reading Aguilar-Saven, Ruth Sara. "Business process modelling: Review and framework." International Journal of Production Economics 90.2 (2004): 129–149. Becker, Jörg, Michael Rosemann, and Christoph von Uthmann. "Guidelines of business process modelling." Business Process Management. Springer Berlin Heidelberg, 2000. 30–49. Hommes, L.J. The Evaluation of Business Process Modelling Techniques. Doctoral thesis, Technische Universiteit Delft. Håvard D. Jørgensen (2004). Interactive Process Models. Thesis, Norwegian University of Science and Technology, Trondheim, Norway. Manuel Laguna, Johan Marklund (2004). Business Process Modeling, Simulation, and Design. Pearson/Prentice Hall, 2004. Ovidiu S. Noran (2000). Business Modelling: UML vs. IDEF. Paper, Griffith University. Jan Recker (2005). "Process Modelling in the 21st Century". In: BP Trends, May 2005. Ryan K. L. Ko, Stephen S. G. Lee, Eng Wah Lee (2009). Business Process Management (BPM) Standards: A Survey. In: Business Process Management Journal, Emerald Group Publishing Limited. Volume 15, Issue 5. ISSN 1463-7154. Jan Vanthienen, S. Goedertier and R. Haesen (2007). "EM-BrA2CE v0.1: A vocabulary and execution model for declarative business process modelling". DTEW - KBI_0728. External links
20983659
https://en.wikipedia.org/wiki/List%20of%20Japanese%20inventions%20and%20discoveries
List of Japanese inventions and discoveries
This is a list of Japanese inventions and discoveries. The Japanese have made contributions across a number of scientific and technological domains. In particular, the country has played a crucial role in the digital revolution since the 20th century, with many modern revolutionary and widespread technologies in fields such as electronics and robotics introduced by Japanese inventors and entrepreneurs. Arts Comic book Adam L. Kern has suggested that kibyoshi, picture books from the late 18th century, may have been the world's first comic books. These graphical narratives share with modern manga humorous, satirical, and romantic themes. Some works were mass-produced as serials using woodblock printing. Folding hand fan In ancient Japan, the first hand fans were oval and rigid fans, influenced greatly by Chinese fans. The earliest visual depiction of fans in Japan dates back to the 6th century AD, with burial tomb paintings showed drawings of fans. The folding fan was invented in Japan, with dates ranging from the 6th to 9th centuries and later exported to East Asia, Southeast Asia, and the West. Such a flourishing trade involving Japanese hand fans existed in the Ming dynasty times, when folding fans almost absolutely displaced the old rigid type in China. Manga The history of manga has origins in scrolls dating back to the 12th century, and it is believed they represent the basis for the right-to-left reading style. During the Edo period (1603–1867), Toba Ehon embedded the concept of manga. The word itself first came into common usage in 1798, with the publication of works such as Santō Kyōden's picturebook Shiji no yukikai (1798), and in the early 19th century with such works as Aikawa Minwa's Manga hyakujo (1814) and the Hokusai Manga books (1814–1834). Revolving stage Invented for the Kabuki theatre in Japan in the 18th century, the revolving stage was introduced into Western theater at the Residenz theatre in Munich in 1896 under the influence of japonism fever. Film and animation Anime Japanese animation, or anime, today widely popular both in Japan and abroad, began in the early 20th century. Man with No Name A stock character that originated with Akira Kurosawa's Yojimbo (1961), where the archetype was first portrayed by Toshirō Mifune. The archetype was adapted by Sergio Leone for his Spaghetti Western Dollars Trilogy (1964–1966), with Clint Eastwood playing the role of the "Man with No Name" in Japan. The first depiction of mecha Super Robots being piloted by a user from within a cockpit was introduced in the manga and anime series Mazinger Z by Go Nagai in 1972. Postcyberpunk animation/film The first postcyberpunk media work in an animated/film format was Ghost in the Shell: Stand Alone Complex in 2002. It has been called "the most interesting, sustained postcyberpunk media work in existence." Steampunk animation The earliest examples of steampunk animation are Hayao Miyazaki's anime works Future Boy Conan (1978), Nausicaä of the Valley of the Wind (1984) and Castle in the Sky (1986). Superflat A postmodern art form, founded by the artist Takashi Murakami, which is influenced by manga and anime. Architecture Japanese castle Fortresses constructed primarily out of stone and wood used for military defence in strategic locations. 
Metabolism A post-war Japanese architectural movement developed by a wide variety of Japanese architects including Kiyonori Kikutake, Kisho Kurokawa and Fumihiko Maki, Metabolism aimed to fuse ideas about architectural megastructures with those of organic biological growth. Tahōtō Tahōtō is a form of Japanese pagoda found primarily at Esoteric Shingon and Tendai school Buddhist temples. Unlike most pagodas, it has two stories. Capsule hotel The first capsule hotel in the world opened in 1979 and was the Capsule Inn Osaka, located in the Umeda district of Osaka, Japan and designed by Kisho Kurokawa. From there, it spread to other cities within Japan. Since then, the concept has further spread to various other territories, including Belgium, China, Hong Kong, Iceland, India, Indonesia, and Poland. Atmospheric sciences Downburst Downbursts, strong ground-level wind systems that emanate from a point above and blow radially, were discovered by Ted Fujita. Fujita scale The first scale designed to measure tornado intensity, the Fujita scale, was first introduced by Ted Fujita (in collaboration with Allen Pearson) in 1971. The scale was widely adopted throughout the world until the development of the Enhanced Fujita scale. Fujiwhara effect The Fujiwhara effect is an atmospheric phenomenon where two nearby cyclonic vortices orbit each other and close the distance between the circulations of their corresponding low-pressure areas. The effect was first described by Sakuhei Fujiwhara in 1921. Jet stream Jet streams were first discovered by Japanese meteorologist Wasaburo Oishi by tracking ceiling balloons. However, Oishi's work largely went unnoticed outside Japan because it was published in Esperanto. Microburst The microburst was first discovered and identified as a small scale downburst affecting an area 4 km (2.5 mi) in diameter or less by Ted Fujita in 1974. Microbursts are recognized as capable of generating wind speeds higher than 270 km/h (170 mph). In addition, Fujita also discovered macrobursts and classified them as downbursts larger than 4 km (2.5 mi). Sports Drifting competition In 1988, Keiichi Tsuchiya alongside Option magazine founder and chief editor Daijiro Inada organised the first contest specifically for sliding a car sideways. In 1996, Option organized the first contest outside Japan which began to spread to other countries. Ekiden (Road Relay) Gateball Keirin Started as a gambling sport in 1948 and became an Olympic sport in 2000. Martial arts Aikido Aikido was created and developed by Morihei Ueshiba in first half of the 20th century. Judo It was created as a physical, mental and moral pedagogy in Japan, in 1882, by Kanō Jigorō. Jujutsu Jujutsu, the "way of yielding", is a collective name for Japanese martial art styles including unarmed and armed techniques. Jujutsu evolved among the samurai of feudal Japan as a method for defeating an armed and armored opponent without weapons. Due to the ineffectiveness of striking against an armored opponent, the most efficient methods for neutralizing an enemy took the form of pins, joint locks, and throws. These techniques were developed around the principle of using an attacker's energy against him, rather than directly opposing it. Karate It began as a common fighting system known as "ti" (or "te") among the pechin class of the Ryukyuans. There were few formal styles of ti, but rather many practitioners with their own methods. One surviving example is the Motobu-ryū school passed down from the Motobu family by Seikichi Uehara. 
Early styles of karate are often generalized as Shuri-te, Naha-te, and Tomari-te, named after the three cities from which they emerged. Kendo Ninjutsu Developed by groups of people mainly from the Iga Province and Kōka, Shiga of Japan. Throughout history, many different schools (ryū) have taught their unique versions of ninjutsu. An example of these is the Togakure-ryū. This ryū was developed after a defeated samurai warrior called Daisuke Togakure escaped to the region of Iga. Later he came in contact with the warrior-monk Kain Doshi who taught him a new way of viewing life and the means of survival (ninjutsu). Okinawan martial arts In the 14th century, when the three kingdoms on Okinawa (Chūzan, Hokuzan, and Nanzan) entered into a tributary relationship with the Ming Dynasty of China, Chinese Imperial envoys and other Chinese arrived, some of whom taught Chinese Chuan Fa (Kempo) to the Okinawans. The Okinawans combined Chinese Chuan Fa with the existing martial art of Te to form , sometimes called . By the 18th century, different types of Te had developed in three different villages – Naha, Shuri, and Tomari. The styles were named Naha-te, Shuri-te, and Tomari-te, respectively. Practitioners from these three villages went on to develop modern karate. Sumo Sumo is said to have started in the Heian period (794–1192). The imperial family watches sumo as a form of entertainment. It has evolved over the centuries with professional sumo wrestlers appearing in the Edo period (1603–1868). The word sumo is written with the Chinese characters or Kanji of “mutual bruising." Video games PlayStation The first Sony PlayStation was invented by Ken Kutaragi. Research and development for the PlayStation began in 1990, headed by Kutaragi, a Sony engineer. Nintendo Gunpei Yokoi was the creator of the Game Boy and Virtual Boy and worked on Famicom (and NES), the Metroid series, Game Boy Pocket and did extensive work on the system we know today as the Nintendo Entertainment System. Active Time Battle Hiroyuki Ito introduced the "Active Time Battle" system in Final Fantasy IV (1991), where the time-keeping system does not stop. Square Co., Ltd. filed a United States patent application for the ATB system on March 16, 1992, under the title "Video game apparatus, method and device for controlling same" and was awarded the patent on February 21, 1995. On the battle screen, each character has an ATB meter that gradually fills, and the player is allowed to issue a command to that character once the meter is full. The fact that enemies can attack or be attacked at any time is credited with injecting urgency and excitement into the combat system. Beat 'em up The first game to feature fist fighting was Sega's boxing game Heavyweight Champ (1976), but it was Data East's fighting game Karate Champ (1984) which popularized martial arts themed games. The same year, Hong Kong cinema-inspired Kung-Fu Master laid the foundations for scrolling beat 'em ups with its simple gameplay and multiple enemies. Nekketsu Kōha Kunio-kun, released in 1986 in Japan, deviated from the martial arts themes of earlier games and introduced street brawling to the genre. Renegade (released the same year) added an underworld revenge plot that proved more popular with gamers than the principled combat sport of other games. Renegade set the standard for future beat 'em up games as it introduced the ability to move both horizontally and vertically. 
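The Active Time Battle system described above, a per-character meter that fills in real time and unlocks a command once full, can be sketched as follows; the fill rates and character names are illustrative and not taken from Square's implementation:

```python
import time

# Fill rates are invented for illustration; in the games they depend on a speed statistic.
FILL_PER_SECOND = {"Fighter": 40.0, "Mage": 25.0}
meters = {name: 0.0 for name in FILL_PER_SECOND}

def tick(dt: float) -> list:
    """Advance every ATB meter by dt seconds and return characters ready to act."""
    ready = []
    for name in meters:
        meters[name] = min(100.0, meters[name] + FILL_PER_SECOND[name] * dt)
        if meters[name] >= 100.0:
            ready.append(name)
            meters[name] = 0.0  # issuing a command resets the meter
    return ready

for _ in range(10):              # a five-second slice of battle time
    for name in tick(0.5):
        print(name, "may now be given a command")
    time.sleep(0.5)
```

Because the meters keep filling whether or not the player is choosing a command, enemies can act at any time, which is the sense in which the time-keeping system "does not stop".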
Bullet hell The bullet hell or danmaku genre began to emerge in the early 1990s as 2D developers needed to find a way to compete with 3D games which were becoming increasingly popular at the time. Toaplan's Batsugun (1993) is considered to be the ancestor of the modern bullet hell genre. The Touhou Project series is one of the most popular bullet hell franchises. Fighting game Sega's black and white boxing game Heavyweight Champ was released in 1976 as the first video game to feature fist fighting. However, Data East's Karate Champ from 1984 is credited with establishing and popularizing the one-on-one fighting game genre, and went on to influence Konami's Yie Ar Kung-Fu from 1985. Yie Ar Kung Fu expanded on Karate Champ by pitting the player against a variety of opponents, each with a unique appearance and fighting style. Capcom's Street Fighter (1987) introduced the use of special moves that could only be discovered by experimenting with the game controls. Street Fighter II (1991) established the conventions of the fighting game genre and, whereas previous games allowed players to combat computer-controlled fighters, Street Fighter II allowed players to play against each other. Platform game Space Panic, a 1980 arcade release, is sometimes credited as the first platform game. It was clearly an influence on the genre, with gameplay centered on climbing ladders between different floors, a common element in many early platform games. Donkey Kong, an arcade game created by Nintendo, released in July 1981, was the first game that allowed players to jump over obstacles and across gaps, making it the first true platformer. Psychological horror game Silent Hill (1999) was praised for moving away survival horror games from B movie horror elements to the psychological style seen in art house or Japanese horror films, due to the game's emphasis on a disturbing atmosphere rather than visceral horror. The original Silent Hill is considered one of the scariest games of all time, and the strong narrative from Silent Hill 2 in 2001 has made the series one of the most influential in the genre. Fatal Frame from 2001 was a unique entry into the genre, as the player explores a mansion and takes photographs of ghosts in order to defeat them. Rhythm game Dance Aerobics was released in 1987, and allowed players to create music by stepping on Nintendo's Power Pad peripheral. It has been called the first rhythm-action game in retrospect, although the 1996 title PaRappa the Rapper has also been deemed the first rhythm game, whose basic template forms the core of subsequent games in the genre. In 1997, Konami's Beatmania sparked an emergent market for rhythm games in Japan. The company's music division, Bemani, released a number of music games over the next several years. Scrolling platformer The first platform game to use scrolling graphics was Jump Bug (1981), a simple platform-shooter developed by Alpha Denshi. In August 1982, Taito released Jungle King, which featured scrolling jump and run sequences that had players hopping over obstacles. Namco took the scrolling platformer a step further with the 1984 release Pac-Land. Pac-Land came after the genre had a few years to develop, and was an evolution of earlier platform games, aspiring to be more than a simple game of hurdle jumping, like some of its predecessors. It closely resembled later scrolling platformers like Wonder Boy and Super Mario Bros and was probably a direct influence on them. It also had multi-layered parallax scrolling. 
Shoot 'em up Space Invaders is frequently cited as the "first" or "original" in the genre. Space Invaders pitted the player against multiple enemies descending from the top of the screen at a constantly increasing speed. As with subsequent shoot 'em ups of the time, the game was set in space as the available technology only permitted a black background. The game also introduced the idea of giving the player a number of "lives". Space Invaders was a massive commercial success, causing a coin shortage in Japan. The following year, Namco's Galaxian took the genre further with more complex enemy patterns and richer graphics. Stealth game The first stealth-based videogame was Hiroshi Suzuki's Manbiki Shounen (1979). The first commercially successful stealth game was Hideo Kojima's Metal Gear (1987), the first in the Metal Gear series. It was followed by Metal Gear 2: Solid Snake (1990) which significantly expanded the genre, and then Metal Gear Solid (1998). Survival horror The term survival horror was coined by Capcom's Resident Evil (1996) and definitely defined that genre. The game was inspired by Capcom's earlier horror game Sweet Home (1989). The earliest survival horror game was Nostromo, developed by Akira Takiguchi (a Tokyo University student and Taito contractor) for the PET 2001 and published by ASCII for the PC-6001 in 1981. Visual novel The visual novel genre is a type of Interactive fiction developed in Japan in the early 1990s. As the name suggests, visual novels typically have limited interactivity, as most player interaction is restricted to clicking text and graphics. Philosophy Lean manufacturing A generic process management philosophy derived mostly from the Toyota Production System (TPS) (hence the term Toyotism is also prevalent) and identified as "Lean" only in the 1990s. Biology, chemistry, and biomedical science Agar Agar was discovered in Japan around 1658 by Mino Tarōzaemon. Aspergillus oryzae The genome for Aspergillus oryzae was sequenced and released by a consortium of Japanese biotechnology companies, in late 2005. CRISPR Yoshizumi Ishino discovered CRISPR in 1987. Dementia with Lewy bodies First described in 1976 by psychiatrist Kenji Kosaka. Kosaka was awarded the Asahi Prize in 2013 for his discovery. Ephedrine synthesis Ephedrine in its natural form, known as má huáng (麻黄) in traditional Chinese medicine, had been documented in China since the Han dynasty. However, it was not until 1885 that the chemical synthesis of ephedrine was first accomplished by Japanese organic chemist Nagai Nagayoshi. Epinephrine (Adrenaline) Japanese chemist Jōkichi Takamine and his assistant Keizo Uenaka first discovered epinephrine in 1900. In 1901 Takamine successfully isolated and purified the hormone from the adrenal glands of sheep and oxen. Esophagogastroduodenoscope Mutsuo Sugiura was a Japanese engineer famous for being the first to develop a Gastro-camera (a present-day Esophagogastroduodenoscope). His story was illustrated in the NHK TV documentary feature, "Project X: Challengers: The Development of a Gastro-camera Wholly Made in Japan". Sugiura graduated from Tokyo Polytechnic University in 1938 and then joined Olympus Corporation. While working at this company, he first developed an esophagogastroduodenoscope in 1950. Frontier molecular orbital theory Kenichi Fukui developed and published a paper on Frontier molecular orbital theory in 1952. 
General anesthesia Hanaoka Seishū was the first surgeon in the world who used the general anaesthesia in surgery, in 1804, and who dared to operate on cancers of the breast and oropharynx, to remove necrotic bone, and to perform amputations of the extremities in Japan. Immunoglobulin E (IgE) Immunoglobulin E is a type of antibody only found in mammals. IgE was simultaneously discovered in 1966-7 by two independent groups: Kimishige Ishizaka's team at the Children's Asthma Research Institute and Hospital in Denver, Colorado, and by Gunnar Johansson and Hans Bennich in Uppsala, Sweden. Their joint paper was published in April 1969. Induced pluripotent stem cell The induced pluripotent stem cell (iPSCs) is a kind of pluripotent stem cell which can be created using a mature cell. iPSCs technology was developed by Shinya Yamanaka and his lab workers in 2006. Methamphetamine Methamphetamine was first synthesized from ephedrine in Japan in 1894 by chemist Nagayoshi Nagai. In 1919, methamphetamine hydrochloride was synthesized by pharmacologist Akira Ogata. Nihonium Element 113. Named after Nihon, the local name for Japan. Okazaki fragment Okazaki fragments are short, newly synthesized DNA fragments that are formed on the lagging template strand during DNA replication. They are complementary to the lagging template strand, together forming short double-stranded DNA sections. A series of experiments led to the discovery of Okazaki fragments. The experiments were conducted during the 1960s by Reiji Okazaki, Tsuneko Okazaki, Kiwako Sakabe, and their colleagues during their research on DNA replication of Escherichia coli. In 1966, Kiwako Sakabe and Reiji Okazaki first showed that DNA replication was a discontinuous process involving fragments. The fragments were further investigated by the researchers and their colleagues through their research including the study on bacteriophage DNA replication in Escherichia coli. Photocatalysis Akira Fujishima discovered photocatalysis occurring on the surface of titanium dioxide in 1967. Pulse oximetry Pulse oximetry was developed in 1972, by Takuo Aoyagi and Michio Kishi, bioengineers, at Nihon Kohden using the ratio of red to infrared light absorption of pulsating components at the measuring site. Susumu Nakajima, a surgeon, and his associates first tested the device in patients, reporting it in 1975. Portable electrocardiograph Taro Takemi built the first portable electrocardiograph in 1937. Statin The statin class of drugs was first discovered by Akira Endo, a Japanese biochemist working for the pharmaceutical company Sankyo. Mevastatin was the first discovered member of the statin class. Takadiastase A form of diastase which results from the growth, development and nutrition of a distinct microscopic fungus known as Aspergillus oryzae. Jōkichi Takamine developed the method first used for its extraction in the late 19th century. Thiamine (Vitamin B1) Thiamine was the first of the water-soluble vitamins to be described, leading to the discovery of more such trace compounds essential for survival and to the notion of vitamin. It was not until 1884 that Kanehiro Takaki (1849–1920) attributed beriberi to insufficient nitrogen intake (protein deficiency). In 1910, Japanese scientist Umetaro Suzuki succeeded in extracting a water-soluble complex of micronutrients from rice bran and named it aberic acid. He published this discovery in a Japanese scientific journal. 
The Polish biochemist Kazimierz Funk later proposed the complex be named "Vitamine" (a portmanteau of "vital amine") in 1912. Urushiol Urushiol, a mixture of alkyl catechols, was discovered by Rikou Majima. Majima also discovered that urushiol was an allergen which gave members of the genus Toxicodendron, such as poison ivy and poison oak, their skin-irritating properties. Vectorcardiography Taro Takemi invented the vectorcardiograph in 1939. Finance Futures contract The first futures exchange market was the Dōjima Rice Exchange in Japan in the 1730s. Candlestick chart Candlestick charts were developed in the 18th century by Munehisa Homma, a Japanese rice trader of financial instruments. They were introduced to the Western world by Steve Nison in his book, Japanese Candlestick Charting Techniques. Food and food science Instant noodle Invented by Momofuku Ando, a Taiwanese-Japanese inventor, in 1958. Monosodium glutamate Invented and patented by Kikunae Ikeda. Umami Umami as a separate taste was first identified in 1908 by Kikunae Ikeda of Tokyo Imperial University while researching the strong flavor in seaweed broth. Fortune cookie Although popular in Western Chinese restaurants, fortune cookies did not originate in China and are in fact rare there. They most likely originated from cookies made by Japanese immigrants to the United States in the late 19th or early 20th century. The Japanese version had a fortune, but not lucky numbers, and was commonly eaten with tea. Mathematics Bernoulli number Studied by Seki Kōwa and published after his death, in 1712. Jacob Bernoulli independently developed the concept in the same period, though his work was published a year later. Determinant In Japan, determinants were introduced to study the elimination of variables in systems of higher-order algebraic equations. They were used to give a shorthand representation for the resultant. The determinant as an independent function was first studied by Seki Kōwa in 1683. Elimination theory In 1683 (Kai-Fukudai-no-Hō), Seki Kōwa came up with elimination theory, based on the resultant. To express the resultant, he developed the notion of the determinant. Hironaka's example Hironaka's example is a non-Kähler complex manifold that is a deformation of Kähler manifolds, discovered by Heisuke Hironaka. Itô calculus Developed by Kiyosi Itô throughout the 20th century, Itô calculus extends calculus to stochastic processes such as Brownian motion (Wiener process). Its basic concept is the Itô integral, and among the most important results is a change of variable formula known as Itô's lemma. Itô calculus is widely applied in various fields, but is perhaps best known for its use in mathematical finance. Iwasawa theory and the Main conjecture of Iwasawa theory Initially created by Kenkichi Iwasawa, Iwasawa theory was originally developed as a Galois module theory of ideal class groups. The main conjecture of Iwasawa theory is a deep relationship between p-adic L-functions and ideal class groups of cyclotomic fields, proved by Iwasawa for primes satisfying the Kummer–Vandiver conjecture and proved for all primes by Barry Mazur and Andrew Wiles. Resultant In 1683 (Kai-Fukudai-no-Hō), Seki Kōwa came up with elimination theory, based on the resultant. To express the resultant, he developed the notion of the determinant. Sangaku Japanese geometrical puzzles in Euclidean geometry on wooden tablets created during the Edo period (1603–1867) by members of all social classes.
The Dutch Japanologist Isaac Titsingh first introduced sangaku to the West when he returned to Europe in the late 1790s after more than twenty years in the Far East. Soddy's hexlet Irisawa Shintarō Hiroatsu analyzed Soddy's hexlet in a sangaku in 1822 and was the first person to do so. Takagi existence theorem The Takagi existence theorem was developed by Teiji Takagi in isolation during World War I. He presented it at the International Congress of Mathematicians in 1920. Physics Cabibbo–Kobayashi–Maskawa matrix Building off the work of Nicola Cabibbo, Makoto Kobayashi and Toshihide Maskawa introduced the Cabibbo–Kobayashi–Maskawa matrix, which describes quark mixing for three generations of quarks. In 2008, Kobayashi and Maskawa shared one half of the Nobel Prize in Physics "for the discovery of the origin of the broken symmetry which predicts the existence of at least three families of quarks in nature". Nagaoka model (first Saturnian model of the atom) In 1904, Hantaro Nagaoka proposed the first planetary model of the atom as an alternative to J. J. Thomson's plum pudding model. Ernest Rutherford and Niels Bohr would later develop the more viable Bohr model in 1913. Sakata model The Sakata model, proposed by Shoichi Sakata in 1956, was a precursor to the quark model. Technology Airsoft Airsoft originated in Japan, then spread to Hong Kong and China in the late 1970s. The inventor of the first airsoft gun was Tanio Kobayashi. Blue laser In 1992, Japanese inventor Shuji Nakamura invented the first efficient blue LED. Camera phone The world's first camera phone, the VP-210, was developed by Kyocera in 1999; it also had real-time video-call functionality and could send an email with a picture. TV Watch The world's first TV watch, the TV-Watch, was developed by Seiko in 1982. Japanese typewriter The first typewriter to be based on the Japanese writing system was invented by Kyota Sugimoto in 1929. KS steel Magnetic resistant steel that is three times more resistant than tungsten steel, invented by Kotaro Honda. MKM steel MKM steel, an alloy containing nickel and aluminum, was developed in 1931 by the Japanese metallurgist Tokuhichi Mishima. Neodymium magnet Neodymium magnets were invented independently in 1982 by General Motors (GM) and Sumitomo Special Metals. Double-coil bulb In 1921, Junichi Miura created the first double-coil bulb using a coiled-coil tungsten filament while working for Hakunetsusha (a predecessor of Toshiba). At the time, machinery to mass-produce coiled-coil filaments did not exist. Hakunetsusha developed a method to mass-produce coiled-coil filaments by 1936. QR code The QR code, a type of matrix barcode, was invented by Denso Wave in 1994. Tactile paving The original tactile paving was developed by Seiichi Miyake in 1965. The paving was first introduced on a street in Okayama city, Japan, in 1967. Its use gradually spread in Japan and then around the world. Audio technology Compact Disc player Sony released the world's first CD player, called the CDP-101, in 1982, using a slide-out tray design for the Compact Disc. Physical modelling synthesis The first commercially available physical modelling synthesizer was Yamaha's VL-1 in 1994. Commercial digital recording Commercial digital recording was pioneered in Japan by NHK and Nippon Columbia, also known as Denon, in the 1960s. The first commercial digital recordings were released in 1971. Karaoke There are various disputes about who first invented the name karaoke (a Japanese word meaning "empty orchestra").
One claim is that the karaoke styled machine was invented by Japanese musician Daisuke Inoue in Kobe, Japan, in 1971. Portable CD player Sony's Discman, released in 1984, was the first portable CD player. Perpendicular recording Perpendicular recording was first demonstrated in the late 19th century by Danish scientist Valdemar Poulsen, who was also the first person to demonstrate that sound could be recorded magnetically. There weren’t many advances in perpendicular recording until 1976 when Dr. Shun-ichi Iwasaki (president of the Tohoku Institute of Technology in Japan) verified the distinct density advantages in perpendicular recording. Then in 1978, Dr. T. Fujiwara began an intensive research and development program at the Toshiba Corporation that eventually resulted in the perfection of floppy disk media optimized for perpendicular recording and the first commercially available magnetic storage devices using the technique. Digital audio tape recorder In 1971, Heitaro Nakajima resigned from his post as head of NHK's Technical Research Laboratories and joined Sony. Four years earlier at NHK, Nakajima had commenced work on the digitization of sound and within two years had developed the first digital audio tape recorder. Direct-drive turntable Invented by Shuichi Obata, an engineer at Matsushita (now Panasonic), based in Osaka. In 1969, Matsushita released it as the SP-10, the first in their influential Technics series of turntables. The Technics SL-1100, released in 1971, was adopted by early hip hop DJs for turntablism, and the SL-1200 is still widely used by dance and hip hop DJs. Fully programmable drum machine The Roland TR-808, also known as the 808, introduced by Roland in 1980, was the first fully programmable drum machine. It was the first drum machine with the ability to program an entire percussion track from beginning to end, complete with breaks and rolls. Created by Ikutaro Kakehashi, the 808 has been fundamental to hip hop music and electronic dance music since the 1980s, making it one of the most influential inventions in popular music. Phaser effects pedal In 1968, Shin-ei's Uni-Vibe effects pedal, designed by audio engineer Fumio Mieda, incorporated phase shift and chorus effects, soon becoming favorite effects of guitarists such as Jimi Hendrix and Robin Trower. Vowel-Consonant synthesis A type of hybrid Digital-analogue synthesis first employed by the early Casiotone keyboards in the early 1980s. Batteries Lithium-ion battery Akira Yoshino invented the modern li-ion battery in 1985. In 1991, Sony and Asahi Kasei released the first commercial lithium-ion battery using Yoshino's design. Dry cell The world's first dry-battery was invented in Japan during the Meiji Era. The inventor was Sakizou Yai. Unfortunately, the company Yai founded no longer exists Calculators Pocket calculator The first portable calculators appeared in Japan in 1970, and were soon marketed around the world. These included the Sanyo ICC-0081 "Mini Calculator", the Canon Pocketronic, and the Sharp QT-8B "micro Compet". Sharp put in great efforts in size and power reduction and introduced in January 1971 the Sharp EL-8, also marketed as the Facit 1111, which was close to being a pocket calculator. It weighed about one pound, had a vacuum fluorescent display, and rechargeable NiCad batteries. The first truly pocket-sized electronic calculator was the Busicom LE-120A "HANDY", which was marketed early in 1971. 
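The pattern programming that the Roland TR-808 entry above describes, building a percussion track step by step, can be sketched as a simple step sequencer; the 16-step grid and instrument names are illustrative only:

```python
# One bar of 16 steps; 1 means the instrument fires on that step.
# The pattern values are invented for illustration.
PATTERN = {
    "kick":  [1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0],
    "snare": [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0],
    "hat":   [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1],
}

def play(pattern: dict, steps: int = 16) -> None:
    """Print which instruments trigger on each step, in order."""
    for step in range(steps):
        hits = [name for name, row in pattern.items() if row[step]]
        print(f"step {step + 1:2d}: {' + '.join(hits) if hits else '-'}")

play(PATTERN)
```

Chaining several such bars together, with breaks and rolls on selected steps, is what programming an entire percussion track from beginning to end amounts to.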
Cameras Digital single-lens reflex camera On August 25, 1981, Sony unveiled a prototype of the first still video camera, the Sony Mavica. This camera was an analog electronic camera that featured interchangeable lenses and an SLR viewfinder. At photokina in 1986, Nikon revealed a prototype analog electronic still SLR camera, the Nikon SVC, a forerunner of the digital SLR. The prototype body shared many features with the N8008. Portapak In 1967, Sony unveiled the first self-contained video tape analog recording system that was portable. Chindōgu Chindōgu is the Japanese art of inventing ingenious everyday gadgets that, on the face of it, seem like an ideal solution to a particular problem. However, Chindōgu has a distinctive feature: anyone actually attempting to use one of these inventions would find that it causes so many new problems, or such significant social embarrassment, that effectively it has no utility whatsoever. Thus, Chindōgu are sometimes described as "unuseless" – that is, they cannot be regarded as 'useless' in an absolute sense, since they do actually solve a problem; however, in practical terms, they cannot positively be called "useful." The term "Chindōgu" was coined by Kenji Kawakami. Domestic appliances Bladeless fan The first bladeless fan was patented by Toshiba in 1981. Bread machine The bread machine was developed and released in Japan in 1986 by the Matsushita Electric Industrial Company. Electric rice cooker Invented by designers at the Toshiba Corporation in the late 1940s. RFIQin An automatic cooking device, invented by Mamoru Imura and patented in 2007. Electronics Avalanche photodiode Invented by Jun-ichi Nishizawa in 1952. Continuous wave semiconductor laser Invented by Izuo Hayashi and Morton B. Panish in 1970. This led directly to the light sources in fiber-optic communication, laser printers, barcode readers, and optical disc drives, technologies that were commercialized by Japanese entrepreneurs. Fiber-optic communication While working at Tohoku University, Jun-ichi Nishizawa proposed the use of optical fibers for optical communication in 1963. Nishizawa invented other technologies that contributed to the development of optical fiber communications, such as the graded-index optical fiber as a channel for transmitting light from semiconductor lasers. Izuo Hayashi's invention of the continuous wave semiconductor laser in 1970 led directly to light sources in fiber-optic communication, commercialized by Japanese entrepreneurs. Glass integrated circuit Shunpei Yamazaki invented an integrated circuit made entirely from glass and with an 8-bit central processing unit. JFET (junction gate field-effect transistor) The first type of JFET was the static induction transistor (SIT), invented by Japanese engineers Jun-ichi Nishizawa and Y. Watanabe in 1950. The SIT is a type of JFET with a short channel length. Laptop Although Adam Osborne announced the Osborne 1 as the "first laptop/notebook", it is now regarded as a luggable portable computer, along with other portable computers such as the IBM 5100. Yukio Yokozawa, an employee of Suwa Seikosha, a branch of Seiko (now Seiko Epson), invented the first laptop/notebook computer in July 1980, receiving a patent for the invention. Seiko's notebook computer, known as the HC-20 in Japan, was announced in 1981. In North America, Epson introduced it as the Epson HX-20 in 1981, at the COMDEX computer show in Las Vegas, where it drew significant attention for its portability. 
It had a mass-market release in July 1982, as the HC-20 in Japan and as the Epson HX-20 in North America. It was the first notebook-sized handheld computer, the size of an A4 notebook and weighing . In 1983, the Sharp PC-5000 and Ampere WS-1 laptops from Japan featured a modern clamshell design. Microcomputer for Automotive Engine Toshiba developed a close relationship with Ford for the supply of rectifier diodes for automobile AC alternators. In March 1971, Ford unexpectedly sent a set bulky specifications asking Toshiba to join a project to make an electronic engine control (EEC) in response to US Clean Air Act (sometimes known as the Muskie Act). Microprocessor The concept of a single-chip microprocessor central processing unit (CPU) was conceived in a 1968 meeting in Japan between Sharp engineer Tadashi Sasaki and a software engineering researcher from Nara Women's College. Sasaki discussed the microprocessor concept with Busicom and Intel in 1968. The first commercial microprocessor, the 4-bit Intel 4004, began with the "Busicom Project" in 1968 as Masatoshi Shima's three-chip CPU design, which was simplified down to a single-chip microprocessor, designed from 1969 to 1970 by Intel's Marcian Hoff and Federico Faggin and Busicom's Masatoshi Shima, and commercially released in 1971. Parametron Eiichi Goto invented the parametron in 1954 as an alternative to the vacuum tube. Early Japanese computers used parametrons until they were superseded by transistors. PIN diode/photodiode Invented by Jun-ichi Nishizawa and his colleagues in 1950. Plastic central processing unit Shunpei Yamazaki invented a central processing unit made entirely from plastic. Quantum flux parametron Eiichi Goto invented the quantum flux parametron in 1986 using superconducting Josephson junctions on integrated circuits as an improvement over existing parametron technology. Radio-controlled wheel transmitter Futaba introduced the FP-T2F in 1974 that was the first to use a steering wheel onto a box transmitter. KO Propo introduced the EX-1 in 1981 that integrated a wheel with a pistol grip with its trigger acting as the throttle. This became one of the two types of radio controlled transmitters currently for surface use. Semiconductor laser Invented by Jun-ichi Nishizawa in 1957. Solid-state maser Invented by Jun-ichi Nishizawa in 1955. Static induction transistor Invented by Jun-ichi Nishizawa and Y. Watanabe in 1950. Stored-program transistor computer The ETL Mark III began development in 1954, and was completed in 1956, created by the Electrotechnical Laboratory. It was the first stored-program transistor computer. Switching circuit theory From 1934 to 1936, NEC engineer Akira Nakashima introduced switching circuit theory in a series of papers showing that two-valued Boolean algebra, which he discovered independently, can describe the operation of switching circuits. Videocassette recorder The first machines (the VP-1100 videocassette player and the VO-1700 videocassette recorder) to use the first videocassette format, U-matic, were introduced by Sony in 1971. Game controllers D-pad In 1982, Nintendo's Gunpei Yokoi elaborated on the idea of a circular pad, shrinking it and altering the points into the familiar modern "cross" design for control of on-screen characters in their Donkey Kong handheld game. It came to be known as the "D-pad". The design proved to be popular for subsequent Game & Watch titles. This particular design was patented. 
In 1984, the Japanese company Epoch created a handheld game system called the Epoch Game Pocket Computer. It featured a D-pad, but it was not popular in its time and soon faded. The D-pad was initially intended to be a compact controller for the Game & Watch handheld games alongside the prior non-connected style pad, but Nintendo realized that Yokoi's design would also be appropriate for regular consoles, and made the D-pad the standard directional control for the hugely successful Nintendo Entertainment System under the name "+Control Pad". Motion-sensing controller Invented by Nintendo for the Wii, the Wii Remote is the first controller with motion-sensing capability. It was a candidate for Time's Best Invention of 2006. Printing 3D printing In 1981, Hideo Kodama of Nagoya Municipal Industrial Research Institute invented two additive methods for fabricating three-dimensional plastic models with photo-hardening thermoset polymer, where the UV exposure area is controlled by a mask pattern or a scanning fiber transmitter. Hydrographics Hydrographics, also known variously as immersion printing, water transfer printing, water transfer imaging, hydro dipping, or cubic printing, has a somewhat fuzzy history. Three different Japanese companies are given credit for its invention. Taica Corporation claims to have invented cubic printing in 1974. However, the earliest hydrographic patent was filed by Motoyasu Nakanishi of Kabushiki Kaisha Cubic Engineering in 1982. Robotics Android Waseda University initiated the WABOT project in 1967, and in 1972 completed the WABOT-1, the world's first full-scale humanoid intelligent robot. Its limb control system allowed it to walk with the lower limbs, and to grip and transport objects with its hands, using tactile sensors. Its vision system allowed it to measure distances and directions to objects using external receptors, artificial eyes and ears. And its conversation system allowed it to communicate with a person in Japanese, with an artificial mouth. This made it the first android. Actroid DER 01 was developed by a Japanese research group, the Intelligent Robotics Lab, directed by Hiroshi Ishiguro at Osaka University, and Kokoro Co., Ltd. The Actroid is a humanoid robot with strong visual human-likeness developed by Osaka University and manufactured by Kokoro Company Ltd. (the animatronics division of Sanrio). It was first unveiled at the 2003 International Robot Exposition in Tokyo, Japan. The Actroid woman is a pioneering example of a real machine resembling the imagined machines that science fiction calls androids or gynoids, terms so far used only for fictional robots. It can mimic such lifelike functions as blinking, speaking, and breathing. The "Repliee" models are interactive robots with the ability to recognise and process speech and respond in kind. Karakuri puppets are traditional Japanese mechanized puppets or automata, originally made from the 17th century to the 19th century. The word karakuri means "mechanisms" or "trick". The dolls' gestures provided a form of entertainment. Three main types of karakuri exist: butai karakuri were used in theatre; zashiki karakuri were small and used in homes; and dashi karakuri were used in religious festivals, where the puppets were used to perform reenactments of traditional myths and legends. Robotic exoskeleton for motion support (medicine) The first HAL prototype was proposed by Yoshiyuki Sankai, a professor at Tsukuba University. 
Fascinated with robots since he was in the third grade, Sankai had striven to make a robotic suit in order “to support humans.” In 1989, after receiving his Ph.D. in robotics, he began the development of HAL. Sankai spent three years, from 1990 to 1993, mapping out the neurons that govern leg movement. It took him and his team an additional four years to make a prototype of the hardware. Space exploration Interplanetary solar sail spacecraft IKAROS the world's first successful interplanetary solar sail spacecraft was launched by JAXA on 21 May 2010. Storage technology Blu-ray Disc (along with other nations) After Shuji Nakamura's invention of practical blue laser diodes, Sony started two projects applying the new diodes: UDO (Ultra Density Optical) and DVR Blue (together with Pioneer), a format of rewritable discs which would eventually become the Blu-ray Disc. The Blu-ray Disc Association was founded by Massachusetts Institute of Technology along with nine companies: five from Japan, two from Korea, one from the Netherlands and one from France. Compact Disc (also Dutch company Philips) The compact disc was jointly developed by Philips (Joop Sinjou) and Sony (Toshitada Doi). Sony first publicly demonstrated an optical digital audio disc in September 1976. In September 1978, they demonstrated an optical digital audio disc with a 150 minute playing time, and with specifications of 44,056 Hz sampling rate, 16-bit linear resolution, cross-interleaved error correction code, that were similar to those of the Compact Disc they introduced in 1982. Digital video disc (also Dutch company Philips) The DVD, first developed in 1995, resulted from a cooperation between three Japanese companies (Sony, Toshiba and Panasonic) and one Dutch company (Philips). Flash memory Flash memory (both NOR and NAND types) was invented by Dr. Fujio Masuoka while working for Toshiba c. 1980. Betamax Betamax was an analog videocassette magnetic tape marketed to consumers released by Sony on May 10, 1975. VHS (Video Home System) The VHS was invented in 1973 by Yuma Shiraishi and Shizuo Takano who worked for JVC. Video tape recorder Norikazu Sawazaki invented the first video tape recorder in 1953, a prototype helical scan video tape recorder. In 1959, Toshiba released the first commercial helical scan video tape recorder. Television All-electronic television In 1926, Kenjiro Takayanagi invented the world's first all-electronic television, preceding Philo T. Farnsworth by several months. By 1927, Takayanagi improved the resolution to 100 lines, which was not surpassed until 1931. By 1928, he was the first to transmit human faces in halftones. His work had an influence on the later work of Vladimir K. Zworykin. Aperture grille One of two major cathode ray tube (CRT) display technologies, along with the older shadow mask. Aperture grille was introduced by Sony with their Trinitron television in 1968. Color plasma display The world's first color plasma display was produced by Fujitsu in 1989. Handheld television In 1970, Panasonic released the first television that was small enough to fit in a large pocket, the Panasonic IC TV MODEL TR-001. It featured a 1.5-inch display, along with a 1.5-inch speaker. LCD television The first LCD televisions were invented as handheld televisions in Japan. In 1980, Hattori Seiko's R&D group began development on color LCD pocket televisions. In 1982, Seiko Epson released the first LCD television, the Epson TV Watch, a wristwatch equipped with an active-matrix LCD television. 
In 1983, Casio released a handheld LCD television, the Casio TV-10. LED-backlit LCD The world's first LED-backlit LCD television was Sony's Qualia 005, released in 2004. Textiles Automatic power loom with a non-stop shuttle-change motion Sakichi Toyoda invented numerous weaving devices. His most famous invention was the automatic power loom, in which he implemented the principle of Jidoka (autonomation, or autonomous automation). It was the 1924 Toyoda Automatic Loom, Type G, a completely automatic high-speed loom featuring the ability to change shuttles without stopping and dozens of other innovations. At the time it was the world's most advanced loom, delivering a dramatic improvement in quality and a twenty-fold increase in productivity. This loom automatically stopped when it detected a problem such as thread breakage. Vinylon The second man-made fiber to be invented, after nylon. It was first developed by Ichiro Sakurada, H. Kawakami, and Korean scientist Ri Sung-gi at the Takatsuki chemical research center in 1939 in Japan. Timekeeping Automatic quartz The first watch to combine self-winding with a crystal oscillator for timekeeping was unveiled by Seiko in 1986. Myriad year clock The Myriad year clock (万年自鳴鐘 Mannen Jimeishou, lit. Ten-Thousand Year Self-ringing Bell) was a universal clock designed by the Japanese inventor Hisashige Tanaka in 1851. It belongs to the category of Japanese clocks called Wadokei. Quartz wristwatch The world's first quartz wristwatch was revealed in 1967: the prototype of the Seiko Astron, which had been in development in Japan since 1958. It was eventually released to the public in 1969. Spring Drive A watch movement first conceived by Yoshikazu Akahane, working for Seiko, in 1977 and patented in 1982. It features a true continuously sweeping second hand, rather than the beats per time unit seen in traditional mechanical and most quartz watches. Transportation Bullet train The world's first high-volume-capable (initially 12-car maximum) "high-speed train" was Japan's Tōkaidō Shinkansen, which officially opened in October 1964, with construction commencing in April 1959. The 0 Series Shinkansen, built by Kawasaki Heavy Industries, achieved maximum passenger service speeds of 210 km/h (130 mph) on the Tokyo–Nagoya–Kyoto–Osaka route, with earlier test runs hitting a top speed of 256 km/h in 1963. Electronically-controlled continuously variable transmission In early 1987, Subaru launched the Justy in Tokyo with an electronically-controlled continuously variable transmission (ECVT) developed by Fuji Heavy Industries, which owns Subaru. Self-driving car The first self-driving car that did not rely upon rails or wires under the road was designed by the Tsukuba Mechanical Engineering Laboratory in 1977. The car was equipped with two cameras that used analog computer technology for signal processing. Hybrid electric vehicle The first commercial hybrid vehicle was the Toyota Prius, launched in 1997. Hydrogen car In 2014, Toyota launched the first production hydrogen fuel cell vehicle, the Toyota Mirai. The Mirai has a range of 312 miles (502 km) and takes about five minutes to refuel. The initial sale price was roughly 7 million yen ($69,000). Kei car A category of small automobiles, including passenger cars, vans, and pickup trucks. They are designed to exploit local tax and insurance relaxations, and in more rural areas are exempted from the requirement to certify that adequate parking is available for the vehicle. 
Rickshaw A two- or three-wheeled passenger cart, seating one or two people, that serves as a mode of human-powered transport in which a runner pulls the cart. The rickshaw was invented in Japan around 1869, after the lifting of a ban on wheeled vehicles from the Tokugawa period (1603–1868), and at the beginning of a rapid period of technical advancement across the Japanese archipelago. Spiral escalator Mitsubishi Electric unveiled the world's first practical spiral escalator in 1985. Spiral escalators have the advantage of taking up less space than their conventional counterparts. Inverter-Controlled High-Speed Gearless Elevator The insulated gate bipolar transistors (IGBTs) enabled increased switching frequency and reduced magnetic noise in the motor, which eliminated the need for a filter circuit and resulted in a more compact system. The IGBT also allowed the development of a small, highly integrated and highly sophisticated all-digital control device, consisting of the combination of a high-speed processor, specially customized gate arrays, and a circuit capable of controlling large currents at switching frequencies of several kHz. Today, the inverter-controlled gearless drive system is applied in high-speed elevators worldwide. Military Aircraft Carrier Hōshō was the world's first purpose-built aircraft carrier to be completed. She was commissioned in 1922 for the Imperial Japanese Navy (IJN). Hōshō and her aircraft group participated in the January 28 Incident in 1932 and in the opening stages of the Second Sino-Japanese War in late 1937. Amphibious assault ship The Imperial Japanese Army's Akitsu Maru is regarded as the first of its kind. Dock landing ship The Imperial Japanese Army's Shinshu Maru is regarded as the first of its kind. Fire balloon A fire balloon, or balloon bomb, was an experimental weapon launched by Japan from 1944 to 1945, during World War II. Diesel-powered tank The distinction of being the world's first diesel-powered tank goes to the Japanese Type 89B I-Go Otsu, produced with a diesel engine from 1934 onwards. Katana The katana were traditional Japanese swords used by samurai warriors of ancient and feudal Japan. The swords originated in the Muromachi period (1392–1573) as a result of changing battle conditions requiring faster response times. The katana facilitated this by being worn with the blade facing up, which allowed the samurai to draw their blade and slash at their enemy in a single motion. Previously, the curved sword of the samurai was worn with the blade facing down. The ability to draw and cut in one motion also became increasingly useful in the daily life of the samurai. Shuriken The shuriken was invented during the Gosannen War as a concealed weapon, primarily for the purpose of distracting a target. Wireless transmission Meteor burst communications The first observation of interaction between meteors and radio propagation was reported by Hantaro Nagaoka in 1929. Yagi antenna The Yagi-Uda antenna was invented in 1926 by Shintaro Uda of Tohoku Imperial University, Sendai, Japan, with the collaboration of Hidetsugu Yagi, also of Tohoku Imperial University. Yagi published the first English-language reference on the antenna in a 1928 survey article on short wave research in Japan, and it came to be associated with his name. However, Yagi always acknowledged Uda's principal contribution to the design, and the proper name for the antenna is, as above, the Yagi-Uda antenna (or array). 
Writing and correction implements Correction tape Correction tape was invented in 1989 by the Japanese product manufacturer Seed. It is an alternative to correction fluid. Gel pen The gel pen was invented in 1984 by the Sakura Color Products Corporation of Osaka. Rollerball pen The first rollerball pen was invented in 1963 by the Japanese company Ohto. Other Artificial snowflake The first artificial snowflake was created by Ukichiro Nakaya in 1936, three years after his first attempt. Canned coffee Canned coffee was invented in 1965 by Miura Yoshitake, a coffee shop owner in Hamada, Shimane Prefecture, Japan. Emoji The first emoji was created in 1998 or 1999 in Japan by Shigetaka Kurita. Fake food Simulated food was invented after Japan’s surrender ending World War II in 1945. Westerners traveling to Japan had trouble reading Japanese menus and in response, Japanese artisans and candlemakers created wax food so foreigners could easily order something that looked appetizing. Go, modern rules of Though the game originated in China, free opening of the game as it is played globally began in the 16th century Japan. Imageboard The first imageboards were created in Japan. Later imageboards such as 2chan would be created. Yoshizawa–Randlett system The Yoshizawa–Randlett system is a diagramming system used for origami models. It was first developed by Akira Yoshizawa in 1954. It was later improved upon by Samuel Randlett and Robert Harbin. Textboard Textboards like imageboards were invented in Japan. However, unlike imageboards, textboards are relatively unknown outside Japan. See also History of science and technology in Japan History of typography in East Asia List of automotive superlatives – list of first by Japanese cars List of Chinese inventions List of Chinese discoveries List of Korean inventions and discoveries List of Taiwanese inventions and discoveries Science and technology in Japan Ten Japanese Great Inventors References Inventions Lists of inventions or discoveries
1031771
https://en.wikipedia.org/wiki/Applix%201616
Applix 1616
The Applix 1616 was a kit computer with a Motorola 68000 CPU, produced by a small company called Applix in Sydney, Australia, from 1986 to the early 1990s. It ran a custom multitasking multiuser operating system that was resident in ROM. A version of Minix was also ported to the 1616, as was the MGR Window System. Andrew Morton, designer of the 1616 and one of the founders of Applix, later became the maintainer of the 2.6 version of the Linux kernel. History Paul Berger and Andrew Morton formed the Australian company Applix Pty. Ltd. in approximately 1984 to sell a Z80 card they had developed for the Apple IIc that allowed it to run CP/M. This product was not a commercial success, but Paul later proposed they develop a Motorola 68000-based personal computer for sale in kit form. The project was presented to Jon Fairall, then editor of the Australia and New Zealand electronics magazine Electronics Today International, and in December 1986, the first of four construction articles was published as "Project 1616", with the series concluding in June 1987. In October and November 1987, a disk controller card was also published as "Project 1617". Over the next decade, about 400 1616s were sold. Applix Pty. Ltd., was in no way related to the North American company of the same name that produced Applixware. Hardware Main board The main board contains: a Motorola 68000 running at 7.5 MHz, or a 68010 running at 15 MHz. 512 kibibytes of Dynamic RAM between 64 kibibytes and 256 kibibytes of ROM on board bit mapped colour graphics (no "text" mode), with timing provided by a Motorola 6845 CRT controller. The video could produce 320x200 in 16 colours, or 640x200 in a palette of 4 colours out of 16, with a later modification providing a 960x512 monochrome mode. The frame buffer resided in system memory and video refresh provided DRAM refresh cycles. The video output was able to drive CGA, EGA, MGA and multisync monitors. dual RS232 serial ports using a Zilog Z8530. a parallel port for Centronics-type printers or general purpose I/O. This was provided by a Rockwell 6522 Versatile Interface Adaptor, which was also the source of timer interrupts. 4 channel analog/audio output via an 8 bit DAC and multiplexor. software audio/analogue input via the DAC and a comparator. a PC/XT keyboard interface. The main board also had four 80-pin expansion slots. The 1616 shared this backplane with a platform developed by Andrew Morton for Keno Computer Systems, allowing the 1616 to use expansion boards developed for the Keno Computer Systems platform (primarily the 34010 graphics coprocessor), although the form-factor was different, which left the KCS cards sticking out of the top of the 1616 case! Disk controller card The disk controller card contains: A Zilog Z80 processor running at 8 MHz 32 kibibytes of ROM 64 kibibytes of Static RAM a WD1772 floppy disk controller dual RS232 serial ports using a Zilog Z8530 An NCR5380 SCSI controller The coprocessor is able to run ZRDOS (a CP/M clone), or can act as a smart disk controller. Memory expansion card The memory card: accepts between 1 and 4 megabytes of Dynamic RAM in 1 megabyte increments, has an optional memory management unit implemented in fast Static RAM and PALs, Another NCR5380 SCSI hard disk interface. This SCSI controller was mapped into the 68000's address space, and was considerably faster than the one on the Z80 coprocessor card. 34010 graphics coprocessor card The TMS34010 card was developed by Andrew Morton for Keno Computer Systems. 
The 34010 was a bit-addressable graphics processor with instructions for two-dimensional graphics primitives and arbitrary width arithmetic operations on pixel data. User developed cards Graham Redwood developed an Ethernet card (wire-wrap or Speedwire prototype?). Philip Hutchison developed a Motorola 68030 coprocessor card (small run of working double sided PCBs). Kevin Bertram developed a Transputer card, an Eprom Programmer, and an IO card. (The Eprom Programmer was manufactured under licence by Timothy Ward of Silicon Supply and Manufacturing.) (The IO card design was used in development by Silicon Supply and Manufacturing of a CNC PC Drill which had a provisional patent, but never released as a kit.) Other one-off interface cards were developed for specific projects, including a numerically controlled sheet metal spinning machine controller, several EEPROM programmers, etc. Operating systems 1616/OS 1616/OS was initially little more than a powerful monitor, with commands for dumping and modifying memory, loading and saving to tape, and a built in macro assembler and full screen editor. Over time, the operating system gained a hierarchical file system, preemptive multitasking, support for multiple users with access controls (although no memory protection), lightweight threads, message passing primitives and pipes. Ultimately, the operating system had around 250 system calls, and 78 commands built into the shell. The operating system had enough similarity to Unix that porting Unix source to the 1616/OS was relatively painless. Minix Colin McCormack ported Minix to the 1616. He worked around the lack of a memory management unit when fork()ing by copying BSS, heap and stack of the child and parent processes before scheduling each one. The MMU on the RAM expansion card was developed to support Colin's Minix port, although it's unclear if it was ever used for this purpose. ZRDOS Conal Walsh ported the CP/M clone ZRDOS to the Z80-based disk controller card. When operating in this mode, the 68000 acted as a console for ZRDOS, although it was still possible to suspend the connection to ZRDOS, and run 1616 programs, provided they didn't need disk I/O. MGR Not strictly an operating system, the MGR windowing system run under 1616/OS, but usurped the console video and keyboard, and added virtual tty devices for each window. The MGR port required a video hack to add a higher resolution but monochrome video mode; this was done by replacing a PAL in the video circuit. Applications Most Unix and Minix programs were able to be ported to 1616/OS. Ports included: advent, ar, arc, at, cal, cat, chess (gnu), cmp, comm, compress, conquest, cron, dd, diff, ed, eroff, grep, head, indent, make, MicroEMACS, more, nroff, roff, sc, sed, sort, split, STEVIE, strings, sum, tail, tar, tee, ularn, uniq, vi, wanderer, wc, xmodem, ymodem, zmodem, zoo Several messaging or bulletin board systems were written, including Usenet and Fidonet gateways, and many utilities to allow safe shell-level dial-up access. Several computer languages were supported, including: BASIC Tiny BASIC C (HiTech C, and later gcc) Forth Lisp MUMPS 68000 assembly language The collection of 1616/OS shareware eventually grew to thirty-one 800kB floppies. Included were innumerable small utilities and ported applications from other environments. The 1616 users group Applix Pty Ltd started holding informal user group meetings in their Sydney store in 1987. 
The meetings were held on the second Saturday of the month, and often finished well after midnight after consumption of much pizza. Users brought their latest 1616-related creations to demonstrate and share, and discussion ranged from hardware design, operating system theory, language design, to politics and philosophy. When the Mortons sold the shop in the 1990s, the meetings moved to their house at Yerrinbool, in the Southern Highlands, NSW. When the Mortons again moved to Wollongong, the meetings moved with them. Not able to escape the User Group by moving around NSW, the Mortons moved to Palo Alto, California in 2001. The user group still meets on the second Saturday of every month, although it has been many years since an Applix 1616 has been booted at one, and, everyone being older, the meetings tend to end somewhat before midnight, and pizza is consumed in moderation. References External links The Applix 1616 Project Andrew Morton's pages on the 1616 Applix 1616 manuals 68k architecture Microcomputers
490908
https://en.wikipedia.org/wiki/Metro%20Detroit
Metro Detroit
The Detroit metropolitan area, often referred to as Metro Detroit, is a major metropolitan area in the U.S. State of Michigan, consisting of the city of Detroit and its surrounding area. There are varied definitions of the area, including the official statistical areas designated by the Office of Management and Budget, a federal agency of the United States. Metro Detroit is known for its automotive heritage, arts, entertainment, popular music, and sports. The area includes a variety of natural landscapes, parks, and beaches, with a recreational coastline linking the Great Lakes. Metro Detroit also has one of the largest metropolitan economies in America with seventeen Fortune 500 companies. Definitions The Detroit Urban Area, which serves as the metropolitan area's core, ranks as the 11th most populous in the United States, with a population of 3,734,090 as of the 2010 census and an area of . This urbanized area covers parts of the counties of Macomb, Oakland, and Wayne. These counties are sometimes referred to as the Detroit Tri-County Area and had a population of 3,862,888 as of the 2010 census with an area of . The Office of Management and Budget (OMB), a federal agency of the United States, defines the Detroit–Warren–Dearborn Metropolitan Statistical Area (MSA) as the six counties of Lapeer, Livingston, Macomb, Oakland, St. Clair, and Wayne. As of the 2010 census, the MSA had a population of 4,296,250 with an area of . Detroit–Warren–Dearborn Metropolitan Statistical Area The nine county area designated by the OMB as the Detroit–Warren–Ann Arbor Combined Statistical Area (CSA) includes the Detroit–Warren–Dearborn MSA and the three additional counties of Genesee, Monroe, and Washtenaw (which include the metropolitan areas of Flint, Monroe, and Ann Arbor, respectively). It had a population of 5,318,744 as of the 2010 census, making it one of the largest metropolitan areas in the United States, covering an area of . Lenawee County was removed from the CSA in 2000, but added back in 2013. Detroit–Warren–Ann Arbor Combined Statistical Area With the adjacent city of Windsor, Ontario, and its suburbs, the combined Detroit–Windsor area has a population of about 5.7 million. When the nearby Toledo metropolitan area and its commuters are taken into account, the region constitutes a much larger population center. An estimated 46 million people live within a radius of Detroit proper. Metro Detroit is at the center of an emerging Great Lakes Megalopolis. Conan Smith, a businessperson quoted in a 2012 article by The Ann Arbor News, stated the most significant reason Washtenaw County, including Ann Arbor, is not often included in definitions of Metro Detroit is that there is a "lack of affinity that Washtenaw County as a whole has with Wayne County and Detroit or Oakland County and Macomb". Ann Arbor is nearly 43 miles by car from Downtown Detroit, and developed separately as a university city, with its own character. Smith said that county residents "just don't yet see ourselves as a natural part of that [Detroit] region, so I think it feels a little forced to a lot of people, and they're scared about it". Economy Detroit and the surrounding region constitute a major center of commerce and global trade, most notably as home to America's 'Big Three' automobile companies: General Motors, Ford, and Chrysler. Detroit's six-county Metropolitan Statistical Area (MSA) has a population of about 4.3 million and a workforce of about 2.1 million. 
In December 2017, the Department of Labor reported metropolitan Detroit's unemployment rate at 4.2%. The Detroit MSA had a Gross Metropolitan Product (GMP) of $252.7 billion as of September 2017. Firms in the region pursue emerging technologies including biotechnology, nanotechnology, information technology, and hydrogen fuel cell development. Metro Detroit is one of the leading health care economies in the U.S., according to a 2003 study measuring health care industry components, with the region's hospital sector ranked fourth in the nation. Casino gaming plays an important economic role, with Detroit the largest US city to offer casino resort hotels. Caesars Windsor, Canada's largest, complements the MGM Grand Detroit, MotorCity Casino, and Greektown Casino in the city. The casino hotels contribute significant tax revenue along with thousands of jobs for residents. Gaming revenues have grown steadily, with Detroit ranked as the fifth-largest gambling market in the United States for 2007. When Casino Windsor is included, Detroit's gambling market ranks either third or fourth. There are about four thousand factories in the area. The domestic auto industry is primarily headquartered in Metro Detroit. The area is an important source of engineering job opportunities. A rise in automated manufacturing using robotic technology has created related industries in the area. A 2004 Border Transportation Partnership study showed that 150,000 jobs in the Detroit–Windsor region and $13 billion in annual production depend on the city's international border crossing. In addition to property taxes, residents of the City of Detroit pay an income tax rate of 2.50%. Detroit automakers and local manufacturers have made significant restructurings in response to market competition. GM made its initial public offering (IPO) of stock in 2010, after bankruptcy, bailout, and restructuring by the federal government. Domestic automakers reported significant profits in 2010, interpreted by some analysts as the beginning of an industry rebound and an economic recovery for the Detroit area. The region's nine-county area, with its population of 5.3 million, has a workforce of about 2.6 million and about 247,000 businesses. Fourteen Fortune 500 companies are based in metropolitan Detroit. In April 2015, the metropolitan Detroit unemployment rate was 5.1 percent, a rate lower than the New York, Los Angeles, Chicago and Atlanta metropolitan areas. Metro Detroit has made Michigan's economy a leader in information technology, biotechnology, and advanced manufacturing. Michigan ranks fourth nationally in high-tech employment with 568,000 high-tech workers, including 70,000 in the automotive industry. Michigan typically ranks second or third in overall Research & development (R&D) expenditures in the United States. Metro Detroit is an important source of engineering and high-tech job opportunities. As the home of the "Big Three" American automakers (General Motors, Ford, and Chrysler), it is the world's traditional automotive center and a key pillar of the U.S. economy. In the 2010s, the domestic auto industry accounts, directly and indirectly, for one of ten jobs in the United States, making it a significant component for economic recovery. For 2010, the domestic automakers have reported significant profits indicating the beginning of rebound. Metro Detroit serves as the headquarters for the United States Army TACOM Life Cycle Management Command (TACOM), with Selfridge Air National Guard Base. 
Detroit Metropolitan Airport (DTW) is one of America's largest and most recently modernized facilities, with six major runways, Boeing 747 maintenance facilities, and an attached Westin Hotel and Conference Center. Detroit is a major U.S. port with an extensive toll-free expressway system. A 2004 Border Transportation Partnership study showed that 150,000 jobs in the Detroit-Windsor region and $13 billion in annual production depend on Detroit's international border crossing. A source of top talent, the University of Michigan in Ann Arbor is one of the world's leading research institutions, and Wayne State University in Detroit has the largest single-campus medical school in the United States. Metro Detroit is a prominent business center, with major commercial districts such as the Detroit Financial District and Renaissance Center, the Southfield Town Center, and the historic New Center district with the Fisher Building and Cadillac Place. Among the major companies based in the area, aside from the major automotive companies, are BorgWarner (Auburn Hills), Quicken Loans (Downtown Detroit), TRW Automotive Holdings (Livonia), Ally Financial (Downtown Detroit), Carhartt (Dearborn), and Shinola (Detroit). Compuware, IBM, Google, and Covansys are among the information technology and software companies with a headquarters or major presence in or near Detroit. HP Enterprise Services makes Detroit its regional headquarters and one of its largest global employment locations. The metropolitan Detroit area has one of the nation's largest office markets, with 147,082,003 square feet. Chrysler's largest corporate facility is its U.S. headquarters and technology center in the Detroit suburb of Auburn Hills, while Ford Motor Company is in Dearborn, directly adjacent to Detroit. In the decade leading up to 2006, downtown Detroit gained more than $15 billion in new investment from private and public sectors. Tourism Tourism is an important component of the region's culture and economy, comprising nine percent of the area's two million jobs. About 15.9 million people visit metro Detroit annually, spending about $4.8 billion. Detroit is the largest city or metro area in the U.S. to offer casino resort hotels (MGM Grand Detroit, MotorCity Casino, Greektown Casino, and nearby Caesars Windsor). Metro Detroit is a tourist destination that easily accommodates super-sized crowds at events such as the Woodward Dream Cruise, the North American International Auto Show, the Windsor-Detroit International Freedom Festival, the 2009 NCAA Final Four, and Super Bowl XL. The Detroit International Riverfront links the Renaissance Center to a series of venues, parks, restaurants, and hotels. In 2006, the four-day Motown Winter Blast drew a cold-weather crowd of about 1.2 million people to the Campus Martius Park area downtown. Detroit's metroparks include freshwater beaches, such as Metropolitan Beach, Kensington Beach, and Stony Creek Beach. Metro Detroit offers canoeing through the Huron-Clinton Metroparks. Sports enthusiasts enjoy downhill and cross-country skiing at Alpine Valley Ski Resort, Mt. Brighton, Mt. Holly, and Pine Knob Ski Resort. The Detroit River International Wildlife Refuge is the only international wildlife preserve in North America located in the heart of a major metropolitan area. The Refuge includes islands, coastal wetlands, marshes, shoals, and waterfront lands along the Detroit River and the western Lake Erie shoreline. 
Metro Detroit contains a number of shopping malls, including the upscale Somerset Collection in Troy, Great Lakes Crossing Outlets in Auburn Hills, and Twelve Oaks Mall in Novi, all of which are draws for tourists. The region's leading attraction is The Henry Ford, located in the Detroit suburb of Dearborn; it is America's largest indoor-outdoor museum complex. The recent renovation of the Renaissance Center, and related construction of a state-of-the-art cruise ship dock, new stadiums, and a new RiverWalk, have stimulated related private economic development. Nearby Windsor has a 19-year-old drinking age with a myriad of entertainment to complement Detroit's Greektown district. Some analysts believe that tourism planners have yet to tap the full economic power of the estimated 46 million people who live within a 300-mile (480-km) radius of Detroit. Demographics Metro Detroit is a six-county metropolitan statistical area (MSA) with a population of 4,392,041—making it the 14th-largest MSA in the United States as enumerated by the 2020 United States Census. The Detroit region is a ten-county Combined Statistical Area (CSA) with a population of 5,325,219—making it the 12th-largest CSA in the United States as enumerated by the 2020 Census. The Detroit–Windsor area, a commercial link straddling the Canada-U.S. border, has a total population of about 5,700,000. As of the census of 2010, there were 4,296,250 people, 1,682,111 households, and 1,110,454 families residing within the metropolitan statistical area. The census reported 70.1% White, 22.8% African American, 0.3% Native American, 3.3% Asian, 0.02% Pacific Islander, 1.2% from other races, and 2.2% from two or more races. Hispanic or Latino of any race were 6.2% of the population. Arab Americans were at least 4.7% of the region's population (considered white in the US Census). As of the 2010 American Community Survey estimates, the median income for a household in the MSA was $48,198, and the median income for a family was $62,119. The per capita income for the MSA was $25,403. The region's foreign-born population sat at 8.6%. The region contains the largest concentration of Arab-Americans in the United States, particularly in Dearborn. The metro area also has the 25th largest Jewish population worldwide. In 1701, French officer Antoine de La Mothe Cadillac, along with fifty-one additional French-Canadians, founded a settlement called Fort Ponchartrain du Détroit, naming it after the comte de Pontchartrain, Minister of Marine under Louis XIV. The French legacy can be observed today in the names of many area cities (ex. Detroit, Grosse Pointe, Grosse Ile) and streets (ex. Gratiot, Beaubien, St. Antoine, Cadieux). Later came an influx of persons of British and German descent, followed by Polish, Irish, Italian, Lebanese, Assyrian/Chaldean, Greek, Jewish, and Belgian immigrants who made their way to the area in the early 20th century and during and after World War II. There was a large migration of African Americans into the city from the rural South during The Great Migration and following World War I. Today, the Detroit suburbs in Oakland County, Macomb County, and northeastern and northwestern Wayne County are predominantly ethnic European American. Oakland County is among the most affluent counties in the United States, with a population of more than one million. In Wayne County, the city of Dearborn has a large concentration of Arab Americans, mainly Shi'ite Muslim Lebanese, whose ancestors immigrated here in the early 20th century. 
Recently, the area has witnessed some growth in ethnic Albanian, Asian and Hispanic populations. Metro Detroit has a sizeable population of Indian Americans, with an estimated 1.5% of the population being of Indian descent. Indian Americans in Metro Detroit are employed in various engineering and medical fields. In the 2000s, 115 of the 185 cities and townships in Metro Detroit were more than 95% white. African Americans have also moved to the suburbs: in 2000, 44% of the more than 240,000 suburban blacks lived in Inkster, Pontiac, Oak Park, and Southfield. Transportation Airports The largest airport in the area is Detroit Metropolitan Wayne County Airport (DTW) in Romulus, an international airport that serves as a commercial hub for Delta Air Lines and Spirit Airlines. The other airports in the metropolitan area are: Ann Arbor Municipal Airport (ARB) Coleman A. Young International Airport (DET) (Detroit) - General aviation only Flint-Bishop International Airport (FNT) (Flint) - Commercial airport Oakland County International Airport (PTK) Waterford Township - Charter passenger facility St. Clair County International Airport (near Port Huron, Michigan) - An international airport on the Canada–US border. Selfridge Air National Guard Base (Mount Clemens) - Military airbase Willow Run Airport (YIP) (Ypsilanti) - Cargo, general aviation, charter passenger traffic Transit systems Bus service for the metropolitan area is provided jointly by the Detroit Department of Transportation (DDOT) and the Suburban Mobility Authority for Regional Transportation (SMART), which operate under a cooperative service and fare agreement. The elevated Detroit People Mover encircles downtown, providing service to numerous downtown hotels, offices and attractions. The Woodward Avenue Streetcar has recently begun service between downtown and New Center, and the proposed SEMCOG Commuter Rail would extend from Detroit's New Center area to The Henry Ford, Dearborn, Detroit Metropolitan Airport, Ypsilanti, and Ann Arbor. The Regional Transit Authority (RTA) was established in December 2012 to coordinate the services of all existing transit providers, and to develop a bus rapid transit service along Woodward Avenue. Roads and freeways The Metro Detroit area is linked by an advanced network of major roads and freeways which include Interstate highways. Traditionally, Detroiters refer to some of their freeways by name rather than route number. The Davison, Lodge, and Southfield freeways are almost always referred to by name rather than route number. Detroiters commonly precede freeway names with the word 'the', as in the Lodge, the Southfield, and the Davison. Those without names are referred to by number. Surface street navigation in Metro Detroit is commonly anchored by "mile roads", major east–west surface streets that are spaced at one-mile (1.6 km) intervals and increment as one travels north and away from the city center. Mile roads sometimes have two names: the numeric name (ex. 15 Mile Road) used in Macomb County and a local name (ex. Maple Road) used mostly in Oakland County. 
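As a rough illustration of the arithmetic behind this naming convention, the short sketch below converts a distance north of the downtown baseline into the corresponding mile-road name. It is only an illustration: the choice of baseline and the nearest-mile rounding rule are simplifying assumptions, not an official definition.

```python
# Illustrative sketch of the mile-road numbering convention.
# Assumptions (not an official rule): distances are measured in miles north
# of the downtown baseline, and the nearest whole mile gives the road's
# numeric name.
def mile_road_name(miles_north_of_baseline: float) -> str:
    number = round(miles_north_of_baseline)
    return f"{number} Mile Road"


# Example: a point about 15 miles north of the baseline falls near
# "15 Mile Road" (known locally as Maple Road in Oakland County).
# mile_road_name(15.2)  # -> "15 Mile Road"
```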
Education Colleges and universities Baker College - Auburn Hills and Royal Oak Cleary University - Detroit and Howell College for Creative Studies - Detroit Concordia University Ann Arbor - Ann Arbor Cranbrook Academy of Art - Bloomfield Hills Davenport University - Detroit and Warren Dorsey College - Dearborn, Madison Heights, Roseville, Wayne and Woodhaven Eastern Michigan University - Ypsilanti Henry Ford College - Dearborn Kettering University - Flint Lawrence Technological University - Southfield Macomb Community College - Warren and Clinton Township Madonna University - Livonia Michigan State University Management Education Center - Troy Monroe County Community College - Monroe Mott Community College - Flint Northwood University - Midland Oakland Community College - Auburn Hills, Farmington Hills, Highland Lakes, Royal Oak and Southfield Oakland University - Auburn Hills and Rochester Hills Rochester College - Rochester Saint Clair County Community College - Port Huron St. Clair College - Windsor, Ontario Schoolcraft College - Livonia Specs Howard School of Media Arts - Southfield Sacred Heart Major Seminary - Detroit SS. Cyril and Methodius Seminary - Orchard Lake University of Detroit Mercy - Detroit University of Michigan - Ann Arbor University of Michigan–Dearborn - Dearborn University of Michigan–Flint - Flint University of Windsor - Windsor, Ontario Walsh College - Troy Washtenaw Community College - Ann Arbor Wayne County Community College - Detroit Wayne State University - Detroit Crime The principal City of Detroit has struggled with high crime for decades. About half of all murders in Michigan in 2015 occurred in Detroit. Since 2013, the FBI has reported a 26% decrease in property crimes and a 27% decrease in violent crimes. Sports Professional sports has a major fan following in Metro Detroit. The area is home to many sports teams, including six professional teams in four major sports. The area's several universities field teams in a variety of sports. Michigan Stadium, home of the Michigan Wolverines, is the largest American football stadium in the world. Metro Detroit hosts many annual sporting events including auto and hydroplane racing. The area has hosted many major sporting events, including the 1994 FIFA World Cup, Super Bowl XVI, Super Bowl XL, Wrestlemania 23, the 2005 Major League Baseball All-Star Game, many Stanley Cup Championship rounds, the first two games of the 2006 World Series, and the last two games of the 2012 World Series. The Michigan International Speedway in Brooklyn hosts various Auto racing: NASCAR, INDYCAR, and ARCA. The Detroit River hosts Hydroplane racing held by the APBA for the Detroit APBA Gold Cup. Area codes Metro Detroit is served by nine telephone area codes (six not including Windsor). The 313 area code, which used to encompass all of Southeast Michigan, is today confined exclusively to the City of Detroit and several neighboring Wayne County suburbs. The 248 area code along with the newer 947 area code overlay mostly serve Oakland County. Macomb County is largely served by 586. Genesee, St. Clair, and Lapeer counties, eastern Livingston County, and part of northern Oakland County are covered by 810. Washtenaw, Monroe, and most of the Wayne County suburbs are in the 734 area. The Windsor area (and most of southwestern Ontario) is served by an overlay complex of three codes—519, 226, and 548. 
References External links Metro Detroit Convention and Visitors Bureau Southeast Michigan Council of Governments City Charter of Detroit Michigan's Official Economic Development and Travel Site. Map of Michigan Lighthouse in PDF Format. Collection: "Detroit Metro" from the University of Michigan Museum of Art Geography of Detroit
24631754
https://en.wikipedia.org/wiki/List%20of%20flashcard%20software
List of flashcard software
This article contains a list of flashcard software. Flashcards are widely used as a learning drill to aid memorization by way of spaced repetition. Software Platform support References Classic Mac OS software Utilities for macOS Utilities for Windows Utilities for Linux Android (operating system) software BlackBerry software Palm OS software Software Lists of software
1174674
https://en.wikipedia.org/wiki/Internet%20geolocation
Internet geolocation
In computing, Internet geolocation is software capable of deducing the geographic position of a device connected to the Internet. For example, the device's IP address can be used to determine the country, city, or ZIP code in which it is located. Other methods include examination of nearby Wi-Fi hotspots and Bluetooth devices. Data sources An IP address is assigned to each device (e.g. computer, printer) participating in a computer network that uses the Internet Protocol for communication. The protocol specifies that each IP packet must have a header which contains, among other things, the IP address of the sender. There are a number of free and paid subscription geolocation databases, ranging from country level to state or city—including ZIP/post code level—each with varying claims of accuracy (generally higher at the country level). These databases typically contain IP address data which may be used in firewalls, ad servers, routing, mail systems, web sites, and other automated systems where a geolocation may be useful. An alternative to hosting and querying a database is to obtain the country code for a given IP address through a DNSBL-style lookup from a remote server. Some commercial databases have augmented geolocation software with demographic data to enable demographic-type targeting using IP address data. The primary source for IP address data is the regional Internet registries, which allocate and distribute IP addresses amongst organizations located in their respective service regions: African Network Information Centre (AfriNIC) American Registry for Internet Numbers (ARIN) Asia-Pacific Network Information Centre (APNIC) Latin American and Caribbean Internet Address Registry (LACNIC) RIPE Network Coordination Centre (RIPE NCC) Secondary sources include: Data mining or user-submitted geographic location data: Website submitted, e.g. a weather website asking visitors for a city name to find their local forecast, or pairing a user's IP address with the address information in their account profile. Wi-Fi positioning system through the examination of neighborhood Wi-Fi BSSIDs, e.g. Mozilla Location Service. Examination of neighborhood Bluetooth devices. Pairing a user's IP address with the GPS location of a device that is using that IP address. Data contributed by Internet service providers. Guesstimates from adjacent Class C ranges and/or gleaned from network hops. Network routing information collected to the end point of the IP address. Accuracy is improved by: Data scrubbing to filter out or identify anomalies. Statistical analysis of user-submitted data. Utilizing third-party tests conducted by reputable organizations. Errors If geolocation software maps IP addresses associated with an entire country or territory to a particular location, such as the geographic center of the territory, this can cause considerable problems for the people who happen to live there, as law enforcement authorities and others may mistakenly assume any crimes or other misconduct associated with the IP address to originate from that particular location. For example, a farmstead northeast of Potwin, Kansas, became the default site of 600 million IP addresses (due to their lack of fine granularity) when the Massachusetts-based digital mapping company MaxMind changed the putative geographic center of the contiguous United States from 39.8333333,-98.585522 to 38.0000,-97.0000. 
Since 2012, a family in Pretoria, South Africa, has been regularly visited by police or angry private citizens who believed their stolen phones were to be found in the family's backyard. This was also the result of geolocation by MaxMind. The company used the National Geospatial-Intelligence Agency's coordinates for Pretoria, which pointed to the family's house, to represent IP addresses associated with Pretoria. Privacy A distinction can be made between co-operative and oppositional geolocation. In some cases, it is in the interest of users to be accurately located, for example, so that they can be offered information relevant to their location. In other cases, users prefer not to disclose their location for privacy or other reasons. Technical measures for ensuring anonymity, such as proxy servers, can be used to circumvent restrictions imposed by geolocation software. Some sites detect the use of proxies and anonymizers, and may either block service or provide non-localized content in response. Applications Geolocation technology has been under development only since 1999, and the first patents were granted in 2004. The technology is already widely used in multiple industries, including e-retail, banking, media, telecommunications, education, travel, hospitality, entertainment, health care, online gaming and law enforcement, for preventing online fraud, complying with regulations, managing digital rights and serving targeted marketing content and pricing. Additionally, the U.S. Federal Communications Commission (FCC) has proposed that geolocation software might be leveraged to support 9-1-1 location determination. An IP address or the related unique URL may also be investigated with basic functions, typing from the keyboard two instructions: ping and traceroute. In Unix-like systems, they are available as a command line tool. In the same way, Microsoft Windows has the prompt of DOS working with those instructions. Criminal investigations Banks, software vendors and other online enterprises in the US and elsewhere became subject to strict "know your customer" laws imposed by the USA PATRIOT Act, the Bank Secrecy Act, the US Treasury Department's Office of Foreign Assets Control and other regulatory entities in the US and Europe from the early twenty-first century. These laws are intended to prevent money laundering, trafficking with terrorist organizations, and trading with banned nations. When it is possible to identify the true location of online visitors, geolocation can protect banks from participating in the transfer of funds for illicit purposes. More and more prosecuting bodies are bringing cases involving cyber-crimes such as cyber-stalking and identity theft. Prosecutors often have the capability of determining the IP address data necessary to link a computer to a crime. Fraud detection Online retailers and payment processors use geolocation to detect possible credit card fraud by comparing the user's location to the billing address on the account or the shipping address provided. A mismatch – an order placed from the US on an account number from Tokyo, for example – is a strong indicator of potential fraud. IP address geolocation can be also used in fraud detection to match billing address postal code or area code. Banks can prevent "phishing" attacks, money laundering and other security breaches by determining the user's location as part of the authentication process. Whois databases can also help verify IP addresses and registrants. 
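The billing-address comparison described above can be sketched in a few lines of code. The example below is only an illustration, not any vendor's actual system: it assumes the third-party MaxMind geoip2 Python package and a locally downloaded GeoLite2 country database file, and the file path, function names, and simple mismatch rule are assumptions made for this example.

```python
# Illustrative sketch: compare the country geolocated from an order's IP
# address with the billing country. Assumes the third-party `geoip2` package
# and a local GeoLite2 country database; the path and the mismatch rule are
# placeholder assumptions, not any vendor's actual fraud logic.
import geoip2.database
import geoip2.errors

DB_PATH = "GeoLite2-Country.mmdb"  # hypothetical local database file
_reader = geoip2.database.Reader(DB_PATH)  # opened once and reused


def ip_country(ip_address: str):
    """Return the ISO country code the database associates with an IP, or None."""
    try:
        return _reader.country(ip_address).country.iso_code
    except geoip2.errors.AddressNotFoundError:
        return None


def flag_for_review(order_ip: str, billing_country: str) -> bool:
    """Flag an order when the IP's country differs from the billing country.

    A mismatch is treated only as a signal for manual review, not as proof
    of fraud, mirroring the cautious use of geolocation described above.
    """
    located = ip_country(order_ip)
    return located is not None and located != billing_country.upper()


# Example (203.0.113.7 is a reserved documentation address, so a real
# database would normally report no match and the order would not be flagged):
# flag_for_review("203.0.113.7", "US")
```

In practice such a check would be combined with the postal-code or area-code matching and the other signals mentioned above, rather than used as a sole indicator.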
Government, law enforcement and corporate security teams use geolocation as an investigatory tool, tracking the Internet routes of online attackers to find the perpetrators and prevent future attacks from the same location.

Geomarketing
Since geolocation software can determine a user's location, companies using geomarketing may provide web content or products that are popular or useful in that specific location. Advertisements and content on a website that uses geolocation software in the form of an API (also referred to as an "IP API" or "IP address geolocation API") may be tailored to provide the information that a certain user wants.

Regional licensing
Internet movie vendors, online broadcasters who serve live streaming video of sporting events, and TV and music video sites that are licensed to broadcast their episodes and music videos are permitted to serve viewers only in their licensed territories. By geolocating viewers, they can be certain of obeying licensing regulations. Online gambling websites must also know where their customers are located, or they risk violating gambling laws. Jim Ramo, chief executive of movie distributor Movielink, said studios were aware of the shortcomings going in and had grown more confident now that the system had been shown to work.

Gaming
A location-based game is a type of pervasive game for smartphones or other mobile devices in which the gameplay evolves and progresses via a player's real-world location, which is typically obtained from the device's GPS.

See also
Geo-blocking
Geotargeting
GPS navigation software
Location-based service
MAC address anonymization
Locator software
Personalization
W3C Geolocation API
Kinomap (geolocation video software)
Mobile phone tracking
TV White Space Database (geolocation database)

References

External links
Towards Street-Level Client-Independent IP Geolocation: recent research paper explaining how to find a location from an IP address within 1 km
How accurate can IP Geolocation get?

Internet privacy software
Business software
50575063
https://en.wikipedia.org/wiki/Google%20Assistant
Google Assistant
Google Assistant is an artificial intelligence–powered virtual assistant developed by Google that is primarily available on mobile and smart home devices. Unlike the company's previous virtual assistant, Google Now, the Google Assistant can engage in two-way conversations. Assistant initially debuted in May 2016 as part of Google's messaging app Allo, and its voice-activated speaker Google Home. After a period of exclusivity on the Pixel and Pixel XL smartphones, it began to be deployed on other Android devices in February 2017, including third-party smartphones and Android Wear (now Wear OS), and was released as a standalone app on the iOS operating system in May 2017. Alongside the announcement of a software development kit in April 2017, the Assistant has been further extended to support a large variety of devices, including cars and third-party smart home appliances. The functionality of the Assistant can also be enhanced by third-party developers. Users primarily interact with the Google Assistant through natural voice, though keyboard input is also supported. In the same nature and manner as Google Now, the Assistant is able to search the Internet, schedule events and alarms, adjust hardware settings on the user's device, and show information from the user's Google account. Google has also announced that the Assistant will be able to identify objects and gather visual information through the device's camera, and support purchasing products and sending money. At CES 2018, the first Assistant-powered smart displays (smart speakers with video screens) were announced, with the first one being released in July 2018. In 2020, Google Assistant is already available on more than 1 billion devices. Google Assistant is available in more than 90 countries and in over 30 languages, and is used by more than 500 million users monthly. History Google Assistant was unveiled during Google's developer conference on May 18, 2016, as part of the unveiling of the Google Home smart speaker and new messaging app Allo; Google CEO Sundar Pichai explained that the Assistant was designed to be a conversational and two-way experience, and "an ambient experience that extends across devices". Later that month, Google assigned Google Doodle leader Ryan Germick and hired former Pixar animator Emma Coats to develop "a little more of a personality". Platform expansion For system-level integration outside of the Allo app and Google Home, the Google Assistant was initially exclusive to the Pixel and Pixel XL smartphones. In February 2017, Google announced that it had begun to enable access to the Assistant on Android smartphones running Android Marshmallow or Nougat, beginning in select English-speaking markets. Android tablets did not receive the Assistant as part of this rollout. The Assistant is also integrated in Android Wear 2.0, and will be included in future versions of Android TV and Android Auto. In October 2017, the Google Pixelbook became the first laptop to include Google Assistant. Google Assistant later came to the Google Pixel Buds. In December 2017, Google announced that the Assistant would be released for phones running Android Lollipop through an update to Google Play Services, as well as tablets running 6.0 Marshmallow and 7.0 Nougat. In February 2019, Google reportedly began testing ads in Google Assistant results. On May 15, 2017, Android Police reported that the Google Assistant would be coming to the iOS operating system as a separate app. 
The information was confirmed two days later at Google's developer conference. Smart displays In January 2018 at the Consumer Electronics Show, the first Assistant-powered "smart displays" were released. Smart displays were shown at the event from Lenovo, Sony, JBL and LG. These devices have support for Google Duo video calls, YouTube videos, Google Maps directions, a Google Calendar agenda, viewing of smart camera footage, in addition to services which work with Google Home devices. These devices are based on Android Things and Google-developed software. Google unveiled its own smart display, Google Home Hub in October 2018, and later Nest Hub Max, which utilizes a different system platform. Developer support In December 2016, Google launched "Actions on Google", a developer platform for the Google Assistant. Actions on Google allows 3rd party developers to build apps for Google Assistant. In March 2017, Google added new tools for developing on Actions on Google to support the creation of games for Google Assistant. Originally limited to the Google Home smart speaker, Actions on Google was made available to Android and iOS devices in May 2017, at which time Google also introduced an app directory for overview of compatible products and services. To incentivize developers to build Actions, Google announced a competition, in which first place won tickets to Google's 2018 developer conference, $10,000, and a walk-through of Google's campus, while second place and third place received $7,500 and $5,000, respectively, and a Google Home. In April 2017, a software development kit (SDK) was released, allowing third-party developers to build their own hardware that can run the Google Assistant. It has been integrated into Raspberry Pi, cars from Audi and Volvo, and smart home appliances, including fridges, washers, and ovens, from companies including iRobot, LG, General Electric, and D-Link. Google updated the SDK in December 2017 to add several features that only the Google Home smart speakers and Google Assistant smartphone apps had previously supported. The features include: letting third-party device makers incorporate their own "Actions on Google" commands for their respective products incorporating text-based interactions and more languages allowing users to set a precise geographic location for the device to enable improved location-specific queries. On May 2, 2018, Google announced a new program on their blog that focuses on investing in the future of the Google Assistant through early-stage startups. Their focus was to build an environment where developers could build richer experiences for their users. This includes startups that broaden Assistant's features, are building new hardware devices, or simply differentiating in different industries. Voices Google Assistant launched using the voice of Kiki Baessell for the American female voice, the same actress for the Google Voice voicemail system since 2010. From 2016 until present day, the Assistant's default voice is portrayed by Antonia Flynn. On October 11, 2019, Google announced that Issa Rae had been added to Google Assistant as an optional voice, which could be enabled by the user by saying "Okay, Google, talk like Issa". Interaction Google Assistant, in the nature and manner of Google Now, can search the Internet, schedule events and alarms, adjust hardware settings on the user's device, and show information from the user's Google account. 
Unlike Google Now, however, the Assistant can engage in a two-way conversation, using Google's natural language processing algorithm. Search results are presented in a card format that users can tap to open the page. In February 2017, Google announced that users of Google Home would be able to shop entirely by voice for products through its Google Express shopping service, with products available from Whole Foods Market, Costco, Walgreens, PetSmart, and Bed Bath & Beyond at launch, and other retailers added in the following months as new partnerships were formed. Google Assistant can maintain a shopping list; this was previously done within the notetaking service Google Keep, but the feature was moved to Google Express and the Google Home app in April 2017, resulting in a severe loss of functionality. In May 2017, Google announced that the Assistant would support a keyboard for typed input and visual responses, support identifying objects and gathering visual information through the device's camera, and support purchasing products and sending money. Through the use of the keyboard, users can see a history of queries made to the Google Assistant, and edit or delete previous inputs. The Assistant warns against deleting, however, because it uses previous inputs to generate better answers in the future. In November 2017, it became possible to identify songs currently playing by asking the Assistant. The Google Assistant allows users to activate and modify vocal shortcut commands in order to perform actions on their device (both Android and iPad/iPhone) or to configure it as a hub for home automation. This speech-recognition feature is available in English, among other languages. In July 2018, the Google Home version of the Assistant gained support for multiple actions triggered by a single vocal shortcut command. At the annual I/O developers conference on May 8, 2018, Google's CEO announced the addition of six new voice options for the Google Assistant, one of which was John Legend's. This was made possible by WaveNet, a voice synthesizer developed by DeepMind, which significantly reduced the number of audio samples that a voice actor was required to produce for creating a voice model. However, John Legend's Google Assistant cameo voice was discontinued on March 23, 2020. In August 2018, Google added bilingual capabilities to the Google Assistant for existing supported languages on devices. Reports suggest that it may eventually support multilingual use by allowing a third default language to be set on Android phones. By default, the Google Assistant does not apply two common speech-recognition features, punctuation and spelling, to transcribed text. However, a beta feature of Speech-to-Text allows only English (United States) users to request that it "detect and insert punctuation in transcription results"; Speech-to-Text can recognize commas, question marks, and periods in transcription requests. In April 2019, the most popular audio games in the Assistant, Crystal Ball and Lucky Trivia, received the biggest voice changes in the application's history, with the Assistant's voice adding expression to the games. For instance, in the Crystal Ball game, the voice would speak slowly and softly during the intro and before the answer was revealed to make the game more exciting, and in the Lucky Trivia game, the voice would become excited like a game show host.
In the British accent voice of Crystal Ball, the voice would say the word 'probably' in a downwards slide like she's not too sure. The games used the text to speech voice which makes the voice more robotic. In May 2019 however, it turned out to be a bug in the speech API that caused the games to lose the studio-quality voices. These audio games were fixed in May 2019. On December 12, 2019, Google rolled out its interpreter mode for iOS and Android Google Assistant smartphone apps. Interpreter mode allows Google Assistant to translate conversations in real time and was previously only available on Google Home smart speakers and displays. Google Assistant won the 2020 Webby Award for Best User Experience in the category Apps, Mobile & Voice. On March 5, 2020, Google rolled out its article-reading feature on Google Assistant that read webpages aloud in 42 different languages. On October 15, 2020, Google announced a new ‘hum to search' function to allow users to find a song by simply humming, whistling or singing the song. According to Google, when a user hums a melody to search, the machine learning models will automatically convert the audio into a number-based sequence, which represents the song's melody. Google Duplex In May 2018, Google revealed Duplex, an extension of the Google Assistant that allows it to carry out natural conversations by mimicking human voice, in a manner not dissimilar to robocalling. The assistant can autonomously complete tasks such as calling a hair salon to book an appointment, scheduling a restaurant reservation, or calling businesses to verify holiday store hours. While Duplex can complete most of its tasks fully autonomously, it is able to recognize situations that it is unable to complete and can signal a human operator to finish the task. Duplex was created to speak in a more natural voice and language by incorporating speech disfluencies such as filler words like "hmm" and "uh" and using common phrases such as "mhm" and "gotcha", along with more human-like intonation and response latency. Duplex is currently in development and had a limited release in late 2018 for Google Pixel users. During the limited release, Pixel phone users in Atlanta, New York, Phoenix, and San Francisco were only able to use Duplex to make restaurant reservations. As of October 2020, Google has expanded Duplex to businesses in eight countries. Criticism After the announcement, concerns were made over the ethical and societal questions that artificial intelligence technology such as Duplex raises. For instance, human operators may not notice that they are speaking with a digital robot when conversing with Duplex, which some critics view as unethical or deceitful. Concerns over privacy were also identified, as conversations with Duplex are recorded in order for the virtual assistant to analyze and respond. Privacy advocates have also raised concerns of how the millions of vocal samples gathered from consumers are fed back into the algorithms of virtual assistants, making these forms of AI smarter with each use. Though these features individualize the user experience, critics are unsure about the long term implications of giving "the company unprecedented access to human patterns and preferences that are crucial to the next phase of artificial intelligence". 
While transparency was referred to as a key part to the experience when the technology was revealed, Google later further clarified in a statement saying, "We are designing this feature with disclosure built-in, and we'll make sure the system is appropriately identified." Google further added that, in certain jurisdictions, the assistant would inform those on the other end of the phone that the call is being recorded. Reception PC World's Mark Hachman gave a favorable review of the Google Assistant, saying that it was a "step up on Cortana and Siri." Digital Trends called it "smarter than Google Now ever was". Criticism In July 2019 Belgian public broadcaster VRT NWS published an article revealing that third-party contractors paid to transcribe audio clips collected by Google Assistant listened to sensitive information about users. Sensitive data collected from Google Home devices and Android phones included names, addresses, and other private conversations such as business calls or bedroom conversations. From more than 1000 recordings analyzed, 153 were recorded without 'Okay Google' command. Google officially acknowledged that 0.2% of recordings are being listened to by language experts to improve Google's services. On August 1, 2019, Germany's Hamburg Commissioner for Data Protection and Freedom of Information has initiated an administrative procedure to prohibit Google from carrying out corresponding evaluations by employees or third parties for the period of three months to provisionally protect the rights of privacy of data subjects for the time being, citing GDPR. A Google spokesperson stated that Google paused “language reviews” in all European countries while it investigated recent media leaks. See also Amazon Alexa Bixby Cortana Home automation (Smart Home) Internet of things (IoT) Mycroft (software) Siri Smart devices Speech recognition Voice command device References External links Google Assistant Supported Languages Google Assistant for Developers Android (operating system) Assistant Natural language processing software Virtual assistants
4763330
https://en.wikipedia.org/wiki/Dynamic%20Kernel%20Module%20Support
Dynamic Kernel Module Support
Dynamic Kernel Module Support (DKMS) is a framework that enables the generation of Linux kernel modules whose sources generally reside outside the kernel source tree. The concept is to have DKMS modules automatically rebuilt when a new kernel is installed.

Framework
An essential feature of DKMS is that it automatically recompiles all DKMS modules if a new kernel version is installed. This allows drivers and devices outside of the mainline kernel to continue working after a Linux kernel upgrade. Another benefit of DKMS is that it allows the installation of a new driver on an existing system, running an arbitrary kernel version, without any need for manual compilation or precompiled packages provided by the vendor.

DKMS was written by the Linux Engineering Team at Dell in 2003. It is included in many distributions, such as Ubuntu, Debian, Fedora, SUSE, Mageia and Arch. DKMS is free software released under the terms of the GNU General Public License (GPL) v2 or later. DKMS supports both the rpm and deb package formats out of the box.

See also
Binary blob

References

External links
Building a kernel module using Dynamic Kernel Module Support (DKMS) on CentOS Wiki
Dynamic Kernel Module Support on ArchWiki

Dell
Linux kernel
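As a supplementary note, the typical DKMS lifecycle for an out-of-tree module is: place the sources (with a dkms.conf) under /usr/src, then add, build and install the module for the running kernel. The following minimal sketch drives the dkms command line from Python; the module name and version are placeholders, and it assumes the sources have already been copied to /usr/src/example_driver-1.0 and that the script runs with root privileges.

import subprocess

MODULE = "example_driver"  # placeholder name; sources assumed in /usr/src/example_driver-1.0
VERSION = "1.0"            # placeholder version matching the dkms.conf in that tree

def run(args):
    print("+", " ".join(args))
    subprocess.run(args, check=True)  # raise if dkms reports an error

def register_build_install():
    # Register the source tree with DKMS, then build and install the module
    # for the currently running kernel; DKMS rebuilds it automatically
    # whenever a new kernel is installed.
    run(["dkms", "add", "-m", MODULE, "-v", VERSION])
    run(["dkms", "build", "-m", MODULE, "-v", VERSION])
    run(["dkms", "install", "-m", MODULE, "-v", VERSION])
    run(["dkms", "status"])  # show the module's state across installed kernels

if __name__ == "__main__":
    register_build_install()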
3539962
https://en.wikipedia.org/wiki/Classified%20information%20in%20the%20United%20States
Classified information in the United States
The United States government classification system is established under Executive Order 13526, the latest in a long series of executive orders on the topic. Issued by President Barack Obama in 2009, Executive Order 13526 replaced earlier executive orders on the topic and modified the regulations codified to 32 C.F.R. 2001. It lays out the system of classification, declassification, and handling of national security information generated by the U.S. government and its employees and contractors, as well as information received from other governments. The desired degree of secrecy about such information is known as its sensitivity. Sensitivity is based upon a calculation of the damage to national security that the release of the information would cause. The United States has three levels of classification: Confidential, Secret, and Top Secret. Each level of classification indicates an increasing degree of sensitivity. Thus, if one holds a Top Secret security clearance, one is allowed to handle information up to the level of Top Secret, including Secret and Confidential information. If one holds a Secret clearance, one may not then handle Top Secret information, but may handle Secret and Confidential classified information. The United States does not have a British-style Official Secrets Act; instead, several laws protect classified information, including the Espionage Act of 1917, the Atomic Energy Act of 1954 and the Intelligence Identities Protection Act of 1982. A 2013 report to Congress noted that the relevant laws have been mostly used to prosecute foreign agents, or those passing classified information to them, and that leaks to the press have rarely been prosecuted. The legislative and executive branches of government, including US presidents, have frequently leaked classified information to journalists. Congress has repeatedly resisted or failed to pass a law that generally outlaws disclosing classified information. Most espionage law criminalizes only national defense information; only a jury can decide if a given document meets that criterion, and judges have repeatedly said that being "classified" does not necessarily make information become related to the "national defense". Furthermore, by law, information may not be classified merely because it would be embarrassing or to cover illegal activity; information may be classified only to protect national security objectives. The United States over the past decades under the Obama and Clinton administrations has released classified information to foreign governments for diplomatic goodwill, known as declassification diplomacy. Examples include information on Augusto Pinochet to the government of Chile. In October 2015, US Secretary of State John Kerry provided Michelle Bachelet, Chile's president, with a pen drive containing hundreds of newly declassified documents. Terminology In the U.S., information is called "classified" if it has been assigned one of the three levels: Confidential, Secret, or Top Secret. Information that is not so labeled is called "Unclassified information". The term declassified is used for information that has had its classification removed, and downgraded refers to information that has been assigned a lower classification level but is still classified. Many documents are automatically downgraded and then declassified after some number of years. The U.S. 
government uses the term Controlled Unclassified Information to refer to information that is not Confidential, Secret, or Top Secret, but whose dissemination is still restricted. Reasons for such restrictions can include export controls, privacy regulations, court orders, and ongoing criminal investigations, as well as national security. Information that was never classified is sometimes referred to as "open source" by those who work in classified activities. Public Safety Sensitive (PSS) refers to information that is similar to Law Enforcement Sensitive but could be shared between the various public safety disciplines (Law Enforcement, Fire, and Emergency Medical Services). Peter Louis Galison, a historian and Director in the History of Science Dept. at Harvard University, claims that the U.S. Government produces more classified information than unclassified information. Levels and categories of classification The United States government classifies sensitive information according to the degree which the unauthorized disclosure would damage national security. The three primary levels of classification (from least to greatest) are Confidential, Secret, and Top Secret. However, even Top Secret clearance does not allow one to access all information at, or below, Top Secret level. Access requires the clearance necessary for the sensitivity of the information, as well as a legitimate need to obtain the information. For example, all US military pilots are required to obtain at least a Secret clearance, but they may only access documents directly related to their orders. To ensure that only those with a legitimate need to know can access information, classified information may have additional categorizations/markings and access controls that could prevent even someone with a sufficient level of clearance from seeing it. Examples of this include: Special Access Program (SAP), Sensitive Compartmented Information (SCI), Restricted Data (RD), and Alternative or Compensatory Control Measures (ACCM). Since all federal departments are part of the Executive Branch, the classification system is governed by Executive Order rather than by law. Typically each president will issue a new executive order, either tightening classification or loosening it. The Clinton administration made a major change in the classification system by issuing an executive order that for the first time required all classified documents to be declassified after 25 years unless they were reviewed by the agency that created the information and determined to require continuing classification. Primary levels Confidential This is the lowest classification level of information obtained by the government. It is defined as information that would "damage" national security if publicly disclosed, again, without the proper authorization. For "C" and "(C)" designations, see also: Joint Electronics Type Designation System#Parenthetical C. Examples include information related to military strength and weapons. (During and before World War II, the U.S. had a category of classified information called Restricted, which was below confidential. The U.S. no longer has a Restricted classification, but many other nations and NATO do. The U.S. treats Restricted information it receives from other governments as Confidential. The U.S. does use the term restricted data in a completely different way to refer to nuclear secrets, as described below.) Secret This is the second-highest classification. 
Information is classified Secret when its unauthorized disclosure would cause "serious damage" to national security. Most classified information is held at the Secret level. "Examples of serious damage include disruption of foreign relations significantly affecting the national security; significant impairment of a program or policy directly related to the national security; revelation of significant military plans or intelligence operations; compromise of significant military plans or intelligence operations; and compromise of significant scientific or technological developments relating to national security."

Top Secret
The highest security classification. "Top Secret shall be applied to information, the unauthorized disclosure of which reasonably could be expected to cause 'exceptionally grave damage' to the National Security that the original classification authority is able to identify or describe." As of 2019, around 1.25 million individuals have Top Secret clearance. "Examples of exceptionally grave damage include armed hostilities against the United States or its allies; disruption of foreign relations vitally affecting the national security; the compromise of vital national defense plans or complex cryptology and communications intelligence systems; the revelation of sensitive intelligence operations, and the disclosure of scientific or technological developments vital to national security."

Additional proscribed categories
Top Secret is the highest level of classification. However, some information is further categorized/marked by adding a code word so that only those who have been cleared for each code word can see it. A document marked SECRET (CODE WORD) could be viewed only by a person with a Secret or Top Secret clearance and that specific code word clearance.

Special Access Program
Executive Order 13526, which forms the legal basis for the U.S. classification system, states that "information may be classified at one of the following three levels", with Top Secret as the highest level (Sec. 1.2). However, this executive order provides for special access programs that further restrict access to a small number of individuals and permit additional security measures (Sec. 4.3). These practices can be compared with (and may have inspired) the concepts of multilevel security and role-based access control. U.S. law also has special provisions protecting information related to cryptography (18 USC 798), nuclear weapons and atomic energy (see Controls on atomic-energy information) and the identity of covert intelligence agents (see Intelligence Identities Protection Act).

Sensitive Compartmented Information
Classified information concerning or derived from sensitive intelligence sources, methods, or analytical processes. All SCI must be handled within formal access control systems established by the Director of National Intelligence.

Restricted Data/Formerly Restricted Data
Restricted Data (RD) and Formerly Restricted Data (FRD) are classification markings that concern nuclear information. These are the only two classifications that are established by federal law, being defined by the Atomic Energy Act of 1954. Nuclear information is not automatically declassified after 25 years. Documents with nuclear information covered under the Atomic Energy Act will be marked with a classification level (Confidential, Secret or Top Secret) and a Restricted Data or Formerly Restricted Data marking.
Nuclear information as specified in the act may inadvertently appear in unclassified documents and must be reclassified when discovered. Even documents created by private individuals have been seized for containing nuclear information and classified. Only the Department of Energy may declassify nuclear information. Most RD and FRD (as well as most classified information in general) are classified at either the Confidential or Secret levels; however they require extra RD/FRD specific clearances in addition to the clearance level. Unclassified Unclassified is not technically a classification; this is the default and refers to information that can be released to individuals without a clearance. Information that is unclassified is sometimes restricted in its dissemination as Controlled Unclassified Information. For example, the law enforcement bulletins reported by the U.S. media when the United States Department of Homeland Security raised the U.S. terror threat level were usually classified as "U//LES", or "Unclassified – Law Enforcement Sensitive". This information is supposed to be released only to law enforcement agencies (sheriff, police, etc.), but, because the information is unclassified, it is sometimes released to the public as well. Information that is unclassified but which the government does not believe should be subject to Freedom of Information Act requests is often classified as Controlled Unclassified Information. In addition to CUI information, information can be categorized according to its availability to be distributed (e.g., Distribution D may only be released to approved Department of Defense and U.S. Department of Defense contractor personnel). Also, the statement of NOFORN (meaning "no foreign nationals") is applied to any information that may not be released to any non-U.S. citizen. NOFORN and distribution statements are often used in conjunction with classified information or alone on SBU information. Documents subject to export controls have a specific warning to that effect. Information which is "personally identifiable" is governed by the Privacy Act of 1974 and is also subject to strict controls regardless of its level of classification. Finally, information at one level of classification may be "upgraded by aggregation" to a higher level. For example, a specific technical capability of a weapons system might be classified Secret, but the aggregation of all technical capabilities of the system into a single document could be deemed Top Secret. Use of information restrictions outside the classification system is growing in the U.S. government. In September 2005 J. William Leonard, director of the U.S. National Archives Information Security Oversight Office, was quoted in the press as saying, "No one individual in government can identify all the controlled, unclassified [categories], let alone describe their rules." Controlled Unclassified Information (CUI) One of the 9/11 Commission findings was that "the government keeps too many secrets." To address this problem, the Commission recommended that '[t]he culture of agencies feeling they own the information they gathered at taxpayer expense must be replaced by a culture in which the agencies instead feel they have a duty ... to repay the taxpayers' investment by making that information available.'" Due to over 100 designations in use by the U.S. government for unclassified information at the time, President George W. 
Bush issued a Presidential memorandum on May 9, 2008, in an attempt to consolidate the various designations in use into a new category known as Controlled Unclassified Information (CUI). The CUI categories and subcategories were hoped to serve as the exclusive designations for identifying unclassified information throughout the executive branch not covered by Executive Order 12958 or the Atomic Energy Act of 1954 (as amended) but still required safeguarding or dissemination controls, pursuant to and consistent with any applicable laws, regulations, and government-wide policies in place at the time. CUI would replace categories such as For Official Use Only (FOUO), Sensitive But Unclassified (SBU) and Law Enforcement Sensitive (LES). The Presidential memorandum also designated the National Archives as responsible for overseeing and managing the implementation of the new CUI framework. This memorandum has since been rescinded by Executive Order 13556 of November 4, 2010 and the guidelines previously outlined within the memo were expanded upon in a further attempt to improve the management of information across all federal agencies as well as establish a more standard, government-wide program regarding the controlled unclassification designation process itself. The U.S. Congress has attempted to take steps to resolve this, but did not succeed. The U.S. House of Representatives passed the Reducing Information Control Designations Act on March 17, 2009. The bill was referred to the Senate Committee on Homeland Security and Governmental Affairs. Because no action was taken in committee and bills expire at the end of every Congress, there is currently no bill to solve unclassified designations. For Official Use Only (FOUO) Among U.S. government information, FOUO was primarily used by the U.S. Department of Defense as a handling instruction for Controlled Unclassified Information (CUI) which may be exempt from release under exemptions two to nine of the Freedom of Information Act (FOIA). It is one of the various sub-categorizations for strictly unclassified information which, on 24 February 2012, was officially consolidated as CUI. Other departments continuing the use of this designation include the Department of Homeland Security. Public Trust According to the Department of Defense, Public Trust is a type of position, not clearance level, though General Services Administration refers to it as clearance level. Certain positions which require access to sensitive information, but not information which is classified, must obtain this designation through a background check. Public Trust Positions can either be moderate-risk or high-risk. Proper procedure for classifying U.S. government documents To be properly classified, a classification authority (an individual charged by the U.S. government with the right and responsibility to properly determine the level of classification and the reason for classification) must determine the appropriate classification level, as well as the reason information is to be classified. A determination must be made as to how and when the document will be declassified, and the document marked accordingly. Executive Order 13526 describes the reasons and requirements for information to be classified and declassified (Part 1). Individual agencies within the government develop guidelines for what information is classified and at what level. The former decision is original classification. A great majority of classified documents are created by derivative classification. 
For example, if one piece of information, taken from a secret document, is put into a document along with 100 pages of unclassified information, the document, as a whole, will be secret. Proper rules stipulate that every paragraph will bear a classification marking of (U) for Unclassified, (C) for Confidential, (S) for Secret, and (TS) for Top Secret. Therefore, in this example, only one paragraph will have the (S) marking. If the page containing that paragraph is double-sided, the page should be marked SECRET on top and bottom of both sides. A review of classification policies by the Office of the Director of National Intelligence aimed at developing a uniform classification policy and a single classification guide that could be used by the entire U.S. intelligence community found significant interagency differences that impaired cooperation and performance. The initial ODNI review, completed in January 2008, said in part, "The definitions of 'national security' and what constitutes 'intelligence'—and thus what must be classified—are unclear. ... Many interpretations exist concerning what constitutes harm or the degree of harm that might result from improper disclosure of the information, often leading to inconsistent or contradictory guidelines from different agencies. ... There appears to be no common understanding of classification levels among the classification guides reviewed by the team, nor any consistent guidance as to what constitutes 'damage,' 'serious damage,' or 'exceptionally grave damage' to national security. ... There is wide variance in application of classification levels." The review recommended that original classification authorities should specify clearly the basis for classifying information, for example, whether the sensitivity derives from the actual content of the information, the source, the method by which it was analyzed, or the date or location of its acquisition. Current policy requires that the classifier be "able" to describe the basis for classification but not that he or she in fact do so. Classification categories Step 3 in the classification process is to assign a reason for the classification. Classification categories are marked by the number "1.4" followed by one or more letters (a) to (h): 1.4(a) military plans, weapons systems, or operations; 1.4(b) foreign government information; 1.4(c) intelligence activities, sources, or methods, or cryptology; 1.4(d) foreign relations or foreign activities of the United States, including confidential sources; 1.4(e) scientific, technological or economic matters relating to national security; which includes defense against transnational terrorism; 1.4(f) United States Government programs for safeguarding nuclear materials or facilities; 1.4(g) vulnerabilities or capabilities of systems, installations, infrastructures, projects or plans, or protection services relating to the national security, which includes defense against transnational terrorism; and/or 1.4(h) the development, production, or use of weapons of mass destruction. Classifying non-government-generated information The Invention Secrecy Act of 1951 allows the suppression of patents (for a limited time) for inventions that threaten national security. Whether information related to nuclear weapons can constitutionally be "born secret" as provided for by the Atomic Energy Act of 1954 has not been tested in the courts. 
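The portion-marking rule described above under "Proper procedure for classifying U.S. government documents" — each portion carries its own marking and the document as a whole takes the highest level present — amounts to a maximum over an ordered set of levels. A minimal sketch, using made-up portion markings purely for illustration:

# Order the levels so the overall marking can be computed as a maximum.
LEVELS = ["U", "C", "S", "TS"]  # Unclassified < Confidential < Secret < Top Secret
RANK = {level: i for i, level in enumerate(LEVELS)}

def overall_classification(portion_markings):
    """Return the overall level for a document given its portion markings."""
    if not portion_markings:
        return "U"
    return max(portion_markings, key=lambda level: RANK[level])

# A document with one (S) paragraph and 100 (U) paragraphs is Secret overall.
portions = ["U"] * 100 + ["S"]
assert overall_classification(portions) == "S"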
Guantanamo Bay detention camp has used a "presumptive classification" system to describe the statements of Guantanamo Bay detainees as classified. When challenged by Ammar al-Baluchi in the Guantanamo military commission hearing the 9/11 case, the prosecution abandoned the practice. Presumptive classification continues in the cases involving the habeas corpus petitions of Guantanamo Bay detainees. Protecting classified information Facilities and handling One of the reasons for classifying state secrets into sensitivity levels is to tailor the risk to the level of protection. The U.S. government specifies in some detail the procedures for protecting classified information. The rooms or buildings for holding and handling classified material must have a facility clearance at the same level as the most sensitive material to be handled. Good quality commercial physical security standards generally suffice for lower levels of classification; at the highest levels, people sometimes must work in rooms designed like bank vaults (see Sensitive Compartmented Information Facility – SCIF). The U.S. Congress has such facilities inside the Capitol Building, among other Congressional handling procedures for protecting confidentiality. The U.S. General Services Administration sets standards for locks and containers used to store classified material. The most commonly-approved security containers resemble heavy-duty file cabinets with a combination lock in the middle of one drawer. In response to advances in methods to defeat mechanical combination locks, the U.S. government switched to electromechanical locks that limit the rate of attempts to unlock them. After a specific number of failed attempts, they will permanently lock, requiring a locksmith to reset them. Classified U.S. government documents typically must be stamped with their classification on the cover and at the top and bottom of each page. Authors must mark each paragraph, title and caption in a document with the highest level of information it contains, usually by placing appropriate initials in parentheses at the beginning of the paragraph, title, or caption. Commonly, one must affix a brightly colored cover sheet to the cover of each classified document to prevent unauthorized observation of classified material (shoulder surfing) and to remind users to lock up unattended documents. The most sensitive material requires two-person integrity, where two cleared individuals are responsible for the material at all times. Approved containers for such material have two separate combination locks, both of which must be opened to access the contents. Restrictions dictate shipment methods for classified documents. Top Secret material must go by special courier; Secret material within the U.S. via registered mail; and Confidential material by certified mail. Electronic transmission of classified information largely requires the use of National Security Agency approved/certified "Type 1" cryptosystems using NSA's unpublished and classified Suite A algorithms. The classification of the Suite A algorithms categorizes the hardware that store them as a Controlled Cryptographic Item (CCI) under the International Traffic in Arms Regulations, or ITAR. CCI equipment and keying material must be controlled and stored with heightened physical security, even when the device is not processing classified information or contains no cryptographic key. 
NSA is currently implementing what it calls Suite B, a group of commercial algorithms such as Advanced Encryption Standard (AES), Secure Hash Algorithm (SHA), Elliptic Curve Digital Signature Algorithm (ECDSA) and Elliptic curve Diffie–Hellman (ECDH). Suite B provides protection for data up to Top Secret on non-CCI devices, which is especially useful in high-risk environments or operations needed to prevent Suite A compromise. These less stringent hardware requirements stem from the device not having to "protect" classified Suite A algorithms. Specialized computer operating systems known as trusted operating systems are available for processing classified information. These systems enforce the classification and labeling rules described above in software. Since 2005, however, they are not considered secure enough to allow uncleared users to share computers with classified activities. Thus, if one creates an unclassified document on a secret device, the resultant data is classified secret until it can be manually reviewed. Computer networks for sharing classified information are segregated by the highest sensitivity level they are allowed to transmit, for example, SIPRNet (Secret) and JWICS (Top Secret-SCI). The destruction of certain types of classified documents requires burning, shredding, pulping or pulverizing using approved procedures and must be witnessed and logged. Classified computer data presents special problems. See Data remanence. Lifetime commitment When a cleared individual leaves the job or employer for which they were granted access to classified information, they are formally debriefed from the program. Debriefing is an administrative process that accomplishes two main goals: it creates a formal record that the individual no longer has access to the classified information for that program; and it reminds the individual of their lifetime commitment to protect that information. Typically, the individual is asked to sign another non-disclosure agreement (NDA), similar to that which they signed when initially briefed, and this document serves as the formal record. The debriefed individual does not lose their security clearance; they have only surrendered the need to know for information related to that particular job. Classifications and clearances between U.S. government agencies In the past, clearances did not necessarily transfer between various U.S. government agencies. For example, an individual cleared for Department of Defense Top Secret had to undergo another investigation before being granted a Department of Energy Q clearance. Agencies are now supposed to honor background investigations by other agencies if they are still current. Because most security clearances only apply inside the agency where the holder works, if one needs to meet with another agency to discuss classified matters, it is possible and necessary to pass one's clearance to the other agency. For example, officials visiting at the White House from other government agencies would pass their clearances to the Executive Office of the President (EOP). The Department of Energy security clearance required to access Top Secret Restricted Data, Formerly Restricted Data, and National Security Information, as well as Secret Restricted Data, is a Q clearance. The lower-level L clearance is sufficient for access to Secret Formerly Restricted Data and National Security Information, as well as Confidential Restricted Data and Formerly Restricted Data. 
In practice, access to Restricted Data is granted, on a need-to-know basis, to personnel with appropriate clearances. At one time, a person might hold both a TS and a Q clearance, but that duplication and cost is no longer required. For all practical purposes, Q is equivalent to Top Secret, and L is equivalent to Secret. Contrary to popular lore, the Yankee White clearance given to personnel who work directly with the President is not a classification. Individuals having Yankee White clearances undergo extensive background investigations. The criteria include U.S. citizenship, unquestionable loyalty, and an absolute absence of any foreign influence over the individual, their family, or "persons to whom the individual is closely linked". Also, they must not have traveled (save while in government employ and at the instructions of the United States) to countries that are considered to be unfriendly to the United States. Yankee White cleared personnel are granted access to any information for which they have a need to know, regardless of which organization classified it or at what level. See also the Single Scope Background Investigation below, along with explicit compartmented access indoctrination. Some compartments, especially intelligence-related, may require a polygraph examination, although the reliability of the polygraph is controversial. The NSA uses the polygraph early in the clearance process while the CIA uses it at the end, which may suggest divergent opinions on the proper use of the polygraph. Standard form 312 Standard Form 312 (SF 312) is a non-disclosure agreement required under Executive Order 13292 to be signed by employees of the U.S. Federal Government or one of its contractors when they are granted a security clearance for access to classified information. The form is issued by the Information Security Oversight Office of the National Archives and Records Administration and its title is "Classified Information Nondisclosure Agreement." SF 312 prohibits confirming or repeating classified information to unauthorized individuals, even if that information is already leaked. The SF 312 replaces the earlier forms SF 189 or the SF 189-A. Enforcement of SF-312 is limited to civil actions to enjoin disclosure or seek monetary damages and administrative sanctions, "including reprimand, suspension, demotion or removal, in addition to the likely loss of the security clearance." Categories that are not classifications Compartments also exist, that employ code words pertaining to specific projects and are used to more easily manage individual access requirements. Code words are not levels of classification themselves, but a person working on a project may have the code word for that project added to their file, and then will be given access to the relevant documents. Code words may also label the sources of various documents; for example, code words are used to indicate that a document may break the cover of intelligence operatives if its content becomes known. The WWII code word Ultra identified information found by decrypting German ciphers, such as the Enigma machine, and which—regardless of its own significance—might inform the Germans that Enigma was broken if they became aware that it was known. Sensitive Compartmented Information (SCI) and Special Access Programs (SAP) The terms "Sensitive Compartmented Information" (SCI) and "Special Access Program" (SAP) are widely misunderstood as classification levels or specific clearances. 
In fact, the terms refer to methods of handling certain types of classified information that relate to specific national-security topics or programs (whose existence may not be publicly acknowledged) or the sensitive nature of which requires special handling, and thereby those accessing it require special approval to access it. The paradigms for these two categories, SCI originating in the intelligence community and SAP in the Department of Defense, formalize 'Need to Know' and addresses two key logistical issues encountered in the day-to-day control of classified information: Individuals with a legitimate need to know may not be able to function effectively without knowing certain facts about their work. However, granting all such individuals a blanket DoD clearance (often known as a "collateral" clearance) at the Top Secret level would be undesirable, not to mention prohibitively expensive. The government may wish to limit certain types of sensitive information only to those who work directly on related programs, regardless of the collateral clearance they hold. Thus, even someone with a Top Secret clearance cannot gain access to its Confidential information unless it is specifically granted. To be clear, "collateral" (formerly referred to as General Service or GENSER) simply means one lacks special access (e.g. SCI, SAP, COMSEC, NATO, etc.). Confidential, Secret, and Top Secret are all, by themselves, collateral clearance levels. SAP and SCI are usually found at the Top Secret classification, but there is no prohibition of applying such segregation to Confidential and Secret information. SAP and SCI implementation are roughly equivalent, and it is reasonable to discuss their implementation as one topic. For example, SAP material needs to be stored and used in a facility much like the SCIF described below. Department of Energy information, especially the more sensitive SIGMA categories, may be treated as SAP or SCI. Access to compartmented information Personnel who require knowledge of SCI or SAP information fall into two general categories: Persons with a need to know Persons with actual access Access to classified information is not authorized based on clearance status. Access is only permitted to individuals after determining they have a need to know. Need-to-know is a determination that an individual requires access to specific classified information in the performance of (or assist in the performance of) lawful and authorized government functions and duties. To achieve selective separation of program information while still allowing full access to those working on the program, a separate compartment, identified by a unique codeword, is created for the information. This entails establishing communication channels, data storage, and work locations (SCIF—Sensitive Compartmented Information Facility), which are physically and logically separated not only from the unclassified world, but from general Department of Defense classified channels as well. Thus established, all information generated within the compartment is classified according to the general rules above. However, to emphasize that the information is compartmented, all documents are marked with both the classification level and the codeword (and the caveat "Handle via <compartment name> Channels Only", or "Handle via <compartment names> Jointly" if the document contains material from multiple programs). 
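Documents in a compartment thus carry the overall level, the compartment code word, and the handling caveat quoted above; in banner form these elements are typically joined with "//" and "/", as shown later under "Handling caveats". The following is a simplified formatting sketch for illustration only; the code word is a placeholder and the real marking rules are considerably more detailed.

def banner(level, compartments, dissemination=()):
    """Build a simplified summary marking, e.g. 'TOP SECRET//CODEWORD//NOFORN'."""
    parts = [level]
    if compartments:
        parts.append("/".join(compartments))
    if dissemination:
        parts.append("/".join(dissemination))
    return "//".join(parts)

def handling_caveat(compartments):
    """Caveat text for compartmented documents, as described above."""
    if len(compartments) == 1:
        return "Handle via {} Channels Only".format(compartments[0])
    return "Handle via {} Jointly".format(" and ".join(compartments))

# "EXAMPLE" is a made-up code word, not a real compartment:
print(banner("TOP SECRET", ["EXAMPLE"], ["ORCON", "NOFORN"]))
print(handling_caveat(["EXAMPLE"]))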
A person is granted access to a specific compartment after the individual has: (a) had a Single Scope Background Investigation similar to that required for a collateral Top Secret clearance; (b) been "read into" or briefed on the nature and sensitivity of the compartment; and (c) signed a non-disclosure agreement (NDA). Access does not extend to any other compartment; i.e., there is no single "SCI clearance" analogous to DoD collateral Top Secret. The requirements for DCID 6/4 eligibility (a determination that an individual is eligible for access to SCI), subsumes the requirements for a TS collateral clearance. Being granted DCID 6/4 eligibility includes the simultaneous granting of a TS collateral clearance, as adjudicators are required to adjudicate to the highest level that the investigation (SSBI) supports. Examples Examples of such control systems and subsystems are: SCI – Sensitive Compartmentalized Information BYEMAN (BYE or B) COMINT or Special Intelligence (SI) Very Restricted Knowledge (VRK) Exceptionally Controlled Information (ECI), which was used to group compartments for highly sensitive information, but was deprecated as of 2011. GAMMA (SI-G) ENDSEAL (EL) HUMINT Control System (HCS) KLONDIKE (KDK) RESERVE (RSV) TALENT KEYHOLE (TK) SAP – Special Access Programs COPPER GREEN Groups of compartmented information SAPs in the Department of Defense are subdivided into three further groups, as defined in . There is no public reference to whether SCI is divided in the same manner, but news reports reflecting that only the Gang of Eight members of Congress are briefed on certain intelligence activities, it may be assumed that similar rules apply for SCI or for programs with overlapping SAP and SCI content. The groups for Department of Defense SAPs are: Acknowledged: appears as a line item as "classified project" or the equivalent in the federal budget, although details of its content are not revealed. The budget element will associate the SAP with a Department of Defense component organization, such as a Military Department (e.g. Department of the Navy), a Combatant Command (e.g. U.S. Special Operations Command) or a Defense Agency (e.g. Defense Information Systems Agency.) Unacknowledged: no reference to such SAPs is found in the publicly published federal budget; its funding is hidden in a classified annex, often called the "black budget". The Congressional defense committees, however, are briefed on the specifics of such SAPs. Waived: At the sole discretion of the Secretary of Defense , on a case-by-case basis in the interest of national security, there is no mention in the budget at all, and only the "Big 6" members of Congress: the Chairman and Ranking Minority Members of the armed services committees, the appropriations committees and the defense appropriations subcommittees; receive notification of such SAPs. Examples of SCI topics are human intelligence, communications intelligence, and intelligence collected by satellites. One or more compartments may be created for each area, and each of these compartments may contain multiple subcompartments (e.g., a specific HUMINT operation), themselves with their own code names. Specific compartmented programs will have their own specific rules. For example, it is standard that no person is allowed unaccompanied access to a nuclear weapon or to command-and-control systems for nuclear weapons. Personnel with nuclear-weapons access are under the Personnel Reliability Program. 
Some highly sensitive SAP or SCI programs may also use the "no lone zone" method (that is, a physical location into which no one is allowed to enter unaccompanied) described for nuclear weapons. Handling caveats The United States also has a system of restrictive caveats that can be added to a document: these are constantly changing, but can include (in abbreviated form) a requirement that the document not be shared with a civilian contractor or not leave a specific room. These restrictions are not classifications in and of themselves; rather, they restrict the dissemination of information among those who have the appropriate clearance level and possibly the need to know the information. Remarks such as "EYES ONLY" and "DO NOT COPY" also restrict dissemination. A person violating these directives might be guilty of violating a lawful order or of mishandling classified information. For ease of use, caveats and abbreviations have been adopted that can be included in the summary classification marking (header/footer) to enable the restrictions to be identified at a glance. They are sometimes known as Dissemination Control Abbreviations. Some of these caveats are (or were): CUI: Controlled Unclassified Information. Replaces the labels For Official Use Only (FOUO), Sensitive But Unclassified (SBU), and Law Enforcement Sensitive (LES). FOUO: For Official Use Only. Superseded by CUI and no longer in use with the exception of Department of Homeland Security documents. Used for documents or products which contain material exempt from release under the Freedom of Information Act. NFIBONLY: National Foreign Intelligence Board Departments Only NOFORN: Distribution to non-US citizens is prohibited, regardless of their clearance or access permissions (NO FOReign National access allowed). NOCONTRACTOR: Distribution to contractor personnel (non-US-government employees) is prohibited, regardless of their clearance or access permissions. ORCON: Originator controls dissemination and/or release of the document. PROPIN: Caution—Proprietary Information Involved REL<country code(s)>: Distribution to citizens of the countries listed is permitted, providing they have appropriate accesses and need to know. Example: "REL TO USA, AUS, CAN, GBR, NZL" indicates that the information may be shared with appropriate personnel from Australia, the United Kingdom, Canada, and New Zealand. FVEY is the country code used as shorthand for the Five Eyes. <nn>X<m>: Information is exempt from automatic declassification (after the statutory default of 25 years) for exemption reason <m>, and declassification review shall not be permitted for <nn> years (as determined by law or the Interagency Security Classification Appeals Panel). For the most part, the exemption reasoning and caveats are outlined in paragraphs (b)–(d) and (g)–(i) of Sec. 3.3 of Executive Order 13526, but paragraph (b) is typically the one being referenced as the exemption reason value <m>. Example: "50X1" indicates the information must remain classified for 50 years, since it pertains to intelligence activities, sources, or methods (reason (1) of Section 3.3, paragraph (b)). RESTRICTED: Distribution to non-US citizens or those holding an interim clearance is prohibited; certain other special handling procedures apply. FISA: used in the FISC and probably in the FISCR since at least 2017. Classification level and caveats are typically separated by "//" in the summary classification marking. 
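As an illustration of how these pieces fit together, the sketch below (Python, with hypothetical helper names; a simplification for illustration only, not the official marking rules) assembles a summary marking line from a level, optional compartments, and dissemination caveats, and parses an "<nn>X<m>" exemption marking such as "50X1".

```python
import re

def banner(level, compartments=(), caveats=()):
    """Assemble a simplified summary classification marking.

    Major blocks (level, compartments, dissemination caveats) are
    separated by "//"; items within a block are separated by "/".
    """
    parts = [level.upper()]
    if compartments:
        parts.append("/".join(c.upper() for c in compartments))
    if caveats:
        parts.append("/".join(c.upper() for c in caveats))
    return "//".join(parts)

def parse_exemption(marking):
    """Parse an "<nn>X<m>" declassification-exemption marking, e.g. "50X1".

    Returns (years, reason): the number of years the information is exempt
    from automatic declassification and the Sec. 3.3 exemption reason.
    """
    m = re.fullmatch(r"(\d+)X(\d+)", marking.upper())
    if not m:
        raise ValueError(f"not an exemption marking: {marking!r}")
    return int(m.group(1)), int(m.group(2))

# Example usage, reproducing markings like those quoted in the text:
print(banner("Secret", compartments=["<compartment name>"],
             caveats=["ORCON", "NOFORN"]))   # SECRET//<COMPARTMENT NAME>//ORCON/NOFORN
print(parse_exemption("50X1"))               # (50, 1)
```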
For example, the final summary marking of a document might be: SECRET//<compartment name>//ORCON/NOFORN or TOP SECRET//NOFORN/FISA Controls on atomic-energy information The Atomic Energy Act of 1954 sets requirements for protection of information about nuclear weapons and special nuclear materials. Such information is "classified from birth", unlike all other sensitive information, which must be classified by some authorized individual. However, authorized classifiers still must determine whether documents or material are classified or restricted. The U.S. Department of Energy recognizes two types of Restricted Data: Restricted Data. Data concerning the design, manufacture, or utilization of atomic weapons; production of special nuclear material; or use of special nuclear material in the production of energy. Formerly Restricted Data. Classified information jointly determined by the DOE and the Department of Defense to be related primarily to the military utilization of atomic weapons and removed from the Restricted Data category. Documents containing such information must be marked "RESTRICTED DATA" (RD) or "FORMERLY RESTRICTED DATA" (FRD) in addition to any other classification marking. Restricted Data and Formerly Restricted Data are further categorized as Top Secret, Secret, or Confidential. SIGMA categories and Critical Nuclear Weapon Design Information RESTRICTED DATA contains further compartments. The Department of Energy establishes a list of SIGMA Categories for more fine-grained control than RESTRICTED DATA. Critical Nuclear Weapon Design Information (CNWDI, colloquially pronounced "Sin-Widdy") reveals the theory of operation or design of the components of a nuclear weapon. As such, it would be SIGMA 1 or SIGMA 2 material, assuming laser fusion is not involved in the information. Access to CNWDI is supposed to be kept to the minimum number of individuals needed. In written documents, paragraphs containing the material, assuming it is Top Secret, would be marked (TS//RD-CNWDI). SIGMA information of special sensitivity may be handled much like SAP or SCI material (q.v.) Naval Nuclear Propulsion Information While most Naval Nuclear Propulsion Information is sensitive, it may or may not be classified. The desired power densities of naval reactors make their design peculiar to military use, specifically high-displacement, high-speed vessels. The proliferation of quieter- or higher-performance marine propulsion systems presents a national-security threat to the United States. Due to this fact, all but the most basic information concerning NNPI is classified. The United States Navy recognizes that the public has an interest in environmental, safety, and health information, and that the basic research the Navy carries out can be useful to industry. Sharing of classified information with other countries In cases where the United States wishes to share classified information bilaterally (or multilaterally) with a country that has a sharing agreement, the information is marked with "REL TO USA", (release) and the three-letter country code. For example, if the U.S. wanted to release classified information to the government of Canada, it would mark the document "REL TO USA, CAN". There are also group releases, such as NATO, FVEY or UKUSA. Those countries would have to maintain the classification of the document at the level originally classified (Top Secret, Secret, etc.). Claims of U.S. 
government misuse of the classification system Max Weber observed: "Every bureaucracy strives to increase the superiority of its position by keeping its knowledge and intentions secret. Bureaucratic administration always seeks to evade the light of the public as best it can, because in so doing it shields its knowledge and conduct from criticism ..." While the classification of information by the government is not supposed to be used to prevent information from being made public that would be simply embarrassing or reveal criminal acts, it has been alleged that the government routinely misuses the classification system to cover up criminal activity and potentially embarrassing discoveries. Steven Aftergood, director of the Project on Government Secrecy at the Federation of American Scientists, notes that "... inquiring into classified government information and disclosing it is something that many national security reporters and policy analysts do, or try to do, every day. And with a few narrow exceptions—for particularly sensitive types of information—courts have determined that this is not a crime." Aftergood notes, "The universe of classified information includes not only genuine national security secrets, such as confidential intelligence sources or advanced military technologies, but an endless supply of mundane bureaucratic trivia, such as 50-year-old intelligence budget figures, as well as the occasional crime or cover-up." As early as 1956, the U.S. Department of Defense estimated that 90% of its classified documents could be publicly disclosed with no harm to national security. The National Security Archive has collected a number of examples of overclassification and of government censors blacking out documents that have already been released in full, or redacting entirely different parts of the same document at different times. In the Pentagon Papers case, a classified study was published revealing that four administrations had misled the American public about their intentions in the Vietnam War, increasing the credibility gap. Tony Russo and Daniel Ellsberg were prosecuted under the Espionage Act. The case prompted Harold Edgar and Benno C. Schmidt, Jr. to write a review of espionage law in the 1973 Columbia Law Review. Their article was entitled "The Espionage Statutes and Publication of Defense Information". In it, they point out that espionage law does not criminalize the disclosure of classified information as such, only of national defense information. They point out that Congress has repeatedly resisted or failed to make the disclosure of classified information illegal in and of itself. Instead, Congress has strictly limited which sorts of classified information are illegal to disclose, and under which specific circumstances. For example, Congress specifically criminalized leaking classified cryptographic information, but when it passed that law it specifically stated that it did not criminalize disclosing other types of classified information. Another article that discusses the issue is by Jennifer Elsea of the Congressional Research Service. Various UFO conspiracy theories mention a level "Above Top Secret" used for UFO design information and related data. They suggest such a classification is intended to apply to information relating to things whose possible existence is to be denied, such as aliens, as opposed to things whose potential existence may be recognized, but for which access to information regarding specific programs would be denied as classified. 
The British government, for example, denied for several decades that it was either involved in or interested in UFO sightings. However, in 2008, the government revealed that it had monitored UFO activity for at least the previous 30 years. The existence of an "Above Top Secret" classification is considered by some to be unnecessary for keeping the existence of aliens a secret, as they say information at the Top Secret level, or any level for that matter, can be restricted on the basis of need to know. Thus, the U.S. government could conceal an alien project without having to resort to another level of clearance, as need to know would limit access to the information. Some suggest that claims of the existence of such a classification level may be based on the unsubstantiated belief that the levels of classification are themselves classified. As such, they feel that books claiming to contain "Above Top Secret" information on UFOs or remote viewing should arguably be taken with a grain of salt. Without making a judgment on whether such classifications have been used for space aliens, it is a reality that even the names of some compartments were classified, as certainly was the meaning of the code names. In the cited document, an (S) means the material it precedes is Secret and (TS) means Top Secret. According to the Department of Defense directive, "the fact of" the existence of NRO was at the secret level for many years, as well as the fact of and the actual phrase "National Reconnaissance Program" (see Paragraph II). Paragraph V(a) is largely redacted, but the introduction to the documents clarifies (see Document 19) that it refers to the now-cancelled BYEMAN code word and control channel for NRO activities. BYEMAN, the main NRO compartment, was classified as a full word, although the special security offices could refer, in an unclassified way, to "B policy". Responsible agencies Any agency designated by the President can originate classified information if it meets the content criteria; each agency is responsible for safeguarding and declassifying its own documents. The National Archives and Records Administration (NARA) has custody of classified documents from defunct agencies, and also houses the National Declassification Center (since 2010) and the Information Security Oversight Office. The Interagency Security Classification Appeals Panel has representatives from the Departments of State, Defense, and Justice; the National Archives; the Office of the Director of National Intelligence; the National Security Advisor; the Central Intelligence Agency; and the Information Security Oversight Office. Declassification Declassification is the process of removing the classification of a document and opening it for public inspection. Automatic declassification In accordance with Executive Order 13526, published January 5, 2010 (which superseded Executive Order 12958, as amended), an executive agency must declassify its documents after 25 years unless they fall under one of the nine narrow exemptions outlined by section 3.3 of the order. Classified documents 25 years or older must be reviewed by any and all agencies that possess an interest in the sensitive information found in the document. Documents classified for longer than 50 years must concern human intelligence sources or weapons of mass destruction, or get special permission. All documents older than 75 years must have special permission. 
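The 25-, 50-, and 75-year thresholds described above can be read as a small rule set. The sketch below (hypothetical function and parameter names, one simplified reading of the order rather than any official tool) encodes that logic.

```python
from datetime import date

def auto_declass_status(classified_on, today, exempt_under_3_3=False,
                        concerns_humint_or_wmd=False, special_permission=False):
    """Return a rough status under the 25/50/75-year rules of E.O. 13526.

    This is a simplified reading of the thresholds described above,
    not an implementation of the order itself.
    """
    age = (today - classified_on).days // 365
    if age >= 75:
        # Beyond 75 years, continued classification needs special permission.
        return "remains classified" if special_permission else "declassify"
    if age >= 50:
        # Beyond 50 years, only HUMINT/WMD material or special permission qualifies.
        keep = concerns_humint_or_wmd or special_permission
        return "remains classified" if keep else "declassify"
    if age >= 25:
        # At 25 years, one of the nine Sec. 3.3 exemptions is required.
        return "remains classified" if exempt_under_3_3 else "declassify"
    return "not yet subject to automatic declassification"

# Example: a document classified in 1970, reviewed in 2024, with no exemption.
print(auto_declass_status(date(1970, 1, 1), date(2024, 1, 1)))  # "declassify"
```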
Systematic declassification The Order also requires that agencies establish and conduct a program for systematic declassification review, based on the new and narrower criteria. This only applies to records that are of permanent historical value and less than 25 years old. Section 3.4 of Order 13526 directs agencies to prioritize the systematic review of records based upon the degree of researcher interest and the likelihood of declassification upon review. Mandatory Declassification Review A Mandatory Declassification Review, or MDR, is requested by an individual in an attempt to declassify a document for release to the public. These challenges are presented to the agency whose equity, or "ownership", is invested in the document. Once an MDR request has been submitted to an agency for the review of a particular document, the agency must respond either with an approval, a denial, or the inability to confirm or deny the existence or nonexistence of the requested document. After the initial request, an appeal can be filed with the agency by the requester. If the agency refuses to declassify that document, then a decision from a higher authority can be provided by the appellate panel, the Interagency Security Classification Appeals Panel (ISCAP). Freedom of Information Act The U.S. Freedom of Information Act (FOIA) was signed into law by President Lyndon B. Johnson on July 4, 1966, took effect the following year, and was amended in 1974, 1976, 1986, 1996 and 2002 (in 1974 over President Ford's veto). This act allows for the full or partial disclosure of previously unreleased information and documents controlled by the U.S. government. Any member of the public may ask for a classified document to be declassified and made available for any reason. The requestor is required to specify with reasonable certainty the documents of interest. If the agency refuses to declassify, the decision can be taken to the courts for a review. The FOIA does not guarantee that requested documents will be released; refusals usually fall under one of the nine declassification exemptions that protect highly sensitive information. History of National Archives and Records Administration role After declassification, the documents from many agencies are accessioned at the National Archives and Records Administration and put on the open shelves for the public. NARA also reviews documents for declassification. NARA first established a formal declassification program for records in 1972, and between 1973 and 1996 reviewed nearly 650 million pages of historically valuable federal records related to World War II, the Korean War, and American foreign policy in the 1950s as part of its systematic declassification review program. From 1996 to 2006, NARA processed and released close to 460 million pages of federal records, working in partnership with the agencies that originated the records. Over the years, NARA has processed more than 1.1 billion pages of national security classified federal records, resulting in the declassification and release of ninety-one percent of the records. NARA has also provided significant support to several special projects to review and release federal records on topics of extraordinary public interest such as POW/MIAs or Nazi war crimes. 
Additionally, NARA works closely with reference archivists to ensure that the federal records most in demand by researchers receive priority for declassification review and performs review on demand for individuals who need records that do not fall into a priority category. NARA has improved or developed electronic systems to support declassification, automating some processes and thus ensuring a more complete record of declassification actions. With assistance from the Air Force, NARA established the Interagency Referral Center (IRC) in order to support agencies as they seek access to their equities in federal records at the National Archives at College Park and to ensure that high-demand records are processed first. In 2009, Executive Order 13526 created the National Declassification Center at NARA, which also houses the Information Security Oversight Office. Presidential libraries Presidential libraries hold in excess of 30 million classified pages, including approximately 8 million pages from the administrations of Presidents Hoover through Carter, that were subject to automatic declassification on December 31, 2006. The foreign policy materials in Presidential collections are among the highest-level foreign policy documents in the Federal government and are of significant historical value. From 1995 to 2006, the national Presidential Library system reviewed, declassified, and released 1,603,429 pages of presidential materials using systematic guidelines delegated to the Archivist of the United States. NARA has also hosted on-site agency review teams at the Eisenhower, Kennedy, and Ford Presidential Libraries to manage classified equities and all presidential libraries have robust mandatory declassification review programs to support requests of individual researchers. See also Sensitive Compartmented Information List of U.S. security clearance terms Q clearance Controlled Cryptographic Item Copyright status of work by the U.S. government Espionage Act of 1917 FAA 1600.2 Invention Secrecy Act McCollum memo Secrecy News, a newsletter that covers U.S. classification policy United States diplomatic cables leak, the leak by Chelsea Manning via WikiLeaks United States v. Reynolds References Citations Sources Information Security Oversight Office (ISOO), a component of the National Archives and Records Administration (NARA) Policy Docs at ISOO, includes Executive Order 13526 – Classified National Security Information Memorandum of December 29, 2009 – Implementation of Executive Order 13526, () Order of December 29, 2009 – Original Classification Authority () Implementing Directive; Final Rule ( 32 C.F.R. Part 2001, ) ← rest of E.O. 13526 came into full effect June 25, 2010 Executive Order 12333, text at WikiSource Executive Order 13292, text at WikiSource Security Classified and Controlled Information: History, Status, and Emerging Management Issues, Congressional Research Service, January 2, 2008 DoD 5220.22-M National Industrial Security Program Operating Manual (NISPOM) 400 Series DOE Directives by Number The 400 series of directives is where DOE keeps most security and classification-related items. 
Atomic Energy Act of 1954 42 USC 2168 – The Atomic Energy Act of 1954 at the US Nuclear Regulatory Commission Espionage Act 18 USC 793, 794, 798 Designation and Sharing of Controlled Unclassified Information (CUI) – Presidential Memo of May 7, 2008 National Declassification Center External links Explanation of the US Classification System Public Interest Declassification Board Department of Justice - Freedom of Information Act US Department of Defense - Freedom of Information Act The National Security Archive Open the Government.org Federation of American Scientists Declassified Documents from UCB Libraries GovPubs Criminal Prohibitions on Leaks and Other Disclosures of Classified Defense Information, Stephen P. Mulligan and Jennifer K. Elsea, Congressional Research Service, March 7, 2017. The Protection of Classified Information: The Legal Framework, Jennifer K. Elsea, Congressional Research Service, May 18, 2017.
379613
https://en.wikipedia.org/wiki/Grumman%20A-6%20Intruder
Grumman A-6 Intruder
The Grumman A-6 Intruder is an American twinjet all-weather attack aircraft developed and manufactured by American aircraft company Grumman Aerospace and operated by the U.S. Navy and U.S. Marine Corps. It was designed in response to a 1957 requirement issued by the Bureau of Aeronautics for an all-weather attack aircraft for Navy long-range interdiction missions and with STOL capability for Marine close air support. It was to replace the piston-engined Douglas A-1 Skyraider. The requirement allowed one or two engines, either turbojet or turboprop. The winning proposal from Grumman used two Pratt & Whitney J52 turbojet engines. The Intruder was the first Navy aircraft with an integrated airframe and weapons system. Operated by a crew of two in a side-by-side seating configuration, the workload was divided between the pilot and weapons officer (bombardier/navigator (BN)). In addition to conventional munitions, it could also carry nuclear weapons, which would be delivered using toss bombing techniques. On 19 April 1960, the first prototype made its maiden flight. The A-6 was in service with the United States Navy and Marine Corps between 1963 and 1997, multiple variants of the type being introduced during this time. From the A-6, a specialized electronic warfare derivative, the EA-6B Prowler, was developed as well as the KA-6D tanker version. It was deployed during various overseas conflicts, including the Vietnam War and the Gulf War. The A-6 was intended to be superseded by the McDonnell Douglas A-12 Avenger II, but this program was ultimately canceled due to cost overruns. Thus, when the A-6E was scheduled for retirement, its precision strike mission was initially taken over by the Grumman F-14 Tomcat equipped with a LANTIRN pod. Development Background As a result of the fair-weather limitation of the propeller-driven Skyraider in the Korean War and the advent of turbine engines, the United States Navy issued preliminary requirements in 1955 for an all-weather carrier-based attack aircraft. The U.S. Navy published an operational requirement document for it in October 1956. It released a request for proposals (RFP) in February 1957. This request called for a 'close air support attack bomber capable of hitting the enemy at any time'. Aviation authors Bill Gunston and Peter Gilchrist observe that this specification was shaped by the service's Korean War experiences, during which air support had been frequently unavailable unless fair weather conditions were present. In response to the RFP, a total of eleven design proposals were submitted by eight different companies, including Bell, Boeing, Douglas, Grumman, Lockheed, Martin, North American, and Vought. Grumman's submission was internally designated as the Type G-128. Following evaluation of the bids, the U.S. Navy announced the selection of Grumman on 2 January 1958. The company was awarded a contract for the development of their submission, which had been re-designated A2F-1, in February 1958. Grumman's design team was led by Robert Nafis and Lawrence Mead, Jr. Mead later played a lead role in the design of the Lunar Excursion Module and the Grumman F-14 Tomcat. The team was spread between two sites, the company's manufacturing plant as Bethpage and the testing facilities at Naval Weapons Industrial Reserve Plant, Calverton. During September 1959, the design was approved by the Mock-Up Review Board. The A2F-1 design incorporated several cutting-edge features for the era. 
In the early 1960s, it was novel for a fighter-sized aircraft to have sophisticated avionics that used multiple computers. This design experience was taken into consideration by NASA in its November 1962 decision to choose Grumman over other companies like General Dynamics-Convair (the F-111 had computerized avionics capabilities comparable to the A-6, but did not fly until 1964) to build the Lunar Excursion Module, which was a small-sized spacecraft with two onboard computers. Into flight The first prototype YA2F-1, lacking radar and the navigational and attack avionics, made its first flight on 19 April 1960, with the second prototype flying on 28 July 1960. The test program required to develop the aircraft took a long time. The very advanced navigation and attack equipment required extensive development, and changes had to be made to correct aerodynamic deficiencies and remove unwanted features. Extending the air brakes, which were mounted on the rear fuselage, changed the downwash at the horizontal tailplane, which overloaded its actuator, so the tailplane was moved rearwards. Later evaluation of the aircraft showed that the airbrakes were not effective enough in controlling the speed of the aircraft and they were moved to the wing-tips. Early production aircraft were fitted with both the fuselage and wingtip air brakes, although the fuselage-mounted ones were soon disabled, and were removed from later aircraft. The trailing edge of each wing-tip split to form a much more effective speed-brake which projected above and below the wing when extended. The rudder needed a wider chord at its base to give greater exposed area to assist spin recovery. A major difference between the first six production aircraft and subsequent aircraft was the jet nozzles; close air support by the Marine Corps required STOL performance to operate from forward airstrips. Jet deflection using tilting tailpipes was proposed. The performance benefits from varying the angle were not worthwhile, whether operating from short strips or carriers, and the tailpipes were fixed at a 7-degree downward angle. Further development During February 1963, the A-6 was introduced to service with the US Navy; at this point, the type was, according to Gunston and Gilchrist, "the first genuinely all-weather attack bomber in history". However, early operating experience showed that the aircraft imposed very high maintenance demands, particularly in the Asian theatre of operations, and serviceability figures were also low. In response, the Naval Avionics Lab launched a substantial and lengthy program to improve both the reliability and performance of the A-6's avionics suite. The successful performance of the A-6 in operations following these improvements ended proposals to produce follow-on models that featured downgraded avionics. Various specialized variants of the A-6 were developed, often in response to urgent military requirements raised during the Vietnam War. The A-6C, a dedicated interdictor, was one such model, as was the KA-6D, a buddy store-equipped aerial refuelling tanker. Perhaps the most complex variant was the EA-6B Prowler, a specialized electronic warfare derivative. The last variant to be produced was the A-6E, first introduced in 1972; it featured extensive avionics improvements, including the new APQ-148 multimode radar, along with minor airframe refinements. The last A-6E was delivered in 1992. During the 1980s, a further model, designated A-6F, was being planned. 
Intended to feature the General Electric F404 turbofan engine, as well as various avionics and airframe improvements, this variant was cancelled under the presumption that the in-development McDonnell Douglas A-12 Avenger II would be entering production before long. Instead, a life-extension program involving the re-winging of existing A-6E aircraft was undertaken; initially a metal wing had been used before a graphite-epoxy composite wing was developed during the late 1980s. Other improvements were introduced to the fleet around this time, including GPS receivers, new computers and radar sets, more efficient J-52-409 engines, as well as increased compatibility with various additional missiles. Design The Grumman A-6 Intruder is a two-seat twin-engined monoplane, equipped to perform carrier-based attack missions regardless of prevailing weather or light conditions. The cockpit used an unusual double pane windscreen and side-by-side seating arrangement in which the pilot sat in the left seat, while the bombardier/navigator (BN) sat to the right and slightly below to give the pilot an adequate view on that side. In addition to a radar display for the BN, a unique instrumentation feature for the pilot was a cathode ray tube screen that was known as the Vertical Display Indicator (VDI). This display provided a synthetic representation of the world in front of the aircraft, along with steering cues provided by the BN, enabling head-down navigation and attack at night and in all weather conditions. The A-6's wing was relatively efficient at subsonic speeds, particularly when compared to supersonic fighters such as the McDonnell Douglas F-4 Phantom II, which are also limited to subsonic speeds when carrying a payload of bombs. The wing was also designed to provide a favorable level of maneuverability even while carrying a sizable bomb load. A very similar wing would be put on pivots on Grumman's later supersonic swing-wing Grumman F-14 Tomcat, as well as similar landing gear. For its day, the Intruder had sophisticated avionics, with a high degree of integration. To aid in identifying and isolating equipment malfunctions, the aircraft was provided with automatic diagnostic systems, some of the earliest computer-based analytic equipment developed for aircraft. These were known as Basic Automated Checkout Equipment, or BACE (pronounced "base"). There were two levels, known as "Line BACE" to identify specific malfunctioning systems in the aircraft, while in the hangar or on the flight line; and "Shop BACE", to exercise and analyze individual malfunctioning systems in the maintenance shop. This equipment was manufactured by Litton Industries. Together, the BACE systems greatly reduced the Maintenance Man-Hours per Flight Hour, a key index of the cost and effort needed to keep military aircraft operating. The Intruder was equipped to carry nuclear weapons (B43, B57, B61) which would have been delivered using semi-automated toss bombing. Operational history Entering service and Vietnam War The Intruder received a new standardized US DOD designation of A-6A in the Autumn of 1962, and entered squadron service in February 1963. The A-6 became both the U.S. Navy's and U.S. Marine Corps's principal medium and all-weather/night attack aircraft from the mid-1960s through the 1990s and as an aerial tanker either in the dedicated KA-6D version or by use of a buddy store (D-704). 
Whereas the A-6 fulfilled the USN and USMC all-weather ground-attack/strike mission role, this mission in the USAF was served by the Republic F-105 Thunderchief and later the F-111, the latter of which also saw its earlier F-111A variants converted to radar jammers as the EF-111 Raven, analogous to the USN and USMC EA-6B Prowler. A-6 Intruders first saw action during the Vietnam War, where the type was used extensively against targets in Vietnam. The aircraft's long range and heavy payload, coupled with its ability to fly in all weather, made it invaluable during the war. However, its typical mission profile of flying low to deliver its payload made it especially vulnerable to anti-aircraft fire, and in the eight years the Intruder was used during the Vietnam War, the U.S. Navy and U.S. Marine Corps lost a total of 84 A-6 aircraft of various series. The first loss occurred on 14 July 1965 when an Intruder from VA-75 operating from , flown by LT Donald Boecker and LT Donald Eaton, commenced a dive on a target near Laos. An explosion under the starboard wing damaged the starboard engine, causing the aircraft to catch fire and the hydraulics to fail. Seconds later the port engine failed, the controls froze, and the two crewmen ejected. Both crewmen survived. Of the 84 Intruders lost to all causes during the war, ten were shot down by surface-to-air missiles (SAMs), two were shot down by MiGs, 16 were lost to operational causes, and 56 were lost to conventional ground fire and AAA. The last Intruder to be lost during the war was from VA-35, flown by LT C. M. Graf and LT S. H. Hatfield, operating from ; they were shot down by ground fire on 24 January 1973 while providing close air support. The airmen ejected and were rescued by a Navy helicopter. Twenty U.S. Navy aircraft carriers rotated through the waters of Southeast Asia, providing air strikes, from the early 1960s through the early 1970s. Nine of those carriers lost A-6 Intruders: lost 11, lost eight, lost six, lost two, USS Independence lost four, lost 14, lost three, lost eight, and USS America lost two. Although capable of embarking aboard aircraft carriers, most U.S. Marine Corps A-6 Intruders were shore based in South Vietnam at Chu Lai and Da Nang and in Nam Phong, Thailand. Lebanon and later action A-6 Intruders were later used in support of other operations, such as the Multinational Force in Lebanon in 1983. On 4 December, one LTV A-7 Corsair II and one Intruder were downed by Syrian missiles. The Intruder's pilot, Lieutenant Mark Lange, and bombardier/navigator Lieutenant Robert "Bobby" Goodman ejected immediately before the crash; Lange died of his injuries while Goodman was captured and taken by the Syrians to Damascus where he was released on 3 January 1984. Later in the 1980s, two Naval Reserve A-7 Corsair II light attack squadrons, VA-205 and VA-304, were reconstituted as medium attack squadrons with the A-6E at NAS Atlanta, Georgia and NAS Alameda, California, respectively. Intruders also saw action in April 1986 operating from the aircraft carriers USS America and Coral Sea during the bombing of Libya (Operation El Dorado Canyon). The squadrons involved were VA-34 "Blue Blasters" (from USS America) and VA-55 "Warhorses" (from USS Coral Sea). During the Gulf War in 1991, U.S. Navy and U.S. Marine Corps A-6s flew more than 4,700 combat sorties, providing close air support, destroying enemy air defenses, attacking Iraqi naval units, and hitting strategic targets. They were also the U.S. 
Navy's primary strike platform for delivering laser-guided bombs. The U.S. Navy operated them from the aircraft carriers , , USS Midway, USS Ranger, USS America and , while U.S. Marine Corps A-6s operated ashore, primarily from Shaikh Isa Air Base in Bahrain. Three A-6s were shot down in combat by SAMs and AAA. The Intruder's large blunt nose and slender tail inspired a number of nicknames, including "Double Ugly", "The Mighty Alpha Six", "Iron Tadpole" and "Drumstick". Following the Gulf War, Intruders were used to patrol the no-fly zone in Iraq and provided air support for U.S. Marines during Operation Restore Hope in Somalia. The last A-6E Intruder left U.S. Marine Corps service on 28 April 1993. Navy A-6s saw further duty over Bosnia in 1994. On 4 June 1996, during a RIMPAC exercise, a US Navy A-6E towing a target to train Japanese Navy air-defense crews was mistakenly engaged and shot down by the Japanese destroyer JS Yūgiri with its Phalanx CIWS gun. Both crew members ejected and were recovered. Retirement Despite the production of new airframes in the 164XXX Bureau Number (BuNo) series just before and after the Gulf War, augmented by a rewinging program of older airframes, the A-6E and KA-6D were quickly phased out of service in the mid-1990s in a U.S. Navy cost-cutting move driven by the Office of the Secretary of Defense to reduce the number of different type/model/series (T/M/S) of aircraft in carrier air wings and U.S. Marine aircraft groups. The A-6 was intended to be replaced by the McDonnell Douglas A-12 Avenger II, but that program was canceled due to cost overruns. The Intruder remained in service for a few more years before being retired in favor of the LANTIRN-equipped F-14D Tomcat, which was in turn replaced by the F/A-18E/F Super Hornet in the U.S. Navy and the twin-seat F/A-18D Hornet in the U.S. Marine Corps. During the 2010s, the Unmanned Carrier-Launched Airborne Surveillance and Strike program was at one point intended to produce an unmanned aerial vehicle (UAV) successor for the Intruder's long-distance strike role, but the initiative has since shifted its priorities towards the tanker mission instead. The last Intruders were retired on 28 February 1997. Many in the US defense establishment in general, and Naval Aviation in particular, questioned the wisdom of a shift to a shorter-range carrier-based strike force, as represented by the Hornet and Super Hornet, compared to older-generation aircraft such as the Intruder and Tomcat. However, the availability of USAF Boeing KC-135 Stratotanker and McDonnell Douglas KC-10 Extender tankers modified to accommodate USN, USMC and NATO tactical aircraft in all recent conflicts was considered by certain senior decision makers in the Department of Defense to put a lesser premium on organic aerial refueling capability in the U.S. Navy's carrier air wings and on self-contained range among carrier-based strike aircraft. Although the Intruder could not match the F-14's or the F/A-18's speed or air-combat capability, the A-6's range and load-carrying ability remained unmatched by newer aircraft in the fleet. At the time of retirement, several retired A-6 airframes were awaiting rewinging at the Northrop Grumman facility at St. Augustine Airport, Florida; these were later sunk off the coast of St. Johns County, Florida to form a fish haven named "Intruder Reef". 
Surviving aircraft fitted with the new wings, and later production aircraft (i.e., BuNo 164XXX series) not earmarked for museum or non-flying static display were stored at the AMARG storage center at Davis-Monthan Air Force Base, Arizona. Variants YA-6A and A-6A The eight prototypes and pre-production Intruder aircraft were sometimes referred to with the YA-6A designation. These were used in the development and testing of the A-6A Intruder. The initial version of the Intruder was built around the complex and advanced DIANE (Digital Integrated Attack/Navigation Equipment) suite, intended to provide a high degree of bombing accuracy even at night and in poor weather. DIANE consisted of multiple radar systems: the Norden Systems AN/APQ-92 search radar replacing the YA-6A's AN/APQ-88, and a separate AN/APG-46 for tracking, the AN/APN-141 radar altimeter, and an AN/APN-122 Doppler navigational radar to provide position updates to the Litton AN/ASN-31 inertial navigation system. An air-data computer and the AN/ASQ-61 ballistics computer integrated the radar information for the bombardier/navigator in the right-hand seat. TACAN and ADF systems were also provided for navigation. When it worked, DIANE was perhaps the most capable navigation/attack system of its era, giving the Intruder the ability to fly and fight in even very poor conditions (particularly important over Vietnam and Thailand during the Vietnam War). It suffered numerous teething problems, and it was several years before its reliability was established. Total A-6A production was 480, excluding the prototype and pre-production aircraft. A total of 47 A-6As were converted to other variants. A-6B To provide U.S. Navy squadrons with a defense suppression aircraft to attack enemy antiaircraft defense and SAM missile systems, a mission dubbed "Iron Hand" by the U.S. Navy, 19 A-6As were converted to A-6B version during 1967 to 1970. The A-6B had many of its standard attack systems removed in favor of specialized equipment to detect and track enemy radar sites and to guide AGM-45 Shrike and AGM-78 Standard anti-radiation missiles, with AN/APQ-103 radar replacing earlier AN/APQ-92 used in the A-6A, plus AN/APN-153 navigational radar replacing earlier AN/APN-122, again used in the A-6A. Between 1968 and 1977, several Intruder squadrons operated A-6Bs alongside their regular A-6As. Five were lost to all causes, and the survivors were later converted to A-6E standard in the late 1970s. A-6C 12 A-6As were converted in 1970 to A-6C standard for night attack missions against the Ho Chi Minh trail in Vietnam. They were fitted with a "Trails/Roads Interdiction Multi-sensor" (TRIM) pod in the fuselage for FLIR and low-light TV cameras, as well as a "Black Crow" engine ignition detection system. Radars were also upgraded, with the AN/APQ-112 replacing the earlier AN/APQ-103, and an AN/APN-186 navigational radar replacing the earlier AN/APN-153. A vastly improved Sperry Corporation AN/APQ-127 radar replaced the AN/APG-46 fire control radar. One of these aircraft was lost in combat; the others were later refitted to A-6E standard after the war. KA-6D To replace both the KA-3B and EA-3B Skywarrior during the early 1970s, 78 A-6As and 12 A-6Es were converted for use as tanker aircraft, providing aerial refueling support to other strike aircraft. The DIANE system was removed and an internal refueling system was added, sometimes supplemented by a D-704 refueling pod on the centerline pylon. 
The KA-6D theoretically could be used in the day/visual bombing role, but it apparently never was, with the standard load-out being four fuel tanks. Because it was based on a tactical aircraft platform, the KA-6D provided a capability for mission tanking, the ability to keep up with strike aircraft and refuel them in the course of a mission. A few KA-6Ds went to sea with each Intruder squadron. Their operation was integrated into the Intruder squadrons, as A-6 crew were trained to operate both aircraft and the NATOPS covered both the A6 and KA-6D. These aircraft were always in short supply, and frequently were "cross decked" from a returning carrier to an outgoing one. Many KA-6 airframes had severe G restrictions, as well as fuselage stretching due to almost continual use and high number of catapults and traps. The retirement of the aircraft left a gap in US Navy and Marine Corps refueling tanker capability. The Navy Lockheed S-3 Viking filled that gap until the new F/A-18E/F Super Hornet became operational. A-6E The definitive attack version of the Intruder with vastly upgraded navigation and attack systems, introduced in 1970 and first deployed on 9 December 1971. The earlier separate search and track (fire control) radars of the A-6A/B/C were replaced by a single Norden AN/APQ-148 multi-mode radar, and onboard computers with a more sophisticated (and generally more reliable) IC based system, as opposed to the A-6A's DIANE discrete transistor-based technology. A new AN/ASN-92 inertial navigation system was added, along with the CAINS (Carrier Aircraft Inertial Navigation System), for greater navigation accuracy. Beginning in 1979, all A-6Es were fitted with the AN/AAS-33 DRS (Detecting and Ranging Set), part of the "Target Recognition and Attack Multi-Sensor" (TRAM) system, a small, gyroscopically stabilized turret, mounted under the nose of the aircraft, containing a FLIR boresighted with a laser spot-tracker/designator and IBM AN/ASQ-155 computer. TRAM was matched with a new Norden AN/APQ-156 radar. The BN could use both TRAM imagery and radar data for extremely accurate attacks, or use the TRAM sensors alone to attack without using the Intruder's radar (which might warn the target). TRAM also allowed the Intruder to autonomously designate and drop laser-guided bombs. In addition, the Intruder used Airborne Moving Target Indicator (AMTI), which allowed the aircraft to track a moving target (such as a tank or truck) and drop ordnance on it even though the target was moving. Also, the computer system allowed the use of Offset Aim Point (OAP), giving the crew the ability to drop on a target unseen on radar by noting coordinates of a known target nearby and entering the offset range and bearing to the unseen target. In the 1980s, the A-6E TRAM aircraft were converted to the A-6E WCSI (Weapons Control System Improvement) version to add additional weapons capability. This added the ability to carry and target some of the first generation precision guided weapons, like the AGM-84 Harpoon missile, and AGM-123 Skipper. The WCSI aircraft was eventually modified to have a limited capability to use the AGM-84E SLAM standoff land attack missile. Since the Harpoon and SLAM missiles had common communication interfaces, WCSI aircraft could carry and fire SLAM missiles, but needed a nearby A-6E SWIP to guide them to target. 
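The Offset Aim Point technique mentioned above is, at its core, a range-and-bearing offset from a radar-visible reference point. The sketch below (a flat-earth approximation with hypothetical names; an illustration of the geometry, not the Intruder's actual ballistics computer logic) shows the basic arithmetic.

```python
import math

def offset_aim_point(ref_north_m, ref_east_m, offset_range_m, offset_bearing_deg):
    """Compute an unseen target's position from a radar-visible reference point.

    Uses a simple flat-earth approximation in local north/east metres;
    the bearing is measured clockwise from north, as on a compass.
    """
    bearing = math.radians(offset_bearing_deg)
    target_north = ref_north_m + offset_range_m * math.cos(bearing)
    target_east = ref_east_m + offset_range_m * math.sin(bearing)
    return target_north, target_east

# Example: a target 2,000 m from the reference point on a bearing of 045 degrees.
print(offset_aim_point(0.0, 0.0, 2000.0, 45.0))  # approx (1414.2, 1414.2)
```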
In the early 1990s, some surviving A-6Es were upgraded under SWIP (Systems/Weapons Improvement Program) to enable them to use the latest precision-guided munitions, including AGM-65 Mavericks, AGM-84E SLAMs, AGM-62 Walleyes and the AGM-88 HARM anti-radiation missile as well as additional capability with the AGM-84 Harpoon. A co-processor was added to the AN/ASQ-155 computer system to implement the needed MIL-STD-1553 digital interfaces to the pylons, as well as an additional control panel. After a series of wing-fatigue problems, about 85% of the fleet was fitted with new graphite/epoxy/titanium/aluminum composite wings. The new wings proved to be a mixed blessing, as a composite wing is stiffer and transmits more force to the fuselage, accelerating fatigue in the fuselage. In 1990, the decision was made to terminate production of the A-6. Through the 1970s and 1980s, the A-6 had been in low-rate production of four or five new aircraft a year, enough to replace mostly accidental losses. The final production order was for 20 aircraft of the SWIP configuration with composite wings, delivered in 1993. A-6E models totaled 445 aircraft, about 240 of which were converted from earlier A-6A/B/C models. A-6F and A-6G An advanced A-6F Intruder II was proposed in the mid-1980s that would have replaced the Intruder's elderly Pratt & Whitney J52 turbojets with non-afterburning versions of the General Electric F404 turbofan used in the F/A-18 Hornet, providing substantial improvements in both power and fuel economy. The A-6F would have had totally new avionics, including a Norden AN/APQ-173 synthetic aperture radar and multi-function cockpit displays – the APQ-173 would have given the Intruder air-to-air capacity with provision for the AIM-120 AMRAAM. Two additional wing pylons were added, for a total of seven stations. Although five development aircraft were built, the U.S. Navy ultimately chose not to authorize the A-6F, preferring to concentrate on the A-12 Avenger II. This left the service in a quandary when the A-12 was canceled in 1991. Grumman proposed a cheaper alternative in the A-6G, which had most of the A-6F's advanced electronics, but retained the existing engines. This, too, was canceled. Electronic warfare versions An electronic warfare (EW)/Electronic countermeasures (ECW) version of the Intruder was developed early in the aircraft's life for the USMC, which needed a new ECM platform to replace its elderly F3D-2Q Skyknights. An EW version of the Intruder, initially designated A2F-1H (rather than A2F-1Q, as "Q" was being split to relegate it to passive electronic warfare and "H" to active) and subsequently redesignated EA-6A, first flew on 26 April 1963. It had a Bunker-Ramo AN/ALQ-86 ECM suite, with most electronics contained on the walnut-shaped pod atop the vertical fin. They were equipped with AN/APQ-129 fire control radar, and theoretically capable of firing the AGM-45 Shrike anti-radiation missile, although they were apparently not used in that role. The navigational radar is AN/APN-153. Only 28 EA-6As were built (two prototypes, 15 new-build, and 11 conversions from A-6As), serving with U.S. Marine Corps squadrons in Vietnam. It was phased out of front line service in the mid-1970s, remaining in use in reserve VMCJ units with the USMC and then the United States Navy in specialized VAQ units, primarily for training purposes. The last EA-6A had been retired by 1993. 
A much more highly specialized derivative of the Intruder was the EA-6B Prowler, having a "stretched" airframe with two additional systems operators, and more comprehensive systems for the electronic warfare and SEAD roles. A derivative of AN/APQ-156, AN/APS-130 was installed as the main radar for EA-6B. The navigational radar was upgraded to AN/APS-133 from the AN/APN-153 on EA-6A. In total, 170 were produced. The EA-6B took on the duties of the U.S. Air Force EF-111 Raven when the DoD decided to let the U.S. Navy handle all electronic warfare missions. The Prowler has been replaced by the EA-18G Growler in the U.S. Navy and was retired from USMC service in 2019. Variant list YA2F-1 Pre-production aircraft, eight built with the first four with rotating jet exhaust pipes, redesignated YA-6A in 1962. A2F-1 First production variant with fixed tailpipe, 484 built, redesignated A-6A in 1962. YA2F-1H Prototype electronic warfare variant, one modified from A2F-1, redesignated YEA-6A in 1962. A2F-1H Electronic warfare variant of the A2F-1 redesignated EA-6A in 1962 YA-6A Pre-production aircraft redesignated from YA2F-1 in 1962. A-6A First production variant redesignated from A2F-1 in 1962. YEA-6A One YA2F-1 electronic warfare variant prototype redesignated in 1962. EA-6A Electronic warfare variant redesignated from A2F-1H, had a redesigned fin and rudder and addition of an ECM radome, able to carry underwing ECM pods, three YA-6A and four A-6As converted and 21 built. NA-6A The redesignation of three YA-6As and three A-6As. The six aircraft were modified for special tests. NEA-6A One EA-6A aircraft was modified for special test purposes. TA-6A Proposed trainer variant with three-seat, not built. A-6B Variant fitted with avionics for the suppression of enemy air defenses (SEAD), 19 conversions from A-6A. EA-6B Prowler Electronic warfare variant of the A-6A with longer fuselage for four crew. YEA-6B The designation of two EA-6B prototypes, which were modified for special test purposes. A-6C A-6A conversion for low-level attack role with electro-optical sensors, twelve converted. KA-6D A-6A conversion for flight-refueling, 58 converted. A-6E A-6A with improved electronics. A-6E TRAM A-6E upgraded with the AN/AAS-33 Target Recognition Attack Multi-Sensor or "TRAM" pod. Capable of dropping Laser Guided Bombs without a targeting pod. Can also carry the AGM-84 Harpoon. A-6E SWIP A-6E TRAM upgraded with the AN/ALR-67 RWR and ability to carry the AGM-88 HARM, AGM-62 Walleye, AGM-84E SLAM and AGM-65 Maverick. Several versions had new composite wings. A-6F Intruder II Advanced version with updated electronics and General Electric F404 turbofans; only 5 built. A-6G Proposed cheaper alternative to the A-6F, with its advanced electronics, but existing J52 turbojets. G-128-12 Unbuilt single-seat A-6 based design proposal for the VA(L) competition for A-4 Skyhawk replacement based on existing design. Contract ultimately awarded to the LTV A-7 Corsair II. Operators United States Navy (1963–1997) United States Marine Corps (1963–1993) Aircraft on display A-6A 147867 – Alleghany Arms & Armory Museum, Smethport, Pennsylvania 151826 - Battleship Memorial Park, Mobile, Alabama; displayed as a KA-6D. KA-6D 149482 - NAS Whidbey, Oak Harbor, Washington; displayed as an A-6E. 
A-6E 151782 – USS Midway Museum, San Diego, California 152599 – Patriots Point Naval & Maritime Museum, Mount Pleasant, South Carolina 152603 – Richmond Municipal Airport, Richmond, Indiana 152907 – NAS Whidbey, Oak Harbor, Washington 152923 – Norfolk Naval Station/Chambers Field (former NAS Norfolk), Norfolk, Virginia 152935 – Empire State Aerosciences Museum, Glenville, New York 152936 – United States Naval Museum of Armament and Technology, NCC China Lake (North), Ridgecrest, California 154131 – Walker Field Colorado Park, Grand Junction, Colorado 154162 – Palm Springs Air Museum, Palm Springs, California 154167 – Steven F. Udvar-Hazy Center, NASM, Washington, D.C. 154170 – Flying Leatherneck Aviation Museum, MCAS Miramar, San Diego, California 154171 – Estrella Warbird Museum, Paso Robles, California 155595 – Pacific Coast Air Museum, Santa Rosa, California 155610 – National Naval Aviation Museum, NAS Pensacola, Pensacola, Florida 155627 – NAS Fallon, Fallon, Nevada 155629 – Hickory Aviation Museum, Hickory NC 155644 – Yanks Air Museum, Chino, California 155648 – Aviation Wing of the Marietta Museum of History, Dobbins ARB (formerly Atlanta NAS), Atlanta, Georgia 155661 – Camp Blanding Museum and Memorial Park, Camp Blanding, Florida 155713 – Pima Air & Space Museum (adjacent to Davis-Monthan AFB), Tucson, Arizona 156997 – Patuxent River Naval Air Museum, NAS Patuxent River, Lexington Park, Maryland 157001 – Naval Inventory Control Point, Philadelphia, Pennsylvania 157024 – Defense General Supply Center, Richmond, Virginia 158532 – USS Lexington Museum, Corpus Christi, Texas 158794 – Museum of Flight, Seattle, Washington 159567 – Naval Surface Warfare Center Dahlgren Division, Dahlgren, Virginia 159568 – Patuxent River NAS, Lexington Park, Maryland 159901 – NAF El Centro, El Centro, California 160995 – Yanks Air Museum, Chino, California 161676 – Pennsylvania College of Technology, Williamsport, Pennsylvania 162182 – Valiant Air Command Warbird Museum, Space Coast Regional Airport, Titusville, Florida 162195 – San Diego Aerospace Museum, San Diego, California 162206 – Oregon Air and Space Museum, Eugene, Oregon 164378 – Eastern Carolina Aviation Exhibit, Havelock, North Carolina 164384 – Grumman Memorial Park, Long Island, New York A-6F 162184 – Cradle of Aviation Museum, Garden City, New York 162185 – Intrepid Sea-Air-Space Museum, New York City, New York Specifications (A-6E) Notable appearances in media See also References Notes Citations Bibliography Andrade, John. U.S. Military Aircraft Designations and Serials since 1909. Leicester, UK: Midland Counties Publications, 1979, . Buttler, Tony (2010). American Secret Projects: Bombers, Attack and Anti-Submarine Aircraft 1945 to 1974. Hinckley, England: Midland Publishing. Donald, David and Jon Lake. Encyclopedia of World Military Aircraft. London: Aerospace Publishing, Single Volume edition, 1996. . Dorr, Robert F. Grumman A-6 Intruder. London: Osprey Publishing, 1987. . Dorr, Robert F. "Grumman A-6 Intruder& EA-6 Prowler". World Air Power Journal, Spring 1983, Volume 12. pp. 34–95. . . Dorr, Robert F. "Intruders and Prowlers". Air International, November 1986, Vol. 31, No. 5. pp. 227–236, 250–252. . Gunston, Bill and Mike Spick. Modern Air Combat. New York: Crescent Books, 1983. . Gunston, Bill and Peter Gilchrist. Jet Bombers: From the Messerschmitt Me 262 to the Stealth B-2. Osprey, 1993. . Hildebrandt, Erik. 1996–1997. "Burial at Sea: Navy's A-6 Intruder is Retiring, and What Could be a More Fitting End?" 
Air and Space Smithsonian. December 1996 – January 1997, Volume 11 (5). Pages 64–70. Also: "Burial at Sea." Hobson, Chris. Vietnam Air Losses, USAF/USN/USMC, Fixed-Wing Aircraft Losses in Southeast Asia, 1961–1973. North Branch, Minnesota: Specialty Press, 2001. Jenkins, Dennis R. Grumman A-6 Intruder. Warbird Tech. 33. North Branch, Minnesota: Specialty Press, 2002. Miska, Kurt H. "Grumman A-6A/E Intruder; EA-6A; EA-6B Prowler (Aircraft in Profile number 252)". Aircraft in Profile, Volume 14. Windsor, Berkshire, UK: Profile Publications Ltd., 1974, pp. 137–160. Morgan, Mark and Rick Morgan. Intruder: The Operational History of Grumman's A-6. Atglen, Pennsylvania: Schiffer Publishing, Ltd., 2004. Morgan, Rick. A-6 Intruder Units of the Vietnam War (Osprey Combat Aircraft #93). Oxford, UK: Osprey Publishing Limited, 2012. Reardon, Carol. Launch the Intruders. University of Kansas Press, 2005. Taylor, John W.R. "Grumman A-6 Intruder". Combat Aircraft of the World from 1909 to the Present. New York: G.P. Putnam's Sons, 1969. Taylor, John W. R. Jane's All The World's Aircraft 1982–83. London: Jane's Yearbooks, 1982. Winchester, Jim, ed. "Grumman A-6 Intruder". Military Aircraft of the Cold War (The Aviation Factfile). London: Grange Books plc, 2006. External links A-6E Intruder Intruder Association A-6 page on globalsecurity.org Joe Baugher's website on the Grumman A-6 Intruder
15716827
https://en.wikipedia.org/wiki/Anonymous%20%28hacker%20group%29
Anonymous (hacker group)
Anonymous is a decentralized international activist- and hacktivist collective and movement primarily known for its various cyberattacks against several governments, government institutions and government agencies, corporations and the Church of Scientology. Anonymous originated in 2003 on the imageboard 4chan representing the concept of many online and offline community users simultaneously existing as an "anarchic", digitized "global brain" or "hivemind". Anonymous members (known as anons) can sometimes be distinguished in public by the wearing of Guy Fawkes masks in the style portrayed in the graphic novel and film V for Vendetta. Some anons also opt to mask their voices through voice changers or text-to-speech programs. In its early form, the concept was adopted by a decentralized online community acting anonymously in a coordinated manner, usually toward a loosely self-agreed goal and primarily focused on entertainment (or lulz). Beginning with Project Chanology in 2008—a series of protests, pranks, and hacks targeting the Church of Scientology—the Anonymous collective became increasingly associated with collaborative hacktivism on a number of issues internationally. Individuals claiming to align themselves with Anonymous undertook protests and other actions (including direct action) in retaliation against copyright-focused campaigns by motion picture and recording industry trade associations. Later targets of Anonymous hacktivism included government agencies of the United States, Israel, Tunisia, Uganda and others; the Islamic State of Iraq and the Levant; child pornography sites; copyright protection agencies; the Westboro Baptist Church; and corporations such as PayPal, MasterCard, Visa, and Sony. Anons have publicly supported WikiLeaks and the Occupy movement. Related groups LulzSec and Operation AntiSec carried out cyberattacks on U.S. government agencies, media, companies, military contractors, military personnel, and police officers, resulting in the attention of law enforcement to the groups' activities. Dozens of people have been arrested for involvement in Anonymous cyberattacks in countries including the United States, the United Kingdom, Australia, the Netherlands, Spain, India, and Turkey. Evaluations of the group's actions and effectiveness vary widely. Supporters have called the group "freedom fighters" and digital Robin Hoods, while critics have described them as "a cyber lynch-mob" or "cyber terrorists". In 2012, Time called Anonymous one of the "100 most influential people" in the world. Anonymous' media profile diminished by 2018, but the group re-emerged in 2020 to support the George Floyd protests and other causes. Philosophy Internal dissent is also a regular feature of the group. A website associated with the group describes it as "an Internet gathering" with "a very loose and decentralized command structure that operates on ideas rather than directives". Gabriella Coleman writes of the group: "In some ways, it may be impossible to gauge the intent and motive of thousands of participants, many of who don't even bother to leave a trace of their thoughts, motivations, and reactions. Among those that do, opinions vary considerably." Broadly speaking, Anons oppose Internet censorship and control and the majority of their actions target governments, organizations, and corporations that they accuse of censorship. Anons were early supporters of the global Occupy movement and the Arab Spring. 
Since 2008, a frequent subject of disagreement within Anonymous is whether members should focus on pranking and entertainment or more serious (and, in some cases, political) activism. Because Anonymous has no leadership, no action can be attributed to the membership as a whole. Parmy Olson and others have criticized media coverage that presents the group as well-organized or homogeneous; Olson writes, "There was no single leader pulling the levers, but a few organizational minds that sometimes pooled together to start planning a stunt." Some members protest using legal means, while others employ illegal measures such as DDoS attacks and hacking. Membership is open to anyone who wishes to state they are a member of the collective; British journalist Carole Cadwalladr of The Observer compared the group's decentralized structure to that of al-Qaeda: "If you believe in Anonymous, and call yourself Anonymous, you are Anonymous." Olson, who formerly described Anonymous as a "brand", stated in 2012 that she now characterized it as a "movement" rather than a group: "anyone can be part of it. It is a crowd of people, a nebulous crowd of people, working together and doing things together for various purposes." The group's few rules include not disclosing one's identity, not talking about the group, and not attacking media. Members commonly use the tagline "We are Anonymous. We are Legion. We do not forgive. We do not forget. Expect us." Brian Kelly writes that three of the group's key characteristics are "(1) an unrelenting moral stance on issues and rights, regardless of direct provocation; (2) a physical presence that accompanies online hacking activity; and (3) a distinctive brand." Journalists have commented that Anonymous' secrecy, fabrications, and media awareness pose an unusual challenge for reporting on the group's actions and motivations. Quinn Norton of Wired writes that: "Anons lie when they have no reason to lie. They weave vast fabrications as a form of performance. Then they tell the truth at unexpected and unfortunate times, sometimes destroying themselves in the process. They are unpredictable." Norton states that the difficulties in reporting on the group cause most writers, including herself, to focus on the "small groups of hackers who stole the limelight from a legion, defied their values, and crashed violently into the law" rather than "Anonymous’s sea of voices, all experimenting with new ways of being in the world". History 4chan raids (2003–2007) The name Anonymous itself is inspired by the perceived anonymity under which users post images and comments on the Internet. Usage of the term Anonymous in the sense of a shared identity began on imageboards, particularly the /b/ board of 4chan, dedicated to random content and to raiding other websites. A tag of Anonymous is assigned to visitors who leave comments without identifying the originator of the posted content. Users of imageboards sometimes jokingly acted as if Anonymous was a single individual. The concept of the Anonymous entity advanced in 2004 when an administrator on the 4chan image board activated a "Forced_Anon" protocol that signed all posts as Anonymous. As the popularity of imageboards increased, the idea of Anonymous as a collective of unnamed individuals became an Internet meme. Users of 4chan's /b/ board would occasionally join into mass pranks or raids. 
In a raid on July 12, 2006, for example, large numbers of 4chan readers invaded the Finnish social networking site Habbo Hotel with identical avatars; the avatars blocked regular Habbo members from accessing the digital hotel's pool, stating it was "closed due to fail and AIDS". Future LulzSec member Topiary became involved with the site at this time, inviting large audiences to listen to his prank phone calls via Skype. Due to the growing traffic on 4chan's board, users soon began to plot pranks off-site using Internet Relay Chat (IRC). These raids resulted in the first mainstream press story on Anonymous, a report by Fox station KTTV in Los Angeles, California in the U.S. The report called the group "hackers on steroids", "domestic terrorists", and an "Internet hate machine". Encyclopedia Dramatica (2004–present) Encyclopedia Dramatica was founded in 2004 by Sherrod DeGrippo, initially as a means of documenting gossip related to LiveJournal, but it quickly was adopted as a major platform by Anonymous for parody and other purposes. The not safe for work site celebrates a subversive "trolling culture", and documents Internet memes, culture, and events, such as mass pranks, trolling events, "raids", large-scale failures of Internet security, and criticism of Internet communities that are accused of self-censorship to gain prestige or positive coverage from traditional and established media outlets. Journalist Julian Dibbell described Encyclopædia Dramatica as the site "where the vast parallel universe of Anonymous in-jokes, catchphrases, and obsessions is lovingly annotated, and you will discover an elaborate trolling culture: Flamingly racist and misogynist content lurks throughout, all of it calculated to offend." The site also played a role in the anti-Scientology campaign of Project Chanology. On April 14, 2011, the original URL of the site was redirected to a new website named Oh Internet that bore little resemblance to Encyclopedia Dramatica. Parts of the ED community harshly criticized the changes. In response, Anonymous launched "Operation Save ED" to rescue and restore the site's content. The Web Ecology Project made a downloadable archive of former Encyclopedia Dramatica content. The site's reincarnation was initially hosted at encyclopediadramatica.ch on servers owned by Ryan Cleary, who later was arrested in relation to attacks by LulzSec against Sony. Project Chanology (2008) Anonymous first became associated with hacktivism in 2008 following a series of actions against the Church of Scientology known as Project Chanology. On January 15, 2008, the gossip blog Gawker posted a video in which celebrity Scientologist Tom Cruise praised the religion; and the Church responded with a cease-and-desist letter for violation of copyright. 4chan users organized a raid against the Church in retaliation, prank-calling its hotline, sending black faxes designed to waste ink cartridges, and launching DDoS attacks against its websites. The DDoS attacks were at first carried out with the Gigaloader and JMeter applications. Within a few days, these were supplanted by the Low Orbit Ion Cannon (LOIC), a network stress-testing application allowing users to flood a server with TCP or UDP packets. The LOIC soon became a signature weapon in the Anonymous arsenal; however, it would also lead to a number of arrests of less experienced Anons who failed to conceal their IP addresses. 
Some operators in Anonymous IRC channels incorrectly told or lied to new volunteers that using the LOIC carried no legal risk. During the DDoS attacks, a group of Anons uploaded a YouTube video in which a robotic voice speaks on behalf of Anonymous, telling the "leaders of Scientology" that "For the good of your followers, for the good of mankind—for the laughs—we shall expel you from the Internet." Within ten days, the video had attracted hundreds of thousands of views. On February 10, thousands of Anonymous joined simultaneous protests at Church of Scientology facilities around the world. Many protesters wore the stylized Guy Fawkes masks popularized by the graphic novel and film V for Vendetta, in which an anarchist revolutionary battles a totalitarian government; the masks soon became a popular symbol for Anonymous. In-person protests against the Church continued throughout the year, including "Operation Party Hard" on March 15 and "Operation Reconnect" on April 12. However, by mid-year, they were drawing far fewer protesters, and many of the organizers in IRC channels had begun to drift away from the project. Operation Payback (2010) By the start of 2009, Scientologists had stopped engaging with protesters and had improved online security, and actions against the group had largely ceased. A period of infighting followed between the politically engaged members (called "moralfags" in the parlance of 4chan) and those seeking to provoke for entertainment (trolls). By September 2010, the group had received little publicity for a year and faced a corresponding drop in member interest; its raids diminished greatly in size and moved largely off of IRC channels, organizing again from the chan boards, particularly /b/. In September 2010, however, Anons became aware of Aiplex Software, an Indian software company that contracted with film studios to launch DDoS attacks on websites used by copyright infringers, such as The Pirate Bay. Coordinating through IRC, Anons launched a DDoS attack on September 17 that shut down Aiplex's website for a day. Primarily using LOIC, the group then targeted the Recording Industry Association of America (RIAA) and the Motion Picture Association of America (MPAA), successfully bringing down both sites. On September 19, future LulzSec member Mustafa Al-Bassam (known as "Tflow") and other Anons hacked the website of Copyright Alliance, an anti-infringement group, and posted the name of the operation: "Payback Is A Bitch", or "Operation Payback" for short. Anons also issued a press release, stating: Anonymous is tired of corporate interests controlling the internet and silencing the people’s rights to spread information, but more importantly, the right to SHARE with one another. The RIAA and the MPAA feign to aid the artists and their cause; yet they do no such thing. In their eyes is not hope, only dollar signs. Anonymous will not stand this any longer. As IRC network operators were beginning to shut down networks involved in DDoS attacks, Anons organized a group of servers to host an independent IRC network, titled AnonOps. Operation Payback's targets rapidly expanded to include the British law firm ACS:Law, the Australian Federation Against Copyright Theft, the British nightclub Ministry of Sound, the Spanish copyright society Sociedad General de Autores y Editores, the U.S. Copyright Office, and the website of Gene Simmons of Kiss. By October 7, 2010, total downtime for all websites attacked during Operation Payback was 537.55 hours. 
In November 2010, the organization WikiLeaks began releasing hundreds of thousands of leaked U.S. diplomatic cables. In the face of legal threats against the organization by the U.S. government, Amazon.com booted WikiLeaks from its servers, and PayPal, MasterCard, and Visa cut off service to the organization. Operation Payback then expanded to include "Operation Avenge Assange", and Anons issued a press release declaring PayPal a target. Launching DDoS attacks with the LOIC, Anons quickly brought down the websites of the PayPal blog; PostFinance, a Swiss financial company denying service to WikiLeaks; EveryDNS, a web-hosting company that had also denied service; and the website of U.S. Senator Joe Lieberman, who had supported the push to cut off services. On December 8, Anons launched an attack against PayPal's main site. According to Topiary, who was in the command channel during the attack, the LOIC proved ineffective, and Anons were forced to rely on the botnets of two hackers for the attack, marshaling hijacked computers for a concentrated assault. Security researcher Sean-Paul Correll also reported that the "zombie computers" of involuntary botnets had provided 90% of the attack. Topiary states that he and other Anons then "lied a bit to the press to give it that sense of abundance", exaggerating the role of the grassroots membership. However, this account was disputed. The attacks brought down PayPal.com for an hour on December 8 and another brief period on December 9. Anonymous also disrupted the sites for Visa and MasterCard on December 8. Anons had announced an intention to bring down Amazon.com as well, but failed to do so, allegedly because of infighting with the hackers who controlled the botnets. PayPal estimated the damage to have cost the company US$5.5 million. It later provided the IP addresses of 1,000 of its attackers to the FBI, leading to at least 14 arrests. On Thursday, December 5, 2013, 13 of the PayPal 14 pleaded guilty to taking part in the attacks. 2011–2012 In the years following Operation Payback, targets of Anonymous protests, hacks, and DDoS attacks continued to diversify. Beginning in January 2011, Anons took a number of actions known initially as Operation Tunisia in support of Arab Spring movements. Tflow created a script that Tunisians could use to protect their web browsers from government surveillance, while fellow future LulzSec member Hector Xavier Monsegur (alias "Sabu") and others allegedly hijacked servers from a London web-hosting company to launch a DDoS attack on Tunisian government websites, taking them offline. Sabu also used a Tunisian volunteer's computer to hack the website of Prime Minister Mohamed Ghannouchi, replacing it with a message from Anonymous. Anons also helped Tunisian dissidents share videos online about the uprising. In Operation Egypt, Anons collaborated with the activist group Telecomix to help dissidents access government-censored websites. Sabu and Topiary went on to participate in attacks on government websites in Bahrain, Egypt, Libya, Jordan, and Zimbabwe. Tflow, Sabu, Topiary, and Ryan Ackroyd (known as "Kayla") collaborated in February 2011 on a cyber-attack against Aaron Barr, CEO of the computer security firm HBGary Federal, in retaliation for his research on Anonymous and his threat to expose members of the group. 
Using a SQL injection weakness, the four hacked the HBGary site, used Barr's captured password to vandalize his Twitter feed with racist messages, and released an enormous cache of HBGary's e-mails in a torrent file on Pirate Bay. The e-mails stated that Barr and HBGary had proposed to Bank of America a plan to discredit WikiLeaks in retaliation for a planned leak of Bank of America documents, and the leak caused substantial public relations harm to the firm as well as leading one U.S. congressman to call for a congressional investigation. Barr resigned as CEO before the end of the month. Several attacks by Anons have targeted organizations accused of homophobia. In February 2011, an open letter was published on AnonNews.org threatening the Westboro Baptist Church, an organization based in Kansas in the U.S. known for picketing funerals with signs reading "God Hates Fags". During a live radio current affairs program in which Topiary debated church member Shirley Phelps-Roper, Anons hacked one of the organization's websites. After the church announced its intentions in December 2012 to picket the funerals of the Sandy Hook Elementary School shooting victims, Anons published the names, phone numbers, and e-mail and home addresses of church members and brought down GodHatesFags.com with a DDoS attack. Hacktivists also circulated petitions to have the church's tax-exempt status investigated. In August 2012, Anons hacked the site of Ugandan Prime Minister Amama Mbabazi in retaliation for the Parliament of Uganda's consideration of an anti-homosexuality law permitting capital punishment. In April 2011, Anons launched a series of attacks against Sony in retaliation for its attempts to stop hacks of the PlayStation 3 game console. More than 100 million Sony accounts were compromised, and the Sony services Qriocity and PlayStation Network were taken down for a month apiece by cyberattacks. In August 2011, Anons launched an attack against BART in San Francisco, which they dubbed #OpBart. The attack, made in response to the killing of Charles Hill a month prior, resulted in customers' personal information being leaked onto the group's website. When the Occupy Wall Street protests began in New York City in September 2011, Anons were early participants and helped spread the movement to other cities such as Boston. In October, some Anons attacked the website of the New York Stock Exchange while other Anons publicly opposed the action via Twitter. Some Anons also helped organize an Occupy protest outside the London Stock Exchange on May 1, 2012. Anons launched Operation Darknet in October 2011, targeting websites hosting child pornography. In particular, the group hacked a child pornography site called "Lolita City" hosted by Freedom Hosting, releasing 1,589 usernames from the site. Anons also said that they had disabled forty image-swapping pedophile websites that employed the anonymity network Tor. In 2012, Anons leaked the names of users of a suspected child porn site in OpDarknetV2. Anonymous launched the #OpPedoChat campaign on Twitter in 2012 as a continuation of Operation Darknet. In an attempt to eliminate child pornography from the internet, the group posted the emails and IP addresses of suspected pedophiles on the online forum PasteBin. In 2011, the Koch Industries website was attacked in response to the company's actions against union members, leaving the site inaccessible for 15 minutes.
In 2013, one member, a 38-year-old truck driver, pleaded guilty after being accused of participating in the attack for a period of one minute. He received a sentence of two years' federal probation and was ordered to pay $183,000 in restitution, the amount Koch stated it had paid a consultancy organization, even though this was only a denial-of-service attack. On January 19, 2012, the U.S. Department of Justice shut down the file-sharing site Megaupload on allegations of copyright infringement. Anons responded with a wave of DDoS attacks on U.S. government and copyright organizations, shutting down the sites for the RIAA, MPAA, Broadcast Music, Inc., and the FBI. In April 2012, Anonymous hacked 485 Chinese government websites, some more than once, to protest the Chinese government's treatment of its citizens. They urged people to "fight for justice, fight for freedom, [and] fight for democracy". In 2012, Anonymous launched Operation Anti-Bully: Operation Hunt Hunter in retaliation against Hunter Moore's revenge porn site, "Is Anyone Up?" Anonymous crashed Moore's servers and publicized much of his personal information online, including his social security number. The organization also published the personal information of Andrew Myers, the proprietor of "Is Anyone Back", a copycat site of Moore's "Is Anyone Up?" In response to Operation Pillar of Defense, a November 2012 Israeli military operation in the Gaza Strip, Anons took down hundreds of Israeli websites with DDoS attacks. Anons pledged another "massive cyberassault" against Israel in April 2013 in retaliation for its actions in Gaza, promising to "wipe Israel off the map of the Internet". However, its DDoS attacks caused only temporary disruptions, leading cyberwarfare experts to suggest that the group had been unable to recruit or hire botnet operators for the attack. 2013 On November 5, 2013, Anonymous protesters gathered around the world for the Million Mask March. Demonstrations were held in 400 cities around the world to coincide with Guy Fawkes Night. Operation Safe Winter was an effort to raise awareness about homelessness through the collection, collation, and redistribution of resources. This program began on November 7, 2013, after an online call to action from Anonymous UK. Three missions using a charity framework were suggested in the original global call to action, spawning a variety of direct actions, from used-clothing drives to community potluck feeding events in the UK, US and Turkey. The #OpSafeWinter call to action quickly spread through mutual aid communities like Occupy Wall Street and its offshoot groups like the open-source-based OccuWeather. With the addition of the long-term mutual aid communities of New York City and online hacktivists in the US, it took on an additional three suggested missions. Encouraging participation from the general public, this operation has raised questions of privacy and the changing nature of the Anonymous community's use of monikers. The project to support those living on the streets, while causing division within Anonymous' own online network, has been able to partner with many efforts and organizations not traditionally associated with Anonymous or online activists. 2014 In the wake of the fatal police shooting of unarmed African-American Michael Brown in Ferguson, Missouri, "Operation Ferguson"—a hacktivist organization that claimed to be associated with Anonymous—organized cyberprotests against police, setting up a website and a Twitter account to do so.
The group promised that if any protesters were harassed or harmed, they would attack the city's servers and computers, taking them offline. City officials said that e-mail systems were targeted and phones died, while the Internet crashed at City Hall. Prior to August 15, members of Anonymous corresponding with Mother Jones said that they were working on confirming the identity of the undisclosed police officer who shot Brown and would release his name as soon as they did. On August 14, Anonymous posted on its Twitter feed what it claimed was the name of the officer involved in the shooting. However, police said the identity released by Anonymous was incorrect. Twitter subsequently suspended the Anonymous account from its service. It was reported on November 19, 2014, that Anonymous had declared cyber war on the Ku Klux Klan (KKK) the previous week, after the KKK had made death threats following the Ferguson riots. They hacked the KKK's Twitter account, attacked servers hosting KKK sites, and started to release the personal details of members. On November 24, 2014, Anonymous shut down the Cleveland city website and posted a video after Tamir Rice, a twelve-year-old boy armed only with a BB gun, was shot to death by a police officer in a Cleveland park. Anonymous also used BeenVerified to uncover the phone number and address of a police officer involved in the shooting. 2015 In January 2015, Anonymous released a video and a statement via Twitter condemning the attack on Charlie Hebdo, in which 12 people, including eight journalists, were fatally shot. The video, claiming to be "a message for al-Qaeda, the Islamic State and other terrorists", was uploaded to the group's Belgian account. The announcement stated that "We, Anonymous around the world, have decided to declare war on you, the terrorists" and promised to avenge the killings by "shut[ting] down your accounts on all social networks." On January 12, they brought down a website that was suspected to belong to one of these groups. Critics of the action warned that taking down extremists' websites would make them harder to monitor. On June 17, 2015, Anonymous claimed responsibility for a denial-of-service attack against Canadian government websites in protest of the passage of bill C-51, anti-terror legislation that grants additional powers to Canadian intelligence agencies. The attack temporarily affected the websites of several federal agencies. On October 28, 2015, Anonymous announced that it would reveal the names of up to 1,000 members of the Ku Klux Klan and other affiliated groups, stating in a press release, "You are terrorists that hide your identities beneath sheets and infiltrate society on every level. The privacy of the Ku Klux Klan no longer exists in cyberspace." On November 2, a list of 57 phone numbers and 23 email addresses (allegedly belonging to KKK members) was reportedly published and received media attention. However, a tweet from the "@Operation_KKK" Twitter account the same day denied that it had released that information. The group stated it planned to, and later did, reveal the names on November 5. Since 2013, Saudi Arabian hacktivists have been targeting government websites in protest of the actions of the regime. These actions have seen attacks supported by the possibly Iranian-backed Yemen Cyber Army. An offshoot of Anonymous, self-described as Ghost Security or GhostSec, started targeting Islamic State-affiliated websites and social media handles.
In November 2015, Anonymous announced a major, sustained operation against ISIS following the November 2015 Paris attacks, declaring: "Anonymous from all over the world will hunt you down. You should know that we will find you and we will not let you go." ISIS responded on Telegram by calling them "idiots", and asking "What they hack?" By the next day, however, Anonymous claimed to have taken down 3,824 pro-ISIS Twitter accounts, and by the third day more than 5,000, and to have doxxed ISIS recruiters. A week later, Anonymous increased their claim to 20,000 pro-ISIS accounts and released a list of the accounts. The list included the Twitter accounts of Barack Obama, Hillary Clinton, The New York Times, and BBC News. The BBC reported that most of the accounts on the list appeared to be still active. A spokesman for Twitter told The Daily Dot that the company was not using the lists of accounts being reported by Anonymous, as they had been found to be "wildly inaccurate" and included accounts used by academics and journalists. In 2015, a group that claimed to be affiliated with Anonymous, calling itself AnonSec, claimed to have hacked and gathered almost 276 GB of data from NASA servers, including flight and radar logs, videos, and multiple documents related to ongoing research. AnonSec also claimed to have gained access to a NASA Global Hawk drone and released some video footage purportedly from the drone's cameras. Part of the data was released by AnonSec on the Pastebin service as an Anon Zine. NASA denied the hack, asserting that control of the drones was never compromised, but acknowledged that the photos released along with the content were real photographs of its employees, most of which were already available in the public domain. 2016 The Blink Hacker Group, associating itself with Anonymous, claimed to have hacked Thai prison websites and servers. The compromised data was shared online, with the group claiming that it was giving the data back to Thailand's justice system and the citizens of Thailand. The hack was carried out in response to news reports about the mistreatment of prisoners in Thailand. In late 2017, the QAnon conspiracy theory first emerged on 4chan, and adherents used terminology and branding similar to that of Anonymous. In response, anti-Trump members of Anonymous warned that QAnon was stealing the collective's branding and vowed to oppose the theory. A group calling itself Anonymous Africa launched a number of DDoS attacks on websites associated with the controversial South African Gupta family in mid-June 2016. Gupta-owned companies targeted included the websites of Oakbay Investments, The New Age, and ANN7. The websites of the South African Broadcasting Corporation and of the political parties Economic Freedom Fighters and Zimbabwe's Zanu-PF were also attacked for "nationalist socialist rhetoric and politicising racism." 2020 In February 2020, Anonymous hacked the United Nations' website and created a page for Taiwan, a country which has not had a seat at the UN since 1971. The hacked page featured the Flag of Taiwan, the KMT emblem, a Taiwan Independence flag, and the Anonymous logo along with a caption. The hacked server belonged to the United Nations Department of Economic and Social Affairs.
In the wake of protests across the U.S. following the murder of George Floyd, Anonymous released a video on Facebook, as well as sending it to the Minneapolis Police Department, on May 28, 2020. Titled "Anonymous Message To The Minneapolis Police Department", the video states that the group is going to seek revenge on the Minneapolis Police Department and "expose their crimes to the world". According to Bloomberg, the video was initially posted on an unconfirmed Anonymous Facebook page on May 28. According to BBC News, that same Facebook page had no notoriety and published videos of dubious content linked to UFOs and "China's plan to take over the world". It gained attention after the video about George Floyd was published and the website of the Minneapolis police, the force that employed the officer responsible, went down. Later, Minnesota Governor Tim Walz said that every computer in the region suffered a sophisticated attack. According to BBC News, the attack on the police website using DDoS (Distributed Denial of Service) was unsophisticated. According to researcher Troy Hunt, these breaches of the site may have stemmed from old credentials. Regarding unverified Twitter posts that also went viral, which appear to show police radios playing music and preventing communication, experts pointed out that, even if the recordings are real, this is unlikely to have been the result of a hack. CNET later reported that the supposed leaks from the police website were false and that someone was taking advantage of the attention surrounding George Floyd's murder to spread misinformation. On May 31, 2020, a person or group claiming to be part of the Anonymous collective tweeted that they had uploaded a number of documents to the Scribd account OpDeathEaters that purportedly contain incriminating evidence against U.S. President Donald J. Trump and Jeffrey Epstein in multiple cases of pedophilia, rape and sexual assault. According to these documents, Epstein and Trump were co-defendants in one case. They further claimed to have evidence that the British royal family had Princess Diana murdered because she had incriminating evidence about the royal family's involvement in a sex-trafficking ring. Other high-profile accusations involved Bill Gates and Naomi Campbell. Anonymous issued an open letter to Gates. In the letter, Anonymous claimed Gates had advocated for strict measures, including a surveillance system that could track infected people. Anonymous said: "In the midst of a historical pandemic, much of the world is looking to you for solutions, and it seems that this is no mistake, because you have positioned yourself as the Nostradamus of disease." Gates had recently suggested implementing a "national tracking system similar to South Korea" during an online Ask Me Anything (AMA) session on Reddit. Gates also advocated for social distancing as a viable way to reduce the number of infections. Later the same day, most of these tweets disappeared from the accounts but continued circulating as screenshots on Twitter, Facebook and Reddit. Another accusation was that Epstein was connected to the deaths of Michael Jackson, Kurt Cobain, Avicii, Paul Walker, Chester Bennington, Chris Cornell and Marilyn Monroe, who were claimed to have been murdered by Epstein's pedophile ring. In 2020, Anonymous started cyber-attacks against the Nigerian government, launching the operation to support the #EndSARS movement in Nigeria. The group's attacks were tweeted by a member of Anonymous called LiteMods.
The websites of the EFCC, INEC and various other Nigerian government agencies were taken down with DDoS attacks. The websites of some banks were compromised. A video circulated claiming that Anonymous had given the Nigerian government 72 hours, but some Anonymous members denied this. 2021 The Texas Heartbeat Act, a law which bans abortions after six weeks of pregnancy, came into effect in Texas on September 1, 2021. The law relies on private citizens to file civil lawsuits against anyone who performs or induces an abortion, or aids and abets one, once "cardiac activity" in an embryo can be detected via transvaginal ultrasound, which is usually possible beginning at around six weeks of pregnancy. Shortly after the law came into effect, anti-abortion organizations set up websites to collect "whistleblower" reports of suspected violators of the bill. On September 3, Anonymous announced "Operation Jane", a campaign focused on stymying those who attempted to enforce the law by "exhaust[ing] the investigational resources of bounty hunters, their snitch sites, and online gathering spaces until no one is able to maintain data integrity". On September 11, the group hacked the website of the Republican Party of Texas, replacing it with text about Anonymous, an invitation to join Operation Jane, and a Planned Parenthood donation link. On September 13, Anonymous released a large quantity of private data belonging to Epik, a domain registrar and web hosting company known for providing services to websites that host far-right, neo-Nazi, and other extremist content. Epik had briefly provided services to an abortion "whistleblower" website run by the anti-abortion Texas Right to Life organization, but the reporting form went offline on September 4 after Epik told the group it had violated the company's terms of service by collecting private information about third parties. The data included domain purchase and transfer details, account credentials and logins, payment history, employee emails, and unidentified private keys. The hackers claimed they had obtained "a decade's worth of data", which included all customers and all domains ever hosted or registered through the company, and which included poorly encrypted passwords and other sensitive data stored in plaintext. Later on September 13, the Distributed Denial of Secrets (DDoSecrets) organization said it was working to curate the allegedly leaked data for public download, and said that it consisted of "180 gigabytes of user, registration, forwarding and other information". Publications including The Daily Dot and The Record by Recorded Future subsequently confirmed the veracity of the hack and the types of data that had been exposed. Anonymous released another leak on September 29, this time publishing bootable disk images of Epik's servers; more disk images as well as some leaked documents from the Republican Party of Texas appeared on October 4. 2022 On February 25, 2022, Twitter accounts associated with Anonymous declared that they had launched "cyber operations" against the Russian Federation, in retaliation for the invasion of Ukraine ordered by Russian president Vladimir Putin. The group later temporarily disabled websites such as RT.com and the website of the Russian Defence Ministry, along with other state-owned websites. That same day, the group leaked 200 GB of documents and emails belonging to the Russian Ministry of Defense.
Anonymous also leaked 200 GB worth of emails from the Belarusian weapons manufacturer Tetraedr, which provided logistical support for Russia in the Russian invasion of Ukraine. Anonymous also hacked into Russian TV channels, playing Ukrainian music through them and showing uncensored news of what was happening in Ukraine. Related groups LulzSec In May 2011, the small group of Anons behind the HBGary Federal hack—including Tflow, Topiary, Sabu, and Kayla—formed the hacker group "Lulz Security", commonly abbreviated "LulzSec". The group's first attack was against Fox.com, leaking several passwords, LinkedIn profiles, and the names of 73,000 X Factor contestants. In May 2011, members of Lulz Security gained international attention for hacking into the American Public Broadcasting Service (PBS) website. They stole user data and posted a fake story on the site that claimed that rappers Tupac Shakur and Biggie Smalls were still alive and living in New Zealand. LulzSec stated that some of its hacks, including its attack on PBS, were motivated by a desire to defend WikiLeaks and its informant Chelsea Manning. In June 2011, members of the group claimed responsibility for an attack against Sony Pictures that took data that included "names, passwords, e-mail addresses, home addresses and dates of birth for thousands of people." In early June, LulzSec hacked into and stole user information from the pornography website www.pron.com. They obtained and published around 26,000 e-mail addresses and passwords. On June 14, 2011, LulzSec took down four websites at the request of fans as part of their "Titanic Take-down Tuesday". These websites were Minecraft, League of Legends, The Escapist, and IT security company FinFisher. They also attacked the login servers of the multiplayer online game EVE Online, which also disabled the game's front-facing website, and the League of Legends login servers. Most of the takedowns were performed with DDoS attacks. LulzSec also hacked a variety of government-affiliated sites, such as chapter sites of InfraGard, a non-profit organization affiliated with the FBI. The group leaked some of InfraGard's member e-mails and a database of local users. On June 13, LulzSec released the e-mails and passwords of a number of users of senate.gov, the website of the U.S. Senate. On June 15, LulzSec launched an attack on cia.gov, the public website of the U.S. Central Intelligence Agency, taking the website offline for several hours with a distributed denial-of-service attack. On December 2, an offshoot of LulzSec calling itself LulzSec Portugal attacked several sites related to the government of Portugal. The websites for the Bank of Portugal, the Assembly of the Republic, and the Ministry of Economy, Innovation and Development all became unavailable for a few hours. On June 26, 2011, the core LulzSec group announced it had reached the end of its "50 days of lulz" and was ceasing operations. Sabu, however, had already been secretly arrested on June 7 and then released to work as an FBI informant. His cooperation led to the arrests of Ryan Cleary, James Jeffery, and others. Tflow was arrested on July 19, 2011, Topiary was arrested on July 27, and Kayla was arrested on March 6, 2012. Topiary, Kayla, Tflow, and Cleary pleaded guilty in April 2013 and were scheduled to be sentenced in May 2013. In April 2013, Australian police arrested the alleged LulzSec leader Aush0k, but subsequent prosecutions failed to establish police claims.
AntiSec Beginning in June 2011, hackers from Anonymous and LulzSec collaborated on a series of cyber attacks known as "Operation AntiSec". On June 23, in retaliation for the passage of the immigration enforcement bill Arizona SB 1070, LulzSec released a cache of documents from the Arizona Department of Public Safety, including the personal information and home addresses of many law enforcement officers. On June 22, LulzSec Brazil took down the websites of the Government of Brazil and the President of Brazil. Later data dumps included the names, addresses, phone numbers, Internet passwords, and Social Security numbers of police officers in Arizona, Missouri, and Alabama. AntiSec members also stole police officer credit card information to make donations to various causes. On July 18, LulzSec hacked into and vandalized the website of British newspaper The Sun in response to a phone-hacking scandal. Other targets of AntiSec actions included FBI contractor ManTech International, computer security firm Vanguard Defense Industries, and defense contractor Booz Allen Hamilton; the group released 90,000 military e-mail accounts and their passwords from the latter. In December 2011, AntiSec member "sup_g" (alleged by the U.S. government to be Jeremy Hammond) and others hacked Stratfor, a U.S.-based intelligence company, vandalizing its web page and publishing 30,000 credit card numbers from its databases. AntiSec later released millions of the company's e-mails to WikiLeaks. Arrests and trials Since 2009, dozens of people have been arrested for involvement in Anonymous cyberattacks, in countries including the U.S., UK, Australia, the Netherlands, Spain, and Turkey. Anons generally protest these prosecutions and describe these individuals as martyrs to the movement. The July 2011 arrest of LulzSec member Topiary became a particular rallying point, leading to a widespread "Free Topiary" movement. The first person to be sent to jail for participation in an Anonymous DDoS attack was Dmitriy Guzner, an American 19-year-old. He pleaded guilty to "unauthorized impairment of a protected computer" in November 2009 and was sentenced to 366 days in U.S. federal prison. On June 13, 2011, officials in Turkey arrested 32 individuals who were allegedly involved in DDoS attacks on Turkish government websites. These members of Anonymous were captured in different cities of Turkey, including Istanbul and Ankara. According to PC Magazine, these individuals were arrested after they attacked websites in response to the Turkish government's demand that ISPs implement a system of filters that many perceived as censorship. Chris Doyon (alias "Commander X"), a self-described leader of Anonymous, was arrested in September 2011 for a cyberattack on the website of Santa Cruz County, California. He jumped bail in February 2012 and fled across the border into Canada. In September 2012, journalist and Anonymous associate Barrett Brown, known for speaking to media on behalf of the group, was arrested hours after posting a video that appeared to threaten FBI agents with physical violence. Brown was subsequently charged with 17 offenses, including publishing personal credit card information from the Stratfor hack. Operation Avenge Assange Several law enforcement agencies took action after Anonymous' Operation Avenge Assange. In January 2011, the British police arrested five male suspects between the ages of 15 and 26 on suspicion of participating in Anonymous DDoS attacks.
During July 19–20, 2011, 20 or more suspected Anonymous hackers were arrested in the US, UK, and the Netherlands. According to the statements of U.S. officials, suspects' homes were raided and suspects were arrested in Alabama, Arizona, California, Colorado, Washington DC, Florida, Massachusetts, Nevada, New Mexico, and Ohio. Additionally, a 16-year-old boy was held by the police in south London on suspicion of breaching the Computer Misuse Act 1990, and four were held in the Netherlands. AnonOps admin Christopher Weatherhead (alias "Nerdo"), a 22-year-old who had reportedly been intimately involved in organizing DDoS attacks during "Operation Payback", was convicted by a UK court on one count of conspiracy to impair the operation of computers in December 2012. He was sentenced to 18 months' imprisonment. Ashley Rhodes, Peter Gibson, and another male had already pleaded guilty to the same charge for actions between August 2010 and January 2011. Analysis Evaluations of Anonymous' actions and effectiveness vary widely. In a widely shared post, blogger Patrick Gray wrote that private security firms "secretly love" the group for the way in which it publicizes cyber security threats. Anonymous is sometimes stated to have changed the nature of protesting, and in 2012, Time called it one of the "100 most influential people" in the world. In 2012, Public Radio International reported that the U.S. National Security Agency considered Anonymous a potential national security threat and had warned the president that it could develop the capability to disable parts of the U.S. power grid. In contrast, CNN reported in the same year that "security industry experts generally don't consider Anonymous a major player in the world of cybercrime" due to the group's reliance on DDoS attacks that briefly disabled websites rather than the more serious damage possible through hacking. One security consultant compared the group to "a jewelry thief that drives through a window, steal jewels, and rather than keep them, waves them around and tosses them out to a crowd ... They're very noisy, low-grade crimes." In its 2013 Threats Predictions report, McAfee wrote that the technical sophistication of Anonymous was in decline and that it was losing supporters due to "too many uncoordinated and unclear operations". Graham Cluley, a security expert for Sophos, argued that Anonymous' actions against child porn websites hosted on a darknet could be counterproductive, commenting that while their intentions may be good, the removal of illegal websites and sharing networks should be performed by the authorities, rather than Internet vigilantes. Some commentators also argued that the DDoS attacks by Anonymous following the January 2012 Stop Online Piracy Act protests had proved counterproductive. Molly Wood of CNET wrote that "[i]f the SOPA/PIPA protests were the Web's moment of inspiring, non-violent, hand-holding civil disobedience, #OpMegaUpload feels like the unsettling wave of car-burning hooligans that sweep in and incite the riot portion of the play." Dwight Silverman of the Houston Chronicle concurred, stating that "Anonymous' actions hurt the movement to kill SOPA/PIPA by highlighting online lawlessness." The Oxford Internet Institute's Joss Wright wrote that "In one sense the actions of Anonymous are themselves, anonymously and unaccountably, censoring websites in response to positions with which they disagree."
Gabriella Coleman has compared the group to the trickster archetype and said that "they dramatize the importance of anonymity and privacy in an era when both are rapidly eroding. Given that vast databases track us, given the vast explosion of surveillance, there's something enchanting, mesmerizing and at a minimum thought-provoking about Anonymous' interventions". When asked what good Anonymous had done for the world, Parmy Olson replied: In some cases, yes, I think it has in terms of some of the stuff they did in the Middle East supporting the pro-democracy demonstrators. But a lot of bad things too, unnecessarily harassing people – I would class that as a bad thing. DDOSing the CIA website, stealing customer data and posting it online just for shits and giggles is not a good thing. Quinn Norton of Wired wrote of the group in 2011: I will confess up front that I love Anonymous, but not because I think they're the heroes. Like Alan Moore's character V who inspired Anonymous to adopt the Guy Fawkes mask as an icon and fashion item, you're never quite sure if Anonymous is the hero or antihero. The trickster is attracted to change and the need for change, and that's where Anonymous goes. But they are not your personal army – that's Rule 44 – yes, there are rules. And when they do something, it never goes quite as planned. The internet has no neat endings. Furthermore, Landers assessed the following in 2008: Anonymous is the first internet-based super-consciousness. Anonymous is a group, in the sense that a flock of birds is a group. How do you know they're a group? Because they're travelling in the same direction. At any given moment, more birds could join, leave, peel off in another direction entirely. Media portrayal Sam Esmail shared in an interview with Motherboard that he was inspired by Anonymous when creating the USA Network hacktivist drama, Mr. Robot. Furthermore, Wired calls the "Omegas", a fictitious hacker group in the show, "a clear reference to the Anonymous offshoot known as LulzSec". A member of Anonymous called Mr. Robot "the most accurate portrayal of security and hacking culture ever to grace the screen". In the TV series Elementary, a hacktivist collective called "Everyone" plays a recurring role; there are several hints and similarities to Anonymous.
See also
Memetic persona
Luther Blissett (nom de plume)
Crowd psychology
John Doe
Proteus effect
Composition
Emergent organization
Fourth-generation warfare
Self-organization
Spontaneous order
Adhocracy
Activism
Electronic civil disobedience
Leaderless resistance
Streisand effect
Other related articles
Anti-mask laws
CyberBerkut
Derp (hacker group)
LulzRaft
Panopticon
Securax
RedHack
We Are Legion: The Story of the Hacktivists
References
Notes
Citations
Bibliography
External links
Anonymity Cyberattacks Hacker groups Information society Intellectual property activism Internet-based activism Internet culture Internet memes Internet trolling Internet vigilantism Organizations established in 2003 Articles containing video clips Hacking in the 2000s Hacking in the 2010s Hacking in the 2020s Cyberattack gangs Anonymity pseudonyms 4chan Anti-cult organizations Hacktivists Cybercrime in the United States
8895690
https://en.wikipedia.org/wiki/Count%20key%20data
Count key data
Count key data (CKD) is a direct-access storage device (DASD) data recording format introduced in 1964 by IBM with its IBM System/360 and still being emulated on IBM mainframes. It is a self-defining format with each data record represented by a Count Area that identifies the record and provides the number of bytes in an optional Key Area and an optional Data Area. This is in contrast to devices using fixed sector size or a separate format track. Count key data (CKD) also refers to the set of channel commands (collectively Channel Command Words, CCWs) that are generated by an IBM mainframe for execution by a DASD subsystem employing the CKD recording format. The initial set of CKD CCWs, introduced in 1964, was substantially enhanced and improved into the 1990s.

CKD Track Format

"The beginning of a track is signalled when the index marker (index point) is detected. ... The marker is automatically recognized by a special sensing device." Following the index marker is the home address, which indicates the location of this track on the disk and contains other control information internal to the control unit. A fixed-length gap follows the home address. Next, each track contains a Record 0 (R0), the track descriptor record, which is "designed to enable the entire content of a track to be moved to alternate tracks if a portion of the primary track becomes defective." Following R0 are the data blocks, separated by gaps.

The principle of CKD records is that, since data block lengths can vary, each block has an associated count field which identifies the block and indicates the size of the key, if used (user-defined up to 255 bytes), and the size of the data area, if used. The count field holds the identification of the record in cylinder-head-record format, the length of the key, and the length of the data. The key may be omitted or consist of a string of characters. Each CKD record consists of a count field, an optional key field, and an optional "user" data field, with error correction/detection information appended to each field and gaps separating the fields. Because of the gaps and other information, the recorded space is larger than that required for just the count data, key data, or user data. IBM provides a "reference card" for each device, which can be used to compute the number of blocks per track for various block sizes, and to optimize the block size for the device. Later, programs were written to do these calculations. Because blocks are normally not split between tracks, specification of an incorrect block size can waste up to half of each track.

Most often, the key is omitted and the record is located sequentially or by direct cylinder-head-record addressing. If it is present, the key is typically a copy of the first bytes of the data record (for "unblocked" records) or a copy of the highest key in the block (for "blocked" records), but it can be any data which will be used to find the record, usually using the Search Key Equal or Search Key High or Equal CCW. The key (and hence the record) is locatable via hardware commands.
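As a rough sketch of the record layout just described, the following C fragment models the count field contents (the record ID in cylinder-head-record form, the key length and the data length) and uses them to estimate the track space one record occupies. The per-field overhead constant is a made-up placeholder standing in for the device-specific gaps and check bytes that a real calculation would take from the device's reference card.

#include <stdint.h>
#include <stdio.h>

/* Count field of a CKD record as seen by software: the record ID
   (cylinder, head, record number) followed by the key length and the
   data length.  A key length of zero means the optional key area is
   absent. */
struct ckd_count {
    uint16_t cylinder;     /* CC - cylinder number             */
    uint16_t head;         /* HH - head (track) number         */
    uint8_t  record;       /* R  - record number on the track  */
    uint8_t  key_length;   /* KL - 0..255 bytes                */
    uint16_t data_length;  /* DL - bytes in the data area      */
};

/* Hypothetical per-field overhead (inter-field gap plus check bytes),
   used only to make the arithmetic concrete; real figures are device
   specific and come from the device's reference card. */
#define FIELD_OVERHEAD  100u
#define COUNT_FIELD_LEN   8u   /* CC HH R KL DL */

/* Approximate recorded space consumed by one record: the count field,
   the optional key field and the optional data field, each carrying
   its own overhead. */
static unsigned recorded_length(const struct ckd_count *c)
{
    unsigned len = COUNT_FIELD_LEN + FIELD_OVERHEAD;
    if (c->key_length)
        len += c->key_length + FIELD_OVERHEAD;
    if (c->data_length)
        len += c->data_length + FIELD_OVERHEAD;
    return len;
}

int main(void)
{
    /* Record 3 on cylinder 100, head 5: an 8-byte key and a 4096-byte
       data block. */
    struct ckd_count c = { 100, 5, 3, 8, 4096 };
    printf("record occupies roughly %u bytes of track space\n",
           recorded_length(&c));
    return 0;
}

Dividing a device-specific track capacity by such a per-record figure is essentially what the block-size calculation programs mentioned above automate.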
Since the introduction of IBM's System/360 in 1964, nearly all IBM large and intermediate system DASDs have used the count key data record format. The advantages of the count key data record format are:
The record size can be exactly matched to the application block size.
CPU and memory requirements can be reduced by exploiting search-key commands; IBM CKD subsystems initially operated synchronously with the system channel and can process information in the gaps between the various fields, thereby achieving higher performance by avoiding the redundant transfer of information to the host. Both synchronous and asynchronous operations are supported on later subsystems.

Reduced CPU and memory prices and higher device and interface speeds have somewhat nullified the advantages of CKD, and it is retained only because IBM's flagship operating system z/OS does not support sector-oriented interfaces. Originally CKD records had a one-to-one correspondence to a physical track of a DASD device; however, over time the records have become more and more virtualized, such that in modern IBM mainframes there is no longer a direct correspondence between a CKD record ID and the physical layout of a track.

IBM's CKD DASD subsystems

Programming

Access to specific classes of I/O devices by an IBM mainframe is under the control of Channel Command Words (CCWs), some of which are generic (e.g. No Operation) but many of which are specific to the type of I/O device (e.g. Read Backwards for a tape drive). The group of CCWs defined by IBM for DASD falls into five broad categories:
Control: control of the DASD, including the path thereto.
Sense: sense status of the DASD, including the path thereto; some sense commands affect the status of the controller and DASD in a fashion more in keeping with a control command, e.g., RESERVE and RELEASE.
Write: write information to the controller or DASD (which may be buffered or cached in the path).
Search: compare information from the CPU with information stored in the DASD; the channel operates in the Write mode while the storage unit operates in the Read mode.
Read: read information from the DASD (which may be buffered or cached in the path).

CKD CCWs are the specific set of CCWs used to access CKD DASD subsystems. This is in contrast to fixed block architecture (FBA) CCWs, which are used to access FBA DASD subsystems. CKD DASD are addressed like other Input/Output devices; for System/360 and System/370, DASD are addressed directly, through channels and the associated control units (SCU, or Storage Control Unit), initially using three hexadecimal digits, one for the channel and two for the control unit and device, providing addressing for up to 16 channels, up to 256 DASD access mechanisms per channel, and 4,096 DASD addresses in total. Modern IBM mainframes use four hexadecimal digits as an arbitrary subchannel number within a channel subsystem subset, whose definition includes the actual channels, control units and devices, providing addressing for up to 65,536 DASD per channel subsystem subset. In practice, physical and design constraints of the channel and of the controllers limited the maximum number of DASD attachable to a system to fewer than the number that could be addressed.
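A minimal sketch of the original three-hexadecimal-digit addressing just described: the first digit selects the channel and the remaining two select the control unit and device. The address value used below is only an example chosen for illustration; actual device numbers depended on how a given installation was configured.

#include <stdio.h>

/* Split a System/360-style three-hex-digit device address ("cuu") into
   its channel number and its control-unit/device portion. */
static void split_address(unsigned cuu, unsigned *channel, unsigned *unit)
{
    *channel = (cuu >> 8) & 0xF;   /* first hex digit: channel 0-15        */
    *unit    = cuu & 0xFF;         /* last two digits: control unit/device */
}

int main(void)
{
    unsigned channel, unit;

    split_address(0x191, &channel, &unit);   /* 0x191 is an arbitrary example */
    printf("device 191 -> channel %X, control unit/device %02X\n",
           channel, unit);
    return 0;
}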
The controller handles the CKD track formatting and is packaged with the first drive or drives in a string of drives, giving that unit a model number with the letter "A" as a prefix: an "A-Unit" (or "A-Box"), as in the 3350 Model A2 containing a controller and two DASDs. DASD without a controller, that is, B-Units, have a "B" prefix in their model number. CKD subsystems and directors were offered by IBM and plug-compatible competitors until at least 1996 (2301 to 3390 Model 9); in total, IBM offered 22 unique DASD configured in at least 35 different subsystem configurations. Plug-compatible competitors offered many of the same DASD, including 4 CKD subsystems featuring unique DASD.
Initial CKD feature set
The initial feature set provided by IBM with its 1964 introduction of the CKD track format and associated CCWs included:
Defective/Alternative Track: enables an alternate track to replace a defective track, transparent to the access method in use.
Record overflow: records can exceed the maximum track length of a DASD.
Multitrack operations: specific CCWs can continue onto the next sequential head.
Command chaining: CCWs could be chained together to construct complex channel programs. The gaps in a CKD track format provided sufficient time between the commands so that all channel and SCU activity necessary to complete a command can be performed in the gap between the appropriate fields. Such programs can search a large amount of information stored on a DASD, upon successful completion returning only the desired data and thereby freeing CPU resources for other activity. This mode of operating synchronously with the gaps was later enhanced by additional CCWs enabling a nonsynchronous mode of operation.
Channel switching: an SCU can be shared between channels; initially two-channel switching was provided, and it was expanded to up to eight channels in later SCUs. The channels can be on the same or different CPUs.
A Scan feature set was also provided but was not continued into future CKD subsystems beyond the 2314. Forty-one CCWs implemented the feature set:
Notes:
O = optional feature
S = standard feature
MT = multitrack: when supported, the CCW will continue to operate on the next heads in sequence to the end of the cylinder
‡ = TIC (Transfer In Channel) and other standard commands not shown.
† = code same as MT Off except as listed
1. File Scan Feature (9 CCWs) only available on the 2841 for the 2302, 2311 and 2321; they were not available on subsequent DASD controllers for DASD later than the 2314.
2. Count is the number of bytes in the search argument, including mask bytes.
The CCWs were initially executed by two types of SCU attached to the system's high-speed Selector Channels. The 2820 SCU controlled the 2301 Drum while the 2841 SCU controlled combinations of the 2302 Disk Storage, 2311 Disk Drive, 2321 Data Cell and/or 7320 Drum Storage. IBM quickly replaced the 7320 with the faster and larger 2303. Subsequently, the feature set was implemented on the 2314 family of storage controls and an integrated attachment of the System 370 Model 25. The following example of a channel program reads a disk record identified by a Key field. The track containing the record and the desired value of the key are known. The SCU will search the track to find the requested record. In this example <> indicate that the channel program contains the storage address of the specified field.
SEEK <cylinder/head number>
SEARCH KEY EQUAL <key value>
TIC *-8 Back to search if not equal
READ DATA <buffer>
The TIC (transfer in channel) will cause the channel program to branch to the SEARCH command until a record with a matching key (or the end of the track) is encountered. When a record with a matching key is found, the SCU will include Status Modifier in the channel status, causing the channel to skip the TIC CCW; thus the channel program will not branch and the channel will execute the READ command.
Block Multiplexer Channel Enhancements
The block multiplexer channel was introduced beginning in 1971 on some high-end System/360 systems along with the 2835 Control Unit and associated 2305 DASD. This channel was then standard on IBM System/370 and subsequent mainframes; when contrasted to the prior Selector channel it offered performance improvements for high-speed devices such as DASD, including:
Multiple Requesting: allowed multiple channel programs to be simultaneously active in the facility, as opposed to only one with a Selector channel. The actual number of subchannels provided depends upon the system model and its configuration. Sometimes described as disconnected command chaining, the control unit could disconnect at various times during a chained set of CCWs, for example, disconnection for a Seek CCW, freeing the channel for another subchannel.
Command Retry: the channel and storage control can, under certain conditions, inter-operate to cause a CCW to be retried without an I/O interruption. This procedure is initiated by the storage control and used to recover from correctable errors.
Rotational Position Sensing: rotational position sensing (RPS) was implemented with two new CCWs, SET SECTOR and READ SECTOR, which enabled the channel to delay command chaining until the disk rotated to a specified angular track position. RPS permits channel disconnection during most of the rotational delay period and thus contributes to increased channel utilization. The control unit implements RPS by dividing each track into equal angular segments.
Example Channel Program
The following example channel program will format a track with an R0 and three CKD records.
SEEK <cylinder/head number>
SET FILE MASK <allow write operations>
SET SECTOR <sector number=0>
WRITE R0 <cylinder/head/R0, key length=0, data length=6>
WRITE CKD <cylinder/head/R1, key length, data length>
WRITE CKD <cylinder/head/R2, key length, data length>
WRITE CKD <cylinder/head/R3, key length, data length>
In this example, Record 0 conforms to IBM programming standards. With a block multiplexer channel the channel is free during the time the DASD is seeking and again while the disk rotates to the beginning of the track. A selector channel would be busy for the entire duration of this sample program.
Defect skipping
Defect skipping allows data to be written before and after one or more surface defects, allowing all of a track to be used except for the portion that has the defect. This also eliminates the time that was formerly required to seek to an alternate track. Only a limited number of defects could be skipped, so alternate tracks remained supported for those tracks with excess defects. Defect skipping was introduced in 1974 with the 3340 attached via the 3830 Model 2 Storage Control Unit or integrated attachments on small systems. Defect skipping was essentially a factory-only feature until 1981, when CCWs for its management, along with associated utilities, were released.
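The control flow of the key-search loop shown earlier can be mirrored in ordinary code. The following C fragment is only a hedged analogue, not a channel program: the record layout, track array and example keys are invented for illustration, and the loop exit plays the role of the Status Modifier that makes the channel skip the TIC.

#include <string.h>
#include <stdio.h>

/* Hypothetical in-memory stand-in for the records on one track. */
struct record {
    char key[8];     /* key area (compared by SEARCH KEY EQUAL) */
    char data[100];  /* data area (transferred by READ DATA)    */
};

/* Analogue of SEARCH KEY EQUAL + TIC *-8 + READ DATA: keep "searching"
 * successive records; a match corresponds to the status modifier that
 * lets the channel fall through the TIC and execute the read. */
static const struct record *search_track(const struct record *track,
                                         int nrecords,
                                         const char *key, size_t keylen)
{
    for (int i = 0; i < nrecords; i++) {
        if (memcmp(track[i].key, key, keylen) == 0)
            return &track[i];  /* match: fall through to the READ */
    }
    return NULL;               /* end of track, no matching key   */
}

int main(void)
{
    struct record track[2] = {
        { "PAYROLL", "record one data" },
        { "INVNTRY", "record two data" },
    };
    const struct record *r = search_track(track, 2, "INVNTRY", 7);
    if (r)
        printf("found: %s\n", r->data);
    return 0;
}

The point of running the equivalent loop inside the SCU was that the search happened in the control unit in real time, so the CPU was not burdened with transferring and comparing every record on the track.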
Dynamic paths
First introduced with the 3380 DASD on the 3880 Storage Control Unit in 1981, the feature was included with the later CKD DASD subsystems. The dynamic path selection function controls operation of the two controllers, including simultaneous data transfer over the two paths. When supported by the operating system, each controller can serve as an alternate path in the event the other controller is unavailable. Three additional commands, Set Path Group ID, Sense Path Group ID, and Suspend Multipath Reconnection, are used to support attachment of the 3380 Models having two controllers at the head of a string. The Set Path Group ID command, with the dynamic path selection (DPS) function, provides greater flexibility in operations on reserved devices. Once a path group for a device has been established, it may be accessed over any path which is a member of the group to which it is reserved. In addition, on 370-XA systems that set the multipath mode bit in the function control byte (byte 0) to a 1, block multiplex reconnections will occur on the first available path which is a member of the group over which the channel program was initiated (regardless of the reservation state of the device). If the controller designated in the I/O address is busy or disabled, dynamic path selection allows an alternate path to the device to be established via another storage director and the other controller in the Model AA.
Nonsynchronous operation
Prior to the 1981 introduction of the 3880 director, CKD records were synchronously accessed: all activities required that one CCW be ended and the next initiated in the gaps between the CKD fields. The gap size placed limitations on cable length but did provide for very high performance, since complex chains of CCWs could be performed by the subsystem in real time without use of CPU memory or cycles. Nonsynchronous operation, provided by the Extended CKD ("ECKD") set of CCWs, removed the gap timing constraint. The five additional ECKD CCWs are Define Extent, Locate Record, Write Update Data, Write Update Key and Data, and Write CKD Next Track. In nonsynchronous operation, the transfer of data between the channel and the storage control is not synchronized with the transfer of data between the storage control and the device. Channel programs can be executed such that channel and storage control activities required to end execution of one command and advance to the next do not have to occur during the inter-record gap between two adjacent fields. An intermediate buffer in the storage control allows independent operations between the channel and the device. A major advantage of ECKD is that it allows far longer cables; depending upon the application, it may also improve performance. ECKD CCWs are supported on all subsequent CKD subsystems. This example nonsynchronous channel program reads records R1 and R2 from track X'0E' in cylinder X'007F'. Both records have a key length of 8 and a data length of X'64' (100 decimal) bytes.
Define Extent <extent = X'007F 0000' through track X'0081 000E'>
Locate Record <cylinder = X'007F', head = X'000E'>
Read Key and Data <key record = X'001038'>
Read Data <record = X'001108'>
Caching
Caching, first introduced in DASD CKD subsystems by Memorex (1978) and StorageTek (1981), was subsequently introduced in late 1981 by IBM on the 3880 Model 13 for models of the 3380 with dynamic pathing.
The cache is dynamically managed by an algorithm; high-activity data is accessed from the high-performance cache and low-activity data is accessed from less-expensive DASD storage. A large memory in the Director, the cache, is divided into track slots that store data from the 3380 tracks. A smaller area is a directory that contains entries that allow data to be located in the cache. Caches were also provided on subsequently introduced storage controls.
Other extensions
Over time a number of path control, diagnostic and/or error recovery CCWs were implemented on one or more storage controls. For example:
Unconditional Reserve: allowed releasing a device reserved to another channel and reserving the device to the channel issuing the command.
Read Multiple Count Key Data: could read full tracks more efficiently, allowing for more efficient backups.
Beyond System/370
Reduced CPU and memory prices and higher device and interface speeds have somewhat nullified the advantages of CKD, but IBM continues to support it to this day because its flagship operating system z/OS continues to use CKD CCWs for many functions. Originally CKD records had a one-to-one correspondence to a physical track of a DASD device; however, over time the records have become more and more virtualized, such that in a modern IBM mainframe there is no longer a direct correspondence between a CKD record ID and the physical layout of a track. An IBM mainframe constructs CKD track images in memory and executes the ECKD and CKD channel programs against the image. To bridge between the native fixed-block-sized disks and the variable-length ECKD/CKD record format, the CKD track images in memory are mapped onto a series of fixed blocks suitable for transfer to and from an FBA disk subsystem. Of the 83 CKD CCWs implemented for System/360 and System/370 channels, 56 are emulated on System/390 and later systems.
See also
Block (data storage)
Data set (IBM mainframe)
Fixed-block architecture (FBA)
Record (computer science)
Track (disk drive)
Volume Table of Contents (VTOC)
Notes
References
Further reading
Development of 360/370 Architecture - A Plain Man's View, P.J. Gribbin, February 10, 1989, Chapters 8–10.
IBM storage devices IBM mainframe operating systems File system management
44210893
https://en.wikipedia.org/wiki/National%20Development%20Programme%20in%20Computer%20Aided%20Learning
National Development Programme in Computer Aided Learning
The National Development Programme in Computer Aided Learning (NDPCAL) was the earliest large-scale education programme in the United Kingdom to explore the use of computers for teaching and learning. First proposed in 1969 to the Department of Education and Science by the National Council for Educational Technology, it ran from 1973 to 1977, spending £2.5M to support some 35 projects covering a range of subjects. About half the money was spent on projects in universities and the rest on projects in schools, colleges, industrial and military training. Richard Hooper was appointed its Director and operated with a small central team; the programme was administered by the Council for Educational Technology.
Origins
During the 1960s various projects in the US and the UK using mainframe and mini-computers began to develop the field of Computer Aided Learning, and there was much debate about its value and effectiveness. The National Council for Educational Technology produced advice to government in 1969 to run a national development programme to explore the value of these approaches. The Department of Education and Science (DES) announced in 1972 the approval by then Secretary of State Margaret Thatcher of a "national development programme in computer assisted learning." Following the announcement of the programme, the post of director was advertised and Richard Hooper was selected.
Strategy
NDPCAL's strategy was to work mainly with existing projects in Computer Aided Learning but also to develop feasibility projects with those with good ideas. It required joint funding from the host establishment and stipulated effective evaluation and monitoring processes, but allowed a significant degree of autonomy to the projects. The approach of the central team was active and interventionist, working alongside potential projects in their early stages to help develop their design and approach. They required four-monthly accounting periods and carefully controlled expenditure.
Governance
CET was asked to provide administrative services to the new programme, and the programme's central staff were CET employees, but executive control was with a committee made up of civil servants from seven government departments plus a group of co-opted advisers. This programme committee was chaired by the DES and held the final say on proposals from the programme director. It also involved itself in project evaluation, setting up sub-committees of three or so of its members to look in detail at a particular proposal or project. Although each of the thirty projects had its own steering committee, national linkage was maintained through a member of the national programme committee sitting on each project steering committee.
Setting Up
From January 1973 to early summer 1973, there was a phase of exploration and consultation, and from the summer of 1973 to the end of the year, there was the setting up of the programme's management structure and of the first generation of major projects, notably in the university sector. Hooper was supported by two assistant directors, Gillian Frewin (from ICL) and Roger Miles (from the Army School of Instructional Technology). They were supported by two other executive posts and three secretaries.
The programme formulated two main aims over its lifetime (Hooper, 1975, p17):
to develop and secure the assimilation of computer assisted and computer managed learning on a regular institutional basis at reasonable cost
to make recommendations to appropriate agencies in the public and private sector (including Government) concerning possible future levels and types of investment in computer assisted and computer managed learning in education and training.
Two evaluations were set up, one to consider the educational benefits and one to consider the financial aspects.
Breadth of Projects
This first government-funded programme focused on the use of computers for learning subjects other than programming. It supported some 35 projects: seven in schools, a number in higher education, but the majority based on the British armed services' growing interest in developing more automated and managed approaches to training. The hardware was limited; the computers were large, expensive cabinets of complicated electronics accessed mainly by paper tape with Teletype printouts, but already the focus was more on the way technology could be used to improve teaching and learning than on technology as a subject in its own right. NDPCAL funded a wide range of projects of different types, covering a range of subjects, age ranges and sectors. Some of these, such as Chelsea College's computers in the undergraduate science curriculum, developed into the Computers in the Curriculum project, and Hertfordshire's computer-managed mathematics helped the Advisory Unit for Computer Based Education (AUCBE) at Hatfield develop. It classified projects into different stages:
Stage 1 - Design and Feasibility - a project that shows that a particular application of CAL or CML is feasible by developing and piloting applications.
Stage 2 - Development and Transferability - the creation of a working system for increasing numbers of students across a number of institutions.
Stage 3 - Model Operation - a fully operational project able to act as a model for others.
Stage 4 - Assimilation and Dissemination - national funding is being phased out and the institution has taken ownership, with other new institutions taking it up.
About half the project funds were spent on projects in universities and polytechnics, about one-sixth of the project funds was spent on school-based projects and the rest on military and industrial training. Some of the projects are listed below.
Computer Based Learning Project on Applied Statistics for Social Science, Leeds University - Director: J.R. Hartley
Computer Assisted Learning in Engineering Sciences - Director: Dr. P.R. Smith, Faculty of Engineering, Computer Assisted Teaching Unit, Queen Mary College
Computer Assisted Learning in Chemistry - Director: Dr. P.B. Ayscough, Dept. of Physical Chemistry, The University of Leeds
Computers in the Undergraduate Science Curriculum - Director: Dr. I. McKenzie, University College London
Hertfordshire Computer Managed Mathematics in Schools - Director: Dr. W. Tagg, Advisory Unit for Computer Based Education
Evaluation
NDPCAL set up two independent evaluations: an educational evaluation carried out by the University of East Anglia and a financial evaluation by Peat, Marwick, Mitchell and Co. The educational evaluation, UNCAL (Understanding Computer Assisted Learning), was a three-year evaluation project that reported findings about CAL in general.
Its findings echo many of the later findings on the effectiveness of e-learning:
It is the versatility of the computer as an aid that assures its educational future
CAL, like most innovation, provides an add-on experience at an add-on cost
Much of the learning seen within NDPCAL fell into the category of higher-order learning
CAL is a demanding medium for learning - virtually guaranteeing the student's engagement
Some forms of CAL enforce a strict role of learner on the student - this may need to be complemented by other forms
CAL offers the student uninhibited learning opportunities within a 'privacy of risk'
Learning may be inhibited by interface problems - where the student needs to put extra effort into keyboard skills and learning new computer protocols
Current CAL still requires more adaptation of the student to the machine
Students like working on CAL but are frustrated by technical problems
CAL is change-oriented, not efficiency-oriented
CAL supports teacher development since it encourages a team approach
At present CAL development requires access to high-level computer expertise.
The financial evaluation reported some tentative but interesting conclusions in its study that again reflect later findings on e-learning:
CAL will always be an extra cost
There are no realisable cash savings or benefits from CAL
Claims that CAL will 'save' academic staff time are oversimplified and unjustifiable
The time taken to develop science packages varies between 200 and 400 hours
Inter-institutional development has been a success, leading to substantial savings
Large-scale applications of CAL require full-time staff and regular computer time.
They calculated the 'national or total cost per student terminal hour' in the range £4-£15; by comparison, the cost of conventional teaching was in the range £0.60-£2.50 per student hour.
References
Computer-aided design Education in the United Kingdom Educational technology projects Governmental educational technology organizations Information technology organisations based in the United Kingdom United Kingdom educational programs
39039288
https://en.wikipedia.org/wiki/Philip%27s%20Music%20Writer
Philip's Music Writer
In computing, Philip's Music Writer or PMW, formerly known as Philip's Music Scribe or PMS, is a music scorewriter written by Philip Hazel. It was mentioned in the Center for Computer Assisted Research in the Humanities publication Computing in Musicology in 1993 and remains under active development as free software.
Development
The software was originally written in order for Hazel to typeset recorder music for his children. It was written in BCPL for an IBM mainframe at the University of Cambridge and also ran on a system running Panos, which was later sold as the Acorn Business Computer. The program was subsequently ported to Acorn's Archimedes running Arthur and later ported to Unix-like systems. It began as commercial software and was later released as free software. On-screen proof-reading was rudimentary on the Acorn Business Computer, which used the BBC Micro for screen output. The Arthur version initially ran at the command line, but was later converted to use the WIMP and outline fonts. Sibelius was released in 1993. Hazel later observed that composers and arrangers generally preferred such WYSIWYG editors, while music engravers tended to prefer text-input scorewriters because of the increased degree of control available. The learning of such text input requires more time investment by the user, so the notation was designed with the aim of being "both compact and easy to learn". The Linux version (ported in 2003) is "back to its roots", being command line driven. The software uses PostScript fonts named PMW-Music and PMW-Alpha, which were conceived by the author and Richard Hallas. The fonts were originally designed as outline fonts. It remains under active development as free software.
Features
Musical notation is provided to the software in textual form; the software generates output for a printer or for saving in PostScript or Drawfile format. Simple MIDI files and sound output can also be generated.
Reception
The software was mentioned in the Center for Computer Assisted Research in the Humanities publication Computing in Musicology in 1993, and in chapter 18 of Beyond MIDI: The Handbook of Musical Codes, MIT Press (1997).
References
RISC OS software
29683239
https://en.wikipedia.org/wiki/Dominion%20Voting%20Systems
Dominion Voting Systems
Dominion Voting Systems Corporation is a company that sells electronic voting hardware and software, including voting machines and tabulators, in the United States and Canada. The company's headquarters are in Toronto, Ontario, and Denver, Colorado. It develops software in offices in the United States, Canada, and Serbia. Dominion produces electronic voting machines, which allow voters to cast their vote electronically, as well as optical scanning devices to tabulate paper ballots. Dominion voting machines have been used in countries around the world, primarily in Canada and the United States. Dominion systems are employed in Canada's major party leadership elections, and they are also employed across the nation in local and municipal elections. Dominion products have been increasingly used in the United States in recent years. In the 2020 United States presidential election, equipment manufactured by Dominion was used to process votes in twenty-eight states, including the swing states of Wisconsin and Georgia. The company was subjected to extensive attention following the election, at which then-president Donald Trump was defeated by Joe Biden, with Trump and various surrogates promoting conspiracy theories, alleging that Dominion was part of an international cabal to steal the election from Trump, and that it used its voting machines to transfer millions of votes from Trump to Biden. There is no evidence supporting these claims, which have been debunked by various groups including election technology experts, government and voting industry officials, and the Cybersecurity and Infrastructure Security Agency (CISA). These conspiracy theories were further discredited by hand recounts of the ballots cast in the 2020 presidential elections in Georgia and Wisconsin; the hand recounts in these states found that Dominion voting machines had accurately tabulated votes, that any error in the initial tabulation was human error, and that Biden had defeated Trump in both battleground states. In December 2020 and January 2021, Fox News, Fox Business, Newsmax, and the American Thinker rescinded allegations they had reported about Dominion and Smartmatic after one or both companies threatened legal action for defamation. In January 2021, Dominion filed defamation lawsuits against former Trump campaign lawyers Sidney Powell and Rudy Giuliani, seeking $1.3 billion in damages from each. After Dominion filed the lawsuit against Powell, One America News Network (OANN) removed all references to Dominion and Smartmatic from its website without issuing public retractions. During ensuing months, Dominion filed suits seeking $1.6 billion from each of Fox News, Newsmax, OANN and former Overstock.com CEO Patrick Byrne, while also suing Mike Lindell and MyPillow. Despite motions by the defendants to dismiss the lawsuits, judges said the cases against Fox News, Lindell, and MyPillow could proceed. Company Dominion Voting Systems Corporation was founded in 2002 in Toronto, Ontario, by John Poulos and James Hoover. The company develops proprietary software in-house and sells electronic voting hardware and software, including voting machines and tabulators, in the U.S. and Canada and retains a development team in their Serbian office. The company maintains headquarters in Toronto and in Denver, Colorado. Its name comes from the Dominion Elections Act. Acquisitions In May 2010, Dominion acquired Premier Election Solutions (formerly Diebold Election Systems, Inc.) from Election Systems & Software (ES&S). 
ES&S had just acquired Premier from Diebold and was required to sell off Premier by the United States Department of Justice for anti-trust concerns. In June 2010, Dominion acquired Sequoia Voting Systems. In 2018, Dominion was acquired by its management and Staple Street Capital, a private equity firm. Officers Poulos, President and CEO of Dominion, has a BS in electrical engineering from the University of Toronto and an MBA from INSEAD. Hoover, Vice President, has an MS in mechanical engineering from the University of Alberta. Equipment Dominion Voting Systems (DVS) sells electronic voting systems hardware and software, including voting machines and tabulators, in the United States and Canada. This equipment includes the DVS ImageCast Evolution (ICE), ImageCast X (ICX), and ImageCast Central (ICC). ImageCast Evolution is an optical scan tabulator designed for use in voting precincts that scans and tabulates marked paper ballots. The ICE will also mark ballots for voters with disabilities using an attached accessibility device that enables all voters to cast votes with paper ballots on the same machine. When a marked paper ballot is inserted, the tabulator screen display messages indicating whether the ballot has been successfully input. Causes of rejection include a blank ballot, an overvoted ballot, and unclear marks. After the polls close, results from the encrypted memory cards of each ICE tabulator can be transferred and uploaded to the central system to tally and report the results. ImageCast X is described as an accessible ballot-marking device that allows a voter to use various methods to input their choices. An activation card is required for use, which is provided by a poll worker. The machine has audio capability for up to ten languages, as required by the U.S. Department of Justice, and includes a 19" full-color screen for visual operation, audio and visual marking interfaces and Audio-Tactile Interface (ATI). ATI is a hand-held controller that coordinates with headphones and connects directly to the ICE. During the voting process, the machine generates a marked paper ballot that serves as the official ballot record. The display can be adjusted with contrast and zoom functions that automatically reset at the end of the session. The ATI device has raised keys with tactile function, includes the headphone jack and a T-coil coupling, and has a T4 rating for interference. It uses light pressure switches and may be equipped with a pneumatic switch, commonly known as "sip-n-puff", or a set of paddles. ImageCast Central uses commercial off-the-shelf Canon DR-X10C or Canon DR-G1130 scanners at a central tabulation location to scan vote-by-mail and post-voting ballots like provisional ballots, ballots requiring duplication and ballots scanned into multi-precinct ICE tabulators. The results are dropped into a folder located on the server where they can be accessed by the Adjudication Client software. Software DVS voting machines operate using a suite of proprietary software applications, including Election Management System, Adjudication Client, and Mobile Ballot Printing. The software allows for various settings, including cumulative voting, where voters can apply multiple votes on one or more candidates, and Ranked Order Voting, where voters rank candidates in order of choice and the system shifts votes as candidates are eliminated. 
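The "shifts votes as candidates are eliminated" behaviour of ranked order voting can be illustrated with a generic instant-runoff count. The C sketch below is a textbook illustration only, not Dominion's algorithm or code; the ballot format, candidate count and tie handling are simplified assumptions.

#include <stdio.h>

#define NCAND 3
#define NBALLOTS 5

int main(void)
{
    /* Each ballot ranks the candidates best-first by index. */
    int ballots[NBALLOTS][NCAND] = {
        {0, 1, 2}, {1, 0, 2}, {1, 2, 0}, {2, 1, 0}, {0, 2, 1}
    };
    int eliminated[NCAND] = {0};

    for (;;) {
        int tally[NCAND] = {0};

        /* Count each ballot for its highest-ranked surviving candidate. */
        for (int b = 0; b < NBALLOTS; b++)
            for (int r = 0; r < NCAND; r++)
                if (!eliminated[ballots[b][r]]) { tally[ballots[b][r]]++; break; }

        /* Stop when a candidate holds a majority. */
        for (int c = 0; c < NCAND; c++)
            if (tally[c] * 2 > NBALLOTS) {
                printf("winner: candidate %d with %d votes\n", c, tally[c]);
                return 0;
            }

        /* Otherwise eliminate the surviving candidate with the fewest votes;
         * ballots counted for that candidate shift to their next choice. */
        int loser = -1;
        for (int c = 0; c < NCAND; c++)
            if (!eliminated[c] && (loser < 0 || tally[c] < tally[loser]))
                loser = c;
        eliminated[loser] = 1;
    }
}

Each round counts every ballot for its highest-ranked candidate still in the race; if no one has a majority, the weakest candidate is removed and those ballots flow to their next preference, which is the shifting behaviour the description above refers to.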
The Election Management System (EMS) includes a set of applications that handle pre- and post-voting activities, including ballot layout, programming media for voting equipment, generation of audio files, importing results data, and accumulating and reporting results. Adjudication Client is a software application with administrative and ballot inspection roles. It allows a jurisdiction to resolve problems in a ballot on screen that would normally be rejected, to be remade or hand counted because of one or more exceptional conditions like a blank ballot, write-ins, over-votes, marginal marks and under-votes. The application configures user accounts, reasons for exception, batch management and report generation, which in some jurisdictions must be performed by an administrator directly on a server. Ballot inspection allows users to review ballots with exceptional conditions and either accept or resolve the ballot according to state laws. Each adjudicated ballot is marked with the username of the poll worker who made the change. Mobile Ballot Printing operates in conjunction with the EMS, which creates printable ballot images in .pdf format including tints and watermarks. The image is exported to a laptop and then printed on blank paper to provide a ballot record. After configuration and setup are complete, the laptop only contains geopolitical information and no voter data. The system will also generate reports in Excel, Word and .pdf format, including total number of ballots printed and ballot style. Operations United States Dominion is the second-largest seller of voting machines in the United States. In 2016, its machines served 70 million voters in 1,600 jurisdictions. In 2019, the state of Georgia selected Dominion Voting Systems to provide its new statewide voting system beginning in 2020. In total, 28 states used Dominion voting machines to tabulate their votes during the 2020 United States presidential election, including most of the swing states. Dominion's role in this regard led supporters of then-President Donald Trump to promote conspiracy theories about the company's voting machines, following Trump's defeat to Joe Biden in the election. Canada In Canada, Dominion's systems are deployed nationwide. Currently, Dominion provides optical scan paper ballot tabulation systems for provincial elections, including Ontario and New Brunswick. Dominion also provides ballot tabulation and voting systems for Canada's major party leadership elections, including those of the Liberal Party of Canada, the Conservative Party of Canada, and the Progressive Conservative Party of Ontario. Ontario was the first Canadian province to use Dominion's tabulator machines in select municipalities in the 2006 municipal elections. New Brunswick used Dominion's 763 tabulator machines in the 2014 provincial elections. There were some problems with the reporting of tabulator counts after the election, and at 10:45 p.m. Elections New Brunswick officially suspended the results reporting count with 17 ridings still undeclared. The Progressive Conservatives and the People's Alliance of New Brunswick called for a hand count of all ballots. Recounts were held in 7 of 49 ridings and the results were upheld with variations of 1–3 votes per candidate per riding. This delay in results reporting was caused by an off-the-shelf software application unrelated to Dominion. In June 2018, Elections Ontario used Dominion's tabulator machines for the provincial election and deployed them at 50 percent of polling stations. 
Dominion's architecture was also widely used in the 2018 Ontario municipal elections on October 22, 2018, particularly for online voting. However, 51 of the province's municipalities had their elections impacted when the company's colocation centre provider imposed an unauthorized bandwidth cap due to the massive increase in voting traffic in the early evening, thus making it impossible for many voters to cast their vote at peak voting time. The affected municipalities extended voting times to compensate for the glitch; most prominently, the city of Greater Sudbury, the largest city impacted by the cap, extended voting for a full 24 hours and announced no election results until the evening of October 23. 2020 US presidential election Following the 2020 United States presidential election, Donald Trump, his attorneys and other right-wing personalities amplified the canard originated by the proponents of the far-right QAnon conspiracy theory that Dominion Voting Systems had been compromised, resulting in millions of votes intended for Trump either being deleted or going to rival Joe Biden. Within days after the election, the Trump campaign knew that many of the allegations were baseless. Trump cited the pro-Trump OANN media outlet, which itself claimed to cite a report from Edison Research, an election monitoring group. Edison Research said that they did not write such a report, and that they "have no evidence of any voter fraud." Trump and others also made unsubstantiated claims that Dominion had close ties to the Clinton family or other Democrats. There is no evidence for any of these claims, which have been debunked by various groups including election technology experts, government and voting industry officials, and the CISA. On November 12, 2020, CISA released a statement that confirmed "there is no evidence that any voting system deleted or lost votes, changed votes or was in any way compromised." The statement was signed by various government and voting industry officials including the presidents of the National Association of State Election Directors and the National Association of Secretaries of State. Trump's personal attorney Rudy Giuliani made several false assertions about Dominion, including that its voting machines used software developed by a competitor, Smartmatic, which he claimed actually owned Dominion, and which he said was founded by the former socialist Venezuelan leader Hugo Chávez. Giuliani also falsely asserted that Dominion voting machines sent their voting data to Smartmatic at foreign locations and that it is a "radical-left" company with connections to antifa. These accusations of a connection between Dominion and Smartmatic were made on conservative television outlets, and Smartmatic sent them a letter demanding a retraction and threatening legal action. Fox News host Lou Dobbs had been outspoken during his program about the accusations, and on December 18 his program aired a video segment refuting the accusations, though Dobbs himself did not comment. Fox News hosts Jeanine Pirro and Maria Bartiromo had also been outspoken about the allegations, and both their programs aired the same video segment over the following two days. 
Smartmatic also demanded a retraction from Newsmax, which had also promoted baseless conspiracy allegations about the company and Dominion, and on December 21 a Newsmax host acknowledged the network had no evidence the companies had a relationship, adding "No evidence has been offered that Dominion or Smartmatic used software or reprogrammed software that manipulated votes in the 2020 election." The host further acknowledged that Smartmatic is not owned by any foreign entity, nor is it connected to George Soros, as had been alleged. Dominion sent a similar letter to former Trump attorney Sidney Powell demanding she retract her allegations and retain all relevant records; the Trump legal team later instructed dozens of staffers to preserve all documents for any future litigation. In a related hoax, Dennis Montgomery, a software designer with a history of making dubious claims, asserted that a government supercomputer program was used to switch votes from Trump to Biden on voting machines. Trump attorney Sidney Powell promoted the allegations on Lou Dobbs's Fox Business program two days after the election, and again two days later on Maria Bartiromo's program, claiming to have "evidence that that is exactly what happened." Christopher Krebs, the director of the Cybersecurity and Infrastructure Security Agency, characterized the claim as "nonsense" and a "hoax." Asserting that Krebs's analysis was "highly inaccurate, in that there were massive improprieties and fraud," Trump fired him by tweet days later. Powell also asserted she had an affidavit from a former Venezuelan military official, a portion of which she posted on Twitter without a name or signature, who asserted that Dominion voting machines would print a paper ballot showing who a voter had selected, but change the vote inside the machine. Apparently speaking about the ICE machine, one source responded that this was incorrect, and that Dominion voting machines are only a "ballot marking device" system in which the voter deposits their printed ballot into a box for counting. The disinformation campaign against Dominion led to their employees being stalked, harassed, and receiving death threats. Ron Watkins, a leading proponent of the QAnon conspiracy theory, posted videos on Twitter in early December of a Dominion employee using one of the machines, falsely stating that the employee was pictured tampering with election results. The employee received death threats as a result, and a noose was found hanging outside his home. Eric Coomer, Dominion's director of product strategy and security, went into hiding soon after the election due to fear for his and his family's safety. He said that his personal address had been posted online, as had those of everyone from his parents and siblings to ex-girlfriends. In one of their lawsuits, Dominion explained they had spent $565,000 on security as a result. After questions about the reliability of the company's systems surfaced during the election, Edward Perez, an election technology expert at the Open Source Election Technology Institute stated, "Many of the claims being asserted about Dominion and questionable voting technology is misinformation at best and, in many cases, they're outright disinformation." Defamation lawsuits Eric Coomer, Dominion's director of product strategy and security, filed a defamation suit on December 22, 2020, against the Trump campaign, two campaign lawyers, and several conservative media outlets and individuals. 
Coomer asserted they had characterized him as a "traitor" and that as a result he was subjected to "multiple credible death threats". Coomer also said that a conservative activist had falsely claimed in a podcast that Coomer had participated in a September 2020 conference call with members of antifa, and that during the call he had said, "Don't worry about the election, Trump is not gonna win. I made f-ing sure of that. Hahahaha." In April 2021, Newsmax published a retraction and apology on its website, saying it "found no evidence" to support the allegations against Coomer. Powell, who had asserted there was a recording of Coomer saying this, acknowledged in a July 2021 deposition that no such recording existed. In December 2020, defamation attorneys representing Dominion sent letters to Trump personal attorney Rudy Giuliani, White House counsel Pat Cipollone, and former Dominion contractor and self-described whistleblower Mellissa Carone, advising them to preserve all records relating to the company and demanding Giuliani and Carone cease making defamatory claims about the company, warning that legal action was "imminent." Carone had alleged in testimony that "Everything that happened at that [Detroit ballot counting facility] was fraud. Every single thing." The letter to Carone asserted that, despite Carone presenting herself as a Dominion insider, she was "hired through a staffing agency for one day to clean glass on machines and complete other menial tasks." Michigan Circuit Court judge Timothy Kenny had previously ruled that an affidavit Carone had filed was "simply not credible." Dominion sent My Pillow CEO and Trump supporter Mike Lindell a letter in January 2021 stating, "you have positioned yourself as a prominent leader of the ongoing misinformation campaign" and that "litigation regarding these issues is imminent". Lindell told The New York Times, "I would really welcome them to sue me because I have all the evidence against them". Under threat of litigation, on January 15, 2021, the conservative online magazine American Thinker published a retraction of stories it published asserting that Dominion engaged in a conspiracy to rig the election, acknowledging, "these statements are completely false and have no basis in fact". Fox News, Fox Business, and Newsmax all aired fact checks of their own previous coverage of Dominion and competitor Smartmatic after Smartmatic threatened lawsuits, and after they were listed as other groups spreading falsehoods about Dominion in the lawsuit by Dominion against Giuliani. Dominion filed defamation lawsuits against former Trump campaign lawyer Sidney Powell on January 8, 2021, and against Giuliani on January 25, 2021, asking for $1.3 billion in damages from each. Both lawsuits accused Powell and Giuliani of waging a "viral disinformation campaign" against the company involving "demonstrably false" claims. In the lawsuit against Powell, the company stated that Powell had "doubled down" after Dominion formally notified her that her claims were false, and had posted on Twitter to spread false claims to over one million followers. One of Dominion's attorneys explained they had filed the lawsuit against Powell first "because she's been the most prolific and in many ways has been the originator of these false statements". 
The suit against Giuliani is based on more than fifty statements by Giuliani made on his podcast, at legislative hearings, on Twitter, and in conservative media, and alleges that Giuliani made false statements about the company in an attempt to earn money from legal fees and through his podcast. After Dominion filed the lawsuit against Powell, OANN removed all references to Dominion and Smartmatic from its website without issuing public retractions. On February 16, 2021, Dominion announced that they would be filing a lawsuit against Lindell in response to OANN airing a program by Lindell titled Absolute Proof that repeated his false claims about the 2020 election. Six days later, Dominion filed defamation lawsuits against My Pillow and Lindell. The lawsuit, which asked for $1.3 billion in damages, alleged, "MyPillow's defamatory marketing campaign—with promo codes like 'FightforTrump,' '45,' 'Proof,' and 'QAnon'—has increased MyPillow sales by 30-40% and continues duping people into redirecting their election-lie outrage into pillow purchases." Dominion further stated its intentions were to stop Lindell from using MyPillow to profit off of election conspiracy theories at the expense of Dominion. On March 22, 2021, Powell moved to dismiss Dominion's lawsuit against her, arguing that "no reasonable person" would conclude that her accusations against Dominion "were truly statements of fact." On March 26, 2021, Dominion filed a $1.6 billion defamation lawsuit against Fox News, alleging that Fox and some of its pundits spread conspiracy theories about Dominion, and allowed guests to make false statements about the company. A Fox News motion to dismiss the suit was denied on December 16, 2021, by a Delaware Superior Court judge. On August 4, 2021, a federal judge granted a sanctions motion against Gary Fielder and Ernest John Walker, the attorneys that filed suit against Dominion, Facebook, and others claiming fraud in the 2020 presidential election. Plaintiff's attorneys were ordered to pay $180,000 for the defendants' attorneys' fees, with the judge finding that the lawsuit was filed "in bad faith" and made frivolous arguments. On August 10, 2021, Dominion announced that they were suing conservative news channels OANN and Newsmax, plus former Overstock.com CEO Patrick M. Byrne. The next day, district judge Carl J. Nichols dismissed motions by Lindell, Powell, and Giuliani to dismiss the lawsuits against them. See also List of electronic voting machines in New York state Electronic voting in Canada References External links 2002 establishments in Ontario Election technology companies Manufacturing companies based in Toronto Canadian companies established in 2002 Electronics companies established in 2002 Electronic voting
59202810
https://en.wikipedia.org/wiki/Roy%20Engle
Roy Engle
Roy Winfield Engle (November 25, 1917 – July 7, 2005) was an American football and baseball player and coach. After completing a college football career at the University of Southern California (USC), he served as the head football coach at Santa Barbara College of the University of California—now known as the University of California, Santa Barbara—from 1949 to 1951, compiling a record of 14–14. He was also the head baseball coach at Santa Barbara in 1952, tallying a mark of 30–25. Engle also spent a summer in the Chicago Cubs and St. Louis Browns minor league programs.
Head coaching record
College football
References
1917 births 2005 deaths American football fullbacks Baseball catchers Saint Mary's Pre-Flight Air Devils football players Tulsa Oilers (baseball) players Tyler Trojans players UC Santa Barbara Gauchos football coaches UC Santa Barbara Gauchos baseball coaches USC Trojans football coaches USC Trojans football players People from Sheldon, Iowa Players of American football from Iowa
69633298
https://en.wikipedia.org/wiki/LA%20Cyber%20Lab
LA Cyber Lab
The Los Angeles Cyber Lab, a 501(c) nonprofit organization founded in August 2017 with Cisco Systems, is a "public-private cybersecurity initiative designed to help the (small and mid-sized) business community stay ahead of security threats". In 2018, the Department of Homeland Security awarded LA Cyber Lab a $3 million grant. In 2019, with the backing of IBM, it added a threat-intelligence Web portal and an app for Android and iOS. Cyber NYC is New York City's similar cybersecurity initiative. New York City Cyber Command is a city government agency. Zimperium enterprise mobile security apps are sponsored by the State of Michigan, NYC Cyber Command and Los Angeles Metro Rail.
References
External links
LA Cyber Lab
Cybersecurity Stakeholder Agencies - Los Angeles Area Fire Chiefs Association
Impressionist Films. LA Cyber Lab - Ad Spot 1 - Vimeo
Non-profit organizations based in the United States
17899954
https://en.wikipedia.org/wiki/Maine%20Learning%20Technology%20Initiative
Maine Learning Technology Initiative
The Maine Learning Technology Initiative (MLTI) is an initiative that gives learning technology to all of the 7th-12th graders attending public schools in Maine, Hawaii, and Vermont. Currently, it hands out a school's choice between either iPads, MacBook Airs, Hewlett-Packard ElitePads, Hewlett-Packard ProBooks, and CTL Classmate PC Netbooks to students. Before that, it gave iBooks and later MacBooks to students. When it began in Maine in 2002, it was one of the first such initiatives anywhere in the world and first in the United States to equip all students with a laptop. History In 2000, Governor Angus King proposed The Maine Learning Technology Initiative, to provide laptops for every middle school student and teacher in the state. One of his primary reasons was a $71 million budget surplus in 1999. He "immediately called for a large portion of this surprise windfall to be put into an endowment for ‘the procurement of portable, wireless computer devices for students.’" After the initial public reaction to the plan it became clear that more discussion and examination of this concept was needed and thus in the summer of 2000 the Legislature and Governor King convened a Joint Task Force on the Maine Learning Technology Endowment which had the task to look in-depth at the issues around this proposal and recommend the best course for Maine to follow. In a Wired Magazine interview, Governor King said, "'I think we're going to demonstrate the power of one-to-one computer access that's going to transform education...the economic future will belong to the technologically adept.'" John Waters explains that the keys to the success of MLTI are the professional development accompanying the implementation of the program, the strategic vendor relationship with Apple, and quality local leadership. In early 2001 the Task Force issued its report with the recommendation that Maine pursue a plan to deploy learning technology to all of Maine's students and teachers in 7th and 8th grade and then to look at continuing the program to other grade levels. The Task Force report also included several guiding principles which have been embedded into the work of MLTI. During that spring legislation was authorized to begin the program for the school year beginning in September, 2002. In late September 2001, the Department of Education issued the RFP for MLTI and after scoring all of the proposals selected Apple Computer, Inc. as the award winner. In late December 2001 the Department and Apple formally began to implement the Maine Learning Technology Initiative. According to Garthwait and Weller, "By fall of 2002, more than 17,000 seventh grade students and their teachers had laptops during school." By the beginning of the 2003-2004 school year, another 17,000 laptops were introduced to the new seventh graders. From the start, MLTI included professional development for teachers and principals. According to Geoffrey Fletcher, The program brings Maine's principals together twice a year for either a half day or a full day, in clusters based on the counties they work in. During the sessions, staff from Apple, the supplier for the 1-to-1 program, demonstrate new applications that have been or will be installed on the computers, MLTI staff help with administrative and logistical issues, and members of both staffs discuss different ways these applications can be used with students. 
When the program was conceived, the MLTI team decided that school districts would decide whether or not the students could take the iBooks home at the end of the school day. As of January 2004, "more than half the school districts in Maine allow the students to take the iBooks home." One of the initial motivators for the MLTI was for students in Maine to be technologically literate. At the time of the initiative, Susan Gendron was the Commissioner of Education for the state of Maine. When asked about the rationale behind the technology initiative, Gendron said "we wanted our students to be among the most tech savvy in the country. But by ‘tech savvy’ we meant their ability to use computer tools for innovation, creation and problem solving, not their ability to defrag a hard drive or rip a CD". Governor King stressed that the program was about "learning, not about technology". King was searching for initiatives that would lead to a dramatic improvement in education. He met with Seymour Papert, an educational technology guru to discuss an increase in student to computer ratios as a means of improving education and the future workforce. Papert is also currently one of the principals for the One Laptop Per Child initiative, which is a Miami-based initiative aiming to create affordable educational devices in the developing world. Papert’s advice was to create a 1-1 ratio of students to computers to maximize technological potential, and the budget surplus provided an avenue to attain that ratio. King also stressed that these computers would address the "Digital Divide"—the divide between those with access to transformational informational technology and those without. The Initiative was proposed as an "equity tool" aimed to service Maine’s diverse demographics—both geographically and socioeconomically. The Initiative chose to focus their efforts on 7th to 8th graders because middle school students represented a perfect balance of maturity to take care of the equipment and a youthful curiosity toward school. These students still had malleable attitudes toward learning that could be positively altered. Susan Gendron went with Apple, Inc. because they wanted a vendor that could help their students and teachers inside and outside of the classroom. Specifically, Gendron was looking to make sure that students and teachers "have the necessary tools for innovation and creativity". Apple offered a competitive price and became Maine’s top choice to supply the laptops for the Initiative. Within their contract, Apple agreed to provide Maine’s schools with educational software, professional development, technical support, repair and replacement services. Professional development sessions led by Apple staff are scheduled to keep principals and teachers updated on new educational software that is installed on the school computers, and to make sure the staff knows how to use these programs with their students. In 2013, Governor Paul LePage identified a need to include a Windows-based teaching solution for schools throughout Maine, as he recognized that "[i]t is important that [Maine's] students are using technology that they will see and use in the workplace”. Gov. LePage identified Hewlett Packard's ProBook 4400 running Windows 8 as an affordable solution to ensure students are obtaining technology skills demanded by the modern workplace. HP's bid for the MLTI project was seen by LePage as "the lowest-priced proposal, and the laptops use an operating system that is commonly used in the workplace in Maine. 
These laptops will provide students with the opportunity to enhance their learning and give them experience on the same technology and software they will see in their future careers." To support its 1-1 model, Maine chose to reference free educational tools such as the Open Educational Resources Commons project, which provides free reusable academic programs. The Open Educational Resources Commons Project provides resources such as classroom management help, career and technical educational resources, and much other free and easy-to-use teaching and learning content from around the world. Another goal of the Initiative was to address the problem of relevance in schools. Computers give students an immediate answer to the question of "When will we ever have to use this?" and provide multiple learning modalities for diverse learners. In 2009, it was announced that the MLTI program would expand into the state's high schools. That year, Maine launched the first phase of a rollout to its high schools, in which half the high schools in the state participated. Though it took six years to expand the program to high schools, it was an important part of the original vision of the program. According to Maine's learning technology policy director Jeff Mao, "'We've always imagined this as a 7-12 program.'" One major change in the high school version of the program is that high schools will have to pay for many of the costs associated with the program. While middle schools get money for "software, hardware, network infrastructure, warranties, technical support, professional development, and data-backup services," high schools only get money for installing wireless networks into their schools. As a result, only 50% of the high schools in the state of Maine chose to participate in the rollout for the 2009-2010 school year. In 2013, Governor Paul LePage considered eliminating the MLTI, being unconvinced it was needed to grow distance learning programs in schools, and due to concerns that students were too reliant on technology. His Education Commissioner, Stephen Bowen, convinced him to maintain the program by pointing out that bulk purchases of computers saved taxpayer money, and that technology was as essential as electricity and heat. LePage also switched the computers in the program to Windows operating system-based computers, on the belief that most employers in Maine use such a system. While the state chose a Windows platform, choice was also allowed. Districts could choose from any of the five solutions, with the state paying for anything up to the price of its choice. As a result, over 90 percent of districts in Maine chose to remain with Apple, with 60 percent choosing Apple's Primary Solution, which provides Apple iPads to students and MacBook Airs and iPad Minis to teachers. The remaining districts that chose Apple received MacBook Airs for both staff and students. Less than 10 percent chose the HP Windows solution.
Criticism
Due to the one-size-fits-all approach, the program has created a large financial burden for many rural areas of the state. For example, the Maine School Administrative District #39, which oversees schools in the low mountains of western Maine, has had many problems with the introduction of Apple iBooks into schools. Schools in the district, including Buckfield Junior/Senior High School, had previously used PCs with a network that was "hard to integrate with Apple products without buying expensive products the district cannot afford."
Another expense that the district incurred as a result of the program was the cost of hiring a new technician to solve problems outside of the repair warranty of the computers. Finally, the district does not allow students to take the laptops home after school due to the extra costs involved, including the cost of buying additional power adapters for all students and the cost of maintenance for breakage that occurs at the students’ homes. The hardest part of the initiative for these rural districts was the fact that they were forced to integrate Apple products into formerly Windows-centered buildings. However, "as John Sculley told The Guardian newspaper in 1997: 'People talk about technology, but Apple was a marketing company. It was the marketing company of the decade.'" As a result of its marketing, persistence, and strategic relationship with the state of Maine on the initiative, Apple won the proposal and entered many schools statewide. References External links Maine Learning Technology Initiative Education in Maine
26429269
https://en.wikipedia.org/wiki/VGA%20text%20mode
VGA text mode
VGA text mode was introduced in 1987 by IBM as part of the VGA standard for its IBM PS/2 computers. Its use on IBM PC compatibles was widespread through the 1990s and persists today for some applications on modern computers. The main features of VGA text mode are colored (programmable 16 color palette) characters and their background, blinking, various shapes of the cursor (block/underline/hidden static/blinking), and loadable fonts (with various glyph sizes). The Linux console traditionally uses hardware VGA text modes, and the Win32 console environment has an ability to switch the screen to text mode for some text window sizes. Data arrangement Text buffer Each screen character is represented by two bytes aligned as a 16-bit word accessible by the CPU in a single operation. The lower, or character, byte is the actual code point for the current character set, and the higher, or attribute, byte is a bit field used to select various video attributes such as color, blinking, character set, and so forth. This byte-pair scheme is among the features that the VGA inherited from the EGA, CGA, and ultimately from the MDA. Depending on the mode setup, attribute bit 7 may be either the blink bit or the fourth background color bit (which allows all 16 colors to be used as background colours). Attribute bit 3 (foreground intensity) also selects between fonts A and B (see below). Therefore, if these fonts are not the same, this bit is simultaneously an additional code point bit. Attribute bit 0 also enables underline, if certain other attribute bits are set to zero (see below). Colors are assigned in the same way as in 4-bit indexed color graphic modes (see VGA color palette). VGA modes have no need for the MDA's reverse and bright attributes because foreground and background colors can be set explicitly. Underline The VGA hardware has the ability to enable an underline on any character that has attribute bit 0 set. However, since this is an MDA-compatible feature, the attribute bits not used by the MDA must be set to zero or the underline will not be shown. This means that only bits 3 (intensity) and 7 (blink) can be set concurrently with bit 0 (underline). With the default VGA palette, setting bit 0 to enable underline will also change the text colour to blue. This means text in only two colors can be underlined (5555FF and 0000AA with the default palette). Despite all this, the underline is not normally visible in color modes, as the location of the underline defaults to a scanline below the character glyph, rendering it invisible. If the underline location is set to a visible scanline (as it is by default when switching to an MDA-compatible monochrome text mode), then the underline will appear. Fonts Screen fonts used in EGA and VGA are monospace raster fonts containing 256 glyphs. All glyphs in a font are the same size, but this size can be changed. Typically, glyphs are 8 dots wide and 8–16 dots high, however the height can be any value up to a maximum of 32. Each row of a glyph is coded in an 8-bit byte, with high bits to the left of the glyph and low bits to the right. Along with several hardware-dependent fonts stored in the adapter's ROM, the text mode offers 8 loadable fonts. Two active font pointers (font A and font B) select two of the available fonts, although they usually point to the same font. When they each point to different fonts, attribute bit 3 (see above) acts as a font selection bit instead of as a foreground color bit. 
On real VGA hardware, this overrides the bit's use for color selection, but on many clones and emulators, the color selection remains — meaning one font is displayed as normal intensity, and the other as high-intensity. This error can be overcome by changing the palette registers to contain two copies of an 8-color palette. There are modes with a character box width of 9 dots (e.g. the default 80×25 mode); however, the 9th column is used for spacing between characters, so the content cannot be changed. It is always blank, and drawn with the current background colour. An exception to this is in Line Graphics Enable mode, which causes code points 0xC0 to 0xDF inclusive to have the 8th column repeated as the 9th. These code points cover those box-drawing characters which must extend all the way to the right side of the glyph box. For this reason, placing letter-like characters in code points 0xC0–0xDF should be avoided. The box-drawing characters from 0xB0 to 0xBF are not extended, as they do not point to the right and so do not require extending. Cursor The shape of the cursor is restricted to a rectangle the full width of the character box, and filled with the foreground color of the character at the cursor's current location. Its height and position may be set to anywhere within a character box. The EGA and many VGA clones allowed a split-box cursor (appearing as two rectangles, one at the top of the character box and one at the bottom) by setting the end of the cursor before the start; however, if this is done on the original VGA, the cursor is completely hidden instead. The VGA standard does not provide a way to alter the blink rate, although common workarounds involve hiding the cursor and using a normal character glyph to provide a so-called software cursor. A mouse cursor in a TUI (when implemented) is not usually the same thing as a hardware cursor, but a moving rectangle with altered background or a special glyph. Some text-based interfaces, such as that of Impulse Tracker, went to even greater lengths to provide a smoother and more graphic-looking mouse cursor. This was done by constantly re-generating character glyphs in real-time according to the cursor's on-screen position. Access methods There are generally two ways to access VGA text mode for an application: through the Video BIOS interface or by directly accessing video RAM and I/O ports. The latter method is considerably faster, and allows quick reading of the text buffer, for which reason it is preferred for advanced TUI programs. The VGA text buffer is located at physical memory address 0xB8000. Since this address is usually used by 16-bit x86 processes operating in real mode, it is also the first half of memory segment 0xB800. The text buffer data can be read and written, and bitwise operations can be applied. A part of text buffer memory above the scope of the current mode is accessible, but is not shown. The same physical addresses are used in protected mode. Applications may either have this part of memory mapped to their address space or access it via the operating system. When an application (on a modern multitasking OS) does not have control over the console, it accesses a part of system RAM instead of the actual text buffer. For computers in the 1980s, very fast manipulation of the text buffer, with the hardware generating the individual pixels as fast as they could be displayed, was extremely useful for a fast UI. 
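As a concrete illustration of the byte-pair layout and the direct buffer access described above, the following C sketch writes a single character cell straight into the text buffer. It assumes a bare-metal, DOS-like or emulated environment in which physical address 0xB8000 is directly addressable (on a modern protected-mode OS the buffer must first be mapped in by the operating system); the names VGA_TEXT_BASE, vga_attr and vga_putc are illustrative and not part of any standard API.

#include <stdint.h>

#define VGA_TEXT_BASE 0xB8000u   /* physical address of the colour text buffer */
#define VGA_COLS      80u        /* assuming the standard 80x25 mode */

/* Build an attribute byte: bits 0-3 foreground (bit 3 = intensity / font select),
   bits 4-6 background, bit 7 blink (or bright background, depending on setup). */
static uint8_t vga_attr(uint8_t fg, uint8_t bg, uint8_t blink)
{
    return (uint8_t)(((blink & 1u) << 7) | ((bg & 7u) << 4) | (fg & 15u));
}

/* Store one character cell: low byte = code point, high byte = attribute. */
static void vga_putc(unsigned row, unsigned col, char ch, uint8_t attr)
{
    volatile uint16_t *buf = (volatile uint16_t *)(uintptr_t)VGA_TEXT_BASE;
    buf[row * VGA_COLS + col] = (uint16_t)(((uint16_t)attr << 8) | (uint8_t)ch);
}

/* Example: a non-blinking white-on-blue 'A' in the top-left corner. */
void demo(void)
{
    vga_putc(0, 0, 'A', vga_attr(15, 1, 0));
}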
Even on relatively modern hardware, the overhead of text mode emulation via hardware APA (graphics) modes (in which the program generates individual pixels and stores them into the video buffer) may be noticeable. Modes and timings Video signal From the monitor's side, there is no difference between the input signal in a text mode and in an APA mode of the same size. A text mode signal may have the same timings as VESA standard modes. The same registers are used on the adapter's side to set up these parameters in a text mode as in APA modes. The text mode output signal is essentially the same as in graphic modes, but its source is a text buffer and character generator, not a framebuffer as in APA. PC common text modes Depending on the graphics adapter used, a variety of text modes are available on IBM PC compatible computers. They are listed in the table below: VGA and compatible cards support MDA, CGA and EGA modes. All colored modes have the same design of text attributes. MDA modes have some specific features (see above) — text could be emphasized with the bright, underline, reverse and blinking attributes. The most common text mode used in DOS environments and initial Windows consoles is the default 80 columns by 25 rows, or 80×25, with 16 colors and 8×16-pixel characters. VGA cards always have a built-in font of this size, whereas other sizes may require downloading a differently sized font. This mode was available on practically all IBM and compatible personal computers. Linux kernel 2.6 and later assumes modes from 0000h to 00FFh (hexadecimal) are standard modes, if the VGA BIOS supports them, and it understands them as increased by 0x0100. The same applies to VESA BIOS modes from 0100h to 07FFh (Linux increases them by 0x0100). Modes from 0900h to 09FFh are Video7 special modes (usually 0940h=80×43, 0941h=132×25, 0942h=132×44, 0943h=80×60, 0944h=100×60, 0945h=132×28 for the standard Video7 BIOS). Linux 2.x allows checking the supported video resolutions with the kernel argument "vga=ask" or "vga=<MODE_NUMBER>". Later versions of Linux allow specifying the resolution by modes from 1000h to 7FFFh. The code has a "0xHHWW" form, where HH is the number of rows and WW is the number of columns. E.g., 1950h (0x1950) corresponds to an 80×25 mode, 2B84h (0x2b84) to 132×43, etc. (Linux 3.x and later allows setting the resolution with "video=<conn>:<xres>x<yres>", but this applies to the graphical framebuffer mode.) Two other VGA text modes, 80×40 and 80×50, exist but are less common. Windows NT 4.0 displayed its system messages during the boot process in 80×50 text mode. Character sizes and graphical resolutions for the extended VESA-compatible Super VGA text modes are manufacturer-dependent. Some cards (e.g. S3) supported custom very large text modes, like 132×43 and 132×25. As in graphic modes, graphics adapters of the 2000s are commonly capable of setting up an arbitrarily sized text mode (within reasonable limits) instead of choosing its parameters from a fixed list. SVGATextMode On Linux and DOS systems with so-called SVGA cards, a program called SVGATextMode can be used to set up better-looking text modes than the standard EGA and VGA ones. This is particularly useful for large (≥ 17") monitors, where the normal 80×25 VGA text mode's 720×400 pixel resolution is far lower than a typical graphics mode would be. SVGATextMode allows setting the pixel clock and a higher refresh rate, a larger font size, the cursor size, etc., and thus allows better use of the potential of a video card and monitor. 
In non-Windows systems, the use of SVGATextMode (or alternative options such as the Linux framebuffer) to obtain sharp text is critical for LCD monitors of 1280×1024 (or higher) resolution, because none of the so-called standard text modes fits this matrix size. SVGATextMode also allows fine tuning of the video signal timings. Despite the name of this program, only a few of its supported modes conform to SVGA (i.e. VESA) standards. General restrictions VGA text mode has some hardware-imposed limitations. Because these are too restrictive for modern (post-2000) applications, the hardware text mode on VGA-compatible video adapters has only limited use. * 8 colors may be used by font A and another 8 colors by font B; so, if font A ≠ font B (512-character mode), the palette is effectively halved and text may use only 8 colors. ** Normally, the first 8 colors of the same palette. If blink is disabled, then all 16 colors are available for background. See also General article about text mode of computer display References Text user interface IBM PC compatibles DOS on IBM PC compatibles
20437399
https://en.wikipedia.org/wiki/Compute%20Node%20Linux
Compute Node Linux
Compute Node Linux (CNL) is a runtime environment based on the Linux kernel (SUSE Linux Enterprise Server) for the Cray XT3, Cray XT4, Cray XT5, Cray XT6, Cray XE6 and Cray XK6 supercomputer systems. CNL forms part of the Cray Linux Environment. Systems running CNL have been ranked 3rd, 6th and 8th among the fastest supercomputers in the world. See also CNK (operating system) References External links SUSE Linux Enterprise Server Cray software Linux kernel variant
537171
https://en.wikipedia.org/wiki/Red%20Pike%20%28cipher%29
Red Pike (cipher)
Red Pike is a classified United Kingdom government encryption algorithm, proposed for use by the National Health Service by GCHQ, but designed for a "broad range of applications in the British government". Little is publicly known about Red Pike, except that it is a block cipher with a 64-bit block size and 64-bit key length. According to the academic study of the cipher cited below and quoted in a paper by Ross Anderson and Markus Kuhn, it "uses the same basic operations as RC5" (add, XOR, and left shift) and "has no look-up tables, virtually no key schedule and requires only five lines of code"; "the influence of each key bit quickly cascades" and "each encryption involves of the order of 100 operations". Red Pike is available to approved British government contractors in software form, for use in confidential (not secret) government communication systems. GCHQ also designed the Rambutan cryptosystem for the same segment. Given that Red Pike is a British encryption algorithm, its name likely refers to a particular fell in the western English Lake District. Supposed source code In February 2014, the supposed source code for Red Pike was posted as follows to the Cypherpunk mailing list.

/* Red Pike cipher source code */

#include <stdint.h>

typedef uint32_t word;

#define CONST 0x9E3779B9
#define ROUNDS 16

#define ROTL(X, R) (((X) << ((R) & 31)) | ((X) >> (32 - ((R) & 31))))
#define ROTR(X, R) (((X) >> ((R) & 31)) | ((X) << (32 - ((R) & 31))))

void encrypt(word * x, const word * k)
{
    unsigned int i;
    word rk0 = k[0];
    word rk1 = k[1];

    for (i = 0; i < ROUNDS; i++)
    {
        rk0 += CONST;
        rk1 -= CONST;

        x[0] ^= rk0;
        x[0] += x[1];
        x[0] = ROTL(x[0], x[1]);

        x[1] = ROTR(x[1], x[0]);
        x[1] -= x[0];
        x[1] ^= rk1;
    }

    rk0 = x[0];
    x[0] = x[1];
    x[1] = rk0;
}

void decrypt(word * x, const word * k)
{
    word dk[2] =
    {
        k[1] - CONST * (ROUNDS + 1),
        k[0] + CONST * (ROUNDS + 1)
    };

    encrypt(x, dk);
}

See also Type 1 product References C Mitchell, S Murphy, F Piper, P Wild. (1996). Red Pike — an assessment. Codes and Ciphers Ltd 2/10/96. Paper by Anderson and Kuhn which includes excerpts from (Mitchell et al., 1996). Another version is "The use of encryption and related services with the NHSnet" Block ciphers GCHQ
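If one wants to experiment with the routines posted above, a small driver along the following lines can be used. This is a hypothetical test harness, not part of the original mailing-list posting; note also that the ROTL/ROTR macros above invoke undefined behaviour in C when the rotation amount is a multiple of 32, so behaviour can in principle be compiler-dependent for some inputs.

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

/* Declarations matching the posted code, assumed to be compiled into the same
   program (for example by pasting the functions above into this file). */
void encrypt(uint32_t *x, const uint32_t *k);
void decrypt(uint32_t *x, const uint32_t *k);

int main(void)
{
    uint32_t key[2]   = { 0x01234567u, 0x89ABCDEFu };  /* example 64-bit key */
    uint32_t block[2] = { 0xDEADBEEFu, 0x00C0FFEEu };  /* 64-bit plaintext block */
    uint32_t orig[2]  = { block[0], block[1] };

    encrypt(block, key);
    printf("ciphertext: %08" PRIX32 " %08" PRIX32 "\n", block[0], block[1]);

    decrypt(block, key);
    printf("plaintext : %08" PRIX32 " %08" PRIX32 "\n", block[0], block[1]);

    /* If decrypt() really is the inverse of encrypt(), this reports success. */
    puts((block[0] == orig[0] && block[1] == orig[1]) ? "round trip ok" : "round trip FAILED");
    return 0;
}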
474163
https://en.wikipedia.org/wiki/Intel%20iAPX%20432
Intel iAPX 432
The iAPX 432 (Intel Advanced Performance Architecture) is a discontinued computer architecture introduced in 1981. It was Intel's first 32-bit processor design. The main processor of the architecture, the general data processor, is implemented as a set of two separate integrated circuits, due to technical limitations at the time. Although some early 8086, 80186 and 80286-based systems and manuals also used the iAPX prefix for marketing reasons, the iAPX 432 and the 8086 processor lines are completely separate designs with completely different instruction sets. The project started in 1975 as the 8800 (after the 8008 and the 8080) and was intended to be Intel's major design for the 1980s. Unlike the 8086, which was designed the following year as a successor to the 8080, the iAPX 432 was a radical departure from Intel's previous designs meant for a different market niche, and completely unrelated to the 8080 or x86 product lines. The iAPX 432 project is considered a commercial failure for Intel, and was discontinued in 1986. Description The iAPX 432 was referred to as a "micromainframe", designed to be programmed entirely in high-level languages. The instruction set architecture was also entirely new and a significant departure from Intel's previous 8008 and 8080 processors, as the iAPX 432 programming model is a stack machine with no visible general-purpose registers. It supports object-oriented programming, garbage collection and multitasking as well as more conventional memory management directly in hardware and microcode. Direct support for various data structures is also intended to allow modern operating systems to be implemented using far less program code than for ordinary processors. Intel iMAX 432 is a discontinued operating system for the 432, written entirely in Ada, and Ada was also the intended primary language for application programming. In some aspects, it may be seen as a high-level language computer architecture. These properties and features resulted in a hardware and microcode design that was more complex than most processors of the era, especially microprocessors. However, internal and external buses are (mostly) not wider than 16 bits, and, just like in other 32-bit microprocessors of the era (such as the 68000 or the 32016), 32-bit arithmetical instructions are implemented by a 16-bit ALU, via random logic and microcode or other kinds of sequential logic. The iAPX 432's enlarged address space over the 8080 was also limited by the fact that linear addressing of data could still only use 16-bit offsets, somewhat akin to Intel's first 8086-based designs, including the contemporary 80286 (the new 32-bit segment offsets of the 80386 architecture were described publicly in detail in 1984). Using the semiconductor technology of its day, Intel's engineers were not able to translate the design into a very efficient first implementation. Along with the lack of optimization in a premature Ada compiler, this contributed to rather slow but expensive computer systems, performing typical benchmarks at roughly 1/4 the speed of the new 80286 chip at the same clock frequency (in early 1982). This initial performance gap to the rather low-profile and low-priced 8086 line was probably the main reason why Intel's plan to replace the latter (later known as x86) with the iAPX 432 failed. 
Although engineers saw ways to improve a next generation design, the iAPX 432 capability architecture had now started to be regarded more as an implementation overhead rather than as the simplifying support it was intended to be. Originally designed for clock frequencies of up to 10 MHz, actual devices sold were specified for maximum clock speeds of 4 MHz, 5 MHz, 7 MHz and 8 MHz with a peak performance of 2 million instructions per second at 8 MHz. History Development Intel's 432 project started in 1976, a year after the 8-bit Intel 8080 was completed and a year before their 16-bit 8086 project began. The 432 project was initially named the 8800, as their next step beyond the existing Intel 8008 and 8080 microprocessors. This became a very big step. The instruction sets of these 8-bit processors were not very well fitted for typical Algol-like compiled languages. However, the major problem was their small native addressing ranges, just 16K for 8008 and 64K for 8080, far too small for many complex software systems without using some kind of bank switching, memory segmentation, or similar mechanism (which was built into the 8086, a few years later on). Intel now aimed to build a sophisticated complete system in a few LSI chips, that was functionally equal to or better than the best 32-bit minicomputers and mainframes requiring entire cabinets of older chips. This system would support multiprocessors, modular expansion, fault tolerance, advanced operating systems, advanced programming languages, very large applications, ultra reliability, and ultra security. Its architecture would address the needs of Intel's customers for a decade. The iAPX 432 development team was managed by Bill Lattin, with Justin Rattner as the lead engineer (although one source states that Fred Pollack was the lead engineer). (Rattner would later become CTO of Intel.) Initially the team worked from Santa Clara, but in March 1977 Lattin and his team of 17 engineers moved to Intel's new site in Portland. Pollack later specialized in superscalarity and became the lead architect of the i686 chip Intel Pentium Pro. It soon became clear that it would take several years and many engineers to design all this. And it would similarly take several years of further progress in Moore's Law, before improved chip manufacturing could fit all this into a few dense chips. Meanwhile, Intel urgently needed a simpler interim product to meet the immediate competition from Motorola, Zilog, and National Semiconductor. So Intel began a rushed project to design the 8086 as a low-risk incremental evolution from the 8080, using a separate design team. The mass-market 8086 shipped in 1978. The 8086 was designed to be backward-compatible with the 8080 in the sense that 8080 assembly language could be mapped on to the 8086 architecture using a special assembler. Existing 8080 assembly source code (albeit no executable code) was thereby made upward compatible with the new 8086 to a degree. In contrast, the 432 had no software compatibility or migration requirements. The architects had total freedom to do a novel design from scratch, using whatever techniques they guessed would be best for large-scale systems and software. They applied fashionable computer science concepts from universities, particularly capability machines, object-oriented programming, high-level CISC machines, Ada, and densely encoded instructions. This ambitious mix of novel features made the chip larger and more complex. 
The chip's complexity limited the clock speed and lengthened the design schedule. The core of the design — the main processor — was termed the General Data Processor (GDP) and built as two integrated circuits: one (the 43201) to fetch and decode instructions, the other (the 43202) to execute them. Most systems would also include the 43203 Interface Processor (IP) which operated as a channel controller for I/O, and an Attached Processor (AP), a conventional Intel 8086 which provided "processing power in the I/O subsystem". These were some of the largest designs of the era. The two-chip GDP had a combined count of approximately 97,000 transistors while the single chip IP had approximately 49,000. By comparison, the Motorola 68000 (introduced in 1979) had approximately 40,000 transistors. In 1983, Intel released two additional integrated circuits for the iAPX 432 Interconnect Architecture: the 43204 Bus Interface Unit (BIU) and 43205 Memory Control Unit (MCU). These chips allowed for nearly glueless multiprocessor systems with up to 63 nodes. The project's failures Some of the innovative features of the iAPX 432 were detrimental to good performance. In many cases, the iAPX 432 had a significantly slower instruction throughput than conventional microprocessors of the era, such as the National Semiconductor 32016, Motorola 68010 and Intel 80286. One problem was that the two-chip implementation of the GDP limited it to the speed of the motherboard's electrical wiring. A larger issue was the capability architecture needed large associative caches to run efficiently, but the chips had no room left for that. The instruction set also used bit-aligned variable-length instructions instead of the usual semi-fixed byte or word-aligned formats used in the majority of computer designs. Instruction decoding was therefore more complex than in other designs. Although this did not hamper performance in itself, it used additional transistors (mainly for a large barrel shifter) in a design that was already lacking space and transistors for caches, wider buses and other performance oriented features. In addition, the BIU was designed to support fault-tolerant systems, and in doing so up to 40% of the bus time was held up in wait states. Another major problem was its immature and untuned Ada compiler. It used high-cost object-oriented instructions in every case, instead of the faster scalar instructions where it would have made sense to do so. For instance the iAPX 432 included a very expensive inter-module procedure call instruction, which the compiler used for all calls, despite the existence of much faster branch and link instructions. Another very slow call was enter_environment, which set up the memory protection. The compiler ran this for every single variable in the system, even when variables were used inside an existing environment and did not have to be checked. To make matters worse, data passed to and from procedures was always passed by value-return rather than by reference. When running the Dhrystone benchmark, parameter passing took ten times longer than all other computations combined. According to the New York Times, "the i432 ran 5 to 10 times more slowly than its competitor, the Motorola 68000". Impact and similar designs The iAPX 432 was one of the first systems to implement the new IEEE-754 Standard for Floating-Point Arithmetic. 
An outcome of the failure of the 432 was that microprocessor designers concluded that object support in the chip leads to a complex design that will invariably run slowly, and the 432 was often cited as a counter-example by proponents of RISC designs. However, some hold that the OO support was not the primary problem with the 432, and that the implementation shortcomings (especially in the compiler) mentioned above would have made any CPU design slow. Since the iAPX 432 there has been only one other attempt at a similar design, the Rekursiv processor, although the INMOS Transputer's process support was similar — and very fast. Intel had spent considerable time, money, and mindshare on the 432, had a skilled team devoted to it, and was unwilling to abandon it entirely after its failure in the marketplace. A new architect—Glenford Myers—was brought in to produce an entirely new architecture and implementation for the core processor, which would be built in a joint Intel/Siemens project (later BiiN), resulting in the i960-series processors. The i960 RISC subset became popular for a time in the embedded processor market, but the high-end 960MC and the tagged-memory 960MX were marketed only for military applications. According to the New York Times, Intel's collaboration with HP on the Merced processor (later known as Itanium) was the company's comeback attempt for the very high-end market. Architecture The iAPX 432 instructions have variable length, between 6 and 321 bits. Unusually, they are not byte-aligned, that is, they may contain odd numbers of bits and directly follow each other without regard to byte boundaries. Object-oriented memory and capabilities The iAPX 432 has hardware and microcode support for object-oriented programming and capability-based addressing. The system uses segmented memory, with up to 2^24 segments of up to 64 KB each, providing a total virtual address space of 2^40 bytes. The physical address space is 2^24 bytes (16 MB). Programs are not able to reference data or instructions by address; instead they must specify a segment and an offset within the segment. Segments are referenced by access descriptors (ADs), which provide an index into the system object table and a set of rights (capabilities) governing accesses to that segment. Segments may be "access segments", which can only contain Access Descriptors, or "data segments" which cannot contain ADs. The hardware and microcode rigidly enforce the distinction between data and access segments, and will not allow software to treat data as access descriptors, or vice versa. System-defined objects consist of either a single access segment, or an access segment and a data segment. System-defined segments contain data or access descriptors for system-defined data at designated offsets, though the operating system or user software may extend these with additional data. Each system object has a type field which is checked by microcode, such that a Port Object cannot be used where a Carrier Object is needed. User programs can define new object types which will get the full benefit of the hardware type checking, through the use of type control objects (TCOs). In Release 1 of the iAPX 432 architecture, a system-defined object typically consisted of an access segment, and optionally (depending on the object type) a data segment specified by an access descriptor at a fixed offset within the access segment. 
By Release 3 of the architecture, in order to improve performance, access segments and data segments were combined into single segments of up to 128 KB, split into an access part and a data part of 0–64 KB each. This reduced the number of object table lookups dramatically, and doubled the maximum virtual address space. The iAPX 432 recognizes fourteen types of predefined system objects:
instruction object: contains executable instructions
domain object: represents a program module and contains references to subroutines and data
context object: represents the context of a process in execution
type-definition object: represents a software-defined object type
type-control object: represents type-specific privilege
object table: identifies the system's collection of active object descriptors
storage resource object: represents a free storage pool
physical storage object: identifies free storage blocks in memory
storage claim object: limits storage that may be allocated by all associated storage resource objects
process object: identifies a running process
port object: represents a port and message queue for interprocess communication
carrier: carries messages to and from ports
processor: contains state information for one processor in the system
processor communication object: is used for interprocessor communication
Garbage collection Software running on the 432 does not need to explicitly deallocate objects that are no longer needed. Instead, the microcode implements part of the marking portion of Edsger Dijkstra's on-the-fly parallel garbage collection algorithm (a mark-and-sweep style collector). The entries in the system object table contain the bits used to mark each object as being white, black, or grey as needed by the collector. The iMAX 432 operating system includes the software portion of the garbage collector. Instruction format Executable instructions are contained within a system "instruction object". Since instructions are bit-aligned, a 16-bit bit displacement into the instruction object allows the object to contain up to 8192 bytes of instructions (65,536 bits). Instructions consist of an operator, consisting of a class and an opcode, and zero to three operand references. "The fields are organized to present information to the processor in the sequence required for decoding". More frequently used operators are encoded using fewer bits. The instruction begins with the 4 or 6 bit class field, which indicates the number of operands, called the order of the instruction, and the length of each operand. This is optionally followed by a 0 to 4 bit format field which describes the operands (if there are no operands the format is not present). Then come zero to three operands, as described by the format. The instruction is terminated by the 0 to 5 bit opcode, if any (some classes contain only one instruction and therefore have no opcode). "The Format field permits the GDP to appear to the programmer as a zero-, one-, two-, or three-address architecture." The format field indicates whether an operand is a data reference, or the top or next-to-top element of the operand stack. See also iAPX, for the iAPX name Notes References External links IAPX 432 manuals at Bitsavers.org Computer History Museum Intel iAPX432 Micromainframe contains a list of all the Intel documentation associated with the iAPX 432, a list of hardware part numbers and a list of more than 30 papers. Capability systems High-level language computer architecture Intel microprocessors
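The white/grey/black marking scheme mentioned in the garbage collection section above is a general tri-colour marking algorithm rather than anything specific to the 432's microcode. A minimal, single-threaded C sketch of the marking phase is shown below; the structure and function names are hypothetical, and the concurrent "on-the-fly" machinery of the real Dijkstra-style collector (and of the 432's hardware/software split) is omitted.

#include <stddef.h>

enum colour { WHITE, GREY, BLACK };          /* WHITE = not yet visited */

struct object {
    enum colour colour;
    size_t nrefs;
    struct object **refs;                    /* outgoing references */
};

/* Work list of grey objects; fixed size and no overflow handling, for brevity. */
static struct object *grey_stack[1024];
static size_t grey_top;

static void shade(struct object *o)          /* WHITE -> GREY */
{
    if (o != NULL && o->colour == WHITE) {
        o->colour = GREY;
        grey_stack[grey_top++] = o;
    }
}

/* Mark everything reachable from the roots; any object left WHITE afterwards
   is unreachable and can be reclaimed by the sweep phase. */
void mark(struct object **roots, size_t nroots)
{
    for (size_t i = 0; i < nroots; i++)
        shade(roots[i]);

    while (grey_top > 0) {
        struct object *o = grey_stack[--grey_top];
        for (size_t i = 0; i < o->nrefs; i++)
            shade(o->refs[i]);
        o->colour = BLACK;                   /* all children shaded: blacken */
    }
}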
255874
https://en.wikipedia.org/wiki/Adium
Adium
Adium is a free and open source instant messaging client for macOS that supports multiple IM networks, including Google Talk and XMPP. In the past, it has also supported AIM, ICQ, Windows Live Messenger and Yahoo! Messenger. Adium is written using macOS's Cocoa API, and it is released under the GNU GPL-2.0-or-later and many other licenses for components that are distributed with Adium. History Adium was created by college student Adam Iser, and the first version, "Adium 1.0", was released in September 2001 and supported only AIM. The version numbers of Adium since then have followed a somewhat unusual pattern. There were several upgrades to Adium 1.0, ending with Adium 1.6.2c. At this point, the Adium team began a complete rewrite of the Adium code, expanding it into a multiprotocol messaging program. Pidgin's (formerly "Gaim") libpurple (then called "libgaim") library was incorporated to add support for IM protocols other than AIM – since then the Adium team has mostly been working on the GUI. The Adium team originally intended to release these changes as "Adium 2.0". However, Adium was eventually renamed to "Adium X" and released at version 0.50, being considered "halfway to a 1.0 product". Adium X 0.88 was the first version compiled as a universal binary, allowing it to run natively on Intel-based Macs. In 2005, Adium received a "Special Mention" at the Apple Design Awards. After version Adium X 0.89.1, however, the team finally decided to change the name back to "Adium", and, as such, "Adium 1.0" was released on February 2, 2007. Apple Inc. used Adium X 0.89.1's build time in Xcode 2.3 as a benchmark for comparing the performance of the Mac Pro and Power Mac G5 Quad, and Adium 1.2's build time in Xcode 3.0 as a benchmark for comparing the performance of the eight-core Mac Pro and Power Mac G5 Quad. On November 4, 2014, Adium scored 6 out of 7 points on the Electronic Frontier Foundation's secure messaging scorecard. It lost a point because there has not been a recent independent code audit. As of late March 2019, Adium is no longer able to support the ICQ plugin. Protocols Adium supports a wide range of IM networks through the libraries libezv (for Bonjour), STTwitterEngine (for Twitter), and libpurple (for all other protocols). Adium supports the following protocols:
XMPP (including Google Talk, Facebook Chat, and LiveJournal services)
Twitter
Bonjour
IRC
Novell GroupWise
IBM Sametime
Gadu-Gadu
Skype with a plugin
Skype for Business Server (previously Microsoft Lync Server, Microsoft Office Communications Server) with a plugin
Telegram with a plugin
QQ with a plugin
Steam chat with the "Steam IM" plugin
NateOn with a plugin
Plugins and customization Adium makes use of a plugin architecture; many of the program's essential features are actually provided by plugins bundled inside the application package. These plugins include functionality such as file transfer, support for the Growl notifications system, Sparkle for program updates, and support for encrypted messaging with the Off-the-Record Messaging library. Adium is also highly customizable through the use of resources its developers call "Xtras". The program can be customized by the use of hundreds of third-party Xtras that alter the appearance of emoticons, dock icons, contact list styles, and message styles. Adium can also be enhanced through the use of different sound sets. AppleScripts can also be utilized to automatically alter behavior in response to certain triggers. Icon The icon of Adium is a green duck named Adiumy. 
It is also the mascot of the software. See also Comparison of instant messaging clients Comparison of instant messaging protocols List of computing mascots References External links 2001 software AIM (software) clients Free XMPP clients Free instant messaging clients MacOS instant messaging clients Yahoo! instant messaging clients Portable software Free software programmed in Objective-C Software based on WebKit MacOS-only free software Internet Relay Chat clients Internet Relay Chat
11694610
https://en.wikipedia.org/wiki/Two-body%20problem%20in%20general%20relativity
Two-body problem in general relativity
The two-body problem in general relativity is the determination of the motion and gravitational field of two bodies as described by the field equations of general relativity. Solving the Kepler problem is essential to calculate the bending of light by gravity and the motion of a planet orbiting its sun. Solutions are also used to describe the motion of binary stars around each other, and estimate their gradual loss of energy through gravitational radiation. General relativity describes the gravitational field by curved space-time; the field equations governing this curvature are nonlinear and therefore difficult to solve in a closed form. No exact solutions of the Kepler problem have been found, but an approximate solution has: the Schwarzschild solution. This solution pertains when the mass M of one body is overwhelmingly greater than the mass m of the other. If so, the larger mass may be taken as stationary and the sole contributor to the gravitational field. This is a good approximation for a photon passing a star and for a planet orbiting its sun. The motion of the lighter body (called the "particle" below) can then be determined from the Schwarzschild solution; the motion is a geodesic ("shortest path between two points") in the curved space-time. Such geodesic solutions account for the anomalous precession of the planet Mercury, which is a key piece of evidence supporting the theory of general relativity. They also describe the bending of light in a gravitational field, another prediction famously used as evidence for general relativity. If both masses are considered to contribute to the gravitational field, as in binary stars, the Kepler problem can be solved only approximately. The earliest approximation method to be developed was the post-Newtonian expansion, an iterative method in which an initial solution is gradually corrected. More recently, it has become possible to solve Einstein's field equation using a computer instead of mathematical formulae. As the two bodies orbit each other, they will emit gravitational radiation; this causes them to lose energy and angular momentum gradually, as illustrated by the binary pulsar PSR B1913+16. For binary black holes numerical solution of the two body problem was achieved after four decades of research, in 2005, when three groups devised the breakthrough techniques. Historical context Classical Kepler problem The Kepler problem derives its name from Johannes Kepler, who worked as an assistant to the Danish astronomer Tycho Brahe. Brahe took extraordinarily accurate measurements of the motion of the planets of the Solar System. From these measurements, Kepler was able to formulate Kepler's laws, the first modern description of planetary motion: The orbit of every planet is an ellipse with the Sun at one of the two foci. A line joining a planet and the Sun sweeps out equal areas during equal intervals of time. The square of the orbital period of a planet is directly proportional to the cube of the semi-major axis of its orbit. Kepler published the first two laws in 1609 and the third law in 1619. They supplanted earlier models of the Solar System, such as those of Ptolemy and Copernicus. Kepler's laws apply only in the limited case of the two-body problem. Voltaire and Émilie du Châtelet were the first to call them "Kepler's laws". Nearly a century later, Isaac Newton had formulated his three laws of motion. In particular, Newton's second law states that a force F applied to a mass m produces an acceleration a given by the equation F=ma. 
Newton then posed the question: what must the force be that produces the elliptical orbits seen by Kepler? His answer came in his law of universal gravitation, which states that the force between a mass M and another mass m is given by the formula where r is the distance between the masses and G is the gravitational constant. Given this force law and his equations of motion, Newton was able to show that two point masses attracting each other would each follow perfectly elliptical orbits. The ratio of sizes of these ellipses is m/M, with the larger mass moving on a smaller ellipse. If M is much larger than m, then the larger mass will appear to be stationary at the focus of the elliptical orbit of the lighter mass m. This model can be applied approximately to the Solar System. Since the mass of the Sun is much larger than those of the planets, the force acting on each planet is principally due to the Sun; the gravity of the planets for each other can be neglected to first approximation. Apsidal precession If the potential energy between the two bodies is not exactly the 1/r potential of Newton's gravitational law but differs only slightly, then the ellipse of the orbit gradually rotates (among other possible effects). This apsidal precession is observed for all the planets orbiting the Sun, primarily due to the oblateness of the Sun (it is not perfectly spherical) and the attractions of the other planets to one another. The apsides are the two points of closest and furthest distance of the orbit (the periapsis and apoapsis, respectively); apsidal precession corresponds to the rotation of the line joining the apsides. It also corresponds to the rotation of the Laplace–Runge–Lenz vector, which points along the line of apsides. Newton's law of gravitation soon became accepted because it gave very accurate predictions of the motion of all the planets. These calculations were carried out initially by Pierre-Simon Laplace in the late 18th century, and refined by Félix Tisserand in the later 19th century. Conversely, if Newton's law of gravitation did not predict the apsidal precessions of the planets accurately, it would have to be discarded as a theory of gravitation. Such an anomalous precession was observed in the second half of the 19th century. Anomalous precession of Mercury In 1859, Urbain Le Verrier discovered that the orbital precession of the planet Mercury was not quite what it should be; the ellipse of its orbit was rotating (precessing) slightly faster than predicted by the traditional theory of Newtonian gravity, even after all the effects of the other planets had been accounted for. The effect is small (roughly 43 arcseconds of rotation per century), but well above the measurement error (roughly 0.1 arcseconds per century). Le Verrier realized the importance of his discovery immediately, and challenged astronomers and physicists alike to account for it. Several classical explanations were proposed, such as interplanetary dust, unobserved oblateness of the Sun, an undetected moon of Mercury, or a new planet named Vulcan. After these explanations were discounted, some physicists were driven to the more radical hypothesis that Newton's inverse-square law of gravitation was incorrect. For example, some physicists proposed a power law with an exponent that was slightly different from 2. Others argued that Newton's law should be supplemented with a velocity-dependent potential. However, this implied a conflict with Newtonian celestial dynamics. 
In his treatise on celestial mechanics, Laplace had shown that if the gravitational influence does not act instantaneously, then the motions of the planets themselves will not exactly conserve momentum (and consequently some of the momentum would have to be ascribed to the mediator of the gravitational interaction, analogous to ascribing momentum to the mediator of the electromagnetic interaction.) As seen from a Newtonian point of view, if gravitational influence does propagate at a finite speed, then at all points in time a planet is attracted to a point where the Sun was some time before, and not towards the instantaneous position of the Sun. On the assumption of the classical fundamentals, Laplace had shown that if gravity would propagate at a velocity on the order of the speed of light then the solar system would be unstable, and would not exist for a long time. The observation that the solar system is old enough allowed him to put a lower limit on the speed of gravity that turned out to be many orders of magnitude faster than the speed of light. Laplace's estimate for the speed of gravity is not correct in a field theory which respects the principle of relativity. Since electric and magnetic fields combine, the attraction of a point charge which is moving at a constant velocity is towards the extrapolated instantaneous position, not to the apparent position it seems to occupy when looked at. To avoid those problems, between 1870 and 1900 many scientists used the electrodynamic laws of Wilhelm Eduard Weber, Carl Friedrich Gauss, Bernhard Riemann to produce stable orbits and to explain the perihelion shift of Mercury's orbit. In 1890, Maurice Lévy succeeded in doing so by combining the laws of Weber and Riemann, whereby the speed of gravity is equal to the speed of light in his theory. And in another attempt Paul Gerber (1898) even succeeded in deriving the correct formula for the perihelion shift (which was identical to that formula later used by Einstein). However, because the basic laws of Weber and others were wrong (for example, Weber's law was superseded by Maxwell's theory), those hypotheses were rejected. Another attempt by Hendrik Lorentz (1900), who already used Maxwell's theory, produced a perihelion shift which was too low. Einstein's theory of general relativity Around 1904–1905, the works of Hendrik Lorentz, Henri Poincaré and finally Albert Einstein's special theory of relativity, exclude the possibility of propagation of any effects faster than the speed of light. It followed that Newton's law of gravitation would have to be replaced with another law, compatible with the principle of relativity, while still obtaining the Newtonian limit for circumstances where relativistic effects are negligible. Such attempts were made by Henri Poincaré (1905), Hermann Minkowski (1907) and Arnold Sommerfeld (1910). In 1907 Einstein came to the conclusion that to achieve this a successor to special relativity was needed. From 1907 to 1915, Einstein worked towards a new theory, using his equivalence principle as a key concept to guide his way. According to this principle, a uniform gravitational field acts equally on everything within it and, therefore, cannot be detected by a free-falling observer. Conversely, all local gravitational effects should be reproducible in a linearly accelerating reference frame, and vice versa. 
Thus, gravity acts like a fictitious force such as the centrifugal force or the Coriolis force, which result from being in an accelerated reference frame; all fictitious forces are proportional to the inertial mass, just as gravity is. To effect the reconciliation of gravity and special relativity and to incorporate the equivalence principle, something had to be sacrificed; that something was the long-held classical assumption that our space obeys the laws of Euclidean geometry, e.g., that the Pythagorean theorem is true experimentally. Einstein used a more general geometry, pseudo-Riemannian geometry, to allow for the curvature of space and time that was necessary for the reconciliation; after eight years of work (1907–1915), he succeeded in discovering the precise way in which space-time should be curved in order to reproduce the physical laws observed in Nature, particularly gravitation. Gravity is distinct from the fictitious centrifugal and Coriolis forces in the sense that the curvature of spacetime is regarded as physically real, whereas the fictitious forces are not regarded as forces. The very first solutions of his field equations explained the anomalous precession of Mercury and predicted an unusual bending of light, which was confirmed after his theory was published. These solutions are explained below. General relativity, special relativity and geometry In the normal Euclidean geometry, triangles obey the Pythagorean theorem, which states that the squared distance ds^2 between two points in space is the sum of the squares of its perpendicular components where dx, dy and dz represent the infinitesimal differences between the x, y and z coordinates of two points in a Cartesian coordinate system. Now imagine a world in which this is not quite true; a world where the distance is instead given by where F, G and H are arbitrary functions of position. It is not hard to imagine such a world; we live on one. The surface of the earth is curved, which is why it is impossible to make a perfectly accurate flat map of the earth. Non-Cartesian coordinate systems illustrate this well; for example, in the spherical coordinates (r, θ, φ), the Euclidean distance can be written Another illustration would be a world in which the rulers used to measure length were untrustworthy, rulers that changed their length with their position and even their orientation. In the most general case, one must allow for cross-terms when calculating the distance ds where the nine functions g_xx, g_xy, …, g_zz constitute the metric tensor, which defines the geometry of the space in Riemannian geometry. In the spherical-coordinates example above, there are no cross-terms; the only nonzero metric tensor components are g_rr = 1, g_θθ = r^2 and g_φφ = r^2 sin^2 θ. In his special theory of relativity, Albert Einstein showed that the distance ds between two spatial points is not constant, but depends on the motion of the observer. However, there is a measure of separation between two points in space-time — called "proper time" and denoted with the symbol dτ — that is invariant; in other words, it does not depend on the motion of the observer, and it may be written in spherical coordinates as well. This formula is the natural extension of the Pythagorean theorem and similarly holds only when there is no curvature in space-time. 
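For reference, the standard textbook forms of the flat-space expressions discussed in the preceding paragraphs are as follows (a sketch in conventional notation, with c the speed of light; these are the usual formulas rather than a quotation):

ds^2 = dx^2 + dy^2 + dz^2 ,
\qquad
ds^2 = dr^2 + r^2\,d\theta^2 + r^2\sin^2\theta\,d\varphi^2 ,

c^2\,d\tau^2 = c^2\,dt^2 - dx^2 - dy^2 - dz^2
             = c^2\,dt^2 - dr^2 - r^2\,d\theta^2 - r^2\sin^2\theta\,d\varphi^2 .

The first pair gives the Euclidean distance in Cartesian and in spherical coordinates; the second line is the invariant proper time of special relativity, which reduces to the ordinary Pythagorean form for purely spatial separations.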
In general relativity, however, space and time may have curvature, so this distance formula must be modified to a more general form, just as we generalized the formula to measure distance on the surface of the Earth. The exact form of the metric g_μν depends on the gravitating mass, momentum and energy, as described by the Einstein field equations. Einstein developed those field equations to match the then known laws of Nature; however, they predicted never-before-seen phenomena (such as the bending of light by gravity) that were confirmed later. Geodesic equation According to Einstein's theory of general relativity, particles of negligible mass travel along geodesics in the space-time. In uncurved space-time, far from a source of gravity, these geodesics correspond to straight lines; however, they may deviate from straight lines when the space-time is curved. The equation for the geodesic lines is where Γ represents the Christoffel symbol and the variable q parametrizes the particle's path through space-time, its so-called world line. The Christoffel symbol depends only on the metric tensor g_μν, or rather on how it changes with position. The variable q is a constant multiple of the proper time τ for timelike orbits (which are traveled by massive particles), and is usually taken to be equal to it. For lightlike (or null) orbits (which are traveled by massless particles such as the photon), the proper time is zero and, strictly speaking, cannot be used as the variable q. Nevertheless, lightlike orbits can be derived as the ultrarelativistic limit of timelike orbits, that is, the limit as the particle mass m goes to zero while holding its total energy fixed. Schwarzschild solution An exact solution to the Einstein field equations is the Schwarzschild metric, which corresponds to the external gravitational field of a stationary, uncharged, non-rotating, spherically symmetric body of mass M. It is characterized by a length scale r_s, known as the Schwarzschild radius, which is defined by the formula where G is the gravitational constant. The classical Newtonian theory of gravity is recovered in the limit as the ratio r_s/r goes to zero. In that limit, the metric returns to that defined by special relativity. In practice, this ratio is almost always extremely small. For example, the Schwarzschild radius r_s of the Earth is roughly 9 mm (about 3/8 inch); at the surface of the Earth, the corrections to Newtonian gravity are only one part in a billion. The Schwarzschild radius of the Sun is much larger, roughly 2953 meters, but at its surface, the ratio r_s/r is roughly 4 parts in a million. A white dwarf star is much denser, but even here the ratio at its surface is roughly 250 parts in a million. The ratio only becomes large close to ultra-dense objects such as neutron stars (where the ratio is roughly 50%) and black holes. Orbits about the central mass The orbit of a test particle of infinitesimal mass about the central mass is given by the equation of motion where h is the specific relative angular momentum and μ is the reduced mass. This can be converted into an equation for the orbit where, for brevity, two length-scales, a and b, have been introduced. They are constants of the motion and depend on the initial conditions (position and velocity) of the test particle. 
Hence, the solution of the orbit equation is Effective radial potential energy The equation of motion for the particle derived above can be rewritten using the definition of the Schwarzschild radius r_s as which is equivalent to a particle moving in a one-dimensional effective potential The first two terms are well-known classical energies, the first being the attractive Newtonian gravitational potential energy and the second corresponding to the repulsive "centrifugal" potential energy; however, the third term is an attractive energy unique to general relativity. As shown below and elsewhere, this inverse-cubic energy causes elliptical orbits to precess gradually by an angle δφ per revolution where A is the semi-major axis and e is the eccentricity. Here δφ is not the change in the φ-coordinate in (t, r, θ, φ) coordinates but the change in the argument of periapsis of the classical closed orbit. The third term is attractive and dominates at small r values, giving a critical inner radius r_inner at which a particle is drawn inexorably inwards to r = 0; this inner radius is a function of the particle's angular momentum per unit mass or, equivalently, the length-scale a defined above. Circular orbits and their stability The effective potential V can be re-written in terms of the length a = h/c: Circular orbits are possible when the effective force is zero: i.e., when the two attractive forces—Newtonian gravity (first term) and the attraction unique to general relativity (third term)—are exactly balanced by the repulsive centrifugal force (second term). There are two radii at which this balancing can occur, denoted here as r_inner and r_outer: which are obtained using the quadratic formula. The inner radius r_inner is unstable, because the attractive third force strengthens much faster than the other two forces when r becomes small; if the particle slips slightly inwards from r_inner (where all three forces are in balance), the third force dominates the other two and draws the particle inexorably inwards to r = 0. At the outer radius, however, the circular orbits are stable; the third term is less important and the system behaves more like the non-relativistic Kepler problem. When a is much greater than r_s (the classical case), these formulae become approximately Substituting the definitions of a and r_s into r_outer yields the classical formula for a particle of mass m orbiting a body of mass M. The following equation where ω_φ is the orbital angular speed of the particle, is obtained in non-relativistic mechanics by setting the centrifugal force equal to the Newtonian gravitational force: where μ is the reduced mass. In our notation, the classical orbital angular speed equals At the other extreme, when a^2 approaches 3r_s^2 from above, the two radii converge to a single value The quadratic solutions above ensure that r_outer is always greater than 3r_s, whereas r_inner lies between 3/2 r_s and 3r_s. Circular orbits smaller than 3/2 r_s are not possible. For massless particles, a goes to infinity, implying that there is a circular orbit for photons at r_inner = 3/2 r_s. The sphere of this radius is sometimes known as the photon sphere. Precession of elliptical orbits The orbital precession rate may be derived using this radial effective potential V. 
Precession of elliptical orbits The orbital precession rate may be derived using this radial effective potential V. A small radial deviation from a circular orbit of radius router will oscillate in a stable manner with an angular frequency ωr, which satisfies ωr2 = ωφ2 √(1 − 3rs2/a2). Taking the square root of both sides and expanding using the binomial theorem yields the formula ωr ≈ ωφ [1 − 3rs2/(4a2)]. Multiplying by the period T of one revolution gives the precession of the orbit per revolution δφ = T(ωφ − ωr) ≈ 3πrs2/(2a2), where we have used ωφT = 2π and the definition of the length-scale a. Substituting the definition of the Schwarzschild radius rs gives δφ ≈ 6πG2M2/(c2h2). This may be simplified using the elliptical orbit's semi-major axis A and eccentricity e, related by the formula h2 = GMA(1 − e2), to give the precession angle δφ ≈ 6πGM/[c2A(1 − e2)]. Since the closed classical orbit is an ellipse in general, the quantity A(1 − e2) is the semi-latus rectum l of the ellipse. Hence, the final formula of angular apsidal precession for a unit complete revolution is δφ ≈ 6πGM/(c2l); for Mercury's orbit about the Sun, this evaluates to the well-known anomalous precession of roughly 43 arcseconds per century. Beyond the Schwarzschild solution Post-Newtonian expansion In the Schwarzschild solution, it is assumed that the larger mass M is stationary and it alone determines the gravitational field (i.e., the geometry of space-time) and, hence, the lesser mass m follows a geodesic path through that fixed space-time. This is a reasonable approximation for photons and the orbit of Mercury, which is roughly 6 million times lighter than the Sun. However, it is inadequate for binary stars, in which the masses may be of similar magnitude. The metric for the case of two comparable masses cannot be solved in closed form and therefore one has to resort to approximation techniques such as the post-Newtonian approximation or numerical approximations. In passing, we mention one particular exception in lower dimensions (see R = T model for details). In (1+1) dimensions, i.e. a space made of one spatial dimension and one time dimension, the metric for two bodies of equal masses can be solved analytically in terms of the Lambert W function. However, the gravitational energy between the two bodies is exchanged via dilatons rather than gravitons, which require three-space in which to propagate. The post-Newtonian expansion is a calculational method that provides a series of ever more accurate solutions to a given problem. The method is iterative; an initial solution for particle motions is used to calculate the gravitational fields; from these derived fields, new particle motions can be calculated, from which even more accurate estimates of the fields can be computed, and so on. This approach is called "post-Newtonian" because the Newtonian solution for the particle orbits is often used as the initial solution. When this method is applied to the two-body problem without restriction on their masses, the result is remarkably simple. To the lowest order, the relative motion of the two particles is equivalent to the motion of an infinitesimal particle in the field of their combined masses. In other words, the Schwarzschild solution can be applied, provided that M + m is used in place of M in the formulae for the Schwarzschild radius rs and the precession angle per revolution δφ. Modern computational approaches Einstein's equations can also be solved on a computer using sophisticated numerical methods. Given sufficient computer power, such solutions can be more accurate than post-Newtonian solutions. However, such calculations are demanding because the equations must generally be solved in a four-dimensional space.
Nevertheless, beginning in the late 1990s, it became possible to solve difficult problems such as the merger of two black holes, which is a very difficult version of the Kepler problem in general relativity. Gravitational radiation If there is no incoming gravitational radiation, according to general relativity, two bodies orbiting one another will emit gravitational radiation, causing the orbits to gradually lose energy. The formulae describing the loss of energy and angular momentum due to gravitational radiation from the two bodies of the Kepler problem have been calculated. The rate of losing energy (averaged over a complete orbit) is given by where e is the orbital eccentricity and a is the semimajor axis of the elliptical orbit. The angular brackets on the left-hand side of the equation represent the averaging over a single orbit. Similarly, the average rate of losing angular momentum equals The rate of period decrease is given by where Pb is orbital period. The losses in energy and angular momentum increase significantly as the eccentricity approaches one, i.e., as the ellipse of the orbit becomes ever more elongated. The radiation losses also increase significantly with a decreasing size a of the orbit. See also Binet equation Center of mass (relativistic) Gravitational two-body problem Kepler problem Newton's theorem of revolving orbits Schwarzschild geodesics Notes References Bibliography (See Gravitation (book).) External links Animation showing relativistic precession of stars around the Milky Way supermassive black hole Excerpt from Reflections on Relativity by Kevin Brown. Exact solutions in general relativity
216844
https://en.wikipedia.org/wiki/Network%20Information%20Service
Network Information Service
The Network Information Service, or NIS (originally called Yellow Pages or YP), is a client–server directory service protocol for distributing system configuration data such as user and host names between computers on a computer network. Sun Microsystems developed NIS; the technology is licensed to virtually all other Unix vendors. Because British Telecom PLC owned the name "Yellow Pages" as a registered trademark in the United Kingdom for its paper-based, commercial telephone directory, Sun changed the name of its system to NIS, though all the commands and functions still start with "yp". An NIS/YP system maintains and distributes a central directory of user and group information, hostnames, e-mail aliases and other text-based tables of information in a computer network. For example, in a common UNIX environment, the list of users for identification is placed in /etc/passwd and secret authentication hashes in /etc/shadow. NIS adds another "global" user list which is used for identifying users on any client of the NIS domain. Administrators have the ability to configure NIS to serve password data to outside processes to authenticate users using various versions of the Unix crypt(3) hash algorithms. However, in such cases, any NIS client can retrieve the entire password database for offline inspection. Successor technologies The original NIS design was seen to have inherent limitations, especially in the areas of scalability and security, so other technologies have come to replace it. Sun introduced NIS+ as part of Solaris 2 in 1992, with the intention that it would eventually supersede NIS. NIS+ offers much stronger security and authentication features, as well as a hierarchical design intended to provide greater scalability and flexibility. However, it was also more cumbersome to set up and administer, and was more difficult to integrate into an existing NIS environment than many existing users wished. NIS+ has been removed from Solaris 11. As a result, many users chose to stick with NIS, and over time other modern and secure distributed directory systems, most notably the Lightweight Directory Access Protocol (LDAP), came to replace it. For example, slapd (the standalone LDAP daemon) generally runs as a non-root user, and SASL-based encryption of LDAP traffic is natively supported. On large LANs, DNS servers may provide better nameserver functionality than NIS or LDAP can provide, leaving just site-wide identification information for NIS master and slave systems to serve. However, some functions, such as the distribution of netmask information to clients as well as the maintenance of e-mail aliases, may still be performed by NIS or LDAP. NIS maintains an NFS database information file as well as so-called maps. See also Dynamic Host Configuration Protocol (DHCP) Hesiod (name service) Name Service Switch (NSS) Network information system, for a broader use of NIS to manage other systems and networks References External links RHEL 9 will remove support for NIS Alexander Bokovoy, Sr. Principal Software Engineer slide show Unix network-related software Sun Microsystems software Network management Directory services Inter-process communication
6373791
https://en.wikipedia.org/wiki/SCOPE%20Alliance
SCOPE Alliance
The SCOPE Alliance was a non-profit and influential network equipment provider (NEP) industry group aimed at standardizing "carrier-grade" systems for telecom in the Information Age. The SCOPE Alliance was founded in January 2006 by a group of NEPs, including Alcatel, Ericsson, Motorola, NEC, Nokia, and Siemens. In 2007, it added significantly to its membership. Mission Active between 2006 and 2012, its mission was to enable and promote the availability of open carrier-grade base platforms based on commercial off-the-shelf (COTS) hardware/software and free and open-source software building blocks, and to promote interoperability between such components. SCOPE wanted to accelerate the deployment of carrier-grade base platforms (CGBP) for service provider applications so that NEPs could use them to build better solutions for their customers. By 2011, SCOPE had achieved its aim of accelerating innovation in carrier-grade communications technology and ATCA: NEPs now sell integrated hardware/software systems to carriers, built on three computing supply chains (hardware, operating system, and middleware), each with well-established industry groups promoting interoperability between products from different vendors. SCOPE published "profiles" aimed at influencing specification groups to focus on the needs of NEP customers (carriers). While SCOPE focused on open standards like ATCA and Carrier Grade Linux, there was no reason a proprietary supplier could not adopt the SCOPE standards. Open Source Achievements SCOPE's influence on the adoption of open standards for carrier-grade open-source software is summarized in the table: NFV, SDN, 5G, Cloud transformation Age SCOPE was also interested in advancing network virtualization ("As a consortium of NEPs, it is important for SCOPE to address the lack of standardization in the area of virtualization"), publishing white papers on hardware virtualization, and a white paper on Java Virtualization describing "an environment where high availability Java EE and native application can co-exist and be supervised in the same fashion in a clustered environment". In 2010 SCOPE organized a workshop to discuss the effect of cloud computing on traditional carrier-grade platforms and telecom networks, publishing a Cloud Computing white paper in 2011. SCOPE was placed into "hibernation", effectively retired, by NEPs in January 2012. Telecom carriers (NEP customers) wanted direct involvement in driving transformation, so instead, both groups combined forces on ETSI Network function virtualization standardization, Software-defined networking adoption, and 5G network slicing initiatives. Publications SCOPE published various publications, including the following: SCOPE: Technical Position Paper (2008). Virtualization for Carrier-Grade Telecommunications Solution (2008). Virtualization: State of the Art (2008): focuses on system virtualization. Virtualization: Use Cases (2008). Virtualization: Requirements (2008). CPU Benchmarking Recommendations v1.0 (2009). Carrier Grade Base Platform (CGBP) (2009): Middleware reference standard. MW Portability: Use Cases (2009). JSR 319: Availability Management for Java - Java Community (2010). Carrier Grade Requirements for Cloud Computing: A SCOPE Alliance Perspective presented to the 2011 OpenSAF Conference. Telecom Grade Cloud Computing v1.0, white paper (2011), describing the characteristics of cloud computing usable for carrier-grade services. 
See also OpenSAF OpenHPI Carrier Grade Linux Cloud Computing Network Functions Virtualization Network equipment providers Linux Foundation References External links Carrier Grade Linux from the Linux Foundation Technology consortia Computer standards Linux Foundation projects Telecommunications standards Free software Free software for cloud computing Virtualization-related software for Linux
43853076
https://en.wikipedia.org/wiki/Predixion%20Software
Predixion Software
Predixion Software is a software company focusing on edge analytics for connected assets. It was founded in late 2009 and is headquartered in Aliso Viejo, CA. In September 2016, the company was acquired by Greenwave Systems. History 2009 Predixion Software was established in late 2009 with four founding partners: Stuart Frost (founder of DATAllegro, acquired by Microsoft in 2008), Simon Arkell, Stephen DeSantis, and Jamie MacLennan. 2010 The first product release was announced in September 2010; the company closed its Series A funding round in October 2010 for $5 million, led by DFJ Frontier. 2011 Predixion was accepted into the EMC Select Program; EMC led the company's Series B funding round in September 2011 for $6 million. In early 2012, the company released predictive analytics software for the healthcare industry. In August 2013, Predixion closed its largest funding round, a $20 million Series C led by Accenture and GE. 2014 In January, the company released the "Predixion in the Classroom" program, allowing students and teachers free access to some Predixion Insight services. In October, Predixion joined with Salesforce.com. In December, the company relocated its corporate headquarters to Aliso Viejo, California. Awards Trend-Setting Products in Data and Information Management for 2015 by Database Trends and Applications (December, 2014) Microsoft Health Users Group Innovation Award Big Data 50 – selected as one of the hottest Big Data startups of 2014. CIO Review's 100 Most Promising Technology Companies (2014). Outstanding Technology CEO (Simon Arkell) from OC Tech Alliance (2014). References Software companies of the United States Data management software
23550923
https://en.wikipedia.org/wiki/High-frequency%20trading
High-frequency trading
High-frequency trading (HFT) is a type of algorithmic financial trading characterized by high speeds, high turnover rates, and high order-to-trade ratios that leverages high-frequency financial data and electronic trading tools. While there is no single definition of HFT, among its key attributes are highly sophisticated algorithms, co-location, and very short-term investment horizons. HFT can be viewed as a primary form of algorithmic trading in finance. Specifically, it is the use of sophisticated technological tools and computer algorithms to rapidly trade securities. HFT uses proprietary trading strategies carried out by computers to move in and out of positions in seconds or fractions of a second. In 2017, Aldridge and Krawciw estimated that in 2016 HFT on average initiated 10–40% of trading volume in equities, and 10–15% of volume in foreign exchange and commodities. Intraday, however, proportion of HFT may vary from 0% to 100% of short-term trading volume. Previous estimates reporting that HFT accounted for 60–73% of all US equity trading volume, with that number falling to approximately 50% in 2012 were highly inaccurate speculative guesses. High-frequency traders move in and out of short-term positions at high volumes and high speeds aiming to capture sometimes a fraction of a cent in profit on every trade. HFT firms do not consume significant amounts of capital, accumulate positions or hold their portfolios overnight. As a result, HFT has a potential Sharpe ratio (a measure of reward to risk) tens of times higher than traditional buy-and-hold strategies. High-frequency traders typically compete against other HFTs, rather than long-term investors. HFT firms make up the low margins with incredibly high volumes of trades, frequently numbering in the millions. A substantial body of research argues that HFT and electronic trading pose new types of challenges to the financial system. Algorithmic and high-frequency traders were both found to have contributed to volatility in the Flash Crash of May 6, 2010, when high-frequency liquidity providers rapidly withdrew from the market. Several European countries have proposed curtailing or banning HFT due to concerns about volatility. History High-frequency trading has taken place at least since the 1930s, mostly in the form of specialists and pit traders buying and selling positions at the physical location of the exchange, with high-speed telegraph service to other exchanges. The rapid-fire computer-based HFT developed gradually since 1983 after NASDAQ introduced a purely electronic form of trading. At the turn of the 21st century, HFT trades had an execution time of several seconds, whereas by 2010 this had decreased to milli- and even microseconds. Until recently, high-frequency trading was a little-known topic outside the financial sector, with an article published by the New York Times in July 2009 being one of the first to bring the subject to the public's attention. On September 2, 2013, Italy became the world's first country to introduce a tax specifically targeted at HFT, charging a levy of 0.02% on equity transactions lasting less than 0.5 seconds. Market growth In the early 2000s, high-frequency trading still accounted for fewer than 10% of equity orders, but this proportion was soon to begin rapid growth. According to data from the NYSE, trading volume grew by about 164% between 2005 and 2009 for which high-frequency trading might be accounted. 
As of the first quarter in 2009, total assets under management for hedge funds with high-frequency trading strategies were $141 billion, down about 21% from their peak before the worst of the crises, although most of the largest HFTs are actually LLCs owned by a small number of investors. The high-frequency strategy was first made popular by Renaissance Technologies who use both HFT and quantitative aspects in their trading. Many high-frequency firms are market makers and provide liquidity to the market which lowers volatility and helps narrow bid–offer spreads, making trading and investing cheaper for other market participants. Market share In the United States in 2009, high-frequency trading firms represented 2% of the approximately 20,000 firms operating today, but accounted for 73% of all equity orders volume. The major U.S. high-frequency trading firms include Virtu Financial, Tower Research Capital, IMC, Tradebot and Citadel LLC. The Bank of England estimates similar percentages for the 2010 US market share, also suggesting that in Europe HFT accounts for about 40% of equity orders volume and for Asia about 5–10%, with potential for rapid growth. By value, HFT was estimated in 2010 by consultancy Tabb Group to make up 56% of equity trades in the US and 38% in Europe. As HFT strategies become more widely used, it can be more difficult to deploy them profitably. According to an estimate from Frederi Viens of Purdue University, profits from HFT in the U.S. has been declining from an estimated peak of $5bn in 2009, to about $1.25bn in 2012. Though the percentage of volume attributed to HFT has fallen in the equity markets, it has remained prevalent in the futures markets. According to a study in 2010 by Aite Group, about a quarter of major global futures volume came from professional high-frequency traders. In 2012, according to a study by the TABB Group, HFT accounted for more than 60 percent of all futures market volume in 2012 on U.S. exchanges. Strategies High-frequency trading is quantitative trading that is characterized by short portfolio holding periods. All portfolio-allocation decisions are made by computerized quantitative models. The success of high-frequency trading strategies is largely driven by their ability to simultaneously process large volumes of information, something ordinary human traders cannot do. Specific algorithms are closely guarded by their owners. Many practical algorithms are in fact quite simple arbitrages which could previously have been performed at lower frequency—competition tends to occur through who can execute them the fastest rather than who can create new breakthrough algorithms. The common types of high-frequency trading include several types of market-making, event arbitrage, statistical arbitrage, and latency arbitrage. Most high-frequency trading strategies are not fraudulent, but instead exploit minute deviations from market equilibrium. Market making According to SEC: A "market maker" is a firm that stands ready to buy and sell a particular stock on a regular and continuous basis at a publicly quoted price. You'll most often hear about market makers in the context of the Nasdaq or other "over the counter" (OTC) markets. Market makers that stand ready to buy and sell stocks listed on an exchange, such as the New York Stock Exchange, are called "third market makers". Many OTC stocks have more than one market-maker. Market-makers generally must be ready to buy and sell at least 100 shares of a stock they make a market in. 
As a result, a large order from an investor may have to be filled by a number of market-makers at potentially different prices. There can be a significant overlap between a "market maker" and "HFT firm". HFT firms characterize their business as "Market making" – a set of high-frequency trading strategies that involve placing a limit order to sell (or offer) or a buy limit order (or bid) in order to earn the bid-ask spread. By doing so, market makers provide counterpart to incoming market orders. Although the role of market maker was traditionally fulfilled by specialist firms, this class of strategy is now implemented by a large range of investors, thanks to wide adoption of direct market access. As pointed out by empirical studies, this renewed competition among liquidity providers causes reduced effective market spreads, and therefore reduced indirect costs for final investors." A crucial distinction is that true market makers don't exit the market at their discretion and are committed not to, where HFT firms are under no similar commitment. Some high-frequency trading firms use market making as their primary strategy. Automated Trading Desk (ATD), which was bought by Citigroup in July 2007, has been an active market maker, accounting for about 6% of total volume on both the NASDAQ and the New York Stock Exchange. In May 2016, Citadel LLC bought assets of ATD from Citigroup. Building up market making strategies typically involves precise modeling of the target market microstructure together with stochastic control techniques. These strategies appear intimately related to the entry of new electronic venues. Academic study of Chi-X's entry into the European equity market reveals that its launch coincided with a large HFT that made markets using both the incumbent market, NYSE-Euronext, and the new market, Chi-X. The study shows that the new market provided ideal conditions for HFT market-making, low fees (i.e., rebates for quotes that led to execution) and a fast system, yet the HFT was equally active in the incumbent market to offload nonzero positions. New market entry and HFT arrival are further shown to coincide with a significant improvement in liquidity supply. Quote stuffing Quote stuffing is a form of abusive market manipulation that has been employed by high-frequency traders (HFT) and is subject to disciplinary action. It involves quickly entering and withdrawing a large number of orders in an attempt to flood the market creating confusion in the market and trading opportunities for high-frequency traders. Ticker tape trading Much information happens to be unwittingly embedded in market data, such as quotes and volumes. By observing a flow of quotes, computers are capable of extracting information that has not yet crossed the news screens. Since all quote and volume information is public, such strategies are fully compliant with all the applicable laws. Filter trading is one of the more primitive high-frequency trading strategies that involves monitoring large amounts of stocks for significant or unusual price changes or volume activity. This includes trading on announcements, news, or other event criteria. Software would then generate a buy or sell order depending on the nature of the event being looked for. Tick trading often aims to recognize the beginnings of large orders being placed in the market. For example, a large order from a pension fund to buy will take place over several hours or even days, and will cause a rise in price due to increased demand. 
An arbitrageur can try to spot this happening then buy up the security, then profit from selling back to the pension fund. This strategy has become more difficult since the introduction of dedicated trade execution companies in the 2000s which provide optimal trading for pension and other funds, specifically designed to remove the arbitrage opportunity. Event arbitrage Certain recurring events generate predictable short-term responses in a selected set of securities. High-frequency traders take advantage of such predictability to generate short-term profits. Statistical arbitrage Another set of high-frequency trading strategies are strategies that exploit predictable temporary deviations from stable statistical relationships among securities. Statistical arbitrage at high frequencies is actively used in all liquid securities, including equities, bonds, futures, foreign exchange, etc. Such strategies may also involve classical arbitrage strategies, such as covered interest rate parity in the foreign exchange market, which gives a relationship between the prices of a domestic bond, a bond denominated in a foreign currency, the spot price of the currency, and the price of a forward contract on the currency. High-frequency trading allows similar arbitrages using models of greater complexity involving many more than four securities. The TABB Group estimates that annual aggregate profits of high-frequency arbitrage strategies exceeded US$21 billion in 2009, although the Purdue study estimates the profits for all high frequency trading were US$5 billion in 2009. Index arbitrage Index arbitrage exploits index tracker funds which are bound to buy and sell large volumes of securities in proportion to their changing weights in indices. If a HFT firm is able to access and process information which predicts these changes before the tracker funds do so, they can buy up securities in advance of the trackers and sell them on to them at a profit. News-based trading Company news in electronic text format is available from many sources including commercial providers like Bloomberg, public news websites, and Twitter feeds. Automated systems can identify company names, keywords and sometimes semantics to make news-based trades before human traders can process the news. Low-latency strategies A separate, "naïve" class of high-frequency trading strategies relies exclusively on ultra-low latency direct market access technology. In these strategies, computer scientists rely on speed to gain minuscule advantages in arbitraging price discrepancies in some particular security trading simultaneously on disparate markets. Another aspect of low latency strategy has been the switch from fiber optic to microwave technology for long distance networking. Especially since 2011, there has been a trend to use microwaves to transmit data across key connections such as the one between New York City and Chicago. This is because microwaves travelling in air suffer a less than 1% speed reduction compared to light travelling in a vacuum, whereas with conventional fiber optics light travels over 30% slower. Order properties strategies High-frequency trading strategies may use properties derived from market data feeds to identify orders that are posted at sub-optimal prices. Such orders may offer a profit to their counterparties that high-frequency traders can try to obtain. Examples of these features include the age of an order or the sizes of displayed orders. 
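As a back-of-the-envelope illustration of the microwave-versus-fiber latency comparison described in the low-latency strategies discussion above, the sketch below estimates one-way propagation times between New York and Chicago; the straight-line distance and the fiber refractive index are assumed round numbers, not figures from this article.

# One-way propagation delay, New York <-> Chicago, microwave vs. optical fiber.
C_VACUUM_KM_S = 299_792        # speed of light in vacuum, km/s
DISTANCE_KM = 1_150            # assumed straight-line NY-Chicago distance
FIBER_INDEX = 1.47             # assumed typical refractive index of silica fiber

microwave_speed = 0.999 * C_VACUUM_KM_S      # "less than 1% slower" than vacuum
fiber_speed = C_VACUUM_KM_S / FIBER_INDEX    # roughly 30% slower than vacuum

t_microwave_ms = DISTANCE_KM / microwave_speed * 1000
t_fiber_ms = DISTANCE_KM / fiber_speed * 1000

print(f"microwave: {t_microwave_ms:.2f} ms one way")
print(f"fiber:     {t_fiber_ms:.2f} ms one way")
print(f"advantage: {t_fiber_ms - t_microwave_ms:.2f} ms per one-way hop")

With these assumptions the gap is on the order of 1 to 2 milliseconds each way, and real fiber routes are longer than the straight-line path, which widens the gap further.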
Tracking important order properties may also allow trading strategies to have a more accurate prediction of the future price of a security. Effects The effects of algorithmic and high-frequency trading are the subject of ongoing research. High frequency trading causes regulatory concerns as a contributor to market fragility. Regulators claim these practices contributed to volatility in the May 6, 2010 Flash Crash and find that risk controls are much less stringent for faster trades. Members of the financial industry generally claim high-frequency trading substantially improves market liquidity, narrows bid–offer spread, lowers volatility and makes trading and investing cheaper for other market participants. An academic study found that, for large-cap stocks and in quiescent markets during periods of "generally rising stock prices", high-frequency trading lowers the cost of trading and increases the informativeness of quotes; however, it found "no significant effects for smaller-cap stocks", and "it remains an open question whether algorithmic trading and algorithmic liquidity supply are equally beneficial in more turbulent or declining markets. ...algorithmic liquidity suppliers may simply turn off their machines when markets spike downward." In September 2011, market data vendor Nanex LLC published a report stating the contrary. They looked at the amount of quote traffic compared to the value of trade transactions over 4 and half years and saw a 10-fold decrease in efficiency. Nanex's owner is an outspoken detractor of high-frequency trading. Many discussions about HFT focus solely on the frequency aspect of the algorithms and not on their decision-making logic (which is typically kept secret by the companies that develop them). This makes it difficult for observers to pre-identify market scenarios where HFT will dampen or amplify price fluctuations. The growing quote traffic compared to trade value could indicate that more firms are trying to profit from cross-market arbitrage techniques that do not add significant value through increased liquidity when measured globally. More fully automated markets such as NASDAQ, Direct Edge, and BATS, in the US, gained market share from less automated markets such as the NYSE. Economies of scale in electronic trading contributed to lowering commissions and trade processing fees, and contributed to international mergers and consolidation of financial exchanges. The speeds of computer connections, measured in milliseconds or microseconds, have become important. Competition is developing among exchanges for the fastest processing times for completing trades. For example, in 2009 the London Stock Exchange bought a technology firm called MillenniumIT and announced plans to implement its Millennium Exchange platform which they claim has an average latency of 126 microseconds. This allows sub-millisecond resolution timestamping of the order book. Off-the-shelf software currently allows for nanoseconds resolution of timestamps using a GPS clock with 100 nanoseconds precision. Spending on computers and software in the financial industry increased to $26.4 billion in 2005. May 6, 2010 Flash Crash The brief but dramatic stock market crash of May 6, 2010 was initially thought to have been caused by high-frequency trading. The Dow Jones Industrial Average plunged to its largest intraday point loss, but not percentage loss, in history, only to recover much of those losses within minutes. 
In the aftermath of the crash, several organizations argued that high-frequency trading was not to blame, and may even have been a major factor in minimizing and partially reversing the Flash Crash. CME Group, a large futures exchange, stated that, insofar as stock index futures traded on CME Group were concerned, its investigation had found no support for the notion that high-frequency trading was related to the crash, and actually stated it had a market stabilizing effect. However, after almost five months of investigations, the U.S. Securities and Exchange Commission (SEC) and the Commodity Futures Trading Commission (CFTC) issued a joint report identifying the cause that set off the sequence of events leading to the Flash Crash and concluding that the actions of high-frequency trading firms contributed to volatility during the crash. The report found that the cause was a single sale of $4.1 billion in futures contracts by a mutual fund, identified as Waddell & Reed Financial, in an aggressive attempt to hedge its investment position. The joint report also found that "high-frequency traders quickly magnified the impact of the mutual fund's selling." The joint report "portrayed a market so fragmented and fragile that a single large trade could send stocks into a sudden spiral", that a large mutual fund firm "chose to sell a big number of futures contracts using a computer program that essentially ended up wiping out available buyers in the market", that as a result high-frequency firms "were also aggressively selling the E-mini contracts", contributing to rapid price declines. The joint report also noted "HFTs began to quickly buy and then resell contracts to each other – generating a 'hot-potato' volume effect as the same positions were passed rapidly back and forth." The combined sales by Waddell and high-frequency firms quickly drove "the E-mini price down 3% in just four minutes". As prices in the futures market fell, there was a spillover into the equities markets where "the liquidity in the market evaporated because the automated systems used by most firms to keep pace with the market paused" and scaled back their trading or withdrew from the markets altogether. The joint report then noted that "Automatic computerized traders on the stock market shut down as they detected the sharp rise in buying and selling." As computerized high-frequency traders exited the stock market, the resulting lack of liquidity "...caused shares of some prominent companies like Procter & Gamble and Accenture to trade down as low as a penny or as high as $100,000". While some firms exited the market, high-frequency firms that remained in the market exacerbated price declines because they "'escalated their aggressive selling' during the downdraft". In the years following the flash crash, academic researchers and experts from the CFTC pointed to high-frequency trading as just one component of the complex current U.S. market structure that led to the events of May 6, 2010. Granularity and accuracy In 2015 the Paris-based regulator of the 28-nation European Union, the European Securities and Markets Authority, proposed time standards to span the EU, that would more accurately synchronize trading clocks "to within a nanosecond, or one-billionth of a second" to refine regulation of gateway-to-gateway latency time—"the speed at which trading venues acknowledge an order after receiving a trade request". 
Using these more detailed time-stamps, regulators would be better able to distinguish the order in which trade requests are received and executed, to identify market abuse and prevent potential manipulation of European securities markets by traders using advanced, powerful, fast computers and networks. The fastest technologies give traders an advantage over other "slower" investors as they can change prices of the securities they trade. Risks and controversy According to author Walter Mattli, the ability of regulators to enforce the rules has greatly declined since 2005 with the passing of the Regulation National Market System (Reg NMS) by the US Securities and Exchange Commission. As a result, the NYSE's quasi monopoly role as a stock rule maker was undermined and turned the stock exchange into one of many globally operating exchanges. The market then became more fractured and granular, as did the regulatory bodies, and since stock exchanges had turned into entities also seeking to maximize profits, the one with the most lenient regulators were rewarded, and oversight over traders' activities was lost. This fragmentation has greatly benefitted HFT. High-frequency trading comprises many different types of algorithms. Various studies reported that certain types of market-making high-frequency trading reduces volatility and does not pose a systemic risk, and lowers transaction costs for retail investors, without impacting long term investors. Other studies, summarized in Aldridge, Krawciw, 2017 find that high-frequency trading strategies known as "aggressive" erode liquidity and cause volatility. High-frequency trading has been the subject of intense public focus and debate since the May 6, 2010 Flash Crash. At least one Nobel Prize–winning economist, Michael Spence, believes that HFT should be banned. A working paper found "the presence of high frequency trading has significantly mitigated the frequency and severity of end-of-day price dislocation". In their joint report on the 2010 Flash Crash, the SEC and the CFTC stated that "market makers and other liquidity providers widened their quote spreads, others reduced offered liquidity, and a significant number withdrew completely from the markets" during the flash crash. Politicians, regulators, scholars, journalists and market participants have all raised concerns on both sides of the Atlantic. This has led to discussion of whether high-frequency market makers should be subject to various kinds of regulations. In a September 22, 2010 speech, SEC chairperson Mary Schapiro signaled that US authorities were considering the introduction of regulations targeted at HFT. She said, "high frequency trading firms have a tremendous capacity to affect the stability and integrity of the equity markets. Currently, however, high frequency trading firms are subject to very little in the way of obligations either to protect that stability by promoting reasonable price continuity in tough times, or to refrain from exacerbating price volatility." She proposed regulation that would require high-frequency traders to stay active in volatile markets. A later SEC chair Mary Jo White pushed back against claims that high-frequency traders have an inherent benefit in the markets. SEC associate director Gregg Berman suggested that the current debate over HFT lacks perspective. 
In an April 2014 speech, Berman argued: "It's much more than just the automation of quotes and cancels, in spite of the seemingly exclusive fixation on this topic by much of the media and various outspoken market pundits. (...) I worry that it may be too narrowly focused and myopic." The Chicago Federal Reserve letter of October 2012, titled "How to keep markets safe in an era of high-speed trading", reports on the results of a survey of several dozen financial industry professionals including traders, brokers, and exchanges. It found that risk controls were poorer in high-frequency trading, because of competitive time pressure to execute trades without the more extensive safety checks normally used in slower trades. "some firms do not have stringent processes for the development, testing, and deployment of code used in their trading algorithms." "out-of control algorithms were more common than anticipated prior to the study and that there were no clear patterns as to their cause." The CFA Institute, a global association of investment professionals, advocated for reforms regarding high-frequency trading, including: Promoting robust internal risk management procedures and controls over the algorithms and strategies employed by HFT firms. Trading venues should disclose their fee structure to all market participants. Regulators should address market manipulation and other threats to the integrity of markets, regardless of the underlying mechanism, and not try to intervene in the trading process or to restrict certain types of trading activities. Flash trading Exchanges offered a type of order called a "Flash" order (on NASDAQ, it was called "Bolt" on the Bats stock exchange) that allowed an order to lock the market (post at the same price as an order ) for a small amount of time (5 milliseconds). This order type was available to all participants but since HFT's adapted to the changes in market structure more quickly than others, they were able to use it to "jump the queue" and place their orders before other order types were allowed to trade at the given price. Currently, the majority of exchanges do not offer flash trading, or have discontinued it. By March 2011, the NASDAQ, BATS, and Direct Edge exchanges had all ceased offering its Competition for Price Improvement functionality (widely referred to as "flash technology/trading"). Violations and fines Regulation and enforcement In March 2012, regulators fined Octeg LLC, the equities market-making unit of high-frequency trading firm Getco LLC, for $450,000. Octeg violated Nasdaq rules and failed to maintain proper supervision over its stock trading activities. The fine resulted from a request by Nasdaq OMX for regulators to investigate the activity at Octeg LLC from the day after the May 6, 2010 Flash Crash through the following December. Nasdaq determined the Getco subsidiary lacked reasonable oversight of its algo-driven high-frequency trading. In October 2013, regulators fined Knight Capital $12 million for the trading malfunction that led to its collapse. Knight was found to have violated the SEC's market access rule, in effect since 2010 to prevent such mistakes. Regulators stated the HFT firm ignored dozens of error messages before its computers sent millions of unintended orders to the market. Knight Capital eventually merged with Getco to form KCG Holdings. Knight lost over $460 million from its trading errors in August 2012 that caused disturbance in the U.S. stock market. 
In September 2014, HFT firm Latour Trading LLC agreed to pay a SEC penalty of $16 million. Latour is a subsidiary of New York-based high-frequency trader Tower Research Capital LLC. According to the SEC's order, for at least two years Latour underestimated the amount of risk it was taking on with its trading activities. By using faulty calculations, Latour managed to buy and sell stocks without holding enough capital. At times, the Tower Research Capital subsidiary accounted for 9% of all U.S. stock trading. The SEC noted the case is the largest penalty for a violation of the net capital rule. In response to increased regulation, such as by FINRA, some have argued that instead of promoting government intervention, it would be more efficient to focus on a solution that mitigates information asymmetries among traders and their backers; others argue that regulation does not go far enough. In 2018, the European Union introduced the MiFID II/MiFIR regulation. Order types On January 12, 2015, the SEC announced a $14 million penalty against a subsidiary of BATS Global Markets, an exchange operator that was founded by high-frequency traders. The BATS subsidiary Direct Edge failed to properly disclose order types on its two exchanges EDGA and EDGX. These exchanges offered three variations of controversial "Hide Not Slide" orders and failed to accurately describe their priority to other orders. The SEC found the exchanges disclosed complete and accurate information about the order types "only to some members, including certain high-frequency trading firms that provided input about how the orders would operate". The complaint was made in 2011 by Haim Bodek. Reported in January 2015, UBS agreed to pay $14.4 million to settle charges of not disclosing an order type that allowed high-frequency traders to jump ahead of other participants. The SEC stated that UBS failed to properly disclose to all subscribers of its dark pool "the existence of an order type that it pitched almost exclusively to market makers and high-frequency trading firms". UBS broke the law by accepting and ranking hundreds of millions of orders priced in increments of less than one cent, which is prohibited under Regulation NMS. The order type called PrimaryPegPlus enabled HFT firms "to place sub-penny-priced orders that jumped ahead of other orders submitted at legal, whole-penny prices". Quote stuffing In June 2014, high-frequency trading firm Citadel LLC was fined $800,000 for violations that included quote stuffing. Nasdaq's disciplinary action stated that Citadel "failed to prevent the strategy from sending millions of orders to the exchanges with few or no executions". It was pointed out that Citadel "sent multiple, periodic bursts of order messages, at 10,000 orders per second, to the exchanges. This excessive messaging activity, which involved hundreds of thousands of orders for more than 19 million shares, occurred two to three times per day." Spoofing and layering In July 2013, it was reported that Panther Energy Trading LLC was ordered to pay $4.5 million to U.S. and U.K. regulators on charges that the firm's high-frequency trading activities manipulated commodity markets. Panther's computer algorithms placed and quickly canceled bids and offers in futures contracts including oil, metals, interest rates and foreign currencies, the U.S. Commodity Futures Trading Commission said. In October 2014, Panther's sole owner Michael Coscia was charged with six counts of commodities fraud and six counts of "spoofing". 
The indictment stated that Coscia devised a high-frequency trading strategy to create a false impression of the available liquidity in the market, "and to fraudulently induce other market participants to react to the deceptive market information he created". In November 7, 2019, it was reported that Tower Research Capital LLC was ordered to pay $67.4 million in fines to the CFTC to settle allegations that three former traders at the firm engaged in spoofing from at least March 2012 through December 2013. The New York-based firm entered into a deferred prosecution agreement with the Justice Department. Market manipulation In October 2014, Athena Capital Research LLC was fined $1 million on price manipulation charges. The high-speed trading firm used $40 million to rig prices of thousands of stocks, including eBay Inc, according to U.S. regulators. The HFT firm Athena manipulated closing prices commonly used to track stock performance with "high-powered computers, complex algorithms and rapid-fire trades", the SEC said. The regulatory action is one of the first market manipulation cases against a firm engaged in high-frequency trading. Reporting by Bloomberg noted the HFT industry is "besieged by accusations that it cheats slower investors". Advanced trading platforms Advanced computerized trading platforms and market gateways are becoming standard tools of most types of traders, including high-frequency traders. Broker-dealers now compete on routing order flow directly, in the fastest and most efficient manner, to the line handler where it undergoes a strict set of risk filters before hitting the execution venue(s). Ultra-low latency direct market access (ULLDMA) is a hot topic amongst brokers and technology vendors such as Goldman Sachs, Credit Suisse, and UBS. Typically, ULLDMA systems can currently handle high amounts of volume and boast round-trip order execution speeds (from hitting "transmit order" to receiving an acknowledgment) of 10 milliseconds or less. Such performance is achieved with the use of hardware acceleration or even full-hardware processing of incoming market data, in association with high-speed communication protocols, such as 10 Gigabit Ethernet or PCI Express. More specifically, some companies provide full-hardware appliances based on FPGA technology to obtain sub-microsecond end-to-end market data processing. Buy side traders made efforts to curb predatory HFT strategies. Brad Katsuyama, co-founder of the IEX, led a team that implemented THOR, a securities order-management system that splits large orders into smaller sub-orders that arrive at the same time to all the exchanges through the use of intentional delays. This largely prevents information leakage in the propagation of orders that high-speed traders can take advantage of. In 2016, after having with Intercontinental Exchange Inc. and others failed to prevent SEC approval of IEX's launch and having failed to sue as it had threatened to do over the SEC approval, Nasdaq launched a "speed bump" product of its own to compete with IEX. According to Nasdaq CEO Robert Greifeld "the regulator shouldn't have approved IEX without changing the rules that required quotes to be immediately visible". The IEX speed bump—or trading slowdown—is 350 microseconds, which the SEC ruled was within the "immediately visible" parameter. The slowdown promises to impede HST ability "often [to] cancel dozens of orders for every trade they make". 
Outside of US equities, several notable spot foreign exchange (FX) trading platforms—including ParFX, EBS Market, and Thomson Reuters Matching—have implemented their own "speed bumps" to curb or otherwise limit HFT activity. Unlike the IEX fixed length delay that retains the temporal ordering of messages as they are received by the platform, the spot FX platforms' speed bumps reorder messages so the first message received is not necessarily that processed for matching first. In short, the spot FX platforms' speed bumps seek to reduce the benefit of a participant being faster than others, as has been described in various academic papers. See also Complex event processing Computational finance Dark liquidity Data mining Erlang (programming language) used by Goldman Sachs Flash Boys Flash trading Front running Hedge fund Hot money Market maker Mathematical finance Offshore fund Pump and dump Jump Trading References External links Preliminary Findings Regarding the Market Events of May 6, 2010, Report of the staffs of the CFTC and SEC to the Joint Advisory Committee on Emerging Regulatory Issues, May 18, 2010 High-Frequency Trading: Background, Concerns, and Regulatory Developments Congressional Research Service Where is the Value in High Frequency Trading? (2010) Álvaro Cartea, José Penalva High Frequency Trading and the Risk Monitoring of Automated Trading (2013) Robert Fernandez Regulating Trading Practices (2014) Andreas M. Fleckner, The Oxford Handbook of Financial Regulation Financial markets Electronic trading systems Share trading Mathematical finance
3711211
https://en.wikipedia.org/wiki/Ngspice
Ngspice
Ngspice is a mixed-level/mixed-signal electronic circuit simulator. It is a successor of the latest stable release of Berkeley SPICE, version 3f.5, which was released in 1993. A small group of maintainers and the user community contribute to the ngspice project by providing new features, enhancements and bug fixes. Ngspice is based on three open-source free-software packages: Spice3f5, Xspice and Cider1b1: SPICE is the origin of all electronic circuit simulators, its successors are widely used in the electronics community. Xspice is an extension to Spice3 that provides additional C language code models to support analog behavioral modeling and co-simulation of digital components through a fast event-driven algorithm. Cider adds a numerical device simulator to ngspice. It couples the circuit-level simulator to the device simulator to provide enhanced simulation accuracy (at the expense of increased simulation time). Critical devices can be described with their technology parameters (numerical models), all others may use the original ngspice compact models. Status Ngspice implements three classes of analysis: Nonlinear DC analyses Nonlinear transient analyses Linear AC analyses Transient analysis includes transient noise simulation. AC analysis includes small-signal noise simulation, pole-zero and transfer function analysis. Ngspice implements various circuits elements, like resistors, capacitors, inductors (single or mutual), transmission lines and a growing number of semiconductor devices like diodes, bipolar transistors, MOSFETs (both bulk and SOI), MESFETs, JFETs and HFETs. New models can be added to the simulator using: Behavioral modeling: Internal B-, E-, and G-sources, as well as R, C and L devices, offer modeling by mathematical expressions, driven by node voltages, branch currents, parameters and constants. The Xspice codemodel interface: This is a C-code interface that helps the modeling process by simplifying the access to simulator's internal structure. ADMS verilog model compiler: The ADMS model compiler generates C code from Verilog-A model descriptions for integration into ngspice. C language coded models with spice format: As an open-source project, Ngspice allows new models to be linked to the sources and compiled. Ngspice supports parametric netlists (i.e. netlists can contain parameters and expressions). PSPICE compatible parametric macromodels, often released by manufacturers, can be imported as-is into the simulator. Polynomial sources are available. Ngspice provides an internal scripting language to facilitate complex simulation and evaluation control flows. Ngspice may be compiled into a shared library (*.dll or *.so) readily to be integrated into a calling program. Its interface provides access to all simulation parameters, input and output data. tclspice, another shared library version, offers an interface to Tcl/Tk (software). Ngspice is licensed under the BSD-3-Clause license. This permissive open source license allows its integration as a simulation engine into several — proprietary or free/libre — EDA tools such as KiCad, EAGLE (program), CoolSPICE, Altium and others. Ngspice has a command line input interface and offers plotting capability. An open source GUI with schematic entry, simulation and plotting is provided by Qucs-S. Recent progresses on Ngspice have been presented at conferences such as FOSDEM and FSiC. 
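As a minimal illustration of driving ngspice from a script, the sketch below writes a simple RC low-pass netlist and runs it through ngspice's batch mode; it assumes the ngspice executable is on the PATH, and the netlist values and file names are made up for the example.

import pathlib
import subprocess
import tempfile

# A simple RC low-pass filter with a transient analysis and a control
# block that writes the output node voltage to a data file.
netlist = """\
RC low-pass example
V1 in 0 PULSE(0 5 0 1u 1u 1m 2m)
R1 in out 1k
C1 out 0 100n
.tran 1u 5m
.control
run
wrdata rc_out.dat v(out)
.endc
.end
"""

workdir = pathlib.Path(tempfile.mkdtemp())
cirfile = workdir / "rc.cir"
cirfile.write_text(netlist)

# -b runs ngspice in batch (non-interactive) mode.
subprocess.run(["ngspice", "-b", str(cirfile)], cwd=workdir, check=True)
print("output written to", workdir / "rc_out.dat")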
See also Comparison of EDA Software List of free electronics circuit simulators References External links Download site ngspice manual Free simulation software Electronic circuit simulators Electronic design automation software for Linux Free software programmed in C
17011621
https://en.wikipedia.org/wiki/InterConnection.org
InterConnection.org
InterConnection.org (IC) is an American 501(c)(3), non-profit organization headquartered in Seattle, Washington. InterConnection was established in 1999 by Charles Brennick. The organization's original focus was on developing and donating websites to non-profits in developing countries. The program soon expanded to include computer donations and technology training. In 2004 the InterConnection Computer Reuse and Learning Center opened in Seattle as a hub to serve both local and international communities. Mission InterConnection's mission is to make information technology accessible to underserved communities around the world. IC is founded on the belief that technology has the power to create opportunity for everyone. It is for this reason IC works diligently in the proliferation of information technology and training. Activities In pursuance of its Mission InterConnection manages and promotes a number of program activities. The following are four of InterConnection's main program activities: Equipment collection As technology advances many businesses, organizations and individuals find themselves with outdated computer equipment but no way to dispose of it properly. InterConnection's Computer Reuse and Learning Center, in Seattle, Washington, provides a location to properly dispose of old equipment in an environmentally friendly way. InterConnection accepts all computer equipment, regardless of condition. IC manages programs for both computer recycling and computer reuse. In 2007 alone, InterConnection kept 6,000 PCs out of landfills. All donations received by (IC) are guaranteed to be handled appropriately; unusable equipment is recycled, viable equipment is refurbished and reused. This allows individuals, businesses and organizations to address e-waste issues in a simple environmentally friendly way. IC is a 501(c)(3); all donations are tax-deductible and secure. Donors are provided with the necessary tax-deduction documentation. Refurbished computers Computers refurbished by InterConnection's low-income volunteers are made available to under served communities around the world for a small sourcing fee. This program not only creates a training environment for local community members it also makes information technology accessible around the globe. InterConnection's refurbished systems create academic, professional and economic opportunity for people throughout the developing world. All of InterConnection's refurbished computers are upgraded and fully tested before shipment. IC also provides order customization at the partners request. Internet services InterConnection's original purpose was to design and host free websites for small organizations working in the developing world. Now, IC focuses primarily on its refurbished computer and training programs. However, InterConnection still provides organizations with free web-design, web-hosting and free email services. By helping organizations create a web presence, IC is able to improve organizations ability to communicate and connect with like minded groups and individuals around the world. Partnerships Computer recipients InterConnection has supplied over 25,000 refurbished computers to organizations around the world. 
Partner organizations References External links InterConnection.org – InterConnection's Homepage KiroTV News – Spotlight on InterConnection Seattle Weekly – Focus on InterConnection UN Environment Programme – FAQs about E-waste Non-profit organizations based in Seattle Charities based in Washington (state) Development charities based in the United States Organizations established in 1999 Recycling organizations Recycling in the United States Computer recycling 1999 establishments in Washington (state)
58968942
https://en.wikipedia.org/wiki/Qiskit
Qiskit
Qiskit is an open-source software development kit (SDK) for working with quantum computers at the level of circuits, pulses, and algorithms. It provides tools for creating and manipulating quantum programs and running them on prototype quantum devices on IBM Quantum Experience or on simulators on a local computer. It follows the circuit model for universal quantum computation, and can be used for any quantum hardware (currently supporting superconducting qubits and trapped ions) that follows this model. Qiskit was founded by IBM Research to allow software development for their cloud quantum computing service, IBM Quantum Experience. Contributions are also made by external supporters, typically from academic institutions. The primary version of Qiskit uses the Python programming language. Versions for Swift and JavaScript were initially explored, though development of these versions has halted. Instead, a minimal re-implementation of basic features is available as MicroQiskit, which is made to be easy to port to alternative platforms. A range of Jupyter notebooks are provided with examples of quantum computing being used. Examples include the source code behind scientific studies that use Qiskit, as well as a set of exercises to help people learn the basics of quantum programming. An open-source textbook based on Qiskit is available as a supplement for university-level courses on quantum algorithms or quantum computation. Components Qiskit is made up of elements that work together to enable quantum computing. The central goal of Qiskit is to build a software stack that makes it easy for anyone to use quantum computers, regardless of their skill level or area of interest; Qiskit allows users to easily design experiments and applications and run them on real quantum computers and/or classical simulators. Qiskit provides the ability to develop quantum software both at the machine code level of OpenQASM, and at abstract levels suitable for end-users without quantum computing expertise. This functionality is provided by the following distinct components. Qiskit Terra The element Terra is the foundation on which the rest of Qiskit is built. Qiskit Terra provides tools to create quantum circuits at or close to the level of quantum machine code. It allows the processes that run on quantum hardware to be explicitly constructed in terms of quantum gates. It also provides tools to allow quantum circuits to be optimized for a particular device, as well as managing batches of jobs and running them on remote-access quantum devices and simulators. The following shows a simple example of Qiskit Terra. In this, a quantum circuit is created for two qubits, which consists of the quantum gates required to create a Bell state. The quantum circuit then ends with quantum measurements, which extract a bit from each qubit.
from qiskit import QuantumCircuit

qc = QuantumCircuit(2, 2)
qc.h(0)
qc.cx(0, 1)
qc.measure([0, 1], [0, 1])
Qiskit Aer The element Aer provides high-performance quantum computing simulators with realistic noise models. In the near term, development of quantum software will depend largely on simulation of small quantum devices. For Qiskit, this is provided by the Aer component. This provides simulators hosted locally on the user's device, as well as HPC resources available through the cloud. The simulators can also simulate the effects of noise for simple and sophisticated noise models. 
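A noise model of the kind mentioned above can be sketched briefly. The following is a minimal, illustrative example only; the NoiseModel and depolarizing_error names are assumed from the qiskit-aer package of this era rather than taken from this article, and the error probability and choice of gate are arbitrary. A model built this way can be supplied to a simulator run such as the one shown in the next example (for instance through the noise_model argument of execute).
# Minimal sketch: attach a 1% depolarizing error to all two-qubit cx gates.
# Module and function names are assumed from the qiskit-aer package.
from qiskit.providers.aer.noise import NoiseModel
from qiskit.providers.aer.noise.errors import depolarizing_error

noise_model = NoiseModel()
error = depolarizing_error(0.01, 2)  # depolarizing error acting on 2-qubit gates
noise_model.add_all_qubit_quantum_error(error, ["cx"])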
Once a quantum circuit such as the qc created in the Terra example has been built, it can be run on a backend (either quantum hardware or a simulator). In the following example, a local simulator is used.
from qiskit import Aer, execute

backend = Aer.get_backend("qasm_simulator")
job = execute(qc, backend)
result = job.result()
print(result.get_counts(qc))
The final print statement here will show the results returned by the backend. This is a Python dictionary that describes the bit strings obtained from multiple runs of the quantum circuit. In the quantum circuit used in this example, the bit strings '00' and '11' should be the only possible results, and should occur with equal probability. The full results will therefore typically have the samples split approximately equally between the two, such as {'00': 519, '11': 505}. Experiments done on quantum hardware using Qiskit have been used in many research papers, such as in tests of quantum error correction, generation of entanglement, and simulation of far-from-equilibrium dynamics. Qiskit Ignis As of version 0.7.0, released on 6 December 2021, Qiskit Ignis has been deprecated and superseded by the Qiskit Experiments project. The element Ignis provides tools for quantum hardware verification, noise characterization, and error correction. Ignis is a component that contains tools for characterizing noise in near-term devices, as well as allowing computations to be performed in the presence of noise. This includes tools for benchmarking near-term devices, error mitigation and error correction. Ignis is meant for those who want to design quantum error correction codes, or who wish to study ways to characterize errors through methods such as tomography, or even to find better ways of using gates by exploring dynamical decoupling and optimal control. Qiskit Aqua As of version 0.9.0, released on 2 April 2021, Qiskit Aqua has been deprecated, with its support ending and eventual archival being no sooner than three months from that date. The element Aqua provided a library of cross-domain algorithms upon which domain-specific applications can be built. However, the Qiskit 0.25.0 release included a restructuring of the applications and algorithms. What was previously referred to as Qiskit Aqua, the single applications and algorithms module of Qiskit, is now split into dedicated application modules for Optimization, Finance, Machine Learning and Nature (including Physics & Chemistry). The core algorithms and opflow operator functionality were moved to Qiskit Terra. In addition to the restructuring, all algorithms follow a new unified paradigm: algorithms are classified according to the problems they solve, and within one application class algorithms can be used interchangeably to solve the same problem. This means that, unlike before, algorithm instances are decoupled from the problem they solve. Qiskit Optimization Qiskit Optimization is an open-source framework that covers the whole range from high-level modeling of optimization problems, with automatic conversion of problems to different required representations, to a suite of easy-to-use quantum optimization algorithms that are ready to run on classical simulators, as well as on real quantum devices via Qiskit. The Optimization module enables easy, efficient modeling of optimization problems using docplex. 
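As an illustration of the modeling layer described above, the following minimal sketch builds a small binary optimization problem. It assumes the qiskit-optimization package and its QuadraticProgram class, with method names as recalled from that package rather than taken from this article; the variables, coefficients, and constraint are arbitrary.
# Sketch of a toy quadratic program with two binary decision variables.
# Class and method names are assumed from the qiskit-optimization package.
from qiskit_optimization import QuadraticProgram

qp = QuadraticProgram("example")
qp.binary_var(name="x")
qp.binary_var(name="y")

# Objective: minimize -x - 2y + xy, subject to x + y <= 1.
qp.minimize(linear={"x": -1, "y": -2}, quadratic={("x", "y"): 1})
qp.linear_constraint(linear={"x": 1, "y": 1}, sense="<=", rhs=1, name="c0")

print(qp.export_as_lp_string())  # inspect the model in LP format
A problem modeled this way can then be handed to the classical or quantum optimizers that the package provides.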
Qiskit Finance Qiskit Finance is an open-source framework that contains uncertainty components for stock/securities problems, Ising translators for portfolio optimizations, and data providers to source real or random data for finance experiments. Qiskit Machine Learning The Machine Learning package contains sample datasets at present, together with classification algorithms such as QSVM and VQC (Variational Quantum Classifier) with which this data can be used for experiments, as well as the QGAN (Quantum Generative Adversarial Network) algorithm. Qiskit Nature Qiskit Nature is an open-source framework that supports problems including ground state energy computations, excited states and dipole moments of molecules, both open- and closed-shell. The code comprises chemistry drivers, which, when provided with a molecular configuration, will return one- and two-body integrals as well as other data that is efficiently computed classically. This output data from a driver can then be used as input to Qiskit Nature, which contains logic to translate it into a form suitable for quantum algorithms. See also IBM Quantum Experience References IBM software Quantum programming
10291825
https://en.wikipedia.org/wiki/BBS%20software%20for%20the%20TI-99/4A
BBS software for the TI-99/4A
There are several notable bulletin board systems (BBS) for the Texas Instruments TI-99/4A. Technology writer Ron Albright wrote of several BBS applications written for the TI-99/4A in the March 1985 article Touring The Boards in the monthly TI-99/4A magazine MICROpendium. While Albright's article references several notable bulletin board systems, it does not confirm what was the first BBS system written for the TI-99/4A. Zyolog BBS Zyolog BBS was the first commercially available BBS system written for the TI-99/4A, created in 1983 by Dr. Bryan Wilcutt, DC.S, when he was 15 years old. The name Zyolog was a play on words on Zilog, which made low-end 8-bit chips and was the first processor type used by the author. The software was officially copyrighted in 1985. The bulletin board software was written in a mixture of TI Extended BASIC and TI Assembly Language for the TMS9900 processor. The author ran the BBS system until moving to the Amiga platform in 1991. Over 200 Zyolog BBS systems existed worldwide. TIBBS One of the most popular BBS applications for the TI-99/4A in the early to mid 1980s was aptly named TIBBS (Texas Instruments Bulletin Board System). TIBBS was purported to be the first BBS written to run on the TI-99/4A microcomputer. Its author, Ralph Fowler of Atlanta, Georgia, began the program because he was told by TI's engineers that the machine was not powerful enough to support a BBS. Approximately 200 copies of the application were officially licensed by Fowler and many TIBBS systems popped up around the world. Operators ranged from teenagers to one sysop in Sacramento, California who was over 70 years old. After Texas Instruments ceased producing the 99/4A, its enthusiasts became even more supportive of each other and TIBBS continued into the late 1980s. Eventually Fowler made the program public domain and moved to a different PC platform. Phillip (P.J.) Holly's BBS 12-year-old programmer Phillip (P.J.) Holly aired a BBS written in TI Extended BASIC around late 1982 or early 1983 in the Northwest Chicago suburbs. His code was given to fellow BBS friends, and eventually used as a starting point for the Chicago TI-User's Group BBS, which later was coded in assembly language using TI's Editor Assembler. Holly wrote his BBS software on his own due to the lack of available BBS software options for the TI-99/4A. Months later, he discovered Mr. Fowler's TIBBS in Atlanta. SoftWorx Houston, Texas-based programmer Mark Shields wrote a BBS program called SoftWorx in the summer of 1983 which served his board The USS Enterprise. Shields' inspiration came after watching the motion picture WarGames. The application originally made outgoing calls in an attempt to locate other computers, and was eventually adapted to accept calls. The user interface was modeled directly on Nick Naimo's Networks II BBS software, which had been written for the Apple II. Shields used TI Extended BASIC as the basis for his application. No actual code from Naimo's software was used, although the online experience to modem users at the time was comparable. Shields donated the application to the public domain and several sites briefly sprang up in the 1980s. References Bulletin board system software Texas Instruments TI-99/4A
7482183
https://en.wikipedia.org/wiki/Vimeo
Vimeo
Vimeo, Inc. is an American video hosting, sharing, and services platform provider headquartered in New York City. Vimeo focuses on the delivery of high-definition video across a range of devices. Vimeo's business model is software as a service (SaaS). It derives revenue by providing subscription plans for businesses and video content producers. Vimeo provides its subscribers with tools for video creation, editing, and broadcasting, enterprise software solutions, as well as the means for video professionals to connect with clients and other professionals. As of 2021, the site has 200 million users, with around 1.6 million subscribers to its services. The site was initially built by Jake Lodwick and Zach Klein in 2004 as a spin-off of CollegeHumor to share humor videos among colleagues, though it was put to the side to support the growing popularity of CollegeHumor. IAC acquired CollegeHumor and Vimeo in 2006, and after Google had acquired YouTube, IAC directed more effort into Vimeo to compete against YouTube, focusing on providing curated content and high-definition video to distinguish itself from other video sharing sites. Lodwick and Klein eventually left by 2009, and IAC implemented a more corporate-focused structure to build out Vimeo's services, with current CEO Anjali Sud having been in place since July 2017. IAC spun off Vimeo as a standalone public company in May 2021. History Initial growth from CollegeHumor (2004–2009) Vimeo was founded in November 2004 by Connected Ventures, the parent company of the humor-based website CollegeHumor, as a side project of web developers Jake Lodwick and Zach Klein to share and tag short videos with their friends. The idea for a video-sharing site was inspired after CollegeHumor received a large number of views from a posted video clip of the October 2004 Saturday Night Live show that included Ashlee Simpson's infamous lip-syncing incident. The name Vimeo was created by Lodwick as a play on the words video and me. As CollegeHumor was drawing in audiences, Vimeo was put to the side while Lodwick and Klein focused on supporting the main CollegeHumor site. Vimeo's user base grew only by a small amount during the next few years, principally by word of mouth. IAC, owned by Barry Diller, acquired a majority ownership of Connected Ventures in August 2006, as it was drawn by the success of CollegeHumor, which was bringing in around 6 million visitors a month at the time. In reviewing the assets of Connected Ventures, IAC discovered the Vimeo property; this came at around the same time that Google purchased YouTube in October 2006. By the start of 2007, IAC had directed Lodwick, Klein, and Andrew Pile to work on Vimeo full time and expand its capabilities. To differentiate themselves from YouTube and other video sharing sites that had appeared since Google's purchase, Vimeo focused on the content creator with better upload tools and better curation of content on the site rather than on popularity. By October 2007, Vimeo was the first video sharing site to offer high-definition content to users via Flash-based high-definition video playback. While IAC's acquisition of Connected Ventures helped to target Vimeo's direction, the corporate nature of IAC created issues with many of the original staff of CollegeHumor and Vimeo. Lodwick was planning to leave the company near the end of 2007, as he said that IAC's incorporation of business processes hampered their creativity, but was fired a few weeks before that point. Klein left in early 2008. 
Developing high-definition content delivery (2009–2017) Vimeo began rolling out a major redesign of its site in 2009 aimed at putting the user's focus on the video, which was ultimately completed by January 2012. The new version was designed to make video playback the central focus, contrasting with the numerous user interface elements that YouTube had within its layout at the time. From 2008 to 2014, Vimeo blocked the hosting of video game-related videos, as they typically were longer than its normal content and took up much of the site's resources. Vimeo did allow machinima videos with a narrative structure. The ban was lifted by October 2014. In December 2014, Vimeo introduced 4K support, though it would only allow downloading due to the low market penetration of 4K displays at the time. Streaming of 4K content launched the following year, along with adaptive bitrate streaming support. In March 2017, Vimeo introduced 360-degree video support, including support for virtual reality platforms and smartphones, stereoscopic video, and an online video series providing guidance on filming and producing 360-degree videos. Support for high-dynamic-range video up to 8K was added in 2017, and AV1 encoding in June 2019. Transition to a software provider (2016–2019) Vimeo acquired VHX, a platform for premium over-the-top subscription video channels, in May 2016, subsequently offering this as a service to its site's customers. Vimeo acquired the existing service Livestream in September 2017 to bolster its associated staff and technology, eventually integrating its streaming technology as Vimeo Live, another service offering for its subscribers alongside Vimeo OTT. Around 2016, Vimeo had expressed its intentions to enter into the subscription video-on-demand market with its own original programming, with the intent of spending "tens of millions" on content to populate the service so as to compete with services like Netflix. According to IAC CEO Joey Levin, some of the original programming would have been from content creators already on Vimeo, who would be paid for their material to be used on the service, thus reducing the costs of producing content in comparison to Netflix. However, by June 2017, Vimeo had scrapped this plan, recognizing not only that they were far behind Netflix and others in this area, but also that they generally had far fewer potential viewers and that their ultimate goal, converting those viewers into customers of the site, would be difficult. CEO Anjali Sud also said they knew they did not have the financial resources to compete with Netflix in terms of creating original content. With this move, Vimeo decided to focus more heavily on supporting its content creators and customers, transitioning itself away from being simply a content-hosting or video-sharing website and moving into the software as a service model. According to Sud, Vimeo saw that the demand for online video services had shifted away from Hollywood productions and media producers and was gaining more traction among large businesses, and just as Vimeo had originally drawn attention from indie filmmakers at its start, it saw an opportunity to help smaller businesses that needed video sharing capabilities but lacked the resources to develop them internally. The company introduced a number of tiers and services aimed at business use atop its existing services. 
Vimeo no longer considered itself a competitor to YouTube or other video-sharing sites, and instead called itself "the Switzerland for creators", according to Sud. Creators were allowed to copy and share their videos to any other video-sharing site as long as they continued to use Vimeo's video editing tools for preparing their creations. In early 2017, Vimeo released collaborative review tools for its users, allowing them to privately share videos with other users to get feedback tied to individual frames of the video, thus keeping the video creation workflow entirely within the Vimeo service. Vimeo acquired Magisto, an artificial intelligence (AI)-backed video creation service with over 100 million users, in April 2019. While the deal's terms were not disclosed, the purchase was reportedly valued at $200 million. Through the acquisition, Magisto's staff were brought into Vimeo, and their existing userbase gained access to Vimeo's toolset. Vimeo saw Magisto's technology as helpful for smaller businesses that may not have the funds or skills to produce professional videos. By February 2020, Vimeo launched Vimeo Create, the integration of Magisto's tools into the Vimeo platform to let its users easily create videos guided by AI agents from stock video footage offered by Vimeo and the users' own sources. Transition to a standalone public company (2020–present) In November 2020, spurred by growth in Vimeo's services due to the COVID-19 pandemic, IAC raised new funding for Vimeo in anticipation of spinning off the subsidiary as its own company. IAC formally announced plans to spin off Vimeo as a publicly owned company in December 2020, with the process expected to close by the second quarter of 2021. Vimeo would become the 11th company spun out from IAC following this. Another round of investment in January 2021 brought additional funding and further raised the company's estimated valuation. Vimeo's shareholders approved going public on May 14, 2021. The company was fully spun out as its own entity on May 25, 2021, and started trading on Nasdaq under the ticker symbol VMEO. Corporate affairs Leadership After the departures of Lodwick and Klein, IAC brought a more corporate structure to the company. By January 2009, Dae Mellencamp joined IAC as general manager of Vimeo. She served as CEO until March 19, 2012, when Kerry Trainor joined Vimeo as CEO. Around 2016, several high-level executives announced their departure from Vimeo, including Trainor. IAC's CEO Joey Levin was named as interim CEO for Vimeo during its search for a new CEO. After a year-long search, IAC promoted then-general manager Anjali Sud to CEO. Product offerings and revenue structure In contrast with other video-sharing sites, Vimeo does not use any advertising either on its pages or embedded in videos. Instead, Vimeo sells its services and products to content creators for revenue under a software as a service (SaaS) model. The site offers a free tier of services, limiting uploads to 500 megabytes a week and with a cap on total data, while paid subscriber tiers, first introduced in 2008, give the user higher weekly upload allowances and greater storage capacity. Starting around 2016, Vimeo has also shifted towards supporting businesses through its offerings. In September 2016 it introduced a Business tier plan to allow for intra-business collaboration as well as for businesses to host informational videos for their customers. 
With the acquisition of Livestream in 2017, Vimeo added another tier for Premium subscribers, offering unlimited uploads and streaming events through Vimeo Live. Vimeo launched Vimeo Enterprise, a set of tools designed for large organizations that allow users to manage and share live and on-demand video across workspaces, in August 2019. Vimeo established partnerships with the Mailchimp, HubSpot, and Constant Contact marketing platforms to allow their clients to easily integrate Vimeo videos into mail and other promotional campaigns. Vimeo also partnered with TikTok to give TikTok commercial users access to Vimeo's editing tools for video content. In addition to subscriptions, Vimeo has other revenue streams through additional services to its customers. Creators have been able to sell access to individual videos since 2013 and to offer subscription-based access since 2015, with Vimeo taking a 10% cut of the sales. Vimeo has offered a video on demand service since 2015, allowing its partners to sell Vimeo videos through their websites to their customers. Via its VHX acquisition, Vimeo offers an over-the-top media service (OTT), Vimeo OTT, which Vimeo subscribers can use to create custom mobile apps to provide on-demand video to the app's subscribers, with Vimeo handling the subscriptions, billing, and content delivery. In 2018, the site launched Vimeo Stock to allow content creators to offer videos as stock footage to be used by others. Vimeo Create was introduced in 2020 to allow users to create videos with the help of artificial intelligence. To further promote Vimeo as a home for professional video support, Vimeo opened a "For Hire" job marketplace in September 2019, allowing companies seeking professional video services to freely post job requests for the site's users to browse and respond to. Vimeo Record was launched in October 2020 to allow businesses to use recorded video messaging within their company or with their clients to aid in communications. Customer size By December 2013, Vimeo had attracted more than 100 million unique visitors per month, and more than 22 million registered users. At this time, fifteen percent of Vimeo's traffic came from mobile devices. As of February 2013, Vimeo accounted for 0.11% of all Internet bandwidth, following far behind its larger competitors, the video sharing sites YouTube and Facebook. The community of Vimeo includes indie filmmakers and their fans. The Vimeo community has adopted the name "Vimeans," which references active members of the Vimeo community who engage with other users on a regular basis. In 2019, enterprise customers were Vimeo's fastest-growing segment in terms of revenue, according to Glenn Schiffman, IAC's Chief Financial Officer. CTO Mark Kornfilt said the company had nearly 1 million subscribers as of April 2019, making up a majority of the firm's annual revenue. This had grown to over 1.2 million subscribers by March 2020. The site had also obtained 175 million registered users by April 2020, and over 200 million by November 2020, attributed to increased use of Vimeo due to the COVID-19 pandemic. Prior to going public, the company had about 1.6 million paying subscribers. Events Vimeo launched its Vimeo Festival and Awards program in 2010, and held subsequent events every eighteen months. The Festival allows video creators to submit their films for a small fee for consideration across multiple categories. 
Typically more than 5,000 videos are submitted, and these are narrowed to a field of 1,000 for a select judging panel to vote on winners across multiple categories. This culminates with a live awards presentation showing the winning films in each category. Each category winner receives a cash prize, and there is an overall best-in-show prize. Vimeo established the Festival and Awards to help give video creators and filmmakers an opportunity to highlight their work on Vimeo's pages and gain potential work from clients. In 2008, Vimeo launched its Staff Picks, highlighting videos in a special channel picked by the company's employees as some of the best work by its users. This feature was expanded in 2016 to give special laurels to videos that the staff felt were "Best of the Month" and "Best of the Year", as well as adding Staff Pick Premiere for newly added videos to the Staff Picks channel. In 2020, Vimeo invited previous Staff Picks recipients to create videos about their favorite small business owners and the impact of the COVID-19 pandemic as part of its Stories in Place program. Impacts in other countries As of March 2021, Vimeo remains blocked in China. Starting May 4, 2012, the site was blocked in India by some ISPs under orders from the Department of Telecommunications, without any stated reasons. Vimeo was blocked in India in December 2014, due to fears that the website was spreading ISIS propaganda through some of its user-made videos. However, on December 31, the site was unblocked in India. On January 9, 2014, Vimeo was blocked in Turkey without clear reasons. In May 2014, Tifatul Sembiring, Indonesia's then-Communications Minister, said on his personal Twitter account that video sharing site Vimeo would be banned. Citing Indonesia's anti-pornography law, passed in 2008, the minister said the site included displays of "nudity or nudity-like features". Legal cases In January 2019, the Commercial Court of Rome determined that Vimeo's video-hosting platform played an “active role” in copyright infringement through the posting of Italian television programs owned by media conglomerate Mediaset. After Vimeo declined to remove over 2,000 copyrighted videos at the request of Mediaset, the company was forced to pay $9,700,000 in penalties. In 2009, Capitol Records/EMI sued Vimeo for copyright infringement based on user-uploaded videos, which Vimeo defended on the basis of the Digital Millennium Copyright Act (DMCA) safe harbor provisions. Two rulings in the Southern District of New York in 2013 went primarily in favor of Vimeo, and the Second Circuit issued a Vimeo-favorable ruling in 2016. Back at the district court, Vimeo won another favorable ruling in May 2021. In June 2019, California pastor James Domen, founder of the "ex-gay" ministry Church United, sued Vimeo after the website, in 2018, removed 89 of his videos for violating its content guidelines. The videos expressed opposition to homosexuality and advocated so-called "conversion therapy". A federal court dismissed Domen's suit in 2020, and the Second Circuit Court of Appeals affirmed the dismissal in 2021. After its acquisition of Magisto, Vimeo was sued in Illinois in September 2019 for violations of the state's Biometric Information Privacy Act (BIPA). The class-action suit alleged that Vimeo, via Magisto's software, had automatically scanned and tracked specific individuals in the videos uploaded through the service, identifying their gender, age, and race, without prior consent as required by BIPA. 
Vimeo asserted that the technology only used machine learning to identify areas in videos that equated to human faces, and that "Determining whether an area represents a human face or a volleyball does not equate to 'facial recognition'." Vimeo also affirmed that it, via Magisto, "neither collects nor retains any facial information capable of recognizing an individual". The case remains pending in federal district court after Vimeo lost a bid to settle the suit. See also Comparison of video hosting services References External links 2004 establishments in New York City 2006 mergers and acquisitions American entertainment websites Internet properties established in 2004 Video hosting Video on demand services Companies listed on the Nasdaq Corporate spin-offs Publicly traded companies based in New York City
33496160
https://en.wikipedia.org/wiki/Mobile%20app
Mobile app
A mobile application or app is a computer program or software application designed to run on a mobile device such as a phone, tablet, or watch. Mobile applications often stand in contrast to desktop applications, which are designed to run on desktop computers, and web applications, which run in mobile web browsers rather than directly on the mobile device. Apps were originally intended for productivity assistance such as email, calendar, and contact databases, but the public demand for apps caused rapid expansion into other areas such as mobile games, factory automation, GPS and location-based services, order-tracking, and ticket purchases, so that there are now millions of apps available. Many apps require Internet access. Apps are generally downloaded from app stores, which are a type of digital distribution platform. The term "app", short for "software application", has since become very popular; in 2010, it was listed as "Word of the Year" by the American Dialect Society. Apps are broadly classified into three types: native apps, hybrid apps, and web apps. Native applications are designed specifically for a mobile operating system, typically iOS or Android. Web apps are written in HTML5, CSS, and JavaScript and typically run through a browser. Hybrid apps are built using web technologies such as JavaScript, CSS, and HTML5 and function like web apps disguised in a native container. Overview Most mobile devices are sold with several apps bundled as pre-installed software, such as a web browser, email client, calendar, mapping program, and an app for buying music, other media, or more apps. Some pre-installed apps can be removed by an ordinary uninstall process, thus leaving more storage space for desired ones. Where the software does not allow this, some devices can be rooted to eliminate the undesired apps. Apps that are not preinstalled are usually available through distribution platforms called app stores. These may be operated by the owner of the device's mobile operating system, such as the App Store (iOS) or Google Play Store; by the device manufacturers, such as the Galaxy Store and Huawei AppGallery; or by third parties, such as the Amazon Appstore and F-Droid. Usually, they are downloaded from the platform to a target device, but sometimes they can be downloaded to laptops or desktop computers. Apps can also be installed manually, for example by running an Android application package on Android devices. Some apps are freeware, while others have a price, which can be upfront or a subscription. Some apps also include microtransactions and/or advertising. In any case, the revenue is usually split between the application's creator and the app store. The same app can, therefore, cost a different price depending on the mobile platform. Mobile apps were originally offered for general productivity and information retrieval, including email, calendar, contacts, the stock market and weather information. However, public demand and the availability of developer tools drove rapid expansion into other categories, such as those handled by desktop application software packages. As with other software, the explosion in the number and variety of apps made discovery a challenge, which in turn led to the creation of a wide range of review, recommendation, and curation sources, including blogs, magazines, and dedicated online app-discovery services. In 2014 government regulatory agencies began trying to regulate and curate apps, particularly medical apps. 
Some companies offer apps as an alternative method to deliver content, with certain advantages over an official website. With a growing number of mobile applications available at app stores and the improved capabilities of smartphones, people are downloading more applications to their devices. Usage of mobile apps has become increasingly prevalent across mobile phone users. A May 2012 comScore study reported that during the previous quarter, more mobile subscribers used apps than browsed the web on their devices: 51.1% vs. 49.8% respectively. Researchers found that usage of mobile apps strongly correlates with user context and depends on the user's location and the time of day. Mobile apps are playing an ever-increasing role within healthcare and, when designed and integrated correctly, can yield many benefits. Market research firm Gartner predicted that 102 billion apps would be downloaded in 2013 (91% of them free), which would generate $26 billion in the US, up 44.4% on 2012's US$18 billion. By Q2 2015, the Google Play and Apple stores alone generated $5 billion. An analyst report estimates that the app economy creates revenues of more than €10 billion per year within the European Union, while over 529,000 jobs have been created in 28 EU states due to the growth of the app market. Types Mobile applications may be classified by numerous methods. A common scheme is to distinguish native, web-based, and hybrid apps. Native app All apps targeted toward a particular mobile platform are known as native apps. Therefore, an app intended for an Apple device does not run on Android devices. As a result, most businesses develop apps for multiple platforms. While developing native apps, professionals incorporate best-in-class user interface modules. This accounts for better performance, consistency and a good user experience. Users also benefit from wider access to application programming interfaces and can make full use of the apps on the particular device, switching from one app to another effortlessly. The main purpose of creating such apps is to ensure the best performance for a specific mobile operating system. Web-based app A web-based app is implemented with the standard web technologies of HTML, CSS, and JavaScript. Internet access is typically required for proper behavior or to be able to use all features, in contrast to offline usage. Most, if not all, user data is stored in the cloud. The performance of these apps is similar to a web application running in a browser, which can be noticeably slower than the equivalent native app. It also may not have the same level of features as the native app. Hybrid app The concept of the hybrid app is a mix of native and web-based apps. Apps developed using Apache Cordova, Flutter, Xamarin, React Native, Sencha Touch, and other frameworks fall into this category. These are made to support web and native technologies across multiple platforms. Moreover, these apps are easier and faster to develop: they involve the use of a single codebase that works across multiple mobile operating systems. Despite such advantages, hybrid apps exhibit lower performance and often fail to provide the same look and feel across different mobile operating systems. Development Developing apps for mobile devices requires considering the constraints and features of these devices. Mobile devices run on battery, have less powerful processors than personal computers, and also have more features such as location detection and cameras. 
Developers also have to consider a wide array of screen sizes, hardware specifications and configurations because of intense competition in mobile software and changes within each of the platforms (although these issues can be overcome with mobile device detection). Mobile application development requires the use of specialized integrated development environments. Mobile apps are first tested within the development environment using emulators and later subjected to field testing. Emulators provide an inexpensive way to test applications on mobile phones to which developers may not have physical access. Mobile user interface (UI) design is also essential. Mobile UI considers constraints and contexts, screen, input and mobility as outlines for design. The user is often the focus of interaction with their device, and the interface entails components of both hardware and software. User input allows the users to manipulate a system, and the device's output allows the system to indicate the effects of the users' manipulation. Mobile UI design constraints include limited attention and form factors, such as a mobile device's screen size relative to a user's hand. Mobile UI contexts signal cues from user activity, such as location and scheduling, that can be shown from user interactions within a mobile application. Overall, the goal of mobile UI design is primarily an understandable, user-friendly interface. Mobile UIs, or front-ends, rely on mobile back-ends to support access to enterprise systems. The mobile back-end facilitates data routing, security, authentication, authorization, working off-line, and service orchestration. This functionality is supported by a mix of middleware components including mobile app servers, Mobile Backend as a Service (MBaaS), and SOA infrastructure. Conversational interfaces display the computer interface and present interactions through text instead of graphic elements. They emulate conversations with real humans. There are two main types of conversational interfaces: voice assistants (like the Amazon Echo) and chatbots. Conversational interfaces are becoming particularly practical as users are starting to feel overwhelmed with mobile apps (a term known as "app fatigue"). David Limp, Amazon's senior vice president of devices, says in an interview with Bloomberg, "We believe the next big platform is voice." Distribution The three biggest app stores are Google Play for Android, App Store for iOS, and Microsoft Store for Windows 10, Windows 10 Mobile, and Xbox One. Google Play Google Play (formerly known as the Android Market) is an international online software store developed by Google for Android devices. It opened in October 2008. In July 2013, the number of apps downloaded via the Google Play Store surpassed 50 billion, of the over 1 million apps available. As of September 2016, according to Statista, the number of apps available exceeded 2.4 million. Over 80% of apps in the Google Play Store are free to download. The store generated revenue of 6 billion U.S. dollars in 2015. App Store Apple's App Store for iOS and iPadOS was not the first app distribution service, but it ignited the mobile revolution; it opened on July 10, 2008, and as of September 2016 reported over 140 billion downloads. The original AppStore was first demonstrated to Steve Jobs in 1993 by Jesse Tayler at NeXTWorld Expo. As of June 6, 2011, there were 425,000 apps available, which had been downloaded by 200 million iOS users. 
During Apple's 2012 Worldwide Developers Conference, CEO Tim Cook announced that the App Store had 650,000 available apps to download, and that 30 billion apps had been downloaded from the App Store up to that date. From an alternative perspective, figures seen in July 2013 by the BBC from tracking service Adeven indicate over two-thirds of apps in the store are "zombies", barely ever installed by consumers. Microsoft Store Microsoft Store (formerly known as the Windows Store) was introduced by Microsoft in 2012 for its Windows 8 and Windows RT platforms. While it can also carry listings for traditional desktop programs certified for compatibility with Windows 8, it is primarily used to distribute "Windows Store apps", which are primarily built for use on tablets and other touch-based devices (but can still be used with a keyboard and mouse, and on desktop computers and laptops). Others Amazon Appstore is an alternative application store for the Android operating system. It was opened in March 2011 and as of June 2015 the app store had nearly 334,000 apps. The Amazon Appstore's Android apps can also be installed and run on BlackBerry 10 devices. BlackBerry World is the application store for BlackBerry 10 and BlackBerry OS devices. It opened in April 2009 as BlackBerry App World. Ovi (Nokia) for Nokia phones was launched internationally in May 2009. In May 2011, Nokia announced plans to rebrand its Ovi product line under the Nokia brand and Ovi Store was renamed Nokia Store in October 2011. From January 2014, the Nokia Store no longer allowed developers to publish new apps or app updates for its legacy Symbian and MeeGo operating systems. Windows Phone Store was introduced by Microsoft for its Windows Phone platform, which was launched in October 2010. It has over 120,000 apps available. Samsung Apps was introduced in September 2009. As of October 2011, Samsung Apps had reached 10 million downloads. The store is available in 125 countries and offers apps for the Windows Mobile, Android and Bada platforms. The Electronic AppWrapper was the first electronic distribution service to collectively provide encryption and purchasing electronically. F-Droid is a free and open-source Android app repository. Opera Mobile Store is a platform-independent app store for iOS, Java, BlackBerry OS, Symbian, Windows Mobile, and Android-based mobile phones. It was launched internationally in March 2011. There are numerous other independent app stores for Android devices. Enterprise management Mobile application management (MAM) describes software and services responsible for provisioning and controlling access to internally developed and commercially available mobile apps used in business settings. The strategy is meant to offset the security risk of a Bring Your Own Device (BYOD) work strategy. When an employee brings a personal device into an enterprise setting, mobile application management enables the corporate IT staff to transfer required applications, control access to business data, and remove locally cached business data from the device if it is lost, or when its owner no longer works with the company. Containerization is an alternate approach to security. Rather than controlling an employee's entire device, containerization apps create isolated pockets separate from personal data. Company control of the device only extends to that separate container. App wrapping vs. 
native app management Especially when employees "bring your own device" (BYOD), mobile apps can be a significant security risk for businesses, because they transfer unprotected sensitive data to the Internet without the knowledge and consent of the users. Reports of stolen corporate data show how quickly corporate and personal data can fall into the wrong hands. Data theft is not just the loss of confidential information, but makes companies vulnerable to attack and blackmail. Professional mobile application management helps companies protect their data. One option for securing corporate data is app wrapping. But there are also some disadvantages, such as copyright infringement or the loss of warranty rights. Functionality, productivity and user experience are particularly limited under app wrapping. The policies of a wrapped app can't be changed. If required, it must be recreated from scratch, adding cost. An app wrapper is a mobile app made wholly from an existing website or platform, with few or no changes made to the underlying application. The "wrapper" is essentially a new management layer that allows developers to set up usage policies appropriate for app use. Examples of these policies include whether or not authentication is required, allowing data to be stored on the device, and enabling/disabling file sharing between users. Because most app wrappers are websites first, they often do not align with iOS or Android developer guidelines. Alternatively, it is possible to offer native apps securely through enterprise mobility management. This enables more flexible IT management, as apps can be easily implemented and policies adjusted at any time. See also Appbox Pro (2009) App store optimization Enterprise mobile application Mobile commerce Super-app References External links User interface techniques
3180013
https://en.wikipedia.org/wiki/IDEF1X
IDEF1X
Integration DEFinition for information modeling (IDEF1X) is a data modeling language for the development of semantic data models. IDEF1X is used to produce a graphical information model which represents the structure and semantics of information within an environment or system. IDEF1X permits the construction of semantic data models which may serve to support the management of data as a resource, the integration of information systems, and the building of computer databases. This standard is part of the IDEF family of modeling languages in the field of software engineering. Overview A data modeling technique is used to model data in a standard, consistent and predictable manner in order to manage it as a resource. It can be used in projects requiring a standard means of defining and analyzing the data resources within an organization. Such projects include the incorporation of a data modeling technique into a methodology, managing data as a resource, integrating information systems, or designing computer databases. The primary objectives of the IDEF1X standard are to provide: Means for completely understanding and analyzing an organization's data resources Common means of representing and communicating the complexity of data A technique for presenting an overall view of the data required to run an enterprise Means for defining an application-independent view of data which can be validated by users and transformed into a physical database design A technique for deriving an integrated data definition from existing data resources. A principal objective of IDEF1X is to support integration. The approach to integration focuses on the capture, management, and use of a single semantic definition of the data resource referred to as a “conceptual schema.” The “conceptual schema” provides a single integrated definition of the data within an enterprise which is not biased toward any single application of data and is independent of how the data is physically stored or accessed. The primary objective of this conceptual schema is to provide a consistent definition of the meanings of and interrelationships between data that can be used to integrate, share, and manage the integrity of data. A conceptual schema must have three important characteristics: Consistent with the infrastructure of the business and true across all application areas Extendible, such that new data can be defined without altering previously defined data Transformable to both the required user views and to a variety of data storage and access structures. History The need for semantic data models was first recognized by the U.S. Air Force in the mid-1970s as a result of the Integrated Computer Aided Manufacturing (ICAM) Program. The objective of this program was to increase manufacturing productivity through the systematic application of computer technology. The ICAM Program identified a need for better analysis and communication techniques for people involved in improving manufacturing productivity. As a result, the ICAM Program developed a series of techniques known as the IDEF (ICAM Definition) Methods which included the following: IDEF0 used to produce a “function model” which is a structured representation of the activities or processes within the environment or system IDEF1 used to produce an “information model” which represents the structure and semantics of information within the environment or system IDEF2 used to produce a “dynamics model”. 
The initial approach to IDEF information modeling (IDEF1) was published by the ICAM program in 1981, based on current research and industry needs. The theoretical roots for this approach stemmed from the early work of Edgar F. Codd on relational model theory and Peter Chen on the entity-relationship model. The initial IDEF1 technique was based on the work of Dr R. R. Brown and Mr T. L. Ramey of Hughes Aircraft and Mr D. S. Coleman of D. Appleton Company (DACOM), with critical review and influence by Charles Bachman, Peter Chen, Dr M. A. Melkanoff, and Dr G.M. Nijssen. In 1983, the U.S. Air Force initiated the Integrated Information Support System (I2S2) project under the ICAM program. The objective of this project was to provide the enabling technology to logically and physically integrate a network of heterogeneous computer hardware and software. As a result of this project, and industry experience, the need for an enhanced technique for information modeling was recognized. From the point of view of the contract administrators of the Air Force IDEF program, IDEF1X was a result of the ICAM IISS-6201 project and was further extended by the IISS-6202 project. To satisfy the data modeling enhancement requirements that were identified in the IISS-6202 project, a sub-contractor, DACOM, obtained a license to the Logical Database Design Technique (LDDT) and its supporting software (ADAM). From the point of view of the technical content of the modeling technique, IDEF1X is a renaming of LDDT. On September 2, 2008, the associated NIST standard, FIPS 184, was withdrawn (decision in Federal Register vol. 73 / page 51276). Since September 2012, IDEF1X has been part of the international standard ISO/IEC/IEEE 31320-2:2012. The standard describes the syntax and semantics of IDEF1X97, which consists of two conceptual modeling languages: a “key-style” language downward compatible with FIPS 184, which supports relational and extended relational databases, and a newer “identity-style” language suitable for object databases and object-oriented modeling. Logical database design technique The logical database design technique (LDDT) had been developed in 1982 by Robert G. Brown of The Database Design Group entirely outside the IDEF program and with no knowledge of IDEF1. Nevertheless, the central goal of IDEF1 and LDDT was the same: to produce a database-neutral model of the persistent information needed by an enterprise by modeling the real-world entities involved. LDDT combined elements of the relational data model, the E-R model, and data generalization in a way specifically intended to support data modeling and the transformation of the data models into database designs. LDDT included an environmental (namespace) hierarchy, multiple levels of model, the modeling of generalization/specialization, and the explicit representation of relationships by primary and foreign keys, supported by a well-defined role-naming facility. The primary keys and unambiguously role-named foreign keys expressed sometimes subtle uniqueness and referential integrity constraints that needed to be known and honored by whatever type of database was ultimately designed. Whether the database design used the integrity-constraint-based keys of the LDDT model as database access keys or indexes was an entirely separate decision. The precision and completeness of the LDDT models was an important factor in enabling the relatively smooth transformation of the models into database designs. 
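The migration of a primary key into a role-named foreign key, and the transformation of such a model into SQL data declaration statements, can be illustrated with a small, generic sketch. The example below uses Python's built-in sqlite3 module; the vendor and part tables, the column names, and the role name are hypothetical, and the DDL is only an example of the kind of output described here, not actual LDDT, ADAM, or Leverage output.
import sqlite3

# Hypothetical two-entity model: vendor (parent) and part (child).
# The parent's primary key migrates into the child as a role-named
# foreign key, carrying a referential integrity constraint with it.
ddl = """
CREATE TABLE vendor (
    vendor_id   INTEGER PRIMARY KEY,
    vendor_name TEXT NOT NULL
);
CREATE TABLE part (
    part_id            INTEGER PRIMARY KEY,
    supplier_vendor_id INTEGER NOT NULL,  -- role-named migrated key
    FOREIGN KEY (supplier_vendor_id) REFERENCES vendor (vendor_id)
);
"""

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce referential integrity
conn.executescript(ddl)

conn.execute("INSERT INTO vendor VALUES (1, 'Acme')")
conn.execute("INSERT INTO part VALUES (10, 1)")
try:
    # A part referencing a non-existent vendor violates the constraint.
    conn.execute("INSERT INTO part VALUES (11, 99)")
except sqlite3.IntegrityError as exc:
    print("referential integrity violated:", exc)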
Early LDDT models were transformed into database designs for IBM's hierarchical database, IMS. Later models were transformed into database designs for Cullinet's network database, IDMS, and many varieties of relational database. The LDDT software, ADAM, supported view (model) entry, view merging, selective (subset) viewing, namespace inheritance, normalization, a quality assurance analysis of views, entity relationship graph and report generation, transformation to a relational database expressed as SQL data declaration statements, and referential integrity checking SQL. Logical models were serialized with a structural modeling language. The graphic syntax of LDDT differed from that of IDEF1 and, more importantly, LDDT contained many interrelated modeling concepts not present in IDEF1. Therefore, instead of extending IDEF1, Mary E. Loomis of DACOM wrote a concise summary of the syntax and semantics of a substantial subset of LDDT, using terminology compatible with IDEF1 wherever possible. DACOM labeled the result IDEF1X and supplied it to the ICAM program, which published it in 1985. (IEEE 1998, p. iii) (Bruce 1992, p. xii) DACOM also converted the ADAM software to C and sold it under the name Leverage. IDEF1X building blocks Entities The representation of a class of real or abstract things (people, objects, places, events, ideas, combination of things, etc.) that are recognized as instances of the same class because they share the same characteristics and can participate in the same relationships. Domains A named set of data values (fixed, or possibly infinite in number) all of the same data type, upon which the actual value for an attribute instance is drawn. Every attribute must be defined on exactly one underlying domain. Multiple attributes may be based on the same underlying domain. Attributes A property or characteristic that is common to some or all of the instances of an entity. An attribute represents the use of a domain in the context of an entity. Keys An attribute, or combination of attributes, of an entity whose values uniquely identify each entity instance. Each such set constitutes a candidate key. Primary keys The candidate key selected as the unique identifier of an entity. Foreign keys An attribute, or combination of attributes of a child or category entity instance whose values match those in the primary key of a related parent or generic entity instance. A foreign key can be viewed as the result of the "migration" of the primary key of the parent or generic entity through a specific connection or categorization relationship. An attribute or combination of attributes in the foreign key can be assigned a role name reflecting its role in the child or category entity. Relationships An association between the instances of two entities or between instances of the same entity. Connection relationships A relationship having no semantics in addition to association. See constraint, cardinality. Categorization relationships A relationship in which instances of both entities represent the same real or abstract thing. One entity (generic entity) represents the complete set of things, the other (category entity) represents a sub-type or sub-classification of those things. The category entity may have one or more characteristics, or a relationship with instances of another entity, not shared by all generic entity instances. Each instance of the category entity is simultaneously an instance of the generic entity. 
Non-specific relationships A relationship in which an instance of either entity can be related to any number of instances of the other. View levels Three levels of view are defined in IDEF1X: entity relationship (ER), key-based (KB), and fully attributed (FA). They differ in level of abstraction. The ER level is the most abstract. It models the most fundamental elements of the subject area - the entities and their relationships. It is usually broader in scope than the other levels. The KB level adds keys and the FA level adds all the attributes. IDEF1X topics The three schema approach The three-schema approach in software engineering is an approach to building information systems and systems information management that promotes the conceptual model as the key to achieving data integration. A schema is a model, usually depicted by a diagram and sometimes accompanied by a language description. The three schemas used in this approach are: External schema for user views Conceptual schema integrates external schemata Internal schema that defines physical storage structures. At the center, the conceptual schema defines the ontology of the concepts as the users think of them and talk about them. The physical schema describes the internal formats of the data stored in the database, and the external schema defines the view of the data presented to the application programs. The framework attempted to permit multiple data models to be used for external schemata. Modeling guidelines The modeling process can be divided into five stages of model development. Phase zero – project initiation The objectives of the project initiation phase include: Project definition – a general statement of what has to be done, why, and how it will get done Source material – a plan for the acquisition of source material, including indexing and filing Author conventions – a fundamental declaration of the conventions (optional methods) by which the author chooses to make and manage the model. Phase one – entity definition The objective of the entity definition phase is to identify and define the entities that fall within the problem domain being modeled. Phase two – relationship definition The objective of the relationship definition phase is to identify and define the basic relationships between entities. At this stage of modeling, some relationships may be non-specific and will require additional refinement in subsequent phases. The primary outputs from phase two are: Relationship matrix Relationship definitions Entity-level diagrams. Phase three – key definitions The objectives of the key definitions phase are to: Refine the non-specific relationships from phase two Define key attributes for each entity Migrate primary keys to establish foreign keys Validate relationships and keys. Phase four – attribute definition The objectives of the attribute definition phase are to: Develop an attribute pool Establish attribute ownership Define non-key attributes Validate and refine the data structure. IDEF1X meta model A meta model is a model of the constructs of a modeling system. Like any model, it is used to represent and reason about the subject of the model - in this case IDEF1X. The meta model is used to reason about IDEF1X, i.e., what the constructs of IDEF1X are and how they relate to one another. The model shown is an IDEF1X model of IDEF1X. Such meta models can be used for various purposes, such as repository design, tool design, or in order to specify the set of valid IDEF1X models. 
Depending on the purpose, somewhat different models result. There is no “one right model.” For example, a model for a tool that supports building models incrementally must allow incomplete or even inconsistent models. The meta model for formalization, however, emphasizes alignment with the concepts of the formalization and hence incomplete or inconsistent models are not allowed. Meta models have two important limitations. First, they specify syntax but not semantics. Second, a meta model must be supplemented with constraints in natural or formal language. The formal theory of IDEF1X provides both the semantics and a means to precisely express the necessary constraints. A meta model for IDEF1X is given in the adjacent figure. The name of the view is mm. The domain hierarchy and constraints are also given. The constraints are expressed as sentences in the formal theory of the meta model. The meta model informally defines the set of valid IDEF1X models in the usual way, as the sample instance tables that correspond to a valid IDEF1X model. The meta model also formally defines the set of valid IDEF1X models in the following way. The meta model, as an IDEF1X model, has a corresponding formal theory. The semantics of the theory are defined in the standard way. That is, an interpretation of a theory consists of a domain of individuals and a set of assignments: To each constant in the theory, an individual in the domain is assigned To each n-ary function symbol in the theory, an n-ary function over the domain is assigned To each n-ary predicate symbol in the theory, an n-ary relation over the domain is assigned. In the intended interpretation, the domain of individuals consists of views, such as production; entities, such as part and vendor; domains, such as qty_on_hand; connection relationships; category clusters; and so on. If every axiom in the theory is true in the interpretation, then the interpretation is called a model for the theory. Every model for the IDEF1X theory corresponding to the IDEF1X meta model and its constraints is a valid IDEF1X model. See also Conceptual model (computer science) Crow's foot notation ER/Studio Enterprise Architect (software) IDEF0 IDEF5 ISO 10303 Logic Works Weak entity References Further reading Thomas A. Bruce (1992). Designing Quality Databases With Idef1X Information Models. Dorset House Publishing. Y. Tina Lee & Shigeki Umeda (2000). "An IDEF1x Information Model for a Supply Chain Simulation". External links ISO/IEC/IEEE 31320-2:2012 FIPS Publication 184 Announcing the IDEF1X Standard December 1993 by the Computer Systems Laboratory of the National Institute of Standards and Technology (NIST). (Withdrawn by NIST 08 Sep 02 see Withdrawn FIPS by Numerical Order Index) Federal Register vol. 73 / page 51276 withdrawal decision Overview of IDEF1X at www.idef.com IDEF1X Overview from Essential Strategies, Inc. Data modeling Data modeling diagrams Data modeling languages Systems analysis
227018
https://en.wikipedia.org/wiki/Open%20standard
Open standard
An open standard is a standard that is publicly available and has various rights to use associated with it and may also have various properties of how it was designed (e.g. open process). There is no single definition, and interpretations vary with usage. The terms open and standard have a wide range of meanings associated with their usage. There are a number of definitions of open standards which emphasize different aspects of openness, including the openness of the resulting specification, the openness of the drafting process, and the ownership of rights in the standard. The term "standard" is sometimes restricted to technologies approved by formalized committees that are open to participation by all interested parties and operate on a consensus basis. The definitions of the term open standard used by academics, the European Union, and some of its member governments or parliaments such as Denmark, France, and Spain preclude open standards requiring fees for use, as do the New Zealand, South African and the Venezuelan governments. On the standard organisation side, the World Wide Web Consortium (W3C) ensures that its specifications can be implemented on a royalty-free basis. Many definitions of the term standard permit patent holders to impose "reasonable and non-discriminatory licensing" royalty fees and other licensing terms on implementers or users of the standard. For example, the rules for standards published by the major internationally recognized standards bodies such as the Internet Engineering Task Force (IETF), International Organization for Standardization (ISO), International Electrotechnical Commission (IEC), and ITU-T permit their standards to contain specifications whose implementation will require payment of patent licensing fees. Among these organizations, only the IETF and ITU-T explicitly refer to their standards as "open standards", while the others refer only to producing "standards". The IETF and ITU-T use definitions of "open standard" that allow "reasonable and non-discriminatory" patent licensing fee requirements. There are those in the open-source software community who hold that an "open standard" is only open if it can be freely adopted, implemented and extended. While open standards or architectures are considered non-proprietary in the sense that the standard is either unowned or owned by a collective body, it can still be publicly shared and not tightly guarded. The typical example of “open source” that has become a standard is the personal computer originated by IBM and now referred to as Wintel, the combination of the Microsoft operating system and Intel microprocessor. There are three others that are most widely accepted as “open” which include the GSM phones (adopted as a government standard), Open Group which promotes UNIX and the like, and the Internet Engineering Task Force (IETF) which created the first standards of SMTP and TCP/IP. Buyers tend to prefer open standards which they believe offer them cheaper products and more choice for access due to network effects and increased competition between vendors. Open standards which specify formats are sometimes referred to as open formats. Many specifications that are sometimes referred to as standards are proprietary and only available under restrictive contract terms (if they can be obtained at all) from the organization that owns the copyright on the specification. As such these specifications are not considered to be fully open. 
Joel West has argued that "open" standards are not black and white but have many different levels of "openness". A more open standard tends to occur when the knowledge of the technology becomes dispersed enough that competition is increased and others are able to start copying the technology as they implement it. This occurred with the Wintel architecture as others were able to start imitating the software. Less open standards exist when a particular firm has much power (not ownership) over the standard, which can occur when a firm's platform “wins” in standard setting or the market makes one platform most popular. Specific definitions of an open standard Joint IEEE, ISOC, W3C, IETF and IAB Definition On August 12, 2012, the Institute of Electrical and Electronics Engineers (IEEE), Internet Society (ISOC), World Wide Web Consortium (W3C), Internet Engineering Task Force (IETF) and Internet Architecture Board (IAB) jointly affirmed a set of principles which have contributed to the exponential growth of the Internet and related technologies. The “OpenStand Principles” define open standards and establish the building blocks for innovation. Standards developed using the OpenStand principles are developed through an open, participatory process, support interoperability, foster global competition, are voluntarily adopted on a global level and serve as building blocks for products and services targeted to meet the needs of markets and consumers. This drives innovation which, in turn, contributes to the creation of new markets and the growth and expansion of existing markets. There are five key OpenStand Principles, as outlined below: 1. Cooperation Respectful cooperation between standards organizations, whereby each respects the autonomy, integrity, processes, and intellectual property rules of the others. 2. Adherence to Principles - Adherence to the five fundamental principles of standards development, namely Due process: Decisions are made with equity and fairness among participants. No one party dominates or guides standards development. Standards processes are transparent and opportunities exist to appeal decisions. Processes for periodic standards review and updating are well defined. Broad consensus: Processes allow for all views to be considered and addressed, such that agreement can be found across a range of interests. Transparency: Standards organizations provide advance public notice of proposed standards development activities, the scope of work to be undertaken, and conditions for participation. Easily accessible records of decisions and the materials used in reaching those decisions are provided. Public comment periods are provided before final standards approval and adoption. Balance: Standards activities are not exclusively dominated by any particular person, company or interest group. Openness: Standards processes are open to all interested and informed parties. 3. Collective Empowerment Commitment by affirming standards organizations and their participants to collective empowerment by striving for standards that: are chosen and defined based on technical merit, as judged by the contributed expertise of each participant; provide global interoperability, scalability, stability, and resiliency; enable global competition; serve as building blocks for further innovation; and contribute to the creation of global communities, benefiting humanity. 4. Availability Standards specifications are made accessible to all for implementation and deployment. 
Affirming standards organizations have defined procedures to develop specifications that can be implemented under fair terms. Given market diversity, fair terms may vary from royalty-free to fair, reasonable, and non-discriminatory terms (FRAND). 5. Voluntary Adoption Standards are voluntarily adopted and success is determined by the market. ITU-T definition The ITU-T is a standards development organization (SDO) that is one of the three sectors of the International Telecommunications Union (a specialized agency of the United Nations). The ITU-T has a Telecommunication Standardization Bureau director's Ad Hoc group on IPR that produced the following definition in March 2005, which the ITU-T as a whole has endorsed for its purposes since November 2005: The ITU-T has a long history of open standards development. However, recently some different external sources have attempted to define the term "Open Standard" in a variety of different ways. In order to avoid confusion, the ITU-T uses for its purpose the term "Open Standards" per the following definition: "Open Standards" are standards made available to the general public and are developed (or approved) and maintained via a collaborative and consensus driven process. "Open Standards" facilitate interoperability and data exchange among different products or services and are intended for widespread adoption. Other elements of "Open Standards" include, but are not limited to: Collaborative process – voluntary and market driven development (or approval) following a transparent consensus driven process that is reasonably open to all interested parties. Reasonably balanced – ensures that the process is not dominated by any one interest group. Due process - includes consideration of and response to comments by interested parties. Intellectual property rights (IPRs) – IPRs essential to implement the standard to be licensed to all applicants on a worldwide, non-discriminatory basis, either (1) for free and under other reasonable terms and conditions or (2) on reasonable terms and conditions (which may include monetary compensation). Negotiations are left to the parties concerned and are performed outside the SDO. Quality and level of detail – sufficient to permit the development of a variety of competing implementations of interoperable products or services. Standardized interfaces are not hidden, or controlled other than by the SDO promulgating the standard. Publicly available – easily available for implementation and use, at a reasonable price. Publication of the text of a standard by others is permitted only with the prior approval of the SDO. On-going support – maintained and supported over a long period of time. The ITU-T, ITU-R, ISO, and IEC have harmonized on a common patent policy under the banner of the WSC. However, the ITU-T definition should not necessarily be considered also applicable in ITU-R, ISO and IEC contexts, since the Common Patent Policy does not make any reference to "open standards" but rather only to "standards." IETF definition In section 7 of its RFC 2026, the IETF classifies specifications that have been developed in a manner similar to that of the IETF itself as being "open standards," and lists the standards produced by ANSI, ISO, IEEE, and ITU-T as examples. As the IETF standardization processes and IPR policies have the characteristics listed above by ITU-T, the IETF standards fulfill the ITU-T definition of "open standards." 
However, the IETF has not adopted a specific definition of "open standard"; both RFC 2026 and the IETF's mission statement (RFC 3935) talk about "open process," but RFC 2026 does not define "open standard" except for the purpose of defining what documents IETF standards can link to. RFC 2026 belongs to a set of RFCs collectively known as BCP 9 (Best Common Practice, an IETF policy). RFC 2026 was later updated by BCP 78 and 79 (among others). As of 2011 BCP 78 is RFC 5378 (Rights Contributors Provide to the IETF Trust), and BCP 79 consists of RFC 3979 (Intellectual Property Rights in IETF Technology) and a clarification in RFC 4879. The changes are intended to be compatible with the "Simplified BSD License" as stated in the IETF Trust Legal Provisions and Copyright FAQ based on RFC 5377. In August 2012, the IETF combined with the W3C and IEEE to launch OpenStand and to publish The Modern Paradigm for Standards. This captures "the effective and efficient standardization processes that have made the Internet and Web the premiere platforms for innovation and borderless commerce". The declaration was then published in the form of RFC 6852 in January 2013. European Interoperability Framework for Pan-European eGovernment Services The European Union defined the term for use within its European Interoperability Framework for Pan-European eGovernment Services, Version 1.0, although it does not claim to be a universal definition for all European Union use and documentation. To reach interoperability in the context of pan-European eGovernment services, guidance needs to focus on open standards. The word "open" is here meant in the sense of fulfilling the following requirements: The standard is adopted and will be maintained by a not-for-profit organization, and its ongoing development occurs on the basis of an open decision-making procedure available to all interested parties (consensus or majority decision etc.). The standard has been published and the standard specification document is available either freely or at a nominal charge. It must be permissible to all to copy, distribute and use it for no fee or at a nominal fee. The intellectual property - i.e. patents possibly present - of (parts of) the standard is made irrevocably available on a royalty-free basis. There are no constraints on the re-use of the standard. Network Centric Operations Industry Consortium definition The Network Centric Operations Industry Consortium (NCOIC) defines open standard as the following: Specifications for hardware and/or software that are publicly available, implying that multiple vendors can compete directly based on the features and performance of their products. It also implies that the existing open system can be removed and replaced with that of another vendor with minimal effort and without major interruption. Danish government definition The Danish government has attempted to make a definition of open standards, which also is used in pan-European software development projects. It states: An open standard is accessible to everyone free of charge (i.e. there is no discrimination between users, and no payment or other considerations are required as a condition of use of the standard) An open standard of necessity remains accessible and free of charge (i.e. 
owners renounce their options, if indeed such exist, to limit access to the standard at a later date, for example, by committing themselves to openness during the remainder of a possible patent's life) An open standard is accessible free of charge and documented in all its details (i.e. all aspects of the standard are transparent and documented, and both access to and use of the documentation is free) French law definition The French Parliament approved a definition of "open standard" in its "Law for Confidence in the Digital Economy." The definition is: By open standard is understood any communication, interconnection or interchange protocol, and any interoperable data format whose specifications are public and without any restriction in their access or implementation. Indian Government Definition India's Government takes a clear royalty-free stance with far-reaching requirements: 4.1 Mandatory Characteristics An Identified Standard will qualify as an “Open Standard”, if it meets the following criteria: 4.1.1 Specification document of the Identified Standard shall be available with or without a nominal fee. 4.1.2 The Patent claims necessary to implement the Identified Standard shall be made available on a Royalty-Free basis for the lifetime of the Standard. 4.1.3 Identified Standard shall be adopted and maintained by a not-for-profit organization, wherein all stakeholders can opt to participate in a transparent, collaborative and consensual manner. 4.1.4 Identified Standard shall be recursively open as far as possible. 4.1.5 Identified Standard shall have technology-neutral specification. 4.1.6 Identified Standard shall be capable of localization support, where applicable, for all Indian official Languages for all applicable domains. Italian Law definition Italy has a general rule for the entire public sector dealing with Open Standards, although concentrating on data formats, in Art. 68 of the Code of the Digital Administration (Codice dell'Amministrazione Digitale) [applications must] allow representation of data under different formats, at least one being an open data format. [...] [it is defined] an open data format, a data format which is made public, is thoroughly documented and neutral with regard to the technological tools needed to peruse the same data. Spanish law definition A Law passed by the Spanish Parliament requires that all electronic services provided by the Spanish public administration must be based on open standards. It defines an open standard as royalty free, according to the following definition: An open standard fulfills the following conditions: it is public, and its use is available on a free [gratis] basis, or at a cost that does not imply a difficulty for the user. its use is not subject to the payment of any intellectual [copyright] or industrial [patents and trademarks] property right. Venezuelan law definition The Venezuelan Government approved a "free software and open standards law." The decree includes the requirement that the Venezuelan public sector must use free software based on open standards, and includes a definition of open standard: Article 2: for the purposes of this Decree, it shall be understood as k) Open standards: technical specifications, published and controlled by an organization in charge of their development, that have been accepted by the industry, available to everybody for their implementation in free software or other [type of software], promoting competitiveness, interoperability and flexibility. 
South African Government definition The South African Government approved a definition in the "Minimum Interoperability Operating Standards Handbook" (MIOS). For the purposes of the MIOS, a standard shall be considered open if it meets all of these criteria. There are standards which we are obliged to adopt for pragmatic reasons which do not necessarily fully conform to being open in all respects. In such cases, where an open standard does not yet exist, the degree of openness will be taken into account when selecting an appropriate standard: It should be maintained by a non-commercial organization. Participation in the ongoing development work is based on decision-making processes that are open to all interested parties. Open access: all may access committee documents, drafts and completed standards free of cost or for a negligible fee. It must be possible for everyone to copy, distribute and use the standard free of cost. The intellectual rights required to implement the standard (e.g. essential patent claims) are irrevocably available, without any royalties attached. There are no reservations regarding reuse of the standard. There are multiple implementations of the standard. New Zealand official interoperability framework definition The E-Government Interoperability Framework (e-GIF) defines open standard as royalty free according to the following text: While a universally agreed definition of "open standards" is unlikely to be resolved in the near future, the e-GIF accepts that a definition of “open standards” needs to recognise a continuum that ranges from closed to open, and encompasses varying degrees of "openness." To guide readers in this respect, the e-GIF endorses "open standards" that exhibit the following properties: Be accessible to everyone free of charge: no discrimination between users, and no payment or other considerations should be required as a condition to use the standard. Remain accessible to everyone free of charge: owners should renounce their options, if any, to limit access to the standard at a later date. Be documented in all its details: all aspects of the standard should be transparent and documented, and both access to and use of the documentation should be free. The e-GIF performs the same function in e-government as the Road Code does on the highways. Driving would be excessively costly, inefficient, and ineffective if road rules had to be agreed each time one vehicle encountered another. Bruce Perens' definition One of the most popular definitions of the term "open standard", as measured by Google ranking, is the one developed by Bruce Perens. His definition lists a set of principles that he believes must be met by an open standard: Availability: Open Standards are available for all to read and implement. Maximize End-User Choice: Open Standards create a fair, competitive market for implementations of the standard. They do not lock the customer into a particular vendor or group. No Royalty: Open Standards are free for all to implement, with no royalty or fee. Certification of compliance by the standards organization may involve a fee. No Discrimination: Open Standards and the organizations that administer them do not favor one implementor over another for any reason other than the technical standards compliance of a vendor's implementation. Certification organizations must provide a path for low and zero-cost implementations to be validated, but may also provide enhanced certification services. 
Extension or Subset: Implementations of Open Standards may be extended, or offered in subset form. However, certification organizations may decline to certify subset implementations, and may place requirements upon extensions (see Predatory Practices). Predatory Practices: Open Standards may employ license terms that protect against subversion of the standard by embrace-and-extend tactics. The licenses attached to the standard may require the publication of reference information for extensions, and a license for all others to create, distribute, and sell software that is compatible with the extensions. An Open Standard may not otherwise prohibit extensions. Bruce Perens goes on to explain further the points in the standard in practice. With regard to availability, he states that "any software project should be able to afford a copy without undue hardship. The cost should not far exceed the cost of a college textbook". Microsoft's definition Vijay Kapoor, national technology officer, Microsoft, defines what open standards are as follows: Let's look at what an open standard means: 'open' refers to it being royalty-free, while 'standard' means a technology approved by formalized committees that are open to participation by all interested parties and operate on a consensus basis. An open standard is publicly available, and developed, approved and maintained via a collaborative and consensus driven process. Overall, Microsoft's relationship to open standards was, at best, mixed. While Microsoft participated in the most significant standard-setting organizations that establish open standards, it was often seen as oppositional to their adoption. Open Source Initiative's definition The Open Source Initiative defines the requirements and criteria for open standards as follows: The Requirement An "open standard" must not prohibit conforming implementations in open source software. The Criteria To comply with the Open Standards Requirement, an "open standard" must satisfy the following criteria. If an "open standard" does not meet these criteria, it will be discriminating against open source developers. No Intentional Secrets: The standard MUST NOT withhold any detail necessary for interoperable implementation. As flaws are inevitable, the standard MUST define a process for fixing flaws identified during implementation and interoperability testing and to incorporate said changes into a revised version or superseding version of the standard to be released under terms that do not violate the OSR. Availability: The standard MUST be freely and publicly available (e.g., from a stable web site) under royalty-free terms at reasonable and non-discriminatory cost. Patents: All patents essential to implementation of the standard MUST: be licensed under royalty-free terms for unrestricted use, or be covered by a promise of non-assertion when practiced by open source software No Agreements: There MUST NOT be any requirement for execution of a license agreement, NDA, grant, click-through, or any other form of paperwork to deploy conforming implementations of the standard. No OSR-Incompatible Dependencies: Implementation of the standard MUST NOT require any other technology that fails to meet the criteria of this Requirement. 
Ken Krechmer's definition Ken Krechmer identifies ten "rights": Open Meeting Consensus Due Process Open IPR One World Open Change Open Documents Open Interface Open Use On-going Support World Wide Web Consortium's definition As a provider of Web technology ICT Standards, notably XML, http, HTML, CSS and WAI, the World Wide Web Consortium (W3C) follows a process that promotes the development of quality standards. Looking at the end result, the spec alone, up for adoption, is not enough. The participative/inclusive process leading to a particular design, and the supporting resources available with it should be accounted when we talk about Open Standards: transparency (due process is public, and all technical discussions, meeting minutes, are archived and referencable in decision making) relevance (new standardization is started upon due analysis of the market needs, including requirements phase, e.g. accessibility, multi-linguism) openness (anybody can participate, and everybody does: industry, individual, public, government bodies, academia, on a worldwide scale) impartiality and consensus (guaranteed fairness by the process and the neutral hosting of the W3C organization, with equal weight for each participant) availability (free access to the standard text, both during development, at final stage, and for translations, and assurance that core Web and Internet technologies can be implemented Royalty-Free) maintenance (ongoing process for testing, errata, revision, permanent access, validation, etc.) In August 2012, the W3C combined with the IETF and IEEE to launch OpenStand and to publish The Modern Paradigm for Standards. This captures "the effective and efficient standardization processes that have made the Internet and Web the premiere platforms for innovation and borderless commerce". Digital Standards Organization definition The Digital Standards Organization (DIGISTAN) states that "an open standard must be aimed at creating unrestricted competition between vendors and unrestricted choice for users." Its brief definition of "open standard" (or "free and open standard") is "a published specification that is immune to vendor capture at all stages in its life-cycle." Its more complete definition as follows: "The standard is adopted and will be maintained by a not-for-profit organization, and its ongoing development occurs on the basis of an open decision-making procedure available to all interested parties. The standard has been published and the standard specification document is available freely. It must be permissible to all to copy, distribute, and use it freely. The patents possibly present on (parts of) the standard are made irrevocably available on a royalty-free basis. There are no constraints on the re-use of the standard. A key defining property is that an open standard is immune to vendor capture at all stages in its life-cycle. Immunity from vendor capture makes it possible to improve upon, trust, and extend an open standard over time." This definition is based on the EU's EIF v1 definition of "open standard," but with changes to address what it terms as "vendor capture." They believe that "Many groups and individuals have provided definitions for 'open standard' that reflect their economic interests in the standards process. We see that the fundamental conflict is between vendors who seek to capture markets and raise costs, and the market at large, which seeks freedom and lower costs... Vendors work hard to turn open standards into franchise standards. 
They work to change the statutory language so they can cloak franchise standards in the sheep's clothing of 'open standard.' A robust definition of "free and open standard" must thus take into account the direct economic conflict between vendors and the market at large." Free Software Foundation Europe's definition The Free Software Foundation Europe (FSFE) uses a definition which is based on the European Interoperability Framework v.1, and was extended after consultation with industry and community stakeholders. FSFE's standard has been adopted by groups such as the SELF EU Project, the 2008 Geneva Declaration on Standards and the Future of the Internet, and international Document Freedom Day teams. According to this definition, an Open Standard is a format or protocol that is: Subject to full public assessment and use without constraints in a manner equally available to all parties; Without any components or extensions that have dependencies on formats or protocols that do not meet the definition of an Open Standard themselves; Free from legal or technical clauses that limit its utilisation by any party or in any business model; Managed and further developed independently of any single vendor in a process open to the equal participation of competitors and third parties; Available in multiple complete implementations by competing vendors, or as a complete implementation equally available to all parties. FFII's definition The Foundation for a Free Information Infrastructure's definition is said to coincide with the definition issued in the European Interoperability Framework released in 2004. A specification that is public; the standard is inclusive and has been developed and is maintained in an open standardization process; everybody can implement it without any restriction and without payment to license the IPR (which is granted to everybody for free and without any condition). These are the minimum license terms required by standardization bodies such as the W3C. Of course, all the other bodies accept open standards. But the specification itself could cost a fair amount of money (e.g. 100–400 EUR per copy, as with ISO, because of the copyright and publication of the document itself). UK government definition The UK government's definition of open standards applies to software interoperability, data and document formats. The criteria for open standards are published in the “Open Standards Principles” policy paper and are as follows. Collaboration - the standard is maintained through a collaborative decision-making process that is consensus based and independent of any individual supplier. Involvement in the development and maintenance of the standard is accessible to all interested parties. Transparency - the decision-making process is transparent, and a publicly accessible review by subject matter experts is part of the process. Due process - the standard is adopted by a specification or standardisation organisation, or a forum or consortium with a feedback and ratification process to ensure quality. Fair access - the standard is published, thoroughly documented and publicly available at zero or low cost. Zero cost is preferred but this should be considered on a case-by-case basis as part of the selection process. Cost should not be prohibitive or likely to cause a barrier to a level playing field. Market support - other than in the context of creating innovative solutions, the standard is mature, supported by the market and demonstrates platform, application and vendor independence. 
Rights - rights essential to implementation of the standard, and for interfacing with other implementations which have adopted that same standard, are licensed on a royalty-free basis that is compatible with both open source and proprietary licensed solutions. These rights should be irrevocable unless there is a breach of licence conditions. The Cabinet Office in the UK recommends that government departments specify requirements using open standards when undertaking procurement exercises in order to promote interoperability and re-use, and avoid technological lock-in. Comparison of definitions Examples of open standards Note that because the various definitions of "open standard" differ in their requirements, the standards listed below may not be open by every definition. System World Wide Web architecture specified by W3C Hardware Extended Industry Standard Architecture (EISA) (a specification for plug-in boards to 16-bit IBM-architecture PCs, later standardized by the IEEE) Industry Standard Architecture (ISA) (a retroactively named specification for plug-in boards to 8-bit IBM-architecture PCs; the short-lived EISA and the renaming of ISA were in response to IBM's move from the "AT standard bus" to the proprietary Micro Channel Architecture). Peripheral Component Interconnect (PCI) (a specification by Intel Corporation for plug-in boards to IBM-architecture PCs) Accelerated Graphics Port (AGP) (a specification by Intel Corporation for plug-in boards to IBM-architecture PCs) PCI Industrial Computer Manufacturers Group (PICMG) (an industry consortium developing Open Standards specifications for computer architectures) Synchronous dynamic random-access memory (SDRAM) and its DDR SDRAM variants (by JEDEC Solid State Technology Association) Universal Serial Bus (USB) (by USB Implementers Forum) DiSEqC by Eutelsat—under the "IPR, TRADEMARK AND LOGO" section of the Recommendation for Implementation document, it is stated: DiSEqC is an open standard, no license is required or royalty is to be paid to the rightholder EUTELSAT. DiSEqC is a trademark of EUTELSAT. Conditions for use of the trademark and the DiSEqC can be obtained from EUTELSAT. File formats Computer Graphics Metafile (CGM) (file format for 2D vector graphics, raster graphics, and text defined by ISO/IEC 8632) Darwin Information Typing Architecture (DITA) (a format and architecture to create and maintain technical documentation defined by OASIS) Hypertext Markup Language (HTML), Extensible HTML (XHTML) and HTML5 (specifications of the W3C for structured hyperlinked document formatting) Office Open XML (a specification by Microsoft for document, spreadsheet and presentation formats, approved by ISO as ISO/IEC 29500) (openness is contested) Ogg (a container for Vorbis, FLAC, Speex, Opus (audio formats) & Theora (a video format), by the Xiph.Org Foundation) OpenDocument Format (ODF) (by OASIS for document, spreadsheet, presentation, graphics, math and other formats, approved by ISO as ISO/IEC 26300) Opus (audio codec, defined by IETF RFC 6716) Portable Document Format (PDF/X) (a specification by Adobe Systems Incorporated for formatted documents, later approved by ISO as ISO 15930-1:2001) Portable Network Graphics (PNG) (a bitmapped image format that employs lossless data compression, approved by ISO as ISO/IEC 15948:2004) Scalable Vector Graphics (SVG) (a specification for two-dimensional vector graphics developed by the World Wide Web Consortium (W3C)). 
Protocols Connected Home over IP (also known as "Project Connected Home over IP" or "CHIP" for short) is a proprietary, royalty-free home automation connectivity standard project which features compatibility among different smart home and Internet of things (IoT) products and software. Internet Protocol (IP) (a specification of the IETF for transmitting packets of data on a network - specifically, IETF RFC 791) Matrix is an open communication protocol for decentralized real-time communication which is mostly used for instant messaging MQTT (Message Queuing Telemetry Transport) is a lightweight, publish-subscribe network protocol that transports messages between devices. Transmission Control Protocol (TCP) (a specification of the IETF for implementing streams of data on top of IP - specifically, IETF RFC 793) OMA Data Synchronization and Device Management (a platform-independent data synchronization protocol, specified by The SyncML Initiative/Open Mobile Alliance) XMPP - an open protocol for near-real-time instant messaging (IM) and presence information (a.k.a. buddy lists) Programming languages ANSI C (a general-purpose programming language, approved by ISO as ISO/IEC 9899) Ada, a multi-paradigm programming language, defined by joint ISO/ANSI standard, combined with major Amendment ISO/IEC 8652:1995/Amd 1:2007 MUMPS, a dynamically typed programming language, originally designed for database-driven applications in the healthcare industry approved by ISO as ISO/IEC 11756:1992 and ISO/IEC 11756:1999 Other Data2Dome, a standard for planetarium dome content distribution. Apdex (Application Performance Index) (specifies a uniform way to analyze and report on the degree to which the measured performance of software applications meets user expectations) Application Response Measurement (ARM) (defines an API for the C and Java programming languages to measure application transaction response times, adopted by The Open Group) CD-ROM (Yellow Book) (a specification for data interchange on read-only 120 mm optical data disks, approved by ISO as ISO/IEC 10149 and ECMA as ECMA-130) Common Information Model (CIM) (a specification by DMTF for defining how managed elements in an IT environment are represented as a common set of objects and relationships between them) Universal Data Element Framework (UDEF) an open standard by The Open Group that provides the foundation for building an enterprise-wide controlled vocabulary enabling interoperability. CIPURSE, an open standard by the OSPT Alliance, which is a set of specifications to implement a secure element (contactless smart card, NFC SIM, embedded secure element) for Urban Transport Network and Value Added Services. OpenReference, an open reference model for business performance, processes and practices. Pipeline Open Data Standard (PODS) Examples of associations JEDEC Solid State Technology Association - sets the SDRAM open standard Open Geospatial Consortium - develops and publishes open standards for spatial data and services Open Handset Alliance - sets open standards for mobile device hardware. OSPT Alliance - sets the open standard named CIPURSE A4L - the Access For Learning Association sets K-12 educational data interoperability structures. USB Implementers Forum - sets standards for Universal Serial Bus World Wide Web Consortium (W3C) - sets open standards for the Internet, such as protocols, programming languages, etc. 
Patents In 2002 and 2003 the controversy about using reasonable and non-discriminatory (RAND) licensing for the use of patented technology in web standards increased. Bruce Perens, important associations such as the FSF or FFII, and others have argued that the use of patents restricts who can implement a standard to those able or willing to pay for the use of the patented technology. The requirement to pay some small amount per user is often an insurmountable problem for free/open source software implementations which can be redistributed by anyone. Royalty free (RF) licensing is generally the only possible license for free/open source software implementations. Version 3 of the GNU General Public License includes a section that enjoins anyone who distributes a program released under the GPL from enforcing patents on subsequent users of the software or derivative works. One result of this controversy was that many governments (including the Danish, French and Spanish governments singly and the EU collectively) specifically affirmed that "open standards" required royalty-free licenses. Some standards organizations, such as the W3C, modified their processes to essentially only permit royalty-free licensing. Patents for software, formulas and algorithms are currently enforceable in the US but not in the EU. The European Patent Convention expressly prohibits algorithms, business methods and software from being covered by patents. The US has only allowed them since 1989 and there has been growing controversy in recent years as to their benefit and feasibility. A standards body and its associated processes cannot force a patent holder to give up its right to charge license fees, especially if the company concerned is not a member of the standards body and unconstrained by any rules that were set during the standards development process. In fact, this element discourages some standards bodies from adopting an "open" approach, fearing that they will lose out if their members are more constrained than non-members. Few bodies will carry out (or require their members to carry out) a full patent search. Ultimately, the only sanctions a standards body can apply to a non-member when patent licensing is demanded are to cancel the standard, try to rework it around the patent, or work to invalidate the patent. Standards bodies such as W3C and OASIS require that the use of required patents be granted under a royalty-free license as a condition for joining the body or a particular working group, and this is generally considered enforceable. Examples of patent claims brought against standards previously thought to be open include JPEG and the Rambus case over DDR SDRAM. The H.264 video codec is an example of a standards organization producing a standard that has known, non-royalty-free required patents. Often the scope of the standard itself determines how likely it is that a firm will be able to use a standard as patent-like protection. Richard Langlois argues that standards with a wide scope may offer a firm some level of protection from competitors but it is likely that Schumpeterian creative destruction will ultimately leave the firm open to being "invented around" regardless of the standard a firm may benefit from. Quotes EU Commissioner Erkki Liikanen: "Open standards are important to help create interoperable and affordable solutions for everybody. They also promote competition by setting up a technical playing field that is level to all market players. 
This means lower costs for enterprises and, ultimately, the consumer." (World Standards Day, 14 October 2003) Jorma Ollila, Chairman of Nokia's Board of Directors: "... Open standards and platforms create a foundation for success. They enable interoperability of technologies and encourage innovativeness and healthy competition, which in turn increases consumer choice and opens entirely new markets," W3C Director Tim Berners-Lee: "The decision to make the Web an open system was necessary for it to be universal. You can't propose that something be a universal space and at the same time keep control of it." In the opening address of The Southern African Telecommunications Networks and Applications Conference (SATNAC) 2005, then Minister of Science and Technology, Mosibudi Mangena stressed the need for open standards in ICT: See also Conformity assessment Free software Free standard Network effect Open data Open-design movement Open-source hardware Open specifications Open system (computing) Specification (technical standard) Vendor lock-in References Further reading Opening Standards: The Global Politics of Interoperability, Laura DeNardis, editor, MIT Press, 2011. Experts from industry, academia, and public policy examine what is at stake economically and politically in debates about open standards. External links Open U.S. Standards Development for Telecommunications Bruce Perens: Open Standards: Principles and Practice Ken Krechmer: The Principles of Open Standards Bob Sutor: Open Standards vs. Open Source: How to think about software, standards, and Service Oriented Architecture at the beginning of the 21st century European Commission: Valoris report on Open Document Formats The New York Times: Steve Lohr: 'Plan by 13 Nations Urges Open Technology Standards' UNDP-APDIP International Open Source Network: Free/Open Source Software: Open Standards Primer OpenStandards.net: An Open Standards Portal Is OpenDocument an Open Standard? Yes! develops a unified definition of "open standard" from multiple sources, then applies it to a particular standard Open Source Initiative: Open Standard Requirement for Software Open Standards: Definitions of "Open Standards" from the Cover Pages Foundation for a Free Information Infrastructure FFII Workgroup on Open Standards. "Standard Categories and Definitions": Categories and definitions of the different types of standards American National Standards Institute Critical Issue Paper: Current Attempts to Change Established Definition of “Open” Standards ITSSD Comments Concerning SCP/13/2 – Standards and Patents, Institute for Trade, Standards and Sustainable Development, (March 2009) Supplement to ITSSD Comments Concerning the WIPO Report on Standards and Patents (SCP/13/2) Paragraph 44, Institute for Trade, Standards and Sustainable Development, (January 2010) Open Data Standards Association, RY Open Standard License, A license dedicated to open standards Standards Technological change
1544
https://en.wikipedia.org/wiki/Agamemnon
Agamemnon
In Greek mythology, Agamemnon (; Agamémnōn) was a king of Mycenae, the son, or grandson, of King Atreus and Queen Aerope, the brother of Menelaus, the husband of Clytemnestra and the father of Iphigenia, Electra or Laodike (Λαοδίκη), Orestes and Chrysothemis. Legends make him the king of Mycenae or Argos, thought to be different names for the same area. When Menelaus's wife, Helen, was taken to Troy by Paris, Agamemnon commanded the united Greek armed forces in the ensuing Trojan War. Upon Agamemnon's return from Troy, he was killed (according to the oldest surviving account, Odyssey 11.409–11) by Aegisthus, the lover of his wife Clytemnestra. In old versions of the story, the scene of the murder, when it is specified, is usually the house of Aegisthus, who has not taken up residence in Agamemnon's palace, and it involves an ambush and the deaths of Agamemnon's followers as well (or it seems to be an ancestral home of both Agamemnon and Aegisthus since Agamemnon's wife is stated to be there as well and Agamemnon was said to have wept and kissed the land of his birth). In some later versions Clytemnestra herself does the killing, or she and Aegisthus act together, killing Agamemnon in his own home. Etymology His name in Greek, Ἀγαμέμνων, means "very steadfast", "unbowed" or "very resolute". The word comes from *Ἀγαμέδμων from ἄγαν, "very much" and μέδομαι, "think on". Ancestry and early life Agamemnon was a descendant of Pelops, son of Tantalus. According to the usual version of the story, followed by the Iliad and Odyssey of Homer, Agamemnon and his younger brother Menelaus were the sons of Atreus, king of Mycenae, and Aerope, daughter of the Cretan king Catreus. However, according to another tradition, Agamemnon and Menelaus were the sons of Atreus' son Pleisthenes, with their mother being Aerope, Cleolla, or Eriphyle. According to this tradition Pleisthenes died young, with Agamemnon and Menelaus being raised by Atreus. Agamemnon had a sister Anaxibia (or Astyoche) who married Strophius, the son of Crisus. Agamemnon's father, Atreus, murdered the sons of his twin brother Thyestes and fed them to Thyestes after discovering Thyestes' adultery with his wife Aerope. Thyestes fathered Aegisthus with his own daughter, Pelopia, and this son vowed gruesome revenge on Atreus' children. Aegisthus murdered Atreus, restored Thyestes to the throne, and took possession of the throne of Mycenae and jointly ruled with his father. During this period, Agamemnon and his brother Menelaus took refuge with Tyndareus, King of Sparta. There they respectively married Tyndareus' daughters Clytemnestra and Helen. Agamemnon and Clytemnestra had four children: one son, Orestes, and three daughters, Iphigenia, Electra, and Chrysothemis. Menelaus succeeded Tyndareus in Sparta, while Agamemnon, with his brother's assistance, drove out Aegisthus and Thyestes to recover his father's kingdom. He extended his dominion by conquest and became the most powerful prince in Greece. Agamemnon's family history had been tarnished by murder, incest, and treachery, consequences of the heinous crime perpetrated by his ancestor, Tantalus, and then of a curse placed upon Pelops, son of Tantalus, by Myrtilus, whom he had murdered. Thus misfortune hounded successive generations of the House of Atreus, until atoned by Orestes in a court of justice held jointly by humans and gods. Trojan War Sailing for Troy Agamemnon gathered the reluctant Greek forces to sail for Troy. 
In order to recruit Odysseus, who was feigning madness so as not to have to go to war, Agamemnon sent Palamedes, who threatened to kill Odysseus' infant son Telemachus. Odysseus was forced to stop acting mad in order to save his son and joined the assembled Greek forces. Preparing to depart from Aulis, a port in Boeotia, Agamemnon's army incurred the wrath of the goddess Artemis. There are several reasons throughout myth for such wrath: in Aeschylus' play Agamemnon, Artemis is angry for the young men who will die at Troy, whereas in Sophocles' Electra, Agamemnon has slain an animal sacred to Artemis, and subsequently boasted that he was Artemis' equal in hunting. Misfortunes, including a plague and a lack of wind, prevented the army from sailing. Finally, the prophet Calchas announced that the wrath of the goddess could only be propitiated by the sacrifice of Agamemnon's daughter Iphigenia. Classical dramatizations differ on how willing either father or daughter was to this fate; some include such trickery as claiming she was to be married to Achilles, but Agamemnon did eventually sacrifice Iphigenia. Her death appeased Artemis, and the Greek army set out for Troy. Several alternatives to the human sacrifice have been presented in Greek mythology. Other sources, such as Iphigenia at Aulis, say that Agamemnon was prepared to kill his daughter, but that Artemis accepted a deer in her place, and whisked her away to Tauris in the Crimean Peninsula. Hesiod said she became the goddess Hecate. During the war, but before the events of the Iliad, Odysseus contrived a plan to get revenge on Palamedes for threatening his son's life. By forging a letter from Priam, king of the Trojans, and caching some gold in Palamedes' tent, Odysseus had Palamedes accused of treason, and Agamemnon ordered him stoned to death. The Iliad The Iliad tells the story about the quarrel between Agamemnon and Achilles in the final year of the war. In book 1, following one of the Achaean Army's raids, Chryseis, daughter of Chryses, one of Apollo's priests, was taken as a war prize by Agamemnon. Chryses pleaded with Agamemnon to free his daughter but was met with little success. Chryses then prayed to Apollo for the safe return of his daughter, which Apollo responded to by unleashing a plague over the Achaean Army. After learning from the prophet Calchas that the plague could be dispelled by returning Chryseis to her father, Agamemnon reluctantly agreed (but first berated Calchas for previously forcing Agamemnon to sacrifice his daughter Iphigenia) and released his prize. However, as compensation for his lost prize, Agamemnon demanded a new prize. He stole an attractive slave called Briseis, one of the spoils of war, from Achilles. This created a rift between Achilles and Agamemnon, causing Achilles to withdraw from battle and refuse to fight. Agamemnon then received a dream from Zeus telling him to rally his forces and attack the Trojans in book 2. After several days of fighting, including duels between Menelaus and Paris, and between Ajax and Hector, the Achaeans were pushed back to the fortifications around their ships. In book 9, Agamemnon, having realized Achilles's importance in winning the war against the Trojan Army, sent ambassadors begging for Achilles to return, offering him riches and the hand of his daughter in marriage. Achilles refused, only being spurred back into action when Patroclus was killed in battle by Hector, eldest son of King Priam and Queen Hecuba. 
In book 19 Agamemnon reconciled with Achilles, giving him the offered rewards for returning to the war, before Achilles went out to turn back the Trojans and duel Hector. After Hector's death, Agamemnon assisted Achilles in performing Patroclus' funeral in book 23. Agamemnon volunteered for the javelin throwing contest, one of the games being held in Patroclus' honor, but his skill with the javelin was so well known that Achilles awarded him the prize without contest. Although not the equal of Achilles in bravery, Agamemnon was a representative of "kingly authority". As commander-in-chief, he summoned the princes to the council and led the army in battle. His chief fault was his overwhelming haughtiness: an over-exalted opinion of his position that led him to insult Chryses and Achilles, thereby bringing great disaster upon the Greeks. Agamemnon was the commander-in-chief of the Greeks during the Trojan War. During the fighting, Agamemnon killed Antiphus and fifteen other Trojan soldiers, according to one source. In the Iliad itself, he is shown to slaughter hundreds more in Book 11 during his aristea, loosely translated to "day of glory", which is the most similar to Achilles' aristea in Book 21. They both are compared to lions and destructive fires in battle, their hands are described as "splattered with gore" and "invincible," the Trojans flee to the walls, they both are appealed to by one of their victims, they are both avoided by Hector, they both are wounded in the arm or hand, and they both kill the one who wounded them. Even before his aristea, Agamemnon is considered to be one of the three best warriors on the Greek side, as proven when Hector challenges any champion of the Greek side to fight him in Book 7, and Agamemnon (along with Diomedes and Ajax the Greater) is one of the three Hector most wishes to fight out of the nine strongest Greek warriors who volunteer. End of the war According to Sophocles's Ajax, after Achilles had fallen in battle, Agamemnon and Menelaus award Achilles' armor to Odysseus. This angers Ajax, who feels he is now the strongest among the Achaean warriors and so deserves the armor. Ajax considers killing them, but is driven to madness by Athena and instead slaughters the herdsmen and cattle that had not yet been divided as spoils of war. He then commits suicide in shame for his actions. As Ajax dies he curses the sons of Atreus (Agamemnon and Menelaus), along with the entire Achaean army. Agamemnon and Menelaus consider leaving Ajax's body to rot, denying him a proper burial, but are convinced otherwise by Odysseus and Ajax's half-brother Teucer. After the capture of Troy, Cassandra, the doomed prophetess and daughter of Priam, fell to Agamemnon's lot in the distribution of the prizes of war. Return to Greece and death After a stormy voyage, Agamemnon and Cassandra land in Argolis, or, in another version, are blown off course and land in Aegisthus's country. Clytemnestra, Agamemnon's wife, has taken Aegisthus, son of Thyestes, as a lover. When Agamemnon comes home he is slain by Aegisthus (in the oldest versions of the story) or by Clytemnestra. According to the accounts given by Pindar and the tragedians, Agamemnon is slain in a bath by his wife alone, after being ensnared by a blanket or a net thrown over him to prevent resistance. In Homer's version of the story in the Odyssey, Aegisthus ambushes and kills Agamemnon in a feasting hall under the pretense of holding a feast in honor of Agamemnon's return home from Troy. Clytemnestra also kills Cassandra. 
Her jealousy of Cassandra, and her wrath at the sacrifice of Iphigenia and at Agamemnon's having gone to war over Helen of Troy, are said to be the motives for her crime. Aegisthus and Clytemnestra then rule Agamemnon's kingdom for a time, Aegisthus claiming his right of revenge for Atreus's crimes against Thyestes (Thyestes having cried out "thus perish all the race of Pleisthenes!", thereby framing Aegisthus' deed as justified by his father's curse). Agamemnon's son Orestes later avenges his father's murder, with the help or encouragement of his sister Electra, by murdering Aegisthus and Clytemnestra (his own mother), thereby inciting the wrath of the Erinyes (English: the Furies), winged goddesses who track down wrongdoers with their hounds' noses and drive them to insanity. The Curse of the House of Atreus Agamemnon's family history is rife with misfortune, born from several curses contributing to the miasma around the family. The curse begins with Agamemnon's great-grandfather Tantalus, who is in Zeus's favor until he tries to feed his son Pelops to the gods in order to test their omniscience, and also steals some ambrosia and nectar. Tantalus is then banished to the underworld, where he stands in a pool of water that evaporates every time he reaches down to drink, and above him is a fruit tree whose branches are blown just out of reach by the wind whenever he reaches for the fruit. This begins the curse on the house, and Tantalus's descendants would face similar or worse fates. Later, using his relationship with Poseidon, Pelops convinces the god to grant him a chariot so he may beat Oenomaus, king of Pisa, in a race, and win the hand of his daughter Hippodamia. Myrtilus, who in some accounts helps Pelops win his chariot race, attempts to lie with Pelops's new bride Hippodamia. In anger, Pelops throws Myrtilus off a cliff, but not before Myrtilus curses Pelops and his entire line. Pelops and Hippodamia have many children, including Atreus and Thyestes, who are said to have murdered their half-brother Chrysippus. Pelops banishes Atreus and Thyestes to Mycenae, where Atreus becomes king. Thyestes later conspires with Atreus's wife, Aerope, to supplant Atreus, but they are unsuccessful. Atreus then kills Thyestes' son and cooks him into a meal which Thyestes eats, and afterwards Atreus taunts him with the hands and feet of his now dead son. Thyestes, on the advice of an oracle, then has a son with his own daughter Pelopia. Pelopia tries to expose the infant Aegisthus, but he is found by a shepherd and raised in the house of Atreus. When Aegisthus reaches adulthood, Thyestes reveals the truth of his birth, and Aegisthus then kills Atreus. Atreus and Aerope have three children, Agamemnon, Menelaus, and Anaxibia. The continued miasma surrounding the house of Atreus expresses itself in several events throughout their lives. Agamemnon is forced to sacrifice his own daughter, Iphigenia, to appease the gods and allow the Greek forces to sail for Troy. When Agamemnon refuses to return Chryseis to her father Chryses, Apollo brings a plague upon the Greek camp. Agamemnon is also later killed by his wife, Clytemnestra, who conspires with her new lover Aegisthus in revenge for the death of Iphigenia. Menelaus's wife, Helen of Troy, runs away with Paris, ultimately leading to the Trojan War. According to book 4 of the Odyssey, after the war Menelaus's fleet is scattered by the gods to Egypt and Crete. When Menelaus finally returns home, his marriage with Helen is strained and they produce no sons. 
As he commits suicide, Ajax curses both Agamemnon and Menelaus for not granting him Achilles's armor. Agamemnon and Clytemnestra have three remaining children, Electra, Orestes, and Chrysothemis. After growing to adulthood and being pressured by Electra, Orestes vows to avenge his father Agamemnon by killing his mother Clytemnestra and Aegisthus. After successfully doing so, he wanders the Greek countryside for many years, constantly plagued by the Erinyes (Furies) for his sins. Finally, with the help of Athena and Apollo, he is absolved of his crimes, dispersing the miasma, and the curse on the house of Atreus comes to an end. Other stories Athenaeus tells a tale of how Agamemnon mourns the loss of his friend or lover Argynnus, who drowns in the Cephisus river. Agamemnon buries him, honoring him with a tomb and a shrine to Aphrodite Argynnis. This episode is also found in Clement of Alexandria, in Stephen of Byzantium (Kopai and Argunnos), and, with minor variations, in Book III of Propertius. The fortunes of Agamemnon have formed the subject of numerous tragedies, ancient and modern, the most famous being the Oresteia of Aeschylus. In the legends of the Peloponnesus, Agamemnon was regarded as the highest type of a powerful monarch, and in Sparta he was worshipped under the title of Zeus Agamemnon. His tomb was pointed out among the ruins of Mycenae and at Amyclae. In works of art, there is considerable resemblance between the representations of Zeus, king of the gods, and Agamemnon, king of men. He is generally depicted with a sceptre and diadem, conventional attributes of kings. Agamemnon's mare is named Aetha. She is also one of two horses driven by Menelaus at the funeral games of Patroclus. In Homer's Odyssey, Agamemnon makes an appearance in the kingdom of Hades after his death. There, the former king meets Odysseus, explains how he was murdered, and offers Odysseus a warning about the dangers of trusting a woman. Agamemnon is a character in William Shakespeare's play Troilus and Cressida, set during the Trojan War. 
In media and art Visual arts General works The Mask of Agamemnon, discovered by Heinrich Schliemann in 1876, on display at the National Archaeological Museum of Athens, Athens The Tomb of Agamemnon, by Louis Desprez, 1787, on display at The Metropolitan Museum of Art, New York Clytemnestra and Agamemnon, by Pierre-Narcisse Guérin, 1817, on display at the Musée des Beaux-Arts d'Orléans, Orléans Electra at the Tomb of Agamemnon, by Frederic Leighton, 1868, on display at Ferens Art Gallery, Kingston upon Hull Agamemnon Killing Odios, anonymous, 1545, on display at The Metropolitan Museum of Art, New York With Iphigenia Sacrifice of Iphigenia, by Arnold Houbraken, 1690-1700, on display at the Rijksmuseum, Amsterdam The Sacrifice of Iphigenia, by Charles de la Fosse, 1680, on display at the Palace of Versailles, Versailles The Sacrifice of Iphigenia, by Gaetano Gandolfi, 1789, on display at The Metropolitan Museum of Art, New York Sacrificio di Ifigenia, by Pietro Testa, 1640 The Sacrifice of Iphigenia, by Giovanni Battista Tiepolo, 1757, on display at the Villa Valmarana, Vicenza Sacrifice of Iphigenia, by Jan Steen, 1671, on display at the Leiden Collection, New York The Sacrifice of Iphigenia, by Sebastian Bourdon, 1653, on display at the Musée des Beaux-Arts d'Orléans, Orléans With Achilles The Quarrel Between Agamemnon and Achilles, by Giovanni Battista Gaulli, 1695, on display at the Musée de l'Oise, Beauvais The Anger of Achilles, by Jacques-Louis David, 1819, on display at Kimbell Art Museum, Fort Worth The Wrath of Achilles, by Michel-Martin Drolling, 1810, on display at the École des Beaux-Arts, Paris Quarrel of Achilles and Agamemnon, by William Page, on display at the Smithsonian American Art Museum, Washington DC Portrayal in film and television The 1924 film Helena by Karl Wüstenhagen The 1956 film Helen of Troy by Robert Douglas The 1961 film The Trojan Horse by Nerio Bernardi The 1962 film The Fury of Achilles by Mario Petri The 1962 film Electra by Theodoros Dimitriou The 1968 TV miniseries The Odyssey by Rolf Boysen The 1977 film Iphigenia by Kostas Kazakos The 1981 film Time Bandits by Sean Connery The 1997 TV miniseries The Odyssey by Yorgo Voyagis The 2003 TV miniseries Helen of Troy by Rufus Sewell The 2004 film Troy by Brian Cox The 2018 TV miniseries Troy: Fall of a City by Johnny Harris See also HMS Agamemnon National Archaeological Museum of Athens Citations General references Secondary sources Aeschylus, Agamemnon in Aeschylus, with an English translation by Herbert Weir Smyth, Ph. D. in two volumes, Vol 2, Cambridge, Massachusetts, Harvard University Press, 1926. Online version at the Perseus Digital Library. Apollodorus, Apollodorus, The Library, with an English Translation by Sir James George Frazer, F.B.A., F.R.S. in 2 Volumes. Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1921. Online version at the Perseus Digital Library. Athenaeus, The Learned Banqueters, Volume VI: Books 12-13.594b, edited and translated by S. Douglas Olson, Loeb Classical Library No. 345, Cambridge, Massachusetts, Harvard University Press, 2011. Online version at Harvard University Press. Collard, Christopher and Martin Cropp (2008a), Euripides Fragments: Aegeus–Meleager, Loeb Classical Library No. 504, Cambridge, Massachusetts, Harvard University Press, 2008. Online version at Harvard University Press. Collard, Christopher and Martin Cropp (2008b), Euripides Fragments: Oedipus-Chrysippus: Other Fragments, Loeb Classical Library No. 
506, Cambridge, Massachusetts, Harvard University Press, 2008. Online version at Harvard University Press. Dictys Cretensis, The Trojan War. The Chronicles of Dictys of Crete and Dares the Phrygian, translated by R. M. Frazer (Jr.). Indiana University Press. 1966. Euripides, Helen, translated by E. P. Coleridge in The Complete Greek Drama, edited by Whitney J. Oates and Eugene O'Neill, Jr. Volume 2. New York. Random House. 1938. Online version at the Perseus Digital Library. Euripides, Iphigenia in Tauris, translated by Robert Potter in The Complete Greek Drama, edited by Whitney J. Oates and Eugene O'Neill, Jr. Volume 2. New York. Random House. 1938. Online version at the Perseus Digital Library. Euripides, Orestes, translated by E. P. Coleridge in The Complete Greek Drama, edited by Whitney J. Oates and Eugene O'Neill, Jr. Volume 1. New York. Random House. 1938. Online version at the Perseus Digital Library. Gantz, Timothy, Early Greek Myth: A Guide to Literary and Artistic Sources, Johns Hopkins University Press, 1996. Two volumes: (Vol. 1), (Vol. 2). Grimal, Pierre, The Dictionary of Classical Mythology, Wiley-Blackwell, 1996. Hard, Robin, The Routledge Handbook of Greek Mythology: Based on H.J. Rose's "Handbook of Greek Mythology", Psychology Press, 2004. Google Books. Homer, The Iliad with an English Translation by A.T. Murray, Ph.D. in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1924. Online version at the Perseus Digital Library. Homer, The Odyssey with an English Translation by A.T. Murray, Ph.D. in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1919. Online version at the Perseus Digital Library. Hyginus, Gaius Julius, Fabulae, in The Myths of Hyginus, edited and translated by Mary A. Grant, Lawrence: University of Kansas Press, 1960. Online version at ToposText. Most, G.W., Hesiod: The Shield, Catalogue of Women, Other Fragments, Loeb Classical Library, No. 503, Cambridge, Massachusetts, Harvard University Press, 2007, 2018. Online version at Harvard University Press. Parada, Carlos, Genealogical Guide to Greek Mythology, Jonsered, Paul Åströms Förlag, 1993. Pausanias, Pausanias Description of Greece with an English Translation by W.H.S. Jones, Litt.D., and H.A. Ormerod, M.A., in 4 Volumes. Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1918. Online version at the Perseus Digital Library. Sophocles, The Ajax of Sophocles. Edited with introduction and notes by Sir Richard Jebb. Cambridge. Cambridge University Press. 1893. Online version at the Perseus Digital Library. Primary sources Homer, Iliad Euripides, Electra Sophocles, Electra Seneca, Agamemnon Aeschylus, The Libation Bearers Homer, Odyssey I, 28–31; XI, 385–464 Aeschylus, Agamemnon Apollodorus, Epitome, II, 15 – III, 22; VI, 23 External links Agamemnon – World History Encyclopedia Greek mythological heroes Achaean Leaders Kings of Mycenae Kings in Greek mythology Deities in the Iliad Metamorphoses characters Characters in Greek mythology Deeds of Artemis Filicides
63474
https://en.wikipedia.org/wiki/Chapter%207%2C%20Title%2011%2C%20United%20States%20Code
Chapter 7, Title 11, United States Code
Chapter 7 of Title 11 of the United States Code (Bankruptcy Code) governs the process of liquidation under the bankruptcy laws of the United States, in contrast to Chapters 11 and 13, which govern the process of reorganization of a debtor. Chapter 7 is the most common form of bankruptcy in the United States. For businesses When a troubled business is unable to pay its creditors, it may file (or be forced by its creditors to file) for bankruptcy in a federal court under Chapter 7. A Chapter 7 filing means that the business ceases operations unless those operations are continued by the Chapter 7 trustee. A Chapter 7 trustee is appointed almost immediately, with broad powers to examine the business's financial affairs. The trustee generally liquidates the assets and distributes the proceeds to the creditors. This may or may not mean that all employees will lose their jobs. When a large company enters Chapter 7 bankruptcy, entire divisions of the company may be sold intact to other companies during the liquidation. The investors who took the least amount of risk prior to the bankruptcy are generally paid first. For example, secured creditors will have taken less risk, because the credit that they will have extended is usually backed by collateral, such as assets of the debtor company. Fully secured creditors—that is, creditors, such as collateralized bondholders and mortgage lenders, for whom the value of collateral equals or exceeds the amount of debt outstanding—have a legally enforceable right to the collateral securing their loans or to the equivalent value, a right that generally cannot be defeated by bankruptcy. They are therefore not entitled to participate in any distribution of liquidated assets that the bankruptcy trustee might make. In a Chapter 7 case, a corporation or partnership does not receive a bankruptcy discharge, whereas an individual may. Once all assets of the corporate or partnership debtor have been fully administered, the case is closed. The debts of the corporation or partnership theoretically continue to exist until applicable statutory periods of limitations expire. For individuals Individuals who reside, have a place of business, or own property in the United States may file for bankruptcy in a federal court under Chapter 7 ("straight bankruptcy", or liquidation). Chapter 7, as with other bankruptcy chapters, is not available to individuals who have had bankruptcy cases dismissed within the prior 180 days under specified circumstances. In a Chapter 7 bankruptcy, the individual is allowed to keep certain exempt property. Most liens, however (such as real estate mortgages and security interests for car loans), survive. The value of property that can be claimed as exempt varies from state to state. Other assets, if any, are sold (liquidated) by the trustee to repay creditors. Many types of unsecured debt are legally discharged by the bankruptcy proceeding, but there are various types of debt that are not discharged in a Chapter 7. Common exceptions to discharge include child support, income taxes less than 3 years old, property taxes, student loans (unless the debtor prevails in a difficult-to-win adversary proceeding brought to determine the dischargeability of the student loan), and fines and restitution imposed by a court for any crimes committed by the debtor. Spousal support is likewise not dischargeable in bankruptcy, nor are property settlements through divorce. 
Despite their potential non-dischargeability, all debts must be listed on bankruptcy schedules. A Chapter 7 bankruptcy stays on an individual's credit report for 10 years from the date of filing the Chapter 7 petition. This contrasts with a Chapter 13 bankruptcy, which stays on an individual's credit report for 7 years from the date of filing the Chapter 13 petition. This may make credit less available or may make lending terms less favorable, although high debt can have the same effect. That must be balanced against the removal of actual debt from the filer's record by the bankruptcy, which tends to improve creditworthiness. Consumer credit and creditworthiness are complex subjects, however. Future ability to obtain credit is dependent on multiple factors and difficult to predict. Another aspect to consider is whether the debtor can avoid a challenge by the United States Trustee to his or her Chapter 7 filing as abusive. One factor in considering whether the U.S. Trustee can prevail in a challenge to the debtor's Chapter 7 filing is whether the debtor can otherwise afford to repay some or all of his debts out of disposable income in the five-year time frame provided by Chapter 13. If so, then the U.S. Trustee may succeed in preventing the debtor from receiving a discharge under Chapter 7, effectively forcing the debtor into Chapter 13. Some bankruptcy practitioners assert that the U.S. Trustee has become more aggressive in recent times in pursuing (what the U.S. Trustee believes to be) abusive Chapter 7 filings. Through these activities the U.S. Trustee has achieved a regulatory system that Congress and most creditor-friendly commentators have consistently espoused, i.e., a formal means test for Chapter 7. The Bankruptcy Abuse Prevention and Consumer Protection Act of 2005 has clarified this area of concern by making changes to the U.S. Bankruptcy Code that include, along with many other reforms, language imposing a means test for Chapter 7 cases. Creditworthiness and the likelihood of receiving a Chapter 7 discharge are some of the issues to be considered in determining whether to file bankruptcy. The importance of the effects of bankruptcy on creditworthiness is sometimes overemphasized because by the time many debtors are ready to file for bankruptcy, their credit score is already ruined. Also, new credit extended post-petition is not covered by the discharge, so creditors may offer new credit to the newly bankrupt. Methods of filing for bankruptcy Federal bankruptcy forms Functionally, templates are more or less the computer-based equivalent of paper bankruptcy forms. The official Federal bankruptcy forms prescribed in the Federal Bankruptcy Rules come as Microsoft Word and Adobe Acrobat formatted templates where each bankruptcy form is represented by a Word or Acrobat file. While these forms are electronic in nature and reside on a computer, they do not contain intelligence that would guide the debtor. The debtor still has to fill in each bankruptcy form separately as they would with paper forms, and the debtor still has to grapple with the complexity of bankruptcy law. Bankruptcy software In bankruptcy software, the debtor interacts with the software through a web page and is shielded from the actual bankruptcy forms and from the intricacies of bankruptcy law. The debtor responds to questions in an interview setting, much like with tax programs such as TurboTax or automated documents made through HotDocs. 
The debtor enters names and addresses, a list of their creditors and assets, and other financial information, and the software generates all the court-ready forms and delivers them to the debtor via email or a download link. The accuracy of the forms is nevertheless imperfect, as it is difficult for software to ensure that the debtor understands what has to be disclosed, what the exemptions for their state are, whether they qualify for said exemptions, and whether expenses included on the means test are allowable. Non-attorney petition preparer An alternative to do-it-yourself is the bankruptcy petition preparer. This method appeals to those who cannot afford the higher cost of bankruptcy attorneys and at the same time do not want the hassle and uncertainty of self-prepared document templates and software. Bankruptcy petition preparers fill this need. The bankruptcy forms are prepared by trained individuals rather than by the debtors themselves. However, having a preparer or paralegal prepare the petition does not guarantee compliance with all applicable laws, or assure that maximum advantage will be taken of exemptions. As with online bankruptcy software, debtors in some cases submit their bankruptcy information through a simple web page interface. Rather than having software automatically generate the forms, trained paralegals use the information to prepare the documents and then deliver them to the debtor. Bankruptcy trustees will check the bankruptcy petition to ensure that the petition was prepared properly, much like the trustee would do if a lawyer had prepared the forms. The BAPCPA provides guidelines for petition preparers to follow to protect the consumer. Bankruptcy attorney A bankruptcy attorney can advise the consumer on when the best time to file is, whether they qualify for Chapter 7 or need to file Chapter 13, and whether the debtor's assets will be protected if they file, and can help ensure that all requirements are fulfilled so that the bankruptcy goes smoothly. With the expanded requirements of the BAPCPA bankruptcy act of 2005, filing a personal Chapter 7 bankruptcy is complicated. Many attorneys who used to practice bankruptcy in addition to their other fields have stopped doing so due to the additional requirements, liability and work involved. After the petition is filed, the attorney can provide other services. 2005 bankruptcy law revision: the BAPCPA On October 17, 2005, the Bankruptcy Abuse Prevention and Consumer Protection Act (BAPCPA) went into effect. This legislation was the biggest reform to the bankruptcy laws since 1978. The legislation was enacted after years of lobbying efforts by banks and lending institutions and was intended to prevent abuses of the bankruptcy laws. The changes to Chapter 7 were extensive. Means test The most noteworthy change brought by the 2005 BAPCPA amendments was the introduction of an income-based test for abuse. The amendments effectively subject most debtors who have an income, as calculated by the Code, above the debtor's state census median income to a 60-month disposable-income-based test. This test is referred to as the "means test". The means test provides for a finding of abuse if the debtor's disposable monthly income is higher than a specified floor amount or portion of their debts. If a presumption of abuse is found under the means test, it may only be rebutted in the case of "special circumstances." Debtors whose income is below the state's median income are not subject to the means test. 
Under this test, any debtor with more than $182.50 in monthly disposable income, under the formula, would face a presumption of abuse. Notably, the Code-calculated income is based on the prior six months and may be higher or lower than the debtor's actual current income at the time of filing for bankruptcy. This has led some commentators to refer to the bankruptcy code's “current monthly income” as “presumed income”. If the debtor's debt is not primarily consumer debt, then the means test is inapplicable. The inapplicability to non-consumer debt allows business debtors to "abuse" credit without repercussion unless the court finds "cause." "Special circumstances" does not confer judicial discretion; rather, it gives a debtor an opportunity to adjust income by documenting additional expenses or loss of income in situations caused by a medical condition or being called or ordered to active military service. However, the presumption of abuse is only rebutted where the additional expenses or adjustments for loss of income are significant enough to change the outcome of the means test. Otherwise, abuse is still presumed despite the "special circumstances." Credit counseling Another major change to the law enacted by BAPCPA deals with eligibility. §109(h) provides that a debtor will no longer be eligible to file under either Chapter 7 or Chapter 13 unless, within 180 days prior to filing, the debtor received an “individual or group briefing” from a nonprofit budget and credit counseling agency approved by the United States trustee or bankruptcy administrator. The new legislation also requires that all individual debtors in either Chapter 7 or Chapter 13 complete an “instructional course concerning personal financial management.” If a Chapter 7 debtor does not complete the course, this constitutes grounds for denial of discharge pursuant to new §727(a)(11). The financial management program is experimental and the effectiveness of the program is to be studied for 18 months. Theoretically, if the educational courses prove to be ineffective, the requirement may disappear. Applicability of exemptions BAPCPA attempted to eliminate the perceived “forum shopping” by changing the rules on claiming exemptions. Under BAPCPA, a debtor who has moved from one state to another within two years of filing (730 days) the bankruptcy case must use the exemptions from the place of the debtor's domicile for the majority of the 180-day period preceding the two years (730 days) before the filing (§522(b)(3)). If the new residency requirement would render the debtor ineligible for any exemption, then the debtor can choose the federal exemptions. BAPCPA also “capped” the amount of a homestead exemption that a debtor can claim in bankruptcy, despite state exemption statutes. Also, there is a “cap” placed upon the homestead exemption in situations where the debtor, within 1,215 days (about 3 years and 4 months) preceding the bankruptcy case, added value to a homestead. The provision provides that “any value in excess of $125,000” added to a homestead cannot be exempted. The only exception is if the value was transferred from another homestead within the same state or if the homestead is the principal residence of a family farmer (§522(p)). This “cap” would apply in situations where a debtor has purchased a new homestead in a different state, or where the debtor has increased the value to his or her homestead (presumably through a remodeling or addition). 
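As a rough illustration of the presumption-of-abuse arithmetic described in the means test section above, the sketch below compares Code-calculated income against the state median and then applies a disposable-income floor. This is a deliberate simplification for illustration only: the figures (including the $182.50 floor quoted above, which has since been adjusted) and the absence of the detailed statutory expense calculation are assumptions of the sketch, not the legal formula.

def means_test_presumes_abuse(avg_monthly_income_last_6_months: float,
                              state_median_monthly_income: float,
                              monthly_disposable_income: float,
                              floor: float = 182.50) -> bool:
    """Simplified sketch of the Chapter 7 means test; not the statutory calculation."""
    # Debtors at or below the state median are not subject to the means test at all.
    if avg_monthly_income_last_6_months <= state_median_monthly_income:
        return False
    # Above-median debtors face a presumption of abuse if their monthly
    # disposable income exceeds the floor used in this illustration.
    return monthly_disposable_income > floor

# Example: above-median income with $250 per month left over triggers the presumption.
print(means_test_presumes_abuse(5200.00, 4800.00, 250.00))   # True

Note that, as the article points out, the income fed into such a calculation is the average of the prior six months, which may differ from the debtor's actual income at the time of filing.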
Lien avoidance Some types of liens may be avoided through a Chapter 7 bankruptcy case. However, BAPCPA limited the ability of debtors to avoid liens through bankruptcy. The definition of “household goods” was changed, limiting “electronic equipment” to one radio, one television, one VCR, and one personal computer with related equipment. The definition now excludes works of art not created by the debtor or a relative of the debtor, jewelry worth more than $500 (except wedding rings), and motor vehicles (§522(f)(1)(B)). Prior to BAPCPA, the definition of household goods was broader so that more items could have been included, including more than one television, VCR, radio, etc. Other changes Decreased the number and type of debts that could be discharged in bankruptcy. Decreased the limits for discharge of debts incurred for luxury goods. Expanded the scope of student loans not dischargeable without undue hardship. Increased the time a debtor must wait between multiple discharges from 6 to 8 years. Limited the duration of the automatic stay, particularly for debtors who had filed within one year of a previous bankruptcy. The automatic stay may be extended at the discretion of the court. BAPCPA limited the applicability of the automatic stay in eviction proceedings. If the landlord has already obtained a judgment of possession prior to the bankruptcy case being filed, a debtor must deposit an escrow for rent with the Bankruptcy Court, and the stay may be lifted if the debtor does not pay the landlord in full within 30 days thereafter, §362(b)(22). The stay also would not apply in a situation where the eviction is based on “endangerment” of the rented property or “illegal use of controlled substances” on the property, §362(b)(23). BAPCPA enacts a provision that protects creditors from monetary penalties for violating the stay if the debtor did not give “effective” notice pursuant to §342(g). The new notice provisions require the debtor to give notice of the bankruptcy to the creditor at an “address filed by the creditor with the court” or “at an address stated in two communications from the creditor to the debtor within 90 days of the filing of the bankruptcy case". References Further reading External links United States Bankruptcy Code via usbankruptcycode.org United States bankruptcy legislation Title 11 of the United States Code Corporate liquidations
17885039
https://en.wikipedia.org/wiki/Computer%20security%20software
Computer security software
Computer security software or cybersecurity software is any computer program designed to influence information security. This is often taken in the context of defending computer systems or data, yet can incorporate programs designed specifically for subverting computer systems due to their significant overlap, and the adage that the best defense is a good offense. The defense of computers against intrusion and unauthorized use of resources is called computer security. Similarly, the defense of computer networks is called network security. The subversion of computers or their unauthorized use is referred to using the terms cyberwarfare, cybercrime, or security hacking (later shortened to hacking for further references in this article due to issues with hacker, hacker culture and differences in white/grey/black 'hat' color identification). Types Below are various software implementations of cybersecurity patterns and groups, outlining ways a host system attempts to secure itself and its assets from malicious interactions; this includes tools to deter both passive and active security threats. Although both security and usability are desired, today it is widely considered in computer security software that with higher security comes decreased usability, and with higher usability comes decreased security. Prevent access The primary purpose of these types of systems is to restrict and often to completely prevent access to computers or data except to a very limited set of users. The theory is often that if a key, credential, or token is unavailable then access should be impossible. This often involves taking valuable information and then either reducing it to apparent noise or hiding it within another source of information in such a way that it is unrecoverable. Cryptography and Encryption software Steganography and Steganography tools A critical tool used in developing software that prevents malicious access is Threat Modeling. Threat modeling is the process of creating and applying mock situations where an attacker could be trying to maliciously access data in cyberspace. By doing this, various profiles of potential attackers are created, including their intentions, and a catalog of potential vulnerabilities is created for the respective organization to fix before a real threat arises. Threat modeling covers a wide range of cyberspace, including devices, applications, systems, networks, and enterprises. Cyber threat modeling can inform an organization's cybersecurity efforts in the following ways: Risk Management Profiling of current cybersecurity applications Considerations for future security implementations Regulate access The purpose of these types of systems is usually to restrict access to computers or data while still allowing interaction. Often this involves monitoring or checking credentials, separating systems from access and view based on importance, and quarantining or isolating perceived dangers. A physical comparison is often made to a shield: a form of protection whose use is heavily dependent on the system owner's preferences and perceived threats. Large numbers of users may be allowed relatively low-level access with limited security checks, yet significant opposition will then be applied toward users attempting to move toward critical areas. Access control Firewall Sandbox Monitor access The purpose of these types of software systems is to monitor access to computer systems and data while reporting or logging the behavior. 
Often this is composed of large quantities of low-priority data records/logs, coupled with high-priority notices for unusual or suspicious behavior. Diagnostic program Intrusion detection system (IDS) Intrusion prevention system (IPS) Log management software Records Management Security information management Security event management Security information and event management (SIEM) Surveillance monitor These programs use algorithms either stolen from, or provided by, police and military internet observation organizations to provide the equivalent of a police radio scanner. Most of these systems are born out of mass surveillance concepts for internet traffic, cell phone communication, and physical systems like CCTV. From a global perspective, they are related to the fields of SIGINT and ELINT and approach GEOINT in the global information monitoring perspective. Several instant messaging programs such as ICQ (founded by "former" members of Unit 8200), or WeChat and QQ (rumored 3PLA/4PLA connections), may represent extensions of these observation apparatuses. Block or remove malware The purpose of these types of software is to remove malicious or harmful forms of software that may compromise the security of a computer system. These types of software are often closely linked with software for computer regulation and monitoring. A physical comparison to a doctor, scrubbing, or cleaning ideas is often made, usually with an "anti-" style naming scheme related to a particular threat type. Threats and unusual behavior are identified by a system such as a firewall or an intrusion detection system, and then the following types of software are used to remove them. These types of software often require extensive research into their potential foes to achieve complete success, much as complete eradication of bacterial or viral threats does in the physical world. Occasionally this also involves defeating an attacker's encryption, such as in the case of data tracing or hardened threat removal. Anti-keyloggers Anti-malware Anti-spyware Anti-subversion software Anti-tamper software Antivirus software See also Computer security Data security Emergency management software Cloud Workload Protection Platforms Computer Antivirus Software References
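To make the "Monitor access" idea above concrete, the following is a minimal illustrative sketch, not taken from any particular product, of how a monitoring tool can keep every event as a low-priority record while raising a high-priority notice for unusual behavior, here a burst of failed logins from one source address. The log format, field names, and alert threshold are assumptions chosen only for this example.

import re
from collections import Counter

# Hypothetical log format assumed for this sketch:
# "2024-01-01T12:00:00 FAILED_LOGIN user=alice src=10.0.0.5"
FAILED = re.compile(r"FAILED_LOGIN\s+user=(\S+)\s+src=(\S+)")
THRESHOLD = 5  # failures per source before a high-priority alert (arbitrary)

def monitor(lines):
    """Return (low_priority_records, high_priority_alerts) for a batch of log lines."""
    records, alerts = [], []
    failures_per_source = Counter()
    for line in lines:
        records.append(line.rstrip())          # every event is kept as a low-priority record
        match = FAILED.search(line)
        if match:
            user, src = match.groups()
            failures_per_source[src] += 1
            if failures_per_source[src] == THRESHOLD:
                alerts.append(f"ALERT: {THRESHOLD} failed logins from {src} (last user tried: {user})")
    return records, alerts

if __name__ == "__main__":
    sample = [f"2024-01-01T12:00:0{i} FAILED_LOGIN user=alice src=10.0.0.5" for i in range(6)]
    _, alerts = monitor(sample)
    print("\n".join(alerts))

Real intrusion detection and SIEM products layer far more sophisticated correlation, normalization, and retention policies on top of this basic record-plus-alert pattern.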
4075003
https://en.wikipedia.org/wiki/LaunchBar
LaunchBar
LaunchBar is an application launcher for macOS. It provides access to a user's applications and files by entering short abbreviations of the searched item's name. It uses an adaptive algorithm that 'learns' a user's preferred abbreviations for a particular application. For example, after training, Adobe Photoshop may be launched by simply typing 'pho' and Time Machine can be opened by typing 'tm' even though that sequence of characters does not appear anywhere in the name of the application. LaunchBar also provides capabilities beyond application launching, such as file management and piping the current selection to a command-line utility, along with clipboard management and a built-in calculator. LaunchBar is distributed as shareware with crippleware-style limitations: full usage of the application requires paying the registration fee, but up to 7 abbreviations may be used per session without paying anything. According to user interface researcher Bruce Tognazzini, "LaunchBar should be able to outperform a visual interface for complex, repetitive switching sequences by an expert user". History LaunchBar began as a series of shell scripts for the NeXTSTEP platform, then migrated to OPENSTEP, where it was developed into a full-fledged application. It was ported to Mac OS X in 2001 as LaunchBar 3. In 2005, Apple introduced Spotlight, which took over LaunchBar's default position at the top-right corner of the screen. In response, LaunchBar was changed to display its window at the center of the screen, below the menu bar. In 2014, LaunchBar 6 was released with a redesigned interface, additional indexing rules and built-in actions, live web searches and usage statistics. See also Comparison of application launchers External links Official site Interview with Norbert Heger, LaunchBar's developer LaunchBar demo screencast MacWorld review References Utilities for macOS Application launchers MacOS-only software
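The adaptive abbreviation matching described above can be illustrated with a simplified sketch. This is not LaunchBar's actual, proprietary algorithm; it is a hypothetical illustration of the general idea: candidates whose names contain the typed characters as an in-order subsequence are ranked, and each selection is remembered so that a learned abbreviation (such as 'tm' for Time Machine) floats to the top on later uses.

from collections import defaultdict

def is_subsequence(abbrev: str, name: str) -> bool:
    """True if the abbreviation's characters appear in order within the name."""
    it = iter(name.lower())
    return all(ch in it for ch in abbrev.lower())

class Launcher:
    """Toy abbreviation-based launcher, assumed for illustration only."""
    def __init__(self, items):
        self.items = items                                     # e.g. application names
        self.learned = defaultdict(lambda: defaultdict(int))   # abbrev -> item -> times chosen

    def rank(self, abbrev):
        candidates = [name for name in self.items if is_subsequence(abbrev, name)]
        # Items previously chosen for this abbreviation float to the top.
        return sorted(candidates, key=lambda name: -self.learned[abbrev][name])

    def choose(self, abbrev, name):
        """Record the user's choice so the abbreviation is 'learned'."""
        self.learned[abbrev][name] += 1

launcher = Launcher(["Adobe Photoshop", "Terminal", "Time Machine"])
print(launcher.rank("tm")[0])   # "Terminal" before any training
launcher.choose("tm", "Time Machine")
print(launcher.rank("tm")[0])   # "Time Machine" after the user picks it once

A production launcher would add fuzzier scoring (word boundaries, capital letters, recency decay), but the learn-by-selection loop shown here is the core of the adaptive behavior the article describes.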
11959068
https://en.wikipedia.org/wiki/3wPlayer
3wPlayer
3wPlayer is malware that disguises itself as a media player. It can infect computers running Microsoft Windows. It is designed to exploit users who download video files, instructing them to download and install the program in order to view the video. The 3wPlayer employs a form of social engineering to infect computers. Seemingly desirable video files, such as recent movies, are released via BitTorrent or other distribution channels. These files resemble conventional AVI files, but are engineered to display a message when played on most media player programs, instructing the user to visit the 3wPlayer website and download the software to view the video. The 3wPlayer is infected with Trojan.Win32.Obfuscated.en. According to Symantec, 3wPlayer "may download" a piece of adware they refer to as Adware.Lop, which "adds its own toolbar and search button to Internet Explorer". A Perl script posted online can reportedly decrypt 3wPlayer files back into AVI. This claim has been tested with mixed results, as the intended AVI file is rarely the desired video file. Some developers have made an application to automatically identify 3wPlayer-encrypted files. Clones There are multiple 3wPlayer clones: DivoCodec and X3Codec The DivoCodec or Divo Codec or X3Codec has also been identified as a trojan similar to 3wPlayer. Users are instructed to download the codec in order to view or play an AVI/MP4/MP3/WMA file, often downloaded via P2P programs. Instead of actual codecs, DivoCodec installs malware on the user's computer. The DivoCodec is polymorphic and can change its structure. It has also been known to write to another process' virtual memory (process hijacking). DomPlayer The DomPlayer is similar to the DivoCodec and 3wPlayer. Users are also instructed to download the player in order to view an AVI file. As with DivoCodec, fake .avi files are easily spotted by their duration, usually only 10–12 seconds, from which one can conclude that the file cannot be a full film or TV episode despite its size. This is not always the case, however, as many distributors have begun falsifying the file metadata to display normal durations and file sizes. x3 player x3 player is similar to DomPlayer, and instructs users to download this player to view the AVI file. Also circulated is a 5-second ASF video disguised as an MP3 file, instructing users to install this player. References External links Symantec security briefing Clears the AVI file of any player-related data https://web.archive.org/web/20071102131716/http://www.goitexpert.com/entry.cfm?entry=DomPlayer-3wPlayer-Fix Same as above, with a brief explanation (removes the faulty header from AVI files) Rogue software Software that bundles malware
49967482
https://en.wikipedia.org/wiki/Ramanujan%20College
Ramanujan College
Ramanujan College is a constituent college of the University of Delhi. It is named after the Indian mathematician Srinivasa Ramanujan. It is located in Kalkaji, near Nehru Place in South Delhi. The college runs fifteen courses in the disciplines of Humanities, Commerce, Management, Mathematical Sciences, Computer Science and Vocational Studies. It is also the study centre for the students of the Non-Collegiate Women's Education Board, University of Delhi, and the Indira Gandhi National Open University. Ramanujan College has been accredited grade "A" by the National Assessment and Accreditation Council (NAAC). Ramanujan College has also been selected by the MHRD as a Teaching Learning Centre and National Resource Centre. History Ramanujan College, formerly known as Deshbandhu College (Evening), was established in 1958 by the Ministry of Rehabilitation, Government of India, in memory of the late Deshbandhu Gupta, a patriot who dedicated his life to the national freedom struggle. Deshbandhu College (Evening) operated from the premises of Deshbandhu College, which functioned in the morning hours. Originally run by the Ministry of Rehabilitation and Education as a men's college, Deshbandhu College (Evening) became a co-educational institution in 1994. This college, which is 100% funded by the University Grants Commission, has been maintained by the University of Delhi since 1972. Until the early 1990s, the college catered to a large number of students who were gainfully employed in the daytime and pursued their studies in the convenience provided by an evening college. During its initial years, the college functioned in the late evening hours and developed a strong reputation, especially in the field of commerce studies. In addition to B.Com. (Honours) and B.Com., the college offered the B.A. Programme and Honours courses in English, Hindi and Political Science to its students. The college was also unique because it offered the study of regional languages: Bengali, Punjabi and Sindhi. The late Dr. M. L. Jotwani, an eminent scholar of Sindhi literature and language, was a member of the college teaching faculty and received the Padma Shri in 2003. In 2010, Deshbandhu College (Evening) was renamed Ramanujan College and allocated a separate seven-acre plot within the existing college campus in Kalkaji, New Delhi. From an evening college, it became a full-fledged morning college in 2012. Academics Undergraduate Courses B.Com. (H) B.Com. Bachelor of Management Studies (B.M.S.) B. A. (H) Economics B. A. (H) English B.A. (H) Hindi B. A. (H) Philosophy B. A. (H) Political Science B. A. (H) Applied Psychology B. A. Program B. Sc. (H) Computer Science B. Sc. (H) Mathematics B.Sc. (H) Environmental Science B. Sc. (H) Statistics B. Voc. Banking Operations B.Voc. Software Development Certificate and Diploma Courses International Financial Reporting Standards (IFRS) Tally Human Rights Mass Media Radio Jockey and Broadcasting Business Analytics Research and Data Analysis Happiness by The School of Happiness Departments and Faculty The college has highly qualified, committed and talented teaching faculty members who help students keep abreast of academic challenges and developments. Their teaching-learning methods encourage inter-disciplinary approaches through innovation projects, conferences, seminars, talks and workshops. Experiential learning techniques are used for an effective implementation of the curriculum. 
A large number of the teachers have published original academic and creative works in reputed national and international journals. There are 16 departments in the college: Applied Psychology, Commerce, Computer Science, Economics, English, Environmental Studies, Hindi, History, Management Studies, Mathematics, Philosophy, Physical Education, Political Science, Punjabi, Statistics and Vocation. These departments regularly organise national and international conferences, workshops, seminars and guest lectures to augment the knowledge base of students and faculty members. Publications Ramanujan College publishes two journals annually: Ramanujan International Journal of Business and Research (ISSN: 2455-5959) (UGC Listed) International Journal of Applied Ethics (ISSN: 2321-2497). College Campus In 2017, a new, independent, environment-friendly college building was constructed with state-of-the-art classrooms, laboratories, staff rooms and an auditorium. The new building is supported by a large service centre with a sewage treatment plant, solar panels, a water harvesting plant, an electric sub-station and power generators. Some key features of the college campus are: Clean & Green Campus with uninterrupted Electricity & Water Supply ICT-enabled Smart Classrooms with Projectors Multiple Computer Labs with latest Hardware & Software with 1500 Desktops and Laptops Language & Media Lab with latest Audio-Visual Equipment Psychology and Accounting & Finance Lab Wi-Fi Connectivity Indoor and Outdoor Gymnasium Fully Equipped Medical Room Library The college has a fully computerized library, which uses an Online Public Access Catalogue (OPAC) system that helps locate all the reading material available on the computer. The library is spacious and has been divided into various sections: reference, textbooks, and newspapers & magazines. Fully air-conditioned separate reading rooms for students and teachers make it more user-friendly. The library is well stocked with more than 45,000 books and subscribes to various e-journals. It also provides in-house and remote access to e-publications and other subscribed resources of the University of Delhi. Academic Catalysts: Centers & Cells Antha Prerna Cell The Antha Prerna Cell (Engineering Ideas to Reality) was inaugurated on 18 January 2019 under the aegis of the Department of Computer Science, Ramanujan College. The Cell aims to provide a platform for students to bring their innovative ideas to life by implementing them in industrial projects and to nurture them as future corporate professionals. This is intended to bridge the gap between industry and academia and make students industry-ready. The Cell benefits students from all disciplines by involving them in various aspects of live projects. Centre for Robotics and Artificial Intelligence The Centre for Robotics and Artificial Intelligence was established in November 2013. The Centre works on emerging areas in the fields of robotics, embedded systems and machine intelligence. The students at the centre are currently working on the star innovation project “Robotics in healthcare”, in which they are trying to reduce the costs of existing machines and increase their efficiency. Centre for Ethics and Values The Centre for Ethics and Values (CEV) at Ramanujan College was established in 2010. 
CEV was created with the objective of functioning as a resource centre for imparting values-based education to students and professionals. The role of CEV was envisaged as creating awareness in the learners' community that skills and ethical values are essentially complementary. CEV is engaged in the rigorous investigation of ethical issues, enlightened dialogue and the dissemination of knowledge that will lead to informed moral choice. Centre for Human Rights Studies The Centre for Human Rights Studies was established in February 2015 under the aegis of the Department of Political Science. The Centre coordinates a three-month UGC-sponsored Certificate Course on Human Rights. Eminent resource persons are invited to deliver lectures on the prescribed syllabus. Students are required to submit a project report along with a PowerPoint presentation at the end of the course. Centre for Social Innovation The Centre for Social Innovation was established in January 2016 with a vision of building stronger social links, fostering new societal development, and drawing inspiration from such ventures so that others can take a step forward. The centre was set up as part of the star innovation project entitled “Sociovation”, sanctioned by the University of Delhi, which aimed to create a common platform uniting socio-innovators, students and those in need in order to bring about meaningful change in society. It has identified four core issues as its thrust areas: Women Empowerment, Water Conservation, Illiteracy, and Cleanliness and Sanitation. Research Development and Services Cell The Cell was formed in 2017 to promote research and development activities in the college. The cell has successfully completed several academic and consultancy projects. To promote research amongst students, the cell also provides them with incentives. Thrust: Entrepreneurship Cell It was formed in October 2015 to harness the creative and entrepreneurial abilities of the students and to create successful business ventures run by students. Ramanujan College also has 15 student societies engaged in dancing, music, social entrepreneurship, commerce, fashion, and more. Appreciable Highlights DDU Kaushal Kendra Ramanujan College was sanctioned a Deen Dayal Upadhyay Centre for Knowledge Acquisition and Up-gradation of Skilled Human Abilities and Livelihood (KAUSHAL) by the University Grants Commission in 2016. Ramanujan College is the only institution in the University of Delhi which has been sanctioned this scheme, under which the college offers B.Voc. courses in Banking Operations and Software Development. Teaching Learning Centre Ramanujan College was awarded a Teaching Learning Centre (TLC) under the Pandit Madan Mohan Malaviya National Mission on Teachers and Teaching (PMMMNMTT), sponsored by the Ministry of Human Resource Development, in 2017. The Centre follows the mission, objectives and guidelines set by the MHRD and has at its core the idea of facilitating the teaching-learning process for teachers across the country, especially those located in remote areas. Ramanujan College is the second college of the University of Delhi to be awarded a Teaching Learning Centre (TLC), the most sought-after scheme of the MHRD. Under this scheme, the centre successfully trained more than 3,500 teachers up to March 2020. 
To continue the teaching-learning process at the same pace during the COVID-19 pandemic, the centre has organized FDPs, FIPs and workshops online, with virtual engagement of all participants and learners. This initiative of the centre has received an overwhelming response from all over the country; more than 22,000 teachers have been trained since April 2020, and several programmes are still ongoing. National Resource Centre The Ministry of HRD notified Ramanujan College as a National Resource Centre (NRC) in 2019 for three pertinent and interrelated disciplines, namely Human Rights, Environment and Ethics. The NRC's role is to conceptualise, create and disseminate contemporary knowledge in the field of study. This initiative is undertaken by the MHRD under the ambit of the Annual Refresher Programme in Teaching (ARPIT). International Collaborations Ramanujan College has signed an Agreement of Academic Cooperation with MCI Management Center Innsbruck, Austria, for a period of five years, recognizing the benefits to be gained by both institutions through cooperation in teaching, academic research and operations, and promoting the exchange of students, staff and knowledge within the interests and abilities of each institution. Exchange of students and academic staff, along with joint development and organization of academic programs, courses or seminars and research, will be the focus of this collaborative effort. Rankings It was ranked 53rd in India by the National Institutional Ranking Framework in 2021. References External links Official website of Ramanujan College Delhi University website profile of Ramanujan College Universities and colleges in Delhi South Delhi district Educational institutions established in 2010 Delhi University 2010 establishments in Delhi
8877936
https://en.wikipedia.org/wiki/Midrange%20computer
Midrange computer
Midrange computers, or midrange systems, were a class of computer systems that fell between mainframe computers and microcomputers. This class of machine emerged in the 1960s, with models from Digital Equipment Corporation (the PDP line), Data General (the Nova) and Hewlett-Packard (the HP 3000) widely used in science and research as well as in business, where they were referred to as minicomputers. IBM favored the term "midrange computer" for their comparable, but more business-oriented, systems. IBM Midrange Systems System/3 was the first of the IBM midrange systems (1969) System/32 (introduced in 1975) was a 16-bit single-user system also known as the IBM 5320. System/34 (1977) was intended as a successor to both the System/3 and the System/32. System/38 (1979) was the first midrange system to have an integrated relational database management system (DBMS). The S/38 had 48-bit addressing, and ran the CPF operating system. System/36 (1983) had two 16-bit processors with an operating system that supported multiprogramming. AS/400 was introduced under that name in 1988, renamed eServer iSeries in 2000, and subsequently became IBM System i in 2006. It ran the OS/400 operating system. IBM Power Systems was introduced in April 2008, a convergence of IBM System i and IBM System p. Positioning The main similarity between midrange computers and mainframes is that both are oriented toward decimal-precision computing and high-volume input and output (I/O); however, most midrange computers have a reduced, specially designed internal architecture with limited compatibility with mainframes. A low-end mainframe can be more affordable and less powerful than a high-end midrange system, but a midrange system was still a "replacement solution" with a different service process, operating system and internal architecture. The difference between a similarly sized midrange system and a superminicomputer or minicomputer lies in the computing purpose: superminis and minis were oriented toward floating-point scientific computing, while midrange systems were oriented toward decimal, business-oriented computing, though without a clear border between the classes. The earliest midrange computers were single-user business calculation machines; virtualization, a typical feature of mainframes since 1972 (partially since 1965), was ported to midrange systems only in 1977, and multi-user support was added to midrange systems in 1976 versus 1972 for mainframes (still significantly earlier than the limited arrival of x86 virtualization in 1985–87 or x86 multi-user support in 1983). The latest midrange systems are primarily mid-class multi-user local network servers that can handle the large-scale processing of many business applications. Although not as powerful or reliable as full-size mainframe computers, they are less costly to buy, operate, and maintain than mainframe systems and thus meet the computing needs of many organizations. Midrange systems were relatively popular as powerful network servers to help manage large Internet web sites, but they are more oriented toward corporate intranets, extranets, and other networks. Today, midrange systems include servers used in industrial process-control and manufacturing plants and play major roles in computer-aided manufacturing (CAM). They can also take the form of powerful technical workstations for computer-aided design (CAD) and other computation- and graphics-intensive applications. Midrange systems are also used as front-end servers to assist mainframe computers in telecommunications processing and network management. 
Since the end of the 1980s, when the client–server model of computing became predominant, computers of this class have instead usually been known as workgroup servers and online transaction processing servers, to recognize that they usually "serve" end users at their "client" computers. During the 1990s and 2000s, in some non-critical cases, both lines were also partly displaced by web servers oriented toward working with the global network, but with a weaker security heritage and mainly based on general-purpose architectures (currently x86 or ARM). See also IBM mainframe Superminicomputer Minicomputer Microcomputer List of IBM products References Midrange
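The distinction drawn above between decimal, business-oriented computing and floating-point scientific computing can be illustrated with a short example in Python (chosen purely for illustration; midrange systems implemented decimal arithmetic in hardware or system software, not in Python). Binary floating point silently rounds typical currency values, while decimal arithmetic keeps them exact, which is why business-oriented machines emphasized it.

from decimal import Decimal

# Binary floating point: fine for scientific work, but 0.10 has no exact binary representation.
subtotal_float = 0.10 + 0.20
print(subtotal_float)                        # 0.30000000000000004

# Decimal arithmetic: the style of computation business-oriented systems emphasize.
subtotal_dec = Decimal("0.10") + Decimal("0.20")
print(subtotal_dec)                          # 0.30
print(subtotal_dec * 3 == Decimal("0.90"))   # True; exact results for money-style values

The same trade-off survives today in the contrast between hardware floating-point units and the packed-decimal or software-decimal facilities that business-oriented systems and languages provide.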
3786377
https://en.wikipedia.org/wiki/Federal%20Criminal%20Police%20Office%20%28Germany%29
Federal Criminal Police Office (Germany)
The Federal Criminal Police Office of Germany (in German: Bundeskriminalamt, abbreviated BKA) is the federal investigative police agency of Germany, directly subordinated to the Federal Ministry of the Interior. It is headquartered in Wiesbaden, Hesse, and maintains major branch offices in Berlin and Meckenheim near Bonn. It has been headed by Holger Münch since December 2014. Primary jurisdiction of the agency includes coordinating cooperation between the federation and state police forces; investigating cases of international organized crime, terrorism and other cases related to national security; counterterrorism; and the protection of members of the constitutional institutions and of federal witnesses. When requested by the respective state authorities or the federal minister of the interior, it also assumes responsibility for investigations in certain large-scale cases. Furthermore, the Attorney General of Germany can direct it to investigate cases of special public interest. History The Federal Criminal Police Office was established in 1951, and Wiesbaden, in the State of Hesse, was designated as its seat. The German police in general is – by definition of the German constitution – organized at the level of the states of the federation (e.g. North Rhine-Westphalia Police, Bavarian State Police, Berlin Police). Exceptions are the Federal Police, the Federal Criminal Police Office (BKA) and the German Parliament Police. For historical reasons, all these federal police forces have a specific and limited legal jurisdiction. This is because after World War II, it was decided that there should not be another all-powerful police force like the Reich Security Main Office (consisting of the Gestapo, the Sicherheitsdienst and the Reichskriminalpolizeiamt). Missions The formation of the BKA is based on several articles of the German constitution, which give the federal government the exclusive ability to pass laws on the coordination of criminal policing in Germany. The jurisdictions of the BKA are defined in the Bundeskriminalamtgesetz (BKAG): Investigation and threat prevention in cases of national and international terrorism. Investigating the international trade in narcotics, arms, munitions and explosives, and internationally organized money laundering and counterfeiting. Investigating crimes when a state public prosecutor, a state police force or the state's interior minister, the federal public prosecutor or the Federal Ministry of the Interior tasks the BKA with a criminal investigation. Personal protection of the constitutional bodies of Germany and their foreign guests (e.g. the President of Germany, the Parliament of Germany, the Cabinet of Germany, the Federal Constitutional Court and other institutions); the BKA also investigates major crimes against these institutions. Protection of federal witnesses. Investigating crimes against critical infrastructures in Germany. Coordinating cooperation between the federal and state police forces (especially state criminal investigation authorities) and with foreign investigative authorities (in Germany the state police forces are mainly responsible for policing). Coordinating the cooperation with international law enforcement agencies like the FBI. The BKA is also the national central bureau for Europol and Interpol. Additionally, the BKA provides liaison officers for over 60 German embassies worldwide, who work with local law enforcement agencies. Collecting and analyzing criminal intelligence as a national crime office. 
Providing IT infrastructure for German law enforcement agencies, e.g. police databases, Schengen Information System, Automated Fingerprint Identification System (AFIS), Anti-Terror-Database (ATD). Providing assistance to other national and international law enforcement agencies in forensic and criminological research matters. Acting as a clearing house for identifying and cataloging images and information on victims of child sexual exploitation, similar to the National Center for Missing & Exploited Children in the United States. Organization Since its establishment in 1951, the BKA's number of staff has grown substantially. This has notably been driven by the fight against left-wing terrorism in the 1970s and the internationalization of crime in the decades thereafter. Its structure has thus undergone constant reorganization. The last major reform was implemented in July 2016 and resulted in the structure described below. The BKA is currently organized in eleven divisions. The President of the BKA is supported by its staff in the so-called "Leitungsstab" (which does not have the status of a division): Staff LS – Management (in German: Stab LS – Leitungsstab) Office of the BKA-President and the Vice-Presidents Press and media relations Law enforcement advisement, situation reporting Strategic affairs Resources, organization National cooperation Division ZI – Central Information Management (in German: Abteilung ZI – Zentrales Informationsmanagement) 24/7 Operations Center Language and Translation Service Information and data services, police records administration Law enforcement information and message exchange Security screening / vetting Identification services Automatic Fingerprint Identification System (AFIS) DNA-Analytics Database facial recognition system (GES) Identikit pictures Fugitive and search services (International police cooperation, Legal assistance agreements) Common search, public search/manhunt Schengen Searches (SIRENE) Interpol Searches Target searches, manhunt Division ST – State Security (in German: Abteilung ST – Polizeilicher Staatsschutz) Situational reporting, analysis Threat assessment Situation center State Security National Security, Politically motivated crime – Terrorism / Extremism Left-wing and right-wing politically motivated crime, including the cooperation with the Z Commission State-sponsored terrorism Politically motivated crime by foreigners Politically motivated arms crime, proliferation, CBRN arms Counter-espionage State-sponsored cybercrime and cyber-espionage War crimes, Crimes against international criminal law and humanity (including the German central office for international criminal law) Financial investigations State Security Division SO – Serious and Organized Crime (in German: Abteilung SO – Schwere und Organisierte Kriminalität) Property crime Counterfeiting Cybercrime Capital and major crimes, violent crimes, sexual offences, child abuse and child pornography (similar to the National Center for Missing and Exploited Children) Organized and gang crimes Drug crimes Human trafficking Environmental crimes Crimes concerning arms and explosives Financial and Economic crime, comprising a specialised unit for case-integrated financial investigations (VIVA), which also assumes the role of Asset Recovery Office Germany (police), and a forensic auditing service (WPD) Division SG – Protection Division (in German: Abteilung SG – Sicherungsgruppe) SG E (Operations) Personal protection details (Protection of the German Chancellor and other cabinet 
members) Foreign Dignitary protection (when invited by the federal government) Reconnaissance and mobile support units Special units like the ASE (Foreign Special Missions), which have tasks similar to those of the Secret Service Counter Assault Team SG F (Situation center) Mission support, internal organization and logistics Tactical Operations Center The Protection Group protects the members of Germany's constitutional bodies and their foreign guests and is often the most visible part of the BKA. Specially selected and trained officers with special equipment and vehicles provide round-the-clock personal security to those they protect. The Protection Group is now headquartered in Berlin. Division OE – Operational Mission and Investigative Support (in German: Abteilung OE – Operative Einsatz- und Ermittlungsunterstützung) Technical Operational Service (TOS) Technology monitoring, logistics Analysis of (new) technologies (evaluation for law enforcement use and criminal potential) IT forensics Case and mission support, e.g. at crime scenes and while executing search warrants Securing and processing of digital evidence Research and development, live forensics (Mass) Data analysis, Video Competence Center (CC Video) Technical Mission Support, development of technical equipment Competence Center for Information Technology Surveillance (CC ITÜ), Lawful interception Telecommunications Surveillance (TKÜ) Information Technology Surveillance (ITÜ) Mobile Mission Commando (MEK) Plain-clothes SWAT unit specialised in surveillance and apprehension of fugitives in mobile situations Central federal support group for major nuclear threat defense (ZUB) Adviser and Negotiation Group, e.g. for hostage-takings in foreign countries Witness protection program (federal level) Division KT – Forensic Science Institute (in German: Abteilung KT – Kriminalistisches Institut) Disaster Victims Identification Task Force The DVI-Team (in German: Identifizierungskommission, IdKo) is an event-driven organisation of mainly forensic specialists dedicated to the identification of disaster victims. The DVI's past missions include several airplane crashes, the Eschede train disaster and the 2004 Indian Ocean earthquake. Crime scene unit Bomb squad, explosive ordnance disposal (EOD) and improvised explosive ordnance disposal (IEDD), CBRN crime scenes Research and development on crime scene procedures Institution for technical and natural sciences, reporting for law enforcement, public prosecutors and courts Ballistics, arson and explosion investigations DNA analytics, investigation of material and micro traces Analysis of handwriting and documents, voice recognition Central laboratory for physical, biological and chemical analysis, toxicology Digital electronics, data reconstruction, video, picture, signal and crypto analysis Division IT – Information Technology (in German: Abteilung IT – Informationstechnik) Information and communication management common IT software, e.g. operating systems, office tools law enforcement databases, e.g. various INPOL databases, Europol (SIENA), Schengen (SIRENE), anti-terror-database (ATD) digital (police) radio management, mobile communications Division IZ – International Coordination, Training and Research Center (in German: Abteilung IZ – Internationale Koordinierung, Bildungs- und Forschungszentrum) EU and international cooperation, e.g. 
Europol and Interpol Coordination of BKA liaison officers at German embassies Consulting center for police legal questions, law enforcement and legal politics Police training (national/international) Common training, management Specialised criminal police training, police training International police training and logistics support Institute of Law Enforcement Studies Federal University, Departmental Branch of the Federal Criminal Police Criminological and law enforcement research Research and consulting center terrorism/extremism Research and consulting center law enforcement statistics, dark field research Research and consulting center cybercrime Research and consulting center organized crime, economic crime, criminal prevention Public relations, internet editorial staff Division ZV – Central and Administrative Affairs (in German: Abteilung ZV – Zentral- und Verwaltungsaufgaben) Common human resources management Facility management Budget management Internal Organization Operations and internal security Prevention of corruption Logistics, car pool, workshops Legal Affairs Division TE – International terrorism, religiously motivated extremism and terrorism Established on November 1, 2019, Division TE consists of sections from Division ST that are tasked with the collection of information and investigations in the fields of terrorism, religiously motivated extremism and jihadism. Division CC – Cybercrime The division's main tasks lie in investigations in the fields of cybercrime and computer-oriented crime: Investigations against individuals and networks which target high-profile targets in Germany Information gathering and analysis in support of ongoing investigations Combating cyber-attacks against critical infrastructure and institutions of the German government Consulting in the development of strategies and legal frameworks for combating cybercrime Joint Centres and Task Forces The BKA is part of several joint centers and platforms for combating crime: Joint Counter-Terrorism Centre (GTAZ) The GTAZ was created in 2004 as a fusion center for 40 German law enforcement, intelligence and other public agencies that are tasked with combating international jihadi terrorism. Its goal is to optimize and speed up communication between these agencies as a cooperation platform. The GTAZ is not an agency of its own. All participating agencies work under their own jurisdiction. It is located in Berlin. All state police forces (16), state intelligence services (16), the Federal Criminal Police Office (through its Division ST), the uniformed Federal Police (former border patrol), the Military Counterintelligence Service, the Federal Intelligence Service, the Federal Office for the Protection of the Constitution, the Public Prosecutor General, the Customs Investigations Bureau and the Federal Office for Migration and Refugees are part of the GTAZ. The GTAZ has several working groups which focus on threat assessment, operational information exchange, case analysis, structural analysis and other topics. Joint Terrorism and Extremism Prevention Centre (GETZ) The GETZ is a similar fusion center established in 2012. It was structured after the model of and parallel to the GTAZ. It is located in Cologne and focuses on politically motivated crime such as right-wing and left-wing extremism and terrorism, espionage, proliferation and international terrorism (not including jihadi terrorism). Like the GTAZ, the GETZ is not an agency of its own. All participating agencies work under their own jurisdiction. 
All state police forces (16), state intelligence services (16), the Federal Criminal Police Office (through its Division ST), the uniformed Federal Police (former border patrol), the Military Counterintelligence Service, the Federal Intelligence Service, the Federal Office for the Protection of the Constitution, the Public Prosecutor General, the Customs Investigations Bureau and the Federal Office for Migration and Refugees are part of the GETZ. The GETZ has several working groups which focus on threat assessment, operational information exchange and other topics. National Cyber Threat Prevention Centre (NCAZ) The NCAZ is a fusion center focusing on cyber threats, their assessment and possible countermeasures. Like the GTAZ, it is just a platform and not an agency of its own. It has no jurisdiction of its own. The Federal Office for Information Security, the Federal Criminal Police Office (BKA), the uniformed Federal Police (former border patrol), the Military Counterintelligence Service, the Federal Intelligence Service and the Federal Office for the Protection of the Constitution are part of the NCAZ. Joint Analysis and Strategy Centre Illegal Migration (GASIM) The GASIM is a federal information and communication center combating illegal migration. It is administered by the Federal Police (former border patrol) and located in Potsdam. Joint Financial Investigative Unit (GFG) The BKA Division SO has established a standing GFG (task force) with the Customs Investigations Bureau combating financial crimes, especially money laundering. Joint Internet Centre (GIZ) The GIZ is a cooperation platform for evaluating and analysing jihadi terrorist propaganda on websites and social media channels. It is intended to bring together the professional and technical expertise of the participating agencies. It is administered by the Federal Office for the Protection of the Constitution. The Federal Criminal Police Office (BKA), the Military Counterintelligence Service, the Federal Intelligence Service, the Federal Office for the Protection of the Constitution and the Public Prosecutor General are part of the GIZ. They all work under their own jurisdiction. Coordinated Internet Intelligence (KIA) The KIA is another cooperation platform for evaluating and analysing extremist and terrorist internet propaganda. It was created in 2012 after the model of the GIZ. At first it brought together the professional and technical expertise of the participating agencies in the field of right-wing extremism. It was a direct reaction to the discovery of the NSU murders. Now, and in contrast to the GIZ, the KIA is divided into three platforms as further fields were added: KIA-R covers right-wing extremism and terrorism, KIA-L covers left-wing extremism and terrorism and KIA-A covers international politically motivated crime (except jihadi terrorism). The Federal Criminal Police Office (BKA), the Military Counterintelligence Service and the Federal Office for the Protection of the Constitution are part of the KIA. They all work under their own jurisdiction. For special cases the BKA creates task forces, which are called "Besondere Aufbauorganisation" (abbreviated: BAO). These task forces can integrate personnel from different divisions and from state police forces as well. On some occasions international police forces participate too. Personnel General structure The BKA currently employs more than 7,100 people (as of July 2020). More than 3,800 are sworn law enforcement officers of various ranks, including upper management. 
Furthermore, the BKA has more than 1,100 civil servants (e.g. analysts as well as administrative or technical personnel). Another 2,200 employees work for the BKA as scientists (forensic and natural sciences) and academics (criminology and law enforcement research). The BKA received more than 1,000 additional job positions in 2017. In the case of law enforcement officers, the BKA has employees in two career tracks of the German civil service: the upper service (pay grades A9–A13g, comparable to military officer ranks up to Captain) and the higher service (pay grades A13h and above, comparable to military staff officer ranks of Major and above). In contrast, some state police forces in Germany, such as the Bavarian State Police, and the Federal Police also have lower-level career tracks with only two years of training and lower entrance requirements. Recruitment The BKA recruits its personnel through different procedures: The civilian personnel (e.g. analysts, scientists, administrative personnel) is recruited in a manner similar to private companies. Potential law enforcement officers are recruited in a longer process. They have to pass a written and oral exam (interview, group discussions, psychological test), a sports test (endurance, strength, reaction), a medical examination and security screening. Personnel of the upper service usually need to have obtained a university entrance qualification (usually Abitur or Fachabitur). Law enforcement personnel in the career path of the higher service generally need to hold a master's degree or a second state examination for direct recruitment. As a rule, the few directly recruited law enforcement officers for this career path are lawyers. However, a large proportion of the officers in the BKA's higher service career path are law enforcement officers promoted from the upper career path who have proven themselves. Police training After the law enforcement officer applicants for the upper career path pass the exams mentioned above, they study at the Federal University for Applied Administrative Sciences (Departmental Branch of the Federal Criminal Police) for three years at different locations. While studying (law, criminal proceedings, constitutional law, criminology, police tactics, ethics) they also receive traditional police training such as martial arts (Krav Maga, Jiu Jitsu, Judo), shooting, basic driving and crime scene investigation. During their studies the police candidates complete an 8-month internship at a local state police office and an 8-month internship in several investigative, support and analysis units of the BKA. Higher service personnel of the BKA study for two years at the German Police University in Münster (formerly the Police Command and Staff Academy). There they usually earn a Master of Arts degree in police management. They study together with officers of this career track from the Federal Police and the police forces of the federal states. Police ranks The BKA has the same rank structure as the other police forces in Germany. As a criminal police branch, its ranks are preceded by the description "Kriminal-". The uniformed police forces normally have the description "Polizei-", as in "Polizeikommissar". The rank of police candidates or recruits is "Kriminalkommissaranwärter (KKA)". The entry-level rank after finishing the three-year studies is "Kriminalkommissar", meaning Detective Inspector. The criminal police ranks are divided into the "Gehobener Dienst" (upper service) and "Höherer Dienst" (higher service). 
The upper service is the investigative level of the BKA. The higher service could be described as the middle management of the BKA. To enter the higher service, members of the upper service have to pass an additional exam. After passing the test and being accepted into the higher service, these recruits have to study for an additional two years at the German Police University in Münster. The higher service can also be entered by external, non-police personnel from selected academic fields. Leadership The BKA is headed by three top executives, a president (Präsident des Bundeskriminalamtes) and two vice-presidents (Vizepräsident beim Bundeskriminalamt), who in German BKA parlance are referred to as the "Amtsleitung", which translates as 'management of the agency'. The president of the BKA is a political civil servant, who is appointed by the President of Germany upon recommendation from the Minister of the Interior and the cabinet. He or she can be provisionally retired by the federal president, as stipulated in §54 of the Law on Federal Civil Servants. The post is graded as B9 in the pay scale for federal civil servants (which is the same as a lieutenant general or a vice admiral in the armed forces). His or her vice-presidents, who to this day have mostly been career officials from the ranks, are in the B6 pay grade. Presidents Dr. Max Hagemann (1951–1952) Dr. Hanns Jess (1952–1955) Reinhard Dullien (1955–1964) Paul Dickopf (1965–1971) Horst Herold (1971 – March 1981) Heinrich Boge (March 1981 – 1990) Hans-Ludwig Zachert (1990 – April 1996) Klaus Ulrich Kersten (April 1996 – February 26, 2004) Jörg Ziercke (February 26, 2004 – December 2014) Holger Münch (since 1 December 2014) Vice-Presidents Rolf Holle Werner Heinl Ernst Voss Günther Ermisch Reinhardt Rupprecht Herbert Tolksdorf (until 1983) Gerhard Boeden (1983–1987) Gerhard Köhler (1990–1993) Bernhard Falk (1993–2010) Rudolf Atzbach Jürgen Stock (2004–2014) Jürgen Maurer (2010 – March 2013) Peter Henzler (since April 2013) Michael Kretschmer (since March 2015) Equipment Firearms BKA police officers are equipped with the SIG Sauer P229 as a duty firearm. Selected units are also equipped with Heckler & Koch MP5 machine pistols. Additionally, the police officers are equipped with pepper spray and bulletproof vests. The special mission unit MEK is equipped with Glock pistols, the Heckler & Koch MP5 and other weapons. The Protection Group is also allowed to carry additional military-grade weapons, e.g. the ASE unit or the protection details (only revolvers are allowed in certain foreign countries). The use of these weapons and of force in general is controlled by a special law, the UZwG. BKA police officers are authorized to carry their duty firearms concealed while off-duty. Vehicles The Protection Group of the BKA utilizes armoured cars from different manufacturers for its protection mission, e.g. the Mercedes-Benz W221 (for the President of Germany), the Audi A8 L or BMWs. 
Cases and investigations Red Army Faction 1998 Eschede train disaster (DVI-Team) 2004 Indian Ocean earthquake (DVI-Team) 2007 bomb plot in Germany (so-called "Sauerland-Group") 2011 Discovery of the National Socialist Underground murders and National Socialist Underground (NSU) 2016 Berlin truck attack Borussia Dortmund team bus bombing See also Crime in Germany Federal Criminal Police Office of Austria References External links BKA official website Criminal investigation National Central Bureaus of Interpol Federal law enforcement agencies of Germany Organisations based in Wiesbaden Buildings and structures in Wiesbaden Government agencies established in 1951 1951 establishments in Germany
8072127
https://en.wikipedia.org/wiki/NTFS-3G
NTFS-3G
NTFS-3G is an open-source cross-platform implementation of the Microsoft Windows NTFS file system with read/write support. NTFS-3G often uses the FUSE file system interface, so it can run unmodified on many different operating systems. It is runnable on Linux, FreeBSD, NetBSD, OpenSolaris, illumos, BeOS, QNX, WinCE, Nucleus, VxWorks, Haiku, MorphOS, Minix, macOS and OpenBSD. It is licensed under the GNU General Public License. It is a partial fork of ntfsprogs and is under active maintenance and development. NTFS-3G was introduced by one of the senior Linux NTFS developers, Szabolcs Szakacsits, in July 2006. The first stable version was released on February 21, 2007, as version 1.0. The developers of NTFS-3G later formed a company, Tuxera Inc., to further develop the code. NTFS-3G is now the free "community edition", while Tuxera NTFS is the proprietary version. Features NTFS-3G supports all operations for writing files: files of any size can be created, modified, renamed, moved, or deleted on NTFS partitions. Transparent compression is supported, as well as system-level encryption. Support for modifying access control lists and permissions is available. NTFS partitions are mounted using the Filesystem in Userspace (FUSE) interface. NTFS-3G supports hard links, symbolic links, and junctions. With the help of NTFS reparse point plugins, it can be made to read chunk-deduplicated files, system-compressed files, and OneDrive files. NTFS-3G provides complete support for, and translation of, NTFS access control lists (ACLs) to POSIX ACL permissions. A "usermap" utility is included to record the mapping from UIDs to Windows NT SIDs. NTFS-3G supports partial NTFS journaling, so if an unexpected computer failure leaves the file system in an inconsistent state, the volume can be repaired. As of 2009, a volume having an unclean journal file is recovered and mounted by default. The "norecover" mount option can be used to disable this behavior. Performance Benchmarks show that the driver's performance via FUSE is comparable to that of other file systems' in-kernel drivers, provided that the CPU is powerful enough. On embedded or old systems, the high processor usage can severely limit performance. Tuxera sells optimized versions of the driver that it claims have improved CPU utilization for embedded systems and macOS. The slowness of NTFS-3G (and FUSE in general) on embedded systems is attributed to the frequent context switching associated with FUSE calls. Some open-source methods provided to reduce this overhead include: The underlying FUSE layer has an option to use larger blocks when writing. Using a larger block means fewer context switches. This is in fact a solution recommended by Tuxera. A patch is available to use an even larger block. There is also a Linux kernel option to reduce the writes caused by file access. Synology Inc. uses a modified NTFS-3G on its NAS systems. It replaces the ntfs-3g inode caching with a different mechanism of unclear benefit. (It also includes an alternative Security Identifier translation for the NAS.) History NTFS-3G forked from the Linux-NTFS project on October 31, 2006. On February 21, 2007, Szabolcs Szakacsits announced "the release of the first open-source, freely available, stable read/write NTFS driver, NTFS-3G 1.0." On October 5, 2009, NTFS-3G for Mac was brought under the auspices of Tuxera Ltd. and a proprietary version called Tuxera NTFS for Mac was made available. On April 12, 2011, it was announced that the ntfsprogs project had been merged with NTFS-3G. 
NTFS-3G added TRIM support in version 2015.3.14. NTFS-3G fixed CVE-2017-0358 in version 2016.2.22. NTFS-3G fixed CVE-2019-9755 in version 2017.3.23AR.4. Advanced version The software's main maintainer, Jean-Pierre André, has kept the development active on SourceForge, providing bug fixes and new features. He ran a parallel release system on his website as the NTFS-3G Advanced Version (NTFS-3G AR). Each version was run through a test suite and was considered stable. Linux distributions that have switched to NTFS-3G AR include Debian and its derivatives (Ubuntu, PureOS, Pardus, Parrot OS, Trisquel), Gentoo Linux, and LiGurOS. On August 30, 2021, the two previously collaborating projects merged and moved to GitHub. See also Captive NTFS References External links NTFS-3G Community Edition NTFS-3G Advanced Version – Obsolete, as it became the new NTFS-3G Community Edition NTFS-3G for Mac OS X ("Catacombae") – Obsolete Writing on NTFS volumes on Mac OS X through NTFS-3G and OS X FUSE for free (works with Lion & Mountain Lion) Disk file systems File systems supported by the Linux kernel Unix file system-related software Userspace file systems
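As a purely illustrative aside (not part of the original article or the NTFS-3G documentation), the following minimal C++ sketch shows one way a program might invoke the driver described above through the standard ntfs-3g command-line front end. The device path /dev/sdb1 and mount point /mnt/windows are hypothetical; ro and norecover are the mount options mentioned in the Features section.

```cpp
// Hypothetical example: mount an NTFS volume read-only via the ntfs-3g CLI.
// Paths are invented for illustration; running this requires root privileges.
#include <cstdlib>
#include <iostream>

int main() {
    // ntfs-3g <device> <mount point> -o <options>
    // "ro" mounts read-only; "norecover" disables the default recovery of an
    // unclean journal described above.
    const char* cmd = "ntfs-3g /dev/sdb1 /mnt/windows -o ro,norecover";
    int status = std::system(cmd);
    if (status != 0) {
        std::cerr << "mount command failed with status " << status << '\n';
        return 1;
    }
    std::cout << "NTFS volume mounted at /mnt/windows\n";
    return 0;
}
```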
31076977
https://en.wikipedia.org/wiki/2011%20California%20Golden%20Bears%20football%20team
2011 California Golden Bears football team
The 2011 California Golden Bears football team represented the University of California, Berkeley in the 2011 NCAA Division I FBS football season. Led by tenth-year head coach Jeff Tedford, the Bears were members of the North Division of the Pac-12 Conference. Due to reconstruction at California Memorial Stadium, California played their home games in AT&T Park, now known as Oracle Park, in San Francisco, California. The season opener against Fresno State, officially a neutral-site game, was played at Candlestick Park. The regular season also ended away from home, with a matchup at Arizona State on November 25. Following the team's first losing season during Tedford's tenure as head coach, the Golden Bears improved to 7–5 (4–5 in the Pac-12) to finish fourth in the North Division. Cal also became bowl-eligible for the first time since 2009 and accepted a bid to play in the Holiday Bowl, which they lost to Texas. With a victory against Presbyterian College on September 17, Tedford became the winningest coach in program history. Preseason Following the program's first losing season during Jeff Tedford's tenure as head coach, several coaching changes were implemented. Offensive coordinator Andy Ludwig departed for the same position at San Diego State. He was replaced by Jim Michalczik, who also took over as the offensive line coach due to the departure of Steve Marshall for Colorado. Michalczik had previously been the offensive line coach at Cal from 2002 to 2008. Wide receivers coach Kevin Daft was succeeded by Eric Kiesau, who also served as the passing game coordinator, while defensive backs coach Al Simmons was succeeded by Ashley Ambrose, who had held the same position at Colorado. Marcus Arroyo joined the coaching staff as quarterbacks coach and assistant head coach. Starting running back Shane Vereen, who finished the 2010 season with 1,167 yards to become the program's first 1,000-yard rusher since the 2008 season, decided to forgo a remaining season of eligibility and enter the NFL Draft, having graduated in December. He was drafted in the second round by the New England Patriots, following defensive end Cameron Jordan, who was drafted in the first round. Safety Chris Conte was selected in the third round by the Chicago Bears, and linebacker Mike Mohamed was picked in the sixth round by the Denver Broncos. The team held its spring football practice from March 15 to April 24. Junior transfer Zach Maynard, the half-brother of wide receiver Keenan Allen, was named the starting quarterback on May 14. Maynard had played at Buffalo from 2008 to 2009 before transferring to Cal in 2010. Schedule Game summaries Fresno State The third meeting between Cal and Fresno State was a neutral-site game played at Candlestick Park in San Francisco. It saw both quarterbacks start their first career games for their respective teams: Cal's Zach Maynard had transferred from Buffalo in 2010, while Fresno State's Derek Carr had redshirted the previous season. On the opening drive Maynard's first pass was intercepted on the Cal 16-yard line and Fresno State was able to score first on a 9-yard run by running back Robbie Rouse. However, the Bears were able to rebound. On their second possession they put together a 67-yard scoring drive that was capped off with a 1-yard touchdown run by running back Isi Sofele, although the PAT was blocked. On their third possession of the game, Sofele was able to break free for a 39-yard scoring run, but the PAT was blocked again. 
The Bears scored for a third time on their fourth possession when Maynard connected with wide receiver Marvin Jones for a 42-yard score. The Bulldogs had a chance to add some points at the beginning of the second quarter, but a 35-yard field goal attempt missed. The Fresno State defense was able to keep Cal scoreless and, with Cal backed up on its own 6-yard line, capitalized on a botched snap: Sofele tried to recover the ball in the Cal end zone, but it was knocked loose and recovered by the Bulldogs for a touchdown. This trimmed Cal's lead to 19–14 at the half. On Cal's first possession of the third quarter, Maynard was able to hit Jones for a second touchdown, this one for 23 yards. Carr was intercepted on the ensuing possession, but the Bears were unable to convert it into points. However, the Cal defense was then able to sack Carr and force a fumble, which was returned for a 22-yard score. The Bears' final score of the game came on a 40-yard field goal in the fourth quarter, while the Bulldogs were able to put together a drive that ended with a 7-yard touchdown pass from Carr to receiver Josh Harper for the game's final score. In his first start for the Bears, Zach Maynard passed for 266 yards, with two touchdowns and an interception, while running for 53 yards. Starting running back Isi Sofele scored the second rushing touchdown of his career and rushed for 83 total yards. Keenan Allen had 112 yards receiving, while Marvin Jones had 118, with two touchdowns, marking the first time two Cal receivers had 100 yards in a game since 2006. For Fresno State, Derek Carr passed for 142 yards, a touchdown and an interception, while Robbie Rouse had 86 rushing yards, with a touchdown. Colorado Although both teams were in the Pac-12, this game counted as a non-conference game in Pac-12 standings, as it was scheduled before Colorado joined the Pac-12. The Bears visited Boulder for the first time since 1982, where they were shut out during the first quarter by the Buffaloes. Zach Maynard was intercepted on Cal's second possession of the game to set up a Colorado 27-yard field goal. However, Cal got on the board to open the second quarter on a 2-yard pass from Maynard to fullback Nico Dumont, although a PAT was blocked for the second week in a row. After both teams traded field goals, Cal added to its lead on a 7-yard pass from Maynard to tight end Anthony Miller to make it 16–6 at the half. Colorado started out the second half with a 37-yard pass from Tyler Hansen to tight end Ryan Deehan. Late in the quarter Cal scored on a 20-yard reception by Miller. The Buffs responded with a 66-yard touchdown from Hansen to receiver Paul Richardson to close the third quarter. The two connected again when Richardson opened the fourth quarter with a 78-yard score to give Colorado their first lead of the game. However, Cal regained the lead on the ensuing possession on a 20-yard run by running back C.J. Anderson. The Buffs were able to tie the game on a 32-yard field goal with 30 seconds left to force overtime. Cal won the coin toss and elected to play defense to start overtime. Colorado drove to the 4-yard line before settling for a 22-yard field goal. After reaching a first down at the 15-yard line, Cal was pushed back to the 35-yard line by two penalties, setting up a 1st-and-30 situation. However, on first down Maynard connected with Keenan Allen for a 32-yard gain to the 3-yard line. One play later, Maynard passed to Allen for a 5-yard touchdown to give the Bears their first overtime win since 2006. 
The victory tied head coach Jeff Tedford with Andy Smith for the most wins in school history with 74. Zach Maynard passed for 243 yards, four touchdowns, and an interception, Isi Sofele had 84 yards on the ground, while Keenan Allen had 97 receiving yards and a touchdown. Colorado's Tyler Hansen threw for 474 yards, a team record, including three touchdowns, while Paul Richardson had 284 receiving yards, a school record. The Buffs offense outgained the Bears 582–370 in the loss. Presbyterian Cal's home opener was the first matchup for the two teams, with Presbyterian making the longest trip for an away game in school history. The Bears' first score of the game came on a 9-yard run by running back C. J. Anderson, followed by a 1-yard run by Isi Sofele that was set up by an interception of Ryan Singer. Sofele opened the second quarter with a 3-yard run, and the Bears scored on the following possession with a 51-yard reception by Marvin Jones. The Blue Hose got on the scoreboard when cornerback Justin Bethel blocked a punt and returned it for a touchdown, though the PAT was missed. After Cal tight end Spencer Hagan scored on a 16-yard reception, Presbyterian's second score came on a 29-yard interception return by Bethel, though the two-point conversion failed. The final score of the half came on a 21-yard reception by Keenan Allen to make it 42–12 at the half. The second half opened with an 88-yard kickoff return for a score by Cal running back Brendan Bigelow. Backup running backs Covaughn DeBoskie-Johnson and Dasarte Yarnway rushed for 6- and 7-yard touchdowns, respectively, for Cal's final scores. The victory made Cal head coach Jeff Tedford the winningest coach in program history with 75 wins. Zach Maynard threw for 215 yards, with three touchdowns and an interception, Marvin Jones had 123 receiving yards with a score, while Isi Sofele had his first 100-yard rushing game with 105 yards and two scores. Cal outgained Presbyterian with 581 yards of total offense to 48, and controlled the ball for two-thirds of the game. Washington In the conference opener for both teams, Cal faced Washington in Seattle, not having defeated the Huskies since 2008. On the Bears' opening possession Zach Maynard connected with receiver Keenan Allen for a 90-yard score, the longest reception in school history. Washington responded on the ensuing possession with a 20-yard touchdown reception by tight end Austin Seferian-Jenkins from Keith Price. The Huskies scored again on a 2-yard run by running back Chris Polk to take the lead, which they never relinquished. After the Bears opened the second quarter with a 29-yard field goal, the Huskies scored their third straight touchdown with a 20-yard reception by Seferian-Jenkins. After Cal scored with a 36-yard field goal, a fumble by Price was recovered by the Bears. This set up a 1-yard score by running back C.J. Anderson for the final points of the half. A 52-yard field goal attempt by Washington's Erik Folk missed to close out the half, with the Huskies leading 21–20. Cal's final points of the game came on a 25-yard field goal, with Washington responding with a 40-yard field goal. The Huskies padded their lead with a 70-yard reception by Chris Polk to open the fourth quarter. Early in the quarter Cal was unable to capitalize on a fumble recovery and turned the ball over on downs. The Bears had a chance in the final minutes of the game and were able to drive to the Huskies' 2-yard line, but were unable to get the ball into the end zone, making Steve Sarkisian's record against Cal 3–0. 
Zach Maynard passed for 349 yards and a touchdown, the first time a Cal quarterback had passed for 300 yards since October 2009. Keenan Allen had a career-high 197 yards with a score, while Isi Sofele had 98 yards on the ground. Keith Price threw for 292 yards with three touchdowns, while Chris Polk was the Huskies' leading receiver and rusher, with 85 and 60 yards, respectively, scoring both through the air and on the ground. Oregon Both teams came off a bye week as Cal traveled north to #9 Oregon. The Bears scored on the game's opening drive with a 27-yard field goal, but the Ducks came right back with a 53-yard scoring run by running back LaMichael James. Cal scored again on the ensuing possession with a 38-yard field goal, with Oregon adding its final score of the half on a 17-yard run by receiver De'Anthony Thomas late in the quarter. The Cal defense was able to hold Oregon scoreless in the second quarter, while a 54-yard field goal, a career-high for kicker Giorgio Tavecchio, and a 12-yard touchdown reception by receiver Keenan Allen gave the Bears the lead 15–14. Darron Thomas was intercepted in Cal territory in the closing minutes of the half for the game's sole turnover, but the Bears were unable to add to their lead when a 40-yard field goal attempt was blocked. However, the second half was a different story, as Oregon shut out Cal and took control of the game. The Ducks scored three touchdowns in the third quarter, with a 23-yard reception by Thomas, a 68-yard run by running back Kenjon Barner (with a successful 2-point conversion), and a 21-yard reception by Thomas, his third touchdown of the game. The final score of the game came on a 3-yard touchdown by receiver Lavasier Tuinei early in the fourth quarter. Backup Allan Bridgford stepped in for Zach Maynard during the quarter, but the Bears failed to put up further points on the board despite putting together a drive that went down to the Oregon goal line on Bridgford's first series. Zach Maynard threw for 218 yards and a touchdown, while Allan Bridgford had 103 passing yards in relief. Isi Sofele had 119 yards on the ground, while Keenan Allen had 170 through the air and the Bears' sole touchdown. Oregon's Darron Thomas threw for 198 yards, with three touchdowns and an interception, while De'Anthony Thomas had 114 receiving yards, two touchdown receptions, and a rushing touchdown. LaMichael James, the nation's leading rusher, finished with 239 yards, which put him over the 4,000 mark for his career. The game was his fourth straight with at least 200 yards rushing, a feat unparalleled in school history. However, he was injured early in the fourth quarter and had to be carted off the field. USC The Bears returned home to AT&T Park in search of their first conference win against the Trojans, whom they had not beaten since 2003. On the fourth play of the game a Keenan Allen fumble was recovered by the Trojans, but they were unable to convert it into points, when a trick play called by USC head coach Lane Kiffin on fourth down at the Cal 8-yard line resulted in a fumble that Cal recovered. A second Cal fumble was recovered by the Trojans when Zach Maynard was sacked in Cal territory, setting up a 39-yard field goal. USC scored three times in the second quarter, first on a 39-yard pass by Matt Barkley to receiver Marqise Lee. Maynard was then intercepted on the Cal 23 on the ensuing possession to set up a 29-yard field goal, and receiver Brandon Carswell had a 7-yard reception late in the quarter. 
The Bears were able to drive downfield in the final minutes of the half but failed to get on the board when Maynard was intercepted at the goal line. USC led 20–0 at the half, with Cal having been shut out for four straight quarters going back to the previous week against Oregon. USC opened the second half with a 23-yard field goal and Cal ended the shutout with a 27-yard field goal. The Bears scored again when Maynard rushed in from 3 yards out, although the PAT was blocked. Maynard threw his third interception late in the fourth quarter to set up a 2-yard scoring run by running back Curtis McNeal. After turning the ball over on downs, Cal recovered a fumble by USC but was unable to score again. Zach Maynard threw for 294 yards and had a touchdown run, but accounted for four of the Bears' turnovers with three interceptions and a fumble. Keenan Allen had a career-high 160 receiving yards, while Isi Sofele, coming off a breakout game the previous week against Oregon, was held to 44 yards on the ground. Cal committed five turnovers for the first time since November 2008 and went 0–3 in conference play for the first time in Jeff Tedford's tenure as head coach. Matt Barkley threw for 195 yards and two scores, Curtis McNeal led the Trojans on the ground with 86 yards and a score, while Marqise Lee had 81 yards through the air and a touchdown. Utah The first conference game between the two teams was their first matchup since the 2009 Poinsettia Bowl. After a scoreless first quarter, Cal's first points of the game came in the second with a 5-yard scoring run by Isi Sofele. A fumble by Utah quarterback Jon Hays was recovered and resulted in a 35-yard field goal. After the Bears made a 37-yard field goal, Hays was picked off by linebacker Mychal Kendricks, which set up a 12-yard touchdown reception by Keenan Allen to make it 20–0 at the half. Quarterback Zach Maynard scored on a 4-yard run in the third quarter, while a second interception of Hays set up a 29-yard field goal attempt which missed. Hays threw his third interception of the game to open the fourth quarter, which was returned by Josh Hill for a 32-yard score. The Utes ended their scoring drought with a 36-yard field goal on the ensuing possession and running back John White had a 14-yard touchdown run in the final minutes. In an improvement from the previous week, the Bears had no turnovers in their first conference win of the season. Zach Maynard threw for 255 yards, with a touchdown on the ground and through the air, Keenan Allen had 78 receiving yards and a score, while Isi Sofele rushed for 84 yards and a score. Utah's Jon Hays threw for 148 yards with three interceptions, while the Utes were held to 13 net yards on the ground. UCLA Cal got on the board first on a 1-yard run by Isi Sofele that was set up by a fumble recovered from UCLA quarterback Kevin Prince. The Bruins responded in the second quarter with an 11-yard score by running back Johnathan Franklin and began to take control of the game when Zach Maynard's first interception of the game set up a 32-yard field goal. Running back Derrick Coleman scored on a 2-yard run to make it 17–7 at the half. The Bears were able to close the gap in the third quarter with a 1-yard touchdown run by running back C.J. Anderson that was set up by a muffed UCLA punt return, but were unable to tie the game when a 42-yard field goal attempt missed. The fourth quarter saw Coleman score his second touchdown on a 20-yard run, which was set up by an interception of Maynard. 
He followed this up with a 24-yard scoring run as the result of another interception. Maynard threw a third interception in a row to end any chances of a comeback late in the quarter. Zach Maynard threw for 199 yards and was intercepted four times, with UCLA safety Tevin McDonald accounting for three picks. Isi Sofele had 73 rushing yards with one score, while Keenan Allen had 83 receiving yards. UCLA snapped a three-game losing streak to Cal, with quarterback Kevin Prince passing for only 92 yards, but rushing for 163 on the ground, double that of running back Derrick Coleman, who rushed for 80 yards and three scores. Washington State Cal faced Washington State at home, which came in with the second-ranked passing offense in the Pac-12. On the Bears' first series Zach Maynard connected with tight end Anthony Miller for a 19-yard touchdown. Cal scored again midway through the quarter on a 1-yard run by Isi Sofele, although the PAT was blocked. Washington State had a chance to score some points at the beginning of the second quarter on a 52-yard field goal attempt, but the snap went high and resulted in a 28-yard loss. The Bears were unable to capitalize on this opportunity, but scored on the ensuing possession on a 5-yard run by C.J. Anderson and added a 43-yard field goal to make it 23–0 at the half. On the opening drive of the third quarter, fullback Will Kapp, son of former Cal quarterback Joe Kapp, was able to break free on 4th and 1 for a 43-yard touchdown, the first of his career. A second Cougars attempt at a field goal, this one from 27 yards, missed. On the following possession, Maynard was pulled from the game after recovering a fumble by Sofele and backup Allan Bridgford stepped in. Washington State recovered a second fumble by Sofele in Cal territory to close the quarter, and was able to end the shutout on a 5-yard run by running back Rickey Galvin. Zach Maynard finished the game with 118 passing yards and a touchdown, while Isi Sofele had 138 yards on the ground, a career high, and accounted for a touchdown. Receiver Keenan Allen had 85 receiving yards to put him over the 1,000-yard mark for the season, which he reached in nine games, the fastest in school history. The win put the Bears within a victory of bowl eligibility. Washington State's Marshall Lobbestael passed for 155 yards, receiver Marquess Wilson had 85 yards through the air, and running back Rickey Galvin accounted for 73 yards on the ground and a touchdown. Oregon State Cal's final home game of the season was a matchup against Oregon State, whom they had not beaten at home since 1997. The Beavers scored first on the opening drive with a 32-yard field goal and had an opportunity to add to their score when Zach Maynard was picked off on the ensuing possession, but were forced to punt. The Bears responded with a 19-yard strike from Maynard to receiver Michael Calvin to take the lead, which was never relinquished. They added to the lead on a nine-minute scoring drive in the second quarter that was capped off by a 5-yard run by Maynard. With a minute left in the half, Oregon State was able to drive down the field and kick a 46-yard field goal to cut the lead to 14–6. Isi Sofele scored on a 20-yard run to open the third quarter, although the PAT missed. The Beavers were able to put together a drive that was halted when a pass from Sean Mannion was deflected and intercepted at the Cal 4-yard line. 
They had another opportunity to score early in the fourth quarter, but this too was stopped by a turnover when a fumble was recovered at the Cal 3-yard line. The final score of the game came on a 32-yard Cal field goal. Mannion was picked off a second time late in the quarter at the Cal 20-yard line. Zach Maynard threw for 128 yards, with one passing touchdown and a rushing touchdown. Isi Sofele set a new career high with 190 yards rushing and had a touchdown, also going over the 1,000-yard mark for the season. Oregon State's Sean Mannion passed for 247 yards with two interceptions, while the Beavers' ground attack was held to only 27 yards, their second-fewest total of the season. The victory snapped a four-game losing streak to Oregon State going back to 2006 and made Cal bowl-eligible for the eighth time in nine years after finishing the previous season with a 5–7 record. Stanford Cal started out the 114th Big Game by driving into Stanford territory on the first play, but Isi Sofele fumbled the ball away on the second. This set up a 34-yard scoring run by receiver Ty Montgomery. The Bears responded with a 25-yard field goal and, after Andrew Luck was intercepted, were able to jump ahead with a 17-yard touchdown reception by Keenan Allen. Cal's final score of the half came early in the second quarter with a 19-yard field goal. Stanford got the go-ahead score on a 25-yard touchdown by running back Tyler Gaffney to make it 14–13. A Cardinal 33-yard field goal attempt to close the half missed. The Cardinal opened the third quarter by scoring with a 4-yard reception by tight end Levine Toilolo, followed by a 9-yard touchdown reception by fullback Ryan Hewitt. The Bears were shut out, with Sofele having his second fumble of the game to end the quarter. Cal was able to mount a comeback in the fourth quarter with a 2-yard reception by tight end Spencer Hagan and a successful two-point conversion. A 35-yard field goal by Stanford made it a two-score game, and the Bears were able to score again in the closing minutes on a 1-yard run by running back C. J. Anderson. However, Stanford recovered the onside kick to retain control of the Stanford Axe and stave off an upset. Zach Maynard threw for 280 yards and two scores, while Keenan Allen had 97 through the air and a touchdown. Isi Sofele, after two back-to-back 100-yard rushing games, was limited to 84. Stanford's Andrew Luck passed for 257 yards, two scores, and a pick. Arizona State Cal's regular season finale came on the road against Arizona State. The Bears scored on their opening drive with a 48-yard field goal. The Sun Devils responded with a 1-yard scoring run by running back Cameron Marshall. Cal came back with an 18-yard touchdown run by Isi Sofele, and capitalized on a fumble recovery with a 16-yard scoring run by Zach Maynard. The second quarter opened with a 17-yard touchdown reception by Arizona State receiver Aaron Pflugrad. After C. J. Anderson scored on a 1-yard run, Sun Devils quarterback Brock Osweiler was intercepted to set up a 27-yard field goal. Osweiler then connected with receiver Rashad Ross for a 35-yard score. Arizona State forced a fumble on the kickoff, setting up a 4-yard touchdown reception by tight end Trevor Kohl to jump ahead 28–27 at the half. The Sun Devils scored on a 47-yard field goal to start the third quarter, followed by Cal scoring on a 3-yard run by Anderson. He scored again on a 74-yard reception, followed by Arizona State ending the quarter with a 24-yard touchdown run by Marshall. 
The Bears began the fourth quarter with a 19-yard field goal, followed by one from 30 yards that was set up by a fumble recovery. Osweiler threw his second interception of the game later in the quarter, ending any attempt at a Sun Devils comeback. Zach Maynard threw for 237 yards, with one touchdown pass and a touchdown run. Isi Sofele rushed for 145 yards and a score, while C. J. Anderson rushed for two touchdowns and had a scoring reception. Arizona State's Brock Osweiler passed for 264 yards, with three scores and two picks. Rashad Ross had 108 yards through the air and a touchdown, while Cameron Marshall rushed for 157 yards and two touchdowns. The Sun Devils ended their season with a four-game losing streak. Holiday Bowl Cal had last played Texas in 1970, and the Longhorns owned a 4–0 record against the Bears, although this marked their first meeting in a bowl. Cal scored first on a 47-yard field goal, then committed its first turnover of the game when Zach Maynard was intercepted by cornerback Quandre Diggs. Despite a defensive stop, Maynard was sacked on the following possession and fumbled deep in Cal territory. However, Texas was unable to capitalize on this when a 38-yard field goal attempt missed. The Longhorns got on the board midway through the second quarter on a trick play that involved receiver Jaxon Shipley throwing a 4-yard touchdown pass to quarterback David Ash to make it 7–3 at the half, the combined score tying a Holiday Bowl record for the lowest ever in the first half. Cal opened the third quarter on a 6-yard scoring run by Isi Sofele to take the lead for one series, as Texas responded with a 47-yard scoring reception by receiver Marquise Goodwin. Sofele then fumbled the ball on the ensuing possession, but the Cal defense again held. Maynard was sacked on the final Cal possession of the quarter, resulting in another fumble. A 37-yard run by Goodwin then set up a 4-yard touchdown run by running back Cody Johnson for the game's final score. Late in the quarter the final Cal turnover occurred when receiver Marvin Jones fumbled after making a reception. Zach Maynard threw for 188 yards and was picked off once and sacked six times, accounting for three turnovers. Isi Sofele was held to 52 rushing yards with one touchdown run, while Marvin Jones and Keenan Allen logged 88 and 82 yards through the air, respectively. Cal committed five turnovers, the most since the October 29 matchup against UCLA. Texas quarterback David Ash threw for 142 yards and a score and was sacked twice, but earned Offensive MVP honors. Receiver Marquise Goodwin had 49 yards through the air and 33 on the ground with a touchdown reception, as the Longhorns outrushed the Bears 109–7, giving Cal their lowest single-game rushing total since 2000. Linebacker Keenan Robinson was named the Defensive MVP. 
Linebacker Mychal Kendricks and offensive tackle Mitchell Schwartz were taken in the second round, punter Bryan Anger in the third, receiver Marvin Jones in the fifth, and safety D. J. Campbell and defensive end Trevor Guyton in the seventh. Anger became the highest drafted punter since 1995. References External links 2011 California Football Media Guide California California Golden Bears football seasons California Golden Bears football
8726769
https://en.wikipedia.org/wiki/Trilinos
Trilinos
Trilinos is a collection of open-source software libraries, called packages, intended to be used as building blocks for the development of scientific applications. The word "Trilinos" is Greek and conveys the idea of "a string of pearls", suggesting a number of software packages linked together by a common infrastructure. Trilinos was developed at Sandia National Laboratories from a core group of existing algorithms and utilizes the functionality of software interfaces such as the BLAS, LAPACK, and MPI (the message-passing interface for distributed-memory parallel programming). In 2004, Trilinos received an R&D 100 Award. Several supercomputing facilities provide an installed version of Trilinos for their users. These include the National Energy Research Scientific Computing Center (NERSC), Blue Waters at the National Center for Supercomputing Applications, and the Titan supercomputer at Oak Ridge National Laboratory. Cray supercomputers come with Trilinos installed as part of the Cray Scientific and Math Libraries. Features Trilinos contains packages for: Constructing and using sparse graphs and matrices, and dense matrices and vectors. Iterative and direct solution of linear systems. Parallel multilevel and algebraic preconditioning. Solution of non-linear, eigenvalue and time-dependent problems. Solution of PDE-constrained optimization problems. Partitioning and load balancing of distributed data structures. Automatic differentiation. Discretizing partial differential equations. Trilinos supports distributed-memory parallel computation through the Message Passing Interface (MPI). In addition, some Trilinos packages have growing support for shared-memory parallel computation. They do so by means of the Kokkos package in Trilinos, which provides a common C++ interface over various parallel programming models, including OpenMP, POSIX Threads, and CUDA. Programming languages Most Trilinos packages are written in C++. Trilinos version 12.0 and later requires C++11 support. Some Trilinos packages, like ML and Zoltan, are written in C. A few packages, like Epetra, have optional implementations of some computational kernels in Fortran, but Fortran is not required to build these packages. Some Trilinos packages have bindings for other programming languages. These include Python, C, Fortran, and Matlab. Software licenses Each Trilinos package may have its own software license. Most packages are open-source; most of these have a Modified BSD license, while a few packages are under the GNU Lesser General Public License (LGPL). The BLAS and LAPACK libraries are required dependencies. See also BLAS LAPACK Message Passing Interface List of numerical-analysis software Sandia National Laboratories References External links Numerical libraries Concurrent programming libraries Free mathematics software C++ numerical libraries
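As an illustrative aside (not part of the original article), the following minimal C++ sketch shows the style of code the Kokkos package described above enables: a single parallel_reduce written against Kokkos' common interface, which the library maps onto OpenMP, POSIX Threads or CUDA depending on how it was configured. The kernel itself (a partial harmonic sum) is an arbitrary example chosen for illustration.

```cpp
// Minimal Kokkos sketch: one parallel reduction over a 1-D index range.
// Build against an installed Trilinos/Kokkos; the backend (OpenMP, Threads,
// CUDA, ...) is selected at configure time, not in this source file.
#include <Kokkos_Core.hpp>
#include <cstdio>

int main(int argc, char* argv[]) {
    Kokkos::initialize(argc, argv);
    {
        const int n = 1 << 20;
        double sum = 0.0;
        // Each index i contributes 1/(i+1); Kokkos distributes the iterations
        // across the active backend and combines the partial results.
        Kokkos::parallel_reduce(
            "partial_harmonic_sum", n,
            KOKKOS_LAMBDA(const int i, double& local) { local += 1.0 / (i + 1); },
            sum);
        std::printf("partial harmonic sum over %d terms: %f\n", n, sum);
    }
    Kokkos::finalize();
    return 0;
}
```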
57838528
https://en.wikipedia.org/wiki/Chen%20Guangxi
Chen Guangxi
Chen Guangxi (; 1903–1992) was a Chinese engineer, computer scientist, and professor who founded the discipline of computer science at the Harbin Institute of Technology.

Early life
Chen Guangxi was born in Tongcheng, Anhui Province, on May 21, 1903, with his ancestral hometown in Shangyu, Zhejiang. His father was a former Qing dynasty government official. In May 1920, Chen left China to study and work in France. In 1922, Chen completed his studies in agricultural machinery and obtained a degree in agricultural mechanics. In 1929, he graduated from the School of Engineering of the University of Leuven, Belgium, with degrees in process manufacturing, civil engineering and mining engineering. In 1930 he graduated from the Graduate School of Geology and obtained a degree in engineering geology. Chen returned to war-torn China in October 1930. After a year of unemployment, he found a position teaching mathematics at a middle school in Beijing. He then worked successively as a lecturer at National Labour University, a course interpreter for the Chinese Northeast Navy in Qingdao, a physics and chemistry teacher in Kaifeng, and a mathematics teacher at the high school attached to Fu Jen Catholic University in Beijing. In September 1933, he became a lecturer in the Department of Mathematics and Physics at Fu Jen Catholic University, and in September 1938 he was promoted to professor. In 1945, at the end of the Second World War, the Ministry of Education of the Chinese Nationalist government founded the National Peking Senior Industrial Vocational School and appointed Chen Guangxi as its principal. In late 1949, upon the establishment of Communist China, Chen joined a design institute in the Ministry of Machinery Industry and served as its chief engineer. Chen paid close attention to the electronic computers being introduced in Western countries in the early 1950s. In 1957, he left for Harbin, where, at the Harbin Institute of Technology, he initiated the first electronic computer discipline in China.

Research career

Analog computer
In 1958, Chen and his team developed the first structural analog computer in China. The machine could speak a few words and play chess. This chess-playing computer could perform 40,000 calculations per second, was capable of logical reasoning, and could complete specific judgment tasks.

Magnetic core
In 1963, Chen presided over the development of ultra-small magnetic cores, a precondition for the development of large computers. Under Chen's guidance, the team then carried out the magnetic-core molding experiment of "rolling into a belt and rubbering into a core" and succeeded. The method was cutting-edge and was quickly promoted throughout China. The molding technology supported the development of high-speed, large-capacity memory and the manufacture of transistor computers and small-scale integrated-circuit computers. The project was a major contribution to the development of computer technology in China.

Fault-tolerant computer system
Chen was praised as a pioneer of fault-tolerant computer system research in China for his work in this field. In 1973, at the age of 70, Chen proposed a major research project to address the problem of computer reliability and worked on computer fault-tolerance technology. The project was listed as a national project by the Chinese Commission for Science, Technology and Industry for National Defense, and received a research fund of over 1 million Chinese yuan. Chen and his development team successfully developed the RCJ-1, China's first fault-tolerant computer, described as "a dual-mode fault-tolerant system with self-test and self-correction", with reliability improved by more than four times. (A generic sketch of this dual-channel comparison idea is given at the end of this entry.) Chen also co-authored the first monograph on fault-tolerant computing, Diagnosis and Fault Tolerance of Digital Systems, with Professor Chen Tingyu of Chongqing University; it was published by the National Defense Industry Press in 1981 and became a national textbook. Chen's fault-tolerance research team achieved a number of notable results and established a strong reputation in China's aerospace industry, banking system, and high-reliability computing field. The fault-tolerant technology was applied in the Chinese manned spaceflight program Shenzhou.

Other projects
Chen also guided other research projects, such as hardware implementation of algorithms and database machines, which achieved significant scientific and technological results. Chen won a number of first and second prizes of the National Science and Technology Progress Award. The computer science discipline Chen initiated at HIT was designated a key discipline, established key laboratories and postdoctoral research stations, and has continued to play a leading role in China.

Legacy
In 2005, a bronze statue of Chen Guangxi was unveiled at the School of Computer Science and Technology, HIT.

References

1903 births 1992 deaths Fu Jen Catholic University faculty Old University of Leuven alumni Harbin Institute of Technology faculty Harbin Institute of Technology alumni Chinese computer scientists Engineers from Anhui People from Anqing
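In general terms, a "dual-mode" design of the kind described above is a form of duplication with comparison: the same computation is carried out on two channels and the results are compared, so that a disagreement reveals a fault. The sketch below illustrates only that general principle; it is not a reconstruction of the RCJ-1's actual architecture, and the function names and retry policy are invented for the example.

```cpp
// Generic sketch of duplication-with-comparison (dual-channel redundancy).
// Illustrates the principle only: run the same computation on two channels,
// compare, and retry or flag a fault on mismatch. Not based on the RCJ-1.
#include <cstdio>
#include <functional>
#include <optional>

// Accept a result only if both channels agree; retry a few times before
// signalling an unrecoverable fault (std::nullopt).
std::optional<int> dual_mode(const std::function<int(int)>& channel_a,
                             const std::function<int(int)>& channel_b,
                             int input, int max_retries = 3) {
  for (int attempt = 0; attempt < max_retries; ++attempt) {
    const int a = channel_a(input);
    const int b = channel_b(input);
    if (a == b) {
      return a;         // Self-test passed: both channels agree.
    }
    // Disagreement: assume a transient fault on one channel and retry.
  }
  return std::nullopt;  // Persistent disagreement: report a fault.
}

int main() {
  auto square = [](int x) { return x * x; };
  if (auto r = dual_mode(square, square, 7)) {
    std::printf("accepted result: %d\n", *r);
  } else {
    std::printf("fault detected\n");
  }
  return 0;
}
```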
5031225
https://en.wikipedia.org/wiki/Head-of-line%20blocking
Head-of-line blocking
Head-of-line blocking (HOL blocking) in computer networking is a performance-limiting phenomenon that occurs when a line of packets is held up by the first packet. Examples include input-buffered network switches, out-of-order delivery and multiple requests in HTTP pipelining.

Network switches
A switch may be composed of buffered input ports, a switch fabric and buffered output ports. If first-in first-out (FIFO) input buffers are used, only the oldest packet is available for forwarding. More recent arrivals cannot be forwarded if the oldest packet cannot be forwarded because its destination output is busy. The output may be busy if there is output contention. Without HOL blocking, the new arrivals could potentially be forwarded around the stuck oldest packet to their respective destinations. HOL blocking can produce performance-degrading effects in input-buffered systems and limits the throughput of switches. For FIFO input buffers, a simple model with fixed-size cells addressed to uniformly distributed destinations shows that throughput is limited to 58.6% of the total as the number of links becomes large. One way to overcome this limitation is to use virtual output queues; a small simulation sketch contrasting the two approaches is given at the end of this entry. Only switches with input buffering can suffer HOL blocking. With sufficient internal bandwidth, input buffering is unnecessary; all buffering is handled at the outputs and HOL blocking is avoided. This no-input-buffering architecture is common in small to medium-sized Ethernet switches.

Out-of-order delivery
Out-of-order delivery occurs when sequenced packets arrive out of order. This may happen due to different paths taken by the packets or from packets being dropped and resent. HOL blocking can significantly increase packet reordering. Reliably broadcasting messages across a lossy network among a large number of peers is a difficult problem. While atomic broadcast algorithms solve the single point of failure problem of centralized servers, those algorithms introduce a head-of-line blocking problem. The Bimodal Multicast algorithm, a randomized algorithm that uses a gossip protocol, avoids head-of-line blocking by allowing some messages to be received out of order.

In HTTP
One form of HOL blocking in HTTP/1.1 occurs when the number of allowed parallel requests in the browser is used up, and subsequent requests must wait for earlier ones to complete. HTTP/2 addresses this issue through request multiplexing, which eliminates HOL blocking at the application layer, but HOL blocking still exists at the transport (TCP) layer. HTTP/3 uses QUIC instead of TCP, which removes HOL blocking at the transport layer.

See also
Bufferbloat
FIFO
HTTP pipelining
Network scheduler
Pipeline stall
Queue

References

Queue management
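To make the throughput effect concrete, the following is a small, self-contained simulation sketch contrasting a single FIFO input queue with per-output virtual output queues (VOQs) at one input port of a toy two-output switch. The traffic model and all parameters are invented for illustration only; the point is simply that a head packet stuck on a busy output idles the input under FIFO queueing, while VOQs let the input serve the other output.

```cpp
// Toy simulation of head-of-line blocking at one input port feeding two
// outputs. Output 0 is intermittently busy (modelling contention from other
// ports); output 1 is always free. With one FIFO, a head packet waiting for
// output 0 also stalls packets behind it that want output 1; with virtual
// output queues (one queue per output) it does not. Parameters are invented.
#include <cstdio>
#include <deque>
#include <random>

int main() {
  const int steps = 100000;
  const double p_arrival = 0.9;    // a packet arrives at the input this step
  const double p_out0_busy = 0.7;  // output 0 blocked by other traffic

  std::mt19937 rng(42);
  std::bernoulli_distribution arrive(p_arrival), busy0(p_out0_busy), dest0(0.5);

  std::deque<int> fifo;    // single FIFO input queue (stores destination)
  std::deque<int> voq[2];  // one virtual output queue per output
  long forwarded_fifo = 0, forwarded_voq = 0;

  for (int t = 0; t < steps; ++t) {
    if (arrive(rng)) {
      const int dst = dest0(rng) ? 0 : 1;
      fifo.push_back(dst);
      voq[dst].push_back(dst);
    }
    const bool out_free[2] = {!busy0(rng), true};

    // FIFO: only the head-of-line packet is eligible for forwarding.
    if (!fifo.empty() && out_free[fifo.front()]) {
      fifo.pop_front();
      ++forwarded_fifo;
    }
    // VOQ: the input still sends at most one packet per step, but it may pick
    // any free output whose queue is non-empty, so it is never stuck behind
    // the head packet.
    for (int o = 0; o < 2; ++o) {
      if (out_free[o] && !voq[o].empty()) {
        voq[o].pop_front();
        ++forwarded_voq;
        break;  // one packet per input per time step
      }
    }
  }
  std::printf("forwarded with single FIFO : %ld\n", forwarded_fifo);
  std::printf("forwarded with VOQs        : %ld\n", forwarded_voq);
  return 0;
}
```

Under these (arbitrary) settings the VOQ count comes out noticeably higher than the FIFO count, which is the qualitative effect that motivates virtual output queueing in input-buffered switches.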
2678406
https://en.wikipedia.org/wiki/OMNI%20%28SCIP%29
OMNI (SCIP)
OMNI is an encryption device manufactured by L-3 Communications. It adds secure voice and secure data capability to any standard analog telephone or modem-connected computer. SCIP signalling allows interoperability with other SCIP devices, such as the Secure Terminal Equipment (STE) phone. In bypass mode, STU-IIIs can communicate with one another using the OMNI to enhance the quality of the voice and data. Algorithms used by the OMNI include Type 1 encryption methods.

Models
The Standard model is limited to 56 kbit/s. The Xi upgrade allows data rates up to 2 Mbit/s.

External links
L-3 OMNI Web Site
L-3 Voice Transmitter website

Encryption devices
12953313
https://en.wikipedia.org/wiki/IEEE%20Software
IEEE Software
IEEE Software is a bimonthly peer-reviewed magazine and scientific journal published by the IEEE Computer Society, covering all aspects of software engineering, processes, and practices. Its stated mission is to be the best source of reliable, useful, peer-reviewed information for leading software practitioners: the developers and managers who want to keep up with rapid technology change. The magazine was established in 1983. According to the Journal Citation Reports, the journal has a 2018 impact factor of 2.945. IEEE Software received the APEX 2016 Award of Excellence in the “Magazines, Journals & Tabloids — Electronic” category. Its November/December 2016 issue, “The Role of the Software Architect,” won the 2017 Folio Eddies Digital Award in the "Standalone Digital Magazine; Association/Non-Profit (B-to-B) – Standalone Digital Magazine – less than 6 issues" category. IEEE Software also received an honorable mention in the Folio Digital Awards in 2018.

Editors-in-chief
The following individuals are or have been editor-in-chief of the journal:

See also
IEEE Transactions on Software Engineering
IET Software

References

External links

Software Computer science journals Software engineering publications Publications established in 1983 Bimonthly journals English-language journals
32514388
https://en.wikipedia.org/wiki/ACSI%20College-Iloilo
ACSI College-Iloilo
History
ACSI College-Iloilo was founded as the Associated Computer Systems Institute in 1984, with its first campus at Luna, La Paz, Iloilo City. At first it offered diploma courses in computing. It later changed its name to ACSI Business and Computer School, Inc., and moved to a new location in the City Proper of Iloilo. In 2008 the college began offering two CHED programs, the Bachelor of Science in Computer Science and the Bachelor of Science in Computer Systems, and following that it changed its name to ACSI College. It offers courses in information technology, hospitality management, health sciences and business, as well as short-term courses, accredited by the Technical Education and Skills Development Authority (TESDA) and the Commission on Higher Education (CHED) of the Philippines.

Academic programs
ACSI College offers programs at both bachelor's and non-bachelor's degree levels. The college is accredited by CHED and TESDA, and is a CHED-certified higher education institution. ACSI College Iloilo offers courses in regular and night classes.

Information technology
Bachelor of Science in Computer Science
Bachelor of Science in Information Systems
Associate in Computer Technology (TESDA recognized)

Hospitality
Hotel and Restaurant Services (NC IV)

Health sciences
Caregiver Program/Caregiver Course (NC II)

Short term courses
AutoCAD
Adobe Photoshop CS3, CS4, CS5 & CS6
Microsoft Office Word 2007 & 2010
Microsoft Office Excel 2007 & 2010
Microsoft Office Groove 2007 & 2010
Microsoft Office InfoPath 2007 & 2010
Microsoft Office OneNote 2007 & 2010
Microsoft Office PowerPoint 2007 & 2010
Microsoft Office Publisher 2007 & 2010
Microsoft Office Access 2007 & 2010
Microsoft Office Outlook 2007 & 2010

Student organizations
Rotaract Club of ACSI (Rotary Club of Midtown Iloilo): the ACSI College Iloilo chapter of Rotary International
ACSI IT Society: an organization exclusively for students of IT (information technology) and computer courses

References

Universities and colleges in Iloilo City
1529284
https://en.wikipedia.org/wiki/Kino%20%28software%29
Kino (software)
Kino was a free software GTK+-based video editing application for Linux and other Unix-like operating systems. Development of Kino was started at the end of 2000 by Dan Dennedy and Arne Schirmacher. The project's aim was: "Easy and reliable DV editing for the Linux desktop with export to many usable formats." The program supported many basic and detailed audio/video editing and assembling tasks. Kino has been included in several Linux distributions, including Debian, Puppy Linux and Ubuntu. BSD ports are also available. Development of major new features in Kino slowed because the lead developer, Dan Dennedy, turned his attention to the Media Lovin' Toolkit. When he released Kino 1, Dennedy indicated that he was returning to work on the MLT Framework to support Kdenlive (another Linux non-linear digital video editor), "since its latest version shows much promise". As of August 5, 2013, the official website for Kino indicated that the project is "dead" and that users should try alternative software.

Features
Kino can import raw DV-AVI and DV files, as well as capture footage from digital camcorders using the raw1394 and dv1394 libraries. It can also import (as well as export) multiple still frames as JPEG, PNG, TIFF, PPM, and other image file types. Kino can export to camcorders using the ieee1394 or video1394 libraries. Kino can also export audio as WAV, Ogg Vorbis, MP3 using LAME, or MP2. Using FFmpeg, Kino can export audio/video as MPEG-1, MPEG-2, and MPEG-4, and it is integrated with DVD-Video authoring utilities. Features included in version 1.3.4 include: capture from FireWire cameras, fast and frame-accurate navigation/scrubbing, vi keybindings, a storyboard view with drag-and-drop, a trimmer with 3-point insert editing, a fine-grain thumbnail viewer, support for jog/shuttle USB devices, drag-and-drop from a file manager, and undo/redo up to 99 levels. Kino provides a range of audio and video effects and transitions. Audio effects include silence, fade in/out, gain envelope, dub (from file), mix (from file), and crossfading support. Video effects include black/white, sepia tone, multiple color balance and masking tools, reverse (i.e. inverse or negative), mirror, kaleidoscope, swap (flip), fade to/from black, blur (triangle), soft focus, titler and pixelate. Transitions include fade to/from color, dissolve, push wipe, barn door wipe, color differences, and extensible wipes with numerous common SMPTE wipes (box, bar, diagonal, barn door, clock, matrix, four box, iris, and checkerboard).

Release history

Reception
In reviewing Kino 1.3.4 in January 2012, Terry Hancock of Free Software Magazine found that the application was only suitable for simple or very limited video editing tasks. He praised its simplicity and ease of learning, even for users new to video editing, but criticized its lack of multi-track capabilities and described the process of adding background music or synchronizing new sounds as "laborious". He concluded: "I'd say it was basically up to editing home movies to get rid of the boring parts. I've also found it useful for mining old public-domain videos from the Internet Archive to extract useful snippets of video. This, plus its ease of use, make it a valuable niche application, but certainly not for any serious video project."

See also
Comparison of video editing software
List of video editing software
Kdenlive

References

External links
Kino official website former location
Kino official website archives on Archive.org
Kino - 2003 Tutorial on professional video editing, from Linux Magazine (PDF)
Kino's lead developer's website

Video editing software Free video software Video editing software that uses GTK
4988264
https://en.wikipedia.org/wiki/IMS%20Associates%2C%20Inc.
IMS Associates, Inc.
IMS Associates, Inc., or IMSAI, was a microcomputer company responsible for one of the earliest successes in personal computing, the IMSAI 8080. The company was founded in 1973 by William Millard and was based in San Leandro, California. Its first product launch was the IMSAI 8080 in 1975. One of the company's subsidiaries was ComputerLand. IMS stood for "Information Management Sciences". IMS Associates required all executives and key employees to take the EST Standard Training. Forbes considered Millard's requirements, which placed a heavy emphasis on self-actualization and encouraged vast discrepancies between executives and staff, to be a key contributor to the downfall of the company; Paul Freiberger and Michael Swaine concurred in Fire in the Valley: The Making of The Personal Computer, noting that Millard's EST-induced unwillingness to admit a task might be impossible was a key factor in IMSAI's demise.

History

Consultancy
In May 1972, William Millard began business individually as IMS Associates (IMS) in the area of computer consultancy and engineering, using his home as an office. The work done by IMS was similar to what Millard had done previously for the city and county of San Francisco. By 1973, Millard had founded IMS Associates, Inc. Millard soon found capital for his business and received several contracts, all for software. IMS provided advanced engineering and software management to mainframe users, including businesses and the United States government.

IMSAI 8080
In 1974, IMS was contacted by a client that wanted a "workstation system" able to handle the workload of any General Motors new-car dealership. IMS planned a system including a terminal, small computer, printer, and special software. Five of these workstations were to have common access to a hard disk, which would be controlled by a small computer. Eventually, product development was stopped. Millard and his chief engineer Joe Killian turned to the microprocessor. Intel had announced the 8080 chip, and compared to the 4004, to which IMS Associates had first been introduced, the 8080 looked like a "real computer". Full-scale development of the IMSAI 8080 was put into action, and by October 1975 an ad was placed in Popular Electronics, receiving positive reactions. IMS shipped the first IMSAI 8080 kits on 16 December 1975 and shortly afterwards turned to fully assembled units. Between 17,000 and 20,000 units were eventually produced, with an additional 2,500 produced under the Fischer-Freitas name thereafter.

Transition
In 1976, as IMS completed its transition from a consultancy firm into a manufacturing firm, the name of the company was changed to IMSAI Manufacturing Corporation.

ComputerLand
The release of the Z80 by Zilog in 1976 quickly put an end to the dominance of 8080 machines, as the new chip had an improved instruction set, could be clocked at faster speeds, and had on-chip DRAM refresh. IMSAI sales quickly plummeted, and so in 1977 Millard decided to take the company through another transition, this time from a computer manufacturing company to a computer retailer. He established a chain of franchised retail outlets, initially called Computer Shack (the name was changed to ComputerLand following legal threats from Radio Shack). ComputerLand retailed not only IMSAI 8080s, but also computers from companies including Apple, North Star, and Cromemco. The IMSAI 8080 sold poorly in comparison, and IMSAI developed the IMSAI VDP-80, an all-in-one computer that also performed poorly. Many franchise dealers refused to retail most IMSAI products, except those that retained popularity such as the IMSAI 8080. With most of IMSAI's resources stripped to fund ComputerLand's expansion, and with Millard's attention diverted, IMS Associates, Inc. went into a "tailspin" and filed for bankruptcy in October 1979. The trademark was eventually acquired by Thomas "Todd" Fischer and Nancy Freitas, former early employees who undertook continued support after the parent company folded. Doing business as the Fischer-Freitas Company since October 1978, they continued manufacturing and service support under the newly acquired and trademarked IMSAI badge (such as the IMSAI Series Two), and continue that support to this day. ComputerLand stores continued to prosper retailing IBM computers until IBM abandoned the 8-bit ISA bus in 1984; the franchises became independent following a series of bitter and costly legal battles with Millard.

Pop culture
The IMSAI 8080 appeared in a key role in the 1983 film WarGames.

References

External links
Official IMSAI website
Oral history interview with Seymour Rubenstein, Charles Babbage Institute. University of Minnesota.

American companies established in 1973 American companies disestablished in 1979 Companies based in California Computer companies established in 1973 Computer companies disestablished in 1979 Defunct computer companies of the United States Defunct computer hardware companies
43582065
https://en.wikipedia.org/wiki/SmartThings
SmartThings
SmartThings Inc. is an American home automation company headquartered in Mountain View, California, with a software development center in Minneapolis, Minnesota. Founded in 2012, it focuses on the development of its eponymous automation software and an associated array of client applications and cloud platforms for smart homes and the consumer Internet of Things. Since August 2014, SmartThings has been a subsidiary of Samsung Electronics. SmartThings cites its platform as having 62 million active users, a number it claims grew 70% through 2019 and 2020.

History
SmartThings was reportedly conceived by co-founder and former CEO Alex Hawkinson in the winter of 2011. According to Hawkinson, his family's unoccupied mountain house in Colorado was extensively damaged by water pipes that first froze and subsequently burst, resulting in some $80,000 worth of damage; Hawkinson noted that he could have prevented the damage had he known what was happening inside the house. Through 2011 and 2012, Hawkinson and his SmartThings co-founders worked to build a prototype of their desired solution to such problems. That prototype formed the basis of a successful Kickstarter campaign, launched in September 2012, which secured US$1.2 million in backing and made it the second-largest smart-home-focused crowdfunding project to date. After raising $3 million in a December 2012 seed funding round, SmartThings commercially launched its products in August 2013 and raised a further $12.5 million in a Series A funding round in late 2013. In August 2014, Samsung Electronics announced that it had reached an agreement to acquire SmartThings. The financial terms of the deal were never publicly disclosed, but were estimated at as high as $200 million by some trade publications at the time.

Products and services
Initially SmartThings produced a suite of custom hardware and software services, including smart home hubs and sensors. In June 2020, SmartThings' engineering head Mark Benson announced that SmartThings would pivot away from manufacturing its own hardware and instead focus on software, with the company hoping to enlist other companies to manufacture and distribute SmartThings hardware. In October 2020, SmartThings announced that Aeotec would take over its European hardware line. In December 2020, Aeotec revealed that it would also manage the SmartThings hardware portfolio throughout Australia, Canada, the United Kingdom, and the United States. As of February 2021, SmartThings develops software and cloud services.

References

External links
Official Website

Smart home hubs American companies established in 2012 Companies based in Mountain View, California Samsung Electronics American subsidiaries of foreign companies Internet of things companies Kickstarter-funded products IOS software Android (operating system) software Windows Phone software WatchOS software 2014 mergers and acquisitions
23968321
https://en.wikipedia.org/wiki/TeXworks
TeXworks
TeXworks is a free and open-source application, available for Windows, Linux and macOS. It is a Qt-based graphical user interface to the TeX typesetting system and its LaTeX, ConTeXt, and XeTeX extensions. TeXworks is targeted at direct generation of PDF output. It has a built-in PDF viewer using the Poppler library; the viewer has auto-refresh capability and also features SyncTeX support, which allows the user to synchronize the PDF viewer position with the source, and vice versa, with a single click. The developer of TeXworks is Jonathan Kew (who also developed XeTeX). He deliberately modelled TeXworks on Richard Koch's award-winning TeXShop software for macOS, in order to lower the entry barrier to the TeX world for those using desktop operating systems other than macOS. Kew argued against complex user interfaces like those of TeXnicCenter or Kile, which he described as intimidating for new users. TeXworks requires a TeX installation: TeX Live, MiKTeX, or MacTeX. MiKTeX 2.8 (and 2.9) comes bundled with TeXworks, even in the base installation. One limitation of TeXworks is that it does not by itself support multi-stage typesetting, such as PNG or SVG output via an intermediate DVI file. This is a design choice, because such workflows are considered too advanced for the beginning user; the user has to write a shell or batch script to use such workflows with TeXworks (a minimal sketch of such a wrapper is given at the end of this entry).

Version history
(Mar 2021) TeXworks 0.6.6 released
(Mar 2020) TeXworks 0.6.5 released
(Mar 2020) TeXworks 0.6.4 released
(Mar 2019) TeXworks 0.6.3 released
(Apr 2017) TeXworks 0.6.2 released
(May 2016) TeXworks 0.6.1 released
(Apr 2016) TeXworks 0.6.0 released
(Apr 2015) TeXworks 0.4.6 released
(Apr 2013) TeXworks 0.4.5 released
(Apr 2012) TeXworks 0.4.4 released
(Jun 2011) TeXworks 0.4.3 released
(Jun 2011) TeXworks 0.4.2 released
(May 2011) TeXworks 0.4.1 released
(Mar 2011) TeXworks 0.4.0 released
(Oct 2009) TeXworks 0.2.3 released
(Oct 2009) TeXworks 0.2.2 released
(Oct 2009) TeXworks 0.2.1 released
(Sep 2009) TeXworks 0.2.0 released

See also
Comparison of TeX editors
XeTeX

References

External links
TeXworks – lowering the entry barrier to the TeX world
TeX Users Group (Discuss the TeXworks front end)
LaTeX-Community (Subforum for TeXworks) Search keyword "texworks" in all forums

Cross-platform free software Free TeX editors TeX editors TeX editors that use Qt
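As an illustration of the kind of helper mentioned above, the sketch below shows a tiny command-line wrapper that runs a two-stage DVI-to-PNG workflow by shelling out to standard TeX tools. It is a sketch only: it assumes a TeX installation whose latex and dvipng programs are on the PATH, it does minimal error handling, and in practice a plain shell or batch script is the more usual way to hook such a workflow into TeXworks' typesetting tools.

```cpp
// Minimal sketch of a multi-stage typesetting wrapper: LaTeX -> DVI -> PNG.
// Assumes `latex` and `dvipng` from a standard TeX installation are on PATH.
// A shell or batch script would normally be used instead; this only
// illustrates the two-stage workflow that TeXworks does not run by itself.
#include <cstdio>
#include <cstdlib>
#include <string>

static int run(const std::string& cmd) {
  std::printf(">> %s\n", cmd.c_str());
  return std::system(cmd.c_str());
}

int main(int argc, char* argv[]) {
  if (argc < 2) {
    std::fprintf(stderr, "usage: %s basename (for basename.tex)\n", argv[0]);
    return 1;
  }
  const std::string base = argv[1];

  // Stage 1: produce basename.dvi from basename.tex.
  if (run("latex -interaction=nonstopmode " + base + ".tex") != 0) return 1;

  // Stage 2: render the DVI to a PNG image.
  if (run("dvipng -o " + base + ".png " + base + ".dvi") != 0) return 1;

  std::printf("wrote %s.png\n", base.c_str());
  return 0;
}
```

Registered as an additional typesetting tool (or invoked from a one-line script), a wrapper along these lines lets a single click in TeXworks drive the whole DVI-based pipeline rather than only the direct-to-PDF engines.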