Open Transport (https://en.wikipedia.org/wiki/Open%20Transport)

Open Transport was the name given by Apple Inc. to its implementation of the Unix-originated System V STREAMS networking stack. Based on code licensed from Mentat's Portable Streams product, Open Transport was built to provide the classic Mac OS with a modern TCP/IP implementation, replacing MacTCP. Apple also added its own implementation of AppleTalk to the stack to support legacy networks.
History
STREAMS
Prior to the release of Open Transport, the classic Mac OS used a variety of stand-alone INITs to provide networking functionality. The only one that was widely used throughout the OS was the AppleTalk system. Among the other protocol stacks supported, MacTCP was becoming increasingly important as the Internet boom started to gain momentum. MacTCP emulated the Berkeley sockets system, widely used among Unix-like operating systems.
MacTCP and the earlier AppleTalk library were slow on PowerPC-based Macintoshes because they had been written for the previous generation of 680x0-based machines and therefore ran under emulation. MacTCP was also lacking in features, and a major upgrade was clearly needed if Apple was to keep its hand in the Internet market.
Through the late 1980s several major efforts to re-combine the many Unix derivatives into a single system were underway, and the most significant among these was the AT&T-led System V. System V included an entirely new networking stack, STREAMS, replacing the existing Berkeley sockets system. STREAMS had a number of advantages over sockets, including support for multiple networking stacks at the same time and the ability to plug modules into the middle of an existing stack for filtering and similar duties, all while offering a single application programming interface to user programs. At the time it appeared STREAMS would become the de facto standard.
This change in the market led Apple to move to support STREAMS as well. It also presented two practical advantages to the company: STREAMS' multiprotocol support would allow it to offer both TCP/IP and AppleTalk from a single interface, and a portable cross-platform version of STREAMS, one that included a high-quality TCP implementation, was available for purchase commercially. Using STREAMS also appeared to offer a way to "one up" Microsoft, whose own TCP/IP networking system, Winsock, was based on the apparently soon-to-be-obsolete sockets.
OT
Open Transport was introduced in May 1995 with the Power Mac 9500. It was included with System 7.5.2, a release for the new PCI-based Power Macs, and became available for older hardware later. MacTCP was not supported on PCI-based Macs, but older systems could switch between MacTCP and Open Transport using a Control Panel called Network Software Selector. Unlike MacTCP, Open Transport allowed users to save and switch between configuration sets.
Developer opinion on Open Transport was divided. Some felt it offered enormous speed improvements over MacTCP. Some developers also liked it because it was flexible in the way it allowed protocols to be "stacked" to apply filters and other such duties. However, the system was also large and complex. The flexibility of the Open Transport architecture, into which one could plug any desired protocol, was felt by some to be thoroughly overcomplicated. Additionally, most Unix code still used sockets, not STREAMS, and so MacTCP offered real advantages in terms of porting software to the Mac.
The vaunted flexibility of the Open Transport architecture was undermined and ultimately made obsolete by the rapid rise of TCP/IP networking during the mid-1990s. The same was true in the wider Unix market; System V was undermined by the rapid rise of free Unix-like systems, notably Linux. As these systems grew in popularity, the vast majority of programmers ignored the closed STREAMS in favour of the BSD-licensed sockets. Open Transport was abandoned during the move to OS X, which, being derived from BSD, had a networking stack based entirely on sockets.
Open Transport was deprecated starting with Mac OS X 10.4 and its SDKs. Support for it was removed entirely from OS X in version 10.9 (Mavericks).
Prestel (https://en.wikipedia.org/wiki/Prestel)

Prestel (abbreviated from "press telephone"), the brand name for UK Post Office Telecommunications' Viewdata technology, was an interactive videotex system developed during the late 1970s and commercially launched in 1979. It achieved a maximum of 90,000 subscribers in the UK and was eventually sold by BT in 1994.
The technology was a forerunner of today's online services. Instead of a computer, a television set connected to a dedicated terminal was used to receive information from a remote database via a telephone line. The service offered thousands of pages ranging from consumer information to financial data, but with limited graphics.
Initial development
Prestel was created based on the work of Samuel Fedida at the then Post Office Research Station in Martlesham, Suffolk. In 1978, under the management of David Wood, the software was developed by a team of programmers recruited from within the Post Office Data Processing Executive. As part of the privatisation of British Telecom, the team were moved into a "Prestel Division" of BT.
Database
The Prestel database is commonly described as a tree structure. The structure is shown pictorially as an inverted tree, with the data considered as leaves of the tree, accessed via branches which serve as a means of classifying the information. There is a good deal of jargon surrounding such structures, but to appreciate the concept it is only necessary to understand the node, the page and the frame. Nodes are the junction pages in the tree at which a number of choices can be made, leading to other nodes or to the information itself. Pages are the final levels in the tree and contain the actual data; these may be divided into frames, which are essentially screenfuls of information.
The public Prestel database consisted of a set of individual frames, which were arranged in 24 lines of 40 characters each, similar to the display used by the Ceefax and ORACLE teletext services provided by the BBC and ITV television companies. Of these, the top line was reserved for the name of the Information Provider (IP), the price and the page number, and the bottom line was reserved for system messages. Thus there remained 22 lines (of 40 characters each) in which the IP could present information to the end user.
A page should be considered as a logical unit of data within the database and the frame a physical unit. Unfortunately the terms node, page and frame are often used synonymously, which may lead to some confusion. To the user, of course, a node is the same as a page, and both are identified by a page number. To access a particular item of information, a simple progression down through the nodes to the page is all that is required, and then the frames of that page can be stepped through. This is facilitated by each node displaying up to ten choices, one of which may be taken by the user responding with the appropriate digit from 0 to 9. This simple method of access may be thought of as a question and answer session: the computer asks "Which of the ten choices do you want to make?" and the user replies with the appropriate digit. A choice of 9 at node 17 moves the user to page 179. The flexibility of this logical access method is increased firstly by allowing cross-referencing from one branch of the tree to another and secondly by providing a few simple commands available to the user for accessing certain pages directly.
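The digit-keyed descent described above can be modelled in a few lines of Python; the sketch below is an illustration reconstructed from this description, not actual Prestel software.

```python
# Illustrative model of Prestel's numeric tree navigation; this is a
# reconstruction from the description above, not actual Prestel software.

def choose(node_number: int, digit: int) -> int:
    """Return the page reached by keying a single digit at a node."""
    if not 0 <= digit <= 9:
        raise ValueError("choices are single digits 0-9")
    return node_number * 10 + digit

# Example from the text: a choice of 9 at node 17 moves the user to page 179.
assert choose(17, 9) == 179
```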
While this principle had considerable advantages in user simplicity and computer efficiency over the "keyword/thesaurus principle" used in many other systems, it had two very real disadvantages which came to be recognized: lack of flexibility and slowness.
Page numbers were from one to nine digits in length (i.e., in the range 0 to 999999999), created in a tree-like structure whereby lower-level pages could only exist if their higher-level parent pages had already been created. Thus creating page 7471 required pages 747, 74 and 7 to exist, but generally the three-digit node 747 would have been created in order to register the relevant main IP account. Single- and double-digit pages were special pages reserved by Prestel for general system information purposes, as were the 1nn-199nn sets of three-digit nodes; e.g., page 1a was the standard Prestel Main Index. Pages starting with a 9 were for system management functions, and were limited to three digits in length; e.g., page 92 showed details of the user's Prestel bill, and page 910 gave IPs access to online editing facilities.
Available characters consisted of upper and lower case alphanumeric characters as well as punctuation and simple arithmetic symbols, using a variant of the ISO 646 and CCITT standards. This layout was later formalised in the 1981 CEPT videotex standard as the CEPT3 profile. By embedding cursor-control characters within the page data, it was also possible to encode simple animations by re-writing parts of the screen already displayed. These were termed "dynamic frames" and could not be created online using conventional editing terminals, but required specialist software and uploading via the "bulk update" facility. No timing options were available beyond that imposed by the available transmission speed, usually 1,200 bps.
The IP logo on line 1 occupied at least 43 bytes, depending on the number of control characters, so the space available for the IP's data was at most 877 characters. Lines could either occupy the full forty character positions, or be terminated early with a CR/LF sequence. Each control character took up two bytes, despite displaying as a single space, so the more complex a page, the less actual information could be presented. It was therefore almost impossible to display a right-hand border on a page.
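The arithmetic behind the 877-character figure is not spelled out here; the sketch below reconstructs one plausible accounting, using the 960-character frame size quoted later in this article, and should be read as an assumption rather than a documented calculation.

```python
# Plausible reconstruction of the frame byte budget (an assumption, not a
# documented calculation), using the 960-character frame size quoted later.
FRAME_BYTES = 24 * 40   # 960 bytes: 24 lines of 40 characters
SYSTEM_LINE = 40        # bottom line reserved for system messages
IP_LOGO_MIN = 43        # line 1 (IP name, price, page number): at least 43 bytes

print(FRAME_BYTES - SYSTEM_LINE - IP_LOGO_MIN)   # 877 characters at most

# Each embedded control character (e.g. a colour change) costs 2 bytes but
# displays as a single space, so complex pages carried even less information.
```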
Routing from page to page through the database was arranged by the use of numbered items on index pages, which used the space in the frame routing table to map the index links directly to other page numbers. Thus an index on page 747 might have links requiring the user to key 1 for "UK Flights", key 2 for "Flights to Europe", or key 3 for "Hotels", which represented links to pages 74781, 74782 and 74791 respectively. The routing table for a particular frame only allowed specification of routes for digits 0-9, so double-digit routes would typically be sent via an "intermediate" frame, usually a spare frame elsewhere in the IP's database, to which the first digit of all similarly numbered items would link. Since pressing a number would interrupt the page currently being displayed, keying a double-digit route would not generally inconvenience the viewer with the display of the intermediate frame.
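A hypothetical sketch of such a routing table is shown below, again as an illustration rather than real Prestel code: each frame carries an explicit digit-to-page map, and the default "append a digit" rule applies only where no explicit route is defined.

```python
# Hypothetical model of a frame routing table: digits 0-9 map to arbitrary
# page numbers, overriding the default "append a digit" tree rule.
# The page numbers below are the examples quoted in the text.
ROUTES_FOR_747 = {
    1: 74781,   # "UK Flights"
    2: 74782,   # "Flights to Europe"
    3: 74791,   # "Hotels"
}

def route(routes: dict, current_page: int, digit: int) -> int:
    # Fall back to the default tree rule when no explicit route is defined.
    return routes.get(digit, current_page * 10 + digit)

assert route(ROUTES_FOR_747, 747, 3) == 74791   # explicit route
assert route(ROUTES_FOR_747, 747, 7) == 7477    # default rule
```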
Pages did not scroll, but could effectively be extended by the use of frames, which required alphanumeric suffixes to be appended to the numeric page numbers. Thus keying page *7471# actually resulted in the display of frame 7471a, which could be extended by use of follow-on frames 7471b, 7471c, etc., each of which was accessed by repeated use of the "#" key. Because the Prestel system was originally designed to be operated solely by means of a simple numeric keypad, it was not possible to access frames other than the top-level frame directly (i.e., in this case frames other than "7471a").
This follow-on frame facility was exploited extensively by the implementation of telesoftware on Prestel, whereby computer programs, notably for the BBC Micro, were available for download from Prestel. Generally speaking, the first two or three frames acted as header pages. For example, one such program was described on frames 70067a and 70067b, while frame 70067c gave the number of subsequent frames containing the program, and a crosscheck sum. Special software enabled this crosscheck sum to be compared with a value calculated from the result of downloading all the required frames, in order to verify a successful download. The actual telesoftware program started on frame 70067d. In the event that the check failed, it was necessary to download the entire program again from the beginning.
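The crosscheck algorithm itself is not documented here, so the sketch below assumes a simple additive checksum purely to illustrate the download-and-verify flow just described.

```python
# Sketch of the telesoftware download check described above. The real
# crosscheck algorithm is not documented here, so a simple additive checksum
# over the downloaded frames is assumed purely for illustration.

def download_program(frames: list, expected_crosscheck: int) -> str:
    """Join the follow-on frames and verify them against the header value."""
    program = "".join(frames)
    crosscheck = sum(program.encode("latin-1")) % 65536   # assumed scheme
    if crosscheck != expected_crosscheck:
        # As on Prestel: a failed check meant downloading everything again.
        raise ValueError("crosscheck failed - download all frames again")
    return program
```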
Each frame had a single-character type code associated with it. Most frames would be "i" (for "Information" types) but other types included response frames, mailbox pages, or gateway pages. Special frame types could also be specified which caused the follow-on frame to be automatically displayed, with or without the usual clear-screen code, as soon as the current frame had finished being transmitted. These were mainly used by "dynamic frames", as it provided a mechanism to continue animations which would not otherwise fit within the number of characters available in a standard frame.
Views
Information providers
There were two levels of information provider (IP). The first was a "Main IP", who rented pages from Post Office Telecommunications (PO)/British Telecom (BT) directly, and thus owned a three-digit node or "master page" in the database. This required an ongoing investment in the form of a minimum annual payment to become an information provider. The price of this basic package was £5,500 per annum in 1983, equivalent to around £29,000 in 2021. The charge included:
the facility to enter and amend information and to retrieve response frames
100 frames
capacity to store 10 completed response frames
editing training for staff (two-day seminar)
copy of IP editing manual
annual print-out of frames in use (if required)
bulk update facilities (if required)
Additional frames were available in batches of 500 for £500 per annum (over £2,600 in 2021), while the Closed User Group and Sub-IP facilities each cost £250 per annum (over £1,300 in 2021).
Those with smaller requirements or budget could rent pages from a main information provider rather than from the Post Office/British Telecom. The main IP had to pay an additional £250 to obtain the privilege, but could then rent out individual pages at a market rate. Unlike the main IP, sub-IPs had to pay a per-minute charge for editing online: in 1983 this was 8p per minute from Monday to Friday between 8 am and 6 pm, or 8p per 4 minutes at all other times (over 35p as at the end of 2014). Sub-IPs were restricted to pages under a node of four or more digits within a Main IP's area, and could only edit existing pages; sub-IP accounts were unable to create or delete pages or frames themselves.
Editing of pages was possible in one of two ways: either directly, by creating or amending pages using special editing keyboards whilst connected online to the main Update Computer, or by creating pages offline and uploading them in bulk to the main Update Computer. Bulk update required that pages be created offline by the use of editing terminals which could store pages, or by micro-computers such as those provided by Apple or Acorn. The pages were then transmitted online in bulk to the Update Centre (UDC) via a special dial-up port and protocol, or sent via magnetic tape to the UDC, where they were uploaded by Network Operations (NOC) staff.
Using the online editor facility, IPs were also able to view information about their pages which was hidden from the ordinary end user, such as the time and date of the last update, whether the frame was in a Closed User Group (CUG), the price to view the frame (if any) and the "frame count", or number of times the frame had been accessed. The frame count was not accumulated over all IRCs but related only to the computer being viewed at the time, so gaining national access counts was a manual exercise.
IPs and sub-IPs accessed the Edit computer using their normal ID and password, but had a separate password to access the editing facility. Bulk uploads only required the edit password and the IP's account number.
Users
Having logged on, each user was taken directly to their default main index page, known as the Welcome Page. For standard users, this would be page 1a, the general top-level index to the whole of Prestel. However, if a user signed up through, or later joined, products or services from major IPs, such as Club 403, Micronet 800, Prestel Travel, CitiService, etc., they would be given a different welcome page, so that after logon they were routed directly to page 800a, 403a, 747a, etc.
From the Welcome Page it was possible for any user to find pages of information in several different ways, or a combination of them. Printed directories were available which gave the full page numbers corresponding to the items in an alphabetical index. Pages were accessed directly by keying "*page number#". Individual pages often had links to related pages which could be accessed by use of one or two digit routing codes. This feature was widely used on sets of index pages which were commonly grouped by subject heading, provided both by the Post Office/BT and by individual IPs. Because of the numerical limitation, it was often necessary to go through a series of index pages in order to reach the desired page. Extension frames which might be required to view further information on a topic could only be accessed by use of the "#" key. From 1987 onwards, it became possible to access Prestel pages by use of special alphabetic keywords, provided that the IP who owned the page had set up a keyword mapped onto that page. Thus, by keying *M NEWS#, it was possible for a user to route directly to page *40111# to obtain news about micro-computers.
Special commands were also available. For example, to facilitate movement around the database it was possible to step back through a maximum of 3 frames or pages by use of the special key combination "*#". In the event of corruption of a page in transmission it was possible to refresh the page by means of the code *00, which had the advantage of avoiding any page charge being raised again. Alternatively, if the user wished to update a page to see the latest information, for example of flight arrival times, the *09 command would retrieve the latest updates, at the same time re-billing any page display charge. If all else failed, a user could simply return to the first page they saw after logging onto the system by use of the *0# combination, which brought up their default Main Index. Exceptionally, information could be hidden on a frame by an IP which could only be revealed by use of the "Reveal" key of the keypad (e.g., to show an answer to a quiz). The same "Reveal" key was also used to hide the data once more.
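The sketch below collects the keypad commands described in this section into a single illustrative dispatcher; it is a reconstruction from the text, not Prestel's actual command handling.

```python
# Illustrative dispatcher for the keypad commands described in this section;
# a reconstruction from the text, not Prestel's actual command handling.

def handle_keys(keys: str) -> str:
    if keys == "*#":
        return "step back (up to 3 frames or pages)"
    if keys == "*00":
        return "redisplay the current frame without re-billing"
    if keys == "*09":
        return "refresh the frame with the latest data (charge re-billed)"
    if keys == "*0#":
        return "return to the user's default Main Index"
    if keys == "#":
        return "display the next follow-on frame"
    if keys.startswith("*") and keys.endswith("#") and keys[1:-1].isdigit():
        return "go directly to page " + keys[1:-1]
    return "unrecognised input"

print(handle_keys("*7471#"))   # go directly to page 7471
```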
Infrastructure
With a view to supporting the planned major expansion programme, a new Prestel infrastructure was designed around two different types of data centre: the Update Centre (UDC), where IPs could create, modify and delete their pages of information, and the Information Retrieval Centres (IRCs), which held mirrored copies of the pages and served them to end users. In practice there was only ever one Update Centre, and it always housed just one update computer, named "Duke", but within six months of the public launch there were in addition two dedicated information retrieval computers.
In those early days of the public service, all the live Prestel computers were located in St Alphage House, a 1960s office block on Fore Street in the City of London. At the time the National Operations Centre (NOC) was located in the same building on the same floor. The computers and the NOC were later moved to Baynard House (on Queen Victoria Street, also in the City of London), which acted as a combined UDC and IRC. Both types of machine, together with other development hardware, remained in service there until 1994, when the Prestel service was sold by BT to a private company.
Each IRC normally housed two information retrieval computers, although in some IRCs in London just a single machine was present. IRCs were generally located within major telephone exchanges, rather than in BT Data Processing Centres, in order to give room for the extensive communications requirements. Exchange buildings were ideally suited to housing the large numbers of rack mounted 1200/75 baud modems and associated cabling as well as the racks of 16-port Multi-Channel Asynchronous Communications Control Units (MCACCUs) or multiplexors from GEC which gave the modems logical access into the computers.
In the new infrastructure, IRCs were connected to the UDC in a star network configuration, originally via permanent leased-line (not packet switched) connections, based on the X.25 protocol, operating at 2.4 kilobits per second (kbit/s). By mid 1981, these private circuit links had been replaced with dedicated 4-wire X.25 circuits over the new public Packet Switch Stream (PSS) network operating at 4.8 kbit/s.
By June 1980, there were four singleton retrieval computers in London, plus six other machines installed in pairs at IRC sites in Birmingham, Edinburgh and Manchester. Fully equipped IRC machines had a design capacity of 200 user ports each but these first ten machines were initially only capable of supporting approximately 1,000 users between them, expandable later to 2,000 users.
By September 1980, there were five IRC machines in London plus pairs of machines at Birmingham, Nottingham, Edinburgh, Glasgow, Manchester, Liverpool and Belfast, offering a total of 914 user ports. Further IRCs were planned at Luton, Reading, Sevenoaks, Brighton, Leeds, Newcastle, Cardiff, Bristol, Bournemouth, Chelmsford and Norwich by the end of 1980. In some of these locations, where there was insufficient Prestel traffic to warrant siting an IRC computer, the plan was to site multiplex equipment in a suitable exchange building, from where connections were made over X.25 to the nearest proper IRC. As at the end of 1980, there was a total of 1,500 live computer ports available, and by July 1981 the number of IRC computers had been expanded to 18, increasing the coverage of the telephone subscriber population from 30% to 62%.
In 1982, using the multiplexor technique described above, a virtual IRC was created in Boston, Massachusetts giving access to a machine in the UK known as Hogarth in order to provide Prestel services to subscribers from across the United States via the Telenet packet switching network.
The Prestel Mailbox service was originally launched on the Enterprise computer to support messaging solely between users on that machine, and by 1984 the facility had been rolled out nationwide. This required a further type of Prestel computer, dedicated to the exchange of messages. The only example of this type, which became known as Pandora, was co-located with the UDC in Baynard House, London.
Originally Prestel IRC computers were dialled directly by means of an ordinary telephone number (e.g., the Enterprise computer in Croydon was accessed by dialling 01 686 0311). By 1984, the special short dialling codes 618 and 918 were in use in order to give access to the nearest IRC at local telephone call rates, at least across most parts of the UK.
In 1987, the entire local access network was overhauled and shared with other Dialcom Group companies; users who connected without automatically logging into Prestel would be greeted with a menu allowing access to Prestel, Telecom Gold, etc.
Hardware and software
Prestel computers were based on the GEC 4000 series of minicomputers, with small differences in configuration according to the function of the machine. IRC machines were originally GEC 4082s equipped with 384 Kbytes of core store, six 70 Mbyte hard disc drives, and 100 ports for the initial 1,500 users. As noted above, by June 1980 ten such machines were in service, between them supporting approximately 1,000 user ports, expandable to 2,000. A GEC 4082 with 512 Mbyte capacity interconnected with the 10, and later 20, retrieval computers to handle the data files. The initial database consisted of approximately 164,000 information pages (June 1980), with a planned capacity of 260,000 pages. A page consisted of a maximum of 960 data characters (each rendered as a 5x7 dot matrix, i.e. 35 bits, suggesting approximately 35,000 bits per page).
This arrangement effectively limited the size of the public service database to around 250,000 frames, so in order to cope with planned growth, by 1981 the IRC machines had been expanded by the addition of two further data drives.
Each IRC computer was configured with 208 ports and so was able to support 200 simultaneous Prestel users, the remaining 8 ports being used for test and control functions. Access for the ordinary user was provided via the duplex asynchronous interface of banks of GEC 16-port multi-channel asynchronous control units (MCACCUs), known more simply as multiplexers. These devices in turn were accessed via banks of standard Post Office Modems No. 20 operating at 1200/75 bit/s, which were connected directly to the public switched telephone network (PSTN).
As assessed in 1979, the system had important strengths and weaknesses: "The strengths of viewdata include its visual attractiveness, its ease of use, low cost and its wide range of applications. Its weaknesses include its small information window, unsophisticated search methods, its limited storage capacity and its lack of computer power for users. How rapidly viewdata will become established, and the exact role it will fulfil, is as yet a matter of speculation."
By 1981, this configuration had changed, with memory doubled to 768 Kbytes but with the data discs reduced to six (corresponding to the number at the IRC machines) and just a single transaction disc.
In addition to the MCACCU units required to support 1200/75 dial up access, the Update Centre machines were also connected to special modems provided to support online bulk updating by IPs. Banks of 300/300 bit/s full duplex asynchronous V21 modems supported computer to computer links for the more sophisticated IP while 1200 bit/s half duplex V23 modems supported so called intelligent editing terminals (i.e. those capable of storing a number of frames offline before uploading to the UDC). In addition twin 9-track NRZI tape decks of 800 bytes/inch capacity were provided in order to support bulk offline updates.
Although technically categorised as minicomputers, these GEC machines were physically very large by today's standards, each occupying several standard communications cabinets. The CDC 9762 hard disc drives were housed separately in large stand-alone units, each one about the size of a domestic washing machine. The 70 Mbyte hard discs themselves were in fact removable units, each consisting of a stack of five 14-inch platters, that could be lifted in and out of the drive unit.
The GEC machines cost in excess of £200,000 each at GEC standard prices, in addition to which there were the costs of all the associated communications equipment. Putting together all of the computer and communications equipment required for a single IRC was a major undertaking and took some 15 months from order placement to commissioning.
GEC 4000 series computers were capable of running a number of operating systems but in practice Prestel machines exclusively ran OS4000 which itself was developed by GEC. This in turn supported BABBAGE, the so-called high level assembler language in which all the Prestel software for both IRC and UDC machines (and later the messaging machine) was written.
In 1987, a Prestel Admin computer was introduced which supported the user registration process: the capture of user details from the paper Prestel Application Form (PAF), the transfer of data to the relevant Prestel computer, and the printing of the welcome letter for users. This machine, also based upon GEC 4082 equipment, was the first to be equipped with 1 Mbyte of memory which was required to support the Rapport relational database. This product from Logica was an early example of deployment of a system written in a 4GL database language which supported all features of the Prestel Admin application.
Monitoring equipment
In order to proactively manage the potentially large numbers of user connections to Prestel computers, special monitoring equipment was developed by Post Office research and development engineers. This was known by the acronym VAMPIRE, short for Viewdata Access Monitor and Priority Incident Reporting Equipment – a title which more or less describes its function. The device used private circuits to connect to the modem ports on each computer or remote IRC multiplexor node, with a display on a television screen at the Regional Prestel Centre responsible for the administration of the IRC. The VAMPIRE screen consisted of a matrix of small squares, so arranged that all ports for a single IRC computer could be displayed on a single television, with each square representing the state of a port simply by means of its colour. Free ports were shown as green, occupied ports as yellow, incoming calls as pale blue and faulty ports as red, such that the state of a whole Prestel machine or concentrator node could be determined at a glance.
It was apparently planned to extend this facility via a system designated the Data Recording and Concentrator Unit for Line Applications (DRACULA), which would generate a summary view so that the state of multiple computers could be displayed on a single screen. This device was never deployed, since the number of VAMPIRE sets needed to monitor every Prestel computer and concentrator never got beyond a couple of dozen, spread over many Regional Prestel Centre offices.
Messaging
In 1983, the Prestel messaging service known as "Prestel Mailbox" was launched, initially hosted on the computer known as "Enterprise", and later available from all IRC computers by means of a centralised messaging computer known as "Pandora". This facility extended the original day-one concept of "Response Frames", whereby an end user could send a message back to the IP who owned the page via special pages, for example to order goods or services. The user's name, address, telephone number, and date could be added automatically to the message when the IP set up the response frame, by means of codes which triggered extraction of key data from the user's account held on the IRC computer. Initially response frames had to be gathered by an IP from each IRC individually; later a facility was implemented to collect messages from all IRCs at the UDC, from where they could be gathered centrally, and with the introduction of Mailbox they could be retrieved from any IRC.
In order to use the new Prestel Mailbox service, the user went to page *7#, which gave access to a set of frames where new "free format" messages could be created, pre-formatted messages filled out, stored messages retrieved, and other related facilities accessed. Many standard mailbox frames were available offering various designs for greetings cards or seasonal messages such as Valentine cards. In order to compose a new message, a blank message frame, which could also be accessed directly via *77#, was displayed with the sender's mailbox number pre-filled, leaving space for the recipient's mailbox number and the text of the message itself. Messages could only occupy a single frame, so the main message text field could typically take up to a maximum of 100 words, depending upon how many other fields were required and what graphics were used on the frame. Mailbox frames were completed by entering the relevant details and pressing the # key on each field; completing the last (or only) field led to the request to "KEY 1 TO SEND KEY 2 NOT TO SEND". Assuming all went well, this led to a final screen confirming successful dispatch; if there were problems (such as a mistake in entering the Mailbox number) then an appropriate error frame was displayed. If it was desired to send the message to more than one recipient, it was necessary to re-key the message text into a fresh message frame, although some popular micro-computers of the time provided the facility to store the message so that it could be copied and pasted into a new message.
Prestel Mailbox numbers were generally based upon the last 9 digits of the user's telephone number, without spaces or punctuation. For example, the Prestel Mailbox number for Prestel Headquarters, which had the telephone number 01-822 2211, would be simply 018222211, while that for a user in Manchester with telephone number 061-228 7878 would be 612287878. In keeping with established telephone number practice, but unlike the convention with today's internet mailboxes, Prestel Mailbox numbers were published by default, and were available via the Prestel computers in a dedicated directory accessible from page *486#. On request, ex-directory mailbox numbers were available, usually employing a dummy telephone number format such as the series 01999nnnn, and later the series 01111nnnn.
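The numbering rule just described is simple enough to express directly; the sketch below reproduces the two examples quoted in the text and is an illustration only.

```python
# Sketch of the mailbox-number rule described above: the last nine digits of
# the user's telephone number, with spaces and punctuation removed.

def mailbox_number(telephone: str) -> str:
    digits = "".join(ch for ch in telephone if ch.isdigit())
    return digits[-9:]

# The two examples quoted in the text:
assert mailbox_number("01-822 2211") == "018222211"    # Prestel Headquarters
assert mailbox_number("061-228 7878") == "612287878"   # Manchester subscriber
```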
Every time a user logged into Prestel, a Mailbox banner on their Welcome page, usually flashing, would alert them if they had any new messages waiting. Similarly, upon the user's request to sign off the system via *90#, a warning would appear if any new messages had arrived, with an option to read them, before the user was allowed to disconnect. Messages were retrieved from page *930#, where they were presented to the recipient in chronological order. After reading a new message, the user had to choose between deleting the message, or saving it, before the next message was presented. Initially only three messages could be saved at any one time, and these stored messages were accessible via page *931#.
Use of the basic Mailbox service was free, that is to say there were no registration charges for owning a mailbox, or for sending new messages or for storing received messages, although even by 1984 only five messages could be saved once they had been read.
By 1984 the basic Mailbox service had been extended to give automatic access to the Telex service, which at the time was still relatively common in business and was the standard way to reach remoter parts of the globe. Using a special Telex Link page, the message was composed in the usual way, then the destination country was chosen and the Telex number entered before sending, just like a standard message. Telex Link added the necessary Telex codes and tried to send the message as many times as necessary before positively confirming receipt by means of a special Mailbox message. Telexes could be sent to Prestel Mailbox users from a standard Telex terminal by using the Telex Link number and inserting "MBX" and the relevant mailbox number as the first line of the telex message itself. The incoming telex message appeared to the Prestel recipient just as an ordinary Mailbox message, but with the telex number inserted at the top of the frame.
Because of the charges inherent in use of the Telex service, messages sent via Prestel Telex Link were chargeable, in 1984 at the rate of 50p for destinations in the UK, £1.00 for Europe, £2.00 for North America, £3.00 for elsewhere and even £5.00 for sending to ships via INMARSAT. There was no charge to Prestel users for receiving Telex messages.
In the same year, when there were some 70,000 users registered, up to 100,000 mailbox messages and telexes were sent each week via Prestel Mailbox.
From July 1989, a new mailbox system was introduced which allowed for single messages of up to five frames in length, storing of messages prior to sending, sending to multiple recipients (either individually or via a mailing list), forwarding of messages, and requesting an acknowledgment of receipt. Whilst sending a simple mailbox message using none of the new facilities remained free, all of the new options were charged at 1p per use per recipient. For the first time, the sending of spam was accounted for and permitted, albeit at 20p per recipient. In addition, the stored message facility was replaced by a summary page, which listed all the messages, both new and old, that were waiting. The user could then pick which message to view, rather than being required to read through them all in chronological order. As only the first 20 could be accessed, this effectively allowed for up to 19 messages to be stored while allowing the continued reception of new mail.
A security breach of the Prestel mailbox of the Duke of Edinburgh, and the subsequent trial, led to the Computer Misuse Act 1990.
Public take-up
While teletext services were provided free of charge, and were encoded as part of the regular television transmissions, Prestel data were transmitted via telephone lines to a set-top box terminal, computer, or dedicated terminal. While this enabled interactive services and a crude form of e-mail to be provided, gaining access to Prestel also involved purchasing a suitable terminal, and arranging with a Post Office engineer for the installation of a connection point known as a Jack 96A. (From the early 1980s, the "New Plan" sockets were fitted as standard on new lines and on any change of rented handset, and terminals or modems then required no special connections.)
Thereafter it was necessary to pay both a monthly subscription and the cost of local telephone calls. On top of this, some services (notably parts of Micronet 800) sold content on a paid-for basis. Each Prestel screen carried a price in pence in the top right-hand corner. Single screens could cost up to 99p.
The original idea was to persuade consumers to buy a modified television set with an inbuilt modem and a keypad remote control in order to access the service, but no more than a handful of models were ever marketed and they were prohibitively expensive. Eventually set-top boxes were made available, and some organisations supplied these as part of their subscription; for example, branded Tandata terminals were provided by the Nottingham Building Society for its customers, who could make financial transactions via Prestel.
Because the communication over telephone lines did not use any kind of error correction protocol, it was prone to interference from line noise which would result in garbled text. This was particularly problematic with early home modems which used acoustic couplers, because most home phones were hard-wired to the wall at that time.
Regardless of the hardware choice, Prestel was an expensive proposition, and as a result it only ever gained a limited market penetration among private consumers, achieving a total of just 90,000 subscribers, with the largest user groups being Micronet 800 with 20,000 users and Prestel Travel with 6,500 subscribers. The Micro Arts Group computer graphics software and magazine had 400 pages and interactive art software to download, prefiguring the mixed-media websites of the Internet; it is the only organisation from this period still operating.
The costs for businesses interested in publishing on Prestel were also high. This ensured that only the largest or most forward-thinking companies were interested in the service.
During the daytime, when business usage was high, there was a per-minute charge to use Prestel, but in the evenings and weekends, traditionally the quiet times, it was free apart from the telephone call. With Micronet being so popular, suddenly the quiet times became fairly busy.
The BT Prestel software development team developed a number of national variants of Prestel, all of which ran on GEC computers. They were sold to the PTTs of other countries, including Australia, Austria, Belgium, Italy, Hungary, Hong Kong, Germany, the Netherlands, New Zealand, Singapore and Yugoslavia. Italy's was the largest system, with 180,000 subscribers. The Singapore system had a notable technology difference in that pages were not returned over the modem connection, but were returned using teletext methods over one of four television channels reserved specially for the purpose, which had all scan lines encoded in teletext format. This higher bandwidth enabled use of a feature called Picture Prestel, which carried significantly higher resolution pictures than were available on other Prestel systems. It was also demonstrated at the 1982 World's Fair in Knoxville, Tennessee.
The original Prestel system, designed for cost effectiveness and simplicity, employed a rudimentary graphic capability known as serial mosaics. Through juxtaposition of the special mosaic characters, crude but recognizable graphic representations could be made on the screen. This graphic scheme had its limitations: to change colours between two mosaic graphic characters, or between any two characters in general, a colour change command was required, and this command signal physically occupied a blank space on the screen. The French sought to overcome this limitation when they joined the videotex world in the mid-1970s. They called their system Antiope. While based on the same mosaic graphics that were employed by the British, Antiope added a new feature, parallel attributes, or the ability to change the colour from one cell to another without the need for a blank space.
At approximately the same time, the Canadians adapted standard computer graphic commands into a set of functions called alphageometrics. These alphageometric functions did away with the block mosaic graphics used by the British and French and replaced them with drawing instructions, such as: DRAW LINE, DRAW ARC, DRAW POLYGON, etc. Through use of these geometric commands much higher resolution could be achieved than with the mosaic commands. This alphageometric scheme was integrated into the Canadian videotex system which the Canadians referred to as "Telidon".
Successes
In contrast to the demise of the British system, the French equivalent of Prestel, Teletel/Minitel, received substantial public backing when millions of Minitel terminals were handed out free to telephone subscribers (causing Alcatel huge financial problems). As a consequence the Teletel network became very popular in France, and remained well used, with access later also possible over the Internet. After a short postponement, Minitel closed finally on 30 June 2012.
In 1979 the New Opportunity Press launched Careerdata, an interactive graduate recruitment service devised and designed by Anthony Felix, the New Opportunity Press MD, and supported by GEC's Hirst Research Centre in Wembley, London, which provided 12 terminals that were installed in the largest UK university careers advisory services. This was the first commercial application on the new medium and was featured in the Prestel Road Show which toured the UK and some European centres. A closed-access videotex system based on the Prestel model was developed by the travel industry, and continues to be almost universally used to this day by travel agents throughout the country: see Viewdata. The Prestel technology was also sold abroad to several countries, and in 1984 Prestel won a UK Queen's Award to Industry both for its innovative technology and its use of British products (it largely ran on equipment provided by GEC Computers).
In 1979 Michael Aldrich developed an online shopping system, a type of e-commerce, using a modified domestic colour television equipped with the Prestel chipset and connected to a real-time transaction-processing computer via a domestic dial-up telephone line. During the 1980s he sold these online shopping systems to large corporations, mainly in the UK. All the terminals on these systems could also access the Prestel systems. Aldrich installed a travel industry system at Thomson Holidays in 1981.
Other implementations
The Prestel system was customised and resold by GEC Computers to several other countries, including: Austria, Australia, Germany, Hong Kong, Hungary, Italy, Malaysia, Netherlands, New Zealand, Singapore, and the former Yugoslavia.
Telecom Australia re-branded their system Viatel, with the centre of operations in Windsor, Melbourne, Australia. During the Black Monday stock market crash its stock trading facility was heavily used. The system in Italy, run by SIP, was heavily used during the 1990 FIFA World Cup for reporting match progress and scores. The Singapore system provided a much higher receive bandwidth than was available over dial-up modems at the time by broadcasting the return frames using the Teletext technique of embedding them in broadcast television signals. Four VHF TV channels were dedicated to this, with all the scan lines used for Teletext encoding, which enabled the system to provide a feature called Picture Prestel to convey higher resolution images. The Yugoslav system was based in Zagreb, with additional IRCs located in Rijeka, Ljubljana, and Split.
The American Viewtron videotex service was modelled after Prestel.
Homelink
In 1983 the UK's first online banking service opened with Homelink, a collaboration between the Nottingham Building Society and the Bank of Scotland.
See also
Compunet
World War II Colossus computer, also built by the Post Office Research Laboratories.
Minitel, a similar system developed in France.
Bildschirmtext, a similar system developed in Germany.
Singapore Teleview
Notes
References
Fedida, S. and Malik, R. (1979). The Viewdata Revolution. London: Associated Business Press.
External links
Prestel Magazine. October 1983
Review of Prestel from 1983
Text and images from a booklet given out at A Fanfare for Prestel event at Wembley in March 1980.
A Short History of Prestel
Micro Arts Group 1984-2022
Celebrating the Viewdata Revolution Including several Prestel Brochures
Chromium (web browser) (https://en.wikipedia.org/wiki/Chromium%20%28web%20browser%29)

Chromium is a free and open-source web browser project, principally developed and maintained by Google. This codebase provides the vast majority of code for the Google Chrome browser, which is proprietary software and has some additional features.
The Chromium codebase is widely used. Microsoft Edge, Samsung Internet, Opera, and many other browsers are based on the code. Moreover, significant portions of the code are used by several app frameworks.
Google itself does not provide an official stable version of the Chromium browser, although it provides some official API keys for some included functionality, such as speech to text, text to speech, translation, etc. All versions released with the Chromium name and logo are built by either The Chromium Projects, or other parties.
Licensing
Chromium is an entirely free and open-source software project. The Google-authored portion is shared under the 3-clause BSD license. Other parts are subject to a variety of licenses, including MIT, LGPL, Ms-PL, and an MPL/GPL/LGPL tri-license.
This licensing permits any party to build the codebase and share the resulting browser executable with the Chromium name and logo. Many Linux distributions do this, as do FreeBSD and OpenBSD.
Differences from Google Chrome
Chromium provides the vast majority of the source code for Google Chrome; Google chose the name "Chromium" because chromium metal is used in chrome plating.
Features
Chromium lacks the following Chrome features:
Automatic browser updates
API keys for some Google services, including browser sync
The Widevine DRM module
Licensed codecs for the popular H.264 video and AAC audio formats
Tracking mechanisms for usage and crash reports
Branding and licensing
While Chrome has the same user interface functionality as Chromium, it changes the color scheme to the Google-branded one. Unlike Chromium, Chrome is not open-source, so its binaries are licensed as freeware under the Google Chrome Terms of Service.
Development
The Chromium browser codebase contains about 35 million source lines of code.
Contributors
Chromium has been a Google project since its inception, and Google employees have done the bulk of the development work.
Google refers to this project and the offshoot Chromium OS as "The Chromium Projects", and its employees use @chromium.org email addresses for this development work. However, in terms of governance, "Chromium Projects" are not independent entities; Google retains firm control of them.
The Chromium browser codebase is widely used, so others have made important contributions, most notably Microsoft, Igalia, Yandex, Intel, Samsung, LG, Opera, and Brave. Some employees of these companies also have @chromium.org email addresses.
Programming languages
C++ is the primary language, comprising about half of the codebase. This includes the Blink and V8 engines, the implementation of HTTP and other protocols, the internal caching system, and other essential browser components.
Some of the user interface is implemented in HTML, CSS, and JavaScript. An extensive collection of web platform tests is also written in these languages.
About 10% of the codebase is written in C. This is mostly from third-party libraries that provide essential functionality, such as SQLite and numerous codecs.
Support for mobile operating systems requires special languages: Java for Android, and for iOS both Swift and Objective-C. (A copy of Apple's WebKit engine is also in the codebase, since it is required for iOS browsers.)
Logistics
The bug tracking system is a publicly accessible website. Participants are identified by their email addresses.
The Chromium continuous integration system automatically builds and tests the codebase several times a day.
Builds are identified by a four-part version number of the form major.minor.build.patch. This versioning scheme and the branch points, which occur every six to seven weeks, are inherited from Google Chrome and its development cycle.
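As a small illustration of this scheme, the sketch below splits a version string into its four components; the string used is an arbitrary example, not a reference to any specific release.

```python
# Small illustration of the major.minor.build.patch scheme; the version
# string below is an arbitrary example, not a specific release.
from typing import NamedTuple

class Version(NamedTuple):
    major: int
    minor: int
    build: int
    patch: int

def parse_version(text: str) -> Version:
    major, minor, build, patch = (int(part) for part in text.split("."))
    return Version(major, minor, build, patch)

v = parse_version("100.0.1234.56")
print(v.major, v.build)   # 100 1234
```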
History
2008 to 2010
Google Chrome debuted in September 2008, and along with its release, the Chromium source code was also made available, allowing builds to be constructed from it.
Upon release, Chrome was criticized for storing a user's passwords without the protection of a master password. Google has insisted that a master password provides no real security against knowledgeable hackers, but users argued that it would protect against co-workers or family members borrowing a computer and being able to view stored passwords as plaintext. In December 2009, Chromium developer P. Kasting stated: "A master password was issue 1397. That issue is closed. We will not implement a master password. Not now, not ever. Arguing for it won't make it happen. 'A bunch of people would like it' won't make it happen. Our design decisions are not democratic. You cannot always have what you want."
Version 3 was the first alpha available for Linux. Chromium soon incorporated native theming for Linux, using the GTK+ toolkit to allow it to fit into the GNOME desktop environment. Version 3 also introduced JavaScript engine optimizations and user-selectable themes.
Version 6 introduced features for user interface minimalism, as one of Google's goals was to make the browser "feel lightweight (cognitively and physically) and fast". The changes were a unified tools menu, no home button by default (although user configurable), a combined reload/stop button, and the bookmark bar deactivated by default. It also introduced an integrated PDF reader, WebM and VP8 support for use with HTML5 video, and a smarter URL bar.
Version 7 boosted HTML5 performance to twice that of prior versions via hardware acceleration.
Version 8 focused on improved integration into Chrome OS and improved cloud features. These include background web applications, host remoting (allowing users centrally to control features and settings on other computers) and cloud printing.
Version 9 introduced a URL bar feature for exposing phishing attacks, plus sandboxing for the Adobe Flash plug-in. Other additions were the WebGL library and access for the new Chrome Web Store.
2011
In February, Google announced that it was considering large-scale user interface (UI) changes, including at least partial elimination of the URL bar, which had been a mainstay of browsers since the early years of the Web. The proposed UI was to be a consolidation of the row of tabs and the row of navigation buttons, the menu, and URL bar into a single row. The justification was freeing up more screen space for web page content. Google acknowledged that this would result in URLs not always being visible to the user, that navigation controls and menus may lose their context, and that the resulting single line could be quite crowded. However, by August, Google decided that these changes were too risky and shelved the idea.
In March, Google announced other directions for the project. Development priorities focused on reducing the size of the executable, integrating web applications and plug-ins, cloud computing, and touch interface support. Thus a multi-profile button was introduced to the UI, allowing users to log into multiple Google and other accounts in the same browser instance. Other additions were malware detection and support for hardware-accelerated CSS transforms.
By May, the results of Google's attempts to reduce the file size of Chromium were already being noted. Much of the early work in this area concentrated on shrinking the size of WebKit, the image resizer, and the Android build system. Subsequent work introduced a more compact mobile version that reduced the vertical space of the UI.
Other changes in 2011 were GPU acceleration on all pages, adding support for the new Web Audio API, and the Google Native Client (NaCl) which permits native code supplied by third parties as platform-neutral binaries to be securely executed within the browser itself. Google's Skia graphics library was also made available for all Chromium versions.
Since 2012
The sync service added for Google Chrome in 2012 could also be used by Chromium builds. The same year, a new API for high-quality video and audio communication was added, enabling web applications to access the user's webcam and microphone after asking permission to do so. Then GPU accelerated video decoding for Windows and support for the QUIC protocol were added.
In 2013, Chromium's modified WebKit rendering engine was officially forked as the Blink engine.
Other changes in 2013 were the ability to reset user profiles and new browser extension APIs. Tab indicators for audio and webcam usage were also added, as was automatic blocking of files detected as malware.
Version 69 introduced a new browser theme, as part of the 10th anniversary of Google Chrome. The same year, new measures were added to curtail abusive advertising.
Starting in March 2021, the Google Chrome sync service can no longer be used by Chromium builds.
Browsers based on Chromium
In addition to Google Chrome, many other notable web browsers have been based on the Chromium code.
Active
Amazon Silk
Avast Secure Browser developed by Avast
Beaker, a peer-to-peer web browser
Blisk is a browser available for Windows 7 and later, OS X 10.9 and later that aims to provide an array of useful tools for Web development.
Brave is an open-source web browser that aims to block website trackers and remove intrusive internet advertisements.
CodeWeavers CrossOver Chromium is an unofficial bundle of a Wine derivative and Chromium Developer Build 21 for Linux and macOS, first released on 15 September 2008 by CodeWeavers as part of their CrossOver project.
Comodo Dragon is a rebranded version of Chromium for 32-bit Windows 8.1, 8, Windows 7 and Vista produced by the Comodo Group. According to the developer, it provides improved security and privacy features.
Cốc Cốc is a freeware web browser focused on the Vietnamese market, developed by the Vietnamese company Cốc Cốc and based on Chromium open-source code for Windows. According to data published by StatCounter in July 2013, Cốc Cốc had passed Opera to become one of the top 5 most popular browsers in Vietnam within two months of its official release.
Dissenter is a fork of Brave browser that adds a comment section to any URL.
Epic Browser is a privacy-centric web browser developed by Hidden Reflex of India and based on Chromium source code.
Falkon, an open-source Qt-based GUI using the Chromium-based QtWebEngine.
qutebrowser, a Qt-based GUI with Vim-like keybindings, using the Chromium-based QtWebEngine.
Microsoft Edge is Chromium-based as of 15 January 2020.
Naver Whale is a South Korean freeware web browser developed by Naver Corporation, which is also available in English. It became available on Android on 13 April 2018.
Opera began to base its web browser on Chromium with version 15.
Qihoo 360 Secure Browser is a Chromium-based Chinese web browser developed by Qihoo.
SalamWeb is a web browser based on Chromium for Muslims, which only allows Halal websites/information.
Samsung Internet shipped its first Chromium-based browser in a Galaxy S4 model released in 2013.
Sleipnir is a Chromium derivative browser for Windows and macOS. One of its main features is linking to Web apps (Facebook, Twitter, Dropbox, etc.) and smartphone apps (Google Map, etc.). It also boasts what it calls "beautiful text," and has unique graphical tabs, among other features.
Slimjet: A Chromium-based web browser released by FlashPeak that features built-in webpage translation, PDF viewing capability and a PPAPI Flash plugin, features usually missing from Chromium-based browsers.
SRWare Iron is a freeware release of Chromium for Windows, macOS and Linux, offering both installable and portable versions. Iron disables certain configurable Chromium features that could share information with third parties and additional tracking features that Google adds to its Chrome browser.
Torch is a browser based on Chromium for Windows. It specialises in media downloading and has built-in media features, including a torrent engine, video grabber and sharing button.
ungoogled-chromium is a browser based on Chromium. It was initially developed for Linux; versions for Windows and macOS were added later. It removes Google services built into Chromium.
Vivaldi is a browser for Windows, macOS and Linux developed by Vivaldi Technologies. Chromium-based Vivaldi aims to revive the rich features of the Presto-era Opera with its own proprietary modifications.
Yandex Browser is a browser created by the Russian software company Yandex for macOS, Windows, Linux, Android and iOS. The browser integrates Yandex services, which include a search engine, a machine translation service and cloud storage. On Android, it provides the ability to install Chrome extensions in a mobile browser.
Discontinued
Flock – a browser that specialized in providing social networking and had Web 2.0 facilities built into its user interface. It was based on Chromium starting with version 3.0. Flock was discontinued in April 2011.
Redcore – a browser developed by the Chinese company Redcore Times (Beijing) Technology Ltd. and marketed as a domestic product developed in-house, but revealed to be based on Chromium.
Rockmelt – a Chromium-based browser for Windows, macOS, Android and iOS under a commercial proprietary licence. It integrated features from Facebook and Twitter, but was discontinued in April 2013 and fully retired at 10 am PT on 31 July 2013. On 2 August 2013, Rockmelt was acquired by Yahoo!. Rockmelt's extensions and its website were shut down after 31 August 2013; Yahoo! planned to integrate Rockmelt's technology into other products.
Use in app frameworks
Significant portions of the Chromium code are used by some application frameworks. Notable examples are Electron, the Chromium Embedded Framework, and the Qt WebEngine. These frameworks have been used to create many apps.
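As an illustration of how such frameworks are typically used, the following minimal sketch opens a native window whose contents are rendered by the embedded Chromium engine. It assumes the third-party cefpython3 bindings for the Chromium Embedded Framework are installed; the URL and window title are arbitrary placeholders, and a real application would add its own window management and application logic.

```python
import sys
from cefpython3 import cefpython as cef  # third-party CEF bindings (assumed installed)

def main():
    sys.excepthook = cef.ExceptHook          # shut CEF down cleanly on uncaught errors
    cef.Initialize()                          # start the embedded Chromium runtime
    cef.CreateBrowserSync(url="https://www.example.com",
                          window_title="Chromium Embedded Framework demo")
    cef.MessageLoop()                         # run until the window is closed
    cef.Shutdown()

if __name__ == "__main__":
    main()
```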
References
External links
Cloud clients
Cross-platform free software
Free and open-source Android software
Free software programmed in C++
Free web browsers
Google Chrome
Google software
MacOS web browsers
Portable software
Software based on WebKit
Software using the BSD license
Windows web browsers
2008 software |
61975455 | https://en.wikipedia.org/wiki/Athene%20%28research%20center%29 | Athene (research center) | ATHENE, formerly Center for Research in Security and Privacy (CRISP), is the national research center for IT security and privacy in Germany and the largest research center for IT security in Europe. The research center is located in Darmstadt and deals with key issues of IT security in the digitization of government, business and society.
ATHENE established a new research area in IT security research, the IT security of large systems, which is the focus of its work. Previously, mostly isolated aspects such as individual protocols or encryption methods had been investigated. Research into the IT security of large systems is intended to lead to a measurable increase in IT security. The research spectrum ranges from basic research to application.
Director of ATHENE is Michael Waidner.
Organisation
ATHENE is an institution of the Fraunhofer Society and an alliance of the Fraunhofer Institute for Secure Information Technology (Fraunhofer SIT), the Fraunhofer Institute for Computer Graphics Research (Fraunhofer IGD), the Technische Universität Darmstadt (TU Darmstadt) and the Darmstadt University of Applied Sciences (h_da). All institutions are based in Darmstadt.
ATHENE is funded by the Federal Ministry of Education and Research (BMBF) and the Hessian Ministry of Higher Education, Research and the Arts (HMWK).
Research themes
The following research themes have emerged under the main topic IT security of large systems. The institute conducts research on analysis techniques for large software systems and the design of mechanisms for securing sensitive data. The idea behind the latter is privacy by design. In addition, the institute conducts research on fundamental engineering issues of securing critical infrastructures and develops analysis techniques for increasing the security of mobile platforms and methods for measuring IT security and data protection.
History
ATHENE's history dates back to 1961, when the German Data Center (German: Deutsches Rechenzentrum (DRZ)) was founded in Darmstadt. At that time, the German Data Center was equipped with one of the most powerful mainframe computers in Germany and thus became the first mainframe computer center in Germany that could be used for research purposes by universities and scientific institutions. After the ARPANET succeeded in connecting computers with each other, communication between the machines became the focus of research at the DRZ. In 1973, the DRZ merged with other research institutions in this field to form the Society for Mathematics and Data Processing (German: Gesellschaft für Mathematik und Datenverarbeitung (GMD)). As a result, resources were pooled, working groups were networked, and the society established the Institute for Remote Data Processing, which was renamed the Institute for Telecooperation Technology in 1992. Under the leadership of Heinz Thielmann, the institute became increasingly involved with IT security issues. With the rise of the Internet, IT security grew in importance, and in 1998 the institute was renamed the Institute for Secure Telecooperation. In 2001, GMD merged with the Fraunhofer Society, and the institute became the Fraunhofer Institute for Secure Information Technology (Fraunhofer SIT).
In 1975, José Luis Encarnação established the Interactive Graphics Systems (GRIS) research group within the Institute for Information Management and Interactive Systems of the Department of Computer Science of the Technische Hochschule Darmstadt (TH Darmstadt), now called Technische Universität Darmstadt. In 1984, GRIS began collaborating with the Center for Computer Graphics. A working group that emerged from this collaboration was taken up by the Fraunhofer Society, and in 1987 the Fraunhofer Institute for Computer Graphics Research (Fraunhofer IGD) was established, with José Luis Encarnação as its founding director.
In 1996, Johannes Buchmann was appointed Professor of Theoretical Computer Science at the Department of Computer Science of TH Darmstadt. His appointment is regarded as the birth of IT security at TH Darmstadt. In 2001, Claudia Eckert, who also headed Fraunhofer SIT from 2001 to 2011, was appointed Professor of Information Security at TU Darmstadt.
In 1999, Darmstadt's universities and research institutions founded the Competence Center for Applied Security (CAST), the largest network for cyber security in German-speaking countries.
In 2002, the Darmstadt Center for IT-Security (German: Darmstädter Zentrum für IT-Sicherheit (DZI)) was founded, which in 2008 became the Center for Advanced Security Research Darmstadt (CASED). The founding director of CASED was Buchmann. In 2010, Michael Waidner became director of Fraunhofer SIT. Owing to the efforts of Buchmann and Waidner, the European Center for Security and Privacy by Design (EC SPRIDE) was founded in 2011. CASED and EC SPRIDE were part of LOEWE, the research excellence program of the state of Hesse. Buchmann and Waidner developed the centers into the largest research institutions for IT security in Europe. In 2015, CASED and EC SPRIDE merged into the Center for Research in Security and Privacy (CRISP).
In 2012, Intel founded the Intel Collaborative Research Institute for Secure Computing at the Technische Universität Darmstadt. It was the first Intel collaborative research center for IT security outside of the United States. In 2014, the German Research Foundation (DFG) also established the Collaborative Research Centre Cryptography–Based Security Solutions (CROSSING), which deals with cryptography-based security solutions. In 2016, the Federal Ministry of Finance decided to make the region around Darmstadt the pre-eminent hub for the digital transformation of the economy. The Federal Ministry of Finance set up the "Digital Hub Cybersecurity" and "Digital Hub FinTech" centres in the region to help start-ups in Germany commercialise, scale and internationalise their solutions and companies.
Researchers at ATHENE played a major role in establishing the field of post-quantum cryptography internationally. In 2018, the stateful hash-based signature scheme XMSS, developed by a team of researchers under the direction of Buchmann, became the first international standard for post-quantum signature schemes. XMSS is the first future-proof, secure and practical signature scheme with minimal security requirements. The work began in 2003. Since 1 January 2019, CRISP has been the national research centre for IT security in Germany. CRISP was later renamed ATHENE.
References
External links
Website of Fraunhofer Institute for Secure Information Technology
Website of the Fraunhofer Institute for Computer Graphics Research
Website of the Technische Universität Darmstadt
Website of the Darmstadt University of Applied Sciences
Computer security organizations
Information technology research institutes
Fraunhofer Society
Technische Universität Darmstadt
Darmstadt |
1579249 | https://en.wikipedia.org/wiki/Optimal%20asymmetric%20encryption%20padding | Optimal asymmetric encryption padding | In cryptography, Optimal Asymmetric Encryption Padding (OAEP) is a padding scheme often used together with RSA encryption. OAEP was introduced by Bellare and Rogaway, and subsequently standardized in PKCS#1 v2 and RFC 2437.
The OAEP algorithm is a form of Feistel network which uses a pair of random oracles G and H to process the plaintext prior to asymmetric encryption. When combined with any secure trapdoor one-way permutation, this processing is proved in the random oracle model to result in a combined scheme which is semantically secure under chosen plaintext attack (IND-CPA). When implemented with certain trapdoor permutations (e.g., RSA), OAEP is also proved secure against chosen ciphertext attack. OAEP can be used to build an all-or-nothing transform.
OAEP satisfies the following two goals:
Add an element of randomness which can be used to convert a deterministic encryption scheme (e.g., traditional RSA) into a probabilistic scheme.
Prevent partial decryption of ciphertexts (or other information leakage) by ensuring that an adversary cannot recover any portion of the plaintext without being able to invert the trapdoor one-way permutation.
The original version of OAEP (Bellare/Rogaway, 1994) showed a form of "plaintext awareness" (which they claimed implies security against chosen ciphertext attack) in the random oracle model when OAEP is used with any trapdoor permutation. Subsequent results contradicted this claim, showing that OAEP was only IND-CCA1 secure. However, the original scheme was proved in the random oracle model to be IND-CCA2 secure when OAEP is used with the RSA permutation using standard encryption exponents, as in the case of RSA-OAEP.
An improved scheme (called OAEP+) that works with any trapdoor one-way permutation was offered by Victor Shoup to solve this problem.
More recent work has shown that in the standard model (that is, when hash functions are not modeled as random oracles) it is impossible to prove the IND-CCA2 security of RSA-OAEP under the assumed hardness of the RSA problem.
Algorithm
In the diagram,
n is the number of bits in the RSA modulus.
k0 and k1 are integers fixed by the protocol.
m is the plaintext message, an (n − k0 − k1)-bit string
G and H are mask generation functions based on chosen cryptographic hash functions
⊕ is an xor operation.
To encode,
messages are padded with k1 zeros to be n − k0 bits in length.
r is a randomly generated k0-bit string
G expands the k0 bits of r to n − k0 bits.
X = m00...0 ⊕ G(r)
H reduces the n − k0 bits of X to k0 bits.
Y = r ⊕ H(X)
The output is X || Y where X is shown in the diagram as the leftmost block and Y as the rightmost block.
Usage in RSA:
The encoded message can then be encrypted with RSA. The deterministic property of RSA is now avoided by using the OAEP encoding.
To decode,
recover the random string as r = Y ⊕ H(X)
recover the message as m00...0 = X ⊕ G(r)
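The structure of the scheme can be illustrated with a short, self-contained Python sketch. It is a byte-oriented toy with arbitrary illustrative sizes, using SHA-256-based expansion to stand in for the mask generation functions G and H; it is not the PKCS#1 encoding and provides no security on its own.

```python
import hashlib
import os

N, K0, K1 = 64, 16, 16   # illustrative sizes in bytes: total, seed, zero padding

def hash_stream(data: bytes, out_len: int) -> bytes:
    """Counter-based SHA-256 expansion, standing in for G and H
    (real implementations use MGF1)."""
    out, counter = b"", 0
    while len(out) < out_len:
        out += hashlib.sha256(data + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:out_len]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def oaep_encode(m: bytes) -> bytes:
    assert len(m) == N - K0 - K1
    padded = m + b"\x00" * K1                   # pad m with k1 zeros
    r = os.urandom(K0)                          # random k0-bit string r
    X = xor(padded, hash_stream(r, N - K0))     # X = m00...0 XOR G(r)
    Y = xor(r, hash_stream(X, K0))              # Y = r XOR H(X)
    return X + Y                                # output X || Y

def oaep_decode(encoded: bytes) -> bytes:
    X, Y = encoded[:N - K0], encoded[N - K0:]
    r = xor(Y, hash_stream(X, K0))              # r = Y XOR H(X)
    padded = xor(X, hash_stream(r, N - K0))     # m00...0 = X XOR G(r)
    assert padded.endswith(b"\x00" * K1)        # check the k1 zero padding
    return padded[:-K1]

message = b"a 32-byte demonstration message!"   # N - K0 - K1 = 32 bytes
assert oaep_decode(oaep_encode(message)) == message
```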
Security
The "all-or-nothing" security is from the fact that to recover m, one must recover the entire X and the entire Y; X is required to recover r from Y, and r is required to recover m from X. Since any changed bit of a cryptographic hash completely changes the result, the entire X, and the entire Y must both be completely recovered.
Implementation
In the PKCS#1 standard, the random oracles G and H are identical. The PKCS#1 standard further requires that the random oracles be MGF1 with an appropriate hash function.
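In practice the padding is applied by a cryptographic library rather than implemented by hand. The following usage sketch relies on the third-party Python cryptography package (assumed to be installed), whose RSA-OAEP implementation follows PKCS#1 with MGF1 over a chosen hash function; the key size and message are placeholders.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),  # MGF1 as the mask generation function
    algorithm=hashes.SHA256(),                    # hash used for the OAEP label
    label=None,
)

ciphertext = private_key.public_key().encrypt(b"attack at dawn", oaep)
assert private_key.decrypt(ciphertext, oaep) == b"attack at dawn"
```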
See also
Key encapsulation
References
Public-key encryption schemes
Padding algorithms |
33433518 | https://en.wikipedia.org/wiki/Crayon%20Shin-chan%3A%20Blitzkrieg%21%20Pig%27s%20Hoof%27s%20Secret%20Mission | Crayon Shin-chan: Blitzkrieg! Pig's Hoof's Secret Mission | , also known as Tip and Run! Pig Hoof Battle!, is a 1998 anime film. It is the sixth film based on the popular comedy manga and anime series Crayon Shin-chan. The film was released to theatres on April 18, 1998 in Japan.
Plot
The story begins when the secret agent Orioke, having stolen a disk needed to build a secret weapon from the airship of the secret society Pig's Hoof, hides on a houseboat where the students of Futaba Kindergarten are dining. A Pig's Hoof airship then carries off the boat, with Orioke, Shinnosuke, Kazama, Masao, Nene and Bo aboard.
From there, Shinnosuke, Masao, Nene, Kazama and Bo accompany the agent everywhere as hostages of the Pig's Hoof. Meanwhile, an SML agent visits the children's homes, collecting photos of them so they can be recognized and rescued. When he reaches the Noharas, Misae, carrying Himawari, and Hiroshi follow him to Hong Kong to rescue their son. There they persuade the agent to take them along and to let them help.
Meanwhile, Shinnosuke and his friends, who have been kidnapped along with the houseboat, are held captive in the Pig's Hoof airship. Eventually the three executives of Pig's Hoof (Barrel, Blade and Mama) and their leader, Mouse, appear and demand that Orioke return the disk, but she refuses. They decide to take Orioke and Shinnosuke to the Pig's Hoof headquarters.
On the way to the headquarters, Orioke manages to let Shinnosuke and his friends escape from the airship, but she cannot escape herself. Kin'niku and the Noharas, who have come to help Shinnosuke and the others, crash-land when their plane is attacked by the Pig's Hoof airship, and they walk the rest of the way to the Pig's Hoof headquarters. Meanwhile, Shinnosuke and his friends start walking toward where they hope to find people, but their route also leads to the Pig's Hoof headquarters.
Shinnosuke and his friends find a secret entrance to the laboratory of Dr. Obukuro and enter, but they are caught by a surveillance camera and meet Orioke again. Mouse takes the disk back and starts the computer, onto which Buriburizaemon is projected. Mouse's plan is to spread computer viruses around the world and conquer it. In the meantime, Misae joins Shinnosuke and the others.
In desperation, Dr. Obukuro sends Shinnosuke into the program to prevent the spread of the computer viruses. Mouse opens a black-hole-like entrance and orders Buriburizaemon to run wild as much as he wants, but Himawari interferes with the operation of the computer.
Buriburizaemon wants to hear Shinnosuke's story, so Shinnosuke begins to tell "Buriburizaemon's Adventure". Buriburizaemon thanks Shinnosuke and disappears from the program. Meanwhile, Orioke defeats Mama, and the computer virus is deleted.
However, Mouse activates the self-destruct device at the headquarters, and Kin'niku orders everyone to evacuate. The airship carrying them all tries to escape from the headquarters, but it is too heavy to climb with so many people on board. At that moment, Shinnosuke sees an illusion of Buriburizaemon pushing the airship upward. Once the airship has safely left the headquarters, the illusion of Buriburizaemon slowly disappears into the flames.
With their hard work finished, Kin'niku and Orioke, who were previously a couple, remarry and go on a picnic with the Nohara family. While everyone remarks how strange it was that something seemed to push the airship up at that moment, Shinnosuke draws a picture of Buriburizaemon and calls it "The Adventures of Buriburizaemon".
Cast
Akiko Yajima as Shinnosuke Nohara
Miki Narahashi as Misae Nohara
Keiji Fujiwara as Hiroshi Nohara
Satomi Kōrogi as Himawari Nohara
Kotono Mitsuishi as Orioke
Tesshō Genda as Kin'niku
Tarō Ishida as Mouse
Kaneto Shiozawa as Buriburizaemon
Kōichi Yamadera as Barrel
Minori Matsushima as Mama
Show Hayami as Blade
Junpei Takiguchi as Dr. Obukuro
Hiroshi Masuoka as Angela Ome
Mari Mashiba as Toru Kazama
Tamao Hayashi as Nene Sakurada
Teiyū Ichiryūsai as Masao Sato
Chie Satō as Bo Suzuki
IZAM as himself
Yoshito Usui as Cartoonist
Rokurō Naya as Bunta Takakura (principal)
Yumi Takada as Midori Yoshinaga
Michie Tomizawa as Ume Matsuzaka
Yūko Satō as Justice
Soundtrack
The theme song of the movie is PURENESS. It was written by IZAM, composed by KUZUKI and sung by SHAZNA.
Staff
Original: Yoshito Usui
Director: Keiichi Hara
Screenplay: Keiichi Hara
Storyboard: Keiichi Hara
Character Design: Katsunori Hara
Animation director: Katsunori Hara and Noriyuki Tsutsumi
Setting design: Masaaki Yuasa
Cinematography: Toshiyuki Umeda
Music: Toshiyuki Arakawa and Shinji Miyazaki
Producer: Hitoshi Mogi, Kenji Ōta and Takashi Horiuchi
Production companies: Shin-Ei Animation, TV Asahi and ADK
See also
List of Crayon Shin-chan films
References
External links
Films directed by Keiichi Hara
1998 anime films
Blitzkrieg! Pig's Hoof's Secret Mission
Toho animated films
Animated films set in Tokyo
Films set in Hong Kong
Japanese films
Films scored by Shinji Miyazaki |
28443547 | https://en.wikipedia.org/wiki/LXC | LXC | Linux Containers (LXC) is an operating-system-level virtualization method for running multiple isolated Linux systems (containers) on a control host using a single Linux kernel.
The Linux kernel provides the cgroups functionality that allows limitation and prioritization of resources (CPU, memory, block I/O, network, etc.) without the need for starting any virtual machines, and also the namespace isolation functionality that allows complete isolation of an application's view of the operating environment, including process trees, networking, user IDs and mounted file systems.
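As a rough illustration of the kernel interface involved (independent of LXC itself), the sketch below drives the cgroup-v2 filesystem API directly to cap the memory of the current process. It assumes root privileges and a cgroup-v2 hierarchy mounted at /sys/fs/cgroup with the memory controller enabled; the group name and the limit are arbitrary illustrative values.

```python
import os

CGROUP = "/sys/fs/cgroup/demo"          # illustrative control-group name
os.makedirs(CGROUP, exist_ok=True)      # creating the directory creates the cgroup

with open(os.path.join(CGROUP, "memory.max"), "w") as f:
    f.write(str(100 * 1024 * 1024))     # cap the group's memory at 100 MiB

with open(os.path.join(CGROUP, "cgroup.procs"), "w") as f:
    f.write(str(os.getpid()))           # move the current process into the group
```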
LXC combines the kernel's cgroups and support for isolated namespaces to provide an isolated environment for applications. Early versions of Docker used LXC as the container execution driver, though LXC was made optional in v0.9 and support was dropped in Docker v1.10. References to Linux containers commonly refer to Docker containers running on Linux.
Overview
LXC provides operating system-level virtualization through a virtual environment that has its own process and network space, instead of creating a full-fledged virtual machine. LXC relies on the Linux kernel cgroups functionality that was released in version 2.6.24. It also relies on other kinds of namespace isolation functionality, which were developed and integrated into the mainline Linux kernel.
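A typical container lifecycle can be sketched by driving the standard LXC command-line tools from Python. This is a hedged example: it assumes the lxc-* userspace tools are installed and that the caller has suitable privileges, and the container name and the distribution, release and architecture passed to the download template are placeholders.

```python
import subprocess

def lxc(*args: str) -> None:
    """Run one of the LXC command-line tools, raising an error on failure."""
    subprocess.run(list(args), check=True)

lxc("lxc-create", "-n", "demo", "-t", "download", "--",
    "-d", "ubuntu", "-r", "focal", "-a", "amd64")      # build a container from an image
lxc("lxc-start", "-n", "demo")                          # boot the container
lxc("lxc-attach", "-n", "demo", "--", "uname", "-a")    # run a command inside it
lxc("lxc-stop", "-n", "demo")                           # shut it down
lxc("lxc-destroy", "-n", "demo")                        # delete it
```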
Security
Originally, LXC containers were not as secure as other OS-level virtualization methods such as OpenVZ: in Linux kernels before 3.8, the root user of the guest system could run arbitrary code on the host system with root privileges, just as they can in chroot jails. Starting with the LXC 1.0 release, it is possible to run containers as regular users on the host using "unprivileged containers". Unprivileged containers are more limited in that they cannot access hardware directly. However, even privileged containers should provide adequate isolation in the LXC 1.0 security model, if properly configured.
Alternatives
LXC is similar to other OS-level virtualization technologies on Linux such as OpenVZ and Linux-VServer, as well as those on other operating systems such as FreeBSD jails, AIX Workload Partitions and Solaris Containers. In contrast to OpenVZ, LXC works in the vanilla Linux kernel requiring no additional patches to be applied to the kernel sources. Version 1 of LXC, which was released on 20 February 2014, is a long-term supported version and intended to be supported for five years. LXC 2.0 and 3.0 are long-term support releases: LXC 2.0 will be supported until June 1, 2021; LXC 3.0 will be supported until June 1, 2023.
LXD
LXD is a system container manager; it is essentially an alternative to LXC's tools rather than a rewrite of LXC. In fact, it builds on top of LXC to provide a new, better user experience.
See also
Open Container Initiative
Container Linux (formerly CoreOS Linux)
Docker, a project automating deployment of applications inside software containers
Apache Mesos, a large-scale cluster management platform based on container isolation
Operating system-level virtualization implementations
Proxmox Virtual Environment, an open-source server virtualization management platform supporting LXC containers and KVM
Anbox, uses LXC to execute Android applications in other Linux distributions
References
External links
IBM developerworks article about LXC
"Evading from Linux Containers" by Marco D'Itri
Presentation about cgroups and namespaces, the underlying technology of Linux containers, by Rami Rosen
Presentation about Linux Containers and the future cloud, by Rami Rosen
LXC : Install and configure the Linux Containers
LSS: Secure Linux containers (LWN.net)
Introduction to Linux Containers
Free virtualization software
Linux kernel features
Linux-only free software
Operating system security
Virtualization-related software for Linux |
25658582 | https://en.wikipedia.org/wiki/2010%20Miami%20Hurricanes%20football%20team | 2010 Miami Hurricanes football team | The 2010 Miami Hurricanes football team represented the University of Miami during the 2010 NCAA Division I FBS football season. The Hurricanes were coached by Randy Shannon during the regular season, then coached by Jeff Stoutland (interim) during their bowl game and played their home games at Sun Life Stadium. They are members of the Coastal Division of the Atlantic Coast Conference. They finished the season 7–6, 5–3 in ACC play and were invited to the Sun Bowl where they were defeated by Notre Dame, 33–17.
Schedule
Pre-season
Following Miami's loss to Wisconsin in the Champs Sports Bowl, defensive lineman Allen Bailey, wide receiver Leonard Hankerson, and offensive lineman Orlando Franklin announced they would be returning for their senior seasons despite being considered possible candidates to enter the 2010 NFL Draft.
In January it was reported that defensive line coach and recruiting coordinator Clint Hurtt would be leaving to go to the University of Louisville to become their new defensive line coach. On January 27, Miami hired Rick Petri to replace Hurtt as the defensive line coach. Petri had previously coached at the University of Kentucky and had once coached the defensive line before at Miami between 1993 and 1995. Wide receivers coach Aubrey Hill was named the new recruiting coordinator. In February, running backs coach Tommie Robinson left to take the same position for the Arizona Cardinals. He was replaced by Mike Cassano, who was previously running backs coach and recruiting coordinator at Florida International University, and had also coached at the University of Massachusetts Amherst under current Miami offensive coordinator Mark Whipple. Head coach Randy Shannon later announced that defensive assistant Michael Barrow would be returning to his position of full-time linebackers coach after defensive coordinator John Lovett had filled the position during the 2009 season. Barrow had been the linebackers coach in 2007 and 2008.
Miami began spring practice on February 23. Quarterback Jacory Harris only participated in non-throwing drills while recovering from a shoulder injury. Miami concluded spring practice with its spring game on March 27.
In March 2010, the Miami track team signed Latwan Anderson, who was also a four-star defensive back recruit in football. Anderson will walk on to the football team in the fall. Anderson's track scholarship will convert to a football scholarship once he plays in his first football game.
On May 12, Randy Shannon signed a new four-year contract with Miami.
On July 9, The New York Times reported that offensive lineman Seantrel Henderson, one of the top high school recruits in the nation, would attend the University of Miami in August 2010, academically cleared to play the 2010 season. Henderson previously signed a national letter of intent to join the USC Trojans. Henderson signed with the Trojans after assurances from coach Lane Kiffin that the Trojans football program would not be hit with major penalties following infractions made in previous seasons. After the NCAA penalized the Trojans in June 2010, Henderson was released from his letter of intent, allowing him to freely sign with another football program.
Senior defensive end Stephen Wesley was dismissed from the team at the end of July, reportedly for academic reasons. On August 2, junior wide receiver Thearon Collier was also dismissed due to team violations.
Miami began fall practice on August 5.
2010 recruiting class
Roster
Depth chart
(prior to Game 12 versus South Florida)
Rankings
Regular season
Florida A&M
1st Quarter
Leonard Hankerson 19 yard touchdown pass from Jacory Harris. Bosher PAT Good. (7-0 MIA)
Leonard Hankerson 40 yard touchdown pass from Jacory Harris. Bosher PAT Good. (14-0 MIA)
2nd Quarter
Damien Berry 32 yard touchdown pass from Jacory Harris. Bosher PAT Good. (21-0 MIA)
Ray Ray Armstrong 22 yard interception return. Bosher PAT Good. (28-0 MIA)
Mike James 1 yard touchdown run. Bosher PAT Good. (35-0 MIA)
3rd Quarter
Lamar Miller 5 yard touchdown run. Bosher PAT Good. (42-0 MIA)
4th Quarter
Matt Bosher 24 yard field goal. (45-0 MIA)
Ohio State
Miami and Ohio State last played when they met in the 2003 Fiesta Bowl playing for the national championship, a game won by Ohio State in double overtime, 31–24. In 2010, Miami lost to Ohio State 36–24, but the Buckeyes' win was later vacated after an NCAA investigation of their program.
Pittsburgh
Miami and Pittsburgh last met in 2003 at Pittsburgh in a game won by Miami 28–14. Miami is 21–9–1 all time against Pittsburgh.
Clemson
Miami and Clemson last met in 2009 at Miami in a game won by Clemson 40–37. Miami is 5–3 all time against Clemson.
Florida State
Jermaine Thomas scored a career-high three touchdowns, all in the first 21 minutes, and Chris Thompson ended the scoring with a 90-yard touchdown run in the 4th quarter. The 90-yard run is the longest run Miami has ever allowed in the history of its football program. The 23rd-ranked Seminoles enjoyed a surprisingly easy 45–17 victory. The 45 points are the second-most Florida State has scored in the series.
Duke
Miami and Duke last met in 2009 at Miami in a game won by Miami 34–16. Miami is 6–1 all time against Duke.
North Carolina
Miami and North Carolina last met in 2009 at Chapel Hill in a game won by North Carolina 33–24. Miami is 5–8 all time against North Carolina.
Virginia
Miami and Virginia last met in 2009 at Miami in a game won by Miami 52–17. Miami is 5–3 all time against Virginia.
Maryland
Miami and Maryland last met in 2006 at College Park in a game won by Maryland 14–13. Miami is 7–8 all time against Maryland.
Georgia Tech
Miami and Georgia Tech last met in 2009 at Miami in a game won by Miami 33–17. Miami is 5–10 all time against Georgia Tech.
Virginia Tech
Miami and Virginia Tech last met in 2009 at Blacksburg in a game won by Virginia Tech 31–7. Miami is 17–10 all time against Virginia Tech.
South Florida
Miami was upset by South Florida, losing to the Bulls in overtime 23–20. Miami is now 2–1 all time against South Florida.
Sun Bowl: Notre Dame
Shannon fired, Al Golden hired
Head coach Randy Shannon was fired the day following the South Florida loss.
On December 12, 2010, ESPN reported that Miami had offered the new head coaching position to former Temple University head coach Al Golden.
In press-conference remarks upon his hiring on December 13, 2010, Golden emphasized the importance of the Miami legacy. "It's the most recognizable brand in college football," he said; "I go back to the former players that are here, the five national championships, 20 national award winners, countless All-Americans, incredible tradition; it's a dream job."
Golden also announced after the bowl game that offensive coordinator Mark Whipple would not be retained and his replacement could come from the NFL.
References
Miami
Miami Hurricanes football seasons
Miami Hurricanes football |
46900313 | https://en.wikipedia.org/wiki/Office%20of%20Personnel%20Management%20data%20breach | Office of Personnel Management data breach | In June 2015, the United States Office of Personnel Management (OPM) announced that it had been the target of a data breach targeting personnel records. Approximately 22.1 million records were affected, including records related to government employees, other people who had undergone background checks, and their friends and family. One of the largest breaches of government data in U.S. history, information that was obtained and exfiltrated in the breach included personally identifiable information such as Social Security numbers, as well as names, dates and places of birth, and addresses. State-sponsored hackers working on behalf of the Chinese government carried out the attack.
The data breach consisted of two separate, but linked, attacks. It is unclear when the first attack occurred but the second attack happened on May 7, 2014, when attackers posed as an employee of KeyPoint Government Solutions, a subcontracting company. The first attack was discovered March 20, 2014, but the second attack was not discovered until April 15, 2015. In the aftermath of the event, Katherine Archuleta, the director of OPM, and the CIO, Donna Seymour, resigned.
Discovery
The first breach, named "X1" by the Department of Homeland Security (DHS), was discovered March 20, 2014 when a third party notified DHS of data exfiltration from OPM's network.
With regard to the second breach, named "X2", The New York Times had reported that the infiltration was discovered using the United States Computer Emergency Readiness Team (US-CERT)'s Einstein intrusion-detection program. However, the Wall Street Journal, Wired, Ars Technica, and Fortune later reported that it was unclear how the breach was discovered, and that it may have been uncovered during a product demonstration of CyFIR, a commercial forensic product from the Manassas, Virginia security company CyTech Services. These reports were subsequently discussed by CyTech Services in a press release issued by the company on June 15, 2015 to clarify contradictions made by OPM spokesman Sam Schumach in a later edit of the Fortune article. However, it was not CyTech Services that uncovered the infiltration; rather, it was detected by OPM personnel using a software product of the vendor Cylance. Ultimately, the House of Representatives' Majority Staff Report on the OPM breach found no evidence suggesting that CyTech Services knew of Cylance's involvement or had prior knowledge of an existing breach at the time of its product demonstration, leading to the finding that both tools independently "discovered" the malicious code running on the OPM network.
Data theft
Theft of security clearance information
The data breach compromised highly sensitive 127-page Standard Form 86 (SF 86) (Questionnaire for National Security Positions). SF-86 forms contain information about family members, college roommates, foreign contacts, and psychological information. Initially, OPM stated that family members' names were not compromised, but the OPM subsequently confirmed that investigators had "a high degree of confidence that OPM systems containing information related to the background investigations of current, former, and prospective federal government employees, to include U.S. military personnel, and those for whom a federal background investigation was conducted, may have been exfiltrated." The Central Intelligence Agency, however, does not use the OPM system; therefore, it may not have been affected.
Theft of personal details
J. David Cox, president of the American Federation of Government Employees, wrote in a letter to OPM director Katherine Archuleta that, based on the incomplete information that the AFGE had received from OPM, "We believe that the Central Personnel Data File was the targeted database, and that the hackers are now in possession of all personnel data for every federal employee, every federal retiree, and up to one million former federal employees." Cox stated that the AFGE believes that the breach compromised military records, veterans' status information, addresses, dates of birth, job and pay history, health insurance and life insurance information, pension information, and data on age, gender, and race.
Theft of fingerprints
The stolen data included 5.6 million sets of fingerprints. Biometrics expert Ramesh Kesanupalli said that because of this, secret agents were no longer safe, as they could be identified by their fingerprints, even if their names had been changed.
Perpetrators
The overwhelming consensus is that the cyberattack was carried out by state-sponsored attackers for the Chinese government. The attack originated in China, and the backdoor tool used to carry out the intrusion, PlugX, has been previously used by Chinese-language hacking groups that target Tibetan and Hong Kong political activists. The use of superhero names is also a hallmark of Chinese-linked hacking groups.
The House Committee on Oversight and Government Reform report on the breach strongly suggested the attackers were state actors due to the use of a very specific and highly developed piece of malware. U.S. Department of Homeland Security official Andy Ozment testified that the attackers had gained valid user credentials to the systems they were attacking, likely through social engineering. The breach also involved a malware package which installed itself within OPM's network and established a backdoor. From there, attackers escalated their privileges to gain access to a wide range of OPM's systems. In an article published before the House Oversight report, Ars Technica reported on poor security practices at OPM contractors: at least one worker with root access to every row in every database was physically located in China, and another contractor had two employees with Chinese passports. However, these were discussed as poor security practices, not as the actual source of the leak.
China denied responsibility for the attack.
In 2017, Chinese national Yu Pingan was arrested on charges of providing the "Sakula" malware used in the OPM data breach and other cyberintrusions. The FBI arrested Yu at Los Angeles International Airport after he had flown to the U.S. for a conference. Yu spent 18 months at the San Diego federal detention center and pleaded guilty to the federal offense of conspiracy to commit computer hacking and was subsequently deported to China. He was sentenced to time served in February 2019 and permitted to return to China; by the end of that year, Yu was working as a teacher at the government-run Shanghai Commercial School in central Shanghai. Yu was sentenced to pay $1.1 million in restitution to companies targeted by the malware, although there is little possibility of actual repayment. Yu was one of a very small number of Chinese hackers to be arrested and convicted in the U.S.; most hackers are never apprehended.
Motive
Whether the attack was motivated by commercial gain remains unclear. It has been suggested that hackers working for the Chinese military intend to compile a database of Americans using the data obtained from the breach.
Warnings
The OPM had been warned multiple times of security vulnerabilities and failings. A March 2015 OPM Office of the Inspector General semi-annual report to Congress warned of "persistent deficiencies in OPM's information system security program," including "incomplete security authorization packages, weaknesses in testing of information security controls, and inaccurate Plans of Action and Milestones."
A July 2014 story in The New York Times quoted unnamed senior American officials saying that Chinese hackers had broken into OPM. The officials said that the hackers seemed to be targeting files on workers who had applied for security clearances, and had gained access to several databases, but had been stopped before they obtained the security clearance information. In an interview later that month, Katherine Archuleta, the director of OPM, said that the most important thing was that no personal identification information had been compromised.
Responsibility
Some lawmakers made calls for Archuleta to resign citing mismanagement and that she was a political appointee and former Obama campaign official with no degree or experience in human resources. She responded that neither she nor OPM chief information officer Donna Seymour would do so. "I am committed to the work that I am doing at OPM," Archuleta told reporters. "I have trust in the staff that is there." On July 10, 2015, Archuleta resigned as OPM director.
Daniel Henninger, deputy editorial page director of the Wall Street Journal, speaking on Fox News' Journal Editorial Report, criticized the appointment of Archuleta to be "in charge of one of the most sensitive agencies" in the U.S. government, saying: "What is her experience to run something like that? She was the national political director of Barack Obama's 2012 re-election campaign. She's also the head of something called the Latina Initiative. She's a politico, right? ... That is the kind of person they have put in."
Security experts have stated that the biggest problem with the breach was not the failure to prevent remote break-ins, but the absence of mechanisms to detect outside intrusion and the lack of proper encryption of sensitive data. OPM CIO Donna Seymour countered that criticism by pointing to the agency's aging systems as the primary obstacle to putting such protections in place, despite having encryption tools available. DHS Assistant Secretary for Cybersecurity and Communications Andy Ozment explained further that, "If an adversary has the credentials of a user on the network, then they can access data even if it's encrypted, just as the users on the network have to access data, and that did occur in this case. So encryption in this instance would not have protected this data."
Investigation
A July 22, 2015 memo by Inspector General Patrick McFarland said that OPM's Chief Information Officer Donna Seymour was slowing the investigation into the breach, leading him to wonder whether or not she was acting in good faith. He did not raise any specific claims of misconduct, but he did say that her office was fostering an "atmosphere of mistrust" by giving him "incorrect or misleading" information. On Monday, 22 February 2016, Seymour resigned, just two days before she was scheduled to testify before a House panel that was continuing to investigate the data breach.
In 2018, the OPM was reportedly still vulnerable to data thefts, with 29 of the Government Accountability Office's 80 recommendations remaining unaddressed. In particular, the OPM was reportedly still using passwords that had been stolen in the breach. It also had not discontinued the practice of sharing administrative accounts between users, despite that practice having been recommended against as early as 2003.
Reactions
FBI Director James Comey stated: "It is a very big deal from a national security perspective and from a counterintelligence perspective. It's a treasure trove of information about everybody who has worked for, tried to work for, or works for the United States government."
Speaking at a forum in Washington, D.C., Director of National Intelligence James R. Clapper said: "You have to kind of salute the Chinese for what they did. If we had the opportunity to do that, I don't think we'd hesitate for a minute."
See also
2020 United States Treasury and Department of Commerce data breach
Cyberwarfare by China
Operation Aurora
Yahoo! data breaches
References
Data breaches in the United States
Cyberattacks
Cyberwarfare in China
Cyberwarfare in the United States
United States Office of Personnel Management |
60914436 | https://en.wikipedia.org/wiki/Side%20project%20time | Side project time | Side project time is a type of employee benefit constituting a guarantee from employers that their employees may work on their personal projects during some part (usually a percentage) of their time at work. Side project time is limited by two stipulations: what the employee works on is the intellectual property of their employer, and if requested, an explanation must be able to be given as to how the project benefits the company in some way, even tangentially. Among well known implementations of this benefit are Google's 20% Project, an initiative where the company's employees are allocated twenty-percent of their paid work time to pursue personal projects. The objective of the program is to inspire innovation in participating employees and ultimately increase company potential. Google's 20% Project was influenced by a comparable program, launched in 1948, by manufacturing multinational 3M which guaranteed employees "15% time"—to dedicate up to 15 percent of their paid hours to a personal interest; it was during his side project time that Arthur Fry invented the Post-It Note. For Google's part, Gmail and AdSense both arose out of side projects.
Technology company Google is credited for popularising the 20% concept. Other major companies that have at one time or another offered some or all of their employees the benefit include the BBC (10%), Apple (a few contiguous weeks yearly), and Atlassian (20%). Some, such as LinkedIn, have trialed more restrictive versions of such initiatives in which employees must first pitch their project and have it approved by their manager to work on it during company time.
Side project time has been criticized by some academics, such as Queens College sociology professor Abraham Walker, as "exploitative" due to how it grants employers the intellectual property rights over the personal business ideas of their employees that the employer would have never requested to be worked on otherwise.
History
3M and 15% time
The 15% project was an initiative established by the 3M corporation. At the time of the program's implementation, the United States workforce was characterized by highly inflexible employment in rigid business structures. After World War II ended, 3M developed an ethos of "innovate or die", which provided enterprise for the company and inspired the launch of the program. The original project had widely successful outcomes, with scientists developing and manufacturing products that remain in use internationally decades later.
Google implementation
Since before its IPO in 2004, the founders of Google have encouraged the 20% project system. Compared to its 3M predecessor, the five-percentage-point increase in time dedicated to personal projects was intended to allow further positive growth across the company. Over the last twenty years, the project has enabled the creation of key Google services such as Gmail.
As recognition of the clear benefits of such a scheme grew, schools replicated the system for their students in the classroom environment. The production of such creatively stimulated, ungraded work allows students to experiment with ideas without fear of assessment and increases their involvement in their general studies. Further, other businesses, including the software company Atlassian, now use the system in their day-to-day operations as a safeguard against dampened growth rates and a general lack of innovation.
In 2013, Google discontinued 20 percent time.
The 20% Project is responsible for the development of many Google services. Founders Sergey Brin and Larry Page advised that workers “spend 20% of their time working on what they think will most benefit Google”. Google's email service ‘Gmail’ was created by the developer Paul Buchheit on his 20% time. In his project "Caribou", Buchheit used his knowledge from university software experience to create the service. The freedom to use his time in such a way allowed him to ultimately develop a fundamental Google service. Buchheit's colleague, Susan Wojcicki, utilised her time to create their product AdSense. Finally, developer Krishna Bharat created Google News as an individual pursuit and hobby.
Other companies
The Australian enterprise software company Atlassian has been using the 20% project since 2008. Co-founder Mike Cannon-Brookes stated that "innovation slows as the company grows", and the scheme was introduced to re-inspire innovation. The introduction of the system was a six-month trial, granting $1 million to engineers and allowing them to work on private projects based on personal interests. Part of this 20% time is the annual "Ship It" day, where employees are challenged to create any product and ship it within 24 hours. Workers have created products ranging from refined beer to Jira software updates.
The American project management software company TargetProcess adopted the 20% project for its small organization; the company was composed of 110 members when the initiative was introduced. Company founder Michael Dubakov identified a lack of innovation from his employees, whose daily routines were occupied with monotonous work. Dubakov was inspired by the output of the 20% projects at Google and 3M but was unsure about limits on employee involvement: although Google drove the project, only certain employees were granted this time, meaning most workers could not use the opportunity for innovation. Dubakov decided to allow all employees to pursue individual projects, to reduce boredom and inspire innovation. Initially, the company introduced "Orange Fridays" in 2013, allocating four hours of each Friday afternoon to attending workshops and learning about and developing new technology. From this, the company saw a rise in investment opportunities and company growth. The company developed a culture of innovation, with no pressure applied to employees moving from their regular schedule to innovate and learn.
In 2016, TargetProcess introduced the Open Allocation Experiment. This initiative was an extension of "Orange Fridays" and was applied to a majority of employees. The goal was to provide a more comprehensible user experience and to fix issues with the TargetProcess product. Participating members were granted the opportunity to manage their own schedules and individually pursue new product designs. A 10-month review of the experiment highlighted positive growth in the company. Dubakov implemented deadlines, ensuring each individual met their personal goals, and as a result members reported an increase in personal motivation. The main detriment that arose from the experiment was decreased company unity: with each employee pursuing individual projects, a lack of management led to the company embodying the different visions of all its employees, affecting company alignment. The experiment was halted after the founder concluded the company was unprepared for this shift in work dynamic; TargetProcess would instead focus on backlog creation, with training programs for product development operating in conjunction. Self-organisation was a key concern for the 20% project, as not all employees could manage their projects while meeting regular work deadlines. Another issue in the experiment was the reward scheme, granted when an individual initiated a new product or scheme; this undermined the work of employees not involved in the experiment and led to an imbalance in motivation.
Notable projects
Gmail
From the 20% Project, Google produced forefront services that drive the company today. One outcome of this project is Gmail, Google's email service. Developer Paul Buchheit created this service under the project title ‘Caribou’. This service was developed without the awareness of other employees and was publicised several years later. By 2006, this service was available on computers and mobile devices. After 8 years of activity, Gmail had 425 million users. In May 2014, Gmail set the record as the first Android application to reach one billion installations on the Google Play store.
AdSense
The 20% Project aided in the development of AdSense, a program where publishers can produce media advertisements for a targeted audience. The service allows website publishers to generate revenue on a per-click basis and was publicly released on June 18, 2003. It was envisioned by Gmail's founder Paul Buchheit, who wanted appropriate ads to run throughout the Gmail service, but the project was pursued by Susan Wojcicki, who curated a team of developers who created the platform in their dedicated 20% time. Within two years of its inception, the service was generating 15 percent of the company's revenue. The service can now offer ads in the form of simple text, Flash video or rich media.
Google News
The news aggregator Google News is another result of the 20% Project. This service was publicized in January 2006, but the beta was introduced in September 2002. The founder of this service was Krishna Bharat, who developed this software in his dedicated project time. The service sources from 20,000 different publishers, providing articles in 28 languages. Now, the service has many new features, including Google News Alerts, which emails “alerts” on chosen keyword topics.
Atlassian
Another company that implemented the system is Atlassian. Within six months of the project's initiation, the company saw major improvements to Jira, Bamboo and Confluence. The Bamboo team introduced Stash 1.0 in May during the dedicated project time. Over two designated "Innovation Week" workshops, the company shipped 12 features. Another 20% Project allocation is "Ship It" day, which allows employees to pursue any project. Employees have used this time to refine the Jira service desk and to improve the Jira software's loading screens.
TargetProcess
TargetProcess implemented 20% Project initiatives such as the Open Allocation Experiment and Orange Fridays to inspire innovation from employees. Since the implementation of the project, investment opportunities have risen. The company grew over 10% between 2008 and 2016 during the project's operation. Founder, Michael Dubakov, observed increased enthusiasm from employees.
Benefits and detriments
The 20% Project is designed to let employees experiment without the pressure of company decline or the threat of unemployment. For companies that thrive from the conception of services and products, innovative and entrepreneurial thought is vital to success.
However, for an operating business, productivity can be negatively affected by the 20% Project. The loss of time previously spent on major company-aligned projects can negatively affect a company's overall performance.
The allocation of this project time is not consistent. Former Google employee and Yahoo! CEO Marissa Mayer once stated “I’ve got to tell you the dirty little secret of Google's 20% time. It's really 120% time.”
Chris Mims wrote that the 20% Project was "as good as dead", a concern because it suggests that the scheme does not survive in the long term. In Google executive Laszlo Bock's book, Work Rules!, he mentions that the concept has "waxed and waned." He states that workers in fact dedicate 10% of their time to personal projects, increasing that time once an idea begins to "demonstrate impact." He mentions that "the idea of 20 per cent time is more important than the reality of it": workers should always be driven towards individual innovation, yet it should operate "somewhat outside the lines of formal management."
Atlassian co-founder Mike Cannon-Brookes implemented the 20% Project as a test to see whether it produced any notable improvements or detriments to the company's output. The company funded a six-month trial with one million Australian dollars. During this process, workers encountered inherent structural difficulties within the scheme. One employee mentioned that it was difficult to balance the 20% time "amongst all the pressures to deliver new features and bug fixes"; the program introduced more deadlines for employees. As a result, the company found that the 20% Project in fact accounted for only 1.1% of working time. Another issue was the difficulty of organisation and teamwork involved in the projects: as employees organised groups to create new software, they struggled to work with colleagues who had other commitments and different schedules. The company's blogs have included fewer references to the 20% Project over the last decade, suggesting that the scheme loses effect in long-term practice. The company's "Ship It" day still highlights the value of time dedicated to employee-based innovation.
Dubakov reported that the 20% Project proved beneficial for the company. The benefit of this separated time is that each member feels less pressure to complete tasks, being able to advance their skill set and review previous work. This time was not only used for new projects but to educate about content relating to the job. This allocation of time allowed for individuals to complete single tasks, improving time delivery but negatively affecting synergy. The company reflected an emergent vision as a result of collective individual projects.
See also
List of Google products
Genius hour
References
Google
Employment
Innovation
Workplace
Human resource management
3M
Project management
Workplace programs |
5845752 | https://en.wikipedia.org/wiki/Tecnomatix | Tecnomatix | Tecnomatix Technologies, Ltd. (formerly NASDAQ: TCNO) is a provider of Manufacturing Process Management and Product lifecycle management software to the electronics, automotive, aerospace and heavy equipment industries, currently owned by Siemens AG. Tecnomatix's eMPower is a suite of end-to-end Manufacturing Process Management solutions for the collaborative development and optimization of manufacturing processes across the extended enterprise and supply chain.
History
Founded in Israel in 1983, the Tecnomatix Corporation provided Manufacturing Process Management (MPM) solutions for the automotive, electronics, aerospace and other manufacturing and processing industries. The Tecnomatix product suite offered software and services spanning process monitoring and control, production management, and execution.
Shlomo Dovrat was the founder of Tecnomatix and served as CEO and President from its inception until 1995. In 1993, Dovrat led Tecnomatix's IPO on the NASDAQ (TCNO). He served as Chairman of the Board of Directors from 1995 until December 2001. In 1994, Dovrat was succeeded as CEO by Harel Beit-On (who also served as the company's President). In 2001 Beit-On was appointed Chairman of the Board of Directors, and served as Chairman until the company's acquisition in 2005.
In 1999, Tecnomatix acquired Unicam Software Inc., a provider of production engineering software to the printed circuit board (PCB) assembly market.
In 2003, Tecnomatix acquired USDATA Corporation. USDATA was the creator of the supervisory-level control (SCADA) product FactoryLink, and the manufacturing execution systems (MES) product Xfactory.
In 2005, Tecnomatix was acquired by the UGS Corporation and the Tecnomatix product was combined with UGS' existing MPM solutions. The current Tecnomatix software line includes Part Manufacturing, Assembly Planning, Resource Planning, Plant Simulation, Human Performance, Quality, Production Management, and Manufacturing Data Management.
In January 2007 UGS was purchased by Siemens AG, and today the Tecnomatix solutions are available from Siemens PLM Software. Siemens PLM Software announced Tecnomatix version 9 in June 2009.
See also
UGS Corporation
Manufacturing Process Management
Product lifecycle management
References
External links
Official Tecnomatix Website
Business software
Product lifecycle management
Computer-aided design software
Computer-aided manufacturing software
Software companies of Israel
Siemens software products |
1423909 | https://en.wikipedia.org/wiki/System%20Development%20Corporation | System Development Corporation | System Development Corporation (SDC) was a computer software company based in Santa Monica, California. Founded in 1955, it is considered the first company of its kind.
History
SDC began as the systems engineering group for the SAGE air-defense system at the RAND Corporation. In April 1955, the government contracted with RAND to help write software for the SAGE project. Within a few months, RAND's System Development Division had 500 employees developing SAGE applications. Within a year, the division had up to 1,000 employees. RAND spun off the group in 1957 as a non-profit organization that provided expertise for the United States military in the design, integration, and testing of large, complex, computer-controlled systems. SDC became a for-profit corporation in 1969, and began to offer its services to all organizations rather than only to the American military.
The first two systems that SDC produced were the SAGE system, written for the IBM AN/FSQ-7 [Q-7] computer, and the SAGE System Training Program [SSTP], written for the IBM 701 series of computers. The Q-7 was notable in that it was based on vacuum tubes. Although intended as a duplex, with two computers at operational sites, a single Q-7 was installed at the SDC complex in Santa Monica (2400 and 2500 Colorado; now occupied by the Water Garden). It was said at the time that the Q-7 building, a separate structure, required half of the air conditioning then used in the entire city of Santa Monica; this was perhaps said in jest, but was close to the truth.
In late 1961 SDC became the Computer Program Integration Contractor [CPIC] for the Air Force Satellite Control Network, and maintained that role for many years. As part of that role, SDC wrote and delivered software to the AFSCN's then-existing network of satellite tracking stations, both in the US and abroad.
Ownership
In 1985, SDC was sold by its board of directors to the Burroughs Corporation.
In 1986, Burroughs merged with the Sperry Corporation to form Unisys, and SDC was folded into Unisys Defense Systems.
In 1991, Unisys Defense Systems was renamed Paramax, a wholly owned subsidiary of Unisys, so that it could be spun off to reduce Unisys debt.
In 1995, Unisys sold Paramax to the Loral Corporation, although a small portion of it, containing some projects that had originated in SDC, remained with Unisys.
In 1996, Loral sold Paramax to Lockheed Martin.
In 1997, the Paramax business unit was separated from Lockheed Martin under the control of Frank Lanza (who had been Loral's president and CEO) and became a subsidiary of L-3 Communications.
In 2019, L-3 Communications merged with Harris Corporation to form L3Harris Technologies.
Software projects
In the 1960s, SDC developed the timesharing system for the AN/FSQ-32 (Q32) mainframe computer for Advanced Research Projects Agency (ARPA). The Q-32 was one of the first systems to support both multiple users and inter-computer communications. Experiments with a dedicated modem connection to the TX-2 at the Massachusetts Institute of Technology led to computer communication applications such as e-mail. In the 1960s, SDC also developed the JOVIAL programming language (Jules' Own Version of the International Algorithmic Language, for Jules Schwartz) and the Time-Shared Data Management System (TDMS), an inverted file database system. Both were commonly used in real-time military systems.
References
Claude Baum, The System Builders: The Story of SDC, System Development Corp., Santa Monica, CA, 1981.
Robert Buderi, The Invention that Changed the World: How a small group of radar pioneers won the Second World War and launched a technological revolution. Touchstone, New York, NY, 1998.
Martin Campbell-Kelly, From Airline Reservations to Sonic the Hedgehog. A History of the Software Industry. MIT Press, Cambridge, MA, 2003.
Further reading
Records of the System Development Corporation at Charles Babbage Institute, University of Minnesota. Includes a history file with information about the RAND Corporation, the System Development Division, and the System Development Corporation. Contains correspondence, meetings and minutes, symposiums and presentations, product literature, technical literature, reports on systems engineering, systems design, human-computer interaction, and user interfaces, and a subject file.
Oral history interview with Jules I. Schwartz at Charles Babbage Institute, University of Minnesota. When Rand organized the System Development Corporation, Schwartz went to the new company. Schwartz describes his association with SAGE, his work on timesharing for the AN/FSQ-32 computer, computer networks, and control system projects (including TDMS).
Oral history interview with Robert M. Fano at Charles Babbage Institute, University of Minnesota. Fano discusses his move to computer science from information theory. Topics include System Development Corporation (SDC) among others.
External links
System Development Corporation history
Defunct software companies
Lockheed Martin
L3Harris Technologies |
54116165 | https://en.wikipedia.org/wiki/Pronous%20%28mythology%29 | Pronous (mythology) | In Greek mythology, Pronous (Ancient Greek: Πρόνοος Pronoos means 'careful, prudent') was the name of the following characters:
Pronoos, son of Deucalion and Pyrrha, the legendary progenitors, and brother of Orestheus and Marathonios. In one source, he was named as the father of Hellen.
Pronous, son of Phegeus, king of Psophis. Along with his brother Agenor, he killed Alcmaeon (counted among the Epigoni), following his father's instructions. These brothers were thereafter killed by the sons of Alcmaeon (Amphoterus and Acarnan), or perhaps by their own sister Arsinoe, wife of Alcmaeon. Pausanias, however, calls the two sons of Phegeus Axion and Temenus.
Pronous, one of the Trojans. He was killed by Patroclus during the Trojan War.
Pronous, one of the Suitors of Penelope from Ithaca along with 11 other wooers. He, with the other suitors, was killed by Odysseus with the assistance of Eumaeus, Philoetius, and Telemachus.
Notes
References
Apollodorus, The Library with an English translation by Sir James George Frazer, F.B.A., F.R.S. in 2 Volumes, Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1921. Online version at the Perseus Digital Library. Greek text available from the same website.
Gantz, Timothy, Early Greek Myth: A Guide to Literary and Artistic Sources, Johns Hopkins University Press, 1996, Two volumes: (Vol. 1), (Vol. 2).
Homer, The Iliad with an English Translation by A.T. Murray, Ph.D. in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1924. Online version at the Perseus Digital Library.
Homer, Homeri Opera in five volumes. Oxford, Oxford University Press. 1920. Greek text available at the Perseus Digital Library.
Pausanias, Description of Greece with an English translation by W.H.S. Jones, Litt.D., and H.A. Ormerod, M.A., in 4 Volumes. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1918. Online version at the Perseus Digital Library
Pausanias, Graeciae Descriptio. 3 vols. Leipzig, Teubner. 1903. Greek text available at the Perseus Digital Library.
Sextus Propertius, Elegies from Charm. Vincent Katz. trans. Los Angeles. Sun & Moon Press. 1995. Online version at the Perseus Digital Library. Latin text available at the same website.
Deucalionids
Princes in Greek mythology
Trojans
People of the Trojan War
Suitors of Penelope
Ithacan characters in Greek mythology
Thessalian characters in Greek mythology
Characters in Greek mythology
Arcadian mythology |
408407 | https://en.wikipedia.org/wiki/System%20Shock%202 | System Shock 2 | System Shock 2 is a 1999 action role-playing survival horror video game designed by Ken Levine and co-developed by Irrational Games and Looking Glass Studios. Originally intended to be a standalone title, its story was changed during production into a sequel to the 1994 game System Shock. The alterations were made when Electronic Arts—who owned the System Shock franchise rights—signed on as publisher.
The game takes place on board a starship in a cyberpunk depiction of 2114. The player assumes the role of a soldier trying to stem the outbreak of a genetic infection that has devastated the ship. Like System Shock, gameplay consists of first-person combat and exploration. It also incorporates role-playing system elements, in which the player can develop skills and traits, such as hacking and psionic abilities.
System Shock 2 was originally released in August 1999 for Microsoft Windows. The game received critical acclaim but failed to meet commercial sales expectations. Many critics later determined that the game was highly influential in subsequent game design, particularly on first-person shooters, and considered it far ahead of its time. It has been included in several "greatest games of all time" lists. In 2007, Irrational Games released a spiritual successor to the System Shock series, titled BioShock, to critical acclaim and strong sales. System Shock 2 had been in intellectual property limbo following the closure of Looking Glass Studios. Night Dive Studios was able to secure the rights to the game and the System Shock franchise in 2013 and released an updated version of System Shock 2 for modern operating systems, including OS X and Linux, and announced plans to release an Enhanced Edition of the game. OtherSide Entertainment announced in 2015 that it had licensed the rights from Night Dive Studios to produce a sequel, System Shock 3; by 2020, those rights had been transferred to Tencent.
Gameplay
As in its predecessor, System Shock, gameplay in System Shock 2 is an amalgamation of the action role-playing game and survival horror genres. The developers achieved this design by rendering the experience as a standard first-person shooter and adding a character customization and development system, considered signature role-playing elements. The player uses melee and projectile weapons to defeat enemies, while a role-playing system allows the development of useful abilities. Navigation is presented from a first-person view and complemented with a heads-up display that shows character and weapon information, a map, and a drag-and-drop inventory.
The backstory is explained progressively through the player's acquisition of audio logs and encounters with ghostly apparitions. At the beginning of the game, the player chooses a career in a branch of the Unified National Nominate, a fictional military organization. Each branch of service gives the player a set of starting bonuses in certain skills, though the player may thereafter develop skills freely. The Marine begins with bonuses to weaponry, the Navy officer is skilled in repairing and hacking, and the OSA agent gets a starting set of psionic powers.
The player upgrades their skills with "cyber-modules", given as rewards for completing objectives such as searching the ship, which are spent at devices called "cyber-upgrade units" to obtain enhanced skills. Operating system (O/S) units allow one-time character upgrades to be made (e.g. permanent health enhancement). An in-game currency called "nanites" may be spent on items at vending machines, including ammunition supplies and health packs. "Quantum Bio-Reconstruction Machines" can be activated and reconstitute the player for 10 nanites if they die inside the area in which the machine resides. Otherwise, the game ends and progress must be resumed from a save point. The player can hack devices, such as keypads to open alternate areas and vending machines to reduce prices. When a hack is attempted, a minigame begins that features a grid of green nodes; the player must connect three in a straight row to succeed. Optionally, electronic lock picks, called "ICE-picks", can be found that will automatically hack a machine, regardless of its difficulty.
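The three-in-a-row rule of the hacking minigame can be stated as a simple grid check. The sketch below is a hypothetical illustration of that rule only, not code from the game; the function name and grid representation are invented for the example.

```python
# Illustrative only: the hack "succeeds" if three connected nodes lie in a
# straight, adjacent row (horizontal, vertical, or diagonal) on the grid.
def hack_succeeds(picked):
    """picked: set of (row, col) coordinates of the nodes the player connected."""
    directions = [(0, 1), (1, 0), (1, 1), (1, -1)]  # right, down, both diagonals
    for r, c in picked:
        for dr, dc in directions:
            # the node itself plus the next two nodes along this direction
            if all((r + i * dr, c + i * dc) in picked for i in range(3)):
                return True
    return False

print(hack_succeeds({(2, 1), (2, 2), (2, 3)}))  # True: a straight row
print(hack_succeeds({(2, 1), (2, 2), (3, 2)}))  # False: a bent "L" shape
```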
Throughout the game, the player can procure various weapons, including melee weapons, pistols, shotguns, and alien weapons. Non-melee weapons degrade with use and will break if they are not regularly repaired with maintenance tools. There are a variety of ammunition types, each of which is most damaging to a specific enemy. For example, organic enemies are vulnerable to anti-personnel rounds, while mechanical foes are weak against armor-piercing rounds. Similarly, energy weapons cause the most damage against robots and cyborgs, and the annelid-made exotic weaponry is particularly harmful to organic targets. Because ammunition is scarce, to be effective the player must use it sparingly and carefully search rooms for supplies.
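The damage relationships described above amount to a small lookup table. The sketch below encodes them as plain data purely for illustration; the category names and structure are paraphrased from the prose, not taken from the game's files.

```python
# Illustrative mapping of enemy category to the damage types described above
# as most effective against it; paraphrased from the prose, not game data.
MOST_EFFECTIVE = {
    "organic": ["anti-personnel rounds", "exotic (annelid) weapons"],
    "mechanical": ["armor-piercing rounds", "energy weapons"],
    "cyborg": ["energy weapons"],
}

def best_choices(enemy_category):
    # fall back to whatever ammunition is on hand for unknown categories
    return MOST_EFFECTIVE.get(enemy_category, ["any available ammunition"])

print(best_choices("mechanical"))  # ['armor-piercing rounds', 'energy weapons']
```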
The game includes a research function. When new objects, especially enemies, are encountered, their organs can be collected and combined with chemicals found in storage rooms; the player can then research the enemies and thus improve the damage dealt against them. Similarly, some exotic weapons and items can only be used after being researched. OSA agents effectively have a separate weapons tree available to them. Psionic powers can be learned, such as invisibility, fireballs, and teleportation.
Plot
Backstory
In 2072, after Citadel Station's demise, TriOptimum's attempts to cover up the incident were exposed to the media, and the corporation was brought up on charges from multiple individuals and companies over the ensuing scandal. The virus developed there had killed the station's population, and the ruthless, malevolent AI supercomputer SHODAN took control of, and eventually destroyed, Citadel Station in hopes of enslaving and destroying humanity. After a massive number of trials, the company went bankrupt and its operations were shut down. The United Nations Nominate (UNN), a UN successor, was established to combat the malevolence and corruption of power-hungry corporations, including TriOptimum. Artificial intelligence was reduced to the most rudimentary tasks in order to prevent the creation of another SHODAN-like malevolent AI, and development of new technologies was halted. Meanwhile, the hacker (the original game's main protagonist), who became the most famous person in the world, vanished from the public eye.
In 2100, 28 years later, the company's failed stocks and assets were bought by a Russian oligarch named Anatoly Korenchkin, a former black market operator who sought to make money in legitimate ways. Over the following decade he re-licensed the company and restored it to its former status. Along with producing healthcare and consumer products, Korenchkin signed weapons contracts with various military organizations, both private and politically owned. The new UNN was virtually powerless, with Korenchkin exercising control over it.
In January 2114, 42 years after the Citadel events and 14 years into rebuilding TriOptimum, the company completed an experimental FTL starship, the Von Braun, which is now on its maiden voyage. The ship is accompanied by a UNN space vessel, the Rickenbacker, commanded by Captain William Bedford Diego, a public hero of the Battle of Boston Harbor during the Eastern States Police Action and son of Edward Diego, infamous for his role on Citadel Station. Because the Rickenbacker does not have an FTL system of its own, the two ships are attached for the trip. However, Korenchkin was egotistical enough to make himself captain of the Von Braun despite being inexperienced.
In July 2114, five months into the journey, the ships respond to a distress signal from the planet Tau Ceti V, outside the Solar System. A rescue team is sent to the planet's surface, where they discover strange eggs in an old ejection pod; the eggs infect the rescue team and integrate them into an alien communion known as "the Many", a psychic hive mind generated by parasitic worms that can infect and mutate a human host. The parasites eventually spread to both ships and take over or kill most of their crews.
Story
Owing to a computer malfunction, the soldier awakens with amnesia in a cryo-tube on the medical deck of the Von Braun, having been implanted with an illegal cyber-neural interface. He is immediately contacted by another survivor, Dr. Janice Polito, who guides him to safety before the cabin depressurizes. She demands that he meet her on deck 4 of the Von Braun. Along the way, the soldier battles the infected crew members. The Many also telepathically communicate with him, attempting to convince him to join them. After restarting the ship's engine core, the soldier reaches deck 4 and discovers that Polito is dead. He is then confronted by SHODAN, who, it is revealed, has been posing as Polito to gain the soldier's trust.
SHODAN reveals that she is responsible for creating the Many through her bioengineering experiments on Citadel Station. The hacker, the original game's protagonist, had ejected the grove that contained her experiments to prevent them from contaminating Earth, an act that allowed part of SHODAN to survive in the grove. The grove crash-landed on Tau Ceti V. While SHODAN went into forced hibernation, the Many evolved beyond her control. SHODAN tells the soldier that his only chance for survival lies in helping destroy her creations. Efforts to regain control of XERXES, the main computer on the Von Braun, fail. SHODAN informs the soldier that destroying the ship is their only option, but he must first transmit her program to the Rickenbacker. While en route, the soldier briefly encounters two survivors, Thomas "Tommy" Suarez and Rebecca Siddons, who flee the ship aboard an escape pod.
With the transfer complete, the soldier travels to the Rickenbacker and learns both ships have been enveloped by the infection's source, a gigantic mass of bio-organic tissue that has wrapped itself over the two ships. The soldier enters the biomass and destroys its core, stopping the infection. SHODAN congratulates him and tells of her intentions to merge real space and cyberspace through the Von Braun's faster-than-light drive. The soldier confronts SHODAN in cyberspace and defeats her. The final scene shows Tommy and Rebecca receiving a message from the Von Braun. Tommy responds, saying they will return and noting that Rebecca is acting strange. Rebecca is shown speaking in a SHODAN-like voice, asking Tommy if he "likes her new look", as the screen fades to black.
History
Development
Development of System Shock 2 began in 1997 when Looking Glass Studios approached Irrational Games with an idea to co-develop a new game. The development team were fans of System Shock and sought to create a similar game. Early story ideas were similar to the novella Heart of Darkness. In an early draft, the player was tasked with assassinating an insane commander on a starship. The original title of the game, according to its pitch document, was Junction Point. The philosophy of the design was to continue to develop the concept of a dungeon crawler, like Ultima Underworld: The Stygian Abyss, in a science fiction setting, the basis for System Shock. However, the press had mistaken System Shock for something closer to a Doom clone, which was cited as a reason for its poor financial success. With Junction Point, the goal was to add significant role-playing elements and a persistent storyline so as to distance the game from Doom.
The title took 18 months to create with a budget of $1.7 million and was pitched to several publishers until Electronic Arts—who owned the rights to the Shock franchise—responded by suggesting the game become a sequel to System Shock. The development team agreed; Electronic Arts became the publisher and story changes were made to incorporate the franchise. The project was allotted one year to be completed, and to compensate for the short time frame, the staff began working with Looking Glass Studios' unfinished Dark Engine, the same engine used to create Thief: The Dark Project.
The designers included role-playing elements in the game. Similar to Ultima Underworld, another Looking Glass Studios project, the environment in System Shock 2 is persistent and constantly changes even without the player's presence. Paper-and-pencil role-playing games were influential; the character customization system was based on Traveller's methodology and was implemented through the fictional military branches, which, by allowing multiple character paths, gave the player a more open-ended gameplay experience. Horror was a key focus, and four major points were identified to successfully incorporate it. Isolation was deemed primary, which resulted in the player having little physical contact with other sentient beings. Secondly, vulnerability was created by focusing on a fragile character. Last was the inclusion of moody sound effects and "the intelligent placement of lighting and shadows". The game's lead designer, Ken Levine, oversaw the return of System Shock villain SHODAN. Part of Levine's design was to ally the player with her, as he believed that game characters were too trusting, stating "good guys are good, bad guys are bad. What you see and perceive is real". Levine sought to challenge this notion by having SHODAN betray the player: "Sometimes characters are betrayed, but the player never is. I wanted to violate that trust and make the player feel that they, and not [only] the character, were led on and deceived". This design choice was controversial with the development team.
Several problems were encountered during the project. Because the team comprised two software companies, tension emerged regarding job assignments and some developers left the project. Additionally, many employees were largely inexperienced, but in retrospect project manager Jonathan Chey felt this was advantageous, stating "inexperience also bred enthusiasm and commitment that might not have been present with a more jaded set of developers." The Dark Engine posed problems of its own. It was unfinished, forcing the programmers to fix software bugs as they were encountered. In contrast, working closely with the engine code allowed them to write additional features. Not all setbacks originated within the project; a demonstration build at E3 was hindered when it was requested that all guns be removed from the presentation because of the then-recent Columbine High School massacre.
Release
A demo for the game, featuring a tutorial and a third of the first mission, was released on August 2, 1999. Nine days later, System Shock 2 was shipped to retailers. An enhancement patch was released a month later and added significant features, such as co-operative multiplayer and control over weapon degradation and enemy respawn rates. A port was planned for the Dreamcast but was canceled.
End-of-support
Around 2000, with the end of support for the game by the developer and publisher, remaining bugs and compatibility with newer operating systems and hardware became a growing problem. To compensate for the missing support, some fans of the game became active in the modding community to update the game. For instance, the "Rebirth" graphical enhancement mod replaced many low-polygon models with higher quality ones, a "Shock Texture Upgrade Project" increased the resolution of textures, and an updated level editor was released by the user community.
Intellectual property debacle and re-release
The intellectual property (IP) rights of System Shock 2 were caught for years in complications between Electronic Arts and Meadowbrook Insurance Group (the parent company of Star Insurance Company), the entity that acquired the assets of Looking Glass Studios on their closure, though according to a lawyer for Star Insurance, they themselves have since acquired the lingering intellectual property rights from EA.
In the player community, attempts had been made to update and patch System Shock 2 for known issues on newer operating systems and for limitations that had been hard-coded into the game. In 2009, a complete copy of System Shock 2's Dark Engine source code was discovered in the possession of an ex-Looking Glass Studios employee who was at the time continuing his work for Eidos Interactive. In late April 2010, a user on the Dreamcast Talk forum disassembled the contents of a Dreamcast development kit he had purchased, and among the contents was some of the source code for Looking Glass games, including System Shock. An unknown user, going only by "Le Corbeau" (The Raven), issued a patch for System Shock 2 and Thief 2 in 2012 that resolved several of the known issues with the Dark Engine and added other features. It is believed that the patches were enabled by the Dreamcast kit, using a combination of the available source code and libraries disassembled from the development kit. The patch became known informally as the "NewDark" patch to distinguish it from other efforts to improve the game.
At about the same time, Stephen Kick of Night Dive Studios had been seeking to license the System Shock property so as to create System Shock 3. Star Insurance had not been willing to grant that license but did agree to allow Night Dive Studios to bring System Shock 2 to modern systems. Shortly after this approval was obtained, the NewDark patch was released, and Kick attempted to contact "Le Corbeau" to discuss the use of the patch, but the user proved impossible to reach. Kick decided to approach GOG.com for a timed-exclusive release on their digital distribution website in February 2013, where the game was among the most requested additions to the catalog. This version, considered by GOG.com to be a "collector's edition", included the "Le Corbeau" NewDark patch. In addition, the updates allow user-made modifications to be applied more seamlessly. The release also contains additional material such as the game's soundtrack, maps of the Von Braun, and the original pitch document for the game. The update rights also allowed a Mac OS X version of System Shock 2 to be subsequently released on June 18, 2013, through GOG.com. The title later became available on Steam on May 10, 2013. In April 2014, a Linux version was also released. "Le Corbeau" has continued to update the game since 2012, with their patches being incorporated into the versions that Night Dive distributes through GOG.com and Steam.
Since then, Night Dive Studios also acquired the rights to System Shock, releasing an enhanced version of the game in September 2015. Kick has reported they have acquired full rights to the series since then.
Reception
System Shock 2 received critical acclaim. It received over a dozen awards, including seven "Game of the Year" prizes. Reviews were very positive and lauded the title for its hybrid gameplay, moody sound design, and engaging story. System Shock 2 is regarded by critics as highly influential, particularly on first-person shooters and the horror genre. In a retrospective article, GameSpot declared the title "well ahead of its time" and stated that it "upped the ante in dramatic and mechanical terms" by creating a horrific gameplay experience. Despite critical acclaim, the title did not perform well commercially; only 58,671 copies were sold by April 2000.
Several publications praised the title for its open-ended gameplay. With regard to character customization, Trent Ward of IGN stated the best element of the role-playing system was allowing gamers to "play the game as completely different characters", and felt this made each play-through unique. Erik Reckase, writing for Just Adventure, agreed, saying "There are very few games that allow you [to] play the way you want". Alec Norands of Allgame believed that the different character classes made the game "diverse enough to demand instant replayability". Robert Mayer from Computer Games Magazine called System Shock 2 "a game that truly defies classification in a single genre", and noted that "the action is occasionally fast-paced, it's more often tactical, placing a premium on thought rather than on reflexes."
Buck DeFore reviewed the PC version of the game for Next Generation, rating it four stars out of five, and stated that "Bluntly put, System Shock 2 is a welcome visit to the lost arts of the good old days, and an immersive experience as long as you don't mind some of the cobwebs that come along with it."
A number of critics described the game as frightening. Computer and Video Games described the atmosphere as "gripping" and guaranteed readers they would "jump out of [their] skin" numerous times. Allgame found the sound design particularly effective, calling it "absolutely, teeth-clenchingly disturbing", while PC Gamer's William Harms called System Shock 2 the most frightening game he had ever played. Some critics found the weapon degradation system to be irritating, and members of the development team have also expressed misgivings about the system. The role-playing system was another point of contention; GameSpot described the job system as "badly unbalanced" because the player can develop skills outside their career choice. Allgame felt similarly about the system, saying it "leaned towards a hacker character".
Along with Deus Ex, Sid Shuman of GamePro christened System Shock 2 "[one of the] twin barrels of modern [first-person shooter] innovation", owing to its complex role-playing gameplay. IGN writer Cam Shea referred to the game as "another reinvention of the FPS genre", citing the story, characters, and RPG system. PC Zone lauded the game as a "fabulous example of a modern-day computer game" and named it "a sci-fi horror masterpiece". The title has been inducted into a number of features listing the greatest games ever made, including ones by GameSpy, Edge, Empire, IGN, GameSpot and PC Gamer. IGN also ranked System Shock 2 as the 35th greatest first-person shooter of all time. X-Play called it the second scariest game of all time, behind Silent Hill 2. SHODAN has proven to be a popular character among most critics, including IGN, GameSpot and The Phoenix.
System Shock 2 won PC Gamer US's 1999 "Best Roleplaying Game" and "Special Achievement in Sound" awards, and was a runner-up in the magazine's overall "Game of the Year" category. The editors of Computer Gaming World nominated it for their "Role-Playing Game of the Year" prize, which ultimately went to Planescape: Torment.
Legacy
Enhanced edition
On a stream during the 20th anniversary of System Shock 2 on August 11, 2019, Night Dive announced an Enhanced Edition of the game was in development. Nightdive had been able to acquire and use the original game's source code, allowing them to improve upon the original. They plan to port the game to their KEX engine, the same engine they are using for the System Shock enhanced edition, and will work to make sure that the co-operative play features are better implemented. The enhanced version will also aim to support all existing mods and custom maps developed by the gaming community, though will require work with the community to help with compatibility. Kick has stated that while they would like to work with "Le Corbeau" to incorporate their patches into the Enhanced Edition, they will likely need to deviate so that Night Dive can improve upon the original title.
Alongside development of the Enhanced Edition will be a virtual reality (VR) version, though this will release at a later time than the Enhanced Edition. The VR version will use gameplay features that were introduced with Half-Life: Alyx, and will be cross-play compatible in its multiplayer mode with PC users who are not using VR.
Sequel projects
System Shock 2 has amassed a cult following, with fans asking for a sequel. On January 9, 2006, GameSpot reported that Electronic Arts had renewed its trademark protection on the System Shock name, leading to speculation that System Shock 3 might be under development. Three days later, Computer and Video Games reported a reliable source had come forward and confirmed the title's production. Electronic Arts UK made no comment when confronted with the information. PC Gamer UK stated the team behind The Godfather: The Game (EA Redwood Shores) was charged with its creation. Ken Levine, when asked whether he would helm the third installment, replied: "that question is completely out of my hands". He expressed optimism at the prospect of System Shock 3, but revealed that EA had not shown interest in his own proposal for a sequel, and was not optimistic with regards to their abilities. Electronic Arts did not confirm a new title in the series and allowed the System Shock trademark registration to lapse. Redwood Shores' next release was 2008's Dead Space, a game with noted similarities in theme and presentation to the System Shock series. According to Dead Space designers Ben Wanat and Wright Bagwell, their project was originally intended to be System Shock 3, before the release of Resident Evil 4 inspired them to go back to the drawing board and develop it into something more along those lines, eventually becoming Dead Space.
In November 2015, Night Dive Studios, after acquiring the rights to the System Shock franchise, stated they were considering developing a third title in the series. In December 2015, OtherSide Entertainment, a studio founded by former Looking Glass Studios designer Paul Neurath, announced they were developing System Shock 3 with rights granted to them by Night Dive Studios. OtherSide had acquired rights to make sequels to System Shock some years before this point but did not have the rights to the series name, which Night Dive was able to provide. The sequel will feature Terri Brosius reprising her voice for SHODAN, and will include work from original System Shock concept artist Robb Waters. Warren Spector, the producer of the first System Shock, announced in February 2016 that he has joined OtherSide Entertainment and will be working on System Shock 3. According to Spector, the narrative will pick up immediately from the end of System Shock 2, with SHODAN having taken over Rebecca's body. System Shock 3 will use the Unity game engine, with a teaser shown during Unity's press event at the 2019 Game Developers Conference.
Starbreeze Studios was originally planning to provide a $12 million "publishing-only" investment in System Shock 3, allowing OtherSide to retain all rights while seeking a 120% return on investment followed by equal shares of revenue splitting. Starbreeze's investment would allow the game to be developed for consoles in addition to the planned personal computer versions. However, in the wake of several financial problems in late 2018, Starbreeze has given back the publishing rights to System Shock 3 to OtherSide, and separated itself from the project. OtherSide stated they had the capability to self-publish System Shock 3 should they be unable to find a publishing partner but would prefer to have a publishing partner. At least twelve OtherSide employees working on System Shock 3, including several in lead roles, left the studio between late 2019 and early 2020. One former employee stated that the game's development team was "no longer employed"; however, in April 2020, OtherSide's vice president of marketing and development, Walter Somol, stated that the team was "still here" and progress on the project was "coming along nicely", but they were working remotely, due to the COVID-19 pandemic. After several journalists noted that the System Shock 3 websites had been transferred to ownership under Tencent in May 2020, Otherside confirmed that they had been unable to continue the series as a smaller studio and transferred the licensed rights to Tencent to continue its development, though Spector affirmed that OtherSide is still involved in its development alongside Tencent.
Spiritual successors
In 2007, Irrational Games—briefly known as 2K Boston/2K Australia—released a spiritual successor to the System Shock series, entitled BioShock. The game takes place in an abandoned underwater utopian community destroyed by the genetic modification of its populace and shares many gameplay elements with System Shock 2: reconstitution stations can be activated, allowing the player to be resurrected when they die; hacking, ammo conservation, and exploration are integral parts of gameplay; and unique powers may be acquired via plasmids, special abilities that function similarly to psionics in System Shock 2. The two titles also share plot similarities and employ audio logs and encounters with ghostly apparitions to reveal backstory. In BioShock Infinite, Irrational Games included a gameplay feature called "1999 Mode", named in reference to System Shock 2's release year and designed to provide a similar game experience with a higher difficulty and long-lasting effects of choices made that would remind players of System Shock's unforgiving nature.
In 2017, Arkane Studios published Prey, which takes place on a space station named Talos I, similar to System Shock. It, too, features psionic abilities, in the form of "Neuromods", as a fundamental gameplay feature, and uses a mixture of audio logs and pieces of text to advance the game's backstory. Prey also features elements like hacking and crafting, and places a heavy emphasis on side-quest exploration and careful conservation of ammunition and "Psi Points", a player stat that controls how many psi abilities can be used. The game features crew members who have become infected, not as the result of an AI, but as the result of a failure of containment around a mysterious alien species known as the Typhon. Within the game, references are made to System Shock's developers, such as the "Looking Glass" technology that plays a significant role in the story's plot.
References
External links
1999 video games
Action role-playing video games
Cancelled Dreamcast games
Cooperative video games
Cyberpunk video games
First-person shooters
Horror video games
Irrational Games
Linux games
Looking Glass Studios games
MacOS games
Multiplayer and single-player video games
Survival horror video games
System Shock
Fiction set around Tau Ceti
Windows games
Video game sequels
Video games developed in the United States
Video games scored by Eric Brosius
Video games set in the 22nd century
Immersive sims |
16154653 | https://en.wikipedia.org/wiki/2594%20Acamas | 2594 Acamas | 2594 Acamas is a mid-sized Jupiter trojan from the Trojan camp, approximately 19 to 26 kilometers in diameter. It was discovered on 4 October 1978, by American astronomer Charles Kowal at the Palomar Observatory in California. The dark Jovian asteroid has a longer-than-average rotation period of 26 hours and possibly an elongated shape. It was named after the Thracian leader Acamas from Greek mythology.
Orbit and classification
Acamas is a dark Jovian asteroid in a 1:1 orbital resonance with Jupiter. It is located in the trailing Trojan camp at the Gas Giant's L5 Lagrangian point, 60° behind the planet on its orbit. It is also a non-family asteroid of the Jovian background population.
It orbits the Sun at a distance of 4.6–5.5 AU once every 11 years and 5 months (4,159 days; semi-major axis of 5.06 AU). Its orbit has an eccentricity of 0.08 and an inclination of 6° with respect to the ecliptic. The body's observation arc begins with a precovery taken at Palomar in September 1953, or 25 years prior to its official discovery observation.
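As an illustrative cross-check (not drawn from the cited sources), the quoted period follows from Kepler's third law applied to the semi-major axis given above; the small difference from the listed 4,159 days comes from rounding the orbital elements.

```python
# Kepler's third law for a heliocentric orbit: P [years] = a [AU] ** 1.5,
# applied to the rounded semi-major axis quoted above (5.06 AU).
a_au = 5.06
period_years = a_au ** 1.5            # ~11.38 years
period_days = period_years * 365.25   # ~4,157 days, close to the listed 4,159
print(f"{period_years:.2f} years, {period_days:.0f} days")
```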
Physical characteristics
Acamas is an assumed carbonaceous C-type asteroid, while most larger Jupiter trojans are D-type asteroids.
Rotation period
In September 2013, a rotational lightcurve of Acamas was obtained from photometric observations in the R-band by astronomers at the Palomar Transient Factory in California. Lightcurve analysis gave a rotation period of 26 hours with a brightness amplitude of 0.50 magnitude. A high brightness variation typically indicates that the body has an elongated rather than spherical shape.
Diameter and albedo
According to the survey carried out by the NEOWISE mission of NASA's Wide-field Infrared Survey Explorer, Acamas measures 25.87 kilometers in diameter and its surface has an albedo of 0.06, while the Collaborative Asteroid Lightcurve Link assumes a standard albedo for a carbonaceous asteroid of 0.057 and calculates a diameter of 19.21 kilometers based on an absolute magnitude of 12.31.
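The Collaborative Asteroid Lightcurve Link figure can be reproduced with the standard conversion between absolute magnitude, geometric albedo and diameter, D(km) = 1329 / sqrt(p) x 10^(-H/5); the sketch below applies it to the values quoted above. (The NEOWISE diameter, by contrast, is measured from thermal infrared observations rather than derived from this relation.)

```python
# Standard asteroid size relation: D [km] = 1329 / sqrt(albedo) * 10 ** (-H / 5),
# applied to the absolute magnitude (H = 12.31) and assumed albedo (0.057) above.
from math import sqrt

def diameter_km(abs_magnitude: float, albedo: float) -> float:
    return 1329.0 / sqrt(albedo) * 10 ** (-abs_magnitude / 5.0)

print(round(diameter_km(12.31, 0.057), 2))  # ~19.21 km, matching the CALL value
```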
Naming
This minor planet was named by IAU's Minor Planet Names Committee from Greek mythology after the warrior Acamas (son of Eussorus), ally of Troy and leader of the Thracian contingent during the Trojan War. He was killed by Ajax.
The name was suggested by Frederick Pilcher and published by the Minor Planet Center on 6 February 1993.
References
External links
Asteroid Lightcurve Database (LCDB), query form (info )
Dictionary of Minor Planet Names, Google books
Discovery Circumstances: Numbered Minor Planets (1)-(5000) – Minor Planet Center
Asteroid 2594 Acamas at the Small Bodies Data Ferret
002594
Discoveries by Charles T. Kowal
Minor planets named from Greek mythology
Named minor planets
19781004 |
68459 | https://en.wikipedia.org/wiki/Storyboard | Storyboard | A storyboard is a graphic organizer that consists of illustrations or images displayed in sequence for the purpose of pre-visualizing a motion picture, animation, motion graphic or interactive media sequence. The storyboarding process, in the form it is known today, was developed at Walt Disney Productions during the early 1930s, after several years of similar processes being in use at Walt Disney and other animation studios.
Origins
Many large-budget silent films were storyboarded, but most of this material was lost during the reduction of the studio archives in the 1970s and 1980s. Special effects pioneer Georges Méliès is known to have been among the first filmmakers to use storyboards and pre-production art to visualize planned effects. However, storyboarding in the form widely known today was developed at the Walt Disney studio during the early 1930s. In the biography of her father, The Story of Walt Disney (Henry Holt, 1956), Diane Disney Miller explains that the first complete storyboards were created for the 1933 Disney short Three Little Pigs. According to John Canemaker, in Paper Dreams: The Art and Artists of Disney Storyboards (1999, Hyperion Press), the first storyboards at Disney evolved from comic book-like "story sketches" created in the 1920s to illustrate concepts for animated cartoon short subjects such as Plane Crazy and Steamboat Willie, and within a few years the idea spread to other studios.
According to Christopher Finch in The Art of Walt Disney (Abrams, 1974), Disney credited animator Webb Smith with creating the idea of drawing scenes on separate sheets of paper and pinning them up on a bulletin board to tell a story in sequence, thus creating the first storyboard. Furthermore, it was Disney who first recognized the necessity for studios to maintain a separate "story department" with specialized storyboard artists (that is, a new occupation distinct from animators), as he had realized that audiences would not watch a film unless its story gave them a reason to care about the characters. The second studio to switch from "story sketches" to storyboards was Walter Lantz Productions in early 1935; by 1936 Harman-Ising and Leon Schlesinger Productions also followed suit. By 1937 or 1938, all American animation studios were using storyboards.
Gone with the Wind (1939) was one of the first live-action films to be completely storyboarded. William Cameron Menzies, the film's production designer, was hired by producer David O. Selznick to design every shot of the film.
Storyboarding became popular in live-action film production during the early 1940s and grew into a standard medium for the previsualization of films. Pace Gallery curator Annette Micheloson, writing of the exhibition Drawing into Film: Director's Drawings, considered the 1940s to 1990s to be the period in which "production design was largely characterized by the adoption of the storyboard". Storyboards are now an essential part of the creative process.
Use
Film
A film storyboard (sometimes referred to as a shooting board), is essentially a series of frames, with drawings of the sequence of events in a film, similar to a comic book of the film or some section of the film produced beforehand. It helps film directors, cinematographers and television commercial advertising clients visualize the scenes and find potential problems before they occur. Besides this, storyboards also help estimate the cost of the overall production and save time. Often storyboards include arrows or instructions that indicate movement. For fast-paced action scenes, monochrome line art might suffice. For slower-paced dramatic films with an emphasis on lighting, color impressionist style art might be necessary.
In creating a motion picture with any degree of fidelity to a script, a storyboard provides a visual layout of events as they are to be seen through the camera lens. In the case of interactive media, it is the layout and sequence in which the user or viewer sees the content or information. In the storyboarding process, most technical details involved in crafting a film or interactive media project can be efficiently described either in a picture or in additional text.
Theatre
A common misconception is that storyboards are not used in theatre. Directors and playwrights frequently use storyboards as special tools to understand the layout of the scene. The great Russian theatre practitioner Stanislavski developed storyboards in his detailed production plans for his Moscow Art Theatre performances (such as of Chekhov's The Seagull in 1898). The German director and dramatist Bertolt Brecht developed detailed storyboards as part of his dramaturgical method of "fabels."
Animatics
In animation and special effects work, the storyboarding stage may be followed by simplified mock-ups called "animatics" to give a better idea of how a scene will look and feel with motion and timing. At its simplest, an animatic is a sequence of still images (usually taken from a storyboard) displayed in sync with rough dialogue (i.e., scratch vocals) or rough soundtrack, essentially providing a simplified overview of how various visual and auditory elements will work in conjunction to one another.
This allows the animators and directors to work out any screenplay, camera positioning, shot list, and timing issues that may exist with the current storyboard. The storyboard and soundtrack are amended if necessary, and a new animatic may be created and reviewed by the production staff until the storyboard is finalized. Editing at the animatic stage can help a production avoid wasting time and resources on the animation of scenes that would otherwise be edited out of the film at a later stage. A few minutes of screen time in traditional animation usually equates to months of work for a team of traditional animators, who must painstakingly draw and paint countless frames, meaning that all that labor (and salaries already paid) will have to be written off if the final scene simply does not work in the film's final cut. In the context of computer animation, storyboarding helps minimize the construction of unnecessary scene components and models, just as it helps live-action filmmakers evaluate what portions of sets need not be constructed because they will never come into the frame.
Often storyboards are animated with simple zooms and pans to simulate camera movement (using non-linear editing software). These animations can be combined with available animatics, sound effects, and dialog to create a presentation of how a film could be shot and cut together. Some feature film DVD special features include production animatics, which may have scratch vocals or may even feature vocals from the actual cast (usually where the scene was cut after the vocal recording phase but before the animation production phase).
Animatics are also used by advertising agencies to create inexpensive test commercials. A variation, the "rip-o-matic", is made from scenes of existing movies, television programs or commercials, to simulate the look and feel of the proposed commercial. Rip, in this sense, refers to ripping-off an original work to create a new one.
Photomatic
A Photomatic (probably derived from 'animatic' or photo-animation) is a series of still photographs edited together and presented on screen in a sequence. Sound effects, voice-overs, and a soundtrack are added to the piece to show how a film could be shot and cut together. Increasingly used by advertisers and advertising agencies to research the effectiveness of their proposed storyboard before committing to a 'full up' television advertisement.
The Photomatic is usually a research tool, similar to an animatic, in that it represents the work to a test audience so that the commissioners of the work can gauge its effectiveness.
Originally, photographs were taken using a color negative film. A selection would be made from contact sheets and prints made. The prints would be placed on a rostrum and recorded to videotape using a standard video camera. Any moves, pans or zooms would have to be made in-camera. The captured scenes could then be edited.
Digital photography, web access to stock photography and non-linear editing programs have had a marked impact on this way of filmmaking also leading to the term 'digimatic'. Images can be shot and edited very quickly to allow important creative decisions to be made 'live'. Photo composite animations can build intricate scenes that would normally be beyond many test film budgets.
Photomatic was also the trademarked name of many of the booths found in public places which took photographs by coin operation. The Photomatic brand of the booths was manufactured by the International Mutoscope Reel Company of New York City. Earlier versions took only one photo per coin, and later versions of the booths took a series of photos. Many of the booths would produce a strip of four photos in exchange for a coin.
Comic books
Some writers have used storyboard-type drawings (albeit rather sketchy) for scripting comic books, often indicating the staging of figures, backgrounds, and balloon placement, with instructions to the artist scribbled in the margins as needed and the dialogue or captions indicated. John Stanley and Carl Barks (when he was writing stories for the Junior Woodchuck title) are known to have used this style of scripting.
In Japanese comics, the word "name" (ネーム, nēmu) is used for rough manga storyboards.
Business
Storyboards used for planning advertising campaigns, such as corporate video productions, commercials, proposals, or other business presentations intended to convince or compel to action, are known as presentation boards. Presentation boards are generally rendered at a higher quality than shooting boards, as they need to convey expression, layout, and mood. Modern ad agencies and marketing professionals create presentation boards either by hiring a storyboard artist to create hand-drawn illustrated frames or by using sourced photographs to create a loose narrative of the idea they are trying to sell. Storyboards can also be used to visually understand the consumer experience: by mapping out the customer's journey, brands can better identify potential pain points and anticipate emerging needs.
Some consulting firms teach the technique to their staff to use during the development of client presentations, frequently employing the "brown paper technique" of taping presentation slides (in sequential versions as changes are made) to a large piece of kraft paper which can be rolled up for easy transport. The initial storyboard may be as simple as slide titles on Post-It notes, which are then replaced with draft presentation slides as they are created.
Storyboards are also used in accounting in activity-based costing (ABC) to develop a detailed process flowchart which visually shows all activities and the relationships among them. They are used in this way to measure the cost of resources consumed, identify and eliminate non-value-added costs, determine the efficiency and effectiveness of all major activities, and identify and evaluate new activities that can improve future performance.
A "quality storyboard" is a tool to help facilitate the introduction of a quality improvement process into an organization.
"Design comics" are a type of storyboard used to include a customer or other characters into a narrative. Design comics are most often used in designing websites or illustrating product-use scenarios during design. Design comics were popularized by Kevin Cheng and Jane Jao in 2006.
Architectural studios
Occasionally, architectural studios need a storyboard artist to visualize presentations of their projects. Usually, a project needs to be seen by a panel of judges and nowadays it’s possible to create virtual models of proposed new buildings, using advanced computer software to simulate lights, settings, and materials. Clearly, this type of work takes time – and so the first stage is a draft in the form of a storyboard, to define the various sequences that will subsequently be computer-animated.
Novels
Storyboards are now becoming more popular with novelists. Because most novelists write their stories by scenes rather than chapters, storyboards are useful for plotting the story in a sequence of events and rearranging the scenes accordingly.
Interactive media
More recently the term storyboard has been used in the fields of web development, software development, and instructional design to present and describe, in written form, interactive events as well as audio and motion, particularly on user interfaces and electronic pages.
Software
Storyboarding is used in software development as part of identifying the specifications for a particular set of software. During the specification phase, screens that the software will display are drawn, either on paper or using other specialized software, to illustrate the important steps of the user experience. The storyboard is then modified by the engineers and the client while they decide on their specific needs. The reason storyboarding is useful in software engineering is that it helps the user understand exactly how the software will work, much better than an abstract description would. It is also cheaper to make changes to a storyboard than to an implemented piece of software.
An example is the Storyboards system for designing GUI apps for iOS and macOS.
Scientific research
Storyboards are used in linguistic fieldwork to elicit spoken language. An informant is usually presented with a simplified graphical depiction of a situation or story, and asked to describe the depicted situation, or to re-tell the depicted story. The speech is recorded for linguistic analysis.
Benefits
One advantage of using storyboards is that they allow the user (in film and business) to experiment with changes in the storyline to evoke a stronger reaction or interest. Flashbacks, for instance, are often the result of sorting storyboards out of chronological order to help build suspense and interest.
Another benefit of storyboarding is that the production can plan the movie in advance. In this step, things like the type of camera shot, angle, and blocking of characters are decided.
The process of visual thinking and planning allows a group of people to brainstorm together, placing their ideas on storyboards and then arranging the storyboards on the wall. This fosters more ideas and generates consensus inside the group.
Creation
Storyboards for films are created in a multiple-step process. They can be created by hand drawing or digitally on a computer. The main characteristics of a storyboard are:
Visualize the storytelling.
Focus the story and the timing in several key frames (very important in animation).
Define the technical parameters: description of the motion, the camera, the lighting, etc.
If drawing by hand, the first step is to create or download a storyboard template. These look much like a blank comic strip, with space for comments and dialogue. Then sketch a "thumbnail" storyboard. Some directors sketch thumbnails directly in the script margins. These storyboards get their name because they are rough sketches not bigger than a thumbnail. For some motion pictures, thumbnail storyboards are sufficient.
However, some filmmakers rely heavily on the storyboarding process. If a director or producer wishes, more detailed and elaborate storyboard images are created. These can be created by professional storyboard artists by hand on paper or digitally by using 2D storyboarding programs. Some software applications even supply a stable of storyboard-specific images making it possible to quickly create shots that express the director's intent for the story. These boards tend to contain more detailed information than thumbnail storyboards and convey more of the mood for the scene. These are then presented to the project's cinematographer who achieves the director's vision.
Finally, if needed, 3D storyboards are created (called 'technical previsualization'). The advantage of 3D storyboards is they show exactly what the film camera will see using the lenses the film camera will use. The disadvantage of 3D is the amount of time it takes to build and construct the shots. 3D storyboards can be constructed using 3D animation programs or digital puppets within 3D programs. Some programs have a collection of low-resolution 3D figures which can aid in the process. Some 3D applications allow cinematographers to create "technical" storyboards which are optically-correct shots and frames.
While technical storyboards can be helpful, optically-correct storyboards may limit the director's creativity. In classic motion pictures such as Orson Welles' Citizen Kane and Alfred Hitchcock's North by Northwest, the director created storyboards that were initially thought by cinematographers to be impossible to film. Such innovative and dramatic shots had "impossible" depth of field and angles where there was "no room for the camera" – at least not until creative solutions were found to achieve the ground-breaking shots that the director had envisioned.
See also
Filmmaking
Graphic organizer
Pre-production
Screenwriting
Script breakdown
List of film-related topics
References
Notes
Bibliography
Halligan, Fionnuala (2013) Movie Storyboards: The Art of Visualizing Screenplays. Chronicle Books.
Filmmaking
Film production
Film and video terminology
Cinematic techniques
Animation techniques
Infographics
Home video supplements
American inventions
Comics terminology
Audiovisual introductions in 1933
Walt Disney Animation Studios |
15026619 | https://en.wikipedia.org/wiki/John%20Durham | John Durham | John Henry Durham (born March 16, 1950) is an American lawyer who served as the United States attorney for the District of Connecticut from 2018 to 2021. By April 2019, he had been assigned to investigate the origins of the Federal Bureau of Investigation (FBI) investigation into Russian interference in the 2016 U.S. elections, and in October 2020 he was appointed special counsel for the Department of Justice on that matter, a position he still holds.
He previously served as an assistant U.S. attorney in various positions in the District of Connecticut for 35 years. He is known for his role as special prosecutor investigating the 2005 destruction of interrogation tapes created by the Central Intelligence Agency (CIA), during which he decided not to file any criminal charges related to the destruction of tapes of torture at a CIA facility. By April 2019, U.S. Attorney General William Barr had tasked Durham with overseeing a review of the origins of the Russia investigation and determining whether intelligence collection involving the Trump campaign was "lawful and appropriate". Barr disclosed in December 2020 that he had elevated Durham's status to special counsel in October, ensuring that his investigation could continue after the Trump administration ended.
Early life and education
Durham was born in Boston, Massachusetts. He earned a Bachelor of Arts degree from Colgate University in 1972 and a Juris Doctor from the University of Connecticut School of Law in 1975. After graduation, he was a VISTA volunteer for two years (1975–1977) on the Crow Indian Reservation in Montana.
Career
Connecticut state government
After Durham's volunteer work, he became a state prosecutor in Connecticut. From 1977 to 1978, he served as a Deputy Assistant State's Attorney in the Office of the Chief State's Attorney. From 1978 to 1982, Durham served as an Assistant State's Attorney in the New Haven State's Attorney's Office.
Federal government
Following those five years as a state prosecutor, Durham became a federal prosecutor, joining the U.S. Attorney's Office for the District of Connecticut. From 1982 to 1989, he served as an attorney and then supervisor in the New Haven Field Office of the Boston Strike Force in the Justice Department's Organized Crime and Racketeering Section. From 1989 to 1994, he served as Chief of the Office's Criminal Division. From 1994 to 2008, he served as the Deputy U.S. Attorney, and served as the U.S. Attorney in an acting and interim capacity in 1997 and 1998.
In December 2000, Durham revealed secret Federal Bureau of Investigation (FBI) documents that convinced a judge to vacate the 1968 murder convictions of Enrico Tameleo, Joseph Salvati, Peter J. Limone and Louis Greco because they had been framed by the agency. In 2007, the documents helped Salvati, Limone, and the families of the two other men, who had died in prison, win a $101.7 million civil judgment against the government.
In 2008, Durham led an inquiry into allegations that FBI agents and Boston Police had ties with the mafia. He also led a series of high-profile prosecutions in Connecticut against the New England Mafia and corrupt politicians, including former governor John G. Rowland.
From 2008 to 2012, Durham served as the acting U.S. attorney for the Eastern District of Virginia.
On November 1, 2017, he was nominated by President Donald Trump to serve as U.S. Attorney for Connecticut. On February 16, 2018, his nomination was confirmed by voice vote of the Senate. He was sworn in on February 22, 2018.
Attorney General William Barr secretly appointed Durham Special Counsel on October 19, 2020.
Durham resigned as U.S. Attorney effective February 28, 2021. He was one of 56 remaining Trump-appointed U.S. attorneys President Joe Biden asked to resign in February 2021. He remains Special Counsel as of September 2021.
Appointments as special investigator
Whitey Bulger case
Amid allegations that FBI informants James "Whitey" Bulger and Stephen "The Rifleman" Flemmi had corrupted their handlers, US Attorney General Janet Reno named Durham special prosecutor in 1999. He oversaw a task force of FBI agents brought in from other offices to investigate the Boston office's handling of informants. In 2002, Durham helped secure the conviction of retired FBI agent John J. Connolly Jr., who was sentenced to 10 years in prison on federal racketeering charges for protecting Bulger and Flemmi from prosecution and warning Bulger to flee just before the gangster's 1995 indictment. Durham's task force also gathered evidence against retired FBI agent H. Paul Rico who was indicted in Oklahoma on state charges that he helped Bulger and Flemmi kill a Tulsa businessman in 1981. Rico died in 2004 before the case went to trial.
CIA interrogation tapes destruction
In 2008, Durham was appointed by Attorney General Michael Mukasey to investigate the destruction of CIA videotapes of detainee interrogations. On November 8, 2010, Durham closed the investigation without recommending any criminal charges be filed. Durham's final report remains secret but was the subject of an unsuccessful lawsuit under the Freedom of Information Act filed by The New York Times reporter Charlie Savage.
Torture investigation
In August 2009, Attorney General Eric Holder appointed Durham to lead the Justice Department's investigation of the legality of CIA's use of so-called "enhanced interrogation techniques" in the torture of detainees. Durham's mandate was to look at only those interrogations that had gone "beyond the officially sanctioned guidelines", with Attorney General Holder saying interrogators who had acted in "good faith" based on the guidance found in the Torture Memos issued by the Bush Justice Department were not to be prosecuted. Later in 2009, University of Toledo law professor Benjamin G. Davis attended a conference where former officials of the Bush administration had told conference participants shocking stories, and accounts of illegality on the part of more senior Bush officials. Davis wrote an appeal to former Bush officials to take their accounts of illegality directly to Durham. A criminal investigation into the deaths of two detainees, Gul Rahman in Afghanistan and Manadel al-Jamadi in Iraq, was opened in 2011. It was closed in 2012 with no charges filed.
Special counsel to review origins of Trump-Russia investigation
Beginning in 2017, President Trump and his allies alleged that the FBI investigation (known as Crossfire Hurricane) of possible contacts between his associates and Russian officials (which led to the Mueller investigation) was a "hoax" or "witch hunt" that was baselessly initiated by his political enemies. In April 2019, Attorney General William Barr announced that he had launched a review of the origins of the FBI's investigation into Russian interference in the 2016 United States elections and it was reported in May that he had assigned Durham to lead it several weeks earlier. Durham was given the authority "to broadly examin[e] the government's collection of intelligence involving the Trump campaign's interactions with Russians," reviewing government documents and requesting voluntary witness statements. In December 2020, Barr revealed to Congress that he had secretly appointed Durham special counsel on October 19. He stayed on in this capacity after he resigned as U.S. Attorney. The U.S. Justice Department's first official expenditure report for the special investigation showed that it had spent $1.5 million from Oct 19, 2020, to March 31, 2021; Durham was not required to report expenditures made before being designated special counsel.
As of November 2021 Durham had secured single-count indictments against two Americans for making a false statement, and a five-count false statement indictment against a Russian national.
Investigation into origins of FBI investigation "Crossfire Hurricane"
On October 24, 2019, it was reported that what had been a review of the Russia investigation was now a criminal probe into the matter. The Justice Department could now utilize subpoena power for both witness testimony and documents. Durham also had at his disposal the power to convene a grand jury and file criminal charges, if needed. The New York Times reported on November 22 that the Justice Department inspector general had made a criminal referral to Durham regarding Kevin Clinesmith, an FBI attorney who had altered an email during the process of acquiring a wiretap warrant renewal on Carter Page, and that referral appeared to be at least part of the reason Durham's investigation was elevated to criminal status. On August 14, 2020, Clinesmith pleaded guilty to a felony violation of altering an email used to maintain Foreign Intelligence Surveillance Act (FISA) warrants. He had altered an email from a CIA liaison, which stated that Carter Page had a prior operational relationship with the CIA from 2008 to 2013, by falsely adding a claim that Page was "not a source" for the CIA. The Page warrants began in October 2016, months after the FBI's Crossfire Hurricane investigation was opened, so Clinesmith's action and indictment were unrelated to the original basis of Durham's investigation into the origins of the FBI investigation.
The day Justice Department inspector general Michael Horowitz released his report on the 2016 FBI Crossfire Hurricane investigation, which found the investigation was properly predicated and debunked a number of conspiracy theories regarding its origins, Durham issued a statement saying, "we do not agree with some of the report’s conclusions as to predication and how the FBI case was opened." Many observers inside and outside the Justice Department, including the inspector general, expressed surprise that Durham would issue such a statement, as federal investigators typically do not publicly comment on their ongoing investigations. Barr also released a statement challenging the findings of the report. Horowitz later testified to the Senate that prior to release of the report he had asked Durham for any information he had that might change the report's findings, but "none of the discussions changed our findings." The Washington Post reported that Durham could not provide evidence of any setup by American intelligence.
The New York Times reported in December 2019 that Durham was examining the role of former CIA director John Brennan in assessing Russian interference in 2016, requesting emails, call logs and other documents. Brennan had been a vocal critic of Trump and a target of the president's accusations of improper activities toward him. The Times reported Durham was specifically examining Brennan's views of the Steele dossier and what he said about it to the FBI and other intelligence agencies. Brennan and former director of national intelligence James Clapper had testified to Congress that the CIA and other intelligence agencies did not rely on the dossier in preparing the January 2017 intelligence community assessment of Russian interference, and allies of Brennan said he disagreed with the FBI view that the dossier should be given significant weight, as the CIA characterized it as "internet rumor." The Times reported in February 2020 that Durham was examining whether intelligence community officials, and specifically Brennan, had concealed or manipulated evidence of Russian interference to achieve a desired result. FBI and NSA officials told Durham that his pursuit of this line of inquiry was due to his misunderstanding of how the intelligence community functions. Durham interviewed Brennan for eight hours on August 21, 2020, after which a Brennan advisor said Durham told Brennan he was not a subject or target of a criminal investigation, but rather a witness to events.
The New York Times reported in September 2020 that Durham had also sought documents and interviews regarding how the FBI handled an investigation into the Clinton Foundation. The FBI had investigated the Foundation and other matters related to Hillary Clinton, but had found no basis for prosecution, nor did John Huber, a U.S. attorney appointed by Trump's first attorney general Jeff Sessions, after a two-year investigation ending in January 2020.
On November 2, 2020, the day before the presidential election, New York magazine reported that:
Indictment of attorney
On September 16, 2021, Durham indicted Michael Sussmann, a partner at the law firm Perkins Coie, alleging he falsely told FBI general counsel James Baker during a September 2016 meeting that he was not representing a client for their discussion. Durham alleged Sussmann was actually representing "a U.S. Technology Industry Executive, a U.S. Internet Company and the Hillary Clinton Presidential Campaign." Sussmann focuses on privacy and cybersecurity law and had approached Baker to discuss what he and others believed to be suspicious communications between computer servers at the Russian Alfa-Bank and the Trump Organization. Sussmann had represented the Democratic National Committee regarding the Russian hacking of its computer network. Sussmann's attorneys have denied he was representing the Clinton campaign. Perkins Coie represented the Clinton presidential campaign, and one of its partners, Marc Elias, commissioned Fusion GPS to conduct opposition research on Trump, which led to the production of the Steele dossier. Sussmann, a former federal prosecutor, characterized the allegations against him as politically motivated and pleaded not guilty the day after his indictment. As with the charge against Clinesmith, the charge against Sussmann was unrelated to the FBI investigation into links between Trump associates and Russian officials, which began in July 2016.
During a 2018 congressional deposition, Baker stated, "I don’t remember [Sussmann] specifically saying that he was acting on behalf of a particular client," though the Durham investigation found handwritten notes taken by assistant director of the FBI Counterintelligence Division Bill Priestap which paraphrase Baker telling him after the meeting that Sussmann "said not doing this for any client." The notes also say "Represents DNC, Clinton Foundation, etc.," though they did not say Sussmann told Baker this during the meeting; Baker had also said during his deposition that he was generally familiar with Sussmann's work, as they were friends. The Priestap notes constitute hearsay and it was not clear if they would be admissible in court as evidence under the hearsay rule.
The New York Times reported Durham had records showing Sussmann had billed the Clinton campaign for certain hours he spent working on the Alfa-Bank matter. His attorneys said he did so because he needed to demonstrate internally that he was engaged in billable work, though the work involved consulting with Elias, and the campaign paid a flat monthly fee to Perkins Coie but was not actually charged for those billed hours.
In a December 2021 court filing, Sussmann's attorneys presented portions of two documents provided to them by Durham days earlier which they asserted undermined the indictment. One document was a summary of an interview Durham's investigators conducted with Baker in June 2020 in which he did not say that Sussmann told him he was not there on behalf of any client, but rather that Baker had assumed it and that the issue never came up. A second document was a June 2019 Justice Department inspector general interview with Baker in which he said the Sussmann meeting "related to strange interactions that some number of people that were his clients, who were, he described as I recall it, sort of cybersecurity experts, had found." The New York Times reported that the narrow charge against Sussmann was contained in a 27-page indictment that elaborated on activities of cybersecurity researchers who were not charged, including what their attorneys asserted were selected email excerpts that falsely portrayed them as not actually believing their claims. Trump and his supporters seized on that information to assert the Alfa-Bank matter was a hoax devised by Clinton supporters and so the Trump-Russia investigation had been unjustified. Sussmann's attorneys told the court that the new evidence "underscores the baseless and unprecedented nature of this indictment" and asked that his trial date be moved from July to May 2022. A Durham prosecutor later asserted that subsequent to his 2019 and 2020 interviews, Baker "affirmed and then re-affirmed his now-clear recollection of the defendant’s false statement" after refreshing his memory with contemporaneous or near-contemporaneous notes.
In a February 2022 court motion related to Sussmann's prosecution, Durham alleged that Sussmann associate Rodney Joffe and his associates had "exploited" capabilities his company had through a pending cybersecurity contract with the Executive Office of the President (EOP) to acquire nonpublic government Domain Name System (DNS) and other data traffic "for the purpose of gathering derogatory information about Donald Trump." Joffe was not charged and his attorney did not immediately comment. After Sussmann's September 2021 indictment, The New York Times reported that in addition to analyzing suspicious communications involving a Trump server, Sussmann and analysts he worked with became aware of data from a YotaPhone — a Russian-made smartphone rarely used in the United States — that had accessed networks serving the White House, Trump Tower and a Michigan hospital company, Spectrum Health. Like the Alfa-Bank server, a Spectrum Health server also communicated with the Trump Organization server. Sussmann notified CIA counterintelligence of the findings in February 2017, but it was not known if they were investigated. Durham alleged in his February 2022 court motion that Sussmann had claimed his information "demonstrated that Trump and/or his associates were using supposedly rare, Russian-made wireless phones in the vicinity of the White House and other locations," but Durham said he found no evidence to support that. Sussmann's attorneys responded that Durham knew Sussmann had not made such a claim to the CIA. Durham alleged Sussmann's data showed a Russian phone provider connection involving the EOP "during the Obama administration and years before Trump took office." Attorneys for an analyst who examined the YotaPhone data said researchers were investigating malware in the White House; a spokesman for Joffe said his client had lawful access under a contract to analyze White House DNS data for potential security threats. The spokesman asserted Joffe's work was in response to hacks of the EOP in 2015 and of the DNC in 2016, as well as YotaPhone queries in proximity to the EOP and the Trump campaign, that raised "serious and legitimate national security concerns about Russian attempts to infiltrate the 2016 election" that was shared with the CIA. Durham asserted that Sussmann bringing his information to the CIA was part of a broader effort to raise the intelligence community's suspicions of Trump's connections to Russia shortly after he took office. Durham did not allege that any eavesdropping of Trump communications content occurred, nor did he assert the Clinton campaign was involved or that the alleged DNS monitoring activity was unlawful or occurred after Trump took office.
Durham's filing triggered a furor among right-wing media outlets, including misinformation about what Durham had alleged, which was challenged by other outlets and lawyers for the involved parties. Fox News falsely reported that Durham claimed Hillary Clinton's campaign had paid a technology company to "infiltrate" White House and Trump Tower servers; that narrative actually came from Trump ally Kash Patel. The Washington Examiner claimed that this all meant there had been spying on Trump's White House office. Charlie Savage of The New York Times disputed these claims and explained that "Mr. Durham's filing never used the word 'infiltrate.' And it never claimed that Mr. Joffe's company was being paid by the Clinton campaign." Sussmann's attorneys asserted Durham's motion contained falsehoods "intended to further politicize this case, inflame media coverage, and taint the jury pool" as part of a pattern of Durham's behavior since Sussmann's indictment. Durham objected to a motion by Sussmann's attorneys to have the "factual background" section struck from Durham's motion, stating that "If third parties or members of the media have overstated, understated, or otherwise misinterpreted facts contained in the Government’s Motion, that does not in any way undermine the valid reasons for the Government’s inclusion of this information."
Hillary Clinton responded to the right-wing media attacks by hinting at defamation: "It's funny the more trouble Trump gets into the wilder the charges and conspiracy theories about me seem to get. Fox leads the charge with accusations against me, counting on their audience to fall for it again. As an aside, they're getting awfully close to actual malice in their attacks."
Sussmann's attorneys also explained that "Although the Special Counsel implies that in Mr. Sussmann's February 9, 2017 meeting, he provided Agency-2 with (Executive Office of the President) data from after Mr. Trump took office, the Special Counsel is well aware that the data provided to Agency-2 pertained only to the period of time before Mr. Trump took office, when Barack Obama was President," a time period (2015 and 2016) where much investigation of Russian hacks of Democratic Party and White House networks had occurred: "...cybersecurity researchers were 'deeply concerned' to find data suggesting Russian-made YotaPhones were in proximity to the Trump campaign and the White House, so 'prepared a report of their findings, which was subsequently shared with the C.I.A'."
Alfa Bank investigation
CNN reported later in September that the Durham grand jury had subpoenaed documents from Perkins Coie. CNN had viewed emails between Sussmann and others who were researching the server communications, including Joffe, showing that Durham's indictment of Sussmann cited only portions of the emails. The indictment included an unidentified researcher stating in an email, "The only thing that drive[s] us at this point is that we just do not like [Trump]." CNN's review of other emails indicated the researchers later broadened the scope of their examination for presentation to the FBI. Joffe's attorney asserted the indictment contained cherry-picked information to misrepresent what had transpired. Defense lawyers for the scientists who researched the Alfa Bank-Trump internet traffic said that Durham's indictment is misleading and that their clients stand by their findings.
Indictment of Steele dossier source
On November 4, 2021, Russian national Igor Danchenko was arrested and indicted on allegations he made five false statements to the FBI. Danchenko was a major source for the Steele dossier which made controversial allegations about Trump and was used by the FBI to secure a surveillance warrant against former Trump campaign aide Carter Page after he had left the campaign. The Steele dossier was not used as a basis to open the FBI investigation into links between Trump associates and Russian officials.
The indictment alleged Danchenko lied by saying he had not discussed the dossier with an unnamed U.S.-based public relations executive, identified by his attorney as Charles Dolan, Jr., a longtime political associate of Bill and Hillary Clinton. Durham alleged Danchenko used Dolan as a source for the dossier, though Dolan's attorney said he was a witness in the case. The indictment suggested, though did not directly assert, that Dolan may have been a source of the dossier allegation that a video existed of Trump having a liaison with prostitutes in a Moscow hotel. The indictment noted that Dolan was given a tour of that hotel in June 2016, including the presidential suite Trump stayed in during the alleged 2013 encounter. The Washington Post reported soon after Danchenko's indictment that Dolan had worked in public relations for Russia for eight years ending 2014. After the Steele dossier was publicly released by Buzzfeed News in January 2017, Dolan emailed a Russian client, whose web server company was later implicated in the Democratic National Committee cyber attacks, stating, "I’m hoping that this is exposed as fake news. I may be wrong but I have doubts about the authenticity."
Danchenko pleaded not guilty to the charges and a trial was scheduled for April 2022.
Awards and accolades
In 2011, Durham was included on The New Republic's list of Washington's most powerful, least famous people.
In 2004, Durham was decorated with the Attorney General's Award for Exceptional Service and, in 2012, with the Attorney General's Award for Distinguished Service.
Personal life
According to CNN, Durham is "press-shy" and is known for his tendency to avoid the media. United States Attorney Deirdre Daly once described him as "tireless, fair and aggressive" while United States Senator Chris Murphy characterized him as "tough-nosed ... apolitical and serious".
See also
Mueller report
Timeline of Russian interference in the 2016 United States elections
Timeline of investigations into Donald Trump and Russia (2019)
Timeline of investigations into Donald Trump and Russia (2020–2021)
Trump–Ukraine scandal
References
External links
Biography at U.S. Department of Justice
1950 births
Colgate University alumni
Lawyers from Boston
Living people
Trump administration personnel
United States Attorneys for the District of Connecticut
Connecticut Republicans
Massachusetts Republicans
United States Department of Justice lawyers
University of Connecticut School of Law alumni |
27059655 | https://en.wikipedia.org/wiki/Cloudera | Cloudera | Cloudera, Inc. is an American software company providing enterprise data management systems that make significant use of Apache Hadoop. As of January 31, 2021, the company had approximately 1,800 customers.
History
Cloudera, Inc. was formed on June 27, 2008, by Christophe Bisciglia (from Google), Amr Awadallah (from Yahoo!), Jeff Hammerbacher (from Facebook), and Mike Olson (from Oracle). Awadallah oversaw a business unit performing data analysis using Hadoop while at Yahoo!; Hammerbacher used Hadoop to develop some of Facebook's data analytics applications; and Olson formerly served as the CEO of Sleepycat Software, the company that created Berkeley DB. The four were joined in 2009 by Doug Cutting, a co-founder of Hadoop.
In March 2009, Cloudera released a commercial distribution of Hadoop, in conjunction with a $5 million investment led by Accel Partners. This was followed by a $25 million funding round in October 2010, a $40 million funding round in November 2011, and a $160 million funding round in March 2014.
In June 2013, Tom Reilly became chief executive officer of the company, although Olson remained as chairman of the board and chief strategist. Both left the company in June 2019. Rob Bearden was appointed as Cloudera's CEO in January 2020.
In March 2014, Intel invested $740 million in Cloudera for an 18% stake in the company. These shares were repurchased by Cloudera in December 2020 for $314 million.
On April 28, 2017, the company became a public company via an initial public offering. Over the next four years, the company's share price declined in the wake of falling sales figures and the rise of public cloud services like Amazon Web Services. In October 2021, the company went private after an acquisition by KKR and Clayton, Dubilier & Rice.
Cloudera has formed partnerships with companies such as Dell, IBM, and Oracle.
Products and services
Cloudera provides the Cloudera Data Platform, a collection of products related to cloud services and data processing. Some of these services are provided through public cloud servers such as Microsoft Azure or Amazon Web Services, while others are private cloud services that require a subscription. Cloudera markets these products for purposes related to machine learning and data analysis.
References
External links
American companies established in 2008
2008 establishments in California
2017 initial public offerings
2021 mergers and acquisitions
Business intelligence companies
Business intelligence
Business analysis
Big data companies
Cloud computing providers
Cloud infrastructure
Companies based in Palo Alto, California
Companies formerly listed on the New York Stock Exchange
Data companies
Data visualization software
Hadoop
Software companies based in the San Francisco Bay Area
Software companies established in 2008
Software companies of the United States
Kohlberg Kravis Roberts companies
Private equity portfolio companies |
25008091 | https://en.wikipedia.org/wiki/Information%20technology%20architecture | Information technology architecture | Information technology architecture is the process of developing methodical information technology specifications, models, and guidelines, using a variety of information technology notations such as UML, within a coherent information technology architecture framework, following formal and informal information technology solution, enterprise, and infrastructure architecture processes. These processes have been developed over the past few decades in response to the requirement for a coherent, consistent approach to the delivery of information technology capabilities. They have been developed by information technology product vendors and independent consultancies, based on real experiences in the information technology marketplace and collaboration among industry stakeholders, for example the Open Group. Best-practice information technology architecture encourages the use of open technology standards and global technology interoperability. Information technology (IT) architecture can also be described as a high-level map or plan of the information assets in an organization, including the physical design of the building that holds the hardware.
Grady Booch, Ivar Jacobson, and James Rumbaugh are credited with developing the Unified Modeling Language (UML), a widely used technology modeling language.
IBM was an early developer of formal solution and infrastructure architecture methodologies for information technology.
References
External links
The IEEE association for advancement of technology
Institute for Enterprise Architecture Developments
What is IT Architecture and types of architecture
Information technology
Enterprise architecture |
2543892 | https://en.wikipedia.org/wiki/GUID%20Partition%20Table | GUID Partition Table | The GUID Partition Table (GPT) is a standard for the layout of partition tables of a physical computer storage device, such as a hard disk drive or solid-state drive, using universally unique identifiers, which are also known as globally unique identifiers (GUIDs). Forming a part of the Unified Extensible Firmware Interface (UEFI) standard (Unified EFI Forum-proposed replacement for the PC BIOS), it is nevertheless also used for some BIOS systems, because of the limitations of master boot record (MBR) partition tables, which use 32 bits for logical block addressing (LBA) of traditional 512-byte disk sectors.
All modern personal computer operating systems support GPT. Some, including macOS and Microsoft Windows on the x86 architecture, support booting from GPT partitions only on systems with EFI firmware, but FreeBSD and most Linux distributions can boot from GPT partitions on systems with either the BIOS or the EFI firmware interface.
History
The Master Boot Record (MBR) partitioning scheme, widely used since the early 1980s, imposed limitations on the use of modern hardware. The available size for block addresses and related information is limited to 32 bits. For hard disks with 512-byte sectors, the MBR partition table entries allow a maximum size of 2 TiB (2³² × 512 bytes) or 2.20 TB (2.20 × 10¹² bytes).
In the late 1990s, Intel developed a new partition table format as part of what eventually became the Unified Extensible Firmware Interface (UEFI). The GUID Partition Table is specified in chapter 5 of the UEFI 2.8 specification. GPT uses 64 bits for logical block addresses, allowing a maximum disk size of 2⁶⁴ sectors. For disks with 512-byte sectors, the maximum size is 8 ZiB (2⁶⁴ × 512 bytes) or 9.44 ZB (9.44 × 10²¹ bytes). For disks with 4,096-byte sectors the maximum size is 64 ZiB (2⁶⁴ × 4,096 bytes) or 75.6 ZB (75.6 × 10²¹ bytes).
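To make these figures concrete, here is a short, illustrative Python sketch that reproduces the limits quoted above from the address widths and sector sizes; the variable names are ours, not from the specification.

```python
SECTOR_512, SECTOR_4K = 512, 4096

mbr_max = 2**32 * SECTOR_512       # 32-bit LBA entries in the MBR
gpt_512 = 2**64 * SECTOR_512       # 64-bit LBAs, 512-byte sectors
gpt_4k  = 2**64 * SECTOR_4K        # 64-bit LBAs, 4,096-byte sectors

print(f"MBR limit : {mbr_max / 2**40:.2f} TiB ({mbr_max / 1e12:.2f} TB)")
print(f"GPT, 512 B: {gpt_512 / 2**70:.0f} ZiB ({gpt_512 / 1e21:.2f} ZB)")
print(f"GPT, 4 KiB: {gpt_4k / 2**70:.0f} ZiB ({gpt_4k / 1e21:.1f} ZB)")
# MBR limit : 2.00 TiB (2.20 TB)
# GPT, 512 B: 8 ZiB (9.44 ZB)
# GPT, 4 KiB: 64 ZiB (75.6 ZB)
```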
In 2010, hard-disk manufacturers introduced drives with 4,096-byte sectors (Advanced Format). For compatibility with legacy hardware and software, those drives include an emulation technology (512e) that presents 512-byte sectors to the entity accessing the hard drive, despite their underlying 4,096-byte physical sectors.
Features
Like MBR, GPTs use logical block addressing (LBA) in place of the historical cylinder-head-sector (CHS) addressing. The protective MBR is stored at LBA 0, and the GPT header is in LBA 1. The GPT header has a pointer to the partition table (Partition Entry Array), which is typically at LBA 2. Each entry on the partition table has a size of 128 bytes.
The UEFI specification stipulates that a minimum of 16,384 bytes, regardless of sector size, are allocated for the Partition Entry Array. Thus, on a disk with 512-byte sectors, at least 32 sectors are used for the Partition Entry Array, and the first usable block is at LBA 34 or higher, while on a 4,096-byte sectors disk, at least 4 sectors are used for the Partition Entry Array, and the first usable block is at LBA 6 or higher.
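As a small worked example of that layout, the sketch below computes the first usable LBA from the sector size, assuming only the minimum 16,384-byte Partition Entry Array starting at LBA 2; the helper name is an illustrative choice.

```python
def first_usable_lba(sector_size: int, min_array_bytes: int = 16_384) -> int:
    """First LBA available for partitions on a newly created GPT disk.

    LBA 0 holds the protective MBR, LBA 1 the GPT header, and the Partition
    Entry Array starts at LBA 2 and must cover at least 16,384 bytes.
    """
    array_sectors = -(-min_array_bytes // sector_size)   # ceiling division
    return 2 + array_sectors

print(first_usable_lba(512))    # 34  (32 sectors for the entry array)
print(first_usable_lba(4096))   # 6   (4 sectors for the entry array)
```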
MBR variants
Protective MBR (LBA 0)
For limited backward compatibility, the space of the legacy Master Boot Record (MBR) is still reserved in the GPT specification, but it is now used in a way that prevents MBR-based disk utilities from misrecognizing and possibly overwriting GPT disks. This is referred to as a protective MBR.
A single partition of type 0xEE, encompassing the entire GPT drive (where "entire" actually means as much of the drive as can be represented in an MBR), is indicated and identifies it as GPT. Operating systems and tools which cannot read GPT disks will generally recognize the disk as containing one partition of unknown type and no empty space, and will typically refuse to modify the disk unless the user explicitly requests and confirms the deletion of this partition. This minimizes accidental erasures. Furthermore, GPT-aware OSes may check the protective MBR and if the enclosed partition type is not 0xEE or if there are multiple partitions defined on the target device, the OS may refuse to manipulate the partition table.
If the actual size of the disk exceeds the maximum partition size representable using the legacy 32-bit LBA entries in the MBR partition table, the recorded size of this partition is clipped at the maximum, thereby ignoring the rest of the disk. This amounts to a maximum reported size of 2 TiB, assuming a disk with 512 bytes per sector (see 512e). It would result in 16 TiB with 4 KiB sectors (4Kn), but since many older operating systems and tools are hard coded for a sector size of 512 bytes or are limited to 32-bit calculations, exceeding the 2 TiB limit could cause compatibility problems.
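The clipping rule can be sketched as follows; this is an illustrative reading of the behaviour described above, not text from the UEFI specification, and the field names are assumptions.

```python
def protective_mbr_entry(total_sectors: int) -> dict:
    """Protective MBR partition entry for a GPT disk (illustrative sketch).

    The single 0xEE partition starts at LBA 1 and nominally covers the rest
    of the disk, but the MBR size field is only 32 bits wide, so it is
    clipped at 0xFFFFFFFF sectors (about 2 TiB with 512-byte sectors).
    """
    return {
        "type": 0xEE,
        "first_lba": 1,
        "num_sectors": min(total_sectors - 1, 0xFFFFFFFF),
    }

# A 4 TiB disk with 512-byte sectors: the reported size is clipped.
print(protective_mbr_entry(4 * 2**40 // 512))
# {'type': 238, 'first_lba': 1, 'num_sectors': 4294967295}
```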
Hybrid MBR (LBA 0 + GPT)
In operating systems that support GPT-based boot through BIOS services rather than EFI, the first sector may also still be used to store the first stage of the bootloader code, but modified to recognize GPT partitions. The bootloader in the MBR must not assume a sector size of 512 bytes.
Partition table header (LBA 1)
The partition table header defines the usable blocks on the disk. It also defines the number and size of the partition entries that make up the partition table (offsets 80 and 84 in the table).
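As a rough sketch of how these header fields can be read in practice, the snippet below unpacks the fixed 92-byte header from the sector at LBA 1, following the layout in the UEFI specification (the offsets 80 and 84 mentioned above hold the entry count and entry size). The function name and returned dictionary keys are illustrative choices, not part of any API.

```python
import struct
import uuid

# Fixed portion of the GPT header at LBA 1 (92 bytes, little-endian).
GPT_HEADER = struct.Struct("<8s4sII4xQQQQ16sQIII")

def parse_gpt_header(lba1: bytes) -> dict:
    (signature, revision, header_size, header_crc,
     current_lba, backup_lba, first_usable, last_usable,
     disk_guid, entries_lba, num_entries, entry_size,
     entries_crc) = GPT_HEADER.unpack_from(lba1)
    if signature != b"EFI PART":
        raise ValueError("not a GPT header")
    return {
        "revision": revision.hex(),
        "header_size": header_size,
        "current_lba": current_lba,
        "backup_lba": backup_lba,
        "first_usable_lba": first_usable,
        "last_usable_lba": last_usable,
        "disk_guid": uuid.UUID(bytes_le=disk_guid),
        "entries_start_lba": entries_lba,   # usually 2
        "num_entries": num_entries,         # offset 80
        "entry_size": entry_size,           # offset 84
        "header_crc32": header_crc,
        "entries_crc32": entries_crc,
    }
```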
Partition entries (LBA 2–33)
After the header, the Partition Entry Array describes partitions, using a minimum size of 128 bytes for each entry block. The starting location of the array on disk, and the size of each entry, are given in the GPT header. The first 16 bytes of each entry designate the partition type's globally unique identifier (GUID). For example, the GUID for an EFI system partition is C12A7328-F81F-11D2-BA4B-00A0C93EC93B. The second 16 bytes are a GUID unique to the partition. Then follow the starting and ending 64-bit LBAs, partition attributes, and the 36-character (max.) Unicode partition name. As is the nature and purpose of GUIDs and as per RFC 4122, no central registry is needed to ensure the uniqueness of the GUID partition type designators.
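A minimal sketch of decoding a single 128-byte entry along these lines is shown below. The GUID fields are stored in the mixed-endian layout that Python's uuid.UUID(bytes_le=...) expects; the helper name and returned fields are illustrative assumptions.

```python
import struct
import uuid
from typing import Optional

GPT_ENTRY = struct.Struct("<16s16sQQQ72s")   # 128 bytes per entry

def parse_gpt_entry(raw: bytes) -> Optional[dict]:
    type_guid, unique_guid, first_lba, last_lba, attrs, name = \
        GPT_ENTRY.unpack_from(raw)
    type_uuid = uuid.UUID(bytes_le=type_guid)
    if type_uuid.int == 0:                    # all-zero type GUID: unused slot
        return None
    return {
        "type_guid": type_uuid,
        "unique_guid": uuid.UUID(bytes_le=unique_guid),
        "first_lba": first_lba,
        "last_lba": last_lba,                 # inclusive
        "attributes": attrs,                  # 64-bit attribute flags
        "name": name.decode("utf-16-le").rstrip("\x00"),
    }
```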
The 64-bit partition table attributes are shared between 48-bit common attributes for all partition types, and 16-bit type-specific attributes:
Microsoft defines the type-specific attributes for basic data partition as:
Google defines the type-specific attributes for Chrome OS kernel as:
Operating-system support
UNIX and Unix-like systems
Windows: 32-bit versions
Windows 7 and earlier do not support UEFI on 32-bit platforms, and therefore do not allow booting from GPT partitions.
Windows: 64-bit versions
Limited to 128 partitions per disk.
Partition type GUIDs
Each partition has a "partition type GUID" that identifies the type of the partition and therefore partitions of the same type will all have the same "partition type GUID". Each partition also has a "partition unique GUID" as a separate entry, which as the name implies is a unique id for each partition.
See also
Advanced Active Partition (AAP)
Apple Partition Map (APM)
Boot Engineering Extension Record (BEER)
BSD disklabel
Device Configuration Overlay (DCO)
Extended Boot Record (EBR)
Host Protected Area (HPA)
Partition alignment
Rigid Disk Block (RDB)
Volume Table of Contents (VTOC)
Notes
References
External links
Microsoft TechNet: Disk Sectors on GPT Disks (archived page)
Microsoft Windows Deployment: Converting MBR to GPT without data loss
Microsoft TechNet: Troubleshooting Disks and File Systems
Microsoft TechNet: Using GPT Drives
Microsoft: FAQs on Using GPT disks in Windows
Microsoft TechNet: How Basic Disks and Volumes Work – a bit MS-specific, but good figures relate GPT to the older MBR format and protective MBR, show layouts of complete disks, and explain how to interpret partition-table hexdumps.
Apple Developer Connection: Secrets of the GPT
Make the most of large drives with GPT and Linux
Convert Windows Vista SP1+ or 7 x86_64 boot from BIOS-MBR mode to UEFI-GPT mode without Reinstall
Support for GPT (Partition scheme) and HDD greater than 2.19 TB in Microsoft Windows XP
Setting up a RAID volume in Linux with >2TB disks
BIOS
Unified Extensible Firmware Interface
Booting
Disk partitions |
7057598 | https://en.wikipedia.org/wiki/Kitchen%20Table%20International | Kitchen Table International | Kitchen Table International was a fictitious computer company created as a faux amalgam of Radio Shack, Apple Inc., Commodore Business Machines, and other organizations of the time, and was the subject of one of the earliest regular computer humor columns, appearing in Wayne Green's 80 Micro magazine from January 1981 through July, 1983. It was invented by computer journalist David D. Busch, and billed as the "world's leading supplier of fictitious hardware, software, firmware, and limpware". Each month a new "innovation" was introduced that poked fun at the infant personal computer industry. These included a "black phosphor" computer monitor, and a programming language with all the worst features of BASIC and COBOL, called BASBOL. The fictional company's flagship product was the TLS-8E, a computer which was sold with a factory-applied coating of oxidation on its peripheral edge card connectors ("to protect them from electricity"), a 5-inch "sloppy" disk drive, and a keyboard that eschewed the familiar QWERTY array for a 16-key matrix that included a TBA (To Be Announced) key.
According to Busch, the operation was founded by one "Scott Nolan Hollerith" (after Adventure programmer Scott Adams, Atari co-founder Nolan Bushnell, and computer pioneer Herman Hollerith). S.N. Hollerith, it was said, graduated from the University of California at Phoenix in 1970 with a degree in Slide Rule Design, and quickly built KTI into a multi-thousand-dollar empire on a foundation of selling maintenance upgrades for DROSS-DOS 8E, a microcomputer operating system that was a subset of CP/M. In 1981, KTI introduced the world's "first" 32-bit microprocessor, created by piggy-backing two 16-bit chips on top of each other, until it was discovered that, at best, only one of the two chips actually functioned at any given time and, at worst, they spent a lot of time fighting over whose turn it was. The KTI staff gradually phased Hollerith out of active participation by relocating to a new, high-tech facility in Cupertino, California, and not telling him where it was.
Many of the phony products "introduced" by Kitchen Table International were actually introduced later. Several years after the company demonstrated its Reverse LPRINT command, which allowed a dot-matrix printer to function as a scanner (the demo was actually a videotape run backwards, showing sheets of text feeding into a printer and coming out blank after they had been "scanned"), Thunderware introduced the Thunderscan scanner, which replaced the ribbon cartridge of an Apple ImageWriter with a scanning module.
Sorry About The Explosion!
The Kitchen Table columns won the only Best Fiction Book award from the Computer Press Association for Busch in 1985, when he collected, revised, and edited the existing columns and some new material into a book, Sorry About The Explosion! published by Prentice-Hall. Never a best-seller, it achieved cult status largely from the popularity of the monthly KTI columns. The title of the book came from the subject line of a "memo" in the book from KTI R & D director Otto Wirk to company president Scott Nolan Hollerith sheepishly explaining that the organization's latest product had an MTBE (Mean Time Between Explosions) of only 100 hours.
Although not as pointed as the classic The Devil's DP Dictionary by Stan Kelly-Bootle, the book, and the Kitchen Table International columns it was largely based upon, poked fun at the foibles of companies like Apple Computer, Radio Shack, Commodore, and Atari in an era when the early computer magazines were filled with technical articles, code listings, and discussions of the latest and greatest hardware, and not much regular humor. When the KTI column ceased publication in July, 1983, Busch collected all the existing material, reorganized it by topic, and wrote new pieces to produce Sorry About The Explosion!.
References
External links
Fictional companies
Computer humor |
25009 | https://en.wikipedia.org/wiki/Privacy | Privacy | Privacy (, ) is the ability of an individual or group to seclude themselves or information about themselves, and thereby express themselves selectively.
When something is private to a person, it usually means that something is inherently special or sensitive to them. The domain of privacy partially overlaps with security, which can include the concepts of appropriate use and protection of information. Privacy may also take the form of bodily integrity. The right not to be subjected to unsanctioned invasions of privacy by the government, corporations, or individuals is part of many countries' privacy laws, and in some cases, constitutions.
The concept of universal individual privacy is a modern concept primarily associated with Western culture, particularly British and North American, and remained virtually unknown in some cultures until recent times. Now, most cultures recognize the ability of individuals to withhold certain parts of personal information from wider society. With the rise of technology, the debate regarding privacy has shifted from a bodily sense to a digital sense. As the world has become digital, there have been conflicts regarding the legal right to privacy and where it is applicable. In most countries, the right to a reasonable expectation of digital privacy has been extended from the original right to privacy, and many countries, notably the United States under its agency the Federal Trade Commission, as well as the member states of the European Union (EU), have passed acts that further protect digital privacy from public and private entities and grant additional rights to users of technology.
With the rise of the Internet, there has been an increase in the prevalence of social bots, causing political polarization and harassment. Online harassment has also spiked, particularly with teenagers, which has consequently resulted in multiple privacy breaches. Selfie culture, the prominence of networks like Facebook and Instagram, location technology, and the use of advertisements and their tracking methods also pose threats to digital privacy.
Through the rise of technology and the immensity of the debate regarding privacy, there have been various conceptions of privacy. These range from the right to be let alone, as defined in "The Right to Privacy", the first U.S. publication discussing privacy as a legal right, to the theory of the privacy paradox, which describes the notion that users online may say they are concerned about their privacy but in reality are not. Along with various understandings of privacy, there are actions that reduce privacy; the most recent classification, defined by Daniel J. Solove, includes processing information, sharing information, and invading personal space to obtain private information. Conversely, in order to protect a user's privacy, multiple steps can be taken, specifically through practicing encryption, anonymity, and taking further measures to bolster the security of their data.
History
Privacy has historical roots in ancient Greek philosophical discussions. The most well-known of these was Aristotle's distinction between two spheres of life: the public sphere of the polis, associated with political life, and the private sphere of the oikos, associated with domestic life. In the United States, more systematic treatises of privacy did not appear until the 1890s, with the development of privacy law in America.
Technology
As technology has advanced, the way in which privacy is protected and violated has changed with it. In the case of some technologies, such as the printing press or the Internet, the increased ability to share information can lead to new ways in which privacy can be breached. It is generally agreed that the first publication advocating privacy in the United States was the 1890 article by Samuel Warren and Louis Brandeis, "The Right to Privacy", and that it was written mainly in response to the increase in newspapers and photographs made possible by printing technologies.
In 1949, George Orwell's 1984 was published. A classic dystopian novel, 1984 describes the life of Winston Smith in the year 1984 in Oceania, a totalitarian state. The all-controlling Party, led by Big Brother, maintains power through mass surveillance and restrictions on freedom of speech and thought. George Orwell provides commentary on the negative effects of totalitarianism, particularly on privacy and censorship. Parallels have been drawn between 1984 and modern censorship and privacy, a notable example being that large social media companies, rather than the government, are able to monitor a user's data and decide what is allowed to be said online through their censorship policies, ultimately for monetary purposes.
In the 1960s, people began to consider how changes in technology were bringing changes in the concept of privacy. Vance Packard's The Naked Society was a popular book on privacy from that era and led US discourse on privacy at that time. In addition, Alan Westin's Privacy and Freedom shifted the debate regarding privacy away from a purely physical sense, such as how the government controls a person's body (e.g., Roe v. Wade) or activities such as wiretapping and photography. As important records became digitized, Westin argued that personal data was becoming too accessible and that a person should have complete jurisdiction over his or her data, laying the foundation for the modern discussion of privacy.
New technologies can also create new ways to gather private information. For example, in the United States, it was thought that heat sensors intended to be used to find marijuana-growing operations would be acceptable. Contrary to popular opinion, in 2001 in Kyllo v. United States (533 U.S. 27) it was decided that the use of thermal imaging devices that can reveal previously unknown information without a warrant does indeed constitute a violation of privacy. In 2019, after developing a corporate rivalry in competing voice-recognition software, Apple and Amazon required employees to listen to intimate moments and faithfully transcribe the contents.
Police and government
Police and citizens often conflict over the degree to which the police can intrude on a citizen's digital privacy. For instance, in 2012, the Supreme Court ruled unanimously in United States v. Jones (565 U.S. 400) that warrantless tracking infringes the Fourth Amendment; Antoine Jones had been arrested for drug possession after police placed a GPS tracker on his car without a warrant. The Supreme Court also reasoned that there is some "reasonable expectation of privacy" in transportation, since the reasonable expectation of privacy had already been established under Griswold v. Connecticut (1965). The Supreme Court further clarified that the Fourth Amendment did not only pertain to physical instances of intrusion but also digital instances, and thus United States v. Jones became a landmark case.
In 2014, the Supreme Court ruled unanimously in Riley v. California (573 U.S. 373) that searching a citizen's phone without a warrant was an unreasonable search, a violation of the Fourth Amendment. David Leon Riley had been arrested after he was pulled over for driving on expired license tags, and police searched his phone and discovered that he was tied to a shooting. The Supreme Court concluded that cell phones contained personal information different from trivial items, and went further to state that information stored on the cloud was not necessarily a form of evidence. Riley v. California became a landmark case, protecting citizens' digital privacy when confronted with the police.
A recent notable occurrence of the conflict between law enforcement and a citizen in terms of digital privacy has been in the 2018 case, Carpenter v. United States (585 U.S. ). In this case, the FBI used cell phone records without a warrant to arrest Timothy Ivory Carpenter on multiple charges, and the Supreme Court ruled that the warrantless search of cell phone records violated the Fourth Amendment, citing that the Fourth Amendment protects "reasonable expectations of privacy" and that information sent to third parties still falls under data that can be included under "reasonable expectations of privacy".
Beyond law enforcement, many interactions between the government and citizens have been revealed either lawfully or unlawfully, specifically through whistleblowers. One notable example is Edward Snowden, who released details of multiple mass surveillance operations of the National Security Agency (NSA), revealing that the NSA continued to breach the security of millions of people, mainly by collecting great amounts of data through third-party private companies, hacking into the embassies and networks of other countries, and various other breaches of data. The disclosures prompted a culture shock and stirred international debate related to digital privacy.
Internet
Andrew Grove, co-founder and former CEO of Intel Corporation, offered his thoughts on internet privacy in an interview published in May 2000:
Legal discussions of Internet privacy
The Internet has brought new concerns about privacy in an age where computers can permanently store records of everything: "where every online photo, status update, Twitter post and blog entry by and about us can be stored forever", writes law professor and author Jeffrey Rosen.
One of the first instances of privacy being discussed in a legal manner came in 1914, when the Federal Trade Commission (FTC) was established under the Federal Trade Commission Act, with the initial goal of promoting competition among businesses and prohibiting unfair and misleading business practices. However, since the 1970s, the FTC has become involved in privacy law and enforcement, the first instance being the FTC's implementation and enforcement of the Fair Credit Reporting Act (FCRA), which regulates how credit bureaus can use a client's data and grants consumers further rights over their credit information. In addition to the FCRA, the FTC has implemented various other important acts that protect consumer privacy. For example, the FTC implemented the Children's Online Privacy Protection Act (COPPA) of 1998, which regulates services geared towards children under the age of thirteen, and the Red Flags Rule, passed in 2010, which requires that companies have measures to protect clients against identity theft and, if clients become victims of identity theft, steps to alleviate the consequences.
In 2018, the European Union (EU)'s General Data Protection Regulation (GDPR) went into effect, replacing the Data Protection Directive of 1995. The GDPR requires that consumers within the EU be given complete and concise information about how companies use their data, and grants them the right to access and correct the data a company stores about them, enforcing stricter privacy rules than the Data Protection Directive of 1995.
Social networking
Several online social network sites (OSNs) are among the top 10 most visited websites globally. Facebook, for example, as of August 2015 was the largest social-networking site, with nearly 2.7 billion members who upload over 4.75 billion pieces of content daily. While Twitter is significantly smaller, with 316 million registered users, the US Library of Congress recently announced that it will be acquiring and permanently storing the entire archive of public Twitter posts since 2006.
A review and evaluation of scholarly work regarding the current state of the value of individuals' privacy of online social networking show the following results: "first, adults seem to be more concerned about potential privacy threats than younger users; second, policy makers should be alarmed by a large part of users who underestimate risks of their information privacy on OSNs; third, in the case of using OSNs and its services, traditional one-dimensional privacy approaches fall short". This is exacerbated by deanonymization research indicating that personal traits such as sexual orientation, race, religious and political views, personality, or intelligence can be inferred based on a wide variety of digital footprints, such as samples of text, browsing logs, or Facebook Likes.
Intrusions of social media privacy are known to affect employment in the United States. Microsoft reports that 75 percent of U.S. recruiters and human-resource professionals now do online research about candidates, often using information provided by search engines, social-networking sites, photo/video-sharing sites, personal web sites and blogs, and Twitter. They also report that 70 percent of U.S. recruiters have rejected candidates based on internet information. This has created a need by many candidates to control various online privacy settings in addition to controlling their online reputations, the conjunction of which has led to legal suits against both social media sites and US employers.
Selfie culture
Selfies are popular today. A search for photos with the hashtag #selfie retrieves over 23 million results on Instagram and 51 million with the hashtag #me. However, due to modern corporate and governmental surveillance, this may pose a risk to privacy. In a research study with a sample size of 3,763, researchers found that among users posting selfies on social media, women generally have greater privacy concerns than men, and that users' privacy concerns inversely predict their selfie behavior and activity.
Online harassment
After the 1999 Columbine shooting, where violent video games and music were thought to be among the main influences on the killers, some states began to pass anti-bullying laws, some of which included cyber-bullying provisions. The suicide of 13-year-old Megan Meier, who was harassed on Myspace, prompted Missouri to pass anti-harassment laws, though the perpetrators were later acquitted. With the rise of smartphones and the growing popularity of social media such as Facebook and Instagram, along with messaging, online forums, gaming communities, and email, online harassment continued to grow. 18-year-old Jessica Logan committed suicide in 2009 after her boyfriend sent explicit photos of her to teenagers at several high schools and she was then harassed through Myspace, leading her school to adopt anti-harassment policies. Further notable cases in which digital privacy was invaded include the deaths of Tyler Clementi and Amanda Todd. Todd's death prompted Canadian funding for studies on bullying and the passage of cyber-bullying legislation, but concerns about the lack of protection for users were raised, by Todd's mother herself, since the bill allowed companies full access to a user's data.
All U.S. states have now passed laws regarding online harassment. 15% of adolescents aged 12-18 have been subject to cyberbullying, according to a 2017 report conducted by the National Center for Education Statistics and Bureau of Justice. Within the past year, 15.7% of high schoolers were subject to cyberbullying according to the CDC's 2019 Youth Risk Behavior Surveillance System.
Bot accounts
Bots date back to the 1980s, when they appeared on IRC (Internet Relay Chat) and served basic purposes such as reporting the date and time; over time they have expanded to other purposes, such as flagging copyright violations in articles.
Forms of social media such as Twitter, Facebook, and Instagram see prevalent activity from social bots, distinct from IRC bots, which are accounts that are not human and perform autonomous behavior to some degree. Bots, especially those with malicious intent, became most prevalent during the 2016 U.S. presidential election, when both the Trump and Clinton campaigns had millions of bots effectively working on their behalf to influence the election. A subsection of these bots targeted and assaulted certain journalists, causing some to stop reporting on matters because they dreaded further harassment. In the same election, Russian Twitter bots presenting themselves as Midwestern swing-voter Republicans were used to amplify and spread misinformation. In October 2020, data scientist Emilio Ferrara found that 19% of tweets related to the 2016 election were generated by bots. Following the election, a 2017 study estimated that nearly 48 million Twitter accounts are bots. Furthermore, approximately 33% of tweets related to Brexit were found to have been produced by bots. Data indicate that the use of bots has increased since the 2016 election, and AI-driven bots are becoming hard to detect; soon they will be able to emulate human-like behavior, commenting and submitting comments on policy, affecting political debate, election outcomes, and users' perception of the people they interact with online.
Although bots have been used in a negative context when politics are described, many bots have been used to protect against online harassment. For example, since 2020, Facebook researchers have been developing Web-Enabled Simulation (WES) bots that emulate bad human behavior and then engineers use this data to determine the best correctives.
Privacy and location-based services
Increasingly, mobile devices facilitate location tracking, which creates user privacy problems. A user's location and preferences constitute personal information, and their improper use violates that user's privacy. A recent MIT study by de Montjoye et al. showed that four spatio-temporal points (approximate places and times) are enough to uniquely identify 95% of 1.5 million people in a mobility database. The study further shows that these constraints hold even when the resolution of the dataset is low; therefore, even coarse or blurred datasets provide little anonymity.
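The intuition behind this result can be reproduced on synthetic data: pick a few (place, time) points from one person's trace and count how many records in the whole dataset contain all of them. The sketch below is a minimal illustration under assumed toy parameters (1,000 users, 50 points each, invented place and hour-of-week codes), not a reproduction of the MIT study.

```python
import random

# Toy illustration of "unicity": how often a handful of (place, hour-of-week)
# points sampled from one person's trace picks that person out uniquely.
random.seed(0)
N_USERS, N_POINTS, N_PLACES = 1000, 50, 200

traces = {
    user: {(random.randrange(N_PLACES), random.randrange(24 * 7)) for _ in range(N_POINTS)}
    for user in range(N_USERS)
}

def unicity(k: int, trials: int = 500) -> float:
    """Fraction of trials in which k known points match exactly one user."""
    unique = 0
    for _ in range(trials):
        target = random.randrange(N_USERS)
        known = set(random.sample(sorted(traces[target]), k))
        matches = [u for u, trace in traces.items() if known <= trace]
        unique += (len(matches) == 1)
    return unique / trials

for k in (1, 2, 4):
    print(f"{k} known point(s) -> {unicity(k):.0%} uniquely identified")
```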
Several methods to protect user privacy in location-based services have been proposed, including the use of anonymizing servers and blurring of information. Methods to quantify privacy have also been proposed, to calculate the equilibrium between the benefit of providing accurate location information and the drawbacks of risking personal privacy.
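As a rough sketch of the "blurring" idea (sometimes called spatial cloaking), coordinates can be snapped to a coarse grid before they leave the device, so a service only ever sees an approximate position. The grid size and degree-to-kilometre conversion below are simplifying assumptions; real schemes adapt the cell size to the density of nearby users.

```python
def blur_location(lat: float, lon: float, cell_km: float = 5.0) -> tuple[float, float]:
    """Snap coordinates to a coarse grid so a service only sees an approximate position."""
    deg = cell_km / 111.0  # rough km-per-degree conversion; a simplifying assumption
    return round(lat / deg) * deg, round(lon / deg) * deg

# Example: a precise fix is reduced to a ~5 km grid point before being shared.
print(blur_location(48.85837, 2.29448))
```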
Advertising on mobile devices
When the internet was first introduced, it gradually became the predominant advertising medium, displacing newspapers and magazines. With the growth of digital advertising, people began to be tracked using HTTP cookies, and this data was used to target relevant audiences. After the introduction of iPhones and Android devices, data brokers also embedded tracking code within apps for further tracking. The growth of cookie-based tracking into a $350 billion digital industry, focused especially on mobile devices, has made digital privacy a main source of concern for many mobile users, particularly after privacy scandals such as the Cambridge Analytica scandal. Recently, Apple introduced features that prohibit advertisers from tracking a user's data without their consent, such as pop-up notifications that let users decide the extent to which a company can track their behavior. Google has begun to roll out similar features, but concerns have been raised about how a privacy-conscious internet will function if advertisers cannot use data from users as a form of capital. Apple has set a precedent with its stricter crackdown on privacy, especially its pop-up feature, which has made it harder for businesses, especially small businesses advertising on other platforms such as Facebook, to target relevant audiences, since those advertisers no longer have relevant data. Google, by contrast, has remained relatively lax in its crackdown, supporting cookies until at least 2023, until a privacy-conscious internet solution is found.
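The cookie mechanism itself is simple, which is part of why it spread so widely. The sketch below shows only the outline: an ad server sets a long-lived identifier, and the browser returns it on every later page that embeds content from that server, letting visits be linked. The domain and identifier are invented; real ad-tech stacks layer far more on top.

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["uid"] = "a1b2c3d4"                   # hypothetical visitor identifier
cookie["uid"]["domain"] = "ads.example.com"  # a third-party ad domain
cookie["uid"]["max-age"] = 60 * 60 * 24 * 365

print(cookie.output())  # the Set-Cookie header the ad server would send
# Later requests to ads.example.com carry back "Cookie: uid=a1b2c3d4",
# which is what allows visits on different sites to be linked together.
```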
Ethical controversies over location privacy
There have been scandals regarding location privacy. One instance was the AccuWeather scandal, in which it was revealed that AccuWeather was selling users' locational data, collected even when users had opted out of location tracking, to Reveal Mobile, a company that monetizes location data. Internationally, in 2017 a leaky API inside the McDelivery app exposed private data, including home addresses, of 2.2 million users. With the rise of such scandals, many large American technology companies such as Google, Apple, and Facebook have been subjected to hearings and pressure under the U.S. legislative system. In 2011, with the rise of location technology, US Senator Al Franken wrote an open letter to Steve Jobs noting the ability of iPhones and iPads to record and store users' locations in unencrypted files, although Apple denied doing so. The conflict has continued into 2021, a recent example being a court case in which the U.S. state of Arizona found that Google misled its users and stored their locations regardless of their location settings.
Metadata
The ability to do online inquiries about individuals has expanded dramatically over the last decade. Importantly, directly observed behavior, such as browsing logs, search queries, or contents of a public Facebook profile, can be automatically processed to infer secondary information about an individual, such as sexual orientation, political and religious views, race, substance use, intelligence, and personality.
In Australia, the Telecommunications (Interception and Access) Amendment (Data Retention) Act 2015 made a distinction between collecting the contents of messages sent between users and the metadata surrounding those messages.
Protection of privacy on the Internet
Covert collection of personally identifiable information has been identified as a primary concern by the U.S. Federal Trade Commission. Although some privacy advocates recommend the deletion of original and third-party HTTP cookies, Anthony Miyazaki, marketing professor at Florida International University and privacy scholar, warns that the "elimination of third-party cookie use by Web sites can be circumvented by cooperative strategies with third parties in which information is transferred after the Web site's use of original domain cookies." As of December 2010, the Federal Trade Commission is reviewing policy regarding this issue as it relates to behavioral advertising.
Legal right to privacy
Most countries give citizens rights to privacy in their constitutions. Representative examples of this include the Constitution of Brazil, which says "the privacy, private life, honor and image of people are inviolable"; the Constitution of South Africa says that "everyone has a right to privacy"; and the Constitution of the Republic of Korea says "the privacy of no citizen shall be infringed." The Italian Constitution also defines the right to privacy. In most countries whose constitutions do not explicitly describe privacy rights, courts have interpreted their constitutions as intending to grant privacy rights.
Many countries have broad privacy laws outside their constitutions, including Australia's Privacy Act 1988, Argentina's Law for the Protection of Personal Data of 2000, Canada's 2000 Personal Information Protection and Electronic Documents Act, and Japan's 2003 Personal Information Protection Law.
Beyond national privacy laws, there are international privacy agreements. The United Nations Universal Declaration of Human Rights says "No one shall be subjected to arbitrary interference with [their] privacy, family, home or correspondence, nor to attacks upon [their] honor and reputation." The Organisation for Economic Co-operation and Development published its Privacy Guidelines in 1980. The European Union's 1995 Data Protection Directive guides privacy protection in Europe. The 2004 Privacy Framework by the Asia-Pacific Economic Cooperation is a privacy protection agreement for the members of that organization.
Argument against legal protection of privacy
The argument against the legal protection of privacy is most prominent in the US. The landmark US Supreme Court case Griswold v. Connecticut established a constitutional right to privacy. However, some conservative justices do not consider privacy to be a legal right: discussing the 2003 case Lawrence v. Texas (539 U.S. 558), Supreme Court Justice Antonin Scalia did not consider privacy to be a right, and in 2007 Supreme Court Justice Clarence Thomas argued that there is "no general right to privacy" in the U.S. Constitution. Many Republican interest groups and activists want appointed justices to be like Justices Thomas and Scalia because they uphold originalism, which indirectly strengthens the argument against the legal protection of privacy.
Free market vs consumer protection
Approaches to privacy can, broadly, be divided into two categories: free market or consumer protection.
One example of the free market approach is to be found in the voluntary OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data. The principles reflected in the guidelines, free of legislative interference, are analyzed in an article putting them into perspective with concepts of the GDPR put into law later in the European Union.
In a consumer protection approach, in contrast, it is claimed that individuals may not have the time or knowledge to make informed choices, or may not have reasonable alternatives available. In support of this view, Jensen and Potts showed that most privacy policies are above the reading level of the average person.
By country
Australia
The Privacy Act 1988 is administered by the Office of the Australian Information Commissioner. The initial introduction of privacy law in 1998 extended to the public sector, specifically to Federal government departments, under the Information Privacy Principles. State government agencies can also be subject to state-based privacy legislation. This built upon the already existing privacy requirements that applied to telecommunications providers (under Part 13 of the Telecommunications Act 1997), and confidentiality requirements that already applied to banking, legal and patient/doctor relationships.
In 2008 the Australian Law Reform Commission (ALRC) conducted a review of Australian privacy law and produced a report titled "For Your Information". Recommendations were taken up and implemented by the Australian Government via the Privacy Amendment (Enhancing Privacy Protection) Bill 2012.
In 2015, the Telecommunications (Interception and Access) Amendment (Data Retention) Act 2015 was passed, to some controversy over its human rights implications and the role of media.
European Union
Although there are comprehensive regulations for data protection in the European Union, one study finds that, despite the laws, there is a lack of enforcement, in that no institution feels responsible for controlling the parties involved and enforcing their laws. The European Union also champions the Right to be Forgotten concept and supports its adoption by other countries.
India
The Aadhaar project, introduced in 2009, associated all 1.2 billion Indians with a 12-digit biometric-secured number. Aadhaar has uplifted the poor in India by providing them with a form of identity and preventing fraud and waste of resources, as the government previously could not allocate resources to their intended recipients because of identification problems. With the rise of Aadhaar, India has debated whether it violates an individual's privacy and whether any organization should have access to an individual's digital profile, as the Aadhaar card became linked to other economic sectors, allowing both public and private bodies to track individuals. Aadhaar databases have also suffered security attacks, and the project was met with mistrust regarding the safety of the social protection infrastructure. In 2017, when Aadhaar was challenged, the Indian Supreme Court declared privacy a human right but postponed the decision on the constitutionality of Aadhaar to another bench. In September 2018, the Indian Supreme Court determined that the Aadhaar project did not violate the legal right to privacy.
United Kingdom
In the United Kingdom, it is not possible to bring an action for invasion of privacy. An action may be brought under another tort (usually breach of confidence) and privacy must then be considered under EC law. In the UK, it is sometimes a defence that disclosure of private information was in the public interest. There is, however, the Information Commissioner's Office (ICO), an independent public body set up to promote access to official information and protect personal information. They do this by promoting good practice, ruling on eligible complaints, giving information to individuals and organisations, and taking action when the law is broken. The relevant UK laws include: Data Protection Act 1998; Freedom of Information Act 2000; Environmental Information Regulations 2004; Privacy and Electronic Communications Regulations 2003. The ICO has also provided a "Personal Information Toolkit" online which explains in more detail the various ways of protecting privacy online.
United States
Although the US Constitution does not explicitly include the right to privacy, individual as well as locational privacy are implicitly granted by the Constitution under the 4th Amendment. The Supreme Court of the United States has found that other guarantees have "penumbras" that implicitly grant a right to privacy against government intrusion, for example in Griswold v. Connecticut. In the United States, the right of freedom of speech granted in the First Amendment has limited the effects of lawsuits for breach of privacy. Privacy is regulated in the US by the Privacy Act of 1974, and various state laws. The Privacy Act of 1974 only applies to Federal agencies in the executive branch of the Federal government. Certain privacy rights have been established in the United States via legislation such as the Children's Online Privacy Protection Act (COPPA), the Gramm–Leach–Bliley Act (GLB), and the Health Insurance Portability and Accountability Act (HIPAA).
Unlike the EU and most EU-member states, the US does not recognize the right to privacy of non-US citizens. The UN's Special Rapporteur on the right to privacy, Joseph A. Cannataci, criticized this distinction.
Conceptions of privacy
Privacy as contextual integrity
The theory of contextual integrity defines privacy as an appropriate information flow, where appropriateness, in turn, is defined as conformance with legitimate, informational norms specific to social contexts.
Right to be let alone
In 1890, the United States jurists Samuel D. Warren and Louis Brandeis wrote "The Right to Privacy", an article in which they argued for the "right to be let alone", using that phrase as a definition of privacy. This concept relies on the theory of natural rights and focuses on protecting individuals. The article was a response to recent technological developments, such as photography, and to sensationalist journalism, also known as yellow journalism.
There is extensive commentary over the meaning of being "let alone", and among other ways, it has been interpreted to mean the right of a person to choose seclusion from the attention of others if they wish to do so, and the right to be immune from scrutiny or being observed in private settings, such as one's own home. Although this early vague legal concept did not describe privacy in a way that made it easy to design broad legal protections of privacy, it strengthened the notion of privacy rights for individuals and began a legacy of discussion on those rights in the US.
Limited access
Limited access refers to a person's ability to participate in society without having other individuals and organizations collect information about them.
Various theorists have imagined privacy as a system for limiting access to one's personal information. Edwin Lawrence Godkin wrote in the late 19th century that "nothing is better worthy of legal protection than private life, or, in other words, the right of every man to keep his affairs to himself, and to decide for himself to what extent they shall be the subject of public observation and discussion." Adopting an approach similar to the one presented by Ruth Gavison nine years earlier, Sissela Bok said that privacy is "the condition of being protected from unwanted access by others—either physical access, personal information, or attention."
Control over information
Control over one's personal information is the concept that "privacy is the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others." Generally, a person who has consensually formed an interpersonal relationship with another person is not considered "protected" by privacy rights with respect to the person they are in the relationship with. Charles Fried said that "Privacy is not simply an absence of information about us in the minds of others; rather it is the control we have over information about ourselves." Nevertheless, in the era of big data, control over information is under pressure.
States of privacy
Alan Westin defined four states—or experiences—of privacy: solitude, intimacy, anonymity, and reserve. Solitude is a physical separation from others. Intimacy is a "close, relaxed, and frank relationship between two or more individuals" that results from the seclusion of a pair or small group of individuals. Anonymity is the "desire of individuals for times of 'public privacy.'" Lastly, reserve is the "creation of a psychological barrier against unwanted intrusion"; this creation of a psychological barrier requires others to respect an individual's need or desire to restrict communication of information concerning himself or herself.
In addition to the psychological barrier of reserve, Kirsty Hughes identified three more kinds of privacy barriers: physical, behavioral, and normative. Physical barriers, such as walls and doors, prevent others from accessing and experiencing the individual. (In this sense, "accessing" an individual includes accessing personal information about him or her.) Behavioral barriers communicate to others—verbally, through language, or non-verbally, through personal space, body language, or clothing—that an individual does not want them to access or experience him or her. Lastly, normative barriers, such as laws and social norms, restrain others from attempting to access or experience an individual.
Secrecy
Privacy is sometimes defined as an option to have secrecy. Richard Posner said that privacy is the right of people to "conceal information about themselves that others might use to their disadvantage".
In various legal contexts, when privacy is described as secrecy, a conclusion is reached: if privacy is secrecy, then rights to privacy do not apply for any information which is already publicly disclosed. When privacy-as-secrecy is discussed, it is usually imagined to be a selective kind of secrecy in which individuals keep some information secret and private while they choose to make other information public and not private.
Personhood and autonomy
Privacy may be understood as a necessary precondition for the development and preservation of personhood. Jeffrey Reiman defined privacy in terms of a recognition of one's ownership of their physical and mental reality and a moral right to self-determination. Through the "social ritual" of privacy, or the social practice of respecting an individual's privacy barriers, the social group communicates to developing children that they have exclusive moral rights to their bodies — in other words, moral ownership of their body. This entails control over both active (physical) and cognitive appropriation, the former being control over one's movements and actions and the latter being control over who can experience one's physical existence and when.
Alternatively, Stanley Benn defined privacy in terms of a recognition of oneself as a subject with agency—as an individual with the capacity to choose. Privacy is required to exercise choice. Overt observation makes the individual aware of himself or herself as an object with a "determinate character" and "limited probabilities." Covert observation, on the other hand, changes the conditions in which the individual is exercising choice without his or her knowledge and consent.
In addition, privacy may be viewed as a state that enables autonomy, a concept closely connected to that of personhood. According to Joseph Kufer, an autonomous self-concept entails a conception of oneself as a "purposeful, self-determining, responsible agent" and an awareness of one's capacity to control the boundary between self and other—that is, to control who can access and experience him or her and to what extent. Furthermore, others must acknowledge and respect the self's boundaries—in other words, they must respect the individual's privacy.
The studies of psychologists such as Jean Piaget and Victor Tausk show that, as children learn that they can control who can access and experience them and to what extent, they develop an autonomous self-concept. In addition, studies of adults in particular institutions, such as Erving Goffman's study of "total institutions" such as prisons and mental institutions, suggest that systemic and routinized deprivations or violations of privacy deteriorate one's sense of autonomy over time.
Self-identity and personal growth
Privacy may be understood as a prerequisite for the development of a sense of self-identity. Privacy barriers, in particular, are instrumental in this process. According to Irwin Altman, such barriers "define and limit the boundaries of the self" and thus "serve to help define [the self]." This control primarily entails the ability to regulate contact with others. Control over the "permeability" of the self's boundaries enables one to control what constitutes the self and thus to define what is the self.
In addition, privacy may be seen as a state that fosters personal growth, a process integral to the development of self-identity. Hyman Gross suggested that, without privacy—solitude, anonymity, and temporary releases from social roles—individuals would be unable to freely express themselves and to engage in self-discovery and self-criticism. Such self-discovery and self-criticism contributes to one's understanding of oneself and shapes one's sense of identity.
Intimacy
In a way analogous to how the personhood theory imagines privacy as some essential part of being an individual, the intimacy theory imagines privacy to be an essential part of the way that humans form strengthened, or intimate, relationships with other humans. Because part of human relationships includes individuals volunteering to self-disclose most if not all personal information, this is one area in which privacy does not apply.
James Rachels advanced this notion by writing that privacy matters because "there is a close connection between our ability to control who has access to us and to information about us, and our ability to create and maintain different sorts of social relationships with different people." Protecting intimacy is at the core of the concept of sexual privacy, which law professor Danielle Citron argues should be protected as a unique form of privacy.
Physical privacy
Physical privacy could be defined as preventing "intrusions into one's physical space or solitude." An example of the legal basis for the right to physical privacy is the U.S. Fourth Amendment, which guarantees "the right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures".
Physical privacy may be a matter of cultural sensitivity, personal dignity, and/or shyness. There may also be concerns about safety, if, for example one is wary of becoming the victim of crime or stalking.
Organizational
Government agencies, corporations, groups/societies and other organizations may desire to keep their activities or secrets from being revealed to other organizations or individuals, adopting various security practices and controls in order to keep private information confidential. Organizations may seek legal protection for their secrets. For example, a government administration may be able to invoke executive privilege or declare certain information to be classified, or a corporation might attempt to protect valuable proprietary information as trade secrets.
Privacy self-synchronization
Privacy self-synchronization is a hypothesized mode by which the stakeholders of an enterprise privacy program spontaneously contribute collaboratively to the program's maximum success. The stakeholders may be customers, employees, managers, executives, suppliers, partners or investors. When self-synchronization is reached, the model states that the personal interests of individuals toward their privacy is in balance with the business interests of enterprises who collect and use the personal information of those individuals.
An individual right
David Flaherty believes networked computer databases pose threats to privacy. He develops 'data protection' as an aspect of privacy, which involves "the collection, use, and dissemination of personal information". This concept forms the foundation for fair information practices used by governments globally. Flaherty forwards an idea of privacy as information control, "[i]ndividuals want to be left alone and to exercise some control over how information about them is used".
Richard Posner and Lawrence Lessig focus on the economic aspects of personal information control. Posner criticizes privacy for concealing information, which reduces market efficiency. For Posner, employment is selling oneself in the labour market, which he believes is like selling a product. Any 'defect' in the 'product' that is not reported is fraud. For Lessig, privacy breaches online can be regulated through code and law. Lessig claims "the protection of privacy would be stronger if people conceived of the right as a property right", and that "individuals should be able to control information about themselves".
A collective value and a human right
There have been attempts to establish privacy as one of the fundamental human rights, whose social value is an essential component in the functioning of democratic societies.
Priscilla Regan believes that individual concepts of privacy have failed philosophically and in policy. She supports a social value of privacy with three dimensions: shared perceptions, public values, and collective components. Shared ideas about privacy allow freedom of conscience and diversity in thought. Public values guarantee democratic participation, including freedoms of speech and association, and limit government power. Collective elements describe privacy as a collective good that cannot be divided. Regan's goal is to strengthen privacy claims in policy making: "if we did recognize the collective or public-good value of privacy, as well as the common and public value of privacy, those advocating privacy protections would have a stronger basis upon which to argue for its protection".
Leslie Regan Shade argues that the human right to privacy is necessary for meaningful democratic participation, and ensures human dignity and autonomy. Privacy depends on norms for how information is distributed, and if this is appropriate. Violations of privacy depend on context. The human right to privacy has precedent in the United Nations Declaration of Human Rights: "Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers." Shade believes that privacy must be approached from a people-centered perspective, and not through the marketplace.
Dr. Eliza Watt, Westminster Law School, University of Westminster in London, UK, proposes application of the International Human Right Law (IHRL) concept of “virtual control” as an approach to deal with extraterritorial mass surveillance by state intelligence agencies. Dr. Watt envisions the “virtual control” test, understood as a remote control over the individual's right to privacy of communications, where privacy is recognized under the ICCPR, Article 17. This, she contends, may help to close the normative gap that is being exploited by nation states.
Privacy paradox and economic valuation
The privacy paradox is a phenomenon in which online users state that they are concerned about their privacy but behave as if they were not. While this term was coined as early as 1998, it wasn't used in its current popular sense until the year 2000.
Susan B. Barnes similarly used the term privacy paradox to refer to the ambiguous boundary between private and public space on social media. When compared to adults, young people tend to disclose more information on social media. However, this does not mean that they are not concerned about their privacy. Susan B. Barnes gave a case in her article: in a television interview about Facebook, a student addressed her concerns about disclosing personal information online. However, when the reporter asked to see her Facebook page, she put her home address, phone numbers, and pictures of her young son on the page.
The privacy paradox has been studied and described in different research settings. Several studies have shown this inconsistency between privacy attitudes and behavior among online users. However, an increasing number of studies have also shown that there are significant and at times large correlations between privacy concerns and information-sharing behavior, which speaks against the privacy paradox. A meta-analysis of 166 studies published on the topic reported an overall small but significant relation between privacy concerns and information sharing or the use of privacy protection measures. So although there are several individual instances or anecdotes where behavior appears paradoxical, on average privacy concerns and privacy behaviors seem to be related, and several findings question the general existence of the privacy paradox.
However, the relationship between concerns and behavior is likely small, and there are several arguments that can explain why that is the case. According to the attitude-behavior gap, attitudes and behaviors are, in general and in most cases, not closely related. A main explanation for the partial mismatch in the context of privacy specifically is that users lack awareness of the risks and of the degree of protection. Users may underestimate the harm of disclosing information online. On the other hand, some researchers argue that the mismatch comes from a lack of technology literacy and from the design of sites. For example, users may not know how to change their default settings even though they care about their privacy. Psychologists Sonja Utz and Nicole C. Krämer in particular pointed out that the privacy paradox can occur when users must trade off between their privacy concerns and impression management.
Research on irrational decision making
A study conducted by Susanne Barth and Menno D.T. de Jong demonstrates that decision making takes place on an irrational level, especially when it comes to mobile computing. Mobile applications in particular are often built in such a way that they spur decision making that is fast and automatic, without assessment of risk factors. Protection measures against these unconscious mechanisms are often difficult to access while downloading and installing apps. Even with mechanisms in place to protect user privacy, users may not have the knowledge or experience to enable these mechanisms.
Users of mobile applications generally have very little knowledge of how their personal data are used. When they decide which application to download, they typically do not rely on the information provided by application vendors regarding the collection and use of personal data. Other research finds that users are much more likely to be swayed by cost, functionality, design, ratings, reviews and number of downloads than requested permissions, regardless of how important users may claim permissions to be when asked.
A study by Zafeiropoulou specifically examined location data, a form of personal information increasingly used by mobile applications. Their survey also found evidence supporting the existence of the privacy paradox for location data. Survey data on privacy risk perception in relation to the use of privacy-enhancing technologies indicate that a high perception of privacy risk is an insufficient motivator for people to adopt privacy-protecting strategies, even while knowing they exist. It also raises the question of what the value of data is, as there is no equivalent of a stock market for personal information.
The economic valuation of privacy
The willingness to incur a privacy risk is suspected to be driven by a complex array of factors including risk attitudes, personal value for private information, and general attitudes to privacy (which may be derived from surveys). One experiment aiming to determine the monetary value of several types of personal information indicated relatively low evaluations of personal information.
Information asymmetry
Users are not always given the tools to live up to their professed privacy concerns, and they are sometimes willing to trade private information for convenience, functionality, or financial gain, even when the gains are very small. One study suggests that people think their browser history is worth the equivalent of a cheap meal. Another finds that attitudes to privacy risk do not appear to depend on whether it is already under threat or not.
Inherent necessity for privacy violation
It is suggested by Andréa Belliger and David J. Krieger that the privacy paradox should not be considered a paradox, but more of a privacy dilemma, for services that cannot exist without the user sharing private data. However, the general public is typically not given the choice whether to share private data or not, making it difficult to verify any claim that a service truly cannot exist without sharing private data.
Privacy Calculus
The privacy calculus model posits that two factors determine privacy behavior, namely privacy concerns (or perceived risks) and expected benefits. By now, the privacy calculus has been supported by several studies, and it stands in direct contrast to the privacy paradox. Both perspectives can be reconciled if they are understood from a more moderate position: behavior is neither completely paradoxical nor completely logical, and the consistency between concerns and behavior depends on users, situations, and contexts.
Actions which reduce privacy
As with other conceptions of privacy, there are various ways to discuss what kinds of processes or actions remove, challenge, lessen, or attack privacy. In 1960 legal scholar William Prosser created the following list of activities which can be remedied with privacy protection:
Intrusion into a person's private space, own affairs, or wish for solitude
Public disclosure of personal information about a person which could be embarrassing for them to have revealed
Promoting access to information about a person which could lead the public to have incorrect beliefs about them
Encroaching someone's personality rights, and using their likeness to advance interests which are not their own
From 2004 to 2008, building from this and other historical precedents, Daniel J. Solove presented another classification of actions which are harmful to privacy, including collection of information which is already somewhat public, processing of information, sharing information, and invading personal space to get private information.
Collecting information
In the context of harming privacy, information collection means gathering whatever information can be obtained by doing something to obtain it. Examples include surveillance and interrogation. Another example is how consumers and marketers collect information in the business context through facial recognition, which has recently raised privacy concerns. There is currently research being done related to this topic.
Aggregating information
It can happen that privacy is not harmed when information is available, but that the harm can come when that information is collected as a set, then processed together in such a way that the collective reporting of pieces of information encroaches on privacy. Actions in this category which can lessen privacy include the following:
data aggregation, which is connecting many related but unconnected pieces of information
identification, which can mean breaking the de-identification of items of data by putting it through a de-anonymization process, thus making facts which were intended to not name particular people to become associated with those people
insecurity, such as lack of data security, which includes when an organization is supposed to be responsible for protecting data instead suffers a data breach which harms the people whose data it held
secondary use, which is when people agree to share their data for a certain purpose, but then the data is used in ways without the data donors’ informed consent
exclusion is the use of a person's data without any attempt to give the person an opportunity to manage the data or participate in its usage
Information dissemination
Information dissemination is an attack on privacy when information which was shared in confidence is shared or threatened to be shared in a way that harms the subject of the information.
There are various examples of this. Breach of confidentiality is when one entity promises to keep a person's information private, then breaks that promise. Disclosure is making information about a person more accessible in a way that harms the subject of the information, regardless of how the information was collected or the intent of making it available. Exposure is a special type of disclosure in which the information disclosed is emotional to the subject or taboo to share, such as revealing their private life experiences, their nudity, or perhaps private body functions. Increased accessibility means advertising the availability of information without actually distributing it, as in the case of doxxing. Blackmail is making a threat to share information, perhaps as part of an effort to coerce someone. Appropriation is an attack on the personhood of someone, and can include using the value of someone's reputation or likeness to advance interests which are not those of the person being appropriated. Distortion is the creation of misleading information or lies about a person.
Invasion
Invasion of privacy, a subset of expectation of privacy, is a different concept from the collecting, aggregating, and disseminating information because those three are a misuse of available data, whereas invasion is an attack on the right of individuals to keep personal secrets. An invasion is an attack in which information, whether intended to be public or not, is captured in a way that insults the personal dignity and right to private space of the person whose data is taken.
Intrusion
An intrusion is any unwanted entry into a person's private personal space and solitude for any reason, regardless of whether data is taken during that breach of space. Decisional interference is when an entity somehow injects itself into the personal decision-making process of another person, perhaps to influence that person's private decisions but in any case doing so in a way that disrupts the private personal thoughts that a person has.
Examples of invasions of privacy
In 2019, contract workers for Apple and Amazon reported being forced to continue listening to "intimate moments" captured on the companies' smart speakers in order to improve the quality of their automated speech recognition software.
Techniques to improve privacy
Similarly to actions which reduce privacy, there are multiple angles of privacy and multiple techniques to improve them to varying extents. When actions are done at an organizational level, they may be referred to as cybersecurity.
Encryption
Individuals can encrypt e-mails by enabling one of two encryption protocols: S/MIME, which is built into e-mail clients such as Apple Mail and Outlook and is thus most common, or PGP. The Signal messaging app, which encrypts messages so that only the recipient can read them, is notable for being available on many mobile devices and implementing a form of perfect forward secrecy.
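Both S/MIME and PGP rest on public-key cryptography: anyone can encrypt to a recipient's published public key, but only the holder of the matching private key can decrypt. The sketch below illustrates that underlying principle with the third-party cryptography package; it is not the actual OpenPGP or S/MIME message format, which adds hybrid encryption, signatures, and key management on top.

```python
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# The recipient publishes the public key; only the private key can decrypt.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = public_key.encrypt(b"meet at noon", oaep)
print(private_key.decrypt(ciphertext, oaep))  # b'meet at noon'
```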
Anonymity
Anonymizing proxies or anonymizing networks like I2P and Tor can be used to prevent Internet service providers (ISPs) from knowing which sites one visits and with whom one communicates, by hiding IP addresses and location, but they do not necessarily protect a user from third-party data mining. Anonymizing proxies are built into a user's device, in comparison to a Virtual Private Network (VPN), for which users must download software. Using a VPN hides all data and connections exchanged between servers and a user's computer, keeping the user's online data unshared and secure and providing a barrier between the user and their ISP; it is especially important to use when connected to public Wi-Fi. However, users should understand that all their data then flows through the VPN's servers rather than the ISP's. Users should decide for themselves whether they wish to use an anonymizing proxy or a VPN.
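In practice, applications reach an anonymizing network such as Tor by sending their traffic through a local proxy run by the Tor client. The sketch below shows one common pattern, assuming a Tor client is already running on its default SOCKS port and that the requests library was installed with SOCKS support; it is an illustration, not a complete anonymity setup.

```python
import requests

# Assumes a local Tor client listening on its default SOCKS port (9050) and
# requests installed with SOCKS support: pip install "requests[socks]".
# "socks5h" makes DNS resolution happen inside the proxy as well.
proxies = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}
response = requests.get("https://check.torproject.org/", proxies=proxies, timeout=30)
print(response.status_code)
```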
In a more non-technical sense, using incognito or private browsing mode will prevent a user's computer from saving history, Internet files, and cookies, though the ISP will still have access to the user's search history. Using anonymous search engines will not share a user's history or clicks and can obstruct ad tracking.
User empowerment
Concrete solutions on how to solve paradoxical behavior still do not exist. Many efforts are focused on processes of decision making, like restricting data access permissions during application installation, but this would not completely bridge the gap between user intention and behavior. Susanne Barth and Menno D.T. de Jong believe that for users to make more conscious decisions on privacy matters, the design needs to be more user-oriented.
Other security measures
In a social sense, simply limiting the amount of personal information that users post on social media can increase their security, which in turn makes it harder for criminals to perform identity theft. Moreover, creating a set of complex passwords and using two-factor authentication can make users less susceptible to having their accounts compromised when data leaks occur. Furthermore, users can protect their digital privacy by using anti-virus software, which can block harmful threats such as a pop-up that scans for personal information on a user's computer.
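As a small practical sketch of the "complex passwords" advice, a password can be generated from a cryptographically secure random source rather than chosen by hand; the length and character set below are arbitrary illustrative choices.

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Return a random password drawn from letters, digits and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a different 16-character string on every run
```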
Legal methods
Although there are laws that promote the protection of users, in some countries, like the U.S., there is no federal digital privacy law, and privacy protections are essentially limited by currently enacted privacy laws. To further their privacy, users can contact their representatives and let them know that privacy is a main concern, which in turn increases the likelihood of further privacy laws being enacted.
See also
Civil liberties
Digital identity
Global surveillance
Identity theft in the United States
Open data
Open access
Privacy-enhancing technologies
Privacy policy
Solitude
Transparency
Wikipedia's privacy policy – Wikimedia Foundation
External links
Glenn Greenwald: Why privacy matters. Video on YouTube, provided by TED. Published 10 October 2014.
International Privacy Index world map, The 2007 International Privacy Ranking, Privacy International (London).
"Privacy" entry in the Stanford Encyclopedia of Philosophy.
Privacy law
Human rights
Identity management
Digital rights
Civil rights and liberties
4737178
https://en.wikipedia.org/wiki/Korg%20Wavestation
Korg Wavestation
The Korg Wavestation is a vector synthesis synthesizer first produced in the early 1990s and later re-released as a software synthesizer in 2004. Its primary innovation was Wave Sequencing, a method of multi-timbral sound generation in which different PCM waveform data are played successively, resulting in continuously evolving sounds. The Wavestation's "Advanced Vector Synthesis" sound architecture resembled early vector synths such as the Sequential Circuits Prophet VS.
Designed as a "pure" synthesizer rather than a music workstation, it lacked an on-board song sequencer, yet the Wavestation, unlike any synthesizer prior to its release, was capable of generating complex, lush timbres and rhythmic sequences that sounded like a complete soundtrack by pressing only one key. Keyboard Magazine readers gave the Wavestation its "Hardware Innovation of the Year" award, and in 1995 Keyboard listed it as one of the "20 Instruments that Shook the World."
The Wavestation lineup consisted of four models: the Wavestation and Wavestation EX keyboards, and the Wavestation A/D and Wavestation SR rackmount sound modules.
Design concept
The two primary synthesis concepts designed into the Wavestation were Wave Sequencing and vector synthesis, the latter of which Korg dubbed "Advanced Vector Synthesis". Although the Korg Wavestation was the first keyboard to use Wave Sequencing, its roots can be traced back to earlier variations of wavetable-lookup synthesis, including the multiple-wavetable synthesis of the PPG Wave, produced by Palm Products GmbH in the early 1980s, and the vector synthesis realized in the Prophet VS by Sequential Circuits, Inc. in 1986 and the Kawai K1 in 1988.
Wave Sequencing improved on the vector synthesis of the Prophet VS by incorporating the ability to crossfade up to 255 waveforms, rather than only four. Moreover, a wave sequence can be programmed to "jump" to any PCM wave in ROM memory, whereas similar synths were designed to move sequentially through the wavetable. By combining wave sequencing with vector synthesis—the process of mixing and morphing between multiple waveforms of audio samples—the Wavestation differed from other sample-based synthesizers of the digital era.
Wave sequencing
A wave sequence is a programmed list of PCM waves playing in succession. Each step in the wave sequence can have a different duration, pitch, fine tuning, level and crossfade amount. Additionally, wave sequences can be looped (forward or forward/backward directions) to play indefinitely or for a finite duration; they may also be synchronized to the Wavestation's internal clock (at a user-adjustable tempo) or to MIDI clock signals from a sequencer. The result is a continuously changing sound, producing either a smooth blend of crossfaded waves, or semi-arpeggiated and rhythmic sequences, or a combination of both. In a Patch, different wave sequences can be assigned to each of the four oscillators, thus the Wavestation is capable of generating four distinct wave sequences playing simultaneously during a single note. In Performance mode, up to thirty-two discrete wave sequences can be played at the same time by layering eight 4-voice patches, although the actual number of playable wave sequences may be less because an additional oscillator is required to execute a crossfade.
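As a rough illustration of the data involved, the sketch below models a wave sequence as a list of steps with per-step duration, pitch, level, and crossfade, and walks through it for the length of a held note. The step values and wave names are invented, and this is not Korg's internal format; it only shows the shape of the idea.

```python
from dataclasses import dataclass

@dataclass
class WaveStep:
    wave: str           # name of a PCM wave (hypothetical identifiers)
    duration_ms: int    # how long the step sounds
    semitones: int      # coarse pitch offset for the step
    level: float        # step volume, 0.0-1.0
    crossfade_ms: int   # overlap into the next step

# A short looping sequence with made-up values: a bell, a noise hit, a choir pad.
sequence = [
    WaveStep("VS Bell", 250, 0, 1.0, 40),
    WaveStep("Noise Hit", 125, -12, 0.8, 10),
    WaveStep("Choir", 500, 7, 0.9, 120),
]

def play(note_length_ms: int, seq: list[WaveStep], loop: bool = True):
    """Yield (start_time_ms, step) pairs for as long as the note is held."""
    t = 0
    while True:
        for step in seq:
            if t >= note_length_ms:
                return
            yield t, step
            t += step.duration_ms
        if not loop:
            return

for start, step in play(1000, sequence):
    print(f"{start:4d} ms  {step.wave:<9}  {step.semitones:+d} st, xfade {step.crossfade_ms} ms")
```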
Vector synthesis
Simply put, vector synthesis is dynamic timbre control over two or more voices (oscillators). On the Wavestation, vector synthesis can be applied to any two- or four-oscillator patch. The volume blend (or mix ratio) between oscillators is varied over time via a dedicated mix envelope, in real time via the front panel's vector joystick, or via other controllers such as LFOs, aftertouch, and MIDI.
The mix envelope for a two-oscillator patch structure is arranged on a horizontal line or axis, and is interpreted as one-dimensional vector synthesis. Two dimensional vector synthesis requires a four-oscillator structure, with oscillators A & C arranged on the horizontal X axis and oscillators B & D on the vertical Y axis. A patch's mix envelope can be looped for a finite or indefinite amount of time as well as modulated by controllers. Moving the joystick whilst playing overrides the pre-programmed mix envelope, giving the user dynamic control over the timbre of the sound.
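The two-dimensional case can be pictured as a small mixing function: the joystick's X position balances oscillators A and C, the Y position balances B and D, and the result is four levels. The sketch below is illustrative arithmetic only; the axis orientation and scaling law are assumptions, not Korg's documented algorithm.

```python
def vector_mix(x: float, y: float) -> dict[str, float]:
    """
    Derive relative levels for oscillators A-D from a joystick position
    (x, y each in -1.0..+1.0), with A/C on the X axis and B/D on the Y axis.
    One simple way to turn two axes into four levels; the Wavestation's
    exact mixing law may differ.
    """
    a = (1.0 - x) / 2.0   # assumed left
    c = (1.0 + x) / 2.0   # assumed right
    b = (1.0 + y) / 2.0   # assumed top
    d = (1.0 - y) / 2.0   # assumed bottom
    total = a + b + c + d
    return {name: level / total for name, level in zip("ABCD", (a, b, c, d))}

print(vector_mix(0.0, 0.0))   # centre position: an even blend of all four
print(vector_mix(1.0, 0.0))   # pushed right: weighted toward oscillator C
```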
Features and specifications
The internal synthesis architecture was based on the "AI Synthesis" system used in Korg's previous M and T-series synthesizers. The Wavestation offered 32-voice polyphony, up to four digital oscillators per patch, with a non-resonant low-pass filter and an amplifier block for each oscillator. Modulators, LFOs and envelope generators were offered as control sources for those blocks. The effects section contained two DSP blocks capable of a wide range of processing algorithms, such as reverb, delay, chorus, flanger, phaser, etc.
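The per-voice signal path described above (oscillator into a non-resonant low-pass filter into an amplifier shaped by an envelope) can be sketched in a few lines of generic DSP. The code below is a stand-in illustration only, not Korg's AI Synthesis engine; the sample rate, filter, and envelope are placeholder assumptions.

```python
import math

SAMPLE_RATE = 32000  # assumed rate, for the sketch only

def one_pole_lowpass(samples, cutoff_hz, sample_rate=SAMPLE_RATE):
    """A non-resonant one-pole low-pass, standing in for the per-oscillator filter."""
    a = math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    y, out = 0.0, []
    for s in samples:
        y = (1.0 - a) * s + a * y
        out.append(y)
    return out

def render_voice(freq=220.0, seconds=0.05, cutoff_hz=1200.0):
    """One oscillator -> low-pass filter -> amplifier envelope, as a list of floats."""
    n = int(SAMPLE_RATE * seconds)
    osc = [math.sin(2.0 * math.pi * freq * i / SAMPLE_RATE) for i in range(n)]  # stand-in wave
    filtered = one_pole_lowpass(osc, cutoff_hz)
    attack = int(0.01 * SAMPLE_RATE)
    env = [min(1.0, i / attack) for i in range(n)]  # bare-bones attack ramp
    return [f * e for f, e in zip(filtered, env)]

samples = render_voice()
print(len(samples), max(samples))
```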
Memory allocation
Similar in structure to the Ensoniq VFX and Korg's M1, the Wavestation's top level of sound control is the Performance, which organizes up to eight Patches (parts) and two independent effect processors. A Performance also controls keyboard zoning, MIDI channel assignment, velocity switching, and other parameters. Each Bank contains 50 Performances.
Producing the synthesizer voices are Patches, the middle tier in the programming hierarchy. A patch consists of 1, 2, or 4 digital oscillators (A, B, C, and D). Tone generation is achieved by assigning any of the 20-bit PCM samples and single-cycle waveforms or a wave sequence to an oscillator. Each oscillator has its own digital filter, amplifier, amp envelope, general purpose envelope, two LFOs, and numerous modulation routings. The mix envelope for vector synthesis is also found at the Patch level. There are 35 Patches per Bank.
Wave Sequences are at the bottom tier of the Wavestation's programming structure. The Wavestation treats these as if they were discrete PCM waveforms when assigning them to oscillators in a Patch, although a wave sequence itself is created from a list of PCM waveforms in ROM or Card memory. The maximum number of steps per wave sequence is 255, and the maximum number of steps allocated per Bank is 500. It is therefore possible to exhaust the step memory in a Bank with two very long wave sequences (two 255-step sequences would require 510 steps, exceeding the 500-step allocation). 32 wave sequences are available per Bank.
The Wavestation's Multimode organizes up to 16 Performances (one per MIDI channel) and two effects into a Multiset, which allows for multi-timbral reception from a MIDI sequencer or master keyboard controller. Multisets have two major drawbacks. First, a single, complex Performance may use up all of the polyphony in the Wavestation with only one or two keys depressed, so multiple complex Performances would result in extreme voice-stealing. The other drawback is that effects are vital to the overall sound of the Wavestation, but a Multiset cannot have 32 effects. So it ignores the original effects in Performances and assigns two new effects for use with all 16 Performances. Since a Performance can also transmit and receive multi-timbrally on eight parts, Multisets are generally superfluous.
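The hierarchy described above (Bank, Performance, Patch, Wave Sequence) can be summarized as a few nested data structures. The sketch below follows the limits given in the text; the field names, types, and defaults are illustrative assumptions, not Korg's actual data format.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Union

@dataclass
class WaveSequence:
    steps: List[dict]                        # up to 255 steps; 500 steps shared per Bank

@dataclass
class Oscillator:
    source: Union[int, WaveSequence]         # a ROM PCM wave number or a wave sequence
    # each oscillator also carries its own filter, amp, envelopes and two LFOs

@dataclass
class Patch:
    oscillators: List[Oscillator]            # 1, 2 or 4 oscillators (A-D)
    mix_envelope: Optional[dict] = None      # drives vector synthesis

@dataclass
class Performance:
    parts: List[Patch] = field(default_factory=list)   # up to 8 parts, plus zoning etc.
    effects: tuple = (None, None)                       # two effect processors

@dataclass
class Bank:
    performances: List[Performance] = field(default_factory=list)    # 50 per Bank
    patches: List[Patch] = field(default_factory=list)               # 35 per Bank
    wave_sequences: List[WaveSequence] = field(default_factory=list) # 32 per Bank
```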
Models
Wavestation (1990) – The first Wavestation keyboard to reach the market, it premiered the vector synthesis and wave sequencing concepts under the Korg brand. Its 2 MB soundset was synth-oriented, lacking acoustic piano and drum sounds and relying instead on sampled waveforms from classic synthesizers of the 1980s, most of the Prophet VS waveforms, and numerous attack transients and instrument samples from Korg's sample library. It could take Korg's proprietary PCM and RAM expansion cards. The user interface comprised a 64×240 backlit graphical LCD with a soft-key menu system (the buttons under the display), a data entry dial similar to that used on Roland's Alpha Juno keyboards, a numeric keypad, and other function buttons. The player controls comprised a 61-key semi-weighted keyboard, pitch and modulation wheels, and the vector joystick. The Wavestation received much critical acclaim, including Keyboard Magazine's "Hardware Innovation of the Year."
Wavestation EX (1991) – Identical in form to the original Wavestation keyboard, the EX was created by Korg in response to player feedback and criticism. It doubled the ROM to 4 MB by adding 119 new samples (most notably piano, drums, and the remaining Prophet VS waves) and added eight new digital effects. Bugs in the operating system were also fixed (though several remained). Those who had purchased an original Wavestation could buy the EXK-WS upgrade kit to convert their keyboards to the EX version. The iconic Macintosh start-up sound was generated on a Wavestation EX by Apple sound designer Jim Reekes.
Wavestation A/D (1991) – The first rackmount version of the Wavestation technology. Korg replaced the large joystick with a smaller version, retained the same display as the keyboard versions, and added an additional RAM bank. A unique feature was its analog inputs, capable of accepting guitar, mic, and line-level signals; these allowed the effect processors to treat those signals in real time (particularly useful with the vocoders among the new EX effects). All of the keyboard's front panel buttons also survived the transition, making the programming process identical to the original Wavestation. The A/D inputs were also an option when creating wave sequences, incorporating the input signal into the synthesis engine in real time.
Wavestation SR (1992) – The last hardware implementation of the Wavestation was a 1U rackmount model. It lacked the A/D inputs of its predecessor, the screen was downsized to a character-based 16×2 LCD, and most buttons, function keys, and the joystick disappeared. Marketed as a preset module, it featured eight ROM preset banks with Patches and Performances previously sold on expansion cards from Korg and Sound Source Unlimited, Inc. Without an external MIDI sound editor, programming was very difficult due to the small display, although all parameters could be edited from the panel.
Software Wavestation (2004) – Fourteen years after the first Wavestation appeared, Korg released a software-based emulation of the synthesizer which also included all the instrument patches from Korg's line of expansion ROM cards. In late 2006 Korg released version 1.6 of the software Wavestation which added a resonant filter. Also, "50 new Performances, 35 new Patches, and 32 new Wave Sequences are added to take advantage of this new resonant filter."
iWavestation (2016) – A native program for the iOS platform (iPhone and iPad) which recreates the physical synthesizer. It can be downloaded from Apple's App Store and includes all the instrument patches from Korg's expansion ROM cards as in-app purchases. The current version (as of May 2021) is 1.1.1.
Design history
The Wavestation was designed by a team that included Dave Smith, who designed the Prophet-5 and, along with Roland, helped to invent the MIDI protocol in the early 1980s. His synthesizer company, Sequential Circuits, was purchased by Yamaha in 1988, and the division was renamed DSD (intended by Yamaha to stand for Dave Smith Designs). The team of ex-SCI engineers (Dave Smith, John Bowen, Scott Peterson, and Stanley Jungleib) then went on to Korg in May 1989 and designed the Wavestation, refining many Prophet VS concepts.
The Wavestation A/D was the brainchild of Joe Bryan, then a Senior Design Engineer at Korg R&D. A guitar player, he wanted "something that worked with a simple MIDI guitar that would merge the guitar, synth and effects, and could be controlled from one or two buttons on the guitar." The idea was of little interest to his colleagues at first. Nevertheless, he found a prototype of a Sequential Circuits Prophet 2000 sampler, hacksawed the analog-to-digital converter circuitry out of it, and soldered it, along with a digital interface, to the Wavestation's ROM bus to create the first prototype of the Wavestation A/D. The prototype convinced Bryan's colleagues of his idea.
Musical impact
The Wavestation is known as one of the best synth pad generators and has been used by many musicians to explore uncommon synthesis textures. Notable mainstream artists who used Wavestations in the early 1990s include Joe Zawinul, Jan Hammer, Phil Collins, Gary Numan, Keith Emerson, Tony Banks of Genesis (who also used them on the band's 2007 European tour), Depeche Mode, Steve Hillier of Dubstar, Michael Jackson, Ed Wynne of Ozric Tentacles, Ulf Langheinrich, and Alan Clark and Guy Fletcher of Dire Straits. Soundtrack composer Mark Snow also used a Wavestation SR when scoring episodes of The X-Files.
The sound of the Wavestation is familiar to users of the Apple Macintosh, since the startup chime that has featured on every Mac from the Quadra 700 to the Quadra 800 was created by Jim Reekes on a Korg Wavestation. Reekes said, "The startup sound was done in my home studio on a Korg Wavestation. It's a C Major chord, played with both hands stretched out as wide as possible (with 3rd at the top, if I recall)." The sound in question is a slightly modified "Sandman" factory preset.
Legacy
The OASYS (2005) and Korg Kronos (2011) workstations have full-blown wave sequencing and vector synthesis implementations (complete with joystick), along with virtual analog, sample-based synthesis, and 16 MIDI + 16 digital audio tracks.
Software emulations
Korg now produces a collection of software-based versions of its classic synthesizers, called the Korg Legacy Collection. Its Wavestation component incorporates the entire library of the original Wavestation's samples, wave sequences, and presets, making the vector synthesis concept more affordable and known to a wider audience. A native version for iOS, named iWavestation, has also been released.
References
Bibliography
Further reading
External links
Vintage Synth
Unofficial Wavestation Information Site
Korg Wavestation Audio Workshop (German) 26 min. (16.6MB MP3 format)
Korg synthesizers
Software synthesizers
Digital synthesizers
Polyphonic synthesizers |
563616 | https://en.wikipedia.org/wiki/Ubisoft | Ubisoft | Ubisoft Entertainment SA (formerly Ubi Soft Entertainment SA) is a French video game company headquartered in Montreuil with development studios across the world. Its video game franchises include Assassin's Creed, Far Cry, For Honor, Just Dance, Prince of Persia, Rabbids, Rayman, Tom Clancy's, and Watch Dogs.
History
Origins and first decade (1986–1996)
By the 1980s, the Guillemot family had established themselves as a support business for farmers in the Brittany province of France and other regions, including the United Kingdom. The five sons of the family – Christian, Claude, Gérard, Michel, and Yves – helped with the company's sales, distribution, accounting, and management with their parents before university. All five gained business experience while at university, which they brought back to the family business after graduating. The brothers came up with the idea of diversifying to sell other products of use to farmers; Claude began with selling CD audio media. Later, the brothers expanded to computers and additional software that included video games. In the 1980s, they saw that buying computers and software from a French supplier was more expensive than buying the same materials in the United Kingdom and shipping them to France, and came upon the idea of a mail-order business around computers and software. Their mother said they could start their own business this way as long as they managed it themselves and equally split its shares among the five of them. Their first business was Guillemot Informatique, founded in 1984. They originally sold only through mail order, and then began getting orders from French retailers, since they were able to undercut other suppliers by up to 50% of the cost of some titles. By 1986, this company was earning about 40 million French francs. In 1985, the brothers established Guillemot Corporation for similar distribution of computer hardware. As demand continued, the brothers recognised that video game software was becoming a lucrative property and decided that they needed to get into the industry's development side, already having insight into the publication and distribution side. Ubi Soft (formally named Ubi Soft Entertainment S.A.) was founded by the brothers on 28 March 1986. The name "Ubi Soft" was selected to represent "ubiquitous" software.
Ubi Soft initially operated out of offices in Paris, moving to Créteil by June 1986. The brothers used a chateau in Brittany as the primary space for development, hoping the setting would lure developers and give them a better way to manage their developers' expectations. The company hired Nathalie Saloud as manager, Sylvie Hugonnier as director of marketing and public relations, and programmers, though Hugonnier had left the company by May 1986 to join Elite Software. Games published by Ubi Soft in 1986 included Zombi, Ciné Clap, Fer et Flamme, Masque, and Graphic City, a sprite editing programme. Zombi, their first game, had sold 5,000 copies by January 1987. Ubi Soft also entered into distribution partnerships for the game to be released in Spain and West Germany. Ubi Soft started importing products from abroad for distribution in France, with 1987 releases including Elite Software's Commando and Ikari Warriors, the former of which sold 15,000 copies by January 1987. In 1988, Yves Guillemot was appointed as Ubi Soft's chief executive officer.
By 1988, the company had about six developers working from the chateau. These included Michel Ancel, a teenager at the time noted for his animation skills, and Serge Hascoët, who applied to be a video game tester for the company. The chateau became more expensive to maintain, and the developers were given the option to relocate to Paris. Ancel's family, which had moved to Brittany for his job, could not afford the cost of living in Paris and returned to Montpellier in southern France. The Guillemot brothers told Ancel to keep them abreast of anything he might come up with there. Ancel and Frédéric Houde later returned with a prototype of a game with animated features that caught the brothers' interest. Michel Guillemot decided to make the project a key one for the company, establishing a studio in Montreuil to house over 100 developers in 1994 and targeting the fifth generation of consoles such as the Atari Jaguar and PlayStation. Their game, Rayman, was released in 1995. Yves managed Guillemot Informatique, making deals with Electronic Arts, Sierra On-Line and MicroProse to distribute their games in France. Guillemot Informatique began expanding to other markets, including the United States, the United Kingdom, and Germany. They entered the video game distribution and wholesale markets and by 1993 had become the largest distributor of video games in France.
Worldwide growth (1996–2003)
In 1996, Ubi Soft listed its initial public offering, raising funds to help expand the company. Within two years, the company established worldwide studios in Annecy (1996), Shanghai (1996), Montreal (1997), and Milan (1998).
A difficulty that the brothers found was the lack of an intellectual property that would have a foothold in the United States market. When "widespread growth" of the Internet arrived around 1999, the brothers decided to take advantage of this by founding game studios aimed at online free-to-play titles, including Gameloft; this allowed them to license the rights to Ubi Soft properties to these companies, increasing the share value of Ubi Soft five-fold. With the extra capital, they were able to purchase Red Storm Entertainment in 2000, giving them access to the Tom Clancy's series of stealth and spy games. Ubi Soft worked with Red Storm to continue expanding the series, bringing out titles in the Tom Clancy's Ghost Recon and Tom Clancy's Rainbow Six series. The company got a foothold in the United States when it worked with Microsoft to develop Tom Clancy's Splinter Cell, an Xbox-exclusive title released in 2002 to challenge the PlayStation-exclusive Metal Gear Solid series, by combining elements of the Tom Clancy's series with elements of an in-house developed game called The Drift.
In March 2001, Gores Technology Group sold The Learning Company's entertainment division (which included games originally published by Brøderbund, Mattel Interactive, Mindscape and Strategic Simulations) to Ubi Soft. The sale included the rights to intellectual properties such as the Myst and Prince of Persia series. Ubisoft Montreal developed the Prince of Persia property into Prince of Persia: The Sands of Time, released in 2003. At the same time, Ubi Soft released Beyond Good & Evil, Ancel's project after Rayman; it was one of Ubi Soft's first commercial flops at its 2003 release, though it has since gained a cult following.
Around 2001, Ubi Soft established its editorial department headed by Hascoët, initially titled editor-in-chief and later known as the company's chief content officer. Hascoët had worked alongside Ancel on Rayman in 1995 to help refine the game, and saw the opportunity to apply that approach across all of Ubi Soft's games. Until 2019, most games published by Ubisoft were reviewed through the editorial department and personally by Hascoët.
Continued expansion (2003–2015)
On 9 September 2003, Ubi Soft announced that it would change its name to Ubisoft, and introduced a new logo known as "the swirl". In December 2004, gaming corporation Electronic Arts purchased a 19.9% stake in the firm. Ubisoft referred to the purchase as "hostile" on EA's part. The Guillemot brothers recognised they had not considered themselves within a competitive market, and employees feared that an EA takeover would drastically alter the environment within Ubisoft. EA's CEO at the time, John Riccitiello, assured Ubisoft the purchase was not meant as a hostile manoeuvre, and EA ended up selling the shares in 2010.
In February 2005, Ubisoft acquired the NHL Rivals, NFL Fever, NBA Inside Drive and MLB Inside Pitch franchises from Microsoft Game Studios.
Ubisoft established another IP, Assassin's Creed, first launched in 2007; it was originally developed by Ubisoft Montreal as a sequel to Prince of Persia: The Sands of Time before transitioning to a story about Assassins and the Knights Templar. In July 2006, Ubisoft bought the Driver franchise from Atari, Inc. for €19 million in cash, covering the franchise, technology rights, and most assets. In 2008, Ubisoft made a deal with Tom Clancy for perpetual use of his name and intellectual property for video games and other auxiliary media. In July 2008, Ubisoft acquired Hybride Technologies, a Piedmont-based studio. In November 2008, Ubisoft acquired Massive Entertainment from Activision. In January 2013, Ubisoft acquired South Park: The Stick of Truth from THQ for $3.265 million.
Ubisoft announced plans in 2013 to invest $373 million into its Quebec operations over seven years. The investment covered the expansion of its motion capture technologies and the consolidation of its online games operations and infrastructure in Montreal. By 2020, the company would employ more than 3,500 staff at its studios in Montreal and Quebec City.
In July 2013, Ubisoft announced a breach in its network resulting in the potential exposure of up to 58 million accounts, including usernames, email addresses, and encrypted passwords. The firm denied that any credit or debit card information could have been compromised, issued directives to all registered users to change their account passwords, and recommended updating passwords on any other website or service where the same or a similar password had been used. All registered users were emailed about the breach with a request to change their passwords, and Ubisoft promised to keep their information safe.
In March 2015, the company set up a Consumer Relationship Centre in Newcastle-upon-Tyne. The centre is intended to integrate consumer support teams and community managers. Consumer Support and Community Management teams at the CRC are operational 7 days a week.
Attempted takeover by Vivendi (2015–2018)
Since around 2015, the French mass media company Vivendi has been seeking to expand its media properties through acquisitions and other business deals. In addition to advertising firm Havas, Ubisoft was one of the first target properties identified by Vivendi; as of September 2017, Ubisoft had an estimated valuation of $6.4 billion. Vivendi, in two actions during October 2015, bought shares in Ubisoft stock, giving it a 10.4% stake in Ubisoft, an action that Yves Guillemot considered "unwelcome" and that raised his fears of a hostile takeover. In a presentation during the Electronic Entertainment Expo 2016, Yves Guillemot stressed the importance of Ubisoft remaining an independent company to maintain its creative freedom. Guillemot later described the need to fight off the takeover: "...when you're attacked with a company that has a different philosophy, you know it can affect what you've been creating from scratch. So you fight with a lot of energy to make sure it can't be destroyed." Vice-President of Live Operations Anne Blondel-Jouin expressed a similar sentiment in an interview with PCGamesN, stating that Ubisoft's success was partly due to "...being super independent, being very autonomous."
Vivendi also acquired a stake in mobile game publisher Gameloft, owned by the Guillemots, while it started acquiring Ubisoft shares. In February 2016, Vivendi acquired €500 million worth of shares in Gameloft, gaining more than 30% of the shares and requiring the company under French law to make a public tender offer; this action enabled Vivendi to complete the takeover of Gameloft by June 2016. Following Vivendi's actions with Gameloft, the Guillemots sought more Canadian investors to fend off a similar takeover of Ubisoft; by this point, Vivendi had increased its share in Ubisoft to 15%, exceeding the estimated 9% that the Guillemots owned. By June 2016, Vivendi had increased its shares to 20.1% and denied it was in the process of a takeover.
By the time of Ubisoft's annual board meeting in September 2016, Vivendi had gained 23% of the shares, while the Guillemots were able to increase their voting share to 20%. A request was made at the board meeting to place Vivendi representatives on Ubisoft's board, given the size of their shareholdings. The Guillemots argued against this, reiterating that Vivendi should be seen as a competitor, and succeeded in swaying other voting members to deny any board seats to Vivendi.
Vivendi continued to buy shares in Ubisoft, approaching the 30% mark that could trigger a takeover; as of December 2016, Vivendi held a 25.15% stake in Ubisoft. Reuters reported in April 2017 that Vivendi's takeover of Ubisoft would likely happen that year, and Bloomberg Businessweek observed that some of Vivendi's shares would reach the two-year holding mark, which would grant them double voting power and would likely let Vivendi meet or exceed the 30% threshold. The Guillemot family subsequently raised its stake in Ubisoft; as of June 2017, the family held 13.6% of Ubisoft's share capital and 20.02% of the company's voting rights. In October 2017, Ubisoft announced it had reached a deal with an "investment services provider" to help it purchase back 4 million shares by the end of the year, preventing others, specifically Vivendi, from buying them.
In the week before Vivendi would gain double-voting rights for previously purchased shares, the company, in quarterly results published in November 2017, announced that it had no plans to acquire Ubisoft for the next 6 months, nor would seek board positions due to the shares they held during that time, and that it "would ensure that its interest in Ubisoft would not exceed the threshold of 30% through the doubling of its voting rights." Vivendi remained committed to expanding in the video game sector, identifying that their investment in Ubisoft could represent a capital gain of over 1 billion euros.
On 20 March 2018, Ubisoft and Vivendi struck a deal ending any potential takeover, with Vivendi agreeing to sell all of its shares, over 30 million, to other parties and agreeing to not buy any Ubisoft shares for 5 years. Some of those shares were sold to Tencent, which after the transaction held about 5.6 million shares of Ubisoft (approximately 5% of all shares). The same day, Ubisoft announced a partnership with Tencent to help bring their games into the Chinese market. Vivendi completely divested its shares in Ubisoft by March 2019.
Since 2018
Since 2018, Ubisoft's studios have continued to focus on its established franchises, including Assassin's Creed, Tom Clancy's, Far Cry, and Watch Dogs. As reported by Bloomberg Businessweek, while Ubisoft as a whole had nearly 16,000 developers by mid-2019, larger than some of its competitors, and produced five to six major AAA releases each year compared to the two or three from the others, its net revenue earned per employee was the lowest of the four publishers compared, due to generally lower sales of its games. Bloomberg Businessweek attributed this partially to spending trends among video game consumers, who were purchasing fewer games with long playtimes, as most of Ubisoft's major releases tend to be. To counter this, Ubisoft in October 2019 postponed three of the six titles it had planned for 2019 to 2020 or later, to help place more effort on improving the quality of existing and released games. Due to overall weak sales in 2019, Ubisoft stated in January 2020 that it would reorganize its editorial board to provide a more comprehensive look at its game portfolio and create greater variation in its games, which Ubisoft's management said had grown stagnant and too uniform and had contributed to weak sales.
Stemming from a wave of sexual misconduct accusations of the #MeToo movement in June and July 2020, Ubisoft had a number of employees accused of misconduct from both internal and external sources. Between Ubisoft's internal investigation and a study by the newspaper Libération, employees had been found to have records of sexual misconduct and troubling behaviour, going back up to 10 years, which had been dismissed by the human resources departments. As a result, some Ubisoft staff either quit or were fired, including Hascoët, Maxime Béland, the co-founder of Ubisoft Toronto, and Yannis Mallat, the managing director of Ubisoft's Canadian studios. Yves Guillemot implemented changes in the company to address these issues as it further investigated the extent of the misconduct claims.
In 2020, Ubisoft announced that it would be making an open-world Star Wars game. The deal marked an end to EA's exclusive rights to make Star Wars titles.
Ubisoft stated in its end-of-2020 fiscal year investor call in February 2021 that the company would make AAA game releases less of a focus and put more emphasis on mobile and freemium games following fiscal year 2022. CFO Frederick Duguet told investors that "we see that we are progressively, continuously moving from a model that used to be only focused on AAA releases to a model where we have a combination of strong releases from AAA and strong back catalog dynamics, but also complimenting our program of new releases with free-to-play and other premium experiences." Later that year, the company announced it would start branding games developed by its first-party developers as "Ubisoft Originals".
In October 2021, Ubisoft participated in a round of financing in Animoca Brands.
After earlier stating its intent to explore blockchain games, Ubisoft announced its Ubisoft Quartz blockchain program in December 2021, allowing players to buy uniquely identified customization items for games and then sell and trade them based on the Tezos currency, which Ubisoft claimed was an energy-efficient cryptocurrency. This marked the first effort into blockchain games by a "AAA" publisher.
Subsidiaries
Former
Technology
Ubisoft Connect
Ubisoft Connect, formerly Uplay, is a digital distribution, digital rights management, multiplayer and communications service for PC created by Ubisoft. First launched alongside Assassin's Creed II as a rewards program to earn points towards in-game content for completing achievements within Ubisoft games, it expanded into a desktop client and storefront for Windows machines alongside other features. Ubisoft later separated the rewards program out as its Ubisoft Club program, integrated with Uplay. Ubisoft Connect was announced in October 2020 as a replacement for Uplay and the Ubisoft Club, launching on October 29, 2020 alongside Watch Dogs: Legion. Connect replaces Uplay's and the Club's previous functions while adding support for cross-platform play and save progression for some games. It includes the same reward progression system that the Club offered to gain access to in-game content. Some games on the Uplay service would not be updated to support the reward features they previously had under the Ubisoft Club; for those, Ubisoft said it would unlock all rewards for all players.
Uplay/Ubisoft Connect serves to manage the digital rights for Ubisoft's games on Windows computers, which led to criticism when it first launched, as some games required always-on digital rights management, causing loss of save game data should players lose their Internet connection. The situation was aggravated after Ubisoft's servers were struck with denial-of-service attacks that made Ubisoft games unplayable under this DRM scheme. Ubisoft eventually abandoned the always-on DRM scheme but still requires all Ubisoft games to perform a start-up check against its servers through Uplay/Ubisoft Connect when launched.
Ubisoft Anvil
Ubisoft Anvil, formerly named Scimitar, is a proprietary game engine developed wholly within Ubisoft Montreal in 2007 for the first Assassin's Creed game; it has since been expanded and used for most Assassin's Creed titles and other Ubisoft games.
Disrupt
The Disrupt game engine was developed by Ubisoft Montreal and is used for the Watch Dogs games. The majority of the engine was built from scratch and uses a multithreaded renderer running on a fully deferred, physically based rendering pipeline, with some technological twists to allow for more advanced effects.
Dunia Engine
The Dunia Engine is a software fork of the CryEngine that was originally developed by Crytek, with modifications made by Ubisoft Montreal. The CryEngine at the time could render expansive outdoor environments. Crytek had created a demo of its engine called X-Isle: Dinosaur Island, which it demonstrated at the Electronic Entertainment Expo 1999. Ubisoft saw the demo and had Crytek build it out into a full title, which became the first Far Cry, released in 2004. That year, Electronic Arts established a deal with Crytek to build a wholly different title with an improved version of the CryEngine, leaving Crytek unable to continue work on Far Cry. Ubisoft assigned Ubisoft Montreal to develop console versions of Far Cry, and arranged with Crytek to obtain all rights to the Far Cry series and a perpetual licence on the CryEngine.
In developing Far Cry 2, Ubisoft Montreal modified the CryEngine to include destructible environments and a more realistic physics engine. This modified version became the Dunia Engine, which premiered with Far Cry 2 in 2008. The Dunia Engine continued to be improved, for example by adding weather systems, and was used as the basis of all future Far Cry games as well as James Cameron's Avatar: The Game, developed by Ubisoft Montreal.
Ubisoft first introduced the Dunia 2 engine in Far Cry 3 in 2012; it was made to improve the performance of Dunia-based games on consoles and to add more complex rendering features such as global illumination. According to Remi Quenin, one of the engine's architects at Ubisoft Montreal, the state of the Dunia Engine as of 2017 includes "vegetation, fire simulation, destruction, vehicles, systemic AI, wildlife, weather, day/night cycles, [and] non linear storytelling", which are elements of the Far Cry games. For Far Cry 6, Ubisoft introduced more features to the Dunia 2 engine, such as ray tracing support on the PC version and support for AMD's open-source variable resolution technology, FidelityFX.
Snowdrop
The Snowdrop game engine was co-developed by Massive Entertainment and Ubisoft for Tom Clancy's The Division. The core of the game engine is powered by a "node-based system" which simplifies the process of connecting different systems like rendering, AI, mission scripting and the user interface. The engine was used in Tom Clancy's The Division 2 and other Ubisoft games such as South Park: The Fractured but Whole, Mario + Rabbids: Kingdom Battle, and Starlink: Battle for Atlas. The engine is next-gen ready and will be used in Massive's Avatar: Frontiers of Pandora and a Star Wars open-world game.
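As an illustration of what a node-based system means in general terms (a generic sketch, not Snowdrop's actual API), the following Python example wires the output of one node into the inputs of others and evaluates the graph on demand; the node names and payloads are hypothetical.

```python
# Generic node-graph sketch: each node computes a dict of outputs, and edges
# feed one node's named output into another node's named input.
class Node:
    def __init__(self, name, func):
        self.name, self.func = name, func
        self.inputs = {}                      # input name -> (source node, output key)

    def connect(self, input_name, source, output_key):
        self.inputs[input_name] = (source, output_key)

    def evaluate(self, cache):
        if self.name not in cache:            # evaluate each node at most once
            kwargs = {k: src.evaluate(cache)[out]
                      for k, (src, out) in self.inputs.items()}
            cache[self.name] = self.func(**kwargs)
        return cache[self.name]

# Hypothetical wiring: a mission-scripting node drives both an AI node and a UI node.
mission = Node("mission", lambda: {"objective": "extraction point"})
ai = Node("ai", lambda goal: {"behaviour": "pathfind to " + goal})
ui = Node("ui", lambda goal: {"marker": "waypoint: " + goal})
ai.connect("goal", mission, "objective")
ui.connect("goal", mission, "objective")

cache = {}
print(ai.evaluate(cache)["behaviour"])   # pathfind to extraction point
print(ui.evaluate(cache)["marker"])      # waypoint: extraction point
```

The appeal of such a design is that new subsystems can be connected by adding nodes and edges rather than hard-coding dependencies between systems.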
Games
According to Guillemot, Ubisoft recognised that connected sandbox games, with seamless switches between single-player and multiplayer modes, provided players with more fun, leading the company to switch from pursuing single-player-only games to internet-connected ones. Guillemot said that Ubisoft internally refers to its reimagined self in terms of a "before The Division" and an "after The Division".
In an interview with The Verge, Anne Blondel-Jouin, executive producer of The Crew turned vice-president of live operations, noted that The Crew was one of Ubisoft's early games to require a persistent internet connection in order to play, which raised concerns both among gamers and internally at the company.
Film and television
Ubisoft launched its Ubisoft Film & Television division, then named Ubisoft Motion Pictures, in 2011. Initially developing media works tied to Ubisoft's games, it has since diversified into other works, including works about video games more broadly. Productions include the live-action film Assassin's Creed (2016) and the series Rabbids Invasion (2013) and Mythic Quest (2020–present).
Litigation
2020 sexual misconduct accusations and dismissals
From June to July 2020, a wave of sexual misconduct accusations swept through the video game industry as part of the ongoing #MeToo movement, including accusations against some of Ubisoft's employees. Ashraf Ismail, the creative director of Assassin's Creed: Valhalla, stepped down to deal with personal issues related to allegations made towards him; he was later terminated by Ubisoft in August 2020 after its internal investigations. Ubisoft announced that two executives who were also accused of misconduct had been placed on leave, and that it was performing an internal review of other accusations and its own policies. Yves Guillemot stated on 2 July 2020 that he had appointed Lidwine Sauer as head of workplace culture, "empowered to examine all aspects of our company's culture and to suggest comprehensive changes that will benefit all of us", in addition to other internal and external programs to deal with ongoing issues that may have contributed to these problems. Specific accusations were made at Ubisoft Toronto, where the studio co-founder Maxime Béland, also the vice president of editorial for Ubisoft as a whole, was forced to resign by Ubisoft's management due to sexual misconduct issues, and where some employees expressed strong concerns that "The way the studio—HR and management—disregards complaints just enables this behavior from men." Tommy François, the vice president of editorial and creative services, was placed on disciplinary leave around July, and by August Ubisoft announced his departure from the company.
Spurred by these claims, the newspaper Libération began a deeper investigation into the workplace culture at Ubisoft. The paper ran a two-part report printed on 1 and 10 July 2020 that claimed that Ubisoft had a toxic workplace culture. A central component of those claims concerned accusations related to Hascoët. The issues identified by Libération, and corroborated by employees from other studios, suggested that some of these problems stemmed from the company's human resources heads ignoring complaints made against Hascoët, with sexual misconduct and harassment used to intimidate those who criticized him, on the basis that the creative leads were producing valuable products for the company. On 11 July 2020, the company issued a press release announcing departures which included the voluntary resignations of Hascoët, Yannis Mallat, the managing director of Ubisoft's Canadian studios, and Cécile Cornet, the company's global head of human resources. Yves Guillemot temporarily filled Hascoët's former role.
A subsequent report from Bloomberg News by Jason Schreier corroborated these details, with employees of Ubisoft's main Paris headquarters comparing it to a fraternity house. Schreier found that the issues with Hascoët went back years and had affected creative development on the Assassin's Creed series and other products, particularly in avoiding the use of female protagonists. Ubisoft had already been criticized for failing to support female player models in Assassin's Creed Unity or in Far Cry 4, which the company claimed was due to difficulty in animating female characters despite having done so in earlier games. Ubisoft employees, in Schreier's report, said that in the following Assassin's Creed games which did feature female protagonists at release, including Assassin's Creed Syndicate and Assassin's Creed Origins, there were serious considerations from the editorial department of removing or downplaying the female leads. This was due to a belief, instilled in the department by Hascoët, that female characters did not sell video games. Further, because of Hascoët's clout in the company, the developers would have to make compromises to meet his expectations, such as the inclusion of a strong male character if they had included female leads or if they had used cutscenes, a narrative device Hascoët reportedly did not like. Hascoët's behavior, among other content decisions he made, had "appeared to affect" the quality of Ubisoft's games by 2019; both Tom Clancy's The Division 2 and Tom Clancy's Ghost Recon Breakpoint "underperformed", which gave Ubisoft justification to diminish Hascoët's oversight with the aforementioned January 2020 changes in the editorial department and to give its members more autonomy. There remained questions as to what degree CEO Yves Guillemot knew of these issues prior to their public reporting; employees reported that Hascoët had been very close with the Guillemot brothers since the founding of the editorial department around 2001 and that some of the prior complaints of sexual misconduct had been reported directly to Yves and were dismissed. Gamasutra also spoke to former and current Ubisoft employees from its worldwide studios during this period, corroborating that these issues appeared to be replicated across multiple studios, stemming from Ubisoft's main management.
Ubisoft held a shareholders' meeting on 22 July 2020 addressing these more recent issues. Changes in the wake of the departures included a reorganization of both the editorial team and the human resources team. Two positions, Head of Workplace Culture and Head of Diversity and Inclusion, would be created to oversee the safety and morale of employees going forward. To encourage this, Ubisoft said it would tie the performance bonus of team leaders to how well they "create a positive and inclusive workplace environment" so that these changes are propagated throughout the company. Ahead of a September 2020 "Ubisoft Forward" media presentation, Yves Guillemot issued a formal apology on behalf of the company for its lack of responsibility in these matters. Guillemot said "This summer, we learned that certain Ubisoft employees did not uphold our company's values, and that our system failed to protect the victims of their behavior. I am truly sorry to everyone who was hurt. We have taken significant steps to remove or sanction those who violated our values and code of conduct, and we are working hard to improve our systems and processes. We are also focused on improving diversity and inclusivity at all levels of the company. For example, we will invest $1 million over the next five years in our graduate program. The focus will be on creating opportunities for under-represented groups, including women and people of color." Guillemot sent out a company-wide letter in October 2020 summarizing the investigation, finding that nearly 25% of employees had experienced or witnessed misconduct in the previous two years, and that the company was implementing a four-point plan to correct these problems, with a stated goal to "guarantee a working environment where everyone feels respected and safe". The company hired Raashi Sikka, Uber's former head of diversity and inclusion in Europe and Asia, as vice president of global diversity and inclusion in December 2020 to follow through on this commitment.
In September 2020, Michel Ancel left Ubisoft and the games industry to work on a wildlife preserve, stating that his projects, Beyond Good & Evil 2 at Ubisoft and Wild at Wild Sheep Studio, were left in good hands before he left. As part of its coverage of the sexual misconduct issues, Libération found Ancel's attention towards Beyond Good & Evil 2 to be haphazard, which had resulted in delays and restarts since the game's first announcement in 2010. The team considered Ancel's management style to be abusive, with him dismissing some of their work and forcing them to restart development pathways. While the team at Ubisoft Montpellier had reported Ancel's lack of organization and leadership on the project to management as early as 2017, Libération claimed it was his close relationship with Yves Guillemot that allowed the situation to continue until 2020, when a more in-depth review of all management was performed in the wake of the sexual misconduct allegations. Ancel stated he was not aware of the issues from the team and asserted his departure was stress-related. In November 2020, Hugues Ricour, the managing director of Ubisoft Singapore, stepped down from that role after these internal reviews but remained with the company.
The French workers' union Solidaires Informatique initiated a class action lawsuit against Ubisoft in relation to the allegations; Solidaires Informatique had previously represented workers in a case of workplace concerns at French developer Quantic Dream. At the trial in May 2021, Le Télégramme reported that very little had changed within the company, as many of the HR staff that were part of the problem remained in their positions within the company, both in its France headquarters and its Canadian divisions. Employees reported to the newspaper that nothing had changed despite the new guidelines. In response to this report, Ubisoft stated that "Over a period of several months, Ubisoft has implemented major changes across its organization, internal processes and procedures in order to guarantee a safe, inclusive and respectful working environment for all team members." and "These concrete actions demonstrate the profound changes that have taken place at every level of the company. Additional initiatives are underway and are being rolled out over the coming months."
Solidaires Informatique and two former Ubisoft employees filed a second lawsuit within the French courts in July 2021. As translated by Kotaku, the complaint targets Ubisoft "as a legal entity for institutional sexual harassment for setting up, maintaining and reinforcing a system where sexual harassment is tolerated because it is more profitable for the company to keep harassers in place than to protect its employees". The complaint names some of those identified during the initial 2020 accusations, including Hascoët, François, and Cornet, as directly responsible for maintaining conditions that promoted the harassment.
In July 2021, Activision Blizzard was sued by the California Department of Fair Employment and Housing (DFEH) on accusations that the company maintained a hostile workplace towards women and discriminated against women in hiring and promotions. Among other reactions, this led to Activision Blizzard employees staging a walkout on July 28, 2021 to protest the management's dismissive response to the lawsuit. About 500 employees across Ubisoft signed a letter in solidarity with the Activision Blizzard employees, stating that "It should no longer be a surprise to anyone: employees, executives, journalists, or fans that these heinous acts are going on. It is time to stop being shocked. We must demand real steps be taken to prevent them. Those responsible must be held accountable for their actions." Ubisoft CEO Yves Guillemot sent a letter to all Ubisoft employees in response to this open letter, stating "We have heard clearly from this letter that not everyone is confident in the processes that have been put in place to manage misconduct reports" and that "We have made important progress over the past year". This reply prompted another open letter from Ubisoft employees that derided Guillemot's response, saying that "Ubisoft continues to protect and promote known offenders and their allies. We see management continuing to avoid this issue", and that the company had generally ignored issues that employees had brought up. The employees' response included three demands of Ubisoft management: an end to the cycle of simply rotating troublesome executives and managers between studios to avoid issues, a collective seat for employees in ongoing discussions to improve the workplace situation, and cross-industry collaboration on how to handle future offenses, involving non-management employees as well as union representatives.
In August 2021, a group of Ubisoft employees formed a workers' rights group, A Better Ubisoft, to seek more commitment and action from the company in response to the allegations of the past year. The group asked for a seat at the table to discuss how the company was handling changes and improvements to avoid having these problems come up in the future. Axios reported in December 2021 that there was an "exodus" of Ubisoft employees leaving the company due to a combination of lower pay and the impact of the workplace misconduct allegations.
Ubisoft Singapore began to be investigated by Singapore's Tripartite Alliance for Fair and Progressive Employment Practices in August 2021 based on reports of sexual harassment and workplace discrimination within that studio, following a July 2021 report published by Kotaku.
Other lawsuits
In 2008, Ubisoft sued Optical Experts Manufacturing (OEM), a DVD duplication company, for $25 million plus damages for the leak and distribution of the PC version of Assassin's Creed. The lawsuit claimed that OEM did not take proper measures to protect the product as stated in its contract with Ubisoft, and alleged that OEM admitted to the problems described in the complaint.
In April 2012, Ubisoft was sued by John L. Beiswenger, the author of the book Link who alleged copyright infringement for using his ideas in the Assassin's Creed franchise. He demanded $5.25 million in damages and a halt to the release of Assassin's Creed III which was set to be released in October 2012, along with any future games that allegedly contain his ideas. On 30 May 2012, Beiswenger dropped the lawsuit. Beiswenger was later quoted as saying he believes "authors should vigorously defend their rights in their creative works", and suggested that Ubisoft's motion to block future lawsuits from Beiswenger hints at their guilt.
In December 2014, Ubisoft offered a free game from their catalogue of recently released titles to compensate the season pass owners of Assassin's Creed Unity due to its buggy launch. The terms offered with the free game revoked the user's right to sue Ubisoft for the buggy launch of the game.
In May 2020, Ubisoft sued Chinese developer Ejoy and Apple and Google over Ejoy's Area F2 game which Ubisoft contended was a carbon copy of Tom Clancy's Rainbow Six Siege. Ubisoft sought copyright action against Ejoy, and financial damages against Apple and Google for allowing Area F2 to be distributed on their mobile app stores and profiting from its microtransactions.
References
External links
1996 initial public offerings
Companies based in Île-de-France
Companies listed on Euronext Paris
French companies established in 1986
Multinational companies headquartered in France
Seine-Saint-Denis
Video game companies established in 1986
Video game companies of France
Video game development companies
Video game publishers |
1031895 | https://en.wikipedia.org/wiki/Stephen%20Tweedie | Stephen Tweedie | Stephen C. Tweedie is a Scottish software developer who is known for his work on the Linux kernel, in particular his work on filesystems.
After becoming involved with the development of the ext2 filesystem through work on performance issues, he led the development of the ext3 filesystem, which involved adding a journaling layer (JBD) to ext2. For his work on the journaling layer, he has been described by fellow Linux developer Andrew Morton as "a true artisan".
Born in Edinburgh, Scotland in 1969, Tweedie studied computer science at Churchill College, Cambridge and the University of Edinburgh, where he did his thesis on Contention and Achieved Performance in Multicomputer Wormhole Routing Networks. After contributing to the Linux kernel in his spare time since the early nineties and working on VMS filesystem support for DEC for two years, Tweedie was employed by Linux distributor Red Hat where he continues to work on the Linux kernel.
Tweedie has published a number of papers on Linux, including Design and Implementation of the Second Extended Filesystem in 1994, Journaling the Linux ext2fs Filesystem in 1998, and Planned Extensions to the Linux Ext2/Ext3 Filesystem in 2002.
Tweedie is also a frequent speaker on the subject of Linux kernel development at technical conferences. Amongst others, he has given talks on Linux kernel development at the 1997 and 1998 USENIX Annual Technical Conferences, the 2000 UKUUG conference in London, and he gave the keynote speech at the Ottawa Linux Symposium in 2002.
References
1969 births
Living people
Linux kernel programmers
Alumni of Churchill College, Cambridge
Alumni of the University of Edinburgh |
24757714 | https://en.wikipedia.org/wiki/Harpoon%20%28video%20game%29 | Harpoon (video game) | Harpoon is a computer wargame published by Three-Sixty Pacific in 1989 for DOS. This was the first game in the Harpoon series. It was ported to the Amiga and Macintosh.
Development history
In the late 1970s, a manual wargame called SEATAG was introduced by the United States Navy for exploring tactical options. It was available in both classified and unclassified versions. SEATAG was developed into a true tactical training game called NAVTAG that ran on three networked microcomputers for the Red Side, Blue Side, and Game Control.
Former naval officer and future author Larry Bond's exposure to this system in 1980 while on active duty led to the eventual development of Harpoon.
The original game was expanded with additional releases including Harpoon BattleSet 2: North Atlantic Convoys (1989), Harpoon Battleset 3: The MED Conflict (1991), Harpoon BattleSet 4: Indian Ocean / Persian Gulf (1991), and Harpoon Designers' Series: BattleSet Enhancer (1992).
Plot
The player is the commander of either NATO or Soviet forces, commanding ships and aircraft, selecting from over 100 different weapon systems, and taking responsibility for judgment calls. The game mainly focuses on combat in the GIUK Gap.
Gameplay
Harpoon is a naval simulator that uses data reflecting real-world equipment and weaponry, based on a miniatures wargame. There are no preset battle algorithms that dictate combat outcomes, and no play balance between sides. The game includes a user's guide with an appendix on superpower politics and maritime strategies in modern warfare, a Harpoon Tactical Guide by Larry Bond, and a booklet by author Tom Clancy that deals with Russian destroyers. Clancy used the simulation to test the naval battles for Red Storm Rising, which he co-authored with Bond.
Reception
Sales of Harpoon surpassed 80,000 copies by 1993.
In the February 1990 edition of Computer Gaming World, M. Evan Brooks, a United States military officer, gave the game five stars out of five. He stated that "there is no question that Harpoon is the most detailed simulation to appear in the civilian marketplace ... a must-have for the serious naval gamer", and that he had learned more from six hours with the game than one year at the Naval War College.
In the April 1990 edition of Dragon (Issue 156), Patricia, Kirk and Hartley Lesser called this "a true simulation with data reflecting real-world equipment and weaponry." They thought the game was "a graphical masterpiece". They concluded by giving the PC DOS/MS-DOS version of the game a perfect score of 5 out of 5, calling it "a simulation that is far more than a game – it's war!". A year later, they gave the Macintosh version a perfect score as well. Six months after that, the Lessers gave the Amiga version another perfect score.
In the May 1990 edition of Games International, Mike Siggins noted the complexity of the game and said it "is not a game for wimps." He liked the graphics, saying, "Harpoon looks superb in high resolution colour." He also thought the user interface was "well-handled". He concluded by giving both the game and the graphics an above-average rating of 8 out of 10, saying, "If tactical modern naval is your field, this is the program you've been waiting for."
In the December 1990–January 1991 edition of Info, Judith Kilbury-Cobb wrote that a preview copy of the Amiga version of Harpoon "looks killer", saying it had "more technical detail than any game has a right to." In the next issue, her comments about the finished product were more nuanced. While she acknowledged that "the wealth of tactical and strategic data on weapons, ships, subs, etc., is overwhelming", she found performance on a basic Amiga was "unbearably sluggish". She concluded by giving the game an average rating of 3.5 out of 5, saying, "Long on realism, somewhat short on playability."
The One reviewed Harpoon in 1991, calling it a "combat simulation for purists" due to the lack of "flashy action scenes" or joystick controls. The One furthermore stated that the game requires "careful" and "arduous" strategic planning, expressing that "It's hard to fault the accuracy and comprehensiveness of the military hardware database which supports Harpoon, and it would be unfair to criticise the lack of more usual arcade-style sequences. The game makes no claim to be anything other than a realistic and heavily strategic representation of cold war conflict – as such it succeeds." The One concluded by expressing that "Even so, it's too dryly erudite to appeal to as wide an audience as most simulations."
In 1990, Computer Gaming World named it "Wargame of the Year". The editors of Game Player's PC Strategy Guide likewise presented the game with their 1990 "Best PC Wargame" award. They dubbed it "the most detailed, authentic, and convincing simulation of modern naval warfare yet devised."
Tim Carter reviewed Harpoon Battleset 3: The MED Conflict for Computer Gaming World, and stated that "Harpoon: Battleset Three: The Mediterranean Conflict is an entertaining and thought-provoking addition to the Harpoon system. The combination of imaginative scenarios with new (and/or outdated) platforms and situations give the battleset a distinctive style of play that sets it apart from the Atlantic battlesets."
Tim Carter reviewed Harpoon BattleSet 4: Indian Ocean / Persian Gulf for Computer Gaming World, and stated that "Despite the lack of creativity in the generation of scenarios, Battleset Four: The Indian Ocean/Persian Gulf is a useful addition to the Harpoon system. Players who use the Scenario Editor will find that the new platforms make the package worth the price."
In 1994, PC Gamer US named Harpoon the 36th best computer game ever. The editors called it "probably the best known and most successful naval war game there's ever been. It's still selling today, even five years after its initial release, and military academies have been known to use the game as a training aid. Now that's realism!" In 1996, Computer Gaming World declared Harpoon the 40th-best computer game ever released. The magazine's wargame columnist Terry Coleman named it his pick for the third-best computer wargame released by late 1996.
Reviews
Computer Gaming World - Jun, 1991
Amiga Action - Mar, 1991
Top Secret - Mar, 1993
Aktueller Software Markt - Feb, 1991
References
External links
1989 video games
Amiga games
Classic Mac OS games
Cold War video games
Computer wargames
DOS games
Naval video games
Real-time strategy video games
Video games developed in the United States |
1937517 | https://en.wikipedia.org/wiki/XDH%20assumption | XDH assumption | The external Diffie–Hellman (XDH) assumption is a computational hardness assumption used in elliptic curve cryptography. The XDH assumption holds that there exist certain subgroups of elliptic curves which have useful properties for cryptography. Specifically, XDH implies the existence of two distinct groups G1 and G2 with the following properties:
The discrete logarithm problem (DLP), the computational Diffie–Hellman problem (CDH), and the computational co-Diffie–Hellman problem are all intractable in G1 and G2.
There exists an efficiently computable bilinear map (pairing) e : G1 × G2 → GT.
The decisional Diffie–Hellman problem (DDH) is intractable in G1.
The above formulation is referred to as asymmetric XDH. A stronger version of the assumption (symmetric XDH, or SXDH) holds if DDH is also intractable in G2.
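In the usual notation of pairing-based cryptography (groups G1, G2 and a target group GT of prime order p, generators g1 and g2, and a pairing e), the assumption can be written out as follows; this restatement follows standard conventions rather than any one of the cited papers.

```latex
% Bilinearity of the pairing e : G_1 \times G_2 \to G_T:
\[
  e(g_1^{a}, g_2^{b}) = e(g_1, g_2)^{ab} \quad \text{for all } a, b \in \mathbb{Z}_p .
\]
% DDH in G_1 (asymmetric XDH): for uniformly random a, b, c \in \mathbb{Z}_p,
% the following two distributions are computationally indistinguishable:
\[
  \bigl(g_1,\; g_1^{a},\; g_1^{b},\; g_1^{ab}\bigr)
  \quad \text{and} \quad
  \bigl(g_1,\; g_1^{a},\; g_1^{b},\; g_1^{c}\bigr).
\]
% SXDH additionally requires the analogous indistinguishability in G_2.
```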
The XDH assumption is used in some pairing-based cryptographic protocols. In certain elliptic curve subgroups, the existence of an efficiently-computable bilinear map (pairing) can allow for practical solutions to the DDH problem. These groups, referred to as gap Diffie–Hellman (GDH) groups, facilitate a variety of novel cryptographic protocols, including tri-partite key exchange, identity based encryption, and secret handshakes (to name a few). However, the ease of computing DDH within a GDH group can also be an obstacle when constructing cryptosystems; for example, it is not possible to use DDH-based cryptosystems such as ElGamal within a GDH group. Because the DDH assumption holds within at least one of a pair of XDH groups, these groups can be used to construct pairing-based protocols which allow for ElGamal-style encryption and other novel cryptographic techniques.
In practice, it is believed that the XDH assumption may hold in certain subgroups of MNT elliptic curves. This notion was first proposed by Scott (2002), and later by Boneh, Boyen and Shacham (2002) as a means to improve the efficiency of a signature scheme. The assumption was formally defined by Ballard, Green, de Medeiros and Monrose (2005), and full details of a proposed implementation were advanced in that work. Evidence for the validity of this assumption is the proof by Verheul (2001) and Galbraith and Rotger (2004) of the non-existence of distortion maps in two specific elliptic curve subgroups which possess an efficiently computable pairing. As pairings and distortion maps are currently the only known means to solve the DDH problem in elliptic curve groups, it is believed that the DDH assumption therefore holds in these subgroups, while pairings are still feasible between elements in distinct groups.
References
Mike Scott. Authenticated ID-based exchange and remote log-in with simple token and PIN. E-print archive (2002/164), 2002. (pdf file)
Dan Boneh, Xavier Boyen, Hovav Shacham. Short Group Signatures. CRYPTO 2004. (pdf file)
Lucas Ballard, Matthew Green, Breno de Medeiros, Fabian Monrose. Correlation-Resistant Storage via Keyword-Searchable Encryption. E-print archive (2005/417), 2005. (pdf file)
Steven D. Galbraith, Victor Rotger. Easy Decision Diffie–Hellman Groups. LMS Journal of Computation and Mathematics, August 2004.
E.R. Verheul, Evidence that XTR is more secure than supersingular elliptic curve cryptosystems, in B. Pfitzmann (ed.) EUROCRYPT 2001, Springer LNCS 2045 (2001) 195–210.
Computational hardness assumptions
Elliptic curve cryptography
Pairing-based cryptography |
16279770 | https://en.wikipedia.org/wiki/Paradise%20Cracked | Paradise Cracked | Paradise Cracked is a cyberpunk single-player turn-based tactics video game. It was created by MiST Land South (renamed as GFI Russia in 2006) for Microsoft Windows and released in 2002. It has several translation problems that make the game difficult to understand in English.
The player controls a character named "Hacker", who is exactly what the name implies. He begins with a pistol, the ability to hack certain objects (ATMs, trade robots, and so on), and a journal that provides maps of the area and lists both current and completed missions. As missions are completed, the character's level increases, and skill points can be assigned to attributes such as strength, aim, hacking skill, and health points.
The game offers the option to play solo or to team up with other characters or groups (such as the mob). Because Hacker is constantly hunted by the law, teaming up with someone is in the best interest of the player's survival; it also adds new missions to Hacker's journal.
Additional weapons, items, and body armor can be acquired by purchase or by killing other characters. The character's strength determines how much weight in weapons and items he can carry, and certain clothing increases or decreases the number of items that can be carried. The heavier the load, the shorter the distance the character can move in a given turn; as strength and level increase, so does the distance the character can travel.
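The carry-weight trade-off described above can be illustrated with a small sketch; the formula and all numeric constants below are invented for illustration and are not the game's actual values.

def movement_points(strength: int, carried_weight: float, level: int) -> int:
    """Squares the character may move this turn (hypothetical model, not game data)."""
    capacity = 10 + 2 * strength       # assumed carry capacity derived from strength
    base_move = 5 + level // 3         # assumed base movement, growing with level
    if carried_weight > capacity:      # overloaded: cannot move at all
        return 0
    penalty = int(4 * carried_weight / capacity)  # heavier loads cut movement
    return max(1, base_move - penalty)

print(movement_points(strength=4, carried_weight=12.0, level=5))  # prints 4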
The game was not a commercial success.
Further reading
Review on GameSpot.com
Review on GameSpy.com
Review by PC Zone Magazine
Preview Screens at ComputerAndVideoGames.com
Another preview from CVG.com
Review on itc.ua
Review on ag.ru
Review on IgroMania.ru
External links
2002 video games
Cyberpunk video games
Turn-based tactics video games
Video games developed in Russia
Windows games
Windows-only games
Transhumanism in video games |
57254915 | https://en.wikipedia.org/wiki/2018%20Troy%20Trojans%20football%20team | 2018 Troy Trojans football team | The 2018 Troy Trojans football team represented Troy University in the 2018 NCAA Division I FBS football season. The Trojans played their home games at Veterans Memorial Stadium in Troy, Alabama, and competed in the East Division of the Sun Belt Conference. They were led by fourth-year head coach Neal Brown. They finished the season 10–3, 7–1 in Sun Belt play to finish in a tie for the East Division championship with Appalachian State. Due to their head-to-head loss to Appalachian State, they did not represent the East Division in the Sun Belt Championship Game. They were invited to the Dollar General Bowl where they defeated Buffalo.
Previous season
The Trojans finished the 2017 season 11–2, 7–1 in Sun Belt play to finish in a tie for the Sun Belt championship. They received an invitation to the New Orleans Bowl where they defeated North Texas.
Preseason
Award watch lists
Listed in the order that they were released
Sun Belt coaches poll
On July 19, 2018, the Sun Belt released their preseason coaches poll with the Trojans predicted to finish in second place in the East Division.
Preseason All-Sun Belt Teams
The Trojans had ten players at eleven positions selected to the preseason all-Sun Belt teams.
Offense
1st team
Tristan Crowder – OL
Deontae Crumitie – OL
2nd team
Deondre Douglas – WR
Defense
1st team
Hunter Reese – DL
Trevon Sanders – DL
Tron Folsom – LB
Blace Brown – DB
2nd team
Marcus Webb – DL
Marcus Jones – DB
Cedarius Rookard – DB
Special teams
2nd team
Marcus Jones – KR
Schedule
Game summaries
Boise State
Florida A&M
at Nebraska
at Louisiana–Monroe
Coastal Carolina
Georgia State
at Liberty
at South Alabama
Louisiana
at Georgia Southern
Texas State
at Appalachian State
vs. Buffalo (Dollar General Bowl)
References
Troy
Troy Trojans football seasons
LendingTree Bowl champion seasons
Troy Trojans football |
780960 | https://en.wikipedia.org/wiki/Software%20maintenance | Software maintenance | Software maintenance in software engineering is the modification of a software product after delivery to correct faults, to improve performance or other attributes.
A common perception of maintenance is that it merely involves fixing defects. However, one study indicated that over 80% of maintenance effort is used for non-corrective actions. This perception is perpetuated by users submitting problem reports that in reality are functionality enhancements to the system. More recent studies put the bug-fixing proportion closer to 21%.
History
Software maintenance and evolution of systems was first addressed by Meir M. Lehman in 1969. Over a period of twenty years, his research led to the formulation of Lehman's Laws (Lehman 1997). Key findings of his research conclude that maintenance is really evolutionary development and that maintenance decisions are aided by understanding what happens to systems (and software) over time. Lehman demonstrated that systems continue to evolve over time. As they evolve, they grow more complex unless some action such as code refactoring is taken to reduce the complexity.
In the late 1970s, a famous and widely cited survey study by Lientz and Swanson exposed the very high fraction of life-cycle costs that were being expended on maintenance.
The survey showed that around 75% of the maintenance effort was devoted to the first two types (adaptive and perfective changes), while error correction consumed about 21%. Many subsequent studies suggest a similar distribution of effort. Studies also show that the contribution of end users is crucial during the gathering and analysis of new requirements, and that inadequate user involvement is a major cause of problems during software evolution and maintenance. Software maintenance is important because it consumes a large part of overall life-cycle costs, and because the inability to change software quickly and reliably means that business opportunities are lost.
Importance of software maintenance
The key software maintenance issues are both managerial and technical. Key management issues are: alignment with customer priorities, staffing, which organization does maintenance, estimating costs. Key technical issues are: limited understanding, impact analysis, testing, maintainability measurement.
Software maintenance is a very broad activity that includes error correction, enhancement of capabilities, deletion of obsolete capabilities, and optimization. Because change is inevitable, mechanisms must be developed for evaluating, controlling, and making modifications.
So any work done to change the software after it is in operation is considered to be maintenance work.
The purpose is to preserve the value of the software over time. The value can be enhanced by expanding the customer base, meeting additional requirements, becoming easier to use, becoming more efficient, and employing newer technology. Maintenance may span 20 years, whereas development may take only 1–2 years.
Software maintenance planning
Maintenance is an integral part of the software life cycle, and it requires an accurate maintenance plan to be prepared during software development. The plan should specify how users will request modifications or report problems. The budget should include resource and cost estimates, and an explicit decision should be made about the development of every new system feature and its quality objectives. Software maintenance, which can last for five or more years (or even decades) after the development process, calls for an effective plan that addresses the scope of maintenance, the tailoring of the post-delivery/deployment process, the designation of who will provide maintenance, and an estimate of the life-cycle costs.
Software maintenance processes
This section describes the six software maintenance processes as:
The implementation process contains software preparation and transition activities, such as the conception and creation of the maintenance plan; the preparation for handling problems identified during development; and the follow-up on product configuration management.
The problem and modification analysis process, which is executed once the application has become the responsibility of the maintenance group. The maintenance programmer must analyze each request, confirm it (by reproducing the situation) and check its validity, investigate it and propose a solution, document the request and the solution proposal, and finally, obtain all the required authorizations to apply the modifications.
The process of implementing the modification itself.
The process of accepting the modification, by confirming the modified work with the individual who submitted the request in order to make sure the modification provides a solution.
The migration process (platform migration, for example) is exceptional, and is not part of daily maintenance tasks. If the software must be ported to another platform without any change in functionality, this process will be used and a maintenance project team is likely to be assigned to this task.
Finally, the last maintenance process, also an event which does not occur on a daily basis, is the retirement of a piece of software.
There are a number of processes, activities, and practices that are unique to maintainers, for example:
Transition: a controlled and coordinated sequence of activities during which a system is transferred progressively from the developer to the maintainer
Service Level Agreements (SLAs) and specialized (domain-specific) maintenance contracts negotiated by maintainers
Modification Request and Problem Report Help Desk: a problem-handling process used by maintainers to prioritize, document, and route the requests they receive
Categories of software maintenance
E.B. Swanson initially identified three categories of maintenance: corrective, adaptive, and perfective. The IEEE 1219 standard was superseded in June 2010 by P14764.
These have since been updated and ISO/IEC 14764 presents:
Corrective maintenance: Reactive modification of a software product performed after delivery to correct discovered problems. Corrective maintenance can be automated with automatic bug fixing.
Adaptive maintenance: Modification of a software product performed after delivery to keep a software product usable in a changed or changing environment.
Perfective maintenance: Modification of a software product after delivery to improve performance or maintainability.
Preventive maintenance: Modification of a software product after delivery to detect and correct latent faults in the software product before they become effective faults.
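As a small illustration of the four categories just listed, here is a toy triage function; the decision rules and names are assumptions made for illustration, not part of ISO/IEC 14764 itself.

from enum import Enum

class Maintenance(Enum):
    CORRECTIVE = "fix problems discovered after delivery"
    ADAPTIVE = "keep the product usable in a changed environment"
    PERFECTIVE = "improve performance or maintainability"
    PREVENTIVE = "correct latent faults before they become effective"

def categorize(is_fault: bool, fault_already_observed: bool, env_changed: bool) -> Maintenance:
    """Toy triage of a change request into one of the four categories above."""
    if is_fault:
        return Maintenance.CORRECTIVE if fault_already_observed else Maintenance.PREVENTIVE
    return Maintenance.ADAPTIVE if env_changed else Maintenance.PERFECTIVE

print(categorize(is_fault=True, fault_already_observed=False, env_changed=False).name)  # PREVENTIVE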
There is also a notion of pre-delivery/pre-release maintenance: work done before release to lower the total cost of ownership of the software. This includes compliance with coding standards that incorporate software maintainability goals, management of the coupling and cohesion of the software, and the attainment of software supportability goals (for example SAE JA1004, JA1005 and JA1006). Some academic institutions are carrying out research to quantify the cost of ongoing software maintenance caused by the lack of resources such as design documents and system/software comprehension training and resources (costs are multiplied by approximately 1.5–2.0 where no design data is available).
Maintenance factors
Impact of key adjustment factors on maintenance (sorted in order of maximum positive impact)
Not only are error-prone modules troublesome, but many other factors can degrade performance too. For example, very complex spaghetti code is quite difficult to maintain safely.
A very common situation which often degrades performance is the lack of suitable maintenance tools, such as defect tracking software, change management software, and test library software. Some of these factors and their range of impact on software maintenance are described below.
Impact of key adjustment factors on maintenance (sorted in order of maximum negative impact)
Maintenance debt
In a paper for the 27th International Conference on Software Quality Management in 2019, John Estdale introduced the term “maintenance debt” for maintenance needs generated by an implementation’s dependence on external IT factors, such as libraries, platforms and tools, that have become obsolescent. The application continues to run, and the IT department forgets this theoretical liability, focusing on more urgent requirements and problems elsewhere. Such debt accumulates over time, silently eating away at the value of the software asset. Eventually something happens that makes system change unavoidable.
The owner may then discover that the system can no longer be modified – it is literally unmaintainable. Less dramatically, it may take too long, or cost too much, for maintenance to solve the business problem, and an alternative solution must be found. The software has suddenly crashed to £0 value.
Estdale defines "Maintenance Debt" as: the gap between the current implementation state of an application and the ideal, using only functionality of external components that is fully maintained and supported. This debt is often hidden or not recognized. An application’s overall maintainability depends on the continued availability of components of all sorts from other suppliers, including:
Development tools: source editing, configuration management, compilation and build
Testing tools: test selection, execution/verification/reporting
Platforms to execute the above: hardware, operating system and other services
Production environment and any standby/Disaster Recovery facilities, including the source code language’s Run-Time Support Environment, and the wider ecosystem of job scheduling, file transfer, replicated storage, backup and archive, single sign-on, etc.
Separately acquired packages, e.g. DBMS, graphics, comms, middleware
Bought-in source code, object-code libraries, and other invocable services
Any requirements arising from other applications sharing the production environment or interworking with the application in question
and of course
The availability of relevant skills, in-house, or in the marketplace.
The complete disappearance of a component could make the application un-rebuildable, or imminently unmaintainable.
See also
Application retirement
Journal of Software Maintenance and Evolution: Research and Practice
Long-term support
Search-based software engineering
Software archaeology
Software maintainer
Software development
References
Further reading
External links
Journal of Software Maintenance
IEEE standards
ISO/IEC standards |
39754755 | https://en.wikipedia.org/wiki/CAST%20%28company%29 | CAST (company) | CAST is a technology corporation, with headquarters in New York City and in France, near Paris. The firm markets software intelligence with a technology based on semantic analysis of software source code and components. It provides information and size-measurement technology and expertise (automated function point counting), and offers software, hosting and consulting services in support of software analysis and measurement. The company was founded in 1990 in Paris, France, by Vincent Delaroche.
Its code quality metrics are used in application development service-level agreements, and the firm offers consultation on issues related to development quality and security.
History
CAST was founded in 1990 in Paris by Vincent Delaroche. In 1996 it shipped its first software product, based on semantic analysis of Transact-SQL. The CAST Application Intelligence Platform was first launched in 2004, introducing software quality measurement. In 2017, CAST Highlight was launched as a SaaS product that scans software portfolios to provide metrics on health, cloud-migration capabilities, and open-source license risks. In early 2019, based on the same analysis technology, the firm launched CAST Imaging, a product that graphically represents the source-code components of a software system.
In 2012, the firm announced support for the Object Management Group (OMG) Automated Function Point (AFP) Standard, one way of measuring application development productivity.
The firm's leadership includes Bill Curtis, who developed the Capability Maturity Model at the Software Engineering Institute (SEI) in the early 1990s and then the Consortium for IT Software Quality (CISQ).
CAST's head of product development, Olivier Bonsignour, co-wrote a book with Capers Jones.
Research
The firm's Research Labs subsidiary developed a repository of industry data and issues a biennial report called CAST Research on Application Software Health (CRASH). CRASH data has been cited and published in articles in IEEE Software and in other research. The Labs were also active in analyzing the phenomenon of technical debt, co-hosting a research forum on the topic with the University of Maryland’s department of information systems.
The Labs focused on analyzing applications rather than technology layers and, as a consequence, most of the research has been conducted in the domain of inter- and intra-technology dependency analysis.
The firm was recognized by Oseo in 2009 and awarded the "Innovative Company" label. This recognition was renewed in 2012 through another project, aimed at identifying possible resource leaks that can degrade application performance or cause crashes, at the application level and across layers, without requiring the application to actually execute, as is usually the case in dynamic program analysis.
References
Technology companies established in 1990
Software companies of France |
69169186 | https://en.wikipedia.org/wiki/One%20Step%20From%20Eden | One Step From Eden | One Step From Eden is a roguelike action video game created by American independent developer Thomas Moon Kang, and published by Humble Bundle. Following a successful crowdfunding campaign, it was released for Linux, macOS, Microsoft Windows and Nintendo Switch in March 2020, and later ported to PlayStation 4 in June 2020, and Xbox One in November 2021.
One Step From Eden features grid-based movement and combat mechanics similar to the Mega Man Battle Network series, alongside deck-building gameplay and choice-based story progression. It received mostly positive reviews upon release, with praise for its visuals, combat, and controls, and criticism for its multiplayer modes and high difficulty.
Gameplay
Navigation takes place on a map with branching paths leading to combat encounters, treasure chests, and shops. Combat in One Step From Eden takes place on a 4x4 grid, where the player can move the character one square at a time and, provided they have enough mana, attack using spells in their deck; these take the form of various close- and long-range attacks, and some apply status effects or restore health. The enemy moves on a separate 4x4 grid, and their spells follow patterns the player needs to dodge in time. Some battles have special conditions, such as the presence of non-player characters to save or defeat.
Each completed encounter awards the player experience points and a choice of new spells to add to their deck. Every level up, and some events, grant artifacts, items with positive and negative passive abilities. Falling in battle sends the player back to the start with all gained spells and artifacts lost, but progress in the completion bar remains, which eventually unlocks new characters and spells for use in future attempts.
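A minimal sketch of the grid-and-mana loop described above; the grid size comes from the text, while the spell name, mana values and damage numbers are assumptions made purely for illustration.

GRID = 4  # combat takes place on a 4x4 grid

class Player:
    def __init__(self):
        self.x, self.y = 0, 0
        self.mana = 3
        self.deck = [{"name": "bolt", "cost": 1, "damage": 20}]  # hypothetical spell

    def move(self, dx: int, dy: int) -> None:
        """Move one square at a time, clamped to the grid."""
        self.x = max(0, min(GRID - 1, self.x + dx))
        self.y = max(0, min(GRID - 1, self.y + dy))

    def cast(self, spell: dict) -> int:
        """Cast a spell if enough mana is available; return damage dealt."""
        if self.mana < spell["cost"]:
            return 0
        self.mana -= spell["cost"]
        return spell["damage"]

p = Player()
p.move(1, 0)
print(p.x, p.y, p.cast(p.deck[0]))  # 1 0 20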
Development
Development of the game began in April 2016, with Thomas Moon Kang naming the "Megaman Battle Network sized void that no other game could fill" as his reason for creating the title. The theme and feel were in part influenced by The World Ends With You (2007); the title "One Step From Eden" is also a reference to that game. Other inspirations during development include FTL: Faster Than Light (2012), Nuclear Throne (2015), and Slay the Spire (2019).
The game was put on Kickstarter on January 3, 2019, seeking $15,000 in funding, alongside a free demo featuring one zone and three bosses from the final game. Although the developer had stated that development would continue even if the goal was not met, the campaign hit its target in two days; over time it also reached additional stretch goals to include a pet summoning mechanic, release the game on Nintendo Switch, and commission music from VA-11 Hall-A composer Michael Kelly.
Release
In February 2020, publisher Humble Bundle announced it would publish One Step From Eden digitally on March 26, 2020 for previously promised Linux, macOS, Windows, and Nintendo Switch platforms. The port for PlayStation 4 was released on June 14, 2020. The game would later see a physical release by Flyhigh Works for Nintendo Switch in Japan on February 25, 2021, and is planned to be released physically for both consoles by Limited Run Games in North America. The Xbox One version was announced in October 2021. It was released on November 11, 2021, and was added to the Xbox Game Pass service for console and PC on the same date.
Reception
The PC version of One Step From Eden received "generally favorable reviews", and the Nintendo Switch version received "mixed or average reviews", according to review aggregator Metacritic.
Shin Imai of IGN Japan called it "a revolutionary title" and a game "anyone should try". Nintendo Life's Stuart Gipp found One Step From Eden to be "an exceptionally well-made game with great combat and responsive controls" and praised its visuals and sound, but criticized the game's slow rate of unlockables and unforgiving difficulty. The steep difficulty curve was also criticized by Jordan Rudek of Nintendo World Report, who further noted that it is "sorely lacking in accessibility options", but was generally positive about the game due to its interesting premise and variety in spells and perks. Nintendo Enthusiast's Miguel Moran complimented the depth of the single-player game, but found that it does not translate well to the cooperative mode, which has two players "awkwardly split one deck instead of letting them use their own", or to the player-versus-player mode, which "sticks you with predetermined decks that you have no way of studying".
External links
Official website
References
2020 video games
Card battle video games
Crowdfunded video games
Humble Games games
Indie video games
Kickstarter-funded video games
Linux games
MacOS games
Nintendo Switch games
PlayStation 4 games
Roguelike video games
Video games developed in the United States
Video games featuring female protagonists
Video games with Steam Workshop support
Windows games
Xbox One games |
1118379 | https://en.wikipedia.org/wiki/Online%20shopping | Online shopping | Online shopping is a form of electronic commerce which allows consumers to directly buy goods or services from a seller over the Internet using a web browser or a mobile app. Consumers find a product of interest by visiting the website of the retailer directly or by searching among alternative vendors using a shopping search engine, which displays the same product's availability and pricing at different e-retailers. As of 2020, customers can shop online using a range of different computers and devices, including desktop computers, laptops, tablet computers and smartphones.
An online shop evokes the physical analogy of buying products or services at a regular "bricks-and-mortar" retailer or shopping center; the process is called business-to-consumer (B2C) online shopping. When an online store is set up to enable businesses to buy from other businesses, the process is called business-to-business (B2B) online shopping. A typical online store enables the customer to browse the firm's range of products and services, and to view photos or images of the products, along with information about the product specifications, features and prices.
Online stores usually enable shoppers to use "search" features to find specific models, brands or items. Online customers must have access to the Internet and a valid method of payment in order to complete a transaction, such as a credit card, an Interac-enabled debit card, or a service such as PayPal. For physical products (e.g., paperback books or clothes), the e-tailer ships the products to the customer; for digital products, such as digital audio files of songs or software, the e-tailer usually sends the file to the customer over the Internet. The largest of these online retailing corporations are Alibaba, Amazon.com, and eBay.
Terminology
Alternative names for the activity are "e-tailing", a shortened form of "electronic retail", or "e-shopping", a shortened form of "electronic shopping". An online store may also be called an e-web-store, e-shop, e-store, Internet shop, web-shop, web-store, online store, online storefront and virtual store. Mobile commerce (or m-commerce) describes purchasing from an online retailer's mobile device-optimized website or software application ("app"). These websites or apps are designed to enable customers to browse through a company's products and services on tablet computers and smartphones.
History
History of online shopping
One of the earliest forms of trade conducted online was IBM's online transaction processing (OLTP) developed in the 1960s, which allowed the processing of financial transactions in real-time. The computerized ticket reservation system developed for American Airlines called Semi-Automatic Business Research Environment (SABRE) was one of its applications. There, computer terminals located in different travel agencies were linked to a large IBM mainframe computer, which processed transactions simultaneously and coordinated them so that all travel agents had access to the same information at the same time.
The emergence of online shopping as it is known today developed with the emergence of the Internet. Initially, this platform only functioned as an advertising tool for companies, providing information about their products. It quickly moved on from this simple utility to actual online shopping transactions due to the development of interactive Web pages and secure transmissions. Specifically, the growth of the Internet as a secure shopping channel has developed since 1994, with the first sales of Sting's album Ten Summoner's Tales. Wine, chocolates, and flowers soon followed and were among the pioneering retail categories which fueled the growth of online shopping. Researchers found that having products that are appropriate for e-commerce was a key indicator of Internet success. Many of these products did well as they are generic products which shoppers did not need to touch and feel in order to buy. But also importantly, in the early days there were few shoppers online, and they were from a narrow segment: affluent, male, 30+. Online shopping has come a long way since those early days and, in the UK, accounts for a significant percentage of sales (the percentage varies by product category).
Growth in online shoppers
As revenues from online sales continued to grow significantly, researchers identified different types of online shoppers. Rohm & Swaminathan identified four categories and named them "convenience shoppers, variety seekers, balanced buyers, and store-oriented shoppers". They focused on shopping motivations and found that the variety of products available and the perceived convenience of the online buying experience were significant motivating factors. This was different for offline shoppers, who were more motivated by time saving and recreational motives.
English entrepreneur Michael Aldrich was a pioneer of online shopping in 1979. His system connected a modified domestic TV to a real-time transaction processing computer via a domestic telephone line. He believed that videotex, the modified domestic TV technology with a simple menu-driven human–computer interface, was a 'new, universally applicable, participative communication medium — the first since the invention of the telephone.' This enabled 'closed' corporate information systems to be opened to 'outside' correspondents not just for transaction processing but also for e-messaging and information retrieval and dissemination, later known as e-business. His definition of the new mass communications medium as 'participative' [interactive, many-to-many] was fundamentally different from the traditional definitions of mass communication and mass media and a precursor to the social networking on the Internet 25 years later. In March 1980 he launched Redifon's Office Revolution, which allowed consumers, customers, agents, distributors, suppliers and service companies to be connected online to the corporate systems and allow business transactions to be completed electronically in real-time. During the 1980s he designed, manufactured, sold, installed, maintained and supported many online shopping systems, using videotex technology. These systems which also provided voice response and handprint processing pre-date the Internet and the World Wide Web, the IBM PC, and Microsoft MS-DOS, and were installed mainly in the UK by large corporations.
The first World Wide Web server and browser, created by Tim Berners-Lee in 1989, opened for commercial use in 1991. Thereafter, subsequent technological innovations emerged in 1994: online banking, the opening of an online pizza shop by Pizza Hut, Netscape's SSL v2 encryption standard for secure data transfer, and Intershop's first online shopping system. The first secure retail transaction over the Web was either by NetMarket or Internet Shopping Network in 1994. Immediately after, Amazon.com launched its online shopping site in 1995 and eBay was also introduced in 1995. Alibaba's sites Taobao and Tmall were launched in 2003 and 2008, respectively. Retailers are increasingly selling goods and services prior to availability through "pretail" for testing, building, and managing demand.
International statistics
Statistics show that in 2012, Asia-Pacific increased its international sales by over 30%, giving the region over $433 billion in revenue, a $69 billion difference from U.S. revenue of $364.66 billion. It was estimated that Asia-Pacific would increase by another 30% in 2013, putting the region ahead with more than one-third of all global e-commerce sales. The largest online shopping day in the world is Singles Day, with sales just on Alibaba's sites at US$9.3 billion in 2014.
Customers
Online customers must have access to the Internet and a valid method of payment in order to complete a transaction. Generally, higher levels of education and personal income correspond to more favorable perceptions of shopping online. Increased exposure to technology also increases the probability of developing favorable attitudes towards new shopping channels.
Customer buying behaviour in the digital environment
In the digital environment, customers' buying behaviour may not be fully influenced or controlled by the brand and the firm: buying decisions are shaped by interactions with search engines, recommendations, online reviews and other information. With the rapid spread of digital devices, people are more likely to use their mobile phones, computers, tablets and other devices to gather information. In other words, the digital environment has a growing effect on consumers' minds and buying behaviour. In an online shopping environment, interactive decision aids can influence customer decision making. Customers are becoming more interactive, and through online reviews they can influence the behaviour of other potential buyers. In addition to reviews, people also rely on other people's posts about products on social media, where common problems are described and merchants' solutions or comments may be attached for customer reference.
Risk and trust are also two important factors affecting people's behaviour in digital environments. Customers consider switching between e-channels mainly because of comparisons with offline shopping, involving security, financial and performance risks; in other words, a customer shopping online may be exposed to more risk than a person shopping in a store. Three factors in particular influence the buying decision: first, people cannot examine whether the product satisfies their needs and wants before they receive it; second, customers may be concerned about after-sale services; and finally, customers may fear that they cannot fully understand the language used in e-sales. Because of these factors, perceived risk significantly influences online purchasing behaviour.
Online retailers place much emphasis on customer trust; trust is another driver of customer behaviour in the digital environment and depends on customers' attitudes and expectations. Indeed, a company's product design or ideas may fail to meet customers' expectations. Customers' purchase intentions are based on rational expectations and are additionally affected by emotional trust. Moreover, those expectations can also be established by product information and reviews from others.
Product selection
Consumers find a product of interest by visiting the website of the retailer directly or by searching among alternative vendors using a shopping search engine. Once a particular product has been found on the website of the seller, most online retailers use shopping cart software to allow the consumer to accumulate multiple items and to adjust quantities, like filling a physical shopping cart or basket in a conventional store. A "checkout" process follows (continuing the physical-store analogy) in which payment and delivery information is collected, if necessary. Some stores allow consumers to sign up for a permanent online account so that some or all of this information only needs to be entered once. The consumer often receives an e-mail confirmation once the transaction is complete. Less sophisticated stores may rely on consumers to phone or e-mail their orders (although full credit card numbers, expiry date, and Card Security Code, or bank account and routing number should not be accepted by e-mail, for reasons of security).
Payment
Online shoppers commonly use a credit card or a PayPal account in order to make payments. However, some systems enable users to create accounts and pay by alternative means, such as:
Billing to mobile phones and landlines
Bitcoin or other cryptocurrencies
Cash on delivery (C.O.D.)
Cheque/ Check
Debit card
Direct debit in some countries
Electronic money of various types
Gift cards
Invoice, especially popular in some markets/countries, such as Switzerland
Postal money order
Wire transfer/delivery on payment
Some online shops will not accept international credit cards. Some require both the purchaser's billing and shipping address to be in the same country as the online shop's base of operation. Other online shops allow customers from any country to send gifts anywhere. The financial part of a transaction may be processed in real time (e.g. letting the consumer know their credit card was declined before they log off), or may be done later as part of the fulfillment process.
Product delivery
Once a payment has been accepted, the goods or services can be delivered in the following ways. For physical items:
Package delivery: The product is shipped to a customer-designated address. Retail package delivery is typically done by the public postal system or a retail courier such as FedEx, UPS, DHL, or TNT.
Drop shipping: The order is passed to the manufacturer or third-party distributor, who then ships the item directly to the consumer, bypassing the retailer's physical location to save time, money, and space.
In-store pick-up: The customer selects a local store using a locator software and picks up the delivered product at the selected location. This is the method often used in the bricks and clicks business model.
For digital items or tickets:
Downloading/Digital distribution: The method often used for digital media products such as software, music, movies, or images.
Printing out, provision of a code for, or e-mailing of such items as admission tickets and scrip (e.g., gift certificates and coupons). The tickets, codes, or coupons may be redeemed at the appropriate physical or online premises and their content reviewed to verify their eligibility (e.g., assurances that the right of admission or use is redeemed at the correct time and place, for the correct dollar amount, and for the correct number of uses).
Will call, COBO (in Care Of Box Office), or "at the door" pickup: The patron picks up pre-purchased tickets for an event, such as a play, sporting event, or concert, either just before the event or in advance. With the onset of the Internet and e-commerce sites, which allow customers to buy tickets online, the popularity of this service has increased.
Shopping cart systems
Simple shopping cart systems allow the off-line administration of products and categories. The shop is then generated as HTML files and graphics that can be uploaded to a webspace. The systems do not use an online database. A high-end solution can be bought or rented as a stand-alone program or as an addition to an enterprise resource planning program. It is usually installed on the company's web server and may integrate into the existing supply chain so that ordering, payment, delivery, accounting and warehousing can be automated to a large extent. Other solutions allow the user to register and create an online shop on a portal that hosts multiple shops simultaneously from one back office. Examples are BigCommerce, Shopify and FlickRocket. Open source shopping cart packages include advanced platforms such as Interchange, and off-the-shelf solutions such as Magento, osCommerce, WooCommerce, PrestaShop, and Zen Cart. Commercial systems can also be tailored so the shop does not have to be created from scratch. By using an existing framework, software modules for various functionalities required by a web shop can be adapted and combined.
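A minimal sketch of the cart behaviour these systems provide, accumulating items, adjusting quantities, and totalling at checkout; the class, field names and prices are invented for illustration and do not correspond to any particular package named above.

class ShoppingCart:
    def __init__(self):
        self.items = {}  # sku -> {"price": unit price, "qty": quantity}

    def add(self, sku: str, price: float, qty: int = 1) -> None:
        entry = self.items.setdefault(sku, {"price": price, "qty": 0})
        entry["qty"] += qty

    def set_quantity(self, sku: str, qty: int) -> None:
        if qty <= 0:
            self.items.pop(sku, None)  # setting quantity to zero removes the item
        else:
            self.items[sku]["qty"] = qty

    def total(self, shipping: float = 0.0) -> float:
        return shipping + sum(e["price"] * e["qty"] for e in self.items.values())

cart = ShoppingCart()
cart.add("book-123", 9.99)
cart.add("book-123", 9.99)          # quantity accumulates to 2
cart.set_quantity("book-123", 3)
print(round(cart.total(shipping=4.50), 2))  # 34.47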
Design
Customers are attracted to online shopping not only because of high levels of convenience, but also because of broader selections, competitive pricing, and greater access to information. Business organizations seek to offer online shopping not only because it is of much lower cost compared to bricks and mortar stores, but also because it offers access to a worldwide market, increases customer value, and builds sustainable capabilities.
Information load
Designers of online shops are concerned with the effects of information load. Information load is a product of the spatial and temporal arrangements of stimuli in the web store. Compared with conventional retail shopping, the information environment of virtual shopping is enhanced by providing additional product information such as comparative products and services, as well as various alternatives and attributes of each alternative, etc. Two major dimensions of information load are complexity and novelty. Complexity refers to the number of different elements or features of a site, often the result of increased information diversity. Novelty involves the unexpected, suppressed, new, or unfamiliar aspects of the site. The novelty dimension may keep consumers exploring a shopping site, whereas the complexity dimension may induce impulse purchases.
Consumer needs and expectations
According to a research report by Western Michigan University published in 2005, an e-commerce website does not have to be good-looking or listed on many search engines to succeed; it must build relationships with customers to make money. The report also suggests that a website must leave a positive impression on customers, giving them a reason to come back. However, more recent research has shown that sites with a stronger focus on efficiency, convenience, and personalised services increase customers' motivation to make purchases.
Dyn, an Internet performance management company, conducted a survey of more than 1400 consumers across 11 countries in North America, Europe, the Middle East and Asia. The results of the survey are as follows:
Online retailers must improve the website speed
Online retailers must ease consumers fear around security
These concerns strongly affect the decisions of almost two-thirds of consumers.
User interface
The most important factors determining whether customers return to a website are ease of use and the presence of user-friendly features. Usability testing is important for finding problems and improvements in a web site. Methods for evaluating usability include heuristic evaluation, cognitive walkthrough, and user testing. Each technique has its own characteristics and emphasizes different aspects of the user experience.
Market share
The popularity of online shopping continues to erode sales of conventional retailers. For example, Best Buy, the largest retailer of electronics in the U.S. in August 2014 reported its tenth consecutive quarterly dip in sales, citing an increasing shift by consumers to online shopping. Amazon.com has the largest market share in the United States. As of May 2018, a survey found two-thirds of Americans had bought something from Amazon (92% of those who had bought anything online), with 40% of online shoppers buying something from Amazon at least once a month. The survey found shopping began at amazon.com 44% of the time, compared to a general search engine at 33%. It estimated 75 million Americans subscribe to Amazon Prime and 35 million more use someone else's account.
There were 242 million people shopping online in China in 2012. For developing countries and low-income households in developed countries, adoption of e-commerce in place of or in addition to conventional methods is limited by a lack of affordable Internet access.
Advantages
Convenience
Online stores are usually available 24 hours a day, and many consumers in Western countries have Internet access both at work and at home. Other establishments such as Internet cafes, community centers and schools provide internet access as well. In contrast, visiting a conventional retail store requires travel or commuting and costs such as gas, parking, or bus tickets, and must usually take place during business hours. Delivery was always a problem which affected the convenience of online shopping. However, to overcome this, many retailers, including online retailers in Taiwan, brought in a store pick-up service. This meant that customers could purchase goods online and pick them up at a nearby convenience store, making online shopping more advantageous to customers. In the event of a problem with the item (e.g., the product was not what the consumer ordered or the product was not satisfactory), consumers are concerned with the ease of returning an item in exchange for the correct product or a refund. Consumers may need to contact the retailer, visit the post office and pay return shipping, and then wait for a replacement or refund. Some online companies have more generous return policies to compensate for the traditional advantage of physical stores. For example, the online shoe retailer Zappos.com includes labels for free return shipping, and does not charge a restocking fee, even for returns which are not the result of merchant error. (Note: In the United Kingdom, online shops are prohibited from charging a restocking fee if the consumer cancels their order in accordance with the Consumer Protection (Distance Selling) Act 2000). A 2018 survey in the United States found 26% of online shoppers said they never return items, and another 65% said they rarely do so.
Information and reviews
Online stores must describe products for sale with text, photos, and multimedia files, whereas in a physical retail store, the actual product and the manufacturer's packaging will be available for direct inspection (which might involve a test drive, fitting, or other experimentation). Some online stores provide or link to supplemental product information, such as instructions, safety procedures, demonstrations, or manufacturer specifications. Some provide background information, advice, or how-to guides designed to help consumers decide which product to buy. Some stores even allow customers to comment or rate their items. There are also dedicated review sites that host user reviews for different products. Reviews and even some blogs give customers the option of shopping for cheaper purchases from all over the world without having to depend on local retailers. In a conventional retail store, clerks are generally available to answer questions. Some online stores have real-time chat features, but most rely on e-mails or phone calls to handle customer questions. Even if an online store is open 24 hours a day, seven days a week, the customer service team may only be available during regular business hours.
Price and selection
One advantage of shopping online is being able to quickly seek out deals for items or services provided by many different vendors (though some local search engines do exist to help consumers locate products for sale in nearby stores). Search engines, online price comparison services and discovery shopping engines can be used to look up sellers of a particular product or service. Shipping costs (if applicable) reduce the price advantage of online merchandise, though depending on the jurisdiction, a lack of sales tax may compensate for this. Shipping a small number of items, especially from another country, is much more expensive than making the larger shipments bricks-and-mortar retailers order. Some retailers (especially those selling small, high-value items like electronics) offer free shipping on sufficiently large orders. Another major advantage for retailers is the ability to rapidly switch suppliers and vendors without disrupting users' shopping experience.
Disadvantages
Fraud and security concerns
Given the inability to inspect merchandise before purchase, consumers are at higher risk of fraud than in face-to-face transactions. When ordering merchandise online, the item may not work properly, it may have defects, or it might not be the same item pictured in the online photo. Merchants also risk fraudulent purchases if customers are using stolen credit cards or fraudulent repudiation of the online purchase. However, merchants face less risk from physical theft by using a warehouse instead of a retail storefront. Secure Sockets Layer (SSL) encryption has generally solved the problem of credit card numbers being intercepted in transit between the consumer and the merchant. However, one must still trust the merchant (and employees) not to use the credit card information subsequently for their own purchases, and not to pass the information to others. Also, hackers might break into a merchant's web site and steal names, addresses and credit card numbers, although the Payment Card Industry Data Security Standard is intended to minimize the impact of such breaches. Identity theft is still a concern for consumers. A number of high-profile break-ins in the 2000s have prompted some U.S. states to require disclosure to consumers when this happens. Computer security has thus become a major concern for merchants and e-commerce service providers, who deploy countermeasures such as firewalls and anti-virus software to protect their networks. Phishing is another danger, where consumers are fooled into thinking they are dealing with a reputable retailer, when they have actually been manipulated into feeding private information to a system operated by a malicious party. Denial of service attacks are a minor risk for merchants, as are server and network outages.
Quality seals can be placed on the shop's web page if it has undergone an independent assessment and meets all requirements of the company issuing the seal. The purpose of these seals is to increase the confidence of online shoppers. However, the existence of many different seals, or seals unfamiliar to consumers, may foil this effort to a certain extent.
A number of resources offer advice on how consumers can protect themselves when using online retailer services. These include:
Sticking with well-known stores, or attempting to find independent consumer reviews of their experiences; also ensuring that there is comprehensive contact information on the website before using the service, and noting if the retailer has enrolled in industry oversight programs such as a trust mark or a trust seal.
Before buying from a new company, evaluating the website by considering issues such as: the professionalism and user-friendliness of the site; whether or not the company lists a telephone number and/or street address along with e-contact information; whether a fair and reasonable refund and return policy is clearly stated; and whether there are hidden price inflators, such as excessive shipping and handling charges.
Ensuring that the retailer has an acceptable privacy policy posted. For example, note if the retailer does not explicitly state that it will not share private information with others without consent.
Ensuring that the vendor address is protected with SSL (see above) when entering credit card information. If it is, the address on the credit card information entry screen will start with "https". A minimal programmatic version of this check is sketched after this list.
Using strong passwords which do not contain personal information such as the user's name or birthdate. Another option is a "pass phrase", which might be something along the lines of: "I shop 4 good a buy!!" These are difficult to hack, since they do not consist of words found in a dictionary, and they provide a variety of upper-case, lower-case, and special characters. These passwords can be site-specific and may be easy to remember.
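As a minimal illustration of the HTTPS check mentioned above (it only inspects the URL scheme; real clients also validate the server certificate, and the shop URL here is hypothetical):

from urllib.parse import urlparse

def uses_https(url: str) -> bool:
    """Return True when the page address uses the https scheme."""
    return urlparse(url).scheme.lower() == "https"

print(uses_https("https://shop.example.com/checkout"))  # True
print(uses_https("http://shop.example.com/checkout"))   # False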
Although the benefits of online shopping are considerable, when the process goes poorly it can create a thorny situation. A few problems that shoppers potentially face include identity theft, faulty products, and the accumulation of spyware. If users are required to put in their credit card information and billing/shipping address and the website is not secure, customer information can be accessible to anyone who knows how to obtain it. Most large online corporations are inventing new ways to make fraud more difficult. However, criminals are constantly responding to these developments with new ways to manipulate the system. Even though online retailers are making efforts to protect consumer information, it is a constant fight to maintain the lead. It is advisable to be aware of the most current technology and scams to protect consumer identity and finances. Product delivery is also a main concern of online shopping. Most companies offer shipping insurance in case the product is lost or damaged. Some shipping companies will offer refunds or compensation for the damage, but this is up to their discretion.
Lack of full cost disclosure
The lack of full cost disclosure may also be problematic. While it may be easy to compare the base price of an item online, it may not be easy to see the total cost up front. Additional fees such as shipping are often not visible until the final step in the checkout process. The problem is especially evident with cross-border purchases, where the cost indicated at the final checkout screen may not include additional fees that must be paid upon delivery, such as duties and brokerage. Some services, such as the Canada-based Wishabi, attempt to include estimates of these additional costs, but nevertheless the lack of general full cost disclosure remains a concern.
Privacy
Privacy of personal information is a significant issue for some consumers. Many consumers wish to avoid spam and telemarketing which could result from supplying contact information to an online merchant. In response, many merchants promise not to use consumer information for these purposes. Many websites keep track of consumers' shopping habits in order to suggest items and other websites to view. Brick-and-mortar stores also collect consumer information. Some ask for a shopper's address and phone number at checkout, though consumers may refuse to provide it. Many larger stores use the address information encoded on consumers' credit cards (often without their knowledge) to add them to a catalog mailing list. This information is obviously not accessible to the merchant when paying in cash or through a bank (money transfer, in which case there is also proof of payment).
Product suitability
Many successful purely virtual companies deal with digital products (including information storage, retrieval, and modification), music, movies, office supplies, education, communication, software, photography, and financial transactions. Other successful marketers use drop shipping or affiliate marketing techniques to facilitate transactions of tangible goods without maintaining real inventory. Some non-digital products have been more successful than others for online stores. Profitable items often have a high value-to-weight ratio, they may involve embarrassing purchases, they may typically go to people in remote locations, and they may have shut-ins as their typical purchasers. Items which can fit in a standard mailbox—such as music CDs, DVDs and books—are particularly suitable for a virtual marketer.
Products such as spare parts, both for consumer items like washing machines and for industrial equipment like centrifugal pumps, also seem good candidates for selling online. Retailers often need to order spare parts specially, since they typically do not stock them at consumer outlets—in such cases, e-commerce solutions in spares do not compete with retail stores, only with other ordering systems. A factor for success in this niche can consist of providing customers with exact, reliable information about which part number their particular version of a product needs, for example by providing parts lists keyed by serial number. Products less suitable for e-commerce include products that have a low value-to-weight ratio, products that have a smell, taste, or touch component, products that need trial fittings—most notably clothing—and products where colour integrity appears important. Nonetheless, some web sites have had success delivering groceries, and clothing sold through the internet is big business in the U.S.
Aggregation
High-volume websites, such as Yahoo!, Amazon.com and eBay offer hosting services for online stores to all size retailers. These stores are presented within an integrated navigation framework, sometimes known as virtual shopping malls or online marketplaces.
Impact of reviews on consumer behavior
One of the great benefits of online shopping is the ability to read product reviews, written either by experts or fellow online shoppers. The Nielsen Company conducted a survey in March 2010 and polled more than 27,000 Internet users in 55 markets from the Asia-Pacific, Europe, Middle East, North America, and South America to look at questions such as "How do consumers shop online?", "What do they intend to buy?", "How do they use various online shopping web pages?", and the impact of social media and other factors that come into play when consumers are trying to decide how to spend their money on which product or service. According to the research, reviews on electronics (57%) such as DVD players, cellphones, or PlayStations, and so on, reviews on cars (45%), and reviews on software (37%) play an important role in influencing consumers who tend to make purchases online. Furthermore, 40% of online shoppers indicate that they would not even buy electronics without consulting online reviews first.
In addition to online reviews, peer recommendations on online shopping pages or social media websites play a key role for online shoppers when they are researching future purchases. 90% of all purchases made are influenced by social media.
See also
Bricks and clicks business model
Dark store
Digital distribution
Electronic business
Online auction business model
Online music store
Online pharmacy
Online shopping malls
Online shopping rewards
Open catalogue
Package delivery
Personal shopper
Product tracing systems: allow a product to be traced back to its source factory
Retail therapy
Types of retail outlets
References
External links
Consumer behaviour
Merchandising |
1845709 | https://en.wikipedia.org/wiki/Unified%20English%20Braille | Unified English Braille | Unified English Braille Code (UEBC, formerly UBC, now usually simply UEB) is an English language Braille code standard, developed to permit representing the wide variety of literary and technical material in use in the English-speaking world today, in uniform fashion.
Background on why the new encoding standard was developed
Standard 6-dot braille only provides 63 distinct characters (not including the space character), and thus, over the years a number of distinct rule-sets have been developed to represent literary text, mathematics, scientific material, computer software, the @ symbol used in email addresses, and other varieties of written material. Different countries also used differing encodings at various times: during the 1800s American Braille competed with English Braille and New York Point in the War of the Dots. As a result of the expanding need to represent technical symbolism, and divergence during the past 100 years across countries, braille users who desired to read or write a large range of material have needed to learn different sets of rules, depending on what kind of material they were reading at a given time. Rules for a particular type of material were often not compatible from one system to the next (the rule-sets for literary/mathematical/computerized encoding-areas were sometimes conflicting—and of course differing approaches to encoding mathematics were not compatible with each other), so the reader would need to be notified as the text in a book moved from computer braille code for programming to Nemeth Code for mathematics to standard literary braille. Moreover, the braille rule-set used for math and computer science topics, and even to an extent braille for literary purposes, differed among various English-speaking countries.
Overview of the goals of UEB
Unified English Braille is intended to develop one set of rules, the same everywhere in the world, which could be applied across various types of English-language material. The notable exception to this unification is Music Braille, which UEB specifically does not encompass, because it is already well-standardized internationally. Unified English Braille is designed to be readily understood by people familiar with the literary braille (used in standard prose writing), while also including support for specialized math and science symbols, computer-related symbols (the @ sign as well as more specialised programming-language syntax), foreign alphabets, and visual effects (bullets, bold type, accent marks, and so on).
According to the original 1991 specification for UEB, the goals were:
1. simplify and unify the system of braille used for encoding English, reducing community-fragmentation
2. reduce the overall number of official coding systems, which currently include:
a. literary code (since 1933, English Braille Grade 2 has been the main component)
i. BANA flavor used in North America, et cetera
ii. BAUK flavor used in United Kingdom, etc.
b. Textbook Formats and Techniques code
c. math-notation and science-notation codes
i. Nemeth Code (since 1952, in North America and several other countries)
ii. modern variants of Taylor Code, a subset of literary code (since 18xx, standard elsewhere, alternative in North America)
iii. Extended Nemeth Code With Chemistry Module
iv. Extended Nemeth Code With Ancient Numeration Module
v. Mathematical Diagrams Module (not actually associated with any particular coding-system)
d. Computer Braille Code (since the 1980s, for special characters)
i. the basic CBC
ii. CBC With Flowchart Module
e. Braille Music Code (since 1829, last upgraded/unified 1997, used for vocals and instrumentals—this one explicitly not to be unified nor eliminated)
f. [added later] IPA Braille code (used for phonetic transcriptions—this one did not yet exist in 1991)
3. if possible, unify the literary-code used across English-speaking countries
4. where it is not possible to reduce the number of coding systems, reduce conflicts
a. most especially, rule-conflicts (which make the codes incompatible at a "software" level—in human brains and computer algorithms)
b. symbol conflicts, for example, the characters "$", "%", "]", and "[" are all represented differently in the various code systems
c. sometimes the official coding-systems themselves are not explicitly in conflict, but ambiguity in their rules can lead to accidental conflicts
5. the overall goal of steps 1 to 4 above is to make acquisition of reading, writing, and teaching skill in the use of braille quicker, easier, and more efficient
6. this in turn will help reverse the trend of steadily eroding usage of Braille itself (which is being replaced by electronics and/or illiteracy)
7. besides those practical goals, it is also desired that braille—as a writing system—have the properties required for long-term success:
a. universal, with no special code-system for particular subject-matter, no special-purpose "modules", and no serious disagreements about how to encode English
b. coherent, with no internal conflicts, and thus no need for authoritative fiat to "resolve" such conflicts by picking winners and losers
c. ease of use, with dramatically less need for braille-coding-specific lessons, certifications, workshops, literature, etc.
d. uniform yet extensible, with symbol-assignment giving an unvarying identity-relationship, and new symbols possible without conflicts or overhauls
8. philosophically, an additional goal is to upgrade the braille system to be practical for employment in a workplace, not just for reading recreational and religious texts
a. computer-friendly (braille-production on modern keyboards and braille-consumption via computerized file formats—see also Braille e-book which did not really exist back in 1990)
b. tech-writing-friendly (straightforward handling of notations used in math/science/medical/programming/engineering/similar)
c. precise bidirectional representation (both #8a and #8b can be largely satisfied by a precision writing system…but the existing braille systems as of 1990 were not fully precise, replacing symbols with words, converting unit-systems, altering punctuation, and so on)
9. upgrades to existing braille-codes are required, and then these modified codes can be merged into a unified code (preferably singular plus the music-code)
Some goals were specially and explicitly called out as key objectives, not all of which are mentioned above:
objective#A = precise bidirectional representation of printed-text (see #8c)
objective#B = maximizing the usefulness of braille's limited formatting mechanisms in systematic fashion (so that readers can quickly and easily locate the information they are seeking)
objective#C = unifying the rule-systems and symbol-assignments for all subject-matters except musical notation, to eliminate 'unlearning' (#9 / #2 / #3)
objective#D = context-independent encoding (symbols must be transcribable in straightforward fashion—without regard to their English meaning)
objective#E = markup or mode-switching ability (to clearly distinguish between information from the printed version, versus transcriber commentary)
objective#F = easy-to-memorize symbol-assignments (to make learning the coding system easier—and also facilitate reading of relatively rare symbols) (see #7c / #5 / #1)
objective#G = extensible coding-system (with the possibility of introducing new symbols in a non-conflicting and systematic manner) (see #7d)
objective#H = algorithmic representation and deterministic rule-set (texts are amenable to automatic computerized translation from braille to print—and vice versa) (see #8a)
objective#I = backward compatibility with English Braille Grade 2 (someone reading regular words and sentences will hardly notice any modifications)
objective#J = reverse the steadily declining trend of braille-usage (as a statistical percentage of the blind-community), as soon as possible (see #6)
A goal that was specifically not part of the UEB upgrade process was the ability to handle languages outside the Roman alphabet (cf. the various national variants of ASCII in the ISO 8859 series versus the modern pan-universal Unicode standard, which governs how writing systems are encoded for computerized use).
History of specification and adoption of UEB
Work on UEB formally began in 1991, and a preliminary draft standard was published in March 1995 (as UBC), then upgraded several times thereafter. Unified English Braille (UEB) was originally known as Unified Braille Code (UBC), with the English-specific nature being implied, but later the word "English" was formally incorporated into its name—Unified English Braille Code (UEBC)—and still more recently it has come to be called Unified English Braille (UEB). On April 2, 2004, the International Council on English Braille (ICEB) gave the go-ahead for the unification of various English braille codes. This decision was reached following 13 years of analysis, research, and debate. ICEB said that Unified English Braille was sufficiently complete for recognition as an international standard for English braille, which the seven ICEB member-countries could consider for adoption as their national code. South Africa adopted the UEB almost immediately (in May 2004). During the following year, the standard was adopted by Nigeria (February 5, 2005), Australia (May 14, 2005), and New Zealand (November 2005). On April 24, 2010, the Canadian Braille Authority (CBA) voted to adopt UEB, making Canada the fifth nation to adopt UEB officially. On October 21, 2011, the UK Association for Accessible Formats voted to adopt UEB as the preferred code in the UK. On November 2, 2012, the Braille Authority of North America (BANA) became the sixth of the seven member-countries of the ICEB to officially adopt the UEB.
Controversy over mathematics notation in UEB
The major criticism against UEB is that it fails to handle mathematics or computer science as compactly as codes designed to be optimal for those disciplines. Besides requiring more space to represent and more time to read and write, the verbosity of UEB can make learning mathematics more difficult. Nemeth Braille, officially used in the United States since 1952, and as of 2002 the de facto standard for teaching and doing mathematics in braille in the US, was specifically invented to correct the cumbersomeness of doing mathematics in braille. However, although the Nemeth encoding standard was officially adopted by the JUTC of the US and the UK in the 1950s, in practice only the USA switched their mathematical braille to the Nemeth system, whereas the UK continued to use the traditional Henry Martyn Taylor coding (not to be confused with Hudson Taylor, who was involved with the use of Moon type for the blind in China during the 1800s) for their braille mathematics. Programmers in the United States who write their programming codefiles in braille—as opposed to in ASCII text with use of a screenreader for example—tend to use Nemeth-syntax numerals, whereas programmers in the UK use yet another system (not Taylor-numerals and not literary-numerals).
The key difference of Nemeth Braille compared to Taylor (and UEB which uses an upgraded version of the Taylor encoding for math) is that Nemeth uses "down-shifted" numerals from the fifth decade of the Braille alphabet (overwriting various punctuation characters), whereas UEB/Taylor uses the traditional 1800s approach with "up-shifted" numerals from the first decade of the (English) Braille alphabet (overwriting the first ten letters, namely ABCDEFGHIJ). Traditional 1800s braille, and also UEB, require insertion of numeral-prefixes when speaking of numerals, which makes representing some mathematical equations 42% more verbose. As an alternative to UEB, there were proposals in 2001 and 2009, and most recently these were the subject of various technical workshops during 2012. Although UEB adopts some features of Nemeth, the final version of UEB mandates up-shifted numerals, which are the heart of the controversy. According to BANA, which adopted UEB in 2012, the official braille codes for the USA will be UEB and Nemeth Braille (as well as Music Braille for vocals and instrumentals plus IPA Braille for phonetic linguistics), despite the use of contradictory representation of numerals and arithmetical symbols in the UEB and Nemeth encodings. Thus, although UEB has officially been adopted in most English-speaking ICEB member-countries, in the USA (and possibly the UK where UEB is only the "preferred" system) the new encoding is not to be the sole encoding.
Another proposed braille-notation for encoding math is GS8/GS6, which was specifically invented in the early 1990s as an attempt to get rid of the "up-shifted" numerals used in UEB—see Gardner–Salinas Braille. GS6 implements "extra-dot" numerals from the fourth decade of the English Braille alphabet (overwriting various two-letter ligatures). GS8 expands the braille-cell from 2×3 dots to 2×4 dots, quadrupling the available codepoints from the traditional 64 up to 256, but in GS8 the numerals are still represented in the same way as in GS6 (albeit with a couple unused dot-positions at the bottom).
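The codepoint arithmetic behind these cell sizes is easy to reproduce; the following Python fragment is purely illustrative and not part of any braille specification.

# Each dot in a braille cell is either raised or flat, so a cell of n dots
# yields 2**n distinct patterns, including the all-flat blank cell.
def codepoints(rows, cols):
    return 2 ** (rows * cols)

print(codepoints(3, 2))   # classic 2x3 cell: 64 patterns (63 characters excluding the blank)
print(codepoints(4, 2))   # GS8-style 2x4 cell: 256 patterns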
Attempts to give the numerals their own distinct position in braille are not new: the original 1829 specification by Louis Braille gave the numerals their own distinct symbols, with the modern digraph-based literary-braille approach mentioned as an optional fallback. However, after trying the system out in the classroom, the dashes used in the numerals—as well as several other rows of special characters—were found to be too difficult to distinguish from dot-pairs, and thus the typical digraph-based numerals became the official standard in 1837.
Implementation of UEB in English-speaking countries
As of 2013, with the majority of English-speaking ICEB member-countries having officially adopted UEB, there remain barriers to implementation and deployment. Besides ICEB member-nations, there are also many other countries with blind citizens that teach and use English: India, Hong Kong/China, Pakistan, the Philippines, and so on. Many of these countries use non-UEB math notation; among English-speaking countries specifically, versions of the Nemeth Code were widespread by 1990 (in the United States, Western Samoa, Canada including Quebec, New Zealand, Israel, Greece, India, Pakistan, Sri Lanka, Thailand, Malaysia, Indonesia, Cambodia, Vietnam, and Lebanon), in contrast to the similar-to-UEB-but-not-identical Taylor notation (used in 1990 by the UK, Ireland, Australia, Nigeria, Hong Kong, Jordan, Kenya, Sierra Leone, Singapore, and Zimbabwe). Some countries in the Middle East, namely Iran and Saudi Arabia, used Nemeth and Taylor math-notations as of 1990. As of 2013, it is unclear whether the English-using blind populations of various ICEB and non-ICEB nations will move to adopt the UEB, and if so, at what rate. Beyond official adoption rates in schools and by individuals, there are other difficulties. The vast majority of existing Braille materials, both printed and electronic, are in non-UEB encodings. Furthermore, other technologies that compete with braille are now ever-more-widely affordable (screen readers for electronic-text-to-speech, plus physical-pages-to-electronic-text software combined with high-resolution digital cameras and high-speed document scanners, and the increasing ubiquity of tablets/smartphones/PDAs/PCs). The percentage of blind children who are literate in braille is already declining—and even those who know some system tend not to know UEB, since that system is still very new. Still, as of 2012 many of the original goals for UEB have already been fully or partially accomplished:
A unified literary code across most English-speaking countries (see separate section of this article on the adoption of UEB)
Number of coding-subsystems reduced from five major and one minor down to two major and two minor (uebLiterary and Nemeth using formal codeswitching, plus music and IPA); in addition, the generality of the basic uebLiterary was increased to fully cover parentheses, math-symbols, emails, and websites.
Reasonable level of backward compatibility with the American style of English Braille (more time is required before the exact level of transitional pain can be pinpointed, but studies in Australia and the UK indicate that braille users in the United States will also likely cope quite easily)
Making braille more computer-friendly, especially in terms of translation and backtranslation of the encoding system
Fully extensible encoding system, where new symbols can be added without causing conflicts or requiring coding-overhauls
Not all the symbol-duplications were eliminated (there are still at least two representations of the $ symbol, for instance). Since there are still two major coding-systems for math-notation and other technical or scientific writing (Nemeth as an option in the United States versus the Taylor-style math-notation recently added to uebLiterary that will likely be used in other countries), some rule conflicts remain, and braille users will be required to "unlearn" certain rules when switching. In the long run, whether these accomplishments will translate into the broader goals—reducing community fragmentation among English-speaking braille users, boosting the speed of acquiring reading, writing, and teaching skill in the use of braille, and thereby preserving braille's status as a useful writing-system for the blind—remains to be seen as of 2013.
See also
American Braille
Gardner Salinas braille
Nemeth Braille
References
External links
The Rules of Unified English Braille (2013)
Comments on Mathematical Aspects of the UEBC by Dr. Abraham Nemeth, inventor of the Nemeth Braille Code
International Council on English Braille (ICEB)
National Braille Press has a free booklet about the UEBC (in braille or electronic braille only)
Some Thoughts on the UEBC
History of standardization of braille-encodings, 1860 through 1950
Braille
Constructed languages introduced in the 1990s
1991 introductions |
75028 | https://en.wikipedia.org/wiki/Voice%20over%20IP | Voice over IP | Voice over Internet Protocol (VoIP), also called IP telephony, is a method and group of technologies for the delivery of voice communications and multimedia sessions over Internet Protocol (IP) networks, such as the Internet. The terms Internet telephony, broadband telephony, and broadband phone service specifically refer to the provisioning of communications services (voice, fax, SMS, voice-messaging) over the Internet, rather than via the public switched telephone network (PSTN), also known as plain old telephone service (POTS).
Overview
The steps and principles involved in originating VoIP telephone calls are similar to traditional digital telephony and involve signaling, channel setup, digitization of the analog voice signals, and encoding. Instead of being transmitted over a circuit-switched network, the digital information is packetized and transmission occurs as IP packets over a packet-switched network. They transport media streams using special media delivery protocols that encode audio and video with audio codecs and video codecs. Various codecs exist that optimize the media stream based on application requirements and network bandwidth; some implementations rely on narrowband and compressed speech, while others support high-fidelity stereo codecs.
The most widely used speech coding standards in VoIP are based on the linear predictive coding (LPC) and modified discrete cosine transform (MDCT) compression methods. Popular codecs include the MDCT-based AAC-LD (used in FaceTime), the LPC/MDCT-based Opus (used in WhatsApp), the LPC-based SILK (used in Skype), the μ-law and A-law versions of G.711, G.722, an open source voice codec known as iLBC, and G.729, a codec that uses only 8 kbit/s in each direction.
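As an illustration of the companding idea behind the μ-law flavour of G.711, the short Python sketch below applies the continuous μ-law formula; the deployed codec actually uses a segmented 8-bit approximation of this curve, so treat this as a simplified model rather than a bit-exact encoder.

import math

def mu_law_compress(x, mu=255.0):
    # Map a linear sample x in [-1.0, 1.0] to a companded value in [-1.0, 1.0].
    return math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)

# Quiet samples are boosted and loud samples compressed, which is why 8-bit
# mu-law speech is roughly comparable to 13-14 bit linear PCM.
for sample in (0.01, 0.1, 0.5, 1.0):
    print(sample, round(mu_law_compress(sample), 3))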
Early providers of voice-over-IP services used business models and offered technical solutions that mirrored the architecture of the legacy telephone network. Second-generation providers, such as Skype, built closed networks for private user bases, offering the benefit of free calls and convenience while potentially charging for access to other communication networks, such as the PSTN. This limited the freedom of users to mix-and-match third-party hardware and software. Third-generation providers, such as Google Talk, adopted the concept of federated VoIP. These solutions typically allow dynamic interconnection between users in any two domains of the Internet, when a user wishes to place a call.
In addition to VoIP phones, VoIP is also available on many personal computers and other Internet access devices. Calls and SMS text messages may be sent via Wi-Fi or the carrier's mobile data network. VoIP provides a framework for consolidation of all modern communications technologies using a single unified communications system.
Pronunciation
VoIP is variously pronounced as an initialism, V-O-I-P, or as an acronym. The full words, voice over Internet Protocol or voice over IP, are sometimes used.
Protocols
Voice over IP has been implemented with proprietary protocols and protocols based on open standards in applications such as VoIP phones, mobile applications, and web-based communications.
A variety of functions are needed to implement VoIP communication. Some protocols perform multiple functions, while others perform only a few and must be used in concert. These functions include:
Network and transport – Creating reliable transmission over unreliable protocols, which may involve acknowledging receipt of data and retransmitting data that was not received.
Session management – Creating and managing a session (sometimes glossed as simply a "call"), which is a connection between two or more peers that provides a context for further communication.
Signaling – Performing registration (advertising one's presence and contact information) and discovery (locating someone and obtaining their contact information), dialing (including reporting call progress), negotiating capabilities, and call control (such as hold, mute, transfer/forwarding, dialing DTMF keys during a call [e.g. to interact with an automated attendant or IVR], etc.).
Media description – Determining what type of media to send (audio, video, etc.), how to encode/decode it, and how to send/receive it (IP addresses, ports, etc.).
Media – Transferring the actual media in the call, such as audio, video, text messages, files, etc.
Quality of service – Providing out-of-band content or feedback about the media such as synchronization, statistics, etc.
Security – Implementing access control, verifying the identity of other participants (computers or people), and encrypting data to protect the privacy and integrity of the media contents and/or the control messages.
VoIP protocols include:
Session Initiation Protocol (SIP), connection management protocol developed by the IETF
H.323, one of the first VoIP call signaling and control protocols that found widespread implementation. Since the development of newer, less complex protocols such as MGCP and SIP, H.323 deployments are increasingly limited to carrying existing long-haul network traffic.
Media Gateway Control Protocol (MGCP), connection management for media gateways
H.248, control protocol for media gateways across a converged internetwork consisting of the traditional PSTN and modern packet networks
Real-time Transport Protocol (RTP), transport protocol for real-time audio and video data
Real-time Transport Control Protocol (RTCP), sister protocol for RTP providing stream statistics and status information
Secure Real-time Transport Protocol (SRTP), encrypted version of RTP
Session Description Protocol (SDP), a syntax for session initiation and announcement for multi-media communications and WebSocket transports.
Inter-Asterisk eXchange (IAX), protocol used between Asterisk PBX instances
Extensible Messaging and Presence Protocol (XMPP), instant messaging, presence information, and contact list maintenance
Jingle, for peer-to-peer session control in XMPP
Skype protocol, proprietary Internet telephony protocol suite based on peer-to-peer architecture
Adoption
Consumer market
Mass-market VoIP services use existing broadband Internet access, by which subscribers place and receive telephone calls in much the same manner as they would via the PSTN. Full-service VoIP phone companies provide inbound and outbound service with direct inbound dialing. Many offer unlimited domestic calling and sometimes international calls for a flat monthly subscription fee. Phone calls between subscribers of the same provider are usually free when flat-fee service is not available.
A VoIP phone is necessary to connect to a VoIP service provider. This can be implemented in several ways:
Dedicated VoIP phones connect directly to the IP network using technologies such as wired Ethernet or Wi-Fi. These are typically designed in the style of traditional digital business telephones.
An analog telephone adapter connects to the network and implements the electronics and firmware to operate a conventional analog telephone attached through a modular phone jack. Some residential Internet gateways and cablemodems have this function built in.
Softphone application software installed on a networked computer that is equipped with a microphone and speaker, or headset. The application typically presents a dial pad and display field to the user to operate the application by mouse clicks or keyboard input.
PSTN and mobile network providers
It is increasingly common for telecommunications providers to use VoIP telephony over dedicated and public IP networks as a backhaul to connect switching centers and to interconnect with other telephony network providers; this is often referred to as IP backhaul.
Smartphones may have SIP clients built into the firmware or available as an application download.
Corporate use
Because of the bandwidth efficiency and low costs that VoIP technology can provide, businesses are migrating from traditional copper-wire telephone systems to VoIP systems to reduce their monthly phone costs. In 2008, 80% of all new Private branch exchange (PBX) lines installed internationally were VoIP. For example, in the United States, the Social Security Administration is converting its field offices of 63,000 workers from traditional phone installations to a VoIP infrastructure carried over its existing data network.
VoIP allows both voice and data communications to be run over a single network, which can significantly reduce infrastructure costs. The prices of extensions on VoIP are lower than for PBX and key systems. VoIP switches may run on commodity hardware, such as personal computers. Rather than closed architectures, these devices rely on standard interfaces. VoIP devices have simple, intuitive user interfaces, so users can often make simple system configuration changes. Dual-mode phones enable users to continue their conversations as they move between an outside cellular service and an internal Wi-Fi network, so that it is no longer necessary to carry both a desktop phone and a cell phone. Maintenance becomes simpler as there are fewer devices to oversee.
VoIP solutions aimed at businesses have evolved into unified communications services that treat all communications—phone calls, faxes, voice mail, e-mail, web conferences, and more—as discrete units that can all be delivered via any means and to any handset, including cellphones. Two kinds of service providers are operating in this space: one set is focused on VoIP for medium to large enterprises, while another is targeting the small-to-medium business (SMB) market.
Skype, which originally marketed itself as a service among friends, has begun to cater to businesses, providing free-of-charge connections between any users on the Skype network and connecting to and from ordinary PSTN telephones for a charge.
Delivery mechanisms
In general, the provision of VoIP telephony systems to organizational or individual users can be divided into two primary delivery methods: private or on-premises solutions, or externally hosted solutions delivered by third-party providers. On-premises delivery methods are more akin to the classic PBX deployment model for connecting an office to local PSTN networks.
While many use cases still remain for private or on-premises VoIP systems, the wider market has been gradually shifting toward cloud or hosted VoIP solutions. Hosted systems are also generally better suited to smaller or personal-use VoIP deployments, for which a private system may not be viable.
Hosted VoIP systems
Hosted or Cloud VoIP solutions involve a service provider or telecommunications carrier hosting the telephone system as a software solution within their own infrastructure.
Typically this will be one or more datacentres, with geographic relevance to the end-user(s) of the system. This infrastructure is external to the user of the system and is deployed and maintained by the service provider.
Endpoints, such as VoIP telephones or softphone applications (apps running on a computer or mobile device), will connect to the VoIP service remotely. These connections typically take place over public internet links, such as local fixed WAN breakout or mobile carrier service.
Private VoIP systems
In the case of a private VoIP system, the primary telephony system itself is located within the private infrastructure of the end-user organization. Usually, the system will be deployed on-premises at a site within the direct control of the organization. This can provide numerous benefits in terms of QoS control (see below), cost scalability, and ensuring privacy and security of communications traffic. However, the responsibility for ensuring that the VoIP system remains performant and resilient is predominantly vested in the end-user organization. This is not the case with a Hosted VoIP solution.
Private VoIP systems can be physical hardware PBX appliances, converged with other infrastructure, or they can be deployed as software applications. Generally, the latter two options will be in the form of a separate virtualized appliance. However, in some scenarios, these systems are deployed on bare metal infrastructure or IoT devices. With some solutions, such as 3CX, companies can attempt to blend the benefits of hosted and private on-premises systems by implementing their own private solution but within an external environment. Examples can include datacentre collocation services, public cloud, or private cloud locations.
For on-premises systems, local endpoints within the same location typically connect directly over the LAN. For remote and external endpoints, available connectivity options mirror those of Hosted or Cloud VoIP solutions.
However, VoIP traffic to and from the on-premises systems can often also be sent over secure private links. Examples include personal VPN, site-to-site VPN, private networks such as MPLS and SD-WAN, or via private SBCs (Session Border Controllers). While exceptions and private peering options do exist, it is generally uncommon for those private connectivity methods to be provided by Hosted or Cloud VoIP providers.
Quality of service
Communication on the IP network is perceived as less reliable than the circuit-switched public telephone network because it does not provide a network-based mechanism to ensure that data packets are not lost and are delivered in sequential order. It is a best-effort network without fundamental quality of service (QoS) guarantees. Voice, and all other data, travels in packets over IP networks with fixed maximum capacity. This system may be more prone to data loss in the presence of congestion than traditional circuit-switched systems; a circuit-switched system of insufficient capacity will refuse new connections while carrying the remainder without impairment, while the quality of real-time data such as telephone conversations on packet-switched networks degrades dramatically. Therefore, VoIP implementations may face problems with latency, packet loss, and jitter.
By default, network routers handle traffic on a first-come, first-served basis. Fixed delays cannot be controlled as they are caused by the physical distance the packets travel. They are especially problematic when satellite circuits are involved because of the long distance to a geostationary satellite and back; delays of 400–600 ms are typical. Latency can be minimized by marking voice packets as being delay-sensitive with QoS methods such as DiffServ.
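The satellite figure follows from simple geometry; the back-of-the-envelope Python sketch below (using nominal values for the geostationary altitude and the speed of light) shows where delays of that magnitude come from.

SPEED_OF_LIGHT_KM_S = 299_792
GEO_ALTITUDE_KM = 35_786          # nominal geostationary altitude

# Minimum one-way path: ground -> satellite -> ground, directly below the satellite.
one_way_s = 2 * GEO_ALTITUDE_KM / SPEED_OF_LIGHT_KM_S
print(round(one_way_s * 1000))     # ~239 ms; slant paths add a few tens of ms

# A conversational round trip crosses the hop in both directions before any
# queuing, codec, or jitter-buffer delay is added.
print(round(2 * one_way_s * 1000))   # ~477 ms, in the range quoted above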
Network routers on high volume traffic links may introduce latency that exceeds permissible thresholds for VoIP. Excessive load on a link can cause congestion and associated queueing delays and packet loss. This signals a transport protocol like TCP to reduce its transmission rate to alleviate the congestion. But VoIP usually uses UDP not TCP because recovering from congestion through retransmission usually entails too much latency. So QoS mechanisms can avoid the undesirable loss of VoIP packets by immediately transmitting them ahead of any queued bulk traffic on the same link, even when the link is congested by bulk traffic.
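As a concrete example of the DiffServ marking mentioned above, a sender can request expedited forwarding by setting the DSCP bits on its socket; whether routers honor the marking is entirely a matter of network policy, so the following sketch (Unix-like systems; hypothetical destination address) is only a best-effort hint.

import socket

# DSCP 46 (Expedited Forwarding) occupies the top six bits of the TOS byte: 46 << 2 = 0xB8.
DSCP_EF_TOS = 0xB8

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF_TOS)

# Hypothetical destination; a real RTP stack would send timestamped media packets here.
sock.sendto(b"\x80" * 172, ("192.0.2.10", 5004))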
VoIP endpoints usually have to wait for the completion of transmission of previous packets before new data may be sent. Although it is possible to preempt (abort) a less important packet in mid-transmission, this is not commonly done, especially on high-speed links where transmission times are short even for maximum-sized packets. An alternative to preemption on slower links, such as dialup and digital subscriber line (DSL), is to reduce the maximum transmission time by reducing the maximum transmission unit. But since every packet must contain protocol headers, this increases relative header overhead on every link traversed.
The receiver must resequence IP packets that arrive out of order and recover gracefully when packets arrive too late or not at all. Packet delay variation results from changes in queuing delay along a given network path due to competition from other users for the same transmission links. VoIP receivers accommodate this variation by storing incoming packets briefly in a playout buffer, deliberately increasing latency to improve the chance that each packet will be on hand when it is time for the voice engine to play it. The added delay is thus a compromise between excessive latency and excessive dropout, i.e. momentary audio interruptions.
Although jitter is a random variable, it is the sum of several other random variables that are at least somewhat independent: the individual queuing delays of the routers along the Internet path in question. Motivated by the central limit theorem, jitter can be modeled as a Gaussian random variable. This suggests continually estimating the mean delay and its standard deviation and setting the playout delay so that only packets delayed more than several standard deviations above the mean will arrive too late to be useful. In practice, the variance in latency of many Internet paths is dominated by a small number (often one) of relatively slow and congested bottleneck links. Most Internet backbone links are now so fast (e.g. 10 Gbit/s) that their delays are dominated by the transmission medium (e.g. optical fiber) and the routers driving them do not have enough buffering for queuing delays to be significant.
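A minimal sketch of that adaptive playout strategy, assuming the receiver can measure each packet's network delay, keeps exponentially weighted estimates of the mean delay and its deviation and schedules playout a few deviations above the mean (the 1/16 gain mirrors the RFC 3550 jitter estimator).

class PlayoutEstimator:
    def __init__(self, gain=1 / 16, k=4.0):
        self.gain = gain          # smoothing factor for the running estimates
        self.k = k                # deviations of headroom before a packet is "too late"
        self.avg_delay = None
        self.deviation = 0.0

    def update(self, delay_ms):
        if self.avg_delay is None:
            self.avg_delay = delay_ms
            return self.playout_delay()
        diff = delay_ms - self.avg_delay
        self.avg_delay += self.gain * diff
        self.deviation += self.gain * (abs(diff) - self.deviation)
        return self.playout_delay()

    def playout_delay(self):
        # Packets delayed by more than avg + k * deviation miss their playout slot.
        return self.avg_delay + self.k * self.deviation

est = PlayoutEstimator()
for d in (40, 42, 38, 95, 41, 39):   # made-up per-packet one-way delays in milliseconds
    print(round(est.update(d), 1))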
A number of protocols have been defined to support the reporting of quality of service (QoS) and quality of experience (QoE) for VoIP calls. These include RTP Control Protocol (RTCP) extended reports, SIP RTCP summary reports, H.460.9 Annex B (for H.323), H.248.30 and MGCP extensions.
The RTCP extended report VoIP metrics block, specified in RFC 3611, is generated by an IP phone or gateway during a live call and contains information on packet loss rate, packet discard rate (because of jitter), packet loss/discard burst metrics (burst length/density, gap length/density), network delay, end system delay, signal/noise/echo level, mean opinion scores (MOS) and R factors, and configuration information related to the jitter buffer. VoIP metrics reports are exchanged between IP endpoints on an occasional basis during a call, and an end-of-call message is sent via SIP RTCP summary report or one of the other signaling protocol extensions. VoIP metrics reports are intended to support real-time feedback related to QoS problems, the exchange of information between the endpoints for improved call quality calculation, and a variety of other applications.
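The R factors and mean opinion scores carried in such reports are linked by the ITU-T G.107 E-model; the commonly used conversion, sketched here for illustration only, maps an R value to an estimated MOS.

def r_to_mos(r):
    # Approximate MOS from an E-model R factor (ITU-T G.107 mapping).
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1.0 + 0.035 * r + 7.0e-6 * r * (r - 60.0) * (100.0 - r)

# The default narrowband R of about 93.2 corresponds to a MOS near 4.4;
# packet loss, delay, and low-rate codecs all drive R (and therefore MOS) down.
for r in (93.2, 80, 70, 50):
    print(r, round(r_to_mos(r), 2))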
DSL and ATM
DSL modems typically provide Ethernet connections to local equipment, but inside they may actually be Asynchronous Transfer Mode (ATM) modems. They use ATM Adaptation Layer 5 (AAL5) to segment each Ethernet packet into a series of 53-byte ATM cells for transmission, reassembling them back into Ethernet frames at the receiving end.
Using a separate virtual circuit identifier (VCI) for audio over IP has the potential to reduce latency on shared connections. ATM's potential for latency reduction is greatest on slow links because worst-case latency decreases with increasing link speed. A full-size (1500 byte) Ethernet frame takes 94 ms to transmit at 128 kbit/s but only 8 ms at 1.5 Mbit/s. If this is the bottleneck link, this latency is probably small enough to ensure good VoIP performance without MTU reductions or multiple ATM VCs. The latest generations of DSL, VDSL and VDSL2, carry Ethernet without intermediate ATM/AAL5 layers, and they generally support IEEE 802.1p priority tagging so that VoIP can be queued ahead of less time-critical traffic.
ATM has substantial header overhead: 5/53 = 9.4%, roughly twice the total header overhead of a 1500 byte Ethernet frame. This "ATM tax" is incurred by every DSL user whether or not they take advantage of multiple virtual circuits – and few can.
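The serialization and overhead figures above reduce to a few lines of arithmetic; the sketch below simply reuses the same frame size and link rates.

def serialization_ms(frame_bytes, link_bits_per_s):
    # Time needed to clock one frame onto the wire, ignoring every other delay.
    return frame_bytes * 8 / link_bits_per_s * 1000

print(round(serialization_ms(1500, 128_000)))     # ~94 ms on a 128 kbit/s link
print(round(serialization_ms(1500, 1_500_000)))   # ~8 ms on a 1.5 Mbit/s link

# The ATM cell tax: every 53-byte cell spends 5 bytes on its header.
print(round(5 / 53 * 100, 1))                     # ~9.4 percent header overhead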
Layer 2
Several protocols are used in the data link layer and physical layer for quality-of-service mechanisms that help VoIP applications work well even in the presence of network congestion. Some examples include:
IEEE 802.11e is an approved amendment to the IEEE 802.11 standard that defines a set of quality-of-service enhancements for wireless LAN applications through modifications to the Media Access Control (MAC) layer. The standard is considered of critical importance for delay-sensitive applications, such as voice over wireless IP.
IEEE 802.1p defines 8 different classes of service (including one dedicated to voice) for traffic on layer-2 wired Ethernet.
The ITU-T G.hn standard, which provides a way to create a high-speed (up to 1 gigabit per second) Local area network (LAN) using existing home wiring (power lines, phone lines and coaxial cables). G.hn provides QoS by means of Contention-Free Transmission Opportunities (CFTXOPs) which are allocated to flows (such as a VoIP call) that require QoS and which have negotiated a contract with the network controllers.
Performance metrics
The quality of voice transmission is characterized by several metrics that may be monitored by network elements and by the user agent hardware or software. Such metrics include network packet loss, packet jitter, packet latency (delay), post-dial delay, and echo. The metrics are determined by VoIP performance testing and monitoring.
PSTN integration
A VoIP media gateway controller (aka Class 5 Softswitch) works in cooperation with a media gateway (aka IP Business Gateway) and connects the digital media stream, so as to complete the path for voice and data. Gateways include interfaces for connecting to standard PSTN networks. Modern systems that are specially designed to carry calls passed via VoIP also include Ethernet interfaces.
E.164 is a global numbering standard for both the PSTN and public land mobile network (PLMN). Most VoIP implementations support E.164 to allow calls to be routed to and from VoIP subscribers and the PSTN/PLMN. VoIP implementations can also allow other identification techniques to be used. For example, Skype allows subscribers to choose Skype names (usernames) whereas SIP implementations can use Uniform Resource Identifier (URIs) similar to email addresses. Often VoIP implementations employ methods of translating non-E.164 identifiers to E.164 numbers and vice versa, such as the Skype-In service provided by Skype and the E.164 number to URI mapping (ENUM) service in IMS and SIP.
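The ENUM mapping mentioned above follows a simple recipe (defined in RFC 6116): keep only the digits of the E.164 number, reverse them, separate them with dots, and append the e164.arpa suffix; the resulting domain is then queried for NAPTR records pointing at a SIP or other URI. A sketch of the number-to-domain step, using a hypothetical example number, is shown below.

def e164_to_enum_domain(number, suffix="e164.arpa"):
    # Convert an E.164 number such as '+1-555-123-4567' into its ENUM query domain.
    digits = [c for c in number if c.isdigit()]
    return ".".join(reversed(digits)) + "." + suffix

# Hypothetical number for illustration; a resolver would look up NAPTR records
# for the returned domain to discover a URI for the callee.
print(e164_to_enum_domain("+1-555-123-4567"))
# -> 7.6.5.4.3.2.1.5.5.5.1.e164.arpa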
Echo can also be an issue for PSTN integration. Common causes of echo include impedance mismatches in analog circuitry and an acoustic path from the receive to transmit signal at the receiving end.
Number portability
Local number portability (LNP) and mobile number portability (MNP) also impact VoIP business. Number portability is a service that allows a subscriber to select a new telephone carrier without requiring a new number to be issued. Typically, it is the responsibility of the former carrier to "map" the old number to the undisclosed number assigned by the new carrier. This is achieved by maintaining a database of numbers. A dialed number is initially received by the original carrier and quickly rerouted to the new carrier. Multiple porting references must be maintained even if the subscriber returns to the original carrier. The FCC mandates carrier compliance with these consumer-protection stipulations. In November 2007, the Federal Communications Commission in the United States released an order extending number portability obligations to interconnected VoIP providers and carriers that support VoIP providers.
A voice call originating in the VoIP environment also faces least-cost routing (LCR) challenges to reach its destination if the number is routed to a mobile phone number on a traditional mobile carrier. LCR is based on checking the destination of each telephone call as it is made, and then sending the call via the network that will cost the customer the least. This rating is subject to some debate given the complexity of call routing created by number portability. With MNP in place, LCR providers can no longer rely on using the network root prefix to determine how to route a call. Instead, they must now determine the actual network of every number before routing the call.
Therefore, VoIP solutions also need to handle MNP when routing a voice call. In countries without a central database, like the UK, it may be necessary to query the mobile network about which home network a mobile phone number belongs to. As the popularity of VoIP increases in the enterprise markets because of LCR options, VoIP needs to provide a certain level of reliability when handling calls.
Emergency calls
A telephone connected to a land line has a direct relationship between a telephone number and a physical location, which is maintained by the telephone company and available to emergency responders via the national emergency response service centers in the form of emergency subscriber lists. When an emergency call is received by a center, the location is automatically determined from its databases and displayed on the operator console.
In IP telephony, no such direct link between location and communications end point exists. Even a provider having wired infrastructure, such as a DSL provider, may know only the approximate location of the device, based on the IP address allocated to the network router and the known service address. Some ISPs do not track the automatic assignment of IP addresses to customer equipment.
IP communication provides for device mobility. For example, a residential broadband connection may be used as a link to a virtual private network of a corporate entity, in which case the IP address being used for customer communications may belong to the enterprise, not the residential ISP. Such off-premises extensions may appear as part of an upstream IP PBX. On mobile devices, e.g., a 3G handset or USB wireless broadband adapter, the IP address has no relationship with any physical location known to the telephony service provider, since a mobile user could be anywhere in a region with network coverage, even roaming via another cellular company.
At the VoIP level, a phone or gateway may identify itself by its account credentials with a Session Initiation Protocol (SIP) registrar. In such cases, the Internet telephony service provider (ITSP) knows only that a particular user's equipment is active. Service providers often provide emergency response services by agreement with the user who registers a physical location and agrees that, if an emergency number is called from the IP device, emergency services are provided to that address only.
Such emergency services are provided by VoIP vendors in the United States by a system called Enhanced 911 (E911), based on the Wireless Communications and Public Safety Act. The VoIP E911 emergency-calling system associates a physical address with the calling party's telephone number. All VoIP providers that provide access to the public switched telephone network are required to implement E911, a service for which the subscriber may be charged. "VoIP providers may not allow customers to opt-out of 911 service." The VoIP E911 system is based on a static table lookup. Unlike in cellular phones, where the location of an E911 call can be traced using assisted GPS or other methods, the VoIP E911 information is accurate only if subscribers keep their emergency address information current.
Fax support
Sending faxes over VoIP networks is sometimes referred to as Fax over IP (FoIP). Transmission of fax documents was problematic in early VoIP implementations, as most voice digitization and compression codecs are optimized for the representation of the human voice and the proper timing of the modem signals cannot be guaranteed in a packet-based, connectionless network.
A standards-based solution for reliably delivering fax-over-IP is the T.38 protocol. The T.38 protocol is designed to compensate for the differences between traditional packet-less communications over analog lines and packet-based transmissions which are the basis for IP communications. The fax machine may be a standard device connected to an analog telephone adapter (ATA), or it may be a software application or dedicated network device operating via an Ethernet interface. Originally, T.38 was designed to use UDP or TCP transmission methods across an IP network.
Some newer high-end fax machines have built-in T.38 capabilities which are connected directly to a network switch or router. In T.38 each packet contains a portion of the data stream sent in the previous packet. Two successive packets have to be lost to actually lose data integrity.
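That redundancy scheme can be illustrated with a tiny simulation: each packet carries the current chunk plus a copy of the previous one, so data is lost only when two consecutive packets disappear. The sketch below is a simplified model of the idea, not the actual T.38 framing.

def packetize(chunks):
    # Each packet carries (sequence number, current chunk, copy of previous chunk).
    return [(i, chunks[i], chunks[i - 1] if i > 0 else None) for i in range(len(chunks))]

def reassemble(packets, lost):
    received = {seq: (cur, prev) for seq, cur, prev in packets if seq not in lost}
    out = {}
    for seq, (cur, prev) in received.items():
        out[seq] = cur
        if prev is not None and seq - 1 not in out:
            out[seq - 1] = prev           # recover the previous chunk from the redundant copy
    return [out.get(i) for i in range(len(packets))]

chunks = ["fax0", "fax1", "fax2", "fax3", "fax4"]
print(reassemble(packetize(chunks), lost={2}))      # single loss: fully recovered
print(reassemble(packetize(chunks), lost={2, 3}))   # two consecutive losses: chunk 2 is gone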
Power requirements
Telephones for traditional residential analog service are usually connected directly to telephone company phone lines which provide direct current to power most basic analog handsets independently of locally available electrical power. The susceptibility of phone service to power failures is a common problem even with traditional analog service where customers purchase telephone units that use cordless handsets communicating with a base station, or that have other modern phone features, such as built-in voicemail or phone book features.
IP phones and VoIP telephone adapters connect to routers or cable modems which typically depend on the availability of mains electricity or locally generated power. Some VoIP service providers use customer premises equipment (e.g., cable modems) with battery-backed power supplies to assure uninterrupted service for up to several hours in case of local power failures. Such battery-backed devices typically are designed for use with analog handsets. Some VoIP service providers implement services to route calls to other telephone services of the subscriber, such as a cellular phone, in the event that the customer's network device is inaccessible to terminate the call.
Security
Secure calls are possible using standardized protocols such as Secure Real-time Transport Protocol. Most of the facilities of creating a secure telephone connection over traditional phone lines, such as digitizing and digital transmission, are already in place with VoIP. It is necessary only to encrypt and authenticate the existing data stream. Automated software, such as a virtual PBX, may eliminate the need for personnel to greet and switch incoming calls.
The security concerns for VoIP telephone systems are similar to those of other Internet-connected devices. This means that hackers with knowledge of VoIP vulnerabilities can perform denial-of-service attacks, harvest customer data, record conversations, and compromise voicemail messages. Compromised VoIP user account or session credentials may enable an attacker to incur substantial charges from third-party services, such as long-distance or international calling.
The technical details of many VoIP protocols create challenges in routing VoIP traffic through firewalls and network address translators, used to interconnect to transit networks or the Internet. Private session border controllers are often employed to enable VoIP calls to and from protected networks. Other methods to traverse NAT devices involve assistive protocols such as STUN and Interactive Connectivity Establishment (ICE).
Standards for securing VoIP are available in the Secure Real-time Transport Protocol (SRTP) and the ZRTP protocol for analog telephony adapters, as well as for some softphones. IPsec is available to secure point-to-point VoIP at the transport level by using opportunistic encryption. Though many consumer VoIP solutions do not support encryption of the signaling path or the media, securing a VoIP phone is conceptually easier to implement using VoIP than on traditional telephone circuits. A result of the lack of widespread support for encryption is that it is relatively easy to eavesdrop on VoIP calls when access to the data network is possible. Free open-source solutions, such as Wireshark, facilitate capturing VoIP conversations.
Government and military organizations use various security measures to protect VoIP traffic, such as voice over secure IP (VoSIP), secure voice over IP (SVoIP), and secure voice over secure IP (SVoSIP). The distinction lies in whether encryption is applied in the telephone endpoint or in the network. Secure voice over secure IP may be implemented by encrypting the media with protocols such as SRTP and ZRTP. Secure voice over IP uses Type 1 encryption on a classified network, such as SIPRNet. Public Secure VoIP is also available with free GNU software and in many popular commercial VoIP programs via libraries, such as ZRTP.
Caller ID
Voice over IP protocols and equipment provide caller ID support that is compatible with the PSTN. Many VoIP service providers also allow callers to configure custom caller ID information.
Hearing aid compatibility
Wireline telephones which are manufactured in, imported to, or intended to be used in the US with Voice over IP service, on or after February 28, 2020, are required to meet the hearing aid compatibility requirements set forth by the Federal Communications Commission.
Operational cost
VoIP has drastically reduced the cost of communication by sharing network infrastructure between data and voice. A single broadband connection has the ability to transmit multiple telephone calls.
Regulatory and legal issues
As the popularity of VoIP grows, governments are becoming more interested in regulating VoIP in a manner similar to PSTN services.
Throughout the developing world, particularly in countries where regulation is weak or captured by the dominant operator, restrictions on the use of VoIP are often imposed, including in Panama, where VoIP is taxed, and Guyana, where VoIP is prohibited. In Ethiopia, where the government is nationalizing telecommunication service, it is a criminal offense to offer services using VoIP. The country has installed firewalls to prevent international calls from being made using VoIP. These measures were taken after the popularity of VoIP reduced the income generated by the state-owned telecommunication company.
Canada
In Canada, the Canadian Radio-television and Telecommunications Commission regulates telephone service, including VoIP telephony service. VoIP services operating in Canada are required to provide 9-1-1 emergency service.
European Union
In the European Union, the treatment of VoIP service providers is a decision for each national telecommunications regulator, which must use competition law to define relevant national markets and then determine whether any service provider on those national markets has "significant market power" (and so should be subject to certain obligations). A general distinction is usually made between VoIP services that function over managed networks (via broadband connections) and VoIP services that function over unmanaged networks (essentially, the Internet).
The relevant EU Directive is not clearly drafted concerning obligations that can exist independently of market power (e.g., the obligation to offer access to emergency calls), and it is impossible to say definitively whether VoIP service providers of either type are bound by them. A review of the EU Directive is underway and should be complete by 2007.
Arab states of the GCC
Oman
In Oman, it is illegal to provide or use unauthorized VoIP services, to the extent that web sites of unlicensed VoIP providers have been blocked. Violations may be punished with fines of 50,000 Omani Rial (about 130,317 US dollars), a two-year prison sentence or both. In 2009, police raided 121 Internet cafes throughout the country and arrested 212 people for using or providing VoIP services.
Saudi Arabia
In September 2017, Saudi Arabia lifted the ban on VoIPs, in an attempt to reduce operational costs and spur digital entrepreneurship.
United Arab Emirates
In the United Arab Emirates (UAE), it is illegal to provide or use unauthorized VoIP services, to the extent that web sites of unlicensed VoIP providers have been blocked. However, some VoIP services such as Skype were allowed. In January 2018, internet service providers in the UAE blocked all VoIP apps, including Skype, permitting only two "government-approved" VoIP apps (C'ME and BOTIM) for a fixed rate of Dh52.50 a month for use on mobile devices, and Dh105 a month for use over a computer connection. In opposition, a petition on Change.org garnered over 5,000 signatures, in response to which the website was blocked in the UAE.
On March 24, 2020, the United Arab Emirates loosened restrictions on VoIP services that had previously been prohibited in the country, to ease communication during the COVID-19 pandemic. However, popular instant messaging applications like WhatsApp, Skype, and FaceTime remained blocked from being used for voice and video calls, restricting residents to paid services from the country's state-owned telecom providers.
India
In India, it is legal to use VoIP, but it is illegal to have VoIP gateways inside India. This effectively means that people who have PCs can use them to make a VoIP call to any number, but if the remote side is a normal phone, the gateway that converts the VoIP call to a POTS call is not permitted by law to be inside India. Foreign-based VoIP server services are illegal to use in India.
In the interest of the Access Service Providers and International Long Distance Operators, Internet telephony was permitted to ISPs with restrictions. Internet telephony is considered to be a different service in its scope, nature, and kind from real-time voice as offered by other Access Service Providers and Long Distance Carriers. Hence the following types of Internet telephony are permitted in India:
(a) PC to PC, within or outside India
(b) PC or a device/adapter conforming to the standards of any international agency such as the ITU or IETF, in India, to PSTN/PLMN abroad
(c) Any device/adapter conforming to the standards of international agencies such as the ITU or IETF, connected to an ISP node with a static IP address, to a similar device/adapter, within or outside India
(d) Except whatever is described in , no other form of Internet telephony is permitted
(e) In India no separate numbering scheme is provided for Internet telephony. Presently the 10-digit numbering allocation based on E.164 is permitted for fixed telephony and for GSM and CDMA wireless services. For Internet telephony, the numbering scheme shall only conform to the IP addressing scheme of the Internet Assigned Numbers Authority (IANA). Translation of an E.164 number or private number to an IP address allotted to any device, and vice versa, by the ISP to show compliance with the IANA numbering scheme is not permitted
(f) The Internet Service Licensee is not permitted to have PSTN/PLMN connectivity. Voice communication to and from a telephone connected to the PSTN/PLMN and following E.164 numbering is prohibited in India
South Korea
In South Korea, only providers registered with the government are authorized to offer VoIP services. Unlike many VoIP providers, most of whom offer flat rates, Korean VoIP services are generally metered and charged at rates similar to terrestrial calling. Foreign VoIP providers encounter high barriers to government registration. The issue came to a head in 2006, when Internet service providers offering personal Internet services by contract to United States Forces Korea (USFK) members residing on USFK bases threatened to block access to VoIP services that USFK members used as an economical way to keep in contact with their families in the United States, on the grounds that the service members' VoIP providers were not registered. A compromise was reached between USFK and Korean telecommunications officials in January 2007: USFK service members arriving in Korea before June 1, 2007, and subscribing to the ISP services provided on base may continue to use their US-based VoIP subscriptions, but later arrivals must use a Korean-based VoIP provider, which by contract will offer pricing similar to the flat rates offered by US VoIP providers.
United States
In the United States, the Federal Communications Commission requires all interconnected VoIP service providers to comply with requirements comparable to those for traditional telecommunications service providers. VoIP operators in the US are required to support local number portability; make service accessible to people with disabilities; pay regulatory fees, universal service contributions, and other mandated payments; and enable law enforcement authorities to conduct surveillance pursuant to the Communications Assistance for Law Enforcement Act (CALEA).
Operators of "Interconnected" VoIP (fully connected to the PSTN) are mandated to provide Enhanced 911 service without special request, provide for customer location updates, clearly disclose any limitations on their E-911 functionality to their consumers, obtain affirmative acknowledgements of these disclosures from all consumers, and 'may not allow their customers to “opt-out” of 911 service.' VoIP operators also receive the benefit of certain US telecommunications regulations, including an entitlement to interconnection and exchange of traffic with incumbent local exchange carriers via wholesale carriers. Providers of "nomadic" VoIP service—those who are unable to determine the location of their users—are exempt from state telecommunications regulation.
Another legal issue that the US Congress is debating concerns changes to the Foreign Intelligence Surveillance Act. The issue in question is calls between Americans and foreigners. The National Security Agency (NSA) is not authorized to tap Americans' conversations without a warrant, but the Internet, and specifically VoIP, does not draw as clear a line to the location of a caller or a call's recipient as the traditional phone system does. As VoIP's low cost and flexibility convince more and more organizations to adopt the technology, surveillance becomes more difficult for law enforcement agencies. VoIP technology has also increased federal security concerns, because VoIP and similar technologies make it more difficult for the government to determine where a target is physically located when communications are being intercepted, creating a whole new set of legal challenges.
History
The early developments of packet network designs by Paul Baran and other researchers were motivated by a desire for a higher degree of circuit redundancy and network availability in the face of infrastructure failures than was possible in the circuit-switched networks in telecommunications of the mid-twentieth century. Danny Cohen first demonstrated a form of packet voice in 1973 as part of a flight simulator application, which operated across the early ARPANET.
On the early ARPANET, real-time voice communication was not possible with uncompressed pulse-code modulation (PCM) digital speech packets, which had a bit rate of 64kbps, much greater than the 2.4kbps bandwidth of early modems. The solution to this problem was linear predictive coding (LPC), a speech coding data compression algorithm that was first proposed by Fumitada Itakura of Nagoya University and Shuzo Saito of Nippon Telegraph and Telephone (NTT) in 1966. LPC was capable of speech compression down to 2.4kbps, leading to the first successful real-time conversation over ARPANET in 1974, between Culler-Harrison Incorporated in Goleta, California, and MIT Lincoln Laboratory in Lexington, Massachusetts. LPC has since been the most widely used speech coding method. Code-excited linear prediction (CELP), a type of LPC algorithm, was developed by Manfred R. Schroeder and Bishnu S. Atal in 1985. LPC algorithms remain an audio coding standard in modern VoIP technology.
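The 64kbps figure follows from the standard telephony PCM format of the period, which samples speech 8,000 times per second at 8 bits per sample; the short calculation below, added here purely for illustration, shows the roughly 27-fold reduction that LPC had to achieve:

```latex
\[
R_{\mathrm{PCM}} = 8000\,\tfrac{\text{samples}}{\text{s}} \times 8\,\tfrac{\text{bits}}{\text{sample}}
                 = 64{,}000\ \text{bit/s} = 64\ \text{kbps}
\]
\[
\frac{R_{\mathrm{PCM}}}{R_{\mathrm{LPC}}} = \frac{64\ \text{kbps}}{2.4\ \text{kbps}} \approx 26.7
\]
```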
In the following time span of about two decades, various forms of packet telephony were developed and industry interest groups formed to support the new technologies. Following the termination of the ARPANET project and the expansion of the Internet for commercial traffic, IP telephony was tested but deemed infeasible for commercial use until the introduction of VocalChat in the early 1990s, followed in February 1995 by the official release of the Internet Phone (or iPhone for short) commercial software by VocalTec, based on the Audio Transceiver patent by Lior Haramaty and Alon Cohen, and then by other VoIP infrastructure components such as telephony gateways and switching servers. Soon after, it became an established area of interest in the commercial labs of the major IT concerns. By the late 1990s, the first softswitches became available, and new protocols, such as H.323, MGCP and the Session Initiation Protocol (SIP), gained widespread attention. In the early 2000s, the proliferation of high-bandwidth always-on Internet connections to residential dwellings and businesses spawned an industry of Internet telephony service providers (ITSPs). The development of open-source telephony software, such as Asterisk PBX, fueled widespread interest and entrepreneurship in voice-over-IP services, applying new Internet technology paradigms, such as cloud services, to telephony.
In 1999, a discrete cosine transform (DCT) audio data compression algorithm called the modified discrete cosine transform (MDCT) was adopted for the Siren codec, used in the G.722.1 wideband audio coding standard. The same year, the MDCT was adapted into the LD-MDCT speech coding algorithm, used for the AAC-LD format and intended for significantly improved audio quality in VoIP applications. MDCT has since been widely used in VoIP applications, such as the G.729.1 wideband codec introduced in 2006, Apple's FaceTime (using AAC-LD) introduced in 2010, the CELT codec introduced in 2011, the Opus codec introduced in 2012, and WhatsApp's voice calling feature introduced in 2015.
Milestones
1966: Linear predictive coding (LPC) proposed by Fumitada Itakura of Nagoya University and Shuzo Saito of Nippon Telegraph and Telephone (NTT).
1973: Packet voice application by Danny Cohen.
1974: The Institute of Electrical and Electronics Engineers (IEEE) publishes a paper entitled "A Protocol for Packet Network Interconnection".
1974: Network Voice Protocol (NVP) tested over ARPANET in August 1974, carrying barely audible 16 kbps CVSD-encoded voice.
1974: The first successful real-time conversation over ARPANET achieved using 2.4 kbps LPC, between Culler-Harrison Incorporated in Goleta, California, and MIT Lincoln Laboratory in Lexington, Massachusetts.
1977: Danny Cohen and Jon Postel of the USC Information Sciences Institute, and Vint Cerf of the Defense Advanced Research Projects Agency (DARPA), agree to separate IP from TCP, and create UDP for carrying real-time traffic.
1981: IPv4 is described in RFC 791.
1985: The National Science Foundation commissions the creation of NSFNET.
1985: Code-excited linear prediction (CELP), a type of LPC algorithm, developed by Manfred R. Schroeder and Bishnu S. Atal.
1986: Proposals from various standards organizations for Voice over ATM, in addition to commercial packet voice products from companies such as StrataCom
1991: Speak Freely, a voice-over-IP application, was released to the public domain.
1992: The Frame Relay Forum conducts development of standards for Voice over Frame Relay.
1992: InSoft Inc. announces and launches its desktop conferencing product Communique, which included VoIP and video. The company is credited with developing the first generation of commercial, US-based VoIP, Internet media streaming and real-time Internet telephony/collaborative software and standards that would provide the basis for the Real Time Streaming Protocol (RTSP) standard.
1993: Release of VocalChat, a commercial packet network PC voice communication software from VocalTec.
1994: MTALK, a freeware LAN VoIP application for Linux
1995: VocalTec releases Internet Phone commercial Internet phone software.
Beginning in 1995, Intel, Microsoft and Radvision initiated standardization activities for VoIP communications systems.
1996:
ITU-T begins development of standards for the transmission and signaling of voice communications over Internet Protocol networks with the H.323 standard.
US telecommunication companies petition the US Congress to ban Internet phone technology.
G.729 speech codec introduced, using CELP (LPC) algorithm.
1997: Level 3 began development of its first softswitch, a term they coined in 1998.
1999:
The Session Initiation Protocol (SIP) specification RFC 2543 is released.
Mark Spencer of Digium develops the first open source private branch exchange (PBX) software (Asterisk).
A discrete cosine transform (DCT) variant called the modified discrete cosine transform (MDCT) is adopted for the Siren codec, used in the G.722.1 wideband audio coding standard.
The MDCT is adapted into the LD-MDCT algorithm, used in the AAC-LD standard.
2001: INOC-DBA, first inter-provider SIP network deployed; also first voice network to reach all seven continents.
2003: First released in August 2003, Skype was the creation of Niklas Zennström and Janus Friis, in cooperation with four Estonian developers. It quickly became a popular program that helped democratise VoIP.
2004: Commercial VoIP service providers proliferate.
2006: G.729.1 wideband codec introduced, using MDCT and CELP (LPC) algorithms.
2007: VoIP device manufacturers and sellers boom in Asia, specifically in the Philippines where many families of overseas workers reside.
2009: SILK codec introduced, using LPC algorithm, and used for voice calling in Skype.
2010: Apple introduces FaceTime, which uses the LD-MDCT-based AAC-LD codec.
2011:
Rise of WebRTC technology which allows VoIP directly in browsers.
CELT codec introduced, using MDCT algorithm.
2012: Opus codec introduced, using MDCT and LPC algorithms.
See also
Audio over IP
Communications Assistance For Law Enforcement Act
Comparison of audio network protocols
Comparison of VoIP software
Differentiated services
High bit rate audio video over Internet Protocol
Integrated services
Internet fax
IP Multimedia Subsystem
List of VoIP companies
Mobile VoIP
Network Voice Protocol
RTP audio video profile
SIP Trunking
UNIStim
Voice VPN
VoiceXML
VoIP recording
Notes
References
External links
Broadband
Videotelephony
Audio network protocols
Office equipment |
27570764 | https://en.wikipedia.org/wiki/Rhodes%20Chroma | Rhodes Chroma | The ARP Chroma is a polyphonic, multitimbral, microprocessor-controlled, subtractive-synthesis analog synthesizer developed in 1979–1980 by ARP Instruments, Inc., just before the company's bankruptcy and collapse in 1981.
The design was purchased by CBS Musical Instruments and put into production by their Rhodes Division in 1982 as the Rhodes Chroma at a list price of US $5295. They also released a keyboard-less version of the Chroma called the Chroma Expander at a list price of US $3150.
The Chroma was one of the early microprocessor-controlled analog synthesizers. It was designed before MIDI and featured a 25-pin D-sub connector computer interface used to slave the Expander to the Chroma. An Apple IIe interface card and sequencing software were also available.
The Rhodes Chroma and Expander were discontinued in 1984. Somewhere between 1400 and 3000 Chromas and Expanders were built.
Keyboard Velocity and Pressure
The Chroma has a velocity-sensitive keyboard consisting of 64 weighted, levered wooden keys that resemble piano keys.
Every Chroma has software and interface hardware for an optional polyphonic pressure-sensitive keyboard sensor. But few units have the original factory pressure sensor installed. In 2009, a pressure sensor retrofit kit was produced by a third party. At the time of writing (2015) the kit may still be available.
Polyphony
The Chroma has sixteen synthesizer "channels" each consisting of one oscillator, waveshaper, filter and amplifier. Sound programs can use one channel per voice to produce sixteen voice polyphony. However, most sound programs use two channels per voice which delivers a fatter sound, but reduces the polyphony to eight voices.
Architecture
The Chroma's sixteen synthesizer channels each consist of one Voltage Controlled Oscillator, Waveshaper, Filter, and Amplifier under software control via multiplexed analog voltage control channels. The channels are grouped into eight pairs. One channel in each pair is labelled "A" and the other "B".
Although the oscillators, filters and amplifiers themselves are voltage controlled, the overall control of the synthesizer channels is entirely in software. The embedded computer generates thirty-two ADSR envelopes (two per channel, one with delay) and sixteen LFO sweep signals in software. Signals from the levers, pedals, control panel or the keyboard are all encoded digitally, processed by the computer, and sent to the synthesizer channels on the voice cards via several multiplexed analog control lines and a number of digital control registers.
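As an illustration of what "envelopes generated in software" means in practice, the following sketch computes the level of many ADSR envelopes from a single control loop. It is a minimal, hypothetical Python example, not the Chroma's actual 68B09 firmware, and the envelope parameters shown are arbitrary.

```python
# A minimal, hypothetical sketch of software-generated ADSR envelopes,
# illustrating how one CPU can service many envelopes; this is not the
# Chroma's actual firmware, and the parameter values are arbitrary.
from dataclasses import dataclass

@dataclass
class ADSR:
    attack: float   # seconds to rise from 0.0 to 1.0
    decay: float    # seconds to fall from 1.0 to the sustain level
    sustain: float  # level held while the key is down (0.0 to 1.0)
    release: float  # seconds to fall from the sustain level to 0.0

def adsr_level(env: ADSR, t: float, gate_time: float) -> float:
    """Envelope level at time t (seconds) for a key held for gate_time seconds."""
    if t < gate_time:                          # key is still down
        if t < env.attack:
            return t / env.attack
        if t < env.attack + env.decay:
            frac = (t - env.attack) / env.decay
            return 1.0 - frac * (1.0 - env.sustain)
        return env.sustain
    # Key released: decay from the level reached at the moment of release.
    level_at_release = adsr_level(env, gate_time, gate_time + 1.0)
    frac = min((t - gate_time) / env.release, 1.0)
    return level_at_release * (1.0 - frac)

# Thirty-two envelopes (two per channel, as on the Chroma), all evaluated
# by the same code on each update tick.
envelopes = [ADSR(attack=0.01, decay=0.2, sustain=0.6, release=0.5) for _ in range(32)]
print([round(adsr_level(e, t=0.15, gate_time=1.0), 3) for e in envelopes[:4]])
```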
Sound programs can use one channel per voice to produce sixteen-voice polyphony. However, more synthesizer power is available when channels are paired together. This yields two oscillators, two waveshapers, two filters, two amplifiers, two glides, two LFO sweeps, and four ADR envelopes, in addition to the performance controls.
Modular Configuration
The Chroma uses an Electronically Reconfigurable System which allows the VCOs, VCFs and VCAs to be reconfigured, or "patched" like modular synthesizers, but without the patch cords. Instead, the Chroma digitally stores all of the parameters which determine a sound. Sound programs can also be saved to and loaded from cassette.
On page 4 of the Rhodes Chroma Programming Manual, they boast "The Chroma has better patching capabilities than most modular systems, and it's fully programmable."
In fact, the Chroma is often compared to modular synthesizers like the ARP 2600.
A Chroma sound program can be composed of two oscillators, filters and amplifiers in one of fifteen different configurations. Each configuration connects (or patches) the oscillators, filters and amplifiers together in different ways to provide for a wide variety of possible sounds. For example, filters can be arranged in series for 4 pole or band-pass response, or in parallel for notch filtering. In addition, some configurations feature oscillator synchronization, filter frequency modulation or ring modulation.
When editing a sound program, the configuration is selected via parameter [1] "Patch". The values range from 0 through 15.
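The fragment below is a purely hypothetical illustration of the idea behind this scheme: the routing is stored digitally and selected with a single patch parameter. The routings and their numbering are invented for the example and do not reproduce the Chroma's actual configuration table.

```python
# Hypothetical illustration of a digitally stored routing table selected by a
# single "patch" parameter; the entries below are invented examples and do not
# reproduce the Chroma's real configuration list.
CONFIGURATIONS = {
    0: "OSC A -> FILT A -> AMP A  |  OSC B -> FILT B -> AMP B",  # two independent paths
    1: "OSC A -> FILT A -> FILT B -> AMP B",                     # filters in series (4-pole / band-pass)
    2: "OSC A -> (FILT A + FILT B) -> AMP A",                    # filters in parallel (notch)
    3: "(OSC A ring-mod OSC B) -> FILT A -> AMP A",              # ring modulation
}

def select_patch(value: int) -> str:
    """Return the signal routing for patch parameter values 0 through 15."""
    if not 0 <= value <= 15:
        raise ValueError("patch parameter must be between 0 and 15")
    return CONFIGURATIONS.get(value, "routing not modelled in this sketch")

print(select_patch(1))
```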
Control Panel
The Chroma control panel consists of 71 membrane switches. Most of them are multi-purpose and are used to select sound programs or sound program parameters, when in edit mode. A single slider is used to change parameter values.
Competing synthesizer designs of the time, like the Oberheim OB-8, had dozens of knobs and mechanical switches (as opposed to membrane switches) on their control panels. The Chroma's economical approach to control panel design was copied by many later synthesizers like the Yamaha DX-7.
A unique feature of the Chroma is that, when a membrane switch is operated, a "tapper" bumps the underside of the control panel, mimicking the tactile feedback of a conventional mechanical switch. This is an early example of haptic feedback technology.
Power Supply
Perhaps the weakest feature of the ARP/Rhodes Chroma is the factory-original power supply: it runs very hot, is unreliable, and is very heavy.
In 2008, a third party designed and produced a digital switching power supply replacement kit for the Chroma and Expander. At the time of writing (2015) the kit may still be available.
CPU
The main microprocessor in the Chroma and Expander is a 68B09, and it has a computer interface consisting of a 25-pin D-sub connector. The factory original Chroma CPU board has 2 AA batteries to preserve memory while the power is off. Many Chroma CPU boards have been damaged from battery leakage.
In 2006, a third party designed and produced a Chroma CPU board replacement kit known as the CC+. The CC+ is available with optional native MIDI support.
External Control and MIDI
The Chroma was designed and released before the introduction of MIDI. The Chroma's main microprocessor was a 68B09 with a computer interface consisting of a 25-pin D-sub connector. An Apple IIe interface card (used to connect to the Chroma's D-sub connector) and sequencing software were released by ARP and Fender Rhodes.
Multiple third parties came out with Chroma-to-MIDI converter boxes. They use the 25-pin D-sub connector to interface with the Chroma.
In 2006, a third party designed and produced a Chroma CPU board replacement kit known as the CC+. The CC+ is available with optional native MIDI support. At the time of writing (2015), the kit may still be available.
Accessories
The Chroma came with the usual complement of accessories plus some unique extras. In addition to a single footswitch (dedicated to program changes) and a programmable variable (volume type) foot pedal, the Chroma came with a unique Dual Footswitch Assembly. The dual footswitch is a rugged, heavy, solid piece of hardware that mimics a pair of standard piano foot pedals with programmable functions including sostenuto.
The Chroma also came with a custom designed, heavily padded, ATA Anvil (R) case with a pedal compartment. The rumor was that early units shipped without a case were damaged in shipping because the Chroma is so heavy and fragile (a fair criticism). In any case, the Chroma and Expander included a road case.
Chroma Expander
Fender also released a keyboard-less version of the Chroma called the Chroma Expander. Like the Chroma, the main microprocessor in the Expander is a 68B09 with a computer interface consisting of a 25-pin D-sub connector. The Expander can be slaved to the Chroma via the 25-pin D-sub connector. Third party Chroma-to-MIDI converter boxes produced for the Chroma also work for the Expander.
References
External links
rhodeschroma.com - a site and community dedicated to the Rhodes Chroma
ARP Chroma announcement
http://www.vintagesynth.com/misc/chroma.php
Rhodes Chroma Demo, Part 1
Sound on Sound - Rhodes Chroma Analogue Polysynth (Retro) - October 1995 - NORMAN FAY takes a retro look at the Rhodes Chroma, the last and most obscure of ARP's long line of analogue synthesizers.
https://www.facebook.com/RhodesChroma
Synthmuseum.com - Rhodes Chroma
The ENABLER is a custom made 95-knob dedicated MIDI controller for the CC+ equipped Rhodes Chroma.
Synthtopia Rhodes Chroma
Rhodes Chroma (@rhodeschroma) - Twitter
Rhodes chroma value - Gearslutz.com
Rhodes Chroma Analog Synthesizer - Synthesizer Database
ARP Synthesizers fenderrhodes.com
MATRIXSYNTH: Rhodes Chroma Overview by Eric Frampton
Vintage Keyboard Studio: Rhodes Chroma/Expander
BCR2000 Programmer Overlay for Rhodes Chroma
Rhodes Chroma Editor for iPad by Matrix
Rhodes-ARP Chroma - Specifications, pictures, prices, links, reviews and ratings - Sonic State
http://www.native-instruments.com/forum/threads/rhodes-chroma-ni-kontakt-sample-library.180159/
Rhodes Chroma Ni Kontakt sample library | NI Support Forum
ARP synthesizers
Analog synthesizers
Polyphonic synthesizers |
64037067 | https://en.wikipedia.org/wiki/Thai%20typography | Thai typography | Thai typography concerns the representation of the Thai script in print and on displays, and dates to the earliest printed Thai text in 1819. The printing press was introduced by Western missionaries during the mid-nineteenth century, and the printed word became an increasingly popular medium, spreading modern knowledge and aiding reform as the country modernized. The printing of textbooks for a new education system and newspapers and magazines for a burgeoning press in the early twentieth century spurred innovation in typography and type design, and various styles of Thai typefaces were developed through the ages as metal type gave way to newer technologies. Modern media is now served by digital typography, and despite early obstacles including lack of copyright protection, the market now sees contributions by several type designers and digital type foundries.
The printed Thai script has characters in the line of text, as well as combining characters that appear above or below them. One of the main distinguishing features among typefaces is the head of characters, also referred to as the terminal loop. While these loops are a major element of conventional handwritten Thai and traditional typefaces, the loopless style, which resembles sans-serif Latin characters and is also referred to as Roman-like, was introduced in the 1970s and has become highly popular. It is widely used in advertising and as display typefaces, though its use as body-text font has been controversial. Classification systems of Thai typefaces—primarily based on the terminal loop—have been proposed, as has terminology for type anatomy, though they remain under development as the field continues to progress.
History
First printing of Thai
Prior to the introduction of printing, Thai script had evolved along a calligraphic tradition, with most written records in the form of folding-book manuscripts known as samut khoi. Records mentioning printing first appear during the reign of King Narai (1656–1688) of the Ayutthaya Kingdom, though the first documented printing of the Thai language did not occur until 1788, in the early Rattanakosin period, when the French Catholic missionary Arnaud-Antoine Garnault had a catechism and a primer printed in Pondicherry in French India. The texts, printed in romanized Thai, were distributed in Siam, and Garnault later set up a printing press in Bangkok.
The printing of Thai script was pioneered by Protestant missionaries. In 1819, Ann Hasseltine Judson, an American Baptist missionary based in Burma, translated the Gospel of Matthew, as well as a catechism and a tract, into Thai. She had learned the language from settled Thai war captives who had been relocated following the fall of Ayutthaya in 1767. The catechism was printed at the Serampore Mission Press in Danish-controlled Serampore, on the outskirts of Calcutta, at the end of the year. It is the earliest known printing of the Thai script, though no remaining copies have been found. The type was probably cast by mission printer George H. Hough, who had worked with the Judsons in Burma. The same font may have later been used in 1828 to print A Grammar of the Thai or Siamese Language by East India Company Captain James Low. The book was printed at the Baptist Mission Press in Calcutta, an offshoot of the Serampore mission, and is the oldest known extant printed material in the Thai script.
In 1823, a set of the font was purchased by Samuel Milton for the London Missionary Society (LMS)'s printing operations in Singapore. The LMS press did not see much Thai output until the early 1830s, when Protestant missionaries began taking up residence in Bangkok. Karl Gützlaff's translation of the Gospel of Luke was printed in 1834, and is the earliest surviving printing of the Bible in the Thai script. The type used is clearly different from that of Low's grammar, and may have been a newer font cast later.
Introduction to the country
Thai-script printing reached Siam when the American missionary doctor Dan Beach Bradley arrived in Bangkok in 1835, bringing with him from the Singapore printing operation (which had been acquired by the American Board of Commissioners for Foreign Missions (ABCFM) the year before) an old printing press, together with a set of Thai type. Bradley, working with a few other missionaries, successfully operated the press the following year. They were soon joined by a printer from the Baptist Board for Foreign Missions, who brought new printing equipment, and thus were able to start producing religious material for distribution. The ABCFM and Baptist ministries later established separate printing houses, but initially, they relied on sharing the original set of type brought from Singapore. The missionaries initially ordered new type from Singapore and Penang, but they found the quality unsatisfactory. They finally succeeded in casting their own type in 1841.
Although the missionaries saw limited success in proselytizing, their introduction of printing had far-reaching effects, and Bradley in particular became well known as a printer and produced many influential secular works. In 1839, the government of King Rama III hired the ABCFM press to produce the country's first printed official document: 9,000 copies of a royal edict prohibiting the use or sale of opium. Bradley authored and printed several medical treatises, launched the first Thai-language newspaper—the Bangkok Recorder—in 1844, and published several books, including Nirat London—the first Thai work for which copyright fees were paid—in 1861. His press gained the attention of elite Thais, especially Prince Mongkut (who was then ordained as a monk and would later become King Rama IV), who set up his own printing press at Wat Bowonniwet, cast his own Thai type, and created a new script, known as Ariyaka, to print the Pali language used for Buddhist texts. When he became king in 1851, Mongkut established a royal press in the Grand Palace, which printed official publications including the newly established Royal Gazette.
The earliest typefaces used by these printing establishments were based on the handwriting style of the period, and accordingly featured mostly angular shapes in a single thickness, and were slanted throughout. As Bradley refined his craft, he shifted to upright types with outlines in the shape of vertical rectangles (as seen used in the Bangkok Recorder), and later, with Nirat London, introduced rounded curves. His work would greatly influence later printers.
Expansion
The introduction of printing paved the way for the modernization of the country under the reign of Mongkut's successor Chulalongkorn (King Rama V, r. 1868–1910). Bradley was joined in the field by Samuel J. Smith and several other printers in the 1860s, and they started a trend of book publishing, producing numerous books of fact as well as popular literature. The widespread circulation of texts which had previously been confined to manuscripts transformed society's conception of knowledge. Dozens of private printing enterprises arose during the following decades, and the Vajirañāṇa Library was established as a central repository of knowledge as well as a publishing regulator: it oversaw the production of a new genre of books known as cremation volumes, and in effect helped standardize the language's orthography. A distinctive typeface from this period is now known as Thong Siam, named for its use in the Flag Regulations for the Kingdom of Siam, printed in 1899 by W. Drugulin in Leipzig, Germany.
The typewriter was also introduced to the country around this time. The first Thai-language typewriter was developed by Edwin McFarland in 1892. Typewriters became widely adopted by the government, and helped transform the country's administration into a modern bureaucracy. They also modified the language. Since the typewriters of the day were unable to accommodate all Thai characters, McFarland decided to exclude two less-used consonants—kho khuat and kho khon—leading to their eventual obsolescence.
The first formal schools were established during Chulalongkon's reign, and as basic education further expanded under his successor Vajiravudh (King Rama VI, r. 1910–1925), so did demand for textbooks to facilitate teaching. Several printing houses specialized in the production of schoolbooks, among them Aksoranit Press, whose typeface Witthayachan is notable for the period. The Catholic Mission of Bangkok was also influential in pioneering education, and established Assumption College, one of the oldest schools in the country. Among the works printed by its Assumption Press are the primary-school Thai textbook Darunsuksa by the French priest and teacher F. Hilaire, which was first published in 1914 and remains in print over a century later. The press's preferred typeface, Farang Ses, designed in 1913, was the first to employ thick and thin strokes reflecting old-style serif Latin typefaces, and became extremely popular, with its derivatives widely used into the digital age.
The reign of Vajiravudh also saw the beginnings of a flourishing press, and the newspaper industry underwent explosive growth into the 1930s, followed closely by pulp magazines. The new stage for public discourse contributed to the abolition of absolute monarchy in 1932, and as newspapers became more politically vocal, demand rose for large display types for their headlines. Many new fonts were created, mostly influenced by the wood-carved style introduced by Chinese immigrants, who dominated the market as dedicated type foundries were opened. Printing and typesetting became an established craft, and dedicated trade schools began teaching in 1932. Italic (or oblique) type was introduced, with the earliest example found in 1925, and bold type after World War II, but apart from more refined font sizes, not much innovation was seen in regard to body-text typefaces for several decades. Meanwhile, a trend emerged in the form of craft shops offering services creating custom hand-drawn decorative text for copperplate printing. An angular, blocky text style emerged during this period, and was used especially for magazine covers and logotypes. It also became popular in sign-making, mostly replacing the Blackletter-like Naris style (named after its designer Prince Naris) that had been in use since the late nineteenth century.
Transition from metal type
Between 1957 and 1962, the printing technologies of hot metal typesetting and phototypesetting were introduced by major publishers. Thai Watana Panich (TWP) adopted the Monotype system, and partnered with the Monotype Corporation to develop Thai Monotype typefaces for its use. Around the same period, Kurusapa Press (the printing business of the Ministry of Education) developed the Kurusapa typeface for use with photocomposing machines, and the Ministry of Education received a grant from the Tokyo Book Development Centre and UNESCO to develop a new typeface, now known as Unesco. These typefaces similarly featured a uniform stroke width and smooth curves, but mostly failed to gain traction among the wider industry, and the Monotype system soon became obsolete with the advent of offset printing. An exception was Thai Medium 621, which was adopted for TWP's schoolbooks, became widely recognized, and remained popular into the following decades.
The 1970s brought dry-transfer lettering, introduced to Thailand by DHA Siamwalla through a partnership with Mecanorma of the Netherlands. Compatibility with the new offset-printing technology helped boost its popularity for creating display lettering in advertising, news printing, and the creation of political materials, especially during the 1973–1976 democracy movement. Most of the fonts were designed by Manop Srisomporn, who made a major innovation in the form of loopless characters, which abandoned conventional letter shapes for simple, minimalist forms. The best known of these typefaces, Manoptica, was designed to invoke the characteristics of the sans-serif typeface Helvetica, and was released in 1973. The style, widely perceived as modern and trendy, became extremely popular, especially in advertising, and remains so to the present.
Among publishers, phototypesetting became widely adopted in the 1970s–1980s, marking the end of metal type in the Thai publishing industry. Thairath, the country's best-selling newspaper, developed new typefaces for use with its Compugraphic machines in 1974. Tom Light, designed by Thongterm Samerasut and released by the East Asiatic (Thailand) Company, was created as a body-text font for the newspaper, and featured geometrical designs invoking a sense of modernity. More typefaces, including ChuanPim, UThong and Klonglarn, emerged at the end of the decade.
Digital typography
Computer systems with Thai-language support were introduced in the late 1960s in the form of card-punch machines and line printers by IBM. On-screen interactive display of Thai text became available in the 1980s, and DOS-based word processors such as CU Writer, released in 1989, saw widespread adoption. The advent of desktop publishing arrived with the Apple Macintosh, which was first imported in 1985 by Sahaviriya OA, who also developed the first Thai computer fonts in PostScript format. More refined typefaces were soon released by emerging dedicated type design companies, notably the DB series by Suraphol Vesaratchavej and Parinya Rojarayanond of Dear Book (later known as DB Design), and the PSL series by PSL SmartLetter. These new typefaces, as well as digital fonts based on earlier classic types, were widely adopted as the media industry boomed amidst rapid economic growth, until halted by the 1997 financial crisis.
During this early period of computerization, the proliferation of software systems led to interoperability issues, prompting NECTEC (Thailand's central computer research institute) to issue several standards covering language handling. For TrueType fonts, proper positioning of some combining characters required the use of private use area glyphs, but these were defined differently between Windows and Mac OS systems, causing font files for each to be incompatible. Certain software, especially those by Adobe, had long-standing issues with above-line mark positioning. The adoption of the OpenType format is expected to alleviate the issue.
Copyright regulations also lagged behind the rapid innovation and spread of information, and type designers had difficulty commercializing their work, leading to a slump following the initial period. Even after a new copyright law that provided protection for computer programs was issued in 1994, the copyrightability of typefaces remained unclear. The issue came to the forefront in 2002, when PSL began suing publishers who used its fonts unlicensed for copyright infringement. This led to heated discussions and conflicts with the publishing industry, who believed font designs to be in the public domain and saw PSL's practice as predatory litigation. Ultimately, the campaign led to a new awareness and acceptance of computer fonts as a copyright-protected good, especially as the Intellectual Property and International Trade Court made a ruling in favour of PSL in 2003 that fonts were protected as computer programs.
One of the responses to the issue was a proliferation of freely licensed computer fonts. Earlier, in 2001, NECTEC had released three such typefaces, Kinnari, Garuda and Norasi, under its National Fonts project, intending them as public alternatives to the widely used, yet licence-restricted, commercial typefaces that came bundled with major operating systems and applications. (For Windows systems, these were the UPC series of fonts by Unity Progress, which were based on major earlier types.) The project was expanded upon in 2007, when the Software Industry Promotion Agency together with the Department of Intellectual Property released thirteen typefaces following a national competition. Most notable among them is Sarabun, which in 2010 was made the official typeface for all government documents, replacing the previous de facto standard Angsana (a UPC font family derived from Farang Ses). The community website F0nt.com, which hosts freely licensed fonts mostly by amateurs and hobbyists, was established in 2004. Trade associations of the printing industry also later released their own freely licensed typefaces.
The changed landscape led to a gradual resurgence in digital type design, with new players joining the market, including Cadson Demak, which focuses on custom designs for corporate users. Anuthin Wongsunkakon, one of the company's 2002 co-founders, had designed one of the market's first custom corporate fonts for AIS, one of Thailand's three main mobile operators, which wished to build a stronger brand identity at a time when all three companies shared the same font in their marketing material. The industry has grown since then, and the fields of digital typography and type design saw increased public awareness, especially in the 2010s. In 2013, Thailand's twelve digital type foundries joined up to found Typographic Association Bangkok to promote the industry. Among the trends seen during this period is a sharp rise in popularity of the loopless or Roman-like style introduced by Manop, which began seeing use as body text in some magazines in 1999. Type designers have also introduced Thai typefaces with wider ranges of font weight, mostly in the loopless style, though their use continues to be a point of debate.
Type anatomy
There is not yet a single standard terminology for Thai typeface anatomy, and type designers have variably observed several features. Parinya in 2003 described six: heads, tails, mid-stroke loops, serrations, beaks, and flags. Other authors have also mentioned the stroke/line, pedestals/feet, and spurs/limbs.
Head
The head, also described as the first or terminal loop, is one of the most distinguishing features of Thai script, and conventionally appears as simple loops (e.g. ), curled loops or crowns (), and kinked/serrated crowns (). It may face either left or right, and may appear at the top (), at the bottom (), or in the middle of the character ().
Tail
The tail appears as ascenders (e.g. ), descenders (), arch/oblique tails (), looped/coiled tails (), and a middle tail (). They mostly project above the mean line or below the baseline.
Mid-stroke loop
The mid-stroke or second loop can be at the top (touching the mean line, e.g. ) or the bottom (touching the baseline, ).
Serration
Apart from in the crown, serration or broken lines are found in the canopy (e.g. ) and in the looped descender tail of .
Beak
The beak appears in several characters (e.g. ), with a single appearance, though designs vary among typefaces.
Flag
The flag, or double-storey line, is used in the consonants and the vowel . It does not form a contrasting feature against other characters.
Stroke
Features of the stroke or line include stems (vertical front, back, or middle lines), canopies or upper lines (usually as an arch), bases or lower lines (a horizontal stroke along the baseline), oblique lines, and creases or stroke reversals.
Pedestal
The pedestal or foot is found in a few characters, either attached to the descender as part of the tail ( and ), or unattached ( and ).
Spur
The spur or limb is an element found in some typefaces. It appears like a serif at the angle of the base of some characters.
Typeface styles and classification
The established conventional handwriting styles of Thai script fall broadly into two categories: angular and rounded, with the former forming the majority. The angular styles, in common use until at least the mid-twentieth century, are probably derived from the manuscript traditions of the early Rattanakosin period (though they have lost the marked slant found in most historical manuscripts). The Alak calligraphic style, in particular, is still associated with royal artistic tradition, and is used for the official manuscript editions of the constitution.
Thai typefaces can likewise be classified as angular or round, although the majority of today's typefaces are in the rounded style and thus the distinction no longer usefully reflects typographical usage. Most Thai typefaces also include characters for the basic Latin script, and some applications classify them as serif or sans-serif based on their Latin characters, though this often has little bearing on their Thai counterparts. More often, typefaces are mainly categorized based on the shape of the head of Thai characters, i.e. whether or not the font features the traditional loop. While earlier designs that truncate or omit the loop had been used in sign-making and as decorative text since the nineteenth century, it is the Roman-like loopless style introduced in the 1970s that has received the most attention in the age of digital type design.
Thai type classification is still undergoing development, with input from several organizations and academics. While the Royal Institute and NECTEC had included classification systems in their typeface-design guidelines released in 1997 and 2001, they were not widely adopted, and a standard system has not been agreed upon. More recently, design company Cadson Demak has contributed to a classification model that assigns typefaces to three main categories—traditional (looped), display (topical) and modern (loopless)—with several subcategories to each.
Traditional
Typefaces of the traditional or looped category are distinguished by the looped terminal as the character head, reflecting the conventional handwriting styles after which the earliest types were designed. They are subcategorized into the following styles:
Handwriting
This style includes most early fonts, as well as those directly influenced by calligraphic handwriting styles, and many of them feature angular letter shapes. Today they are used, mainly as display type, to convey a sense of venerability. (Examples: Bradley Square, Bradley Curved, Thong Siam)
Old style
Influenced by old-style serif Latin typefaces and typified by 1913's Farang Ses, this style employs contrasting thick and thin strokes, and was used for government documents into the 2000s. (Examples: Angsana UPC, Kinnari)
Wood type
Developed as display type for large headlines in the 1930s, the style was introduced by Chinese immigrants, and some probably appeared as wood type before being cast in metal. (Examples: DB Zair, DB PongMai, DB PongRong)
Humanist
First created by Monotype for Thai Watana Panich's school textbooks, this style is influenced by Western humanist sans-serif typefaces and employs monoline strokes with a crisp appearance. (Examples: Monotype Thai Medium 621, TF Pimai, Browallia UPC, Garuda)
Geometric
This style employs geometric designs to create a futuristic appearance, with influences from geometric sans-serif Latin typefaces. The style's original type, Tom Light, was first designed as body-text font for the Thairath newspaper. (Examples: Tom Light (C-1), EAC Tomlight, Cordia UPC)
Geometric humanist
The style was introduced with ChuanPim, which was the first typeface created with the specific consideration of allowing it to blend in together with Latin script. (Example: EAC Chuanpim)
Neo-geometric
Typified by ThongLor, created by Cadson Demak with more white space to allow for increases in font weight, the style features a modular design, with strokes in separate segments. (Example: ThongLor)
Display
The display category includes typefaces derived from styles of originally hand-drawn display lettering, which were purpose-made for uses including signage, book covers, and labels. They are subcategorized into two genres: script and decorative. The script type features letters distinctively shaped by their writing implement, while the decorative genre covers a large variety of designs, including those incorporating traditional patterns or stylized motifs. A prominent style among the script genre is Blackletter, while constructivism is one of the major decorative styles.
Blackletter
Also known as the ribbon style following its appearance of thick and thin lines formed by a broad-nibbed pen, it has been widely used for display signage since the nineteenth century. Best known among them is the Naris style, which has been recreated as a digital typeface. (Examples: Thai Naris, ABC Burgbarn)
Constructivism
This angular, blocky style, with its high visual impact, was widely used for pulp magazine covers, and was also preferred by the People's Party regime that followed the abolition of absolute monarchy, though it fell out of favour in official usage after World War II. The style has inspired some digital typefaces. (Examples: Tualiam, 9 LP)
Modern
Typefaces of the modern or loopless category are also referred to as Roman-like, reflecting their original inspiration of mimicking the appearance of sans-serif Latin typefaces. They are defined by the lack of distinct terminal loops, though some may not be completely loopless. There are three subcategories:
Modern
The modern style emerged with dry-transfer lettering, with Manoptica considered its main progenitor. The minimalist, loopless designs evoke characteristics of sans-serif typefaces, and were designed primarily as display type. (Examples: Manoptica, Manop Mai)
Obscure loop
Typefaces of this style feature highly reduced character heads which appear as a small slab or nib, which still help enhance legibility. They include the first loopless typefaces to be used for body-text. (Examples: LC Manop, PSL Display)
Crossover
These typefaces were created in the digital age, and many support greater weight gradation, allowing for the development of extended font families. They may be considered flexible enough to be used for both display and text. (Example: Sukhumvit)
Usage and considerations
The proper display of Thai text on computer systems requires support for complex text rendering. Thai script consists of inline base characters (consonants, vowels and punctuation marks) and combining characters (vowels, tone marks and miscellaneous symbols) that are displayed above or below them, generally separated into four vertical levels (the baseline, two above, and one below). With mechanical typewriters, each character had a fixed vertical position, with the tone marks in the topmost level. In traditional and digital typesetting, they are shifted downwards if the second level is unoccupied, and above-line marks are shifted slightly leftwards to make way for the base character's ascender, if it has one. Two consonants have unattached pedestals, which are removed when combined with below-line vowels. Thai is written without spaces between words, and word splitting is required to determine the proper placement of line breaks. Justified text alignment, if desired, must be achieved by distribution, increasing the spacing between character clusters (i.e. between in-line characters but not the above-and-below marks).
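A simplified sketch of the vertical-level logic described above follows. Its character sets come from the standard Unicode Thai block, but the level-assignment rule is a deliberately reduced approximation; real shaping engines handle this through OpenType mark positioning and many more cases.

```python
# Simplified sketch of assigning Thai combining marks to vertical levels.
# Character sets are from the standard Unicode Thai block; the rule shown
# (drop a tone mark to the lower above-line level when no upper vowel is
# present) is a reduced approximation of what real shaping engines do.
UPPER_VOWELS = {"\u0E31", "\u0E34", "\u0E35", "\u0E36", "\u0E37", "\u0E47"}
LOWER_VOWELS = {"\u0E38", "\u0E39", "\u0E3A"}
TONE_MARKS   = {"\u0E48", "\u0E49", "\u0E4A", "\u0E4B"}

def assign_levels(cluster: str) -> list[tuple[str, int]]:
    """Map each character of one cluster to a vertical level:
    0 = baseline, -1 = below the base, 1 = first level above, 2 = top level."""
    has_upper_vowel = any(c in UPPER_VOWELS for c in cluster)
    levels = []
    for c in cluster:
        if c in LOWER_VOWELS:
            levels.append((c, -1))
        elif c in UPPER_VOWELS:
            levels.append((c, 1))
        elif c in TONE_MARKS:
            # A tone mark sits on the top level only when an upper vowel
            # occupies the first level; otherwise it is shifted down.
            levels.append((c, 2 if has_upper_vowel else 1))
        else:
            levels.append((c, 0))   # base consonant or in-line vowel
    return levels

# Consonant + upper vowel + tone mark: the tone mark stays on the top level.
print(assign_levels("\u0E19\u0E34\u0E48"))
# Consonant + tone mark alone: the tone mark drops to the first level above.
print(assign_levels("\u0E19\u0E48"))
```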
Today, the choice between looped and loopless typeface styles remains one of the most important considerations in Thai typography. Generally, the distinction is seen as analogous to the use of serif and sans-serif typefaces in Latin script: looped terminals are seen as aiding legibility, making the style more suited for body text than loopless fonts. However, the comparison is not completely accurate, as the loop is also an important distinguishing feature between several letter pairs, and many typefaces match looped Thai letters with sans-serif Latin characters. Nevertheless, the popularity of the loopless Roman-like style, with its connotations of modernity, saw its use expanding, especially since the 2000s, from advertising into other media, including print publications. This has been subject to some controversy. Wallpaper magazine was criticized for using such a typeface as body text when it introduced its Thai edition in 2005, and when Apple adopted one for the user interface of its iOS 7 mobile operating system, customer complaints forced the company to reverse course in a later update.
While some designers see the opposition to loopless typefaces as a traditionalist rejection of change, critics claim that their overuse hinders legibility, and may cause confusion due to their similarity with Latin characters. The abbreviation , for example, appears nearly identical to the Latin letters W.S.U. when printed in such typefaces. A 2018 pilot study found that Thai readers were more likely to make errors when reading a test passage printed in Roman-like typefaces compared to ones with conventional loops.
Some designers have attributed the trend to a lack of innovation in the looped typeface category during the past few decades; the majority of text typefaces in wide use were derived from just four major pre-digital types: Farang Ses, Thai Medium 621, Tom Light and ChuanPim. Cadson Demak, itself regarded as a proponent of the loopless style in the 2000s, has since shifted its focus to produce more typefaces with looped terminals. Some of the company's designs, such as the Thai ranges for Neue Frutiger and IBM Plex, are also now designed with both looped and loopless varieties as part of the same font family.
Typefaces
Only a few common typefaces are known from the days of cast metal type, including Bradley, Thong Siam, Witthayachan, Farang Ses, and the "Pong" display types. Several more text typefaces are known from the pre-digital era, including Monotype Thai, Unesco, Kurusapa, ChuanPim, UThong and Klonglarn, while Mecanorma's dry-transfer sheets were offered in dozens of typefaces, mostly named after their designers, e.g. Manop 1, Manop 2, etc.
The number of Thai typefaces exploded in the digital age, reaching about 300–400 by 2001. These computer fonts are usually grouped into series, named after their designer or foundry. The major early typeface series include DB by DB Design, UPC by Unity Progress (several of which are licensed to Microsoft), PSL by PSL SmartLetter, SV by Sahaviriya, JS by JS Technology, and Mac OS fonts designed by Apple (Singapore).
People
People notable for their contribution to the field of Thai typography and type design include:
Anuthin Wongsunkakon
Anuthin co-founded Cadson Demak in 2002, and is considered one of Thailand's leading type designers. He is also a lecturer at the Faculty of Architecture, Chulalongkorn University. He and his company have designed fonts for Apple and Google, as well as many other businesses.
Kamthorn Sathirakul
Kamthorn (1927–2008) was the director of the Kurusapa Business Organization, and made many contributions to the publishing industry, including introducing offset printing to Thailand. His research and writing led him to be known as a subject expert on the history of Thai printing.
Manop Srisomporn
Manop, who invented the modern loopless style of typefaces, produced some of the most influential work in Thai type design. His career spans from the age of hand-drawn text to dry transfer to phototype to digital, when he became one of the first to design computer fonts, for Sahaviriya OA. He is now retired.
Pairoj Teeraprapa
Also known as Roj Siamruay, Pairoj is probably best known from his vernacular style of lettering work for film posters, including 2000's Tears of the Black Tiger, later developed into the typeface SR FahTalaiJone. He received the Silpathorn Award in 2014.
Panutat Tejasen
Panutat was a medical student at Chiang Mai University in the 1980s, when he began teaching himself programming and developing Thai-language software. He created the JS series of fonts, which are among the earliest Thai typefaces for the PC.
Parinya Rojarayanond
Parinya is a co-founder of DB Design, Thailand's first digital type foundry, and pioneered the creation of many Thai PostScript fonts in the early digital age. He received the Silpathorn Award in 2009.
Pracha Suveeranont
Pracha is a graphic designer, known for his work with advertising agency SC Matchbox as well as contributions to the field of typographic design. Among his numerous writings on design and culture, his 2002 book and exhibit, 10 Faces of Thai Type and the Nation, helped establish the historical narrative of Thai typography. He received the Silpathorn Award in 2010.
Notes
References
Typography
Typography |
173768 | https://en.wikipedia.org/wiki/Dr.%20Dobb%27s%20Journal | Dr. Dobb's Journal | Dr. Dobb's Journal (DDJ) was a monthly magazine published in the United States by UBM Technology Group, part of UBM. It covered topics aimed at computer programmers. When launched in 1976, DDJ was the first regular periodical focused on microcomputer software, rather than hardware. In its last years of publication, it was distributed as a PDF monthly, although the principal delivery of Dr. Dobb's content was through the magazine's website. Publication ceased at the end of 2014, with the archived website continuing to be available online.
History
Origins
Bob Albrecht edited an eccentric newspaper about computer games programmed in the BASIC computer language, with the same name as the tiny nonprofit educational corporation that he had founded, People's Computer Company (PCC). Dennis Allison was a longtime computer consultant on the San Francisco Peninsula and sometime instructor at Stanford University. The Dobbs title was based on a mashup of their first names, Dennis and Bob.
First issues
In the first three quarterly issues of the PCC newspaper published in 1975, Albrecht had published articles written by Allison, describing how to design and implement a stripped-down version of an interpreter for the BASIC language, with limited features to be easier to implement. He called it Tiny BASIC. At the end of the final part, Allison asked computer hobbyists who implemented it to send their implementations to PCC, which would circulate copies of any implementations to anyone who sent a self-addressed stamped envelope. Allison said, "Let us stand on each other's shoulders; not each other's toes."
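To give a flavour of what a stripped-down interpreter with limited features means, the toy sketch below runs a three-statement BASIC-like dialect in a few dozen lines of Python. It is only an illustration of the general idea and bears no relation to Allison's actual Tiny BASIC design, which targeted 8-bit microcomputers and used its own expression parser rather than a host-language shortcut.

```python
# Toy sketch of a stripped-down BASIC-style interpreter supporting only
# LET, PRINT and GOTO; it illustrates the general idea, is not Allison's
# Tiny BASIC, and borrows Python's eval() in place of a real expression parser.
def run(program: dict[int, str]) -> None:
    variables: dict[str, int] = {}
    line_numbers = sorted(program)
    pc = 0
    while pc < len(line_numbers):
        keyword, _, rest = program[line_numbers[pc]].partition(" ")
        if keyword == "LET":                        # e.g. LET B = A * 21
            name, _, expression = rest.partition("=")
            variables[name.strip()] = eval(expression, {}, variables)
        elif keyword == "PRINT":                    # e.g. PRINT B
            print(eval(rest, {}, variables))
        elif keyword == "GOTO":                     # e.g. GOTO 40
            pc = line_numbers.index(int(rest))
            continue
        pc += 1

run({10: "LET A = 2",
     20: "GOTO 40",
     30: "PRINT 0",      # skipped by the GOTO above
     40: "LET B = A * 21",
     50: "PRINT B"})     # prints 42
```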
The journal was originally intended to be a three-issue xerographed publication. Titled dr. dobb's journal of Tiny BASIC Calisthenics & Orthodontia (with the subtitle Running Light Without Overbyte), it was created to distribute the implementations of Tiny BASIC. The original title was created by Eric Bakalinsky, who did occasional paste-up work for PCC. Dobb's was a contraction of Dennis and Bob. This was at a time when computer memory was very expensive, so compact coding was important. Microcomputer hobbyists needed to avoid using too many bytes of memory.
After the first photocopies were mailed to those who had sent stamped addressed envelopes, PCC was flooded with requests that the publication become an ongoing periodical devoted to general microcomputer software.
PCC agreed, and hired Jim Warren as its first editor. He immediately changed the title to Dr. Dobb's Journal of Computer Calisthenics & Orthodontia prior to publishing the first issue in January 1976.
Early years
Jim Warren was DDJ's editor for about a year and a half. While he went on to make a splash with his series of West Coast Computer Faires, subsequent DDJ editors like Marlin Ouverson, Hank Harrison, Michael Swaine and Jonathan Erickson appear to have focused on the journalistic and social aspects of the young but growing microcomputer industry. Eventually PCC, the non-profit corporation, sold DDJ to a commercial publisher.
The newsletter's content was originally pure enthusiast material. Initial interest circled around the Tiny BASIC interpreter, but Warren broadened that to include a variety of other programming topics, as well as a strong consumer bias, especially needed in the chaotic early days of microcomputing. All of the content came from volunteer contributors, with Steve Wozniak as one of the better known of them. Other contributors included Jef Raskin, later credited as a leader in the Macintosh development; Hal Hardenberg, the originator of DTACK Grounded, an early newsletter for Motorola 68000-based software and hardware; and Gary Kildall, who had created CP/M, the first disk operating system for microcomputers that was not married to proprietary hardware.
Computer program source code published during the early years include:
Tiny BASIC interpreter
Palo Alto Tiny BASIC by Li-Chen Wang
Small-C compiler by Ron Cain
Music programs
There were also projects for computer speech synthesis and computer music systems. The March 1985 issue (volume 10, number 3) printed Richard Stallman's "GNU Manifesto", a call for participation in the then-new free software movement.
Discontinuation of printed edition
In later years, the magazine received contributions from developers all over the world working in application development and embedded systems across most programming languages and platforms. The magazine's focus became more professional. Columnists included Michael Swaine, Allen Holub and Verity Stob, the pseudonymous British programmer.
The title was later shortened to Dr. Dobb's Journal, then changed to Dr. Dobb's Journal of Software Tools as it became more popular. The magazine later reverted to Dr. Dobb's Journal with the selling line "The World of Software Development", with the abbreviation DDJ also used for the corresponding website. In January 2009, editor-in-chief Jonathan Erickson announced that the magazine would cease monthly print publication and become a section of InformationWeek called Dr. Dobb's Report, accompanied by a website and a monthly digital PDF edition.
Later history
The primary Dr. Dobb's content streams at the end were the Dr. Dobb's website, Dr. Dobb's Journal (the monthly PDF magazine, which had different content from the website) and a weekly newsletter, Dr. Dobb's Update. In addition, Dr. Dobb's continued to run the Jolt Awards and, since 1995, the Dr. Dobb's Excellence in Programming Award. Regular bloggers included Scott Ambler, Walter Bright, Andrew Koenig, and Al Williams. Adrian Bridgwater edited the news section beginning in 2010.
End
On December 16, 2014, an article by editor-in-chief Andrew Binstock announced that Dr. Dobb's would cease publication of new articles at the end of 2014. Archived articles are still available online. While no longer distributed, Dr. Dobb's is widely considered an important and influential source for the history of the PC industry.
See also
DTACK Grounded
Component Developer Magazine
References
Further reading
John Markoff, What the Dormouse Said.
External links
Dr. Dobb's Web site
Dr. Dobb's bibliography
Computer magazines published in the United States
Defunct computer magazines published in the United States
Monthly magazines published in the United States
Magazines established in 1976
Magazines disestablished in 2009
1976 establishments in California
Magazines published in San Francisco
Companies based in San Francisco
Informa brands |
24436541 | https://en.wikipedia.org/wiki/Router%20table%20%28woodworking%29 | Router table (woodworking) | A router table is a stationary woodworking machine in which a vertically oriented spindle of a woodworking router protrudes from the machine table and can be spun at speeds typically between 3,000 and 24,000 rpm. Cutter heads (router bits) may be mounted in the spindle chuck. As the workpiece is fed into the machine, the cutters mold a profile into it. The machine normally features a vertical fence, against which the workpiece is guided to control the horizontal depth of cut. Router tables increase the versatility of a hand-held router, as each method of use is suited to particular applications: very large workpieces would be too large to support on a router table and must be routed with a hand-held machine, while very small workpieces would not support a hand-held router and must be routed on a router table with the aid of push-tool accessories.
Varieties
Router tables exist in three varieties:
floor standing machines
accessories bolted into table saws
small bench-top machines
Use
Router tables are used in one of three ways. In all cases, an accessory is used to direct the workpiece.
A fence is used, with the router bit partially emerging from the fence. The workpiece is then moved against the fence, and the exposed portion of the router bit removes material from the workpiece.
No fence is used. A template is affixed to the workpiece, and a router bit with a ball bearing guide is used. The ball bearing guide bears against the template, and the router bit removes material from the workpiece so as to make the workpiece the same shape as the template.
A "pin router" accessory is used. A pin router originally had a pin in the table that would trace the part and hung the router motor on an "over arm" that rose from one edge or corner of the router table, arced over the table, and descends directly (coaxially) towards the pin. This was a big safety concern as people's hand were very accessible to the cutter. In 1976 C.R. Onsrud patented the Inverted Pin Router that reversed the two and mounted the motor under the table and the guide pin on the "over arm". A template (with an interior recess on the top face removed) is affixed to the workpiece, and the guide pin is lowered into this recess. The template is then moved against the pin, carrying the workpiece against the spinning router bit and creating a duplicate of the patterned part.
History
Router tables evolved as shop-improvised tools. Individual woodworkers began taking routers, mounting them in an inverted position beneath a table, and using the routers' depth adjustment to raise the bit through a hole in the table surface.
Over time, manufacturers began selling accessories (pre-made table tops, table legs, table inserts, fences, hold-downs, vertical adjustment tools ("lifts"), and so on).
Finally, manufacturers began selling complete packages, such as the Inverted Pin Router, which put them in the business of effectively selling wood shapers, the very tool that shop-improvised router tables were created as inexpensive substitutes for.
See also
Router (woodworking)#Table mounted router
References
Router Table |
45028101 | https://en.wikipedia.org/wiki/528th%20Engineer%20Battalion | 528th Engineer Battalion | The 528th Engineer Battalion is an engineer battalion of the Louisiana Army National Guard. It is part of the 225th Engineer Brigade, one of the largest engineer brigades in the United States Army National Guard. The 528th Engineer Battalion is headquartered in Monroe, LA in Ouachita Parish, with the remaining companies and detachments located in Franklin, Caldwell, Union, Morehouse, West Carroll and Richland Parishes. The battalion provides command and control to plan, integrate, and direct execution of three to five assigned engineer companies and one forward support company (FSC) to provide mobility in support application or focused logistics.
History
1975-97: 528th Engineer Battalion (Combat Heavy)
1997-2000: 528th Engineer Battalion (Corps) (Wheeled)
2000-06: 528th Engineer Battalion (Combat Heavy)
2006–present: 528th Engineer Battalion
Lineage
The lineage from which the 528th Engineer Battalion has evolved can be traced back over 200 years. Records in the Louisiana National Guard archives list the area's National Guard as the Ouachita Company of Infantry in 1786 (the same year Don Juan Filhiol was commissioned by Governor Miro to establish a post in Monroe, Louisiana, which was later named Fort Miro). In 1803 the area's guard was the Ouachita Company of Cavalry; in 1805 a battalion of the 10th Regiment; and from 1822 to 1840 a battalion of the 19th Louisiana Regiment.
During the Civil War guardsmen from this area included the Monroe Guard, the Monroe Rifles, the Ouachita Blues, the Ouachita Guerrillas Artillery, the Ouachita Rangers, and the Ouachita Southron. Records also indicate that Monroe units entered federal service for the Spanish–American War, the Mexican Border Incident, World War I, and World War II.
Monroe area units were primarily infantry in the early days, but have also included Coast Artillery, Transportation, and Maintenance units. In 1975, area units were reorganized as Combat Heavy Engineers, and on 1 September 1997 the battalion was reorganized as the 528th Combat Engineer Battalion (Corps) (Wheeled).
On 16 September 2000 the 528th Engineer Battalion was restructured as a combat heavy battalion. This internal restructuring added 100 positions to the 528th. The battalion deployed under this designation to Afghanistan for combat operations in support of Operation Enduring Freedom. On 2 September 2006 the 528th Engineer Battalion dropped the (combat heavy) designation and was restructured again to its present-day designation.
Organization
The 528th Engineer Battalion consists of a Headquarters and Headquarters Company, Forward Support Company and four Engineer companies.
Headquartered in Monroe, LA in Ouachita Parish
Headquarters Service Company at Monroe, LA
Forward Support Company at Monroe, LA
830th Engineer Team (Concrete) at Monroe, LA
832nd Engineer Team (Asphalt) at Plaquemine, LA
921st Engineer Company (Horizontal) at Winnsboro, LA in Franklin Parish
1023rd Engineer Company (Vertical) at Bastrop, LA in Morehouse Parish
Mission
HHC
Provides command and control to plan, integrate, and direct execution of three to five assigned engineer companies and one forward support company (FSC) to provide mobility in support application or focused logistics
FSC
To provide direct and habitual combat sustainment support to the engineer battalion in the engineer brigade
830th
To plan, conduct, prepare, and provide construction support equipment and personnel for concrete mixing/pouring as part of major horizontal and vertical construction projects such as highways, storage facilities, airfields and base camp construction
832nd
To plan conduct, prepare and provide construction support equipment and personnel for bituminous mixing, paving and major horizontal construction projects such as highways, storage facilities and airfields
921st
To provide command and control of engineer effects platoons that are necessary to conduct missions such as repairing, maintaining, and constructing air/ground lines of communication (LOC), emplacing culverts, hauling force protection, and limited clearing operations
1023rd
To provide command and control of three to five vertical engineer platoons that provide specific engineering support to logic region (LR) 1-4; construct base camps and internment facilities, as well as construct, repair, and maintain other vertical infrastructure in support of the corps or division and maneuver brigade combat team (BCT)
See also
225th Engineer Brigade
256th Infantry Brigade
Louisiana Army National Guard
References
External links
Louisiana National Guard official homepage
225th Engineer Brigade
Engineer battalions of the United States Army |
33242762 | https://en.wikipedia.org/wiki/William%20Clancey | William Clancey | William J. Clancey (born 1952) is a computer scientist who specializes in cognitive science and artificial intelligence. He has worked in computing in a wide range of sectors, including medicine, education, and finance, and had performed research that brings together cognitive and social science to study work practices and examine the design of agent systems. Clancey has been described as having developed “some of the earliest artificial intelligence programs for explanation, the critiquing method of consultation, tutorial discourse, and student modeling,” and his research has been described as including “work practice modeling, distributed multiagent systems, and the ethnography of field science.” He has also participated in Mars Exploration Rover mission operations, “simulation of a day-in-the-life of the ISS, knowledge management for future launch vehicles, and developing flight systems that make automation more transparent.” Clancey’s work on "heuristic classification" and "model construction operators" is regarded as having been influential in the design of expert systems and instructional programs.
Clancey was Chief Scientist for Human-Centered Computing at NASA Ames Research Center, Intelligent Systems Division from 1998-2013, where he managed the Work Systems Design & Evaluation Group. During this intergovernmental personnel assignment as a civil servant, he was also employed at the Florida Institute for Human and Machine Cognition in Pensacola, where he holds the title of Senior Research Scientist.
Early life and education
William J. Clancey was born and grew up in New Jersey. He was a Boy Scout and rose to the rank of Eagle Scout.
In an eighth-grade commencement address entitled “Humanity's Next Great Adventure,” given at a school in San Mateo, California, in 2002, Clancey recalled his own school years, when he “always got extra credit in eighth grade science, answering quiz questions about the latest Gemini two-person launches, getting us ready for the first trip to the moon in three years. I used to read these stories in the New York Times--absorbing every word. And of course when Star Trek began on TV in September, I watched the first episode and haven't missed any in 36 years. So space travel was on my mind as I sat at MY eighth grade graduation, and it's probably no coincidence that I work for NASA today.”
He graduated as valedictorian from East Brunswick High School, earning honors in biology. He majored in Mathematical Sciences at Rice University in Houston, where in connection with his interest in cognition he took courses in a range of fields, including philosophy, anthropology, linguistics, religion, and sociology. He has said that at Rice “I went through the catalog and took every course that mentioned 'knowledge' or 'cognition,' regardless of the department.” He would later write that “The courses that had the greatest influence on my later work were 'The philosophy of knowledge' (Konstantin Kolenda), 'Language, thought, and culture' (Stephen Tyler), and 'The radical sociology of knowledge' (Kenneth Leiter). My advisor was Ken Kennedy, who taught a fantastic course on compilers. Altogether, I took 40 courses in 13 departments, including six anthropology and three philosophy courses. Rice's teachers were wonderful lecturers who inspired you with their own enthusiasm and the clarity of their thought.” He was elected to Phi Beta Kappa and received a B.A. summa cum laude in 1974.
He then went to Stanford University, where he was engaged in expert systems research.[1] He received a Ph.D. in Computer Science from that institution in 1979, specifically in the area of Artificial Intelligence. He has said that at Stanford, “I focused on Artificial Intelligence, but again combined different areas by developing a computer program to teach medical students how to diagnose a patient (combining computer science, education, psychology, and medicine).” His dissertation project, he has said, “was the first attempt to use an expert system for instruction.” He describes himself as having been “a member of the 'Mycin Gang' in the Heuristic Programming Project, which became the Knowledge Systems Laboratory in the late 1970s. These projects were directed by Bruce G. Buchanan.”
Career
Before NASA
From 1979 to 1987, according to Clancey, he “managed research on Neomycin (one of the first second-generation expert systems) and a variety of associated explanation, instructional, and learning programs funded by the Office of Naval Research and the McDonnell Foundation.” He also designed “the instructional program GUIDON for teaching medical diagnostic strategy.” In his own words, he “developed some of the earliest AI programs for explanation, the critiquing method of consultation, tutorial discourse, and student modeling. My work on 'heuristic classification' and 'model construction operators' has been influential in the design of expert systems and instructional programs.”
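To illustrate the term, the sketch below shows the three-step inference pattern Clancey described as heuristic classification: abstracting the data, heuristically associating the abstraction with a class of solutions, and refining that class to specific solutions. It is a toy example in Python; the thresholds, rules, and category names are simplified stand-ins, not taken from his systems.

```python
# Toy illustration of the heuristic classification pattern
# (data abstraction -> heuristic match -> refinement).
def abstract(findings):
    """Data abstraction: turn raw findings into qualitative abstractions."""
    abstractions = set()
    if findings["white_cell_count"] < 2500:
        abstractions.add("leukopenia")         # quantitative -> qualitative
    if findings["immunosuppressed"]:
        abstractions.add("compromised host")   # definitional abstraction
    return frozenset(abstractions)

# Heuristic match: a non-definitional association between abstractions
# and a broad class of solutions.
HEURISTIC_MATCHES = {
    frozenset({"leukopenia", "compromised host"}): "gram-negative infection",
}

# Refinement: narrow the solution class down to specific candidates.
REFINEMENTS = {
    "gram-negative infection": ["E. coli", "Pseudomonas"],
}

def classify(findings):
    solution_class = HEURISTIC_MATCHES.get(abstract(findings), "unknown")
    return REFINEMENTS.get(solution_class, [solution_class])

print(classify({"white_cell_count": 2000, "immunosuppressed": True}))
# -> ['E. coli', 'Pseudomonas']
```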
From 1988 to 1997 Clancey was associated with the Institute for Research on Learning in Menlo Park, California,[1] of which he was a founding member. It was there that “he co-developed the methods of business anthropology in corporate environments.” His “special interest” at that institute, he has said, was “in relating the cognitive and social perspectives about knowledge and learning. I worked on organizational change and work systems design projects in corporate settings at the former Nynex Science and Technology, Xerox (Customer Care Center, Dallas), and Kaiser-Permanente (Pasadena, CA).”
Clancey has explained this work more simply as follows: “I worked with social scientists (anthropologists, sociologists, and educational psychologists)....We observed people in businesses (such as a customer call center) to understand how learning naturally occurs. We emphasized how people succeed in doing their work despite having inadequate tools or incomplete procedures, by studying how they helped and learned from each other.” He has also said that his “broad college background in computer science, philosophy, and anthropology...helped me understand the social scientists at the Institute for Research on Learning, so I could relate what I knew about computer science to what they knew about people.”
Clancey co-founded Teknowledge, which describes itself as an “IT solutions & ITES company with specialization in Finance Domain, Stocks, Forex, Supply-Chain Management, Enterprise (BOT) Build Operate and Transfer Model & Mobile Development.” He was also a founder of Modernsoft, which produces Financial Genome, “a unique business modeling software for Excel.” He was a founding Editor-in-Chief of AAAI/MIT Press, established by the Association for the Advancement of Artificial Intelligence and The MIT Press in 1989 “as a publishing imprint founded to serve the information needs of the international AI community,” and he has also been a Senior Editor of Cognitive Science.
NASA
Clancey was at NASA from 1998 until 2013. His research at NASA included the use of “work practice simulation to design and evaluate varying configurations of roles and responsibilities for people and automated systems in safety-critical situations.”
In a 2000 presentation, Clancey explained how an awareness of the different types of cognition can aid in developing “heuristics for recognizing extraterrestrial intelligence.” For example, participants in SETI “might look for very long phrases, modality blending (e.g., tasting shapes), noise that is actually music, and descriptions that articulate relationships that humans express kinesthetically as gestures and facial expressions.” He also talked about how current “advances in robotics and neuroscience are the beginnings of a process memory architecture that will become the foundation of a successful computational theory of intelligence.” And he pointed out that “consciousness is not a mystical phenomenon or a topic to be shunned, but is instead the key for understanding how human intelligence is possible at all, how it is distinguished from other forms of intelligence on this planet, and indeed, how it is distinguished from current computer systems. In some important ways, even simple programming languages exceed the capability of human consciousness. But in other ways, related to the flexibility of 'run-time' learning and relating perceptual-motor modalities, no computer system replicates the mechanism of human conceptual coordination.”
In his 2002 commencement address, he described his then-current NASA activities as follows: “I am drawing pictures of space vehicles on Mars, but now I'm living inside them, and actually pretending we are ON Mars. I've done that in the High Arctic of Canada on Devon Island and more recently in the Utah desert, where I was the Station Commander for two weeks.” He said that “For thirty years we've waited to carry space exploration forward. We're in a transition zone, what I like to call 'Standing on Columbus' dock.'...Today we are called by adventure to another land....It's OUR new land, a new frontier. We're going to Mars....But when?...More practically, we're going within 20 years. Some of you could be walking on Mars before you are forty years old. You just have to believe. Your...attitude...matters. You must believe in discovery and adventure. If you let other people say 'it can't be done,' I can promise you, you will be 50 years old like me one day, and we will STILL be sending tin cans around the Earth, round and round going nowhere.”
In an essay entitled “Living On Mars Time,” Clancey has described a two-week period in February 2004 that he spent in Pasadena “observing how geologists and engineers controlled the Mars Exploration Rovers. There were two active rovers near the equator on opposite sides of Mars, at places called Gusev Crater (the rover we called 'Spirit') and at Meridiani Planum ('Opportunity'). I was working with the Opportunity team. At the time...the Opportunity team was living their lives as if they were actually at Meridiani” - that is, living on Mars time, not Earth time.
In a 2010 talk about SETI he noted that “both the nature of consciousness in humans and our belief systems affect our notion of what intelligence can be, how it might communicate, and how we would attempt to communicate with it.” He stressed the importance of “break[ing] out of ways of thinking that are limiting SETI” and asked whether “questioning our assumptions about mortality, purpose, and humanity's long-term role in the Universe [could] inform SETI.”
Clancey told an audience in February 2012 that the MER mission “provides a new way of understanding how computer tools and social organization can be orchestrated to extend human capabilities” and that “the story of planetary exploration today is about the relation of people and robotic spacecraft—machines that are actually complex laboratories capable of operating in extreme cold with little power, packaged to handle the vibrations of launch, and work for years without repair. Sending these scientific instruments throughout the solar system is one of the great successes of the computer age and will surely mark our place in the history of science and exploration.”
At the end of a May 2012 talk to a group of Eagle Scouts he addressed the question “Are we alone? Are there other beings like us in the universe?” He said, “I think the universe is saying to us, 'Take a guess. The answer should be obvious' That could mean the Universe is an unimaginably large space full of a diversity of life and cultures – in the memorable words of Carl Sagan, there are billions and billions of planets like Earth.” He concluded by saying: “Everyday I remind myself of where I am and how we are part of this incredible hum and vibrancy of life. And I feel an immense privilege to be alive now, knowing and enjoying this gift.”
He has described his current professional activities as follows: “I am a scientist who helps NASA design human and robotic space missions, including what people will do (astronauts and flight controllers on earth) and the tools they will use (especially computer systems).” He has said that the skills required for his work are the ability “to observe and describe how people think and work (cognitive science, psychology, anthropology/ethnography),” to “invent new kinds of computer systems (computer science),” and “to think about complex interactions and recognize unclear ways of thinking (philosophy, mathematics).”
“I have always been a research scientist, rather than a professor,” he has written. “Working with students might have been exciting, but as a research scientist I have been able to focus on developing new ideas and new kinds of computer tools.” He advises aspiring scientists to “Carry out your own investigations, even if they have nothing to do with what your school offers you. Realize that many of the important ideas for the future are not in textbooks, but were published 100 years ago or more and have not necessarily been understood or appreciated by your teachers. This includes philosophy and psychology (such as the work of John Dewey) and what is called 'systems theory'....Science and technology are rapidly changing. You can contribute by finding the bits and pieces that interest you, becoming knowledgeable about those things, then bring them to the table when the world is ready.”
He has described his recent writing as “spann[ing] a variety of topics that reconsider the relation of knowledge and memory: 'situated robots,' neuropsychological dysfunctions, and how policies and plans are interpreted in work settings (particularly how the nature of the scientific method is adapted for doing collaborative scientific work remotely on Mars).”
Professional memberships
Clancey was a founding member of the Institute for Research on Learning. He is on the Board of Directors of the Association of Mars Explorers and the CONTACT Conference. He is a member of the Mars Society, a member of the advisory board of the Constructivist Foundations, and chairman and Chief Technology Officer of Modernsoft, Inc. He has also been a fellow of the American College of Medical Informatics since 1986, of the Association for Advancement of Artificial Intelligence since 1991, and of the Association for Psychological Science since 2010.
Miscellaneous activities
Clancey often speaks about space science at schools, museums, and other venues. He has given talks in over twenty countries. He has described himself as having presented the results of his research “in tutorials and keynote addresses in twenty-two countries.”
Honors and awards
On an appointment to NASA from the Florida Institute for Human & Machine Cognition (Pensacola), Clancey and his team received the NASA Honor Award and the Johnson Space Center Exceptional Software Award for an “agent” system that automates all routine file transfers between the International Space Station and Mission Control in Houston.
Clancey received the 2014 Gardner-Lasser Aerospace History Literature Award for Working on Mars.
Selected books
Knowledge-Based Tutoring (1987)
Contemplating Minds: A Forum for Artificial Intelligence (1994, with S. Smoliar and M. Stefik)
Situated Cognition: On Human Knowledge and Computer Representations (1997)
Conceptual Coordination: How the Mind Orders Experience In Time (1999)
Working on Mars: Voyages of Scientific Discovery with the Mars Exploration Rovers (2012)
Conceptual Coordination: How the Mind Orders Experience in Time
His 1999 book Conceptual Coordination: How the Mind Orders Experience in Time has been described in a review by Melanie Mitchell as a book “about the nature of human memory, and how it differs from processes in computers....In Clancey's view, memory consists not of the storage and retrieval of symbols, but of a dynamical process of activation of physically connected neural structures. New memories or concepts are embodied as physical relations between structures in the brain that include not only an encoding of some external contents but, very centrally, the perceptual and motor activities making up 'what I am doing now.' Remembering consists of an approximate re-activation of these structures, in place and in sequence, with the possibility of substitutions, reconstructions, and new interpretations. In this view, a few basic processes at the neural level can explain many different aspects of memory.” While Clancey has learned from thinkers like William James and the founders of Gestalt, his original contribution “is a detailed exposition and extension of these basic ideas, revealing the deep links between perceptual-motor skills and higher-level cognition, a detailed re-examination and recasting of well-known cognitive phenomena and models into this 'conceptual coordination' framework, yielding some interesting novel explanations, and a preliminary link to some recent research in neuroscience.”
Working on Mars
Working on Mars, published by MIT Press, describes how the Mars Exploration Rovers (MER) have “changed the nature of planetary field science,” enabling his team at NASA's Jet Propulsion Laboratory (JPL) in Pasadena to remotely operate the rovers on the Martian surface and thus have a virtual experience of being on the red planet themselves. A reviewer of the book notes that “Clancey, while a computer scientist, is in Working on Mars more of an anthropologist at times, comparing and contrasting the people working on the mission. There are the differences between the scientists using the rovers to study Mars and the engineers who built and operate the rovers, roles that can lead to conflict but also cooperation, as he notes. There's also differences within the science team, particularly between those whose background is working in the field and those who primarily work in the lab. The former often wanted to quickly move on to the next destination, while the latter often wanted to linger and perform more observations with their instruments.”
Working on Mars received the 2014 Gardner-Lasser Aerospace History Literature Award.
Selected publications
Clancey, W.J. 2009. "Becoming a Rover". In S. Turkle (Ed.), Simulation and Its Discontents, Cambridge: MIT Press, pp. 107–127.
Clancey, W.J. 2006. "Clear Speaking about Machines: People are exploring Mars, not robots". AAAI Workshop: The Human Implications of Human-Robotic Interaction, Boston.
Clancey, W.J. 2011. "Relating Modes of Thought". In T. Bartscherer and R. Coover (Eds.), Switching Codes, University of Chicago Press, pp. 161–183.
Clancey, W.J., Sierhuis, M., Alena, R., Dowding, J., Scott, M., and van Hoof, R. 2006. "Power Agents: The Mobile Agents 2006 Field Test at MDRS". To appear in F. Crossman and R. Zubrin (eds.), On to Mars: Volume 3, Burlington, Canada: Apogee Books. [Mars Society Presentation]
Clancey, W. J., Lee, P., Cockell, C., Braham, S., Shafto, M. 2006. "To the North Coast of Devon: Collaborative navigation while exploring unfamiliar terrain". In J. Clarke (Ed.) Mars Analog Research, Vol. 111, American Astronautical Society Science and Technology Series, San Diego: Univelt, Inc., pp. 197–226. AAS 06-263.
Clancey, W.J. 2006. "Observation of work practices in natural settings". In A. Ericsson, N. Charness, P. Feltovich & R. Hoffman (Eds.), Cambridge Handbook on Expertise and Expert Performance. New York: Cambridge University Press, pp. 127–145.
Clancey, W. J. 2008. "Scientific antecedents of situated cognition". In Philip Robbins and Murat Aydede (Eds.), Cambridge Handbook of Situated Cognition. New York: Cambridge University Press, pp. 11–34.
Clancey, W.J, Sierhuis, M., Damer, B., Brodsky, B. 2005. "Cognitive modeling of social behaviors". In R. Sun (Ed.), Cognition and Multi-Agent Interaction: From Cognitive Modeling to Social Simulation. New York: Cambridge University Press, pp. 151–184.
Clancey, W.J. 2006. "Participant observation of a Mars surface habitat mission simulation Habitation", 11(1/2) 27-47.
Pedersen, L., Clancey, W.J., Sierhuis, M., Muscettola, N., Smith, D.E., Lees, D., Rajan, K., Ramakrishnan, S., Tompkins, P., Vera, A., Dayton, T. 2006. "Field demonstration of surface human-robotic exploration activity". AAAI-06 Spring Symposium: Where no human-robot team has gone before, Stanford, March.
Clancey, W.J. 2004. "Roles for agent assistants in field science: Understanding personal projects and collaboration". Special issue of IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 34(2), 125–137, May.
Clancey, W.J., Lowry, M., Nado, R., Sierhuis, M. 2011. "Software Productivity of Field Experiments Using the Mobile Agents Open Architecture with Workflow Interoperability". Proceedings of IEEE Fourth International Conference on Space Mission Challenges for Information Technology (SMC-IT), Palo Alto, CA, August 2011, pp. 85–92
References
External links
http://bill.clancey.name/ website
1952 births
Living people
Cognitive scientists
Florida Institute for Human and Machine Cognition people |
14460347 | https://en.wikipedia.org/wiki/Martyn%20Thomas | Martyn Thomas | Prof. Martyn Thomas CBE FREng FIET FRSA (born 1948) is a British independent consultant and software engineer.
Overview
Martyn Thomas founded the software engineering company Praxis, based in Bath, southern England, in 1983. He has a special interest in safety-critical systems and other high-integrity applications. He has acted as an expert witness in cases involving complex software engineering issues.
Thomas was born in Salisbury, southern England. He studied biochemistry at University College London, graduating in 1969, when he started working in the field of computing. Between 1969 and 1983, he was employed at universities in London and the Netherlands, at STC working on telecommunications software, and at the South West Universities Regional Computer Centre in Bath.
In 1983, Thomas founded Praxis with David Bean, where he encouraged the use of formal methods within the company for software development. In 1986, Praxis became the first independent systems house to achieve BS 5750 (later ISO 9001) certification for all its activities. Praxis became internationally recognised as a leader in the use of rigorous software engineering, including formal methods, and grew to around 200 staff.
In December 1992, Praxis was sold to Deloitte and Touche, an international firm of accountants and management consultants, and Martyn became a Deloitte Consulting international partner whilst remaining chairman and, later, managing director of Praxis. He left Deloitte Consulting in 1997.
He is currently director of Martyn Thomas Associates Limited and a visiting professor at the University of Manchester, and a Fellow and Emeritus Professor at Gresham College. He lives in London.
Current career
Fellow, Emeritus Professor and member of Council, Gresham College,
Visiting Professor of Software Engineering at Aberystwyth University, UK,
Fellow at The Royal Academy of Engineering,
Member at UK Computing Research Committee,
Owner, Principal Consultant and Expert Witness at MTAL.
Past career
Non-executive director of the Health and Safety Executive (HSE),
IT Livery Company Professor of Information Technology at Gresham College,
Member of Advisory Council at Foundation for Information Policy Research,
Non-executive Director of the Serious Organised Crime Agency,
Non-executive Director of the Office of the Independent Adjudicator for Higher Education,
Fellow at British Computer Society,
Chair, Executive Board at DEPLOY Project,
Member, "Sufficient Evidence" study at National Academies / CSTB,
Chair, Steering Committee at DIRC,
Member of Council at EPSRC,
Member of Advisory Group at OST Foresight programmes,
Partner at Deloitte Consulting,
Founder/Chairman/managing director at Praxis,
Chairman at Praxis Critical Systems,
Deputy Director at SWURCC,
Software Engineer at STC.
Honors and awards
Commander of the Order of the British Empire, CBE,
Fellow of the Royal Academy of Engineering,
Honorary DSc (Hull),
Honorary DSc (Edinburgh),
Honorary DSc (City),
Honorary DSc (Bath), Dr of Engineering,
IEE Achievement Medal, Computing and Control,
Who's Who.
References
External links
Martyn Thomas Associates Limited website
IT Livery Professor, Gresham College
Martyn Thomas biography from the IET
Oxford University Computing Laboratory home page
Dr Thomas Oration, University of Bath
1948 births
Living people
People from Salisbury
Alumni of University College London
British software engineers
Formal methods people
People in information technology
British corporate directors
Commanders of the Order of the British Empire
Fellows of the British Computer Society
Fellows of the Institution of Engineering and Technology
Members of the Department of Computer Science, University of Oxford
People from Bath, Somerset |
31117122 | https://en.wikipedia.org/wiki/Planetary%20Science%20Decadal%20Survey | Planetary Science Decadal Survey | The Planetary Science Decadal Survey is a publication of the United States National Research Council produced for NASA and other United States Government Agencies such as the National Science Foundation. The document identifies key questions facing planetary science and outlines recommendations for space and ground-based exploration ten years into the future. Missions to gather data to answer these big questions are described and prioritized, where appropriate. Similar Decadal Surveys cover astronomy and astrophysics, earth science, and heliophysics.
As of 2021 there have been two "Decadals", one published in 2002 for the decade from 2003 to 2013, and one in 2011 for 2013 to 2022. Work on the survey for 2023 to 2032 is currently in progress.
Before the decadal surveys
Planetary Exploration, 1968-1975, published in 1968, recommended missions to Jupiter, Mars, Venus, and Mercury in that order of priority.
Report of Space Science, 1975 recommended exploration of the outer planets.
Strategy for Exploration of the Inner Planets, 1977–1987 was published in 1977.
Strategy for the Exploration of Primitive Solar-System Bodies--Asteroids, Comets, and Meteoroids, 1980–1990 was published in 1980.
A Strategy for Exploration of the Outer Planets, 1986-1996 was published in 1986.
Space Science in the Twenty-First Century – Imperatives for the Decades 1995 to 2015, published in 1988, recommended a focus on "Galileo-like missions to study Saturn, Uranus and Neptune", including a mission to rendezvous with Saturn's rings and study of Titan. It also recommended study of the Moon with a "Lunar Geoscience Orbiter", a network of lunar rovers, and sample return from the lunar surface. The report recommended a Mercury Orbiter to study not only that planet but provide some solar study as well. A "Program of Extensive Study of Mars" beginning with the Mars Pathfinder mission was planned for 1995, to be followed by a 1998 mission to return samples to Earth for study. Study of primitive bodies such as comets and asteroids was recommended in the form of a flyby mission to Apollo and Amor asteroids.
2003–2013, New Frontiers in the Solar System
New Frontiers in the Solar System: An Integrated Exploration Strategy, published in 2003, mapped out a plan for the decade from 2003 to 2013. The committee producing the survey was led by Michael J. Belton. Five panels focused on the inner planets, Mars, the giant planets, large satellites and astrobiology. The survey placed heavy emphasis on Mars exploration, including the Mars Exploration Rovers, established the New Frontiers program, including the New Horizons mission to study Pluto, and established programs in power and propulsion to lay a technological basis for programs in later decades, including crewed missions beyond Earth orbit.
The paper suggested that NASA should prioritize the following missions:
Medium-class missions
Primitive bodies:
Kuiper Belt-Pluto Explorer
Comet Surface Sample Return
Trojan Asteroid/Centaur Reconnaissance
Asteroid Lander/Rover/Sample Return
Triton/Neptune Flyby
Inner planets:
Venus In-Situ Explorer (VISE)
South Pole-Aitken Basin Sample Return
Mars:
Mars Long-Lived Lander Network
Mars Upper Atmosphere Orbiter
Mars Science Laboratory
Giant planets:
Jupiter Polar Orbiter with Probes (JPOP)
Large satellites:
Io Observer
Ganymede Orbiter
Neptune flyby
Large-class missions
Primitive bodies:
Comet Cryogenic Sample Return
Mars:
Mars Sample Return
Outer planets:
Neptune Orbiter with Probes
Large satellites:
Europa Geophysical Explorer
Europa Pathfinder Lander
Europa Astrobiological Lander
Titan Explorer
Uranus Orbiter
Neptune Orbiter
2013–2022, Visions and Voyages for Planetary Science
Visions and Voyages for Planetary Science in the Decade 2013 – 2022 (2011) was published in prepublication form on March 7, 2011, and in final form later that year. Draft versions of the document were presented at town hall meetings around the country and at lunar and planetary conferences, and were made available publicly on the NASA website and via the National Academies Press. The report differed from previous reports in that it included a "brutally honest" budgetary review from a third-party contractor.
Flagship missions
The report highlighted a new Mars rover, a mission to Jupiter's moon Europa, and a mission to Uranus and its moons as proposed Flagship Missions. The Mars mission was given highest priority, followed by the Europa mission.
The Mars rover proposal was called MAX-C; it would store samples for eventual return to Earth, but the method of return was left open. The report recommended the rover mission only if it could be done cheaply enough (US$2.5 billion).
Studies
The committee producing the survey was led by Steve Squyres of Cornell University and included 5 panels focusing on the inner planets (Mercury, Venus, and the Moon), Mars (not including Phobos and Deimos), the gas giant planets, satellites (Galilean satellites, Titan, and other satellites of the giant planets) and primitive bodies (Asteroids, comets, Phobos, Deimos, Pluto/Charon and other Kuiper belt objects, meteorites, and interplanetary dust).
Mission & Technology Studies:
Mercury Lander Mission Concept Study
Venus Mobile Explorer Mission Concept Study
Venus Intrepid Tessera Lander Concept Study
Venus Climate Mission Concept Study
Lunar Geophysical Network Concept Study
Lunar Polar Volatiles Explorer Mission Concept Study
Near Earth Asteroid Trajectory Opportunities in 2020–2024
Mars 2018 MAX-C Caching Rover Concept Study
Mars Sample Return Orbiter Mission Concept Study
Mars Sample Return Lander Mission Concept Study
Mars 2018 Sky Crane Capability Study
Mars Geophysical Network Options
Mars Geophysical Network Concept Study
Mars Polar Climate Concepts
Jupiter Europa Orbiter (component of EJSM) Concept Study
Io Observer Concept Study
Ganymede Orbiter Concept Study
Trojan Tour Concept Study
Titan Saturn System Mission
Saturn Atmospheric Entry Probe Trade Study
Saturn Atmospheric Entry Probe Mission Concept Study
Saturn Ring Observer Concept Study
Enceladus Flyby & Sample Return Concept Studies
Enceladus Orbiter Concept Study
Titan Lake Probe Concept Study
Chiron Orbiter Mission Concept Study
Uranus and Neptune Orbiter and Probe Concept Studies
Neptune-Triton-Kuiper Belt Objects Mission Concept Study
Comet Surface Sample Return Mission Concept Study
Cryogenic Comet Nucleus Sample Return Mission Technology Study
Small Fission Power System Feasibility Study
The recommendation for the New Frontiers program was that the next mission (New Frontiers 4) be selected from among Comet Surface Sample Return, Lunar South Pole-Aitken Basin Sample Return, Saturn Probe, Trojan Tour and Rendezvous, and Venus In Situ Explorer, and that the following selection (New Frontiers 5) add Io Observer and Lunar Geophysical Network to the candidate list. In its 2011 response to the review, NASA supported the New Frontiers recommendations. (The first three New Frontiers missions were the New Horizons Pluto flyby, the Juno Jupiter orbiter, and the OSIRIS-REx near-Earth asteroid sample return mission.)
See also
Astronomy and Astrophysics Decadal Survey
Earth Science Decadal Survey
References
External links
Official website
Previous Space Studies Board reports
Planetary science
Decadal science surveys |
19842044 | https://en.wikipedia.org/wiki/Technocity%2C%20Thiruvananthapuram | Technocity, Thiruvananthapuram | Technocity is a technology park and under-construction integrated township in Thiruvananthapuram, Kerala, India, dedicated to electronics, software, and other information technology (IT). South India's largest World Trade Centre, with a total built-up area of 2.5 million sq ft, and the world's fifth Nissan Global Digital Hub campus are planned for the site. Conceived in 2005 as the fourth phase of development of Technopark, Technocity is a complete IT city spread across about 500 acres (200 hectares), which includes not just space for IT/ITES firms but also residential, commercial, hospitality, medical and educational facilities. The project is planned as a new self-dependent satellite city that would not strain the resources and infrastructure of the city of Thiruvananthapuram.
The units in Technocity include a wide variety of companies engaged in a range of activities, including embedded software development, enterprise resource planning (ERP), process control software design, engineering and computer-aided design software development, IT Enabled Services (ITES), process re-engineering, animation and e-business. The firms will include domestic companies as well as subsidiaries of multi-national organisations.
Technocity is being jointly developed by the Government of Kerala and private developers. The Govt. is represented by Technopark and a special company called the Kerala State Information Technology Infrastructure Limited (KSITIL). Individual phases of the project will be developed as Special Purpose Vehicles (SPVs) between KSITIL and individual developers. KSITIL plans to hold 26% equity share in all the SPVs. Once fully operational, Technocity is expected to create about 200,000 employment opportunities.
Infrastructure
Technocity provides all the infrastructure and support facilities needed for IT/ITES and electronics companies to function as well as for their employees to enjoy world-class lifestyles. This is done either directly or through private partners. In addition, Technocity, like Technopark, may provide business incubation facilities.
IT Space
Technocity will have up to 20 million square feet (1.9 million square meters) of built-up space within multiple buildings for its tenant organisations. This facility will be developed in phases over as many as 10 years.
Non-IT Space
Technocity is being developed as an Integrated Township and it will include residential space, commercial space, retail facilities, multiplexes, hospitals and schools. This will enable employees in the companies at Technocity to enjoy a world-class lifestyle within walking distance of their offices.
Utilities and support facilities
Technocity will supply electricity through a 220 kV, 100 MVA dedicated internal power sub-station and distribution system with built-in redundancies at all levels. Water supply is from a dedicated treatment plant, while a sewage treatment plant will be built especially for Technocity. All these facilities will be developed by KSITIL and Technopark for the entire area.
Connectivity
Thiruvananthapuram is connected to the National Internet Backbone and Technocity will be serviced by a variety of bandwidth providers, including Reliance Infocomm, Bharti Airtel, Videsh Sanchar Nigam and Asianet Dataline, through fibre optic lines in the campus.
FLAG Telecom—a subsidiary of Reliance Infocomm—has landed its FALCON global cable system at Thiruvananthapuram, providing direct connectivity to the Maldives and Sri Lanka. Technocity will be connected through fiber link, in self-healing redundant ring architecture to Reliance Internet Data Center and Gateway at Mumbai, directly connecting to FLAG, the undersea cable system backbone that connects 134 countries including U.S, U.K, Middle East and Asia Pacific. This provides connectivity with the Middle East, South East Asia, Far East, Europe and North America.
Institutions
Technocity will host at least two important educational and research institutes. The Indian Institute of Information Technology and Management–Kerala (IIITM–K), which has now been upgraded to the Kerala University of Digital Sciences, Innovation and Technology, has been allocated 10 acres (4 hectares) in Technocity to develop its own campus. IIITM–K is a premier institution of higher education, research and development in applied information technology and management. In addition to providing postgraduate courses in Information Technology, IIITM–K is a leader in educational networking and in setting up web portals which benefit the community. Portals for computational chemistry and agricultural information dissemination are among its focus areas. IIITM–K is located at present in Technopark, Thiruvananthapuram.
The Asian School of Business (ASB) is an institution of post graduate management education. It was started in 2005. The ASB is currently located inside Technopark and plans to move to a campus in Technocity by 2020–21. ASB offers the full-time Post Graduate Programme in Management (PGPM). The Asian School of Business is managed by a Board of Governors which includes stalwarts of the Indian IT industry like Tata Consultancy Services CEO S. Ramadorai and Infosys CEO Kris Gopalakrishnan.
Phases of Development
Technocity will be developed in multiple phases by a number of private developers in association with KSITIL and Technopark. The developers will be chosen through global bidding as each phase becomes ready for development.
Tata Consultancy Services (BSE: 532540, NSE: TCS), a leading IT services, consulting and business solutions organization, announced that it is setting up the world's largest corporate learning and development center with a total capacity to train 15,000 professionals at one time and 50,000 professionals annually.
The proposed TCS Learning Campus in Thiruvananthapuram will be located on a 97-acre property in the Technopark area of the city. The campus will be built over an area of 6.1 million square feet and feature residential accommodation for professionals and faculty at the center.
Phase I
On 16 October 2008, KSITIL called for Requests for Proposal to develop about 60 acres (0.23 km²) of land as a mixed-use IT/ITES park. This will be the first phase of Technocity.
Nine major developers qualified for the bid, including the Indian subsidiary of Emaar Properties, a consortium of Forest City Enterprises and Sun Group, K. Raheja Corporation, Larsen and Toubro, Suzlon, Maytas Infrastructure and the Brigade Group.
Special Economic Zones in Technocity
There are plans for nine Special Economic Zones (SEZs) within Technocity. These will be developed by multiple developers in association with KSITIL and Technopark. One SEZ has been set apart for development by Technopark itself.
Notes
External links
Technopark official website
Kerala IT Mission official website, Department of IT, Government of Kerala
Kerala State Information Technology Infrastructure Limited website
The Indian Institute of Information Technology and Management–Kerala (IIITM–K)
2008 establishments in Kerala
Organisations based in Thiruvananthapuram
Science and technology in Thiruvananthapuram
Software technology parks in Kerala |
44133735 | https://en.wikipedia.org/wiki/Alteryx | Alteryx | Alteryx is an American computer software company based in Irvine, California, with a development center in Broomfield, Colorado. The company's products are used for data science and analytics. The software is designed to make advanced analytics accessible to any data worker.
History
SRC LLC, the predecessor to Alteryx, was founded in 1997 by Dean Stoecker, Olivia Duane Adams and Ned Harding. SRC developed the first online data engine for delivering demographic-based mapping and reporting shortly after being founded. In 1998, SRC released Allocate, a data engine incorporating geographically organized U.S. Census data that allows users to manipulate, analyze and map data. Solocast was developed in 1998, which was software that allowed customers to do customer segmentation analysis.
In 2000, SRC LLC entered into a contract with the U.S. Census Bureau that resulted in a modified version of its Allocate software being included on CD-ROMs of Census Data sold by the Bureau.
In 2006, the software product Alteryx was released, which was a unified spatial and non-spatial data environment for building analytical processes and applications.
In 2010, SRC LLC changed its name to that of its core product, Alteryx.
In 2011, Alteryx raised $6 million in venture funding from the Palo Alto investment arm of SAP AG, SAP Ventures. In 2013, Alteryx raised $12 million from SAP Ventures and Toba Capital. In 2014, the company raised $60 million in Round B funding from Insight Venture Partners, Sapphire Ventures (formerly SAP Ventures) and Toba Capital, and announced plans for a 30% workforce expansion.
In 2015, ICONIQ Capital led an $85 million investment in Alteryx, with Insight Venture Partners and Meritech Capital Partners also participating. Alteryx announced plans to use the new capital to expand internationally, invest in research and development, and increase its sales and marketing efforts.
In 2016, Alteryx was ranked #24 on the Forbes Cloud 100 list.
On March 24, 2017, Alteryx went public in an IPO listed on the NYSE.
In October 2017, it was discovered that Alteryx was subject to a data breach of partially anonymized data records for approximately 120 million U.S. households.
On February 22, 2018, Alteryx was named a leader in Gartner's 2018 Magic Quadrant for Data Science and Machine Learning Platforms.
Products
As of July 2020, Alteryx offered the following products as part of an analytics platform (ver. 2019.1):
Alteryx Connect
Alteryx Designer
Alteryx Promote
Alteryx Server
Analytics Hub
Alteryx Intelligence Suite
Alteryx also hosts a cloud-based website known as the Alteryx Analytics Gallery.
Acquisitions
In January 2017, Alteryx acquired Prague-based software company, Semanta. Alteryx Connect is an outgrowth of the Semanta acquisition.
In June 2017, Alteryx acquired data science startup Yhat to enhance their capabilities for managing and deploying advanced analytic models ultimately resulting in Alteryx Promote. Alteryx paid $10.8 million in cash and equity. Yhat had raised $2.6 million before the acquisition.
In February 2018, Alteryx acquired Alteryx ANZ, a distributor of Alteryx software based in Sydney, Australia.
In April 2019, Alteryx acquired ClearStory Data for $19.6 million in cash.
In October 2019, Alteryx acquired Feature Labs, a machine learning startup founded by two MIT researchers, for $25.2 million in cash with an additional $12.5 million in equity incentive awards. Feature Labs is known for developing Featuretools, an open-source library for automated feature engineering with over 350,000 downloads at the time of acquisition. The acquisition added an engineering hub for Alteryx in Boston, Massachusetts. Feature Labs had raised $1.5 million prior to the acquisition.
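As a brief illustration of what Featuretools automates, the sketch below runs Deep Feature Synthesis on the mock customer dataset that ships with the library. It assumes a Python environment with featuretools installed; the keyword is target_dataframe_name in recent releases (older releases used target_entity).

```python
# Sketch of automated feature engineering with Featuretools (illustrative).
import featuretools as ft

# Demo entity set with customers, sessions, and transactions tables and
# their relationships already defined.
es = ft.demo.load_mock_customer(return_entityset=True)

# Deep Feature Synthesis builds aggregate and transform features across
# the related tables for the chosen target table.
feature_matrix, feature_defs = ft.dfs(
    entityset=es,
    target_dataframe_name="customers",  # target_entity in older versions
    max_depth=2,
)
print(feature_matrix.head())
```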
Awards and recognition
Alteryx was recognized by research firm Gartner as a leader in the 2018 Magic Quadrant for Data Science and Machine Learning Platforms. In addition, Alteryx was named the Gold winner in "The Best Business Intelligence and Analytics Software of 2017, as Reviewed by Customers” by Gartner Peer Insights, a comprehensive platform that provides unfiltered, first-hand product and service ratings and reviews by experienced Enterprise Technology Buyers.
Alteryx has also been named one of Deloitte's Technology Fast 500, an 2019 APPEALIE SaaS Award Winner, and a Top 20 AI All-Stars in Technology by KeyBanc Capital Markets.
In 2017, co-founder and CEO Dean Stoecker received the Ernst and Young Entrepreneur of the Year 2017 Award in the Orange County Region, which recognizes entrepreneurs who excel in areas such as innovation, financial performance and personal commitment to their businesses and communities.
Alteryx was also named one of the Best Places to Work in Orange County three years in a row (2016, 2017, and 2018), ranking within the top ten in the large employer category.
Kevin Rubin was recognized as the CFO of the Year by the Orange County Business Journal, an annual award that recognizes Orange County CFOs who demonstrated superior leadership and corporate stewardship in the preceding fiscal year.
Data Breach
During the October 2017 data breach mentioned above, although no names were attached, telephone numbers and physical addresses were among the 248 fields per household involved in the breach. Also included was "consumer demographics, life event, direct response, property, and mortgage information for more than 235 million consumers", according to the company. Alteryx assembled information from Experian and public sources such as the U.S. Census Bureau to create the product, which sold for US$39,000 per license. Alteryx's hosting on Amazon Web Services had been left unsecured (the source datasets themselves were not breached).
References
External links
Alteryx website
Business intelligence companies
Business software
Extract, transform, load tools
Software companies based in California
Companies based in Irvine, California
Software companies established in 1997
2017 initial public offerings
Companies listed on the New York Stock Exchange
Analytics companies
Data analysis software
Software companies of the United States |
46980164 | https://en.wikipedia.org/wiki/Treston%20Thomison | Treston Thomison | Treston Thomison is an American mixed martial artist currently competing in the Featherweight division of Bellator MMA. A professional competitor since 2009, he has also competed for King of the Cage.
Mixed martial arts record
Result | Record | Opponent | Method | Event | Date | Round | Time | Location | Notes
Loss | 10–6 | Justin Lawrence | TKO (doctor stoppage) | Bellator 181 | | 1 | 3:34 | Thackerville, Oklahoma, United States | Return to Featherweight.
Loss | 10–5 | Emmanuel Rivera | Decision (unanimous) | Bellator 174 | | 3 | 5:00 | Thackerville, Oklahoma, United States | Catchweight (151 lbs) bout.
Win | 10–4 | Dawond Pickney | Submission (armbar) | Bellator 166 | | 1 | 0:51 | Thackerville, Oklahoma, United States | Lightweight debut.
Win | 9–4 | Aaron Roberson | Submission (guillotine choke) | Bellator 151 | | 2 | 2:20 | Thackerville, Oklahoma, United States |
Loss | 8–4 | Chris Jones | Decision (unanimous) | Bellator 146 | | 3 | 5:00 | Thackerville, Oklahoma, United States | Catchweight (150 lbs) bout.
Loss | 8–3 | Cody Walker | KO (head kick) | Bellator 128 | | 2 | 4:59 | Thackerville, Oklahoma, United States |
Win | 8–2 | Stephen Banaszak | Submission (guillotine choke) | Bellator 121 | | 1 | N/A | Thackerville, Oklahoma, United States |
Loss | 7–2 | Stephen Banaszak | Submission (armbar) | Bellator CXIX | | 1 | 4:56 | Rama, Ontario, Canada |
Win | 7–1 | Jade Porter | Submission (armbar) | KOTC: Regulators | | 1 | 0:18 | Scottsdale, Arizona, United States |
Win | 6–1 | Daniel Armendariz | KO (punch) | KOTC: Aerial Assault | | 1 | 0:31 | Thackerville, Oklahoma, United States |
Win | 5–1 | Brian Joseph | Submission (armbar) | KOTC: Bad Intentions II | | 1 | 1:51 | Thackerville, Oklahoma, United States |
Loss | 4–1 | Mike Maldonado | KO (punches) | KOTC: Total Destruction | | 1 | 0:31 | Thackerville, Oklahoma, United States |
Win | 4–0 | Anthony Kellen | Decision (split) | KOTC: Apocalypse | | 3 | 5:00 | Thackerville, Oklahoma, United States |
Win | 3–0 | Scott Bear | Submission (rear-naked choke) | KOTC: Epic Force | | 1 | 1:18 | Thackerville, Oklahoma, United States |
Win | 2–0 | Cris Leyva | Submission (armbar) | KOTC: Underground 67 | | 1 | 2:13 | Cortez, Colorado, United States |
Win | 1–0 | Kyle Waag | Submission | FCF: Freestyle Cage Fighting 43 | | 1 | 0:29 | Claremore, Oklahoma, United States |
Mixed martial arts amateur record
|-
| Win
| align=center| 4-0
| Charles Anderson
| Submission (rear-naked choke)
| FCF: Freestyle Cage Fighting 39
|
| align=center| 1
| align=center| 0:20
| Shawnee, Oklahoma, United States
|
|-
| Win
| align=center| 3-0
| Robert Pickle
| Submission
| FCF: Freestyle Cage Fighting 33
|
| align=center| 3
| align=center| 0:27
| Durant, Oklahoma, United States
|
|-
| Win
| align=center| 2-0
| Steven Tackett
| Submission (armbar)
| FCF: Freestyle Cage Fighting 31
|
| align=center| 1
| align=center| 1:01
| Tulsa, Oklahoma, United States
|
|-
| Win
| align=center| 1-0
| Charles Evans
| Submission (rear-naked choke)
| FCF: Freestyle Cage Fighting 30
|
| align=center| 1
| align=center| 0:20
| Shawnee, Oklahoma, United States
|
See also
List of male mixed martial artists
References
External links
American male mixed martial artists
Featherweight mixed martial artists
Lightweight mixed martial artists
Living people
Year of birth missing (living people) |
29304768 | https://en.wikipedia.org/wiki/Clementine%20%28software%29 | Clementine (software) | Clementine is a free and open-source audio player. It is a port of Amarok 1.4 to the Qt 4 framework and the GStreamer multimedia framework. It is available for Unix-like operating systems, Windows, and macOS. Clementine is released under the terms of the GPL-3.0-or-later.
Clementine was created due to the transition from version 1.4 to version 2 of Amarok, and the shift of focus connected with it, which was criticized by many users. The first version of Clementine was released in February 2010.
The last stable release of Clementine was in 2016, but development has since resumed on GitHub, with a number of release candidate versions published.
In 2018, a fork of Clementine named Strawberry Music Player was released.
Features
Some of the features supported by Clementine are:
Listening to Internet radio from Spotify, Grooveshark (now defunct), Jamendo (January 2014 catalog), Last.fm, Magnatune, RadioTunes (formerly Sky.FM), SomaFM, Icecast, Digitally Imported, SoundCloud and Google Drive.
Sidebar information panes with song lyrics, statistics, artist biographies and pictures.
Tag editor, album cover and queue manager.
Downloading cover art from Last.fm.
Fetching missing tags from MusicBrainz.
projectM audio visualization.
Searching for and downloading podcasts.
Creation of smart and dynamic playlists.
Tabbed playlists, import and export as M3U, XSPF, PLS, ASX and Cue sheets.
Transfer of music to some iPods (iPod corruption problems exist as of build 1.1.1), iPhone, MTP or any USB mass-storage player.
Transcoding music into MP3, Ogg (Vorbis, Speex, Opus), FLAC, AAC or WMA.
Playback of Windows Media Files in macOS (which iTunes and many other players with advanced library functions cannot do).
Remote control using an Android device, a Wii Remote, MPRIS or the command-line interface.
Moodbar visualizations.
Saving statistics to file.
See also
References
External links
2010 software
Applications using D-Bus
Audio player software that uses Qt
Free audio software
Free media players
Free software programmed in C++
Linux media players
macOS media players
Software that uses GStreamer
Windows multimedia software |
32382121 | https://en.wikipedia.org/wiki/Nefsis | Nefsis | Nefsis Corporation is a communications technology company. It was an early developer of real-time communications software and the first to use cloud computing in the videoconferencing industry.
Nefsis offers multipoint video conferencing with integrated voice and live collaboration solutions for small to medium-sized business and distributed enterprise customers.
History
Nefsis was founded in 1998 by Allen Drennan as WiredRed Corporation. The company name was changed to Nefsis Corporation in 2010.
From 1998 through 2000, the company developed and sold a VPN-like, full-duplex, multipoint communications software product called e/pop that supported several applications including presence management, instant messaging, multiparty VoIP, and remote control.
In 2001 the company introduced version 3 of its e/pop software, including server-to-server pipes, providing a unique method of relaying presence status and secure instant messaging across firewalls and proxies in multi-office, distributed networks. e/pop v3.0 received Network Computing's Editor's Choice award in September 2004 for enterprise instant messaging, due in part to its secure multi-office capabilities.
The company's real-time software technology was distributed under OEM license by Sony Online Entertainment in 2003 as the multipoint VoIP software engine in the PlanetSide multiplayer online game. Commencing in 2004, NewHeights Software Corporation licensed the company's technology to power presence, IM, and web conferencing features in several softphone products sold under the NewHeights and Mitel brands. These OEM integrations were noteworthy at the time as they added multipoint VoIP and web conferencing to these online gaming and softphone applications, respectively.
In May 2004, the company appeared in the market research report "Gartner Magic Quadrant for Web Conferencing", which cited its "forward-looking hybrid of presence-based IM and Web conferencing." During the same timeframe the company added multipoint video as another feature of its on-premises web conferencing software products.
In 2005 the company started offering its software under hosted service agreements (software-as-a-service). After several years in development, the company introduced cloud computing and parallel processing technology to its customers commencing in 2008. The new video conferencing online service was introduced under the Nefsis brand, which later became the company name.
The company was cited by European CEO Magazine and market research firm Frost & Sullivan in 2009 as the first to use cloud computing in a multipoint video conferencing online service.
Nefsis has been used for corporate video conferencing and online meetings, as a business continuity tool during inclement weather, and in specialty applications such as training, telemedicine, video arraignment, and video remote interpreting among others.
In 2011, Nefsis was acquired by Brother Industries.
References
External links
Official Website
Web conferencing
Teleconferencing
Videotelephony |
14583350 | https://en.wikipedia.org/wiki/Rawstudio | Rawstudio | Rawstudio is a stand-alone application for reading and manipulating images in raw image formats from digital cameras. It is designed for working rapidly with a large volume of images, whereas similar tools are designed to work with one image at a time.
Rawstudio reads raw images from all digital camera manufacturers using dcraw as a back end. It supports color management using LittleCMS, allowing the user to apply color profiles (see also Linux color management).
Rawstudio uses the GTK+ user interface toolkit.
Rawstudio was available in Debian through version 7 "Wheezy", but removed from the distribution due to the software's dependency on obsolete libraries.
See also
Darktable
RawTherapee
UFRaw
References
External links
Digital photography
Free graphics software
Free photo software
Free software programmed in C
Graphics software that uses GTK
Photo software for Linux
Raw image processing software |
1103452 | https://en.wikipedia.org/wiki/Line%20Mode%20Browser | Line Mode Browser | The Line Mode Browser (also known as LMB, WWWLib, or just www) is the second web browser ever created.
The browser was the first demonstrated to be portable to several different operating systems.
Operated from a simple command-line interface, it could be widely used on many computers and computer terminals throughout the Internet.
The browser was developed starting in 1990, and then supported by the World Wide Web Consortium (W3C) as an example and test application for the libwww library.
History
One of the fundamental concepts of the "World Wide Web" projects at CERN was "universal readership". In 1990, Tim Berners-Lee had already written the first browser, WorldWideWeb (later renamed to Nexus), but that program only worked on the proprietary software of NeXT computers, which were in limited use. Berners-Lee and his team could not port the WorldWideWeb application with its features—including the graphical WYSIWYG editor— to the more widely deployed X Window System, since they had no experience in programming it. The team recruited Nicola Pellow, a math student intern working at CERN, to write a "passive browser" so basic that it could run on most computers of that time.
The name "Line Mode Browser" refers to the fact that, to ensure compatibility with the earliest computer terminals such as Teletype machines, the program only displayed text, (no images) and had only line-by-line text input (no cursor positioning).
Development started in November 1990 and the browser was demonstrated in December 1990.
The development environment used resources from the PRIAM project, a French language acronym for "PRojet Interdivisionnaire d'Assistance aux Microprocesseurs", a project to standardise microprocessor development across CERN.
The short development time produced software written in a simplified dialect of the C programming language, because the official ANSI C standard was not yet available on all platforms.
The Line Mode Browser was released to a limited audience on VAX, RS/6000 and Sun-4 computers in March 1991. Before the release of the first publicly available version, it was integrated into the CERN Program Library (CERNLIB), used mostly by the High-Energy Physics-community. The first beta of the browser was released on 8 April 1991. Berners-Lee announced the browser's availability in August 1991 in the alt.hypertext newsgroup of Usenet.
Users could use the browser from anywhere in the Internet through the telnet protocol to the info.cern.ch machine (which was also the first web server).
The spreading news of the World Wide Web in 1991 increased interest in the project at CERN and other laboratories such as DESY in Germany, and elsewhere throughout the world.
The first stable version, 1.1, was released in January 1992. Since version 1.2l, released in October 1992, the browser has used the common code library (later called libwww). The main developer, Pellow, started working on the MacWWW project, and both browsers began to share some source code. In the May 1993 World Wide Web Newsletter Berners-Lee announced that the browser was released into the public domain to reduce the work on new clients. On 21 March 1995, with the release of version 3.0, CERN put the full responsibility for maintaining the Line Mode Browser on the W3C. The Line Mode Browser and the libwww library are closely tied together—the last independent release of a separate browser component was in 1995, and the browser became part of libwww.
The Agora World Wide Web email browser was based on the Line Mode Browser. The Line Mode Browser was very popular in the beginning of the web, since it was the only web browser available for all operating systems. Statistics from January 1994 show that Mosaic had quickly changed the web browser landscape and only 2% of all World Wide Web users browsed by Line Mode Browser. The new niche of text-only web browser was filled by Lynx, which made the Line Mode Browser largely irrelevant as a browser. One reason was that Lynx is much more flexible than the Line Mode Browser. It then became a test application for the libwww.
Operating mode
The simplicity of the Line Mode Browser had several limitations.
The Line Mode Browser was designed to work on any operating system using what were called "dumb" terminals. The user interface had to be as simple as possible. The user began with a command-line interface specifying a Uniform Resource Locator (URL). The requested web page was then printed line by line on the screen, like a teleprinter. Websites were displayed using the first versions of HTML. Formatting was achieved with capitalization, indentation, and new lines. Header elements were capitalized, centered and separated from the normal text by empty lines.
Navigation was not controlled by a pointing device such as a mouse or arrow keys, but by text commands typed into the program.
Numbers in brackets are displayed for each link; links are opened by typing the corresponding number into the program.
This led one journalist of the time to write: "The Web is a way of finding information by typing numbers."
The page scrolled down when an empty command (carriage return) was entered, and scrolled up with the command "u". The command "b" navigated backwards in history, and new pages were navigated with "g http://..." (for go to) and the URL.
The browser had no authoring functions, so pages could only be read and not edited. This was considered to be unfortunate by Robert Cailliau, one of the developers.
Features
The Line Mode Browser was designed to be platform independent. There are official ports to Apollo/Domain, IBM RS/6000, DECstation/Ultrix, VAX/VMS, VAX/Ultrix, MS-DOS, Unix, Windows, Classic Mac OS, Linux, MVS, VM/CMS, FreeBSD, Solaris, and macOS. The browser supports many protocols like File Transfer Protocol (FTP), Gopher, Hypertext Transfer Protocol (HTTP), Network News Transfer Protocol (NNTP), and Wide Area Information Server (WAIS).
Other features included rlogin and telnet hyperlinks, Cyrillic support (added on 25 November 1994 in version 2.15), and ability to be set up as a proxy client. The browser could run as a background process and download files. The Line Mode Browser has had problems recognizing character entities, properly collapsing whitespace, and supporting tables and frames.
References
Further reading
External links
Line Mode Browser 2013 (CERN)
Gopher clients
Text-based web browsers
Free web browsers
Free software programmed in C
World Wide Web Consortium
Usenet clients
Web browsers for DOS
MacOS web browsers
Web browsers
Hypertext Transfer Protocol clients
Portable software
1991 software
Cross-platform software
CERN software
Discontinued web browsers |
1807201 | https://en.wikipedia.org/wiki/QAD%20Inc. | QAD Inc. | QAD Inc. is a software company that provides enterprise resource planning (ERP) software and related enterprise software to manufacturing companies. The company has customers in over 100 countries around the world.
On June 28, 2021, it was announced that Thoma Bravo would acquire QAD Inc. in an all-cash transaction with an equity value of approximately $2 billion.
History
QAD was founded in 1979 by Pamela Lopker, who serves as president. QAD initially developed proprietary software applications for manufacturing companies in Southern California.
In 1984, QAD announced MFG/PRO, which was built using Progress Software Corporation's Fourth Generation Language (4GL) and relational database. MFG/PRO was one of the first software applications built for manufacturers following the APICS principles. MFG/PRO was also one of the first applications to support closed-loop Manufacturing Resource Planning (MRP II), as well as operation in open systems. QAD software supports lean manufacturing principles and interoperates with other systems via open standards. QAD stock began trading with its initial public offering (IPO) on August 6, 1997.
In 2003, a product called Supply Visualization (since rebranded to Supplier Portal) was first hosted in a multi-tenant configuration for QAD customers and those customers' suppliers, establishing QAD as a player in providing Software-as-a-Service (SaaS) software for manufacturers. QAD Supplier Portal continues to allow customers and their authorized suppliers to share information about inventory, scheduling, purchase orders, shipments, Kanbans and more. In 2006, QAD announced a user interface called .NET UI, and in 2007 its core product suite name was changed from MFG/PRO to QAD Enterprise Applications. QAD began offering cloud applications in 2003, and in 2011 officially launched QAD On Demand, which was later named QAD Cloud ERP.
In 2015, the QAD Cloud ERP software was enhanced further when the Channel Islands User Experience (UX) initiative was launched in phases, named after the Channel Islands off the coast of Santa Barbara, California. In 2017, the QAD Enterprise Platform was released as a way to deliver functionality to users utilizing the Channel Islands UX in QAD Cloud ERP. In 2019, QAD renamed its software portfolio to QAD Adaptive Applications. In addition, QAD's flagship ERP software was renamed QAD Adaptive ERP, which features the Adaptive UX and is built on the QAD Enterprise Platform.
QAD sells its products and services to companies in six main manufacturing industries: automotive, consumer products, high technology, food and beverage, industrial equipment and life sciences. The company's software portfolio is called QAD Adaptive Applications, which is headlined by QAD Adaptive ERP. QAD Adaptive Applications is designed to streamline the management of manufacturing operations, supply chains, financials, customers, technology and business performance. QAD Adaptive ERP is marketed as SaaS software using cloud computing.
Acquisitions
Sep 20, 2006 - QAD acquired Precision Software, a company delivering transportation, global trade and supply chain management software. (2019 - Precision Software renamed QAD Precision)
Nov 6, 2006 - QAD acquired FBO Systems, Inc., a Georgia-based company and leading provider of enterprise asset management (EAM) products and professional services.
Jun 30, 2006 - QAD acquired Bisgen Ltd., a UK-based company whose product is tailored to the unique sales force and marketing automation needs of manufacturers.
Apr 22, 2008 - QAD acquired FullTilt Solutions’ product suite, including Perfect Product Suite, called master data management (MDM) for Internet-enabled commerce.
Jun 7, 2012 - QAD acquired DynaSys, a European provider of collaborative demand and supply chain planning software. (2019 - DynaSys renamed QAD DynaSys)
Dec 31, 2012 - QAD acquired CEBOS, a provider of quality management and management system standard software and services. (2019 - CEBOS renamed QAD CEBOS; 2020 - "QAD CEBOS" division name dropped and all solutions absorbed)
Aug 22, 2018 - QAD acquired PT Iris Sistem Inforindo (PT Iris), a distributor and system integrator for QAD software operating across South Asia, primarily in Indonesia.
Jan 5, 2021 - QAD acquired Allocation Network GmbH, a provider for strategic sourcing and supplier management, based in Munich, Germany.
Acquisition by Thoma Bravo
On June 28, 2021, QAD announced that it had entered into a definitive agreement to be acquired by Thoma Bravo, a private equity investment firm focused on the software and technology-enabled services sector, in an all-cash transaction with an equity value of approximately $2 billion. Under the terms of the agreement, QAD shareholders will receive $87.50 per share of Class A Common Stock or Class B Common Stock in cash.
Upon completion of the transaction, QAD will become a private company with the flexibility to continue investing in the development and deployment of enterprise resource planning (ERP) software and related enterprise software for manufacturing companies around the world. Anton Chilton will continue to lead QAD as CEO, and the company will maintain its headquarters in Santa Barbara, California.
The QAD Board of Directors formed a Special Committee composed entirely of independent directors to conduct a robust process and negotiate the transaction with the assistance of independent financial and legal advisors. Following the Special Committee’s unanimous recommendation, members of the QAD Board other than Mrs. Lopker, who recused herself, unanimously approved the merger agreement with Thoma Bravo, and recommend that QAD shareholders adopt and approve the merger agreement and the transaction.
The acquisition of QAD by Thoma Bravo was completed in early November 2021. Following the completion of the transaction, Mrs. Lopker intends to retain a significant ownership interest in the Company and will continue serving the QAD Board.
See also
Progress Software
References
Software companies based in California
Software companies established in 1979
Companies based in Santa Barbara, California
ERP software companies
American companies established in 1979
1979 establishments in California
1997 initial public offerings
Companies formerly listed on the Nasdaq
Software companies of the United States
2021 mergers and acquisitions
Private equity portfolio companies
Privately held companies based in California |
13309886 | https://en.wikipedia.org/wiki/Reversing%3A%20Secrets%20of%20Reverse%20Engineering | Reversing: Secrets of Reverse Engineering | Reversing: Secrets of Reverse Engineering is a textbook written by Eldad Eilam on the subject of reverse engineering software, mainly within a Microsoft Windows environment. It covers the use of debuggers and other low-level tools for working with binaries. Of particular interest is that it uses OllyDbg in examples, and is therefore one of the few practical, modern books on the subject that uses popular, real-world tools to facilitate learning. The book is designed for independent study and does not contain problem sets, but it is also used as a course book in some university classes.
The book covers several different aspects of reverse engineering, and demonstrates what can be accomplished:
How copy protection and DRM technologies can be defeated, and how they can be made stronger.
How malicious software such as worms can be analyzed and neutralized.
How to obfuscate code so that it becomes more difficult to reverse engineer.
The book also includes a detailed discussion of the legal aspects of reverse engineering, and examines some famous court cases and rulings that were related to reverse engineering.
Considering its relatively narrow subject matter, Reversing is a bestseller that has remained on Amazon.com's list of top 100 software books for several years, since its initial release.
Chapter outline
Part I: Reversing 101.
Chapter 1: Foundations.
Chapter 2: Low-Level Software.
Chapter 3: Windows Fundamentals.
Chapter 4: Reversing Tools.
Part II: Applied Reversing.
Chapter 5: Beyond the Documentation.
Chapter 6: Deciphering File Formats.
Chapter 7: Auditing Program Binaries.
Chapter 8: Reversing Malware.
Part III: Cracking.
Chapter 9: Piracy and Copy Protection.
Chapter 10: Antireversing Techniques.
Chapter 11: Breaking Protections.
Part IV: Beyond Disassembly.
Chapter 12: Reversing .NET.
Chapter 13: Decompilation.
Appendix A: Deciphering Code Structures.
Appendix B: Understanding Compiled Arithmetic.
Appendix C: Deciphering Program Data.
Editions
Reversing: Secrets of Reverse Engineering, English, 2005. 595pp.
Reversing: 逆向工程揭密, Simplified Chinese, 2007. 598pp.
References
Software engineering books |
10208102 | https://en.wikipedia.org/wiki/George%20Raveling | George Raveling | George Henry Raveling (born June 27, 1937) is an American former college basketball player and coach. He played at Villanova University, and was the head coach at Washington State University, the University of Iowa, and the University of Southern California.
Raveling has been Nike's global basketball sports marketing director since he retired from coaching in 1994. He has also worked as a color commentator for FOX Sports Net, and is a member of the Naismith Memorial Basketball Hall of Fame.
Early life
Born and raised in Washington, D.C., Raveling did not play basketball until his ninth grade year. He was enrolled at St. Michael's, a Catholic boarding school in Hoban Heights, Pennsylvania; it was founded as an orphanage in 1916 near Scranton and closed in 2010. His grandmother's employer helped him enroll. Raveling's father died when he was 9 and his mother was institutionalized when he was 13, so academics became among the most influential forces in his life.
College and early career
Raveling attended college at Villanova University near Philadelphia and played basketball for the Wildcats. An outstanding rebounder, he set school single game and season rebounding records in his time. Raveling was team captain in his senior season, featured on the cover of the 1960 media guide, and led the Wildcats to consecutive appearances in the National Invitation Tournament (NIT) in 1959 and 1960. The Philadelphia Warriors selected him in the eighth round (pick 7) of the 1960 NBA draft.
Raveling became an assistant coach at his alma mater Villanova, then moved to Maryland in 1969 on the staff of new head coach Lefty Driesell. He became the first African-American coach in the Atlantic Coast Conference.
March on Washington with Martin Luther King Jr., 1963
On August 28, 1963, as Martin Luther King Jr. waved goodbye to an audience of over 250,000 "March on Washington" participants, Raveling asked King if he could have the speech. King handed Raveling the original typewritten "I Have a Dream" pages. Raveling was on the podium with King at that moment, having volunteered to provide security. He kept the original, and had been offered more than three million dollars for the speech in 2013.
He declined the offer. In 2021, he gave it to Villanova University. It is intended to be used in a long-term "on loan" arrangement.
Head coaching career
Washington State (1972–1983)
Hired in Pullman in 1972, Raveling was the first African-American basketball coach in the Pacific-8 Conference (Pac-8, now Pac-12). He guided the Washington State Cougars for eleven seasons, with two NCAA tournament appearances. The first was in 1980 and marked the first time WSU was included in the NCAA bracket since the runner-up finish in 1941; the second was three years later in 1983. Raveling was one of the winningest coaches in Washington State basketball history, with seven winning seasons, including five straight from the 1975–76 campaign through the 1980 season.
While at WSU, Raveling was the West Regional coach at the 1979 U.S. Olympic Sports Festival, and an assistant coach for the U.S. Olympic Trials in 1980.
Among his outstanding players were James Donaldson, Craig Ehlo, Don Collins, Bryan Rison, and Steve Harriel, who all earned All-Pac-10 first team honors. Donaldson went on to play in the NBA for 14 years and was on the Western Conference team for the All-Star Game in 1988. Collins played in both the NBA and CBA after setting the WSU record for career steals and finishing third in scoring. Ehlo, a junior college transfer from Texas, was selected in the third round of the 1983 NBA draft by the Houston Rockets; he played fourteen seasons with four NBA teams, amassing respectable career totals of 7,492 points, 2,456 assists, and 3,139 rebounds.
Raveling was the UPI Pac-8 Coach of the Year in 1976, the conference's coach of the year in 1976 and 1983, and a national runner-up for AP coach of the year. He was honored by WSU with his induction into the Pac-12 Hall of Honor.
Iowa (1983–1986)
Raveling succeeded Lute Olson as head coach at the University of Iowa in April 1983, and guided the Hawkeyes to consecutive 20-win seasons and NCAA tournament berths in 1985 and 1986.
1984 Olympics, assistant coach
At the Olympics in 1984 in Los Angeles, he served as the assistant coach for the USA team, composed of collegians. Bob Knight was the head coach, and Steve Alford and Michael Jordan were guards on that team. Shooting 63.9 percent from the floor, the U.S. team captured the ninth Olympic title with a convincing 96–65 victory over Spain in the gold medal game.
During his three years at Iowa, Raveling is probably best known for his recruits and outstanding players, including B. J. Armstrong, Kevin Gamble, Ed Horton, Roy Marble, and Greg Stokes, all of whom went on to play in the NBA.
USC (1986–1994)
In March 1986, he returned to the Pac-10 as head coach for the University of Southern California (USC) in Los Angeles.
Hank Gathers and Bo Kimble were recruited to USC by head coach Stan Morrison and his top assistant, David Spencer. They were joined by high school All-American Tom Lewis and Rich Grande as the "Four Freshmen" star recruiting class. Following an 11–17 season, Morrison and Spencer were fired after the 1985–86 season, despite having won the Pac-10 the previous year. It was reported that the players would not remain unless certain conditions were met, including having a say in the next coaching staff. USC hired Raveling as the next head coach of the Trojans. Raveling gave the players a deadline to respond whether they would remain on the team. When they did not respond, he revoked the scholarships of Gathers, Kimble, and Lewis. Raveling's controversial statement was: "You can't let the Indians run the reservation. You've got to be strong, too. Sometimes you have to tell them that they have to exit." Kimble and Gathers transferred together from USC to Loyola Marymount. Lewis transferred to Pepperdine. Grande remained at USC.
During Raveling's career at USC, the Trojans advanced to the NCAA tournament in 1991 and 1992 and competed in the NIT in 1993 and 1994.
Raveling was named Kodak National Coach of the Year (1992), Basketball Weekly Coach of the Year (1992), Black Coaches Association Coach of the Year (1992) and CBS/Chevrolet National Coach of the Year (1994).
Raveling and Sonny Vaccaro had been close friends, to the point that Raveling was the best man at Vaccaro's second wedding, but the two had a falling out over the business of the summer high school basketball camps that Vaccaro ran.
Car accident and coaching retirement, 1994
On the morning of September 25, 1994, Raveling's Jeep was blindsided in a two-car collision in Los Angeles. He was seriously injured, suffering nine broken ribs and a fractured pelvis and clavicle, and was in intensive care due to bleeding in his chest cavity for two weeks. Citing the automobile accident and the planned lengthy rehabilitation, he retired as head coach of USC at the age of 57.
Post-coaching
Raveling has worked as the Director for International Basketball for Nike since his retirement from USC, and has authored two books on rebounding drills, War on the Boards and A Rebounder's Workshop. He has served as a color commentator for CBS Sports and FOX Sports Net, often drawing assignments for Pac-10 conference games.
Raveling has the original typewritten "I Have a Dream" speech given to him by Martin Luther King Jr.
On September 8, 2018, he was selected by former University of Maryland head basketball coach Lefty Driesell as one of Driesell's presenters upon his induction into the Naismith Hall of Fame.
Awards
In 2013, he received the John W. Bunn Lifetime Achievement Award by the Naismith Memorial Basketball Hall of Fame.
On November 21, 2013, he was a recipient of the Lapchick Award (in memory of Joe Lapchick, the St. John's basketball coach), together with Don Haskins and Theresa Grentz.
Raveling was inducted into the College Basketball Hall of Fame in 2013.
On February 14, 2015, it was announced that George Raveling would be inducted into the Naismith Memorial Basketball Hall of Fame, having been selected for direct election by the Contributor Direct Election Committee.
Head coaching record
References
External links
1937 births
Living people
African-American basketball coaches
American men's basketball players
Basketball coaches from Washington, D.C.
Basketball players from Washington, D.C.
College basketball announcers in the United States
College men's basketball head coaches in the United States
Iowa Hawkeyes men's basketball coaches
Maryland Terrapins men's basketball coaches
Naismith Memorial Basketball Hall of Fame inductees
National Collegiate Basketball Hall of Fame inductees
Philadelphia Warriors draft picks
USC Trojans men's basketball coaches
Villanova Wildcats men's basketball players
Washington State Cougars men's basketball coaches
21st-century African-American people
20th-century African-American sportspeople |
60054901 | https://en.wikipedia.org/wiki/Sarah%20Louisa%20Forten%20Purvis | Sarah Louisa Forten Purvis | Sarah Louisa Forten Purvis (1814–1883) was an American poet and abolitionist from Philadelphia, Pennsylvania. She co-founded the Philadelphia Female Anti-Slavery Society and contributed many poems to the anti-slavery newspaper The Liberator. She was an important figure in the history of abolitionism and feminism.
Biography
Purvis, née Forten, was born in 1814 in Philadelphia, Pennsylvania. She was one of the "Forten Sisters." Her mother was Charlotte Vandine Forten and her father was the well-known African-American abolitionist James Forten. Her sisters were Harriet Forten Purvis (1810–1875) and Margaretta Forten (1808–1875). The three sisters, along with their mother, were founders of the Philadelphia Female Anti-Slavery Society in 1833. Although it was not the first female anti-slavery society, it was particularly important because of the role it played in the origins of American feminism.
Sarah Louisa Forten Purvis was a poet. She is cited in some scholarship as having used the pen names "Ada" and "Magawisca", as well as her own name. There is some conflict surrounding the poetry published under the pen name "Ada", as it has been argued that certain poems with this pen name may have been inaccurately attributed to Forten Purvis. She is credited with writing many poems about the experience of slavery and womanhood. Some of Forten Purvis's best-known works include "An Appeal to Woman" and "The Grave of the Slave", both of which were published in the abolitionist newspaper The Liberator. "The Grave of the Slave" was subsequently set to music by Frank Johnson, and the song was often used as an anthem at antislavery gatherings, while "An Appeal to Woman" was used in the pamphlets for the Anti-Slavery Convention of New York in 1837.
In 1838 Sarah married Joseph Purvis with whom she had eight children, including William B. Purvis. Joseph Purvis was the brother of Robert Purvis, who was the husband of Sarah's sister Harriet.
She is said to have died in 1883, though some works about her life and poetry state that she died in 1857. This discrepancy may be related to the misattribution of some of her poems.
Education
Sarah Louisa Forten Purvis and her sisters received private educations and were members of the Female Literary Association, a sisterhood of Black women founded by Sarah Mapps Douglass, another woman from a prominent abolitionist family in Philadelphia. Sarah began her literary legacy through this organization, where she anonymously developed essays and poems.
Written work
Motherhood and daughterhood within the context of slavery are recurring subjects in Forten Purvis's poetry. According to Julie Winch, a historian at the University of Massachusetts, these perspectives come from a personal place and are informed by Forten Purvis's ancestry, status and intellectual background. Though Forten Purvis was never herself oppressed through the chattel slavery system, her poetry extensively depicted the anguish of being enslaved as a woman of African descent. The notion of cultural kinship was present within much of her poetry. Additionally, the marginalization and oppression exemplified within her poetry is shown in many cases to be compounded by its gendered nature. These poems, though primarily about the lived experiences of those within the slavery system, also work to show the lived experience of women as intersecting with their race. Examples of the experience of racism as informed by the experience of womanhood can be seen within "An Appeal to Women", "The Slave Girl's Address to her Mother", "A Mother's Grief", and "The Slave Girl's Farewell."
Feminist contributions
Forten Purvis's poetic contributions to feminist activism have been discussed within the academic world as an equally considerable contribution to intersectionality. For example, Forten Purvis's poem "An Appeal to Women" is examined through the lens of race and womanhood in Janet Gray's book Race and Time (2004). Similarly, Julie Winch discusses Forten Purvis's relationship to both womanhood and race. This poem, which was distributed and read aloud to the attendees of the antislavery convention for women in 1837, spoke primarily to the white women of the period. In particular, it urged them to join in solidarity with their African-American female counterparts as a sisterhood in the fight against slavery. Gray suggests that what makes this poem inherently intersectional in its feminism is Forten Purvis's identification of the plurality of being Black and being female in comparison to the lived experience of being a white woman. Additionally, the poem makes mention of the self-objectification of white women's "fairness" as synonymous with their social value, as opposed to the agency of Black women as something more than merely "fairness" (fairness in this case relating to complexion). Forten Purvis's poem conversely plays on white women's "fairness" as a "virtue", or, more contemporarily put, a mark of privilege, and further calls for white women to use their "virtue" for activism in defense of their Black sisters. It is suggested that Forten Purvis's poetry transforms the female listener into an agent of change.
Poetry
As can be noted in additional poetry from Forten Purvis, the dualistic nature of Blackness in relation to womanhood is a common theme. This intersectional dissemination of feminist ideals and of the perspectives and experiences of Black women through poetry cannot be investigated separately. Ira V. Brown additionally specifies that the women who acted within the Philadelphia Female Anti-Slavery Society, whatever form those actions took (in Forten Purvis's case, poetry), were contributors to what she called "The Cradle of Feminism", or in other words to its development.
Correspondence
On the topic of prejudice, Forten Purvis believed that all people, regardless of gender, had a responsibility to act as political catalysts in the abolition of slavery. This is evidenced by her letter to Angelina Grimké, written on April 15, 1837. It specified that men and women were to be equal contributors to the cause and that women, regardless of their politically oppressed condition at the time, must consider their "sisters" and act upon this consideration.
Sketches
Forten Purvis also made contributions to the imagery of the emblem of the female supplicant. Adapting this emblem according to their own devices, many women within America drew renditions of the emblem, Forten Purvis being one of them. As specified by Jean Fagan Yellin, Forten Purvis privately added her rendition of the emblem as a sketch in Elizabeth Smith's album.
Misattribution of some works
As identified, some of Forten Purvis's works may have been published under the pen names "Ada" or "Magawisca." According to some scholars, a Quaker abolitionist named Eliza Earle Hacker (1807–1846), from Rhode Island, had been the author of what many thought to be some of Forten Purvis's work, though there is little evidence as to which poems are not in fact Forten Purvis's. There are some possible distinctions: the fact that Forten Purvis's "Ada" signature always comes with a specifier as to the place where the poetry was written, while Hacker's "Ada" does not, offers a potential means of separating the authors' works. Regardless, many anti-slavery and abolition authors used pen names to protect their identity, and as a result it has become difficult to attribute certain works to certain individuals. For this reason, the chart only includes works in which the place of origin is specified as Philadelphia (Forten Purvis's home city).
Specifically, Ada's poem "Lines: Suggested on Reading 'An Appeal to Christian Women of the South' by Angelina Grimké," was most likely written by Hacker but often attributed to Forten and included in African-American writing anthologies.
References
1814 births
1883 deaths
African-American abolitionists
19th-century African-American activists
African-American women writers
African-American poets
Forten family
19th-century American women
19th-century African-American women
Feminism
Feminism and history
Poetry
19th-century African-American writers |
19424663 | https://en.wikipedia.org/wiki/UniPro%20protocol%20stack | UniPro protocol stack | In mobile-telephone technology, the UniPro protocol stack follows the architecture of the classical OSI Reference Model. In UniPro, the OSI Physical Layer is split into two sublayers: Layer 1 (the actual physical layer) and Layer 1.5 (the PHY Adapter layer) which abstracts from differences between alternative Layer 1 technologies. The actual physical layer is a separate specification as the various PHY options are reused in other MIPI Alliance specifications.
The UniPro specification itself covers Layers 1.5, 2, 3, 4 and the DME (Device Management Entity). The Application Layer (LA) is out of scope because different uses of UniPro will require different LA protocols. The Physical Layer (L1) is covered in separate MIPI specifications in order to allow the PHY to be reused by other (less generic) protocols if needed.
OSI Layers 5 (Session) and 6 (Presentation) are, where applicable, counted as part of the Application Layer.
Physical Layer (L1)
D-PHY
Versions 1.0 and 1.1 of UniPro use MIPI's D-PHY technology for the off-chip Physical Layer. This PHY allows inter-chip communication. Data rates of the D-PHY are variable, but are in the range of 500-1000 Mbit/s (lower speeds are supported, but at decreased power efficiency). The D-PHY was named after the Roman number for 500 ("D").
The D-PHY uses differential signaling to convey PHY symbols over micro-stripline wiring. A second differential signal pair is used to transmit the associated clock signal from the source to the destination. The D-PHY technology thus uses a total of 2 clock wires per direction plus 2 signal wires per lane and per direction. For example, a D-PHY might use 2 wires for the clock and 4 wires (2 lanes) for the data in the forward direction, but 2 wires for the clock and 6 wires (3 lanes) for the data in the reverse direction. Data traffic in the forward and reverse directions are totally independent at this level of the protocol stack.
In UniPro, the D-PHY is used in a mode (called "8b9b" encoding) which conveys 8-bit bytes as 9-bit symbols. The UniPro protocol uses this to represent special control symbols (outside the usual 0 to 255 values). The PHY itself uses this to represent certain special symbols that have meaning to the PHY itself (e.g. IDLE symbols). Note that the ratio 8:9 can cause some confusion when specifying the data rate of the D-PHY: a PHY implementation running with a 450 MHz clock frequency is often rated as a 900 Mbit/s PHY, while only 800 Mbit/s is then available for the UniPro stack.
The D-PHY also supports a Low-Power Data Transmission (LPDT) mode and various other low-power modes for use when no data needs to be sent.
M-PHY
Versions 1.4 and beyond of UniPro support both the D-PHY as well as M-PHY technology. The M-PHY technology is still in draft status, but supports high-speed data rates starting at about 1000 Mbit/s (the M-PHY was named after the Roman number for 1000). In addition to higher speeds, the M-PHY will use fewer signal wires because the clock signal is embedded with the data through the use of industry-standard 8b10b encoding. Again, a PHY capable of transmitting user data at 1000 Mbit/s is typically specified as being in 1250 Mbit/s mode due to the 8b10b encoding.
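The usable UniPro bandwidth follows directly from the PHY line rate and the symbol encoding overhead (8b9b on the D-PHY, 8b10b on the M-PHY). The C snippet below is only an illustrative back-of-the-envelope check of the figures quoted above; it is not part of any MIPI specification.

<syntaxhighlight lang="c">
#include <stdio.h>

/* Illustrative only: usable payload rate after PHY symbol encoding.
 * D-PHY as used by UniPro ("8b9b"): 8 payload bits per 9 line bits.
 * M-PHY (8b10b):                    8 payload bits per 10 line bits. */
static double usable_mbit_per_s(double line_rate_mbit_per_s,
                                int payload_bits, int line_bits)
{
    return line_rate_mbit_per_s * payload_bits / line_bits;
}

int main(void)
{
    /* A D-PHY rated at 900 Mbit/s (450 MHz clock) leaves 8/9 for UniPro. */
    printf("D-PHY 900 Mbit/s lane:  %.0f Mbit/s usable\n",
           usable_mbit_per_s(900.0, 8, 9));    /* prints 800 */

    /* An M-PHY rated at 1250 Mbit/s leaves 8/10 as user data. */
    printf("M-PHY 1250 Mbit/s lane: %.0f Mbit/s usable\n",
           usable_mbit_per_s(1250.0, 8, 10));  /* prints 1000 */
    return 0;
}
</syntaxhighlight>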
The D- and M-PHY are expected to co-exist for several years: the D-PHY is a less complex technology, while the M-PHY provides higher bandwidths with fewer signal wires and the C-PHY targets low-power operation.
Low speed modes and power savings
UniPro supports the power-efficient low-speed communication modes provided by both the D-PHY (10 Mbit/s) and the M-PHY (3 Mbit/s up to 500 Mbit/s). In these modes, power consumption roughly scales with the amount of data that is sent.
Furthermore, both PHY technologies provide additional power saving modes because they were optimized for use in battery-powered devices.
PHY Adapter Layer (L1.5)
Architecturally, the PHY Adapter layer serves to hide the differences between the different PHY options (D- and M-PHY). This abstraction thus mainly gives architectural flexibility. Abstracted PHY details include the various power states and employed symbol encoding schemes.
L1.5 symbols
L1.5 thus has its own (conceptual) symbol encoding consisting of 17-bit symbols. These 17-bit symbols never show up on the wires, because they are first converted by L1.5 to a pair of PHY symbols. The extra 17th control bit indicates special control symbols which are used by the protocol (L1.5 and L2) itself. In the figures, the control bits are shown in "L1.5 red" as a reminder that they are defined in- and used by protocol Layer 1.5.
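Because a 17-bit symbol does not fit a native machine word, an implementation would typically carry the 16 data bits and the control bit separately, or pack them into a wider integer. The sketch below is purely illustrative bookkeeping; the type and field names are assumptions and are not taken from the specification.

<syntaxhighlight lang="c">
#include <stdint.h>
#include <stdbool.h>

/* Illustrative representation of a UniPro L1.5 symbol: 16 data bits
 * plus one control bit that marks protocol control symbols. */
typedef struct {
    uint16_t data;     /* the 16 payload bits of the symbol          */
    bool     control;  /* 17th bit: 1 = control symbol, 0 = data     */
} l15_symbol;

/* Pack the conceptual 17-bit symbol into the low 17 bits of a word,
 * e.g. before handing it to a PHY adapter that converts it into a
 * pair of PHY symbols (that conversion is PHY-specific, not shown). */
static inline uint32_t l15_pack(l15_symbol s)
{
    return ((uint32_t)(s.control ? 1u : 0u) << 16) | s.data;
}

static inline l15_symbol l15_unpack(uint32_t w)
{
    l15_symbol s = { .data = (uint16_t)(w & 0xFFFFu),
                     .control = (w >> 16) & 1u };
    return s;
}
</syntaxhighlight>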
L1.5 multi-lane support
The main feature that L1.5 offers users is to allow the bandwidth of a UniPro link to be increased by using 2, 3 or 4 lanes when a single lane does not provide enough bandwidth. To the user, such a multi-lane link simply looks like a faster physical layer because the symbols are sent across 2, 3 or 4 lanes. Applications that require higher bandwidth in one direction but require less bandwidth in the opposite direction, can have different numbers of lanes per direction.
L1.5 lane discovery
Starting in UniPro v1.4, L1.5 automatically discovers the number of usable M-PHY lanes for each direction of the link. This involves a simple discovery protocol within L1.5 that is executed on initialization. The protocol transmits test data on each available outbound lane, and receives information back from the peer entity about which data on which lane actually made it to the other end of the link. The mechanism also supports transparent remapping of the lanes to give circuit board designers flexibility in how the lanes are physically wired.
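Conceptually, spreading a symbol stream over several lanes and restoring its order at the receiver can be pictured as round-robin striping, as in the illustrative sketch below; the actual distribution and lane-remapping rules defined by the specification are more involved.

<syntaxhighlight lang="c">
#include <stddef.h>
#include <stdint.h>

/* Illustrative round-robin striping of a symbol stream over 'nlanes'
 * transmit lanes (UniPro allows 1 to 4 lanes per direction). */
static void stripe_symbols(const uint32_t *symbols, size_t count,
                           uint32_t *lane_buf[], size_t lane_len[],
                           unsigned nlanes)
{
    for (unsigned l = 0; l < nlanes; l++)
        lane_len[l] = 0;
    for (size_t i = 0; i < count; i++) {
        unsigned lane = (unsigned)(i % nlanes);      /* round-robin */
        lane_buf[lane][lane_len[lane]++] = symbols[i];
    }
}

/* The receiver reverses the striping so that the original symbol
 * order is preserved, as guaranteed by L1.5. */
static void merge_symbols(uint32_t *symbols, size_t count,
                          uint32_t *const lane_buf[], unsigned nlanes)
{
    size_t taken[4] = {0};   /* at most 4 lanes per direction */
    for (size_t i = 0; i < count; i++) {
        unsigned lane = (unsigned)(i % nlanes);
        symbols[i] = lane_buf[lane][taken[lane]++];
    }
}
</syntaxhighlight>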
L1.5 link power management
Starting in UniPro v1.4, L1.5 has a built in protocol called PACP (PA Control Protocol) that allows L1.5 to communicate with its peer L1.5 entity at the other end of an M-PHY-based link. Its main usage is to provide a simple and reliable way for a controller at one end of the link to change the power modes of both the forward and reverse directions of the link. This means that a controller situated at one end of the link can change the power mode of both link directions in a single atomic operation. The intricate steps required for doing this in a fully reliable way are handled transparently within L1.5.
L1.5 peer parameters control
In addition to the L1.5 link power management the PACP is also used to access control and status parameters of the peer UniPro device.
L1.5 guarantees
The mechanisms in L1.5 guarantee the following to upper layer protocols:
after reset, each L1.5 transmitter will wait until the connected L1.5 receiver is known to be active (handled via a handshake)
if more than one lane is used, the ordering of the original symbol stream is preserved (despite usage of multiple lanes and freedom on how to interconnect these lanes)
power mode changes are executed reliably (even in the presence of bit errors)
Data Link Layer (L2)
The main task of UniPro's Data Link layer (L2) is to allow reliable communication between two adjacent nodes in the network - despite occasional bit errors at the Physical layer or potential link congestion if the receiver cannot absorb the data fast enough.
L2 data frames
L2 clusters 17-bit UniPro L1.5 symbols into packet-like data frames (the term packet is reserved for L3). These data frames start with a 17-bit start-of-frame control symbol, followed by up to 288 bytes of data (144 data symbols) and followed by an end-of-frame control symbol and a checksum.
Note that two or more of the 288 bytes are used by higher layers of the UniPro protocol. The maximum frame size of 288 payload bytes per frame was chosen to ensure that the entire protocol stack could easily transmit 256 bytes of application data in a single chunk. Payloads consisting of odd numbers of bytes are supported by padding the frame to an even number of bytes and inserting a corresponding flag in the trailer.
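A simplified, byte-level model of the payload handling described above might look as follows. This is an illustrative sketch only: the real frame is a sequence of 17-bit symbols with a bit layout and checksum defined by the specification, and the names and checksum used here are placeholders.

<syntaxhighlight lang="c">
#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <stdbool.h>

#define L2_MAX_PAYLOAD 288   /* bytes per data frame, per the text above */

/* Simplified model of an L2 data frame: payload, padding flag and a
 * placeholder checksum.  Start-of-frame and end-of-frame control
 * symbols are implied and not modelled here. */
typedef struct {
    uint8_t  payload[L2_MAX_PAYLOAD];
    size_t   length;      /* padded to an even number of bytes        */
    bool     pad_flag;    /* set when one padding byte was appended   */
    uint16_t checksum;    /* placeholder for the per-frame checksum   */
} l2_data_frame;

/* Placeholder checksum; the specification defines its own CRC. */
static uint16_t dummy_checksum(const uint8_t *p, size_t n)
{
    uint16_t c = 0;
    while (n--) c = (uint16_t)(c * 31u + *p++);
    return c;
}

/* Returns false if the payload does not fit into a single frame. */
static bool l2_build_frame(l2_data_frame *f, const uint8_t *data, size_t n)
{
    if (n > L2_MAX_PAYLOAD)
        return false;
    memcpy(f->payload, data, n);
    f->pad_flag = (n % 2) != 0;
    if (f->pad_flag)
        f->payload[n++] = 0;              /* pad to an even byte count */
    f->length   = n;
    f->checksum = dummy_checksum(f->payload, n);
    return true;
}
</syntaxhighlight>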
L2 control frames
In addition to data frames which contain user data, L2 also transmits and receives control frames. The control frames can be distinguished from data frames by three bits in the first symbol. There are two types of control frames:
One type ("AFC- Acknowledgement and L2 Flow Control", 3 symbols) serves to acknowledge successfully received data frames.
The other type ("NAC", 2 symbols) notifies the corresponding transmitter that an incorrect frame has been received.
Note that these L2 types of control frames are sent autonomously by L2.
L2 retransmission
High speed communication at low power levels can lead to occasional errors in the received data. The Data Link layer contains a protocol to automatically acknowledge correctly received data frames (using AFC control frames) and to actively signal errors that can be detected at L2 (using NAC control frames). The most likely cause of an error at L2 is that a data frame was corrupted at the electrical level (noise, EMI). This results in an incorrect data or control frame checksum at the receiver side and will lead to its automatic retransmission. Note that data frames are acknowledged (AFC) or negatively acknowledged (NAC). Corrupt control frames are detected by timers that monitor expected or required responses.
At a bandwidth of 1 Gbit/s, a bit-error rate of 10⁻¹² implies roughly one error every 1000 seconds, or once per 1000 Gbit transmitted. Layer 2 thus automatically corrects these errors at the cost of a marginal loss of bandwidth and of the buffer space needed in L2 to store copies of transmitted data frames for possible retransmission or "replay".
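The retransmission scheme requires the transmitter to hold copies of outstanding data frames until they are acknowledged. The sketch below illustrates that replay-buffer idea in a heavily simplified form; the buffer depth, sequence numbering and retransmission policy shown are assumptions rather than the mechanism defined in the specification.

<syntaxhighlight lang="c">
#include <stdbool.h>

#define REPLAY_SLOTS 8   /* illustrative replay-buffer depth */

/* Minimal sketch of the L2 "replay" idea: transmitted data frames are
 * kept until acknowledged and resent when the peer reports an error.
 * Frame content and sequence-number width are placeholders. */
typedef struct { int seq; bool in_use; /* ... frame data ... */ } replay_slot;

typedef struct {
    replay_slot slot[REPLAY_SLOTS];
    int next_seq;
} l2_replay_buffer;

/* Store a copy of an outgoing frame; fails if the buffer is full,
 * in which case the transmitter must wait for an acknowledgement. */
static bool replay_store(l2_replay_buffer *rb, int *seq_out)
{
    for (int i = 0; i < REPLAY_SLOTS; i++) {
        if (!rb->slot[i].in_use) {
            rb->slot[i].in_use = true;
            rb->slot[i].seq = *seq_out = rb->next_seq++;
            return true;
        }
    }
    return false;
}

/* An AFC control frame acknowledges frames up to and including 'seq';
 * their stored copies can be discarded. */
static void replay_on_afc(l2_replay_buffer *rb, int seq)
{
    for (int i = 0; i < REPLAY_SLOTS; i++)
        if (rb->slot[i].in_use && rb->slot[i].seq <= seq)
            rb->slot[i].in_use = false;
}

/* A NAC control frame indicates a corrupted frame: every frame still
 * held in the buffer is a candidate for retransmission ("replay"). */
static int replay_on_nac(const l2_replay_buffer *rb, int pending[REPLAY_SLOTS])
{
    int n = 0;
    for (int i = 0; i < REPLAY_SLOTS; i++)
        if (rb->slot[i].in_use)
            pending[n++] = rb->slot[i].seq;
    return n;
}
</syntaxhighlight>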
L2 flow control
Another feature of L2 is the ability for an L2 transmitter to know whether there is buffer space for the data frame at the receiving end. This again relies on L2 control frames (AFC) which allow a receiver to tell the peer's transmitter how much buffer space is available. This allows the receiver to pause the transmitter if needed, thus avoiding receive buffer overflow. Control frames are unaffected by L2 flow control: they can be sent at any time and the L2 receiver is expected to process these at the speed at which they arrive.
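The effect of this mechanism resembles credit-based flow control: the receiver advertises free buffer space and the transmitter only sends when enough space remains. The sketch below is an illustrative simplification; the unit and encoding of the advertised space carried in AFC frames are defined by the specification, not by this code.

<syntaxhighlight lang="c">
#include <stdbool.h>
#include <stddef.h>

/* Illustrative credit accounting for L2 flow control.  Treating the
 * advertised buffer space as a byte count is a simplification. */
typedef struct {
    size_t credits;           /* space the peer has advertised        */
} l2_flow_state;

static void on_afc_credit_update(l2_flow_state *fs, size_t advertised)
{
    fs->credits = advertised; /* peer reports its current free space  */
}

static bool can_send_frame(const l2_flow_state *fs, size_t frame_bytes)
{
    return fs->credits >= frame_bytes;
}

static void on_frame_sent(l2_flow_state *fs, size_t frame_bytes)
{
    fs->credits -= frame_bytes;  /* caller checked can_send_frame()   */
}
</syntaxhighlight>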
L2 Traffic Classes and arbitration
UniPro currently supports two priority levels for data frames called Traffic Class 0 (TC0) and Traffic Class 1 (TC1). TC1 has higher priority than TC0. This means that if an L2 transmitter has a mix of TC0 and TC1 data frames to send, the TC1 data frames will be sent first. Assuming that most data traffic uses TC0 and that the network has congestion, this helps ensure that TC1 data frames arrive at their destination faster than TC0 data frames (analogous to emergency vehicles and normal road traffic). Furthermore, L2 can even interrupt or "preempt" an outgoing TC0 data frame to transmit a TC1 data frame. Additional arbitration rules apply to control frames: in essence these receive higher priority than data frames because they are small and essential for keeping traffic flowing.
In a multi-hop network, the arbitration is done within every L2 transmitter at every hop. The Traffic Class assigned to data does not normally change as data progresses through the network. It is up to the applications to decide how to use the priority system.
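An L2 transmitter's arbitration decision can be summarised as a simple priority choice: control frames first, then TC1, then TC0, with TC1 allowed to preempt an outgoing TC0 frame. The sketch below illustrates only that decision; the preemption and resumption mechanics are omitted, and the names are assumptions.

<syntaxhighlight lang="c">
#include <stdbool.h>

/* Illustrative arbitration between the two UniPro Traffic Classes and
 * the L2 control frames, mirroring the priority order described above. */
enum l2_tx_choice { TX_NOTHING, TX_CONTROL_FRAME, TX_TC1_FRAME, TX_TC0_FRAME };

typedef struct {
    bool control_pending;  /* AFC or NAC waiting to be sent            */
    bool tc1_pending;      /* high-priority data frame queued          */
    bool tc0_pending;      /* normal-priority data frame queued        */
    bool tc0_in_flight;    /* a TC0 frame is currently being sent      */
} l2_tx_queues;

static enum l2_tx_choice l2_arbitrate(const l2_tx_queues *q, bool *preempt_tc0)
{
    *preempt_tc0 = false;
    if (q->control_pending)
        return TX_CONTROL_FRAME;
    if (q->tc1_pending) {
        /* A TC1 frame may interrupt ("preempt") a TC0 frame that is
         * already on the wire; resumption mechanics are omitted. */
        *preempt_tc0 = q->tc0_in_flight;
        return TX_TC1_FRAME;
    }
    if (q->tc0_pending)
        return TX_TC0_FRAME;
    return TX_NOTHING;
}
</syntaxhighlight>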
L2 single Traffic Class option
In UniPro version 1.1, an option was introduced to allow simple endpoint devices to implement only one of the two Traffic Classes if they choose to. This can be useful when device designers are more concerned with implementation cost than with control over frame arbitration. The connected L2 peer device detects such devices during the link initialization phase and can avoid using the missing Traffic Class.
L2 guarantees
The various L2 mechanisms provide a number of guarantees to higher layer protocols:
a received data frame will contain the correct payload (checked using a checksum)
a transmitted data frame will reach the peer's receiver (after potential retransmissions)
there will be room to accommodate received data frames (L2 flow control)
the content of a data frame will only be passed once to the upper protocol layer (duplicate data frames are discarded)
data frames within the same Traffic Class will be received and passed to the upper protocol layers in order
Thus individual links autonomously provide reliable data transfer. This is different from, for example, the widely used TCP protocol that detects errors at the endpoints and relies on end-to-end retransmission in case of corrupted or missing data.
Network Layer (L3)
The network layer is intended to route packets through the network toward their destination. Switches within a multi-hop network use this address to decide in which direction to route individual packets. To enable this, a header containing a 7-bit destination address is added by L3 to all L2 data frames. In the example shown in the figure, this allows Device #3 to not only communicate with Device #1, #2 and #5, but also enables it to communicate with Devices #4 and #6.
Version 1.4 of the UniPro spec does not specify the details of a switch, but does specify enough to allow a device to work in a future networked environment.
L3 addressing
Although the role of the L3 address is the same as the IP address in packets on the Internet, a UniPro DeviceID address is only 7 bits long. A network can thus have up to 128 different UniPro devices. Note that, as far as UniPro is concerned, all UniPro devices are created equal: unlike PCI Express or USB, any device can take the initiative to communicate with any other device. This makes UniPro a true network rather than a bus with one master.
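Although UniPro v1.4 leaves switch behaviour unspecified, routing on a 7-bit DeviceID can be pictured as a 128-entry forwarding table, as in the purely conceptual sketch below.

<syntaxhighlight lang="c">
#include <stdint.h>

#define UNIPRO_MAX_DEVICES 128   /* 7-bit DeviceID: 0..127 */

/* Purely conceptual forwarding table for a UniPro switch.  Switch
 * behaviour is not defined by UniPro v1.4, so this only illustrates
 * what routing on a 7-bit DeviceID could look like. */
typedef struct {
    int8_t out_port[UNIPRO_MAX_DEVICES];  /* -1 = unknown destination */
} switch_table;

/* Returns the output port for a packet, or -1 if the DeviceID is not
 * known, in which case the packet would be discarded (compare the
 * L3 guarantees listed below for non-existent destinations). */
static int route_packet(const switch_table *t, uint8_t device_id)
{
    if (device_id >= UNIPRO_MAX_DEVICES)
        return -1;                         /* invalid 7-bit address   */
    return t->out_port[device_id];
}
</syntaxhighlight>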
L3 packets
The diagram shows an example of an L3 packet which starts at the first L2 payload byte of an L2 frame and ends at the last L2 payload byte of an L2 frame. For simplicity and efficiency, only a single L3 packet can be carried by one L2 frame. This implies that, in UniPro, the concepts of an L2 Frame, an L3 Packet and an L4 Segment (see below) are so closely aligned that they are almost synonyms. The distinction (and "coloring") is however still made to ensure that the specification can be described in a strictly layered fashion.
L3 short-header packet structure
UniPro short-header packets use a single header byte for L3 information. It includes the 7-bit L3 destination address. The remaining bit indicates the short-header packet format. For short-header packets, the L3 source address is not included in the header because it is assumed that the two communicating devices have exchanged such information beforehand (connection-oriented communication).
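Packing and unpacking such a header byte takes only a couple of bit operations. In the sketch below the format bit is assumed to sit in the most significant bit, with the value 0 assumed to mean "short header"; the actual bit positions and encodings are defined by the UniPro specification and may differ.

```python
def pack_l3_short_header(dest_device_id: int) -> int:
    """Build an illustrative L3 short-header byte: one format bit plus a
    7-bit destination DeviceID (0..127).  Bit layout is assumed, not normative."""
    if not 0 <= dest_device_id <= 127:
        raise ValueError("DeviceID must fit in 7 bits")
    SHORT_HEADER_BIT = 0  # assumed encoding of the format bit
    return (SHORT_HEADER_BIT << 7) | dest_device_id

def unpack_l3_short_header(header_byte: int) -> tuple[bool, int]:
    """Return (is_short_header, dest_device_id) under the same assumptions."""
    return (header_byte >> 7) == 0, header_byte & 0x7F
```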
L3 long-header packets
Long-header packets are intended to be introduced in a future version of the UniPro specification, so their format is undefined (except for one bit) in the current UniPro v1.4 specification. However, UniPro v1.4 defines a hook that allows long-header packets to be received or transmitted by a UniPro v1.4 conformant device, assuming the latter can be upgraded via software. The "long-header trap" mechanism of UniPro v1.4 simply passes the payload of a received L2 data frame (that is, the L3 packet with its header and payload) to the L3 extension (e.g. software) for processing. The mechanism can also accept L2 frame payload from the L3 extension for transmission. This allows UniPro v1.4 devices to be upgraded to support protocols that require the as-yet undefined long-header packets.
L3 guarantees
Although details of switches are still out of scope in the UniPro v1.4 spec, L3 allows UniPro v1.0/v1.1/v1.4 devices to serve as endpoints on a network. It therefore guarantees a number of properties to higher layer protocols:
that packets will be delivered to the addressed destination device (and packets addressed to non-existent devices are discarded)
that payload sent by an L3 source to a single L3 destination as a series of one or more short-header packets within a single Traffic Class will arrive in order and with the correct payload (reliability)
Transport Layer (L4)
The features of UniPro's Transport layer are not especially complex, because basic communication services have already been provided by the lower protocol layers. L4 is essentially about enabling multiple devices on the network, or even multiple clients within those devices, to share the network in a controlled manner. L4's features are roughly comparable to features found in computer networking (e.g. TCP and UDP) but are less commonly encountered in local buses like PCI Express, USB or on-chip buses.
UniPro's L4 also has special significance because it is the top protocol layer in the UniPro specification. Applications are required to use L4's top interface to interact with UniPro and are not expected to bypass L4 to directly access lower layers. Note that the interface at the top of L4 provided for transmitting or receiving data is defined at the behavioral or functional level. This high level of abstraction avoids restricting implementation options. Thus, although the specification contains an annex with a signal-level interface as a non-normative example, a UniPro implementation is not required to have any specific set of hardware signals or software function calls at its topmost interface.
L4 features
UniPro's Transport layer can be seen as providing an extra level of addressing within a UniPro device. This
allows a UniPro device to communicate with another UniPro device using multiple logical data streams (example: sending audio and video and control information separately).
allows a UniPro device to simultaneously connect to multiple other devices (this requires switches as supported in a future version of UniPro) using multiple logical data streams.
provides mechanisms to reduce the risk of congestion on the network.
provides a mechanism to structure a stream of bytes as a stream of messages.
These points are explained in more detail below.
L4 segments
An L4 segment is essentially the payload of an L3 packet. The L4 header, in its short form, consists of just a single byte.
The main field in the short L4 header is a 5-bit "CPort" identifier which can be seen as a sub-address within a UniPro device and is somewhat analogous to the port numbers used in TCP or UDP. Thus every segment (with a short header) is addressed to a specific CPort of specific UniPro device.
A single bit in the segment header also allows segments to be defined with long segment headers. UniPro v1.4 does not define the structure of such segment formats (except for this single bit). Long header segments may be generated via the long header trap described in the L3 section.
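By analogy with the L3 header, the short L4 header can be modelled as a few bit fields. The layout used below (a format bit, the 5-bit CPort identifier and the End-of-Message flag discussed under "L4 and Messages", with the remaining bit left at zero) is an assumption chosen purely for illustration; the normative field positions are defined by the UniPro specification.

```python
def pack_l4_short_header(cport: int, end_of_message: bool = False) -> int:
    """Build an illustrative L4 short-header byte: a format bit, a 5-bit
    destination CPort (0..31) and an End-of-Message flag; other bits are 0."""
    if not 0 <= cport <= 31:
        raise ValueError("CPort identifier must fit in 5 bits")
    SHORT_SEGMENT_BIT = 0            # assumed encoding: 0 = short header
    eom_bit = 1 if end_of_message else 0
    return (SHORT_SEGMENT_BIT << 7) | (eom_bit << 5) | cport

def unpack_l4_short_header(header_byte: int) -> tuple[int, bool]:
    """Return (cport, end_of_message) under the same assumed layout."""
    return header_byte & 0x1F, bool((header_byte >> 5) & 1)
```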
L4 connections
UniPro calls a pair of CPorts that communicate with each other a Connection (hence the C in CPort). Setting up a connection means that one CPort has been initialized to create segments which are addressed to a specific L4 CPort of a specific L3 DeviceID using a particular L2 Traffic Class. Because UniPro connections are bidirectional, the destination CPort is also configured to allow data to be sent back to the source CPort.
In UniPro 1.0/1.1 connection setup is implementation specific.
In UniPro v1.4 connection setup is assumed to be relatively static: the parameters of the paired CPorts are configured by setting the corresponding connection Attributes in the local and peer devices using the DME. This will be supplemented by a dynamic connection management protocol in a future version of UniPro.
L4 flow control
CPorts also contain state variables that track how much buffer space the peer (connected) CPort has. This prevents a CPort from sending segments to a CPort that has insufficient buffer space to hold the data, which would stall data traffic. Unless resolved quickly, such a traffic jam at the destination can grow into network-wide gridlock. This is highly undesirable because it can greatly degrade network performance for all users or, worse, lead to deadlock. The described L4 mechanism is known as end-to-end flow control (E2E FC) because it involves the endpoints of a connection.
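The buffer-tracking idea can be pictured as a simple credit scheme: the transmitting CPort keeps a count of how much buffer space the peer CPort has advertised and holds back segments once that count is exhausted, until the peer returns credits. The sketch below illustrates only this general principle; the names, the byte-level granularity and the credit-return call are assumptions, not the normative UniPro E2E FC protocol.

```python
class CreditedCPort:
    """Toy end-to-end flow control: transmit only while peer credits remain."""

    def __init__(self, initial_peer_credits: int):
        self.peer_credits = initial_peer_credits  # buffer space advertised by the peer

    def try_send(self, segment_payload: bytes) -> bool:
        """Send one segment if the peer can buffer it; otherwise hold it back."""
        if len(segment_payload) > self.peer_credits:
            return False                           # would overflow the peer: wait
        self.peer_credits -= len(segment_payload)  # hand the segment down to L3/L2 here
        return True

    def on_credit_update(self, freed_bytes: int):
        """Peer reports that it freed buffer space (e.g. data was consumed)."""
        self.peer_credits += freed_bytes
```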
L4 flow control versus L2 flow control
L4 flow control is complementary to L2 flow control. Both work by having the transmitter pause until it knows there is sufficient buffer space at the receiver. But L4 flow control works between a pair of CPorts (potentially multiple hops apart) and aims to isolate connections from one another ("virtual wire" analogy). In contrast, L2 flow control is per-hop and avoids basic loss of data due to lack of receiver buffer space.
L4 flow control applicability
E2E FC is only possible for connection-oriented communication, which is at present the only mode UniPro's L4 supports. E2E FC is enabled by default but can be disabled; this is not generally recommended.
L4 safety net
UniPro provides "safety net" mechanisms that mandate that a CPort absorbs all data sent to it without stalling. If a stall is detected anyway, the endpoint discards the incoming data arriving at that CPort in order to maintain data flow on the network. This can be seen as a form of graceful degradation at the system level: if one connection on the network cannot keep up with the speed of the received data, other devices and other connections are unaffected.
L4 and Messages
UniPro L4 allows a connection between a pair of CPorts to convey a stream of so-called messages (each consisting of a series of bytes) rather than a single stream of bytes. Message boundaries are triggered by the application-level protocol using UniPro and are signaled via a bit in the segment header. This End-of-Message bit indicates that the last byte in the L4 segment is the last byte of the application-level message.
UniPro needs to be told by the application where or when to insert message boundaries into the byte stream: the boundaries have no special meaning for UniPro itself and are provided as a service to build higher-layer protocols on top of UniPro. Messages can be used to indicate (e.g. via an interrupt) to the application that a unit of data is complete and can thus be processed. Messages can also be useful as a robust and efficient mechanism to implement resynchronization points in some applications.
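On the receiving side, an application can use the End-of-Message flag to turn the byte stream back into discrete messages, roughly as in the sketch below; the segment representation is again an assumption made only for illustration.

```python
class MessageAssembler:
    """Collect the segment payloads of one connection into complete messages,
    using the End-of-Message flag carried in each segment header."""

    def __init__(self):
        self.partial = bytearray()   # bytes of the message currently being assembled
        self.messages = []           # completed messages, ready for the application

    def on_segment(self, payload: bytes, end_of_message: bool):
        self.partial.extend(payload)
        if end_of_message:
            self.messages.append(bytes(self.partial))
            self.partial.clear()
```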
UniPro v1.4 introduces the notion of a message fragment, a fragment being a portion of a message passed between the application and the CPort. This option can be useful when specifying applications on top of UniPro that need to interrupt message creation based on information from the UniPro stack, e.g. incoming messages or backpressure.
L4 guarantees
The mechanisms in L4 provide a number of guarantees to upper layer protocols:
A CPort cannot stall, in the sense that it will always continue to accept data as fast as the link or network can deliver the data.
If an application bound to a CPort of a connection stalls and thus fails (for brief or longer periods) to absorb data, other connections to the same or different devices are unaffected.
A stream of data sent from one CPort to another will always arrive intact, in order, and with the correct message boundary information if the CPort is able to keep up with the incoming data stream.
In case the CPort cannot keep up with the incoming data stream, one or more messages may be corrupted (due to missing data) and the receiver is notified about this error condition.
It is safe for an application-level protocol to wait for a peer's response (e.g. an answer or acknowledgement) to a sent L4 message (e.g. a question or command). But it is unsafe for an application-level protocol to await a peer's response to a sent partial message.
The content of received short header packets/segments will always be correct. Although delivery at the long-header trap interface is not guaranteed, a future protocol extension plans to make the delivery of such packets reliable. This protocol extension could be implemented in software on top of the long-header trap.
Device Management Entity (DME)
The DME (Device Management Entity) controls the layers in the UniPro stack. It provides access to control and status parameters in all layers, manages the power mode transitions of the Link and handles the boot-up, hibernate and reset of the stack. Furthermore, it provides means to control the peer UniPro stack on the Link.
References
Embedded systems
Network protocols
UniPro |
763499 | https://en.wikipedia.org/wiki/National%20Institute%20of%20Technology%20Calicut | National Institute of Technology Calicut | National Institute of Technology Calicut (NIT Calicut or NITC), formerly Regional Engineering College Calicut, is a public technical university and an institute of national importance governed by the NIT Act passed by the Parliament of India. The campus is situated north east of Kozhikode, on the Kozhikode–Mukkam Road. It was established in 1961 and was known as Calicut Regional Engineering College (CREC) until 2002. It is one of the National Institutes of Technology campuses established by the Government of India for imparting high standard technical education to students from all over the country. NIT Calicut hosts a supercomputer on its campus, and has a dedicated nanotechnology department.
History
Initial years
National Institute of Technology, Calicut was set up in 1961 as Regional Engineering College Calicut (CREC), the ninth of its kind and the first one to be established during the Third Five-Year Plan period. Until the formation of Calicut University in 1963, the institute was affiliated with Kerala University. It was largely due to the efforts of Pattom Thanu Pillai, then Chief Minister of Kerala, that the institute came into being. Prof. S. Rajaraman, the first principal of Government Engineering College, Thrissur, was appointed as the special officer in 1961 to organise the activities of the college until M. V. Kesava Rao took charge as its first principal. Classes were initially held at the Government Polytechnic at West Hill, before the college moved to its present campus in 1963. The college started with an annual intake of 125 students for the undergraduate courses.
Expansion
The intake for the undergraduate courses was increased to 250 in 1966, 150 for the first year and 100 for the preparatory course. The annual intake was reduced from 250 to 200 from the year 1968–69 on account of industrial recession.
After Prof S. Unnikrishnan Pillai took charge as principal in 1983, the Training and Placement Department was started to organise campus recruitments for students. The college moved into the area of information technology in 1984 with the commissioning of multi-user PSI Omni system and HCL workhorse PCs. In 1987 the college celebrated 25 years of its existence, and postgraduate courses were started. The CEDTI was established on the campus the following year.
In 1990 Shankar Dayal Sharma inaugurated the Architecture Department Block and construction of a computer centre was completed. In 1996, the institute website (the first in Kerala) was launched. The Indian Institute of Management Calicut functioned from the NIT campus in its first few years of existence before moving to its new campus in Kunnamangalam in 2003.
The Ministry of Human Resource Development, Government of India, accorded NIT status to REC Calicut in June 2002 granting it academic and administrative autonomy. It was a lead institute under the World Bank-funded Technical Education Quality Improvement Program (TEQIP) which began in 2002. In 2003, students were first admitted to the flagship undergraduate B.Tech through the All India Engineering Entrance Exam. With the passing of the National Institutes of Technology Act in May 2007, NIT Calicut was declared an Institute of National Importance. The National Institutes of Technology Act is the second legislation for technical education institutions after the Indian Institutes of Technology Act of 1961. In 2007 NIT Calicut raised its annual intake for its undergraduate program to 570. The annual intake for undergraduate program was increased to 1049 by 2011.
Campus
Hostels
NITC is a fully residential institution with hostels on the campus to accommodate students; around 4,500 students live in the NITC hostels. The men's hostels are named A, B, C, D, E, F, G, PG I, PG II and IH, along with the newly formed Mega Hostel and MBA Hostel. The four ladies' hostels (LH), A, B, C and the Mega Ladies Hostel, have triple rooms.
A and B hostels accommodate 1st year B.Tech students. II year B.Tech students are accommodated in the C and Mega Hostels. III year B.Tech students are accommodated in C and G hostels. Final year B.Tech students are accommodated in D, E, F, G and PG-II hostels. M.Tech. and MCA students reside in apartments.
The older men's hostels are close to the academic area, while the IH, Mega Hostels, ladies' hostels and Professor's Apartments are in the residential campus. A mini-canteen is available in the hostel premises.
Students are permitted to use their own computers in their rooms. All hostels apart from A and B are well connected through a 100 Mbit/s LAN network to the Campus Networking Center through which internet connectivity is provided for free. Each hostel contains a common room with cable TV, daily newspaper and indoor games facilities.
Each hostel has its own mess and students are allowed to join the mess of any hostel, with the exception of A and B which are exclusive for first year students. The type of food served in the hostel messes is as follows:
Cosmopolitan: A, B, C (Kerala - vegetarian), D, E, PG I, IH (Andhra mess)
Non-vegetarian: F, G & PG-II (North & South mixed)
Two cosmopolitan messes are available in the ladies hostel premises. Other facilities like mini-canteen, indoor shuttle court, gymnasium and an extension of the Co-operative Society store are available in the ladies' hostel.
Sports
NITC has a gymnasium, swimming pool, an open-air theatre, an auditorium and facilities for outdoor sports like tennis, football, volleyball, badminton, roller skating, hockey and basketball. It also has a cricket ground where Ranji Trophy matches have been played.
Central Computer Center
The Central Computer Center is a central computing facility that caters to the computing requirements of the whole institute community. The centre operates round the clock on all working days except Republic Day, Independence Day, Thiruvonam, Vijayadashami, Gandhi Jayanthi, Bakrid and Christmas.
The centre is equipped with three IBM X-series servers, one Dell PowerEdge 6600 quad-processor Xeon server, six Dell PowerEdge 2600 dual-processor servers and one Sun Fire V210 server. The desktops and thin clients are connected to the servers through gigabit switches and CAT6 UTP cables. The centre is connected to the campus networking centre over a 32 Mbit/s backbone through a Nortel L3 switch, and in turn to the internet.
Central Library
NITC's Central Library, with more than 100,000 books, is one of the largest technical libraries in India. It subscribes to more than 200 print journals. The institute has a digital library, Nalanda (Network of Automated Library and Archives), which houses online resources. Users of the institute and networked institutions can access around 17,000 journals, proceedings, databases, electronic theses, dissertations and online courses at Nalanda. It is part of the Indest consortium, which networks the libraries at technical institutions in India.
NIT Calicut's supercomputer, Purna (Parallel Universal Remote Numerical Analyser), is accessible from anywhere in the campus and is provided for the use of all students and faculty members. PURNA has a peak speed of 1.5 teraflops.
Technology Business Incubator
The Technology Business Incubator (TBI) at NIT Calicut was set up with the help of the Department of Science and Technology, Government of India, and the National Science and Technology Entrepreneurship Development Board (NSTEDB). Its objective is to help the development of start-up ventures in electronics and IT. TBI provides workspace with shared office facilities, with an emphasis on the business and professional services necessary for nurturing and supporting the early-stage growth of technology-based enterprises.
Organisation and administration
Governance
Under the constitution of the National Institutes of Technology Act 2007, the President of India is the Visitor to the institute. The authorities of the institute are Board of Governors and the Senate. The Board is headed by the chairman, who is appointed by the Visitor. The Director, who is the secretary of the Board, looks after the day-to-day running of the institute. The Board of Governors has nominees of the Central Government, the State Government, the NIT Council and the Institute Senate.
Gajjala Yoganand is the chairman of the Board of Governors. Prasad Krishna was appointed the director of the institute in 2021.
Academic departments
The institute comprises eleven academic departments, eight centres and three schools. The departments include Architecture and Planning, Chemistry, Physics, Mathematics and Physical Education, as well as six engineering departments: Chemical Engineering, Civil Engineering, Computer Science & Engineering, Electrical Engineering, Electronics & Communication Engineering and Mechanical Engineering. The eight centres are the Campus Networking Centre (CNC), Centre for Biomechanics, Advanced Manufacturing Centre, Central Computer Centre, Centre for Value Education, Sophisticated Instruments Centre, Centre for Transportation Research and Centre for Scanning Microscopy. The three schools are the School of Biotechnology, School of Management Studies and School of Materials Science and Engineering.
School of Management Studies
The School of Management Studies (SOMS) at NIT Calicut offers a two-year, full-time residential Master of Business Administration (MBA) program for graduates in any discipline from any recognised university or institute, with specialisations in Finance, Marketing, Human Resource Management, Operations, and Systems. Admission to the MBA program is based on CAT score, performance in group discussion, and personal interviews. In addition to the two-year MBA program, NITC-SOMS offers research programs leading to the award of Ph.D. degrees in streams of management such as General Management, Finance and Economics, Human Resource Management, and Marketing. The program was initiated in 2008 and the first batch of students enrolled in 2009.
Academics
Courses
Undergraduate courses offered by NITC include the Bachelor of Technology (B.Tech.) in various engineering fields and the Bachelor of Architecture (B.Arch.). These are paralleled by postgraduate courses offering the Master of Technology (M.Tech.). In addition, NITC offers an applied computer science course granting the Master of Computer Applications (M.C.A.), a business course granting the Master of Business Administration (M.B.A.) and a two-year Master of Science (M.Sc.) in the science departments. Ph.D. programmes are available in all engineering and science disciplines and in management.
Admissions
Students are admitted to the undergraduate courses through the Joint Entrance Examination Main (JEE Main) conducted by the National Testing Agency (NTA). Around 16 lakh students wrote this test in 2012 to gain admission to one of the 20 NITs, making it one of the largest such examinations in the world. Admission to some other autonomous national-level technical institutes (deemed universities) is also through JEE Main. Students who have studied outside India for a minimum period of two years may seek admission through DASA (Direct Admission for Students Abroad), which was earlier managed by EdCil (Educational Consultants India Limited) and is currently managed by the National Institute of Technology Hazaratbal, Srinagar. DASA admissions are based on 12th Standard Board Examination marks (or equivalent higher secondary examination marks/grades) and SAT-II (Scholastic Aptitude Test II) scores in Physics, Chemistry, and Mathematics conducted by the College Board. A similar admission provision exists for foreign nationals.
Admission to the graduate M.Tech and Ph.D. courses are primarily based on scores in the GATE exam, conducted by the IITs. Admission to the MCA program is done through the NIMCET conducted by the NITs. The first NIMCET in 2006 was conducted by NIT Calicut. Faculty from other institutes work as research scholars in NITC under the Quality Improvement Programme (QIP). It is the national coordination center for the QIP of polytechnic institutes.
Admission to the MBA course is based on the Common Admission Test (CAT) score. Admission details are published on the institute website and in major dailies in India. Applicants are shortlisted based on the CAT score, performance in the qualifying degree (B.Tech/BE) and work experience. The final selection is based on the performance of shortlisted candidates in a group discussion (GD) and personal interview (PI). The GD and PI are normally conducted in major cities across India.
Rankings
NIT Calicut was ranked 23rd in the engineering stream and 3rd in the architecture stream in India by the National Institutional Ranking Framework (NIRF) in 2020. It was ranked 23rd among engineering colleges in India in 2019 by The Week. NIT Calicut secured 9th rank in the ARIIA 21 rankings in the category of centrally funded technical institutes (CFTIs), central universities and institutes of national importance, and was placed first among all 31 NITs in the country.
International Liaison Office
The institute has set up an International Liaison Office to follow up on the MoUs (Memoranda of Understanding) signed between NITC and other institutions around the world. It also guides current students from abroad and those interested in joining the institute. In 2011, an MoU was signed between NITC and Auburn University (USA) for research in Photonics.
An MoU was also signed with NITK Suratkal for academic and research collaborations.
NITC is also the mentoring institute of NIT Sikkim.
International conferences and symposiums
For two consecutive years, 2010 and 2011, NIT Calicut organised the Indo-US Symposium on Biocomputing. The symposium was organised jointly by the Department of Computer Science and Engineering and the School of Biotechnology and was held at the Taj Gateway in Calicut. It was jointly funded by the Department of Bio-Technology (India) and the National Science Foundation (USA). The symposium consisted of a series of talks by leading researchers from India and the US in the area of biocomputing.
Embarking on the novel research domain of light-matter interaction and fostering interaction amongst researchers in the field of light, the Department of Physics at NIT Calicut hosted an International Conference on Light, Optics '14. The conference, held during 19–21 March 2014, featured research work from numerous institutes in India and abroad. Gold medals were awarded to the best papers, while some selected papers were reviewed and published in the American Institute of Physics (AIP) Proceedings.
NIT Calicut hosted TEDx NIT Calicut on 14 January 2012 and 12 February 2017. These were independently organised TED events, consisting of talks, videos and interactive sessions with high-achieving entrepreneurs, innovators, performers, and industry leaders.
Student life
NIT Calicut holds two major annual events, the techno-management festival Tathva, held in the monsoon semester, and the cultural festival Ragam, which is held in the winter semester.
Tathva
Tathva is the annual techno-management fest organised by the Institute, and one of the largest technical festivals of South India. It is usually held during the month of September and lasts for four days. It has been held every year since its inception in 2001. Aimed at inspiring innovation and technical interest among students and the public, Tathva has played host to lectures, seminars, workshops by companies like ParaMek Technologies, competitions, paper presentations, exhibitions, quizzes, model displays, and robotics events. A.P.J. Abdul Kalam, G. Madhavan Nair, Harold Kroto, Johannes Orphal, Jimmy Wales, and Suhas Gopinath, were some of the eminent guests in previous editions of Tathva.
Ragam
Ragam is the cultural festival of NITC. Colleges and universities from Kerala and outside compete in events like trivia quizzes, dance competitions, rock shows, and music concerts. Some of the performers who have performed in previous years include Shaan, Sunidhi Chauhan, Blaaze, Shankar Mahadevan, Kartik, Benny Dayal, Stephen Devassy, Sonu Nigam, Parikrama and Darshan Raval. Breathe Floyd, a Pink Floyd tribute band from the UK, performed during Ragam 2009. Ragam 2010 featured KK, Naresh Iyer, and Higher on Maiden, a tribute band to Iron Maiden. Ragam is held in the memory of a former student P. Rajan who died after being held (ostensibly for being a Naxalite) in police custody.
Hoping to make their way into the Limca Book of Records, in Ragam 2015, students took part in an event to get the most people in a single selfie. It was organised to break the Bangladeshi record of 1,151 people in a single selfie that took place during a product promotion event. According to the event organisers, more than 2,000 students gathered in the venue for the event. Apart from this, another cultural event called Sneharagam was organised, which is meant for the differently-abled children from all over Kerala.
Student organisations
The student organisations at NITC include the Robotics Interest Group (RIG), the Literary and Debating Club (LND), The Indian Cultural Association (ICA), the Forum for Dance and Dramatics (DND), Civil Engineering Association (CEA), Electronics and Communication Engineering Association (ECEA), Electrical Engineering Association (EEA), Computer Science & Engineering Association (CSEA), Chemical Engineering Association (CHEA) which is the NITC Chapter of IIChE, Mechanical Engineering Association, Enquire (the NITC Quiz Club), The Music Club, Club Mathematica, Audio Visual Club (AVC), Team Unwired (Engineering and Technology Club), Aero Unwired (an Aeromodelling Club), the Society of Automotive Engineers (SAE) student chapter, the Indian Society for Technical Education (ISTE) Students' Chapter, the SPICMACAY NIT Calicut chapter, The Industrial and Planning Forum (IPF), the Nature Club and the Adventure Club (Ad-Club).
Professional bodies with student chapters include NIT Calicut ACM Student Chapter, Computer Society of India, the Indian Society for Technical Education, Institute of Electrical and Electronics Engineers (IEEE) and the Institution of Engineers (India).
Noted people
Notable faculties
Chairpersons
C. G. Krishnadas Nair (2011–2014)
Aruna Jayanthi (2014–2020)
Notable alumni
John Mathai, former Chief Secretary of Kerala
Jishnu Raghavan, Malayalam film actor
Baby Niveditha, Malayalam film actress
See also
List of engineering colleges in Kerala
Ragam (festival)
References
External links
Educational institutions established in 1961
Engineering colleges in Kerala
National Institutes of Technology
Universities and colleges in Kozhikode
Science and technology in Kozhikode
1961 establishments in Kerala
All India Council for Technical Education
Technical universities and colleges in India
Kozhikode east |
852331 | https://en.wikipedia.org/wiki/Amirkabir%20University%20of%20Technology | Amirkabir University of Technology | Amirkabir University of Technology (AUT) (), also called the Tehran Polytechnic, is a public technological university located in Tehran, Iran. Founded in 1928, AUT is the second oldest technical university established in Iran.
It is referred to as the 'Mother of Engineering Universities'. Acceptance to the university is competitive, entrance to undergraduate and graduate programs requires scoring among the top 1% of students in the Iranian University Entrance Exam, known as 'Konkour'.
The university was founded in 1928 as a technical academy and was developed into a full-fledged university by Habib Nafisi in 1956; it was subsequently extended and enlarged by Dr. Mohammad Ali Mojtahedi during the Pahlavi dynasty. Named the Tehran Polytechnic, it initially offered five engineering degrees, namely Electrical and Electronics, Mechanical, Textile, Chemistry, and Construction and Infrastructure. Six months before the victory of the 1979 Iranian Revolution, Tehran Polytechnic was renamed after the Iranian prime minister Amir Kabir (1807–1852).
The university now has 18 science and engineering departments, dozens of research groups and laboratories and three other affiliated centers, located in Garmsar, Bandar Abbas and Mahshahr. Around 13,400 students are enrolled in the undergraduate and graduate programs. AUT has more than 500 full-time academic faculty members and 550 administrative employees, giving it the highest staff-to-student ratio among the country's universities. The executive branch consists of four departments which receive participation from councils in planning and administering affairs.
AUT has signed agreements with international universities for research and educational collaboration. There is a joint program between AUT and the University of Birmingham.
AUT is one of the leading universities in E-Learning systems in Iran which began its activities in 2004.
AUT is the pioneer of sustainable development in Iran and established the Office of Sustainability in 2011. The activities of this office contribute to the AUT campus by reducing energy consumption, costs, and emissions, and also provide student coursework, volunteer opportunities for students, as well as research and education academic activities on sustainable development.
History
The establishment of the Amirkabir University of Technology dates back to October 1956, when it was founded by Eng. Habib Nafisi (حبیب نفیسی). The core of the university was formed at that time under the name Tehran Polytechnic in order to expand the activities of two technical institutes: the Civil Engineering Institute and the Higher Art Center. After Habib Nafisi, the founder of Tehran Polytechnic, Dr. Abedi served as president of the university for a few months until Dr. Mohammad Ali Mojtahedi, the principal of the renowned Alborz High School, was appointed president early in 1963. Among Dr. Mojtahedi's accomplishments are the construction of a central amphitheatre, a dining area and a sports ground, as well as various faculty buildings.
The university has grown into a national center of science and engineering and its undergraduates number more than 7,000, with a further 6,400 graduate students. The university boasts 35 undergraduate majors, around 90 M.Sc. majors and 36 Ph.D. and post-doctoral programs.
Rankings and reputation
Amirkabir University has been consistently ranked as one of Iran's top universities. The 2011 QS World University Rankings ranked the university 301–350 in Engineering and Technology in the world. Iran's Ministry of Science, Research and Technology ranked AUT among the Top 3 high ranked universities in the country. In Webometrics Ranking of World Universities (2012), the university also ranks among the top three highest ranked universities in Iran. Computer Science, Polymer Engineering and Biomedical Engineering have the highest field rankings in Amirkabir University of Technology.
In the 2013 Shanghai Rankings Amirkabir University's Computer Science department ranked 100–150 among World Universities. In 2014, the Shanghai ranking placed Amirkabir University's Engineering Sciences 151–200 among World Universities. The Polymer Engineering Department of Amirkabir University of Technology is the first and most prestigious Polymer Engineering program in Iran. AUT also ranked first among Iranian universities in 2014 in the CWTS Leiden Ranking. In 2014, the U.S. News & World Report ranked Amirkabir University of Technology's Engineering Sciences 89 among world universities. Also, Computer Science of AUT ranked 90 among World Universities.
The 2021 edition of the QS World University Rankings placed Amirkabir University 477th in the world and second in Iran with an overall score of 24.8, behind only Sharif University of Technology which stood at 407th.
Campuses
Tehran
The main campus of the Amirkabir University of Technology is in Tehran, Iran. It is located close to Vali Asr Crossroads, the intersection of Enghelab Street and Vali Asr Street, in the very center of Tehran City. Many students commute to AUT via the subway by Vali Asr station.
Mahshahr
The Mahshahr campus of AUT was constructed in the province of Khuzestan in 2001 in order to establish close cooperation with the national company of petroleum industries.
Bandar Abbas
The Bandar Abbas campus of AUT has been established in the province of Hormozgan, which is the center of marine industries in Iran.
Garmsar
The Garmsar campus of AUT was constructed in the province of Semnan in order to maintain close cooperation with the main campus in Tehran.
Library
The library and document center at AUT, the largest technical and engineering library in Iran's capital, is one of the richest academic libraries in the technical and engineering field in the region. The library includes a central library and 16 satellite libraries in Tehran and Bandar Abbas. The library houses about 5 million books.
Departments
AUT has 16 departments including 'management, science and technology', electrical engineering, biomedical engineering, polymer engineering, mathematics and computer science, chemical engineering, industrial engineering, civil and environmental engineering, physics and energy engineering, computer and information technology, mechanical engineering, mining and metallurgical engineering, textile engineering, petroleum engineering, ship engineering, and aerospace engineering. AUT has three educational sites in Garmsar, Bandar Abbas and Mahshahr.
University departments websites are:
Management, Science and Technology
Robotics Engineering
Aerospace Engineering
Biomedical Engineering
Chemical Engineering
Civil and Environmental Engineering
Computer Engineering and Information Technology
Electrical Engineering
Industrial Engineering
Marine Technology
Mathematics and Computer Science
Mechanical Engineering
Mining and Metallurgy Engineering
Energy Engineering and Physics
Polymer and Color Engineering
Textile Engineering
Petroleum Engineering
Group of MBE
Scientific associations
Scientific associations exist to help students transform themselves into contributing members of the professional community. Course work develops only one range of skills. Other skills needed to flourish professionally include effective communication and personal interactions, leadership experience, establishing a personal network of contacts, presenting scholarly work in professional meetings and journals, and outreach services to the campus and local communities.
University association websites are as:
Scientific Association of Physics
Presidents
Habib Nafisi
Prof. Abedi
Prof. Mohammad Ali Mojtahedi
Prof. Bita
Prof. Yeganeh Haeri
Prof. Mohammad-Jafar Jadd Babaei
Prof. Kayvan Najmabadi
Prof. Hossein Mahban
Prof. Siroos Shafiei
Prof. Miri
Prof. Mahdi
Prof. Hasan Farid Alam
Prof. Kamaleddin Yadavar Nikravesh
Prof. Hassan Rahimzadeh
Prof. Aliakbar Ramezanianpour
Prof. Reza Hosseini Abardeh
Prof. Mohammad Hossein Salimi Namin
Prof. Abdolhamid Riazi
Prof. Ahmad Fahimifar
Prof. Alireza Rahai (2005–2014)
Prof. Ahmad Motamedi (2014–2021)
Prof. Hassan Ghodsipour (acting) (2021–present)
Research and innovation
The university is known as a pioneer in research and innovation in Iran. AUT is a public university, and the government of Iran partly provides its research funding. AUT has cooperation with industrial companies, especially in the oil and gas industries. As a result, many research projects in the university are funded by industrial companies.
The Amirkabir University of Technology was appointed as a Center of Excellence by Iran's Ministry of Science and Technology in the fields of Biomechanics, Power Systems, Radiocommunication systems (RACE) and Thermoelasticity.
The university houses a supercomputer which has a speed of 34,000 billion operations per second. The computer is available for both university affiliated as well as non-affiliated research.
Amirkabir Journal of Science and Technology is a scientific journal which publishes research achievements of academic and industrial researchers in all engineering and science areas.
The AUT Journal of Mathematics and Computing (AJMC) is a peer-reviewed journal that publishes original articles, review articles and short communications in all areas of mathematics, statistics and computer sciences.
Research and Technology Center of AUT is an office which collaborates with industries and universities in order to improve research level in the university.
Notable alumni and faculty
Science and technology
Abolhassan Astaneh-Asl, professor of civil engineering, University of California, Berkeley
Mohammad Reza Eslami, professor of mechanical engineering, one of the top 20 most cited scientists of Iran
Reza Iravani, professor of electrical engineering, University of Toronto
Mohammad Modarres, professor of mechanical and energy engineering, University of Maryland, College Park
Industry
Bahaedin Adab, co-founder of Karafarin Insurance Co. and Karafarin Bank, former deputy chairman of the board of director of the Industry Confederation of Iran, former chairman of the board of directors of the Syndicate of Construction Companies of Tehran
Hossein Hosseinkhani, shareholder and owner at Matrix, Inc., a world-leading biotech company dedicated to healthcare technology to improve patients' quality of life, New York City, USA; former chairman of the board of directors of Pacific Stem Cells, Ltd.
Politics
Abbas Abdi, political activist
Bahaedin Adab, former member of parliament from Kurdistan (Sanandaj, Kamyaran, Divandareh), co-founder of the Kurdish United Front
Ali Afshari, political activist
Masoumeh Ebtekar, vice president of Iran for women and family affairs
Mohsen Mirdamadi, secretary general of Islamic Iran Participation Front, the largest reformist party in Iran
Mostafa Mirsalim, former minister of Islamic Culture and Guidance
Ahmad Motamedi, former minister of Communication and Information Technology
Behzad Nabavi, former minister of industry and former deputy speaker of the Parliament of Iran
Majid Tavakoli, prominent Iranian student leader, human rights activist and political prisoner
Ahmad Vahidi, former minister of defense
Ezzatollah Zarghami, former president of the Islamic Republic of Iran Broadcasting (IRIB), Iran's television and radio organization
Farhad Nouri golpa, former president of students council, Screenplay writer, director
Other
Mohammad Ali Mojtahedi, former dean of the university
Davood Mirbagheri, screenwriter and film director
News
Fars News Agency reports: "President of Iran's Amirkabir University of Technology Alireza Rahaei announced the country is preparing to put a new home-made satellite, called Nahid (Venus), into orbit in the next three months".
Cientifica reports in news item on Nanowerk, April 25, 2012: Iranian scientists are using lecithin to synthesize and bind silver nanoparticles more tightly to wool.
References
External links
AUT homepage
IBIMA – Iran Building Information Modeling Association
1928 establishments in Iran
Educational institutions established in 1928
1956 establishments in Iran
Educational institutions established in 1956 |
32455810 | https://en.wikipedia.org/wiki/L-packet | L-packet | In the field of mathematics known as representation theory, an L-packet is a collection of (isomorphism classes of) irreducible representations of a reductive group over a local field, that are L-indistinguishable, meaning they have the same Langlands parameter, and so have the same L-function and ε-factors. L-packets were introduced by Robert Langlands in , .
The classification of irreducible representations splits into two parts: first classify the L-packets, then classify the representations in each L-packet. The local Langlands conjectures state (roughly) that the L-packets of a reductive group G over a local field F are conjecturally parameterized by certain homomorphisms of the Langlands group of F to the L-group of G, and Arthur has given a conjectural description of the representations in a given L-packet.
The elements of an L-packet
For irreducible representations of connected complex reductive groups, Wallach proved that all the L-packets contain just one representation. The L-packets, and therefore the irreducible representations, correspond to quasicharacters of a Cartan subgroup, up to conjugacy under the Weyl group.
For general linear groups over local fields, the L-packets have just one representation in them (up to isomorphism).
An example of an L-packet is the set of discrete series representations with a given infinitesimal character and given central character. For example, the discrete series representations of SL2(R) are grouped into L-packets with two elements.
A conjectural parameterization of the elements of an L-packet was given in terms of the connected components of C/Z, where Z is the center of the L-group, C is the centralizer in the L-group of Im(φ), and φ is the homomorphism of the Langlands group to the L-group corresponding to the L-packet. For example, in the general linear group, the centralizer of any subset is Zariski connected, so the L-packets for the general linear group all have one element. On the other hand, the centralizer of a subset of the projective general linear group can have more than one component, corresponding to the fact that L-packets for the special linear group can have more than one element.
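In symbols, the parameterization described above can be restated as follows; this is only a notational rendering of the preceding sentence, with C written as C_φ to stress its dependence on the parameter φ.

```latex
% The members of the L-packet attached to a parameter \phi are expected to be
% indexed by the connected components of C_\phi / Z, as described in the text.
\[
  \Pi_\phi \;\longleftrightarrow\; \pi_0\!\left( C_\phi / Z \right),
  \qquad
  C_\phi = \operatorname{Cent}_{{}^L G}\!\left( \operatorname{Im}\phi \right),
  \quad
  Z = Z\!\left( {}^L G \right).
\]
```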
References
Langlands program |
872925 | https://en.wikipedia.org/wiki/SCO%20Group%2C%20Inc.%20v.%20DaimlerChrysler%20Corp. | SCO Group, Inc. v. DaimlerChrysler Corp. | SCO Group v. DaimlerChrysler was a lawsuit filed in the United States, in the state of Michigan. In December 2003, SCO sent a number of letters to Unix licensees. In these letters, SCO demanded that the licensees certify certain things regarding their usage of Linux. DaimlerChrysler, a former Unix user and current Linux user, did not respond to this letter. On March 3, 2004, SCO filed suit against DaimlerChrysler for violating their Unix license agreement, by failing to respond to the certification request made by SCO. The parties agreed to a stipulated dismissal order on December 21, 2004. The case was dismissed without prejudice, but if SCO wishes to pursue the timeliness claim again, it must pay DaimlerChrysler's legal fees since August 9. On December 29, 2004, SCO filed a claim of appeal notice. On January 31, 2005, the claim of appeal was dismissed.
History
For use on their Cray supercomputer, Chrysler Corporation bought a Unix source license from AT&T on September 2, 1988. A source license allows the licensee to view, modify and use the Unix source code on a number of specific machines (designated CPUs).
Through a number of acquisitions, The SCO Group became the licensing agent that handled Unix source licenses. Chrysler Motors Corporation merged with Daimler-Benz in 1998, forming DaimlerChrysler.
Background information
The licenses sold by AT&T allow the licensor to ask for certification regarding the use of the licensed product.
On [SCO's] request, but not more frequently than annually, LICENSEE shall furnish to [SCO] a statement, certified by an authorized representative of LICENSEE, listing the location, type and serial number of all DESIGNATED CPUs hereunder and stating that the use by LICENSEE of SOFTWARE PRODUCTS subject to this Agreement has been reviewed and that each such SOFTWARE PRODUCT is being used solely on DESIGNATED CPUs (or temporarily on back-up CPUs) for such SOFTWARE PRODUCTS in full compliance with the provisions of this Agreement.
The SCO Group invoked their right to ask for certification on December 18, 2003. In addition to the certification specified in the license, SCO also instructed the Unix licensees to certify their use of Linux, a competing operating system.
DaimlerChrysler did not respond to this letter. In fact, it is possible that DaimlerChrysler never received it; it was addressed to Chrysler Motors Corporation at 12800 Oakland Avenue in Highland Park, Michigan, but Chrysler Corporation (as it was then known) had announced the move of its Highland Park headquarters in 1992 and had moved into a massive headquarters complex (1000 Chrysler Drive) in nearby Auburn Hills between 1993 and 1996. The company closed down its Highland Park facilities in 1997.
The name of the company, too, had changed: "Chrysler Motors Corporation" had become part of DaimlerChrysler when it merged with Daimler-Benz AG in 1998. The former Chrysler operations were now referred to informally as "the Chrysler Group", but were legally known as DaimlerChrysler Motors Company LLC.
The lawsuit
On March 3, 2004, The SCO Group filed a breach of contract lawsuit against DaimlerChrysler. In its complaint, SCO claimed that DaimlerChrysler refused to comply with the terms of the license. SCO also speculated that DaimlerChrysler broke the licensing agreement when they moved to the Linux operating system and that this is the reason why they refused to certify.
DaimlerChrysler responded with a motion for summary judgment on April 15, 2004. DaimlerChrysler claimed that the letter sent by SCO asked for certifications that were not agreed upon in the original licensing agreement, such as certifications about the use of Linux. Additionally DaimlerChrysler claimed that the original licensing agreement does not mention a specific time in which a licensee should respond to a certification request. DaimlerChrysler also told the court that it had not been contacted by SCO after receiving the letter, instead SCO filed suit without further attempts to receive any certifications.
At the same time, DaimlerChrysler also responded by certifying its use of Unix, according to the provisions specified in the original licensing agreement. In this certification, DaimlerChrysler revealed that it had not used Unix for over seven years.
On August 9, 2004, Judge Chabot granted the summary disposition almost completely. The only remaining issue in the case was whether DaimlerChrysler's response was submitted in a timely manner. On November 17, 2004, SCO moved to stay its suit pending the SCO v. IBM case, but was denied.
The parties agreed to a stipulated dismissal order on December 21, 2004. The case was dismissed without prejudice, but if SCO wishes to pursue the timeliness claim again, it must pay DaimlerChrysler's legal fees since August 9. On December 29, 2004, SCO filed a claim of appeal notice. On January 31, 2005, the claim of appeal was dismissed.
References
External links
Legal documents of the case
Legal documents and analysis by Al Petrofsky
SCO v. DaimlerChrysler at Groklaw
SCO–Linux disputes
Lawsuits
2004 in Michigan
DaimlerChrysler
2004 in United States case law
Legal history of Michigan |
11222137 | https://en.wikipedia.org/wiki/List%20of%20licensed%20professional%20wrestling%20video%20games | List of licensed professional wrestling video games | The following is a list of licensed wrestling video games based on professional wrestling, licensed by promotions such as WWF/WWE, WCW, ECW, NJPW, TNA, and AAA.
Promotion–based games
All Elite Wrestling
Video games by professional wrestling promotion All Elite Wrestling:
AEW Casino: Double or Nothing [2020] (iOS, Android)
AEW Elite GM [2020] (iOS, Android)
Untitled All Elite Wrestling video game for home consoles [upcoming] (PlayStation 5, PlayStation 4, Xbox Series X/S, Xbox One)
All Japan Pro Wrestling
Video games by professional wrestling promotion All Japan Pro Wrestling:
All Japan Pro Wrestling [1993] (SNES)
All Japan Pro Wrestling Dash: World's Strongest Tag Team [1993] (SNES)
All Japan Pro Wrestling Jet [1994] (Game Boy)
All Japan Pro Wrestling 2: 3-4 Budokan [1995] (SNES)
All Japan Pro Wrestling featuring Virtua [1997] (Saturn)
King's Soul: All Japan Pro Wrestling [1999] (PlayStation)
Giant Gram: All Japan Pro Wrestling 2 [1999] (Dreamcast)
Giant Gram 2000: All Japan Pro Wrestling 3 [2000] (Dreamcast)
Virtual Pro Wrestling 2 [2000] (Nintendo 64)
All Japan Women's Pro-Wrestling
Video games by former professional wrestling promotion All Japan Women's Pro-Wrestling:
Fire Pro Women: All-Star Dream Slam [1994] (SNES)
Super Fire Pro Wrestling: Queen's Special [1995] (Super Famicom, TurboGrafx-16/PC Engine)
Wrestling Universe: Fire Pro Women: Dome Super Female Big Battle: All Japan Women VS J.W.P. [1995] (TurboGrafx-16/PC Engine)
All Japan Women's Pro-Wrestling: Queen of Queens [1995] (PC-FX)
All Japan Women's Pro-Wrestling [1998] (PlayStation)
Extreme Championship Wrestling
Video games by former professional wrestling promotion Extreme Championship Wrestling:
ECW Hardcore Revolution [2000] (Nintendo 64, PlayStation, Game Boy Color, Dreamcast)
ECW Anarchy Rulz [2000] (PlayStation, Dreamcast)
New Japan Pro-Wrestling
Video games by professional wrestling promotion New Japan Pro-Wrestling:
New Japan Pro-Wrestling: Toukon Sanjushi [1991] (Game Boy)
New Japan Pro-Wrestling: Chou Senshi in Tokyo Dome [1993] (SNES)
New Japan Pro-Wrestling '94 [1994] (SNES)
New Japan Pro-Wrestling '94: Battlefield in Tokyo Dome [1994] (TurboGrafx-CD, Super Famicom)
New Japan Pro-Wrestling '95: Battle 7 in Tokyo Dome [1995] (SNES)
New Japan Pro-Wrestling: Toukon Retsuden [1995] (PlayStation, WonderSwan)
New Japan Pro-Wrestling: Toukon Retsuden 2 [1996] (PlayStation)
New Japan Pro-Wrestling: Toukon Road – Brave Spirits [1998] (Nintendo 64)
New Japan Pro-Wrestling: Toukon Retsuden 3 [1998] (PlayStation)
New Japan Pro-Wrestling: Toukon Road 2 – The Next Generation [1998] (Nintendo 64)
New Japan Pro-Wrestling: Toukon Retsuden 4 [1999] (Dreamcast)
New Japan Pro-Wrestling: Toukon Retsuden Advance [2002] (Game Boy Advance)
Fire Pro Wrestling World [2018] (Personal Computer/PC, PlayStation 4)
NJPW Collection [2020] (iOS, Android)
NJPW Strong Spirits [2021] (iOS, Android)
Total Nonstop Action Wrestling
Video games by professional wrestling promotion Total Nonstop Action Wrestling:
TNA Impact! [2008] (PlayStation 2, PlayStation 3, Xbox 360, Wii)
TNA Wrestling [2009] (iOS)
TNA Impact!: Cross The Line [2010] (Nintendo DS, PlayStation Portable)
TNA Wrestling Impact! [2011] (iOS, Android)
World Championship Wrestling
Video games by former professional wrestling promotion World Championship Wrestling:
WCW Wrestling [1989] (NES)
WCW: The Main Event [1994] (Game Boy)
WCW SuperBrawl Wrestling [1994] (SNES)
WCW vs. the World [1997] (PlayStation)
WCW vs. nWo: World Tour [1997] (Nintendo 64)
Virtual Pro Wrestling 64 [1997] (Nintendo 64)
WCW Nitro [1998] (PlayStation, Nintendo 64, Microsoft Windows)
WCW/nWo Revenge [1998] (Nintendo 64)
WCW/nWo Thunder [1999] (PlayStation)
WCW Mayhem [1999] (PlayStation, Nintendo 64, Game Boy Color)
WCW Backstage Assault [2000] (PlayStation, Nintendo 64)
WWE
Some WWF/WWE games which share a name but were produced for different platforms are considered separate, especially if they were released years apart. For example, the SNES game WWF Royal Rumble is completely different from the Dreamcast game entitled WWF Royal Rumble released years later.
MicroLeague Wrestling [1987] (Amiga, Commodore 64)
WWF WrestleMania [1989] (NES)
WWF Superstars [1989] (Arcade)
WWF WrestleMania Challenge [1990] (NES, Commodore 64)
WWF Superstars [1991] (Game Boy)
WWF WrestleMania [1991] (Amstrad CPC, Amiga, Commodore 64, ZX Spectrum, Atari ST, Personal Computer/PC)
WWF WrestleFest [1991] (Arcade)
WWF Superstars 2 [1992] (Game Boy)
WWF European Rampage Tour [1992] (Amiga, Atari ST, Personal Computer/PC, Commodore 64)
WWF Super WrestleMania [1992] (SNES, Mega Drive/Genesis)
WWF WrestleMania: Steel Cage Challenge [1992] (Master System, Game Gear, NES)
WWF Royal Rumble [1993] (SNES, Mega Drive/Genesis)
WWF Rage in the Cage [1993] (Sega CD)
WWF King of the Ring [1993] (NES, Game Boy)
WWF Raw [1994] (32X, Mega Drive/Genesis, Game Boy, Game Gear, SNES)
WWF WrestleMania: The Arcade Game [1995] (Arcade, 32X, Mega Drive/Genesis, MS-DOS, PlayStation, Saturn, SNES)
WWF In Your House [1996] (Personal Computer/PC, PlayStation, Saturn, MS-DOS)
WWF War Zone [1998] (Game Boy, PlayStation, Nintendo 64)
WWF Attitude [1999] (Game Boy Color, Nintendo 64, PlayStation, Dreamcast)
WWF WrestleMania 2000 [1999] (Nintendo 64, Game Boy Color)
WWF SmackDown! [2000] (PlayStation)
WWF Royal Rumble [2000] (Arcade, Dreamcast)
WWF SmackDown! 2: Know Your Role [2000] (PlayStation)
WWF No Mercy [2000] (Nintendo 64)
With Authority! [2001] (Personal Computer/PC)
WWF Betrayal [2001] (Game Boy Color)
WWF Road to WrestleMania [2001] (Game Boy Advance)
WWF SmackDown! Just Bring It [2001] (PlayStation 2)
WWF Raw [2002] (Personal Computer/PC, Xbox)
WWE WrestleMania X8 [2002] (Gamecube)
WWE Road to WrestleMania X8 [2002] (Game Boy Advance)
WWE SmackDown! Shut Your Mouth [2002] (PlayStation 2)
WWE Crush Hour [2003] (PlayStation 2, Gamecube)
WWE Raw 2 [2003] (Xbox)
WWE WrestleMania XIX [2003] (Gamecube)
WWE SmackDown! Here Comes the Pain [2003] (PlayStation 2)
WWE Mobile Madness [2003] (Mobile)
WWE Mobile Madness Hardcore [2003] (Mobile)
WWE Mobile Madness: Cage [2003] (Mobile)
WWE Day of Reckoning [2004] (Gamecube)
WWE Survivor Series [2004] (Game Boy Advance)
WWE SmackDown! vs. Raw [2004] (PlayStation 2)
WWE Raw [2005] (Mobile)
WWE Smackdown [2005] (Mobile)
WWE WrestleMania 21 [2005] (Xbox)
WWE Aftershock [2005] (N-Gage)
WWE Day of Reckoning 2 [2005] (Gamecube)
WWE SmackDown! vs. Raw 2006 [2005] (PlayStation 2, PlayStation Portable)
WWE SmackDown vs. Raw 2007 [2006] (Xbox 360, PlayStation 2, PlayStation Portable)
WWE SmackDown vs. Raw 2008 [2007] (PlayStation 2, PlayStation 3, PlayStation Portable, Nintendo DS, Wii, Xbox 360, Mobile)
WWE SmackDown vs. Raw 2009 [2008] (PlayStation 2, PlayStation 3, PlayStation Portable, Nintendo DS, Wii, Xbox 360, Mobile)
WWE Legends of WrestleMania [2009] (PlayStation 3, Xbox 360, iOS)
WWE SmackDown vs. Raw 2010 [2009] (PlayStation 2, PlayStation 3, PlayStation Portable, Nintendo DS, Wii, Xbox 360, iOS)
WWE SmackDown vs. Raw 2011 [2010] (PlayStation 2, PlayStation 3, PlayStation Portable, Wii, Xbox 360)
WWE All Stars [2011] (PlayStation 2, PlayStation 3, PlayStation Portable, Wii, Nintendo 3DS, Xbox 360)
WWE Superstar Slingshot [2011] (Mobile)
WWE '12 [2011] (PlayStation 3, Wii, Xbox 360)
WWE '13 [2012] (PlayStation 3, Wii, Xbox 360)
Apptivity WWE Rumblers [2012] (iPad)
WWE WrestleFest [2012] (iPad)
WWE 2K14 [2013] (PlayStation 3, Xbox 360)
WWE Presents: John Cena's Fast Lane [2013] (iOS, Android)
WWE Presents: RockPocalypse [2013] (iOS, Android)
WWE SuperCard [2014] (iOS, Android)
WWE 2K15 [2014] (PlayStation 3, Xbox 360, PlayStation 4, Xbox One, Personal Computer/PC, Android, iOS)
WWE 2K [2015] (iOS, Android)
WWE Immortals [2015] (iOS, Android)
WWE 2K16 [2015] (PlayStation 3, Xbox 360, PlayStation 4, Xbox One, Personal Computer/PC)
WWE 2K17 [2016] (PlayStation 3, Xbox 360, PlayStation 4, Xbox One, Personal Computer/PC)
WWE Champions [2017] (iOS, Android)
WWE Tap Mania [2017] (iOS, Android)
WWE 2K18 [2017] (PlayStation 4, Xbox One, Nintendo Switch, Personal Computer/PC)
WWE Mayhem [2017] (iOS, Android)
WWE 2K19 [2018] (PlayStation 4, Xbox One, Personal Computer/PC)
WWE 2K20 [2019] (PlayStation 4, Xbox One, Personal Computer/PC)
WWE Universe [2019] (iOS, Android)
The King of Fighters All Star [2020] (iOS, Android)
WWE 2K Battlegrounds [2020] (PlayStation 4, Xbox One, Nintendo Switch, Personal Computer/PC)
WWE Champions 2021 [2021] (iOS, Android)
WWE Undefeated [2021] (iOS, Android)
WWE 2K22 [2022] (PlayStation 4, Xbox One, PlayStation 5, Xbox Series X/S, Personal Computer/PC)
Other promotions
Frontier Martial-Arts Wrestling — Onita Atsushi FMW [1993] (Super Famicom)
JWP Joshi Puroresu — Wrestling Universe: Fire Pro Women: Dome Super Female Big Battle: All Japan Women VS J.W.P. [1995] (TurboGrafx-CD)
Lucha Libre AAA World Wide — Lucha Libre AAA: Héroes del Ring [2010] (PlayStation 3, Xbox 360)
5 Star Wrestling — 5 Star Wrestling [2015] (PlayStation 3, PlayStation 4)
Chikara — CHIKARA: Action Arcade Wrestling [2021] (Personal Computer/PC, PlayStation 4, Xbox One)
World Wonder Ring Stardom — Fire Pro Wrestling World [2018] (Personal Computer/PC, PlayStation 4)
Brandless games
These titles do not belong to a specific brand. However, some of the following titles include real wrestlers from brands like WWF/WWE, WCW, NWA, ECW, TNA, NJPW, AJPW, and NOAH.
Tag Team Wrestling [1983] (Arcade)
Mat Mania – The Prowrestling Network [1985] (Arcade)
Fire Pro Wrestling Combination Tag [1989] (PC Engine, Wii)
Cutie Suzuki's Ringside Angel [1990] (Mega Drive/Genesis)
Fire Pro Wrestling 2nd Bout [1991] (PC Engine, Wii)
Super Fire Pro Wrestling [1991] (Super Famicom)
Thunder Pro Wrestling Retsuden [1992] (Mega Drive)
Fire Pro Wrestling 3: Legend Bout [1992] (PC Engine)
Super Fire Pro Wrestling 2 [1992] (Super Famicom)
Super Fire Pro Wrestling 3 Final Bout [1993] (Super Famicom)
Super Fire Pro Wrestling Special [1994] (Super Famicom)
Fire Pro Gaiden: Blazing Tornado [1994] (Arcade, Saturn)
Super Fire Pro Wrestling X [1995] (Super Famicom)
Fire Pro Wrestling: Iron Slam '96 [1996] (PlayStation)
Super Fire Pro Wrestling X Premium [1996] (Super Famicom)
Fire Prowrestling S: 6Men Scramble [1996] (Saturn)
Fire Pro Wrestling G [1999] (PlayStation)
All Star Pro-Wrestling [2000] (PlayStation 2)
Fire Pro Wrestling for WonderSwan [2000] (WonderSwan)
Fire Pro Wrestling D [2001] (Dreamcast)
Fire Pro Wrestling [2001] (Game Boy Advance)
All Star Pro-Wrestling II [2001] (PlayStation 2)
Legends of Wrestling [2001] (PlayStation 2, GameCube, Xbox)
Fire Pro Wrestling 2 [2002] (Game Boy Advance)
Legends of Wrestling II [2002] (Game Boy Advance, PlayStation 2, GameCube, Xbox)
Backyard Wrestling: Don't Try This at Home [2003] (PlayStation 2, Xbox)
All Star Pro Wrestling III [2003] (PlayStation 2)
King of Colosseum [2003] (PlayStation 2)
Fire Pro Wrestling Z [2004] (PlayStation 2)
Backyard Wrestling 2: There Goes the Neighborhood [2004] (PlayStation 2, Xbox)
King of Colosseum II [2004] (PlayStation 2)
Showdown: Legends of Wrestling [2004] (PlayStation 2, Xbox)
Fire Pro Wrestling Returns [2005] (PlayStation 2)
Wrestle Kingdom [2006] (Xbox 360, PlayStation 2)
Wrestle Kingdom 2 [2007] (PlayStation 2)
Hulk Hogan's Main Event [2011] (Xbox 360)
Fire Pro Wrestling [2012] (Xbox 360)
Fire Pro Wrestling in Mobage [2012] (Mobage)
RetroMania Wrestling [2021] (PlayStation 4, Xbox One, Personal Computer/PC, Nintendo Switch)
The Wrestling Code [upcoming] (PlayStation 5, Xbox Series X/S, Personal Computer/PC)
See also
WWE 2K
List of video games in the WWE 2K Games series
List of wrestling video games
List of sumo video games
References
Video games, licensed
Wrestling video games |
5587038 | https://en.wikipedia.org/wiki/William%20Winn | William Winn | William David "Bill" Winn (1945–2006) was an American educational psychologist, and professor at the University of Washington College of Education, known for his work on how people learn from diagrams, and on how cognitive and constructivist theories of learning can help instructional designers select effective teaching strategies.
Biography
Specializing first in French and German languages and comparative literature, Winn earned a BA and MA from Oxford University and an MA from Indiana University. He earned a PhD from Indiana University (1972) in Instructional Systems Technology (minor educational psychology) for research on instructional message design. His doctoral dissertation was on the Similarity of Hierarchically Organized Pairs of Pictures and Words as Reported by Field-Dependent and Field-Independent High-School Seniors.
In 1972 Winn started his academic career as assistant professor in the Department of Pedagogy, Faculty of Education, at the Université de Sherbrooke. From 1974 to 1985, he was the academic coordinator of the Learning Technology Unit at the University of Calgary. Eventually, Winn was appointed professor at the University of Washington College of Education where he held appointments in curriculum and instruction, and cognitive studies. He was also director of the Learning Center at the Human Interface Technology Lab (HITLab), and adjunct professor in the College of Engineering, and the Music department.
Winn was the editor of Educational Communication and Technology Journal, and served on the editorial review boards of many other journals in the fields of educational psychology and educational technology.
Work
Winn's areas of teaching and research included instructional theory, design of computer-based learning, instructional effects of illustrations, theories of visual perception applied to instructional materials design, computer interfaces, and the roles and effectiveness of virtual environments in education and training. This work extended cognitive theories of learning into systems dynamics models of cognition and cognitive neuroscience.
Winn collaborated broadly across disciplines and national boundaries, presenting papers in French, German and English. In addition to teaching, extensive graduate advising activities, and a prolific writing schedule, at the time of his death he was working on research with the Puget Sound Marine Environment Modeling Group, augmented reality and physical models of complex organic molecules, INFACT/PixelMath, and collaborating with PRISM and the Center for Environmental Visualization.
Computer-based learning
Winn was interested in computer-based learning because it allows students to obtain information in formats that teachers cannot present and because it gives students control over that information. He held that computer-based learning follows a constructivist approach, since students construct understandings for themselves by interacting with the material they encounter.
Virtual environments
Winn also focused his research on constructing virtual learning environments: computer-created environments intended to simulate realistic experiences in order to help students understand the concepts presented in them. For example, Winn explained "that the act of designing and creating environments that embody concepts and principles governing phenomena as diverse as wetlands ecology and medieval castles helps students master these topics with depth and clarity". He also found that virtual learning works especially well for students who do poorly in school. Teaching through virtual environments has weaknesses, however. Winn cautioned that this method of learning often results in misconceptions, because the interactions that occur in the simulated natural environments are oversimplified. Additionally, problems in the transfer of knowledge are seen in younger students who lack the ability to think abstractly; these children have difficulty transferring what they learn in the virtual world to the real world.
Learning oceanography from a computer compared to direct experience at sea
This study, one of several conducted by Winn, evaluated the difference between learning in a computer-based environment and learning through direct experience. Two groups of college students learned oceanography: one group used a computer simulation of the ocean that included a three-dimensional (3D) model, and the other spent a day on a research vessel using oceanographic tools. In his discussion of the study, Winn referred to Kolb's experiential learning theory, which highlights the significance of direct experience with the environment as well as the need for abstract concepts in order to learn and apply knowledge. According to Winn, the proper use of metaphors in simulations may allow students to learn abstract concepts better than they would from real experiences. The study took place in Seattle and focused on the oceanography of the Puget Sound estuary system in Washington. There were 25 students in each group, and both groups received a total of three lessons; two of the lessons were taught by the same professors and covered the same material, and for the third lesson the groups were separated into their different settings. One limitation of the study was that students taking the "Virtual Puget Sound" (VPS) experience could control only some independent variables; for example, they could not change the salinity of the water. The results showed "no difference in overall learning between students who used the VPS simulation and those who studied the same material in the field". However, the study found that students with less experience on the water learned more from direct experience, while the simulated ocean experience helped students transfer the knowledge they gained at the computer to the material presented in class.
Response to criticism
After reading Winn's 2002 article "Current trends in educational technology research: The study of learning environments", the educational psychologist Richard Mayer (2003) criticized it for dismissing controlled experiments, and thereby an approach that would produce substantial evidence and enable researchers to make claims about how students learn. In response, Winn agreed that experimental research is important, but proposed that researchers combine evidence from experimental and non-experimental research when conducting their studies, since each method produces different information: controlled experimental research is useful for obtaining details about student learning, while non-experimental research allows the researcher to see how learning occurs in real settings.
Non-experimental research method
As part of his response to Mayer's criticism, Winn argued that a good non-experimental method is the "design experiment" described by Ann Brown in 1992. Winn preferred this type of experiment because it shares many features of open-ended research methods. In a design experiment, the researcher tests an intervention in an educational setting such as a classroom, makes modifications based on the data collected, and repeats the intervention until it produces good results. The data collected take the form of observations, test results, or other work that shows the student has learned what is expected. Whereas a controlled experiment holds many variables constant, in a design experiment modifications are made over time. Winn explained that a key difference between the two is that "the controlled experiment adapts the setting to suit the intervention through experimental control, whereas the design experiment adapts the intervention to suit the setting through iteration". Although Winn favored design experiments, he noted one weakness: this type of non-experimental research demands more time and skill than experimental research. However, it can yield crucial evidence about the success of interventions and how students learn.
Implications for educational technology
Winn made significant contributions to the field of educational technology, as evidenced by his extensive research in the area. The following list of eight suggestions, provided by Winn (2002) for current and future researchers in the field, offers guidance on how practitioners can reduce factors that may distort research findings and thus help improve educational technology research.
Instructors should not use metaphors that may confuse students or prevent them from understanding concepts.
Computer learning environments yield greater results when conducted under a constructivist approach. Instructors should allow for mistakes and should not use virtual environments to teach basic facts.
Educational technology is not a sufficient method for teaching. Educators should implement activities and other methods of communication into their lessons.
Students must understand the task they have to accomplish, and they require scaffolding to reach their end goal.
Educators must implement social context in the technology driven learning environment, and acknowledge sharing and collaboration amongst students.
Educators should involve experts from the outside community in order to make their teaching effective.
Educators should encourage students to make changes to their learning environment, as this allows educators to obtain information about student learning.
Educators, students, and researchers should work as a team since they all contribute to the improvement of educational technology research.
Selected publications
Winn, William D. "Content Structure and Cognition in Instructional Systems" (1978).
Articles, a selection:
Winn, W.D. (1987). Charts, graphics and diagrams in educational materials. In D. Willows and H. Houghton (Eds.), The Psychology of Illustration. Vol. 1. Basic Research. New York: Springer, 152–198.
Winn, W.D. (1990). A theoretical framework for research on learning from graphics. International Journal of Educational Research, 14, 553–564.
Winn, W.D. (1991). Learning from maps and diagrams. Educational Psychology Review, 3, 211–247.
Winn, W.D. (1993). An account for how people search for information in diagrams. Contemporary Educational Psychology, 18, 162–185.
Winn, W.D. (1994). Contributions of perceptual and cognitive processes to the comprehension of graphics. W. Schnotz & R. Kulhavy (Eds.), Comprehension of Graphics. Amsterdam: Elsevier. 3-27.
References
External links
William Winn, web page at hitl.washington.edu
Bill Winn, Profile at the HITlab
1945 births
2006 deaths
American educational theorists
Educational psychologists
Information visualization experts
Alumni of the University of Oxford
Indiana University alumni
https://www.findagrave.com/memorial/217593133/william-d-winn |
7810648 | https://en.wikipedia.org/wiki/GPSBabel | GPSBabel | GPSBabel is a cross-platform, free software to transfer routes, tracks, and waypoint data to and from consumer GPS units, and to convert between over a hundred types of GPS data formats. It has a command-line interface and a graphical interface for Windows, macOS, and Linux users.
GPSBabel is included in many Linux distributions, including Debian and Fedora, and is also available through the Fink and Homebrew package systems for installing Unix software on macOS. The current official release is available from the GPSBabel download site.
Applications
Many contributors to OpenStreetMap use GPSBabel to convert GPS track data from proprietary formats to the GPX format OpenStreetMap requires.
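As an illustration only, such a conversion can be scripted by shelling out to the gpsbabel command-line tool, assuming it is installed; the file names below are hypothetical, and format identifiers such as "nmea" should be checked against the format list of the installed version.

    import subprocess

    def to_gpx(src_format: str, src_path: str, dest_path: str) -> None:
        """Convert a track/waypoint file to GPX by invoking the gpsbabel CLI."""
        subprocess.run(
            ["gpsbabel", "-i", src_format, "-f", src_path, "-o", "gpx", "-F", dest_path],
            check=True,  # raise an error if gpsbabel reports a failure
        )

    # Hypothetical example: an NMEA log from a GPS data logger, converted for OpenStreetMap use.
    to_gpx("nmea", "ride.nmea", "ride.gpx")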
GPSBabel is popular in the Geocaching community because it enables people with incompatible GPS units to share data.
Geographic information system (GIS) applications such as QGIS and GRASS GIS use GPSBabel for many import, export, and processing operations.
Photographers frequently use GPSBabel for geotagging images, associating location with photographs. This relies on GPS data loggers, either external or internal to the camera.
GPSBabel enables owners of many different brands of GPS units to view their GPS data in several popular consumer map programs, such as Google Earth and Microsoft Streets & Trips.
Notes
References
"GPS Running Log", Make Magazine, vol. 7, pp. 117–118.
Further reading
External links
GpsPrune can also act as a frontend
Free GIS software
Free software programmed in C
Satellite navigation software
Free software programmed in C++
Software that uses Qt |
1965369 | https://en.wikipedia.org/wiki/SWAC%20%28computer%29 | SWAC (computer) | The SWAC (Standards Western Automatic Computer) was an early electronic digital computer built in 1950 by the U.S. National Bureau of Standards (NBS) in Los Angeles, California. It was designed by Harry Huskey.
Overview
Like the SEAC which was built about the same time, the SWAC was a small-scale interim computer designed to be built quickly and put into operation while the NBS waited for more powerful computers to be completed (in particular, the RAYDAC by Raytheon).
The machine used 2,300 vacuum tubes. It had 256 words of memory, using Williams tubes, with each word being 37 bits. It had only seven basic operations: add, subtract, multiply (in single-precision and double-precision versions), comparison, data extraction, input, and output. Several years later, drum memory was added.
When the SWAC was completed in August 1950, it was the fastest computer in the world. It continued to hold that status until the IAS computer was completed a year later. It could add two numbers and store the result in 64 microseconds; a similar multiplication took 384 microseconds. It was used by the NBS until 1954, when the Los Angeles office was closed, and then by UCLA (with modifications) until 1967, where computer time was charged out at $40 per hour.
Beginning in January 1952, Raphael M. Robinson used the SWAC to discover five Mersenne primes, the largest prime numbers known at the time, with 157, 183, 386, 664, and 687 digits.
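Robinson's SWAC program is generally described as implementing the Lucas-Lehmer test. A minimal modern sketch of that test in Python reproduces the digit counts quoted above; the exponents 521, 607, 1279, 2203, and 2281 are stated here as an assumption, since the article gives only the digit counts of the resulting primes.

    def lucas_lehmer(p: int) -> bool:
        """Return True if the Mersenne number 2**p - 1 is prime (p an odd prime)."""
        m = (1 << p) - 1
        s = 4
        for _ in range(p - 2):
            s = (s * s - 2) % m
        return s == 0

    # Assumed exponents of the five 1952 discoveries; the digit counts printed
    # match the 157, 183, 386, 664, and 687 digits mentioned above.
    for p in (521, 607, 1279, 2203, 2281):
        print(p, lucas_lehmer(p), len(str((1 << p) - 1)), "digits")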
Additionally, the SWAC was vital in performing the intensive calculations required for Dorothy Hodgkin's X-ray analysis of the structure of vitamin B12, work that was fundamental to Hodgkin receiving the Nobel Prize in Chemistry in 1964.
See also
List of vacuum tube computers
References
Williams, Michael R. (1997). A History of Computing Technology. IEEE Computer Society.
Further reading
External links
IEEE Transcript: SWAC—Standards Western Automatic Computer: The Pioneer Day Session at NCC July 1978
Oral history interview with Alexandra Forsythe, Charles Babbage Institute, University of Minnesota. Alexandra Illmer Forsythe discusses the career of her husband, George Forsythe. At UCLA he became involved with the National Bureau of Standards Western Automatic Computer (SWAC) until 1957, when the National Bureau of Standards closed its operation at UCLA. Also discusses his founding of the Stanford Computer Science Department.
Margaret R. Fox Papers, 1935-1976, Charles Babbage Institute, University of Minnesota. collection contains reports, including the original report on the ENIAC, UNIVAC, and many early in-house National Bureau of Standards (NBS) activity reports; memoranda on and histories of SEAC, SWAC, and DYSEAC; programming instructions for the UNIVAC, LARC, and MIDAC; patent evaluations and disclosures relevant to computers; system descriptions; speeches and articles written by Margaret Fox's colleagues; and correspondence of Samuel Alexander, Margaret Fox, and Samuel Williams.
MERSENNE AND FERMAT NUMBERS by RAPHAEL M. ROBINSON. February 7, 1954. From "The Prime Pages".
One-of-a-kind computers
Vacuum tube computers
National Institute of Standards and Technology
1950s computers
Computer-related introductions in 1950
1950 in California
Science and technology in Greater Los Angeles |
15939094 | https://en.wikipedia.org/wiki/JUGENE | JUGENE | JUGENE (Jülich Blue Gene) was a supercomputer built by IBM for Forschungszentrum Jülich in Germany. It was based on the Blue Gene/P design and succeeded the JUBL, which was based on an earlier design. At its introduction it was the second-fastest computer in the world, and in the month before its decommissioning in July 2012 it still held 25th place in the TOP500 list. The computer was owned by the Jülich Supercomputing Centre (JSC) and the Gauss Centre for Supercomputing.
With 65,536 PowerPC 450 cores clocked at 850 MHz and housed in 16 cabinets, the computer reached a peak processing power of 222.8 TFLOPS (Rpeak). With an official Linpack rating of 167.3 TFLOPS (Rmax), JUGENE took second place overall and was the fastest civilian/commercially used computer in the TOP500 list of November 2007.
The computer was financed by Forschungszentrum Jülich, the State of North Rhine-Westphalia, the Federal Ministry for Research and Education, and the Helmholtz Association of German Research Centres. The head of the JSC, Thomas Lippert, said that "The unique thing about our JUGENE is its extremely low power consumption compared to other systems even at maximum computing power". A Blue Gene/P system reaches about 0.35 GFLOPS per watt and is therefore roughly an order of magnitude more energy-efficient than a typical x86-based supercomputer on a comparable task.
In February 2009 it was announced that JUGENE would be upgraded to reach petaflops performance in June 2009, making it the first petascale supercomputer in Europe.
On May 26, 2009, the newly configured JUGENE was unveiled. It included 294,912 processor cores, 144 terabytes of memory, and 6 petabytes of storage in 72 racks. With a peak performance of about one petaflops, it was at the time the third-fastest supercomputer in the world, ranking behind IBM Roadrunner and Jaguar. The new configuration also incorporated a new water-cooling system that reduced cooling costs substantially.
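Both peak figures quoted above follow directly from the core counts and the 850 MHz clock if one assumes 4 double-precision floating-point operations per core per cycle, the figure usually quoted for the Blue Gene/P PowerPC 450 and stated here as an assumption. A quick check in Python:

    def peak_tflops(cores: int, clock_hz: float, flops_per_cycle: int = 4) -> float:
        """Theoretical peak = cores x clock rate x floating-point operations per cycle."""
        return cores * clock_hz * flops_per_cycle / 1e12

    print(peak_tflops(65_536, 850e6))   # ~222.8 TFLOPS, the initial 16-cabinet configuration
    print(peak_tflops(294_912, 850e6))  # ~1002.7 TFLOPS, about one petaflops after the 2009 upgrade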
The two front-end nodes of JUGENE ran SUSE Linux Enterprise Server 10.
JUGENE was decommissioned on 31 July 2012 and replaced by the Blue Gene/Q system JUQUEEN.
References
IBM supercomputers
Parallel computing
Petascale computers
Supercomputing in Europe |
23684527 | https://en.wikipedia.org/wiki/Antimachus%20%28mythology%29 | Antimachus (mythology) | Antimachus (Ancient Greek: Αντίμαχος, meaning "against battle", from αντι anti, "against", and μαχη mache, "battle") may refer to the following persons in Greek mythology:
Antimachus, the son of Hippodamas, son of the river Achelous and Aeolid Perimede.
Antimachus, one of the sons of Aegyptus. He married the Danaid Mideia who murdered him on their wedding night.
Antimachus, a Centaur. He attended the wedding of Pirithous and was slain by Caeneus.
Antimachus, the Thespian son of Heracles and Nicippe, daughter of King Thespius of Thespiae. Antimachus and his 49 half-brothers were born of Thespius' daughters, who were impregnated by Heracles in a single night, over a week, or in the course of 50 days (accounts vary) while he hunted the Cithaeronian lion. Later on, the hero sent a message to Thespius to keep seven of these sons and send three of them to Thebes, while the remaining forty, joined by Iolaus, were dispatched to the island of Sardinia to found a colony.
Antimachus, one of the Heraclides. He was the son of Thrasyanor and father of Deiphontes.
Antimachus, a Cretan warrior who came with Idomeneus to fight on the Greek side in the Trojan War. He was one of the warriors hidden in the Trojan horse. He was killed by Aeneas.
Antimachus, the Trojan father of Pisander, Hippolochus, Hippomachus, and Tisiphone. Bribed by Paris, he was against returning Helen to Menelaus.
Antimachus, one of the Suitors of Penelope who came from Dulichium along with 56 other wooers. He, with the other suitors, was shot dead by Odysseus with the assistance of Eumaeus, Philoetius, and Telemachus.
Notes
References
Apollodorus, The Library with an English Translation by Sir James George Frazer, F.B.A., F.R.S. in 2 Volumes, Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1921. ISBN 0-674-99135-4. Online version at the Perseus Digital Library. Greek text available from the same website.
Athenaeus of Naucratis, The Deipnosophists or Banquet of the Learned. London. Henry G. Bohn, York Street, Covent Garden. 1854. Online version at the Perseus Digital Library.
Athenaeus of Naucratis, Deipnosophistae. Kaibel. In Aedibus B.G. Teubneri. Lipsiae. 1887. Greek text available at the Perseus Digital Library.
Diodorus Siculus, The Library of History translated by Charles Henry Oldfather. Twelve volumes. Loeb Classical Library. Cambridge, Massachusetts: Harvard University Press; London: William Heinemann, Ltd. 1989. Vol. 3. Books 4.59–8. Online version at Bill Thayer's Web Site
Diodorus Siculus, Bibliotheca Historica. Vol 1-2. Immanel Bekker. Ludwig Dindorf. Friedrich Vogel. in aedibus B. G. Teubneri. Leipzig. 1888-1890. Greek text available at the Perseus Digital Library.
Gaius Julius Hyginus, Fabulae from The Myths of Hyginus translated and edited by Mary Grant. University of Kansas Publications in Humanistic Studies. Online version at the Topos Text Project.
Hesiod, Catalogue of Women from Homeric Hymns, Epic Cycle, Homerica translated by Evelyn-White, H G. Loeb Classical Library Volume 57. London: William Heinemann, 1914. Online version at theio.com
Homer, The Iliad with an English Translation by A.T. Murray, Ph.D. in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1924. Online version at the Perseus Digital Library.
Homer, Homeri Opera in five volumes. Oxford, Oxford University Press. 1920. Greek text available at the Perseus Digital Library.
Pausanias, Description of Greece with an English Translation by W.H.S. Jones, Litt.D., and H.A. Ormerod, M.A., in 4 Volumes. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1918. Online version at the Perseus Digital Library
Pausanias, Graeciae Descriptio. 3 vols. Leipzig, Teubner. 1903. Greek text available at the Perseus Digital Library.
Publius Ovidius Naso, Metamorphoses translated by Brookes More (1859-1942). Boston, Cornhill Publishing Co. 1922. Online version at the Perseus Digital Library.
Publius Ovidius Naso, Metamorphoses. Hugo Magnus. Gotha (Germany). Friedr. Andr. Perthes. 1892. Latin text available at the Perseus Digital Library.
Quintus Smyrnaeus, The Fall of Troy translated by Way. A. S. Loeb Classical Library Volume 19. London: William Heinemann, 1913. Online version at theio.com
Quintus Smyrnaeus, The Fall of Troy. Arthur S. Way. London: William Heinemann; New York: G.P. Putnam's Sons. 1913. Greek text available at the Perseus Digital Library
Tzetzes, John, Book of Histories, Book II-IV translated by Gary Berkowitz from the original Greek of T. Kiessling's edition of 1826. Online version at theio.com
Sons of Aegyptus
Centaurs
Children of Heracles
Heracleidae
Achaeans (Homer)
Trojans
Suitors of Penelope
Characters in Greek mythology
Thessalian mythology
Cretan characters in Greek mythology |
1212890 | https://en.wikipedia.org/wiki/Microchip%20implant%20%28animal%29 | Microchip implant (animal) | A microchip implant is an identifying integrated circuit placed under the skin of an animal. The chip, about the size of a large grain of rice, uses passive radio-frequency identification (RFID) technology, and is also known as a PIT (passive integrated transponder) tag. Standard pet microchips are typically 11–13 mm long (approximately half an inch) and 2 mm in diameter.
Externally attached microchips such as RFID ear tags are commonly used to identify farm and ranch animals, with the exception of horses. Some external microchips can be read with the same scanner used with implanted chips.
Animal shelters, animal control officers and veterinarians routinely look for microchips to return lost pets quickly to their owners, avoiding expenses for housing, food, medical care, outplacing and euthanasia. Many shelters place chips in all outplaced animals.
Microchips are also used by kennels, breeders, brokers, trainers, registries, rescue groups, humane societies, clinics, farms, stables, animal clubs and associations, researchers, and pet stores.
Usage
Since their first use in the mid-1980s, microchips have allowed innovative investigations into numerous biological traits of animals. The tiny, coded markers implanted into individual animals allow assessment of growth rates, movement patterns, and survival patterns for many species in a manner more reliable than traditional approaches of externally marking animals for identification. Microchips have also been used to confirm the identity of pets and protected species that have been illegally removed from the wild.
Microchips can be implanted by a veterinarian or at a shelter. After checking that the animal does not already have a chip, the vet or technician injects the chip with a syringe and records the chip's unique ID. No anesthetic is required, as it is a simple procedure and causes little discomfort; the pain is minimal and short-lived. In dogs and cats, chips are usually inserted below the skin at the back of the neck between the shoulder blades on the dorsal midline. According to one reference, continental European pets get the implant in the left side of the neck. The chip can often be felt under the skin. Thin layers of connective tissue form around the implant and hold it in place.
Horses are microchipped on the left side of the neck, halfway between the poll and withers and approximately one inch below the midline of the mane, into the nuchal ligament.
Birds are implanted in their breast muscles. Proper restraint is necessary so the operation requires either two people (an avian veterinarian and a veterinary technician) or general anesthesia. Studies on horses show swelling and increased sensitivity take approximately three days to resolve. Humans report swelling and bruising at the time of implant, two to four weeks for scar tissue to form and itching and pinching sensations for up to two years. A test scan ensures correct operation.
Some shelters and vets designate themselves as the primary contact to remain informed about possible problems with the animals they place. The registration form is sent to a registry, which may be the chip manufacturer, distributor, or an independent entity such as a pet recovery service. Some countries have a single official national database. For a fee, the registry typically provides 24-hour, toll-free telephone service for the life of the pet. Some veterinarians leave registration to the owner, usually done online, but a chip without current contact information is essentially useless.
The owner receives a registration certificate with the chip ID and recovery service contact information. The information can also be imprinted on a collar tag worn by the animal. Like an automobile title, the certificate serves as proof of ownership and is transferred with the animal when it is sold or traded; an animal without a certificate could be stolen. There are some privacy concerns regarding the use of microchips.
Authorities and shelters examine strays for chips, providing the recovery service with the ID number, description and location so that they may notify the owner or contact. If the pet is wearing the collar tag, the finder does not need a chip reader to contact the registry. An owner can also report a missing pet to the recovery service, as vets look for chips in new animals and check with the recovery service to see if it has been reported lost or stolen.
Many veterinarians scan an animal's chip on every visit to verify correct operation. Some use the chip ID as their database index and print it on receipts, test results, vaccination certifications and other records.
Some veterinary tests and procedures require positive identification of the animal, and a microchip may be acceptable for this purpose as an alternative to a tattoo.
Some pet doors can be programmed to be activated by the microchips of specific animals, allowing only certain animals to use the door.
Advantages of data collection
Pets
There are multiple reasons for the use of microchips on pets as a documentation device, which are also advantages of microchips for information collection. The major reasons for microchip implantation are delocalization, recording, domestication, and showing proof of ownership. For example, with a feline microchip, delocalization shows that a registered cat is one that society is aware of and that has a position in the social order of animals. Recording shows that the microchip helps authorized people review and monitor cats in a certain region by referring to the database; thus the registry and the implanted microchips turn cats into social objects.
Livestock
Because of the advantages of microchips, there are many concrete applications of RFID in the agri-food sector, covering most common foods such as meats, vegetables, and fruits. The traceability that RFID provides improves security and consumer confidence. Pigs are among the most widely raised livestock in the world, and their health is vital to farmers' income and ultimately affects consumers' health. Monitoring the health of individual pigs with traditional approaches is challenging, and diseases commonly spread from a single pig to nearly all the pigs living in the same pigsty. By using microchips to measure the drinking behavior of individual pigs housed in a group, it is possible to identify a pig's health and productivity state, since drinking behavior is a good indicator of a pig's overall health. Compared with traditional visual observation, RFID-based monitoring of pig drinking behavior is a feasible and more efficient option for determining a pig's health state.
Wildlife
The use of microchips in wild animals began with fisheries studies to determine the efficacy of this method for measuring fish movement. Studies using microchips to track wild animals expanded over the years to include research on mammals, reptiles, birds, and amphibians. Compared with earlier marking and tagging techniques used to identify wild animals, such as ear tags and color-coded leg bands, microchips are visually less obvious and less likely to be detected by prey and predators. Because traditional identifiers are on the exterior of the animal, tags can be lost, scars can heal, and tattoos can fade.
Other useful information can be collected with microchips. Chipped wild animals that are recaptured can provide data on growth rate and change of location, as well as other valuable information such as age structure, sex ratios, and the longevity of individuals in the wild. Studies on small mammals such as rats and mice have also adopted this technology to determine the body temperature of terminally ill animals. Because microchips are internal, permanent, durable in harsh environments, and have little influence on the animals, more researchers have employed microchip implantation to collect data in wildlife research.
Components of a microchip
A microchip implant is a passive RFID device. Lacking an internal power source, it remains inert until it is powered by the scanner or another power source. While the chip itself only interacts with limited frequencies, the device also has an antenna that is optimized for a specific frequency but is not selective; it may receive, generate current with, and reradiate stray electromagnetic waves. The radio waves emitted by the scanner activate the chip, making the chip transmit the identification number to the scanner, which displays the number on screen. The microchip is enclosed in a biocompatible glass cylinder and includes an identifying integrated circuit. Relevant standards for the chips are ISO 11784 and ISO 11785.
Most implants contain three elements: a 'chip' or integrated circuit; a coil inductor, possibly with a ferrite core; and a capacitor. The chip contains unique identification data and electronic circuits to encode that information. The coil acts as the secondary winding of a transformer, receiving power inductively coupled to it from the scanner. The coil and capacitor together form a resonant LC circuit tuned to the frequency of the scanner's oscillating magnetic field to produce power for the chip. The chip then transmits its data back through the coil to the scanner, using a method called backscatter: it becomes part of the electromagnetic field and modulates it in a manner that communicates the ID number to the scanner.
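A small numerical sketch of the resonance described above uses the standard LC formula f = 1/(2*pi*sqrt(L*C)); the component values below are purely illustrative and are not taken from any particular chip.

    from math import pi, sqrt

    def resonant_frequency_hz(inductance_h: float, capacitance_f: float) -> float:
        """Resonant frequency of the tag's LC tank circuit: f = 1 / (2*pi*sqrt(L*C))."""
        return 1.0 / (2 * pi * sqrt(inductance_h * capacitance_f))

    # Illustrative values only: a 2.35 mH coil with a 600 pF capacitor resonates
    # near the 134.2 kHz frequency used by ISO-standard scanners.
    print(resonant_frequency_hz(2.35e-3, 600e-12))  # ~134,000 Hz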
These components are encased in biocompatible soda lime or borosilicate glass and hermetically sealed. Leaded glass should not be used for pet microchips and consumers should only accept microchips from reliable sources. The glass is also sometimes coated with polymers. Parylene C (chlorinated poly-dimethylbenzene) has become a common coating. Plastic pet microchips have been registered in the international registry since 2012 under Datamars manufacturer code 981 and are being implanted in pets. The patent suggests it is a silicon filled polyester sheath, but the manufacturer does not disclose the exact composition.
Animal species
Many animal species have been microchipped, including cockatiels and other parrots, horses, llamas, alpacas, goats, sheep, miniature pigs, rabbits, deer, ferrets, penguins, sharks, snakes, lizards, alligators, turtles, toads, frogs, rare fish, chimpanzees, mice, and prairie dogs—even whales and elephants. The U.S. Fish and Wildlife Service uses microchipping in its research of wild bison, black-footed ferrets, grizzly bears, elk, white-tailed deer, giant land tortoises and armadillos.
Use by country
Some countries require microchips in imported animals to match vaccination records. Microchip tagging may also be required for CITES-regulated international trade in certain endangered animals: for example, Asian Arowana are tagged to limit import to captive-bred fish. Birds that are not banded and cross international borders as pets or for trade are microchipped so that each bird is uniquely identifiable.
Australia
Microchips are legally required in the state of New South Wales, Australia.
Because the ability to trace livestock from the property of birth to slaughter is critical to the safety of red meat, the Australian red meat industry has implemented a national system known as the National Livestock Identification System to ensure the quality and safety of beef, lamb, sheep meat, and goat meat. There are weaknesses in the current microchipping system in Australia. Research published in 2015, based on statistical analysis of data on dogs and cats living in Australia, found that reclaim rates were significantly higher for animals with microchips than for those without. To determine the nature and frequency of inaccurate microchip data used for locating the owners of stray pets, the researchers also analyzed admission data for stray dogs and cats entering RSPCA Queensland (QLD) shelters. The results show that problems with microchip data may reduce the likelihood that a pet's owner can be contacted to reclaim the animal. The current microchipping system in Australia needs improvement, and microchip owners need to update their data regularly.
France
Since 1999, all dogs older than four months must be permanently identified with a microchip (or a tattoo, though the latter is not accepted if the animal is to leave the country).
Cats are not required to be microchipped, though 2019 recorded increased support for mandatory chipping. Instead, since 1 January 2012, all cats older than seven months require mandatory registration in the European Union database.
Israel
Dogs and cats imported to Israel are required to be microchipped with an ISO 11784/11785 compliant 15 digit pet microchip.
Japan
Japan requires ISO-compliant microchips or a compatible reader on imported dogs and cats.
New Zealand
All dogs first registered after 1 July 2006 must be microchipped. Farmers protested that farm dogs should be exempt, drawing a parallel to the Dog Tax War of 1898. Farm dogs were exempted from microchipping in an amendment to the legislation passed in June 2006. A National Animal Identification and Tracing scheme in New Zealand is currently being developed for tracking livestock.
United Kingdom
In April 2012, Northern Ireland became the first part of the United Kingdom to require microchipping of individually licensed dogs.
As of 6 April 2016, all dogs in England, Scotland and Wales must be microchipped.
United States
Microchipping of pets and other animals is voluntary, except for some legislation mandating microchipping as a means of identifying animals that have been deemed dangerous. In 1994, the Louisiana Department of Agriculture and Forestry (LDAF) issued a regulation requiring permanent identification (in the form of a brand, lip tattoo, or electronic identification) of all horses tested for equine infectious anemia. According to the LDAF and the state veterinarian, this requirement made a huge contribution to determining the owners of horses displaced during Hurricane Katrina in fall 2005.
The United States uses the National Animal Identification System for farm and ranch animals other than dogs and cats. In most species, except horses, an external eartag is typically used in lieu of an implant microchip. Eartags with microchips or simply stamped with a visible number can be used. Both use ISO fifteen-digit microchip numbers with the U.S. country code of 840.
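As a rough sketch of how such a fifteen-digit identifier breaks down (an interpretation of the numbering scheme described in this article, not an official parser): the leading three digits carry either a country code such as 840 or a shared manufacturer code in the 900-998 range, and the remaining twelve digits are the serial number.

    def parse_iso_microchip(number: str) -> dict:
        """Split a 15-digit ISO 11784-style identifier into its 3-digit code and 12-digit serial."""
        if len(number) != 15 or not number.isdigit():
            raise ValueError("expected 15 decimal digits")
        code = int(number[:3])
        return {
            "code": code,
            # 900-998 is the shared manufacturer range; other values are country codes.
            "code_type": "manufacturer" if 900 <= code <= 998 else "country",
            "serial": number[3:],
        }

    # Hypothetical identifier beginning with the U.S. country code 840.
    print(parse_iso_microchip("840123456789012"))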
Cross-compatibility and standards issues
In most countries, pet ID chips adhere to an international standard to promote compatibility between chips and scanners. In the United States, however, three proprietary types of chips compete along with the international standard. Scanners distributed to United States shelters and veterinarians well into 2006 could each read at most three of the four types. Scanners with quad-read capability are now available and are increasingly considered required equipment. Older scanner models will be in use for some time, so United States pet owners must still choose between a chip with good coverage by existing scanners and one compatible with the international standard. The four types include:
The ISO conformant full-duplex type has the greatest international acceptance. It is common in many countries including Canada and large parts of Europe (since the late 1990s). It is one of two chip protocol types (along with the "half-duplex" type sometimes used in farm and ranch animals) that conform to International Organization for Standardization standards ISO 11784 and ISO 11785. To support international/multivendor application, the three-digit country code can contain an assigned ISO country code or a manufacturer code from 900 to 998 plus its identifying serial number. In the United States, distribution of this type has been controversial. When 24PetWatch.com began distributing them in 2003 (and more famously Banfield Pet Hospitals in 2004) many shelter scanners couldn't read them. At least one Banfield-chipped pet was inadvertently euthanized.
The Trovan Unique type is another pet chip protocol type in use since 1990 in pets in the United States. Patent problems forced the withdrawal of Trovan's implanter device from United States distribution and they became uncommon in pets in the United States, although Trovan's original registry database "infopet.biz" remained in operation. In early 2007, the American Kennel Club's chip registration service, AKC Companion Animal Recovery Corp, which had been the authorized registry for HomeAgain brand chips made by Destron/Digital Angel, began distributing Trovan chips with a different implanter. These chips are read by the Trovan, HomeAgain (Destron Fearing), Bayer (Black Label), and Avid (MiniTracker 3) readers.
A third type, sometimes known as FECAVA or Destron, is available under various brand names. These include, in the United States, "Avid Eurochip", the common current 24PetWatch chips, and the original (and still popular) style of HomeAgain chips. (HomeAgain and 24Petwatch can now supply the true ISO chip instead on request.) Chips of this type have ten-digit hexadecimal chip numbers. This "FECAVA" type is readable on a wide variety of scanners in the United States and has been less controversial, although its level of adherence to the ISO standards is sometimes exaggerated in some descriptions. The ISO standard has an annex (appendix) recommending that three older chip types be supported by scanners, including a 35-bit "FECAVA"/"Destron" type. The common Eurochip/HomeAgain chips don't agree perfectly with the annex description, although the differences are sometimes considered minor. But the ISO standard also makes it clear that only its 64-bit "full-duplex" and "half-duplex" types are "conformant"; even chips (e.g., the Trovan Unique) that match one of the Annex descriptions are not. More visibly, FECAVA cannot support the ISO standard's required country/manufacturer codes. They may be accepted by authorities in many countries where ISO-standard chips are the norm, but not by those requiring literal ISO conformance.
Finally, there's the AVID brand FriendChip type, which has unique encryption characteristics. Cryptographic features are welcomed by pet rescuers or humane societies that object to outputting an ID number "in the clear" for anyone to read, along with authentication features for detection of counterfeit chips, but the authentication in "FriendChips" has been found lacking and rather easy to spoof to the AVID scanner. Although no authentication encryption is involved, obfuscation requires proprietary information to convert transmitted chip data to its original label ID code. Well into 2006, scanners containing the proprietary decryption were provided to the United States market only by AVID and Destron/Digital Angel; Destron/Digital Angel put the decryption feature in some, but not all, of its scanners, possibly as early as 1996. (For years, its scanners distributed to shelters through HomeAgain usually had full decryption, while many sold to veterinarians would only state that an AVID chip had been found.) Well into 2006, both were resisting calls from consumers and welfare group officials to bring scanners to the United States shelter community combining AVID decryption capability with the ability to read ISO-compliant chips. Some complained that AVID itself had long marketed combination pet scanners compatible with all common pet chips except possibly Trovan outside the United States. By keeping them out of the United States, it could be considered partly culpable in the missed-ISO chips problem others blamed on Banfield. In 2006, the European manufacturer Datamars, a supplier of ISO chips used by Banfield and others, gained access to the decryption secrets and began supplying scanners with them to United States customers. This "Black Label" scanner was the first four-standard full-multi pet scanner in the United States market. Later in 2006, Digital Angel announced that it would supply a full-multi scanner in the United States. In 2008, Avid introduced the MiniTracker Pro to support Avid, FECAVA, and ISO full-duplex microchips. Trovan also acquired the decryption technology in 2006 or earlier, and now provides it in scanners distributed in the United States by AKC-CAR. (Some are quad-read, but others lack full ISO support.)
Many references in print state that the incompatibilities between different chip types are a matter of "frequency". One may find claims that early ISO adopters in the United States endangered their customers' pets by giving them ISO chips that work at a "different frequency" from the local shelter's scanner, or that the United States government considered forcing an incompatible frequency change. These claims were little challenged by manufacturers and distributors of ISO chips, although later evidence suggests the claims were disinformation. All chips operate at the scanner's frequency. Although ISO chips are optimized for 134.2 kHz, in practice they are readable at 125 kHz and the "125 kHz" chips are readable at 134.2 kHz. Confirmation comes from government filings that indicate the supposed "multi-frequency" scanners now commonly available are really single-frequency scanners operating at 125, 134.2 or 128 kHz. In particular, the United States HomeAgain scanner didn't change excitation frequency when ISO-read capability was added; it's still a single frequency, 125 kHz scanner.
For users requiring shelter-grade certainty, published compatibility tables are not a substitute for testing a scanner with a set of specimen chips. One study cites problems with certain Trovan chips on the Datamars Black Label scanner. In general, the study found that none of the tested scanners read all four standards without some deficiency, but it predates the most recent scanner models.
Difficulties in identifying a lost pet through its microchip
It can be challenging to identify a lost pet through its microchip. Not every scanner can read every chip, and even the best scanners miss some chips. The main issues are patent protection, business interests, and politics. It can also be difficult to ascertain which registry service holds the pet's identifying information. The American Animal Hospital Association (AAHA) Universal Pet Microchip Lookup Tool is an internet-based application that helps identify the registries with which a particular microchip is registered, or otherwise provides the chip's manufacturer. Thanks to AAHA's effort, it is easier to determine which registry keeps an animal's identifying information through a single microchip search site. The tool works by searching the databases of participating companies. To protect owners' privacy, it does not return pet owner information contained in the registries' databases; instead, it displays which registries should be contacted when a lost pet is scanned and its microchip number is identified. However, since not all microchip registry companies participate in the tool, it is missing a significant database, that of Avid Identification Systems, Inc.
Reported adverse reactions
Adverse event reporting for animal microchips has been inconsistent. RFID chips are used in animal research, and at least three studies conducted since the 1990s have reported tumors at the site of implantation in laboratory mice and rats. The UK's Veterinary Medicines Directorate (VMD) assumed the task of adverse event reporting for animal microchips there in April 2014. Mandatory adverse event reporting went into effect in the UK in February 2015. The first report was issued for the period of April 2014 through December 2015. Mandatory microchip implant of dogs went into effect in April 2016. Data sets for 2016 through 2018 have become available. Adverse reactions to microchip implants may include infection, rejection, mass and tumor formation or death, but the risk of adverse reactions is very low. Sample sizes, in rodents and dogs in particular, have been small, and so conclusive evidence has been limited.
Noted veterinary associations have responded with continued support for microchip implantation as reasonably safe for cats and dogs, pointing to rates of serious complications on the order of one in a million in the UK, which has a system for tracking such adverse reactions and has chipped over 3.7 million pet dogs. A 2011 study found no safety concerns for microchipped animals with RFID chips undergoing MRI at one tesla magnetic field strength. In 2011 a microchip-associated fibrosarcoma was reported in the neck of a nine-year-old neutered male cat. Histological examination was consistent with postinjection sarcoma, but all prior vaccinations had been given in the hind legs.
The microchip is implanted in the subcutaneous tissues, causing an inflammatory response until scar tissue develops around the microchip. Studies on horses are the basis for claims of a short inflammatory response, although the procedure is also performed on small kittens and puppies. People have reported swelling and bruising at the time of the implant, with itching and pinching sensations for up to two years. The broader impacts on inflammatory disorders and cancer have not been determined, and most of the health risks defined in the FDA guidance developed for human implants should be considered. Adverse event reporting in the US can be made by the pet owner or a veterinarian to the FDA.
The total cat and dog population of the UK is estimated at 16 million, with 8.5 million dogs subject to mandatory microchip implantation. The proportion of dogs implanted was about 60% in February 2013, before mandatory adverse event reporting began in February 2015, and 86% by April 2016. Approximately 95% were reported to be implanted as of April 2017.
Privacy
Unauthorized reading of microchips can present a risk to privacy and can potentially provide information to identify or track packages, consumers, carriers, or even owners of different animals. Several prototype systems are being developed to combat unauthorized reading, including RFID signal interruption, as well as the possibility of legislation. Hundreds of scientific papers have been published on this matter since 2002. Different countries have responded differently to these issues.
As early as 1997, some scholars believed that microchip implantation was technically possible but suggested that it was time to consider strategies for preventing potentially grievous intrusions into personal privacy. It is possible that microchips implanted in animals can also lead to privacy issues or information breaches, which can in turn lead to serious social problems.
The microchip ownership question
The widespread adoption of microchip identification may lead to more frequent ownership disputes, since microchip registration information is sometimes irrelevant under ownership laws. This can occur when the owner is not the person to whom the microchip registration belongs. This is a significant problem because client confidentiality rules generally prohibit veterinarians from divulging information about a pet without the client's permission. Furthermore, veterinarians are required to get permission from the person who registered the chip before performing surgery on a microchipped animal, even if the animal is experiencing a severe medical emergency. The problem can be more complicated if animals with microchips are abandoned or stolen.
Protecting privacy
The first method of protecting microchip privacy is regularly updating the registered information. Stray animals with incorrect microchip details are less likely to be reclaimed than pets with correct details, the time taken to retrieve them is longer, and sometimes reuniting is impossible. It is therefore wise to update microchip information regularly, especially when owners move or change their phone numbers. According to research, email reminders may increase how often pet owners update their microchip information. More frequent updates increase the reclaim rates for stray animals and reduce the number of pets euthanized in shelters every year.
Another method of protection is cryptography. Rolling codes and challenge-response authentication (CRA) are commonly used to foil the replay of recorded messages between the tag and reader, since any message that has been recorded will fail on repeat transmission. Some novel RFID authentication protocols for microchip ownership transfer could be adapted to protect users' privacy; they meet three key requirements for secure microchip ownership transfer: new owner privacy (only the new owner should be able to identify and control the microchip), old owner privacy (past interactions between the microchip and its previous owner should not be traceable by the new owner), and authorization recovery (the new owner should be able to transfer its authorization rights back to the previous owner in special cases). These features can protect owners' privacy to some extent.
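To illustrate the challenge-response idea in the abstract: the reader sends a fresh random challenge, and the tag answers with a keyed digest of it, so a recorded reply is useless later. Real low-frequency animal tags are far more constrained and do not run general-purpose hash functions; the key handling and algorithm below are purely illustrative.

    import hashlib
    import hmac
    import os

    def tag_response(shared_key: bytes, challenge: bytes) -> bytes:
        """Tag side: answer the reader's challenge with a keyed digest of it."""
        return hmac.new(shared_key, challenge, hashlib.sha256).digest()

    # Reader side: a fresh nonce per read attempt defeats replay of recorded traffic.
    shared_key = os.urandom(16)   # secret provisioned at manufacture (illustrative)
    challenge = os.urandom(8)
    expected = tag_response(shared_key, challenge)
    print(hmac.compare_digest(expected, tag_response(shared_key, challenge)))  # True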
Manufacturers and registers
In the United States, the history of some tag manufacturers dates back more than 30 years. Several of the major tag manufacturers are listed below:
AVID, Inc.(American Veterinary Identification Devices): www.avidid.com; Norco, California
Biomark, Inc.: www.biomark.com; Meridian, Idaho
Bio Medic Data Systems, Inc.: www.bmds.com; Seaford, Delaware
Digital Angel Corporation (formerly Destron Fearing, Inc.): www.destronfearing.com; St. Paul, Minnesota
Trovan, Ltd.: www.trovan.com; Santa Barbara, California
Some RFID registries in the United States include:
Home Again
AVID
AKC Reunite (formerly AKC Companion Animal Recovery [CAR])
Digital Angel
ResQ
ALLFLEX
Schering Plough
24 PET WATCH
Lifechip
Banfield
Crystal Tag
Datamars
Destron Fearing
See also
Microchip implant (human)
PositiveID
Proximity card
Pet recovery service
Remote-controlled animal
Radio-frequency identification
Clipped tag
Consumer privacy
ISO 11784 and ISO 11785
Notes
References
External links
Lost Pet Found After 13 Years (Apparently the current record for this type of story)
Cat equipment
Dog equipment
Dogs as pets
Radio-frequency identification
Identification of domesticated animals
Animal trade |
22528633 | https://en.wikipedia.org/wiki/John%20Grant%20%28American%20football%29 | John Grant (American football) | John David Grant (born June 28, 1950) is a former American football defensive tackle in the National Football League, playing seven seasons with the Denver Broncos.
Born and raised in Boise, Idaho, Grant graduated from its Capital High School and played college football at the University of Southern California in Los Angeles under head coach John McKay. In his senior season in 1972, the Trojans were undefeated and consensus national champions. Grant was first-team All-Pac-8 in 1971 and 1972, and a second-team All-American in 1972.
Grant was among ten Trojans selected in the 1973 NFL Draft, taken in the seventh round by Denver. He was part of the Broncos' Orange Crush defense in 1977 which led the team to Super Bowl XII; it was the franchise's first appearance in the postseason.
References
External links
1950 births
Living people
Sportspeople from Boise, Idaho
Players of American football from Idaho
American football defensive tackles
American football defensive ends
USC Trojans football players
Denver Broncos players |
51026855 | https://en.wikipedia.org/wiki/AppFolio | AppFolio | AppFolio is a company founded in 2006 that offers software-as-a-service (SaaS) applications for vertical markets. It primarily provides cloud-based property management software, services, and data analytics to the real estate industry.
The company’s headquarters is in Goleta, California, in the Santa Barbara area.
History
AppFolio was established in 2006 by co-founders Klaus Schauser and Jon Walker. Schauser had previously founded Expertcity.
The company’s first focus was property management for small to medium businesses and its first product, AppFolio Property Manager, was launched in 2007.
In November 2012, AppFolio acquired MyCase, a "legal practice management software provider."
AppFolio purchased real estate software firm RentLinx in April 2015. This acquisition included rights to the website ShowMeTheRent.com, which increased AppFolio’s listing presence.
In May 2015, AppFolio announced its IPO, which was unveiled in June.
In September 2018, AppFolio reported acquisition of utility analytics software WegoWise.
In January 2019, AppFolio acquired Dynasty Marketplace, Inc., for $60 million.
In September 2020, AppFolio announced the sale of MyCase to private equity firm Apax Funds, for approximately $193 million.
Products
AppFolio Property Manager
A property technology solution with accounting, marketing, leasing, and management functionality for multifamily and single-family, commercial, student housing, community association, and mixed portfolio property managers.
AppFolio Investment Management
A software platform designed for real estate investment management, with tools for fund management and syndication.
Awards
In 2020, AppFolio ranked #1 in Fortune’s list of Fastest-Growing Companies.
The same year, AppFolio was recognized as a Best Place to Work by Glassdoor.
References
External links
Property management
Software companies established in 2006
Software companies based in California
Companies based in Santa Barbara County, California
Cloud computing providers
Companies listed on the Nasdaq
2015 initial public offerings
Property management companies |
28809645 | https://en.wikipedia.org/wiki/NetPoint | NetPoint | NetPoint is a graphically-oriented project planning and scheduling software application first released for commercial use in 2009. NetPoint's headquarters are located in Chicago, Illinois. The application uses a time-scaled activity network diagram to facilitate interactive project planning and collaboration. NetPoint provides planning, scheduling, resource management, and other project controls functions.
NetPoint is capable of calculating schedules using both the Critical Path Method (CPM) as well as the Graphical Path Method (GPM).
Schedules created in NetPoint can be exported for use in Primavera, Microsoft Project, and other CPM-based Project management software.
See also
Project planning
Project management
Project management software
Comparison of project management software
Schedule (project management)
Critical path method
References
External links
Gilbane: Interactive Scheduling
Mosaic: List of Scheduling Tools
CPM in Construction Management: List of CPM Software
Project management software
Critical Path Scheduling
Schedule (project management) |
43596383 | https://en.wikipedia.org/wiki/Convercent | Convercent | Convercent is a Denver, Colorado-based software company that helps companies design and implement compliance programs. The company's Convercent governance, risk management and compliance (GRC) software integrates the management of corporate compliance risk, cases, disclosures, training and policies.
Software
The company's software is delivered using the software as a service model.
Funding
In January 2013, Convercent received $10.2 million in funding, led by Azure Capital Partners and Mantucket Capital and with participation from City National Bank.
In October 2013, Convercent raised $10M in Series B funding led by Sapphire Ventures (formerly SAP Ventures), with participation from existing investors Azure Capital Partners, Rho Capital Partners, and Mantucket Capital.
Customers
Convercent has hundreds of customers in more than 130 countries, including Philip Morris International, CH2M Hill and Under Armour.
References
Risk management software
Companies based in Denver |
1865277 | https://en.wikipedia.org/wiki/In%20re%20Aimster%20Copyright%20Litigation | In re Aimster Copyright Litigation | In re Aimster Copyright Litigation, 334 F.3d 643 (7th Cir. 2003), was a case in which the United States Court of Appeals for the Seventh Circuit addressed copyright infringement claims brought against Aimster, concluding that a preliminary injunction against the file-sharing service was appropriate because the copyright owners were likely to prevail on their claims of contributory infringement, and that the fact that the service could have non-infringing uses was an insufficient reason to reverse the district court's decision. The appellate court also noted that the defendant could have limited the quantity of the infringements if it had eliminated an encryption system feature and had monitored the use of its systems. As a result, the defense did not fall within the safe harbor of 17 U.S.C. § 512(i) and could not be used as an excuse for not knowing about the infringement. In addition, the court decided that the harm done to the plaintiff was irreparable and outweighed any harm to the defendant created by the injunction.
Background
Recording industry owners of copyrights in musical performances brought a contributory and vicarious infringement action (forms of secondary liability) against the operator of Aimster, a service similar to Napster that facilitated the swapping of digital copies of songs over the internet.
Owners of copyrighted popular music claimed that John Deep ("Deep")'s Aimster Internet service was a contributory and vicarious infringer of these copyrights. The United States District Court for the Northern District of Illinois (Judge Marvin E. Aspen, Jr.) granted a preliminary injunction for the plaintiffs, which shut down the defendant's service until the suit was resolved. Aimster appealed the preliminary injunction to the Court of Appeals for the Seventh Circuit.
The defendants argued that, unlike Napster, they had designed their technology in such a way that they had no way of monitoring the content of swapped files. Someone who wanted to use Aimster's basic service for the first time to swap files had to download Aimster's software and then register on the system. After doing so, the user could designate any other registered user as a "buddy", with whom he could communicate directly whenever both were online and exchange music files. If the user did not designate any buddies, then all users of the system automatically became his buddies for file sharing.
Opinion
The court held that in this case the users of the system were the direct infringers: users who are ignorant of, or more commonly disdainful of, copyright and who in any event discount the likelihood of being sued or prosecuted for copyright infringement. However, companies such as Aimster that facilitate their infringement, even if they are not themselves direct infringers, can be liable for copyright violations as contributory infringers.
The court noted that copyrighted materials might sometimes be shared between users of such a system without the authorization of the copyright holder and that, in such cases, the fair-use privilege would not make Aimster a contributory infringer. As held in Sony Corp. of America v. Universal City Studios, Inc., also known as the Betamax case, the producer of a product which has substantial noninfringing uses is not a contributory infringer merely because some of the uses actually made of the product are infringing. At issue in that case was a video recording machine called Betamax, the predecessor of today's videocassette recorders. Discussing the sale of the Betamax, the court explained that the ability of a service provider to prevent its customers from infringing is a factor to be considered in determining whether the provider is a contributory infringer. Aimster, however, was not able to produce any evidence that its service had ever been used for a noninfringing purpose; instead, the facts showed that Aimster encouraged infringing activities.
The court rejected Aimster's argument that, to prevail, the recording industry had to prove some actual loss of money caused by the copying to which Aimster's service contributed. The court explained that although the court in Betamax emphasized that the plaintiffs had failed to show that they had sustained substantial harm from Sony's video recorder, it did so in the context of assessing the argument that time shifting of television programs was fair use rather than infringement. The court believed that Betamax was not hurting the copyright owners because it was enlarging the audience for their programs, as well as advertisements. However, it was also clear that even though compensatory damages cannot be awarded without proof of economic loss, the plaintiffs could still obtain statutory damages and an injunction.
The Court also rejected Aimster's argument that, because the court said in Betamax that mere constructive knowledge of infringing uses is not enough for contributory infringement (464 U.S. at 439, 104 S.Ct. 774) and the encryption feature of Aimster's service prevented Deep from knowing what songs were being copied by users of his system, Aimster lacked the knowledge of infringing uses that liability for contributory infringement requires. The opinion makes it clear that a service provider that otherwise fits the characteristics of a contributory infringer does not obtain any sort of immunity by using encryption to avoid knowledge of the unlawful purposes for which the service is being used. In fact, a tutorial for the Aimster software showed, as its only examples of file sharing, the sharing of copyrighted works. In this sense the tutorial was nothing but an invitation to infringe copyrighted music, the sort of invitation that the Supreme Court could not find in the Sony case.
Willful blindness is knowledge, in copyright law (where indeed it may be enough that the defendant should have known of the direct infringement, see Casella v. Morris), as it is in the law generally. Another example is Louis Vuitton S.A. v. Lee, 875 F.2d 584, 590 (7th Cir. 1989) (contributory trademark infringement). The doctrine of willful blindness is established in many criminal statutes, which require proof that a defendant acted knowingly or willfully. Courts have held that defendants cannot escape the reach of these statutes by deliberately shielding themselves from clear evidence of critical facts that are strongly suggested by the circumstances; those who behave in such a manner are treated as though they had actual knowledge.
Lastly, the court established that the DMCA § 512 "safe harbors" were unavailable because Aimster had done nothing to comply reasonably with Section 512(i)'s requirement to establish a policy to terminate repeat infringers and instead even encouraged repeat infringement.
Opinion of the Judge
The opinion was written by Judge Richard Posner, known for his publications on law and economics, and followed closely on the heels of the Ninth Circuit's decision in A & M Records, Inc. v. Napster, Inc.
Conclusion
The decision of the District Court was affirmed, concluding that a preliminary injunction against the file-sharing service was appropriate.
Subsequent developments
A petition for a writ of certiorari to review the decision of the U.S. Court of Appeals for the Seventh Circuit was denied.
See also
Sony Corp. of America v. Universal City Studios, Inc.
United States Court of Appeals for the Seventh Circuit
Richard Posner
References
External links
United States copyright case law
United States Court of Appeals for the Seventh Circuit cases
2003 in United States case law |
36935800 | https://en.wikipedia.org/wiki/Response%20policy%20zone | Response policy zone | A response policy zone (RPZ) is a mechanism to introduce a customized policy in Domain Name System servers, so that recursive resolvers return possibly modified results. By modifying a result, access to the corresponding host can be blocked.
Usage of an RPZ is based on DNS data feeds, delivered via zone transfer from an RPZ provider to the deploying server. Compared with other blocklist methods, such as Google Safe Browsing, the actual blocklist is not managed, or even seen, by the client application. Web browsers, and any other client applications which connect to servers on the Internet, need the IP address of the server in order to open the connection. The local resolver is usually system software which in turn puts the query to a recursive resolver, which is often located at the Internet service provider. If the latter server deploys RPZ, and either the queried name or the resulting address is in the blocklist, the response is modified so as to impede access.
History
The RPZ mechanism was developed by the Internet Systems Consortium, led by Paul Vixie, as a component of the BIND Domain Name Server (DNS). It was first available in BIND release 9.8.1, released in 2010, and was first publicly announced at Black Hat in July 2010. It is also available in the Unbound software as of version 1.14.0.
The RPZ mechanism is published as an open and vendor-neutral standard for the interchange of DNS Firewall configuration information, allowing other DNS resolution software to implement it.
RPZ was developed as a technology to combat the misuse of the DNS by groups and/or persons with malicious intent or other nefarious purposes. It follows on from the Mail Abuse Prevention System project which introduced reputation data as a mechanism for protecting against email spam. RPZ extends the use of reputation data into the Domain Name System.
Function
RPZ allows a DNS recursive resolver to choose specific actions to be performed for a number of collections of domain name data (zones).
For each zone, the DNS service may choose to perform full resolution (normal behaviour), or other actions, including declaring that the requested domain does not exist (technically, NXDOMAIN), or that the user should visit a different domain (technically, CNAME), amongst other potential actions.
As zone information can be obtained from external sources (via a zone transfer) this allows a DNS service to obtain information from an external organisation about domain information and then choose to handle that information in a non-standard manner.
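As a rough illustration of this behaviour, the sketch below applies a small in-memory policy table to incoming queries, returning an unmodified answer, an NXDOMAIN-style answer, or a CNAME redirect. The domain names, data structures, and function names are invented for the example and are not taken from BIND, Unbound, or any other implementation:

```python
# Hypothetical policy zone: queried domain name -> (action, optional target).
POLICY_ZONE = {
    "malware.example.com": ("NXDOMAIN", None),                        # pretend it does not exist
    "phishing.example.net": ("CNAME", "walled-garden.example.org"),   # send the user elsewhere
}

def upstream_resolve(name):
    # Stand-in for normal recursive resolution of a name not covered by policy.
    return "192.0.2.10"

def resolve_with_rpz(name):
    """Return a (response type, data) pair after applying the policy zone."""
    action = POLICY_ZONE.get(name)
    if action is None:
        return ("A", upstream_resolve(name))   # normal, unmodified answer
    kind, target = action
    if kind == "NXDOMAIN":
        return ("NXDOMAIN", None)              # caller is told the domain does not exist
    if kind == "CNAME":
        return ("CNAME", target)               # caller is redirected to another domain
    return ("A", upstream_resolve(name))

print(resolve_with_rpz("example.com"))          # ('A', '192.0.2.10')
print(resolve_with_rpz("malware.example.com"))  # ('NXDOMAIN', None)
```

In a real deployment the policy table would be populated from an externally supplied zone rather than hard-coded, which is exactly what the zone transfer described above provides.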
Purpose
RPZ is essentially a filtering mechanism, either preventing people from visiting internet domains, or pointing them to other locations by manipulating the DNS answers in different ways.
RPZ provides the opportunity for DNS recursive resolver operators to be able to obtain reputational data from external organisations about domains that may be harmful, and then use that information to avoid harm coming to the computers that use the recursive resolver by preventing those computers from visiting the potentially harmful domains.
Mechanism and data
RPZ is a mechanism that needs data on which it is to respond.
Some Internet security organisations offered data describing potentially dangerous domains early in the development of the RPZ mechanism. Other services also offer RPZ feeds for specific domain categories (for example, adult content domains). A recursive resolver operator can also easily define their own domain name data (zones) to be used by RPZ.
Example of use
Consider that Alice uses a computer which uses a DNS service (recursive resolver) which is configured to use RPZ and has access to some source of zone data which lists domains that are believed to be dangerous.
Alice receives an email with a link that appears to resolve to some place that she trusts, and she wishes to click on the link. She does so, but the actual location is not the trusted source that she read but a dangerous location which is known to the DNS service.
Because the DNS service recognizes that the resulting web location is dangerous, instead of informing her computer how to reach it (an unmodified response), it sends information leading to a safe location. Depending on how the DNS service configures its policy actions, the modified response can be a fixed page on a web site informing her of what has happened, a DNS error code such as NXDOMAIN or NODATA, or no response at all.
See also
Google Safe Browsing
BIND
DNS management software
Quad9
References
External links
The original blog post (Paul Vixie)
Slides with more detail (Paul Vixie) - Link broken
Spamhaus' RPZ data feed information
Building DNS Firewalls with Response Policy Zones
Using URLhaus as a Response Policy Zone (RPZ)
DNS software
Free network-related software |
1001960 | https://en.wikipedia.org/wiki/Cmp%20%28Unix%29 | Cmp (Unix) | In computing, cmp is a command-line utility on Unix and Unix-like operating systems that compares two files of any type and writes the results to the standard output. By default, cmp is silent if the files are the same; if they differ, the byte and line number at which the first difference occurred is reported. The command is also available in the OS-9 shell.
History
cmp has been part of the X/Open Portability Guide since issue 2 of 1987. It was inherited into the first version of POSIX.1 and the Single Unix Specification, and first appeared in Version 1 Unix.
The version of cmp bundled in GNU coreutils was written by Torbjorn Granlund and David MacKenzie.
The command is available as a separate package for Microsoft Windows as part of the UnxUtils collection of native Win32 ports of common GNU Unix-like utilities. The command has also been ported to the IBM i operating system.
Switches
cmp may be qualified by the use of command-line switches; the set of switches supported varies between notable implementations.
Operands that are byte counts are normally decimal, but may be preceded by '0' for octal and '0x' for hexadecimal.
A byte count can be followed by a suffix to specify a multiple of that count; in this case an omitted integer is understood to be 1. A bare size letter, or one followed by 'iB', specifies a multiple using powers of 1024. A size letter followed by 'B' specifies powers of 1000 instead. For example, '-n 4M' and '-n 4MiB' are equivalent to '-n 4194304', whereas '-n 4MB' is equivalent to '-n 4000000'. This notation is upward compatible with the SI prefixes for decimal multiples and with the IEC 60027-2 prefixes for binary multiples.
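The following sketch illustrates those suffix rules; it is an illustration of the convention described above, not code from any cmp implementation, and it does not handle hexadecimal counts whose digits include letters (e.g. '0xAF'):

```python
def parse_byte_count(operand):
    """Parse a byte-count operand such as '4194304', '0x400', '4M', '4MiB', or '4MB'."""
    prefixes = "KMGTPEZY"                      # kilo, mega, giga, ...
    i = len(operand)
    while i > 0 and not operand[i - 1].isdigit():
        i -= 1
    digits, suffix = operand[:i], operand[i:]
    if digits.lower().startswith("0x"):
        count = int(digits, 16)                # leading '0x' means hexadecimal
    elif len(digits) > 1 and digits.startswith("0"):
        count = int(digits, 8)                 # leading '0' means octal
    else:
        count = int(digits) if digits else 1   # a bare size letter implies 1
    if not suffix:
        return count
    power = prefixes.index(suffix[0].upper()) + 1
    if suffix[1:] == "B":                      # 'MB' style: powers of 1000
        return count * 1000 ** power
    return count * 1024 ** power               # 'M' or 'MiB' style: powers of 1024

print(parse_byte_count("4M"), parse_byte_count("4MiB"), parse_byte_count("4MB"))
# 4194304 4194304 4000000
```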
Return values
0 – files are identical
1 – files differ
2 – inaccessible or missing argument
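A simplified sketch of the comparison behaviour and exit statuses described above; it reports only the first differing byte and line, and omits the options and exact message wording of real implementations:

```python
import sys

def cmp_files(path1, path2):
    """Return 0 if the files are identical, 1 if they differ, 2 on error."""
    try:
        with open(path1, "rb") as f1, open(path2, "rb") as f2:
            byte_no, line_no = 1, 1
            while True:
                b1, b2 = f1.read(1), f2.read(1)
                if b1 != b2:
                    # Covers both a differing byte and one file ending early.
                    print(f"{path1} {path2} differ: byte {byte_no}, line {line_no}")
                    return 1
                if not b1:                     # both files ended at the same point
                    return 0
                if b1 == b"\n":
                    line_no += 1
                byte_no += 1
    except OSError as err:
        print(f"cmp: {err}", file=sys.stderr)
        return 2

if __name__ == "__main__":
    if len(sys.argv) != 3:
        print("usage: cmp.py file1 file2", file=sys.stderr)
        sys.exit(2)
    sys.exit(cmp_files(sys.argv[1], sys.argv[2]))
```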
See also
Comparison of file comparison tools
List of Unix commands
References
External links
Comparing and Merging Files: Invoking cmp The section of the manual of GNU cmp in the diffutils free manual.
Free file comparison tools
Standard Unix programs
Unix SUS2008 utilities
Plan 9 commands
Inferno (operating system) commands |
47152350 | https://en.wikipedia.org/wiki/Human%20performance%20modeling | Human performance modeling | Human performance modeling (HPM) is a method of quantifying human behavior, cognition, and processes. It is a tool used by human factors researchers and practitioners for both the analysis of human function and for the development of systems designed for optimal user experience and interaction. It is a complementary approach to other usability testing methods for evaluating the impact of interface features on operator performance.
History
The Human Factors and Ergonomics Society (HFES) formed the Human Performance Modeling Technical Group in 2004. Although a recent discipline, human factors practitioners have been constructing and applying models of human performance since World War II. Notable early examples of human performance models include Paul Fitts' model of aimed motor movement (1954), the choice reaction time models of Hick (1952) and Hyman (1953), and the Swets et al. (1964) work on signal detection. It is suggested that the earliest developments in HPM arose out of the need to quantify human-system feedback for those military systems in development during WWII (see Manual Control Theory below), with continued interest in the development of these models augmented by the cognitive revolution (see Cognition & Memory below).
Human Performance Models
Human performance models predict human behavior in a task, domain, or system. However, these models must be based upon and compared against empirical human-in-the-loop data to ensure that the human performance predictions are correct. As human behavior is inherently complex, simplified representations of interactions are essential to the success of a given model. Because no model can capture the complete breadth and detail of human performance within a system, domain, or even task, details are abstracted away to keep these models manageable. Although the omission of details is an issue in basic psychological research, it is less of a concern in applied contexts such as those of most concern to the human factors profession. This is related to the internal-external validity trade-off. Regardless, development of a human performance model is an exercise in complexity science. Communication and exploration of the most essential variables governing a given process are often just as important as the accurate prediction of an outcome given those variables.
The goal of most human performance models is to capture enough detail in a particular domain to be useful for the purposes of investigation, design, or evaluation; thus the domain for any particular model is often quite restricted. Defining and communicating the domain of a given model is an essential feature of the practice - and of the entirety of human factors - as a systems discipline. Human performance models contain both the explicit and implicit assumptions or hypotheses upon which the model depends, and are typically mathematical - being composed of equations or computer simulations - although there are also important models that are qualitative in nature.
Individual models vary in their origins, but share in their application and use for issues from the human factors perspective. These can be models of the products of human performance (e.g., a model that produces the same decision outcomes as human operators), of the processes involved in human performance (e.g., a model that simulates the processes used to reach decisions), or of both. Generally, they are regarded as belonging to one of three areas: perception & attention allocation, command & control, or cognition & memory; although models of other areas such as emotion, motivation, and social/group processes continue to burgeon within the discipline. Integrated models are also of increasing importance. Anthropometric and biomechanical models are also crucial human factors tools in research and practice, and are used alongside other human performance models, but have an almost entirely separate intellectual history, being individually more concerned with static physical qualities than with processes or interactions.
The models are applicable in many industries and domains, including military, aviation, nuclear power, automotive, space operations, manufacturing, and user experience/user interface (UX/UI) design, and have been used to model human-system interactions both simple and complex.
Model Categories
Command & Control
Human performance models of Command & Control describe the products of operator output behavior, and are often also models of dexterity within the interactions for certain tasks.
Hick-Hyman Law
Hick (1952) and Hyman (1953) note that the difficulty of a choice reaction-time task is largely determined by the information entropy of the situation. They suggested that information entropy (H) is a function of the number of alternatives (n) in a choice task, H = log2(n + 1); and that reaction time (RT) of a human operator is a linear function of the entropy: RT = a + bH. This is known as the Hick-Hyman law for choice response time.
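As an illustration, the prediction can be computed directly from the formula; the coefficients a and b below are assumed example values that would normally be estimated from empirical reaction-time data:

```python
import math

def hick_hyman_rt(n_alternatives, a=0.2, b=0.15):
    """Predicted choice reaction time (s) for n equally likely alternatives.

    a and b are illustrative coefficients; in practice they are fit to data.
    """
    entropy = math.log2(n_alternatives + 1)   # H = log2(n + 1)
    return a + b * entropy                    # RT = a + bH

for n in (1, 2, 4, 8):
    print(n, round(hick_hyman_rt(n), 3))
```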
Pointing
Pointing at stationary targets such as buttons, windows, images, menu items, and controls on computer displays is commonplace and has a well-established modeling tool for analysis - Fitts's law (Fitts, 1954) - which states that the time to make an aimed movement (MT) is a linear function of the index of difficulty of the movement: MT = a + bID. The index of difficulty (ID) for any given movement is a function of the ratio of distance to the target (D) and width of the target (W): ID = log2(2D/W) - a relationship derivable from information theory. Fitts's law is largely responsible for the ubiquity of the computer mouse, due to the research of Card, English, and Burr (1978). Extensions of Fitts's law also apply to pointing at spatially moving targets, via the steering law, originally discovered by C. G. Drury in 1971 and later rediscovered in the context of human-computer interaction by Accot and Zhai (1997, 1999).
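A corresponding sketch for Fitts's law, again with assumed example coefficients rather than fitted values:

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predicted movement time (s) for an aimed movement.

    distance and width are in the same units; a and b are illustrative
    coefficients that would normally be fit to observed pointing data.
    """
    index_of_difficulty = math.log2(2 * distance / width)   # ID = log2(2D/W)
    return a + b * index_of_difficulty                       # MT = a + b*ID

# Example: a target 10 cm away and 2 cm wide (about 0.6 s with these coefficients)
print(round(fitts_movement_time(10, 2), 3))
```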
Manual Control Theory
Complex motor tasks, such as those carried out by musicians and athletes, are not well modeled due to their complexity. Human target-tracking behavior, however, is one complex motor task that is an example of successful HPM.
The history of manual control theory is extensive, dating back to the 1800s in regard to the control of water clocks. However, during the 1940s with the innovation of servomechanisms in WWII, extensive research was put into the continuous control and stabilization of contemporary systems such as radar antennas, gun turrets, and ships/aircraft via feedback control signals.
Analysis methods were developed that predicted the required control systems needed to enable stable, efficient control of these systems (James, Nichols, & Phillips, 1947). Originally interested in temporal response - the relationship between sensed output and motor output as a function of time - James et al. (1947) found that the properties of such systems are best characterized by transforming the temporal response into a frequency response: a ratio of output to input amplitude and lag in response over the range of frequencies to which they are sensitive. For systems that respond linearly to these inputs, the frequency response function can be expressed in a mathematical expression called a transfer function. This was applied first to machine systems, then to human-machine systems for maximizing human performance. Tustin (1947), concerned with the design of gun turrets for human control, was the first to demonstrate that nonlinear human response could be approximated by a type of transfer function. McRuer and Krendel (1957) synthesized all the work since Tustin (1947), measuring and documenting the characteristics of the human transfer function, and ushered in the era of manual control models of human performance. As electromechanical and hydraulic flight control systems were implemented in aircraft, automation and electronic artificial stability systems began to allow human pilots to control highly sensitive systems. These same transfer functions are still used today in control engineering.
From this, the optimal control model (Pew & Baron, 1978) was developed to model a human operator's ability to internalize system dynamics and minimize objective functions, such as root mean square (RMS) error from the target. The optimal control model also recognizes noise in the operator's ability to observe the error signal, and noise in the human motor output system.
Technological progress and subsequent automation have reduced the necessity and desire of manual control of systems, however. Human control of complex systems is now often of a supervisory nature over a given system, and both human factors and HPM have shifted from investigations of perceptual-motor tasks, to the cognitive aspects of human performance.
Attention & Perception
Signal Detection Theory (SDT)
Although not a formal part of HPM, signal detection theory has an influence on the method, especially within the Integrated Models. SDT is almost certainly the best-known and most extensively used modeling framework in human factors, and is a key feature of education regarding human sensation and perception. In application, the situation of interest is one in which a human operator has to make a binary judgement about whether a signal is present or absent in a noise background. This judgement may be applied in any number of vital contexts. Besides the response of the operator, there are two possible "true" states of the world - either the signal was present or it was not. If the operator correctly identifies the signal as present, this is termed a hit (H). If the operator responds that a signal was present when there was no signal, this is termed a false alarm (FA). If the operator correctly responds when no signal is present, this is termed a correct rejection (CR). If a signal is present and the operator fails to identify it, this is termed a miss (M).
In applied psychology and human factors, SDT is applied to research problems including recognition, memory, aptitude testing, and vigilance. Vigilance, referring to the ability of operators to detect infrequent signals over time, is important for human factors across a variety of domains.
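From the hit and false-alarm rates, SDT yields the standard sensitivity index d′; a minimal computation sketch (the example rates are arbitrary, chosen only for illustration):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Example: 80% hits, 20% false alarms
print(round(d_prime(0.80, 0.20), 2))   # about 1.68
```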
Visual Search
A developed area in attention is the control of visual attention - models that attempt to answer, "where will an individual look next?" A subset of this concerns the question of visual search: How rapidly can a specified object in the visual field be located? This is a common subject of concern for human factors in a variety of domains, with a substantial history in cognitive psychology. This research continues with modern conceptions of salience and salience maps. Human performance modeling techniques in this area include the work of Melloy, Das, Gramopadhye, and Duchowski (2006) regarding Markov models designed to provide upper and lower bound estimates on the time taken by a human operator to scan a homogeneous display. Another example from Witus and Ellis (2003) includes a computational model regarding the detection of ground vehicles in complex images. Facing the nonuniform probability that a menu option is selected by a computer user when certain subsets of the items are highlighted, Fisher, Coury, Tengs, and Duffy (1989) derived an equation for the optimal number of highlighted items for a given number of total items of a given probability distribution. Because visual search is an essential aspect of many tasks, visual search models are now developed in the context of integrating modeling systems. For example, Fleetwood and Byrne (2006) developed an ACT-R model of visual search through a display of labeled icons - predicting the effects of icon quality and set size not only on search time but on eye movements.
Visual Sampling
Many domains contain multiple displays, and require more than a simple discrete yes/no response time measurement. A critical question for these situations may be "How much time will operators spend looking at X relative to Y?" or "What is the likelihood that the operator will completely miss seeing a critical event?" Visual sampling is the primary means of obtaining information from the world. An early model in this domain is Senders' (1964, 1983), based upon operators' monitoring of multiple dials, each with a different rate of change. Operators try, as best they can, to reconstruct the original set of dials based on discrete sampling. This relies on the Nyquist theorem, which states that a signal of bandwidth W Hz can be reconstructed by sampling it at a rate of 2W samples per second. This was combined with a measure of the information generation rate for each signal to predict the optimal sampling rate and dwell time for each dial. Human limitations prevent human performance from matching optimal performance, but the predictive power of the model influenced future work in this area, such as Sheridan's (1970) extension of the model with considerations of access cost and information sample value.
A modern conceptualization is the salience, effort, expectancy, and value (SEEV) model of Wickens et al. (2008). It was developed by the researchers (Wickens et al., 2001) as a model of scanning behavior describing the probability that a given area of interest (AOI) will attract attention. The SEEV model is described by p(A) = sS - efEF + (exEX)(vV), in which p(A) is the probability that a particular area will be sampled; S is the salience of that area; EF represents the effort required to reallocate attention to a new AOI, related to the distance from the currently attended location to the AOI; EX (expectancy) is the expected event rate (bandwidth); and V is the value of the information in that AOI, represented as the product of relevance and priority (R*P). The lowercase values are scaling constants. This equation allows for the derivation of optimal and normative models of how an operator should behave, and for characterizing how operators actually behave. Wickens et al. (2008) also generated a version of the model that does not require absolute estimation of the free parameters for the environment - just the comparative salience of other regions compared to the region of interest.
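A direct transcription of the SEEV equation into code; the scaling constants and the two example areas of interest are illustrative values invented for the sketch, and the outputs are best read as relative attentional weights to be compared across areas:

```python
def seev_weight(salience, effort, expectancy, value, s=1.0, ef=1.0, ex=1.0, v=1.0):
    """SEEV attention model: p(A) = sS - efEF + (exEX)(vV).

    s, ef, ex, v are scaling constants; all inputs here are illustrative.
    """
    return s * salience - ef * effort + (ex * expectancy) * (v * value)

# Two hypothetical areas of interest: a nearby, salient, high-value area
# versus a distant, less salient, lower-value one.
print(seev_weight(salience=0.8, effort=0.2, expectancy=0.5, value=0.9))
print(seev_weight(salience=0.3, effort=0.6, expectancy=0.7, value=0.4))
```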
Visual Discrimination
Models of visual discrimination of individual letters include those of Gibson (1969), Briggs and Hocevar (1975), and McClelland and Rumelhart (1981), the last of which is part of a larger model for word recognition noted for its explanation of the word superiority effect. These models are noted to be highly detailed, and make quantitative predictions about small effects of specific letters.
Depth Perception
A qualitative HPM example is the Cutting and Vishton (1995) model of depth perception, which indicates that different cues to depth perception are more effective at different distances.
Workload
Although an exact definition or method for measurement of the construct of workload is debated by the human factors community, a critical part of the notion is that human operators have some capacity limitations and that such limitations can be exceeded only at the risk of degrading performance. For physical workload, it may be understood that there is a maximum amount that a person should be asked to lift repeatedly, for example. However, the notion of workload becomes more contentious when the capacity to be exceeded is in regard to attention - what are the limits of human attention, and what exactly is meant by attention? Human performance modeling produces valuable insights into this area.
Byrne and Pew (2009) consider an example of a basic workload question: "To what extent do task A and B interfere?" These researchers indicate this as the basis for the psychological refractory period (PRP) paradigm. Participants perform two choice reaction-time tasks, and the two tasks will interfere to a degree - especially when the participant must react to the stimuli for the two tasks when they are close together in time - but the degree of interference is typically smaller than the total time taken for either task. The response selection bottleneck model (Pashler, 1994) models this situation well - in that each task has three components: perception, response selection (cognition), and motor output. The attentional limitation - and thus locus of workload - is that response selection can only be done for one task at a time. The model makes numerous accurate predictions, and those for which it cannot account are addressed by cognitive architectures (Byrne & Anderson, 2001; Meyer & Kieras, 1997). In simple dual-task situations, attention and workload are quantified, and meaningful predictions made possible.
Horrey and Wickens (2003) consider the questions: To what extent will a secondary task interfere with driving performance, and does it depend on the nature of the driving and on the interface presented in the second task? Using a model based on multiple resource theory (Wickens, 2002, 2008; Navon & Gopher, 1979), which proposes that there are several loci for multiple-task interference (the stages of processing, the codes of processing, and modalities), the researchers suggest that cross-task interference increases proportional to the extent that the two tasks use the same resources within a given dimension: Visual presentation of a read-back task should interfere more with driving than should auditory presentation, because driving itself makes stronger demands on the visual modality than on the auditory.
Although multiple resource theory is the best known workload model in human factors, it is often represented qualitatively. Detailed computational implementations are better alternatives for application in HPM methods, including the Horrey and Wickens (2003) model, which is general enough to be applied in many domains. Integrated approaches, such as task network modeling, are also becoming more prevalent in the literature.
Numerical typing is an important perceptual-motor task whose performance may vary with pacing, finger strategy, and the urgency of the situation. The queuing network-model human processor (QN-MHP), a computational architecture, allows performance of perceptual-motor tasks to be modelled mathematically. One study enhanced QN-MHP with a top-down control mechanism, a closed-loop movement control, and a finger-related motor control mechanism to account for task interference, endpoint reduction, and force deficit, respectively. The model also incorporated neuromotor noise theory to quantify endpoint variability in typing. The model's predictions of typing speed and accuracy were validated against Lin and Wu's (2011) experimental results. The resulting root-mean-squared errors were 3.68% with a correlation of 95.55% for response time, and 35.10% with a correlation of 96.52% for typing accuracy. The model can be applied to provide optimal speech rates for voice synthesis and keyboard designs in different numerical typing situations.
The psychological refractory period (PRP) is a basic but important form of dual-task information processing. Existing serial or parallel processing models of PRP have successfully accounted for a variety of PRP phenomena; however, each also encounters at least one experimental counterexample to its predictions or modeling mechanisms. A queuing network-based mathematical model of PRP has been described that is able to model various experimental findings in PRP with closed-form equations, including all of the major counterexamples encountered by the existing models, with fewer or equal numbers of free parameters. This modeling work also offers an alternative theoretical account for PRP and demonstrates the importance of the theoretical concepts of "queuing" and "hybrid cognitive networks" in understanding cognitive architecture and multitask performance.
Cognition & Memory
The paradigm shift in psychology from behaviorism to the study of cognition had a huge impact on the field of human performance modeling. Regarding memory and cognition, the research of Newell and Simon on artificial intelligence and the General Problem Solver (GPS; Newell & Simon, 1963) demonstrated that computational models could effectively capture fundamental human cognitive behavior. Newell and Simon were not simply concerned with the amount of information - say, counting the number of bits the human cognitive system had to receive from the perceptual system - but rather with the actual computations being performed. They were critically involved in the early success of comparing cognition to computation, and of showing that computation could simulate critical aspects of cognition - thus leading to the creation of the sub-discipline of artificial intelligence within computer science, and changing how cognition was viewed in the psychological community. Although cognitive processes do not literally flip bits in the same way that discrete electronic circuits do, pioneers were able to show that any universal computational machine could simulate the processes used in another, without a physical equivalence (Pylyshyn, 1989; Turing, 1936). The cognitive revolution allowed all of cognition to be approached through modeling, and these models now span a vast array of cognitive domains - from simple list memory, to comprehension of communication, to problem solving and decision making, to imagery, and beyond.
One popular example is the Atkinson-Shiffrin (1968) "modal" model of memory. See also cognitive models for information not included here.
Routine Cognitive Skill
One area of memory and cognition concerns modeling routine cognitive skills: cases in which an operator has the correct knowledge of how to perform a task and simply needs to execute that knowledge. This is widely applicable, as many operators are practiced enough that their procedures become routine. The GOMS (goals, operators, methods, and selection rules) family of human performance models, popularized and well defined by researchers in the field (Card et al., 1983; John & Kieras, 1996a, 1996b), was originally applied to model users of computer interfaces but has since been extended to other areas. These models are useful HPM tools, suitable for a variety of different concerns and sizes of analysis, but are limited in regard to analyzing user error (see Wood & Kieras, 2002, for an effort to extend GOMS to handling errors).
The simplest form of a GOMS model is the keystroke-level model (KLM), in which all physical actions (e.g., keystrokes, mouse clicks), also termed operations, that a user must take to complete a given task are listed. Mental operations (e.g., find an object on the screen) augment this using a straightforward set of rules. Each operation has a time associated with it (such as 280 ms for a keystroke), and the total time for the task is estimated by adding up the operation times. The efficiency of two procedures may then be compared using their respective estimated execution times. Although this form of model is highly approximate (many assumptions are taken at liberty), it is still used today (e.g., for in-vehicle information systems and mobile phones).
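A minimal sketch of a keystroke-level calculation of this kind; only the 280 ms keystroke time comes from the text above, and the remaining operator durations and the example procedure are assumptions made for the illustration:

```python
# Illustrative operator times in seconds. Only the 280 ms keystroke value is
# taken from the text above; the other durations are assumed for this example.
OPERATOR_TIMES = {
    "K": 0.280,   # keystroke or button press
    "P": 1.100,   # point at a target with a mouse (assumed)
    "M": 1.350,   # mental preparation (assumed)
    "H": 0.400,   # move hand between keyboard and mouse (assumed)
}

def klm_execution_time(operations):
    """Estimate task execution time by summing the time of each listed operation."""
    return sum(OPERATOR_TIMES[op] for op in operations)

# Example procedure: prepare mentally, reach for the mouse, point, click,
# then type three characters.
procedure = ["M", "H", "P", "K", "K", "K", "K"]
print(round(klm_execution_time(procedure), 2))   # 3.97 s with these values
```

Comparing two candidate procedures then amounts to comparing the two summed estimates, which is exactly how the KLM is used to evaluate alternative interface designs.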
Detailed versions of GOMS exist, including:
--CPM-GOMS: "cognitive, perceptual, motor" and "critical path method" (John & Kieras, 1996a, 1996b) - attempts to break down performance into primitive CPM units lasting tens to hundreds of milliseconds (durations for many operations in CPM-GOMS models come from published literature, especially Card et al., 1983).
--GOMSL / NGOMSL: GOMS Language or Natural GOMS Language, which focus on the hierarchical decomposition of goals, but with an analysis including methods - procedures people use to accomplish those goals. Many generic mental operations in the KLM are replaced with detailed descriptions of the cognitive activity involving the organization of people's procedural knowledge into methods. A detailed GOMSL analysis allows for the prediction of not only execution time, but also the time it takes for learning the procedures, and the amount of transfer that can be expected based on already known procedures (Gong and Kieras, 1994). These models are not only useful for informing redesigns of user-interfaces, but also quantitatively predict execution and learning time for multiple tasks.
Decision-Making
Another critical cognitive activity of interest to human factors is judgement and decision making. These activities contrast starkly with routine cognitive skills, for which the procedures are known in advance, as many situations require operators to make judgments under uncertainty - to produce a rating of quality, or perhaps choose among many possible alternatives. Although many disciplines, including mathematics and economics, make significant contributions to this area of study, most of these models do not describe human behavior but rather model optimal behavior, such as subjective expected utility theory (Savage, 1954; von Neumann & Morgenstern, 1944). While models of optimal behavior are important and useful, they are not models of human performance in themselves, though much research on human decision making in this domain compares human performance to mathematically optimal formulations. Examples of this include Kahneman and Tversky's (1979) prospect theory and Tversky's (1972) elimination-by-aspects model. Less formal approaches include Tversky and Kahneman's seminal work on heuristics and biases, Gigerenzer's work on 'fast and frugal' shortcuts (Gigerenzer, Todd, & ABC Research Group, 2000), and the descriptive models of Payne, Bettman, and Johnson (1993) on adaptive strategies.
Sometimes optimal performance is uncertain; one powerful and popular example is the lens model (Brunswik, 1952; Cooksey, 1996; Hammond, 1955), which deals with policy capturing, cognitive control, and cue utilization. It has been used in aviation (Bisantz & Pritchett, 2003) and command and control (Bisantz et al., 2000), and to investigate human judgement in employment interviews (Doherty, Ebert, & Callender, 1986), financial analysis (Ebert & Kruse, 1978), physicians' diagnoses (LaDuca, Engel, & Chovan, 1988), teacher ratings (Carkenord & Stephens, 1944), and numerous other domains. Although the model has limitations [described in Byrne & Pew (2009)], it is very powerful and remains underutilized in the human factors profession.
Situation Awareness (SA)
Models of SA range from descriptive (Endsley, 1995) to computational (Shively et al., 1997). The most useful model in HPM is the A-SA model (Attention/Situation Awareness) of McCarley et al. (2002). It incorporates two semi-independent components: a perception/attention module and a cognitive SA-updating module. The perception/attention module of the A-SA model is based on the Theory of Visual Attention (Bundesen, 1990; see McCarley et al., 2002).
Integrated Models
Many of the models described are very limited in their application. Although many extensions of SDT have been proposed to cover a variety of other judgement domains (see T. D. Wickens, 2002, for examples), most of these never caught on, and SDT remains limited to binary situations. The narrow scope of these models is not limited to human factors, however - Newton's laws of motion have little predictive power regarding electromagnetism, for example. Still, this is frustrating for human factors professionals, because real human performance in vivo draws upon a wide array of human capabilities. As Byrne and Pew (2009) describe, "in the space of a minute, a pilot might quite easily conduct a visual search, aim for and push a button, execute a routine procedure, make a multiple-cue probabilistic judgement" and do just about everything else described by fundamental human performance models. A fundamental review of HPM by the National Academies (Elkind, Card, Hochberg, & Huey, 1990) described integration as the great unsolved challenge in HPM. This issue has yet to be fully solved; however, there have been efforts to integrate and unify multiple models and to build systems that span domains. In human factors, the two primary modeling approaches that accomplish this and have gained popularity are task network modeling and cognitive architectures.
Task Network Modeling
The term network model refers to a modeling procedure involving Monte Carlo simulation rather than to a specific model. Although the modeling framework is atheoretical, the models built with it are only as good as the theories and data used to create them.
When a modeler builds a network model of a task, the first step is to construct a flow chart decomposing the task into discrete sub-tasks - each sub-task as a node, the serial and parallel paths connecting them, and the gating logic that governs the sequential flow through the resulting network. When modeling human-system performance, some nodes represent human decision processes and/or human task execution, some represent system execution sub-tasks, and some aggregate human/machine performance into a single node. Each node is represented by a statistically specified completion time distribution and a probability of completion. When all these specifications are programmed into a computer, the network is exercised repeatedly in Monte Carlo fashion to build up distributions of the aggregate performance measures that are of concern to the analyst. The art in this lies in the modeler's selection of the right level of abstraction at which to represent nodes and paths and in estimating the statistically defined parameters for each node. Sometimes, human-in-the-loop simulations are conducted to provide support and validation for the estimates. Details regarding this, related, and alternative approaches may be found in Laughery, Lebiere, and Archer (2006) and in the work of Schweickert and colleagues, such as Schweickert, Fisher, and Proctor (2003).
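A toy illustration of such a Monte Carlo run over a three-node serial network; the node names, time distributions, and success probabilities are invented for the example and are not drawn from any published model:

```python
import random

# Each node: (name, mean time in s, s.d. of time, probability of successful completion).
# All numbers are illustrative placeholders.
NETWORK = [
    ("detect alarm",   2.0, 0.5, 0.98),
    ("diagnose fault", 8.0, 2.0, 0.90),
    ("execute fix",    5.0, 1.5, 0.95),
]

def run_once(rng):
    """Simulate one pass through the serial network; return (time, success)."""
    total = 0.0
    for name, mean, sd, p_success in NETWORK:
        total += max(0.0, rng.gauss(mean, sd))   # sampled completion time, truncated at zero
        if rng.random() > p_success:
            return total, False                  # the task fails at this node
    return total, True

def monte_carlo(n_runs=10_000, seed=1):
    """Repeat the network many times to build up aggregate performance measures."""
    rng = random.Random(seed)
    times, successes = [], 0
    for _ in range(n_runs):
        t, ok = run_once(rng)
        times.append(t)
        successes += ok
    return sum(times) / n_runs, successes / n_runs

mean_time, success_rate = monte_carlo()
print(f"mean completion time ~ {mean_time:.1f} s, success rate ~ {success_rate:.2%}")
```

Real tools in this family add branching, parallel paths, and workload bookkeeping on top of this basic sample-and-aggregate loop.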
Historically, task network modeling stems from queuing theory and from modeling of engineering reliability and quality control. Art Siegel, a psychologist, first thought of extending reliability methods into a Monte Carlo simulation model of human-machine performance (Siegel & Wolf, 1969). In the early 1970s, the U.S. Air Force sponsored the development of SAINT (Systems Analysis of Integrated Networks of Tasks), a high-level programming language specifically designed to support the programming of Monte Carlo simulations of human-machine task networks (Wortman, Pritsker, Seum, Seifert, & Chubb, 1974). A modern version of this software is Micro Saint Sharp (Archer, Headley, & Allender, 2003). This family of software spawned a tree of special-purpose programs with varying degrees of commonality and specificity with Micro Saint. The most prominent of these is the IMPRINT series (Improved Performance Research Integration Tool) sponsored by the U.S. Army (and based on MANPRINT), which provides modeling templates specifically adapted to particular human performance modeling applications (Archer et al., 2003). Two workload-specific programs are W/INDEX (North & Riley, 1989) and WinCrew (Lockett, 1997).
The network approach to modeling with these programs is popular because of its technical accessibility to individuals with general knowledge of computer simulation techniques and human performance analysis. The flowcharts that result from task analysis lead naturally to formal network models. The models can be developed to serve specific purposes - from simulation of an individual using a human-computer interface to analysis of potential traffic flow in a hospital emergency center. Their weakness is the great difficulty of deriving performance times and success probabilities from previous data, theory, or first principles. These data provide the model's principal content.
Cognitive Architectures
Cognitive architectures are broad theories of human cognition based on a wide selection of human empirical data and are generally implemented as computer simulations. They are the embodiment of a scientific hypothesis about those aspects of human cognition that are relatively constant over time and independent of task (Gray, Young, & Kirschenbaum, 1997; Ritter & Young, 2001). Cognitive architectures are an attempt to theoretically unify disconnected empirical phenomena in the form of computer simulation models. Because a theory of cognition alone is not sufficient for human factors applications, since the 1990s cognitive architectures have also included mechanisms for sensation, perception, and action. Two early examples are the Executive Process Interactive Control model (EPIC; Kieras, Wood, & Meyer, 1995; Meyer & Kieras, 1997) and ACT-R (Byrne & Anderson, 1998).
A model of a task in a cognitive architecture, generally referred to as a cognitive model, consists of both the architecture and the knowledge to perform the task. This knowledge is acquired through human factors methods including task analyses of the activity being modeled. Cognitive architectures are also connected with a complex simulation of the environment in which the task is to be performed - sometimes, the architecture interacts directly with the actual software humans use to perform the task. Cognitive architectures not only produce a prediction about performance, but also output actual performance data - able to produce time-stamped sequences of actions that can be compared with real human performance on a task.
Examples of cognitive architectures include the EPIC system (Hornof & Kieras, 1997, 1999), CPM-GOMS (Kieras, Wood, & Meyer, 1997), the Queuing Network-Model Human Processor (Wu & Liu, 2007, 2008), ACT-R (Anderson, 2007; Anderson & Lebiere, 1998), and QN-ACTR (Cao & Liu, 2013).
The Queuing Network-Model Human Processor has been used to predict how drivers perceive the operating speed and posted speed limit, choose a speed, and execute the decided operating speed. The model was sensitive (average d′ of 2.1) and accurate (average testing accuracy over 86%) in predicting the majority of unintentional speeding.
ACT-R has been used to model a wide variety of phenomena. It consists of several modules, each one modeling a different aspect of the human system. Modules are associated with specific brain regions, and ACT-R has thus successfully predicted neural activity in parts of those regions. Each module essentially represents a theory of how that piece of the overall system works, derived from the research literature in the area. For example, the declarative memory system in ACT-R is based on a series of equations considering frequency and recency, incorporating Bayesian notions of need probability given context as well as equations for learning and performance. Some modules are of higher fidelity than others, however - the manual module incorporates Fitts's law and other simple operating principles, but is not (as yet) as detailed as the optimal control theory model. The notion, however, is that each of these modules requires strong empirical validation. This is both a benefit and a limitation of ACT-R, as there is still much work to be done in the integration of cognitive, perceptual, and motor components, but this process is promising (Byrne, 2007; Foyle and Hooey, 2008; Pew & Mavor, 1998).
Group Behavior
Team/Crew Performance Modeling
GOMS has been used to model both complex team tasks (Kieras & Santoro, 2004) and group decision making (Sorkin, Hays, & West, 2001).
Modeling Approaches
Computer Simulation Models/Approaches
Example: IMPRINT (Improved Performance Research Integration Tool)
Mathematical Models/Approaches
Example: Cognitive model
Comparing HPM Models
One way to compare different HPM models is to calculate their AIC (Akaike information criterion) and to consider cross-validation criteria.
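As a sketch of how such a comparison might look in practice, the following assumes a least-squares fit so that AIC can be computed from the residual sum of squares as n·ln(RSS/n) + 2k; the data and model outputs are invented for illustration and do not come from any particular study.

    import math

    def aic_least_squares(observed, predicted, n_params):
        # AIC for a least-squares fit: n * ln(RSS / n) + 2k; lower values are preferred.
        n = len(observed)
        rss = sum((o - p) ** 2 for o, p in zip(observed, predicted))
        return n * math.log(rss / n) + 2 * n_params

    observed = [1.2, 1.9, 3.1, 4.2]   # assumed human performance data
    model_a = [1.0, 2.0, 3.0, 4.0]    # predictions of a model with 2 free parameters
    model_b = [1.1, 2.1, 3.2, 4.1]    # predictions of a model with 5 free parameters
    print(aic_least_squares(observed, model_a, 2))
    print(aic_least_squares(observed, model_b, 5))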
Benefits
Numerous benefits may be gained from using modeling techniques in the human performance domain.
Specificity
A sizable majority of explanations in psychology are not only qualitative but also vague. Concepts such as "attention", "processing capacity", "workload", and "situation awareness" (SA), both general and specific to human factors, are often difficult to quantify in applied domains. Researchers differ in their definitions of such terms, which likewise makes it difficult to specify data for each term. Formal models, in contrast, typically require explicit specification of theoretical terms. Specificity also requires that explanations be internally coherent, whereas verbal theories are often so flexible that they fail to remain consistent, allowing contradictory predictions to be derived from them. Not all models are quantitative in nature, however, and thus not all provide the benefit of specificity to the same degree.
Objectivity
Formal models are generally modeler independent. Although great skill is involved in constructing a specific model, once it is constructed, anybody with the appropriate knowledge can run it or solve it, and the model produces the same predictions regardless of who is running or solving it. Predictions are no longer tied to the biases or sole intuition of a single expert but, rather, to a specification that can be made public.
Quantitativeness
Many human performance models make quantitative predictions, which are critical in applied situations. Purely empirical methods analyzed with hypothesis-testing techniques, as standard in most psychological experiments, focus on providing answers to vague questions such as "Are A and B different?" and then "Is this difference statistically significant?", whereas formal models often provide useful quantitative information such as "A is x% slower than B."
Clarity
Human performance models provide clarity, in that the model provides an explanation for observed differences; such explanations are not generally provided by strictly empirical methods.
Issues
Misconceptions
Many human performance models share key features with Artificial Intelligence (AI) methods and systems. The purpose of AI research is to produce systems that exhibit intelligent behavior, generally without consideration of the degree to which that intelligence resembles or predicts human performance, yet the distinction between AI methods and those of HPM is at times unclear. For example, Bayesian classifiers used to filter spam emails approximate human classification performance (classifying spam emails as spam, and non-spam emails as important) and are thus highly intelligent systems, but they do not rely on interpreting the semantics of the messages themselves, relying instead on statistical methods. However, Bayesian analysis can also be essential to human performance models.
Usefulness
Models may focus more on the processes involved in human performance rather than the products of human performance, thus limiting their usefulness in human factors practice.
Abstraction
The abstraction necessary for understandable models competes with accuracy. While generality, simplicity, and understandability are important to the application of models in human factors practice, many valuable human performance models are inaccessible to those without graduate or postdoctoral training. For example, while Fitts's law is straightforward for even undergraduates, the lens model requires an intimate understanding of multiple regression, and construction of an ACT-R type model requires extensive programming skills and years of experience. While the successes of complex models are considerable, a practitioner of HPM must be aware of the trade-offs between accuracy and usability.
Free Parameters
As in most model-based sciences, the free parameters rampant within models of human performance require empirical data a priori. There may be limitations to collecting the empirical data necessary to run a given model, which may constrain the application of that model.
Validation
Validation of human performance models is of the highest concern to the science of HPM.
Researchers usually use the coefficient of determination (R squared) and the root mean square (RMS) error between the experimental data and the model's predictions.
In addition, while validity may be assessed by comparing human data with the model's output, free parameters are flexible enough that they can fit the data even when the model itself is incorrect.
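A minimal sketch of the two fit measures mentioned above, using invented numbers rather than data from any particular study:

    def validation_stats(human, model):
        # R squared: proportion of variance in the human data captured by the model.
        # RMS error: typical size of a prediction error, in the data's own units.
        n = len(human)
        mean_h = sum(human) / n
        ss_res = sum((h - m) ** 2 for h, m in zip(human, model))
        ss_tot = sum((h - mean_h) ** 2 for h in human)
        return 1 - ss_res / ss_tot, (ss_res / n) ** 0.5

    # Assumed reaction times (seconds) and corresponding model predictions.
    print(validation_stats([0.45, 0.52, 0.61, 0.70], [0.47, 0.50, 0.63, 0.68]))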
Common Terms
-Free parameter: A parameter of a model whose value is estimated from the data being modeled so as to maximally align the model's predictions with those data.
-Coefficient of determination (R squared): A measure of how well the data fit a statistical model, such as a fitted line or curve.
-Root Mean Square (RMS): A statistical measure defined as the square root of the arithmetic mean of the squares of a set of numbers.
See also
Cognitive Architectures
Cognitive Model
Cognitive Revolution
Decision-Making
Depth Perception
Human Factors
Human Factors (Journal)
Human Factors & Ergonomics Society
Manual Control Theory
Markov Models
Mathematical Psychology
Monte Carlo
Salience
Signal Detection Theory
Situation Awareness
Visual Search
Workload
References
Modeling and simulation
Software optimization |
1869476 | https://en.wikipedia.org/wiki/Mind%27s%20Eye%20%28film%20series%29 | Mind's Eye (film series) | The Mind's Eye series consists of several art films rendered using computer-generated imagery of varying levels of sophistication, with original music scored note-to-frame. The series was conceived by Steven Churchill of Odyssey Productions in 1990. The initial video was directed, conceptualized, edited and co-produced by Jan Nickman of Miramar Productions and produced by Churchill. The first three products in the series were released on VHS (by BMG) and LaserDisc (by Image Entertainment) and also released on DVD (by Simitar Entertainment). The fourth program in the series was released and distributed by Sony Music on DVD.
Overview
The typical entry in the Mind's Eye series is a short package film, usually 50 to 60 minutes long, with an electronic music soundtrack over a series of music video-like sequences. The original film, titled The Mind's Eye: A Computer Animation Odyssey, by director and co-producer Jan Nickman and producer Steven Churchill, consisted of a non-rigid structure of many semi-related sequences. The general style which characterizes the series is light and cartoonish, due to the difficulty of rendering more complicated images using the computers of the day.
The computer animation sequences that appeared in the films were generally not produced specifically for the Mind's Eye series but rather were work originally created for other purposes, including demo reels, commercials, music videos, and feature films. Nickman then assembled these sequences into a narrative through creative editing, which resulted in a double-platinum-selling film considered to be a milestone in the field of computer animation. As a result, The Mind's Eye: A Computer Animation Odyssey reached No. 12 on Billboard's video hits chart. This approach gave Churchill access to the best-quality computer graphics of the time without having to bear their substantial production costs.
The soundtracks for the films were composed by James Reynolds, Jan Hammer, Thomas Dolby and Kerry Livgren (founder and guitarist for Kansas).
Films
The Mind's Eye: A Computer Animation Odyssey (Miramar Images, Inc.), released on September 25, 1990, was the first effort by director and co-producer Jan Nickman and producer Steven Churchill, which served as a demonstration of computer animation when the artform was still in its relative infancy. It is composed of a sequence of segments ambitiously chronicling the formation of Earth ("Creation"), the rise of human civilizations ("Civilization Rising"), and the technological advances of humanity from the advent of agriculture to the future exploration of the cosmos. The video speculatively concludes with a segment of what might be the next sentient species to arise on Earth, as well as the CGI short Stanley and Stella in: Breaking the Ice. The soundtrack was composed by James Reynolds. The sales of this video were RIAA-certified as "Multi-Platinum" and reached as high as No. 12 on Billboard's video sales chart.
Beyond the Mind's Eye (Miramar Images, Inc.), released on December 23, 1992, was directed by Michael Boydstun and produced by Steven Churchill. It featured the music of Jan Hammer and included the series' first vocal tracks in such segments as "Too Far" and "Seeds of Life", the latter a sequence themed around planet-colonizing seeds, featuring the noted Panspermia by computer graphics artist Karl Sims. The DVD version included both the vocal version of "Seeds of Life" (sung by Chris Thompson) that blended the animation segment and footage of Hammer and his "band" performing (composed of four Jan Hammers) and an instrumental version of the same track. Some scenes of Beyond the Mind's Eye were originally created for the arcade lasergame Cube Quest, produced by Simutrek in 1983. Beyond the Mind's Eye also features some CGI sequences from The Lawnmower Man (1992). The DVD contains 11 segments. The sales of this video were RIAA-certified as "Multi-Platinum" and reached as high as No. 8 on Billboard's video sales chart.
The Gate to the Mind's Eye (Miramar Images, Inc.), released on June 30, 1994, was directed by Michael Boydstun and produced by Steven Churchill. It featured music by Thomas Dolby and also continued the trend of vocal tracks, with five of its nine segments including vocals: "Armageddon", a sequence depicting massive devastation; "Neo", an astronomy-themed song; "Valley of the Mind's Eye", a song about the progress of human technology; "Nuvogue", the first jazz track in the series; and "Quantum Mechanic", starring guest vocalist Dr. Fiorella Terenzi. The Gate to the Mind's Eye also featured the animations "Delirium Tremendus", "God and the Quantum" and "Synchronicity", produced and conceptualized by visionary artist Beny Tchaicovsky.
Odyssey Into the Mind's Eye (Odyssey Productions), released on July 12, 1996, was directed by Edward Feuer and produced by Steven Churchill. It featured a soundtrack by Kerry Livgren and two more vocal tracks: "One Dark World" (sung by Darren Rogers) and "Aspen Moon" (sung by Livgren's nephew Jacob). Odyssey Into the Mind's Eye features versions of CGI sequences from Ecco: The Tides of Time (1994) and Johnny Mnemonic (1995), and also features CGI sequences from Cyberscape, a 45 minute computer animation produced and copyrighted by Beny Tchaicovsky, released on VHS and DVD by Sony Music in 1997.
Spin-off titles and other releases
Concurrently with the release of the Mind's Eye series, Churchill also released a series of titles such as Virtual Nature: A Computer Generated Visual Odyssey From the Makers of the Mind's Eye (Odyssey Visual Design, 1993) that obliquely referenced the series. This sister series of videos continued after the release of Odyssey Into the Mind's Eye with three titles: The Mind's Eye Presents Luminous Visions (Odyssey Productions, April 24, 1998), The Mind's Eye Presents Ancient Alien (Odyssey Productions, July 10, 1998) and The Mind's Eye Presents Little Bytes (Odyssey Productions, July 25, 2000).
Other anthology films released by Churchill, such as Imaginaria (Odyssey Visual Design, December 21, 1993) and Turbulence (Odyssey Productions, March 16, 1996), did not include the term "The Mind's Eye" as part of their titles and are thus not considered to be a part of the series. Churchill's most recent releases have been entries in the eight-part Computer Animation series, which ran from 1996 to 2000, with Computer Animation Festival Volume 1.0 (Odyssey Visual Design, November 5, 1993), Computer Animation Festival Volume 2.0 (Odyssey Visual Design, September 2, 1994), and Computer Animation Festival Volume 3.0 (Odyssey Productions, July 12, 1996) forming the main series. The subsequent three Computer Animation titles again included oblique references to Mind's Eye and are entitled The Mind's Eye Presents Computer Animation Classics (Odyssey Productions, May 6, 1997), The Mind's Eye Presents Computer Animation Showcase (Odyssey Productions, August 29, 1997), and The Mind's Eye Presents Computer Animation Celebration (Odyssey Productions, May 1, 1998). The last two titles in the series are Computer Animation Marvels (Odyssey Productions, July 23, 1999) and Computer Animation Extravaganza (Odyssey Productions, August 18, 2000).
A second sister series obliquely referencing Computer Animation is formed by the original Mind's Eye video and Cyberscape: A Computer Animation Vision (August 28, 1997, co-produced by Zoe Productions and Odyssey Productions), a surreal animation chronicling the evolution of human life and thought, by Beny Tchaicovsky.
Reception and adaptations
Beyond the Mind's Eye was a bestseller in the US when it was originally released on VHS and LaserDisc. Roger Ebert selected it as his "Video Pick of the Week" for the week of December 23, 1992 on the TV series Siskel & Ebert.
Several excerpts from The Mind's Eye were seen in the 1992 sci-fi horror film The Lawnmower Man, which itself was featured in Beyond the Mind's Eye. The Mind's Eye and Beyond the Mind's Eye were both integral components in YTV's Short Circutz segments that aired between programs in the 1990s. Canadian independent television station NTV airs excerpts from the first three Mind's Eye videos as part of their "Computer Animated Art Festivals" that run overnight on Fridays.
Pantera covered the song "Planet Caravan", originally by Black Sabbath, on their 1994 album Far Beyond Driven. The music video for this song features scenes from Beyond the Mind's Eye.
References
External links
The Mind's Eye
Beyond the Mind's Eye
The Gate to the Mind's Eye
Odyssey Into the Mind's Eye
Luminous Visions
Ancient Alien
Virtual Nature
Computer-animated films
Package films |
1949450 | https://en.wikipedia.org/wiki/Variable%20data%20printing | Variable data printing | Variable data printing (VDP) (also known as variable information printing (VIP) or variable imaging (VI)) is a form of digital printing, including on-demand printing, in which elements such as text, graphics and images may be changed from one printed piece to the next, without stopping or slowing down the printing process and using information from a database or external file. For example, a set of personalized letters, each with the same basic layout, can be printed with a different name and address on each letter. Variable data printing is mainly used for direct marketing, customer relationship management, advertising, invoicing and applying addressing on selfmailers, brochures or postcard campaigns.
Variable data printing
VDP is a direct outgrowth of digital printing, which harnesses computer databases and digital print devices and highly effective software to create high-quality, full color documents, with a look and feel comparable to conventional offset printing. Variable data printing enables the mass customization of documents via digital print technology, as opposed to the 'mass-production' of a single document using offset lithography. Instead of producing 10,000 copies of a single document, delivering a single message to 10,000 customers, variable data printing could print 10,000 unique documents with customized messages for each customer.
There are several levels of variable printing. The most basic level involves changing the salutation or name on each copy much like mail merge. More complicated variable data printing uses 'versioning', where there may be differing amounts of customization for different markets, with text and images changing for groups of addresses based upon which segment of the market is being addressed. Finally there is full variability printing, where the text and images can be altered for each individual address. All variable data printing begins with a basic design that defines static elements and variable fields for the pieces to be printed. While the static elements appear exactly the same on each piece, the variable fields are filled in with text or images as dictated by a set of application and style rules and the information contained in the database.
There are three main operational methodologies for variable data printing.
In one methodology, a static document is loaded into printer memory. The printer is instructed, through the print driver or raster image processor (RIP), to always print the static document when sending any page out to the printer driver or RIP. Variable data can then be printed on top of the static document. This methodology is the simplest way to execute VDP; however, its capability is less than that of a typical mail merge.
A second methodology is to combine the static and variable elements into print files, prior to printing, using standard software. This produces a conventional (and potentially huge) print file with every image being merged into every page. A shortcoming of this methodology is that running many very large print files can overwhelm the RIP's processing capability. When this happens, printing speeds might become slow enough to be impractical for a print job of more than a few hundred pages.
A third methodology is to combine the static and variable elements into print files, prior to printing, using specialized VDP software. This produces optimized print files, such as PDF/VT, PostScript or PPML, that maximize print speed since the RIP only needs to process static elements once.
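A minimal sketch of the merge concept common to all three methodologies - static template text plus variable fields filled from database records - is shown below; the template, field names and records are purely illustrative and do not correspond to any particular VDP product.

    TEMPLATE = "Dear {name},\nYour {product} order will ship to {city} this week."

    records = [  # assumed rows from a customer database
        {"name": "Alice", "product": "calendar", "city": "Austin"},
        {"name": "Bob", "product": "poster", "city": "Boston"},
    ]

    for record in records:
        # The static elements stay identical; the variable fields change per record.
        print(TEMPLATE.format(**record))
        print("---")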
Software and services
There are many software packages available to merge text and images into VDP print files. Some are stand-alone software packages, however most of the advanced VDP software packages are actually plug-in modules for one or more publishing software packages such as Adobe Creative Suite.
Besides VDP software, other software packages may be necessary for VDP print projects. Mailing software is necessary in the United States (United States Postal Service) and Canada to take advantage of reduced postage for bulk mailing. Used prior to the VDP print file creation, mailing software presorts and validates and generates bar codes for mailing addresses. Pieces can then be printed in the proper sequence for sorting by postal code. In Canada, Canada Post now offers a 'Machineable' personalized mail category which does not require addresses to be sorted into any specific order before mailing; therefore reducing the need for specialized sorting software to obtain optimal postage rates.
Software to manage data quality (e.g. for duplicate removal or handling of bad records) and uniformity may also be needed. In lieu of purchasing software, various companies provide an assortment of VDP-related print file, mailing and data services.
Benefits
The difference between variable data printing and traditional printing is the personalization that is involved. Personalization allows a company to connect to its customers. Variable data printing is more than a variable name or address in a printed piece; in the past, a variable name would have been effective, because it was a new concept at the time. In today’s world, personalization has to reflect what the customer values. In order for VDP to be successful, the company must know something about the customer. For example, a customer who loves baseball receives a VDP postcard with an image of their favorite baseball player. The postcard is effective, because the customer is more likely to read what is on it. An example of an ineffective VDP piece would be to mail a postcard to the same customer with an image of a soccer player. If the customer has no interest in soccer, then he or she may or may not pay attention to the postcard. The idea is to add value to the customer through the ability to relate to them. Personalization allows a company to relate, communicate, and possibly start a relationship with a prospective customer and to maintain a relationship with their current customers. A prospect that is converted to a customer can then be converted to a loyal customer. Companies want to create loyal customers, and variable data printing can help gain these types of customers, along with quality work and service.
Another benefit is the increase in the response rate and response time. Because personalization catches the attention of the consumer, the response rate of a mail campaign increases. Personalization also increases the response time, because the mailed piece has an effect on the consumer. This effect causes the consumer to respond quicker. A mailed piece that is not eye-catching may be put down and forgotten until a later time, so it may take weeks before a response is received.
Integration
Variable data printing can be combined with other platforms – such as PURLs, email blasts, and QR codes; all three are considered marketing tools. Many marketers have found it beneficial to combine these platforms in order to run a successful campaign. Email blasts and PURLs allow a company to find out information about the consumer. An email blast usually doesn't contain much personalization, but it can. The bulk of the personalization is seen in a PURL, a personalized uniform resource locator (URL) - in short, a landing page. It is also where most of the knowledge about the consumer is gained: the email blast contains a PURL, which leads the consumer to a personalized page where the company can gather information through the details requested there. A QR code can be added to a mailed piece; it works like an email blast in that it directs the consumer to a website. The integration of these three platforms can help a campaign.
Origin of the concept
The origin of the term variable data printing is widely credited to Frank Romano, Professor Emeritus, School of Print Media, at the College of Imaging Arts and Sciences at Rochester Institute of Technology. Mr. Romano does not explicitly take credit for coining the term but points to his use of it as early as 1969 and its appearance in the 1999 book, “Personalized and Database Printing”, that he authored with David Broudy.
The concept of merging static document elements and variable document elements predates the term and has seen various implementations ranging from simple desktop mail merge, to complex mainframe applications in the financial and banking industry. In the past, the term VDP has been most closely associated with digital printing machines. However, in recent years the application of this technology has spread to web pages, emails, and mobile messaging.
See also
Desktop publishing
Digital printing
Dynamic publishing
Mail merge
Mass customization
Offset printing
Personalization
Print on demand
Transaction printing
Variable data publishing
References
Documents
Digital press |
32561365 | https://en.wikipedia.org/wiki/Claims-based%20identity | Claims-based identity | Claims-based identity is a common way for applications to acquire the identity information they need about users inside their organization, in other organizations, and on the Internet. It also provides a consistent approach for applications running on-premises or in the cloud. Claims-based identity abstracts the individual elements of identity and access control into two parts: a notion of claims, and the concept of an issuer or an authority.
Identity and claims
A claim is a statement that one subject, such as a person or organization, makes about itself or another subject. For example, the statement can be about a name, group, buying preference, ethnicity, privilege, association or capability. The subject making the claim or claims is the provider. Claims are packaged into one or more tokens that are then issued by an issuer (provider), commonly known as a security token service (STS).
The name "claims-based identity" can be confusing at first because it seems like a misnomer. Attaching the concept of claims to the concept of identity appears to be combining authentication (determination of identity) with authorization (what the identified subject may and may not do). However a closer examination reveals that this is not the case. Claims are not what the subject can and cannot do. They are what the subject is or is not. It is up to the application receiving the incoming claim to map the is/is not claims to the may/may not rules of the application. In traditional systems there is often confusion about the differences and similarities between what a user is/is not and what the user may/may not do. Claims-based identity makes that distinction clear.
Security token service
Once the distinction between what the user is/is not and what the user may/may not do is clarified, it is possible that the authentication of what the user is/is not (the claims) can be handled by a third party. This third party is called the security token service. To better understand the concept of security token service, consider the analogy of a night club with a doorman. The doorman wants to prevent under-age patrons from entry. To facilitate this he requests a patron to present a driver's license, health insurance card or other identification (the token) that has been issued by a trusted third party (the security token service) such as the provincial or state vehicle license department, health department or insurance company. The nightclub is thus relieved of the responsibility of determining the patron's age. It only has to trust the issuing authority (and of course make its own judgment of the authenticity of the token presented). With these two steps completed the nightclub has successfully authenticated the patron with regard to the claim that he or she is of legal drinking age.
Continuing the analogy, the nightclub may have a membership system, and certain members may be regular or VIP. The doorman might ask for another token, the membership card, which might make another claim; that the member is a VIP. In this case the trusted issuing authority of the token would probably be the club itself. If the membership card makes the claim that the patron is a VIP, then the club can react accordingly, translating the authenticated VIP membership claim to a permission such as the patron being permitted to sit in the exclusive lounge area and be served free drinks.
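A minimal sketch of how an application might translate such authenticated is/is-not claims into its own may/may-not permissions follows; the claim names and rules are illustrative only and not part of any particular standard.

    # Claims describe what the subject IS, as asserted by a trusted issuer.
    token_claims = {"name": "Alice", "age_over_18": True, "membership": "VIP"}

    def permissions(claims):
        # The application maps is/is-not claims onto its own may/may-not rules.
        allowed = set()
        if claims.get("age_over_18"):
            allowed.add("enter_club")
        if claims.get("membership") == "VIP":
            allowed.add("use_vip_lounge")
        return allowed

    print(permissions(token_claims))  # both permissions are granted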
Note that not all uses of the term "authentication" include claims acquisition. The only difference is that authentication is limited to the binding of the user to the information contained about the user in the target site as no attribute data (claim) is required to complete an authentication. As privacy concerns become more important, the ability of digital entities to authenticate users without access to personal attributes becomes increasingly important.
Benefits
Claims-based identity has the potential to simplify authentication logic for individual software applications, because those applications don't have to provide mechanisms for account creation, password creation, reset, and so on. Furthermore, claims-based identity enables applications to know certain things about the user, without having to interrogate the user to determine those facts. The facts, or claims, are transported in an "envelope" called a secure token.
Claims-based identity can greatly simplify the authentication process because the user doesn't have to sign in multiple times to multiple applications. A single sign in creates the token which is then used to authenticate against multiple applications, or web sites. In addition, because certain facts (claims) are packaged with the token, the user does not have to tell each individual application those facts repeatedly, for instance by answering similar questions or completing similar forms.
See also
Access Control Service
Identity management
Security token
References
Authentication methods |
67345750 | https://en.wikipedia.org/wiki/Use%20and%20development%20of%20software%20for%20COVID-19%20pandemic%20mitigation | Use and development of software for COVID-19 pandemic mitigation | Various kinds of software have been developed and used for mitigating the COVID-19 pandemic. These include mobile apps for contact tracing and notifications about infection risks, digital passports verifying one's vaccination status, software for enabling – or improving the effectiveness of – lockdowns and social distancing in general, Web software for the creation of related information services, and software for the research and development for COVID-19 mitigation.
Contact tracing
Design
Many different contact tracing apps have been developed. Design decisions, such as those relating to users' privacy, data storage and data security, vary between the apps. In most cases the different apps are not interoperable, which may reduce their effectiveness when multiple apps are used within a country or when people travel across national borders. It has been suggested that building a single open-source app (or a small set of them) that is usable by all, ensures interoperability, and is developed as one robust app into which everyone puts their energy – bundling the workforce, standardizing an optimal design and combining maximum innovative capacity – would have been preferable to such highly parallel development.
Use and effectiveness
Unincentivized, entirely voluntary use of such digital contact tracing apps by the public was found to be low even when the apps were built to preserve privacy, limiting the software's usefulness for pandemic mitigation as of April 2021. A lack of features and persistent, prevalent errors reduced their usefulness further. Moreover, use of such an app in general or during specific times is in many or all cases not provable.
Check-in functionality
Some apps also allow "check-ins" that enable contact tracing and exposure notifications after entering public venues such as fitness centres. One such example is the We-Care project, a novel initiative by University of California, Davis researchers that uses anonymity and crowdsourced information, to which check-ins are essential, to alert infected users and slow the spread of COVID-19.
Digital vaccination certificates
Digital vaccine passports and vaccination certificates use software to verify a person's coronavirus vaccination status.
Such certificates may enable vaccinated persons to get access to events, buildings and services in the public such as airplanes, concert venues and health clubs and travel across borders. This may enable partial reopenings. Lawrence Gostin stated that there is enormous economic and social incentive for such proof of vaccinations.
Hurdles and ethical implications
COVID-19 vaccines are usually distributed based on infection risks and granting privileges based on vaccination-status certification has some ethical implications, like any of the other mechanisms/factors by which society grants privileges to individuals. For instance, privileges based on vaccination-status may lead to people not at high risk of COVID-19 infection or of a severe prognosis of the disease to obtain a large share of the limited supply of vaccination doses (via society's mechanisms of finance) and the vaccinated people to be granted permissions that could be seen as "unfair" by unvaccinated people. Moreover, such certificates could require a set of tamper-proof, privacy-respecting, verifiable, authenticity-ensuring, data-validity-ensuring, secure digital certification technologies – robust digital signature cryptography-based software that may not exist yet. Furthermore, such privileging mechanisms may exacerbate inequality, increase risks of deliberate infections or transmission, and depend or increase dependence on inoculation preventing COVID-19 transmission which the WHO considers to still be an uncertainty. The public health justification of avoiding preventable sickness or death of others may not be shared or communicated effectively to significant parts of the population. A large share of elderly do not have smartphones which many digital vaccination certificates designs may rely on.
Design
Several groups have stated that common standards are important and that a single common and optimal standard for each purpose would be best. As no adequate technical swift mechanisms for its collective design – or establishing firm consensus for it – exists, some teams are developing cross-compatible solutions. Likewise, the design and development of such technologies is highly parallel, rather than collaborative, efficient and integrative. Governments often like to ensure data sovereignty. According to some experts, national governments should have developed – or helped to develop – a standardized, secure, digital proof of vaccination earlier. In the U.S. such digital certificates are being developed by the private sector, with a large number of different solutions being produced by small corporate teams and no vaccinations database being designed by state-funded organizations. Development of many solutions may lead to a large number of security vulnerabilities and no highly robust, privacy-respecting, extendable, performant and interoperable software which, however, may be developed, improved or become a standard years after the large number of apps are published and used. Saskia Popescu and Alexandra Phelan argue that "any moves to institute vaccine passports must be coordinated internationally".
The WHO has established a – small and nonparticipative – "working group focused on establishing standards for a common architecture for a digital smart vaccination certificate to support vaccine(s) against COVID-19 and other immunizations".
The COVID-19 Credentials Initiative hosted by Linux Foundation Public Health (LFPH) is a global initiative working to develop and deploy privacy-preserving, tamper-evident and verifiable credential certification projects based on the open standard Verifiable Credentials (VCs).
Cybersecurity expert Laurin Weissinger argues that it is important for such software to be fully free and open source, for concepts and designs to be clarified in a timely manner, for the software to be penetration-tested by security experts, and for it to be communicated which data is collected and how it is processed, as this would be needed to build the required trust. Jenny Wanger, director of programs at the Linux Foundation, also contends that it is essential for such software to be open source. ACLU senior policy analyst Jay Stanley affirms this notion and warns that an "architecture that is not good for transparency, privacy, or user control" could set a "bad standard" for future apps and systems that host credentials.
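A minimal sketch of the kind of tamper-evident digital signature these experts describe is given below, assuming the third-party Python cryptography package and an invented certificate payload; it does not implement any specific credential standard.

    import json
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The issuer (for example, a health authority) signs the certificate payload once.
    issuer_key = Ed25519PrivateKey.generate()
    payload = json.dumps({"name": "Alice", "vaccine": "example", "doses": 2}).encode()
    signature = issuer_key.sign(payload)

    # Any verifier holding the issuer's public key can detect tampering:
    # verify() raises InvalidSignature if the payload or signature was altered.
    issuer_key.public_key().verify(signature, payload)
    print("certificate verified")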
In Israel, Estonia and Iceland such passports are already being used. In other places like New York pilot programs are being run. Many other countries and unions are considering or planning for such passports and/or certificates.
Software for remote work, distance education, telemedicine, product delivery, eGovernment and videoconferencing
Websites
Web software has been used to inform the public about the latest state of the pandemic. Wikipedia and COVID-19 dashboards were used widely for obtaining aggregated, integrated and reliable information about the pandemic.
The Wikimedia project Scholia provides a graphical interface around data in Wikidata – such as literature about a specific coronavirus protein – which may help with research, research-analysis, making data interoperable, automated applications, regular updates, and data-mining.
A multitude of diverse websites were consulted by citizens proactively striving to learn which activities were allowed and which disallowed – such as the times during which curfews apply – which changed continuously throughout the pandemic and also varied by location.
A study found results from a German government-organized hackathon held via Internet-technologies to be "tangible".
In response to the COVID-19 pandemic, a group of online archivists have used the open access PHP- and Linux-based shadow library Sci-Hub to create an archive of over 5000 articles about coronaviruses. They confirmed that making the archive openly accessible is currently illegal, but consider it a moral imperative. Sci-Hub provides free full access for most scientific publications about the COVID-19 pandemic.
In addition, multiple legacy scientific publishers have created open access portals, including the Cambridge University Press, the Europe branch of the Scholarly Publishing and Academic Resources Coalition, The Lancet, John Wiley and Sons, and Springer Nature.
Medical software
GNU Health
The open source, Qt- and GTK-based GNU Health has a variety of default features already built in that makes it useful during pandemics. As the software is open source, it allows many different parties to pool efforts on an existing single integrated program – instead of different programs for different purposes or different clients and multiple programs for each purpose – to enhance its usefulness during the pandemic as well as adapting it to their needs. Already existing features include a way for the clinical information to be made available and get updated immediately in any health institution via the unique "Person Universal ID", lab test report templates and functionalities, functionality for digital signing and encryption of data as well as storage of medical records. Theoretically, the software could be used or modified to aid e.g. COVID-19 testing and research about COVID-19 as securely sharing anonymized patient treatments, medical history and individual outcomes – including by common primary care physicians – may speed up such research and clinical trials. The software has been considered as a possible backbone of a robust, sustainable public health infrastructure based on cooperation.
Software for COVID-19 testing
In Bavaria, Germany, a delay in communicating 44,000 test results was caused by the lack of use, preparatory setup, or development of the required software.
Vaccination management
Software is being used to manage the distribution of vaccines, which have to be kept cold, and to record which individuals have already received a vaccination and which vaccine they received. A lack of use, preparatory setup, or development of the required software caused delays and other problems.
Screening
In China, Web-technologies were used to screen and direct individuals to appropriate resources. In Taiwan, infrared thermal cameras were used in airports to rapidly detect individuals with fever. Machine learning has been used for rapid diagnosis and risk prediction of COVID-19.
Quarantining
Electronic monitoring hardware and software has been used to ensure and verify infected individuals' adherence to quarantine. However, solutions based on mobile apps have been found to be insufficient. Furthermore, various software designs may threaten civil liberties and infringe on privacy.
Nextstrain
Nextstrain is an open source platform for pathogen genomic data such as about viral evolution of SARS-CoV-2 and was used widely during the COVID-19 pandemic such as for research about novel variants of SARS-CoV-2.
Data sharing and copyright
In March 2021, a proposal backed by a large number of organizations and prominent researchers and experts and initiated mainly by organizations in India and South Africa, called the international authority World Trade Organization (WTO) to reduce copyright barriers to COVID-19 prevention, containment and treatment. In 2020, professor Sean Flynn, noted that "Only a minority of countries authorize sharing text and data mining databases between researchers needed for collaboration, including across borders."
Vaccine production
Software has been used in leaks and industrial espionage of vaccine-related data. Machine learning has been applied to improve vaccine productivity.
Modelling software
Scientific software models and simulations are used to study SARS-CoV-2, including its spread, functional mechanisms and properties, the efficacy of potential treatments, transmission risks, and vaccination modelling/monitoring (computational fluid dynamics, computational epidemiology, computational biology/computational systems biology, and related fields).
Modelling software and related software is also used to evaluate impacts on the environment (see #Websites) and the economy.
Results from such models are used in science-based policy-making, science-based recommendations and the development of treatments.
Folding@home
In March 2020, the volunteer distributed computing project Folding@home became the world's first system to reach one exaFLOPS. The system simulates protein folding, was used for medical research on COVID-19 and achieved a speed of approximately 2.43 x86 exaFLOPS by 13 April 2020, many times faster than Summit, the fastest supercomputer up to that date.
That same month, the Rosetta@home distributed computing project also joined the effort. The project uses volunteers' computers to model the proteins of SARS-CoV-2 virus to discover potential drug targets or develop new proteins to neutralize the virus. The researchers announced that using Rosetta@home, they were able to "accurately predict the atomic-scale structure of an important coronavirus protein weeks before it could be measured in the lab."
In May 2020, the OpenPandemics—COVID-19 partnership was launched between Scripps Research and IBM's World Community Grid. The partnership is a distributed computing project that "will automatically run a simulated experiment in the background [of connected home PCs] that will help predict the efficacy of a particular chemical compound as a potential treatment for COVID-19."
COVID-19 drug repurposing research and drug development
Supercomputers – including the world's fastest single supercomputers Summit and Fugaku – have been used in attempts to identify potential treatments by running simulations with data on existing, already-approved medications. Two early examples of supercomputer consortia are listed below:
In March 2020, the United States Department of Energy, National Science Foundation, NASA, industry, and nine universities pooled resources to access supercomputers from IBM, combined with cloud computing resources from Hewlett Packard Enterprise, Amazon, Microsoft, and Google, for drug discovery. The COVID-19 High Performance Computing Consortium also aims to forecast disease spread, model possible vaccines, and screen thousands of chemical compounds to design a COVID-19 vaccine or therapy.
The C3.ai Digital Transformation Institute, an additional consortium of Microsoft, six universities (including the Massachusetts Institute of Technology, a member of the first consortium), and the National Center for Supercomputer Applications in Illinois, working under the auspices of C3.ai, an artificial intelligence software company, are pooling supercomputer resources toward drug discovery, medical protocol development and public health strategy improvement, as well as awarding large grants to researchers who proposed by May to use AI to carry out similar tasks.
See also
Timeline of computing 2020–2029
Pandemic prevention#Surveillance and mapping
COVID-19 surveillance
Teamwork
Open-source software development
Citizen science#COVID-19 pandemic
Information management
COVID-19 pandemic#Information dissemination
Open-source ventilator
Bioinformatics
Impact of the COVID-19 pandemic on science and technology#Computing and machine learning research and citizen science
Public health mitigation of COVID-19#Information technology
Technology policy
References
External links
, a scientific review for an overview of how IT applications could be used during the COVID-19 outbreak and pandemic
Emergency management software |
3210339 | https://en.wikipedia.org/wiki/The%20Melodians | The Melodians | The Melodians are a rocksteady band formed in the Greenwich Town area of Kingston, Jamaica, in 1963, by Tony Brevett (born 1949, nephew of The Skatalites bassist, Lloyd Brevett), Brent Dowe and Trevor McNaughton. Renford Cogle assisted with writing and arranging material.
Career
Trevor McNaughton had the idea of putting a group together and contacted the then 14-year-old Tony Brevett, who had already had success in local talent shows. Brevett recruited his friend Brent Dowe and the group was formed, with Brevett taking on lead vocal duties. Bramwell Brown and Renford Cogle also had short stints in the group in its early days, and Cogle became one of the group's main songwriters.
The group recorded some material with Prince Buster before Ken Boothe introduced them to Coxsone Dodd's Studio One label where in 1966 they recorded "Lay It On" (one of the first records to reflect the shift from ska to rocksteady), "Meet Me", "I Should Have Made It Up" and "Let's Join Hands (Together)". Lead vocal duties were now shared between Brevett and Dowe. From 1967 to 1968 they had a number of hits on Duke Reid's Treasure Isle label, including "You Have Caught Me", "Expo 67", "I'll Get Along Without You", and "You Don't Need Me". After recording "Swing and Dine" for record producer Sonia Pottinger, they had further hits with "Little Nut Tree" before recording their biggest hit, "Rivers of Babylon" for Leslie Kong. This song became an anthem of the Rastafarian movement, and was featured on the soundtrack of the movie The Harder They Come. In the early 1970s Brevett also recorded as a solo artist, having his greatest success with "Don't Get Weary". After Kong's death in 1971, they recorded for Lee Perry and Byron Lee's Dynamic Studios. In 1973, Brent Dowe left the group for a solo career. The group reformed briefly a few years later, and again in the early 1980s.
The Melodians regrouped again in the 1990s as part of the roots revival. In 1992 they recorded "Song of Love", which was issued on the Tappa Zukie label. Throughout the later 1990s they continued touring internationally, including appearing at the Sierra Nevada World Music Festival in California in 2002. In 2005 The Melodians embarked on a West Coast tour.
The death of Tony Brevett in 2013 left McNaughton as the only surviving original member. McNaughton toured as a solo artist in 2014 and subsequently recruited Taurus Alphonso (formerly of the Mellow Tones) and Winston Dias (formerly of The Movers) to form a new Melodians line-up. As of February 2015, the group were recording a new album in Florida with producer Willie Lindo. The Return of the Melodians was released in May 2017 and went on to reach no. 19 on the Billboard Reggae Albums chart.
In February 2017, the Melodians received an 'Iconic Award' from the Jamaica Reggae Industry Association (JaRIA).
Deaths
Brent Dowe
On the evening of 29 January 2006, after a rehearsal in preparation for a performance to take place the following weekend at the Jamaican Prime Minister’s residence, Brent Dowe suffered a fatal heart attack at the age of 59. The remaining original members Tony Brevett and Trevor McNaughton continued touring in Europe and the U.S. backed by the Yellow Wall Dub Squad.
Tony Brevett
On 25 October 2013 Tony Brevett died from cancer after being admitted to hospital in Miami in August. He was 64 years old.
Trevor McNaughton
McNaughton, the last surviving original member of the group, died on 20 November 2018 at the Kendrick Rehabilitation Hospital in Hollywood, Florida, from respiratory failure. He was 77, and had been admitted to hospital the previous month.
Partial discography
Albums
Rivers of Babylon (1970), Trojan
Sweet Sensation (1976), Trojan
Sweet Sensation: The Original Reggae Hit Sound (1980), Island
Irie Feelings (1983), Ras
Premeditation (1986), Skynote
The Return of the Melodians (2017), TWT Music
Compilation albums
Swing and Dine (1993), Heartbeat
Rivers of Babylon (1997), Trojan
Sweet Sensation: The Best of the Melodians (2003), Trojan
Compilation appearances
The Rough Guide to Reggae (1997), World Music Network
See also
Crab Records
References
External links
Discography at Discogs
Jamaican reggae musical groups
Rocksteady musical groups
Trojan Records artists |
66359436 | https://en.wikipedia.org/wiki/2011%20SP189 | 2011 SP189 | is a small asteroid and Mars trojan orbiting near the of Mars (60 degrees behind Mars on its orbit).
Discovery, orbit and physical properties
2011 SP189 was first observed on 29 September 2011 by the Mount Lemmon Survey. Its orbit is characterized by low eccentricity (0.040), moderate inclination (19.9°) and a semi-major axis of 1.52 AU. Upon discovery, it was classified as a Mars-crosser by the Minor Planet Center. Its orbit is well determined, currently (January 2021) being based on 45 observations with a data-arc span of 2390 days. It has an absolute magnitude of 20.9, which gives a characteristic diameter of 300 m.
Mars trojan and orbital evolution
Recent calculations indicate that it is a stable Mars trojan with a libration period of 1300 yr and an amplitude of 20°. These values are similar to those of 5261 Eureka and related objects and it may be a member of the so-called Eureka family.
See also
5261 Eureka (1990 MB)
References
Further reading
Three new stable L5 Mars Trojans de la Fuente Marcos, C., de la Fuente Marcos, R. 2013, Monthly Notices of the Royal Astronomical Society: Letters, Vol. 432, Issue 1, pp. 31–35.
Orbital clustering of Martian Trojans: An asteroid family in the inner solar system? Christou, A. A. 2013, Icarus, Vol. 224, Issue 1, pp. 144–153.
External links
data at MPC.
Mars trojans
Minor planet object articles (unnumbered)
20110929 |
28680468 | https://en.wikipedia.org/wiki/UNIGINE%20Company | UNIGINE Company | UNIGINE Company is a multinational software development company headquartered in Clemency, Luxembourg. It is well known for developing the UNIGINE Engine proprietary cross-platform middleware and advanced GPU benchmarks (Heaven, Valley and Superposition).
Main products
UNIGINE Engine (cross-platform 3D engine for simulators, virtual reality systems and computer games)
Naval strategy Oil Rush
GPU benchmarks
Sanctuary v2.3 (2007-2010) – no longer supported;
Tropics v1.3 (2007-2010) – included in the Phoronix Test Suite for Linux, no longer supported;
Heaven v4.0 (2009-2013) – the first benchmark for DirectX 11, included in the Phoronix Test Suite for Linux;
Valley v1.0 (2013);
Superposition v1.1 (2017-2019).
History
The development of UNIGINE technology began with the open source project Frustum, which was opened in 2004 by Alexander Zapryagaev, co-founder (along with Denis Shergin, CEO) and ex-CTO of UNIGINE company, as well as the lead developer of the UNIGINE Engine.
The name UNIGINE is an abbreviation for Unique Engine or Universal Engine.
See also
Unigine Engine
Oil Rush
Linux gaming
References
External links
Benchmarks home page
Companies established in 2005
Video game companies of Russia
Video game development companies |
64200894 | https://en.wikipedia.org/wiki/Shad%20%28software%29 | Shad (software) | The Student Education Network (), with acronym Shad () That in addition to the abbreviation of the full name of the program, it refers to the word Shaad meaning happy, is a communication and educational software that was launched following the spread of the coronavirus due to the absence of students in schools in Iran.The software is owned by the Ministry of Education of Iran, and students, teachers and headmasters are the people who use this software.
At first, on 4 April 2020, Shad ran only within messaging apps, and principals, teachers, and students needed to install one of the Bale, Soroush, Gap, iGap, or Rubica messengers; on 9 April 2020, the Ministry of Education released the software as a standalone app that no longer required those messengers. About 70% of Iranian students are members of this social network. Due to the Ministry of Education's emphasis on installing and using the software, a significant number of students became active in this student network, estimated at more than 17 million people. According to Mohammad Mehdi Nooripour, chairman of the Student Organization Assembly, Shad software has about 800,000 daily visits. Thirteen percent of Iranian students have never had an electronic device on which to set up the application.
History
During the pandemic, Shad's development was delayed and it was initially replaced by televised lessons, an approach that had many problems.
It has also been called the "social network of students".
Products and services
shaadbin.ir ("children search engine"- by Zarebin.ir)
Student real life identity authentication (15 million students)
Temporary free Internet bandwidth(mobile data- some designated Iranian mobile network corporations offered SIMs)
External APIs for Iranian mobile apps
Use
Private school educators are not required to install the app.
Reception
Simultaneously with its unveiling, many students and teachers criticized the software. They claimed that it was of low quality and could not make up for the instruction students were missing. In the view of some, its inefficiency is due to the fact that some students live in deprived areas and lack facilities such as computers, laptops, smartphones, and even regular or high-speed Internet. On the other hand, Mohammad Mehdi Nooripour and Majid Najafizadeh, representatives of Iran's students and teachers, thanked the Minister of Education for setting up the network at a meeting of the Student Organization. More recently, an update added new features and improved Shad's performance.
See also
COVID-19 pandemic in Iran
References
External links
https://shad.ir/
Official website
Social software
Educational software
Android (operating system) software
Mobile applications
Instant messaging clients |
14901400 | https://en.wikipedia.org/wiki/1749%20Telamon | 1749 Telamon | 1749 Telamon is a dark Jupiter Trojan from the Greek camp, approximately in diameter. It was discovered by German astronomer Karl Reinmuth at the Heidelberg Observatory on 23 September 1949, and named after Telamon from Greek mythology. The D-type asteroid is the principal body of the proposed Telamon family and belongs to the 60 largest Jupiter trojans. It has a rotation period of 17.0 hours and possibly a spherical shape.
Classification and orbit
Telamon is a dark Jovian asteroid orbiting in the leading Greek camp at Jupiter's L4 Lagrangian point, 60° ahead of the Gas Giant's orbit in a 1:1 resonance (see Trojans in astronomy). It orbits the Sun at a distance of 4.6–5.7 AU once every 11 years and 8 months (4,268 days; semi-major axis of 5.15 AU). Its orbit has an eccentricity of 0.11 and an inclination of 6° with respect to the ecliptic. The body's observation arc begins with its first observation at Turku Observatory in January 1941, more than 8 years prior to its official discovery observation at Heidelberg.
Telamon family
Fernando Roig and Ricardo Gil-Hutton identified Telamon as the principal body of a small Jovian asteroid family, using the hierarchical clustering method (HCM), which looks for groupings of neighboring asteroids based on the smallest distances between them in the proper orbital element space. According to the astronomers, the Telamon family belongs to the larger Menelaus clan, an aggregation of Jupiter trojans which is composed of several families, similar to the Flora family in the inner asteroid belt.
However, this family is not included in David Nesvorný's HCM analysis from 2014. Instead, Telamon is listed as a non-family asteroid of the Jovian background population on the Asteroids Dynamic Site (AstDyS), which is based on another analysis by Milani and Knežević.
Physical characteristics
Telamon is a dark D-type asteroid according to the SDSS-based taxonomy and the surveys conducted by SMASS (Xu) and Pan-STARRS.
Diameter and albedo
According to the surveys carried out by the Infrared Astronomical Satellite IRAS, the Japanese Akari satellite and NASA's Wide-field Infrared Survey Explorer with its subsequent NEOWISE mission, Telamon measures between 64.90 and 81.06 kilometers in diameter and its surface has an albedo between 0.046 and 0.078.
The Collaborative Asteroid Lightcurve Link derives an albedo of 0.0469 and a diameter of 80.91 kilometers based on an absolute magnitude of 9.4.
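These figures are consistent with the commonly used conversion between absolute magnitude H, geometric albedo p and diameter, D = (1329 km / sqrt(p)) × 10^(−H/5); the following sketch is illustrative code, not taken from the cited source.

    def diameter_km(abs_magnitude, albedo):
        # Standard asteroid size relation: D = 1329 km / sqrt(albedo) * 10^(-H/5).
        return 1329.0 / albedo ** 0.5 * 10 ** (-abs_magnitude / 5.0)

    print(diameter_km(9.4, 0.0469))  # roughly 81 km, matching the quoted diameter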
Lightcurves
Photometric observations of Telamon by Stefano Mottola from August 1995 were used to build a lightcurve rendering a rotation period of 11.2 hours with a brightness variation of in magnitude (). In October 2010, another observation by Robert Stephens at the Goat Mountain Astronomical Research Station in California gave a period of 16.975 hours ().
In August 2017, observations by the K2 mission of the Kepler spacecraft during Campaign 6 gave two periods of 11.331 and 22.613 hours with an amplitude of 0.06 and 0.07 magnitude, respectively (). The body is possibly of spherical shape as all lightcurves measured a very small variation in brightness.
Naming
This minor planet was named by the discoverer after Telamon, from Greek mythology, an Argonaut who searched for the Golden Fleece and the father of Ajax and Teucer, after whom the minor planets 1404 Ajax and 2797 Teucer are named.
Telamon banished his son Teucer (as he had been banished by his own father) when he returned home from the Trojan War without the remains of his brother. The official naming citation was published by the Minor Planet Center on 15 February 1970 ().
References
External links
Asteroid Lightcurve Database (LCDB), query form (info )
Dictionary of Minor Planet Names, Google books
Discovery Circumstances: Numbered Minor Planets (1)-(5000) – Minor Planet Center
001749
Discoveries by Karl Wilhelm Reinmuth
Minor planets named from Greek mythology
Named minor planets
19490923 |
870501 | https://en.wikipedia.org/wiki/The%20Xindi | The Xindi | "The Xindi" is the 53rd episode of the American science fiction television series Star Trek: Enterprise, the first episode of the third season. It first aired on September 10, 2003, on the UPN network in the United States. The episode was written by executive producers Rick Berman and Brannon Braga, and directed by Allan Kroeker.
Set in the 22nd century, the series follows the adventures of the first Starfleet starship Enterprise, registration NX-01. Beginning with this episode, season three of Enterprise features an ongoing storyline following an attack on Earth by previously unknown aliens called the Xindi at the end of season two. In this episode, the crew of the Enterprise attempt to track down the location of the Xindi homeworld by asking a lone Xindi enslaved in a mining colony. After being tricked by the mining foreman, Captain Jonathan Archer (Scott Bakula) and Commander Charles "Trip" Tucker III (Connor Trinneer) escape with the Xindi, with assistance from Lieutenant Malcolm Reed (Dominic Keating) and the ship's new Military Assault Command Operations (MACO) team.
"The Xindi" saw the first appearance of several new sets, as well as a new costume for Sub-Commander T'Pol (Jolene Blalock). The episode saw a large number of guest stars, including several who would recur several more times during the third season such as Major Hayes played by Steven Culp, Tucker Smallwood as the Xindi-Primate Councilor and Randy Oglesby as Degra. The episode received ratings of 2.6/5 percent according to Nielsen Media Research, and was watched by 4.1 million viewers. "The Xindi" received a mixed reception from critics, who praised the increase of action promised for the season by this episode but criticised elements such as the writing and the MACOs.
Plot
As Enterprise travels deeper into the Delphic Expanse, a secret council of aliens discuss what to do with the lone human spaceship. Meanwhile, Captain Jonathan Archer (Scott Bakula) directs Enterprise to a mining penal colony within the Expanse. He then strikes a deal with the mine's foreman (Stephen McHattie): in exchange for a half-liter of liquid platinum, Archer and Commander Charles "Trip" Tucker III (Connor Trinneer) will be allowed to meet a Primate worker named Kessick (Richard Lineback).
Archer requests the coordinates of Xindus, the Xindi homeworld, from Kessick, but the alien refuses to help unless Archer helps him escape. Archer declines, but he soon learns that the foreman has ulterior motives and has ordered three warships to overpower Enterprise and enslave its crew. Kessick, who claims to know a way out of the mine, offers to guide the Starfleet officers in exchange for Archer's help. Archer reluctantly agrees, and Kessick leads him and Tucker through the mine's sewage removal system. However, the group is soon detected in a conduit, and the foreman floods the system with plasma in an effort to kill them. They narrowly escape, only to fall into the hands of the mine's security forces.
Meanwhile, Sub-Commander T'Pol (Jolene Blalock) persuades Lieutenant Malcolm Reed (Dominic Keating) to allow the newly assigned MACOs (Military Assault Command Operations) to attempt an extraction. Led by Reed, they perform remarkably well in combat and manage to rescue Archer, Tucker, and Kessick. Enterprise then leaves orbit just as the warships arrive. Kessick dies, but not before providing coordinates for the Xindi homeworld. When the ship reaches this position, there is nothing but a 120-year-old field of space debris.
Character development
"Vulcan Neuro-Pressure" becomes a routine Enterprise activity beginning with this episode and provides the structure for the ongoing development of the "Trip'Pol" (Trip + T'Pol) relationship, further sexualizing both actor Blalock and her character T'Pol as she appears topless alone in the same room with Trip. As noted previously, Trip can't sleep because he is haunted by his sister's death. Quantum anomalies from The Expanse (which will be explored throughout Season 3 in general and in the next episode in particular) are disturbing T'Pol's ability to sleep as well. Vulcan Neuro-Pressure (VNP) can release mental stresses and allow sleep probability to increase. Because of these weaknesses, the two agree to "treat" each other.
Production
The episode followed up on the plot introduced in the final instalment of the second season, in which a probe from an unknown alien race attacks Earth. A number of new sets and costumes were required, with preparations beginning for some departments up to three weeks before filming began. One change was a new outfit for T'Pol, with costume tests taking place a week in advance of filming. The redesign was due to studio executives wanting the show to appeal more to the 18–49 male demographic. The production team looked to emulate the mid-series boost that the introduction of Seven of Nine provided on Star Trek: Voyager. Kate O'Hara of New York Magazine later mocked the change: "Women of the future will certainly choose to wear tight, uncomfortable, skin-tight catsuits!"
"The Xindi" was seen as a new pilot by executive producers Brannon Braga and Rick Berman, who also wrote the episode. Braga said "We were re-establishing an Enterprise that was going to be a little bit different this year, so we had to think of it in those terms." They felt the best way to do this was to immediately reveal the Xindi to the audience, and to give the MACOs something to do in order to introduce them. He called it a "big episode" as they sought to set up the rest of season. The shoot began on June 26, 2003, taking nine days instead of the usual seven to complete. One of the special effects used ground Styrofoam, which had been dyed blue and processed through a wood chipper, to represent the mineral Trellium-D. The Styrofoam particles stuck to the actors' shoes and costumes and ended up being spread throughout the Paramount lot where the series was filmed. It would turn up in unexpected places on set for the rest of the series, and was found in among the sets as they were being dismantled after the end of season four.
Guest cast
"The Xindi" featured several actors who would reappear throughout the season. These included the MACO marines under the command of Major Hayes, played by Steven Culp, in his first of five episodes. At the time of his appearance in "The Xindi", he felt that he did not have any characterisation to work with. During the production of his second episode, "The Shipment", Culp read an article in the Los Angeles Times about a troubled youth who joined the military and in serving in the Iraq War had found himself. After discussing it with the director, this became the basis for the character. Daniel Dae Kim made his first of three appearances as Corporal Chang; he had previously appeared as Gotana-Retz in the Voyager episode "Blink of an Eye". Nathan Anderson had previously appeared as Namon in the Voyager episode "Nemesis"; he made one further appearance as Sergeant Kemper in the following episode, "Anomaly".
Other actors included the members of the Xindi council. As with Kim and Anderson, Tucker Smallwood had already appeared on Voyager, as Admiral Bullock in the episode "In the Flesh". He appeared as his Xindi character nine times during the third season of Enterprise. Randy Oglesby, who played Degra, was another Voyager alumnus. He had appeared as Kir in the episode "Counterpoint". Rick Worthy had appeared as several different characters in the Star Trek franchise, including an appearance in the 1998 film Star Trek: Insurrection. As well as Kornan in the Star Trek: Deep Space Nine episode "Soldiers of the Empire", he too had appeared in Voyager, but in two roles; first as the androids 3947 and 122 in "Prototype" and then as Noah Lessing in "Equinox". In addition, making a return to the Star Trek franchise was Stephen McHattie, who had previously appeared as the Romulan senator Vreenak in the Deep Space Nine episode "In the Pale Moonlight".
Reception
"The Xindi" was first aired in the United States on UPN on September 10, 2003. According to Nielsen Media Research, it received a 2.6/5 percent share among adults. This means that it was seen by 2.6 percent of all households, and 5 percent of all of those watching television at the time of the broadcast. It was estimated that "The Xindi" was watched by 4.1 million viewers. The following episode, "Anomaly", received the same rating but the viewer number increased by 200,000.
Robert Bianco reviewed "The Xindi" for USA Today, giving it two out of four stars. He said that while the Xindi storyline "does promise to provide more action and excitement", some of the alterations "smack[ed] of desperation". He called T'Pol "Seven of Vulcan", and said the main issue with the series was the "subpar" writing. In this episode, he felt that Tucker was written so poorly that Trinneer seemed like he was overacting to compensate for it.
Rob Owen of the Pittsburgh Post-Gazette described the new season as "less boring" and appreciated the faster pace, though he noted that the episode "does trip over itself", and he criticized the incomprehensible aliens and what he saw as a ridiculous seduction scene between T'Pol and Trip.
IGN gave it 1 out of 5, and said it was "like watching a television episode made up of all the things from the 'Stuff We've Tried That Doesn't Work on Star Trek' list". Criticism was directed at the introduction of the MACOs, which were described as Starship Troopers clones, and at the modification to the theme tune. The Xindi themselves were described as "bad Farscape knock offs", and the reviewer said that they set a poor tone for the rest of the season. Ain't It Cool News gave it 2.5 out of 5 and wrote, "This is easily the most mundane and haphazardly constructed of the Enterprise season openers." Michelle Erica Green of TrekNation said she was of a "double mind" about "The Xindi" as there were both good and bad elements. She praised the action sequences, the make-up and McHattie as the alien foreman. Green enjoyed the twist at the end with the Xindi homeworld being already destroyed, and felt that the Xindi races could be interesting if developed well. She wondered if the writers had learned at all from past mistakes, as the show was still not doing a good job with the dramatic tension among characters. She said the guest Xindi was given "more personality, wit and depth than any of these new semi-regulars" and feared the MACOs would not get the character development they would need, as it still had not happened for Mayweather after two seasons.
Novelization
"The Xindi" was adapted as a novel in conjunction with the preceding episode, "The Expanse", by J.M. Dillard. Entitled The Expanse, the book was published by Pocket Books in trade paperback format in October 2003.
Home media release
The first home media release of "The Xindi" was as part of the season three DVD box set, released in the United States on September 27, 2005. The Blu-ray release of Enterprise was released on January 7, 2014.
Notes
References
External links
Star Trek: Enterprise (season 3) episodes
2003 American television episodes
Television episodes written by Rick Berman
Television episodes written by Brannon Braga |
11610492 | https://en.wikipedia.org/wiki/Lord%20Tanamo | Lord Tanamo | Joseph Abraham Gordon (2 October 1934 – 15 April 2016), better known as Lord Tanamo, was a Jamaican-Canadian singer and songwriter best known for his mento and ska work.
Career
Born in Kingston and raised in Denham Town in the West of the city, Gordon was influenced by Lord Kitchener, who lived in Jamaica in the 1940s. His interest in music began at an early age when he heard a rumba box being played by local musician Cecil Lawes. He went on to perform locally as a teenager, singing calypsos accompanied by Lawes, and began performing in hotels in the early 1950s.
He first recorded for Kingston businessman and sound system operator Stanley Motta, and later recorded with a backing band that included Theophilus Beckford and Ernest Ranglin. His early hits included "Blues Have Got Me Down" (1960) for producer Emil Shallit.
He switched to ska in the early 1960s, and was a founding member of the Skatalites, singing with the band on tracks such as "Come Down" and "I'm in the Mood For Ska". He recorded for Clement Dodd, Duke Reid, and Lindon Pottinger in the 1960s, and had hits with adapted folk songs such as "Iron Bar" and "Matty Rag", as well as further hits with songs such as "Ol' Fowl". In 1965 he won the Festival Song Contest with "Come Down".
In 1970, he recorded a reggae cover of Tony Joe White's "Rainy Night in Georgia", which was a number one hit in Jamaica for seven weeks. He was based in Canada from the mid-1970s, where he married a local woman and opened the Record Nook shop, selling Jamaican-produced records, although he returned to Jamaica to record. During one of these trips back he recorded the 1979 album Calypso Reggae, for Bunny Lee.
In 1990, his ska cover of "I'm in the Mood for Love" gave him his only UK hit, reaching no. 58 in the UK Singles Chart after being featured in a television advert for Paxo in 1989.
In 2002, Tanamo performed as part of the 'Legends of Ska' concerts in Toronto, the performances recorded and released as a film in 2014. Tanamo continued to perform with the Skatalites into the 21st century, including a set at the 2003 Glastonbury Festival.
In January 2008 it was stated in a Jamaican newspaper that Tanamo was in a nursing home in Canada after suffering a stroke that had left him unable to speak. He died in Toronto on 15 April 2016.
Discography
Albums
Come, Come, Come To Jamaica – Independence Year 1962 (1964), RCA – Lord Tanamo and his Calypsonians
Festival Jump-Up (1965), Gaydisc
Calypso Reggae (1979), Third World
Rolling Steady (2007), Motion — The Skatalites
Best Place in the World (2000), Grover — Lord Tanamo with Dr. Ring-Ding & The Senior Allstars
Compilations
Skament-Movement (1992), Alpha Enterprise — Lord Tanamo with The Skatalites (reissued 1999 as Skamento Movement)
In the Mood For Ska (1993), Trojan – Lord Tanamo with The Skatalites
I'm in the Mood for Ska! The Best of Lord Tanamo (2007), Trojan
References
1934 births
2016 deaths
Musicians from Kingston, Jamaica
Jamaican male singers
Jamaican songwriters
Calypsonians
Jamaican ska musicians
Jamaican emigrants to Canada
Mento
Trojan Records artists
Jamaican expatriates in Canada
RCA Records artists |
60104416 | https://en.wikipedia.org/wiki/DarkMatter%20%28Emirati%20company%29 | DarkMatter (Emirati company) | DarkMatter Group, founded in the United Arab Emirates (UAE) in 2014 or 2015, is a cybersecurity company. The company describes itself as a purely defensive company, but several whistleblowers have alleged that it is involved in offensive cybersecurity ("cracking" or, colloquially, "hacking"), including on behalf of the Emirati government.
Company history
DarkMatter was founded in either 2014 or 2015 by Faisal al-Bannai, the founder of mobile phone vendor Axiom Telecom and the son of a major general in the Dubai Police Force. Around 2014, Zeline 1, a wholly owned subsidiary of DarkMatter, became active in Finland.
DarkMatter's public launch came in 2015, at the 2nd Annual Arab Future Cities Summit. At this time, the company advertised capabilities including network security and bug sweeping, and promised to create a new, "secure" mobile phone handset. It promoted itself as a "digital defense and intelligence service" for the UAE.
In 2016, DarkMatter replaced CyberPoint as a contractor for Project Raven. Also in 2016, DarkMatter sought smartphone development expertise in Oulu, Finland. DarkMatter recruited several Finnish engineers.
By early 2018, DarkMatter's turnover was hundreds of millions of U.S. dollars. Eighty per cent of its work was for the UAE government and related organizations, including the NESA. It had developed a smartphone model called Katim, Arabic for "silence". DarkMatter was an official provider for the Expo 2020, but has since been dropped in favour of a different company.
Recruitment practices
In addition to recruiting via conventional routes such as personal referrals and stalls at trade shows (e.g. Black Hat), DarkMatter headhunts staff from the U.S. National Security Agency and has "poached" competitors' staff after they were contracted to the UAE government, as happened with some CyberPoint employees.
The company has reportedly hired graduates of the Israel Defense Force technology units and is paying them up to $1 million annually.
Simone Margaritelli, an Italian security researcher, blogged about DarkMatter's vague and dubious recruiting practices as a warning to others. He said that when he raised questions or objections about the company's practices, he was told that "things had been blown out of proportion", and that information about the job opening remained extremely vague despite his questions.
Allegations of surveillance for UAE government
In response to alleged cyber spying by the government of Iran on its perceived opponents during 2010 and 2011, the United States assisted the United Arab Emirates in late 2011 with establishing the National Electronic Security Authority (NESA), the UAE's equivalent of the US NSA.
Project Raven
Project Raven was a confidential initiative to help the UAE surveil other governments, militants, and human rights activists. Its team included former U.S. intelligence agents, who applied their training to hack phones and computers belonging to Project Raven's victims. The operation was based in a converted mansion in Abu Dhabi nicknamed "the Villa."
From around 2014 to 2016, CyberPoint supplied U.S.-trained contractors to Project Raven. In 2016, news reports emerged that CyberPoint had contracted with the Italian spyware company Hacking Team, which damaged CyberPoint's reputation as a defensive cybersecurity firm. Reportedly dissatisfied with relying upon a U.S.-based contractor, the UAE replaced CyberPoint with DarkMatter as its contractor, and DarkMatter induced several CyberPoint staff to move to DarkMatter. After this, Project Raven reportedly expanded its surveillance to include the targeting of Americans, potentially implicating its American staff in unlawful behaviour.
Following a 24 October 2016 article in The Intercept revealing DarkMatter's surveillance work for the UAE, Samer Khalife, DarkMatter's chief financial officer, transferred some United States citizens from DarkMatter to a new company, Connection Systems, and DarkMatter established tiger teams to counter the allegations contained in the article.
On 9 December 2021, Loujain al-Hathloul filed a lawsuit in a US district court in Oregon against three former US intelligence and military officers who carried out hacking operations on behalf of the UAE. According to the lawsuit, the three men — Marc Baier, Ryan Adams, and Daniel Gericke — worked for DarkMatter and assisted Emirati security officials in exfiltrating data from her iPhone. The hacking led to al-Hathloul's arrest in the UAE and rendition to Saudi Arabia, where she was detained, imprisoned and tortured.
In December 2021, U.S. lawmakers urged Treasury Department and State Department to sanction DarkMatter, NSO Group, Nexa Technologies and Trovicor. The letter signed by the Senate Finance Committee Chairman Ron Wyden, House Intelligence Committee Chairman Adam Schiff and 16 other lawmakers, asked for Global Magnitsky sanctions, as the companies were accused of enabling human rights abuses. High-ranking executives at the DarkMatter, along with the three other firms, were demanded to be sanctioned in the letter.
Karma spyware
In 2016, Project Raven bought a tool called Karma. Karma was able to remotely exploit Apple iPhones anywhere in the world, without requiring any interaction on the part of the iPhone's owner. It apparently achieved this by exploiting a zero-day vulnerability in the device's iMessage app. Project Raven operatives were able to view passwords, emails, text messages, photos and location data from the compromised iPhones.
People whose mobile phones have been deliberately compromised using Karma reportedly include:
The Emir of Qatar, Sheikh Tamim bin Hamad Al Thani, plus his brother and several other close associates.
Nadia Mansoor, wife of imprisoned UAE human rights activist Ahmed Mansoor. (Nadia was nicknamed "Purple Egret" by Project Raven; Ahmed was nicknamed "Egret".)
British journalist Rori Donaghy. (Donaghy was nicknamed "Gyro" by Project Raven.)
Hundreds of other targets in Europe and the Middle East, including in the governments of Yemen, Iran and Turkey.
In 2017, Apple patched some of the security vulnerabilities exploited by Karma, reducing the tool's effectiveness.
Certificate authority controversy
In 2016, two DarkMatter whistleblowers and multiple other security researchers expressed concerns that DarkMatter intended to become a certificate authority (CA). This would give it the technical capability to create fraudulent certificates, which would allow fraudulent websites or software updates to convincingly masquerade as legitimate ones. Such capabilities, if misused, would allow DarkMatter to more easily deploy rootkits to targets' devices, and to decrypt HTTPS communications of Firefox users via man-in-the-middle attacks.
On 28 December 2017, DarkMatter requested that Mozilla include it as a trusted CA in the Firefox web browser. For more than a year, Mozilla's reviewers addressed concerns about DarkMatter's technical practices, eventually questioning on that basis whether DarkMatter met the baseline requirements for inclusion.
On 30 January 2019, Reuters published investigations describing DarkMatter's Project Raven. Mozilla's reviewers noted the investigation's findings. Subsequently, the Electronic Frontier Foundation (EFF) and others asked Mozilla to deny DarkMatter's request, on the basis that the investigation showed DarkMatter to be untrustworthy and therefore liable to misuse its capabilities. At that time, Mozilla's public consultation and deliberations were still ongoing.
In July 2019, Mozilla barred the government of the United Arab Emirates from operating as one of its internet security gatekeepers, following reports that the cyber-espionage program had been run by Abu Dhabi-based DarkMatter staff leading a clandestine hacking operation.
In August 2019, Google blocked websites approved by DarkMatter after Reuters reported the firm's involvement in a hacking operation led by the United Arab Emirates. Google had previously said that all websites certified by DarkMatter would be marked as unsafe by its Chrome and Android browsers.
F.B.I. investigation and indictments
DarkMatter is under investigation by the F.B.I. for crimes including digital espionage services, involvement in the Jamal Khashoggi assassination, and incarceration of foreign dissidents. The F.B.I. is also investigating current and former American employees of DarkMatter for possible cybercrimes. It is not clear whether American officials have confronted their counterparts in the Emirati government about the ToTok app, a tool claimed to be used for mass surveillance. All sources have spoken out anonymously for fear of retribution.
On September 14, 2021, Marc Baier, 49, Ryan Adams, 34, and Daniel Gericke, 40, who had been indicted for violations of United States laws involving computer fraud and improper exporting of technology, agreed to a deferred prosecution agreement. Under its terms they would pay fines over three years of $750,000, $600,000, and $335,000, respectively, for a total of $1.68 million; support FBI and Justice Department investigations; sever ties with any United Arab Emirates intelligence and law enforcement agencies; accept a prohibition on providing services, including defense articles covered by ITAR, and on future computer network exploitation employment; and immediately relinquish their security clearances from the United States and any foreign entity, with a lifetime ban on future United States security clearances. After the UAE contracts shifted from the US parent firm CyberPoint to its UAE subsidiary DarkMatter, Baier, a former NSA employee, and Adams and Gericke, who had served in the United States military and intelligence community, failed to receive permission to be employed by the UAE firm.
According to Lori Stroud, a former NSA employee, the trio had worked for the United States-based CyberPoint and then for its UAE subsidiary DarkMatter; in 2018, Faisal al-Bannai confirmed that DarkMatter works very closely with the government of the UAE and is a competitor of the Israeli firm NSO Group. From January 2016 to November 2019, Baier, Adams and Gericke significantly improved the operations that DarkMatter provided to the government of the UAE. DarkMatter was particularly interested in hacking into Qatar's computers and obtaining and reading its electronic messages. For example, DarkMatter had hacked into an electronic communication between First Lady Michelle Obama and a former Qatari minister regarding Obama and Conan O'Brien's November 2015 trip to Qatar, during which both visited the al-Udeid airbase, which hosts the forward base headquarters of United States Central Command, the RAF's No. 83 Expeditionary Air Group, and the headquarters of the United States Air Forces Central Command during the wars in Iraq and Afghanistan.
New United States law
In January 2020, during the FBI investigations into DarkMatter employees' conduct, the United States Congress passed a law, proposed by congressman Max Rose of New York in 2019, that requires the United States intelligence agencies to assess annually and in detail the risk to United States national security posed by former intelligence officials and employees working for foreign firms, governments and other entities. The law was driven in part by the United Arab Emirates' cyber espionage operations against United States citizens, firms, entities and government.
See also
NSO Group
Stealth Falcon
George Nader
Elliott Broidy
Notes
References
External links
Companies based in Abu Dhabi
Software companies established in 2014
Cyber-arms companies
Information technology companies of the United Arab Emirates
Computer surveillance |
18533974 | https://en.wikipedia.org/wiki/Guerrilla%20warfare%20in%20the%20American%20Civil%20War | Guerrilla warfare in the American Civil War | Guerrilla warfare during the American Civil War (1861–1865) was a form of warfare characterized by ambushes, surprise raids, and irregular styles of combat. Waged by both sides of the conflict, it gathered in intensity as the war dragged on and had a profound impact on the outcome of the Civil War.
Background
Guerrilla warfare in the American Civil War followed the same general patterns of irregular warfare conducted in 19th century Europe. Structurally, they can be divided into three different types of operations: the so-called 'people's war', 'partisan warfare', and 'raiding warfare'. Each had distinct characteristics that were common practice during the war.
Operations
People's war
The concept of a 'people's war,' first described by Clausewitz in his classic treatise On War, was the closest example of a mass guerrilla movement in the 19th century. During the American Civil War, this type of irregular warfare was generally conducted in the hinterland of the border states (Missouri, Arkansas, Tennessee, Kentucky, and northwestern Virginia / West Virginia). It was marked by the viciousness of neighbors fighting neighbors as old grudges were settled, and residents of one part of a county frequently took up arms against their counterparts elsewhere in the vicinity. Bushwhacking, murder, assault, and terrorism were characteristic of this kind of fighting. Few participants wore uniforms or were formally mustered into the actual armies. In many cases, civilians fought against civilians, or civilians fought against opposing enemy troops.
One such example was the opposing irregular forces operating in Missouri and northern Arkansas from 1862 to 1865, most of which were pro-Confederate or pro-Union in name only. They preyed on civilians and isolated military forces of both sides with little regard for politics. From the semiorganized guerrillas, several groups formed and were given some measure of legitimacy by their governments. Quantrill's Raiders, who terrorized pro-Union civilians and fought Federal troops in large areas of Missouri and Kansas, was one such unit. Another notorious unit, with debatable ties to the Confederate military, was led by Champ Ferguson along the Kentucky-Tennessee border, who became one of the few figures of the Confederate cause to be executed after the war. Dozens of other small, localized bands terrorized the countryside throughout the border region during the war, bringing total war to the area that lasted until the end of the Civil War and, in some areas, beyond.
Partisan warfare
Partisan warfare, in contrast, more closely resembled commando operations of the 20th century. Partisans were small units of conventional forces, controlled and organized by a military force for operations behind enemy lines. The 1862 Partisan Ranger Act, passed by the Confederate Congress, authorized the formation of such units and gave them legitimacy, which placed them in a different category from the common 'bushwhacker' or 'guerrilla'. John Singleton Mosby formed a partisan unit (the 43rd Battalion) that was very effective in tying down Union forces behind their lines in northern Virginia in the last two years of the war. Groups such as Blazer's Scouts, White's Comanches, the Loudoun Rangers, McNeill's Rangers, and other similar forces at times served in the formal armies, but they often were loosely organized and operated more as partisans than as cavalry, especially early in the war.
Raiding warfare
Lastly, deep raids by conventional cavalry forces were often considered 'irregular' in nature. The "Partisan Brigades" of Nathan Bedford Forrest and John Hunt Morgan operated as part of the cavalry forces of the Confederate Army of Tennessee in 1862 and 1863. They were given specific missions to destroy logistical hubs, railroad bridges, and other strategic targets to support the greater mission of the Army of Tennessee. Morgan led raids into Kentucky as well. In his last raid, he violated orders by crossing the Ohio River and raiding into Indiana and Ohio, seeking to bring the war to the North. The long raid diverted thousands of Union troops. Morgan captured and paroled nearly 6,000 troops, destroyed bridges and fortifications, and ran off livestock. Morgan's Raiders were mostly destroyed in the final days of the Great Raid of 1863.
Some of his followers continued under their own direction, such as Marcellus Jerome Clarke, who kept on with raids in Kentucky. The Confederacy conducted few deep cavalry raids in the latter years of the war, mostly because of the losses in experienced horsemen and the offensive operations of the Union Army. Federal cavalry conducted several successful raids during the war but in general used their cavalry forces in a more conventional role. A good exception was the 1863 Grierson's Raid, which did much to set the stage for General Ulysses Grant's victory during the Vicksburg Campaign.
Counterinsurgency
Counterinsurgency operations were successful in reducing the impact of Confederate guerrilla warfare. In Arkansas, Union forces used a wide variety of strategies to defeat irregulars. They included the use of Arkansas Unionist forces as anti-guerrilla troops, the use of riverine forces such as gunboats to control the waterways, and the provost marshal's military law enforcement system to spy on suspected guerrillas and to imprison those who were captured. Against Confederate raiders, the Union army developed an effective cavalry itself and reinforced that system by numerous blockhouses and fortification to defend strategic targets.
However, Union attempts to defeat Mosby's Partisan Rangers fell short of success because of Mosby's use of very small units (10–15 men) that operated in areas considered friendly to the Confederates. Another regiment, known as Thomas' Legion, composed of white and anti-Union Cherokee soldiers, morphed into a guerrilla force and continued fighting in the remote mountain back-country of western North Carolina for a month after Robert E. Lee's surrender at Appomattox Court House. That unit was never completely suppressed by Union forces, but it voluntarily ceased hostilities after capturing the town of Waynesville, North Carolina, on May 10, 1865.
Aftermath
In the late 20th century, several historians focused on the Confederate government's decision not to use guerrilla warfare to prolong the war. Near the end of the war, some in the Confederate administration advocated continuing the fight as a guerrilla conflict. Such efforts were opposed by Confederate generals such as Lee, who ultimately believed that surrender and reconciliation were the best options for the war-ravaged South.
Notable guerrillas
See also
Bushwhackers - (Confederate)
Jayhawkers - (Union)
Partisan rangers - (Confederate)
Primary sources
U.S. War Department, The War of the Rebellion: A Compilation of the Official Records of the Union and Confederate Armies, 70 volumes in 4 series. Washington, D.C.: United States Government Printing Office, 1880–1901.
Lowell Hayes Harrison, James c. Klotter, A New History of Kentucky, Lexington, KY: University Press of Kentucky, 1997
Further reading
Beckett, Ian Frederick William. Encyclopedia of guerrilla warfare (ABC-Clio, 1999)
Beilein, Joseph M. and Matthew Christopher Hulbert, eds. The Civil War Guerrilla: Unfolding the Black Flag in History, Memory, and Myth. Lexington: University Press of Kentucky, 2015. .
Browning, Judkin. Shifting Loyalties: The Union Occupation of Eastern North Carolina (Univ of North Carolina Press, 2011)
Fellman, Michael. Inside War: The Guerrilla Conflict in Missouri During the American Civil War (Oxford University Press, 1989)
Gallagher, Gary W. "Disaffection, Persistence, and Nation: Some Directions in Recent Scholarship on the Confederacy." Civil War History 55#3 (2009) pp: 329–353.
Grant, Meredith Anne. "Internal Dissent: East Tennessee's Civil War, 1849-1865." (thesis 2008). online
Hulbert, Matthew Christopher. "Constructing Guerrilla Memory: John Newman Edwards and Missouri's Irregular Lost Cause," Journal of the Civil War Era. 2, No. 1 (March 2012), 58–81.
Hulbert, Matthew Christopher. The Ghosts of Guerrilla Memory: How Civil War Bushwhackers Became Gunslingers in the American West. Athens: University of Georgia Press, 2016. .
Hulbert, Matthew Christopher. "How to Remember'This Damnable Guerrilla Warfare': Four Vignettes from Civil War Missouri," Civil War History. 59, No. 2 (June 2013), 142–167.
Hulbert, Matthew Christopher. "The Rise and Fall of Edwin Terrell, Guerrilla Hunter, U.S.A.," Ohio Valley History. 18, No. 3 (Fall 2018), 42–61.
Mackey, Robert R. The Uncivil War: Irregular Warfare in the Upper South, 1861-1865 (University of Oklahoma Press, 2014 reprint)
Mountcastle, Clay. Punitive War: Confederate Guerrillas and Union Reprisals (University Press of Kansas, 2009)
Nichols, Bruce, Guerrilla Warfare in Civil War Missouri, McFarland & Co. Inc., 2006. .
Sutherland, Daniel E. A Savage Conflict: The Decisive Role of Guerrillas in the American Civil War (Univ of North Carolina Press, 2009)
Vaughan, Virginia C. Tennessee County Historical Series: Weakley County (Memphis State University Press, 1983)
Williams, David. Bitterly Divided: The South's Inner Civil War (The New Press, 2010)
External links
"Guerilla Warfare in Kentucky" — Article by Civil War historian/author Bryan S. Bush
Guerrilla warfare
Military history of the Confederate States of America
Military operations of the American Civil War |
7732473 | https://en.wikipedia.org/wiki/CE%20Linux%20Forum | CE Linux Forum | The Consumer Electronics Linux Forum (CE Linux Forum or CELF) was a non-profit organization to advance the Linux operating system as an open-source software platform for consumer electronics (CE) devices. It had a primarily technical focus, working on specifications, implementations, conferences and testing to help Linux developers improve Linux for use in CE products.
It existed from 2003 to 2010.
History
The forum was an outgrowth of a joint project between Sony Corporation and Matsushita Electric Industrial Co. Ltd. (using the brand name Panasonic). CELF was founded in June 2003 by those plus six more consumer electronics companies, Hitachi Ltd., NEC Corporation, Royal Philips Electronics, Samsung Electronics Co. Ltd., Sharp Corporation, and Toshiba Corporation.
It was seen at least partially as a reaction to the use of Windows CE for consumer electronics.
Philips and Samsung founded a group with similar aims, the UHAPI Forum, in November 2004 to promote a universal home application programming interface (UHAPI).
The UHAPI was presented to the CE Linux Forum in 2005.
NXP Semiconductors was spun off from Philips in 2006, and the UHAPI was revised up to version 1.2.
A sample implementation of UHAPI was published on SourceForge.
The UHAPI Forum added a few other supporters, such as the Digital TV Alliance of China and the Japan-based company Access, and maintained a web site until the Great Recession of 2008.
By 2004, hardware from Renesas Electronics running software from Lineo was demonstrated at a CELF meeting.
In 2005, a meeting in San Jose, California drew engineers from competing companies.
By the end of 2006, the competing Linux Phone Standards Forum had formed, to focus on mobile devices.
After other groups such as Linaro and the Limo Foundation formed, some questioned the fragmentation of the industry.
In 2010, the CE Linux Forum merged with the Linux Foundation, to become a technical work group of Linux Foundation.
The group planned to support the Yocto Project to produce an embedded Linux distribution.
Activities
CELF initiatives included:
technical working groups, which produce specifications and implementations (usually patches against existing open source projects) to enhance Linux suitability for CE products
hosting of conferences dedicated to embedded Linux (see below)
providing hardware resources to open source developers
funding for direct feature development, via contracting with a few Linux developers
a test lab in San Jose, California was established in 2006
Members submitted technical output directly back to the relevant open source project (for example, by sending enhancements to the Linux kernel directly to the Linux kernel mailing list, or to an appropriate technology- or architecture-specific mailing list). Collected information and forum output were primarily located on a wiki for embedded developers.
The content of CELF's wiki was included on another site called eLinux.org, created by Tim Riker in 2006.
As of 2007, CELF had the following technical working groups:
Audio, Video and Graphics
Bootup Time
Digital Television Profile
Memory Management
Mobile Phone Profile
Power Management
Real Time
Security
System Size
The CE Linux Forum sponsored embedded projects. Among others, the LinuxTiny patches and the LogFS and SquashFS flash file systems were pushed into mainline Linux.
The forum sponsored the Embedded Linux Conference from 2005. Originally started as a conference in the US, it gained a yearly European edition, ELC Europe, in 2007.
In 2007 ELC Europe was hosted with the Real-time Linux Workshop in Linz, Austria; in 2008 with the NLUUG in Ede, Netherlands; and in 2009 with Embedded Systems Week in Grenoble.
CELF sponsored the Linux Symposium from 2004 to 2008, hosting sessions specific to embedded use of Linux and development of Linux capabilities for embedded use.
In Japan and Korea, CELF organized Technical Jamborees every two months. Jamborees were smaller, had a single track, and were held in the local language.
By 2009 CELF had about 30 members, consisting of consumer electronics manufacturers, semiconductor vendors, and Linux software companies:
ARM Ltd., AXE, Inc., Broadcom,
Canon Inc.,
ETRI,
Fujitsu Limited,
Fuji-Xerox,
Hewlett-Packard,
Hitachi, Ltd.,
IBM,
Intel Corporation,
JustSystems Corporation,
LG Electronics,
Lineo Solutions, Inc,
Panasonic Corporation,
MIPS Technologies,
NEC Corporation,
NXP Semiconductors,
Renesas,
Royal Philips Electronics,
Samsung Electronics,
Selenic Consulting,
Sharp Corporation,
SnapGear,
Sony Corporation,
Toshiba Corporation,
Yamaha Corporation
See also
Linux Foundation - parent organization
Digital Living Network Alliance, another group from 2004 to 2017
References
External links
CELF Website
Linux organizations
Embedded Linux
Free and open-source software organizations |
227989 | https://en.wikipedia.org/wiki/Inform | Inform | Inform is a programming language and design system for interactive fiction originally created in 1993 by Graham Nelson. Inform can generate programs designed for the Z-code or Glulx virtual machines. Versions 1 through 5 were released between 1993 and 1996. Around 1996, Nelson rewrote Inform from first principles to create version 6 (or Inform 6).
Over the following decade, version 6 became reasonably stable and a popular language for writing interactive fiction. In 2006, Nelson released Inform 7 (briefly known as Natural Inform), a completely new language based on principles of natural language and a new set of tools based around a book-publishing metaphor.
Z-Machine and Glulx
The Inform compilers translate Inform code to story files for Glulx or Z-code, two virtual machines designed specifically for interactive fiction. Glulx, which can support larger games, is the default.
The Z-machine was originally developed by Infocom in 1979 for their interactive fiction titles. Because at least one Z-machine interpreter exists for nearly every major and minor platform, the same Z-code file can be run on a multitude of platforms with no alterations. Originally Inform targeted the Z-machine only.
Andrew Plotkin created an unofficial version of Inform 6 that was also capable of generating files for Glulx, a virtual machine he had designed to overcome many of the limitations of the several-decades-old Z-machine. Starting with Inform 6.3, released February 29, 2004, Inform 6 has included official support for both virtual machines, based on Andrew Plotkin's work. Early releases of Inform 7 did not support Glulx, but Glulx support was added in August 2006.
Inform 6
Inform was originally created by Graham Nelson in 1993. In 1996 Nelson rewrote Inform from first principles to create version 6 (or Inform 6). Over the following decade, version 6 became reasonably stable and a popular language for writing interactive fiction.
The Inform 6 system consists of two major components: the Inform compiler, which generates story files from Inform source code, and the Inform library, a suite of software which handles most of the difficult work of parsing the player's text input and keeping track of the world model. The name Inform also refers to the Inform programming language that the compiler understands.
Although Inform 6 and the Z-Machine were originally designed with interactive fiction in mind, many other programs have been developed, including a BASIC interpreter, a LISP tutorial (complete with interpreter), a Tetris game, and a version of the game Snake.
The Inform 6 compiler
The Inform compiler generates files for the Z-machine or Glulx (also called story files) from Inform 6 source code.
The Inform 6 programming language
The Inform programming language is object-oriented and procedural. A key element of the language is objects. Objects are maintained in an object tree which lists the parent–child relationships between objects. Since the parent–child relationship is often used to represent location, an object which is the parent of another object is often said to "hold" it. Objects can be moved throughout the tree. Typically, top-level objects represent rooms and other locations within the game, which may hold objects representing the room's contents, be they physical items, non-player characters, the player's character, or background effects. All objects can hold other objects, so a livingroom object might hold an insurancesalesman object which is holding a briefcase object which contains the insurancepaperwork object.
In early versions of Inform, objects differed from the notion of objects in object-oriented programming, in that there was no such thing as a class. Later versions added support for class definitions and allowed objects to be members of classes. Objects and classes can inherit from multiple classes. Interactive fiction games typically contain many unique objects. Because of this, many objects in Inform do not inherit from any class, other than the "metaclass" Object. However, objects very frequently have attributes (boolean properties, such as scenery or edible) that are recognized by the Inform library. In other languages this would normally be implemented via inheritance.
Here is a simple example of Inform 6 source code.
[ Main;
print "Hello, World!^";
];
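The class and attribute mechanisms described above can be sketched as follows. This is a hedged, hypothetical fragment rather than official documentation: the object names are invented, and the edible attribute and the name and description properties are supplied by the Inform library used in the fuller example below, so the fragment is meant to be combined with a program that includes that library.

! Hypothetical fragment: a class whose members share a description and
! the library-defined 'edible' attribute, plus two member objects.
Class Fruit
 with description "A piece of ripe fruit.",
 has  edible;

Fruit apple "red apple"
 with name 'apple' 'red';

Fruit banana "banana"
 with name 'banana' 'yellow';

Each Fruit instance inherits the shared description and the edible attribute while supplying its own dictionary words, which is how shared behaviour is typically factored out in Inform 6.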
Inform 6 library
The Inform system also contains the Inform library, which automates nearly all the most difficult work involved in programming interactive fiction; specifically, it includes a text parser that makes sense of the player's input, and a world model that keeps track of such things as objects (and their properties), rooms, doors, the player's inventory, etc.
The Inform compiler does not require the use of the Inform library. There are several replacement libraries available, such as Platypus and InformATE, a library that codes Inform in Spanish.
Example game
Here is an example of Inform 6 source code that makes use of the Inform library. The Inform 6 code sample below is usable in Inform 7, but not without special demarcation indicating that it is embedded legacy code.
Constant Story "Hello Deductible";
Constant Headline "^An Interactive Example^";
Include "Parser";
Include "VerbLib";
[ Initialise;
location = Living_Room;
"Hello World";
];
Object Kitchen "Kitchen";
Object Front_Door "Front Door";
Object Living_Room "Living Room"
with
description "A comfortably furnished living room.",
n_to Kitchen,
s_to Front_Door,
has light;
Object -> Salesman "insurance salesman"
with
name 'insurance' 'salesman' 'man',
description "An insurance salesman in a tacky polyester
suit. He seems eager to speak to you.",
before [;
Listen:
move Insurance_Paperwork to player;
"The salesman bores you with a discussion
of life insurance policies. From his
briefcase he pulls some paperwork which he
hands to you.";
],
has animate;
Object -> -> Briefcase "briefcase"
with
name 'briefcase' 'case',
description "A slightly worn, black briefcase.",
has container;
Object -> -> -> Insurance_Paperwork "insurance paperwork"
with
name 'paperwork' 'papers' 'insurance' 'documents' 'forms',
description "Page after page of small legalese.";
Include "Grammar";
Notable games developed in Inform 6 or earlier versions
Curses, by Graham Nelson (1993), the first game ever written in the Inform programming language. Considered one of the first "modern" games to meet the high standards set by Infocom's best titles.
Zork: The Undiscovered Underground (1997), written by Marc Blank & Michael Berlyn, programmed by Gerry Kevin Wilson. Given away free by Activision to promote the release of Zork: Grand Inquisitor.
Anchorhead, by Michael S. Gentry (1998) is a highly rated horror story inspired by H. P. Lovecraft's Cthulhu Mythos.
Photopia, by Adam Cadre (1998), the first almost entirely puzzle-free game. Won the annual Interactive Fiction Competition in 1998.
Varicella by Adam Cadre (1999). It won four XYZZY Awards in 1999 including the XYZZY Award for Best Game, and had a scholarly essay written about it.
Galatea, by Emily Short (2000). Galatea is focused entirely on interaction with the animated statue of the same name. Galatea has one of the most complex interaction systems for a non-player character in an interactive fiction game. Adam Cadre called Galatea "the best NPC ever".
Slouching Towards Bedlam, by Star C. Foster and Daniel Ravipinto (2003). Set in a steampunk setting, the game integrates meta-game functionality (saving, restoring, restarting) into the game world itself. The game won two XYZZY Awards and received the highest average score of any game in the Interactive Fiction Competition as of 2006.
Inform 7
On April 30, 2006, Graham Nelson announced the beta release of Inform 7 to the rec.arts.int-fiction newsgroup.
Inform 7 consists of three primary parts: The Inform 7 IDE with development tools specialized for testing interactive fiction, the Inform 7 compiler for the new language, and "The Standard Rules" which form the core library for Inform 7. Inform 7 also relies on the Inform library and Inform compiler from Inform 6. The compiler compiles the Inform 7 source code into Inform 6 source code, which is then compiled separately by Inform 6 to generate a Glulx or Z-code story file. Inform 7 also defaults to writing Blorb files, archives which include the Z-code together with optional "cover art" and metadata intended for indexing purposes. The full set of Inform 7 tools are currently available for Mac OS X, Microsoft Windows and Linux. The March 25, 2007 release added command line support for Linux, and new releases now include an IDE using the GNOME desktop environment under the GNOME Inform 7 SourceForge project.
The language and tools remain under development; the March 25, 2007 release included a number of changes to the language. In 2019, Graham Nelson announced the eventual open sourcing of Inform 7.
Inform 7 was named Natural Inform for a brief period of time, but was later renamed Inform 7. This old name is why the Inform 7 compiler is named "NI."
Inform 7 IDE
Inform 7 comes with an integrated development environment (IDE) for Mac OS X, Microsoft Windows and Linux. The Mac OS X IDE was developed by Andrew Hunter. The Microsoft Windows IDE was developed by David Kinder. The Linux IDE (known as GNOME Inform) was developed by Philip Chimento.
The Inform 7 IDE includes a text editor for editing Inform 7 source code. Like many other programming editors it features syntax highlighting. It marks quoted strings in one color. Headings of organizational sections (Volumes, Books, Chapters, Parts, and Sections) are bolded and made larger. Comments are set in a different color and made slightly smaller.
The IDE includes a built-in Z-code interpreter. The Mac OS X IDE's interpreter is based on the Zoom interpreter by Andrew Hunter, with contributions from Jesse McGrew. The Microsoft Windows IDE's interpreter is based on WinFrotz.
As a developer tests the game in the built-in interpreter, progress is tracked in the "skein" and "transcript" views of the IDE. The skein tracks player commands as a tree of branching possibilities. Any branch of the tree can be quickly re-followed, making it possible to retry different paths in a game under development without replaying the same portions of the game. Paths can also be annotated with notes and marked as solutions, which can be exported as text walkthroughs. The transcript, on the other hand, tracks both player commands and the game's responses. Correct responses from the game can be marked as "blessed." On replaying a transcript or a branch of the skein, variations from the blessed version will be highlighted, which can help the developer find errors.
The IDE also provides various indices into the program under development. The code is shown as a class hierarchy, a traditional IF map, a book-like table of contents, and in other forms. Clicking items in the index jumps to the relevant source code.
The IDE presents two side-by-side panes for working in. Each pane can contain the source code being worked on, the current status of compilation, the skein, the transcript, the indices of the source code, a running version of the game, documentation for Inform 7 or any installed extensions to it, or settings. The concept is to imitate an author's manuscript book by presenting two "facing pages" instead of a multitude of separate windows.
Inform 7 programming language
Notable features include a strong bias towards a declarative, rule-based style of programming, and the ability to infer the types and properties of objects from the way they are used. For example, the statement "John wears a hat." creates a "person" called "John" (since only people are capable of wearing things), creates a "thing" with the "wearable" property (since only objects marked "wearable" are capable of being worn), and sets John as wearing the hat.
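As a minimal sketch of this inference (the story title and room name below are invented for illustration), the single assertion about John is enough to create both a person and a wearable thing, even though neither is declared explicitly:

"Inference Sketch" by "I.F. Author"

The Hat Shop is a room.

[From this one sentence the compiler infers that John is a person and that the hat is a wearable thing; both begin the game off-stage because no location is given for them.]
John wears a hat.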
Another notable aspect of the language is direct support for relations, which track associations between objects. This includes automatically provided relations, such as one object containing another or an object being worn, but the developer can also define new relations, for example to indicate love or hatred between characters, or to track which characters in a game have met each other.
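A hedged sketch of a custom relation follows; the relation, verb and characters are invented for illustration, and the verb definition uses the "means" phrasing accepted by recent builds of Inform 7 (older builds wrote "implies" instead):

"Relations Sketch" by "I.F. Author"

The Parlour is a room.

[A symmetric relation between people, with a verb so it can be asserted and tested in source text.]
Friendship relates people to each other.
The verb to be friends with means the friendship relation.

John is a man in the Parlour. Maria is a woman in the Parlour.
John is friends with Maria.

Every turn when John is friends with Maria: say "John and Maria chat amiably."

Because the verb is defined, the same phrasing works both as an initial assertion ("John is friends with Maria.") and as a condition tested while the game runs.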
Inform 7 is a highly domain-specific programming language, providing the writer/programmer with a much higher level of abstraction than Inform 6, and highly readable resulting source code.
Example game
Statements in Inform 7 take the form of complete sentences. Blank lines and indentation are in some places structurally significant. The basic form of an Inform 7 program is as follows:
"Hello, World!" by "I.F. Author"
The world is a room.
When play begins, say "Hello, World!"
The following is a reimplementation of the above "Hello Deductible" example written in Inform 7. It relies on the library known as "The Standard Rules" which are automatically included in all Inform 7 compilations.
"Hello Deductible" by "I.F. Author"
The story headline is "An Interactive Example".
The Living Room is a room. "A comfortably furnished living room."
The Kitchen is north of the Living Room.
The Front Door is south of the Living Room.
The Front Door is a door. The Front Door is closed and locked.
The insurance salesman is a man in the Living Room. The description is "An insurance salesman in a tacky polyester suit. He seems eager to speak to you." Understand "man" as the insurance salesman.
A briefcase is carried by the insurance salesman. The description is "A slightly worn, black briefcase." Understand "case" as the briefcase.
The insurance paperwork is in the briefcase. The description is "Page after page of small legalese." Understand "papers" or "documents" or "forms" as the paperwork.
Instead of listening to the insurance salesman:
say "The salesman bores you with a discussion of life insurance policies. From his briefcase he pulls some paperwork which he hands to you.";
move the insurance paperwork to the player.
Notable games written in Inform 7
Mystery House Possessed (2005), by Emily Short, was the first Inform 7 game released to the public. It was released as part of the "Mystery House Taken Over" project.
On March 1, 2006, Short announced the release of three further games:
Bronze (an example of a traditional puzzle-intensive game) and Damnatio Memoriae (a follow-up to her award-winning Inform 6 game Savoir-Faire) were joined by Graham Nelson's The Reliques of Tolti-Aph (2006). When the Inform 7 public beta was announced on April 30, 2006, six "worked examples" of medium to large scale works were made available along with their source code, including the three games previously released on March 1.
Emily Short's Floatpoint was the first Inform 7 game to take first place in the Interactive Fiction Competition.
It also won the 2006 XYZZY Awards for Best Setting and Best NPCs. Rendition, by nespresso (2007), is a political art experiment in the form of a text adventure game. Its approach to tragedy has been discussed academically by both the Association for Computing Machinery and Cambridge University.
See also
Inform version history
lists software similar to Inform
TADS, the Text Adventure Development System, another leading IF development system
Further reading
Inform 6
The official manual of Inform is Graham Nelson's Inform Designer's Manual: it is a tutorial, a manual, and a technical document rolled into one. It is available online for free at Inform's official website, and two printed editions are available: a softcover and a hardcover.
The Inform Beginner's Guide by Roger Firth and Sonja Kesserich attempts to provide a more gentle introduction to Inform. It is available for free at Inform's official website.
Inform 7
The SPAG Interview - An interview with designers Graham Nelson and Emily Short about the development of Inform 7. The interview was conducted shortly before Inform 7's release and published on the same day as the initial release.
"Natural Language, Semantic Analysis and Interactive Fiction" - A paper on the design of Inform 7 by designer Graham Nelson.
References
External links
Cloak of Darkness: Inform presents the same, short game implemented in both Inform 6 and Inform 7, as well as other languages for comparison.
Inform 6 - Official web site
Inform 6 FAQ at Roger Firth's IF Pages provides details on programming in Inform 6.
Inform 7 - Official web site.
The Interactive Fiction Archive provides many Inform tools, examples, and library files.
Playfic is a web-based interface for creating and sharing new games using Inform 7.
Guncho is a multiplayer interactive fiction system based on Inform 7 with a combination of MUD-like and web-based interfaces.
1993 software
Interactive fiction
Freeware
Domain-specific programming languages
History of computing in the United Kingdom
Video game development software
Text adventure game engines
Programming languages created in 1993 |
7156461 | https://en.wikipedia.org/wiki/Default%20password | Default password | Where a device needs a username and/or password to log in, a default password is usually provided that allows the device to be accessed during its initial setup, or after resetting to factory defaults.
Manufacturers of such equipment typically use a simple password, such as admin or password, on all equipment they ship, in the expectation that users will change the password during configuration. The default username and password are usually found in the instruction manual (common to all devices of that model) or on the device itself.
Default passwords are one of the major contributing factors to large-scale compromises of home routers. Leaving such a password on devices available to the public is a huge security risk.
Some devices (such as wireless routers) come with a unique default username and password printed on a sticker, which is a more secure option than a common default password. Some vendors, however, derive the password from the device's MAC address using a known algorithm, in which case the password can also be easily reproduced by attackers.
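To illustrate why MAC-derived defaults are weak, the following minimal Python sketch shows a hypothetical derivation of this kind; the hash function, truncation length and MAC value are assumptions chosen for illustration and do not correspond to any particular vendor's algorithm.

import hashlib

def default_password(mac_address: str) -> str:
    # Hypothetical scheme: hash the normalized MAC address and keep the first 8 hex digits.
    # An attacker who knows the algorithm and can observe the MAC (e.g. from wireless
    # beacon frames) can compute the same "unique" default password.
    digest = hashlib.md5(mac_address.lower().encode("ascii")).hexdigest()
    return digest[:8]

print(default_password("a4:2b:b0:12:34:56"))  # illustrative MAC address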
Default access
To access an internet-connected device on a network, a user must know its default IP address. Manufacturers typically use 192.168.1.1 or 10.0.0.1 as the router's default IP address, although some use variations on these. As with the login details, leaving this unchanged can lead to security issues.
See also
Backdoor (computing)
Internet of things
Cyber-security regulation
References
Password authentication
Computer security exploits |
801662 | https://en.wikipedia.org/wiki/Process%20control%20block | Process control block | A process control block (PCB) is a data structure used by computer operating systems to store all the information about a process. It is also known as a process descriptor. When a process is created (initialized or installed), the operating system creates a corresponding process control block.
Among other information, the PCB specifies the process state, i.e. new, ready, running, waiting or terminated.
Role
The role of the PCB is central to process management: PCBs are accessed and/or modified by most operating system utilities, particularly those involved with scheduling and resource management.
Structure
In multitasking operating systems, the PCB stores data needed for correct and efficient process management. Though the details of these structures are system-dependent, common elements fall in three main categories:
Process identification
Process state
Process control
Status tables exist for each relevant class of entity, such as memory, I/O devices, files and processes.
Memory tables, for example, contain information about the allocation of main and secondary (virtual) memory for each process, authorization attributes for accessing memory areas shared among different processes, etc. I/O tables may have entries stating the availability of a device or its assignment to a process, the status of I/O operations, the location of memory buffers used for them, etc.
Process identification data include a unique identifier for the process (almost invariably an integer) and, in a multiuser-multitasking system, data such as the identifier of the parent process, user identifier, user group identifier, etc. The process id is particularly relevant since it is often used to cross-reference the tables defined above, e.g. showing which process is using which I/O devices, or memory areas.
Process state data define the status of a process when it is suspended, allowing the OS to restart it later. This always includes the content of general-purpose CPU registers, the CPU process status word, stack and frame pointers, etc. During context switch, the running process is stopped and another process runs. The kernel must stop the execution of the running process, copy out the values in hardware registers to its PCB, and update the hardware registers with the values from the PCB of the new process.
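A minimal Python sketch of the context-switch bookkeeping described above, assuming a toy kernel whose hardware state is reduced to a register dictionary and a program counter; the field names are illustrative and do not reflect the PCB layout of any real operating system.

from dataclasses import dataclass, field

@dataclass
class CPU:
    registers: dict = field(default_factory=dict)
    program_counter: int = 0

@dataclass
class PCB:
    pid: int                                        # process identification
    state: str = "ready"                            # new, ready, running, waiting, terminated
    program_counter: int = 0                        # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved general-purpose registers

def context_switch(cpu: CPU, current: PCB, next_proc: PCB) -> None:
    # Save the hardware state of the running process into its PCB ...
    current.registers = dict(cpu.registers)
    current.program_counter = cpu.program_counter
    current.state = "ready"
    # ... then restore the hardware state of the next process from its PCB.
    cpu.registers = dict(next_proc.registers)
    cpu.program_counter = next_proc.program_counter
    next_proc.state = "running"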
Process control information is used by the OS to manage the process itself. This includes:
Process scheduling state–The state of the process in terms of "ready", "suspended", etc., and other scheduling information as well, such as priority value, the amount of time elapsed since the process gained control of the CPU or since it was suspended. Also, in case of a suspended process, event identification data must be recorded for the event the process is waiting for.
Process structuring information–the process's children id's, or the id's of other processes related to the current one in some functional way, which may be represented as a queue, a ring or other data structures
Interprocess communication information–flags, signals and messages associated with the communication among independent processes
Process Privileges–allowed/disallowed access to system resources
Process State–new, ready, running, waiting, dead
Process Number (PID)–unique identification number for each process (also known as Process ID)
Program Counter (PC)–A pointer to the address of the next instruction to be executed for this process
CPU Registers–the contents of the process's registers, which must be saved when the process leaves the running state and restored when it resumes execution
CPU Scheduling Information–information used to schedule CPU time
Memory Management Information–page table, memory limits, segment table
Accounting Information–amount of CPU used for process execution, time limits, execution ID etc.
I/O Status Information–list of I/O devices allocated to the process.
Location
The PCB must be kept in an area of memory protected from normal process access. In some operating systems, the PCB is placed at the beginning of the kernel stack of the process.
See also
Thread control block (TCB)
Program segment prefix (PSP)
Data segment
Notes
Process (computing) |
40140232 | https://en.wikipedia.org/wiki/Theodore%20J.%20Williams | Theodore J. Williams | Theodore Joseph Williams (1923 – April 27, 2013) was an American engineer and Professor of Engineering at Purdue University, known for the development of the Purdue Enterprise Reference Architecture.
Biography
Williams received his B.S., M.S., and Ph.D. degrees in chemical engineering from the Pennsylvania State University, and another M.S. degree in electrical engineering from Ohio State University.
In World War II, Williams served in the US Air Force as a navigator in a B-24, was awarded the Air Medal with two oak leaf clusters, and retired with the rank of captain in 1956.
He was Professor of Engineering and Director of the Purdue Laboratory for Applied Industrial Control at Purdue University, West Lafayette, Indiana from 1965 to 1994.
Williams was president of the American Automatic Control Council (AACC) from 1965 to 1967; of the Instrument Society of America (ISA) in 1969; of the American Federation for Information Processing Societies (AFIPS) from 1976 to 1978; and first chairman of the IFAC/IFIP Task Force on Architectures for Integrating Manufacturing Activities and Enterprises from 1990 to 1996. In 1976 Williams was awarded the Sir Harold Hartley Silver Medal by the Institute of Measurement and Control in London, England, and the A. F. Sperry Founder Award Gold Medal by the Instrument Society of America in 1990.
For more information, see the Biographical Sketch on the PERA.NET website.
Publications
Williams wrote and edited about 50 books and over 400 articles and papers in fields ranging from chemical engineering (process dynamics) to computer science (industrial computer control, computer-integrated manufacturing) and the emerging field of enterprise architecture. Books:
1950s
1950. Studies in Distillation Calculations. Pennsylvania State College.
1956. Automatic Control in Continuous Distillation. Ohio State University
1959. The Theory and Design of the Triggered Spark Gap. U.S. Atomic Energy Commission, Division of Technical Information.
1960s
1961. Automatic control of chemical and petroleum processes. By Theodore J. Williams and Verlin A. Lauher.
1961. Process dynamics and control. Edited by Theodore J. Williams.
1961. Systems engineering for the process industries.
1965. Process control and applied mathematics. Edited with Lester H. Krone.
1966. A Manual for Digital Computer Applications to Process Control. Purdue University.
1969. Interface Requirements, Transducers, and Computers for On-line Systems: Survey on Digital Computers in Control. Naczelna Organizacja Techniczna w Polsce.
1969. Progress in direct digital control; a compilation of articles, technical papers, and ISA documents reflecting the development and application of DDC concepts in the process industries. Edited with F. M. Ryan.
1970s
1970 Coastal Navigation. Thomas Reed
1971. Interfaces with the process control computer; the operator, engineer, and management; proceedings of the symposium held August 3–6, 1971, Lafayette, Indiana. Edited by T. J. Williams.
1974. Computer applications in the automation of shipyard operation and ship design : proceedings of the IFIP/IFAC/JSNA joint conference, Tokyo, Japan, August 28–30, 1973. Edited with Yuzuru Fujita and Kjell Lind.
1974. Modelling and Control of Multiple Effect Black Liquor Evaporator Systems. With Laxmi K. Rastogi. Purdue Laboratory for Applied Industrial Control.
1974. Ship operation automation : proceedings of the IFAC/IFIP symposium, Oslo, Norway, July 2–5, 1973. Edited by Yuzuru Fujita, Kjell Lind and Theodore J. Williams.
1975. Modeling and control of kraft production systems for pulp production, chemical recovery, and energy conservation : proceedings of a symposium held by the Pulp and Paper Division of the Instrument Society of America at the ISA/75 Industry Oriented Conference.
1976. Computer applications in the automation of shipyard operation and ship design, II : proceedings of the IFIP/IFAC/SSI/City of Gothenburg Scandinavian joint conference, Gothenburg, Sweden, June 8–11, 1976. Edited with Åke Jacobsson, Folke Borgström.
1976. Digital Computer Applications to Process Control. Purdue University. Division of Conference and Continuation Services, Purdue Laboratory for Applied Industrial Control
1976. IFAC/IFIP Symposium on Automation in Offshore Oil Field Operation, Bergen, Norway, 1976. Automation in offshore oil field operation : proceedings of the IFAC/IFIP Symposium, Bergen, Norway, June 14–17, 1976. Edited with Frode L. Galtung and Kåre Røsandhaug.
1976. Ship operation automation, II : proceedings of the 2nd IFAC/IFIP symposium, Washington D.C., USA, August 30-September 2, 1976. Edited with Marvin Pitkin and John J. Roche.
1978. Control systems readiness for munitions plants : a first pass : proceedings of the Workshop on Control Systems Readiness for Munitions Plants, held at Purdue University, West Lafayette, Indiana, September 19–20, 1977. Industrial Control, Schools of Engineering, Purdue University.
1979. A Mathematical Model of the Babcock and Wilcox Black Liquor Recovery Furnace. With Maurice G. Kamienny and Paavo Uronen. Purdue Laboratory for Applied Industrial Control, Schools of Engineering, Purdue University,
1979. Computer applications in the automation of shipyard operation and ship design III : IFIP/IFAC third international conference, University of Strathclyde, Glasgow, Scotland, June 18–21, 1979. Edited with C. Kuo and K. J. MacCallum.
1980s
1980. Advanced Control Conference, (6th : 1980 : West Lafayette, Ind.) Man-machine interfaces for industrial control : proceedings of the sixth annual Advanced Control Conference, W. Lafayette, Indiana, April 28–30, 1980. Edited with E. J. Kompass.
1980. Hierarchy Computer Systems in Nipon Steel Corporation: A Report on Their Benefits, Particularly in Productivity and Labor Savings, Their Costs and Implementation Efforts. Purdue Laboratory for Applied Industrial Control.
1982. A Mathematical Model of the Kraft Pulping Process: Narrative. With Tor Christensen and Lyle Frederick Albright
1983. Development of Improved Operating Conditions for Kamyr Digesters. With Yousry L. Sidrak, and Lyle Frederick Albright.
1983. Learning Systems and Pattern Recognition in Industrial Control: Applying Artificial Intelligence to Industrial Control ; Proceedings of the Ninth Annual Advanced Control Conference, West Lafayette, Indiana, September 19–21, 1983. With E. J. Kompass
1983. Modelling, estimation, and control of the soaking pit : an example of the development and application of some modern control techniques to industrial processes. With Yong-Zai Lu.
1984. Use of digital computers in process control
1985. Analysis and design of hierarchical control systems : with special reference to steel plant operations. Edited by T. J. Williams.
1985. Glossary of standard computer control system terminology.
1986. Advanced control techniques move from theory to practice : techniques that have made it : proceedings of the twelfth annual Advanced Control Conference, West Lafayette, Indiana, September 15–17, 1986
1987. Advanced Control in Computer Integrated Manufacturing. With Henry M. Morris and E. J. Kompass.
1988. Standards in information technology and industrial control : contributions from IFIP Working Group 5.4. Edited with Nicolas M. and E. Malagardis.
1989. Reference model for computer integrated manufacturing (CIM) : a description from the viewpoint of industrial automation. Edited by Theodore J. Williams.
1989. Total control systems availability: its achievement through robustness, fault tolerance, fault analysis, maintainability, and other techniques : proceedings of the fifteenth annual Advanced Control Conference, West Lafayette, Indiana, September 11–13, 1989
1990s
1990. Generic Control for Batch Manufacturing: Advanced Control Techniques in Integrated Batch Control : Proceedings of the Sixteenth Annual Advanced Control Conference, West Lafayette, Indiana, September 24–26, 1990. With E. J. Kompass and Sharon K. Whitlock.
1991. Expert Systems Applications in Advanced Control: Successes, Techniques, Requirements and Limitations : Proceedings of the Seventeenth Annual Advanced Control Conference, West Lafayette, Indiana, September 30 - October 2, 1991.
1992. Evaluation of Underhand Backfill Practice for Rock Burst Control. With Jeff K. Whyatt and M. P. Board.
1992. Instrumentation of an Experimental Underhand Longwall Stope. With M. E. Poad, Jeff K. Whyatt
1992. Rock mechanics investigations at the Lucky Friday mine.
1992. Purdue enterprise reference architecture : a technical guide for CIM planning and implementation.
1996. Architectures for Enterprise Integration. With Peter Bernus and Laszlo Nemes. Springer, 31 March 1996.
1999. We went to war. With Barbara J. Gotham.
Articles, a selection
1960. "A generalized chemical processing model for the investigation of computer control." With Robert E. Otto in: American Institute of Electrical Engineers, Part I: Communication and Electronics, Transactions of the 79.5. p. 458-473
1992. The Purdue enterprise reference architecture: a technical guide for CIM planning and implementation. Research Triangle Park, NC: Instrument Society of America.
1993. "The Purdue enterprise reference architecture." Proceedings of the JSPE/IFIP TC5/WG5. 3 Workshop on the Design of Information Infrastructure Systems for Manufacturing. North-Holland Publishing Co.
1994. "The Purdue enterprise reference architecture." Computers in industry'' Vol 24 (2). p. 141-158
References
External links
Theodore Williams Obituary
PERA.NET website
1923 births
2013 deaths
American engineers
Enterprise modelling experts
Systems engineers
Ohio State University College of Engineering alumni
Penn State College of Engineering alumni
Purdue University faculty
United States Army Air Forces personnel of World War II |
42345640 | https://en.wikipedia.org/wiki/Susan%20B.%20Horwitz | Susan B. Horwitz | Susan Beth Horwitz (January 6, 1955 – June 11, 2014) was an American computer scientist noted for her research on
programming languages and software engineering, and in particular on program slicing and
dataflow analysis. She received several best paper awards and an impact paper award, mentioned below under Awards.
She was an award-winning teacher at her institution and was the founder of Peer Led Team Learning for Computer Science (PLTLCS), creating the Wisconsin Emerging Scholars-Computer Science (WES-CS) program. She took the lead for an NSF ITWF Grant 0420343 that was a collaboration between eight schools doing PLTLCS, including the University of Wisconsin–Madison with Horwitz, Duke University, Georgia Tech, Rutgers University, University of Wisconsin at Milwaukee, Purdue University, Beloit College, and Loyola College. They published a paper in 2009 that showed that active recruiting combined with peer-led team learning is an effective approach to attracting and retaining under-represented students in an introductory Computer Science class. She was also noted for her leadership in computing in high schools. She was a member of the Educational Testing Services Advanced Placement Computer Science Test Development Committee for ten years from 1987 to 1997, including chairing the committee for five years from 1992 to 1997 at a time when the programming language for the exam changed from Pascal to C++.
Biography
Horwitz received an A.B. magna cum laude in Ethnomusicology from Wesleyan University in 1977,
an M.S. in Computer Science from Cornell University in 1982 and a Ph.D. in Computer Science from Cornell University in 1985. She joined the Department of Computer Science at the University of Wisconsin in Madison as an assistant professor in 1985. She was promoted to associate professor in 1991, and to professor in 1996. She was associate chair from 2004 to 2007. She became an emeritus professor in 2014.
Death
Horwitz died on June 11, 2014, aged 59, from stomach cancer.
Awards
Horwitz received several best paper awards:
Her 1988 paper "Interprocedural slicing using dependence graphs" (with T. Reps and D. Binkley) was selected as one of the 50 best papers to appear at the ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI) during the period 1979–99.
In 2011, she received an ACM SIGSOFT Retrospective Impact Paper Award (with T. Reps, M. Sagiv, and G. Rosay) for their paper "Speeding up slicing", which appeared at the SIGSOFT Symposium on Foundations of Software Engineering (FSE) in 1994.
Her paper "Reducing the Overhead of Dynamic Analysis" (with S. Yong) in 2002 at the Second Workshop on Runtime Verification was selected as one of the best papers at the workshop and invited for submission to a special issue of the journal Formal Methods in System Design.
Her paper "Demand interprocedural dataflow analysis" (with Thomas Reps and Mooly Sagiv) in SIGSOFT '95 was selected as one of the best papers at the conference invited for submission to ACM Transactions on Software Engineering and Methodology.
Her paper "Precise interprocedural dataflow analysis with applications to constant propagation" (with M. Saviv and T. Reps) in TAPSOFT '95 was selected as one of the best papers in the conference and invited for submission to Theoretical Computer Science.
Horwitz received several awards at the University of Wisconsin:
University of Wisconsin College of Letters and Science Distinguished Honors Faculty Award, 2011
University of Wisconsin Computer Sciences Department Carolyn Rosner Excellent Educator Award, 1997
University of Wisconsin William H. Kiekhofer Excellence in Teaching Award, 1993
University of Wisconsin College of Letters and Sciences Teaching Excellence Award, 1992
References
1955 births
2014 deaths
Deaths from cancer in Wisconsin
Deaths from stomach cancer
American women computer scientists
American computer scientists
Cornell University alumni
Wesleyan University alumni
University of Wisconsin–Madison faculty
20th-century American women scientists
American women academics
21st-century American women |
2204566 | https://en.wikipedia.org/wiki/Preemption%20%28computing%29 | Preemption (computing) | In computing, preemption is the act of temporarily interrupting an executing task, with the intention of resuming it at a later time. This interrupt is done by an external scheduler with no assistance or cooperation from the task. This preemptive scheduler usually runs in the most privileged protection ring, meaning that interruption and resuming are considered highly secure actions. Such a change in the currently executing task of a processor is known as context switching.
User mode and kernel mode
In any given system design, some operations performed by the system may not be preemptable. This usually applies to kernel functions and service interrupts which, if not permitted to run to completion, would tend to produce race conditions resulting in deadlock. Barring the scheduler from preempting tasks while they are processing kernel functions simplifies the kernel design at the expense of system responsiveness. The distinction between user mode and kernel mode, which determines privilege level within the system, may also be used to distinguish whether a task is currently preemptable.
Most modern operating systems have preemptive kernels, which are designed to permit tasks to be preempted even when in kernel mode. Examples of such operating systems are Solaris 2.0/SunOS 5.0, Windows NT, Linux kernel (2.5.4 and newer), AIX and some BSD systems (NetBSD, since version 5).
Preemptive multitasking
The term preemptive multitasking is used to distinguish a multitasking operating system, which permits preemption of tasks, from a cooperative multitasking system wherein processes or tasks must be explicitly programmed to yield when they do not need system resources.
In simple terms: preemptive multitasking involves the use of an interrupt mechanism which suspends the currently executing process and invokes a scheduler to determine which process should execute next. As a result, all processes get some amount of CPU time over any given period of time.
In preemptive multitasking, the operating system kernel can also initiate a context switch to satisfy the scheduling policy's priority constraint, thus preempting the active task. In general, preemption means "prior seizure of". When the high-priority task at that instance seizes the currently running task, it is known as preemptive scheduling.
The term "preemptive multitasking" is sometimes mistakenly used when the intended meaning is more specific, referring instead to the class of scheduling policies known as time-shared scheduling, or time-sharing.
Preemptive multitasking allows the computer system to more reliably guarantee each process a regular "slice" of operating time. It also allows the system to rapidly deal with important external events like incoming data, which might require the immediate attention of one or another process.
At any specific time, processes can be grouped into two categories: those that are waiting for input or output (called "I/O bound"), and those that are fully utilizing the CPU ("CPU bound"). In early systems, processes would often "poll" or "busy-wait" while waiting for requested input (such as disk, keyboard or network input). During this time, the process was not performing useful work, but still maintained complete control of the CPU. With the advent of interrupts and preemptive multitasking, these I/O bound processes could be "blocked", or put on hold, pending the arrival of the necessary data, allowing other processes to utilize the CPU. As the arrival of the requested data would generate an interrupt, blocked processes could be guaranteed a timely return to execution.
Although multitasking techniques were originally developed to allow multiple users to share a single machine, it soon became apparent that multitasking was useful regardless of the number of users. Many operating systems, from mainframes down to single-user personal computers and no-user control systems (like those in robotic spacecraft), have recognized the usefulness of multitasking support for a variety of reasons. Multitasking makes it possible for a single user to run multiple applications at the same time, or to run "background" processes while retaining control of the computer.
Time slice
The period of time for which a process is allowed to run in a preemptive multitasking system is generally called the time slice or quantum. The scheduler is run once every time slice to choose the next process to run. The length of each time slice can be critical to balancing system performance vs process responsiveness - if the time slice is too short then the scheduler will consume too much processing time, but if the time slice is too long, processes will take longer to respond to input.
An interrupt is scheduled to allow the operating system kernel to switch between processes when their time slices expire, effectively allowing the processor's time to be shared between a number of tasks, giving the illusion that it is dealing with these tasks in parallel (simultaneously). The operating system which controls such a design is called a multi-tasking system.
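The following minimal Python sketch illustrates how a fixed time slice drives a round-robin preemptive scheduler; the simulated "interrupt" is simply the expiry of a counter, and each process is reduced to an amount of remaining work, both simplifications made purely for illustration.

from collections import deque

TIME_SLICE = 3  # quantum, in arbitrary time units

def round_robin(work_remaining):
    """work_remaining: dict mapping process name -> units of work left to do."""
    ready = deque(work_remaining)
    while ready:
        name = ready.popleft()                   # scheduler picks the next ready process
        ran = min(TIME_SLICE, work_remaining[name])
        work_remaining[name] -= ran              # the process runs until its slice expires
        print(f"{name} ran for {ran} unit(s)")
        if work_remaining[name] > 0:             # quantum expired: preempt and requeue
            ready.append(name)

round_robin({"A": 7, "B": 2, "C": 5})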
System support
Today, nearly all operating systems support preemptive multitasking, including the current versions of Windows, macOS, Linux (including Android) and iOS.
Some of the earliest operating systems available to home users featuring preemptive multitasking were Sinclair QDOS (1984) and Amiga OS (1985). These both ran on Motorola 68000-family microprocessors without memory management. Amiga OS used dynamic loading of relocatable code blocks ("hunks" in Amiga jargon) to multitask preemptively all processes in the same flat address space.
Early PC operating systems such as MS-DOS and PC DOS did not support multitasking at all; however, alternative operating systems such as MP/M-86 (1981) and Concurrent CP/M-86 did support preemptive multitasking. Other Unix-like systems, including MINIX and Coherent, provided preemptive multitasking on 1980s-era personal computers.
Later DOS versions natively supporting preemptive multitasking/multithreading include Concurrent DOS, Multiuser DOS, Novell DOS (later called Caldera OpenDOS and DR-DOS 7.02 and higher). Since Concurrent DOS 386, they could also run multiple DOS programs concurrently in virtual DOS machines.
The earliest version of Windows to support a limited form of preemptive multitasking was Windows/386 2.0, which used the Intel 80386's Virtual 8086 mode to run DOS applications in virtual 8086 machines, commonly known as "DOS boxes", which could be preempted. In Windows 95, 98 and Me, 32-bit applications were made preemptive by running each one in a separate address space, but 16-bit applications remained cooperative for backward compatibility. In Windows 3.1x (protected mode), the kernel and virtual device drivers ran preemptively, but all 16-bit applications were non-preemptive and shared the same address space.
Preemptive multitasking has always been supported by Windows NT (all versions), OS/2 (native applications), Unix and Unix-like systems (such as Linux, BSD and macOS), VMS, OS/360, and many other operating systems designed for use in the academic and medium-to-large business markets.
Although there were plans to upgrade the cooperative multitasking found in the classic Mac OS to a preemptive model (and a preemptive API did exist in Mac OS 9, although in a limited sense), these were abandoned in favor of Mac OS X (now called macOS) that, as a hybrid of the old Mac System style and NeXTSTEP, is an operating system based on the Mach kernel and derived in part from BSD, which had always provided Unix-like preemptive multitasking.
See also
Computer multitasking
Cooperative multitasking
References
Operating system technology
Concurrent computing
787850 | https://en.wikipedia.org/wiki/Two-phase%20commit%20protocol | Two-phase commit protocol | In transaction processing, databases, and computer networking, the two-phase commit protocol (2PC) is a type of atomic commitment protocol (ACP). It is a distributed algorithm that coordinates all the processes that participate in a distributed atomic transaction on whether to commit or abort (roll back) the transaction. This protocol (a specialized type of consensus protocol) achieves its goal even in many cases of temporary system failure (involving either process, network node, communication, etc. failures), and is thus widely used.
However, it is not resilient to all possible failure configurations, and in rare cases, manual intervention is needed to remedy an outcome. To accommodate recovery from failure (automatic in most cases) the protocol's participants use logging of the protocol's states. Log records, which are typically slow to generate but survive failures, are used by the protocol's recovery procedures. Many protocol variants exist that primarily differ in logging strategies and recovery mechanisms. Though usually intended to be used infrequently, recovery procedures compose a substantial portion of the protocol, due to many possible failure scenarios to be considered and supported by the protocol.
In a "normal execution" of any single distributed transaction (i.e., when no failure occurs, which is typically the most frequent situation), the protocol consists of two phases:
The commit-request phase (or voting phase), in which a coordinator process attempts to prepare all the transaction's participating processes (named participants, cohorts, or workers) to take the necessary steps for either committing or aborting the transaction and to vote, either "Yes": commit (if the transaction participant's local portion execution has ended properly), or "No": abort (if a problem has been detected with the local portion), and
The commit phase, in which, based on voting of the participants, the coordinator decides whether to commit (only if all have voted "Yes") or abort the transaction (otherwise), and notifies the result to all the participants. The participants then follow with the needed actions (commit or abort) with their local transactional resources (also called recoverable resources; e.g., database data) and their respective portions in the transaction's other output (if applicable).
The two-phase commit (2PC) protocol should not be confused with the two-phase locking (2PL) protocol, a concurrency control protocol.
Assumptions
The protocol works in the following manner: one node is a designated coordinator, which is the master site, and the rest of the nodes in the network are designated the participants. The protocol assumes that there is stable storage at each node with a write-ahead log, that no node crashes forever, that the data in the write-ahead log is never lost or corrupted in a crash, and that any two nodes can communicate with each other. The last assumption is not too restrictive, as network communication can typically be rerouted. The first two assumptions are much stronger; if a node is totally destroyed then data can be lost.
The protocol is initiated by the coordinator after the last step of the transaction has been reached. The participants then respond with an agreement message or an abort message depending on whether the transaction has been processed successfully at the participant.
Basic algorithm
Commit request (or voting) phase
The coordinator sends a query to commit message to all participants and waits until it has received a reply from all participants.
The participants execute the transaction up to the point where they will be asked to commit. They each write an entry to their undo log and an entry to their redo log.
Each participant replies with an agreement message (participant votes Yes to commit), if the participant's actions succeeded, or an abort message (participant votes No, not to commit), if the participant experiences a failure that will make it impossible to commit.
Commit (or completion) phase
Success
If the coordinator received an agreement message from all participants during the commit-request phase:
The coordinator sends a commit message to all the participants.
Each participant completes the operation, and releases all the locks and resources held during the transaction.
Each participant sends an acknowledgement to the coordinator.
The coordinator completes the transaction when all acknowledgments have been received.
Failure
If any participant votes No during the commit-request phase (or the coordinator's timeout expires):
The coordinator sends a rollback message to all the participants.
Each participant undoes the transaction using the undo log, and releases the resources and locks held during the transaction.
Each participant sends an acknowledgement to the coordinator.
The coordinator undoes the transaction when all acknowledgements have been received.
Message flow
Coordinator Participant
QUERY TO COMMIT
-------------------------------->
VOTE YES/NO prepare*/abort*
<-------------------------------
commit*/abort* COMMIT/ROLLBACK
-------------------------------->
ACKNOWLEDGMENT commit*/abort*
<--------------------------------
end
An * next to the record type means that the record is forced to stable storage.
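A minimal Python sketch of the coordinator's decision rule described above, assuming participants are objects exposing hypothetical prepare(), commit() and rollback() methods; a real implementation would additionally force each state change to a write-ahead log and handle timeouts, acknowledgements and recovery.

def two_phase_commit(coordinator_log, participants):
    # Phase 1 (voting): every participant must vote Yes for the transaction to commit.
    votes = [p.prepare() for p in participants]   # each participant writes undo/redo logs, then votes
    decision = "commit" if all(votes) else "abort"
    coordinator_log.append(decision)              # record the decision before announcing it

    # Phase 2 (completion): propagate the decision to all participants.
    for p in participants:
        if decision == "commit":
            p.commit()                            # participant releases locks after committing
        else:
            p.rollback()                          # participant undoes work using its undo log
    return decision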
Disadvantages
The greatest disadvantage of the two-phase commit protocol is that it is a blocking protocol. If the coordinator fails permanently, some participants will never resolve their transactions: After a participant has sent an agreement message to the coordinator, it will block until a commit or rollback is received.
Implementing the two-phase commit protocol
Common architecture
In many cases the 2PC protocol is distributed in a computer network. It is easily distributed by implementing multiple dedicated 2PC components similar to each other, typically named transaction managers (TMs; also referred to as 2PC agents or Transaction Processing Monitors), that carry out the protocol's execution for each transaction (e.g., The Open Group's X/Open XA). The databases involved in a distributed transaction, both the coordinator and the participants, register with nearby TMs (typically residing on the same network nodes as the participants) in order to terminate that transaction using 2PC. Each distributed transaction has an ad hoc set of TMs, the TMs to which the transaction participants register. A leader, the coordinator TM, exists for each transaction to coordinate 2PC for it, typically the TM of the coordinator database. However, the coordinator role can be transferred to another TM for performance or reliability reasons. Rather than exchanging 2PC messages among themselves, the participants exchange the messages with their respective TMs. The relevant TMs communicate among themselves to execute the 2PC protocol schema above, "representing" the respective participants, for terminating that transaction. With this architecture the protocol is fully distributed (it needs no central processing component or data structure), and scales up effectively with the number of network nodes (network size).
This common architecture is also effective for the distribution of other atomic commitment protocols besides 2PC, since all such protocols use the same voting mechanism and outcome propagation to protocol participants.
Protocol optimizations
Database research has been done on ways to get most of the benefits of the two-phase commit protocol while reducing costs through protocol optimizations and savings in protocol operations, under certain assumptions about the system's behavior.
Presumed abort and presumed commit
Presumed abort or Presumed commit are common such optimizations. An assumption about the outcome of transactions, either commit, or abort, can save both messages and logging operations by the participants during the 2PC protocol's execution. For example, under presumed abort, if during system recovery from failure no logged evidence that some transaction committed is found by the recovery procedure, then it assumes that the transaction has been aborted, and acts accordingly. This means that it does not matter if aborts are logged at all, and such logging can be saved under this assumption. Typically a penalty of additional operations is paid during recovery from failure, depending on optimization type. Thus the best variant of optimization, if any, is chosen according to failure and transaction outcome statistics.
Tree two-phase commit protocol
The Tree 2PC protocol (also called Nested 2PC, or Recursive 2PC) is a common variant of 2PC in a computer network, which better utilizes the underlying communication infrastructure. The participants in a distributed transaction are typically invoked in an order which defines a tree structure, the invocation tree, where the participants are the nodes and the edges are the invocations (communication links). The same tree is commonly utilized to complete the transaction by a 2PC protocol, but also another communication tree can be utilized for this, in principle. In a tree 2PC the coordinator is considered the root ("top") of a communication tree (inverted tree), while the participants are the other nodes. The coordinator can be the node that originated the transaction (invoked recursively (transitively) the other participants), but also another node in the same tree can take the coordinator role instead. 2PC messages from the coordinator are propagated "down" the tree, while messages to the coordinator are "collected" by a participant from all the participants below it, before it sends the appropriate message "up" the tree (except an abort message, which is propagated "up" immediately upon receiving it or if the current participant initiates the abort).
The Dynamic two-phase commit (Dynamic two-phase commitment, D2PC) protocol is a variant of Tree 2PC with no predetermined coordinator. It subsumes several optimizations that have been proposed earlier. Agreement messages (Yes votes) start to propagate from all the leaves, each leaf doing so when it completes its tasks on behalf of the transaction (i.e., becomes ready). An intermediate (non-leaf) node sends, when ready, an agreement message to the last (single) neighboring node from which an agreement message has not yet been received. The coordinator is determined dynamically by racing agreement messages over the transaction tree, at the place where they collide. They collide either at a transaction tree node, to be the coordinator, or on a tree edge. In the latter case one of the two edge's nodes is elected as a coordinator (any node). D2PC is time optimal (among all the instances of a specific transaction tree, and any specific Tree 2PC protocol implementation; all instances have the same tree; each instance has a different node as coordinator): By choosing an optimal coordinator D2PC commits both the coordinator and each participant in minimum possible time, allowing the earliest possible release of locked resources in each transaction participant (tree node).
See also
Three-phase commit protocol
Paxos algorithm
Raft algorithm
Two Generals' Problem
References
Data management
Transaction processing |
68766921 | https://en.wikipedia.org/wiki/Conductor%20%28software%29 | Conductor (software) | Conductor is a free and open-source microservice orchestration software platform originally developed by Netflix.
Conductor was developed by Netflix to solve the problems of orchestrating microservices and business processes at scale in a cloud native environment. It was released under the Apache License 2.0 and has been adopted by companies looking to orchestrate their processes at scale in a cloud native environment.
Conductor belongs to a set of software products that allow developers to build resilient, high-scale, cloud-native stateful applications using stateless primitives.
Architecture
The Conductor server is written in Java, with APIs exposed over HTTP and gRPC interfaces, making language-agnostic development possible. A set of client libraries is made available by Netflix and the community in Java, Python and Go.
Conductor uses a lightweight JSON based schema with rich programming language constructs such as fork/join, switch case, loops and exception handling to define the flows.
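As a rough illustration of such a schema, the sketch below uses a Python dictionary to mirror the general shape of a JSON workflow definition; the field names and task types are approximations based on the description in this article rather than an authoritative rendering of Conductor's schema.

workflow_definition = {
    "name": "order_fulfillment",               # illustrative workflow name
    "version": 1,
    "tasks": [
        {
            "name": "check_inventory",          # a worker task implementing business logic
            "taskReferenceName": "check_inventory_ref",
            "type": "SIMPLE",
        },
        {
            "name": "ship_or_refund",           # a switch-style decision between branches
            "taskReferenceName": "ship_or_refund_ref",
            "type": "SWITCH",
        },
    ],
}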
At the heart of Conductor is a queuing system that is used to schedule tasks and manage the process flows. Conductor leverages a pluggable model allowing different implementations of the queuing system. The open-source version uses Dyno-Queues, developed at Netflix, as the default queuing implementation.
The workflows are defined as the orchestration among the tasks which can be a system level construct such as fork, join, switch, loop, an external HTTP endpoint implementing business logic or a task worker running outside of Conductor servers and listening for work to be scheduled by the server. The workers communicate with the server using pre-defined APIs over HTTP or gRPC. Conductor provides lightweight libraries to manage worker states in Java, Python and Go and additional languages can be used to implement logic using provided APIs.
Conductor uses a pluggable architecture model allowing different databases to store its state. The current version supports Redis (stand-alone, Sentinel, Cluster and Dynomite), PostgreSQL, MySQL and Cassandra, and uses Elasticsearch as its indexing mechanism.
The UI is written in ReactJS and provides the ability to search, visualize and manage workflow states.
References
Netflix
2016 software
Free software for cloud computing
Free and open-source software
Linux software
Software using the Apache license |
65765162 | https://en.wikipedia.org/wiki/Bruno%20Siciliano%20%28engineer%29 | Bruno Siciliano (engineer) | Bruno Siciliano (Naples, 27 October 1959) is an Italian engineer, academic and scientific popularizer. He is professor of Automatic Control at the University of Naples Federico II, Director of the ICAROS Center, and Coordinator of the PRISMA Lab at the Department of Electrical Engineering and Information Technology. He is also Honorary Professor at the University of Óbuda where he holds the Rudolf Kálmán chair.
Education and career
In 1982, Siciliano graduated in Electronic Engineering from the University of Naples Federico II, where he then obtained a PhD in Electronic and Computer Engineering in 1987. Fascinated by Isaac Asimov's science fiction and writings on cybernetics, he decided to pursue robotics as a field of research. From September 1985 to June 1986 he was a visiting scholar at the George W. Woodruff School of Mechanical Engineering of the Georgia Institute of Technology.
Siciliano became Assistant Professor of Automatic Control in 1989 at the Department of Computer and Systems Engineering of the University of Naples and then Associate Professor in 1992. He became Full Professor in 2000 at the Department of Electronic and Computer Engineering of the University of Salerno. Since 2003 he has been Full Professor of Automatic Control at the Department of Computer and Systems Engineering, which later became the Department of Electrical Engineering and Information Technology.
Since 2016 he has been Honorary Professor of the University of Óbuda from which he received the chair named after Rudolf Emil Kálmán in 2019.
Siciliano was President of the IEEE Robotics and Automation Society from 2008 to 2009. From 2013 to 2021 he was a member of the Board of Directors of the European Robotics Association. In 2019, he was among the founding members of the National Institute for Robotics and Intelligent Machines (I-RIM). He is a member of the I-RIM Board of Directors. Since 2020 he has been on the Board of the International Foundation of Robotics Research and an IFAC Pavel J. Nowacki Distinguished Lecturer.
Research
Siciliano's research concerns the manipulation and control of robots, cooperation between robots and humans and service robotics. He directs ICAROS, the Interdepartmental Center for Robotic Surgery which aims to create synergies between clinical and surgical practice and research on new technologies for computer/robot assisted surgery. He coordinates PRISMA Lab, the Laboratory of Projects of Industrial and Service Robotics, Mechatronics and Automation in the Department of Electrical Engineering and Information Technology (DIETI) of the University of Naples Federico II. He is a member of the Board of Directors of the Research Consortium for Energy, Automation and Electromagnetic Technologies (CREATE) where he is responsible for the research program in Robotics.
Among his research projects are RoDyMan (Robotic Dynamic Manipulation, 2013-2019) a robot capable of replicating the movements of the pizza maker, for which he obtained an Advanced Grant, a frontier research grant from the European Research Council. Siciliano has been the coordinator of several projects funded by the European Commission: REFILLS (Robotics Enabling Fully-Integrated Logistics Lines for Supermarkets, 2017-2020) a project aimed at the realization of mobile assistance cobots in supermarkets, EuRoC (European Robotics Challenges, 2014–2018), the largest research program in Europe on robotics competitions, DEXMART (DEXterous and Autonomous Dual-Arm / Hand Robotic Manipulation with sMART Sensory-Motor Skills: A Bridge from Natural to Artificial Cognition, 2008–2012) one of the first European projects on bimanual manipulation. He also co-coordinated ECHORD (European Clearing House for Open Robotics Development, 2009–2013), a pilot project for technology transfer from research laboratories to SMEs.
Educator
Siciliano is active in the MOOC offerings of the e-learning platform of the University of Naples Federico II, with his two courses Robotics Foundations I & II, which follow the contents of his textbook and are also available on the edX platform. He also participates in Industry 4.0 courses on the enabling technologies underlying the new 4.0 paradigm, and in Pizza Revolution, for research and studies on robotics applied to the art of making pizza.
"Keep the gradient" is the motto that Siciliano coined; it stands for the constant search for new ideas and new solutions: a hymn to complexity, to seizing challenges and opportunities, always under the banner of the art of "work and play", as he stated in his TEDx talk in 2016.
Publications
In 2008, with Oussama Khatib of Stanford University, Siciliano published the Springer Handbook of Robotics, which received the PROSE Award from the American Association of Publishers for Excellence in Physical Sciences & Mathematics. The text is the result of coordinating over 200 world-renowned researchers, with the aim of combining the handbook dimension with the encyclopedic one. With the second edition of 2016, the book was among the first to have multimedia support for direct viewing of videos within the text.
In 2009, with Lorenzo Sciavicco, Luigi Villani and Giuseppe Oriolo, he published Robotics, Modeling, Planning and Control, a textbook by Springer now in its third edition and translated into Chinese, Greek and Italian.
Awards
Siciliano was awarded the IEEE RAS George Saridis Leadership Award in Robotics and Automation "for his outstanding leadership in the robotics and automation community as a research innovator, an inspired educator, a dedicated contributor of professional service, an ambassador of science and technology" (2015) and the IEEE RAS Distinguished Service Award "for outstanding leadership and commitment in promoting robotics and automation and RAS as the number one Society in the field" (2010). He has also won the Guido Dorso Award for the University section (2015) and the IPE Alumni Award (2008).
Siciliano ranks tenth (second among engineers) on the list of the 90 most influential scientists of the University of Naples Federico II.
Personal life
Siciliano is married, with two sons and a daughter. He is a passionate Napoli fan and also an admirer of rock music, gourmet food and fine wines.
References
Bibliography
(EN) From Pizza Making to Human Care, Springer Nature storytelling project "Before the Abstract", 10 July 2018
(IT) Conferimento del Premio Guido Dorso, Palazzo Giustiniani, Roma, 15 October 2015
(EN) Oral Histories: Bruno Siciliano, Robotics History: Narratives and Networks, IEEE TV, 4 May 2015
(IT) I nipoti di Galileo, Pietro Greco, Baldini Castoldi Dalai Editore, 2011
(EN) In the Spotlight: Prof. Bruno Siciliano, Springer Author Zone, January 2011
External links
1959 births
Living people
Engineers from Naples
University of Naples Federico II faculty |
42005 | https://en.wikipedia.org/wiki/Collaborative%20software | Collaborative software | Collaborative software or groupware is application software designed to help people working on a common task to attain their goals. One of the earliest definitions of groupware is "intentional group processes plus software to support them".
As regards available interaction, collaborative software may be divided into: real-time collaborative editing platforms that allow multiple users to engage in live, simultaneous and reversible editing of a single file (usually a document), and version control (also known as revision control and source control) platforms, which allow separate users to make parallel edits to a file, while preserving every saved edit by every user as multiple files (that are variants of the original file).
Collaborative software is a broad concept that overlaps considerably with computer-supported cooperative work (CSCW). According to Carstensen and Schmidt (1999) groupware is part of CSCW. The authors claim that CSCW, and thereby groupware, addresses "how collaborative activities and their coordination can be supported by means of computer systems."
The use of collaborative software in the work space creates a collaborative working environment (CWE).
Finally, collaborative software relates to the notion of collaborative work systems, which are conceived as any form of human organization that emerges any time that collaboration takes place, whether it is formal or informal, intentional or unintentional. Whereas the groupware or collaborative software pertains to the technological elements of computer-supported cooperative work, collaborative work systems become a useful analytical tool to understand the behavioral and organizational variables that are associated to the broader concept of CSCW.
History
Douglas Engelbart first envisioned collaborative computing in 1951 and documented his vision in 1962, with working prototypes in full operational use by his research team by the mid-1960s, and held the first public demonstration of his work in 1968 in what is now referred to as "The Mother of All Demos." The following year, Engelbart's lab was hooked into the ARPANET, the first computer network, enabling them to extend services to a broader userbase.
Online collaborative gaming software began between early networked computer users. In 1975, Will Crowther created Colossal Cave Adventure on a DEC PDP-10 computer. As internet connections grew, so did the numbers of users and multi-user games. In 1978 Roy Trubshaw, a student at University of Essex in the United Kingdom, created the game MUD (Multi-User Dungeon).
The US Government began using truly collaborative applications in the early 1990s. One of the first robust applications was the Navy's Common Operational Modeling, Planning and Simulation Strategy (COMPASS). The COMPASS system allowed up to 6 users to create point-to-point connections with one another; the collaborative session only remained while at least one user stayed active, and would have to be recreated if all six logged out. MITRE improved on that model by hosting the collaborative session on a server that each user logged into. Called the Collaborative Virtual Workstation (CVW), this allowed the session to be set up in a virtual file cabinet and virtual rooms, and left as a persistent session that could be joined later.
In 1996, Pavel Curtis, who had built MUDs at PARC, created PlaceWare, a server that simulated a one-to-many auditorium, with side chat between "seat-mates", and the ability to invite a limited number of audience members to speak. In 1997, engineers at GTE used the PlaceWare engine in a commercial version of MITRE's CVW, calling it InfoWorkSpace (IWS). In 1998, IWS was chosen as the military standard for the standardized Air Operations Center. The IWS product was sold to General Dynamics and then later to Ezenia.
Groupware
Collaborative software was originally designated as groupware and this term can be traced as far back as the late 1980s, when Richman and Slovak (1987) wrote: "Like an electronic sinew that binds teams together, the new groupware aims to place the computer squarely in the middle of communications among managers, technicians, and anyone else who interacts in groups, revolutionizing the way they work."
Even further back, in 1978 Peter and Trudy Johnson-Lenz coined the term groupware; their initial 1978 definition of groupware was, "intentional group processes plus software to support them." Later in their article they went on to explain groupware as "computer-mediated culture... an embodiment of social organization in hyperspace." Groupware integrates co-evolving human and tool systems, yet is simply a single system.
In the early 1990s the first commercial groupware products were delivered, and big companies such as Boeing and IBM started using electronic meeting systems for key internal projects. Lotus Notes appeared as a major example of that product category, allowing remote group collaboration when the internet was still in its infancy. Kirkpatrick and Losee (1992) wrote then: "If GROUPWARE really makes a difference in productivity long term, the very definition of an office may change. You will be able to work efficiently as a member of a group wherever you have your computer. As computers become smaller and more powerful, that will mean anywhere." In 1999, Achacoso created and introduced the first wireless groupware.
Design and implementation
The complexity of groupware development is still an issue. One reason for this is the socio-technical dimension of groupware. Groupware designers do not only have to address technical issues (as in traditional software development) but also consider the organizational aspects and the social group processes that should be supported with the groupware application. Some examples for issues in groupware development are:
Persistence is needed in some sessions. Chat and voice communications are routinely non-persistent and evaporate at the end of the session. Virtual room and online file cabinets can persist for years. The designer of the collaborative space needs to consider the information duration needs and implement accordingly.
Authentication has always been a problem with groupware. When connections are made point-to-point, or when log-in registration is enforced, it's clear who is engaged in the session. However, audio and unmoderated sessions carry the risk of unannounced 'lurkers' who observe but do not announce themselves or contribute.
Until recently, bandwidth issues at fixed locations limited full use of the tools; these issues are exacerbated on mobile devices.
Multiple input and output streams bring concurrency issues into the groupware applications.
Motivational issues are important, especially in settings where no pre-defined group process is in place.
Closely related to the motivation aspect is the question of reciprocity. Ellis and others have shown that the distribution of efforts and benefits has to be carefully balanced in order to ensure that all required group members really participate.
Real-time communication via groupware can lead to a lot of noise, over-communication and information overload.
One approach for addressing these issues is the use of design patterns for groupware design. The patterns identify recurring groupware design issues and discuss design choices in a way that all stakeholders can participate in the groupware development process.
Levels of collaboration
Groupware can be divided into three categories depending on the level of collaboration:
Communication can be thought of as unstructured interchange of information. A phone call or an IM Chat discussion are examples of this.
Conferencing (or collaboration level, as it is called in the academic papers that discuss these levels) refers to interactive work toward a shared goal. Brainstorming or voting are examples of this.
Co-ordination refers to complex interdependent work toward a shared goal. A good metaphor for understanding this is a sports team: to win, everyone has to contribute the right play at the right time and adjust their play to the unfolding situation, while each member does something different. That is complex interdependent work toward a shared goal: collaborative management.
Collaborative management (coordination) tools
Collaborative management tools facilitate and manage group activities. Examples include:
Electronic calendars (also called time management software) — schedule events and automatically notify and remind group members
Project management systems — schedule, track, and chart the steps in a project as it is being completed
Online proofing — share, review, approve, and reject web proofs, artwork, photos, or videos between designers, customers, and clients
Workflow systems — collaborative management of tasks and documents within a knowledge-based business process
Knowledge management systems — collect, organize, manage, and share various forms of information
Enterprise bookmarking — collaborative bookmarking engine to tag, organize, share, and search enterprise data
Prediction markets — let a group of people predict together the outcome of future events
Extranet systems (sometimes also known as 'project extranets') — collect, organize, manage and share information associated with the delivery of a project (e.g.: the construction of a building)
Intranet systems — quickly share company information with members within a company via the Internet (e.g.: marketing and product info)
Social software systems — organize social relations of groups
Online spreadsheets — collaborate and share structured data and information
Client portals — interact and share with your clients in a private online environment
Collaborative software and human interaction
The design intent of collaborative software (groupware) is to transform the way documents and rich media are shared in order to enable more effective team collaboration.
Collaboration, with respect to information technology, seems to have several definitions. Some are defensible but others are so broad they lose any meaningful application. Understanding the differences in human interactions is necessary to ensure the appropriate technologies are employed to meet interaction needs.
There are three primary ways in which humans interact: conversations, transactions, and collaborations.
Conversational interaction is an exchange of information between two or more participants where the primary purpose of the interaction is discovery or relationship building. There is no central entity around which the interaction revolves; rather, it is a free exchange of information with no defined constraints, generally focused on personal experiences. Communication technologies such as telephones, instant messaging, and e-mail are generally sufficient for conversational interactions.
Transactional interaction involves the exchange of transaction entities where a major function of the transaction entity is to alter the relationship between participants.
In collaborative interactions the main function of the participants' relationship is to alter a collaboration entity (i.e., the converse of transactional). When teams collaborate on projects it is called Collaborative project management.
See also
Collaboration technologies
Telecommuting
Closely related terms
Computer supported cooperative work
Integrated collaboration environment
Groupware type of applications
Content management system
Customer relationship management software
Document management system
Enterprise content management
Event management software
Intranet
Other related type of applications
Massively distributed collaboration
Online consultation
Online deliberation
Other related terms
Collaborative innovation network
Commons-based peer production
Electronic business
Information technology management
Management information systems
Management
Office of the future
Operational transformation
Organizational Memory System
Worknet
Cloud collaboration
Document collaboration
MediaWiki
Wikipedia
Lists of collaborative software
List of collaborative software
List of social bookmarking websites
Intranet portal
Enterprise portal
References
Citations
Sources
Lockwood, A. (2008). The Project Manager's Perspective on Project Management Software Packages. Avignon, France. Retrieved February 24, 2009.
Pedersen, A.A. (2008). Collaborative Project Management. Retrieved February 25, 2009.
Pinnadyne, Collaboration Made Easy. Retrieved November 15, 2009.
Romano, N.C., Jr., Nunamaker, J.F., Jr., Fang, C., & Briggs, R.O. (2003). A Collaborative Project Management Architecture. Proceedings of the 36th Annual Hawaii International Conference on System Sciences, 6-9 January 2003, 12 pp. Retrieved February 25, 2009.
Brown, M.K. (Kit), Huettner, B., & James-Tanny, C. (2007). Managing Virtual Teams: Getting the Most from Wikis, Blogs, and Other Collaborative Tools. Wordware Publishing, Plano.
External links
Business software
Groupware
Multimodal interaction
Computer-mediated communication
Social software |
46901670 | https://en.wikipedia.org/wiki/Igraph | Igraph | igraph is a library collection for creating and manipulating graphs and analyzing networks. It is written in C, with packages available for Python and R and an interface for Mathematica. The software is widely used in academic research in network science and related fields. The publication introducing the software has 5623 citations according to Google Scholar.
igraph was developed by Gábor Csárdi and Tamás Nepusz. The core of igraph is written in C, and the software is freely available under the GNU General Public License, Version 2.
Basic properties
The three most important properties of igraph that shaped its development are as follows:
igraph is capable of handling large networks efficiently
it can be productively used with a high-level programming language
interactive and non-interactive usage are both supported
Characteristics
The software is open source and its source code can be downloaded from the project's GitHub page. Several open source software packages use igraph functions; for example, the R packages tnet, igraphtosonia, and cccd depend on the igraph R package.
Users can run igraph on many operating systems. Beyond the C library requiring a C build environment and the R and Python packages requiring their respective language runtimes, igraph is portable. The C library, the R package, and the Python package are all well documented.
Functions
igraph can be used to generate graphs and to compute centrality measures, path-length-based properties, graph components, and graph motifs. It can also be used for degree-preserving randomization. igraph can read and write Pajek and GraphML files, as well as simple edge lists. The library also contains several layout tools.
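The following is a minimal sketch in Python using the python-igraph package to illustrate the capabilities listed above; the graph model, parameter values, and file name are arbitrary choices for illustration, and exact function names may vary between igraph versions.

    import igraph as ig

    # Generate a random Erdos-Renyi graph with 100 vertices and edge probability 0.05.
    g = ig.Graph.Erdos_Renyi(n=100, p=0.05)

    # Centrality measures and path-length-based properties.
    degrees = g.degree()
    betweenness = g.betweenness()
    average_path = g.average_path_length()

    # Connected components and counts of 3-vertex motifs.
    components = g.components()
    motif_counts = g.motifs_randesu(size=3)

    # Degree-preserving randomization: rewire edges while keeping each vertex's degree.
    g.rewire(n=1000)

    # Write and read GraphML (Pajek files and simple edge lists are handled similarly).
    g.write_graphml("example.graphml")
    h = ig.Graph.Read_GraphML("example.graphml")

    # Compute a force-directed (Fruchterman-Reingold) layout for plotting.
    layout = g.layout("fruchterman_reingold")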
References
External links
Free software |
861864 | https://en.wikipedia.org/wiki/Snort%20%28software%29 | Snort (software) | Snort is a free open source network intrusion detection system (IDS) and intrusion prevention system (IPS) created in 1998 by Martin Roesch, founder and former CTO of Sourcefire. Snort is now developed by Cisco, which purchased Sourcefire in 2013.
In 2009, Snort entered InfoWorld's Open Source Hall of Fame as one of the "greatest [pieces of] open source software of all time".
Uses
Snort's open-source network-based intrusion detection/prevention system (IDS/IPS) can perform real-time traffic analysis and packet logging on Internet Protocol (IP) networks. Snort performs protocol analysis and content searching and matching.
The program can also be used to detect probes or attacks, including, but not limited to, operating system fingerprinting attempts, semantic URL attacks, buffer overflows, server message block probes, and stealth port scans.
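Such detections are driven by rules written in Snort's rule language. The rule below is a hedged, illustrative example in Snort 2.x syntax, not a rule shipped with Snort; the message text, content pattern, and SID (taken from the range reserved for locally written rules) are assumptions for illustration.

    # Hypothetical local rule: alert on HTTP requests whose URI contains "../",
    # a crude indicator of a directory traversal attempt. SIDs of 1000000 and
    # above are reserved for locally written rules.
    alert tcp any any -> $HOME_NET 80 (msg:"LOCAL possible directory traversal attempt"; flow:to_server,established; content:"../"; http_uri; classtype:web-application-attack; sid:1000001; rev:1;)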
Snort can be configured in three main modes: sniffer, packet logger, and network intrusion detection.
Sniffer Mode
In sniffer mode, the program will read network packets and display them on the console.
Packet Logger Mode
In packet logger mode, the program will log packets to the disk.
Network Intrusion Detection System Mode
In intrusion detection mode, the program will monitor network traffic and analyze it against a rule set defined by the user. The program will then perform a specific action based on what has been identified.
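As a rough sketch, the three modes are typically selected with command-line options such as the following; the interface name, log directory, and configuration path are placeholders that vary by installation, and option handling differs somewhat between Snort 2 and Snort 3.

    # Sniffer mode: print packet headers (and payloads, with -d) to the console.
    snort -v -d -i eth0

    # Packet logger mode: write packets to a log directory instead of the console.
    snort -d -i eth0 -l /var/log/snort

    # Network intrusion detection mode: analyze traffic against the rule set
    # referenced by the configuration file and alert on matches.
    snort -d -i eth0 -c /etc/snort/snort.conf -A console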
Third-party tools
There are several third-party tools that interface with Snort for administration, reporting, performance, and log analysis:
Snorby – a GPLv3 Ruby on Rails application
BASE
Sguil (free)
See also
List of free and open-source software packages
Zeek
Suricata (software)
References
External links
Snort Blog
Talos Intelligence
Free security software
Computer security software
Linux security software
Unix network-related software
Lua (programming language)-scriptable software
Intrusion detection systems |