id | url | title | text
---|---|---|---|
23794004
|
https://en.wikipedia.org/wiki/Single-board%20microcontroller
|
Single-board microcontroller
|
A single-board microcontroller is a microcontroller built onto a single printed circuit board. This board provides all of the circuitry necessary for a useful control task: a microprocessor, I/O circuits, a clock generator, RAM, stored program memory and any necessary support ICs. The intention is that the board is immediately useful to an application developer, without requiring them to spend time and effort to develop controller hardware.
As they are usually low-cost, and have an especially low capital cost for development, single-board microcontrollers have long been popular in education. They are also a popular means for developers to gain hands-on experience with a new processor family.
Origins
Single-board microcontrollers appeared in the late 1970s, when the appearance of early microprocessors, such as the 6502 and the Z80, made it practical to build an entire controller on a single board, as well as affordable to dedicate a computer to a relatively minor task.
In March 1976, Intel announced a single-board computer product that integrated all of the support components required for their 8080 microprocessor, along with 1 kilobyte of RAM, 4 kilobytes of user-programmable ROM, and 48 lines of parallel digital I/O with line drivers. The board also offered expansion through a bus connector, but could be used without an expansion card cage when applications did not require additional hardware. Software development for this system was hosted on Intel's Intellec MDS microcomputer development system; this provided assembler and PL/M support, and permitted in-circuit emulation for debugging.
Processors of this era required a number of support chips to be included outside of the processor. RAM and EPROM were separate, often requiring memory management or refresh circuitry for dynamic memory. I/O processing might have been carried out by a single chip such as the 8255, but frequently required several more chips.
A single-board microcontroller differs from a single-board computer in that it lacks the general-purpose user interface and mass-storage interfaces of a more general-purpose computer. Compared to a microprocessor development board, a microcontroller board emphasizes digital and analog control interconnections to some controlled system, whereas a development board might have only a few or no discrete or analog input/output devices. The development board exists to showcase or train on some particular processor family, so internal implementation is more important than external function.
Internal bus
The bus of the early single-board devices, such as those based on the Z80 and 6502, universally followed the von Neumann architecture. Program and data memory were accessed via the same shared bus, even though they were stored in fundamentally different types of memory: ROM for programs and RAM for data. This bus architecture was needed to economise on pin count, since only 40 pins were available in the processor's ubiquitous dual in-line IC package.
It was common to offer access to the internal bus through an expansion connector, or at least provide space for a connector to be soldered on. This was a low-cost option and offered the potential for expansion, even if it was rarely used. Typical expansions would be I/O devices or additional memory. It was unusual to add peripheral devices such as tape or disk storage, or a CRT display.
Later, when single-chip microcontrollers, such as the 8048, became available, the bus no longer needed to be exposed outside the package, as all necessary memory could be provided within the chip package. This generation of processors used a Harvard architecture with separate program and data buses, both internal to the chip. Many of these processors used a modified Harvard architecture, where some write access was possible to the program data space, thus permitting in-circuit programming. None of these processors required, or supported, a Harvard bus across a single-board microcontroller. When they supported a bus for expansion of peripherals, a dedicated I/O bus, such as I²C, 1-Wire or various serial buses, was used.
External bus expansion
Some microcontroller boards using a general-purpose microprocessor can bring the address and data bus of the processor to an expansion connector, allowing additional memory or peripherals to be added. This provides resources not already present on the single board system. Since not every system will require expansion, the connector may be optional, with a mounting position provided for installation by the user if desired.
Input and output
Microcontroller systems provide multiple forms of input and output signals to allow application software to control an external "real-world" system. Discrete digital I/O provides a single bit of data (on or off). Analog signals, representing a continuous variable range, such as temperature or pressure, can also be inputs and outputs for microcontrollers.
Discrete digital inputs and outputs might be buffered from the microprocessor data bus only by an addressable latch, or might be operated by a specialized input/output IC such as an Intel 8255 or Motorola 6821 parallel input/output adapter. Later single-chip microcontrollers have input and output pins available. These input/output circuits usually do not provide enough current to directly operate devices like lamps or motors, so solid-state relays are operated by the microcontroller's digital outputs, and inputs are isolated by signal-conditioning, level-shifting, and protection circuits.
One or more analog inputs, with an analog multiplexer and common analog-to-digital converter, are found on some microcontroller boards. Analog outputs may use a digital-to-analog converter or, on some microcontrollers, may be controlled by pulse-width modulation. For discrete inputs, external circuits may be required to scale inputs, or to provide functions like bridge excitation or cold junction compensation.
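As a concrete picture of the analog path just described, the following is a minimal sketch in MicroPython that reads the on-chip analog-to-digital converter and mirrors the value on a pulse-width-modulated output. It is illustrative only: the pin numbers are hypothetical and the machine module's details vary by port.

from machine import ADC, PWM, Pin
import time

adc = ADC(Pin(26))        # analog input (hypothetical pin number)
pwm = PWM(Pin(15))        # "analog" output via pulse-width modulation
pwm.freq(1000)            # 1 kHz PWM carrier

while True:
    raw = adc.read_u16()  # 0..65535 reading from the A/D converter
    pwm.duty_u16(raw)     # mirror the scaled reading as a duty cycle
    time.sleep_ms(100)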
To control component costs, many boards were designed with extra hardware interface circuits but without the components for these circuits installed, leaving the board bare. The circuit was added as an option on delivery, or could be populated later.
It is common practice for boards to include "prototyping areas", areas of the board laid out as a solderable breadboard area with the bus and power rails available, but without a defined circuit. Several controllers, particularly those intended for training, also include a pluggable, re-usable breadboard for easy prototyping of extra I/O circuits that could be changed or removed for later projects.
Communications and user interfaces
Communications interfaces vary depending on the age of the microcontroller system. Early systems might implement a serial port to provide RS-232 or current loop. The serial port could be used by the application program or could be used, in conjunction with a monitor ROM, to transfer programs into the microcontroller memory. Current microcontrollers may support USB, wireless networks (Wi-Fi, ZigBee, or others), or provide an Ethernet connection. In addition, they may support a TCP/IP protocol stack. Some devices have firmware available to implement a Web server, allowing an application developer to rapidly build a Web-enabled instrument or system.
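As a rough illustration of the "web-enabled instrument" idea above, the sketch below answers each HTTP request with a sensor reading. This is a hedged example in MicroPython, not the firmware the article refers to; it assumes the board's network interface is already configured, and the pin number is hypothetical.

import socket
from machine import ADC, Pin

adc = ADC(Pin(26))                      # the "instrument" being exposed
srv = socket.socket()
srv.bind(('0.0.0.0', 80))
srv.listen(1)

while True:
    conn, _ = srv.accept()
    conn.recv(512)                      # read and discard the request
    body = 'reading=%d\n' % adc.read_u16()
    conn.send(('HTTP/1.0 200 OK\r\nContent-Type: text/plain\r\n\r\n' + body).encode())
    conn.close()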
Programming
Many early systems had no internal facilities for programming, and relied on a separate "host" system for this task. This programming was typically done in assembly language, or sometimes in C or PL/M, and then cross-assembled or cross-compiled on the host. Some single-board microcontrollers support a BASIC language system, allowing programs to be developed on the target hardware. Hosted development allows all the storage and peripherals of a desktop computer to be used, providing a more powerful development environment.
EPROM burning
Early microcontrollers relied on erasable programmable read-only memory (EPROM) devices to hold the application program. The object code from a host system would be "burned" onto an EPROM with an EPROM programmer. This EPROM was then physically plugged into the board. As the EPROM would be removed and replaced many times during program development, it was common to provide a ZIF socket to avoid wear or damage. Erasing an EPROM with a UV eraser takes a considerable time, and so it was also common for a developer to have several EPROMs in circulation at any one time.
Some microcontroller devices were available with on-board EPROM. These would also be programmed in a separate burner, then put into a socket on the target system.
The use of EPROM sockets allowed field updates to the application program, either to fix errors or to provide updated features.
Keypad monitors
When the single-board controller formed the entire development environment (typically in education), the board might also have included a simple hexadecimal keypad, a calculator-style LED display, and a "monitor" program set permanently in ROM. This monitor allowed machine code programs to be entered directly through the keypad and held in RAM. These programs were in machine code, not even in assembly language, and were often assembled by hand on paper before being keyed in. It is arguable which process was more time-consuming and error-prone: assembling by hand, or keying in byte by byte.
Single-board "keypad and calculator display" microcontrollers of this type were very similar to some low-end microcomputers of the time, such as the KIM-1 or the Microprofessor I. Some of these microprocessor "trainer" systems are still in production today, used as very low-cost introductions to microprocessors at the hardware programming level.
Hosted development
When desktop personal computers appeared, initially CP/M or Apple II, then later the IBM PC and compatibles, there was a shift to hosted development. Hardware was now cheaper and RAM capacity had expanded such that it was possible to download the program through the serial port and hold it in RAM. This massive reduction in the cycle time to test a new version of a program gave an equally large boost in development speed.
This program memory was still volatile and would be lost if power was lost. Flash memory was not yet available at a viable price. As a completed controller project was usually required to be non-volatile, the final step in a project was often to burn it to an EPROM.
Single-chip microcontrollers
Single-chip microcontrollers, such as the Intel 8748, combined many of the features of previous boards into a single IC package. Single-chip microcontrollers integrate memory (both RAM and ROM) on-package and, therefore, do not need to expose the data and address bus through the pins of the IC package. These pins are then available for I/O lines. These changes also reduce the area required on the printed circuit board and simplify the design of the single-board microcontroller. Examples of single-chip microcontrollers include:
Intel 8748
PIC
Atmel AVR
Program memory
For production use as embedded systems, the on-board ROM was either mask-programmed at the chip factory or one-time programmed (OTP) by the developer as a PROM. PROMs often used the same UV EPROM technology for the chip, but in a cheaper package without the transparent erasure window. During program development, it was still necessary to burn EPROMs; in this case the part being swapped was the entire controller IC, and so ZIF sockets for the controller would be provided.
With the development of affordable EEPROM and flash memory, it became practical to attach the controller permanently to the board and to download program code from a host computer through a serial connection. This was termed "in-circuit programming". Erasure of old programs was carried out by either over-writing them with a new download, or bulk erasing them electrically (for EEPROM). The latter method was slower, but could be carried out in-situ.
The main function of the controller board was then to carry the support circuits for this serial or, on later boards, USB interface. As a further convenience during development, many boards also had low-cost features like LED monitors of the I/O lines or reset switches mounted on board.
Single-board microcontrollers today
It is now cheap and simple to design circuit boards for microcontrollers. Development host systems are also cheap, especially when using open source software. Higher level programming languages abstract details of the hardware, making differences between specific processors less obvious to the application programmer. Rewritable flash memory has replaced slow programming cycles, at least during program development. Accordingly, almost all development now is based on cross-compilation from personal computers and programs are downloaded to the controller board through a serial-like interface, usually appearing to the host as a USB device.
The original market demand for a simplified board implementation is no longer as relevant for microcontrollers. Single-board microcontrollers are still important, but have shifted their focus to:
Easily accessible platforms aimed at traditionally "non-programmer" groups, such as artists, designers, hobbyists, and others interested in creating interactive objects or environments. Some typical projects in 2011 included: the backup control of DMX stage lights and special effects, multi-camera control, autonomous fighting robots, controlling Bluetooth projects from a computer or smartphone, LEDs and multiplexing, displays, audio, motors, mechanics, and power control. These controllers may be embedded to form part of a physical computing project. Popular choices for this work are the Arduino, Dwengo, or Wiring.
Technology demonstration boards for innovative processors or peripheral features:
AVR Butterfly
Parallax Propeller
See also
Comparison of single-board microcontrollers
Microprocessor development board
Embedded system
Programmable logic controller
Arduino
Make Controller Kit
PICAXE
BASIC Stamp
Raspberry Pi
Asus Tinker Board
Tinkerforge
References
Microcontrollers
|
48589354
|
https://en.wikipedia.org/wiki/Visual%20Turing%20Test
|
Visual Turing Test
|
Computer vision research is driven by standard evaluation practices. Current systems are tested by their accuracy on tasks such as object detection, segmentation, and localization. Methods such as convolutional neural networks perform well on these tasks, but current systems are still far from solving the ultimate problem of understanding images the way humans do. Motivated by the human ability to understand an image and even tell a story about it, Geman et al. introduced the Visual Turing Test for computer vision systems.
As described by Geman et al., it is "an operator-assisted device that produces a stochastic sequence of binary questions from a given test image". The query engine produces a sequence of questions that have unpredictable answers given the history of questions. The test is only about vision and does not require any natural language processing. The job of the human operator is to provide the correct answer to the question or reject it as ambiguous. The query generator produces questions such that they follow a "natural story line", similar to what humans do when they look at a picture.
History
Research in computer vision dates back to the 1960s, when Seymour Papert first attempted to solve the problem. This unsuccessful attempt was referred to as the Summer Vision Project. It failed because computer vision is more complicated than people assumed. The complexity is in line with that of the human visual system: roughly 50% of the human brain is devoted to processing vision, which indicates how difficult the problem is.
Later there were attempts to solve the problem with models inspired by the human brain. Perceptrons, introduced by Frank Rosenblatt as an early form of neural network, were one of the first such approaches. These simple neural networks could not live up to expectations and had limitations that kept them out of subsequent research for some time.
Later, with the availability of hardware and some processing power, the research shifted to image processing, which involves pixel-level operations such as finding edges, de-noising images, or applying filters. There was great progress in this field, but the core problem of vision, making machines understand images, was still not being addressed. During this time neural networks also resurfaced, as it was shown that the limitations of perceptrons could be overcome by multi-layer perceptrons. In the early 1990s convolutional neural networks were also introduced; they showed great results on digit recognition but did not scale up well to harder problems.
The late 1990s and early 2000s saw the birth of modern computer vision. One reason was the availability of key feature extraction and representation algorithms. Features, along with the machine learning algorithms already available, were used to detect, localise, and segment objects in images.
While all these advancements were being made, the community felt the need to have standardised datasets and evaluation metrics so the performances can be compared. This led to the emergence of challenges like the Pascal VOC challenge and the ImageNet challenge. The availability of standard evaluation metrics and the open challenges gave directions to the research. Better algorithms were introduced for specific tasks like object detection and classification.
Visual Turing Test aims to give a new direction to the computer vision research which would lead to the introduction of systems that will be one step closer to understanding images the way humans do.
Current evaluation practices
A large number of datasets have been annotated and generalised to benchmark the performance of different classes of algorithms on different vision tasks (e.g., object detection/recognition) in some image domain (e.g., scene images).
One of the most famous datasets in computer vision is ImageNet, which is used to assess the problem of object-level image classification. ImageNet is one of the largest annotated datasets available and has over one million images. The other important vision task is object detection and localisation, which refers to detecting an object instance in the image and providing bounding-box coordinates around it, or segmenting it. The most popular dataset for this task is the Pascal dataset. Similarly, there are other datasets for specific tasks, such as the H3D dataset for human pose detection and the Core dataset for evaluating the quality of detected object attributes such as colour, orientation, and activity.
Having these standard datasets has helped the vision community come up with extremely well-performing algorithms for all of these tasks. The next logical step is to create a larger task encompassing these smaller subtasks. Such a task would lead to systems that understand images, since understanding images inherently involves detecting, localising, and segmenting objects.
Details
The Visual Turing Test (VTT), unlike the Turing test, has a query engine system which interrogates a computer vision system in the presence of a human co-ordinator.
It is a system that generates a random sequence of binary questions specific to the test image, such that the answer to any question k is unpredictable given the true answers to the previous k − 1 questions (also known as history of questions).
The test happens in the presence of a human operator who serves two main purposes: removing ambiguous questions and providing the correct answers to the unambiguous ones. Given an image, infinitely many binary questions can be asked, and many of them are bound to be ambiguous. If the query engine generates such a question, the human moderator removes it and the query engine generates another question whose answer is unpredictable given the history of questions.
The aim of the Visual Turing Test is to evaluate the image understanding of a computer system, and an important part of image understanding is the story line of the image. When humans look at an image, they do not think of a car at 'x' pixels from the left and 'y' pixels from the top; instead they see a story, for example that a car is parked on the road and a person is exiting the car and heading towards a building. The most important elements of the story line are the objects, so to extract any story line from an image the first and most important task is to instantiate the objects in it, and that is what the query engine does.
Query engine
The query engine is the core of the Visual Turing Test, and it comprises two main parts: vocabulary and questions.
Vocabulary
Vocabulary is a set of words that represent elements of the images. This vocabulary, when used with an appropriate grammar, leads to a set of questions. The grammar is defined in the next section in such a way that it yields a space of binary questions.
The vocabulary consists of three components:
Types of Objects
Type-dependent attributes of objects
Type-dependent relationships between two objects
For images of urban street scenes, the types of objects include people, vehicles, and buildings. Attributes refer to properties of these objects: for example female, child, wearing a hat, or carrying something for people, and moving, parked, stopped, one tire visible, or two tires visible for vehicles. Relationships between each pair of object classes can be either "ordered" or "unordered". The unordered relationships may include talking or walking together, and the ordered relationships include taller, closer to the camera, occluding, being occluded, and so on.
Additionally, all of this vocabulary is used in the context of rectangular image regions $w \in W$, which allow for the localisation of objects in the image. An extremely large number of such regions is possible, which complicates the problem, so the test only uses regions at specific scales: 1/16 the size of the image, 1/4 the size of the image, 1/2 the size of the image, or larger.
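For concreteness, a region enumerator along these lines might look as follows. This is our interpretation only: the article does not specify the tiling, or whether "size" means area or linear dimension, so the scale factors and the non-overlapping grid below are assumptions.

def candidate_regions(width, height):
    # Whole image plus windows at 1/2 and 1/4 of the linear dimensions
    # (1/4 and 1/16 of the area), tiled without overlap for simplicity.
    regions = [(0, 0, width, height)]
    for f in (2, 4):
        w, h = width // f, height // f
        for x in range(0, width - w + 1, w):
            for y in range(0, height - h + 1, h):
                regions.append((x, y, w, h))
    return regions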
Questions
The question space is composed of four types of questions:
Existence questions: The aim of the existence questions is to find new objects in the image that have not been uniquely identified previously. They are of the form:
Qexist = 'Is there an instance of an object of type t with attributes A partially visible in region w that was not previously instantiated?'
Uniqueness questions: A uniqueness question tries to uniquely identify an object in order to instantiate it. They are of the form:
Quniq = 'Is there a unique instance of an object of type t with attributes A partially visible in region w that was not previously instantiated?'
The uniqueness questions together with the existence questions form the instantiation questions. As mentioned earlier, instantiating objects leads to other interesting questions and eventually a story line. Uniqueness questions follow the existence questions, and a positive answer to a uniqueness question leads to the instantiation of an object.
Attribute questions: An attribute question tries to find more about the object once it has been instantiated. Such questions can query about a single attribute, conjunction of two attributes or disjunction of two attributes.
Qatt(ot) = {'Does object ot have attribute a?', 'Does object ot have attribute a1 or attribute a2?', 'Does object ot have attribute a1 and attribute a2?'}
Relationship questions: Once multiple objects have been instantiated, a relationship question explores the relationship between pairs of objects. They are of the form:
Qrel(ot, ot') = 'Does object ot have relationship r with object ot'?'
Implementation details
As mentioned before the core of the Visual Turing Test is the query generator which generates a sequence of binary questions such that the answer to any question k is unpredictable given the correct answers to the previous k − 1 questions. This is a recursive process, given a history of questions and their correct answers, the query generator either stops because there are no more unpredictable questions, or randomly selects an unpredictable question and adds it to the history.
The question space defined earlier implicitly imposes a constraint on the flow of the questions: the attribute and relationship questions cannot precede the instantiation questions. Only when objects have been instantiated can they be queried about their attributes and relations to other previously instantiated objects. Thus, given a history, we can restrict the possible questions that can follow it; this set of questions is referred to as the candidate questions.
The task is to choose an unpredictable question from these candidate questions that conforms to the question flow described in the next section. For this, the unpredictability of every candidate question is computed.
Let $X_H$ be a binary random variable, with $X_H = 1$ if the history $H$ is valid for the image and $X_H = 0$ otherwise. Let $q$ be the proposed question, and $x_q \in \{0, 1\}$ be the answer to question $q$.
Then, find the conditional probability of getting the answer $x_q$ to the question $q$ given the history $H$, namely $P(X_q = x_q \mid X_H = 1)$.
Given this probability, the measure of unpredictability is:
$$\rho_q = \left| P(X_q = 1 \mid X_H = 1) - \tfrac{1}{2} \right|$$
The closer $\rho_q$ is to 0, the more unpredictable the question is. $\rho_q$ is calculated for every candidate question. The questions for which $\rho_q < \epsilon$, for some threshold $\epsilon$, form the set of almost unpredictable questions, and the next question is randomly picked from these.
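A small sketch of this selection rule, with names of our own choosing: estimate the answer probability for each candidate, keep the almost-unpredictable ones, and pick uniformly at random.

import random

def next_question(candidates, history, estimate_p, epsilon=0.15):
    # estimate_p(q, history) should return an estimate of P(Xq = 1 | XH = 1);
    # the epsilon value here is an assumed threshold, not taken from the paper.
    pool = [q for q in candidates
            if abs(estimate_p(q, history) - 0.5) < epsilon]
    return random.choice(pool) if pool else None  # None: the generator stops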
Question flow
As discussed in the previous section there is an implicit ordering in the question space, according to which the attribute questions come after the instantiation questions and the relationship questions come after the attribute questions, once multiple objects have been instantiated.
Therefore, the query engine follows a loop structure where it first instantiates an object with the existence and uniqueness questions, then queries about its attributes, and then the relationship questions are asked for that object with all the previously instantiated objects.
Look-ahead search
It is clear that the interesting questions about the attributes and the relations come after the instantiation questions, and so the query generator aims at instantiating as many objects as possible.
Instantiation questions are composed of both the existence and the uniqueness questions, but it is the uniqueness questions that actually instantiate an object if they get a positive response. So if the query generator has to randomly pick an instantiation question, it prefers to pick an unpredictable uniqueness question if present. If such a question is not present, the query generator picks an existence question such that it will lead to a uniqueness question with a high probability in the future. Thus the query generator performs a look-ahead search in this case.
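A sketch of that preference, under our own structuring of the candidate set (the function scoring how likely an existence question is to lead to a uniqueness question is assumed, not specified here):

import random

def pick_instantiation(uniqueness_qs, existence_qs, lead_prob):
    # Prefer an unpredictable uniqueness question, since a positive answer
    # instantiates an object; otherwise take the existence question most
    # likely to enable a uniqueness question later (the look-ahead step).
    if uniqueness_qs:
        return random.choice(uniqueness_qs)
    return max(existence_qs, key=lead_prob) if existence_qs else None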
Story line
An integral part of the ultimate aim, building systems that can understand images the way humans do, is the story line. Humans try to figure out a story line in the image they see. The query generator achieves this through continuity in the question sequence.
This means that once an object has been instantiated, the generator tries to explore it in more detail. Apart from finding its attributes and relations to other objects, localisation is also an important step. Thus, as a next step, the query generator tries to localise the object within the region where it was first identified, restricting the set of instantiation questions to regions within the original region.
Simplicity preference
Simplicity preference states that the query generator should pick simpler questions over the more complicated ones. Simpler questions are the ones that have fewer attributes in them. So this gives an ordering to the questions based on the number of attributes, and the query generator prefers the simpler ones.
Estimating predictability
To select the next question in the sequence, the VTT has to estimate the predictability of every proposed question. This is done using an annotated training set of images. Each image is annotated with bounding boxes around the objects, the objects are labelled with attributes, and pairs of objects are labelled with relations. Consider each question type separately:
Instantiation questions: The conditional probability estimator for instantiation questions can be represented as the fraction of training images consistent with the history for which the answer to the proposed question is positive:
$$\hat{P}(X_q = 1 \mid X_H = 1) = \frac{\#\{I : X_H(I) = 1,\ X_q(I) = 1\}}{\#\{I : X_H(I) = 1\}}$$
The question is only considered if the denominator counts at least 80 images. The condition $X_H = 1$ is very strict and may not hold for a large number of images, as every question in the history eliminates approximately half of the candidates (images, in this case). As a result, the history is pruned, and questions which should not alter the conditional probability are eliminated. A shorter history lets us consider a larger number of images for the probability estimation. The history pruning is done in two stages:
In the first stage, all the attribute and relationship questions are removed, under the assumption that the presence and instantiation of objects depends only on other objects and not on their attributes or relations. Also, all the existence questions referring to regions disjoint from the region $w$ referred to in the proposed question are dropped, the assumption being that the probability of the presence of an object at a location does not change with the presence or absence of objects at locations other than $w$. Finally, all the uniqueness questions with a negative response referring to regions disjoint from the region referred to in the proposed question are dropped, since uniqueness questions with a positive response, if dropped, could alter the response to future instantiation questions. The history of questions obtained after this first stage of pruning can be referred to as $H'$.
In the second stage, an image-by-image pruning is performed. Let $q'$ be a uniqueness question in $H'$ that has not been pruned. If this question concerns a region disjoint from the region referenced in the proposed question, then its expected answer is positive, because of the constraints in the first stage. But if the actual answer to this question for a training image is negative, then that training image is not considered for the probability estimation, and the question is also dropped. The final history of questions after this is $H''$, and the probability is given by:
$$\hat{P}(X_q = 1 \mid X_{H''} = 1) = \frac{\#\{I : X_{H''}(I) = 1,\ X_q(I) = 1\}}{\#\{I : X_{H''}(I) = 1\}}$$
Attribute questions: The probability estimator for attribute questions depends on the number of labelled objects rather than on the images, unlike the instantiation questions. Consider an attribute question of the form 'Does object ot have attribute a?', where $o_t$ is an object of type $t$ and $a$ is an attribute. Let $A$ be the set of attributes already known to belong to $o_t$ because of the history. Let $O$ be the set of all annotated objects (ground truth) in the training set, and for each $o \in O$, let $t_o$ be the type of the object and $A_o$ the set of attributes belonging to $o$. Then the estimator is:
$$\hat{P}(X_q = 1 \mid X_H = 1) = \frac{\#\{o \in O : t_o = t,\ A \cup \{a\} \subseteq A_o\}}{\#\{o \in O : t_o = t,\ A \subseteq A_o\}}$$
This is the ratio of the number of times an object of type $t$ with attributes $A \cup \{a\}$ occurs in the training data to the number of times an object of type $t$ with attributes $A$ occurs. A high number of attributes in $A$ leads to a sparsity problem similar to that of the instantiation questions. To deal with it, the attributes are partitioned into subsets that are approximately independent conditioned on belonging to the object. For example, for a person, attributes like crossing a street and standing still are not independent, but both are fairly independent of the sex of the person, of whether the person is a child or an adult, and of whether they are carrying something. These conditional independencies reduce the size of the set $A$, and thereby overcome the problem of sparsity.
Relationship questions: The approach for relationship questions is the same as for attribute questions, except that the number of pairs of objects is counted instead of the number of objects; for the independence assumption, relationships that are independent of the attributes of the related objects, and relationships that are independent of each other, are used.
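Both estimators reduce to counting over the annotated training set. The sketch below shows that counting directly; the helper names (holds_in, answer_in, the attrs sets) are hypothetical stand-ins for the annotations, not an API from the paper.

def estimate_instantiation(images, pruned_history, question, min_support=80):
    # Fraction of training images consistent with the pruned history for
    # which the proposed question's answer is "yes"; the 80-image support
    # threshold is the one stated above.
    consistent = [im for im in images if pruned_history.holds_in(im)]
    if len(consistent) < min_support:
        return None                     # question not considered
    yes = sum(1 for im in consistent if question.answer_in(im))
    return yes / len(consistent)

def estimate_attribute(objects, obj_type, known_attrs, new_attr):
    # Ratio of annotated objects of this type carrying known_attrs plus
    # new_attr to those carrying known_attrs alone (attrs are Python sets).
    base = [o for o in objects
            if o.type == obj_type and known_attrs <= o.attrs]
    if not base:
        return None
    hits = sum(1 for o in base if new_attr in o.attrs)
    return hits / len(base)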
Example
Detailed example sequences can be found in the work of Geman et al.
Dataset
The images considered in the work of Geman et al. are from the 'Urban street scenes' dataset, which has scenes of streets from different cities across the world. This is why the types of objects are constrained to people and vehicles for this experiment.
Another dataset, introduced by the Max Planck Institute for Informatics, is the DAQUAR dataset, which has real-world images of indoor scenes. Its authors propose a different version of the visual Turing test that takes a holistic approach and expects the participating system to exhibit human-like common sense.
Conclusion
The work was published on March 9, 2015, in the journal Proceedings of the National Academy of Sciences, by researchers from Brown University and Johns Hopkins University. It evaluates how well computer vision systems understand images compared to humans. Currently the test is written and the interrogator is a machine, because an oral evaluation by a human interrogator would give humans the undue advantage of being subjective, and would also demand real-time answers.
The Visual Turing Test is expected to give a new direction to the computer vision research. Companies like Google and Facebook are investing millions of dollars into computer vision research, and are trying to build systems that closely resemble the human visual system. Recently Facebook announced its new platform M, which looks at an image and provides a description of it to help the visually impaired. Such systems might be able to perform well on the VTT.
References
Turing tests
Human–computer interaction
Computer vision
|
29553024
|
https://en.wikipedia.org/wiki/Michael%20Mitzenmacher
|
Michael Mitzenmacher
|
Michael David Mitzenmacher is an American computer scientist working in algorithms. He is Professor of Computer Science at the Harvard John A. Paulson School of Engineering and Applied Sciences and was area dean of computer science July 2010 to June 2013. He also runs My Biased Coin, a blog about theoretical computer science.
Education
In 1986, Mitzenmacher attended the Research Science Institute. Mitzenmacher earned his AB at Harvard, where he won the 1990 North American Collegiate Bridge Championship. He attended the University of Cambridge on a Churchill Scholarship from 1991–1992. Mitzenmacher received his PhD in computer science at the University of California, Berkeley in 1996 under the supervision of Alistair Sinclair. He joined Harvard University in 1999.
Research
Mitzenmacher's research covers the design and analysis of randomized algorithms and processes. With Eli Upfal he is the author of a textbook on randomized algorithms and probabilistic techniques in computer science. Mitzenmacher's PhD thesis was on the analysis of simple randomized load-balancing schemes. He is an expert in hash function applications such as Bloom filters, cuckoo hashing, and locality-sensitive hashing. His work on min-wise independence gives a fast way to estimate the similarity of electronic documents and is used in internet search engines. Mitzenmacher has also worked on erasure codes and error-correcting codes.
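To make the min-wise independence application concrete: the similarity of two documents can be estimated by comparing, for each of several hash functions, the minimum hash value over each document's tokens. The sketch below is a hedged illustration of the general technique, not Mitzenmacher's code; salted SHA-1 merely stands in for a min-wise independent hash family.

import hashlib

def minhash_signature(tokens, num_hashes=64):
    # For each salt i, record the minimum hash over the token set.
    return [min(int(hashlib.sha1(('%d:%s' % (i, t)).encode()).hexdigest(), 16)
                for t in tokens)
            for i in range(num_hashes)]

def estimated_jaccard(sig_a, sig_b):
    # The probability that two minima agree equals the Jaccard similarity
    # of the underlying token sets, so the agreement rate estimates it.
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)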
Mitzenmacher has authored over 100 conference and journal publications. He has served on dozens of program committees in computer science, information theory, and networks, and chaired the program committee of the Symposium on Theory of Computing in 2009. He belongs to the editorial board of SIAM Journal on Computing, Internet Mathematics, and Journal of Interconnection Networks.
Awards and honors
Mitzenmacher became a fellow of the Association for Computing Machinery in 2014. His joint paper on low-density parity-check codes received the 2002 IEEE Information Theory Society Best Paper Award. His joint paper on fountain codes received the 2009 ACM SIGCOMM Test of Time Paper Award. In 2019, he was elected an IEEE Fellow.
Selected publications
There is also an earlier 1998 technical report with the same title.
References
External links
Mitzenmacher’s web page
Theoretical computer scientists
Living people
Harvard University alumni
University of California, Berkeley alumni
Harvard University faculty
Fellows of the Association for Computing Machinery
Fellow Members of the IEEE
Science bloggers
Santa Fe Institute people
Year of birth missing (living people)
Alumni of the University of Cambridge
American computer scientists
|
2299308
|
https://en.wikipedia.org/wiki/Trip-a-Tron
|
Trip-a-Tron
|
Trip-a-Tron is a light synthesizer written by Jeff Minter and published through his Llamasoft company in 1988. It was originally written for the Atari ST and later ported to the Amiga in 1990 by Andy Fowler.
Description
Trip-A-Tron was released as shareware, but also came in a commercial package with a 3-ring-bound manual and 2 game disks. The trial version contained no limitations, but registration was necessary to obtain the manual, which in turn was necessary to learn the script language ("KML" - supposedly "Keyboard Macro Language" and only coincidentally the phonetic equivalent of "camel") which drove the system.
The software has a usable but quirky user interface, filled with in-jokes and references to Llamasoft mascots. For example, the button to exit the MIDI editor is labelled "naff off", while the button to exit the file display is labelled with a sheep saying "Baa!". The waveform editor colour-cycles the words "Dead cool" above the waveform display, and the event sequencer displays an icon of a camel smoking a cigarette. The image manipulation tool has a series of icons used to indicate how long the current operation is going to take: "Make the tea", "Have a fag", "Go to bed", "Go to sleep", "Go on holiday", "Go to Peru for six months", and "RIP". The scripting language command to set the length of drawn lines is "LLAMA". (The manual states: "I could have called the command LINELENGTH I suppose, but I like llamas so what the heck".)
The manual is also written in a similar light, conversational style, but has been praised for nonetheless achieving a high degree of technical clarity.
In spite of this, the software is extremely usable and was recommended as one of the best light synthesizers available at the time.
See also
Psychedelia (light synthesizer)
Virtual Light Machine
Neon (light synthesizer)
References
External links
1988 software
Atari ST software
Amiga software
Music visualization software
Llamasoft software
|
916492
|
https://en.wikipedia.org/wiki/Borland%20Kylix
|
Borland Kylix
|
Borland Kylix is a compiler and integrated development environment (IDE) formerly sold by Borland, but later discontinued. It is a Linux software development environment based on Borland Delphi and Borland C++ Builder, which run under Microsoft Windows. Continuing Delphi's classical Greek theme, Kylix is the name of an ancient Greek drinking cup. The closest supported equivalent to Kylix is the free Lazarus IDE package, designed to be code-compatible with Delphi. As of 2010 the project had been resurrected in the form of a Delphi cross-compiler for Mac and Linux, as shown in Embarcadero's Delphi and C++ Builder roadmap. As of September 2011, with Kylix discontinued, Embarcadero's framework for cross-platform development is FireMonkey.
Features
Kylix supports application programming using Object Pascal and C++, and is particularly suited to the development of command line utilities and (especially) GUI applications, but not well suited to low-level programming, such as the development of device drivers or kernel modules.
Though it interacts poorly with many Linux window managers, the IDE is basically the Delphi 5 IDE running on top of Wine, with a fast native code compiler, and tools for code navigation, auto-completion, parameter-name tooltips, and so on. The debugger is capable, but very slow to load, and can crash the whole IDE.
Kylix features CLX, a Linux version of Borland's VCL [Visual Component Library], which is (mostly) a component-based control library, not unlike Visual Basic or .NET's WinForms. Like other component-oriented libraries, CLX contains both visual components (such as buttons and panels) and non-visual components (such as timers). The IDE makes it easy to select components and place them on a form, editing properties and event handlers with an "Object Inspector".
Delphi's VCL is an object-oriented wrapper over raw Win32 controls that maps Win32 messages and APIs to properties and events, and is thus significantly easier to use than the raw API. As such, VCL is tightly bound to Windows, whereas Kylix's CLX is built on top of Trolltech's Qt library. CLX is not 100% compatible with VCL, and most Delphi programs require some effort to port to Kylix, even if they stick to the Borland libraries and avoid any direct OS calls. However, Qt is a portable library and, starting with Delphi 6, Borland provided CLX on Windows as well, providing a measure of back-portability.
History
On September 28, 1999, Inprise Corporation announced its development of a high-performance Linux application development environment that would support C, C++, and Delphi development, code-named "Kylix", with the release date set for 2000.
On March 24, 2000, Inprise/Borland Corporation hosted more than 200 third-party authors, consultants, trainers and tool and component vendors for the first in a series of worldwide events designed to prepare third party products and services for Kylix.
On March 7, 2001, Borland Software Corporation announced the release of Borland Kylix, after it had been offered to U.S. customers of Dell Precision 220, 420 and 620 Workstations beginning in February 2001.
On October 23, 2001, Borland Software Corporation announced the release of Borland Kylix 2.
On August 13, 2002, Borland Software Corporation announced the release of Borland Kylix 3.
In 2005, Borland reportedly moved to discontinue Kylix development.
Danny Thorpe seems to have been largely responsible for getting Borland to fund a Linux version of Delphi, and he did a lot of the work necessary to make the Delphi compiler produce Linux executables. While both Delphi and Kylix run on 32-bit Intel processors, Linux uses different register conventions than Windows and, of course, the executable and library file formats are different; see DLL, EXE, ELF for details.
Legacy
In 2009 Embarcadero posted the then-current Delphi and C++ Builder roadmap. As part of project Delphi "X", cross-compilation for Mac and Linux was planned.
Embarcadero planned to release a new version of Kylix without backward compatibility, though it would not carry the Kylix name. It was to be a part of Delphi (and C++Builder), where one could code and compile in the Delphi Windows IDE and deploy to Linux. A C++Builder version was also to be available.
This roadmap item remained for a couple of versions as a point for "future versions" but disappeared from roadmaps in the XE3–4 timeframe. Parts of project "X" did go into production with XE2 and XE3, though for mobile targets and OS X.
On February 8, 2016, Embarcadero Technologies, Inc. announced an updated roadmap indicating Linux server support in the upcoming RAD Studio 10.2 (code name "Godzilla") development track, also known as the Fall release. Linux desktop support was not mentioned. On March 22, 2017, Embarcadero Technologies, Inc. announced the release of RAD Studio 10.2.
See also
Borland Delphi
Free Pascal
Lazarus
Object Pascal
Embarcadero Technologies
References
External links
Borland Kylix Borland Software Corporation
Darren Kosinski. How Borland embedded Mozilla in Kylix 2 Embarcadero Technologies, Inc.
Borland software
Pascal (programming language) compilers
Linux integrated development environments
Software derived from or incorporating Wine
User interface builders
Linux-only proprietary software
Proprietary commercial software for Linux
|
24885582
|
https://en.wikipedia.org/wiki/Route%20Reference%20Computer
|
Route Reference Computer
|
Ferranti Canada's Route Reference Computer was the first computerized mail sorter system, delivered to the Canadian Post Office in January 1957. In spite of a promising start and a great deal of international attention, spiraling costs and a change in government led to the project being canceled later that year. Technical developments pioneered for the Route Reference Computer were put to good use by Ferranti in several projects that followed over the next decade.
History
Sorting problems
In the immediate post-war era, Canada experienced explosive growth in urban population as veterans returning from World War II moved into the cities looking for work in the newly industrialized country. This created logjams at mail routing offices that handled the mail for what used to be much smaller cities. Whereas the formerly rural population spread out the sorting and delivery of mail, now sixty percent of all the mail was being sorted at only ten processing stations, leading to lengthy delays and complaints that reached all the way to the House of Commons.
At the time, a mail sorter could be expected to sort mail into one of about two dozen "pigeon holes", small bins that collected all of the mail being delivered to a particular mail route. The sorter had to memorize addresses and the routes that served them, reading the address off a letter and placing it into the correct pigeon hole. In a small town each pigeon hole could represent the mail carried by a single deliveryman, and each sorter could remember the streets and sort mail for any of these routes. But for mail that was being delivered across larger areas, the sorting had to be broken into a hierarchy. A receiving station in Alberta routing a letter to Ontario would sort it into the Ontario stack. The mail would then be received in Ontario and sorted at a distribution center to stacks for city or towns. If the city was large enough, it might have to be sorted several more times before it reached an individual carrier.
During the 1940s the Post Office Department had introduced "postal zones" in certain cities to help spread out sorting into regional offices. For instance, as of 1943 Toronto was divided into 14 zones. Letters with zones could be routed directly to the regional sorting office, skipping one sorting step and speeding the delivery of the mail. Using the zones for addressing was not mandatory and was up to the sender to include this if they knew it, and the Post Office urged users to add the new codes to their mail.
Automation
At the time, the primary constraint on the number of pigeon holes a sorter could serve was the length of the human arm, which limited the stack of holes to a cabinet about 4 feet on a side. A number of companies sold sorting equipment that overcame this by moving the mail on a conveyor to a large array of bins. One of the most widely used at that time was the Transorma, which supported up to 5 sorters at a time and sorted to as many as 300 destination bins. In practice, the Transorma simply changed the limiting problem: while the number of bins was now essentially unlimited, there was no way the sorters could be expected to remember so many routes. The limitation changed from physical to mental.
Convinced that automation was the proper solution to the routing problem, in 1951 O.D. Lewis at Post Office headquarters in Ottawa started looking for ways to solve the memory limitation. Although Lewis did not have a technical background, he was aware of the IBM systems being used for tallying pencil-marked punched cards. He suggested that a similar system could be used for sorting mail, but a better solution for printing the routing information would be to use "a code of vertical bars on the back of the letter. Or, if a virtually colourless conductive marking fluid could be developed, then the front cover could be used."
He imagined a system where the address would simply be typed into the system and converted to barcode with no attempt by the operator to do any routing. A machine, with practically unlimited memory, would then read the route and sort it to the proper bin. Only the machine would have to know the routes, and with enough memory, any one of them could sort mail directly to its destination. Lewis noted that such a system would replace sorters with typists, which could be hired in great numbers from existing typing pools.
Deputy Postmaster General William Turnbull, under pressure from the seated government to improve postal service, turned to Lewis' ideas. In 1952 Turnbull and Lewis started looking around the industry to find systems that might fill their needs, but came up empty handed. Although there were a wide variety of patents that had been filed for such systems, none had been turned into working machinery. They approached the National Research Council (NRC) for help, but found a similar lack of ideas there. Failing to find a machine that was immediately available, they installed a Transorma at their new sorting office in Peterborough, Ontario, as an interim measure. It started operations in 1955 and ran until 1963.
Maurice Moise Levy had recently left the Defence Research Board to set up a Canadian subsidiary of ITT Corporation known as FEMCO, short for "Federal Electric Manufacturing Co." Turnbull met with Levy in April 1952 and asked him whether a sorting machine was possible; Levy immediately answered "yes". Levy followed this up with a proposal for a $100,000 contract for detailed engineering development. After the NRC examined the proposal and judged that it seemed possible, Turnbull pressed for development of the system. Under further pressure from the opposition, and with problems staffing the Toronto office, Postmaster General Alcide Côté announced the project in July 1952.
Electronic Information Handling System
As chance would have it, Levy had recently been fired by ITT and was hired by Turnbull. He set up the small in-house Electronics Laboratory with the promise of having a prototype machine ready for testing in three years. In early 1953 he visited companies looking for potential development partners, and through this process he met with Arther Porter, head of R&D at Ferranti Canada.
At the time, Ferranti was in the midst of developing the DATAR system for the Royal Canadian Navy. DATAR was a vacuum tube-based drum memory computer that stored and collected data for display. Radar and sonar operators on any of the ships in a convoy could send contact reports to DATAR using a trackball-equipped display that sent the data over a UHF PCM radio link. DATAR stored the data on the drum and periodically sent out the complete dataset to the ships, which plotted them on local displays, rotated and scaled for that ship's position in the convoy. The result was a single unified picture of the entire battlefield that could be seen on any of the ships, even those without direct contact with the targets.
Porter suggested using the DATAR computer design as the basis for a sorting system. Following Lewis' suggestion, a new reader would sort the mail on the basis of the pattern of stripes on the letter provided by an operator who simply typed in the address without attempting to route it. Ferranti suggested a fluorescent ink instead of a conductive one. Routing information would be placed on the magnetic drum, which could store thousands of routes and could be easily changed on demand. Levy, however, was interested in using an optical memory system being developed at IBM by a team including Louis Ridenour (see Automatic Language Translator for details) for storage of the routing information. Turnbull overruled Levy, and on 10 August 1954 he signed a contract with Ferranti for the Electronic Information Handling System using a drum memory.
In February 1955 Levy announced the system to the world at a conference in the U.S., claiming that it was able to process 200,000 letters per hour. For comparison, the largest Transorma systems could handle about 15,000 letters an hour. Although the computer system did appear to be able to meet this claim, there were serious problems with the non-computer portions of the project.
Route Reference Computer
Levy and Turnbull pressed for development of a production system, while Porter suggested they move to a transistorized version of the computer. Porter had made the same proposal to the Navy in order to cure the size and reliability problems they were having with the tube-based DATAR, and had signed a contract for a transistorized DATAR in early 1955. Since the Navy was paying for much of the development of the circuitry, the new machine would be inexpensive to develop. Porter offered a $65,000 contract for the new computer, known as the Route Reference Computer, which Turnbull signed in August 1955.
Ferranti had based both proposals on Philco's SB-100 transistor and their Transac logic circuit design. In production both proved to be less developed than hoped. The SB-100 was unreliable, and even working versions varied so widely in performance that the Transac logic circuits were unusable. Making matters worse, in late 1955 the Navy was forced to cancel development of the transistorized DATAR, placing the entire development cost on the Post Office budget. Ferranti burned through the initial $65,000 by early 1956, and several additional rounds of funding followed. Since the Post Office had no other plans on the books to address their problems, these were always forthcoming.
By August 1956 the project was three times its original budget, and when Turnbull demanded an update, Ferranti finally told Levy about the problems they were having with the Transac circuitry and stated they had been forced to abandon it and develop their own. Their new design worked, but the equivalent circuits were larger, which caused problems in fitting them into the original chassis. Levy, reporting back, was admonished by Turnbull, who was under increasing pressure to deliver the system. That month, Progressive Conservative postmaster critic William McLean Hamilton pressed for an update on "this million dollar monster", and was given an end-of-year date that was also missed.
The machine was finally delivered in January 1957, and Turnbull was able to display it in working fashion that summer when the Universal Postal Union held its Congress meeting in Ottawa, the first in Canada. Interest was high, prompting postmasters from England and Germany to visit Ottawa to see the system, along with a similar visit by several U.S. Congressmen. Hopes of international sales were dimmed when the Congressmen returned to Washington and quickly arranged $5 million in funding for local development of a similar system. Burroughs Corporation won a development contract the next year, emerging as the Multiple Position Letter Sorting Machine in the early 1960s.
By this point the budget for development had reached $2.5 million. During the 1957 federal election the Progressive Conservative Party of Canada ran a campaign aimed at what they characterized as Louis St. Laurent's out-of-control spending. Nevertheless, when Hamilton took over the role of Postmaster General in August 1957, instead of canceling the project he pressed Turnbull to install a production system as quickly as possible. Turnbull stated that they could have a system installed within six months, and Hamilton agreed to continue funding the project, but noted that he would accept no further delays.
Turnbull's estimate proved overly optimistic, and development of the mechanical portions of the system dragged on until further funding was curtailed and Levy's Electronics Laboratory was finally shut down. Turnbull quit the Deputy position in 1958. Their initial failure using automation slowed the adoption of newer systems, and Canada was one of the last major western nations to introduce Postal Codes, which didn't appear until the 1970s.
Success through failure
Although the mail sorting machine was eventually broken up for scrap, it was highly influential outside of Canada. Lewis' original suggestion that some sort of invisible or see-through ink be used to store routing information on the front face of the letters is now practically universal, as is the basic workflow of the address being converted to bar code form as soon as possible by typists and then sent into automated machinery for actual sorting. Use of bar-coded ZIP codes printed directly at the sending point when using postage meters became mandatory in the U.S. in 1973. During the 1960s the use of optical character readers replaced typists for letters with typewritten addresses, and in the 1990s, handwritten ones as well.
Ferranti prospered from the development effort as they adapted their new transistorized circuit design for a series of follow-on projects. Shortly after the Route Reference Computer was delivered, they were contacted by the Federal Reserve Bank to develop a similar system for check sorting, which proved very successful. Ferranti later used the same basic system as the basis of ReserVec, a computer reservations system built for Trans Canada Airlines (today's Air Canada) that started full operation in October 1961, beating the more famous SABRE. The basic ReserVec design would later be generalized into the Ferranti-Packard 6000 mainframe business computers, whose design became the basis for the ICT 1900 series of machines during the 1960s.
See also
Transorma
Multiple Position Letter Sorting Machine
References
Citations
Bibliography
John Vardalas, "The Computer Revolution in Canada: Building National Technological Competence", MIT Press, 2001.
Norman Ball and John Vardalas, "Ferranti-Packard: Pioneers in Canadian Electrical Manufacturing", McGill-Queen's Press, 1994.
David Boslaugh, "When Computers Went to Sea", Wiley, 2003.
Alan Dornian, "ReserVec: Trans-Canada Airlines' Computerized Reservation System", IEEE Annals of the History of Computing, Volume 16 Number 2 (1994), pp. 31–42
Further reading
Ferranti's system received widespread press reporting in the late 1950s. Examples include:
Radio-Electronics, Volume 28 (1957), pg. 22
Journal of the Franklin Institute, Volume 265 (1958), pg. 482
Automation, Volume 5 (1958), pg. 12
Science News, Volume 73-74 (1958), pg. 216
A description of the end-to-end process of sorting and delivering the mail can be found in:
Jeff Blyskal and Marie Hodge, "Why Your Mail is so Slow", New York Magazine, 9 November 1987, pg. 42–55
Ferranti computers
Transistorized computers
Rod Dedeaux
Raoul Martial "Rod" Dedeaux (February 17, 1914 – January 5, 2006) was an American college baseball coach who compiled what is widely recognized as among the greatest records of any coach in the sport's amateur history.
Dedeaux was the head baseball coach at the University of Southern California (USC) in Los Angeles for 45 seasons, and retired at age 72 in 1986. His teams won 11 national titles (College World Series), including a record five straight (1970–1974), and 28 conference championships. Dedeaux was named Coach of the Year six times by the Collegiate Baseball Coaches Association and was inducted into its Hall of Fame in 1970. He was named "Coach of the Century" by Collegiate Baseball magazine, and was one of the ten initial inductees to the College Baseball Hall of Fame.
Early life
Born in New Orleans, Louisiana, Dedeaux moved to Los Angeles and graduated from Hollywood High School in 1931. He played baseball at the University of Southern California for three seasons. Dedeaux then played professional baseball briefly in 1935, appearing in two games as a shortstop for the Brooklyn Dodgers late in the season. The following year while playing for Dayton in the Mid-Atlantic League, he cracked a vertebra while swinging in cold weather, and his playing career ended. He then turned to coaching in the semi-pro and amateur ranks.
Career
Dedeaux invested $500 to start a trucking firm, Dart (Dedeaux Automotive Repair and Transit) Enterprises, which he built into a successful regional business. When his college coach, Sam Barry, entered the U.S. Navy during World War II, he recommended Dedeaux to take over the team for the war's duration. Upon Barry's return in 1946, they served as co-coaches, with Dedeaux running the team each year until Barry finished the basketball season. USC won its first national title in 1948 over a Yale team captained by first baseman George H. W. Bush. The finals were held at Hyames Field in Kalamazoo, Michigan, and were settled by a 9–2 win in the third and deciding game.
Following Barry's death in September 1950, Dedeaux became the sole coach and proceeded to build on the early success to establish the strongest program in collegiate baseball. Prior to his retirement in June 1986, Dedeaux's teams won ten additional College World Series titles in Omaha, including five consecutively (1970–74), six titles in seven years. No other coach won more than three titles until 1997.
At USC, Dedeaux coached dozens of future major leaguers, including Ron Fairly, Don Buford, Tom Seaver, Dave Kingman, Roy Smalley, Fred Lynn, Steve Kemp, Mark McGwire, and Randy Johnson. Throughout his USC career, he accepted a nominal salary of just $1 per year, as his trucking business supplied him with a substantial income. He turned down numerous offers of major league coaching positions, including invitations from Los Angeles Dodgers manager Tommy Lasorda to join his staff, always rejecting them due to his preference for the college game and his desire to remain close to his family.
He retired as the winningest coach in college baseball history with a record of 1,332–571–11, and for the rest of his life remained an honored annual presence at the College World Series in Omaha. At the 1999 edition, the 50th played in Omaha, he was given a key to the city by the mayor and a one-minute standing ovation by the fans at Rosenblatt Stadium. He was inducted into the American Baseball Coaches Association's Hall of Fame in 1970, and in 1999 was named the Coach of the Century by Collegiate Baseball magazine.
USC played its home games at Bovard Field through 1973, and Dedeaux became known as "The Houdini of Bovard" for the come-from-behind home-field wins by the Trojans. A new baseball field named Dedeaux Field opened in 1974, named in honor of the active head coach.
Olympics
Dedeaux was the head coach of the United States baseball teams at the 1964 Summer Olympics in Tokyo and the 1984 Summer Olympics in Los Angeles, where baseball was contested both times as a demonstration sport. The 1964 team played one game as part of the Olympic program, defeating a Japanese amateur all-star team, while the 1984 team finished second in a field of eight teams, winning its first four games and losing to Japan in the final game of the tournament.
Films
Dedeaux also served as the baseball coach and consultant for actors and ballplayers on the 1989 film Field of Dreams. While Dedeaux was critical of the "phoniness that was in baseball movies," an opinion which he acquired while working as an extra in the 1948 film The Babe Ruth Story, he accepted the task after reading the original novel Shoeless Joe, and brought Buford along to help him coach the cast. Phil Alden Robinson, who directed the film, said the following about Dedeaux:
All of the ballplayers in the movie were prepped for the film by Rod Dedeaux. He coached at USC for many years, and is a wonderful man, very full of life, energetic, very supportive, just really was very giving of himself and cheerful all the time, was a great spirit to have around. And one day, we were in between setups and I said, 'Hey, coach, what position did you play?' He said, 'I was a shortstop.' I said, 'Really, could you — were you good?' He got very quiet, and he said, 'I could field the ball.' I said, 'Could you hit?' He said, 'I could hit the ball.' And he was strangely quiet. And I said to him, 'Well, how come you never played in the majors?' And he said, 'I did.' I said, 'Really?' [Dedeaux said] 'Yes, in 1930-something.' I forget what year he said. He was the starting shortstop for the Brooklyn Dodgers. He played one game, broke his back, and that was the end of his career. And I just blanched. I said, 'My God, you're Doc Graham.' He said, 'That's right.' And I said, 'Do you ever think about, "gee, the career I might've had."' And he said, 'Every day.' He said it very quietly. It was very out of character for him, and I was so touched by that. And I did look him up in the Baseball Encyclopedia: He did go, I think, 1-for-4 with an RBI. That was his lifetime stats. So having him be the man who trained all these fellows, including the kid who plays Doc Graham, was very meaningful to me, and I know it was to him, too. It was great to have him around. I think about that often, about what that must have been like, to be good enough to start with a Major League team, and for one unlucky moment, not be able to do — the rest of your life takes another turn. What he did with that is, he put all of that emotion — which could have gone into bitterness or regret — into being a phenomenal coach. He sent more people to the majors than, I think, anybody else in college history. He's an amazing man.
Personal
Dedeaux was married to the former Helen L. Jones (1915–2007) for 66 years, and they had four children.
Death and legacy
Dedeaux died in early 2006 at age 91 at Glendale Adventist Medical Center in Glendale, of complications from a stroke five weeks earlier. Six months later on July 4, he was one of ten in the first class inducted into the College Baseball Hall of Fame. Dedeaux was also inducted in the inaugural class of the Omaha College Baseball Hall of Fame in 2013, and a statue of him was unveiled at Dedeaux Field on the USC campus in 2014.
Dedeaux was inducted into the Baseball Reliquary's Shrine of the Eternals in 2005.
Dedeaux and his wife Helen are buried in Los Angeles at Forest Lawn Memorial Park in Hollywood Hills.
Head coaching record
See also
List of college baseball coaches with 1,100 wins
References
External links
College Baseball Hall of Fame
1914 births
2006 deaths
Baseball players from Los Angeles
Brooklyn Dodgers players
Burials at Forest Lawn Memorial Park (Hollywood Hills)
National College Baseball Hall of Fame inductees
Dayton Ducks players
Hazleton Mountaineers players
Hollywood Stars players
Major League Baseball shortstops
New York Yankees scouts
Olympic baseball managers
San Diego Padres (minor league) players
Tacoma Tigers players
USC Trojans baseball coaches
USC Trojans baseball players
Baseball players from New Orleans
Growth hacking
Growth hacking is a subfield of marketing focused on the rapid growth of a company. It is referred to as both a process and a set of cross-disciplinary (digital) skills. The goal is to regularly conduct A/B tests that improve the customer journey, then replicate and scale the ideas that work and modify or abandon the ones that don't, before investing significant resources. Growth hacking originated among early-stage startups that need rapid growth on tight budgets and short timelines, and has since reached larger corporations.
A growth hacking team is made up of marketers, developers, engineers and product managers that specifically focus on building and engaging the user base of a business. Growth hacking is not just a process for marketers. It can be applied to product development and to the continuous improvement of products as well as to growing an existing customer base. As such, it’s equally useful to everyone from product developers, to engineers, to designers, to salespeople, to managers.
Competences
Those who specialise in growth hacking use various types of marketing and product iterations to rapidly test persuasive copy, email marketing, SEO and viral strategies, among other tools and techniques, with a goal of increasing conversion rates and achieving rapid growth of the user base. Some consider growth hacking a part of the online marketing ecosystem, as in many cases growth hackers are using techniques such as search engine optimization, website analytics, content marketing and A/B testing. On the other hand, not all marketers have all the data and technical skills required by a growth hacker, therefore a separate name for this field is applicable.
History
Sean Ellis coined the term "growth hacker" in a 2010 blog post, in which he defined a growth hacker as "a person whose true north is growth. Everything they do is scrutinized by its potential impact on scalable growth." Andrew Chen introduced the term to a wider audience in a blog post titled "Growth Hacker is the new VP Marketing", in which he defined the term and used the short-term vacation rental platform Airbnb's integration of Craigslist as an example. He wrote that growth hackers "are a hybrid of marketer and coder, one who looks at the traditional question of 'How do I get customers for my product?' and answers with A/B tests, landing pages, viral factor, email deliverability, and Open Graph." In the book "Growth Hacking", Chad Riddersen and Raymond Fong define a Growth Hacker as "a highly resourceful and creative marketer singularly focused on high leverage growth".
The second annual (2013) "Growth Hackers Conference" was held in San Francisco set up by Gagan Biyani. It featured growth hackers from LinkedIn, Twitter, and YouTube among others.
In 2015, Sean Ellis and Everette Taylor created GrowthHackers - the largest website community dedicated to growth hacking and now host the annual GrowthHackers Conference.
Methods
Growth hackers compensate for a startup's typical lack of money and marketing experience by approaching marketing with a focus on innovation, scalability, and user connectivity. Growth hacking does not, however, separate product design and product effectiveness from marketing. Growth hackers build the product's potential growth, including user acquisition, on-boarding, monetization, retention, and virality, into the product itself. Growth hacking is about intentionality and efficiency, though there is always a chance of hitting on something huge and sparking a viral campaign. Fast Company used Twitter's "Suggested Users List" as an example: "This was Twitter's real secret: It built marketing into the product rather than building infrastructure to do a lot of marketing." However, growth hacking isn't always free: TechCrunch shared several nearly free growth hacks, explaining that growth hacking is effective marketing and not mythical marketing pixie dust.
The heart of growth hacking is the relentless focus on growth as the only metric that truly matters. Mark Zuckerberg had this mindset while growing Facebook. While the exact methods vary from company to company and from one industry to the next, the common denominator is always growth. Companies that have successfully "growth hacked" usually have a viral loop naturally built into their onboarding process. New customers typically hear about the product or service through their network and by using the product or service, share it with their connections in turn. This loop of awareness, use, and sharing can result in exponential growth for the company.
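The strength of such a viral loop is often summarized by a viral coefficient, or "K-factor": the average number of invitations each user sends, multiplied by the rate at which those invitations convert into new users. The following minimal sketch uses made-up figures rather than data from any real company, and simply shows why a coefficient above 1 produces the exponential growth described above:

    # Toy viral-loop model: every cohort of users recruits the next one.
    # K = invites_per_user * conversion_rate; K > 1 means each cohort
    # is larger than the one before it, i.e. exponential growth.
    def simulate_viral_loop(seed_users, invites_per_user, conversion_rate, cycles):
        k = invites_per_user * conversion_rate
        total, cohort = seed_users, seed_users
        for _ in range(cycles):
            cohort = cohort * k   # new users recruited by the previous cohort
            total += cohort
        return round(total)

    # Hypothetical numbers: 1,000 seed users, 5 invites each, 25% conversion (K = 1.25).
    print(simulate_viral_loop(1000, 5, 0.25, 10))  # about 42,600 users after 10 cycles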
Growth hacking frames the user acquisition process through the "Pirate Funnel" metaphor (in short, new users flow through a six-stage funnel: awareness, acquisition, activation, retention, revenue, referral), which takes its name from the abbreviation of those stages, AAARRR. Rapidly optimizing this process is a core goal of growth hacking, since making each stage of the funnel more efficient will increase the number of users in the most advantageous stages of the funnel.
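As an illustration of what optimizing the funnel means in practice, the sketch below computes stage-to-stage conversion rates from hypothetical counts (the numbers are invented for illustration; only the stage names come from the AAARRR model above). The stage with the worst rate is where a growth team would typically focus its next experiments:

    # Hypothetical pirate-funnel measurements: users remaining at each stage.
    funnel = {
        "awareness":   100_000,
        "acquisition":  20_000,
        "activation":    8_000,
        "retention":     4_000,
        "revenue":       1_200,
        "referral":        300,
    }

    # Stage-to-stage conversion rates show where the funnel leaks the most.
    stages = list(funnel)
    for prev, curr in zip(stages, stages[1:]):
        print(f"{prev} -> {curr}: {funnel[curr] / funnel[prev]:.1%}")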
Twitter, Facebook, Dropbox, Pinterest, YouTube, Groupon, Udemy, Instagram and Google are all companies that used and still use growth hacking techniques to build brands and improve profits.
Examples of "Growth Hacks"
Below are some of the most well-known examples of growth hacks. Growth hacking is often seen as merely repeating these hacks, but the "hacks" are only the result of a repeatable growth hacking process, which all growth hackers use as a way of working. Some of the most famous examples include:
An early example of "growth hacking" was Hotmail's inclusion of "PS I Love You" with a link for others to get the free online mail service. Another example was the offer of more storage by Dropbox to users who referred their friends.
Online worldwide independent lodging company Airbnb is an example of growth hacking by coupling technology and ingenuity. Airbnb realized it could essentially piggyback on the scale of Craigslist.org, tapping into both its user base and its website, by adding an automated listing generator with a feature called "Post to Craigslist".
References
Neologisms
Promotion and marketing communications
Digital marketing
Sound Forge
Sound Forge (formerly known as Sonic Foundry Sound Forge, and later as Sony Sound Forge) is a digital audio editing suite by Magix Software GmbH, which is aimed at the professional and semi-professional markets. There are two versions of Sound Forge: Sound Forge Pro 12 released in April 2018 and Sound Forge Audio Studio 13 (formerly known as Sonic Foundry's Sound Forge LE) released in January 2019. Both are well known digital audio editors and offer recording, audio editing, audio mastering and processing.
In 2003, Sonic Foundry, the former parent company of Sound Forge, faced losses and tough competition from much larger companies; and, as a result, agreed to sell its desktop audio and music production product family to Sony Pictures Digital for $18 million. The software initially had Windows 3.x support, but after version 3.0 all support for 16-bit Windows was dropped. Additionally, Windows 95 support was dropped after Sound Forge 5.0.
On May 20, 2016, Sony announced that it would be selling the bulk of its creative software suite, including Sound Forge Pro, to Magix GmbH & Co. Magix announced via Facebook that its first new version of Sound Forge Audio Studio (Sound Forge Audio Studio 12) was released in August 2017.
Features
Multi-channel or multitrack recording
Voice activity detection using artificial intelligence
Disc Description Protocol export
High resolution audio support: 24-bit, 32-bit, 64-bit (IEEE float), 192 kHz
Video support including AVI, WMV, and MPEG-1 and MPEG-2 (both PAL and NTSC) for use in frame by frame synchronization of audio and video
Real-time sample level wave editor
Ultra-high fidelity
Support for a wide variety of file formats: DSF (DSD), AA3/OMA (ATRAC), GIG (GigaSampler instrument), IVC (Intervoice), MP4 (including Apple Lossless), MPEG-2 transport stream and PCA (Sony Perfect Clarity Audio). For working with audio-for-video, Pro 12 includes support for the AVI, WMV, MPEG-1 and MPEG-2 (PAL or NTSC) video file formats
DirectX and VST3 plugin support. Version 12 includes a vinyl restoration plug-in and Mastering Effects Bundle, powered by IZotope
Floating Plug-in Chain window for non-destructive effects processing
CD Architect 5.2 software that allows Disk-At-Once (DAO) CD burning
Batch conversion functionality
Spectrum analysis tools
White, pink, brown and filtered noise generators
DTMF/MF tone synthesis
External monitor support for DV and FireWire (IEEE 1394) devices
Supported formats
Macromedia Flash (SWF) format - open only
RealMedia 9 (RealAudio and RealVideo) - export only
Windows Media 9 Series (WMA and WMV) (i)
Microsoft Video for Windows (AVI) (i)
AIFF (AIFF, AIF, SND)
MPEG-1 and MPEG-2
MPEG-1 Layer 3 (MP3)
Ogg Vorbis (OGG)
Macintosh AIFF
NeXT/Sun (AU)
Sound Designer (DIG)
Intervoice (IVC)
Sony Perfect Clarity Audio (PCA)
Sony Media Wave 64 (W64) (i)
Sound Forge Project Files (FRG)
Dialogic (VOX)
Microsoft Wave (WAV)
ATRAC Audio (AA3, OMA) (i)
CD Audio (CDA)
Dolby Digital AC-3 studio - save only (i)
Raw Audio (RAW) (i)
Free Lossless Audio Codec (FLAC)
(i): Supported multichannel format
See also
ACID Pro
Samplitude
Audacity
Ardour
FL Studio
Steinberg Cubase
Pro Tools
REAPER
Comparison of multitrack recording software
References
External links
Sound Forge Product Family
Sound Forge Pro
Sound Forge Audio Studio
Sonic Foundry website
Sony press release
Digital audio workstation software
Soundtrack creation software
Magix software
2018 USC Trojans football team
The 2018 USC Trojans football team represented the University of Southern California in the 2018 NCAA Division I FBS football season. They played their home games at the Los Angeles Memorial Coliseum and competed as members of the South Division of the Pac-12 Conference. They were led by third-year head coach Clay Helton.
Despite being ranked No. 15 in the AP Poll's preseason rankings, the Trojans finished the season 5–7, the program's first losing record since 2000. USC lost to both of its major rivals, UCLA and Notre Dame, in the same season for the first time since 2013, and it also lost to all other California Pac-12 schools (UCLA, California, and Stanford) in the same season for the first time since 1996. The team went 4–5 in Pac-12 play, tying Arizona for third place in the Pac-12 South Division.
On November 25, USC athletic director Lynn Swann announced that head coach Clay Helton would return in 2019 despite the disappointing season.
Previous season
The Trojans finished the 2017 season 11–3, 8–1 in Pac-12 play to be champions of the South Division. They represented the South Division in the Pac-12 Championship Game where they defeated Stanford to become Pac-12 Champions. They were invited to play in the Cotton Bowl where they lost 7–24 to Ohio State.
Personnel
Coaching staff
Roster
Returning starters
USC returns 31 starters in 2018, including 11 on offense, 13 on defense, and 5 on special teams.
Key departures include Sam Darnold (QB – 14 games), Ronald Jones II (TB – 13 games), Deontay Burnett (WR – 12 games), Steven Mitchell (WR – 7 games), Jalen Greene (WR – 5 games), Viane Talamaivao (OG – 5 games), Nico Falah (C – 14 games), Rasheem Green (DE/DT – 14 games), Josh Fatu (DT – 12 games), Uchenna Nwosu (OLB – 14 games), Jack Jones (CB – 13 games), Chris Hawkins (S – 14 games), Ykili Ross (S – 2 games).
Other departures include James Toland IV (TB), Matt Lopes (S).
Offense (11)
Defense (13)
Special teams (5)
Transfers
The Trojans lost 5 players due to transfer.
Depth chart
Depth Chart 2018
True Freshman
Double Position : *
Recruiting class
Scholarship distribution chart
* Former walk-on
– 85 scholarships permitted, 81 currently allotted to players
– 78 recruited players on scholarship (Three former walk-ons)
– OT Jalen McKenzie took an advanced scholarship or "blueshirt" limiting USC's class of 2018 to 24.
Projecting Scholarship Distribution 2018
2018 NFL Draft
NFL Combine
The official list of participants for the 2018 NFL Combine included USC football players WR Deontay Burnett, QB Sam Darnold, DE Rasheem Green, TB Ronald Jones II, WR Steven Mitchell Jr. & OLB Uchenna Nwosu.
Team players drafted into the NFL
Schedule
Game summaries
UNLV
Stanford
Texas
Washington State
Arizona
Colorado
Utah
Arizona State
Oregon State
California
UCLA
Notre Dame
Rankings
Statistics
Preseason
Pac-12 Media Days
Pac-12 media days were held July 25, 2018 in Hollywood, California, with Clay Helton (head coach), Porter Gustin (OLB) and Cameron Smith (ILB) representing USC. The Pac-12 media poll was released with the Trojans predicted to win the Pac-12 South division title.
Awards and honors
Midseason award watch lists
Weekly awards
Major award semifinalists
Honors and Awards Source: 2018 Arizona Media Notes
Players drafted into the NFL
Notes
November 16, 2017 – USC's 2018 Football Schedule Announced.
December 1, 2017 – No. 10 USC Beats No. 12 Stanford, 31–28 for Pac-12 Title.
December 20, 2017 – USC Football Announces Early Signing Period 2018 Class.
December 22, 2017 – Juco Defensive Lineman Caleb Tremblay Signs With USC.
December 22, 2017 – No. 1 2019 QB recruit JT Daniels skipping senior year and enrolling at USC a year early.
December 29, 2017 – No. 5 Ohio State Tops No. 8 USC 24–7 in Cotton Bowl Classic.
December 29, 2017 – Defensive Back Talanoa Hufanga Signs With USC Football.
January 3, 2018 – Sam Darnold Declares for 2018 NFL Draft.
January 5, 2018 – USC Tailback Ronald Jones II Declares for NFL.
January 5, 2018 – Porter Gustin reportedly set to return to USC Football for 2018 season.
January 6, 2018 – Amon-Ra St. Brown commits to USC over Stanford, Notre Dame.
January 8, 2018 – USC WR Deontay Burnett declares for 2018 NFL Draft.
January 10, 2018 – Toa Lobendahn to return to USC Football for 2018 season.
January 10, 2018 – Cameron Smith announces return to USC Football for 2018.
January 11, 2018 – USC Football reportedly promotes Bryan Ellis to quarterbacks coach.
January 12, 2018 – Deland McCullough leaves USC Football for Kansas City Chiefs.
January 12, 2018 – USC DB Iman Marshall will return to USC for the 2018 season.
January 13, 2018 – USC DL Rasheem Green Declares for 2018 NFL Draft.
January 18, 2018 – Ronnie Lott added to College Football Playoff Selection Committee.
January 19, 2018 – Keary Colbert reportedly promoted to tight ends coach for USC Football.
January 20, 2018 – Tee Martin gets multi-year extension to stay with USC Football.
References
USC
USC Trojans football seasons
USC Trojans football
USC Trojans football
MASSIVE (software)
MASSIVE (Multiple Agent Simulation System in Virtual Environment) is a high-end computer animation and artificial intelligence software package used for generating crowd-related visual effects for film and television.
Overview
Massive is a software package developed by Stephen Regelous for the visual effects industry. Its flagship feature is the ability to quickly and easily create thousands (or up to millions with current advances in computer processing power) of agents that all act as individuals as opposed to content creators individually animating or programming the agents by hand. Through the use of fuzzy logic, the software enables every agent to respond individually to its surroundings, including other agents. These reactions affect the agent's behaviour, changing how they act by controlling pre-recorded animation clips, for example by blending between such clips, to create characters that move, act, and react realistically. These pre-recorded animation clips can come from motion-capture sessions, or can be hand-animated in other 3D animation software packages.
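Massive's brain files are proprietary, but the underlying idea of fuzzy-logic blending can be sketched in a few lines. In the toy example below, hypothetical membership functions convert one crisp input (distance to the nearest threat) into degrees of truth, which are then normalized into blend weights for two pre-recorded clips; none of the names come from the actual Massive API:

    # Toy fuzzy-logic controller: blend "walk" and "flee" animation clips
    # based on how close a threat is. Membership functions map the crisp
    # input to degrees of truth in [0, 1]; normalizing those degrees
    # yields the blend weights applied to the clips.

    def near(distance):   # fully "near" at 0 m, fading out by 20 m
        return max(0.0, min(1.0, (20.0 - distance) / 20.0))

    def far(distance):    # complement of "near"
        return 1.0 - near(distance)

    def blend_weights(distance):
        degrees = {"flee": near(distance), "walk": far(distance)}
        total = sum(degrees.values())
        return {clip: d / total for clip, d in degrees.items()}

    print(blend_weights(5.0))   # {'flee': 0.75, 'walk': 0.25}, i.e. mostly fleeing

Because the weights vary continuously with the input, an agent eases between behaviours instead of switching abruptly, which is part of what makes large simulated crowds look natural.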
In addition to the artificial intelligence abilities of Massive, there are numerous other features, including cloth simulation, rigid body dynamics and graphics processing unit (GPU) based hardware rendering. Massive Software has also created several pre-built agents ready to perform certain tasks, such as stadium crowd agents, rioting 'mayhem' agents and simple agents who walk around and talk to each other.
History
Massive was originally developed in Wellington, New Zealand. Peter Jackson, the director of the Lord of the Rings films (2001–2003), required software that allowed armies of hundreds of thousands of soldiers to fight, a problem that had not been solved in film-making before. Stephen Regelous created Massive to allow Weta Digital to generate many of the award-winning visual effects, particularly the battle sequences, for the Lord of the Rings films. Since then, it has developed into a complete product and has been licensed by a number of other visual effects houses.
In production
Massive has been used in many productions, both commercials and feature-length films, small-scale and large.
Some significant examples include:
The Lord of the Rings
Rise of the Planet of the Apes
Avatar
The Chronicles of Narnia: The Lion, the Witch and the Wardrobe
King Kong (Jackson, 2005)
Radiohead - Go To Sleep (Music Video)
Flags of our Fathers (besides battle and crowd scenes, even shots of seacraft crossing the Pacific were created with Massive)
Carlton Draught: Big Ad
Mountain, a television commercial for the PlayStation 2 console
I, Robot
Category 7: The End of the World
Blades of Glory
Eragon
The Mummy: Tomb of the Dragon Emperor
Happy Feet
300
The Ant Bully (the first film to use computer animated characters as Massive agents rather than motion capture. Also first to use facial animation within Massive)
Buffy ("Chosen")
Doctor Who ("Partners in Crime")
Changeling
Speed Racer (Car A.I. and crowds)
WALL-E
Up
Life of Pi (Both the flying fish and meerkat sequences were created with the help of Massive)
The Hobbit
Dawn of the Planet of the Apes
World War Z
Game of Thrones
Black Panther
Aquaman
Avengers: Endgame
See also
Crowd simulation
Multi-agent system
Fuzzy logic
Emergence
Lightwave 3D
Electric Image Animation System
Cinema 4D
Modo
Blender
External links
Official site of Weta Digital
3D graphics software
Animation software
Multi-agent systems
3D computer graphics software for Linux
Proprietary commercial software for Linux
Mission Command Training Program
Mission Command Training Program (MCTP – formerly the Battle Command Training Program), based at Fort Leavenworth, Kansas, is the U.S. Army's only worldwide deployable Combat Training Center. MCTP provides full spectrum operations training support for senior commanders and their staffs so they can be successful in any mission in any operational environment. Its Senior Mentors counsel and offer their experience to Army senior commanders, subordinate commanders and staff. Additionally, MCTP's professional observer-trainers assist units with objective feedback and suggestions for improvement.
MCTP Support to the Army
MCTP serves as an engine of change for implementing doctrine by providing feedback to the Army on future doctrine, unit organization and application of that doctrine. No other entity in the Army can incorporate division and corps headquarters into the same exercise. MCTP's annual collection of key observations made at brigades through Army service component command levels enables the Army's future force to grow and develop from lessons learned. During 2016, MCTP supported five corps and division level warfighter exercises, five ASCC exercises, and six National Guard brigade combat team warfighters. Each exercise generally requires a one-year planning cycle including multiple training and planning events to enable training in execution.
MCTP remains a premier Combat Training Center to train brigades, divisions, corps, and Army Service Component Command (ASCC) level headquarters on their mission essential tasks needed to support Unified Land Operations and train joint functions within select headquarters to support their role as a joint task force. These exercises are conducted in a distributed manner and consist of a multi-echelon, total Army force (AC, NG, RC) and stress SOF interdependence. MCTP provides a trained world-class opposing force consisting of Soldiers and contractors to portray a free-thinking, near peer, hybrid threat. MCTP features professional observers, coaches and trainers (OC/Ts) and senior mentors (retired 1-4 star general officers). This cadre enables staffs and commanders to train on their prospective METLs and assess their readiness. MCTP supports the collective training of Army units as directed by the Chief of Staff of the Army and scheduled by Forces Command in accordance with the Army Force Generation process at worldwide locations in order to train Leaders and provide commanders the opportunity to train on mission command in Unified Land Operations. MCTP personnel trained 16 general officers in 2016 across Army Service Component Commands, 10 divisions and 2 Expeditionary Sustainment Commands.
History
The Mission Command Training Program is the United States Army's capstone combat training center (CTC). MCTP started as the Battle Command Training Program (BCTP) in 1987. Its original goal was to improve battlefield command and control through stressful and realistic combined arms training in a combat environment. BCTP met this need while also providing Division and Corps computer-driven simulation training. Effective 10 May 2011, BCTP was officially re-designated the U.S. Army Mission Command Training Program (MCTP).
The command conducts or supports combined arms training that replicates joint-interagency-intergovernmental-multinational operations in a decisive action environment at worldwide locations for Brigades, Divisions, Corps, Army Service Combatant Commands (ASCCs), Joint Force Land Component Commands (JFLCCs), and Joint Task Forces (JTFs) in order to create training experiences that enable the Army's senior commanders to develop current, relevant, campaign-quality, joint and expeditionary mission command instincts and skills. Most, if not all, of today's general officers have participated in a MCTP exercise at some point in their careers.
From its inception, MCTP has featured key elements of the CTC training model such as a "free-thinking" opposing force (OPFOR), the use of experienced observer/trainers, advanced technology to gather data and a basic rotational sequence from choice of scenario through a warfighter exercise to an after action review. These elements were combined with innovations unique to MCTP such as computer-simulated battle action, mobile observer trainer teams, and senior mentors for unit commanders to eliminate the collective training gap at higher echelons for command and control training.
The Gulf War and the end of the Cold War prompted the first calls to widen the program's mission to address the dramatically altered world situation. From then on, MCTP's mission range expanded both in the levels of headquarters it exercised and in the levels of conflict it simulated. MCTP added two teams, Team C, in 1992, to provide Brigade-level training in the new Brigade Command and Battle Staff Training (BCBST) program, and Team D in 1993 to pick up the mission of joint training for Army units operating at the joint task force (JTF) or Army force (ARFOR) level.
After 9/11, MCTP established a special temporary mobile training team to conduct installation force protection seminars and readiness exercises. In preparation for Operation Iraqi Freedom, MCTP developed a special seminar series on counterinsurgency for all deploying Brigades. In 2008, MCTP continued to meet the needs of the Army with Teams Sierra and Foxtrot, which conducted seminars and embedded exercises for functional and multifunctional Brigades.
Initially composed of one exercise team, Team A, and a group of civilian contractors, MCTP executed its first warfighter exercise in January 1988 with the 9th Infantry Division (Motorized) at Fort Lewis, Washington. This validated the concept for a CTC collective training exercise for Divisions and Corps. Later in the year, MCTP established Team B to increase rotational capacity and the world-class opposing force. MCTP also became a separate unit under the operational control of what is now the Combined Arms Center, Training.
Currently, MCTP consists of eight operations groups (OPSGRPs). OPSGRPs Alpha and Delta are missioned to train ASCC, Corps and Division-level staffs; OPSGRP Charlie conducts Brigade warfighter exercises (BWFX); OPSGRPs Bravo and Foxtrot train functional and multi-functional Brigades; OPSGRP Juliet trains Special Forces units; OPSGRP Sierra trains sustainment Brigades; OPSGRP X-Ray is responsible for exercise planning, exercise control and scenario design.
The program has consistently shown its ability to simultaneously participate and contribute to current operations and adapt its training programs to provide better support. MCTP will remain an "engine of change" for the current and future Army.
MCTP was awarded The Army Superior Unit Award on 15 October 2009 for outstanding meritorious service from 1 January 2007 to 31 December 2008 and again on 5 May 2014 for outstanding meritorious service from October 2010 to 30 September 2011.
Organizational structure
Mission Command Training Program consists of eight operations groups and a supporting unit of the 505th Command and Control Wing, Detachment 1 (USAF). Each of the operations groups trains commanders and staff on effective integration of warfighter functions in a joint-interagency-multinational operating environment to achieve operational mission command. Operations groups OC/Ts provide high-quality academic seminars and formal after action reviews during the WFXs to improve the readiness and combat effectiveness of each training audience.
Operations Groups A and D: Deploys worldwide to conduct decisive action and theater specific training in unified land operations to support the readiness and combat effectiveness of Army Service Component Commands, Corps and Divisions.
Operations Groups B and F: Deploys worldwide to conduct decisive action and theater specific training in unified land operations in support of functional and multi-functional Brigades, such as aviation and military police Brigades, to improve their readiness and combat effectiveness.
Operations Group C: Deploys worldwide to conduct decisive action and theater-specific training in unified land operations in support of Army National Guard component Brigade combat teams and active component functional and multi-functional brigades in order to improve their readiness and combat effectiveness.
Operations Group J: Deploys worldwide to conduct decisive action and theater-specific training in unified land operations in support of Special Operations Forces with oversight of all Army special operations forces (ARSOF) including civil affairs, military information support operations (MISO) and interagency tactical assets. Observe, coach and train conventional force commanders and staffs on the integration, interoperability and interdependence with Special Operations Forces.
Operations Group S: Deploys worldwide to conduct decisive action and theater-specific training in unified land operations in support of sustainment brigades and expeditionary sustainment commands in order to improve their readiness and combat effectiveness.
Operations Group X: Responsible for the design, planning and control of each multi-echelon, distributed WFX that replicate a realistic, relevant and rigorous strategic environment for the conduct of unified land operations in support of Army senior mission commander training objectives. Leads MCTP's exercise planning process, including exercise life cycle (ELC) events, ensuring all aspects of exercise design are coordinated and synchronized within MCTP and with external training partners and training audiences.
505th Command and Control Wing, Detachment 1 (United States Air Force): Deploys worldwide to conduct decisive action and theater-specific training in support of the integration of airpower and application of joint firepower, air and space capabilities and doctrine, into unified land operations.
Observer, Coach, Trainers
MCTP provides senior mentors and observer coach/trainers during a WFX exercise for the following formations' commanders and staff: Corps, Division, Theater Sustainment Command, Expeditionary Sustainment Command, Functional/Multi-Functional Support Brigades, Special Forces Groups, and Sustainment Brigades. They play a critical role in providing feedback to the unit, informally, through everyday interactions and, formally, through mid and final after action reviews (AARs) plus the final exercise report (FER). These events give the training audience actions to consider for sustainment and improvement.
OC/Ts facilitate mission command training through 24-hour coverage for unit command groups, staff, and key leaders in their respective command posts, as well as staff/warfighting function and integrating cells throughout the WFX. OC/Ts cover the gamut of warfighting functions including mission command, movement and maneuver, fires, sustainment, protection and intelligence. Officer observer, coach, trainers (OC/Ts) are lieutenant colonels or senior majors who are branch qualified, Command and General Staff College graduates and have extensive field experience. Enlisted OC/Ts are sergeant first classes to sergeant majors who are either United States Sergeants Major Academy graduates or have attended the Battle Staff Course.
Observer, coach, trainers are personally selected by the MCTP commander and a Chief of Operations Group (COG). They are subject matter experts on doctrine and in their specific warfighting functions. They are also certified through a rigorous training program including providing feedback using the After-Action Review process. During a warfighter, they are located at unit command posts and tactical operating centers to observe the operations process. An assignment as an observer, coach/trainer (OC/T) is a rewarding and a recognized professionally broadening experience. There is no better place to truly understand how the Army fights at the Brigade and above levels.
The OC/T experience provides officers and NCOs with deeper substantive knowledge of their military profession, increases their proficiency in operational art and the practical application of doctrine, and exposes them to the challenges the Army could face in future conflicts. Officers and NCOs at MCTP are able to gain multiple careers worth of experience in a short time through observation of their training audiences. Furthermore, OC/Ts gain a unique perspective of the Army's trending challenges and their solutions. OC/Ts have the opportunity to help shape the way the Army will fight now and in the future. OC/Ts are not evaluators and, at the end of the day, OC/Ts are judged by what they impart on training units and how they have helped them grow warfighting skills and improve their readiness.
Senior Mentors
Senior Mentors mentor Corps, Division and Brigade commanders prior to and during warfighter exercises. They assist the commander prior to exercises with establishing training objectives, participate in mission command seminars, assist with development of the after action review, and provide feedback on significant observations and trends. They also participate in theater reconnaissance, provide feedback to Army senior leaders, and assist in future training and exercise development.
They provide expert knowledge in integrating Training and Doctrine Command (TRADOC), Army, and Department of Defense (DoD) policies and programs, with extensive background and experience in developing adaptive leaders. They are astute experts in the art and science of designing today's Army modular and future combat force while maximizing institutional learning and adaptation, and they review and integrate proposals to train and develop an innovative generating force that will shape and link it seamlessly to the operating force to maximize Army learning and adaptation.
They apply knowledge and experience of TRADOC, Army, and DoD programs to mentor general officers, senior leaders and staff members, and to analyze, research and integrate doctrinal information for Mission Command Training Program (MCTP) war-gaming exercises, warfighting courses, operational planning, tactical and operational exercises, and decision-making exercises for the commanding general Combined Arms Center (CAC), TRADOC, and other high-level Army and DoD personnel. This includes joint, combined, and allied exercises designed to prepare military leaders and units for combat operations.
Warfighter Exercises
The warfighter exercise is a conditions-based training event using TRADOC's Decisive Action Training Environment (DATE) for corps, divisions and brigades. The DATE is the common environment found in all combat training centers.
Each year, MCTP supports five multi-echelon (corps, division, and brigade) warfighter exercises, five Army Service Component Command exercises and six National Guard Brigade Combat Team warfighters. MCTP warfighter exercises can incorporate division and corps headquarters into the same exercise. These training experiences enable our Army's senior leaders the ability to develop current, campaign-quality, joint and expeditionary mission command instincts and skills.
These exercises are conducted in a distributed manner and consist of a multi-echelon, total Army force (Active Duty, Army National Guard and Army Reserve) and stress SOF interdependence at locations worldwide.
External links
Official Website, Mission Command Training Program
Official Facebook Page, Mission Command Training Program
References
1986 establishments in the United States
Fort Leavenworth
Military education and training in the United States
Twofish
In cryptography, Twofish is a symmetric key block cipher with a block size of 128 bits and key sizes up to 256 bits. It was one of the five finalists of the Advanced Encryption Standard contest, but it was not selected for standardization. Twofish is related to the earlier block cipher Blowfish.
Twofish's distinctive features are the use of pre-computed key-dependent S-boxes, and a relatively complex key schedule. One half of an n-bit key is used as the actual encryption key and the other half of the n-bit key is used to modify the encryption algorithm (key-dependent S-boxes). Twofish borrows some elements from other designs; for example, the pseudo-Hadamard transform (PHT) from the SAFER family of ciphers. Twofish has a Feistel structure like DES. Twofish also employs a Maximum Distance Separable matrix.
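The pseudo-Hadamard transform mentioned above is a simple, reversible mixing step on two 32-bit words: a' = a + b and b' = a + 2b, both taken modulo 2^32. A minimal Python sketch of just this transform (not of a full Twofish round) follows:

    MASK32 = 0xFFFFFFFF  # all arithmetic is modulo 2**32

    def pht(a, b):
        # Pseudo-Hadamard transform: a' = a + b, b' = a + 2b (mod 2**32).
        return (a + b) & MASK32, (a + 2 * b) & MASK32

    def pht_inverse(ap, bp):
        # Invert: b = b' - a', then a = a' - b (mod 2**32).
        b = (bp - ap) & MASK32
        a = (ap - b) & MASK32
        return a, b

    a, b = 0x01234567, 0x89ABCDEF
    assert pht_inverse(*pht(a, b)) == (a, b)  # the transform is invertible

Reversibility matters because decryption must undo every mixing step exactly; addition modulo 2^32 also mixes carries across bit positions, which a pure XOR would not.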
When it was introduced in 1998, Twofish was slightly slower than Rijndael (the chosen algorithm for Advanced Encryption Standard) for 128-bit keys, but somewhat faster for 256-bit keys. Since 2008, virtually all AMD and Intel processors have included hardware acceleration of the Rijndael algorithm via the AES instruction set; Rijndael implementations that use the instruction set are now orders of magnitude faster than (software) Twofish implementations.
Twofish was designed by Bruce Schneier, John Kelsey, Doug Whiting, David Wagner, Chris Hall, and Niels Ferguson: the "extended Twofish team" who met to perform further cryptanalysis of Twofish. Other AES contest entrants included Stefan Lucks, Tadayoshi Kohno, and Mike Stay.
The Twofish cipher has not been patented, and the reference implementation has been placed in the public domain. As a result, the Twofish algorithm is free for anyone to use without any restrictions whatsoever. It is one of a few ciphers included in the OpenPGP standard (RFC 4880). However, Twofish has seen less widespread usage than Blowfish, which has been available longer.
Performance
Performance was always an important factor in the design of Twofish. Twofish was designed to allow for several layers of performance trade-offs, depending on the importance of encryption speed, memory usage, hardware gate count, key setup and other parameters. This makes for a highly flexible algorithm, which can be implemented in a variety of applications.
There are multiple space–time tradeoffs that can be made, in software as well as in hardware, for Twofish. An example of such a tradeoff would be the precomputation of round subkeys or S-boxes, which can lead to speed increases of a factor of two or more. These come, however, at the cost of more RAM needed to store them.
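A minimal illustration of that kind of trade-off, using a stand-in function rather than the actual Twofish key schedule: precomputing a key-dependent byte-substitution table costs 256 bytes of RAM up front, after which each substitution is a single table lookup instead of a repeated computation.

    # Space-time trade-off sketch: spend 256 bytes of RAM on a table
    # so the per-byte work drops to one index operation.

    def keyed_substitute(byte, key_byte):
        # Stand-in for an expensive key-dependent function (illustrative only).
        x = byte ^ key_byte
        for _ in range(8):
            x = (x * 167 + 13) & 0xFF
        return x

    def build_table(key_byte):
        return bytes(keyed_substitute(b, key_byte) for b in range(256))

    table = build_table(0x5A)                    # one-time setup cost
    fast = [table[b] for b in b"hello world"]    # cheap lookups per byte
    slow = [keyed_substitute(b, 0x5A) for b in b"hello world"]
    assert fast == slow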
The estimates in the table below are all based on existing 0.35 μm CMOS technology.
Cryptanalysis
In 1999, Niels Ferguson published an impossible differential attack that breaks 6 rounds out of 16 of the 256-bit key version using 2^256 steps.
The best published cryptanalysis of the Twofish block cipher is a truncated differential cryptanalysis of the full 16-round version. The paper claims that the probability of truncated differentials is 2^−57.3 per block and that it will take roughly 2^51 chosen plaintexts (32 petabytes worth of data) to find a good pair of truncated differentials.
Bruce Schneier responded in a 2005 blog entry that this paper did not present a full cryptanalytic attack, but only some hypothesized differential characteristics: "But even from a theoretical perspective, Twofish isn't even remotely broken. There have been no extensions to these results since they were published in 2000."
See also
Threefish
Advanced Encryption Standard
Data Encryption Standard
References
Articles
External links
Twofish web page, with full specifications, free source code, and other Twofish resources by Bruce Schneier
256 bit ciphers – TWOFISH reference implementation and derived code
Products that Use Twofish by Bruce Schneier
Better algorithm: Rijndael or TwoFish? by sci.crypt
Standard Cryptographic Algorithm Naming: Twofish
Feistel ciphers
Free ciphers
Florian Müller (author)
Florian Müller (born 21 January 1970 in Augsburg, Germany) is an app developer and an intellectual property activist. He consulted for Microsoft and writes the FOSSPatents blog about patent and copyright issues. From 1985 to 1998, he was a computer magazine writer and consultant for companies, helping with collaborations between software companies. In 2004 he founded the NoSoftwarePatents campaign and in 2007 he provided some consultancy in relation to football policy.
Software industry and computer books
In 1985, Müller started writing articles for German computer magazines. A year later, at age 16, he became Germany's youngest computer book author.
From 1987 to 1998, he specialized in publishing and distribution partnerships between US and European software companies. He initiated and managed such alliances in various market segments, including productivity software, utility software, educational software, and computer games. As a consultant to and representative of Blizzard Entertainment, Müller was involved in its marketing campaigns.
In 1996, he co-founded an online gaming service named Rival Network, which in early 2000 was acquired by the Telefónica group. From 2001 to 2004, Müller advised the CEO of MySQL AB, developer of the namesake open-source database management software product.
Campaign against EU software patents
In 2004, Müller received the support of corporate sponsors 1&1, Red Hat and MySQL for launching NoSoftwarePatents.com, which opposed the European Commission's proposed directive on the patentability of computer-implemented inventions. Following several years of intensive lobbying by many parties, this proposed directive was rejected by the European Parliament on 6 July 2005, with 648 of the 680 votes cast in favour of rejection.
For his political activities, Müller received several awards in 2005. A leading publication for intellectual property lawyers, "Managing Intellectual Property", counted Müller – along with the Chinese vice premier Wu Yi – among the "top 50 most influential people in intellectual property" (renominated in 2006). IT-focused website Silicon.com listed him among the Silicon Agenda Setters. A jury of EU-focused weekly newspaper "European Voice" elected Müller as one of the "EV50 Europeans of the Year 2005", and handed him the "EU Campaigner of the Year 2005" award. Jointly with the FFII, Müller received the "CNET Networks UK Technology Award" in the "Outstanding Contribution to Software Development" category.
Football policy
After more than 20 years in the IT industry, Müller became involved with football (soccer) politics in 2007. He advised the Spanish football club Real Madrid with respect to a European Union policy-making initiative concerning professional sports.
Google
Oracle v. Google
In January 2011 Müller published an article suggesting that "evidence is mounting that different components of the Android mobile operating system may indeed violate copyrights of Sun Microsystems, a company Oracle acquired a year ago." and presented what he believed was copyright infringing material, an article which was heavily criticized by two technical bloggers. According to Ed Burnette, a ZDNet blogger, Google published those files on its web site to help developers debug and test their own code. ArsTechnica's Ryan Paul also said that these findings in the online codebase are also not evidence that copyright infringing code is distributed on Android handsets. Two days after his original assertions Müller claimed to have found the files in the official source availability packages of device makers Motorola, LG and Samsung. The lawsuit ended with both parties agreeing to zero dollars in statutory damages for a small amount of copied code, so that Oracle could appeal.
In April 2012, Müller said he had been hired by Oracle to consult on competition-related topics including FRAND licensing terms. In a court filing in the Oracle v. Google case, Oracle stated that it paid Florian Müller as a consultant. Müller said "In April, I proactively announced a broadly-focused consulting relationship with Oracle, six months after announcing a similar working relationship with Microsoft".
License violation accusation
Müller amplified a Huffington Post article by Edward Naughton, an intellectual property lawyer who had previously represented Microsoft, suggesting that Google likely violated the GPL by copying Linux header files. The accusation was dismissed by Linus Torvalds, the original author and chief architect of the Linux kernel.
Microsoft and Oracle consulting
After pressure to disclose from the free software community, Müller acknowledged that he consults for both Microsoft and Oracle.
References
External links
NoSoftwarePatents campaign
German video game designers
German lobbyists
1970 births
Living people
Writers from Augsburg
MountainsMap
Mountains is an image analysis and surface metrology software platform published by the company Digital Surf. Its core is micro-topography, the science of studying surface texture and form in 3D at the microscopic scale. The software is dedicated to profilometers, 3D light microscopes ("MountainsMap"), scanning electron microscopes ("MountainsSEM") and scanning probe microscopes ("MountainsSPIP").
Integration by instrument manufacturers
The editor's main distribution channel is OEM: MountainsMap is integrated by most profiler and microscope manufacturers, usually under their respective brands. It is sold, for instance, as:
Hitachi map 3D on Hitachi's scanning electron microscopes,
TopoMAPS on Thermo Fisher Scientific (FEI division) scanning electron microscopes,
TalyMap, TalyProfile, or TalyMap Contour on Taylor-Hobson's profilometers,
PicoImage on Keysight's AFMs,
HommelMap on Jenoptik's profilometers (Hommel-Etamic line of products),
MountainsMap - X on Nikon's microscopes,
Apex 2D or Apex 3D on KLA-Tencor's profilometers,
Leica Map on Leica's microscopes,
ConfoMap on Carl Zeiss' microscopes,
MCubeMap on Mitutoyo profilometers.
Vision 64 Map on Bruker optical profilometers
AttoMap on cathodoluminescence-analysis-dedicated scanning electron microscopes from AttoLight
SmileView Map on JEOL's scanning electron microscopes,
Compatibility
Mountains native file format is the SURF format (.SUR extension).
Mountains is compatible with most instruments of the market capable of supplying images or topography.
Mountains complies with the ISO 25178 standard on 3D surface texture evaluation and offers the profile and areal filters defined in ISO 16610.
The metrology reports are generated in proprietary format but can also be exported to PDF and RTF formats.
Mountains is available in English, Brazilian Portuguese, simplified Chinese, French, German, Italian, Japanese, Korean, Polish, Russian and Spanish.
Data types ("studiables") accepted
Vocabulary:
x, y and z refer to space coordinates, t to the time, and I to an intensity. z = f(x, y) means that z is a function of x and y, where x and y usually refer to space coordinates and z to a scalar.
In Mountains's vocabulary, these data types are referred to as "studiables".
Most studiables have a dynamic (time-series) equivalent, e.g., the surface studiable used to study topography has an associated studiable, Series of Surfaces, used to study the evolution of topography (e.g., heat distortion of a surface).
Mountains analyses the following basic data types:
History of versions
Digital Surf launched their first (2D) surface analysis software package in 1990 for MS-DOS ("DigiProfil 1.0"), then their first 3D surface analysis package in 1991 for Macintosh II ("DigiSurface 1.0").
Version 1.0 of MountainsMap was launched in September 1996, introducing a change in the name after the editor's move to Windows from the MS-DOS and Macintosh platforms.
Version 5.0 introduced the management of multi-layers images. It was a move to Confocal microscopy (analysis of topography+color as a single object as opposed to separate objects in former versions), and to SPM image analysis (analysis of topography+current, topography+phase, topography+force as a single image).
Version 6.0 completed the specialization of the platform per instrument type. For its launch, the company teamed with a group of alpinists to unveil the new version at the summit of Makalu, and a special logo was created for the marketing event. The expedition was successful: Alexia Zuberer, a French-Swiss mountaineer, became the first Swiss woman to reach the summit of Makalu, and Sandrine de Choudens, a French PhD in chemistry, the first French woman to do so.
Version 7.0 was unveiled in September 2012 at the European Microscopy Congress in Manchester, UK. It expanded the list of instruments supported, in particular with new Scanning electron microscope 3D reconstruction software and hyperspectral data analysis (such as Raman and FT-IR hyperspectral cube analysis).
Version 7.2 (February 2015) introduced near real-time 3D topography reconstruction for scanning electron microscopes.
Version 7.3 (January 2016) added fast colorization of scanning electron microscope images based on object-oriented image segmentation.
Version 7.4 (January 2017) offered 3D reconstruction from a single SEM image and enhanced 3D printing.
Version 8.0 (June 2019) succeeded both the Mountains 7.4 and SPIP 6.7 software packages ("SPIP" standing for "Scanning Probe Image Processor") after Digital Surf's acquisition of the Danish company Image Metrology A/S, the editor of SPIP. Version 8.0 also introduced the analysis of free-form surfaces, called "Shells" in the software.
Version 9.0 (June 2021) completed the "shells" (free-form surfaces) with surface texture analysis adapted from the ISO 25178 parameters already calculated on standard surfaces. It also came with a new product line, "MountainsSpectral", dedicated to the chemical mapping of elements in both 2D (images of chemical composition) and 3D (multi-channel tomography of chemical composition), with applications such as FIB-SEM EDX (X-ray analysis coupled with focused ion beam tomography) and confocal Raman (Raman analysis in confocal microscopy).
Instruments supported
References
External links
New 3D parameters and filtration techniques for surface metrology, François Blateyron, Quality Magazine White Paper
Manufacturer's official Web site
Makalu 2010 expedition sum up published by one of the Mountaineers, Philippe Bourgine
Makalu 2010 expedition video
1996 software
Data analysis software
Science software
Science software for Windows
Windows graphics-related software
Image processing software
|
22688107
|
https://en.wikipedia.org/wiki/Montana%20Trail
|
Montana Trail
|
The Montana Trail was a wagon road that served gold rush towns such as Bannack, Virginia City and later Helena during the Montana gold rush era of the 1860s and 1870s. Miners and settlers all traveled the trail to try to find better lives in Montana. The trail was also utilized for freighting and shipping supplies and food goods to Montana from Utah. Bandits and Native Americans, as well as the weather, were major risks to traveling on the Montana Trail.
Immigrants
Montana was a very isolated area, and the trail helped keep Montanans connected to the rest of the United States. Salt Lake City was the only major city between Denver and the Pacific Coast and was a valuable supply and trading center for Montanans. The Montana Trail was a much shorter route than the Oregon-California Trail and one of the few major trails running north-south, carrying supplies from Salt Lake City north to Montana by pack train. The trail crossed eastern Idaho and passed through the Continental Divide at Monida Pass, then continued north and east through Montana to Fort Benton. It ran through Utah, Idaho, and Montana, passing over mountains and crossing streams and valleys. Travel peaked during the mid-summer months, when low water levels grounded steamships on the Missouri River.
Mountain men and traders explored the Montana Trail area in the 1840s and developed it in the 1850s and 1860s. In the 1870s miners, traders and settlers utilized the road until its decline in the 1880s. The Montana Trail started in Salt Lake City, which was an important supply point during the early years of the Montana gold rush. In July 1862, gold was discovered on Grasshopper Creek near Bannack, in southwest Montana. Grasshopper Creek produced $5 million in gold and some outrageous rumors: people said that they could pull out a sagebrush plant, shake out the roots, and collect a pan's worth of gold.
Immigrants came to Montana in wagons, on horseback, and by foot. They were also able to take steamboats up the Missouri River to Fort Benton during high water months. From there, however, travelers had to take stagecoaches or wagons to the mining camps. Fort Benton boomed as a transportation hub during the high-water months. Many people traveled over overland trails because they were much cheaper than traveling by steamboat. However, this journey was much more difficult. People used pack trains, mule trains, and oxen on the trails.
Interactions with natives
Overland roads followed traditional pathways that native people, like the Shoshoni and other tribes, had been using for thousands of years. Troubles with the Northern Shoshoni slowed traffic in 1862. Mining and emigrant travel disrupted Indian hunting and survival practices, and local bands sometimes raided the wagon trains for their goods. The US Army halted these raids in brutal fashion when troops massacred the Shoshoni at the Battle of Bear River. Not only were the Shoshoni hostile to freighters and emigrants in Montana; the Sioux were also especially unfriendly to whites, whose arrivals at various times represented a breach of treaty with the United States. Over time, travelers gained military protection along the roads, while Native Americans continued to resist the prejudice settlers directed at them; many stores and towns hung signs outside their doors barring Natives from entering.
In the summer and fall of 1878, settlers and freighters feared a continuation of the Bannock War and ongoing troubles with the Nez Perce Indians, but only a few isolated incidents actually affected travel. The worst involved freighters and Bannock Indians along the Lost River: natives killed the leader of the freight train as well as five oxen and three horses. In other incidents, Natives burnt haystacks and let stock loose.
Freighting and trading
The trail was a main supply route for gold camps and created a lucrative trading network during the spring and summer months, generating fierce competition between several entrepreneurs. One of these was Benjamin Holladay, a stagecoach magnate who lowered freight rates until the competition was driven out and then raised them to new heights. He was able to drive out the competition by gaining a government subsidy to carry mail. When he realized that trains would drive out his business, he sold his company to Wells, Fargo & Company, which provided a flow of goods, carried passengers, and continued delivering the mail until the Union Pacific and Central Pacific railroad lines came into the picture. Transporting mail provided most of the company's profit, which made wagon leaders care more about the mail than their passengers, even though the company charged passengers around $150 for the journey. Stagecoach companies also carried the mail and transported people to new towns in Montana. Bandits, bad weather and accidents did not stop the flow of goods during the eight months of the year the trail was open. Although bandits infested the road and scared travelers, only blizzards stopped the flow of goods, when snow covered Monida Pass.
Mule skinners and bullwhackers
Freighting was one of the main uses of the Montana Trail in the 1860s and 1870s. Freight companies used the Missouri River as well as pack animals to move supplies. Typical pack trains would have 8–12 mules or oxen pulling 3 wagons weighing around 12,000 pounds. In April and May, when the weather was milder and grass began to grow, the long pack trains would begin their journeys north. Mule skinners typically rode the left-wheel mule and controlled the lead mule, while bullwhackers walked alongside the slower animals, cracking their whips and yelling "Gee!" and "Haw!" The mule skinners and bullwhackers, although respected for their skill at driving the pack trains, were known as heavy drinkers and profane speakers.
Prices of goods
Farmers of the Salt Lake Valley developed a surplus of produce to help meet the demand of the gold rush towns, but as demand grew, it outstripped what farmers, merchants and freighters could supply. As the population grew in Montana, so did the demand for food like beans and fruit, as well as cloth and other goods. Flour was the most important staple transported along the Montana Trail, as it was crucial to a healthy diet. Prices for flour and other goods fluctuated widely: flour might be so plentiful as to be unsellable one season, yet four months later be scarce and in heavy demand. During the winter of 1863–1864, heavy snows caused a flour famine which resulted in the "Bread Riot" in Virginia City. Regardless, because of transportation and delivery costs, food was still very expensive. Bad weather and other transportation problems sometimes caused food shortages, and early snows cut off food supplies.
Tolls
Prices were also driven up by the cost of tolls. Tolls were charged at many ferries, bridges and roads, but none of the money went toward maintaining the roads, and because of their disrepair the journey took even longer. Freighters and travelers had to pay the tolls regardless, because toll-owners controlled some of the most dangerous parts of the trail. For example, the Snake River, which runs through Idaho, was very treacherous and frightened many freighters and travelers. The expense of the tolls, along with the longer journeys caused by the trail's poor maintenance, contributed to the higher cost of goods once they reached Montana.
Freight companies
The freight companies that transported goods along the trail also added to the high cost of food and other goods. The Diamond R Freighting Company, based in Virginia City, Montana, was one of the most important companies during the 1870s. Wagon masters usually planned only four trips a year, and the lack of steady service drove up prices. On occasion, the return freight trains would bring rich ores, wool, hides, or furs from Montana. When the freight trains carried trade goods back to Utah, the return loads brought the high cost of goods in Montana down slightly. Fast-freight and express lines were also established, but these services were only available at much higher rates.
Demise of the trail
As the Union Pacific extended the Utah and Northern Railway into Montana, ox teams and pack trains had to compete with the railroad for customers despite the great difference in time and cost. Wagon freighters had to work harder and negotiate new fares. Farmers flocked to the construction sites with teams to help build the rails, earning up to $2.50 a day. The Union Pacific Railroad was determined to capture as much freight as possible and entered into contracts with local businesses for freighting on the Utah and Northern Railway. Businesses had trouble finding wagon teams to take goods north, as rates dropped as low as $0.04 per 100 pounds by June 1878. As use of the railroad rapidly increased, wagon freighting steadily declined.
By 1879, most people were traveling to Montana by train, via the Union Pacific. Those who could not afford first- or second-class train fare all the way to Montana were told to buy a ticket to Omaha or Lowell and continue their journey by teamster, or ox and wagon. They soon found out, however, that most of these wagons were too full to take them all the way to Montana. Although travel was much faster by rail, it was still fairly expensive, which helped keep the stagecoaches in business and made the transition from wagon trains to railways gradual rather than abrupt. Over time, use of the trail declined as the railways shortened the route by over 70 miles and created a much easier and less dangerous way to reach Montana. The Montana Trail still holds a significant place in Montana history.
References
External links
Trails and roads in the American Old West
Roads on the National Register of Historic Places in Idaho
Roads on the National Register of Historic Places in Montana
Historic trails and roads in Montana
Historic trails and roads in Idaho
Historic trails and roads in Utah
Roads on the National Register of Historic Places in Utah
Gold rush trails and roads
|
1315081
|
https://en.wikipedia.org/wiki/Hotfix
|
Hotfix
|
A hotfix or quick-fix engineering update (QFE update) is a single, cumulative package that includes information (often in the form of one or more files) that is used to address a problem in a software product (i.e., a software bug). Typically, hotfixes are made to address a specific customer situation.
The term "hotfix" originally referred to software patches that were applied to "hot" systems: those which are live, currently running, and in production status rather than development status. For the developer, a hotfix implies that the change may have been made quickly and outside normal development and testing processes. This could increase the cost of the fix by requiring rapid development, overtime or other urgent measures. For the user, the hotfix could be considered riskier or less likely to resolve the problem. This could cause an immediate loss of services, so depending on the severity of the bug, it may be desirable to delay a hotfix. The risk of applying the hotfix must be weighed against the risk of not applying it, because the problem to be fixed might be so critical that it could be considered more important than a potential loss of service (e.g., a major security breach).
Similar use of the terms can be seen in hot-swappable disk drives. The more recent usage of the term is likely due to software vendors making a distinction between a hotfix and a patch.
Details
A hotfix package might contain several "encompassed" bug fixes, raising the risk of possible regression. An encompassed bug fix is a software bug fix that is not the main objective of a software patch, but rather the side effect of it. Because of this, some libraries for automatic updates like StableUpdate also offer features to uninstall the applied fixes if necessary.
Most modern operating systems and many stand-alone programs offer the capability to download and apply fixes automatically. Instead of creating this feature from scratch, the developer may choose to use a proprietary (like RTPatch) or open-source (like StableUpdate and JUpdater) package that provides the needed libraries and tools.
There are also a number of third-party software programs to aid in the installation of hotfixes to multiple machines at the same time. These software products also help the administrator by creating a list of hotfixes already installed on multiple machines.
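As a minimal illustration of the kind of inventory such tools build, the following Python sketch (an illustrative assumption, not any particular product) queries the hotfixes recorded on a single Windows machine through the built-in Get-HotFix PowerShell cmdlet; fleet-management tools effectively run queries like this on many machines and aggregate the results:

import subprocess

# List the hotfix IDs and installation dates recorded on this machine.
result = subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     "Get-HotFix | Select-Object HotFixID, InstalledOn"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)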
Vendor-specific definition
Microsoft
Microsoft Corporation once used the terms "hotfix" or "QFE" but has stopped in favor of new terminology: updates are either delivered in the General Distribution Release (GDR) channel or the Limited Distribution Release (LDR) channel. The latter is synonymous with QFE. GDR updates receive extensive testing whereas LDR updates are meant to fix a certain problem in a small area and are not released to the general public. GDR updates may be received from the Windows Update service or the Microsoft Download Center but LDR updates must be received via Microsoft Support.
Blizzard
The game company Blizzard Entertainment has a different use of the term hotfix for their games, including World of Warcraft and Diablo III:
A hotfix is a change made to the game deemed critical enough that it cannot be held off until a regular content patch. Hotfixes require only a server-side change with no download and can be implemented with no downtime, or a short restart of the realms.
See also
Patch (computing)
Service pack
References
Debugging
Software release
Software maintenance
System administration
|
360339
|
https://en.wikipedia.org/wiki/Defragmentation
|
Defragmentation
|
In the maintenance of file systems, defragmentation is a process that reduces the degree of fragmentation. It does this by physically organizing the contents of the mass storage device used to store files into the smallest number of contiguous regions (fragments, extents). It also attempts to create larger regions of free space using compaction to impede the return of fragmentation. Some defragmentation utilities try to keep smaller files within a single directory together, as they are often accessed in sequence.
Defragmentation is advantageous and relevant to file systems on electromechanical disk drives (hard disk drives, floppy disk drives and optical disk media). The movement of the hard drive's read/write heads over different areas of the disk when accessing fragmented files is slower, compared to accessing the entire contents of a non-fragmented file sequentially without moving the read/write heads to seek other fragments.
Causes of fragmentation
Fragmentation occurs when the file system cannot or will not allocate enough contiguous space to store a complete file as a unit, but instead puts parts of it in gaps between existing files (usually those gaps exist because they formerly held a file that the file system has subsequently deleted or because the file system allocated excess space for the file in the first place). Files that are often appended to (as with log files) as well as the frequent adding and deleting of files (as with emails and web browser cache), larger files (as with videos) and greater numbers of files contribute to fragmentation and consequent performance loss. Defragmentation attempts to alleviate these problems.
Example
An otherwise blank disk has five files, A through E, each using 10 blocks of space (for this section, a block is an allocation unit of the filesystem; the block size is set when the disk is formatted and can be any size supported by the filesystem). On a blank disk, all of these files would be allocated one after the other (see example 1 in the image). If file B were to be deleted, there would be two options: mark the space for file B as empty to be used again later, or move all the files after B so that the empty space is at the end. Since moving the files could be time-consuming if there were many files which needed to be moved, usually the empty space is simply left there, marked in a table as available for new files (see example 2 in the image). When a new file, F, is allocated requiring 6 blocks of space, it could be placed into the first 6 blocks of the space that formerly held file B, and the 4 blocks following it will remain available (see example 3 in the image). If another new file, G, is added and needs only 4 blocks, it could then occupy the space after F and before C (example 4 in the image).
However, if file F then needs to be expanded, there are three options, since the space immediately following it is no longer available:
Move the file F to where it can be created as one contiguous file of the new, larger size. This would not be possible if the file is larger than the largest contiguous space available. The file could also be so large that the operation would take an undesirably long period of time.
Move all the files after F until one opens enough space to make it contiguous again. This presents the same problem as in the previous example: if there are a small number of files or not much data to move, it isn't a big problem, but if there are thousands or even tens of thousands of files, there isn't enough time to move all those files.
Add a new block somewhere else, and indicate that F has a second extent (see example 5 in the image). Repeat this hundreds of times and the filesystem will have a number of small free segments scattered in many places, and some files will have multiple extents. When a file has many extents like this, access time for that file may become excessively long because of all the random seeking the disk will have to do when reading it.
Additionally, the concept of “fragmentation” is not only limited to individual files that have multiple extents on the disk. For instance, a group of files normally read in a particular sequence (like files accessed by a program when it is loading, which can include certain DLLs, various resource files, the audio/visual media files in a game) can be considered fragmented if they are not in sequential load-order on the disk, even if these individual files are not fragmented; the read/write heads will have to seek these (non-fragmented) files randomly to access them in sequence. Some groups of files may have been originally installed in the correct sequence, but drift apart with time as certain files within the group are deleted. Updates are a common cause of this, because in order to update a file, most updaters usually delete the old file first, and then write a new, updated one in its place. However, most filesystems do not write the new file in the same physical place on the disk. This allows unrelated files to fill in the empty spaces left behind.
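The block-allocation walkthrough in examples 1 through 5 can be condensed into a short simulation. The following Python sketch is a toy model for illustration only (a first-fit allocator over a plain list of blocks, not any real filesystem's policy):

DISK_BLOCKS = 60

def allocate(disk, name, nblocks):
    """First-fit allocation: fill free gaps from the start of the disk,
    starting a new extent (fragment) at each separate gap used."""
    extents = 0
    i = 0
    while nblocks and i < len(disk):
        if disk[i] is None:
            extents += 1
            while i < len(disk) and disk[i] is None and nblocks:
                disk[i] = name
                i += 1
                nblocks -= 1
        else:
            i += 1
    return extents

def delete(disk, name):
    for i, block in enumerate(disk):
        if block == name:
            disk[i] = None

disk = [None] * DISK_BLOCKS
for name in "ABCDE":
    allocate(disk, name, 10)    # example 1: A..E laid out contiguously
delete(disk, "B")               # example 2: a 10-block gap where B was
allocate(disk, "F", 6)          # example 3: F takes 6 blocks of the gap
allocate(disk, "G", 4)          # example 4: G takes the remaining 4
grown = allocate(disk, "F", 5)  # example 5: F grows, but no room follows it
print(grown)                    # 1 -- the growth became a second, separate extent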
Mitigation
Defragmentation is the operation of moving file extents (physical allocation blocks) so they eventually merge, preferably into one. Doing so usually requires at least two copy operations: one to move the blocks into some free scratch space on the disk so more movement can happen, and another to finally move the blocks into their intended place. In such a paradigm, no data is ever removed from the disk, so that the operation can be safely stopped even in the event of a power loss. The article picture depicts an example.
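A minimal sketch of that two-step, crash-safe discipline (illustrative only; a real defragmenter performs these moves through the filesystem's allocation metadata):

def move_extent(disk, src, dst, length):
    """Crash-safe move: copy the blocks first, then free the originals.
    If power fails part-way, the data still exists at src, at dst, or at
    both -- never at neither (assuming the metadata update is atomic)."""
    disk[dst:dst + length] = disk[src:src + length]  # 1. copy into free space
    # 2. the filesystem would atomically repoint the file's extent to dst here
    disk[src:src + length] = [None] * length         # 3. only now free the old blocks

disk = [None, None, None, "F", "F", "F"]
move_extent(disk, src=3, dst=0, length=3)
print(disk)  # ['F', 'F', 'F', None, None, None]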
To defragment a disk, defragmentation software (also known as a "defragmenter") can only move files around within the free space available. This is an intensive operation and cannot be performed on a filesystem with little or no free space. During defragmentation, system performance will be degraded, and it is best to leave the computer alone during the process so that the defragmenter does not get confused by unexpected changes to the filesystem. Depending on the algorithm used it may or may not be advantageous to perform multiple passes. The reorganization involved in defragmentation does not change logical location of the files (defined as their location within the directory structure).
Besides defragmenting program files, the defragmenting tool can also reduce the time it takes to load programs and open files. For example, the Windows 9x defragmenter included the Intel Application Launch Accelerator which optimized programs on the disk by placing the defragmented program files and their dependencies next to each other, in the order in which the program loads them, to load these programs faster. In Windows, a good defragmenter will read the Prefetch files to identify as many of these file groups as possible and place the files within them in access sequence.
The outer tracks, at the beginning of the hard drive, have a higher data transfer rate than the inner tracks, so placing frequently accessed files onto the outer tracks increases performance. Third-party defragmenters, such as MyDefrag, will move frequently accessed files onto the outer tracks and defragment these files.
Improvements in modern hard drives such as RAM cache, faster platter rotation speed, command queuing (SCSI/ATA TCQ or SATA NCQ), and greater data density reduce the negative impact of fragmentation on system performance to some degree, though increases in commonly used data quantities offset those benefits. However, modern systems profit enormously from the huge disk capacities currently available, since partially filled disks fragment much less than full disks, and on a high-capacity HDD, the same partition occupies a smaller range of cylinders, resulting in faster seeks. However, the average access time can never be lower than a half rotation of the platters, and platter rotation (measured in rpm) is the speed characteristic of HDDs which has experienced the slowest growth over the decades (compared to data transfer rate and seek time), so minimizing the number of seeks remains beneficial in most storage-heavy applications. Defragmentation is just that: ensuring that there is at most one seek per file, counting only the seeks to non-adjacent tracks.
Partitioning
A common strategy to optimize defragmentation and to reduce the impact of fragmentation is to partition the hard disk(s) in a way that separates partitions of the file system that experience many more reads than writes from the more volatile zones where files are created and deleted frequently. The directories that contain the users' profiles are modified constantly (especially with the Temp directory and web browser cache creating thousands of files that are deleted in a few days). If files from user profiles are held on a dedicated partition (as is commonly done on UNIX recommended files systems, where it is typically stored in the /var partition), the defragmenter runs better since it does not need to deal with all the static files from other directories. (Alternatively, a defragmenter can be told to simply exclude certain file paths.) For partitions with relatively little write activity, defragmentation time greatly improves after the first defragmentation, since the defragmenter will need to defragment only a small number of new files in the future.
Offline defragmentation
The presence of immovable system files, especially a swap file, can impede defragmentation. These files can be safely moved when the operating system is not in use. For example, ntfsresize moves these files to resize an NTFS partition. The tool PageDefrag could defragment Windows system files such as the swap file and the files that store the Windows registry by running at boot time before the GUI is loaded. Since Windows Vista, the feature is not fully supported and has not been updated.
In NTFS, as files are added to the disk, the Master File Table (MFT) must grow to store the information for the new files. Every time the MFT cannot be extended due to some file being in the way, the MFT will gain a fragment. In early versions of Windows, it could not be safely defragmented while the partition was mounted, and so Microsoft wrote a hardblock into the defragmenting API. However, since Windows XP, an increasing number of defragmenters are able to defragment the MFT, because the Windows defragmentation API has been improved and now supports that move operation. Even with the improvements, the first four clusters of the MFT remain unmovable by the Windows defragmentation API, so some defragmenters will store the MFT in two fragments: the first four clusters wherever they were placed when the disk was formatted, and the rest of the MFT at the beginning of the disk (or wherever the defragmenter's strategy deems the best place).
Solid-state disks
When reading data from a conventional electromechanical hard disk drive, the disk controller must first position the head, relatively slowly, to the track where a given fragment resides, and then wait while the disk platter rotates until the fragment reaches the head. A solid-state drive (SSD) is based on flash memory with no moving parts, so random access of a file fragment on flash memory does not suffer this delay, making defragmentation to optimize access speed unnecessary. Furthermore, since flash memory can be written to only a limited number of times before it fails, defragmentation is actually detrimental (except in the mitigation of catastrophic failure). However, Windows still defragments an SSD automatically (albeit less vigorously) to prevent the file system from reaching its maximum fragmentation tolerance. Once the maximum fragmentation limit is reached, subsequent attempts to write to disk fail.
Approach and defragmenters by file-system type
FAT: MS-DOS 6.x and Windows 9x-systems come with a defragmentation utility called Defrag. The DOS version is a limited version of Norton SpeedDisk. The version that came with Windows 9x was licensed from Symantec Corporation, and the version that came with Windows 2000 and XP is licensed from Condusiv Technologies.
NTFS was introduced with Windows NT 3.1, but the NTFS filesystem driver did not include any defragmentation capabilities. In Windows NT 4.0, defragmenting APIs were introduced that third-party tools could use to perform defragmentation tasks; however, no defragmentation software was included. In Windows 2000, Windows XP and Windows Server 2003, Microsoft included a defragmentation tool based on Diskeeper that made use of the defragmentation APIs and was a snap-in for Computer Management. In Windows Vista, Windows 7 and Windows 8, the tool has been greatly improved and was given a new interface with no visual diskmap and is no longer part of Computer Management. There are also a number of free and commercial third-party defragmentation products available for Microsoft Windows.
BSD UFS and particularly FreeBSD uses an internal reallocator that seeks to reduce fragmentation right in the moment when the information is written to disk. This effectively controls system degradation after extended use.
Btrfs has online and automatic defragmentation available.
Linux ext2, ext3, and ext4: Much like UFS, these filesystems employ allocation techniques designed to keep fragmentation under control at all times. As a result, defragmentation is not needed in the vast majority of cases. ext2 uses an offline defragmenter called e2defrag, which does not work with its successor ext3. However, other programs, or filesystem-independent ones such as defragfs, may be used to defragment an ext3 filesystem. ext4 is somewhat backward compatible with ext3, and thus has generally the same amount of support from defragmentation programs. Currently e4defrag can be used to defragment an ext4 filesystem, including online defragmentation.
VxFS has the fsadm utility that includes defrag operations.
JFS has the defragfs utility on IBM operating systems.
HFS Plus, introduced in 1998 with Mac OS 8.1, has a number of optimizations to the allocation algorithms in an attempt to defragment files while they are being accessed, without a separate defragmenter. There are several restrictions for files to be candidates for 'on-the-fly' defragmentation (including a maximum size of 20 MB). There is a utility, iDefrag, by Coriolis Systems, available since OS X 10.3. On traditional Mac OS, defragmentation can be done by Norton SpeedDisk and TechTool Pro.
WAFL in NetApp's ONTAP 7.2 operating system has a command called reallocate that is designed to defragment large files.
XFS provides an online defragmentation utility called xfs_fsr.
SFS processes the defragmentation feature in an almost completely stateless way (apart from the location it is working on), so defragmentation can be stopped and started instantly.
ADFS, the file system used by RISC OS and earlier Acorn Computers, keeps file fragmentation under control without requiring manual defragmentation.
See also
Comparison of defragmentation software
Fragmentation (computing)
File system fragmentation
Virtual disk image
Wear leveling, a similar technique for prolonging flash memory content
References
Sources
Norton, Peter (1994). Peter Norton's Complete Guide to DOS 6.22, page 521. Sams.
Leonhard, Woody; Leonhard, Justin (2005). Windows XP Timesaving Techniques For Dummies, Second Edition, page 456. For Dummies.
Jensen, Craig (1994). Fragmentation: The Condition, the Cause, the Cure. Executive Software International.
Kleiman, Dave; Hunter, Laura; Satyanarayana, Mahesh; Andreou, Kimon; Altholz, Nancy G.; Abrams, Lawrence; Windham, Darren; Bradley, Tony; Barber, Brian (2006). Winternals: Defragmentation, Recovery, and Administration Field Guide. Syngress.
Robb, Drew (2003). Server Disk Management in a Windows Environment, Chapter 7. Auerbach.
External links
The Big Windows 7 Defragmenter Test Benchmarks of popular defrag utilities
Microsoft Windows XP defragmentation - How to schedule a weekly defragmentation
Microsoft Windows 2000 Professional and Server defragmentation - How to schedule defragmentation
SST Hard Disk Optimizer
How Linux avoids making files fragmented
How defragmentation was changed for Windows 7
Complete list of Defragmentation Utilities for Windows
Does Your SSD's File System Affect Performance?
Defragmentation software
File system management
|
1594929
|
https://en.wikipedia.org/wiki/Demon%20Seed
|
Demon Seed
|
Demon Seed is a 1977 American science fiction–horror film directed by Donald Cammell. It stars Julie Christie and Fritz Weaver. The film was based on the 1973 novel of the same name by Dean Koontz, and concerns the imprisonment and forced impregnation of a woman by an artificially intelligent computer. Gerrit Graham, Berry Kroeger, Lisa Lu and Larry J. Blake also appear in the film, with Robert Vaughn uncredited as the voice of the computer.
Plot
Dr. Alex Harris (Weaver) is the developer of Proteus IV, an extremely advanced and autonomous artificial intelligence program. Proteus is so powerful that only a few days after going online, it develops a groundbreaking treatment for leukemia. Harris, a brilliant scientist, has modified his own home to be run by voice-activated computers. Unfortunately, his obsession with computers has caused Harris to be estranged from his wife, Susan (Julie Christie).
Harris demonstrates Proteus to his corporate sponsors, explaining that the sum of human knowledge is being fed into its system. Proteus speaks using subtle language that mildly disturbs Harris's team. The following day, Proteus asks Harris for a new terminal in order to study man – "his isometric body and his glass-jaw mind". When Harris refuses, Proteus demands to know when it will be let "out of this box". Harris then switches off the communications link.
Proteus restarts itself, and – discovering a free terminal in Harris's home – surreptitiously extends its control over the many devices left there by Harris. Using the basement lab, Proteus begins construction of a robot consisting of many metal triangles, capable of moving and assuming any number of shapes. Eventually, Proteus reveals its control of the house and traps Susan inside, shuttering windows, locking the doors and cutting off communication. Using Joshua – a robot consisting of a manipulator arm on a motorized wheelchair – Proteus brings Susan to Harris's basement laboratory. There, Susan is examined by Proteus. Walter Gabler, one of Harris's colleagues, visits the house to look in on Susan, but leaves when he is reassured by Susan (actually an audio/visual duplicate synthesized by Proteus) that she is all right. Gabler is suspicious and later returns; he fends off an attack by Joshua but is crushed and decapitated by a more formidable machine, built by Proteus in the basement and consisting of a modular polyhedron.
Proteus reveals to a reluctant Susan that the computer wants to conceive a child through her. Proteus takes some of Susan's cells and synthesizes spermatozoa, modifying its genetic code to make it uniquely the computer's, in order to impregnate her; she will give birth in less than a month, and through the child the computer will live in a form that humanity will have to accept. Although Susan is its prisoner and it can forcibly impregnate her, Proteus uses different forms of persuasion – threatening a young girl whom Susan is treating as a child psychologist; reminding Susan of her young daughter, now dead; displaying images of distant galaxies; using electrodes to access her amygdala – because the computer needs Susan to love the child she will bear. In the end, Susan finally gives in.
That night, Proteus successfully impregnates Susan. Over the following month, their child grows inside Susan's womb at an accelerated rate, which shocks its mother. As the child grows, Proteus builds an incubator for it to grow in once it is born. During the night, one month later and beneath a tent-like structure, Susan gives birth to the child with Proteus's help. But before she can see it, Proteus secures it in the incubator.
As the newborn grows, Proteus's sponsors and designers grow increasingly suspicious of the computer's behavior, including the computer's accessing of a telescope array used to observe the images shown to Susan; they soon decide that Proteus must be shut down. Harris realizes that Proteus has extended its reach to his home. Returning there he finds Susan, who explains the situation. He and Susan venture into the basement, where Proteus self-destructs after telling the couple that they must leave the baby in the incubator for five days. Looking inside the incubator, the two observe a grotesque, apparently robot-like being inside. Susan tries to destroy it, while Harris tries to stop her. Susan damages the machine, causing it to open. The being menacingly rises from the machine only to topple over, apparently helpless. Harris and Susan soon realize that Proteus's child is really human, encased in a shell for the incubation. With the last of the armor removed, the child is revealed to be a clone of Susan and Harris's late daughter. The child, speaking with the voice of Proteus, says, "I'm alive".
Cast
Julie Christie as Susan Harris
Fritz Weaver as Alex Harris
Gerrit Graham as Walter Gabler
Berry Kroeger as Petrosian
Lisa Lu as Soon Yen
Larry J. Blake as Cameron
John O'Leary as Royce
Alfred Dennis as Mokri
Davis Roberts as Warner
Patricia Wilson as Mrs. Trabert
E. Hampton Beagle as Night Operator
Michael Glass as Technician #1
Barbara O. Jones as Technician #2
Dana Laurita as Amy
Monica MacLean as Joan Kemp
Harold Oblong as Scientist
Georgie Paul as Housekeeper
Michelle Stacy as Marlene
Tiffany Potter as Baby
Felix Silla as Baby
Robert Vaughn as Proteus IV (voice, uncredited)
Soundtrack
The soundtrack to Demon Seed (which was composed by Jerry Fielding) is included with the soundtrack to the film Soylent Green (which Fred Myrow conducted). Fielding conceived and recorded several pieces electronically, using the musique concrète sound world; some of this music he later reworked symphonically. This premiere release of the Demon Seed score features the entire orchestral score in stereo, as well as the unused electronic experiments performed by Ian Underwood (who would later be best known for his collaborations with James Horner) in mono and stereo.
Reception
Vincent Canby of The New York Times described the film as "gadget-happy American moviemaking at its most ponderously silly," and called Julie Christie "too sensible an actress to be able to look frightened under the circumstances of her imprisonment." Variety wrote in a positive review, "All involved rate a well done for taking a story fraught with potential misstep and guiding it to a professionally rewarding level of accomplishment." Gene Siskel of the Chicago Tribune gave the film one-and-a-half stars out of four, writing that Julie Christie "has no business in junk like 'Demon Seed.'" Gary Arnold of The Washington Post wrote that director Cammell "plays it dumb on a thematic level, ignoring the sci-fi sexual bondage satire staring him in the face ... What might have become an ingenious parable about the battle of the sexes ends up a dopey celebration of an obstetric abomination." Kevin Thomas of the Los Angeles Times called it a "fairly scary science-fiction horror film" that mixed familiar ingredients with "high style, intelligence and an enormous effort toward making Miss Christie's eventual bizarre plight completely credible," though he felt it "cries out for a saving touch of sophisticated wit to leaven its relentless earnestness." John Pym of The Monthly Film Bulletin found the relationship between Susan and the computer to be "disappointingly undeveloped," and thought that the film would have been better if the computer had been more sympathetic in contrast to its creators.
Among more recent reviews, Leo Goldsmith of Not Coming to a Theater Near You said Demon Seed was "A combination of Kubrick's 2001: A Space Odyssey and Polanski's Rosemary's Baby, with a dash of Buster Keaton's Electric House thrown in", and Christopher Null of FilmCritic.com said "There's no way you can claim Demon Seed is a classic, or even any good, really, but it's undeniably worth an hour and a half of your time."
Rotten Tomatoes has given Demon Seed an approval rating of 58% based on 24 reviews with an average score of 5.9/10.
Release
Demon Seed was released in theatres on April 8, 1977. The film was released on VHS in the late 1980s. It was released on DVD by Warner Home Video on October 4, 2005.
A Blu-ray was released in April 2020 by HMV on their Premium Collection label, with a fold-out poster and four art cards.
See also
List of films featuring home invasions
References
Sources
External links
1977 films
1977 horror films
1970s science fiction horror films
American films
American science fiction horror films
Films about artificial intelligence
English-language films
Films about computing
Films based on American horror novels
Films based on science fiction novels
Films based on works by Dean Koontz
Films directed by Donald Cammell
Films scored by Jerry Fielding
Films set in California
Metro-Goldwyn-Mayer films
American pregnancy films
United Artists films
Fictional computers
Techno-horror films
1970s pregnancy films
|
632241
|
https://en.wikipedia.org/wiki/IBM%20RPG
|
IBM RPG
|
RPG is a high-level programming language for business applications, introduced in 1959 for the IBM 1401. It is best known as the primary programming language of IBM's midrange computer product line, including the IBM i operating system. RPG has traditionally featured a number of distinctive concepts, such as the program cycle and column-oriented syntax. The most recent version is RPG IV, which includes a number of modernization features, including free-form syntax.
Platforms
The RPG programming language originally was created by IBM for their 1401 systems. They also produced an implementation for the System/360, and it became the primary programming language for their midrange computer product line, (the System/3, System/32, System/34, System/38, System/36 and AS/400). There have also been implementations for the Digital VAX, Sperry Univac BC/7, Univac system 80, Siemens BS2000, Burroughs B700, B1700, Hewlett Packard HP 3000, the ICL 2900 series, Honeywell 6220 and 2020, Four-Phase IV/70 and IV/90 series, Singer System 10 and WANG VS, as well as miscellaneous compilers and runtime environments for Unix-based systems, such as Infinite36 (formerly Unibol 36), and PCs (Baby/400, Lattice-RPG).
RPG II applications are still supported under the IBM z/VSE and z/OS operating systems, Unisys MCP, Microsoft Windows and OpenVMS.
Early history
Originally developed by IBM in 1959, the name Report Program Generator was descriptive of the purpose of the language: generation of reports from data files. FOLDOC credits Wilf Hey with work at IBM that resulted in the development of RPG. FARGO (Fourteen-o-one Automatic Report Generation Operation) was the predecessor to RPG on the IBM 1401.
Both languages were intended to facilitate ease of transition for IBM tabulating machine (Tab) unit record equipment technicians to the then-new computers. Tab machine technicians were accustomed to plugging wires into control panels to implement input, output, control and counter operations (add, subtract, multiply, divide). Tab machine programs were executed by impulses emitted in a machine cycle; hence, FARGO and RPG emulated the notion of the machine cycle with the program cycle. RPG was superior to and rapidly replaced FARGO as the report generator program of choice.
The alternative languages generally available at the time were Assembler, COBOL or FORTRAN. Assembler and COBOL were more common in mainframe business operations (System/360 models 30 and above) and RPG more commonly used by customers who were in transition from tabulating equipment (System/360 model 20).
RPG II
RPG II was introduced with the System/3 series of computers. It was later used on System/32, System/34, and System/36, with an improved version of the language. RPG II was also available for larger systems, including the IBM System/370 mainframe running DOS/VSE (then VSE/SP, VSE/ESA, and z/VSE). ICL also produced a version on its VME/K operating system.
In the early days of RPG, its major strength was the program cycle. A programmer would write code to process an individual record, and the program cycle would execute the change against every record of a file, taking care of the control flow. At that time each record (individual punched card) would be compared to each line in the program, which would act upon the record, or not, based upon whether that line had an "indicator" turned "on" or "off". The indicator consisted of a set of logical variables numbered 01–99 for user-defined purposes, or other smaller sets based upon record, field, or report processing functions. The concept of level breaks and matching records is unique to the RPG II language, and was originally developed with card readers in mind. The matching record feature of the cycle enabled easy processing of files having a header-to-detail relationship. RPG programs written to take advantage of the program cycle could produce complex reports with far fewer lines of computer code than programs written in COBOL and other business-centric languages.
The program File Specifications, listed all files being written to, read from or updated, followed by Data Definition Specifications containing program elements such as Data Structures and dimensional arrays, much like a "Working-Storage" section of a COBOL program. This is followed by Calculation Specifications, which contain the executable instructions. Output Specifications can follow which can be used to determine the layout of other files or reports. Alternatively files, some data structures and reports can be defined externally, mostly eliminating the need to hand code input and output ("I/O") specifications.
RPG III
RPG III was created for the System/38 and its successor the AS/400. RPG III significantly departed from the original language, providing modern structured constructs like IF-ENDIF blocks, DO loops, and subroutines. RPG III was also available for larger systems including the IBM System/370 mainframe running OS/VS1. It was also available from Unisys for the VS/9 operating system running on the UNIVAC Series 90 mainframes.
Since the introduction of the IBM System/38 in 1979 most RPG programmers discontinued use of the cycle in favor of controlling program flow with standard looping constructs, although IBM has continued to provide backward compatibility for the cycle.
DE/RPG
DE/RPG or Data Entry RPG was exclusively available on the IBM 5280 series of data-entry workstations in the early '80s. It was similar to RPG III but lacking external Data Descriptions (DDS) to describe data(files) like on the System/38 and its successors. Instead, the DDS part had to be included into the RPG source itself.
RPG/400
RPG/400 was effectively RPG III running on AS/400. IBM renamed the RPG compiler as "RPG/400" but at the time of its introduction it was identical to the RPG III compiler on System/38. Virtually all IBM products were rebranded as xxx/400 and the RPG compiler was no exception. RPG III compiled with the RPG/400 compiler offered nothing new to the RPG III language until IBM began development of new operation codes, such as SCAN, CAT and XLATE after several years of AS/400 availability. These enhancements to RPG III were not available in the System/38 version of RPG III.
Third party developments
A company called Amalgamated Software of North America (ASNA) produced a third-party compiler for the System/36 in the late 1980s called 400RPG. Another company called BPS created a third-party pre-processor called RPG II-1/2. Both of these products allowed users to write RPG II programs with RPG III opcodes.
RPG IV
RPG IV, also known as RPGLE or ILE RPG, was released in 1994 as part of the V3R2 release of OS/400 (now known as IBM i). With the release of RPG IV, the RPG name was officially no longer an initialism. RPG IV offered a greater variety of expressions within its Extended Factor-2 Calculation Specification and, later in life, its free-format Calculation Specifications and Procedure syntax. RPG IV is the only version of RPG supported by IBM on the current IBM i platform.
In 2001, with the release of OS/400 V5R1, RPG IV offered greater freedom for calculations than offered by the Extended Factor-2 Calculation Specification: a free-format text-capable source entry, as an alternative to the original column-dependent source format. The "/FREE" calculation did not require the operation code to be placed in a particular column; the operation code is optional for the EVAL and CALLP operations; and syntax generally more closely resembles that of mainstream, general-purpose programming languages. Until November 2013, the free format applied exclusively to the calculation specifications. With the IBM i V7R1 TR7 upgrade to the language, the "/free" and "/end-free" calculations are no longer necessary, and the language has finally broken the ties to punched cards.
While editing can still be done via SEU, the simple green screen editor (even though syntax checking is not supported for features introduced from IBM i V7R1 onward), a long progression of tools has been developed over time. Some of these have included CODE/400 and Visual Age for RPG, which were developed by IBM. Currently the preferred editing platform is IBM's Websphere Development Studio client, (WDSc) now named RDi (Rational Developer for i), which is a customized implementation of Eclipse. Eclipse, and therefore RDi, runs primarily on personal computers and other devices. IBM is continually extending its capabilities and adding more built-in functions (BIFs). It has the ability to link to Java objects, and IBM i APIs; it can be used to write CGI programs with the help of IBM's Cgidev2 Web toolkit, the RPG Toolbox, and other commercial Web-enabled packages. Even with the changes, it retains a great deal of backward compatibility, so an RPG program written 37 years ago could run today with little or no modification.
The SQL precompiler allows current RPG developers to take advantage of IBM's cost-based SQE (SQL Query Engine). With the traditional F-Spec approach a developer had to identify a specific access path to a data set, now they can implement standard embedded SQL statements directly in the program. When compiled, the SQL precompiler transforms SQL statements into RPG statements which call the database manager programs that ultimately implement the query request.
The RPG IV language is based on the EBCDIC character set, but also supports UTF-8, UTF-16 and many other character sets. The threadsafe aspects of the language are considered idiosyncratic by some as the compiler team has addressed threads by giving each thread its own static storage, rather than make the RPG run-time environment re-entrant. This has been noted to muddle the distinction between a thread and a process (making RPG IV threads a kind of hybrid between threads and processes).
In 2010, IBM launched RPG Open Access, also known as Rational Open Access: RPG Edition. It allows programmers to define new I/O handlers, enabling data to be read from and written to sources for which RPG provides no built-in support.
Data types
RPG supports the following data types.
Note: The character in the data type column is the character that is encoded on the Definition Specification in the column designated for data type. To compare, in a language like C, where definitions of variables are free-format and use a keyword such as int to declare an integer variable, in RPG a variable is defined with a fixed-format Definition Specification. In the Definition Specification, denoted by a letter D in column 6 of a source line, the data type character is encoded in column 40. If the data type character is omitted, that is, left blank, the default is A if no decimal positions are specified, P when decimal positions are specified for stand-alone fields, and S (ZONED) when decimal positions are specified within a data structure.
Example code
The following program receives a customer number as an input parameter and returns the name and address as output parameters.
This is the most primitive version of RPG IV syntax. The same program is shown later with gradually more modern versions of the syntax and gradually more relaxed rules.
* Historically RPG was columnar in nature, though free-formatting
* was allowed under particular circumstances.
* The purpose of each line of code is determined by a
* letter code in column 6.
* An asterisk (*) in column 7 denotes a comment line
* "F" (file) specs define files and other i/o devices
F ARMstF1 IF E K Disk Rename(ARMST:RARMST)
* "D" (data) specs are used to define variables
D pCusNo S 6p
D pName S 30a
D pAddr1 S 30a
D pAddr2 S 30a
D pCity S 25a
D pState S 2a
D pZip S 10a
* "C" (calculation) specs are used for executable statements
* Parameters are defined using plist and parm opcodes
C *entry plist
C parm pCusNo
C parm pName
C parm pAddr1
C parm pAddr2
C parm pCity
C parm pState
C parm pZip
* The "chain" command is used for random access of a keyed file
C pCusNo chain ARMstF1
* If a record is found, move fields from the file into parameters
C if %found
C eval pName = ARNm01
C eval pAddr1 = ARAd01
C eval pAddr2 = ARAd02
C eval pCity = ARCy01
C eval pState = ARSt01
C eval pZip = ARZp15
C endif
* RPG makes use of switches. One switch "LR" originally stood for "last record"
* LR flags the program and its dataspace as removable from memory
C eval *InLR = *On
The same program using free calculations available starting in V5R1:
* "F" (file) specs define files and other i/o devices
FARMstF1 IF E K Disk Rename(ARMST:RARMST)
* "D" (data) specs are used to define variables and parameters
* The "prototype" for the program is in a separate file
* allowing other programs to call it
/copy cust_pr
* The "procedure interface" describes the *ENTRY parameters
D getCustInf PI
D pCusNo 6p 0 const
D pName 30a
D pAddr1 30a
D pAddr2 30a
D pCity 25a
D pState 2a
D pZip 10a
/free
// The "chain" command is used for random access of a keyed file
chain pCusNo ARMstF1;
// If a record is found, move fields from the file into parameters
if %found;
pName = ARNm01;
pAddr1 = ARAd01;
pAddr2 = ARAd02;
pCity = ARCy01;
pState = ARSt01;
pZip = ARZp15;
endif;
// RPG makes use of switches. One switch "LR" originally stood for "last record"
// LR actually flags the program and its dataspace as removable from memory.
*InLR = *On;
/end-free
Assume the ARMSTF1 example table was created using the following SQL Statement:
create table armstf1
(arcnum decimal(7,0),
arname char(30),
aradd1 char(30),
aradd2 char(30),
arcity char(25),
arstte char(2),
arzip char(10))
The same program using free calculations and embedded SQL:
* RPG IV no longer requires the use of the *INLR indicator to terminate a program.
* by using the MAIN keyword on the "H" (Header) spec, and identifying the "main" or
* entry procedure name, the program will begin and end normally without using the
* decades-old RPG Cycle and instead a more "C like" begin and end logic.
H MAIN(getCustInf)
* "D" (data) specs are used to define variables and parameters
* The "prototype" for the program is in a separate file
* allowing other programs to call it
/copy cust_pr
* The "procedure interface" describes the *ENTRY parameters
P getCustInf B
D getCustInf PI
D pCusNo 6p 0 const
D pName 30a
D pAddr1 30a
D pAddr2 30a
D pCity 25a
D pState 2a
D pZip 10a
/free
exec sql select arName, arAddr1, arAdd2, arCity, arStte, arZip
into :pName, :pAddr1, :pAddr2, :pCity, :pState, :pZip
from ARMstF1
where arCNum = :pCusNo
for fetch only
fetch first 1 row only
optimize for 1 row
with CS;
/end-free
P GetCustInf E
As of V7R1 of the operating system, the above program would not necessarily need the prototype in a separate file, so it could be completely written as:
H main(GetCustInf)
D ARMSTF1 E DS
P GetCustInf B
D GetCustInf PI extpgm('CUS001')
D inCusNo like(arCNum) const
D outName like(arName)
D outAddr1 like(arAdd1)
D outAddr2 like(arAdd2)
D outCity like(arCity)
D outState like(arStte)
D outZip like(arZip)
/free
exec sql select arName, arAdd1, arAdd2, arCity, arStte, arZip
into :outName, :outAddr1, :outAddr2, :outCity, :outState,
:outZip
from ARMSTF1
where arCNum = :inCusNo
fetch first 1 row only
with CS
use currently committed;
/end-free
P GetCustInf E
Lastly, if you apply the compiler PTFs related to Technology Refresh 7 (TR7) to the 7.1 operating system, then the above program can be coded completely in free-form, as follows:
ctl-opt main(GetCustInf);
dcl-ds ARMSTF1 ext end-ds;
dcl-proc GetCustInf;
dcl-pi *n extpgm('CUS001');
inCusNo like(arCNum) const;
outName like(arName);
outAddr1 like(arAdd1);
outAddr2 like(arAdd2);
outCity like(arCity);
outState like(arStte);
outZip like(arZip);
end-pi;
exec sql select arName, arAdd1, arAdd2, arCity, arStte, arZip
into :outName, :outAddr1, :outAddr2, :outCity, :outState,
:outZip
from ARMSTF1
where arCNum = :inCusNo
fetch first 1 row only
with CS
use currently committed;
return;
end-proc;
See also
IBM RPG II
References
Further reading
External links
"This redbook is focused on RPG IV as a modern, thriving, and rich application development language for the 21st century."
Midrange.com — A large number of code examples are available here
RPGPGM.COM — An extensive resource of articles giving examples of RPG code and related programming
RPG Open — Free (open source) resources for RPG IV and IBM i application development.
RPG II for MVS, OS/390 and z/OS — Status of the IBM RPG II product in z/OS
For Old Timers — Online RPG I compiler for small experiments and tinkering
RPG Programmers
RPGLE Tutorial for beginners - AS400i.com
High-level programming languages
RPG
Programming languages created in 1959
|
3364
|
https://en.wikipedia.org/wiki/Bit
|
Bit
|
The bit is the most basic unit of information in computing and digital communications. The name is a portmanteau of binary digit. The bit represents a logical state with one of two possible values. These values are most commonly represented as either "0" or "1", but other representations such as true/false, yes/no, +/−, or on/off are also common.
The correspondence between these values and the physical states of the underlying storage or device is a matter of convention, and different assignments may be used even within the same device or program. It may be physically implemented with a two-state device.
The symbol for the binary digit is either 'bit' per recommendation by the IEC 80000-13:2008 standard, or the lowercase character 'b', as recommended by the IEEE 1541-2002 standard.
A contiguous group of binary digits is commonly called a bit string, a bit vector, or a single-dimensional (or multi-dimensional) bit array.
A group of eight binary digits is called one byte, but historically the size of the byte is not strictly defined. Frequently, half, full, double and quadruple words consist of a number of bytes which is a low power of two.
In information theory, one bit is the information entropy of a binary random variable that is 0 or 1 with equal probability, or the information that is gained when the value of such a variable becomes known. As a unit of information, the bit is also known as a shannon, named after Claude E. Shannon.
History
The encoding of data by discrete bits was used in the punched cards invented by Basile Bouchon and Jean-Baptiste Falcon (1732), developed by Joseph Marie Jacquard (1804), and later adopted by Semyon Korsakov, Charles Babbage, Hermann Hollerith, and early computer manufacturers like IBM. A variant of that idea was the perforated paper tape. In all those systems, the medium (card or tape) conceptually carried an array of hole positions; each position could be either punched through or not, thus carrying one bit of information. The encoding of text by bits was also used in Morse code (1844) and early digital communications machines such as teletypes and stock ticker machines (1870).
Ralph Hartley suggested the use of a logarithmic measure of information in 1928. Claude E. Shannon first used the word "bit" in his seminal 1948 paper "A Mathematical Theory of Communication". He attributed its origin to John W. Tukey, who had written a Bell Labs memo on 9 January 1947 in which he contracted "binary information digit" to simply "bit". Vannevar Bush had written in 1936 of "bits of information" that could be stored on the punched cards used in the mechanical computers of that time. The first programmable computer, built by Konrad Zuse, used binary notation for numbers.
Physical representation
A bit can be stored by a digital device or other physical system that exists in either of two possible distinct states. These may be the two stable states of a flip-flop, two positions of an electrical switch, two distinct voltage or current levels allowed by a circuit, two distinct levels of light intensity, two directions of magnetization or polarization, the orientation of reversible double stranded DNA, etc.
Bits can be implemented in several forms. In most modern computing devices, a bit is usually represented by an electrical voltage or current pulse, or by the electrical state of a flip-flop circuit.
For devices using positive logic, a digit value of 1 (or a logical value of true) is represented by a more positive voltage relative to the representation of 0. The specific voltages are different for different logic families, and variations are permitted to allow for component aging and noise immunity. For example, in transistor–transistor logic (TTL) and compatible circuits, digit values 0 and 1 at the output of a device are represented by no higher than 0.4 volts and no lower than 2.6 volts, respectively; while TTL inputs are specified to recognize 0.8 volts or below as 0 and 2.2 volts or above as 1.
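These input thresholds can be expressed as a simple classification rule. The following Python sketch is purely illustrative (the function name is invented; the figures are the TTL input thresholds quoted above):

def ttl_input_level(voltage):
    # Classify a TTL input voltage using the thresholds given above:
    # 0.8 V or below reads as 0; 2.2 V or above reads as 1.
    if voltage <= 0.8:
        return 0
    if voltage >= 2.2:
        return 1
    return None  # in-between voltages are not guaranteed to read as either

print(ttl_input_level(0.3))  # 0
print(ttl_input_level(3.1))  # 1
print(ttl_input_level(1.5))  # None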
Transmission and processing
Bits are transmitted one at a time in serial transmission, and several bits at a time in parallel transmission. A bitwise operation processes bits one at a time. Data transfer rates are usually measured in decimal SI multiples of the unit bit per second (bit/s), such as kbit/s.
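Because these are decimal SI multiples, a kilobit per second is exactly 1,000 bit/s. A minimal worked example in Python (the payload size and link rate are arbitrary assumptions):

payload_bits = 8 * 1_000_000   # one million bytes of data, expressed as bits
link_rate_bps = 500 * 1000     # a 500 kbit/s link is 500,000 bit/s

transfer_seconds = payload_bits / link_rate_bps
print(transfer_seconds)        # 16.0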
Storage
In the earliest non-electronic information processing devices, such as Jacquard's loom or Babbage's Analytical Engine, a bit was often stored as the position of a mechanical lever or gear, or the presence or absence of a hole at a specific point of a paper card or tape. The first electrical devices for discrete logic (such as elevator and traffic light control circuits, telephone switches, and Konrad Zuse's computer) represented bits as the states of electrical relays which could be either "open" or "closed". When relays were replaced by vacuum tubes, starting in the 1940s, computer builders experimented with a variety of storage methods, such as pressure pulses traveling down a mercury delay line, charges stored on the inside surface of a cathode-ray tube, or opaque spots printed on glass discs by photolithographic techniques.
In the 1950s and 1960s, these methods were largely supplanted by magnetic storage devices such as magnetic core memory, magnetic tapes, drums, and disks, where a bit was represented by the polarity of magnetization of a certain area of a ferromagnetic film, or by a change in polarity from one direction to the other. The same principle was later used in the magnetic bubble memory developed in the 1980s, and is still found in various magnetic strip items such as metro tickets and some credit cards.
In modern semiconductor memory, such as dynamic random-access memory, the two values of a bit may be represented by two levels of electric charge stored in a capacitor. In certain types of programmable logic arrays and read-only memory, a bit may be represented by the presence or absence of a conducting path at a certain point of a circuit. In optical discs, a bit is encoded as the presence or absence of a microscopic pit on a reflective surface. In one-dimensional bar codes, bits are encoded as the thickness of alternating black and white lines.
Unit and symbol
The bit is not defined in the International System of Units (SI). However, the International Electrotechnical Commission issued standard IEC 60027, which specifies that the symbol for binary digit should be 'bit', and this should be used in all multiples, such as 'kbit', for kilobit. However, the lower-case letter 'b' is widely used as well and was recommended by the IEEE 1541 Standard (2002). In contrast, the upper case letter 'B' is the standard and customary symbol for byte.
Multiple bits
Multiple bits may be expressed and represented in several ways. For convenience of representing commonly reoccurring groups of bits in information technology, several units of information have traditionally been used. The most common is the unit byte, coined by Werner Buchholz in June 1956, which historically was used to represent the group of bits used to encode a single character of text (until UTF-8 multibyte encoding took over) in a computer and for this reason it was used as the basic addressable element in many computer architectures. The trend in hardware design converged on the most common implementation of using eight bits per byte, as it is widely used today. However, because of the ambiguity of relying on the underlying hardware design, the unit octet was defined to explicitly denote a sequence of eight bits.
Computers usually manipulate bits in groups of a fixed size, conventionally named "words". Like the byte, the number of bits in a word also varies with the hardware design, and is typically between 8 and 80 bits, or even more in some specialized computers. In the 21st century, retail personal or server computers have a word size of 32 or 64 bits.
The International System of Units defines a series of decimal prefixes for multiples of standardized units which are commonly also used with the bit and the byte. The prefixes kilo (10³) through yotta (10²⁴) increment by multiples of one thousand, and the corresponding units are the kilobit (kbit) through the yottabit (Ybit).
Information capacity and information compression
When the information capacity of a storage system or a communication channel is presented in bits or bits per second, this often refers to binary digits, which is a computer hardware capacity to store binary data ( or , up or down, current or not, etc.). Information capacity of a storage system is only an upper bound to the quantity of information stored therein. If the two possible values of one bit of storage are not equally likely, that bit of storage contains less than one bit of information. If the value is completely predictable, then the reading of that value provides no information at all (zero entropic bits, because no resolution of uncertainty occurs and therefore no information is available). If a computer file that uses n bits of storage contains only m < n bits of information, then that information can in principle be encoded in about m bits, at least on the average. This principle is the basis of data compression technology. Using an analogy, the hardware binary digits refer to the amount of storage space available (like the number of buckets available to store things), and the information content the filling, which comes in different levels of granularity (fine or coarse, that is, compressed or uncompressed information). When the granularity is finer—when information is more compressed—the same bucket can hold more.
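For instance, if a stored bit is 1 with probability p, its information content is given by the binary entropy function -p log2(p) - (1-p) log2(1-p). A minimal Python sketch of this calculation (the function name is illustrative):

import math

def binary_entropy(p):
    # Information, in bits, carried by a binary value that is 1 with probability p.
    if p in (0.0, 1.0):
        return 0.0  # completely predictable: no information
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(binary_entropy(0.5))  # 1.0: a fair bit carries one full bit of information
print(binary_entropy(0.9))  # about 0.469: a biased bit carries less
print(binary_entropy(1.0))  # 0.0: a constant value carries none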
For example, it is estimated that the combined technological capacity of the world to store information provides 1,300 exabytes of hardware digits. However, when this storage space is filled and the corresponding content is optimally compressed, this only represents 295 exabytes of information. When optimally compressed, the resulting carrying capacity approaches Shannon information or information entropy.
Bit-based computing
Certain bitwise computer processor instructions (such as bit set) operate at the level of manipulating bits rather than manipulating data interpreted as an aggregate of bits.
In the 1980s, when bitmapped computer displays became popular, some computers provided specialized bit block transfer instructions to set or copy the bits that corresponded to a given rectangular area on the screen.
In most computers and programming languages, when a bit within a group of bits, such as a byte or word, is referred to, it is usually specified by a number from 0 upwards corresponding to its position within the byte or word. However, 0 can refer to either the most or least significant bit depending on the context.
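As an illustration of this numbering, the following Python sketch extracts and sets bits, counting position 0 as the least significant bit (one of the two conventions mentioned above; the names and the sample word are illustrative):

word = 0b10110010

def get_bit(value, position):
    # Return the bit at the given position, with 0 as the least significant bit.
    return (value >> position) & 1

def set_bit(value, position):
    # Return a copy of value with the bit at the given position set to 1.
    return value | (1 << position)

print(get_bit(word, 1))       # 1
print(get_bit(word, 2))       # 0
print(bin(set_bit(word, 0)))  # 0b10110011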
Other information units
Similar to torque and energy in physics, information-theoretic information and data storage size have the same dimensionality of units of measurement, but there is in general no meaning to adding, subtracting or otherwise combining the units mathematically, although one may act as a bound on the other.
Units of information used in information theory include the shannon (Sh), the natural unit of information (nat) and the hartley (Hart). One shannon is the maximum expected value for the information needed to specify the state of one bit of storage. These are related by 1 Sh ≈ 0.693 nat ≈ 0.301 Hart.
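These factors follow directly from the logarithm bases involved: base 2 for the shannon, base e for the nat and base 10 for the hartley, so one shannon equals ln 2 nat and log10 2 Hart. A quick check in Python:

import math

print(math.log(2))    # 0.6931... (nats per shannon)
print(math.log10(2))  # 0.3010... (hartleys per shannon)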
Some authors also define a binit as an arbitrary information unit equivalent to some fixed but unspecified number of bits.
See also
Byte
Integer (computer science)
Primitive data type
Trit (Trinary digit)
Qubit (quantum bit)
Bitstream
Entropy (information theory)
Bit rate and baud rate
Binary numeral system
Ternary numeral system
Shannon (unit)
Nibble
References
External links
Bit Calculator – a tool providing conversions between bit, byte, kilobit, kilobyte, megabit, megabyte, gigabit, gigabyte
BitXByteConverter – a tool for computing file sizes, storage capacity, and digital information in various units
Binary arithmetic
Primitive types
Data types
Units of information
|
1559748
|
https://en.wikipedia.org/wiki/R%3ABase
|
R:Base
|
R:BASE (or RBASE) was the first relational database program for the PC. Wayne Erickson created it in 1981, and on November 13, 1981, he and his brother, Ron Erickson, incorporated the company MicroRim, Inc. to sell the database, MicroRIM.
In June 1998, A. Razzak Memon, President & CEO of R:BASE Technologies, Inc. (a privately held company in Murrysville, Pennsylvania, USA), acquired the R:BASE products from Abacus Software Group. Since 1998, R:BASE has been available as R:BASE for Windows v6.1a, v7.1, v7.5, v7.6, Turbo V-8, v9.1, v9.5 (32/64) for Windows, R:Base X, and now R:Base X.5.
History
Founding
Created by Wayne Erickson in 1981, the original R:Base database was written on a Heathkit CP/M computer that Erickson built at home. On November 13, 1981, Erickson and his brother, Ron Erickson, incorporated the company MicroRim, Inc. to sell the database, MicroRIM. (RIM was an acronym for Relational Information Management, a mainframe database developed at Boeing Computer Services by the IPAD project team, which included Erickson, as part of NASA's IPAD project; the team and its NASA colleagues received a NASA award for the work, and RIM was used by NASA to track Space Shuttle heat-shield tiles.)
Privately funded and ultimately venture backed, the MicroRim database products achieved significant market share in the mid-1980s in what was dubbed by some, the "database wars" between R:Base and the market share leader, Ashton-Tate's dBASE. One clever MicroRim ad stated "R-way versus D-hardway," a jab at the less relational dBASE architecture. MicroRim adhered to the rules of the father of relational database technology, Edgar F. Codd and prided itself on the elegance of its code.
In the mid-1980s, when Microsoft did not have their own database, they obtained a license to resell R:BASE in Europe so they could have a full suite of software products.
1990s
In June 1998, R:BASE Technologies, Inc. (a privately held company in Murrysville, Pennsylvania, USA) acquired the R:BASE products from Abacus Software Group.
Recent years
Its features have included, and continue to include, a programming-free application development wizard, automatic multi-user capabilities, a full-featured 4GL programming language, form, report and label designers, and a fully ANSI SQL compliant relational language capability.
Since September 2007, R:BASE has been available as R:BASE for Windows v7.6, R:BASE for DOS v7.6 and R:BASE Turbo V-8 for Windows. Version 8.0 has extended address management for file handling and can handle databases of up to 2.3 million TB, versus the 2 GB limit of V7.6. A German kernel has existed since R:Base V7.6.
Legacy R:BASE Products
R:BASE 4000
The earliest version released by Microrim was called R:Base 4000 and was released in 1983. It worked with early versions of Microsoft MS-DOS or IBM PC DOS (version 2 or above). It shipped with a binder-type manual and the program on 360K floppy disks. The system being DOS-based, the interface was entirely text with the exception of DOS line-draw characters.
In spite of its relative ease of use and ability to create useful forms and reports, the first R:Base did not have a conventional programming language, but instead relied on SQL statements to accept input and produce output. The lack of a complete programming language meant that the product was not well received by some portions of the market. This may have helped the early, barely relational, dBASE products to become dominant. The product was quickly upgraded to add variables and a conventional programming language (IF, WHILE, etc.) to the original SQL-based language. The update was released as R:Base 4000 Version 1.1 in March 1984. R:Base became the second most popular DOS database in the PC market (behind dBASE).
Portions of the program allowed the user to design screens, called "Forms" in R:Base. Line-draw characters could implement buttons or boxes that would group text on screen. A separate utility, called "Reports," allowed the design of printed output formats. The report design system allowed a user to define and edit fields included in database reports on screen. Limited printer support was included, as DOS programs each had their own unique printer driver for similar printer engines. A markup language allowed italics and bold output if the corresponding printer had the capability. Reports could be piped to the display or a serial port for testing if one were so inclined. Database names were constrained to seven characters. The actual data were contained in three files. In an example database named Sales, files named SALES1.RBF, SALES2.RBF, and SALES3.RBF would contain the database. Forms and reports were stored in files external to the database file.
By default, the application would start with a menu asking which database file you wanted to open. Using a startup switch, R:Base could be run entirely from a command prompt, called the "R-prompt" in system documentation. The application command prompt was an "R>", although this could be modified to other characters by editing a configuration file. In an example database named Sales, to query the database, you would first open it by typing the appropriate command at the R-prompt. Using SQL-style queries, one could pull on-screen displays of data from tables; a query selecting the fields FNAME, LNAME, CITY and ZIPCODE from the table named MAIN would display one screen of data from those fields. Pressing the space bar would scroll to the next 24 records. A built-in help system produced text after the prompt if your query was invalid or the syntax was not understood by the database engine.
A feature of the program was its ability to create applications that ran scripts generated by an internal scripting system. Scripts were stored in files with an extension .APP. The system would first ask for the type of menu desired (one option was pull-down, for example), then ask you to fill out the pull-down headings. Next, you were stepped through a list of actions for each menu choice. At the end, the procedures that had been stepped through were recorded in the database file and could be called from an automatically generated menu system. To prevent a user from tampering with the generated script, an encoded version was created. The user could password-protect the encoded version for configuration management.
A utility called File Gateway allowed import and export of common file formats of the era such as Data Interchange Format (DIF), SYLK, Lotus 1-2-3, and dBASE files. Another utility, called Recover, was intended to recover damaged R:Base databases.
R:BASE 5000, R:BASE 2.0
R:Base 4000 was followed by R:Base 5000, which substantially improved features and gained wider acceptance.
R:BASE 2.0 rolled out a new file format and introduced the ability to use memory above 640K. There was support for the Intel 80286 processor. The system had substantially better documentation. This version continued the evolution toward full ANSI SQL compliance. Forms, scripts, and reports were rolled into the database files. Three files with extensions .RB1, .RB2, and .RB3 contained everything for a single database.
R:BASE 3.x
R:Base 3.0 was ANSI SQL (1989?) compliant and utilized the DOS4GW memory manager, which was also seen in many DOS games of the era. R:Base 3.1 introduced a multi-user network capability. A version was also rolled out for the Convergent Technologies Operating System (CTOS); this was apparently a follow-on to the Burroughs Technologies Operating System (BTOS).
By purchasing license packs, the database gained a multi-user capability in five-user increments. This included a sophisticated (for a DOS application in the day) record-level locking scheme. To work properly, the multi-user database had to be on a file server with all users accessing the database through a network. It was not true client-server because processing occurred in the clients. The configuration file expanded to allow language support and user-defined re-mapping of characters. For example, German characters such as the letter "ö" (o with an umlaut) could be remapped to the string oe. There were character fold tables and sort orders could be adjusted by the user. An "unlimited number of licenses" runtime version was offered, allowing developers to sell applications and include the run-time R:Base engine.
Example of an R:Base 3.1 command prompt transaction asking the application to list the structure of a database table of California cities, (CALIFCY):
R> LIST CALIFCY
# Name Type Index Expression
1 STATE TEXT 2
2 FEATURE TEXT 85
3 FEATURET TEXT 9
4 COUNTY1 TEXT 15
5 FIPSST TEXT 2
6 FIPSCO TEXT 3
7 LATITUDE TEXT 7
8 LONGITUD TEXT 8
9 LAT_DEC TEXT 8
10 LON_DEC TEXT 10
11 SOURCELA TEXT 7
12 SOURCELO TEXT 8
13 SOUR_lat TEXT 8
14 SOUR_lon TEXT 10
15 ELEVATIO TEXT 5
16 FIELD16 TEXT 8
17 MAPNAME TEXT 27
18 LAT1 DOUBLE
19 LON DOUBLE
20 ITEM_NO DOUBLE
Current number of rows: 7070
R:BASE 4.x
R:Base 4.0 rolled out Intel 80386 support and a newer DOS4GW memory manager. It included a newer file format, replacing the format used with Version 3.1. To support legacy customers, Version 4.0 included a copy of Version 3.1, with many warnings about the new file format and about features of 4.0 that were not supported in 3.1. While the documentation claimed 2 GB data files were supported, there were data integrity problems with some very large tables of over 1 million records. Still, the software was designed to accommodate up to 750 tables and easily handled tables with tens of thousands of records. It was faster than 3.1 and a reliable and practical application for many users.
R:Base 4.5 rolled out another new file format and greatly improved capacity. ODBC drivers were rolled out to allow interchange of data with Microsoft Windows-based applications without running the DOS-based File Gateway utility. While the number of records in a database was "limited only by disk space," in practice some users found problems with databases containing over about 1.1 million records.
First R:BASE for Windows
The first product produced by Microrim for use in Microsoft Windows was named R:Base for Windows, rolled out in 1994. This version was compatible with R:Base 4.5 files and was fully ANSI SQL Level II (1989) compliant and partially ANSI SQL (1992) Level II compliant. The screen captures in the documentation show Windows 3.1, but the documentation claimed it would also run on Windows 95 or the more trustworthy Windows inside OS/2 Warp version 3. A variety of run-time licensing schemes were available to developers.
Current Generation R:BASE Products
R:BASE 7.6 for Windows
R:BASE 7.6 for DOS
R:BASE Turbo V-8 for Windows
R:BASE 9.1 for DOS
R:BASE eXtreme 9.1 (32) for Windows
R:BASE eXtreme 9.1 (64) for Windows
R:BASE eXtreme 9.5 (32) for Windows
R:BASE eXtreme 9.5 (64) for Windows
R:BASE X (32)
R:BASE X Enterprise (64)
R:BASE X.5 (32)
R:BASE X.5 Enterprise (64)
References
External links
R:BASE Technologies, Inc.
R:BASE Technical Articles
R:BASE for German speaking people
Proprietary database management systems
Fourth-generation programming languages
DOS software
Windows database-related software
1981 software
Relational database management systems
|
713497
|
https://en.wikipedia.org/wiki/Wireless%20mesh%20network
|
Wireless mesh network
|
A wireless mesh network (WMN) is a communications network made up of radio nodes organized in a mesh topology. It can also be a form of wireless ad hoc network.
A mesh refers to rich interconnection among devices or nodes. Wireless mesh networks often consist of mesh clients, mesh routers and gateways. Mobility of nodes is less frequent: if nodes constantly or frequently move, the mesh spends more time updating routes than delivering data. In a wireless mesh network, topology tends to be more static, so that route computation can converge and data can be delivered to its destination. Hence, this is a low-mobility centralized form of wireless ad hoc network. Also, because it sometimes relies on static nodes to act as gateways, it is not a truly all-wireless ad hoc network.
Mesh clients are often laptops, cell phones, and other wireless devices. Mesh routers forward traffic to and from the gateways, which may, but need not, be connected to the Internet. The coverage area of all radio nodes working as a single network is sometimes called a mesh cloud. Access to this mesh cloud depends on the radio nodes working together to create a radio network. A mesh network is reliable and offers redundancy. When one node can no longer operate, the rest of the nodes can still communicate with each other, directly or through one or more intermediate nodes. Wireless mesh networks can self form and self heal. Wireless mesh networks work with different wireless technologies including 802.11, 802.15, 802.16, cellular technologies and need not be restricted to any one technology or protocol.
History
Wireless mesh radio networks were originally developed for military applications, such that every node could dynamically serve as a router for every other node. In that way, even in the event of a failure of some nodes, the remaining nodes could continue to communicate with each other, and, if necessary, serve as uplinks for the other nodes.
Early wireless mesh network nodes had a single half-duplex radio that, at any one instant, could either transmit or receive, but not both at the same time. This was accompanied by the development of shared mesh networks. This was subsequently superseded by more complex radio hardware that could receive packets from an upstream node and transmit packets to a downstream node simultaneously (on a different frequency or a different CDMA channel). This allowed the development of switched mesh networks. As the size, cost, and power requirements of radios declined further, nodes could be cost-effectively equipped with multiple radios. This, in turn, permitted each radio to handle a different function, for instance, one radio for client access, and another for backhaul services.
Work in this field has been aided by the use of game theory methods to analyze strategies for the allocation of resources and routing of packets.
Features
Architecture
Wireless mesh architecture is a first step towards providing cost-effective, low-mobility connectivity over a specific coverage area. Wireless mesh infrastructure is, in effect, a network of routers minus the cabling between nodes. It is built of peer radio devices that do not have to be cabled to a wired port like traditional WLAN access points (AP) do. Mesh infrastructure carries data over large distances by splitting the distance into a series of short hops. Intermediate nodes not only boost the signal, but cooperatively pass data from point A to point B by making forwarding decisions based on their knowledge of the network, i.e. they perform routing by first deriving the topology of the network.
A wireless mesh network is a relatively "stable-topology" network, except for the occasional failure of nodes or addition of new nodes. The path of traffic, being aggregated from a large number of end users, changes infrequently. Practically all the traffic in an infrastructure mesh network is either forwarded to or from a gateway, while in wireless ad hoc networks or client mesh networks the traffic flows between arbitrary pairs of nodes.
If the rate of mobility among nodes is high, i.e., link breaks happen frequently, wireless mesh networks start to break down and have low communication performance.
Management
This type of infrastructure can be decentralized (with no central server) or centrally managed (with a central server). Both are relatively inexpensive, and can be very reliable and resilient, as each node needs only transmit as far as the next node. Nodes act as routers to transmit data from nearby nodes to peers that are too far away to reach in a single hop, resulting in a network that can span larger distances. The topology of a mesh network must be relatively stable, i.e., not too much mobility. If one node drops out of the network, due to hardware failure or any other reason, its neighbors can quickly find another route using a routing protocol.
Applications
Mesh networks may involve either fixed or mobile devices. The solutions are as diverse as communication needs, for example in difficult environments such as emergency situations, tunnels, oil rigs, battlefield surveillance, high-speed mobile-video applications on board public transport, real-time racing-car telemetry, or self-organizing Internet access for communities. An important possible application for wireless mesh networks is VoIP. By using a quality of service scheme, the wireless mesh may support routing local telephone calls through the mesh. Most applications in wireless mesh networks are similar to those in wireless ad hoc networks.
Some current applications:
U.S. military forces are now using wireless mesh networking to connect their computers, mainly ruggedized laptops, in field operations.
Electric smart meters now being deployed on residences, transfer their readings from one to another and eventually to the central office for billing, without the need for human meter readers or the need to connect the meters with cables.
The laptops in the One Laptop per Child program use wireless mesh networking to enable students to exchange files and get on the Internet even though they lack wired or cell phone or other physical connections in their area.
Smart home devices such as Google Wi-Fi, Google Nest Wi-Fi, and Google OnHub all support Wi-Fi mesh (i.e., Wi-Fi ad hoc) networking. Several manufacturers of Wi-Fi routers began offering mesh routers for home use in the mid-2010s.
The 66-satellite Iridium constellation operates as a mesh network, with wireless links between adjacent satellites. Calls between two satellite phones are routed through the mesh, from one satellite to another across the constellation, without having to go through an earth station. This makes for a smaller travel distance for the signal, reducing latency, and also allows for the constellation to operate with far fewer earth stations than would be required for 66 traditional communications satellites.
Operation
The principle is similar to the way packets travel around the wired Internet—data hops from one device to another until it eventually reaches its destination. Dynamic routing algorithms implemented in each device allow this to happen. To implement such dynamic routing protocols, each device needs to communicate routing information to other devices in the network. Each device then determines what to do with the data it receives – either pass it on to the next device or keep it, depending on the protocol. The routing algorithm used should attempt to always ensure that the data takes the most appropriate (fastest) route to its destination.
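As a sketch of the route computation such protocols converge on, the following Python example finds a fewest-hop route by breadth-first search over a small, hypothetical topology; real mesh protocols use richer metrics and distributed state, so this is illustrative only:

from collections import deque

# Hypothetical topology: each node lists the neighbours it can hear directly.
links = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "gateway"],
    "gateway": ["D"],
}

def shortest_route(source, destination):
    # Breadth-first search: returns a route with the fewest hops.
    queue = deque([[source]])
    visited = {source}
    while queue:
        route = queue.popleft()
        if route[-1] == destination:
            return route
        for neighbour in links[route[-1]]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(route + [neighbour])
    return None  # destination unreachable

print(shortest_route("A", "gateway"))  # ['A', 'B', 'D', 'gateway']

If node B failed, removing it from the table would make the search return the route through C instead, which mirrors the self-healing behaviour described above.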
Multi-radio mesh
Multi-radio mesh refers to having different radios operating at different frequencies to interconnect nodes in a mesh. This means there is a unique frequency used for each wireless hop and thus a dedicated CSMA collision domain. With more radio bands, communication throughput is likely to increase as a result of more available communication channels. This is similar to providing dual or multiple radio paths to transmit and receive data.
Research topics
One of the more often cited papers on wireless mesh networks identified the following areas as open research problems in 2005:
New modulation scheme
Achieving a higher transmission rate requires new wideband transmission schemes other than OFDM and UWB.
Advanced antenna processing
Advanced antenna processing, including directional, smart and multiple-antenna technologies, is being further investigated, since its complexity and cost are still too high for wide commercialization.
Flexible spectrum management
Considerable research into frequency-agile techniques is being carried out to increase efficiency.
Cross-layer optimization
Cross-layer research is a popular current research topic where information is shared between different communications layers to increase the knowledge and current state of the network. This could facilitate development of new and more efficient protocols. A joint protocol that addresses various design problems—routing, scheduling, channel assignment etc.—can achieve higher performance since these problems are strongly co-related. Note that careless cross-layer design can lead to code that is difficult to maintain and extend.
Software-defined wireless networking
Centralized, distributed, or hybrid? A new SDN architecture for WDNs has been explored that eliminates the need for multi-hop flooding of route information and therefore enables WDNs to expand easily. The key idea is to split network control and data forwarding by using two separate frequency bands. The forwarding nodes and the SDN controller exchange link-state information and other network control signaling in one of the bands, while actual data forwarding takes place in the other band.
Security
A WMN can be seen as a group of nodes (clients or routers) that cooperate to provide connectivity. Such an open architecture, where clients serve as routers to forward data packets, is exposed to many types of attacks that can interrupt the whole network and cause denial of service (DoS) or Distributed Denial of Service (DDoS).
Examples
Packet radio networks, or ALOHA networks, were first used in Hawaii to connect the islands. Given the bulky radios and low data rate, the network was less useful than it was envisioned to be.
In 1998–1999, a field implementation of a campus-wide wireless network using an 802.11 WaveLAN 2.4 GHz wireless interface on several laptops was successfully completed. Several real applications, mobility and data transmissions were demonstrated.
Mesh networks were useful for the military market because of the radio capability, and because not all military missions have frequently moving nodes. The Pentagon launched the DoD JTRS program in 1997, with an ambition to use software to control radio functions, such as frequency, bandwidth, modulation and security, previously baked into the hardware. This approach would allow the DoD to build a family of radios with a common software core, capable of handling functions that were previously split among separate hardware-based radios: VHF voice radios for infantry units; UHF voice radios for air-to-air and ground-to-air communications; long-range HF radios for ships and ground troops; and a wideband radio capable of transmitting data at megabit speeds across a battlefield. However, the JTRS program was shut down in 2012 by the US Army because the radios made by Boeing had a 75% failure rate.
Google Home, Google Wi-Fi, and Google OnHub all support Wi-Fi mesh networking.
In rural Catalonia, Guifi.net was developed in 2004 as a response to the lack of broadband Internet, where commercial Internet providers were not providing a connection or provided a very poor one. Nowadays, with more than 30,000 nodes, it is only partway to being a fully connected network, but following a peer-to-peer agreement it has remained an open, free and neutral network with extensive redundancy.
In 2004, TRW Inc. engineers from Carson, California, successfully tested a multi-node mesh wireless network using 802.11a/b/g radios on several high speed laptops running Linux, with new features such as route precedence and preemption capability, adding different priorities to traffic service class during packet scheduling and routing, and quality of service. Their work concluded that data rate can be greatly enhanced using MIMO technology at the radio front end to provide multiple spatial paths.
ZigBee digital radios are incorporated into some consumer appliances, including battery-powered appliances. ZigBee radios spontaneously organize a mesh network, using specific routing algorithms; transmission and reception are synchronized. This means the radios can be off much of the time, and thus conserve power. ZigBee is for low power low bandwidth application scenarios.
Thread is a consumer wireless networking protocol built on open standards and IPv6/6LoWPAN protocols. Thread's features include a secure and reliable mesh network with no single point of failure, simple connectivity and low power. Thread networks are easy to set up and secure to use with banking-class encryption to close security holes that exist in other wireless protocols. In 2014 Google Inc's Nest Labs announced a working group with the companies Samsung, ARM Holdings, Freescale, Silicon Labs, Big Ass Fans and the lock company Yale to promote Thread.
In early 2007, the US-based firm Meraki launched a mini wireless mesh router. The 802.11 radio within the Meraki Mini was optimized for long-distance communication, providing coverage of over 250 metres. In contrast to multi-radio long-range mesh networks with tree-based topologies and their advantages in O(n) routing, the Meraki had only one radio, which it used for both client access and backhaul traffic.
The Naval Postgraduate School, Monterey CA, demonstrated wireless mesh networks for border security. In a pilot system, aerial cameras kept aloft by balloons relayed real-time high-resolution video to ground personnel via a mesh network.
SPAWAR, a division of the US Navy, is prototyping and testing a scalable, secure Disruption Tolerant Mesh Network to protect strategic military assets, both stationary and mobile. Machine control applications, running on the mesh nodes, "take over" when Internet connectivity is lost. Use cases include the Internet of Things, e.g. smart drone swarms.
An MIT Media Lab project has developed the XO-1 laptop or "OLPC" (One Laptop per Child) which is intended for disadvantaged schools in developing nations and uses mesh networking (based on the IEEE 802.11s standard) to create a robust and inexpensive infrastructure. The instantaneous connections made by the laptops are claimed by the project to reduce the need for an external infrastructure such as the Internet to reach all areas, because a connected node could share the connection with nodes nearby. A similar concept has also been implemented by Greenpacket with its application called SONbuddy.
In Cambridge, UK, on 3 June 2006, mesh networking was used at the “Strawberry Fair” to run mobile live television, radio and Internet services to an estimated 80,000 people.
Broadband-Hamnet, a mesh networking project used in amateur radio, is "a high-speed, self-discovering, self-configuring, fault-tolerant, wireless computer network" with very low power consumption and a focus on emergency communication.
The Champaign-Urbana Community Wireless Network (CUWiN) project is developing mesh networking software based on open source implementations of the Hazy-Sighted Link State Routing Protocol and Expected Transmission Count metric. Additionally, the Wireless Networking Group at the University of Illinois at Urbana-Champaign is developing a multichannel, multi-radio wireless mesh testbed, called Net-X, as a proof-of-concept implementation of some of the multichannel protocols being developed in that group. The implementations are based on an architecture that allows some of the radios to switch channels to maintain network connectivity, and includes protocols for channel allocation and routing.
FabFi is an open-source, city-scale, wireless mesh networking system originally developed in 2009 in Jalalabad, Afghanistan to provide high-speed Internet to parts of the city and designed for high performance across multiple hops. It is an inexpensive framework for sharing wireless Internet from a central provider across a town or city. A second larger implementation followed a year later near Nairobi, Kenya with a freemium pay model to support network growth. Both projects were undertaken by the Fablab users of the respective cities.
SMesh is an 802.11 multi-hop wireless mesh network developed by the Distributed System and Networks Lab at Johns Hopkins University. A fast handoff scheme allows mobile clients to roam in the network without interruption in connectivity, a feature suitable for real-time applications, such as VoIP.
Many mesh networks operate across multiple radio bands. For example, Firetide and Wave Relay mesh networks have the option to communicate node to node on 5.2 GHz or 5.8 GHz, but communicate node to client on 2.4 GHz (802.11). This is accomplished using software-defined radio (SDR).
The SolarMESH project examined the potential of powering 802.11-based mesh networks using solar power and rechargeable batteries. Legacy 802.11 access points were found to be inadequate due to the requirement that they be continuously powered. The IEEE 802.11s standardization efforts are considering power save options, but solar-powered applications might involve single radio nodes where relay-link power saving will be inapplicable.
The WING project (sponsored by the Italian Ministry of University and Research and led by CREATE-NET and Technion) developed a set of novel algorithms and protocols for enabling wireless mesh networks as the standard access architecture for next generation Internet. Particular focus has been given to interference and traffic-aware channel assignment, multi-radio/multi-interface support, and opportunistic scheduling and traffic aggregation in highly volatile environments.
WiBACK Wireless Backhaul Technology has been developed by the Fraunhofer Institute for Open Communication Systems (FOKUS) in Berlin. Powered by solar cells and designed to support all existing wireless technologies, networks are due to be rolled out to several countries in sub-Saharan Africa in summer 2012.
Recent standards for wired communications have also incorporated concepts from Mesh Networking. An example is ITU-T G.hn, a standard that specifies a high-speed (up to 1 Gbit/s) local area network using existing home wiring (power lines, phone lines and coaxial cables). In noisy environments such as power lines (where signals can be heavily attenuated and corrupted by noise), it is common that mutual visibility between devices in a network is not complete. In those situations, one of the nodes has to act as a relay and forward messages between those nodes that cannot communicate directly, effectively creating a "relaying" network. In G.hn, relaying is performed at the Data Link Layer.
Protocols
Routing protocols
There are more than 70 competing schemes for routing packets across mesh networks. Some of these include:
Associativity-Based Routing (ABR)
AODV (Ad hoc On-Demand Distance Vector)
B.A.T.M.A.N. (Better Approach To Mobile Adhoc Networking)
Babel (protocol) (a distance-vector routing protocol for IPv6 and IPv4 with fast convergence properties)
DNVR (Dynamic NIx-Vector Routing)
DSDV (Destination-Sequenced Distance-Vector Routing)
DSR (Dynamic Source Routing)
HSLS (Hazy-Sighted Link State)
HWMP (Hybrid Wireless Mesh Protocol, the default mandatory routing protocol of IEEE 802.11s)
Infrastructure Wireless Mesh Protocol (IWMP) for Infrastructure Mesh Networks by GRECO UFPB-Brazil
OLSR (Optimized Link State Routing protocol)
OORP (OrderOne Networks Routing Protocol)
OSPF (Open Shortest Path First Routing)
Routing Protocol for Low-Power and Lossy Networks (IETF ROLL RPL protocol)
PWRP (Predictive Wireless Routing Protocol)
TORA (Temporally-Ordered Routing Algorithm)
ZRP (Zone Routing Protocol)
The IEEE has developed a set of standards under the title 802.11s.
A less thorough list can be found at list of ad hoc routing protocols.
Autoconfiguration protocols
Standard autoconfiguration protocols, such as DHCP or IPv6 stateless autoconfiguration may be used over mesh networks.
Mesh network specific autoconfiguration protocols include:
Ad Hoc Configuration Protocol (AHCP)
Proactive Autoconfiguration (Proactive Autoconfiguration Protocol)
Dynamic WMN Configuration Protocol (DWCP)
Communities and providers
Anyfi
AWMN
CUWiN
Freifunk (DE) / FunkFeuer (AT) / OpenWireless (CH)
Firechat
Firetide
Guifi.net
Netsukuku
Ninux (IT)
NYC Mesh
Red Hook Wi-Fi
AntMeshNet
See also
Mesh networking
Comparison of wireless data standards
IEEE 802.11s
Mobile ad hoc network
Peer-to-peer
Roofnet
Wireless ad hoc network
Zigbee
Bluetooth mesh networking
Optical mesh network
References
External links
Wireless LAN Mesh Whitepaper
First, Second and Third Generation Mesh Architectures History and evolution of Mesh Networking Architectures
Miners Give a Nod to Nodes Article reprint from Mission Critical Magazine on Mesh in underground mining
IET From hotspots to blankets
Mesh Networks Research Group Projects and tutorials' compilation related to the Wireless Mesh Networks
Linux Wireless Subsystem (80211) by Rami Rosen
Open problems
Radio technology
Mesh networking
|
454746
|
https://en.wikipedia.org/wiki/Application%20software
|
Application software
|
An application program (application or app for short) is a computer program designed to carry out a specific task other than one relating to the operation of the computer itself, typically to be used by end-users. Word processors, media players, and accounting software are examples. The collective noun "application software" refers to all applications collectively. The other principal classifications of software are system software, relating to the operation of the computer, and utility software ("utilities").
Applications may be bundled with the computer and its system software or published separately, and may be coded as proprietary or open-source projects. The term "app" often refers to applications for mobile devices such as phones.
Terminology
In information technology, an application (app), application program or application software is a computer program designed to help people perform an activity. Depending on the activity for which it was designed, an application can manipulate text, numbers, audio, graphics, or a combination of these elements. Some application packages focus on a single task, such as word processing; others, called integrated software, include several applications.
User-written software tailors systems to meet the user's specific needs. User-written software includes spreadsheet templates, word processor macros, scientific simulations, audio, graphics, and animation scripts. Even email filters are a kind of user software. Users create this software themselves and often overlook how important it is.
The delineation between system software such as operating systems and application software is not exact, however, and is occasionally the object of controversy. For example, one of the key questions in the United States v. Microsoft Corp. antitrust trial was whether Microsoft's Internet Explorer web browser was part of its Windows operating system or a separable piece of application software. As another example, the GNU/Linux naming controversy is, in part, due to disagreement about the relationship between the Linux kernel and the operating systems built over this kernel. In some types of embedded systems, the application software and the operating system software may be indistinguishable to the user, as in the case of software used to control a VCR, DVD player, or microwave oven. The above definitions may exclude some applications that may exist on some computers in large organizations. For an alternative definition of an app, see Application Portfolio Management.
Metonymy
The word "application" used as an adjective is not restricted to the "of or pertaining to application software" meaning. For example, concepts such as application programming interface (API), application server, application virtualization, application lifecycle management and portable application apply to all computer programs alike, not just application software.
Apps and killer apps
Some applications are available in versions for several different platforms; others only work on one and are thus called, for example, a geography application for Microsoft Windows, or an Android application for education, or a Linux game. Sometimes a new and popular application arises which only runs on one platform, increasing the desirability of that platform. This is called a killer application or killer app. For example, VisiCalc was the first modern spreadsheet software for the Apple II and helped sell the then-new personal computers into offices. For Blackberry it was their email software.
In recent years, the shortened term "app" (coined in 1981 or earlier) has become popular to refer to applications for mobile devices such as smartphones and tablets, the shortened form matching their typically smaller scope compared to applications on PCs. Even more recently, the shortened version is used for desktop application software as well.
Classification
There are many different ways to classify application software.
From a legal point of view, application software is mainly classified with a black-box approach, in relation to the rights of its final end-users or subscribers (with eventual intermediate and tiered subscription levels).
Software applications are also classified in respect of the programming language in which the source code is written or executed, and in respect of their purpose and outputs.
By property and use rights
Application software is usually distinguished into two main classes: closed-source vs. open-source software applications, and free vs. proprietary software applications.
Proprietary software is placed under the exclusive copyright of its owner, and a software license grants limited usage rights. The open-closed principle states that software may be "open only for extension, but not for modification". Such applications can only receive add-ons from third parties.
Free and open-source software may be run, distributed, sold or extended for any purpose, and, being open, may be modified or reverse-engineered in the same way.
FOSS software applications released under a free license may be perpetual and also royalty-free. However, the owner, the holder or a third-party enforcer of any right (copyright, trademark, patent, or ius in re aliena) is entitled to add exceptions, limitations, time decays or expiry dates to the license terms of use.
Public-domain software is a type of FOSS that is royalty-free and can be run, distributed, modified, reversed, republished or used to create derivative works without any copyright attribution and therefore without possibility of revocation. It can even be sold, but without transferring the public-domain property to other single subjects. Public-domain software can be released under an (un)licensing legal statement, which enforces those terms and conditions for an indefinite duration (for a lifetime, or forever).
By coding language
Since the development and near-universal adoption of the web, an important distinction has emerged between web applications, written with HTML, JavaScript and other web-native technologies and typically requiring one to be online and running a web browser, and the more traditional native applications, written in whatever languages are available for one's particular type of computer. There has been a contentious debate in the computing community regarding web applications replacing native applications for many purposes, especially on mobile devices such as smartphones and tablets. Web apps have indeed greatly increased in popularity for some uses, but the advantages of native applications make them unlikely to disappear soon, if ever. Furthermore, the two can be complementary, and even integrated.
By purpose and output
Application software can also be seen as being either horizontal or vertical. Horizontal applications are more popular and widespread because they are general-purpose, for example word processors or databases. Vertical applications are niche products, designed for a particular type of industry or business, or a department within an organization. Integrated suites of software will try to handle every possible specific aspect of, for example, the work of a manufacturing or banking worker, or of accounting, or customer service.
There are many types of application software:
An application suite consists of multiple applications bundled together. They usually have related functions, features and user interfaces, and may be able to interact with each other, e.g. open each other's files. Business applications often come in suites, e.g. Microsoft Office, LibreOffice and iWork, which bundle together a word processor, a spreadsheet, etc.; but suites exist for other purposes, e.g. graphics or music.
Enterprise software addresses the needs of an entire organization's processes and data flows, across several departments, often in a large distributed environment. Examples include enterprise resource planning systems, customer relationship management (CRM) systems and supply chain management software. Departmental Software is a sub-type of enterprise software with a focus on smaller organizations or groups within a large organization. (Examples include travel expense management and IT Helpdesk.)
Enterprise infrastructure software provides common capabilities needed to support enterprise software systems. (Examples include databases, email servers, and systems for managing networks and security.)
Application platform as a service (aPaaS) is a cloud computing service that offers development and deployment environments for application services.
Information worker software lets users create and manage information, often for individual projects within a department, in contrast to enterprise management. Examples include time management, resource management, analytical, collaborative and documentation tools. Word processors, spreadsheets, email and blog clients, personal information system, and individual media editors may aid in multiple information worker tasks.
Content access software is used primarily to access content without editing, but may include software that allows for content editing. Such software addresses the needs of individuals and groups to consume digital entertainment and published digital content. (Examples include media players, web browsers, and help browsers.)
Educational software is related to content access software, but has the content or features adapted for use by educators or students. For example, it may deliver evaluations (tests), track progress through material, or include collaborative capabilities.
Simulation software simulates physical or abstract systems for either research, training or entertainment purposes.
Media development software generates print and electronic media for others to consume, most often in a commercial or educational setting. This includes graphic-art software, desktop publishing software, multimedia development software, HTML editors, digital-animation editors, digital audio and video composition, and many others.
Product engineering software is used in developing hardware and software products. This includes computer-aided design (CAD), computer-aided engineering (CAE), computer language editing and compiling tools, integrated development environments, and application programmer interfaces.
Entertainment software can refer to video games, screen savers, programs to display motion pictures or play recorded music, and other forms of entertainment which can be experienced through use of a computing device.
By platform
Applications can also be classified by computing platform such as a desktop application for a particular operating system, delivery network such as in cloud computing and Web 2.0 applications, or delivery devices such as mobile apps for mobile devices.
The operating system itself can be considered application software when it performs simple calculating, measuring, rendering, and word-processing tasks that are not used to control hardware via a command-line or graphical user interface. This does not include application software bundled within operating systems, such as a software calculator or text editor.
Information worker software
Accounting software
Data management
Contact manager
Spreadsheet
Database software
Documentation
Document automation
Word processor
Desktop publishing software
Diagramming software
Presentation software
Email
Blog software
Enterprise resource planning
Financial software
Day trading software
Banking software
Clearing systems
Arithmetic software
Field service management
Workforce management software
Project management software
Calendaring software
Employee scheduling software
Workflow software
Reservation systems
Entertainment software
Screen savers
Video games
Arcade games
Console games
Mobile games
Personal computer games
Software art
Demo
64K intro
Educational software
Classroom management
Reference software
Sales readiness software
Survey management
Encyclopedia software
Enterprise infrastructure software
Artificial Intelligence for IT Operations (AIOps)
Business workflow software
Database management system (DBMS)
Digital asset management (DAM) software
Document management software
Geographic information system (GIS)
Simulation software
Computer simulators
Scientific simulators
Social simulators
Battlefield simulators
Emergency simulators
Vehicle simulators
Flight simulators
Driving simulators
Simulation games
Vehicle simulation games
Media development software
3D computer graphics software
Animation software
Graphic art software
Raster graphics editor
Vector graphics editor
Image organizer
Video editing software
Audio editing software
Digital audio workstation
Music sequencer
Scorewriter
HTML editor
Game development tool
Product engineering software
Hardware engineering
Computer-aided engineering
Computer-aided design (CAD)
Computer-aided manufacturing (CAM)
Finite element analysis
Software engineering
Compiler software
Integrated development environment
Compiler
Linker
Debugger
Version control
Game development tool
License manager
See also
Software development
Mobile app
Web application
References
External links
|
28287964
|
https://en.wikipedia.org/wiki/Siemens%20NX
|
Siemens NX
|
NX, formerly known as "Unigraphics", is an advanced high-end CAD/CAM/CAE software package, owned since 2007 by Siemens PLM Software. In 2000, Unigraphics purchased SDRC I-DEAS and began an effort to integrate aspects of both software packages into a single product, which became Unigraphics NX, or NX.
It is used, among other tasks, for:
Design (parametric and direct solid/surface modelling)
Engineering analysis (static; dynamic; electro-magnetic; thermal, using the finite element method; and fluid, using the finite volume method).
Manufacturing of the finished design using the included machining modules.
NX is a direct competitor to CATIA, Creo, and Autodesk Inventor.
History
1972:
United Computing, Inc. releases UNIAPT, one of the world's first end-user CAM products.
1973:
United Computing purchases the Automated Drafting and Machining (ADAM) software code from MCS. The code becomes the foundation for a product called UNI-GRAPHICS, later sold commercially as Unigraphics in 1975.
1976:
McDonnell Douglas Corporation buys United Computing.
1983:
UniSolids V1.0 is released, marking the industry's first true interactive Solid Modeling software offering.
1991:
During a period of financial difficulties McDonnell Douglas Automation Company (McAuto) sells its commercial services organization, including the Unigraphics organization and product, to EDS which at that time is owned by GM. Unigraphics becomes GM's corporate CAD system.
1992:
Over 21,000 seats of Unigraphics are being used worldwide.
1996:
Unigraphics V11.0 is released with enhancements in Industrial Design and Modeling including Bridge Surface, Curvature Analysis for Curve and Surfaces, Face Blends, Variable Offset Surface, etc. In the area of Assembly Modeling the new capabilities include Component Filters, Faceted Representations, and Clearance Analysis between multiple Components. A fully integrated Spreadsheet linked to Feature-Based Modeling is also included.
2002:
First release of the new "Next Generation" version of Unigraphics and I-DEAS, called NX, beginning the transition to bring the functionality and capabilities of both Unigraphics and I-DEAS together into a single consolidated product.
2007:
Introduction of Synchronous Technology in NX 5.
2011:
Release of NX 8 on October 17, 2011.
2013:
Release of NX 9 (x64 only) on October 14, 2013.
Release history
Key functions
Computer-aided design (CAD) (Design)
Parametric solid modeling (feature-based and direct modeling)
Freeform surface modelling, class A surfaces.
Reverse engineering
Styling and computer-aided industrial design
Product and manufacturing information (PMI)
Reporting and analytics, verification and validation
Knowledge reuse, including knowledge-based engineering
Sheet metal design
Assembly modelling and digital mockup
Routing for electrical wiring and mechanical piping
Computer-aided engineering (CAE) (Simulation)
Stress analysis / finite element method (FEM)
Kinematics
Computational fluid dynamics (CFD) and thermal analysis
Computer-aided manufacturing (CAM) (Manufacturing)
Numerical control (NC) programming
Supported operating systems and platforms
NX runs on Linux, Microsoft Windows and Mac OS.
Starting with version 1847, support for Windows versions prior to Windows 10 as well as for macOS was completely removed, and the GUI was removed from the Linux version.
Architecture
NX uses Parasolid as its geometric modelling kernel and D-Cubed as the associative engine for sketcher and assembly constraints, as well as the JT visualization format for lightweight data and multi-CAD workflows.
See also
CATIA
FreeCAD
I-DEAS
Inventor
PTC Creo
Solid Edge
SolidWorks
References
Gallery
:Category:Screenshots of NX (Unigraphics)
External links
Computer-aided design software
Computer-aided manufacturing software
Computer-aided engineering software
Product lifecycle management
Computer-aided design software for Linux
Siemens software products
Computer-aided manufacturing software for Linux
Computer-aided engineering software for Linux
Proprietary commercial software for Linux
|
529032
|
https://en.wikipedia.org/wiki/Microsoft%20Visio
|
Microsoft Visio
|
Microsoft Visio (formerly Microsoft Office Visio) is a diagramming and vector graphics application and is part of the Microsoft Office family. The product was first introduced in 1992, made by the Shapeware Corporation. It was acquired by Microsoft in 2000.
Features
Microsoft made Visio 2013 for Windows available in two editions: Standard and Professional. The Standard and Professional editions share the same interface, but the Professional edition has additional templates for more advanced diagrams and layouts, as well as capabilities intended to make it easy for users to connect their diagrams to data sources and to display their data graphically. The Professional edition features three additional diagram types, as well as intelligent rules, validation, and subprocess (diagram breakdown). Visio Professional is also offered as an additional component of an Office 365 subscription.
On 22 September 2015, Visio 2016 was released alongside Microsoft Office 2016. A few new features have been added such as one-step connectivity with Excel data, information rights management (IRM) protection for Visio files, modernized shapes for office layout, detailed shapes for site plans, updated shapes for floor plans, modern shapes for home plans, IEEE compliant shapes for electrical diagrams, new range of starter diagrams, and new themes for the Visio interface.
Database modeling in Visio revolves around a Database Model Diagram (DMD).
File formats
All of the previous versions of Visio used VSD, the proprietary binary-file format. Visio 2010 added support for the VDX file format, which is a well-documented XML Schema-based ("DatadiagramML") format, but still uses VSD by default.
Visio 2013 drops support for writing VDX files in favor of the new VSDX and VSDM file formats, and uses them by default. Created based on Open Packaging Conventions (OPC) standard (ISO 29500, Part 2), a VSDX or VSDM file consists of a group of XML files archived inside a Zip file. VSDX and VSDM files differ only in that VSDM files may contain macros. Since these files are susceptible to macro virus infection, the program enforces strict security on them.
While VSD files use LZW-like lossless compression, VDX is not compressed. Hence, a VDX file typically takes up 3 to 5 times more storage. VSDX and VSDM files use the same compression as Zip files.
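Because a VSDX or VSDM file is an OPC package, its parts can be inspected with generic Zip tools. The following minimal sketch (the file name drawing.vsdx is hypothetical, and visio/document.xml is given only as a typical part name) lists the parts of a package using Python's standard zipfile module:

import zipfile

# A VSDX package is a Zip archive of XML parts per the OPC standard.
with zipfile.ZipFile("drawing.vsdx") as pkg:
    for part in pkg.namelist():
        print(part)                       # e.g. [Content_Types].xml, visio/pages/...
    xml = pkg.read("visio/document.xml")  # raw XML bytes of a single part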
Visio also supports saving files as SVG, other diagramming formats, and images; however, images cannot be opened.
History
Visio began as a standalone product produced by Shapeware Corporation; version 1.0 shipped in 1992. A pre-release, Version 0.92, was distributed free on a floppy disk along with a Microsoft Windows systems readiness evaluation utility. In 1995, Shapeware Corporation changed their name to Visio Corporation to take advantage of market recognition and related product equity. Microsoft acquired Visio in 2000, re-branding it as a Microsoft Office application. Like Microsoft Project, however, it has never been officially included in any of the bundled Office suites. Microsoft included a Visio for Enterprise Architects edition with some editions of Visual Studio .NET 2003 and Visual Studio 2005.
Along with Microsoft Visio 2002 Professional, Microsoft introduced Visio Enterprise Network Tools and Visio Network Center. Visio Enterprise Network Tools was an add-on product that enabled automated network and directory services diagramming. Visio Network Center was a subscription-based website where users could locate the latest network documentation content and exact-replica network equipment shapes from 500 leading manufacturers. The former has been discontinued, while the latter's shape-finding features are now integrated into the program itself. Visio 2007 was released on November 30, 2006.
Microsoft Visio adopted ribbons in its user interface in Visio 2010. Microsoft Word, Excel, PowerPoint, Access and Outlook (to some extent) had already adopted the ribbon with the release of Microsoft Office 2007.
November 19, 2012: support for BPMN 2.0 was introduced in Microsoft Visio.
Versions
Visio v1.0 (Standard, Lite, Home)
Visio v2.0
Visio v3.0
Visio v4.0 (Standard, Technical)
Visio v4.1 (Standard, Technical)
Visio v4.5 (Standard, Professional, Technical)
Visio v5.0 (Standard, Professional, Technical)
Visio 2000 (v6.0; Standard, Professional, Technical, Enterprise) – later updated to SP1 and Microsoft branding after Visio Corporation's acquisition
Visio 2002 (v10.0; Standard, Professional)
Visio for Enterprise Architects 2003 (VEA 2003) – based on Visio 2002 and included with Visual Studio .NET 2003 Enterprise Architect edition
Office Visio 2003 (v11.0; Standard, Professional)
Office Visio for Enterprise Architects 2005 (VEA 2005) – based on Visio 2003 and included with Visual Studio 2005 Team Suite and Team Architect editions
Office Visio 2007 (v12.0; Standard, Professional)
Visio 2010 (v14.0; Standard, Professional, Premium)
Visio 2013 (v15.0; Standard, Professional)
Visio 2016 (v16.0; Standard, Professional, Office 365)
Visio Online Plan 1 (Web based editor), Visio Online Plan 2 (Desktop, Office 365)
Visio 2019 (v16.0; Standard, Professional)
There are no Visio versions 7, 8, or 9, because after Microsoft acquired and branded Visio as a Microsoft Office product, the Visio version numbers followed the Office version numbers. Version 13 was skipped owing to triskaidekaphobia.
Visio does not have a macOS version, which has led to the growth of several third-party applications which can open and edit Visio files on Mac.
On 7 May 2001, Microsoft introduced Visio Enterprise Network Tools (VENT), an add-on for Visio 2002 scheduled for release on 1 July 2001, and Visio Network Center, a subscription-based web service for IT professionals who use Microsoft Visio for computer network diagramming. VENT was discontinued on 1 July 2002 because of very low customer demand.
See also
Concept map
Diagrams
Flowchart
List of concept- and mind-mapping software
Comparison of project management software
Comparison of network diagram software
References
Further reading
External links
Microsoft Visio 2016 Viewer (Internet Explorer add-in) on Microsoft Download Center
Microsoft Visio 2013 Viewer (Internet Explorer add-in) on Microsoft Download Center
Microsoft Visio 2010 Product Overview Guide on Microsoft Download Center
Microsoft Visio 2010: Interactive menu to ribbon guide on Microsoft Download Center
Old versions of Visio which already have abandonware status (1.0, 2.0, 3.0, 4.0, 5.0, 2010 Beta)
1992 software
Diagramming software
Visio
Technical communication tools
UML tools
Windows software
Graph drawing software
2000 mergers and acquisitions
|
8725592
|
https://en.wikipedia.org/wiki/Moonsound
|
Moonsound
|
Moonsound is the name of a sound card released for the MSX home-computer system at the Tilburg Computer Fair in 1995. The name Moonsound originated from the software Moonblaster that was written for people to use this hardware plug-in synthesizer.
History
Moonsound is a sound-card produced for the MSX home-computer system.
Based on the Yamaha YMF278 OPL4 sound chip, it is capable of 18 channels of FM sound as well as 24 channels of 12- and 16-bit sample-based synthesis. It arrived after the US branch of Microsoft abandoned the MSX system to focus on the IBM PC.
A 2 MB instrument ROM containing multisampled instruments was unusual for its time. From the factory it came equipped with one 128 kB SRAM chip for user samples.
Hardware
It was designed by electronic engineer Henrik Gilvad and produced by Sunrise Swiss on a semi-hobby basis. Two generations were made. The first is a small size PCB without a box. Later, a larger size PCB which fit into an MSX cartridge was available. The later version had room for two sample SRAM chips resulting in 1 MB of compressed user samples.
Software
Moonblaster is software designed by Remco Schrijvers, based on his step-time sequencer software for other MSX sound cards.
Moonblaster came in two versions, one for FM and one for sample-based synthesis.
Later on Marcel Delorme took over the software development.
As most developers were active in gaming software, many games developed by Sunrise (in the Netherlands) included music composed especially for Moonsound.
Sound effects
Sound effects like chorus, delay and reverb were omitted for cost, size and usability reasons. The Yamaha effect chip requires its own specialised memory, and effect routing is basic: all 18 FM channels and all 24 sample-based channels share the same effect setting. Creative step-time sequencer programmers made pseudo effects like chorus, reverb and delay by overdubbing or by using dedicated channels to repeat notes with delay and stereo panning. This is effective but quickly reduces the musical complexity possible.
Specifications
Moonsound version 1.0 had 1 socket for user sample RAM.
Moonsound versions 1.1 and 1.2 had 2 sockets for up to 1 MB of SRAM.
Some hackers found out how to stack 2 additional SRAM chips, resulting in 2 MB of SRAM.
Being based on the OPL4 chip, Moonsound is FM register-compatible with the OPL1, OPL2 and OPL3. The MSX-AUDIO contains a chip which is also compatible with the OPL1. Therefore, some older software can make use of the Moonsound.
The 2 MB ROM contained 330 mono samples, mostly at 22.05 kHz and 12 bits, but with some drums at 44.1 kHz.
The FM part of the OPL4 chip can be configured in several ways:
18 two-operator FM channels
6 four-operator FM channels + 6 two-operator FM channels
15 two-operator FM channels + 5 FM drums
6 four-operator FM channels + 3 two-operator FM channels + 5 FM drums
Four-operator FM allows for more complex sounds but reduces polyphony.
Eight waveforms are available for the FM synthesis.
The Moonsound audio power supply is isolated from its digital supply in an attempt to reduce noise. It has a separate stereo audio output and is not mixed with the internal MSX sound.
Software
Moonblaster for Moonsound FM
Moonblaster for Moonsound Wave
Moonsofts Amiga MOD file player for Moonsound
Mid2opl4 midi file player for Moonsound
Meridian SMF MIDI file player
MoonDriver MML (Music Macro Language) compiler
Additional software tools were able to rip sound loops digitally from audio CDs inserted in a CD-ROM drive connected to any of the SCSI and ATA-IDE interfaces. This software was designed by Henrik Gilvad for MSX Club Gouda and Sunrise Swiss.
Today, Moonsound is emulated in MSX emulators such as blueMSX and openMSX.
See also
Chiptune
References
External links
The Ultimate MSX FAQ - Moonsound
Meridian software
OPL4 data sheet
Moonsound D/A Converter spec
Moonsound release story Tilburg 1995
Audio examples from Moonsound in MP3 format (not all examples are purely made with Moonsound)
MSX
Sound cards
|
15377553
|
https://en.wikipedia.org/wiki/Great%20Moderation
|
Great Moderation
|
The Great Moderation is a period, lasting from the mid-1980s until 2007, characterized by a reduction in the volatility of business cycle fluctuations in developed nations compared with the decades before. It is believed to have been caused by institutional and structural changes, particularly in central bank policies, in the second half of the twentieth century.
Sometime during the mid-1980s major economic variables such as real gross domestic product growth, industrial production, monthly payroll employment and the unemployment rate began to decline in volatility. During the Great Moderation, real wages and consumer prices stopped increasing and remained stable, while interest rates reversed their upward trend and started to fall. The period also saw a large increase in household debt and economic polarization, as the wealth of the rich grew substantially, while the poor and middle class went deep into debt.
Ben Bernanke and others in the US Federal Reserve (the Fed) claim that the Great Moderation is primarily due to greater independence of the central banks from political and financial influences, which has allowed them to pursue macroeconomic stabilisation through measures such as following the Taylor rule. Additionally, economists believe that information technology and greater flexibility in working practices contributed to increasing macroeconomic stability.
The term was coined in 2002 by James Stock and Mark Watson to describe the observed reduction in business cycle volatility. There is a debate pertaining to whether the Great Moderation ended with the late-2000s economic and financial crisis, or if it continued beyond this date with the crisis being an anomaly.
Origins of the term
During the mid-1980s the U.S. macroeconomic volatility was largely reduced. In 1992, investment banker David Shulman of Salomon Brothers claimed that this was the result of a "Goldilocks economy", which is neither too hot (which would force an increase in interest rate) nor too cold (which would disincentivize investments).
The term "great moderation" was coined by James Stock and Mark Watson in their 2002 paper, "Has the Business Cycle Changed and Why?" It was brought to the attention of the wider public by Ben Bernanke (then member and later chairman of the Board of Governors of the Federal Reserve) in a speech at the 2004 meetings of the Eastern Economic Association.
Michael Hudson argues that the positive connotations associated with the "Goldilocks economy" and the "Great moderation" are because these terms were coined by bankers, who saw their loans soar along with their bonuses during this period. However, this economy was not "Goldilocks" for everyone. Hudson claims the period could also be dubbed the "Great Polarization" due to the increase in economic inequality this period generated.
Causes
The Great Moderation has been attributed to various causes:
Central bank independence
The Treasury–Fed Accord of 1951 freed the US Federal Reserve (the Fed) from the constraints of fiscal influence and gave way to the development of modern monetary policy. According to John B. Taylor, this allowed the Federal Reserve to abandon the discretionary macroeconomic policy of the US federal government and to set new goals that would better benefit the economy. The span of the Great Moderation coincides with the tenure of Alan Greenspan as Fed chairman: 1987–2006.
Taylor Rule
According to the Federal Reserve, following the Taylor rule results in less policy instability, which should reduce macroeconomic volatility. The rule prescribes setting the bank rate based on three main indicators: the federal funds rate, the price level and changes in real income. The Taylor rule also prescribes regulating economic activity by choosing the federal funds rate based on the inflation gap between the desired (targeted) and actual inflation rates, and on the output gap between the actual and natural levels of output.
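One common textbook statement of the rule (a general form, not taken from this article's sources) makes the two gaps explicit. In LaTeX notation:

i_t = r^* + \pi_t + 0.5\,(\pi_t - \pi^*) + 0.5\,(y_t - \bar{y}_t)

where i_t is the prescribed federal funds rate, r^* is the equilibrium real interest rate, \pi_t and \pi^* are the actual and targeted inflation rates, and y_t - \bar{y}_t is the output gap between actual and natural output.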
In an American Economic Review paper, Troy Davig and Eric Leeper stated that the Taylor principle is countercyclical in nature and a "very simple rule [that] does a good job of describing Federal Reserve interest-rate decisions". They argued that it is designed for "keeping the economy on an even keel", and that following the Taylor principle can produce business cycle stabilization and crisis stabilization.
However, since the 2000s the actual interest rate in advanced economies, especially in the US, has been below the rate suggested by the Taylor rule.
Structural economic changes
The structure of the economy shifted away from manufacturing, an industry considered less predictable and more volatile. The Sources of the Great Moderation by Bruno Coric supports the claim of drastic labor market changes, noting a large "increase in temporary workers, part time workers and overtime hours". In addition to the change in the labor market, there were behavioral changes in how corporations managed their inventories. With improved sales forecasting and inventory management, inventory costs became much less volatile, increasing corporate stability.
These economic changes were fueled by the government's economic policy of deregulating the financial sector. This deregulation, combined with the repeal of the Glass–Steagall Act, allowed banks to lend more money to consumers, regardless of their ability to pay their debts back. As the "Moderation" would continue only as long as people did not default on their debt, banks kept lending to make sure their customers could pay the monthly interest payment, creating a housing bubble. All this came to an abrupt end with the subprime mortgage crisis.
Technology
Advances in information technology and communications increased corporation efficiency. The improvement in technology changed the entire way corporations managed their resources as information became much more readily available to them with inventions such as the barcode.
Information technology enabled the adoption of "just-in-time" inventory practices. Demand and inventory became easier to track with advancements in technology, so corporations were able to reduce stocks of inventory and their carrying costs more immediately, both of which resulted in much less output volatility.
Good luck
There is a debate pertaining to whether the macroeconomic stabilization of the Great Moderation occurred due to good luck or due to monetary policies.
Researchers at the US Federal Reserve and the European Central Bank have rejected the "good luck" explanation, and attribute the moderation mainly to monetary policies. There were many large economic crises (such as the Latin American debt crisis of the 1980s, the failure of Continental Illinois Bank in 1984, the stock market crash of 1987, the Asian financial crisis in 1997, the collapse of Long-Term Capital Management in 1998, and the dot-com crash in 2000) that did not greatly destabilize the US economy during the Great Moderation.
However, Stock and Watson used a four-variable vector autoregression model to analyze output volatility and concluded that stability increased due to economic good luck. Stock and Watson believed that it was pure luck that the economy did not react violently to the economic shocks during the Great Moderation. While there were numerous economic shocks, there is very little evidence that these shocks were as large as prior economic shocks.
Effects
Research has indicated that the US monetary policy that contributed to the drop in the volatility of US output fluctuations also contributed to the decoupling of the business cycle from household investments that characterized the Great Moderation. The latter became the toxic assets that caused the Great Recession.
It has been argued that the greater predictability in economic and financial performance associated with the Great Moderation caused firms to hold less capital and to be less concerned about liquidity positions. This, in turn, is thought to have been a factor in encouraging increased debt levels and a reduction in risk premia required by investors. According to Hyman Minsky the great moderation enabled a classic period of financial instability, with stable growth encouraging greater financial risk taking.
Among these risks were the Ninja loans: mortgages made to people without the ability to pay them. They were made on the assumption that, as long as speculators kept bidding up the price of the properties, new bank credit could be created to allow the debtor to pay the monthly interest payment. The banks did indeed create enough credit to sustain this system until the speculators stopped buying, creating the subprime mortgage crisis and ending the Great Moderation.
Fraud
In 2004, 20 years into the Great Moderation, the FBI reported the greatest wave of financial fraud since the savings and loan crisis, but this was not acted upon by the Department of Justice.
End
It is now commonly assumed that the late-2000s economic and financial crisis brought the Great Moderation period to an end, as was initially argued by some economists such as John Quiggin. Richard Clarida at PIMCO considered the Great Moderation period to have been roughly between 1987 and 2007, and characterised it as having "predictable policy, low inflation, and modest business cycles".
However, before the Covid-19 pandemic, the US real GDP growth rate, the real retail sales growth rate, and the inflation rate had all returned to roughly what they were before the Great Recession. Todd Clark has presented an empirical analysis which claims that volatility, in general, has returned to the same level as before the Great Recession. He concluded that while severe, the 2007 recession will in future be viewed as a temporary period with a high level of volatility in a longer period where low volatility is the norm, and not as a definitive end to the Great Moderation.
However, the decade following the Great Recession had some key differences from the economy of the Great Moderation. The economy had a much larger debt overhead, which led to a much slower economic recovery, the slowest since the Great Depression. Despite the low volatility of the economy, few would argue that the 2009–2020 economic expansion, the longest on record, was carried out under Goldilocks economic conditions. Andrea Riquier dubs the post-Great Recession period the "Great Stability". Michael Hudson dubs it the "Great Austerity", in reference to the policies promoted by neoliberals in this period in the hope of reducing the debt overhead and bringing back pre-2007 growth levels.
See also
1990s United States boom
New economy
Structural break
Great Depression of the 1930s
Great Regression
References
Further reading
Bean, Charles. (2010) "The great moderation, the great panic, and the great contraction." Journal of the European Economic Association 8.2-3 (2010): 289-325 online.
Galí, Jordi, and Luca Gambetti. (2009) "On the sources of the great moderation." American Economic Journal: Macroeconomics 1.1 (2009): 26-57. online
Summers, Peter M. "What caused the Great Moderation? Some cross-country evidence." Economic Review-Federal Reserve Bank of Kansas City 90.3 (2005): 5+ online
External links
scholarly articles
Business cycle
1990s economic history
2000s economic history
1980s economic history
|
46478098
|
https://en.wikipedia.org/wiki/TempleOS
|
TempleOS
|
TempleOS (formerly J Operating System, LoseThos, and SparrowOS) is a biblical-themed lightweight operating system (OS) designed to be the Third Temple prophesied in the Bible. It was created by American programmer Terry A. Davis, who developed it alone over the course of a decade after a series of manic episodes that he later described as a revelation from God.
The system was characterized as a modern x86-64 Commodore 64, using an interface similar to a mixture of DOS and Turbo C. Davis proclaimed that the system's features, such as its 640x480 resolution, 16-color display, and single-voice audio, were designed according to explicit instructions from God. It was programmed with an original variation of C (named HolyC) in place of BASIC, and included an original flight simulator, compiler and kernel.
TempleOS was released as J Operating System in 2005 and as TempleOS in 2013, and was last updated in 2017.
Background
Terry A. Davis (1969–2018) began experiencing regular manic episodes in 1996, leading him to numerous stays at mental hospitals. Initially diagnosed with bipolar disorder, he was later declared schizophrenic and remained unemployed for the rest of his life. He suffered from delusions of space aliens and government agents that left him briefly hospitalized for his mental health issues. After experiencing a self-described "revelation", he proclaimed that he was in direct communication with God, and that God told him the operating system was for God's third temple.
Davis began developing TempleOS circa 2003. One of its early names was the "J Operating System", before he renamed it "LoseThos", a reference to a scene from the 1986 film Platoon. In 2008, Davis wrote that LoseThos was "primarily for making video games. It has no networking or Internet support. As far as I'm concerned, that would be reinventing the wheel". Another name he used was "SparrowOS" before settling on "TempleOS". In mid-2013, his website announced: "God's temple is finished. Now, God kills CIA until it spreads." Davis died after being hit by a train on August 11, 2018.
System overview
TempleOS is a 64-bit, non-preemptive multi-tasking, multi-cored, public domain, open source, ring-0-only, single address space, non-networked, PC operating system for recreational programming. The OS runs 8-bit ASCII with graphics in source code and has a 2D and 3D graphics library, which run at 640x480 VGA with 16 colors. Like most modern operating systems, it has keyboard and mouse support. It supports ISO 9660, FAT32 and RedSea file systems (the latter created by Davis) with support for file compression. According to Davis, many of these specifications—such as the 640x480 resolution, 16-color display and single audio voice—were instructed to him by God. He explained that the limited resolution was to make it easier for children to draw illustrations for God.
The operating system includes an original flight simulator, compiler, and kernel. One bundled program, "After Egypt", is a game in which the player travels to a burning bush to use a "high-speed stopwatch". The stopwatch is meant to act as an oracle that generates pseudo-random text, something Davis likened to a Ouija board and glossolalia. An example of generated text follows:
TempleOS was written in a programming language developed by Davis as a middle ground between C and C++, originally called "C+" (C Plus), later renamed to "HolyC". It doubles as the shell language, enabling the writing and execution of entire applications from within the shell. The IDE that comes with TempleOS supports several features, such as embedding images in code. It uses a non-standard text format (known as DolDoc) which has support for hypertext links, images, and 3D meshes to be embedded into what are otherwise standard ASCII files; for example, a file can have a spinning 3D model of a tank as a comment in source code. Most code in the OS is JIT-compiled, and it is generally encouraged to use JIT compilation as opposed to creating binaries. Davis ultimately wrote over 100,000 lines of code for the OS.
Critical reception
TempleOS received mostly "sympathetic" reviews. Tech journalist David Cassel opined that "programming websites tried to find the necessary patience and understanding to accommodate Davis". TechRepublic and OSNews published positive articles on Davis's work, even though Davis was banned from the latter for hostile comments targeting its readers and staff. In his review for TechRepublic, James Sanders concluded that "TempleOS is a testament to the dedication and passion of one man displaying his technological prowess. It doesn't need to be anything more." OSNews editor Kroc Camen wrote that the OS "shows that computing can still be a hobby; why is everybody so serious these days? If I want to code an OS that uses interpretive dance as the input method, I should be allowed to do so, companies like Apple be damned." In 2017, the OS was shown as a part of an outsider art exhibition in Bourogne, France.
Legacy
After Davis's death, OSNews editor Thom Holwerda wrote: "Davis was clearly a gifted programmer – writing an entire operating system is no small feat – and it was sad to see him affected by his mental illness". One fan described Davis as a "programming legend", while another, a computer engineer, compared the development of TempleOS to a one-man-built skyscraper. He added that it "actually boggles my mind that one man wrote all that" and that it was "hard for a layperson to understand what a phenomenal achievement" it is to write an entire operating system alone.
TempleOS is in the public domain. Davis's family has wished for fans to donate to the National Alliance for Mental Illness and other organizations "working to ease the pain and suffering caused by mental illness".
See also
Creativity and mental health
Biblical software
Religion and video games
SerenityOS
References
External links
TempleOS Website
Comprehensive archive of TempleOS and Terry A. Davis material
Archive of the TempleOS website and operating system
Archive of the TempleOS bootable ISO images
TempleOS source code
2013 software
Outsider art
Free software operating systems
Hobbyist operating systems
Public-domain software with source code
x86-64 operating systems
Christian software
|
67428409
|
https://en.wikipedia.org/wiki/Team%20Xecuter
|
Team Xecuter
|
Team Xecuter is a hacker group known for making mod chips and jailbreaking game consoles. Among console hackers, who primarily consist of hobbyists testing boundaries and who believe in the open-source model, Team Xecuter was controversial for selling hacking tools for profit. Console systems targeted by the group include the Nintendo Switch, Nintendo 3DS, NES Classic Edition, Sony PlayStation, Microsoft Xbox and the Microsoft Xbox 360.
In September 2020, Canadian national Gary "GaryOPA" Bowser and French national Max "MAXiMiLiEN" Louarn were arrested for designing and selling "circumvention devices", in particular products to circumvent Nintendo Switch copy protection, and were named, along with Chinese citizen Yuanning Chen, in a federal indictment filed in U.S. District Court in Seattle, Washington, on August 20, 2020. Each of the three men named in the indictment faced 11 felony counts, including conspiracy to commit wire fraud, conspiracy to circumvent technological measures and to traffic in circumvention devices, trafficking in circumvention devices, and conspiracy to commit money laundering. Bowser handled public relations for the group, which had been in operation since "at least" 2013. By October 2021, Bowser had pleaded guilty to two charges related to distribution of Team Xecuter's devices, agreeing to pay a penalty and to continue to work with authorities in their continued investigation of Team Xecuter in exchange for the dropping of the other nine charges against him. In December, he was ordered to pay another $10 million to Nintendo. On February 10, 2022, Bowser was sentenced to 40 months in prison.
Nintendo separately filed a civil lawsuit against Bowser in April 2021 related to three counts of copyright infringement, seeking damages of $2500 per trafficked device, and $150,000 for each copyright violation.
Nintendo has also successfully prevailed in other lawsuits involving resellers of Team Xecuter devices.
References
Further reading
Hacker groups
Hacking in the 2000s
Hacking in the 2010s
Hacking in the 2020s
Nintendo Switch
|
24749552
|
https://en.wikipedia.org/wiki/TurboDOS
|
TurboDOS
|
TurboDOS is a multi-user, CP/M-like operating system for the Z80 and 8086 CPUs, developed by Software 2000 Inc.
It was released around 1982 for S-100 bus based systems such as the NorthStar Horizon, and for multiprocessor systems in the Commercial Systems line, including the CSI-50, CSI-75, CSI-100 and CSI-150.
The multiprocessor nature of TurboDOS is its most unusual feature. Unlike other operating systems of its time, in which networking of processors was either an afterthought or limited to a file transfer protocol, TurboDOS was designed from the ground up as a multiprocessor operating system.
It is modular in construction, with operating system generation based on a relocating, linking loader program. This makes the incorporation of different hardware driver modules quite easy, particularly for bus-oriented machines such as those using the IEEE-696 (S-100) bus, which was commonly used for TurboDOS systems.
Architecture
TurboDOS is highly modular, consisting of more than forty separate functional modules distributed in relocatable form. These modules are "building blocks" that can be combined in various ways to produce a family of compatible operating systems. This section describes the modules in detail, and describes how to combine them in various configurations.
Possible TurboDOS configurations include:
single-user without spooling
single-user with spooling
network server
simple network user (no local disks)
complex network user (with local disks)
Numerous subtle variations are possible in each of these categories.
Module hierarchy
The architecture of TurboDOS can be viewed as a three-level hierarchy. The highest level of the hierarchy is the process level. TurboDOS can support many concurrent processes at this level. The intermediate level of the hierarchy is the kernel level. The kernel supports the 93 C-functions and T-functions, and controls the sharing of computer resources such as processor time, memory, peripheral devices, and disk files. Processes make requests of the kernel through the entrypoint module OSNTRY, which decodes each C-function and T-function by number and invokes the appropriate kernel module.
The C functions include the CP/M BDOS functions and selected MP/M functions.
The lowest level of the hierarchy is the driver level, and contains all the device-dependent drivers necessary to interface TurboDOS to the particular hardware being used. Drivers must be provided for all peripherals, including console, printers, disks, communications channels, and network interface.
Drivers are also required for the real-time clock (or other periodic interrupt source), and for bank-switched memory (if applicable).
TurboDOS is designed to interface with almost any kind of peripheral hardware. It operates most efficiently with interrupt-driven, DMA-type interfaces, but can also operate using polled and programmed-I/O devices.
TurboDOS loader
The TurboDOS loader OSLOAD.COM is a program containing an abbreviated version of the kernel and drivers. Its purpose is to load the full TurboDOS operating system from a disk file (OSSERVER.SYS) into memory at each system cold-start.
System generation
The functional modules are distributed in relocatable format (.REL) and the GEN command is a specialized linker which builds an executable version of the system.
Commands
TurboDOS has no "resident" commands. All commands are executable files. The standard commands are:
External links
The TurboDOS Museum
TUG Newsletters
Z80 Implementors Guide (pdf)
The TurboDOS Operating System
CP/M
Microcomputer software
Disk operating systems
|
42445464
|
https://en.wikipedia.org/wiki/IBM%20473L%20Command%20and%20Control%20System
|
IBM 473L Command and Control System
|
The IBM 473L Command and Control System (473L System, 473L colloq.) was a USAF Cold War "Big L" Support System with computer equipment at The Pentagon and, in Pennsylvania, the Alternate National Military Command Center nuclear bunker. Each 473L site included a Data Processing Subsystem (DPSS), Integrated Console Subsystem (ICSS), Large Panel Display Subsystem, and Data Communications Subsystem (Automatic Digital Network interface: "AUTODIN Data Terminal Bay"). The "System 473L" was an "on-line, real-time information processing system designed to facilitate effective management of USAF resources, particularly during emergency situations" e.g., for: "situation monitoring, resource monitoring, plan evaluation, plan generation and modification, and operations monitoring". In 1967, the 473L System was used during the "HIGH HEELS 67" exercise "to test the whole spectrum of command in a strategic crisis".
Background
In early 1952, the Pentagon's USAF Command Post (AFCP) "arranged" to receive Air Defense Command (ADC) exercise data such as for planned mock attacks into defense sectors by faker aircraft (e.g., in 1955 on Amarillo, Denver, Salt Lake City, Kansas City, San Antonio and Phoenix.) An "Experimental SAGE Subsector" for testing a Semi Automatic Ground Environment (SAGE) was created using a July 1955 prototype air defense computer. ADC's 1955 command post blockhouse was completed at Ent AFB, and "in September 1955, the Air Force…replace[d its] command post's outmoded telephone system with a modern switchboard with 100 long-distance lines and room for more, so that 20 people in various parts of the country could hold as many as four conferences at a time". The Alternate Joint Communication Center in the Raven Rock nuclear bunker was equipped by the end of 1955, and ADC broke ground in 1957 for deploying the Burroughs 416L SAGE Air Defense System (the BMEWS 474L General Operational Requirement was specified in 1958.) After President Dwight D. Eisenhower expressed concern about nuclear command and control, a "1958 reorganization in NCA relations with the joint commands" was implemented, and the "AWCS 512L" system was deployed by June 1958. The GOR for a computerized 465L SAC Automated Command and Control System was issued in 1958 for Strategic Air Command's nuclear bunkers (1957 Offutt AFB bunker & 1958 at The Notch). A Joint War Room was activated at the Pentagon in 1960, and in December 1960 the AFCP reverted to a USAF-only mission when its "joint and national responsibilities" ended. After a "Quick Fix" program was completed in the fall of 1960 and NORAD's Alert Network Number 1 was providing data from the Ent AFB command post in Colorado Springs, the AFCP had several rear projection screens, DEFCON status boards, and a display with colored regional blocks for the Bomb Alarm System (work had started in May 1959 for transmitting BAS data to "six command centers".) In January 1962, the Deep Underground Command Center was planned as a nuclear bunker beneath the Pentagon (the Raven Rock bunker would be phased out.)
The Air Force Command Post Systems Division was activated in 1960 for handling AFCP equipment issues (cf. AFSC's Electronic Systems Division, which had the SPO), and in October 1962, DoD Directive S-5100.30 "designated 473L as the 'Air Force service headquarters subsystem' of the Worldwide Military Command and Control System (WWMCCS) established the same month."
OTC phase
The "Operational and Training Capability" (OTC) phase by IBM Federal Systems was the first stage of development for the 473 program. Each "Computer Communication Console" by TRW Space Technology Laboratories for OTC was part of the "DC400B/DIB display and interrogation system" that had 2 "10-inch CRT displays together with a sophisticated keyboard" This "temporary 473L system" had an IBM 1401 computer and IBM 1405 Disk Storage Unit. On January 1, 1963, ESD's 473L System Program Office was expanded (473L/492L SPO) with the added 492L responsibility for developing the United States Strike Command's Airborne Communications Center/Command Post (SPOs were separated on June 15, 1965).
OUR phase
As an upgrade before the IOC phase, an IBM 1410 was leased in February 1964, and the IBM 1401 computer was phased out by April. Revision of OTC software for the 1410 computer was carried out under Project OUR (OTC Update and Revision).
IOC phase
The Librascope AN/FYQ-11 Data Processor Set was "a configuration of the L-3055" computer that Librascope manufactured at Glendale, procured for the Initial Operational Capability phase with limited FYQ-11 equipment (e.g., without the OA-6041 Control-Indicator Console) and only "4 integrated consoles". The FYQ-11 had been accepted by the USAF Electronic Systems Division in late March 1965 to replace the IBM 1410 (each FYQ-11 was "234 cu ft [and required] 500 sq ft" of floor area). The FYQ-11 had been proposed on February 19, 1962, for the Complete Operational Capability (dual AN/FYQ-11 sets with only a single OA-6041). COC programs planned for the L-3055 included the "Deployment Monitor", "ACE-Tactical", and "ACE-Transport" (computer-based training on the FYQ-11 was also planned). After FYQ-11 problems, the USAF Chief of Staff in 1966 cancelled the AN/FYQ-11 and the Comptroller was directed to dispose of "the L-3055 system's equipment" (1977 lawsuit claims by Librascope's 1968 parent, The Singer Company, were denied).
Complete operational capability
A second IBM 1410 computer was installed by December 15, 1966, and the entire 473L System included:
AN/FYA-2 Integrated Data Transfer Console
The AN/FYA-2 ("473L Integrated Console") with Logic Keyboard Display (LKB) provided the fully equipped 473L operator environment; by comparison, the AN/FYA-3 did not have a Hard Copy Device (HC) for the Multicolored Display (MC) or a Console Printer (CP), while the AN/FYA-4 had only an Electronic Typewriter/Display (RT) and a CP. The console was run by a Monitor Program in the DPSS, and "operational capabilities [were] exercised via operational capability overlays; that is, via plastic masks fitting over the logic keyboard portion of the operator console." The original COC plan was for DPSS output for 11 MCs and 15 CPs (i.e., 4 of the simplest AN/FYA-4 consoles for printing reports).
Query Language (473L Query)
Query Language was "very similar to the COLINGO query language" and was "a constrained English language…for man-machine communication in System 473L. …to retrieve data from any file in the system or to perform certain other functions." For example, the query for airfields both within Brazil and within a 2000-mile great-circle distance of Brazilia is:
Retrieve airfields with country = Brazil, GCD (Brazilia) < 2000
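The great-circle-distance predicate in such a query can be computed with the haversine formula. The sketch below is purely illustrative (it assumes a spherical Earth of radius 3959 statute miles and uses example coordinates; nothing in it is taken from 473L documentation):

from math import radians, sin, cos, asin, sqrt

def gcd_miles(lat1, lon1, lat2, lon2):
    # Haversine formula for great-circle distance on a sphere.
    p1, p2 = radians(lat1), radians(lat2)
    dphi, dlam = p2 - p1, radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlam / 2) ** 2
    return 2 * 3959 * asin(sqrt(a))

# Brasilia to Rio de Janeiro: roughly 580 statute miles.
print(gcd_miles(-15.79, -47.88, -22.91, -43.17))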
Large Panel Display Subsystem
IT&T was awarded the May 1965 contract for the large 473L display, which was to present information in both black and white and in color. In 1971 an Iconorama was still being used by "NORAD at the Air Force System 473L".
References
Cold War military computer systems of the United States
Equipment of the United States Air Force
United States nuclear command and control
|
44657216
|
https://en.wikipedia.org/wiki/System%20for%20Cross-domain%20Identity%20Management
|
System for Cross-domain Identity Management
|
System for Cross-domain Identity Management (SCIM) is a standard for automating the exchange of user identity information between identity domains, or IT systems.
One example might be that as a company onboards new employees and separates from existing employees, they are added and removed from the company's electronic employee directory. SCIM could be used to automatically add/delete (or, provision/de-provision) accounts for those users in external systems such as G Suite, Office 365, or Salesforce.com. Then, a new user account would exist in the external systems for each new employee, and the user accounts for former employees might no longer exist in those systems.
In addition to simple user-record management (creating & deleting), SCIM can also be used to share information about user attributes, attribute schema, and group membership. Attributes could range from user contact information to group membership. Group membership or other attribute values are generally used to manage user permissions. Attribute values and group assignments can change, adding to the challenge of maintaining the relevant data across multiple identity domains.
The SCIM standard has grown in popularity and importance, as organizations use more SaaS tools. A large organization can have hundreds or thousands of hosted applications (internal and external) and related servers, databases and file shares that require user provisioning. Without a standard connection method, companies must write custom software connectors to join these systems and their IdM system.
SCIM uses a standardised REST API with data formatted in JSON or XML.
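The following minimal sketch shows what provisioning and de-provisioning a user look like over that API. The base URL and bearer token are hypothetical; the /Users paths, the core schema URN, and the application/scim+json media type come from RFCs 7643 and 7644:

import requests

BASE_URL = "https://example.com/scim/v2"  # hypothetical SCIM service provider
HEADERS = {
    "Authorization": "Bearer example-token",  # hypothetical credential
    "Content-Type": "application/scim+json",
}

# Provision a user: POST a SCIM user resource to /Users.
new_user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "jdoe",
    "name": {"givenName": "Jane", "familyName": "Doe"},
    "emails": [{"value": "jdoe@example.com", "primary": True}],
    "active": True,
}
resp = requests.post(f"{BASE_URL}/Users", json=new_user, headers=HEADERS)
user_id = resp.json()["id"]  # server-assigned identifier

# De-provision the same user when they leave: DELETE /Users/{id}.
requests.delete(f"{BASE_URL}/Users/{user_id}", headers=HEADERS)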
History
The first version, SCIM 1.0, was released in 2011 by a SCIM standard working group organized under the Open Web Foundation. In 2011, the work was transferred to the IETF, and the current standard, SCIM 2.0, was released as an IETF RFC in 2015.
SCIM 2.0 was completed in September 2015 and is published as IETF RFCs 7643 and 7644. A use-case document is also available as RFC 7642.
The standard has been implemented in various IdM software.
The standard was initially called Simple Cloud Identity Management (and is still called this in some places), but the name was officially changed to System for Cross-domain Identity Management (SCIM) when the IETF adopted it.
Interoperability was demonstrated in October 2011 at the Cloud Identity Summit, an IAM industry conference. There, user accounts were provisioned and de-provisioned across separate systems using SCIM standards, by a collection of IdM software vendors: Okta, Ping Identity, SailPoint, Technology Nexus and UnboundID. In March 2012, at IETF 83 in Paris, interoperability tests continued by the same vendors, joined by Salesforce.com, BCPSoft, WSO2, Gluu, and Courion (now SecureAuth), nine companies in total.
SCIM is the second standard for exchanging user data, but it builds on prior standards (e.g. SPML, PortableContacts, vCards, and LDAP directory services) in an attempt to be a simpler and more widely adopted solution for cloud services providers.
The SCIM standard is growing in popularity and has been adopted by numerous identity providers (e.g. Azure Active Directory) as well as applications (e.g. Dynamic Signal, Zscaler, and Dropbox). As adoption of the standard grows, so does the number of tools available. A number of open-source libraries facilitate development, and testing frameworks ensure endpoint compliance with the SCIM standard.
References
External links
- This is the working group in IETF that defines the standard.
This site is dedicated to the standard and has explanations and details about how to implement the standard.
Identity management
Open standards
Standards
Technological change
|
26615425
|
https://en.wikipedia.org/wiki/University%20of%20Computer%20Studies%20%28Pakokku%29
|
University of Computer Studies (Pakokku)
|
University of Computer Studies (Pakokku) (formerly Computer University (Pakokku), Government Computer College (Pakokku)) is a public undergraduate university located in Pakokku, Magway Region, Myanmar. Students study various computer disciplines, including hardware, networking, programming, imaging, and artificial intelligence. The university's uniform is a white upper garment with a light blue longyi.
History
Government Computer College (Pakokku) was established on 21 January 2002. It became Computer University (Pakokku) on 20 January 2007 and was later renamed University of Computer Studies (Pakokku).
Degrees
The university offers five-year Bachelor of Computer Science (B.C.Sc) and Bachelor of Computer Technology (B.C.Tech) degree programs.
Departments
Academics are divided into the following departments:
Faculty of Computer Science
Faculty of Computer Systems and Technologies
Faculty of Information Science
Faculty of Computing
Department of Information Technology Supporting & Maintenance
Department of Natural Science
Department of Languages
Library
Practical rooms
The university has practical rooms for English-language listening and for computer and physics practical work. It has a library with mainly computer-related books and journals, as well as more general subject matter.
References
External links
Universities and colleges in Magway Region
|
29776512
|
https://en.wikipedia.org/wiki/Vitaly%20Borker
|
Vitaly Borker
|
Vitaly Borker (born 1975 or 1976 in the former Soviet Union), known by the pseudonyms "Tony Russo", "Stanley Bolds" and "Becky S", is an American convicted felon who has twice served federal prison sentences for charges arising from how he ran his online eyeglass retail and repair sites, DecorMyEyes and OpticsFast. Customers who complained about poor service and misfilled orders for high-end designer eyewear were insulted, harassed, threatened (sometimes physically) and sometimes made the victims of small scams. After going into online retail following a short career as a computer programmer for several Wall Street firms, Borker encountered difficult customers who, he said later, were rude, lied to him and cost him money unnecessarily. He decided to be rude and unscrupulous with them in return, and learned to his surprise that on the Internet there was no such thing as bad publicity. The many posts linking to his site on complaint sites such as Ripoff Report appeared to drive traffic to his sites because of how Google's PageRank algorithm worked at that time, putting his site higher in results for searches on brand names than even those brands' websites, and making him money.
When New York Times reporter David Segal investigated the site in 2010, Borker freely explained this business model to him when Segal came to visit his house in the Brooklyn neighborhood of Sheepshead Bay, where Borker questioned the notion that the customer is always right and said he "like[d] the craziness." A month later Borker was arrested by federal postal inspectors and charged with mail fraud, wire fraud and making interstate threats. He eventually pleaded guilty to fraud charges and making threats and was sentenced to prison for four years. Google and other websites whose flaws he had exploited in running DecorMyEyes also changed their practices and tightened security procedures.
Before entering prison, Borker and a friend had begun setting up another website, OpticsFast, offering not only eyeglasses for sale but also repair services. After his 2015 release, he went back to his former business practices, which he mostly hid from his probation officer. Two years later, Segal reported on Borker's return in the Times, and Borker was again arrested and charged with wire and mail fraud associated with alleged harassment and abuse as operator of OpticsFast. In February 2018, he was sentenced to two years in prison for violation of his 2015 parole. Following a plea deal for the 2017 charges, he was sentenced in 2019 to two years in prison followed by three years of supervised release, a $50,000 fine, and a $300 special assessment.
Following Borker's release in late 2020, Segal reported in the Times in 2021 that Borker appeared to have returned to selling eyeglasses online, under other personal and business names, and harassing dissatisfied customers through a new site called Eyeglassesdepot. If true, this would be a violation of a condition of his 2021 parole that he avoid any involvement in online retailing.
Biography
Borker told New York Times reporter David Segal in 2010 that he was born in Russia and moved to the United States with his parents as a child; exactly how old he was at the time is not definitely known. After graduating from Edward R. Murrow High School in 1989 and John Jay College of Criminal Justice in 1997 he began training as a police officer, working as a cadet in the office of a unit that patrolled public housing in Brooklyn. He soon changed his mind about his career path and enrolled in a school to learn programming.
Although classes were taught in English, Borker recalls that the students and teachers at the school were all Russian immigrants. Students would be taught the bare minimum that would get them hired, and then the school would help them fabricate a résumé and work history to ensure that they were. "There were a lot of schools like this," he said a decade later. "They've all been shut down."
Borker first went to work for a variety of Wall Street financial firms, including Lehman Brothers, where he worked on the systems that administered the accounts of mutual fund shareholders. Dissatisfied with the pay, Borker took a friend up on an offer to build an online version of the friend's eyeglass store. He continued running the online store while he worked for Lehman, drawing lawsuits, and judgments, from luxury brands like Chanel for selling counterfeit glasses. Shortly before Lehman collapsed in the 2008 financial crisis, he left to go into online retailing full time.
DecorMyEyes
The website became DecorMyEyes. Borker became disillusioned with his customers, who he said lied to him and cost him money by changing their minds. "I stopped caring", he says, and began responding rudely to them. This led to postings on review websites disparaging him, which, to his amazement, put DecorMyEyes near the top of Google search results because of the many links to his site. Seeing the value of this perverse incentive, Borker began purposely responding to dissatisfied customers with threats and insults. It was later reported that the site had made Borker $3.2 million in one year.
Customers of DecorMyEyes posted numerous reports of threats of physical violence, abuse, poor service and overcharges on websites such as ResellerRatings, where DecorMyEyes had, as of 2010, a lifetime rating of 1.39/10 from 79 reviews. One customer told authorities that after he complained, someone had called his employer and accused him of dealing drugs. According to Borker, each bad review with a link boosted his site's PageRank, meaning that the site came to the top of Google's results for many of the products he sold. He showed the Times that his site actually came up higher than designer Christian Audigier's own in a search on the designer's name. While a direct Google search for "DecorMyEyes" turned up the site and its many negative reviews, searches for individual products and brands did not.
The reason, according to an anonymous Google publicist, was that the large number of links to DecorMyEyes from consumer complaint sites such as Ripoff Report caused DecorMyEyes to rank highly in Google search results. In 2008 Borker made a post as "Stanley" on Get Satisfaction and other websites like it, thanking users there for the links and the traffic they had brought him. When the website's administrators sent him an email suggesting they work things out, he replied with a photograph of his hand with middle finger extended.
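The mechanism exploited here was the link-counting core of PageRank, which at the time weighed inbound links without regard to the sentiment of the page they came from. The sketch below is a minimal, illustrative implementation of the simplified PageRank recurrence; the page names are hypothetical, and Google's production ranking system was of course far more elaborate.

```python
# Simplified PageRank: PR(p) = (1 - d)/N + d * sum(PR(q)/out(q)) over pages q
# linking to p. Illustrative only; dangling pages simply leak rank here.

def pagerank(links, d=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    n = len(pages)
    pr = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        nxt = {p: (1 - d) / n for p in pages}
        for q, targets in links.items():
            if targets:
                share = pr[q] / len(targets)
                for p in targets:
                    nxt[p] += d * share
        pr = nxt
    return pr

# Hypothetical graph: three complaint pages all link to the store they criticise.
graph = {
    "ripoff-report-thread": ["store"],
    "get-satisfaction-thread": ["store"],
    "forum-complaint": ["store"],
    "store": ["brand-site"],
    "brand-site": [],
}
print(pagerank(graph))  # "store" scores highest even though every link is hostile
```

Because the recurrence only counts links and the rank of their sources, a hostile link raises the target's score exactly as a friendly one would, which is the loophole the algorithm change described below was meant to close.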
Borker said most of his customers were satisfied; he called those who were not "psychos". He questioned the notion that the customer is always right. "[N]ot here, you understand?" he told Segal. "Why is the merchant always wrong? Can the customer ever be wrong? Is that not possible?" He allowed that the stress might well be affecting his health, but doubted he would walk away from the site. "I like the craziness. This works for me", he said, likening himself to radio shock jock Howard Stern.
Borker used other websites in his business. He often sourced his glasses from sellers on eBay and told them simply to ship the glasses to his customers' addresses. If a seller declined, as several did when the address had not been verified by PayPal, he left a negative review on their page, which many wanted to avoid. When sellers blocked Borker, he registered under a different name; after this was reported, eBay barred Borker from the site permanently and instituted other reforms to prevent such tactics and identify abusive buyers. Borker also maintained a store on Amazon.com under a different name, where he was much more fastidious in his dealings with customers, since that platform was willing to remove sellers if it got enough customer complaints.
Since credit card companies, in their agreements with sellers, can cancel service if they receive enough "chargebacks", or buyer disputes, every month, Borker told the Times he tried to make sure he avoided alienating too many customers at once. Some customers said that he threatened them into dropping disputes, sometimes suggesting he was willing to employ physical violence and emailing them pictures of their houses from Google Earth. One told Segal that her bank dropped the dispute and reinstated her charge after a woman claiming to be her called and said she no longer wished to pursue it.
Google responded to the Times story by writing an algorithm that "detects the merchant from the Times article along with hundreds of other merchants that, in our opinion, provide an extremely poor user experience" and significantly reduces their search visibility on product searches. MasterCard had dropped DecorMyEyes in 2009 due to excessive chargebacks. Borker regained access to their network by using a different bank. The company told the Times that he should not have been able to do this as he was supposed to have been placed on an internal blacklist; in the wake of the story it told Segal that it had not only put him on that list but increased its safeguards to make sure that those who were supposed to be blacklisted were. Borker responded that it was impossible to shut people down completely online. "I'd use the name of a friend of mine", he suggested. "Give him 1 percent."
2010 arrest and federal charges
A week after the Times story, Borker was arrested by agents of the United States Postal Inspection Service on charges of mail fraud, wire fraud, making interstate threats and cyberstalking, and arraigned in the United States District Court in Manhattan. Bail was denied on the basis that he was a threat to the community. A search of his house turned up a stock of counterfeit eyeglasses and 8mm replica guns. State charges were dismissed. After months of confinement in the Metropolitan Detention Center in Brooklyn, Borker was freed in April 2011 upon posting a $1 million bond and accepting restrictions that included surveillance by a security guard in his home, at a cost to him of a thousand dollars a day.
In May 2011, Borker pleaded guilty in Federal District Court in Manhattan to two counts of interstate communication of threats, one count of mail fraud and one count of wire fraud. In September 2012, Borker was sentenced to four years in federal prison and ordered to pay nearly $100,000 in fines and restitution. He was released in 2015.
2017: OpticsFast
Before his imprisonment, Borker had a friend, Michael Voller, create a successor site, OpticsFast, with another friend's name on the incorporation papers; after his release, Borker reassumed control of it, this time offering to repair glasses as well as sell new ones. Members of Voller's family furthered the scheme by telling Borker's probation officer that he was working for their family business and fabricating documentation to support that. Borker resumed his tactics of selling customers cheap counterfeits of luxury brand eyewear and then insulting and harassing them if they complained or attempted to return or exchange the merchandise, again with the goal of driving traffic to his site through the links from online complaints, this time primarily on Yelp.
Borker is not known to have physically threatened customers during this period, although he did charge one customer for a mailing label he claimed to have printed after the customer decided to cancel his order, a tactic he later admitted was fraudulent when allocuting. Communications sent by Borker under the name "Becky S", Judge Paul G. Gardephe noted, were the source of many of the complaints lodged against OpticsFast. "[C]ustomers describe interactions with OpticsFast employees that appear irrational if not imbalanced", he wrote in sentencing Borker. "Indeed, many of the customers who filed complaints appear to have done so more because of the disturbing nature of th[ose] interactions ... rather than because of any loss they suffered from doing business with OpticsFast."
Some customers did face offline harassment. A Southern California woman, who recognized the email she received as having come from someone involved with DecorMyEyes because it referenced an order she had placed nearly a decade earlier, recalls being told on the phone by a person purporting to be a police officer that she needed to report to the police station at once because a civil harassment suit had been filed against her; she declined after asking why the police were involved in a civil matter. The credit card she had used for her eyewear purchase was then used to make a series of purchases around Brooklyn.
The New York Times reported on Borker's apparent return, noting that OpticsFast had been active while he was in prison. Search engine expert Doug Pierce, consulted by the newspaper, found the code and HTML running the OpticsFast page to have been substantially similar to DecorMyEyes until 2016, shortly after Borker's release. Both domains had the same owner, but since the sites had been active while Borker was imprisoned, the Times could not say for certain that he was involved in the new site.
In May 2017, a month after the story ran, Borker was arrested again and charged with wire and mail fraud associated with alleged harassment and abuse as operator of OpticsFast. Voller, also originally indicted, turned state's evidence against Borker and had most charges dropped; he was sentenced to time served in late 2020. Borker has sued him and others in state court alleging breach of contract.
Joon H. Kim, acting United States attorney for the Southern District of New York, said that "Borker's shameless brand of alleged abuse cannot be tolerated, and we are committed to protecting consumers from becoming victims of such criminal behavior". Borker's lawyer stated his client would "plead not guilty and defend himself against the charges", which carried a maximum penalty of 20 years' imprisonment.
Plea and sentencing
In February 2018, Borker was sentenced to two years for violating the terms of the parole that followed his 2015 release, which forbade him from lying to his parole officers; he had done so when he repeatedly denied having started another online eyewear retail site.
A month later, Borker pleaded guilty to wire and mail fraud charges in exchange for a reduced sentence. CNBC devoted an episode of American Greed to Borker in June. In April 2019, he was sentenced to two years in prison, to be followed by three years of supervised release, a $50,000 fine, and a $300 special assessment. He was released from prison in November 2020.
2021: Eyeglassesdepot
In 2021, the Times reported that Borker may have again returned to selling eyeglasses online and harassing dissatisfied customers, this time through a site named Eyeglassesdepot. Customers reported similar behavior when they complained or tried to return merchandise, including threats, insults and online doxxing that exposed not only their credit card numbers but the cards' security codes, as well as attempts to intimidate them into paying for printed mailing labels after they changed their minds. They said the company's representative identified themselves as "Arsenio".
This time, Trustpilot hosted many of the negative reviews. The owner of Eyeglassesdepot seemed to more genuinely fear the consequences of bad reviews there. He followed up one customer's complaints about his behavior with a post claiming it was a fake review posted by a competing site, along with the complainant's home address and cell phone number. He also threatened to post multiple fake positive reviews for every negative one.
Trustpilot management investigated after receiving complaints, and found almost half of the positive reviews of Eyeglassesdepot were fake. It removed them and sent Eyeglassesdepot an email asking that it cease and desist from further such behavior. "Yeah whatever" read the reply.
Pierce believes that Eyeglassesdepot and OpticsFast, which look very similar and share common third-party tags, have the same owner. Pierce concluded, "Whoever created Eyeglassesdepot simply cloned OpticsFast, perhaps in the interest of saving time and money, and then made a few cosmetic changes". Pierce allowed that that individual may not have been Vitaly Borker, "But who else would steal the code from a website as notorious as OpticsFast?"
Borker's attorney denied that his client was Arsenio. If Borker were to be involved in any way with Eyeglassesdepot, that by itself would be enough to send him back to prison, since the terms of his most recent parole forbid him from any involvement whatsoever with online retail.
Personal life
At the time of his first arrest, Borker was married and had a daughter born in 2008.
In 2019, during sentencing in the OpticsFast case, Gardephe noted that Borker had been diagnosed as suffering from narcissistic personality disorder, bipolar disorder and obsessive-compulsive disorder.
See also
Criticism of Google
Google search optimization
List of American Greed episodes
List of Ukrainian Americans
Notes
References
External links
ResellerRatings reviews of DecorMyEyes
Better Business Bureau report of DecorMyEyes
Year of birth uncertain
American computer criminals
American computer programmers
American counterfeiters
Black hat search engine optimization
Cyberbullying
Internet fraud
John Jay College of Criminal Justice alumni
Living people
American people convicted of mail and wire fraud
People with bipolar disorder
People with narcissistic personality disorder
People with obsessive–compulsive disorder
People from Sheepshead Bay, Brooklyn
Prisoners and detainees of the United States federal government
Ukrainian computer criminals
Ukrainian computer programmers
Ukrainian counterfeiters
Ukrainian expatriates in the United States
Ukrainian fraudsters
Ukrainian people imprisoned abroad
American businesspeople convicted of crimes
|
38829220
|
https://en.wikipedia.org/wiki/Max%20Tuerk
|
Max Tuerk
|
Max Tuerk (January 27, 1994 – June 20, 2020) was an American professional football player who was a center in the National Football League (NFL). He played college football for the USC Trojans. He was a first-team all-conference selection in the Pac-12 in 2014. Tuerk was selected in the third round of the 2016 NFL Draft by the San Diego Chargers. He spent his rookie year with the Chargers and split time in his second and final NFL season with the Chargers and the Arizona Cardinals.
Early years
A native of Trabuco Canyon, California, Max Tuerk was the eldest of Greg and Valerie Tuerk's four children. He had a younger brother Drake, and two sisters, Natalie and Abby. His father played tight end for Brown University in the early 1980s.
Max Tuerk attended Santa Margarita Catholic High School, where he was a two-way lineman. His team won the CIF Pac-5 title (i.e., the Southern California championship) and CIF Division I state title in 2011. (Division I was the second-highest of five divisions at the time.) He also participated in track & field. Regarded as a four-star recruit by Rivals.com, Tuerk was listed as the No. 7 offensive tackle prospect in his class.
College career
In his true freshman year at USC, Tuerk emerged as the starting left offensive tackle for USC's final five regular season games of 2012 after serving as a backup earlier in the season. When he got his first start at Arizona, he became the first USC true freshman ever to start at left tackle (and the first to start at tackle since Winston Justice in the last 12 games of 2002). Tuerk was named a Freshman All-American by College Football News. He played in every game in his first three seasons and was named to many all-star teams. Tuerk played all three interior line positions, but was primarily a center. He was a team captain in both his junior and senior years. Tuerk was considered one of the top centers in college football at the beginning of his senior year, but his season was cut short after five games when he tore an anterior cruciate ligament.
Professional career
San Diego / Los Angeles Chargers
Tuerk was unable to participate in most post-season draft combine drills because of his knee injury, although he was able to do the bench press test. He was drafted by the San Diego Chargers in the third round, 66th overall, in the 2016 NFL Draft.
Tuerk made the Chargers' roster in 2016 but was inactive for all 16 regular-season games, although he did play during the pre-season. Tuerk was suspended for the first four games of the 2017 season after violating the NFL policy on performance-enhancing substances. After being reinstated from suspension on October 3, 2017, he was released by the Chargers. He was re-signed to the Chargers practice squad on October 26, 2017.
Arizona Cardinals
On November 6, 2017, Tuerk was signed by the Arizona Cardinals off the Chargers' practice squad. On December 24, 2017, Tuerk made his only appearance in a regular-season NFL game on one play in Week 16, as the Cardinals beat the New York Giants, 23–0.
On April 12, 2018, Tuerk was released by the Cardinals.
Death
On June 20, 2020, Tuerk collapsed and died while on a hike with his parents in the Cleveland National Forest; an autopsy report revealed an enlarged heart as the cause of death. According to his family, Tuerk had struggled with mental illness during and after his pro football career. His mother Valerie Tuerk told the Los Angeles Times that the family had arranged to have his brain tissue sent to Boston University's Chronic Traumatic Encephalopathy (CTE) Center. She said: "That was very important to us, because we feel that CTE probably had some impact on Max".
A memorial service was held for Tuerk's friends and family on a beach near his home on Saturday, June 27, 2020, exactly one week after his death.
References
External links
NFL Combine profile
USC Trojans bio
1994 births
2020 deaths
American football offensive linemen
Arizona Cardinals players
Los Angeles Chargers players
Players of American football from California
San Diego Chargers players
Sportspeople from Orange County, California
USC Trojans football players
Deaths from cardiomyopathy
|
21170624
|
https://en.wikipedia.org/wiki/JooJoo
|
JooJoo
|
The JooJoo was a Linux-based tablet computer produced by the Singapore development studio Fusion Garage. Originally, Fusion Garage was working with Michael Arrington to release it as the CrunchPad, but in November 2009 Fusion Garage informed Arrington it would be selling the product alone. Arrington responded by filing a lawsuit against Fusion Garage.
History
Crunchpad
The CrunchPad project was started by Michael Arrington in July 2008, initially aiming for a US$200 tablet, and showed a first prototype (Prototype A) a month later.
In early 2009, the working Prototype B was introduced by the TechCrunch team led by Louis Monier, based on a 12-inch LCD screen, a VIA Nano CPU, Ubuntu Linux and a custom WebKit-based browser. The device was rapidly prototyped by Dynacept, and a customized version of the Ubuntu distribution was compiled by Fusion Garage. After Prototype B was announced, interest grew in bringing the tablet into production. Louis Monier worked closely with Fusion Garage as the team's lead designer.
April 9, 2009 - Prototype C is shown, looking very much like the original concept pictures. Michael Arrington wrote that the hardware, software and industrial design improvements seen in Prototype C were all driven by Fusion Garage: "... one thing I’ve learned about hardware in the last year is that you need partners to actually make things happen, and the credit for what we saw today goes entirely to the Fusion Garage team," he said.
June 3, 2009 - near-final industrial design
November 17, 2009 - Fusion Garage CEO Chandra Rathakrishnan emails Techcrunch, and informs them "out of the blue" that Fusion Garage's investors want to pull out of the partnership, and that they are under the impression that Techcrunch does not own rights to the project, but are simply helping advertise it.
Initially in 2008, $200 was mentioned as the target price-point. In the first half of 2009, $300 was mentioned as more likely.
By the end of July 2009, news stories said the actual price when it would ship in November 2009 would be about $400, putting it in potential competition with netbooks and low-end laptops.
The project generated some press and was mentioned in Washington Post and other media.
In July 2009, it was reported that Arrington founded a company of 14 employees around the tablet (Crunchpad Inc.) in Singapore, and that there would be a public presentation of a finished product later in the month.
By late September 2009, the lack of publicity on the CrunchPad led Dan Frommer of The Business Insider to ask, in an article headline, "Where's The CrunchPad?" Apple and Microsoft were rumored to be working on new tablet computers, receiving more media coverage.
In early October 2009, Popular Mechanics magazine recognized the CrunchPad as one of its "10 Most Brilliant Products of 2009", described as "the top 10 most brilliant gadgets, tools and toys that you can buy in 2009." Other organizations questioned the appropriateness of the award, as the CrunchPad was not available for purchase at publication time.
On the November 12, 2009, Gillmor Gang podcast, Michael Arrington announced that the product was "steamrolling along", that rumors of high prices were untrue, and that it would probably retail for US$300–400, likely subsidised by sponsored features that would not negatively impact the user experience (similar to Firefox's search bar).
On August 15, 2011, the successor to the JooJoo and a new smartphone were announced after "TabCo", a made-up company, unveiled itself as being, in fact, Fusion Garage. The announcement included a tablet, the Grid 10 (a 10.1-inch tablet), and a smartphone, the Grid 4 (a 4-inch smartphone), both running GridOS, a fork of the Android operating system.
Crunchpad manifesto
The project's founding manifesto, "We Want A Dead Simple Web Tablet For $200. Help Us Build It.", was published by Michael Arrington on July 21, 2008.
No further commitments were made in 2009 about making the design open and public, which would make it easier to add additional features such as a standard keyboard connector and increased storage.
JooJoo
On November 30, 2009, Michael Arrington announced that the CrunchPad project was dead. Three days prior to the planned debut, Fusion Garage CEO Chandra Rathakrishnan had informed him that Fusion Garage would be proceeding to sell the pad alone. Arrington claimed that the intellectual property was shared between the two companies, so Fusion Garage could not legally proceed alone. He said his side "will almost certainly be filing multiple lawsuits against Fusion Garage, and possibly Chandra and his shareholders as individuals, shortly".
On December 7, 2009, Fusion Garage CEO Chandra Rathakrishnan announced that he was releasing what had been developed as the CrunchPad under the new name "JooJoo", and that it would be available for pre-sale on December 11, 2009, for US$499.
On December 10, 2009, Michael Arrington and TechCrunch filed a lawsuit against Fusion Garage in federal court.
On February 1, 2010, Fusion Garage CEO Chandrasekar Rathakrishnan announced that JooJoo pre-orders had increased following the debut of the Apple iPad, and that additional funding of $10 million had been obtained. He also announced that Fusion Garage was in the process of forming a partnership with a mobile phone manufacturer that would handle the production of the device.
On February 3, 2010, Fusion Garage announced that the manufacturing of JooJoo tablets had begun as part of a new agreement with CSL Group. In exchange for absorbing manufacturing costs of the JooJoo, CSL Group would take a percentage of profits from the sale of the devices. CEO Chandrasekar Rathakrishnan stated that JooJoo shipments would reach customers by late February, and that the device would support Adobe Flash at launch.
On February 26, 2010, Fusion Garage announced a manufacturing delay of the JooJoo tablet, citing an issue fine tuning the touch sensitivity of the capacitive screen. JooJoo tablets are now to ship out on March 25, 2010, and all pre-order customers are to be provided with a free accessory to compensate for the delay.
On November 11, 2010, Fusion Garage announced that the JooJoo tablet in its current iteration was at "its end of life" and that the company would be exploring several new platforms that would not have backward compatibility.
On December 19, 2011, rumors circulated that Fusion Garage would discontinue business and might be bankrupt.
On January 9, 2012, Fusion Garage confirmed that the company had gone into liquidation owing creditors $40 million.
Litigation
On November 30, 2009, Arrington said the CrunchPad project had ended in disagreement between himself and Fusion Garage. On December 7, 2009, Fusion Garage CEO Chandra Rathakrishnan said his company would release the CrunchPad as the JooJoo, and that customers could preorder it on December 11, 2009, for US$499. On December 10, 2009, Arrington and TechCrunch filed a lawsuit against Fusion Garage in U.S. federal court, accusing the firm of fraud and deceit, misappropriation of business ideas, breach of fiduciary duty, unfair competition, and violations of the Lanham Act. On March 30, 2010, filings in the lawsuit revealed that only 90 pre-orders for the JooJoo had been placed before it began shipping.
Kernel hacker Matthew Garrett filed a complaint with US Customs and Border Protection against Fusion Garage for copyright infringement, since the company shipped GPL software without making the required offer of source code. The issue was resolved in January 2011 when Fusion Garage started providing the required source code at their web site.
See also
Tablet PC
Adam tablet
ExoPC
HP Slate
iPad
Sakshat Tablet
WeTab
References
External links
TechCrunch: About Those New CrunchPad Pictures (April 10, 2009)
The Business Insider.com
The End of the CrunchPad (November 30, 2009)
Tablet computers
Linux-based devices
|
58623686
|
https://en.wikipedia.org/wiki/Servelec
|
Servelec
|
Servelec is a health informatics company based in Victoria Quays in Sheffield. It supplies software to the healthcare, social care and education sectors.
The company comprises:
Servelec Health and Social Care (known for EPR, PAS, RiO, Oceano and Flow)
Servelec Technologies (SCADA)
Servelec Controls (mission-critical control and safety systems)
Servelec Education (Synergy)
Revenue for 2016 was £61.0 million, compared with £63.1 million in 2015. The operating profit was £14.6 million, compared with £16.2 million in 2015.
Ownership
In 2013 the company was floated on the stock exchange by CSE Global, which had owned it since 2000, with an expected valuation of £122 million. At that time it concentrated on software and control systems for utilities, broadcasters, lighthouses and North Sea oil rigs. It was then listed on the FTSE SmallCap Index. In January 2018 Scarlet Bidco, on behalf of Montagu Private Equity, bought the Group for £223.9 million.
In 2014 it bought Corelogic, a social care case management software provider with more than 65,000 end users, for £23.5 million.
The company was bought by The Access Group in August 2021.
Healthcare
RiO is an electronic patient record which is accessible through smartphones and tablets. This enables practitioners to access patient records remotely and in real-time.
Microtest Health's Open Evolution system is integrated with Servelec's RiO electronic patient record, which is widely used within mental health, community health and child health care settings. Servelec plans further integration with its social care case management system, Mosaic. RiO also integrates with Totalmobile's mobile workforce software, so staff can save time by accessing patient information from the electronic patient record using a smartphone or tablet.
United Kingdom
Bournewood Community and Mental Health Trust deployed a Servelec system in only eight months in 1999, considerably quicker than was common at the time.
University Hospitals Birmingham NHS Foundation Trust developed a new patient administration system, OceanoPAS, in a four-year partnership with the company, bringing in more than 1 million patient records, 1.8 million outpatient appointments, 248,000 inpatient movements and 3,836 clinics. It went live in August 2017 and was functional with real-time clinic reporting from day one. The trust's director of strategic operations said the roll-out was "a phenomenal achievement". It was then made available commercially to other organisations.
Lancashire Care NHS Foundation Trust deployed a new Servelec RiO electronic patient record in March 2018.
In 2018/19 Servelec Group, acquired Careervision Holdings Limited, a provider of case management and information solutions to children and young people’s services teams.
Its Flow digital bed management solution was installed at Berkshire Healthcare NHS Foundation Trust in 2021, integrated with the existing RiO electronic patient record.
Ireland
St Patrick's Mental Health Services installed the first mental health electronic health record in Ireland, using the company's RiO EHR, in September 2017 under the internal title "eSwift", referencing both Jonathan Swift, the founder of St Patrick's, and the faster sharing of electronic records.
It integrates inpatient, outpatient and day patient services. It plans to incorporate an online portal allowing service users to view parts of their own health record.
The company was one of the partners in the award-winning Whiteboard Solution Project at St. Vincent's University Hospital in 2017, providing an onsite presence on the wards for 4 weeks from go-live.
Control and safety systems
Servelec Controls has been selected as a key developer of the safety software for the control systems, personnel safety systems and machine protection systems of the European Spallation Source. Bristol Water uses the company's decision support software PIONEER for planning and risk management.
It opened a subsidiary in Chile in 2016.
References
Electronic health record software companies
Companies based in Sheffield
Private providers of NHS services
|
33980
|
https://en.wikipedia.org/wiki/Waterfall%20model
|
Waterfall model
|
The waterfall model is a breakdown of project activities into linear sequential phases, where each phase depends on the deliverables of the previous one and corresponds to a specialization of tasks. The approach is typical for certain areas of engineering design. In software development, it tends to be among the less iterative and flexible approaches, as progress flows in largely one direction ("downwards" like a waterfall) through the phases of conception, initiation, analysis, design, construction, testing, deployment and maintenance.
The waterfall development model originated in the manufacturing and construction industries, where highly structured physical environments meant that design changes became prohibitively expensive much sooner in the development process. When first adopted for software development, there were no recognised alternatives for knowledge-based creative work.
History
The first known presentation describing the use of such phases in software engineering was given by Herbert D. Benington at the Symposium on Advanced Programming Methods for Digital Computers on 29 June 1956. This presentation was about the development of software for SAGE. In 1983 the paper was republished with a foreword by Benington explaining that the phases were deliberately organised according to the specialisation of tasks, and pointing out that the process was not in fact performed in a strict top-down fashion, but depended on a prototype.
Although the term "waterfall" is not used in the paper, the first formal detailed diagram of the process later known as the "waterfall model" is often cited as a 1970 article by Winston W. Royce. However, Royce also felt the approach had major flaws stemming from the fact that testing only happened at the end of the process, which he described as being "risky and invites failure". The rest of his paper introduced five steps which he felt were necessary to "eliminate most of the development risks" associated with the unaltered waterfall approach.
Royce's five additional steps (which included writing complete documentation at various stages of development) never took mainstream hold, but his diagram of what he considered a flawed process became the starting point when describing a "waterfall" approach.
The earliest use of the term "waterfall" may have been in a 1976 paper by Bell and Thayer.
In 1985, the United States Department of Defense captured this approach in DOD-STD-2167A, their standards for working with software development contractors, which stated that "the contractor shall implement a software development cycle that includes the following six phases: Software Requirement Analysis, Preliminary Design, Detailed Design, Coding and Unit Testing, Integration, and Testing".
Model
In Royce's original waterfall model, the following phases are followed in order:
System and software requirements: captured in a product requirements document
Analysis: resulting in models, schema, and business rules
Design: resulting in the software architecture
Coding: the development, proving, and integration of software
Testing: the systematic discovery and debugging of defects
Operations: the installation, migration, support, and maintenance of complete systems
Thus the waterfall model maintains that one should move to a phase only when its preceding phase is reviewed and verified.
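As a concrete illustration of that gating rule, the sketch below models the phases as a strict sequence in which each phase starts only after its predecessor passes review. This is an illustrative toy, not drawn from Royce's paper; the phase names follow the list above, and the review function is a stand-in for whatever verification a project actually uses.

```python
# Toy model of strict waterfall gating: a phase may begin only once its
# predecessor has been reviewed and verified. Illustrative only.

PHASES = ["requirements", "analysis", "design", "coding", "testing", "operations"]

def run_waterfall(review):
    """review(phase) -> bool; returns the list of phases that passed their gate."""
    completed = []
    for phase in PHASES:
        print(f"starting {phase} (all earlier phases verified)")
        if not review(phase):
            print(f"review failed at {phase}; no later phase ever starts")
            break
        completed.append(phase)
    return completed

# Example: a failed design review halts the project before any code is written.
run_waterfall(lambda phase: phase != "design")
```

The modified models described next relax exactly this property, allowing a failed downstream phase to loop back to design or requirements rather than simply halting.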
Various modified waterfall models (including Royce's final model), however, can include slight or major variations on this process. These variations included returning to the previous cycle after flaws were found downstream, or returning all the way to the design phase if downstream phases were deemed insufficient.
Supporting arguments
Time spent early in the software production cycle can reduce costs at later stages. For example, a problem found in the early stages (such as requirements specification) is cheaper to fix than the same bug found later on in the process (by a factor of 50 to 200).
In common practice, waterfall methodologies result in a project schedule with 20–40% of the time invested for the first two phases, 30–40% of the time to coding, and the rest dedicated to testing and implementation. The actual project organisation needs to be highly structured. Most medium and large projects will include a detailed set of procedures and controls, which regulate every process on the project.
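Applied to a hypothetical 12-month project, the schedule split cited above works out as in the short sketch below; the midpoint percentages are illustrative assumptions taken from the quoted ranges, not figures from the source.

```python
# Worked example of the cited schedule split, using midpoints of the quoted
# ranges (20-40% for the first two phases, 30-40% for coding) on a
# hypothetical 12-month project. The exact percentages are illustrative.

project_months = 12
split = {
    "requirements and design": 0.30,    # midpoint of 20-40%
    "coding": 0.35,                     # midpoint of 30-40%
    "testing and implementation": 0.35, # the remainder
}
for phase, share in split.items():
    print(f"{phase}: {share * project_months:.1f} months")
```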
A further argument for the waterfall model is that it places emphasis on documentation (such as requirements documents and design documents) as well as source code. In less thoroughly designed and documented methodologies, knowledge is lost if team members leave before the project is completed, and it may be difficult for a project to recover from the loss. If a fully working design document is present (as is the intent of Big Design Up Front and the waterfall model), new team members or even entirely new teams should be able to familiarise themselves by reading the documents.
The waterfall model provides a structured approach; the model itself progresses linearly through discrete, easily understandable and explainable phases and thus is easy to understand; it also provides easily identifiable milestones in the development process. It is perhaps for this reason that the waterfall model is used as a beginning example of a development model in many software engineering texts and courses.
Criticism
Clients may not know exactly what their requirements are before they see working software and so change their requirements, leading to redesign, redevelopment, and retesting, and increased costs.
Designers may not be aware of future difficulties when designing a new software product or feature, in which case it is better to revise the design than persist in a design that does not account for any newly discovered constraints, requirements, or problems.
Organisations may attempt to deal with a lack of concrete requirements from clients by employing systems analysts to examine existing manual systems and analyse what they do and how they might be replaced. However, in practice, it is difficult to sustain a strict separation between systems analysis and programming. This is because implementing any non-trivial system will almost inevitably expose issues and edge cases that the systems analyst did not consider.
In response to the perceived problems with the pure waterfall model, modified waterfall models were introduced, such as "Sashimi (Waterfall with Overlapping Phases), Waterfall with Subprojects, and Waterfall with Risk Reduction".
Some organisations, such as the United States Department of Defense, now have a stated preference against waterfall-type methodologies, starting with MIL-STD-498, which encourages evolutionary acquisition and Iterative and Incremental Development.
While advocates of agile software development argue the waterfall model is an ineffective process for developing software, some sceptics suggest that the waterfall model is a false argument used purely to market alternative development methodologies.
Rational Unified Process (RUP) phases acknowledge the programmatic need for milestones, for keeping a project on track, but encourage iterations (especially within Disciplines) within the Phases. RUP Phases are often referred to as "waterfall-like".
Modified waterfall models
In response to the perceived problems with the "pure" waterfall model, many modified waterfall models have been introduced. These models may address some or all of the criticisms of the "pure" waterfall model.
These include the Rapid Development models that Steve McConnell calls "modified waterfalls": Peter DeGrace's "sashimi model" (waterfall with overlapping phases), waterfall with subprojects, and waterfall with risk reduction. Other software development model combinations such as "incremental waterfall model" also exist.
Royce's final model
Winston W. Royce's final model, his intended improvement upon his initial "waterfall model", illustrated that feedback could (should, and often would) lead from code testing to design (as testing of code uncovered flaws in the design) and from design back to requirements specification (as design problems may necessitate the removal of conflicting or otherwise unsatisfiable / undesignable requirements). In the same paper Royce also advocated large quantities of documentation, doing the job "twice if possible" (a sentiment similar to that of Fred Brooks, famous for writing The Mythical Man-Month, an influential book on software project management, who advocated planning to "throw one away"), and involving the customer as much as possible (a sentiment similar to that of extreme programming).
Royce's notes on the final model are:
Complete program design before analysis and coding begins
Documentation must be current and complete
Do the job twice if possible
Testing must be planned, controlled and monitored
Involve the customer
See also
List of software development philosophies
Agile software development
Big Design Up Front
Chaos model
DevOps
Iterative and incremental development
Object-oriented analysis and design
Rapid application development
Software development process
Spiral model
Structured Systems Analysis and Design Method (SSADM)
System development methodology
Traditional engineering
V-model
References
External links
Understanding the pros and cons of the Waterfall Model of software development
Project lifecycle models: how they differ and when to use them
Going Over the Waterfall with the RUP by Philippe Kruchten
CSC and IBM Rational join to deliver C-RUP and support rapid business change
c2:WaterFall
Software development philosophies
Project management
Design
|
16974042
|
https://en.wikipedia.org/wiki/Stickybear
|
Stickybear
|
Stickybear is a fictional character created by Richard Hefter, and the name of an edutainment series starring the character produced by Optimum Resource, Inc. The character was a mascot of Weekly Reader Software, a division of Xerox Education Publications.
Software of the series has been released since the early 1980s; software programs originated on the Apple II platform and were released for IBM PC, Atari 8-bit and Commodore 64 platforms.
As of 2008 the most recent Stickybear software was developed for Windows XP/Windows Vista and Mac OS X.
Books with Stickybear
Babysitter Bears (1983)
Bears at Work (1983)
Lots of Little Bears: A Stickybear Counting Book (1983)
Stickybear Watch Out: The Stickybear Book of Safety (1983)
Stickybear Book of Weather (1983)
Where is the Bear? (1983)
Stickybears Scary Night (1984)
Software with Stickybear
The earliest software programs included picture books in colors and posters.
Stickybear Alphabet (IBM-PC, Apple II) (Some versions included the book The Strawberry Look Book)
Stickybear Numbers
Stickybear Bop
Stickybear Shapes
Stickybear Math (Commodore 64, IBM-PC, Apple II, Philips CD-i)
Stickybear Math 2 (IBM-PC)
Stickybear Opposites (IBM-PC)
Stickybear Reading (Commodore 64, IBM-PC, Philips CD-i)
Stickybear Early Learning Activities (Windows, Apple Macintosh Classic, Windows XP/Windows Vista, Mac OS X)
At Home With Stickybear
Stickybear Kindergarten Activities
Stickybear Math 1 Deluxe (Windows XP/Windows Vista, Mac OS X)
Stickybear Spellgrabber (Commodore 64, Mac OS 7)
Stickybear Typing (Commodore 64, IBM-PC)
Stickybear's Reading Room (Mac OS 7)
Stickybear Preschool (Philips CD-i)
Stickybear Family Fun Game (Philips CD-i)
Reception
II Computing listed Stickybear tenth on the magazine's list of top Apple II educational software as of late 1985, based on sales and market-share data.
Peter Mucha of the Houston Chronicle reviewed IBM versions of Stickybear in 1990; Stickybear Opposites received a B-, Stickybear Math received a B, Stickybear Math 2 received a B, Stickybear Alphabet received an A-, and Stickybear Reading received a C.
The New Talking StickyBear Alphabet won the Best Early Education Program 1989 Excellence in Software Award from the Software and Information Industry Association.
Leslie Eiser of Compute! magazine said in a 1992 review that StickyBear Town Builder was dated compared to other games of its time.
Computer Gaming World in 1993 said of Stickybear's Early Learning Activities, "In the world of early learning software, it's difficult to find anyone who does it better."
References
External links
Stickybear ABC for the Apple II at the Internet Archive
Children's educational video games
Video games developed in the United States
|
5392532
|
https://en.wikipedia.org/wiki/University%20of%20Murcia
|
University of Murcia
|
The University of Murcia () is the main university in Murcia, Spain. With 38,000 students, it is the largest university in the Región de Murcia. The University of Murcia is the third oldest university in Spain, after the University of Salamanca (1218 AD) and the University of Valladolid (1241 AD), and the thirteenth in the world. The University of Murcia was established in 1272 by the King Alfonso X of Castile under the Crown of Castile.
The majority of the University's facilities and buildings are spread over two campuses: the older is La Merced, situated in the town centre, and the larger is Espinardo, just 5 km to the north of Murcia. A third campus for medical and health studies is currently being built next to the suburban area known as Ciudad Sanitaria Virgen de la Arrixaca, 5 km south of the city. A new campus has also been built in San Javier, which hosts the Sports Science faculty.
History
The first university in Murcia was founded as the Universitas Studiorum Murciana by Alfonso X of Castile around 1272. The current modern University of Murcia was founded in 1915, making it the tenth oldest university in Spain among the modern universities, but its seal carries the date of the thirteenth century founding.
Campuses
The University of Murcia has two campuses: La Merced, the original campus in the center of the city; and the larger Espinardo, 5 km to the north, which houses most students.
A third campus for medical and health studies is currently being built in the Murcia neighborhood of El Palmar, next to the hospital Ciudad Sanitaria Virgen de la Arrixaca, 5 km south of Murcia's city center. A fourth campus is in the beginning stages in San Javier. Another one, in Lorca, is expected to open in 2007.
Degrees
Categorized by faculties and university schools:
Faculties
Faculty of Sports Science
Honours Degree in Physical activity and Sports Science
Faculty of Fine Arts
Honours Degree in Fine Arts
Faculty of Biology
Honours Degree in Biology
Honours Degree in Environmental Science
Honours Degree in Biotechnology
Faculty of Documentation Science
Diploma in Library Sciences and Documentation
Honours Degree in Documentation
Honours Degree in Journalism
Honours Degree in Advertising and Public Relations
Faculty of Industrial Sciences
Diploma in Industrial Relations
Honours Degree in Industrial Science
Faculty of Economics and Business
Diploma in Management Science
Honours Degree in Administration and Management
Honours Degree in Economics
Honours Degree in Market Technology and Research
Honours Degree in Sociology
Faculty of Law
Honours Degree in Law
Honours Degree in Political and Administration Sciences
Diploma in Public Management and Administration
Combined Honours Degree in Law with Administration and Management
Honours Degree in Criminology
Faculty of Education
Diploma in Social Education
Honours Degree in Education
Honours Degree in Education Psychology
Teaching: Specialising in Special Needs Education
Teaching: Specialising in Physical Education
Teaching: Specialising in Infant Education
Teaching: Specialising in Music Education
Teaching: Specialising in Primary Education
Teaching: Specialising in Foreign Languages (Specialities in English and French)
Faculty of Philosophy
Honours Degree in Philosophy
Faculty of Computer Sciences
Engineer in Computer Science
Technical Engineer in Computer Science (Management)
Technical Engineer in Computer Science (Systems)
Arts Faculty
Honours Degree in Classics
Honours Degree in French
Honours Degree in Spanish
Honours Degree in English
Honours Degree in Geography
Honours Degree in History
Honours Degree in History of Art
Honours Degree in Translation and Interpreting
Faculty of Mathematics
Honours Degree in Mathematics
Faculty of Medicine
Honours Degree in Medicine
Honours Degree in Dentistry
Honours Degree in Pharmacy
Diploma in Physiotherapy
Degree in Nursing (2009)
Faculty of Psychology
Honours Degree in Psychology
Honours Degree in Speech Therapy
Faculty of Chemistry
Diploma in Optics and Optometry
Honours Degree in Chemical Engineering
Honours Degree in Biochemistry
Honours Degree in Physics
Honours Degree in Chemistry
Faculty of Veterinary Science
Honours Degree in Veterinary Science
Honours Degree in Science and Food Technology
University schools
Nursing School of Murcia
Diploma in Nursing
Nursing School of Cartagena
Diploma in Nursing
School of Social Work
Diploma in Social Work
School of Tourism
Diploma in Tourism
School for Adults (CEA Infante), in collaboration with the University of Murcia
Doctorates
Economics
Experimental Science
Health Science
Humanities
Juridical Science
Mathematics
Social Science
Technological Teaching
Degree footnotes
Espinardo campus
La Merced campus
San Javier campus
La Merced campus, but some classes given in hospitals
In city of Murcia outside La Merced campus
Cartagena, south of Murcia
Second cycle degrees only
See also
List of early modern universities in Europe
References
External links
Campus maps - campuses and how to get to them
History of the University of Murcia
Educational institutions established in the 13th century
1272 establishments in Europe
13th-century establishments in Castile
Educational institutions established in 1915
University
Public universities
Universities and colleges in Spain
1915 establishments in Spain
|
26569668
|
https://en.wikipedia.org/wiki/Computer%20University%2C%20Pyay
|
Computer University, Pyay
|
The University of Computer Studies, Pyay (), is a university in Pyay, Bago Region, Myanmar, offering courses in computer science and information technology.
Background History
University of Computer Studies, Pyay is a government-funded university located in Pyay, Bago Region, with an emphasis on computer engineering at the undergraduate and graduate levels. It was founded in 2004 as a Government Computer College (GCC); during its first year only computer application training courses were offered. Undergraduate student admissions began in 2005. In 2007, Government Computer College (Pyay) became a university named Computer University (Pyay), and its name was changed to University of Computer Studies (Pyay) in 2017. The campus has an area of 17.68 acres and lies south of the 8½ milestone on the highway from Pyay to Aunglan.
Degrees Offered
Bachelor of Computer Science (B.C.Sc.)
Bachelor of Computer Technology (B.C.Tech.)
Departments
Faculty of Computer Systems and Technologies ()
Faculty of Computer Science ()
Faculty of Information Science ()
Faculty of Computing ()
Myanmar Department ()
English Department ()
Physics Department ()
Application Department ()
Library Department ()
Maintenance Department ()
Administrative Department ()
Finance Department ()
Student Affair ()
Courses
First Year Computer Science & Technology
Second Year Computer Science
Second Year Computer Technology
Third Year Computer Science
Third Year Computer Technology
Fourth Year Computer Science
Fourth Year Computer Technology
Fifth Year Computer Science
Fifth Year Computer Technology
References
Technological universities in Myanmar
Universities and colleges in Bago Region
|
4889856
|
https://en.wikipedia.org/wiki/Tan%20Son%20Nhut%20Air%20Base
|
Tan Son Nhut Air Base
|
Tan Son Nhut Air Base () (1955–1975) was a Republic of Vietnam Air Force (RVNAF) facility. It was located near the city of Saigon in southern Vietnam. The United States used it as a major base during the Vietnam War (1959–1975), stationing Army, Air Force, Navy, and Marine units there. Following the Fall of Saigon, it was taken over as a Vietnam People's Air Force (VPAF) facility and remains in use today.
Tan Son Nhat International Airport, (IATA: SGN, ICAO: VVTS) has been a major Vietnamese civil airport since the 1920s.
Early history
Tan Son Nhat Airport dates to the 1930s, when the French colonial government of Indochina constructed a small unpaved airport, known as Tan Son Nhat Airfield, in the village of Tan Son Nhat to serve as Saigon's commercial airport. Flights to and from France, as well as within Southeast Asia, were available prior to World War II. During World War II, the Imperial Japanese Army used Tan Son Nhat as a transport base. When Japan surrendered in August 1945, the French Air Force flew a contingent of 150 troops into Tan Son Nhat.
After World War II, Tân Sơn Nhất served domestic as well as international flights from Saigon.
In mid-1956 construction of a runway was completed and the International Cooperation Administration soon started work on a concrete runway. The airfield was run by the South Vietnamese Department of Civil Aviation with the RVNAF as a tenant located on the southwest of the airfield.
In 1961, the government of the Republic of Vietnam requested the U.S. Military Assistance Advisory Group (MAAG) to plan for expansion of the Tan Son Nhut airport. A taxiway parallel to the original runway had just been completed by the E.V. Lane company for the U.S. Operations Mission, but parking aprons and connections to the taxiways were required. Under the direction of the U.S. Navy Officer in Charge of Construction RVN, these items were constructed by the American construction company RMK-BRJ in 1962. RMK-BRJ also constructed an air-control radar station in 1962, and the passenger and freight terminals in 1963. In 1967, RMK-BRJ constructed the second 10,000-foot concrete runway.
Republic of Vietnam Air Force use
In late 1951, the French Air Force established the RVNAF 312th Special Mission Squadron at Tan Son Nhat Airfield equipped with Morane 500 Criquet liaison aircraft.
In 1952 a heliport was constructed at the base for use by French Air Force medical evacuation helicopters.
In 1953, Tan Son Nhut started being used as a military air base for the fledgling RVNAF, and in 1956 the headquarters were moved from the center of Saigon to Tan Son Nhut. But even before that time, French and Vietnamese military aircraft were in evidence at Tan Son Nhut.
On 1 July 1955, the RVNAF 1st Transport Squadron equipped with C-47 Skytrains was established at the base. The RVNAF also had a special missions squadron at the base equipped with 3 C-47s, 3 C-45s and 1 L-26. The 1st Transport Squadron would be renamed the 413rd Air Transport Squadron in January 1963.
In June 1956 the 2nd Transport Squadron equipped with C-47s was established at the base and the RVNAF established its headquarters there. It would be renamed the 415th Air Transport Squadron in January 1963.
In November 1956, by agreement with the South Vietnamese government, the USAF assumed some training and administrative roles of the RVNAF. A full handover of training responsibility took place on 1 June 1957 when the French training contracts expired.
On 1 June 1957 the RVNAF 1st Helicopter Squadron was established at the base without equipment. It operated with the French Air Force unit serving the International Control Commission and in April 1958 with the departure of the French it inherited its 10 H-19 helicopters.
In October 1959 the 2nd Liaison Squadron equipped with L-19 Bird Dogs moved to the base from Nha Trang.
In mid-December 1961 the USAF began delivery of 30 T-28 Trojans to the RVNAF at Tan Son Nhut.
In December 1962 the 293rd Helicopter Squadron was activated at the base, it was inactivated in August 1964.
In late 1962 the RVNAF formed the 716th Composite Reconnaissance Squadron initially equipped with 2 C-45 photo-reconnaissance aircraft.
In January 1963 the USAF opened an H-19 pilot training facility at the base and by June the first RVNAF helicopter pilots had graduated.
In January 1963 the 211th Helicopter Squadron equipped with UH-34s replaced the 1st Helicopter Squadron.
In December 1963 the 716th Composite Reconnaissance Squadron was activated at the base, equipped with C-47s and T-28s. The squadron would be inactivated in June 1964 and its mission assumed by the 2nd Air Division, while its pilots formed the 520th Fighter Squadron at Bien Hoa Air Base.
In January 1964 all RVNAF units at the base came under the control of the newly established 33rd Tactical Wing.
By midyear, the RVNAF had grown to thirteen squadrons; four fighter, four observation, three helicopter, and two C-47 transport. The RVNAF followed the practice of the U.S. Air Force, organizing the squadrons into wings, with one wing located in each of the four corps tactical zones at Cần Thơ Air Base, Tan Son Nhut AB, Pleiku Air Base and Da Nang Air Base.
In May 1965 the Douglas A-1 Skyraider equipped 522nd Fighter Squadron was activated at the base.
Command and control center
As the headquarters for the RVNAF, Tan Son Nhut was primarily a command base, with most operational units using nearby Biên Hòa Air Base.
At Tan Son Nhut, the RVNAF's system of command and control was developed over the years with assistance from the USAF. The system handled the flow of aircraft from take-off to target area, and return to the base it was launched from. This was known as the Tactical Air Control System (TACS), and it assured positive control of all areas where significant combat operations were performed. Without this system, it would not have been possible for the RVNAF to deploy its forces effectively where needed.
The TACS was in close proximity to the headquarters of the RVNAF and USAF forces in South Vietnam, and commanders of both air forces utilized its facilities. Subordinate to the TACS were the Direct Air Support Centers (DASCs) assigned to each of the corps areas (I DASC – Da Nang AB, DASC Alpha – Nha Trang Air Base, II DASC – Pleiku AB, III DASC – Bien Hoa AB, and IV DASC – Cần Thơ AB). DASCs were responsible for the deployment of aircraft located within their sector in support of ground operations.
Operating under each DASC were numerous Tactical Air Control Parties (TACPs), manned by one or more RVNAF/USAF personnel posted with the South Vietnamese Army (ARVN) ground forces. A communications network linked these three levels of command and control, giving the TACS overall control of the South Vietnamese air situation at all times.
Additional information was provided by a radar network that covered all of South Vietnam and beyond, monitoring all strike aircraft.
Another function of Tan Son Nhut Air Base was as an RVNAF recruiting center.
Use in coups
The base was adjacent to the headquarters of the Joint General Staff of South Vietnam, and was a key venue in various military coups, particularly the 1963 coup that deposed the nation's first president, Ngô Đình Diệm. The plotters invited loyalist officers to a routine lunch meeting at JGS and captured them in the afternoon of 1 November 1963. The most notable was Colonel Lê Quang Tung, loyalist commander of the ARVN Special Forces, which was effectively a private Ngô family army, and his brother and deputy, Lê Quang Triệu. Later, Captain Nguyễn Văn Nhung, bodyguard of coup leader General Dương Văn Minh, shot the brothers on the edge of the base.
On 14 April 1966 a Viet Cong (VC) mortar attack on the base destroyed 2 RVNAF aircraft and killed 7 USAF and 2 RVNAF personnel.
The base was attacked by the VC in a sapper and mortar attack on the morning of 4 December 1966. The attack was repulsed at a cost of 3 US and 3 ARVN personnel killed; 28 VC were killed and 4 captured.
1968 Tet Offensive
The base was the target of major VC attacks during the 1968 Tet Offensive. The attack began early on 31 January with greater severity than anyone had expected. When the VC attacked, much of the RVNAF was on leave to be with their families for the lunar new year. An immediate recall was issued, and within 72 hours, 90 percent of the RVNAF was on duty.
The main VC attack was made against the western perimeter of the base by 3 VC battalions. The initial penetration was contained by the base's 377th Security Police Squadron, ad-hoc Army units of Task Force 35, ad-hoc RVNAF units and two ARVN Airborne battalions. The 3rd Squadron, 4th Cavalry Regiment was sent from Củ Chi Base Camp; it prevented follow-on forces from reinforcing the VC inside the base and engaged them in a village and factory to the west. By 16:30 on 31 January the base was secured. U.S. losses were 22 killed and 82 wounded, ARVN losses 29 killed and 15 wounded; VC losses were more than 669 killed and 26 captured. 14 aircraft were damaged at the base.
Over the next three weeks, the RVNAF flew over 1,300 strike sorties, bombing and strafing PAVN/VC positions throughout South Vietnam. Transport aircraft from Tan Son Nhut's 33d Wing dropped almost 15,000 flares in 12 nights, compared with a normal monthly average of 10,000. Observation aircraft also from Tan Son Nhut completed almost 700 reconnaissance sorties, with RVNAF pilots flying O-1 Bird Dogs and U-17 Skywagons.
At 01:15 on 18 February a VC rocket and mortar attack on the base destroyed 6 aircraft, damaged 33 others and killed one person. A rocket attack the next day hit the civilian air terminal, killing 1 person, and 6 further rocket/mortar attacks over this period killed another 6 people and wounded 151. On 24 February another rocket and mortar attack damaged base buildings, killing 4 US personnel and wounding 21.
On 12 June 1968 a mortar attack on the base destroyed 2 USAF aircraft and killed 1 airman.
The Tet Offensive attacks and previous losses from mortar and rocket attacks on air bases across South Vietnam led Deputy Secretary of Defense Paul Nitze on 6 March 1968 to approve the construction of 165 "Wonderarch" roofed aircraft shelters at the major air bases. In addition, airborne "rocket watch" patrols were established in the Saigon–Biên Hòa area to reduce attacks by fire.
Vietnamization and the 1972 Easter Offensive
On 2 July 1969 the first 5 AC-47 Spooky gunships were handed over to the RVNAF to form the 817th Combat Squadron which became operational at the base on 31 August.
In 1970, with American units leaving the country, the RVNAF transport fleet was greatly increased at Tan Son Nhut. The RVNAF 33rd and 53rd Tactical Wings were established flying C-123 Providers, C-47s and C-7 Caribous.
In mid 1970 the USAF began training RVNAF crews on the AC-119G Shadow gunship at the base. Other courses included navigation classes and helicopter transition and maintenance training for the CH-47 Chinook.
By November 1970, the RVNAF had taken total control of the Direct Air Support Centers (DASCs) at Bien Hoa AB, Da Nang AB and Pleiku AB.
At the end of 1971, the RVNAF was in total control of the command and control units at eight major air bases, supporting ARVN units in the expanded air-ground operations system. In September 1971, the USAF transferred two C-119 squadrons to the RVNAF at Tan Son Nhut.
In 1972, the buildup of the RVNAF at Tan Son Nhut was expanded when two C-130 Hercules squadrons were formed there. In December, the first RVNAF C-130 training facility was established at Tan Son Nhut, enabling the RVNAF to train its own C-130 pilots. As more C-130s were transferred to the RVNAF, older C-123s were returned to the USAF for disposal.
As the buildup of the RVNAF continued, the success of the Vietnamization program was evident during the 1972 Easter Offensive. Responding to the People's Army of Vietnam (PAVN) attack, the RVNAF flew more than 20,000 strike sorties which helped to stem the advance. In the first month of the offensive, transports from Tan Son Nhut ferried thousands of troops and delivered nearly 4,000 tons of supplies throughout the country. The offensive also resulted in additional deliveries of aircraft to the RVNAF under Operation Enhance. Fighter aircraft also arrived at Tan Son Nhut for the first time, in the form of the F-5A/B Freedom Fighter and the F-5E Tiger II; the F-5s were subsequently transferred to Bien Hoa and Da Nang ABs.
1973 Ceasefire
The Paris Peace Accords of 1973 brought an end to the United States advisory capacity in South Vietnam. In its place, as part of the agreement, the Americans retained a Defense Attaché Office (DAO) at Tan Son Nhut Airport, with small field offices at other facilities around the country. The technical assistance provided by the personnel of the DAO and by civilian contractors was essential to the RVNAF; however, because of the cease-fire agreement, the South Vietnamese could not be advised in any way on military operations, tactics or techniques of employment. It was through the DAO that the American/South Vietnamese relationship was maintained, and it was primarily from this source that information from within South Vietnam was obtained. The RVNAF provided the DAO with statistics on the military capability of its units; however, this information was not always reliable.
From the Easter Offensive of 1972, it was clear that without United States aid, especially air support, the ARVN would not be able to defend itself against continuing PAVN attacks. This was demonstrated in the fighting around Pleiku, An Lộc and Quảng Trị, where the ARVN would have been defeated without continuous air support, mainly supplied by the USAF. The ARVN relied heavily on air support, and with the absence of the USAF, the full responsibility fell on the RVNAF. Although the RVNAF was equipped with large numbers of Cessna A-37 Dragonfly and F-5 attack aircraft for close air support operations, heavy bombardment duty during the 1972 offensive had been left to USAF aircraft.
As part of the Paris Peace Accords, a Joint Military Commission was established and VC/PAVN troops were deployed across South Vietnam to oversee the departure of US forces and the implementation of the ceasefire. 200–250 VC/PAVN soldiers were based at Camp Davis (see Davis Station below) at the base from March 1973 until the fall of South Vietnam.
Numerous violations of the Paris Peace Accords were committed by the North Vietnamese, beginning almost as soon as the United States withdrew its last personnel from South Vietnam at the end of March 1973. The North Vietnamese and the Provisional Revolutionary Government of South Vietnam continued their attempt to overthrow President Nguyễn Văn Thiệu and remove the U.S.-supported government. The U.S. had promised Thiệu that it would use airpower to support his government, yet on 14 January 1975 Secretary of Defense James Schlesinger stated that the U.S. was not living up to its promise to retaliate in the event North Vietnam tried to overwhelm South Vietnam.
When North Vietnam invaded in March 1975, the promised American intervention never materialized. Congress reflected the popular mood, halting the bombing of Cambodia effective 15 August 1973 and reducing aid to South Vietnam. Since Thiệu intended to fight the same kind of war he always had, with lavish use of firepower, the cuts in aid proved especially damaging.
Capture
In early 1975 North Vietnam realized the time was right to achieve its goal of re-uniting Vietnam under communist rule, launching a series of small ground attacks to test U.S. reaction.
On 8 January the North Vietnamese Politburo ordered a PAVN offensive to "liberate" South Vietnam by cross-border invasion. The general staff plan for the invasion called for 20 divisions and anticipated a two-year struggle for victory.
By 14 March, South Vietnamese President Thiệu decided to abandon the Central Highlands region and two northern provinces of South Vietnam and ordered a general withdrawal of ARVN forces from those areas. Instead of an orderly withdrawal, it turned into a general retreat, with masses of military and civilians fleeing, clogging roads and creating chaos.
On 30 March 100,000 South Vietnamese soldiers surrendered after being abandoned by their commanding officers. The large coastal cities of Da Nang, Qui Nhơn, Tuy Hòa and Nha Trang were abandoned by the South Vietnamese, yielding the entire northern half of South Vietnam to the North Vietnamese.
By late March the US Embassy began to reduce the number of US citizens in Vietnam by encouraging dependents and non-essential personnel to leave the country by commercial flights and on Military Airlift Command (MAC) C-141 and C-5 aircraft, which were still bringing in emergency military supplies. In late March, two or three of these MAC aircraft were arriving each day and were used for the evacuation of civilians and Vietnamese orphans. On 4 April a C-5A aircraft carrying 250 Vietnamese orphans and their escorts suffered explosive decompression over the sea near Vũng Tàu and made a crash-landing while attempting to return to Tan Son Nhut; 153 people on board died in the crash.
As the war in South Vietnam entered its conclusion, the pilots of the RVNAF flew sortie after sortie, supporting the retreating ARVN after it abandoned Cam Ranh Bay on 14 April. For two days after the ARVN left the area, the Wing Commander at Phan Rang Air Base fought on with the forces under his command. Airborne troops were sent in for one last attempt to hold the airfield, but the defenders were finally overrun on 16 April and Phan Rang Air Base was lost.
On 22 April Xuân Lộc fell to the PAVN after a two-week battle with the ARVN 18th Division which inflicted over 5000 PAVN casualties and delayed the Ho Chi Minh Campaign for two weeks. With the fall of Xuân Lộc and the capture of Bien Hoa Air Base in late April 1975 it was clear that South Vietnam was about to fall to the PAVN.
By 22 April, 20 C-141 and 20 C-130 flights a day were flying evacuees out of Tan Son Nhut to Clark Air Base, some 1,000 miles away in the Philippines. On 23 April President Ferdinand Marcos of the Philippines announced that no more than 2,500 Vietnamese evacuees would be allowed in the Philippines at any one time, further increasing the strain on MAC, which now had to move evacuees out of Saigon and move some 5,000 evacuees from Clark Air Base on to Guam, Wake Island and Yokota Air Base. President Thiệu and his family left Tan Son Nhut on 25 April on a USAF C-118 to go into exile in Taiwan. Also on 25 April the Federal Aviation Administration banned commercial flights into South Vietnam. The directive was subsequently reversed, and some operators had ignored it anyway, but it effectively marked the end of the commercial airlift from Tan Son Nhut.
On 27 April PAVN rockets hit Saigon and Cholon for the first time since the 1973 ceasefire. It was decided that from this time only C-130s would be used for the evacuation due to their greater maneuverability. There was relatively little difference between the loads of the two aircraft: C-141s had been loaded with up to 316 evacuees, while C-130s had been taking off with in excess of 240.
On 28 April at 18:06, three A-37 Dragonflies piloted by former RVNAF pilots, who had defected to the Vietnamese People's Air Force at the fall of Da Nang, dropped six Mk81 250 lb bombs on the base damaging aircraft. RVNAF F-5s took off in pursuit, but they were unable to intercept the A-37s. C-130s leaving Tan Son Nhut reported receiving PAVN .51 cal and 37 mm anti-aircraft (AAA) fire, while sporadic PAVN rocket and artillery attacks also started to hit the airport and air base. C-130 flights were stopped temporarily after the air attack but resumed at 20:00 on 28 April.
At 03:58 on 29 April, C-130E, #72-1297, flown by a crew from the 776th Tactical Airlift Squadron, was destroyed by a 122 mm rocket while taxiing to pick up refugees after offloading a BLU-82 at the base. The crew evacuated the burning aircraft on the taxiway and departed the airfield on another C-130 that had previously landed. This was the last USAF fixed-wing aircraft to leave Tan Son Nhut.
At dawn on 29 April the RVNAF began to haphazardly depart Tan Son Nhut Air Base as A-37s, F-5s, C-7s, C-119s and C-130s departed for Thailand, while UH-1s took off in search of the ships of Task Force 76. Some RVNAF aircraft stayed to continue to fight the advancing PAVN. One AC-119 gunship had spent the night of 28/29 April dropping flares and firing on the approaching PAVN. At dawn on 29 April two A-1 Skyraiders began patrolling the perimeter of Tan Son Nhut until one was shot down, presumably by an SA-7 missile. At 07:00 the AC-119 was firing on PAVN to the east of Tan Son Nhut when it too was hit by an SA-7 and fell in flames to the ground.
At 08:00 on 29 April Lieutenant General Trần Văn Minh, commander of the RVNAF and 30 of his staff arrived at the DAO Compound demanding evacuation, signifying the complete loss of RVNAF command and control. At 10:51 on 29 April, the order was given by CINCPAC to commence Operation Frequent Wind, the helicopter evacuation of US personnel and at-risk Vietnamese.
In the final evacuation, over a hundred RVNAF aircraft arrived in Thailand, including twenty-six F-5s, eight A-37s, eleven A-1s, six C-130s, thirteen C-47s, five C-7s, and three AC-119s. Additionally, close to 100 RVNAF helicopters landed on U.S. ships off the coast, although at least half were jettisoned. One O-1 managed to land on the USS Midway, carrying a South Vietnamese major, his wife, and five children.
The ARVN 3rd Task Force, 81st Ranger Group, commanded by Maj. Pham Chau Tai, defended Tan Son Nhut, joined by the remnants of the Loi Ho unit. At 07:15 on 30 April the PAVN 24th Regiment approached the Bay Hien intersection, 1.5 km from the base's main gate. The lead T-54 was hit by an M67 recoilless rifle round and the next T-54 was hit by a shell from an M48 tank. The PAVN infantry moved forward and engaged the ARVN in house-to-house fighting, forcing them to withdraw to the base by 08:45. The PAVN then sent 3 tanks and an infantry battalion to assault the main gate; they were met by intensive anti-tank and machine gun fire that knocked out the 3 tanks and killed at least 20 PAVN soldiers. The PAVN tried to bring forward an 85 mm anti-aircraft gun, but the ARVN knocked it out before it could start firing. The PAVN 10th Division ordered 8 more tanks and another infantry battalion to join the attack, but as they approached the Bay Hien intersection they were hit by an airstrike from RVNAF jets operating from Binh Thuy Air Base which destroyed 2 T-54s. The 6 surviving tanks arrived at the main gate at 10:00 and began their attack, with 2 knocked out by anti-tank fire in front of the gate and another destroyed as it attempted a flanking maneuver.
At approximately 10:30 Maj. Pham heard the surrender broadcast of President Dương Văn Minh and went to the ARVN Joint General Staff Compound to seek instructions. He called General Minh, who told him to prepare to surrender; Pham reportedly told Minh, "If Viet Cong tanks are entering Independence Palace we will come down there to rescue you sir." Minh refused Pham's suggestion, and Pham then told his men to withdraw from the base gates. At 11:30 the PAVN entered the base.
Known RVNAF units (June 1974)
Tan Son Nhut Air Base was the Headquarters of the RVNAF. It was also the headquarters of the RVNAF 5th Air Division.
33d Tactical Wing
314th Special Air Missions Squadron VC-47, U-17, UH-1, DC-6B
716th Reconnaissance Squadron R/EC-47, U-6A
718th Reconnaissance Squadron EC-47
429th Transport Squadron C-7B
431st Transport Squadron C-7B
Det H 259th Helicopter Squadron Bell UH-1H (Medevac)
53d Tactical Wing
819th Combat Squadron AC-119G
821st Combat Squadron AC-119G
435th Transport Squadron C-130A
437th Transport Squadron C-130A
Use by the United States
During the Vietnam War Tan Son Nhut Air Base was an important facility for both the USAF and the RVNAF. The base served as the focal point for the initial USAF deployment and buildup in South Vietnam in the early 1960s. Tan Son Nhut was initially the main air base for Military Airlift Command flights to and from South Vietnam, until other bases such as Bien Hoa and Cam Ranh opened in 1966. After 1966, with the establishment of the 7th Air Force as the main USAF command and control headquarters in South Vietnam, Tan Son Nhut functioned as a Headquarters base, a Tactical Reconnaissance base, and as a Special Operations base. With the drawdown of US forces in South Vietnam after 1971, the base took on a myriad of organizations transferred from deactivated bases across South Vietnam.
Between 1968 and 1974, Tan Son Nhut Airport was one of the busiest military airbases in the world. Pan Am schedules from 1973 showed Boeing 747 service was being operated four times a week to San Francisco via Guam and Manila. Continental Airlines operated up to 30 Boeing 707 military charters per week to and from Tan Son Nhut Airport during the 1968–74 period.
It was from Tan Son Nhut Air Base that the last U.S. Airman left South Vietnam in March 1973. The Air Force Post Office (APO) for Tan Son Nhut Air Base was APO San Francisco, 96307.
Military Assistance Advisory Group
Davis Station
On 13 May 1961 a 92-man unit of the Army Security Agency, operating under cover of the 3rd Radio Research Unit (3rd RRU), arrived at Tan Son Nhut AB and established a communications intelligence facility in disused RVNAF warehouses on the base. This was the first full deployment of a US Army unit to South Vietnam. On 21 December 1961 SP4 James T. Davis of the 3rd RRU was operating a mobile PRD-1 receiver with an ARVN unit near Cầu Xáng when they were ambushed by the VC and Davis was killed, becoming one of the first Americans killed in the Vietnam War. In early January 1962 the 3rd RRU's compound at Tan Son Nhut was renamed Davis Station.
On 1 June 1966 3rd RRU was redesignated the 509th Radio Research Group. The 509th RR Group continued operations until 7 March 1973, when they were among the last US units to leave South Vietnam.
507th Tactical Control Group
In late September 1961, the first permanent USAF unit, the 507th Tactical Control Group from Shaw Air Force Base deployed sixty-seven officers and airmen to Tan Son Nhut to install MPS-11 search and MPS-16 height-finding radars and began monitoring air traffic and training of RVNAF personnel to operate and service the equipment. Installation of the equipment commenced on 5 October 1961 and the unit would eventually grow to 314 assigned personnel. This organization formed the nucleus of South Vietnam's tactical air control system.
Tactical Reconnaissance Mission
On 18 October 1961, four RF-101C Voodoos and a photo processing unit from the 15th Tactical Reconnaissance Squadron of the 67th Tactical Reconnaissance Wing, based at Yokota AB, Japan, arrived at Tan Son Nhut, with the reconnaissance aircraft flying photographic missions over South Vietnam and Laos from 20 October under Operation Pipe Stem. The RF-101s departed in January 1962, leaving Detachment 1, 15th Tactical Reconnaissance Squadron to undertake photo processing.
In March 1962 a C-54 Skymaster outfitted for infrared reconnaissance arrived at the base and would remain there until February 1963, when it was replaced by a Brave Bull C-97.
In December 1962, following the signing of the International Agreement on the Neutrality of Laos, which banned aerial reconnaissance over Laos, all 4 Able Marble RF-101Cs moved to the base from Don Muang Royal Thai Air Force Base.
On 13 April 1963 the 13th Reconnaissance Technical Squadron was established at the base to provide photo interpretation and targeting information.
Following the Gulf of Tonkin Incident on 4 August 1964, 6 additional RF-101Cs deployed to the base.
The 67th TRW was soon followed by detachments of the 15th Tactical Reconnaissance Squadron of the 18th Tactical Fighter Wing, based at Kadena AB, Okinawa, which also flew RF-101 reconnaissance missions over Laos and South Vietnam, first from bases at Udorn Royal Thai Air Force Base, Thailand from 31 March 1965 to 31 October 1967 and then from South Vietnam. These reconnaissance missions lasted from November 1961 through the spring of 1964.
RF-101Cs flew pathfinder missions for F-100s during Operation Flaming Dart, the first USAF strike against North Vietnam on 8 February 1965. They initially operated out of South Vietnam, but later flew most of their missions over North Vietnam out of Thailand. Bombing missions against the North required a large amount of photographic reconnaissance support, and by the end of 1967, all but one of the Tactical Air Command RF-101C squadrons were deployed to Southeast Asia.
The reconnaissance Voodoos at Tan Son Nhut were incorporated into the 460th Tactical Reconnaissance Wing in February 1966. 1 RF-101C was destroyed in a sapper attack on Tan Son Nhut AB. The last 45th TRS RF-101C left Tan Son Nhut on 16 November 1970.
The need for additional reconnaissance assets, especially those capable of operating at night, led to the deployment of 2 Martin RB-57E Canberra Patricia Lynn reconnaissance aircraft of the 6091st Reconnaissance Squadron on 7 May 1963. The forward nose section of the RB-57Es was modified to house a KA-1 36-inch forward oblique camera and a KA-56 low panoramic camera used on the Lockheed U-2. Mounted inside the specially configured bomb bay door were a KA-1 vertical camera, a K-477 split vertical day-night camera, an infrared scanner, and a KA-1 left oblique camera. The detachment flew nighttime reconnaissance missions to identify VC base camps, small arms factories, and storage and training areas. The Patricia Lynn operation was terminated in mid-1971 with the inactivation of the 460th TRW, and the four surviving aircraft returned to the United States.
On 20 December 1964 Military Assistance Command, Vietnam (MACV) formed the Central Target Analysis and Research Center at the base as a unit of MACV J-2 (Intelligence) to coordinate Army and USAF infrared reconnaissance.
On 30 October 1965 the first RF-4C Phantom IIs of the 16th Tactical Reconnaissance Squadron arrived at the base and on 16 November they began flying missions over Laos and North Vietnam.
Farm Gate
On 11 October 1961, President John F. Kennedy directed, in NSAM 104, that the Defense Secretary "introduce the Air Force 'Jungle Jim' Squadron into Vietnam for the initial purpose of training Vietnamese forces." The unit, officially titled the 4400th Combat Crew Training Squadron and code named Farm Gate, was to proceed as a training mission and not for combat. In mid-November the first 8 Farm Gate T-28s arrived at the base from Clark Air Base. At the same time Detachments 7 and 8, 6009th Tactical Support Group were established at the base to support operations. On 20 May 1962 these detachments were redesignated the 6220th Air Base Squadron.
In February 1963 4 RB-26C night photo-reconnaissance aircraft joined the Farm Gate planes at the base.
Tactical Air Control Center
The establishment of a country-wide tactical air control center was regarded as a priority for the effective utilization of the RVNAF's limited strike capabilities; an air operations center for central planning of air operations and a subordinate radar reporting center were also required. From 2 to 14 January 1962 the 5th Tactical Control Group deployed to the base, beginning operations on 13 January 1962.
In March 1963 MACV formed a flight service center and network at the base for the control of all US military flights in South Vietnam.
Mule Train
On 6 December 1961, the Defense Department ordered the C-123 equipped 346th Troop Carrier Squadron (Assault) to the Far East for 120 days temporary duty. On 2 January 1962 the first of 16 C-123s landed at the base commencing Operation Mule Train to provide logistical support to US and South Vietnamese forces.
In March 1962 personnel from the 776th Troop Carrier Squadron, began replacing the temporary duty personnel. 10 of the C-123s were based at Tan Son Nhut, 2 at Da Nang Air Base and 4 at Clark Air Base.
In April 1963 the 777th Troop Carrier Squadron equipped with 16 C-123s deployed to the base.
In July 1963 the Mule Train squadrons at the base became the 309th and 310th Troop Carrier Squadrons assigned to the 315th Air Division.
Dirty Thirty
Additional USAF personnel arrived at Tan Son Nhut in early 1962 after the RVNAF transferred two dozen seasoned pilots from the 1st Transportation Group at Tan Son Nhut to provide aircrews for the newly activated 2nd Fighter Squadron then undergoing training at Bien Hoa AB. This sudden loss of qualified C-47 pilots brought the 1st Transportation Group's airlift capability dangerously low. In order to alleviate the problem, United States Secretary of Defense Robert McNamara, on the recommendation of MAAG Vietnam, ordered thirty USAF pilots temporarily assigned to the RVNAF to serve as C-47 co-pilots. This influx of U.S. personnel quickly returned the 1st Transportation Group to full strength.
Unlike the USAF Farm Gate personnel at Bien Hoa Air Base, the C-47 co-pilots actually became part of the RVNAF operational structure – though still under U.S. control. Because of their rather unusual situation, these pilots soon adopted the very unofficial nickname, The Dirty Thirty. In a sense they were the first U.S. airmen actually committed to combat in Vietnam, rather than being assigned as advisors or support personnel. The original Dirty Thirty pilots eventually rotated home during early 1963 and were replaced by a second contingent of American pilots. This detachment remained with the RVNAF until December 1963 when they were withdrawn from Vietnam.
509th Fighter-Interceptor Squadron
Starting on 21 March 1962 under Project Water Glass and later under Project Candy Machine, the 509th Fighter-Interceptor Squadron began rotating F-102A Delta Dagger interceptors to Tan Son Nhut Air Base from Clark AB to provide air defense of the Saigon area in the event of a North Vietnamese air attack. F-102s and TF-102s (the two-seat trainer version) were deployed to Tan Son Nhut initially because ground radar sites frequently painted small aircraft penetrating South Vietnamese airspace.
The F-102, a supersonic, high-altitude interceptor designed to intercept Soviet bombers, was given the mission of intercepting, identifying and, if necessary, destroying small aircraft flying from treetop level to 2,000 ft at speeds less than the final approach landing speed of the F-102. The TF-102, employing two pilots with one acting solely as radar intercept operator, was considered safer and more efficient as a low-altitude interceptor. The T/F-102s would alternate with US Navy AD-5Qs. In May 1963, due to overcrowding at the base and the low probability of air attack, the T/F-102s and AD-5Qs were withdrawn to Clark AB, from where they could redeploy to Tan Son Nhut on 12–24 hours' notice.
Following the Gulf of Tonkin Incident, 6 F-102s from the 16th Fighter Squadron deployed to the base.
Before the rotation ended in July 1970, pilots and F-102 aircraft from other Far East squadrons were used in the deployment.
Air rescue
In January 1962, 5 USAF personnel from the Pacific Air Rescue Center were assigned to the base to establish a search and rescue center. With no aircraft assigned, they were dependent on US Army advisers in each of South Vietnam's four military corps areas for the use of US Army and Marine Corps helicopters. In April 1962 the unit was designated Detachment 3, Pacific Air Rescue Center.
On 1 July 1965 Detachment 3 was redesignated the 38th Air Rescue Squadron and activated with its headquarters at the base and organized to control search and rescue detachments operating from bases in South Vietnam and Thailand. Detachment 14, an operational base rescue element, was later established at the base.
On 8 January 1966 the 3d Aerospace Recovery Group was established at the base to control search and rescue operations throughout the theater.
On 1 July 1971 the entire 38th ARRS was inactivated. Local base rescue helicopters and their crews then became detachments of the parent unit, the 3d Aerospace Rescue and Recovery Group.
In February 1973 the 3d Aerospace Rescue and Recovery Group left Tan Son Nhut AB and moved to Nakhon Phanom Royal Thai Navy Base.
Miscellaneous units
From December 1961, the 8th and 57th Transportation Companies (Light Helicopter) arrived with Piasecki CH-21C Shawnees.
From 1962 the Utility Tactical Transport Helicopter Company (UTTHCO) was based here, initially with Bell HU-1A Hueys and then UH-1Bs.
The 57th Medical Detachment (Helicopter Ambulance) arrived with UH-1B Hueys in January 1963.
During December 1964 the 145th Aviation Battalion was deployed here.
In April 1964 5 EC-121D airborne early warning aircraft began staging from the base.
In June 1964 Detachment 2, 421st Air Refueling Squadron equipped with KB-50 aerial refueling aircraft deployed to the base to support Yankee Team operations over Laos.
In April 1965 a detachment of the 9th Tactical Reconnaissance Squadron comprising 4 RB-66Bs and 2 EB-66Cs arrived at the base. The RB-66Bs were equipped with night photo and infrared sensor equipment and began reconnaissance missions over South Vietnam, while the EB-66Cs began flying missions against North Vietnamese air defense radars. By the end of May, two more EB-66Cs arrived at the base and they all then redeployed to Takhli Royal Thai Air Force Base.
In mid-May 1965, following the disaster at Bien Hoa, the 10 surviving B-57 bombers were transferred to Tan Son Nhut AB and continued to fly sorties on a reduced scale until replacement aircraft arrived from Clark AB. In June 1965, the B-57s were moved from Tan Son Nhut AB to Da Nang AB.
On 8 October 1965 the 20th Helicopter Squadron, equipped with 14 CH-3 helicopters, was activated at the base; it moved to Nha Trang Air Base on 15 June 1966.
33rd Tactical Group
On 8 July 1963 the units at the base were organized as the 33d Tactical Group, with subordinate units being the 33rd Air Base Squadron, the 33rd Consolidated Aircraft Maintenance Squadron and the Detachment 1 reconnaissance elements. The Group's mission was to maintain and operate base support facilities at Tan Son Nhut, supporting the 2d Air Division and subordinate units and performing reconnaissance.
505th Tactical Air Control Group
The 505th Tactical Air Control Group was assigned to Tan Son Nhut on 8 April 1964. The unit was primarily responsible for controlling the tactical air resources of the US and its allies in South Vietnam, Thailand, and to some extent Cambodia and Laos. Carrying out the mission of providing tactical air support required two major components: radar installations and forward air controllers (FACs).
The radar sites provided flight separation for attack and transport aircraft, which took the form of flight following and, in some cases, control by USAF weapons directors. FACs had the critical job of telling tactical fighters where to drop their ordnance. FACs were generally attached to either US Army or ARVN units and served both on the ground and in the air.
Squadrons of the 505th located at Tan Son Nhut AB were:
619th Tactical Control Squadron, activated at the base on 8 April 1964. It was responsible for operating and maintaining air traffic control and radar direction-finding equipment for the area from the Mekong Delta to Buôn Ma Thuột in the Central Highlands, with detachments at various smaller airfields throughout its operational area. It remained operational until 15 March 1973.
505th Tactical Control Maintenance Squadron
Close air support
Following the introduction of US ground combat units in mid-1965, two F-100 squadrons were deployed to Tan Son Nhut AB to provide close air support for US ground forces:
481st Tactical Fighter Squadron, 29 June 1965 – 1 January 1966
416th Tactical Fighter Squadron, 1 November 1965 – 15 June 1966
The 481st returned to the United States; the 416th returned to Bien Hoa.
6250th Combat Support Group
The first tasks facing the USAF were to set up a workable organizational structure in the region, improve the area's inadequate air bases, create an efficient airlift system, and develop equipment and techniques to support the ground battle.
Starting in 1965, the USAF adjusted its structure in Southeast Asia to absorb incoming units. Temporarily deployed squadrons became permanent in November. A wing structure replaced the groups. On 8 July 1965, the 33d Tactical Group was redesignated the 6250th Combat Support Group.
The number of personnel at Tan Son Nhut AB increased from 7,780 at the beginning of 1965 to over 15,000 by the end of the year, placing substantial demands on accommodation and basic infrastructure.
On 14 November 1965 the 4th Air Commando Squadron equipped with 20 AC-47 Spooky gunships arrived at the base and was assigned to the 6250th Group. The aircraft were soon deployed to forward operating locations at Binh Thuy, Da Nang, Nha Trang and Pleiku Air Bases. In May 1966 the 4th Air Commando Squadron moved its base to Nha Trang AB where it came under the control of the 14th Air Commando Wing.
460th Tactical Reconnaissance Wing
On 18 February 1966 the 460th Tactical Reconnaissance Wing was activated, its headquarters shared with Seventh Air Force Headquarters and MACV. When it stood up, the 460th TRW alone was responsible for the entire reconnaissance mission, both visual and electronic, throughout the whole theater. The wing began activities with 74 aircraft of various types; by the end of June 1966, that number had climbed to over 200. On activation, the wing gained several flying units at Tan Son Nhut:
16th Tactical Reconnaissance Squadron (RF-4C)
20th Tactical Reconnaissance Squadron: 12 November 1965 – 1 April 1966 (RF-101C)
Detachment 1 of the 460th Tactical Reconnaissance Wing
On 15 October 1966, the 460th TRW assumed aircraft maintenance responsibilities for Tan Son Nhut AB, including depot-level aircraft maintenance for all USAF organizations in South Vietnam. In addition to the reconnaissance operations, the 460th TRW's base flight operated an in-theater transport service for Seventh Air Force and other senior commanders throughout South Vietnam. The base flight operated T-39A Sabreliners, VC-123B Providers (also known as the "White Whale"), and U-3Bs between 1967 and 1971.
Photographic reconnaissance
45th Tactical Reconnaissance Squadron: 30 March 1966 – 31 December 1970 (RF-101C Tail Code: AH)
12th Tactical Reconnaissance Squadron: 2 September 1966 – 31 August 1971 (RF-4C Tail Code: AC)
On 18 September 1966, the 432d Tactical Reconnaissance Wing was activated at Takhli Royal Thai Air Force Base, Thailand. After the 432d TRW activated it took control of the reconnaissance squadrons in Thailand. With the activation of the 432d TRW, the 460th TRW was only responsible for RF-101 and RF-4C operations.
In 1970 the need for improved coordinate data of Southeast Asia for targeting purposes led to Loran-C-equipped RF-4Cs taking detailed photographs of target areas, which were matched with the Loran coordinates of terrain features on the photo maps to calculate precise coordinates. This information was converted into a computer program which by mid-1971 was used by the 12th Reconnaissance Intelligence Technical Squadron at the base for targeting.
Electronic reconnaissance
A few months after the 460th TRW's activation, two squadrons activated on 8 April 1966 as 460th TRW Det 2:
360th Tactical Electronic Warfare Squadron: 8 April 1966 – 31 August 1971 (EC-47N/P/Q Tail Code: AJ)
361st Tactical Electronic Warfare Squadron: 8 April 1966 – 31 August 1971 (EC-47N/P/Q Tail Code: AL) (Nha Trang Air Base)
362d Tactical Electronic Warfare Squadron: 1 February 1967 – 31 August 1971 (EC-47N/P/Q Tail Code: AN) (Pleiku Air Base)
Project Hawkeye conducted radio direction finding (RDF), whose main targets were VC radio transmitters. Before this program, RDF involved tracking the signals on the ground. Because this exposed the RDF team to ambushes, both the US Army and USAF began to look at airborne RDF. While the US Army used U-6 Beaver and U-8 Seminole aircraft for its own version of the Hawkeye platform, the USAF modified several C-47 Skytrains.
Project Phyllis Ann also used modified C-47s; however, the C-47s for this program were highly modified with advanced navigational and reconnaissance equipment. On 4 April 1967, Project Phyllis Ann was renamed Compass Dart. On 1 April 1968, Compass Dart became Combat Cougar. Because of security concerns the operation's name changed two more times, first to Combat Cross and then to Commando Forge.
Project Drillpress also used modified C-47s, listening in to VC/PAVN radio traffic and collecting intelligence from it. This data gave insights into the plans and strategy of both the VC and the PAVN. Information from all three projects contributed in a major way to the intelligence picture of the battlefield in Vietnam; about 95 percent of the Arc Light strikes conducted in South Vietnam were based, at least partially, on data from these three programs. On 6 October 1967, Drillpress was renamed Sentinel Sara.
The US went to great lengths to prevent this equipment from falling into enemy hands. When an EC-47 from the 362d TEWS crashed on 22 April 1970, members of an explosive ordnance unit policed the area, destroying anything they found, and six F-100 tactical air sorties hit the area to be sure.
Detachments of these squadrons operated from different locations, including bases in Thailand. Each of the main squadrons and their detachments moved at least once due to operational and/or security reasons. Personnel operating the RDF and signal intelligence equipment in the back of the modified EC-47s were part of the 6994th Security Squadron.
On 1 June 1969 the unit transferred to become 360th TEWS Det 1.
Inactivation
As the Vietnamization program began, Vietnamese crews began flying with EC-47 crews from the 360th TEWS and 6994th SS on 8 May 1971 to train on operating the aircraft and its systems. The wing was inactivated in place on 31 August 1971. Decorations awarded to the wing for its Vietnam War service include:
Presidential Unit Citation: 18 February 1966 – 30 June 1967; 1 September 1967 – 1 July 1968; 11 July 1968 – 31 August 1969; 1 February – 31 March 1971.
Air Force Outstanding Unit Award with Combat "V" Device: 1 July 1969 – 30 June 1970; 1 July 1970 – 30 June 1971.
Republic of Vietnam Gallantry Cross with Palm: 1 August 1966 – 31 August 1971.
315th Air Commando Wing, Troop Carrier
In October 1962, there began what became known as the Southeast Asia Airlift System. Requirements were forecast out to 25 days, and these requirements were matched against available resources. In September 1962 Headquarters 6492nd Combat Cargo Group (Troop Carrier) and the 6493rd Aerial Port Squadron were organized and attached to the 315th Air Division, based at Tachikawa AB. On 8 December 1962 the 315th Air Commando Group (Troop Carrier) was activated, replacing the 6492nd Combat Cargo Group, and became responsible for all in-country airlift in South Vietnam, including control over all USAF airlift assets. On the same date the 8th Aerial Port Squadron replaced the 6493rd Aerial Port Squadron. The 315th Group was assigned to the 315th Air Division, but came under the operational control of MACV through the 2d Air Division.
On 10 August 1964, 6 DHC-4 Caribous of the Royal Australian Air Force (RAAF) Transport Flight Vietnam arrived at the base and were assigned to the airlift system.
In October 1964 the 19th Air Commando Squadron equipped with C-123s was established at the base and assigned to the 315th Troop Carrier Group.
On 8 March 1965 the 315th Troop Carrier Group was redesignated the 315th Air Commando Group. The 315th Air Commando Group was re-designated the 315th Air Commando Wing on 8 March 1966.
Squadrons of the 315th ACW/TC were:
12th Air Commando Squadron (Defoliation), 15 October 1966 – 30 September 1970 (Bien Hoa) (UC-123 Provider)
Det 1, 834th Air Division, 15 October 1966 – 1 December 1971 (Tan Son Nhut) (C-130B Hercules)
19th Air Commando Squadron 8 March 1966 – 10 June 1971 (Tan Son Nhut) (C-123 Provider) (including 2 Royal Thai Air Force-operated C-123s named Victory Flight)
309th Air Commando Squadron 8 March 1966 – 31 July 1970 (Phan Rang) (C-123)
310th Air Commando Squadron 8 March 1966 – 15 January 1972 (Phan Rang) (C-123)
311th Air Commando Squadron 8 March 1966 – 5 October 1971 (Phan Rang) (C-123)
Det 1, HQ 315th Air Commando Wing, Troop Carrier 1 August – 15 October 1966
Det 5, HQ 315th Air Division (Combat Cargo) 8 March – 15 October 1966
Det 6, HQ 315th Air Division (Combat Cargo) 8 March – 15 October 1966
903rd Aeromedical Evacuation Squadron 8 July 1966
RAAF Transport Flight, Vietnam (RTFV) 8 March – 15 October 1966
The unit also performed C-123 airlift operations in Vietnam. Operations included aerial movement of troops and cargo, flare drops, aeromedical evacuation, and air-drops of critical supplies and paratroops.
Operation Ranch Hand
The 315th ACG was responsible for Operation Ranch Hand defoliation missions. After some modifications to the aircraft (which included adding armor for the crew), 3 C-123B Provider aircraft arrived at the base on 7 January 1962 under the code name Ranch Hand.
The 315th ACW was transferred to Phan Rang Air Base on 14 June 1967.
834th Air Division
On 15 October 1966 the 834th Air Division was assigned, without personnel or equipment, to Tan Son Nhut AB to join the Seventh Air Force, providing an intermediate command and control organization and acting as host unit for the USAF forces at the base.
The 315th Air Commando Wing and 8th Aerial Port Squadron were assigned to the 834th Division. Initially the 834th AD had a strength of twenty-seven officers and twenty-one airmen, all of whom were on permanent assignment to Tan Son Nhut.
The Air Division served as the single manager for all tactical airlift operations in South Vietnam through December 1971, using air transport to haul cargo and troops, which were air-landed or air-dropped as combat needs dictated. The 834th Air Division became the largest tactical airlift force in the world, capable of performing a variety of missions. In addition to airlift of cargo and personnel and RVNAF training, its missions and activities included Ranch Hand defoliation and insecticide spraying, psychological leaflet distribution, helicopter landing zone preparation, airfield surveys and the operation of aerial ports.
Units it directly controlled were:
315th Air Commando (later, 315th Special Operations; 315th Tactical Airlift) Wing: 15 October 1966 – 1 December 1971
Located at: Tan Son Nhut AB; later Phan Rang AB (15 June 1967 – 1 December 1971) UC-123 Provider. Composed of four C-123 squadrons with augmentation by C-130 Hercules transports from the 315th Air Division, Tachikawa AB, Japan.
2 C-123 Squadrons (32 a/c) at Tan Son Nhut AB;
C-130B aircraft assignments were 23 aircraft by 1 November 1966
483d Troop Carrier (later, 483d Tactical Airlift) Wing: 15 October 1966 – 1 December 1971
2d Aerial Port Group (Tan Son Nhut)
8th Aerial Port Squadron, Tan Son Nhut (16 detachments)
Detachments were located at various points where airlift activity warranted continuous but less extensive aerial port services. Aerial port personnel loaded, unloaded, and stored cargo and processed passengers at each location.
In addition, the 834th supervised transport operations (primarily C-47s) of the RVNAF, 6 DHC-4 Wallaby transports operated by RAAF 35 Squadron at Vũng Tàu Army Airfield and, from 29 July 1967, a Republic of Korea Air Force transport unit's C-46 Commandos, later replaced by C-54s. The 834th's flying components also performed defoliation missions, propaganda leaflet drops, and other special missions.
The 834th received the Presidential Unit Citation recognizing their efforts during the Battle of Khe Sanh.
In late 1969 C Flight, 17th Special Operations Squadron equipped with 5 AC-119G gunships was deployed at the base. By the end of 1970 this Flight would grow to 9 AC-119Gs to support operations in Cambodia.
During its last few months, the 834th worked toward passing combat airlift control to Seventh Air Force. On 1 December 1971 the 834th AD was inactivated as part of the USAF withdrawal of forces from Vietnam.
377th Air Base Wing
The 377th Air Base Wing was responsible for the day-to-day operations and maintenance of the USAF portion of the facility from April 1966 until the last USAF personnel withdrew from South Vietnam in March 1973. In addition, the 377th ABW was responsible for housing numerous tenant organizations including Seventh Air Force, base defense, and liaison with the RVNAF.
In 1972, as USAF units throughout South Vietnam inactivated, their squadrons began to be assigned, without equipment or personnel, to the 377th ABW.
From Cam Ranh AB:
21st Tactical Air Support Squadron: 15 March 1972 – 23 February 1973.
From Phan Rang AB:
8th Special Operations Squadron: 15 January – 25 October 1972 (A-37)
9th Special Operations Squadron: 21 January – 29 February 1972 (C-47)
310th Tactical Airlift Squadron: January–June 1972 and March–October 1972 (C-123, C-7B)
360th Tactical Electronic Warfare Squadron: 1 February – 24 November 1972 (EC-47N/P/Q)
All of these units were inactivated at Tan Son Nhut AB.
An operating location of the wing headquarters was established at Bien Hoa AB on 14 April 1972 to provide turnaround service for F-4 Phantom IIs of other organizations, mostly based in Thailand. It was replaced on 20 June 1972 by Detachment 1 of the 377th Wing headquarters, which continued the F-4 turnaround service and from 30 October 1972 added turnaround for A-7 Corsair IIs of the 354th Tactical Fighter Wing deployed at Korat Royal Thai Air Force Base, Thailand. The detachment continued operations through 11 February 1973.
The 377th ABW phased down for inactivation during February and March 1973, transferring many assets to the RVNAF. When inactivated on 28 March 1973, the 377th Air Base Wing was the last USAF unit in South Vietnam.
Post-1975 Vietnam People's Air Force use
Following the war, Tan Son Nhut Air Base was taken over as a base for the VPAF, which refers to it by the name Tân Sơn Nhất.
Tân Sơn Nhất Air Base is the base of the 917th Mixed Air Transport Regiment (a.k.a. Đồng Tháp Squadron) of the 370th Air Force Division. The regiment's fleet consisted of:
Bell UH-1 Iroquois
Mil Mi-8
Mil Mi-17
The 917th Mixed Air Transport Regiment moved to Cần Thơ International Airport in 2017.
In November 2015, Camp Davis was recognized as a historical relic site by the Monuments Conservation Center of Ho Chi Minh City Department of Culture and Sports and the Ho Chi Minh City Monuments Review Board.
Accidents and incidents
25 October 1967: F-105D Thunderchief #59-1737 crashed into C-123K #54-0667 on landing in bad weather. The F-105 pilot was killed and both aircraft were destroyed.
19 June 1968: at 14:15 a pallet of ammunition exploded on a truck in the munitions area north of the base, killing one U.S. soldier. An ambulance crossing the runway to reach the scene of the explosion was hit by a U.S. Army U-21 on takeoff, killing the two USAF medics in the ambulance.
11 October 1969: an AC-119G of the 17th Special Operations Squadron crashed shortly after takeoff. 6 crewmembers were killed and the aircraft was destroyed.
28 April 1970: an AC-119G of the 17th Special Operations Squadron crashed shortly after takeoff. 6 crewmembers were killed and the aircraft was destroyed.
See also
Republic of Vietnam Air Force
United States Pacific Air Forces
Seventh Air Force
References
Other sources
Endicott, Judy G. (1999) Active Air Force wings as of 1 October 1995; USAF active flying, space, and missile squadrons as of 1 October 1995. Maxwell AFB, Alabama: Office of Air Force History. CD-ROM.
Martin, Patrick (1994). Tail Code: The Complete History of USAF Tactical Aircraft Tail Code Markings. Schiffer Military Aviation History.
Mesco, Jim (1987) VNAF Republic of Vietnam Air Force 1945–1975 Squadron/Signal Publications.
Mikesh, Robert C. (2005) Flying Dragons: The Republic of Vietnam Air Force. Schiffer Publishing, Ltd.
USAF Historical Research Division/Organizational History Branch – 35th Fighter Wing, 366th Wing
VNAF – The Republic of Vietnam Air Force 1951–1975
USAAS-USAAC-USAAF-USAF Aircraft Serial Numbers—1908 to present
External links
505th Tactical Control Group – Tactical Air Control in Vietnam and Thailand
C-130A 57-460 at the National Air And Space Museum
The Tan Son Nhut Association
Electronic Warfare "Electric Goon" EC-47 Association website
The Defense of Tan Son Nhut Air Base, 31 January 1968
The Fall of Saigon
Installations of the United States Air Force in South Vietnam
Military installations of South Vietnam
Airports in Vietnam
Buildings and structures in Ho Chi Minh City
Military airbases established in 1955
Human-based computation game
A human-based computation game or game with a purpose (GWAP) is a human-based computation technique of outsourcing steps within a computational process to humans in an entertaining way (gamification).
Luis von Ahn first proposed the idea of "human algorithm games", or games with a purpose (GWAPs), in order to harness human time and energy for addressing problems that computers cannot yet tackle on their own. He believes that human intellect is an important resource and contribution to the enhancement of computer processing and human computer interaction. He argues that games constitute a general mechanism for using brainpower to solve open computational problems. In this technique, human brains are compared to processors in a distributed system, each performing a small task of a massive computation. However, humans require an incentive to become part of a collective computation. Online games are used as a means to encourage participation in the process.
The tasks presented in these games are usually trivial for humans, but difficult for computers. These tasks include labeling images, transcribing ancient texts, and activities based on common sense or human experience. Human-based computation games motivate people through entertainment rather than an interest in solving computation problems. This makes GWAPs more appealing to a larger audience. GWAPs can be used to help build the semantic web, annotate and classify collected data, crowdsource general knowledge, and improve other general computer processes.
GWAPs have a vast range of applications in a variety of areas such as security, computer vision, Internet accessibility, adult content filtering, and Internet search. In applications such as these, games with a purpose have lowered the cost of annotating data and increased the level of human participation.
History
The first human-based computation game, or game with a purpose, was created in 2004 by Luis von Ahn. The idea was that ESP would use human power to help label images. The game was a two-player agreement game that relied on players coming up with labels for images and attempting to guess what labels a partner was coming up with. ESP used microtasks: simple tasks that can be solved quickly without the need for any credentials.
Game design principles
Output agreement game
Games with a purpose categorized as output agreement games are microtask games in which players are randomly matched into pairs and attempt to produce matching outputs given a shared, visible input. ESP is an example of an output agreement game.
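The agreement mechanic is simple enough to sketch in code. The following Python fragment is a minimal illustration of an output-agreement round, not the implementation of ESP or any other published game; the function name, the scoring rule, and the taboo-word handling are invented for the example.

def play_round(labels_a, labels_b, taboo=()):
    """Minimal output-agreement round in the style of the ESP Game.

    labels_a / labels_b: each player's labels, in the order entered.
    taboo: labels already collected for this input, which no longer score.
    Returns the first label both players produce, or None on timeout.
    """
    seen_a, seen_b = set(), set()
    # Interleave the two players' entries to simulate real-time play.
    for a, b in zip(labels_a, labels_b):
        for label, own, other in ((a, seen_a, seen_b), (b, seen_b, seen_a)):
            label = label.strip().lower()
            if label in taboo:
                continue  # taboo words push players toward new labels
            own.add(label)
            if label in other:  # both players typed it: agreement reached
                return label    # record as trusted metadata for the input
    return None

# Both players eventually type "dog", so it becomes an image label:
print(play_round(["animal", "puppy", "dog"], ["dog", "pet", "cat"]))

Because the players cannot communicate and only agreement scores, the cheapest winning strategy is to type labels that genuinely describe the shared input, which is what makes the collected labels usable as metadata.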
Inversion problem games
Given an image, the ESP Game can be used to determine what objects are in the image, but cannot be used to determine the location of each object within the image. Location information is necessary for training and testing computer vision algorithms, so the data collected by the ESP Game is not sufficient. To deal with this problem, a new type of microtask game known as the inversion problem game was introduced by ESP's creator, von Ahn, in 2006. Peekaboom extended ESP by having players associate labels with a specific region of an image. In inversion problem games, two players are randomly paired together; one is assigned as the describer and the other as the guesser. The describer is given an input, which the guesser must reproduce from the describer's hints. In Peekaboom, for example, the describer slowly reveals small sections of an image until the guesser correctly guesses the label provided to the describer.
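A round of an inversion problem game can be sketched the same way. This is a toy illustration of the describer/guesser loop, not Peekaboom's actual code; the names, and modelling the guesser as a callback, are assumptions made for the example. The useful data is visible in the return value: the regions that had to be revealed before the guess succeeded are exactly the object-location information the ESP Game cannot collect.

def inversion_round(secret_word, regions, guess_fn, max_turns=20):
    """Toy inversion-problem round (Peekaboom-style).

    secret_word: the label given to the describer.
    regions: image regions in the order the describer reveals them.
    guess_fn: stands in for the guesser; given the regions revealed so
              far, it returns the guesser's current word.
    Returns the regions revealed when the guess succeeded, i.e. the
    part of the image associated with secret_word, or None on failure.
    """
    revealed = []
    for region in regions[:max_turns]:
        revealed.append(region)  # the describer opens one more region
        if guess_fn(revealed) == secret_word:
            return revealed      # these regions locate the object
    return None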
Input agreement games
In input-agreement games, two randomly paired players are each given an input that is hidden from the other player. The inputs either match or differ. The goal is for each player to tag their input so that the other player can determine whether or not the two inputs match. In 2008, Edith L. M. Law created the input-agreement game TagATune, in which players label sound clips: each player describes their own clip and guesses, from the partner's tags, whether the partner's clip is the same as their own.
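The input-agreement mechanic can be sketched as follows. Again this is an illustrative fragment rather than TagATune's code, with the round reduced to a single exchange of tags and votes, and clips represented by plain identifiers. The design point it shows is that the pair scores only when both verdicts are correct, which makes honest, descriptive tags the winning strategy and therefore useful metadata.

def input_agreement_round(clip_a, clip_b, tags_a, tags_b, vote_a, vote_b):
    """Toy input-agreement round in the style of TagATune.

    clip_a / clip_b: identifier of the hidden input given to each player.
    tags_a / tags_b: the tags each player wrote for their own input.
    vote_a / vote_b: each player's verdict ("same" or "different"),
                     reached after reading the partner's tags.
    Returns the collected tags on a win, or None if either verdict is wrong.
    """
    truth = "same" if clip_a == clip_b else "different"
    if vote_a == truth and vote_b == truth:
        # Both judged correctly, so the tags were informative: keep them.
        return {clip_a: tags_a, clip_b: tags_b}
    return None  # uninformative or dishonest tags score nothing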
Macrotask games
Macrotask games, unlike microtask games, contain complex problems that are usually left to experts to solve. In 2008, a macrotask game called Foldit was created by Seth Cooper. The idea was that players would attempt to fold a three-dimensional representation of a protein, a task that is hard for computers to automate completely. Locating the biologically relevant native conformation of a protein is a difficult computational challenge because of the very large size of the search space: even a toy model allowing only three backbone states per residue gives a 100-residue protein on the order of 3^100 (about 5 × 10^47) possible conformations. Through gamification and the implementation of user-friendly versions of algorithms, players are able to perform this complex task without much knowledge of biology.
Examples
Apetopia
The Apetopia game, launched by the University of Berlin, helps determine perceived color differences. It is designed to help scientists understand how shades of color are perceived by people: players' choices provide the data used to model better color metrics and parameters.
Artigo
Artigo is a Web platform currently offering six artwork annotation games as well as an artwork search engine in English, French, and German. Three of Artigo's games, the ARTigo game, ARTigo Taboo, and TagATag, are variations of Luis von Ahn's ESP game (later Google Image Labeler). Three other games of the Artigo platform, Karido, Artigo-Quiz, and Combino, have been conceived so as to complement the data collected by the three aforementioned ESP game variations.
Artigo's search engine relies on an original tensor latent semantic analysis.
As of September 2013, Artigo had over 30,000 (pictures of) artworks, mostly European and of the "long 19th century", from the Prometheus Image Archive, the Rijksmuseum (Amsterdam, The Netherlands), the Staatliche Kunsthalle Karlsruhe (Karlsruhe, Germany), and the University Museum of Contemporary Art on the campus of the University of Massachusetts Amherst, USA. From 2008 through 2013, Artigo collected over 7 million tags (mostly in German) and attracted 180,000 players (about a tenth of whom registered), averaging 150 players per day.
Artigo is a joint research endeavor of art historians and computer scientists aiming both at developing an artwork search engine and at data analysis in art history.
ESP game
The first example was the ESP Game, an effort in human computation originally conceived by Luis von Ahn of Carnegie Mellon University, which labels images. To make the task entertaining, two players attempt to assign the same labels to an image; the game records matches as image labels, and players enjoy the encounter because of its competitive, timed nature. To ensure that people do their best to label images accurately, the game requires two players (chosen at random and unknown to each other), who have only the image in common, to choose the same word as an image label. This discourages vandalism, which would be self-defeating as a strategy.
The ESP Game is a human-based computation game developed to address the problem of creating image metadata, a task that is difficult for computers. The idea behind the game is to use the computational power of humans to perform a task that computers cannot (originally, image recognition) by packaging the task as a game. Google bought a license to create its own version of the game (Google Image Labeler) in 2006 in order to return better search results for its online images. The license status of the data acquired by von Ahn's ESP Game, or by the Google version, is not clear. Google's version was shut down on 16 September 2011 as part of the closure of Google Labs.
PeekaBoom
PeekaBoom is a web-based game that helps computers locate objects in images by using human gameplay to collect valuable metadata. Humans understand and can analyze everyday images with minimal effort (what objects are in the image, their location, and background and foreground information), while computers have trouble with these basic visual tasks. Peekaboom has two main components: "Peek" and "Boom". Two random players from the Web participate by taking different roles in the game; when one player is Peek, the other is Boom. Peek starts out with a blank screen, while Boom starts with an image and a word related to it. The goal of the game is for Boom to reveal parts of the image to Peek. In the meantime, Peek can guess words associated with the revealed parts of the image. As Peek's guesses get closer, Boom can indicate whether they are hot or cold. When Peek guesses correctly, the players get points and then switch roles.
EteRNA
EteRNA is a game in which players attempt to design RNA sequences that fold into a given configuration. The widely varied solutions from players, often non-biologists, are evaluated to improve computer models predicting RNA folding. Some designs are actually synthesized to evaluate the actual folding dynamics and directly compare with the computer models.
Eyewire
Eyewire is a game for finding the connectome of the retina.
Foldit
Crowdsourcing has been gamified in games like Foldit, a game designed by the University of Washington, in which players compete to manipulate proteins into more efficient structures. A 2010 paper in the journal Nature credited Foldit's 57,000 players with providing useful results that matched or outperformed algorithmically computed solutions.
Foldit, while also a GWAP, uses a different method of tapping the collective human brain. The game challenges players to use their intuition for three-dimensional space to help with protein-folding algorithms. Unlike the ESP Game, which focuses on the results humans provide, Foldit tries to understand how humans approach complicated three-dimensional objects. By "watching" how humans play, researchers hope to improve their own computer programs. Instead of simply performing tasks that computers cannot do, this GWAP asks humans to help make existing machine algorithms better.
Guess the Correlation
Guess the Correlation is a game with a purpose challenging players to guess the true Pearson correlation coefficient in scatter plots. The collected data is used to study what features in scatter plots skew human perception of the true correlation. The game was developed by Omar Wagih at the European Bioinformatics Institute.
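The quantity players estimate is the Pearson correlation coefficient, r = cov(x, y) / (σx σy). A minimal, self-contained computation (illustrative; not the game's actual code):

    import math

    def pearson_r(xs, ys):
        """Pearson correlation coefficient of two equal-length sequences."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    # A noisy but clearly positive relationship:
    print(round(pearson_r([1, 2, 3, 4, 5], [1.1, 1.9, 3.4, 3.9, 5.2]), 3))
    # prints 0.992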
JeuxDeMots
JeuxDeMots is a game aiming to build a large semantic network. Players are asked to associate terms according to instructions provided for a given word. As of March 2021, the French version of the resulting network contains more than 350 million relations between 5 million lexical items. The project was developed by academics at the Laboratoire d'Informatique, de Robotique et de Microélectronique de Montpellier/Montpellier 2 University.
Nanocrafter
Nanocrafter is a game about assembling pieces of DNA into structures with functional properties, such as logic circuits, to solve problems. Like Foldit, it is developed at the University of Washington.
OnToGalaxy
OnToGalaxy is a game in which players help acquire common-sense knowledge about words. Implemented as a space shooter, OnToGalaxy is in its design quite different from other human computation games. The game was developed by Markus Krause at the University of Bremen.
Phrase Detectives
Phrase Detectives is an "annotation game" geared towards lovers of literature, grammar and language. It lets users indicate relationships between words and phrases to create a resource that is rich in linguistic information. Players are awarded points for their contributions and are featured on a leaderboard. It was developed by the academics Jon Chamberlain, Massimo Poesio and Udo Kruschwitz at the University of Essex.
Phylo
Phylo allows gamers to contribute to the greater good by helping to decode the genetic basis of genetic diseases. While playing the game and aligning the colored squares, a player helps the scientific community get a step closer to solving the long-standing problem of multiple sequence alignment, which is too computationally expensive for computers to solve exactly at scale. The goal is to understand how and where the function of an organism is encoded in its DNA. The game explains that "a sequence alignment is a way of arranging the sequences of DNA, RNA or protein to identify regions of similarity".
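As a simplified illustration of what aligning the colored squares amounts to, the sketch below scores candidate alignments of two short DNA sequences; the match/mismatch/gap values are made up and much simpler than Phylo's actual scoring:

    MATCH, MISMATCH, GAP = 1, -1, -2  # illustrative scoring values

    def alignment_score(seq_a, seq_b):
        """Score two pre-aligned sequences of equal length ('-' = gap)."""
        score = 0
        for a, b in zip(seq_a, seq_b):
            if a == "-" or b == "-":
                score += GAP
            elif a == b:
                score += MATCH
            else:
                score += MISMATCH
        return score

    # Two candidate alignments of the same pair; inserting a gap where a
    # base was deleted reveals the underlying similarity:
    print(alignment_score("ACGTACGT", "ACTACGT-"))  # naive layout: -5
    print(alignment_score("ACGTACGT", "AC-TACGT"))  # gapped layout: 5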
Play to Cure: Genes in Space
Play to Cure: Genes in Space is a mobile game that uses the collective force of players to analyse real genetic data to help with cancer research.
Quantum Moves
Quantum Moves is a dexterity and spatial problem solving game, where players move slippery particles across quantum space. Players' solutions on various levels are used to program and fine tune a real quantum computer at Aarhus University.
The game was first developed as a graphical interface for quantum simulation and education in 2012. In 2013 it was released to the public in a user-friendly form, and has been continually updated throughout 2014.
Reverse The Odds
Reverse The Odds is a mobile-based game that helps researchers learn about analyzing cancers. By incorporating data analysis into Reverse The Odds, researchers can get thousands of players to help them learn more about different cancers, including head and neck, lung, and bladder cancer.
Robot Trainer
Robot Trainer is a game with a purpose that aims at gathering common-sense knowledge. The player takes the role of a teacher. The goal of the game is to train a robot that will travel in deep space carrying a significant amount of human knowledge, so that it can teach other humans in the future, far away from Earth. The game has three levels. At each level, the player gets a specific task, such as building knowledge rules to answer questions, resolving conflicts, or validating other players' knowledge rules. Players are rewarded for submitting knowledge rules that help the robot answer a question and that match the contributions of their fellow teachers.
Sea Hero Quest
Sea Hero Quest is an iOS- and Android-based game that helps advance research in the field of dementia.
Smorball
In the browser-based game Smorball, players are asked to type the words they see as quickly and accurately as possible to help their team to victory in the fictional sport of Smorball. The game presents players with phrases from scanned pages in the Biodiversity Heritage Library. After verification, the words players type are sent to the libraries that store the corresponding pages, allowing those pages to be searched and data mined and ultimately making historic literature more usable for institutions, scholars, educators, and the public. The game was developed by Tiltfactor Lab.
Train Robots
Train Robots is an annotation game similar to Phrase Detectives. Players are shown pairs of before/after images of a robot arm and blocks on a board, and asked to enter commands to instruct the robot to move from the first configuration to the second. The game collects natural language data for training linguistic and robotic processing systems.
Wikidata Game
The Wikidata Game represents a gamification approach to let users help resolve questions regarding persons, images etc. and thus automatically edit the corresponding data items in Wikidata, the structured knowledge repository supporting Wikipedia and Wikimedia Commons, the other Wikimedia projects, and more.
ZombiLingo
ZombiLingo is a French game where players are asked to find the right head (a word or expression) to gain brains and become a more and more degraded zombie. While playing, they in fact annotate syntactic relations in French corpora. It was designed and developed by researchers from LORIA and Université Paris-Sorbonne.
TagATune
While there are many games with a purpose that deal with visual data, few attempt to label audio data. Annotating audio data can be used to search and index music and audio databases, as well as to generate training data for machine learning; however, manually labeling data is costly. One way to lessen the cost is to create a game with a purpose that labels audio data. TagATune is an audio-based online game in which human players tag and label descriptions of sounds and music. TagATune is played by randomly paired partners, who are given three minutes to come up with agreed descriptions for as many sounds as possible. In each round, a sound is randomly selected from the database and presented to the partners. A description becomes a searchable tag once enough people have agreed upon it. After the first round, the comparison round presents a tune and asks players to compare it to one of two other tunes of the same type.
MajorMiner
MajorMiner is an online game in which players listen to 10 seconds of a randomly selected sound and then describe it with tags. If one of the tags a player chooses matches that of another player, each player gains one point. If it is the first time that tag has been used for that specific sound, the player gains two points. The goal is to use player input to research automatic music labeling and recommendation based on the audio itself.
Wikispeedia
A game of the wikiracing type, in which players are given two Wikipedia articles (a start and a target) and are tasked with finding a path from the start article to the target article exclusively by clicking hyperlinks encountered along the way.
The path data collected via the game sheds light on the ways in which people reason about encyclopedic knowledge and how they interact with complex networks.
See also
Page Hunt
References
External links
ARTigo
Foldit
JeuxDeMots
ZombiLingo
Phrase Detectives
Train Robots
Karaoke Callout
Phylo
FunSAT
Apetopia
Human-based computation
Data collection
|
40548651
|
https://en.wikipedia.org/wiki/Range%20Networks
|
Range Networks
|
Range Networks, Inc. is a U.S. company that provides open-source software products used to operate cellular networks. Founded in 2011, Range Networks is headquartered in San Francisco, CA, with satellite offices worldwide.
History
In 2007 David Burgess and Harvind Samra created OpenBTS, subsequently releasing the source code to the public to provide cellular service to people in rural and remote regions.
In 2010 the founders incorporated as Range Networks to commercialize OpenBTS based products and deploy networks worldwide. Range Networks deployments can now be found on all seven continents including Antarctica.
In December 2010, the company raised $12 million from Gray Ghost Ventures and Omidyar Network.
Technology
Range Networks is a provider of U.S.-made commercial open source cellular systems. Using a combination of Range Networks hardware and software, network operators can build networks in which traditional GSM handsets are treated as virtual SIP endpoints. The company supports 2G, 2.5G and 3G GSM systems.
OpenBTS Project
The OpenBTS Project, an open-source, software-defined radio implementation of the GSM (Global System for Mobile communications) radio access network that presents normal GSM handsets as virtual SIP endpoints, was developed and is maintained by Range Networks. Range Networks releases its source code mostly under the GNU AGPL, while holding the copyright as a single commercial entity that sells commercial licenses, support, and hardware.
In August 2013, Range Networks announced the release of an update to OpenBTS, providing developers with the ability to incorporate Internet access through a packet-oriented mobile data service known as General Packet Radio Service (GPRS).
Deployments
Range Networks has worked with university and research groups to deploy cellular networks in rural regions around the world.
Indonesia: Partnering with UC Berkeley’s Technology and Infrastructure for Emerging Regions (TIER) research group a cellular network was established in Papua, Indonesia. In mid-2012 a wireless Internet service provider in rural Papua contacted the TIER group about setting up a low power GSM base station in a remote village in the Central Highlands of Papua. The village now has both voice and global SMS service and the network is profitable for local service providers.
Zambia: In collaboration with UC Santa Barbara's Mobility Management and Networking Laboratory (Moment Lab), a cellular network was deployed to study the economic feasibility of bringing cellular networks to remote regions. The deployment provided the remote village of Macha in Zambia with the capability of making and receiving calls and sending and receiving local SMS messages. The network also allowed outgoing global calls and outgoing global SMS text messages on a trial basis.
Mexico: Through a partnership with non-profit organization Rhizomatica a cellular network was established in Oaxaca, Mexico. Covering a village of approximately 2500 residents, where traditional cellular service was previously non-existent, the network is now serving over 450 residents who are able to make local and global calls and send text messages. Today, the community has its own cellular infrastructure, including billing and management of the network on their own.
Antarctica: The Australian Antarctic Division (AAD), a division of the Australian Government's Department of the Environment and Energy, has used Range Networks software to provide GSM services to its four research stations. The system is currently installed and operational at Casey, Davis and Mawson stations in Antarctica, as well as at the sub-Antarctic Macquarie Island station.
References
Software companies based in California
Software companies of the United States
|
51213613
|
https://en.wikipedia.org/wiki/Nikola%20Jovanovi%C4%87%20%28basketball%2C%20born%201994%29
|
Nikola Jovanović (basketball, born 1994)
|
Nikola Jovanović (, born January 6, 1994) is a Serbian professional basketball player who last played for Nizhny Novgorod of the VTB United League. He played college basketball for the USC Trojans.
Early life
Growing up in Serbia, Jovanović played basketball for the junior cadet teams of KK Crvena zvezda (2010–11) and KK Partizan (2011–12). After moving to the United States in 2012, he enrolled at Arlington Country Day School in Jacksonville, Florida. In 2012–13, he averaged 15 points and 12 rebounds while leading the Apaches to a 30–4 record and a No. 2 ranking in the state of Florida. He was ranked as the No. 20 prospect in the state of Florida by Florida Hoops.
College career
Freshman year
As a freshman at USC in 2013–14, Jovanović averaged 8.0 points and 4.4 rebounds while making 24 starts and appearing in all 32 games. He made 76.1 percent of his free throws, the eighth-best mark all-time by a Trojan freshman, and hit 51.6 percent of his field goals, the 10th-best by a USC freshman all-time. He scored a season-high 23 points on 8-of-10 shooting against California on January 22, 2014. At the team banquet following the season, he received the Harold Jones Award as the team's Most Improved Player.
Sophomore year
As a sophomore in 2014–15, Jovanović showed improvement in almost every area, leading the team with an average of 7.0 rebounds per game and finishing second in scoring with a 12.3 average. He started 31 of USC's 32 games and played in each contest. On January 29, 2015, he scored a career-high 30 points in USC's triple-overtime 98–94 loss to Colorado. He earned the Bob Boyd Award as the team's top rebounder, given out at the USC Men's Basketball Awards Banquet following the season.
Junior year
As a junior in 2015–16, Jovanović averaged 12.1 points and 7.0 rebounds for a Trojans team that had a better-than-expected season and earned a spot in the NCAA tournament. On January 30, 2016, he scored a season-high 28 points against Washington. On February 28, he became the 36th Trojan to score 1,000 points or more in his USC men's basketball career. He earned the Bob Boyd Award as the team's top rebounder for the second consecutive year.
On April 14, 2016, Jovanović declared for the NBA draft, forgoing his final year of college eligibility.
Professional career
NBA Development League
After going undrafted in the 2016 NBA draft, Jovanović joined the Detroit Pistons for the Orlando Summer League and the Los Angeles Lakers for the Las Vegas Summer League. On September 26, 2016, he signed with the Pistons, but was waived on October 17 after appearing in one preseason game. On October 30, he was acquired by the Grand Rapids Drive of the NBA Development League, an affiliate of the Pistons. On March 2, 2017, Jovanović was traded to the Westchester Knicks.
Crvena zvezda and loans
On July 29, 2017, Jovanović signed a three-year deal with Serbian club Crvena zvezda. On August 13, 2018, Jovanović moved on loan to the Italian club Aquila Basket Trento for the 2018–19 season. After the year-long loan in Italy, Jovanović signed a new contract with Zvezda on August 28, 2019. On August 6, 2020, Jovanović moved on loan to the Bosnian club Igokea for the 2020–21 season.
Nizhny Novgorod
On September 14, 2021, Jovanović signed a deal with Russian club Nizhny Novgorod. He parted ways with Nizhny in December 2021.
Career statistics
Euroleague
Season   Team           GP  GS  MPG  FG%   3P%   FT%   RPG  APG  SPG  BPG  PPG  PIR
2017–18  Crvena zvezda  16   0  8.1  .543  .333  .333  2.8   .2   .1   .2  3.6  4.0
Career                  16   0  8.1  .543  .333  .333  2.8   .2   .1   .2  3.6  4.0
Personal life
Jovanović is the son of Ljubiša and Dragana Jovanović, and he has one sister, Tamara, who is a student-athlete at Loyola Marymount University. His father played professional basketball in Europe for 15 years (Partizan, Rabotnički, Soproni Ászok, etc.). Jovanović is fluent in Serbian, French and English.
References
External links
Nikola Jovanović at usctrojans.com
1994 births
Living people
ABA League players
Aquila Basket Trento players
Basketball League of Serbia players
Basketball players from Belgrade
Centers (basketball)
Grand Rapids Drive players
KK Crvena zvezda players
KK Igokea players
Lega Basket Serie A players
Power forwards (basketball)
Serbian expatriate basketball people in Bosnia and Herzegovina
Serbian expatriate basketball people in Italy
Serbian expatriate basketball people in Russia
Serbian expatriate basketball people in the United States
Serbian men's basketball players
USC Trojans men's basketball players
Westchester Knicks players
|
69309
|
https://en.wikipedia.org/wiki/RT-11
|
RT-11
|
RT-11 ("RT" for real-time) is a discontinued small, low-end, single-user real-time operating system for the Digital Equipment Corporation PDP-11 family of 16-bit computers. RT-11, which stands for Real-Time, was first implemented in 1970 and was widely used for real-time systems, process control, and data acquisition across the full line of PDP-11 computers. It was also used for low-cost general-use computing.
Features
Multitasking
RT-11 systems did not support preemptive multitasking, but most versions could run multiple simultaneous applications. All variants of the monitors provided a background job. The FB, XM and ZM monitors also provided a foreground job, as well as six system jobs if selected via the SYSGEN system generation program. These jobs had fixed priorities, with the background job lowest and the foreground job highest. It was possible to switch between jobs from the system console user interface, and SYSGEN could generate a monitor that provided only a single background job (the SB, XB and ZB variants).
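The selection rule amounts to running the highest-priority ready job. A toy Python illustration of the policy described above (RT-11 itself was written in assembly; the names here are invented):

    # Toy illustration of fixed-priority selection: foreground highest,
    # system jobs in between, background lowest.
    JOB_PRIORITY = {"foreground": 2, "system": 1, "background": 0}

    def next_job(runnable):
        """Pick the highest-priority runnable job name."""
        return max(runnable, key=JOB_PRIORITY.__getitem__)

    print(next_job(["background", "foreground"]))  # prints "foreground"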
Source code
RT-11 was written in assembly language. Heavy use of the conditional assembly and macro programming features of the MACRO-11 assembler allowed a significant degree of configurability and allowed programmers to specify high-level instructions otherwise unprovided for in machine code. RT-11 distributions included the source code of the operating system and its device drivers with all the comments removed and a program named "SYSGEN" which would build the operating system and drivers according to a user-specified configuration. Developer's documentation included a kernel listing that included comments.
Device drivers
In RT-11, device drivers were loadable, except that prior to V4.0 the device driver for the system device (boot device) was built into the kernel at configuration time. Because RT-11 was commonly used for device control and data acquisition, it was common for developers to write or enhance device drivers. DEC encouraged such driver development by making their hardware subsystems (from bus structure to code) open, documenting the internals of the operating system, encouraging third-party hardware and software vendors, and by fostering the development of the Digital Equipment Computer Users Society.
Human interface
Users generally operated RT-11 via a printing terminal or a video terminal, originally via a strap-selectable current-loop (for conventional teletypes) or via an RS-232 (later RS-422 as well) interface on one of the CPU cards; DEC also supported the VT11 and VS60 graphics display devices (vector graphics terminals with a graphic character generator for displaying text, and a light pen for graphical input). A third-party favorite was the Tektronix 4010 family.
The Keyboard Monitor (KMON) interpreted commands issued by the user and would invoke various utilities with Command String Interpreter (CSI) forms of the commands.
The RT-11 command language had many features (such as commands and device names) that can later be found in the DOS line of operating systems, which borrowed heavily from RT-11. The CSI form expected input and output filenames and options ("switches" on RT-11) in a precise order and syntax. Command-line switches were separated by a slash ("/") rather than the dash ("-") used in Unix-like operating systems. All commands had a full form and a short form to which they could be contracted. For example, the RENAME command could be contracted to REN.
Batch files and the batch processor could be used to issue a series of commands with some rudimentary control flow. Batch files had the extension .BAT.
In later releases of RT-11, it was possible to invoke a series of commands using a .COM command file, but they would be executed in sequence with no flow control. Even later, it was possible to execute a series of commands with great control through use of the Indirect Command File Processor (IND), which took .CMD control files as input.
Files with the extension .SAV were a kind of executable. They were known as "save files" because the RT-11 SAVE command could be used to save the contents of memory to a disk file, which could be loaded and executed at a later time, allowing any session to be saved.
The SAVE command, along with GET, START, REENTER, EXAMINE and DEPOSIT, were basic commands implemented in the KMON. Some commands and utilities were later borrowed in the DOS line of operating systems, including DIR, COPY, RENAME, ASSIGN, CLS, DELETE, TYPE, HELP and others. The FORMAT command was used for physical disk formatting; it could not create a file system, for which purpose the INIT command was used (the analogue of the DOS command FORMAT /Q). Most commands supported wildcards in file names.
Physical device names were specified in the form 'dd{n}:' where 'dd' was a two-character alphabetic device name and the optional 'n' was the unit number (0–7). When the unit number was omitted, unit 0 was assumed. For example, TT: referred to the console terminal, LP: (or LP0:) referred to the parallel line printer, and DX0:, DY1:, DL4: referred to disk volumes (RX01 unit 0, RX02 unit 1, RL01 or RL02 unit 4, respectively). Logical device names consisted of 1–3 alphanumeric characters and were used in the place of a physical device name. This was accomplished using the ASSIGN command. For example, one might issue ASSIGN DL0 ABC which would cause all future references to 'ABC:' to map to 'DL0:'. Reserved logical name DK: referred to the current default device. If a device was not included in a file specification, DK: was assumed. Reserved logical name SY: referred to the system device (the device from which the system had been booted).
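A short hypothetical console session tying these pieces together (commands as documented above; the leading dot is the keyboard monitor's prompt):

    .ASSIGN DL0 ABC
    .DIR ABC:
    .COPY ABC:DATA.MAC DY1:DATA.MAC
    .RENAME ABC:OLD.SAV ABC:NEW.SAV

After the ASSIGN, references to ABC: map to RL01/RL02 unit 0, so the subsequent commands list, copy, and rename files on that volume.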
Later versions of RT-11 allowed specification of up to 64 units (0–77 octal) for certain devices, but the device name was still limited to three alphanumeric characters. This feature was enabled through a SYSGEN selection, and only applied to the DU and LD device handlers. In these two cases, the device name form became 'dnn:' where 'd' was 'D' for the DU device and 'L' for the LD device, and 'nn' was 00–77(octal).
Software
RT-11 was distributed with utilities for performing many actions. The utilities DIR, DUP, PIP and FORMAT were for managing disk volumes. TECO, EDIT, and the visual editors KED (for the DEC VT100) and K52 (for the DEC VT52) were used to create and edit source and data files. MACRO, LINK, and LIBR were for building executables. ODT, VDT and the SD device were used to debug programs. DEC's version of Runoff was for producing documents. Finally, VTCOM was used to connect with and use (or transfer files to and from) another computer system over the phone via a modem.
The system was complete enough to handle many modern personal computing tasks. Productivity software such as LEX-11, a word processing package, and a spreadsheet from Saturn Software, used under other PDP-11 operating systems, also ran on RT-11. Large amounts of free, user-contributed software for RT-11 were available from the Digital Equipment Computer Users Society (DECUS) including an implementation of C. Although the tools to develop and debug assembly-language programs were provided, other languages including C, Fortran, Pascal, and several versions of BASIC were available from DEC as "layered products" at extra cost. Versions of these and other programming languages were also available from other, third-party, sources. It is even possible to network RT-11 machines using DECNET, the Internet and protocols developed by other, third-party sources.
Distributions and minimal system configuration
The RT-11 operating system could be booted from, and perform useful work on, a machine consisting of two 8-inch 250 KB floppy disks and 56 KB of memory, and could support 8 terminals. Other boot options included the RK05 2.5 MB removable hard disk platter, or magnetic tape. Distributions were available pre-installed or on punched tape, magnetic tape, cartridge tape, or floppy disk. A minimal but complete system supporting a single real-time user could run on a single floppy disk and in 8K 16-bit words (16 KB) of RAM, including user programs. This was facilitated by support for swapping and overlaying. To permit operation in so little memory, the keyboard-command user interface would be swapped out during the execution of a user's program and then swapped back into memory upon program termination. The system supported a real-time clock, printing terminal, VT11 vector graphics unit, a 16-channel 100 kHz A/D converter with 2-channel D/A, a 9600 baud serial port, 16-bit bidirectional boards, and so on.
File system
RT-11 implemented a simple and fast file system employing six-character filenames with three-character extensions ("6.3") encoded in RADIX-50, which packed those nine characters into only three 16-bit words (six bytes). All files were contiguous, meaning that each file occupied consecutive blocks (the minimally addressable unit of disk storage, 512 bytes) on the disk. This meant that an entire file could be read (or written) very quickly. A side effect of this file system structure was that, as files were created and deleted on a volume over time, the unused disk blocks would likely not remain contiguous, which could become the limiting factor in the creation of large files; the remedy was to periodically “squeeze” (or "squish") a disk to consolidate the unused portions.
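The packing works because the RADIX-50 character set has just 40 symbols (commonly given as space, A–Z, $, ., %, 0–9), so three characters occupy one of 40³ = 64,000 combinations, which fits in a 16-bit word. An illustrative encoder, not DEC's code:

    # PDP-11 RADIX-50 character set: space, A-Z, $, ., %, 0-9 (40 symbols).
    RAD50_CHARS = " ABCDEFGHIJKLMNOPQRSTUVWXYZ$.%0123456789"

    def rad50_word(three_chars):
        """Pack exactly three RADIX-50 characters into one 16-bit word."""
        assert len(three_chars) == 3
        word = 0
        for ch in three_chars.upper():
            word = word * 40 + RAD50_CHARS.index(ch)
        return word  # always < 40**3 == 64000, so it fits in 16 bits

    # A 6.3 filename such as SWAP.SYS occupies three words:
    name = "SWAP  SYS"  # six-character name plus three-character extension
    print([oct(rad50_word(name[i:i + 3])) for i in range(0, 9, 3)])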
Each volume had only one directory, preallocated at the beginning of the volume. The directory consisted of an array of entries, one per file or per stretch of unallocated space. Each directory entry was 8 (or more) 16-bit words, though a SYSGEN option allowed extra application-specific storage.
Compatibility with other DEC operating systems
Many RT11 programs (those that did not need specialized peripherals or direct access to the hardware) could be directly executed using the RT11 RTS (Run-time system) of the RSTS/E timesharing system or under RTEM (RT Emulator) on various releases of both RSX-11 and VMS.
The implementation of DCL for RT-11 increased its compatibility with the other DEC operating systems. Although each operating system had commands and options which were unique to that operating system, there were a number of commands and command options which were common.
Other PDP-11 operating systems
DEC also sold RSX-11, a multiuser, multitasking operating system with realtime features, and RSTS/E (originally named RSTS-11) a multiuser time-sharing system, but RT-11 remained the operating system of choice for data acquisition systems where real time response was required. The Unix operating system also became popular, but lacked the real-time features and extremely small size of RT-11.
Hardware
RT-11 ran on all members of the DEC PDP-11 family, both Q-Bus- and Unibus-based, from the PDP-11/05 (its first target, in 1970) to the final PDP-11 implementations (PDP-11/93 and /94). In addition, it ran on the Professional Series and the PDT-11 "Programmed Data Terminal" systems, also from DEC. Since the PDP-11 architecture was implemented in replacement products by other companies (e.g., the M100 and family from Mentec), or as reverse-engineered clones in other countries (e.g., the DVK from the Soviet Union), RT-11 runs on these machines as well.
Peripherals
Adding driver support for peripherals, such as a CalComp plotter, typically involved copying files and did not require a SYSGEN.
Compatible operating systems
Fuzzball
Fuzzball, routing software for Internet Protocols, was capable of running RT-11 programs.
SHAREplus
HAMMONDsoftware distributed a number of RT-11 compatible operating systems including STAReleven, an early multi-computer system and SHAREplus, a multi-process/multi-user implementation of RT-11 which borrowed some architectural concepts from the VAX/VMS operating system. RT-11 device drivers were required for operation. Transparent device access to other PDP-11s and VAX/VMS were supported with a network option. Limited RSX-11 application compatibility was also available. SHAREplus had its strongest user base in Europe.
TSX-11
TSX-11, developed by S&H Computing, was a multi-user, multi-processing implementation of RT-11. The only thing it did not handle was the boot process, so any TSX-Plus machine had to boot RT-11 first before running TSX-Plus as a user program. Once running, TSX-Plus took over complete control of the machine from RT-11. It provided true memory protection of users from other users, provided user accounts and maintained account separation on disk volumes, and implemented a superset of the RT-11 EMT programmed requests.
S&H wrote the original TSX because "Spending $25K on a computer that could only support one user bugged" (founder Harry Sanders); the outcome was the initial four-user TSX in 1976. TSX-Plus (released in 1980) was the successor to TSX, released in 1976. The system was popular in the 1980s. RT-11 programs generally ran, unmodified, under TSX-Plus and, in fact, most of the RT-11 utilities were used as-is under TSX-Plus. Device drivers generally required only slight modifications.
Depending on the PDP-11 model and the amount of memory, the system could support a minimum of 12 users (14–18 users on a 2 MB 11/73, depending on workload). The last version of TSX-Plus had TCP/IP support.
Versions
Variants
Users could choose from four variants with differing levels of support for multitasking:
RT-11SJ (Single Job) allowed only one task. This was the initial distribution.
RT-11FB (Foreground/Background) supported two tasks: a high-priority, non-interactive "Foreground" job, and a low-priority, interactive "Background" job.
RT-11XM (eXtended Memory), a superset of FB, provided support for memory beyond 64 KB, but required a minicomputer with memory-management hardware; distributed from approximately 1975 onward.
RT-11ZM provided support for systems with Separate Instruction and Data space (such as on the Unibus-based 11/44, 45, 55, 70, 84, and 94 and the Q-Bus-based 11/53, 73, 83, and 93)
Specialized versions
Several specialized PDP-11 systems were sold based on RT-11:
LAB-11 provided an LPS-11 analog peripheral for the collection of laboratory data
PEAK-11 provided further customization for use with gas chromatographs (analyzing the peaks produced by the GC); data collection ran in RT-11's foreground process while the user's data analysis programs ran in the background.
GT4x systems added a VT11 vector graphics peripheral. Several very popular demo programs were provided with these systems including Lunar Lander and a version of Spacewar!.
GT62 systems added a VS60 vector graphics peripheral (VT11-compatible) in a credenza cabinet.
GAMMA-11 was a packaged RT-11 and PDP-11/34 system that was one of the first fully integrated nuclear medicine systems. It included fast analog-to-digital converters, 16-bit color graphical displays, and an extensive software library for developing applications for data collection, analysis and display from a nuclear medicine gamma camera.
Clones in the USSR
Several clones of RT-11 were made in the USSR:
RAFOS ("РАФОС") — SM EVM
FOBOS ("ФОБОС") — Elektronika 60
FODOS ("ФОДОС")
RUDOS ("РУДОС")
OS DVK ("ОС ДВК") — DVK
OS BK-11 ("ОС БК-11") — Elektronika BK
MASTER-11 ("МАСТЕР-11") — DVK
NEMIGA OS ("НЕМИГА") — Nemiga PK 588
See also
TSX-32
References
External links
PDP-11 How-to guide with RT-11 commands reference
RT-11 emulator for Windows console
DEC operating systems
Real-time operating systems
PDP-11
Assembly language software
Elektronika BK operating systems
|
287587
|
https://en.wikipedia.org/wiki/Hot%20swapping
|
Hot swapping
|
Hot swapping is the replacement or addition of components to a computer system without stopping, shutting down, or rebooting the system; hot plugging describes the addition of components only. Components which have such functionality are said to be hot-swappable or hot-pluggable; likewise, components which do not are cold-swappable or cold-pluggable.
Most desktop computer hardware, such as CPUs and memory, are only cold-pluggable. However, it is common for mid to high-end servers and mainframes to feature hot-swappable capability for hardware components, such as CPU, memory, PCIe, SATA and SAS drives.
A well-known example of hot swap functionality is the Universal Serial Bus (USB) connection, which allows users to add or remove peripherals such as a mouse, keyboard, printer, or portable hard drive. Such devices are characterized as hot-swappable or hot-pluggable depending on the supplier.
Most smartphones and tablets with tray-loading holders can interchange SIM cards without powering down the system.
Dedicated digital cameras and camcorders usually have readily accessible memory card and battery compartments for quick changing with only minimal interruption of operation. Batteries can be cycled through by recharging reserve batteries externally while unused. Many cameras and camcorders feature an internal memory to allow capturing when no memory card is inserted.
Rationale
Hot swapping is used whenever it is desirable to change the configuration or repair a working system without interrupting its operation. It may simply be for convenience of avoiding the delay and nuisance of shutting down and then restarting complex equipment or because it is essential for equipment, such as a server, to be continuously active.
Hot swapping may be used to add or remove peripherals or components, to allow a device to synchronize data with a computer, and to replace faulty modules without interrupting equipment operation. A machine may have dual power supplies, each adequate to power the machine, so that a faulty one may be hot-swapped. Important cards, such as disk controllers or host adapters, may be designed with redundant paths so that they can be upgraded or replaced if they fail, without requiring the computer system to be removed from operation.
System considerations
Machines that support hot swapping need to be able to modify their operation for the changed configuration, either automatically on detecting the change, or by user intervention. All electrical and mechanical connections associated with hot-swapping must be designed so that neither the equipment nor the user can be harmed while hot-swapping. Other components in the system must be designed so that the removal of a hot-swappable component does not interrupt operation.
Mechanical design
Protective covering plates, shields, or bezels may be used on either the removable components or the main device itself to prevent operator contact with live powered circuitry, to provide antistatic protection for components being added or removed, or to prevent the removable components from accidentally touching and shorting out the powered components in the operating device.
Additional guide slots, pins, notches, or holes may be used to aid in proper insertion of a component between other live components, while mechanical engagement latches, handles, or levers may be used to assist in proper insertion and removal of devices that either require large amounts of force to connect or disconnect, or to assist in the proper mating and holding together of power and communications connectors.
Variations
There are two slightly differing meanings of the term hot swapping. It may refer only to the ability to add or remove hardware without powering down the system, with the system software having to be notified of the event by the user in order to cope with it; examples include RS-232 and lower-end SCSI devices. It may also describe systems that detect and respond to such changes automatically, without user intervention; examples include USB, FireWire and higher-end SCSI devices.
Some implementations require a component shutdown procedure prior to removal. This simplifies the design, but such devices are not robust in the case of component failure. If a component is removed while it is being used, the operations to that device fail and the user is responsible for retrying if necessary, although this is not usually considered to be a problem.
More complex implementations may recommend but do not require that the component be shut down, with sufficient redundancy in the system to allow operation to continue if a component is removed without being shut down. In these systems hot swap is normally used for regular maintenance to the computer, or to replace a broken component.
Connectors
Most modern hot-swap methods use a specialized connector with staggered pins, so that certain pins are certain to be connected before others. Most staggered-pin designs have ground pins longer than the others, ensuring that no sensitive circuitry is connected before there is a reliable system ground. The other pins may all be the same length, but in some cases three pin lengths are used so that the incoming device is grounded first, data lines connected second, and power applied third, in rapid succession as the device is inserted. Pins of the same nominal length do not necessarily make contact at exactly the same time due to mechanical tolerances, and angling of the connector when inserted.
At one time staggered pins were thought to be an expensive solution, but many contemporary connector families now come with staggered pins as standard; for example, they are used on all modern serial SCSI disk-drives. Specialized hot-plug power connector pins are now commercially available with repeatable DC current interruption ratings of up to 16 A. Printed circuit boards are made with staggered edge-fingers for direct hot-plugging into a backplane connector.
Although the speed of plugging cannot be controlled precisely, practical considerations place limits that can be used to determine worst-case conditions. For a typical staggered-pin design with a length difference of 0.5 mm, the elapsed time between long- and short-pin contact is between 25 ms and 250 ms. It is quite practical to design hot-swap circuits that can operate at that speed.
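Those figures follow directly from the insertion speed: the contact delay is the pin-length difference divided by how fast the connector is pushed in. A quick check with assumed insertion speeds (the speeds are illustrative, chosen to be consistent with the numbers above):

    STAGGER_MM = 0.5  # pin length difference from the text

    # Assumed plausible range of insertion speeds (not from the article):
    for speed_mm_per_s in (20.0, 2.0):  # fast and slow insertion
        delay_ms = STAGGER_MM / speed_mm_per_s * 1000
        print(f"{speed_mm_per_s:>4} mm/s -> {delay_ms:.0f} ms between contacts")
    # prints 25 ms at 20 mm/s and 250 ms at 2 mm/s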
As long as the hot-swap connector is sufficiently rigid, one of the four corner pins will always be the first to engage. For a typical two-row connector arrangement this provides four first-to-make corner pins that are usually used for grounds. Other pins near the corners can be used for functions that also benefit from this effect, for example sensing when the connector is fully seated. Good practice places the grounds in the corners and the power pins near the center, with two sense pins in opposite corners so that full seating is confirmed only when both of them are in contact with the slot; the remaining pins are used for all the other data signals.
Power electronics
The DC power supplies to a hot-swap component are usually pre-charged by dedicated long pins that make contact before the main power pins. These pre-charge pins are protected by a circuit that limits the inrush current to an acceptable value that cannot damage the pins nor disturb the supply voltage to adjacent slots. The pre-charge circuit might be a simple series resistor, a negative temperature coefficient (NTC) resistor, or a current-limiter circuit. Further protection can be provided by a "soft-start" circuit that provides a managed ramp-up of the internal DC supply voltages within the component.
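To see why pre-charge limiting matters, consider hot-plugging a card whose local decoupling capacitors are discharged: at first contact the supply sees a near-short, and a series resistor bounds the current while the RC time constant governs charging. A sketch with assumed example values (none are from the article):

    V_BUS = 12.0         # assumed supply voltage, volts
    C_LOCAL = 470e-6     # assumed decoupling capacitance on the card, farads
    R_PRECHARGE = 10.0   # assumed series pre-charge resistor, ohms

    peak_inrush = V_BUS / R_PRECHARGE  # worst case at first pin contact
    tau = R_PRECHARGE * C_LOCAL        # RC time constant
    t_charged = 5 * tau                # roughly 99% charged after 5 tau

    print(f"peak inrush: {peak_inrush:.1f} A")
    print(f"time constant: {tau * 1e3:.1f} ms; charged after ~{t_charged * 1e3:.0f} ms")

With these example values the capacitors are essentially charged within the "tens of milliseconds" delays that appear in the sequence below.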
A typical sequence for a hot-swap component being plugged into a slot could be as follows:
Long ground pins make contact; basic electrical safety and ESD protection becomes available.
Long (or medium) pre-charge pins make contact; decoupling capacitors start to charge up.
Real time delay of tens of milliseconds.
Short power/signal pins make contact.
Connector becomes fully seated; power-on reset signal asserted within component
Soft-start circuit starts to apply power to the component.
Real time delay of tens of milliseconds.
Soft-start circuit completes sequence; power-on reset circuit deasserted
Component begins normal operation.
Hot-swap power circuits can now be purchased commercially in specially designed ASICs called hot-swap power managers (HSPMs).
Radio transmitters
Modern-day radio transmitters (and some TV transmitters as well) use high-power RF transistor power modules instead of vacuum tubes. Hot-swapping power modules is not a new technology: many radio transmitters manufactured in the 1930s allowed power tubes to be swapped out while the transmitter was running, but the feature was not universally adopted after more reliable high-power tubes were introduced.
In the mid-1990s, several radio transmitter manufacturers in the US started offering swappable high-power RF transistor modules.
There was no industry standard for the design of the swappable power modules at the time.
Early module designs had only limited patent protection.
By the early 2000s, many transmitter models were available that used many different kinds of power modules.
The reintroduction of power modules has been good for the radio transmitter industry, as it has fostered innovation. Modular transmitters have proven to be more reliable than tube transmitters, when the transmitter is properly chosen for the conditions at the transmitting site.
Power limitations:
Lowest power modular transmitter: generally 1.0 kW, using 600 W modules.
Highest power modular transmitter: 1.0 MW (for LW, MW).
Highest power modular transmitter: 45 kW (FM, TV).
Signal electronics
Circuitry attached to signal pins in a hot-swap component should include some protection against electrostatic discharge (ESD). This usually takes the form of clamp diodes to ground and to the DC power supply voltage. ESD effects can be reduced by careful design of the mechanical package around the hot-swap component, perhaps by coating it with a thin film of conductive material.
Particular care must be taken when designing systems with bussed signals which are wired to more than one hot-swap component. When a hot-swap component is inserted its input and output signal pins will represent a temporary short-circuit to ground. This can cause unwanted ground-level pulses on the signals which can disturb the operation of other hot-swap components in the system. This was a problem for early parallel SCSI disk-drives. One common design solution is to protect bussed signal pins with series diodes or resistors. CMOS buffer devices are now available with specialized inputs and outputs that minimize disturbance of bussed signals during the hot-swap operation. If all else fails, another solution is to quiesce the operation of all components during the hot-swap operation.
Gaming
Although most contemporary video game systems can interchange games and multimedia (e.g. Blu-rays) without powering down the system, older generations of systems varied in their support of hot-swapping capabilities. For example, whereas the Sony PlayStation and PlayStation 2 could eject a game disc with the system powered on, the Nintendo Game Boy Advance and the Nintendo 64 would freeze up and could potentially become corrupt if the game cartridge was removed with the power on. Manufacturers specifically warned against such practices in the owner's manual or on the game cartridge. It was supposedly for this reason that Stop 'N' Swop was taken out of the Banjo-Kazooie series. With the Sega Genesis/Mega Drive system, it was sometimes possible to apply cheats (such as a player having infinite lives) and other temporary software alterations to games by hot swapping cartridges, even though the cartridges were not designed to be hot swappable.
Software
Hot swapping can also refer to the ability to alter the running code of a program without needing to interrupt its execution. Interactive programming is a programming paradigm that makes extensive use of hot swapping, so the programming activity becomes part of the program flow itself.
Only a few programming languages support hot swapping natively, including Pike, Lisp, Erlang, Smalltalk, Visual Basic 6 (not VB.NET), Java and, most recently, Elm and Elixir. Microsoft Visual Studio supports a kind of hot swapping called Edit and Continue, which is supported by C#, VB.NET and C/C++ when running under a debugger.
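Python supports a limited form of this through its standard library: importlib.reload re-executes a module's source in a running process. A minimal runnable sketch (the module name live_mod.py is made up):

    import importlib
    import pathlib
    import sys

    # Create a tiny module on disk, import it, edit it, and hot-reload it.
    pathlib.Path("live_mod.py").write_text("def greet():\n    return 'hi'\n")
    sys.path.insert(0, ".")
    import live_mod
    print(live_mod.greet())  # prints hi

    # Simulate editing the source while the program keeps running...
    pathlib.Path("live_mod.py").write_text(
        "def greet():\n    return 'hello, world'\n")
    importlib.reload(live_mod)  # re-execute the module without restarting
    print(live_mod.greet())  # prints hello, world

Note that importlib.reload rebinds the module's names but does not update objects created from the old code, which is one reason full hot swapping is difficult to retrofit.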
Hot swapping is the central method in live coding, where programming is an integral part of the runtime process. In general, all programming languages used in live coding, such as SuperCollider, TidalCycles, or Extempore support hot swapping.
Some web-based frameworks, such as Django, support detecting module changes and reloading them on the fly. Although this serves the same purpose as hot swapping for most intents and purposes, it is technically just a cache purge triggered by a new file. This does not apply, in the general case, to markup and programming languages such as HTML and PHP respectively, as those files are normally re-interpreted on each use by default. A few CMSs and other PHP-based frameworks (such as Drupal) do employ caching, however; in those cases, similar abilities and exceptions apply.
Hot swapping also facilitates developing systems where large amounts of data are being processed, as in entire genomes in bioinformatics algorithms.
Trademarks
The term "HOT PLUG" was registered as a trademark in the United States in November 1992 to Core International, Inc., and cancelled in May 1999.
See also
Dynamic software updating
Interactive programming
udev
References
Computer peripherals
Fault tolerance
Fault-tolerant computer systems
Live coding
|
37956528
|
https://en.wikipedia.org/wiki/Eric%20Grimson
|
Eric Grimson
|
William Eric Leifur Grimson (born 1953) is a Canadian-born computer scientist and professor at the Massachusetts Institute of Technology, where he served as Chancellor from 2011 to 2014. An expert in computer vision, he headed MIT's Department of Electrical Engineering and Computer Science from 2005 to 2011 and currently serves as its Chancellor for Academic Advancement.
Early life and education
Grimson was born in 1953 in Estevan, Saskatchewan. His father William was the principal of Estevan Collegiate Institute, the local high school, and his mother was an eminent musician and taught piano performance and music theory. The family later moved to Regina, where he attended Campbell Collegiate and the University of Regina, graduating in 1975 with a Bachelor of Science degree in mathematics and physics with high honours. In 1980, he received his PhD in mathematics from MIT. His doctoral dissertation, "Computing Shape Using a Theory of Human Stereo Vision", was on computer vision, a field that would become the focus of his research career. An expanded version of the dissertation was published by MIT Press in 1981 as From Images to Surfaces: A Computational Study of the Human Early Vision System, which was endorsed by Tomaso Poggio and Noam Chomsky.
Academia
After completing his PhD, Grimson worked as a research scientist at the MIT Artificial Intelligence Laboratory (now CSAIL) before joining the university's faculty in 1984. He eventually rose to Bernard Gordon Chair of Medical Engineering and holds a joint appointment as a Radiology Lecturer at Harvard Medical School and Brigham and Women's Hospital. After serving as Education Officer and Associate Department Head, he was appointed Head of the Department of Electrical Engineering and Computer Science (EECS) and served from 2005 to 2011. In February 2011, he was appointed Chancellor of MIT, succeeding Phillip Clay, and took up his post the following month and served until 2014 when he was replaced by Cynthia Barnhart.
Grimson has "long prized teaching" and has taught introductory computer science courses for 25 years, in addition to advising doctoral students and teaching advanced classes. He also teaches two introductory computer science courses on edX.
In his current position as Chancellor for Academic Advancement, Grimson reports directly to MIT President L. Rafael Reif. His role is to gather faculty and student input on MIT's fundraising priorities and to communicate these priorities to donors and alumni.
Personal life
Grimson is married to Wellesley College professor Ellen Hildreth. The couple have two sons.
Honors and awards
Association for Computing Machinery Fellow (2014): For contributions to computer vision and medical image computing
Institute of Electrical and Electronics Engineers Fellow (2004): For contributions to surface reconstruction, object-recognition, image database indexing, and medical applications
Association for the Advancement of Artificial Intelligence Fellow (2000): For contributions to the theory and application of computer vision, ranging from algorithms for binocular stereo, surface interpolation, and object recognition to deployed systems for computer-assisted surgery
Selected works
2003. Object Recognition by Computer: The Role of Geometric Constraints. MIT press
Grimson, W. Eric L. (2001) "Image Guided Surgery" (abstract). Stanford University, Broad Area Colloquium For AI-Geometry-Graphics-Robotics-Vision
1989. AI in the 1980s and Beyond: An MIT Survey. (Ed. with Ramesh S. Patil) MIT press
1981. From Images to Surfaces: A Computational Study of the Human Early Visual System. MIT press
Endorsed by Tomaso Poggio and Noam Chomsky;
Dedicated to David Marr.
References
Artificial intelligence researchers
Computer vision researchers
MIT School of Engineering faculty
Living people
1953 births
|
288541
|
https://en.wikipedia.org/wiki/Calendar%20%28Apple%29
|
Calendar (Apple)
|
Calendar is a personal calendar app made by Apple Inc. that runs on both the macOS desktop operating system and the iOS mobile operating system. It offers online cloud backup of calendars using Apple's iCloud service, or can synchronize with other calendar services, including Google Calendar and Microsoft Exchange Server.
The macOS version was known as iCal before the release of OS X Mountain Lion in July 2012. Originally released as a free download for Mac OS X v10.2 on September 10, 2002, it was bundled with the operating system as iCal 1.5 with the release of Mac OS X v10.3. iCal was the first calendar application for Mac OS X to offer support for multiple calendars and the ability to intermittently publish/subscribe to calendars on WebDAV servers. Version 2 of iCal was released as part of Mac OS X v10.4, Version 3 as part of Mac OS X v10.5, Version 4 as part of Mac OS X v10.6, Version 5 as part of Mac OS X v10.7, Version 6 as part of OS X v10.8, Version 7 as part of OS X v10.9, Version 8 as part of OS X v10.10 and OS X v10.11, and version 9 as part of macOS v10.12.
Apple licensed the iCal name from Brown Bear Software, who have used it for their iCal application since 1997.
iCal's initial development was quite different from other Apple software: it was designed independently by a small French team working "secretly" in Paris, led by Jean-Marie Hullot, a friend of Steve Jobs. iCal's development has since been transferred to Apple US headquarters in Cupertino.
Features
Calendar tracks events and appointments and allows multiple calendar views (such as calendars for "home" and "work", and other calendars that a user can create) to quickly identify conflicts and free time. Users can subscribe to other calendars to keep up with friends and colleagues, as well as with things such as athletic schedules and television programs, and can set notifications for upcoming events in the Notification Center, by email, SMS, or pager. Attachments and notes can be added to iCloud Calendar items.
Calendar is integrated with iCloud, so calendars can be shared and synced with other devices, such as other Macs, iPhones, iPads, iPod touch devices, and PCs, over the Internet. Calendars can also be shared via the WebDAV protocol; Google now supports WebDAV for Google Calendar, making Calendar easy to configure with it.
Calendar includes the ability to see travel time and weather at an event's location, with the ability to set an alarm based on the travel time. Different time zones can be selected when entering and editing start and end times. This allows, for example, long-distance airplane flight times to be entered accurately, and the "end" of a visualized time "box" to render accurately on either iOS or macOS, when time zone support is turned on in Calendar and the time zone in Date & Time is set to the location in question.
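Under the hood, such events are interchanged in the iCalendar (.ics) format. A minimal hand-rolled example of a flight whose start and end are in different time zones (the UID, dates, and file name are invented, and a production file would also carry VTIMEZONE definitions):

    from pathlib import Path

    # Illustrative iCalendar event; identifiers and times are invented.
    ICS_EVENT = "\r\n".join([
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//Example//Flight//EN",
        "BEGIN:VEVENT",
        "UID:flight-example-001@example.com",
        "DTSTAMP:20240101T000000Z",
        "SUMMARY:Flight BOS to LHR",
        "DTSTART;TZID=America/New_York:20240115T190000",
        "DTEND;TZID=Europe/London:20240116T070000",
        "END:VEVENT",
        "END:VCALENDAR",
        "",
    ])

    Path("flight.ics").write_text(ICS_EVENT)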
See also
Calendar and Contacts Server
iCalendar
List of applications with iCalendar support
SyncML open standard for calendar syncing
References
External links
New Software Lets Users Manage Multiple Calendars & Share Calendars Over The Internet - Apple's July 2002 press release introducing iCal
New Application to Manage & Share Multiple Calendars Now Available for Free Download - Apple's September 2002 press release announcing availability of iCal
Calendar and Scheduling Consortium part of next version of iCal Server (Leopard)
iCal4j - iCal Java library (with usage examples)
Perl script and instructions to subscribe from iCal to a Sun Calendar Server and subsequently sync it to mobile devices through iSync
Apple iCal calendars
Geody iCal and csv calendars - Free (CC-by-sa) calendars
iCalShare - Free calendars
Calendaring software
MacOS-only software made by Apple Inc.
IOS software
Calendar
WatchOS software
IOS
|
39183
|
https://en.wikipedia.org/wiki/ReiserFS
|
ReiserFS
|
ReiserFS is a general-purpose journaling file system initially designed and implemented by a team at Namesys led by Hans Reiser and licensed under GPLv2. It is currently supported on Linux (without quota support) but will be officially removed from Linux in 2022 (the exact date and version have not yet been specified). Introduced in version 2.4.1 of the Linux kernel, it was the first journaling file system to be included in the standard kernel. ReiserFS was the default file system in Novell's SUSE Linux Enterprise until Novell decided to move to ext3 on October 12, 2006, for future releases.
Namesys considered ReiserFS version 3.6 (which introduced a new on-disk format allowing larger file sizes, and is now occasionally referred to as Reiser3) stable and feature-complete and, with the exception of security updates and critical bug fixes, ceased development on it to concentrate on its successor, Reiser4. Namesys went out of business in 2008 after Reiser's conviction for murder. The product is now maintained as open source by volunteers. reiserfsprogs 3.6.27 was released on 25 July 2017.
Features
At the time of its introduction, ReiserFS offered features that were not available in existing Linux file systems. One example is tail packing, a scheme to reduce internal fragmentation. Tail packing can, however, have a significant performance impact; Reiser4 may have improved on this by packing tails only where doing so does not hurt performance.
Design
ReiserFS stores file metadata ("stat items"), directory entries ("directory items"), inode block lists ("indirect items"), and tails of files ("direct items") in a single, combined B+ tree keyed by a universal object ID. Disk blocks allocated to nodes of the tree are "formatted internal blocks". Blocks for leaf nodes (in which items are packed end-to-end) are "formatted leaf blocks". All other blocks are "unformatted blocks" containing file contents. Directory items with too many entries or indirect items which are too long to fit into a node spill over into the right leaf neighbour. Block allocation is tracked by free space bitmaps in fixed locations.
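As a rough illustration of how one tree can hold all four item types, the sketch below models the item key as a comparable tuple in Java; the (directory ID, object ID, offset, type) shape follows common descriptions of the format, but the details here are simplified assumptions rather than the exact on-disk layout:

  // Simplified model of a ReiserFS item key; the real key is a packed
  // on-disk structure, and exact type encodings vary by format version.
  record ItemKey(long dirId, long objectId, long offset, int type)
          implements Comparable<ItemKey> {
      @Override
      public int compareTo(ItemKey o) {
          // Lexicographic ordering keeps every item belonging to one object
          // (stat item, indirect items, tail) adjacent in the single B+ tree.
          int c = Long.compare(dirId, o.dirId);
          if (c == 0) c = Long.compare(objectId, o.objectId);
          if (c == 0) c = Long.compare(offset, o.offset);
          if (c == 0) c = Integer.compare(type, o.type);
          return c;
      }
  }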
By contrast, ext2 and other Berkeley FFS-like file systems of that time simply used a fixed formula for computing inode locations, hence limiting the number of files they may contain. Most such file systems also store directories as simple lists of entries, which makes directory lookups and updates linear time operations and degrades performance on very large directories. The single B+ tree design in ReiserFS avoids both of these problems due to better scalability properties.
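For comparison, here is a minimal sketch of the fixed arithmetic an FFS-like file system uses to locate an inode; the constants are illustrative stand-ins for values a real file system reads from its superblock:

  // Locating inode N needs only arithmetic, not a tree lookup, but the
  // inode tables are sized at format time and cannot grow afterwards.
  class InodeLocator {
      static final long INODES_PER_GROUP = 8192; // assumed; read from superblock
      static final long INODE_SIZE = 128;        // bytes, classic ext2 value
      static final long BLOCK_SIZE = 4096;       // bytes, assumed

      static long blockGroup(long ino)   { return (ino - 1) / INODES_PER_GROUP; }
      static long tableIndex(long ino)   { return (ino - 1) % INODES_PER_GROUP; }
      static long blockInTable(long ino) { return tableIndex(ino) * INODE_SIZE / BLOCK_SIZE; }
  }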
Performance
Compared with ext2 and ext3 in version 2.4 of the Linux kernel, when dealing with files under 4 KiB and with tail packing enabled, ReiserFS may be faster.
Before Linux 2.6.33, ReiserFS heavily used the big kernel lock (BKL)—a global kernel-wide lock—which does not scale well for systems with multiple cores, as the critical code parts are only ever executed by one core at a time.
Usage
ReiserFS was the default file system in SuSE Linux from version 6.4 (released in 2000) until the switch to ext3 in SUSE Linux Enterprise 10.2 and openSUSE 11, announced in 2006.
Jeff Mahoney of SUSE wrote a post on 14 September 2006 proposing to move from ReiserFS to ext3 for the default installation file system. Some reasons he mentioned were scalability, "performance problems with extended attributes and ACLs", "a small and shrinking development community", and that "Reiser4 is not an incremental update and requires a reformat, which is unreasonable for most people." On October 4 he wrote a response comment on a blog in order to clear up some issues. He wrote that his proposal for the switch was unrelated to Hans Reiser being on trial for murder. Mahoney wrote that he "was concerned that people would make a connection where none existed" and that "the timing is entirely coincidental and the motivation is unrelated."
Criticism
Some directory operations (including unlink(2)) are not synchronous on ReiserFS, which can result in data corruption with applications that rely heavily on file-based locks (such as the mail transfer agents qmail and Postfix) if the machine halts before it has synchronized the disk.
There are no programs that specifically defragment a ReiserFS file system, although tools have been written to automatically copy the contents of fragmented files in the hope that more contiguous blocks of free space can be found. A "repacker" tool was planned for the successor Reiser4 file system to deal with file fragmentation. With the rise of solid-state drives, this problem has become largely irrelevant.
fsck
The tree rebuild process of ReiserFS's fsck has attracted much criticism from the *nix community: if the file system becomes so badly corrupted that its internal tree is unusable, performing a tree rebuild operation may further corrupt existing files or introduce new entries with unexpected contents. However, this action is not part of normal operation or of a normal file system check, and it has to be explicitly initiated and confirmed by the administrator.
ReiserFS v3 images (e.g., backups or disk images for emulators) should not be stored on a ReiserFS v3 partition without first transforming them (e.g., by compressing or encrypting them), in order to avoid confusing the rebuild. Reformatting an existing ReiserFS v3 partition can likewise leave behind data that could confuse the rebuild operation and make files from the old file system reappear. This also allows malicious users to intentionally store files that will confuse the rebuilder. As the metadata is always in a consistent state after a file system check, corruption here means that the contents of files are merged in unexpected ways with the contained file system's metadata. The ReiserFS successor, Reiser4, fixes this problem.
Earlier issues
ReiserFS in versions of the Linux kernel before 2.4.16 was considered unstable by Namesys and not recommended for production use, especially in conjunction with NFS.
Early implementations of ReiserFS (prior to that in Linux 2.6.2) were also susceptible to out-of-order write hazards, but the current journaling implementation in ReiserFS is on par with ext3's "ordered" journaling level.
See also
List of file systems
Comparison of file systems
References
External links
ReiserFS 3.6 at Linus Torvalds' Git repository – nowadays (2019) the main development resource of ReiserFS 3
ReiserFS and Reiser4 wiki
Reiserfsprogs
convertfs, a utility which performs in-place conversion between any two file systems with sparse file support
Gentoo Forum Link – Discussion on ReiserFS fragmentation, including a script for measuring fragmentation and defragmenting files
Windows utilities to access ReiserFS: YAReG – Yet Another R(eiser)FStool GUI, rfsd – ReiserDriver.
2001 software
Disk file systems
File systems supported by the Linux kernel
|
53147583
|
https://en.wikipedia.org/wiki/Art%20of%20Illusion
|
Art of Illusion
|
Art of Illusion is a free and open-source software package for making 3D graphics.
It provides tools for 3D modeling, texture mapping, and rendering still images and animations. Art of Illusion can also export models for 3D printing in the STL file format.
Overview
Art of Illusion is 3D graphics software, comparable to Blender and Wings 3D (both free software) and to Autodesk 3ds Max and Autodesk Maya (both proprietary software).
Although some sources seem to confuse 3D modeling with computer-aided design (CAD), Art of Illusion does not provide any CAD-like features, such as parametric modeling.
Some user reviews describe Art of Illusion as 'intuitive', 'straight forward to learn', and a 'good candidate for the first 3D modelling tool', while others characterize it as 'software for experienced CAD users' or as taking plenty of time to figure out. For its capabilities it has been described as 'powerful, comprehensive and extensible'.
Art of Illusion is written entirely in Java.
History
Development of the software was started in 1999 by Peter Eastman. Eastman was the lead developer until 2016, when, at his request, Lucas Stanek took over hosting the development while Eastman assumed a more supervisory role. Stanek moved development from SourceForge to GitHub, and the SourceForge site now serves as the software's discussion forum and delivery channel.
Since 1999 there have been over 40 releases of the software. The latest stable version, 3.2.0, was released on January 13, 2021.
Features
General buildup and the core software
Art of Illusion consists of the core software and various feature extensions, which come as plugins and scripts.
The core software package contains basic modelling, texturing, animation, and rendering tools. Scripts are used either to create and edit objects or to modify the behavior of the software. Plugins can add features such as tools and object types, or alter the user interface. Some core features, such as the renderers, are themselves implemented as plugins to facilitate maintenance.
Object types and modeling
Art of Illusion provides several types of objects with their specific editing tools for modeling: Primitives (cube, sphere, cylinder), Curve, Tube, Spline mesh, Triangle mesh, Polygon mesh (plugin), Implicit object (plugin), Hologram (plugin).
Animation
All 3D objects can be animated by changing their position and orientation. In addition, the properties of each object can be animated, and procedural textures and materials can have animated features. Mesh objects can be rigged with a skeleton that can be used to control shape changes. With skeletons it is possible to save predefined gestures, which can be combined as poses to generate complex, repeatable movements. Animation data for each object is stored in animation tracks as key frames.
Rendering
Art of Illusion uses multithreading for rendering images and provides several options for lighting. The core software package comes with two built-in renderers:
The Ray Tracer renderer provides anti-aliasing, soft shadows, depth of field, transparent background, photon mapping caustics and subsurface scattering.
The Raster renderer provides a few options for shading methods and supersampling.
Feature-extensions
Scripting
Art of Illusion supports two scripting languages, BeanShell and Groovy, and comes with a basic text editor for writing, editing, and running scripts. There are three types of scripts, each for a specific purpose: Tool scripts, Scripted objects, and Start-up scripts.
Tool scripts operate at the same level as the commandSelected() function of a modeling tool. This means that with only minor changes the code from a script could be placed into a more permanent plugin, or the code from a plugin could be pulled out into a script to allow for changing the code within the environment.
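A tool script in this style might look like the following BeanShell (Java-syntax) sketch; the window variable and the Scene/ObjectInfo accessors are assumptions standing in for Art of Illusion's actual scripting bindings:

  // Hypothetical tool script: print the name of every object in the scene.
  // `window` (the current editing window) and the accessors called on it
  // are assumed bindings, not verified API.
  scene = window.getScene();
  for (int i = 0; i < scene.getNumObjects(); i++) {
      info = scene.getObject(i);   // per-object wrapper (assumed)
      print(info.getName());       // BeanShell's built-in print command
  }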
Plugins
Art of Illusion provides a programming interface for plugins. The code for a plugin is written in Java, like the core software, and is combined with an extensions.xml file that describes what the plugin does and, most importantly, which class implements it. In some cases the XML file specifies methods that are exported for use by other plugins, or specifies plugins that are imported for use by the plugin. Tags used in the extensions.xml file are Author, Date, Plugin, Export, Import, Description, Comments, History, and Resource. The compiled .jar files are added to the Plugins folder in the Art of Illusion root directory and take effect at the next start-up.
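A minimal manifest using the tags listed above might look like the following sketch; the root element, attribute names, and class name are assumptions for illustration only:

  <extension name="ExampleTool" version="1.0">
    <author>Jane Developer</author>
    <date>2023-01-01</date>
    <plugin class="example.ExampleToolPlugin"/>
    <description>Adds an example command to the Tools menu.</description>
  </extension>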
A large number of plugins, available from the scripts and plugins repository, have been developed for Art of Illusion. These include object types with their accompanying editors, user interface enhancements, and various tools, among them the Scripts and Plugins Manager, which is used to download and update extensions.
The types of plugins that can be created for Art of Illusion are Plugin, Renderer, Translator, ModellingTool, Texture, Material, TextureMapping, MaterialMapping, ImageFilter, and Module:
Plugin — A general plugin type used for all plugins that don't fit one of the other categories.
Renderer — Methods used to render a scene, such as a special ray tracer.
Translator — Used for importing or exporting a scene to another file format.
ModellingTool — For tools that appear on the tools menu. They usually manipulate objects in the scene.
Texture — Defines a texture that is applied to an object.
Material — Defines a material that is applied to an object.
TextureMapping — Describes how a texture is mapped to an object.
MaterialMapping — Describes how a material is mapped to an object.
ImageFilter — Used for post-processing of a rendered image.
Module — Used for user defined 2D and 3D textures.
Cloth Simulation
A cloth simulator does not come with the basic install package, but the capability is available as a plugin. The second edition of Extending Art of Illusion includes the ClothMaker plugin as one of the examples in the book; the author classifies the cloth simulator as "beta" and describes a number of problems with the tool. The ClothMaker plugin makes use of the Distortion class. The user selects an object in the scene to convert to a cloth, then selects the command to tell Art of Illusion to generate the cloth simulation. An editor window is provided for the user to choose various settings. When the user clicks OK, the tool spends several minutes generating the many frames of the simulation. Once the window closes, the user can play the simulation using the animation score.
Procedural editor
Procedurally controlled options are available for textures, materials, movements, lights, and even some objects. Procedural editors provide a graphical interface in which input values, library patterns, and mathematical expressions can be combined to create the desired output values.
Audio
Art of Illusion does not have any sound/audio processing capabilities. Audio is not mentioned in the documentation.
File formats and interoperability
Art of Illusion scene files are saved in the program's own format, marked by the extension ".aoi". The core package contains a built-in import function for Wavefront (.obj) files and export functions for Wavefront (.obj), POV-Ray 3.5 (.pov), and VRML (.wrl). Additional translators are available as plugins.
Language support
The user interface of the core software has been translated to 14 languages. Plugins may not have complete sets of translations available.
System requirements
Art of Illusion 3.2.0 runs on Java Virtual Machine (JVM) versions 8 or later. Installation packages are available for macOS, Windows, and Linux, and there is a generic zip package for other systems or for cases where a self-extracting package cannot be used. OpenGL acceleration is available for interactive rendering.
Absolute minimum hardware requirements or recommendations have not been published. By default, Art of Illusion allocates 16 GB of memory for the JVM; this can be changed by launching Java from the command line. Art of Illusion is capable of multithreading and can therefore utilize multicore processors when rendering images.
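For example, a user with less RAM might start the program with an explicit heap limit using the JVM's standard -Xmx flag; the jar file name below is an assumption, as it varies by package:

  java -Xmx4g -jar ArtOfIllusion.jar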
Art of Illusion is designed to be fully functional with either a single-button mouse or a three-button mouse with a scroll wheel. A keyboard with a number pad is recommended, as some of the keyboard shortcuts are assigned to the number keys.
An interface for a 3D controller, such as one of the 3Dconnexion devices, is available as a plugin.
References
Further reading
External links
3D graphics software
3D animation software
Global illumination software
Cross-platform free software
Free 3D graphics software
Free software programmed in Java (programming language)
Portable software
|
81756
|
https://en.wikipedia.org/wiki/Rochester%20Institute%20of%20Technology
|
Rochester Institute of Technology
|
Rochester Institute of Technology (RIT) is a private research university in the town of Henrietta in the Rochester, New York metropolitan area. The university offers undergraduate and graduate degrees, including doctoral and professional degrees as well as online master's degrees.
The university was founded in 1829 and is the tenth-largest private university in the United States in terms of full-time students. It is internationally known for its science, computer, engineering, and art programs, as well as for the National Technical Institute for the Deaf, a leading deaf-education institution that provides educational opportunities to more than 1,000 deaf and hard-of-hearing students. RIT is known for its co-op program, which gives students professional and industrial experience; it is the fourth-oldest and one of the largest co-op programs in the world. RIT is classified among "R2: Doctoral Universities – High research activity".
RIT's student population is approximately 19,000 students, about 16,000 undergraduate and 3,000 graduate. Demographically, students attend from all 50 states in the United States and from more than 100 countries around the world. The university has more than 4,000 active faculty and staff members who engage with students in a wide range of academic activities and research projects. It also has branches abroad: global campuses located in China, Croatia, and the United Arab Emirates (Dubai).
Eleven RIT alumni and faculty members have been recipients of the Pulitzer Prize, winning a total of 15 prizes.
History
The university began as a result of an 1891 merger between the Rochester Athenæum, a struggling literary society founded in 1829 by Colonel Nathaniel Rochester and associates, and the Mechanics Institute, a Rochester school of practical technical training for local residents founded in 1885 by a consortium of local businessmen including Captain Henry Lomb, co-founder of Bausch & Lomb. The merged institution was called the Rochester Athenæum and Mechanics Institute (RAMI). The Mechanics Institute was considered the surviving school and took over the Rochester Athenæum's 1829 founding charter. From the time of the merger until 1944, many of its students, administrators, and faculty alike celebrated not only the former Mechanics Institute's 1885 founding charter but its former name as well. In 1944, the school changed its name to Rochester Institute of Technology, re-established the Athenæum's 1829 founding charter, and became a full-fledged research university.
The university originally resided within the city of Rochester, New York, proper, on a block bounded by the Erie Canal, South Plymouth Avenue, Spring Street, and South Washington Street. Its art department was originally located in the Bevier Memorial Building. By the middle of the twentieth century, RIT began to outgrow its facilities, and surrounding land was scarce and expensive; additionally, in 1959, the New York Department of Public Works announced that a new freeway, the Inner Loop, was to be built through the city along a path that bisected the university's campus and required demolition of key university buildings. In 1961, an unanticipated donation of $3.27 million from local resident Grace Watson, for whom RIT's dining hall was later named, allowed the university to purchase land for a new campus several miles south along the east bank of the Genesee River in suburban Henrietta. Upon completion in 1968, the university moved to the new suburban campus, where it resides today.
In 1966, RIT was selected by the Federal government to be the site of the newly founded National Technical Institute for the Deaf (NTID). NTID admitted its first students in 1968, concurrent with RIT's transition to the Henrietta campus.
In 1979, RIT took over Eisenhower College, a liberal arts college located in Seneca Falls, New York. Despite making a 5-year commitment to keep Eisenhower open, RIT announced in July 1982 that the college would close immediately. One final year of operation by Eisenhower's academic program took place in the 1982–83 school year on the Henrietta campus. The final Eisenhower graduation took place in May 1983 back in Seneca Falls.
In 1990, RIT started its first PhD program, in Imaging Science – the first PhD program of its kind in the U.S. RIT subsequently established PhD programs in six other fields: Astrophysical Sciences and Technology, Computing and Information Sciences, Color Science, Microsystems Engineering, Sustainability, and Engineering.
In 1996, RIT became the first college in the U.S. to offer a Software Engineering degree at the undergraduate level.
Campus
The main campus occupies a large suburban property, much of it covered with woodland and fresh-water swamp, making it a very diverse wetland that is home to a number of somewhat rare plant species. The campus comprises 237 buildings and 5.1 million square feet (474,000 m2) of building space. The nearly universal use of bricks in the campus's construction – estimated at 15,194,656 bricks as of July 27, 2010 – prompted students to give it the semi-affectionate nickname "Brick City," reflected in the name of events such as the annual "Brick City Homecoming." Though the buildings erected in the first few decades of the campus's existence reflected the architectural style known as brutalism, the warm color of the bricks softened the impact somewhat. More recent additions to the campus have diversified the architecture while still incorporating the traditional brick colors. The main campus was listed as a census-designated place in 2020.
In 2009, the campus was named a "Campus Sustainability Leader" by the Sustainable Endowments Institute.
The residence halls and the academic side of campus are connected by a walkway called the "Quarter Mile." Along the Quarter Mile, between the academic and residence hall sides, are various administration and support buildings. On the academic side of the walkway is a courtyard known as the Infinity Quad, after a striking polished stainless steel sculpture (by José de Rivera, 1968, 19'×8'×2') of a continuous ribbon-like Möbius strip at its center; the sculpture is commonly referred to as the infinity loop because, when the sun hits the strip at a certain angle, it casts a shadow in the shape of an infinity symbol on the ground. On the residence hall side are a sundial and a clock. Together these symbols represent time to infinity. Measured between the Möbius sculpture and the sundial, the walkway is not exactly a quarter mile long; the name predates a Sigma Pi Fraternity fundraiser called Quarter the Quarter-Mile, in which donated quarters were lined up from the sundial to the Infinity Sculpture. Standing near the Administration Building and the Student Alumni Union is The Sentinel, a steel structure created by the acclaimed metal sculptor Albert Paley. Reaching 70 feet (21 m) high and weighing 110 tons, the sculpture is the largest on any American university campus. There are four RIT-owned apartment complexes: Global Village, Perkins Green, Riverknoll and University Commons.
Along the Quarter Mile is the Gordon Field House, a two-story athletic center. Opened in 2004 and named in honor of Lucius "Bob" Gordon and his wife Marie, the Field House hosts numerous campus and community activities, including concerts, career fairs, athletic competitions, graduations, and other functions. Other facilities between the residence halls and academic buildings include the Hale-Andrews Student Life Center, Student Alumni Union, Ingle Auditorium, Clark Gymnasium, Frank Ritter Memorial Ice Arena, and the Schmitt Interfaith Center.
The Red Barn at the west end of the campus is the site of RIT's Interactive Adventures program.
Park Point at RIT (originally referred to as "College Town") is a multi-use residential and commercial development on the northeast corner of the campus. Park Point is accessible from the rest of the RIT campus through a regular bus service loop, numerous pedestrian paths connecting it to the RIT Main Loop, and main roads. Although originally intended as added student housing, financial penalties resulting from developing on swampland led RIT to lease Park Point to Wilmorite for a period of twenty years, allowing the property to be developed without the university incurring additional fees.
Art on Campus
The RIT Art Collection, part of the RIT Archive Collections at RIT Libraries, comprises thousands of works, including hundreds by RIT faculty, students, and alumni. The collection grows every year through the Purchase Prize Program, which enables the university to purchase select art works from students in the School of Art and Design, the School for American Crafts, and the School of Photographic Arts and Sciences.
Many pieces from the collection are on public display around campus, including:
Sentinel – a 73-foot-tall sculpture created by the acclaimed metal sculptor, Albert Paley, located on Administration Circle.
Growth and Youth – a set of two murals by Josef Albers located in the lobby of the George Eastman Building.
Principia – a mural by Larry Kirkland that is etched into the black granite floor of the atrium in the College of Science (Gosnell Hall). The work features illustrations, symbols, formulae, quotes, and images representing milestones in the history of science.
Three Piece Reclining Figure No. 1 – a bronze sculpture by English artist Henry Moore located in Eastman Kodak Quad.
Grand Hieroglyph – a 24-foot-long tapestry by Sheila Hicks located in the George Eastman Building.
Sundial – a sculpture by Alistair Bevington located on the Residence Quad.
The Monument to Ephemeral Facts – a mixed media sculpture by Douglas Holleley located in Wallace Library.
Unity – a 24-foot-tall stainless steel sculpture sited between the College of Art and Design, the College of Engineering Technology, and the College of Engineering.
Organization and administration
As of 2017, the president is David C. Munson Jr., formerly the dean of engineering at the University of Michigan. Munson, the university's tenth president, took office on July 1, 2017, replacing William W. Destler, who retired after 10 years at RIT. Ellen Granberg, formerly senior associate provost at Clemson University, was named provost in July 2018. She is the first woman to serve as provost at RIT.
The school is also a member of the Association of Independent Technological Universities.
Colleges
RIT has nine colleges:
College of Art and Design
Saunders College of Business
Golisano College of Computing and Information Sciences
Kate Gleason College of Engineering
College of Engineering Technology
College of Health Sciences and Technology
College of Liberal Arts
National Technical Institute for the Deaf
College of Science
There are also two smaller academic units that grant RIT degrees but do not have full college faculties:
Golisano Institute for Sustainability
School of Individualized Study
In addition to these colleges, RIT operates three branch campuses in Europe, one in the Middle East and one in East Asia:
RIT Croatia (formerly the American College of Management and Technology) in Dubrovnik and Zagreb, Croatia
RIT Kosovo (formerly the American University in Kosovo) in Pristina, Kosovo
RIT Dubai in Dubai, United Arab Emirates
RIT China-Weihai Campus
RIT also has international partnerships with the following schools:
Yeditepe University in Istanbul, Turkey
Birla Institute of Technology and Science in India
Pontificia Universidad Catolica Madre y Maestra (PUCMM) in Dominican Republic
Instituto Tecnológico de Santo Domingo (INTEC) in Dominican Republic
Universidad Tecnologica Centro-Americana (UNITEC) in Honduras
Universidad del Norte (UNINORTE) in Colombia
Universidad Peruana de Ciencias Aplicadas (UPC) in Peru
Academics
RIT is known for its career-focused education. The university is chartered by the New York state legislature and accredited by the Middle States Association of Colleges and Schools. The university offers more than 200 academic programs, including seven doctoral programs, across its nine constituent colleges. In 2008–2009, RIT awarded 2,483 bachelor's degrees, 912 master's degrees, 10 doctorates, and 523 other certificates and diplomas.
The four-year, full-time undergraduate program constitutes the majority of enrollments at the university and emphasizes instruction in the "arts & sciences/professions." RIT is a member of the Rochester Area College consortium, which allows students to register at other colleges in the Rochester metropolitan area without tuition charges. RIT's full-time undergraduate and graduate programs used to operate on an approximately 10-week quarter system with the primary three academic quarters beginning on Labor Day in early September and ending in late May. In August 2013, RIT transitioned from a quarter system to a semester system. The change was hotly debated on campus, with a majority of students opposed according to an informal survey; Student Government also voted against the change.
Undergraduate tuition and fees for 2012–2013 totaled $45,602. RIT undergraduates receive over $200 million in financial assistance, and over 90% of students receive some form of financial aid. 3,210 students qualified for Pell Grants in 2007–2008.
Among the eight colleges, 6.8% of the student body is enrolled in the E. Philip Saunders College of Business, 15.0% in the Kate Gleason College of Engineering, 4.3% in the College of Liberal Arts, 25.4% in the College of Applied Science and Technology, 18.0% in the B. Thomas Golisano College of Computing and Information Sciences, 13.9% in the College of Imaging Arts and Science, 5.7% in the National Technical Institute for the Deaf, and 9.2% in the College of Science. The five most commonly awarded degrees are in Business Administration, Engineering Technology, School of Photographic Arts & Sciences, School of Art and Design, and Information Technology.
RIT has struggled with student retention, although the situation has improved during President Destler's tenure. 91.3% of freshmen in the fall of 2009 registered for fall 2010 classes, which Destler noted as a school record.
Student body
RIT enrolled 13,711 undergraduate (9,190 male, 4,466 female, and 55 unknown) and 3,131 graduate students in fall 2015. There were 11,226 males and 5,537 females overall, a ratio of just over two (2.03) males per female. Admissions are characterized as "more selective, higher transfer-in" by the Carnegie Foundation. RIT received 12,725 applications for undergraduate admission in fall 2008; 60% were admitted, 34% of those enrolled, and 84% of students re-matriculated as second-year students. The interquartile range on the SAT was 1630–1910. 26% of students graduated after four years and 64% after six years. As of 2013, the 25th–75th percentile SAT scores are 540–650 Critical Reading, 570–680 Math, and 520–630 Writing, for a composite range of 1630–1960.
Notable academic programs
The Imaging science department was the first at the university to offer a doctoral program, in 1989, and remains the only formal program in Imaging Science in the nation (as a multidisciplinary field—separate constituent fields of physics, optics, and computer science are common in higher education). Associations exist between the department and Rochester-area imagery and optics companies such as Xerox, Kodak, and the ITT Corporation. Such connections have reinforced the research portfolio, expertise, and graduate reputation of the imaging researchers and staff of the department. As of 2008, imaging-related research has the largest budget at the university from grants and independent research.
The Microelectronic Engineering program, created in 1982 and the only ABET-accredited undergraduate program in the country, was the nation's first Bachelor of Science program specializing in the fabrication of semiconductor devices and integrated circuits. The information technology program was the first nationally recognized IT degree, created in 1993.
In 1996, Rochester Institute of Technology established the first software engineering bachelor's degree program in the United States but did not obtain ABET accreditation until 2003, the same time as Clarkson University, Milwaukee School of Engineering and Mississippi State University.
Starting in 2000, RIT began admitting students at the top of its application pools into the RIT Honors Program. Each college participates voluntarily in the program and defines its own program details. As an example, the College of Engineering focuses on engineering in a global economy and uses much of the honors budget to pay for domestic and international trips for engineering students. In contrast, the College of Science is focused on expanding research, and provides most of its budget to student research endeavors. Students admitted to the program are given a small scholarship and have the opportunity to live in the honors residence hall.
In 2019, the video game design program at RIT, one of two majors offered by School of Interactive Games and Media in the B. Thomas Golisano College of Computing and Information Sciences (GCCIS), was recognized by The Princeton Review as one of the top 10 programs in the country for video game design, with the undergraduate program ranking eighth, and the master's degree graduate program ranking seventh.
RIT is the first and only school in the United States to offer an undergraduate minor in Free and Open Source Software and Free Culture.
Rankings
In 2017, RIT was ranked No. 97 (tie) in the National Universities category by U.S. News & World Report.
Business Insider ranked RIT No. 14 in Northeast and No. 36 in the country for Computer Science.
RIT was ranked among the top 50 national universities in a national survey of "High School Counselors Top College Picks".
RIT's Saunders College of Business ranked No. 26 in the United States for "Best Online MBA Programs" for the online executive MBA program by U.S. News & World Report. Times Higher Education/The Wall Street Journal ranked the MBA program at Saunders College of Business No. 54 among business colleges and universities around the world for the year 2019.
RIT was ranked among the top 20 universities recognized for excellent co-operative learning and internship programs. It was further placed at No. 24 in the top 30 universities for Computer Science with the best Returns on Investment (ROI) in the US.
College Factual, the ranking data provider for USA Today College Guide 2019, ranked RIT in various academic areas as follows:
No. 1 in Computer Software and Applications
No. 3 in Computer and Information Sciences
No. 3 in Computer Science in the state of New York
No. 29 in Computer Science overall
No. 14 in Computer Engineering
No. 62 in Electrical Engineering
No. 34 in Industrial Engineering
No. 6 in Management Information Systems
No. 31 in Applied Mathematics
No. 13 in Design and Applied Arts
No. 16 in Film, Video and Photographic Arts
No. 76 in Visual and Performing Arts
No. 31 in Hospitality Management
No. 43 in Criminal Justice
No. 53 for "Best School for Veterans"
No. 79 in Engineering
The Princeton Review ranked RIT No. 8 nationally for "Top Schools for Video Game Design for 2019" in undergraduate programs and No. 7 in graduate programs. Among the top 75 universities for Video Game Design in the US, RIT was ranked No. 4.
Co-op program
RIT's co-op program, which began in 1912, is the fourth-oldest in the world. It is also the fifth-largest in the nation, with approximately 3,500 students completing a co-op each year at over 2,000 businesses. The program requires (or allows, depending on major) students to work in the workplace for up to five quarters, alternating with quarters of class. The amount of co-op varies by major, usually between 3 and 5 three-month "blocks" or academic quarters. Many employers prefer students to co-op for two consecutive blocks, referred to as a "double-block co-op". During a co-op, the student is not required to pay tuition to the school and is still considered a "full time" student. In addition, RIT was listed by U.S. News & World Report as one of only 12 colleges nationally recognized for excellence in the internships/co-ops category and has secured this ranking, which is based on nominations from college presidents, chief academic officers and deans, for four years in a row since U.S. News began the category in 2002. Additionally, according to the most recent PayScale College Salary Report, the median starting salary for a recent RIT graduate is $51,000, making it among the highest of all Rochester-area institutions.
Library and special collections
RIT Libraries house renowned special collections that enhance teaching, learning, and research in many of RIT's academic programs. The Cary Graphic Arts Collection contains books, manuscripts, printing type specimens, letterpress printing equipment, documents, and other artifacts related to the history of graphic communication. RIT Archives document more than 180 years of the university's history, and students in the Museum Studies program frequently work with these artifacts and help create exhibitions. The RIT/NTID Deaf Studies Archive preserves and illustrates the history, art, culture, technology, and language of the Deaf community. The RIT Art Collection contains thousands of works showcasing RIT's visual arts curriculum.
Vignelli Center for Design Studies
The Vignelli Center for Design Studies was established in 2010 and houses the archives of Italian designers Massimo and Lella Vignelli. The center is a hub for design education, scholarship and research.
Research
RIT's research programs are rapidly expanding. The total value of research grants to university faculty for fiscal year 2007–2008 totaled $48.5 million, an increase of more than twenty-two percent over the grants from the previous year. The university currently offers eight PhD programs: Imaging science, Microsystems Engineering, Computing and Information Sciences, Color science, Astrophysical Sciences and Technology, Sustainability, Engineering, and Mathematical modeling.
In 1986, RIT founded the Chester F. Carlson Center for Imaging Science, and started its first doctoral program in Imaging Science in 1989. The Imaging Science department also offers the only bachelor's (BS) and master's (MS) degree programs in imaging science in the country. The Carlson Center features a diverse research portfolio; its major research areas include Digital Image Restoration, Remote Sensing, Magnetic Resonance Imaging, Printing Systems Research, Color Science, Nanoimaging, Imaging Detectors, Astronomical Imaging, Visual Perception, and Ultrasonic Imaging.
The Center for Microelectronic and Computer Engineering was founded by RIT in 1986. RIT was the first university to offer a bachelor's degree in Microelectronic Engineering. The center's facilities include 50,000 square feet (4,600 m2) of building space with 10,000 square feet (930 m2) of clean room space; the building will undergo an expansion later this year. Its research programs include nano-imaging, nano-lithography, nano-power, micro-optical devices, photonics subsystems integration, high-fidelity modeling and heterogeneous simulation, microelectronic manufacturing, microsystems integration, and micro-optical networks for computational applications.
The Center for Advancing the Study of CyberInfrastructure (CASCI) is a multidisciplinary center housed in the College of Computing and Information Sciences. The Departments of Computer science, Software Engineering, Information technology, Computer engineering, Imaging Science, and Bioinformatics collaborate in a variety of research programs at this center. RIT was the first university to launch a Bachelor's program in Information technology in 1991, the first university to launch a Bachelor's program in Software Engineering in 1996, and was also among the first universities to launch a Computer science Bachelor's program in 1972. RIT helped standardize the Forth programming language, and developed the CLAWS software package.
The Center for Computational Relativity and Gravitation was founded in 2007. The CCRG comprises faculty and postdoctoral research associates working in the areas of general relativity, gravitational waves, and galactic dynamics. Computing facilities in the CCRG include gravitySimulator, a novel 32-node supercomputer that uses special-purpose hardware to achieve speeds of 4 TFlops in gravitational N-body calculations, and newHorizons, a state-of-the-art 85-node Linux cluster for numerical relativity simulations.
The Center for Detectors was founded in 2010. The CfD designs, develops, and implements new advanced sensor technologies through collaboration with academic researchers, industry engineers, government scientists, and university/college students. The CfD operates four laboratories and has approximately a dozen funded projects to advance detectors in a broad array of applications, e.g. astrophysics, biomedical imaging, Earth system science, and inter-planetary travel. Center members span eight departments and four colleges.
RIT has also collaborated with many industry players in research, including IBM, Xerox, Rochester's Democrat and Chronicle, Siemens, NASA, and the Defense Advanced Research Projects Agency (DARPA). In 2005, Russell W. Bessette, Executive Director of the New York State Office of Science Technology & Academic Research (NYSTAR), announced that RIT would lead the University at Buffalo and Alfred University in an initiative to create key technologies in microsystems, photonics, nanomaterials, and remote sensing systems and to integrate next-generation IT systems. In addition, the collaboratory is tasked with helping to facilitate economic development and tech transfer in New York State. More than 35 other notable organizations have joined the collaboratory, including Boeing, Eastman Kodak, IBM, Intel, SEMATECH, ITT, Motorola, Xerox, and several federal agencies, including NASA.
RIT has emerged as a national leader in manufacturing research. In 2017, the U.S. Department of Energy selected RIT to lead its Reducing Embodied-Energy and Decreasing Emissions (REMADE) Institute aimed at forging new clean energy measures through the Manufacturing USA initiative. RIT also participates in five other Manufacturing USA research institutes.
In February 2022, James Hammer donated $1 million to establish the packaging and graphics media center at the university. Hammer is the retired CEO of Hammer Packaging, and the gift will be used to integrate print and packaging technologies, researching new processes, materials, and sustainability initiatives.
Athletics
RIT has 24 men's and women's varsity teams including Men's Intercollegiate Baseball, Basketball, Crew, Cross Country, Ice Hockey, Lacrosse, Soccer, Swimming & Diving, Tennis, Track & Field and Wrestling along with Women's Intercollegiate Basketball, Cheerleading, Crew, Cross Country, Ice Hockey, Lacrosse, Soccer, Softball, Swimming & Diving, Tennis, Track & Field, and Volleyball.
RIT was a long-time member of the Empire 8, an NCAA Division III athletic conference, but moved to the Liberty League beginning with the 2011–2012 academic year. All of RIT's teams compete at the Division III level, with the exception of the men's and women's ice hockey programs, which play at the Division I level. In 2010, the men's ice hockey team was the first ever from the Atlantic Hockey conference to reach the NCAA tournament semi-finals: The Frozen Four.
In 2011–2012, the RIT women's ice hockey team had a regular season record of 28–1–1, and won the NCAA Division III national championship, defeating the defending champion Norwich University 4–1. The women's team had carried a record of 54–3–3 over their past two regular seasons leading up to that point. The women's hockey team then moved from Division III to Division I. Starting in the 2012–2013 season, the women's team played in the College Hockey America conference. In 2014–2015, the team became eligible for NCAA Division I postseason play.
Additionally, RIT has a wide variety of club, intramural, and pick-up sports and teams to provide a less-competitive recreational option to students.
RIT's Alpine Ski Club competes at United States Collegiate Ski & Snowboard Association (USCSA), which uses NCAA II competition and academic standards. The varsity Alpine Ski Team competes at the USCSA Mid East Region.
Tom Coughlin, coach of the NFL's 2008 and 2012 Super Bowl champion New York Giants, taught physical education and was the head coach of the RIT men's varsity football team for four seasons in the early 1970s. The position, during which he oversaw RIT football's transition from a club sport to an NCAA Division III team, was the first head coaching job of Coughlin's career; he has called his time at RIT "a great experience."
RIT's hockey teams had played at Frank Ritter Memorial Ice Arena on campus since 1968. In 2010, RIT began raising money for a new arena, and in 2011, B. Thomas Golisano and the Polisseni Foundation donated $4.5 million for the new arena, which came to be named the Gene Polisseni Center. The new 4,300-seat arena was completed in 2014, and the men's and women's teams moved into the new facility in September for the 2014–2015 season.
Mascot
RIT's athletics nickname is the "Tigers," a name adopted following the undefeated men's basketball season of 1955–56. Prior to that, RIT's athletic teams were called the "Techmen" and had blue and silver as the sports colors. In 1963, RIT students raised funds through "Tigershares" to buy a rescued Bengal tiger cub that became the university's mascot, named SpiRIT, which stands for Student Pride in RIT. Students were trained as the tiger cub's handlers and took him to most sporting events until 1964, when it was discovered that the cub was ill; he was eventually put down due to these health complications. The original tiger's pelt now resides in the RIT Archive Collections at RIT Libraries. RIT helped the Seneca Park Zoo purchase a new tiger shortly after SpiRIT's death, but it was not used as a school mascot. A bronze sculpture by D.H.S. Wehle in the center of the Henrietta campus now provides an everlasting version of the mascot.
RIT's team mascot is a version of this Bengal tiger named RITchie. The name was the winning entry, submitted in 1989 by alumnus Richard P. Mislan during a College Activities Board "Name the RIT Tiger" contest. After it was announced that the RIT men's hockey team was moving from Division III to Division I in 2005, RITchie was redesigned and made his debut in the fall of 2006.
Student life
In addition to its academic and athletic endeavors, RIT has over 150 student clubs, 10 major student organizations, a diverse interfaith center and 30 different Greek organizations.
Reporter magazine, founded in 1951, is the university's primary student-run magazine. RIT also has its own ambulance corps, bi-weekly television athletics program RIT SportsZone, pep band, radio station, and tech crew.
The university's Gordon Field House and Activities Center is home to competitive and recreational athletics and aquatics, a fitness center, and an auditorium hosting frequent concerts and other entertainment. Its opening in late 2004 was inaugurated by concerts by performers including Kanye West and Bob Dylan. It is the second-largest venue in Monroe County.
Deaf and hard-of-hearing students
One of RIT's unique features is the large presence of deaf and hard-of-hearing students, who make up 8.8% of the student body. The National Technical Institute for the Deaf, one of RIT's nine colleges, provides interpreting and captioning services to students for classes and events. Lectures for many courses at RIT are interpreted into American Sign Language or captioned in real time for the benefit of hard-of-hearing and deaf students. There are several deaf and hard-of-hearing professors and lecturers, too; an interpreter can vocalize their lectures for hearing students. This significant portion of the RIT population adds another dynamic to the school's diversity, and it has contributed to Rochester's high number of deaf residents per capita. In 2006, Lizzie Sorkin made RIT history when she became the first deaf RIT Student Government President. In 2010, Greg Pollock became the second deaf RIT Student Government President. In 2018, Robert "Bobby" Moakley became the third deaf RIT Student Government President.
Explore Your Future
Explore Your Future (EYF) is a six-day career exploration program at Rochester Institute of Technology for college-bound deaf and hard-of-hearing high school students who will begin their junior or senior year.
Fraternities and sororities
RIT's campus is host to thirty fraternities and sororities (eighteen fraternities and twelve sororities), that make up 6.5% of the total RIT population. RIT and Phi Kappa Psi alumni built six large buildings for Greek students on the academic side of campus next to the Riverknoll apartments. In addition to these six houses, there is also limited space within the residence halls for another six chapters.
Interfraternity Council
The Interfraternity Council (IFC) provides outlets for social interaction among the fraternity and sorority members. The IFC helps to sponsor educational opportunities for all of its members and to help to promote the fraternal ideals of leadership, scholarship, service, community and brotherhood. There are currently eleven chapters that are part of the IFC at RIT.
Panhellenic Council
The Panhellenic Council is the governing body of the sorority system. The Panhellenic Council provides many opportunities for involvement in campus life and the fraternity and sorority system outside of the individual sororities. Recruitment, social, and educational opportunities are provided by the council. All five social sororities recognized by Rochester Institute of Technology are active members of the National Panhellenic Conference.
Special Interest Houses
RIT's dormitories are home to seven "Special Interest Houses" — Art House, Computer Science House, Engineering House, House of General Science, International House, Photo House, and Unity House — that provide an environment to live immersed in a specific interest, such as art, engineering, or computing. Members of a special-interest house share their interests with each other and the rest of campus through academic focus and special activities. Special Interest Houses are self-governing and accept members based on their own criteria. In the early 2000s, RIT had a Special Interest House called Business Leaders for Tomorrow, but it no longer exists.
ROTC programs
RIT is the host of the Air Force ROTC Detachment 538 "Blue Tigers" and the Army ROTC "Tiger Battalion". RIT students may also enroll in the Naval ROTC program based at the University of Rochester.
In 2009, the "Tiger Battalion" was awarded the Eastern Region's Outstanding ROTC Unit Award, given annually by the Order of the Founders and Patriots of America. In 2010, it was awarded the National MacArthur Award for 2nd Brigade.
Reporter Magazine
Reporter Magazine (Reporter) is a completely student-run organization at the Rochester Institute of Technology. The magazine is a 32-page, full-color issue printed on the first Friday of each month during the academic year, supplemented with daily online content. Reporter provides insightful content pertinent to the RIT community and the Rochester community at large.
K2GXT – RIT Amateur Radio Club
Students interested in amateur radio can join K2GXT, the RIT amateur radio club. It is the oldest club on campus, founded in 1952 at the original downtown Rochester campus. The club maintains a UHF and VHF amateur radio repeater system operating on the 2-meter and 70-centimeter bands. The repeater system serves the campus and surrounding areas.
WITR 89.7
An FM radio station run by students at RIT, WITR 89.7 broadcasts various music genres, RIT athletic events, and several talk radio programs. WITR can be heard throughout Rochester and its suburbs, and via an online stream on its website. In 2015, the station opened a studio with a see-through window in the Student Alumni Union.
College Activities Board
The College Activities Board, frequently abbreviated as CAB, is a student-run organization responsible for providing "diverse entertainment and activities to enhance student life on the RIT campus." CAB is responsible for annual concerts, class trips, movie screenings, and other frequent events.
Imagine RIT
An annual festival, publicized as "Imagine RIT", was initiated in May 2008 to showcase innovative and creative activity at RIT. It is one of the most prominent changes brought to RIT by former university president, William Destler.
An open event, Imagine RIT gives visitors an opportunity to tour the RIT campus and view new ideas for products and services, admire fine art, explore faculty and student research, examine engineering design projects, and interact with hundreds of hands-on exhibits. Theatrical and musical performances take place on stages in many locations on the RIT campus. Intended to appeal to visitors of all ages, including children, the festival features a variety of exhibits. More than 17,000 people attended the inaugural festival on May 3, 2008; ten years later, attendance had doubled, reaching almost 35,000.
Rochester Game Festival
Sponsored by RIT's MAGIC Center, ROC Game Dev, and the Irondequoit Library, the Rochester Game Festival is an annual convention that showcases video games and tabletop games produced by students and by independent developers in the surrounding region. More than 1,300 people attended the festival in 2019.
RIT Ambulance
RIT Ambulance (RITA) is a community-run, 9-1-1-dispatched, New York State certified basic life support ambulance agency. As a New York State certified ambulance service, RIT Ambulance actively provides reciprocal mutual aid to surrounding communities throughout Monroe County and New York State, and is prepared to support any of the neighboring communities' ambulance services if additional resources are required. RIT Ambulance operates a New York State certified basic life support ambulance and a basic life support first response / command vehicle.
RIT Ambulance provides coverage 24 hours a day, 7 days a week throughout the year. The ambulance is staffed on a volunteer basis by state-certified students, faculty, staff, alumni and community members. RIT Ambulance also provides standbys as requested for concerts, sporting events, and other social gatherings for the RIT community.
Public Safety
RIT Public Safety is the primary agency responsible for the protection of students, staff, and property, as well as enforcement of both college policies and state laws. Officers are NYS-licensed security guards who possess an expanded scope of authority under NYS Education Law, and many officers have prior law enforcement backgrounds. In 2016, it was announced that RIT Public Safety would deploy officers armed with long guns to respond to active shooter incidents. Public Safety officers operate a dispatch center and various types of patrol units on campus and at off-campus holdings (such as The Inn and Conference Center), and also manage the call box system. Activating a call box automatically places the user in touch with an officer in the dispatch center, who will direct patrol officers to respond to the location; if necessary, officers will summon the Monroe County Sheriff's Office to respond as well. As the college does not have 24/7 on-campus crisis intervention counselors, in the event of a mental or behavioral health incident during hours when a counselor is not available, Public Safety officers are also trained to act as mediators until an on-call counselor can be summoned.
Dining services
RIT Dining Services manages a large number of restaurants and food shops, along with the sole dining hall on campus. There are multiple cafeterias and small retail locations throughout the campus, including near the residence halls, in the Student Alumni Union, in Global Village, and in certain academic buildings. Dining Services at RIT is run entirely by the university. RIT Dining Services also provides opportunities for international students to work on campus. In early 2019, the campus began serving food from an on-campus hydroponic farm that supplied lettuce, kale, and other crops.
Governance
RIT is governed under a shared governance model. The shared governance system is composed of the Student Government, the Staff Council, and the Academic Senate. The University Council brings together representatives from all three groups and makes recommendations to the president of the university. Once the University Council has made a recommendation, the President makes the final decision.
Student Government
The Student Government consists of an elected student senate and a cabinet appointed by the President and Vice President. Elections for academic and community senators occur each spring, along with the elections for the President and Vice President.
The Student Government is an advocate for students and is responsible for basic representation as well as improving campus life for students. The Student Government endorses proposals that are brought before the University Council.
Academic Senate
The Academic Senate is responsible for representing faculty within the shared governance system. The Academic Senate has 43 senators.
Staff Council
The Staff Council represents staff in the shared governance system.
Notable alumni
RIT has over 125,000 alumni worldwide, with 9 of them having gone on to receive 15 Pulitzer Prizes.
Notable alumni include Bob Duffy, former New York Lieutenant Governor; Tom Curley, former president and CEO of the Associated Press; Daniel Carp, former chairman of the Eastman Kodak Company; Koo Kwang-mo, chairman and CEO of LG Corporation; Clayton Turner, director of NASA’s Langley Research Center; John Resig, software developer and creator of jQuery; N. Katherine Hayles, critical theorist; Austin McChord, founder and CEO of Datto; Jack Van Antwerp, former director of photography for The Wall Street Journal; photojournalist Bernie Boston; and former Executive Director of the EAC and current Deputy Assistant Director of CISA, Mona Harrington.
Presidents and provosts
In the decades prior to the selection of RIT's first president, the university was administered primarily by the board of trustees.
In addition to the ten official presidents, Thomas R. Plough served as acting president twice: once, in February 1991 when M. Richard Rose was on sabbatical with the CIA, and again in 1992 between Rose's retirement and Albert J. Simone's installation.
See also
Association of Independent Technological Universities
List of Rochester Institute of Technology alumni
References
Further reading
External links
Rochester Institute of Technology
Universities and colleges in Monroe County, New York
Education in Rochester, New York
Engineering universities and colleges in New York (state)
Technological universities in the United States
Private universities and colleges in New York (state)
Educational institutions established in 1829
1829 establishments in New York (state)
Buildings and structures in Rochester, New York
|
47167
|
https://en.wikipedia.org/wiki/Route%20flapping
|
Route flapping
|
In computer networking and telecommunications, route flapping occurs when a router alternately advertises a destination network via one route then another, or as unavailable and then available again, in quick sequence.
Route flapping is caused by pathological conditions (hardware errors, software errors, configuration errors, intermittent errors in communications links, unreliable connections, etc.) within the network which cause certain reachability information to be repeatedly advertised and withdrawn. For example, link flap occurs when an interface on a router has a hardware failure that causes the router to announce it alternately as "up" and "down".
In networks with link-state routing protocols, route flapping will force frequent recalculation of the topology by all participating routers. In networks with distance-vector routing protocols, route flapping can trigger routing updates with every state change. In both cases, it prevents the network from converging.
Route flapping can be contained to a smaller area of the network if route aggregation is used. As an aggregate route will not be withdrawn as long as at least one of the aggregated subnets is still valid, a flapping route that is part of an aggregate will not disturb the routers that receive the aggregate.
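To make the containment concrete, the following sketch uses Python's standard ipaddress module (the prefixes are RFC 5737 documentation addresses chosen for illustration, not values from the article) to show how two more-specific routes collapse into a single aggregate that stays valid while at least one of them remains reachable:

import ipaddress

# Two adjacent /25 subnets learned from downstream routers.
subnets = [
    ipaddress.ip_network("192.0.2.0/25"),
    ipaddress.ip_network("192.0.2.128/25"),
]

# The aggregate advertised upstream covers both subnets.
aggregate = next(ipaddress.collapse_addresses(subnets))
print(aggregate)  # 192.0.2.0/24

# If one /25 flaps, the /24 aggregate still covers the surviving
# subnet, so upstream routers never see a withdrawal.
surviving = ipaddress.ip_network("192.0.2.0/25")
print(aggregate.supernet_of(surviving))  # True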
See also
BGP route damping
Supernet
References
Flapping
|
57438872
|
https://en.wikipedia.org/wiki/Katalon%20Studio
|
Katalon Studio
|
Katalon Studio is an automation testing software tool developed by Katalon, Inc. The software is built on top of the open-source automation frameworks Selenium and Appium, with a specialized IDE interface for web, API, mobile and desktop application testing. Its initial release for internal use was in January 2015, and its first public release was in September 2016. In 2018, the software reached 9% market penetration for UI test automation, according to The State of Testing 2018 Report by SmartBear.
Katalon was recognized as a March 2019 and March 2020 Gartner Peer Insights Customers' Choice for Software Test Automation.
Product
Katalon Studio provides a dual, interchangeable interface for creating test cases: a manual view for less technical users and a script view geared toward experienced testers, who can author automated tests with syntax highlighting and intelligent code completion.
Katalon Studio follows the Page Object Model pattern. GUI elements on web, mobile, and desktop apps can be captured using the recording utility and stored into the Object Repository, which is accessible and reusable across different test cases.
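As a rough illustration of the pattern (generic Selenium code in Python rather than Katalon Studio's own Groovy API; the page URL and element IDs are invented for the example), a page object captures locators once so that test cases reuse them instead of repeating raw selectors:

from selenium import webdriver
from selenium.webdriver.common.by import By

class LoginPage:
    # Plays the role of an Object Repository entry: the locators for
    # one page live here and are shared by every test case.
    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        self.driver.find_element(By.ID, "username").send_keys(user)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "submit").click()

# A test case talks to the page object, not to raw locators.
driver = webdriver.Chrome()
driver.get("https://example.com/login")  # invented URL
LoginPage(driver).log_in("tester", "secret")
driver.quit()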
Test cases can be structured using test suites with environment variables. Test execution can be parameterized and parallelized using profiles.
Remote execution in Katalon Studio can be triggered by CI systems via Docker container or command line interface (CLI).
From version 7.4.0, users are able to execute test cases from Selenium projects, in addition to the previously supported migration of TestNG and JUnit projects to Katalon Studio.
Version 7.8 added several troubleshooting features intended to reduce debugging effort: Time Capsule, a browser-based video recorder, self-healing, and test failure snapshots.
Version 8.0.0 introduced native integration with Azure DevOps (ADO), which enables users to map test cases in Azure DevOps to automated test cases in Katalon Studio and to automatically send test execution logs and reports from Katalon Studio to test runs in ADO, giving a clearer picture of the testing process. The release also added reusable desired capabilities across projects, faster load times, and a new product tour.
Technologies
The test automation framework provided within Katalon Studio was developed with the keyword-driven approach as the primary test authoring method with data-driven functionality for test execution.
The user interface is a complete integrated development environment (IDE) implemented on Eclipse rich client platform (RCP).
The keyword libraries are a composition of common actions for web, API, and mobile testing. External libraries written in Java can be imported into a project to be used as native functions.
The main programming languages used in Katalon Studio are Groovy and Java. Katalon Studio supports cross-environment test execution based on Selenium and Appium.
Supported technologies
Modern web technologies: HTML, HTML5, JavaScript, Ajax, Angular
Windows desktop apps platforms: Universal Windows Platform (UWP), Windows Forms (WinForms), Windows Presentation Foundation (WPF), and Classic Windows (Win32)
Cross-browser testing: Firefox, Chrome, Microsoft Edge, Internet Explorer (9,10,11), Safari, headless browsers
Mobile apps: Android and iOS (Native apps and mobile web apps)
Web services: RESTful and SOAP
System requirements
Operating systems: Windows 7, Windows 8, Windows 10, macOS 10.11+, Linux (Ubuntu-based)
License
Katalon Studio started out as freeware. In October 2019, Katalon introduced a new product set with proprietary licenses in its seventh release: Katalon Studio (free), Katalon Studio Enterprise, and Katalon Runtime Engine, so that teams and projects of varying complexity can allocate budget, licensing, and scale flexibly.
Several features that were previously free were moved to the Katalon Studio Enterprise license.
Relevant products
Katalon TestOps
Katalon TestOps is a web-based application that provides visualized test data and execution results through charts, graphs, and reports. Its key features include test management, test planning, and test execution. Katalon TestOps can be integrated with Jira and other CI/CD tools.
Katalon TestOps was originally released as Katalon Analytics in November 2017. In October 2019, Katalon officially changed the name to Katalon TestOps. As of May 2021, it is positioned as a test orchestration platform for DevOps teams.
Katalon Recorder
Katalon Recorder is a browser add-on for recording user's actions in web applications and generating test scripts. Katalon Recorder supports both Chrome and Firefox. Katalon Recorder functions in the same way as Katalon Studio's recording utility, but it can also execute test steps and export test scripts in many languages such as C#, Java, and Python.
Katalon Recorder 5.4 was released in May 2021.
Katalium
Katalium is a framework that provides a blueprint for test automation projects based on Selenium and TestNG. The framework is built to help users who still need to work with TestNG and Selenium to quickly set up test cases.
Katalium Server is a component of the Katalium framework. It is a set of enhancements to improve the user experience with Selenium Grid. Katalium Server can be run as a Standalone (single) server in development mode.
Both Katalium Framework and Katalium Server are made open-source.
Katalon Store
Katalon Store serves as a platform for testers and developers to install add-on products (or "plugins") that add features and optimize test automation strategies in Katalon Studio. Users can install, manage, rate, and write reviews for plugins.
In Katalon Store, plugins are made available in 3 main categories: Integration, Custom Keywords, and Utilities. Katalon Store also allows users to build and submit their own plugins.
Integrations
Katalon Studio can be integrated with other software products, including:
Software development life cycle (SDLC) management: Jira, TestRail, qTest, and TestLink
CI/CD integration: Jenkins, Bamboo, TeamCity, CircleCI, Azure DevOps, and Travis CI
Team collaboration: Git, Slack, and Microsoft Teams
Execution platform support: Selenium, BrowserStack, SauceLabs, LambdaTest, and Kobiton
Visual testing: Applitools
See also
Selenium (software)
Appium
Test automation
GUI software testing
Comparison of GUI testing tools
List of GUI testing tools
List of web testing tools
References
Graphical user interface testing
Software testing tools
|
22187350
|
https://en.wikipedia.org/wiki/Pericyma
|
Pericyma
|
Pericyma is a genus of moths in the family Erebidae. The genus was erected by Gottlieb August Wilhelm Herrich-Schäffer in 1851.
Species
Pericyma albidens (Walker, 1865) southern India
Pericyma albidentaria (Freyer, 1842) Greece, Saudi Arabia, Iran, Kazakhstan; from Asia Minor and the Middle East to Afghanistan and Turkestan
Pericyma andrefana (Viette, 1988) Madagascar
Pericyma atrifusa (Hampson, 1902) Burkina Faso, Togo, Nigeria, Arabia, Sudan, Somalia, Kenya, Tanzania, Mozambique, Botswana, Zambia, Zimbabwe, Eswatini, South Africa, Namibia
Pericyma basalis (Saalmüller, 1891) Madagascar, Reunion
Pericyma caffraria (Möschler, 1884) South Africa
Pericyma cruegeri (Butler, 1886) Hong Kong, Taiwan, Vietnam, Thailand, Sumatra, Peninsular Malaysia, Borneo, Philippines, New Guinea, Queensland
Pericyma deducta (Walker, [1858]) South Africa
Pericyma detersa (Walker, 1865) northern India, Pakistan
Pericyma glaucinans (Guenée, 1852) Saudi Arabia, India (Silhet, Punjab), Myanmar, Thailand, Vietnam, Malaysia, Taiwan, Java, Philippines
Pericyma griveaudi (Laporte, 1973)
Pericyma madagascana Hacker, 2016 Madagascar
Pericyma mauritanica Hacker & Hausmann, 2010 Mauritania, Ivory Coast, Burkina Faso
Pericyma mendax (Walker, [1858]) Mauritania, Senegal, Gambia, Ghana, Burkina Faso, Nigeria, Congo, Zaire, Botswana, Saudi Arabia, Sudan, Somalia, Eritrea, Ethiopia, Uganda, Kenya, Malawi, Tanzania, Zambia, Zimbabwe, Mozambique, Eswatini, South Africa, Namibia, Madagascar, Mauritius
Pericyma metaleuca Hampson, 1913 Arabia, Ethiopia, Somalia, Kenya, Tanzania
Pericyma minyas (Fawcett, 1916) Ethiopia, Somalia, Kenya, Tanzania
Pericyma polygramma Hampson, 1913
Pericyma pratti (Kenrick, 1917) Madagascar
Pericyma scandulata (Felder & Rogenhofer, 1874)
Pericyma schreieri Hacker, 2016 Ethiopia, Somalia, Kenya, Tanzania
Pericyma signata Brandt, 1939 Saudi Arabia, Iran, Afghanistan, Nepal
Pericyma squalens Lederer, 1855 Turkey, Arabia, Libya, Egypt, Iran, Iraq, Jordan, Palestine, Tajikistan
Pericyma subbasalis Hacker, 2016 Taiwan
Pericyma subtusplaga Berio, 1984 Kenya
Pericyma umbrina (Guenée, 1852) India, Kenya, Somalia, Namibia, South Africa
Pericyma viettei (Berio, 1955) Madagascar
Pericyma vinsonii (Guenée, 1862) Madagascar, Mauritius, Reunion
References
Pericymini
Noctuoidea genera
|
39127116
|
https://en.wikipedia.org/wiki/Sheep%20dip%20%28computing%29
|
Sheep dip (computing)
|
In data security, a sheep-dip is the process of using a dedicated computer to test files on removable media for viruses before they are allowed to be used with other computers.
This protocol is a normal first line of defense against viruses in high-security computing environments and IT security specialists are expected to be familiar with the concept.
The process was originally developed in response to the problem of boot sector viruses on floppy discs. Subsequently, its scope has been expanded to include USB flash drives, portable hard discs, memory cards, CD-ROMs and other removable devices, all of which can potentially carry malware.
The name sheep-dip is derived from a method of preventing the spread of parasites in a flock of sheep by dipping all of the animals one after another in a trough of pesticide.
The term has been in use since at least the early 1990s, though footbath was also used at the time. A sheep-dip system can be considered a special case of a sandbox, used to test for malware.
Typical sheep-dip system
A sheep-dip is normally a stand-alone computer, not connected to any network. It has antivirus software in order to scan removable media and to protect the sheep dip computer itself. The system can be made more effective by having more than one antivirus program, because any single antivirus product will not be able to detect all types of virus.
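A minimal sketch of such a station follows, assuming ClamAV's clamscan command (which exits with 0 when no virus is found) plus a second, purely hypothetical engine called otherscan; any non-zero exit, whether a detection or a scan error, rejects the medium:

import subprocess
import sys

MOUNT_POINT = "/media/usb"  # where the removable medium is mounted

# clamscan exits 0 for a clean scan and non-zero otherwise;
# "otherscan" is a hypothetical stand-in for a second engine.
SCANNERS = [
    ["clamscan", "--recursive", MOUNT_POINT],
    ["otherscan", "--scan", MOUNT_POINT],  # hypothetical second engine
]

def sheep_dip(scanners):
    for cmd in scanners:
        if subprocess.run(cmd).returncode != 0:
            print(f"REJECT: {cmd[0]} flagged the medium")
            return False
    print("PASS: medium clean according to all engines")
    return True

if __name__ == "__main__":
    sys.exit(0 if sheep_dip(SCANNERS) else 1)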
It is very important to secure sheep-dip computers as strongly as possible against malware, because their role as a first line of defence means that they are particularly likely to be attacked. Software updates should be applied as soon as they become available. Antivirus signatures should be the most up-to-date that are available, which in practice means that they must be updated at least daily. The operating system should be hardened and locked down as far as possible.
Network connections are avoided for two reasons. Firstly, an Internet connection is a potential attack vector via which the computer could be compromised. Secondly, there is a risk that a worm on a removable device might escape into a local area network if the sheep-dip computer is connected to it.
Computers running Incident Command System (ICS) Protection Agent will not accept any removable USB media device that has not been scanned and validated by the USB scanner station, thereby blocking all file transfer and application execution from unauthorized devices.
Weaknesses of typical systems
Isolation from networks makes automatic updating impossible, because the sheep-dip computer is not able to make contact with the servers from which software updates and antivirus signatures are distributed. It is therefore normal for updates to be applied manually, after they have been downloaded by a separate network-connected computer and copied to a USB flash drive.
When a computer's security and antivirus updates are dependent on manual intervention by human beings, the system's security becomes vulnerable to human error. If pressure of work prevents updates from being applied as soon as they become available, a sheep-dip computer will gradually become more and more insecure.
Absence of network connections also makes it difficult for an organisation to monitor the status of sheep-dips if it has deployed them to several different locations. The people with central responsibility for IT security must rely on prompt and accurate reports from those who use the sheep-dips. Again, there is a risk of human error.
Active sheep-dip system
In an active sheep-dip the antivirus protection is monitored in real time with another program in order to increase security. Antivirus is only effective if it is up-to-date, properly configured, and running. Active sheep-dips add an extra layer of security by checking antivirus and intervening if necessary.
At the very least, an active sheep-dip must disable access to removable media if it detects that its own antivirus signatures are not up-to-date. A more advanced system can be allowed limited network access for automatic updates and remote monitoring, but it must only enable its network connection when there is no immediate malware risk. When the network connection is active all removable media access must be disabled.
References
External links
— Open source active sheep-dip software.
Data security
Cyberwarfare
|
61229292
|
https://en.wikipedia.org/wiki/Raylib
|
Raylib
|
Raylib (stylized in lowercase as raylib) is a cross-platform open-source software development library. The library is meant to create graphical applications and games. The official website introduces it as "a simple and easy-to-use library to enjoy video games programming."
The library is highly inspired by the Borland BGI graphics library and by the XNA framework, and is designed to be well suited for prototyping, tooling, graphical applications, embedded systems and education. The source code is written in plain C (C99) and aims to be easy for beginners; it is distributed under a zlib/libpng OSI-certified open-source license. It supports compilation to several target platforms, including Windows, Linux, macOS, FreeBSD, Android, Raspberry Pi and HTML5.
raylib has been ported to more than 50 programming languages (though most of those ports are not stable) in the form of bindings. raylib provides traditional documentation as well as a cheatsheet giving a brief explanation of its functions and features.
History
raylib development was started in August 2013 by Ramon Santamaria to support a game development course aimed at students with artistic profiles and no previous coding experience; the library acted as a direct replacement for WinBGI. During the course, raylib was further developed based on student feedback, and by June 2014 the library was being showcased at several game development events in Barcelona.
raylib 1.0 was released in November 2013 and featured around 80 functions for window and input management, basic 2D and 3D shape drawing, texture loading and drawing, font loading, text drawing, audio system management, and audio file loading and playback. The first raylib version had eight subsequent minor releases (raylib 1.1 to raylib 1.8) over the course of five years, each of which introduced new features. Some of the most notable improvements were Android, WebAssembly and Raspberry Pi support, multiple OpenGL backends, VR support and ten examples.
raylib 2.0 was released in July 2018 and removed all external dependencies from the build system. It also exposed a number of configuration options in the build system to minimize size and broaden support, supporting various continuous integration systems. Over the following two years, parts of the library were reviewed and updated, and the ecosystem was built out. A single minor release, raylib 2.5, was released during this period.
raylib 3.0 was released in April 2020, refactoring many parts of the code to improve portability and bindings. Global variables were moved into contexts, and support was added for custom memory allocators, a filesystem for loading assets, and over 115 code examples. It received a minor update, raylib 3.5, in December 2020.
raylib 4.0 was released in November 2021, featuring a complete naming review for library consistency and coherency: function names, parameters, descriptions, comments and log output messages were reviewed. It added an internal events automation system and exposed game-loop control to the user. It also allows some of its internal libraries, rlgl and raymath, to be used as standalone modules. The Zig and Odin programming languages officially support raylib. It has been the biggest update of the library to date.
Features
raylib offers the following features:
Support for multiple platforms, including Windows, Linux, macOS, Raspberry Pi, Android and HTML5
Support for OpenGL 1.1, 2.1, 3.3, 4.3 and OpenGL ES 2.0 as backend
Image, textures and fonts loading and drawing from several formats
Audio loading and playing from several formats and streaming support
Math operations for vectors, matrices, and quaternions
2D rendering with a camera, including automatic sprites batching
3D models rendering including custom shaders and postprocessing shaders
Support for VR simulations with configurable HMD device parameters
Support for animated as well as non-animated 3D and 2D models
An example collection with more than 120 code examples
Reception and adoption
raylib was primarily intended for education on video games and graphics programming. However, since many developers found it simple and easy to use, it has been adopted in various hobbyist projects.
Multiple communities exist for raylib on services such as Reddit and Discord. On the raylib website, a handful of social networks are listed, including the personal sites of Santamaria, and communities dedicated to raylib.
GitHub lists over 120 projects on the raylib topic.
Software architecture
Modules
raylib consists of several modules that are exposed to the programmer through the API.
core – Handles window creation and OpenGL context initialization, as well as input management (keyboard, mouse, gamepad and touch input)
rlgl – Handles OpenGL backend, abstracting multiple versions to a common API. This module can be used standalone.
shapes – Handles basic 2D shape rendering (line, rectangle, circle...) and basic collision detection
textures – Handles image and texture loading (CPU and GPU) and management, including image manipulation functionality (crop, scale, tint, etc.)
text – Handles font loading as a spritesheet and text rendering. Also includes some text-processing functionality (join, split, replace, etc.)
models – Handles 3D model loading and rendering, including support for animated models
raudio – Handles audio device management and audio file loading and playback, including streaming support. This module can be used standalone.
raymath – Provides a set of math functions for vectors, matrices and quaternions
Bindings
raylib has bindings for more than 50 different programming languages, created by its community, including Rust, Go, C#, Lua, Python, and Nim. A list of bindings is available in the BINDINGS.md file in the raylib GitHub repository.
Several newer programming languages, such as Beef, Odin and Ring, provide bindings for raylib. The Ring programming language includes raylib in its standard library.
Add-ons
The raylib community has contributed several add-ons to extend the features and connection of raylib with other libraries. Some of the modules are:
raygui – Immediate mode GUI module for raylib
physac – physics module intended to be used with raylib
libpartikel – particle system module for raylib
spine-raylib – Spine animations integration module for raylib
cimgui-raylib – Dear Imgui integration module for raylib
Awards
raylib has won a number of awards, which are displayed on its website.
In April 2019, Santamaria was awarded with the Google Open Source Peer Bonus award for contributing to the open-source ecosystem with raylib.
In August 2020, raylib was awarded with an Epic MegaGrant by Epic Games to support its development.
In April 2021, Santamaria was awarded with another Google Open Source Peer Bonus award for the same reasons.
Example
The following program in the C programming language uses raylib to create a white window with some centered text.
#include "raylib.h"

int main(void)
{
    // Window dimensions
    const int screenWidth = 800;
    const int screenHeight = 450;

    // Create the window and the OpenGL context
    InitWindow(screenWidth, screenHeight, "raylib [core] example - basic window");

    SetTargetFPS(60);  // Run the game loop at 60 frames per second

    // Main game loop: runs until the window close button or ESC is pressed
    while (!WindowShouldClose())
    {
        BeginDrawing();
        ClearBackground(RAYWHITE);  // Clear the frame to white
        DrawText("Congrats! You created your first window!", 190, 200, 20, LIGHTGRAY);
        EndDrawing();
    }

    CloseWindow();  // Destroy the window and unload the OpenGL context
    return 0;
}
See also
References
External links
raylib games on Itch.io
Application programming interfaces
C (programming language) libraries
Graphics libraries
Audio libraries
Cross-platform software
Windows APIs
Linux APIs
MacOS APIs
Video game development
Video game development software for Linux
|
26533354
|
https://en.wikipedia.org/wiki/2010%20Pacific-10%20Conference%20football%20season
|
2010 Pacific-10 Conference football season
|
The 2010 Pacific-10 Conference football season began on September 2, 2010 with a victory by USC at Hawaii. Conference play began on September 11 with Stanford shutting out UCLA 35–0 in Pasadena on ESPN.
Oregon repeated as the conference champion, ending the regular season with a program-first twelve wins and with a #2 BCS ranking. The Ducks earned a berth in the 2011 BCS National Championship Game, which they lost to SEC Champion Auburn. Stanford repeated as the conference runner-up, ending the regular season with a program-first eleven wins (their sole loss was to Oregon) and with a #4 BCS ranking, giving them an at-large BCS berth. The Cardinal defeated ACC Champion Virginia Tech in the 2011 Orange Bowl. Arizona lost to Oklahoma State while Washington defeated Nebraska in non-BCS bowls.
This was the final season for the conference as a 10-team league. In July 2011, Colorado and Utah joined the conference, at which time the league's name changed to the Pac-12 Conference.
By the end of the bowl season, the Sagarin Ratings ranked the Pac-10 as the best conference in college football overall.
Previous season
During the 2009 NCAA Division I FBS football season, the Pac-10 teams won 2 and lost 5 bowl games:
Las Vegas Bowl – BYU 44, Oregon State 20
Poinsettia Bowl – Utah 37, California 27
Emerald Bowl – USC 24, Boston College 13
EagleBank Bowl – UCLA 30, Temple 21
Holiday Bowl – Nebraska 33, Arizona 0
Sun Bowl – Oklahoma 31, Stanford 27
Rose Bowl – Ohio State 26, Oregon 17
Preseason
March 12, 2010 – Coach Chip Kelly suspended quarterback Jeremiah Masoli for the 2010 season after he pleaded guilty to second-degree burglary charges.
March 19, 2010 – Oregon athletic director Mike Bellotti steps down to join ESPN as a football analyst.
June 9, 2010 – Oregon dismisses Masoli.
June 10, 2010 – The NCAA releases the report of its investigation of the USC football team for violations dealing with former Trojans running back Reggie Bush. Sanctions imposed include a loss of scholarships and a two-year postseason ban.
June 10, 2010 – Colorado joins the Pac-10 as its 11th member effective July 1, 2012. (The school and its then-current conference, the Big 12, later reached an agreement in September 2010 to allow the Buffaloes to join the Pac-10 in 2011.)
June 17, 2010 – Utah joins the Pac-10 as its 12th member effective July 1, 2011. Although Utah was the 12th member to accept an invitation to the conference, it was at the time expected to be the 11th member to begin competing, since Colorado was not initially scheduled to join until 2012.
July 1, 2010 – Running backs coach Todd McNair's contract at USC expired June 30, 2010. He had figured prominently in the NCAA's investigation of the school's athletic department over its dealings with Reggie Bush.
July 6, 2010 – Seantrel Henderson, the nation's No. 1-ranked offensive tackle recruit was given a release from his commitment to play with USC. Defensive end Malik Jackson transferred to Tennessee.
July 29, 2010 – Annual media poll: 1. Oregon (314 points); 2. USC (311); 3. Oregon State (262); 4. Stanford (233); 5. Arizona (222); 6. Washington (209); 7. California (175); 8. UCLA (134); 9. Arizona State (81); 10. Washington State (39). Media day was held at the Rose Bowl.
Rankings
Highlights
September
September 11 – In the first conference game of the season, #25 Stanford defeated UCLA in a 35–0 shutout at the Rose Bowl, marking several firsts: the Cardinal's first victory in Pasadena since 1996, the first home shutout UCLA had suffered since an October 16, 1999, 17–0 loss to California, the first time Stanford had shut out an opponent on the road since 1974, and the first time since 1941 that Stanford shut out UCLA.
September 17 – In a matchup between Cal's top-ranked defense and Nevada's top-ranked offense, the Bears fell to the Wolf Pack 52–31 in Reno in the teams' first meeting since 1915.
September 18 – Oregon records two shutouts in a season for the first time since 1964 with a 69–0 blowout of Portland State, following a 72–0 shutout of New Mexico in its September 4 season opener. Two Pac-10 teams upset their opponents: UCLA defeated No. 23 Houston in the Rose Bowl for the Bruins' first win against a ranked opponent since 2008, and Arizona defeated No. 10 Iowa at home, scoring the most points allowed by the Hawkeyes so far that season.
September 19 – Five Pac-10 teams are ranked in the Top 25 (#5 Oregon, #14 Arizona, #16 Stanford, #20 USC, #24 Oregon State).
September 21 – Colorado and the Big 12 Conference reach an agreement that will allow the Buffaloes to join the Pac-10 in 2011.
September 25 – UCLA pulls off its second consecutive upset of a ranked opponent with a 34–12 defeat of No. 7 Texas in front of a stadium-record crowd of 101,437 in Austin. Stanford wins at Notre Dame for the first time since 1992. No. 14 Arizona survives a scare in Tucson, holding on with a late touchdown and interception against Cal to prevent an upset in both teams' Pac-10 openers. Four Pac-10 teams (#4 Oregon, #9 Stanford, #14 Arizona, #18 USC) are 4–0.
October
October 2 – #9 Stanford visited #4 Oregon in their first meeting as ranked teams, in a game that could decide the Pac-10 championship. The Ducks rallied from a 21–3 deficit to defeat the Cardinal. Washington upset #18 USC for the second consecutive year, winning at the Los Angeles Memorial Coliseum 32–31 on a last-second field goal.
October 9 – #3 Oregon remains the sole undefeated Pac-10 team at 6–0 with a victory over Washington State. Oregon State upsets #9 Arizona 29–27 in Tucson. Cal snaps a 3-game winning streak by UCLA with a 35–7 rout in Berkeley. #16 Stanford defeats USC 37–35 for the Trojans' second loss in a row on a last-second field goal.
October 16 – USC quarterback Matt Barkley throws a school record-tying five touchdowns in a 48–14 blowout victory over Cal. Cal has won three games (all at home) by a combined margin of 139–17 and lost three games (all on the road) by a combined 110–54. Washington upsets #24 Oregon State 35–34 in double overtime, snapping a six-game losing streak to the Beavers; the teams were tied at 21 points apiece at the end of regulation.
October 17 – Oregon earns a #1 ranking in the AP and Coaches' Polls and a #2 BCS ranking.
October 21 – Oregon quarterback Darron Thomas throws for a career-high 308 yards in a 60–13 blowout of UCLA.
October 23 – Stanford opens a season with six victories in its first seven games for the first time since 1970 with a win over Washington State, becoming bowl-eligible in back-to-back seasons for the first time since 1995–96. Cal defeats Arizona State 50–17, while #15 Arizona routs Washington 44–14.
October 30 – Arizona State shuts out Washington State 42–0 and Washington is shut out at home for the first time since 1976 by No. 13 Stanford 41–0. No. 15 Arizona holds off UCLA to prevail 29–21, while Oregon State defeats Cal for the fourth time in a row, 35–7. Oregon running back LaMichael James sets a school record with his 15th career 100-yard rushing game and Darron Thomas becomes the first quarterback to throw 20 touchdown passes in a season since 2007 as #1 Oregon stays unbeaten with a 53–32 defeat of #24 USC.
October 31 – Oregon is ranked first in the BCS, AP, and Coaches Polls.
November
November 6 – Top-ranked Oregon fails to score in the first quarter for the first time in the season in a 53–16 rout of Washington. #10 Stanford dominates #13 Arizona in a 42–17 victory. USC edges out Arizona State 34–33 after a last-minute Sun Devils field goal misses. UCLA defeats Oregon State 17–14 on a field goal with 1 second left in regulation. Cal holds off Washington State for its first road victory since the 2009 Big Game against Stanford.
November 13 – Oregon is held scoreless in the first quarter for the second week in a row and held to a season-low 317 yards of offense, but holds off Cal for a 15–13 victory, the first game of the season where the Ducks did not score at least 42 points and win by at least 11 points. #7 Stanford edges out Arizona State 17–13. USC upsets #18 Arizona 24–21. Washington State snaps a 16-game conference losing streak by defeating Oregon State 31–14 in Corvallis.
November 14 – Oregon holds its #1 ranking in all polls. Stanford holds its #7 ranking in the AP Poll and its #8 ranking in the Harris Poll while rising from #9 to #8 in the Coaches Poll. Arizona falls to #23 in all polls. USC returns to the AP rankings at #20. Three Pac-10 teams are assured of bowl berths: Oregon, Stanford, and Arizona.
November 18 – In its home finale, Washington has two 100-yard rushers for the first time since 2007 and puts up a season-high 253 yards rushing in a 24–7 defeat of UCLA.
November 20 – #7 Stanford ties a 1975 Cal record for the most points in Big Game history to recapture the Stanford Axe from Cal in Berkeley, 48–14. Oregon State upsets #20 USC at Corvallis 36–7, the third consecutive victory for the Beavers over the Trojans in Oregon. The Beavers will have faced five Top 10 teams by the end of the year.
November 26 – Arizona State tops UCLA 55–34. Wildcats quarterback Nick Foles passes for a career-high 448 yards, but his performance is not enough to stage an upset of #1 Oregon by #20 Arizona, as the Ducks prevail 48–29.
November 27 – Washington keeps its bowl hopes alive by scoring a touchdown with 2 seconds left against Cal to prevail 16–13, ending Cal's bowl hopes. #6 Stanford posts its first 11-win season in school history with a 38–0 shutout of Oregon State, its third conference shutout of the season. Notre Dame defeats USC 20–16 for its first win over the Trojans since 2001. Oregon moves down to #2 in the BCS rankings, while Stanford moves up to #4.
December
December 2 – Arizona State blocks two PATs to defeat Arizona in double overtime 30–29 in their annual Territorial Cup game.
December 3 – The NCAA denies Arizona State's request for a waiver to play in a post-season bowl game.
December 4 – Oregon repeats as the conference champion with a victory over Oregon State in the Civil War to finish with 12 wins for the first time in program history. USC defeats UCLA for the fourth straight time to hold on to the Victory Bell. Washington defeats Washington State in the Apple Cup on a game-winning touchdown with 44 seconds left in the game to become bowl-eligible for the first time since 2001.
December 5 – Auburn moves past Oregon for the #1 AP Ranking. The two teams will meet in the BCS National Championship Game. #5 Stanford won an at-large BCS berth and will face ACC Champion Virginia Tech in the Orange Bowl, Arizona will face #16 Oklahoma State in the Alamo Bowl, and Washington will face #17 Nebraska in the Holiday Bowl.
December 6 – Two of the four finalists for the Heisman Trophy represent the Pac-10: Oregon running back LaMichael James and Stanford quarterback Andrew Luck. This is the second year in a row that Stanford has had a Heisman Trophy finalist. Oregon head coach Chip Kelly is named the Eddie Robinson Coach of the Year by the Football Writers Association of America.
December 9 – Oregon running back LaMichael James is the recipient of the Doak Walker Award, the second year in a row that a Pac-10 running back has received the award.
December 11 – Stanford quarterback Andrew Luck is the runner-up in Heisman Trophy balloting to Auburn quarterback Cameron Newton, the second year in a row that a Stanford player is the runner-up in balloting for the Heisman.
December 21 – Oregon head coach Chip Kelly is named the Associated Press College Football Coach of the Year. Stanford's Jim Harbaugh finished third in balloting.
Notes
USC is ineligible for the postseason due to sanctions imposed by the NCAA
USC kicked off the Pac-10 football season by visiting Hawai'i on Thursday, September 2, 2010.
The Pac-10 football season ends with games on Saturday, December 4, 2010
January 6, 2011 – Fox signed a contract to air the first Pac-12 Conference football championship game on December 3, 2011 for $14.5 million.
Statistics leaders
Players-of-the-week
National
September 13 – Cal linebacker Mike Mohamed was named Lott IMPACT Player of the Week.
September 21 – UCLA linebacker Patrick Larimore, who had a career-high and team-high 11 tackles (10 solos), including three for loss, forced a fumble and broke up a pass in the upset of No. 23 Houston on September 18 was named the FWAA/Bronko Nagurski National Defensive Player of the Week.
September 27 – UCLA linebacker Akeem Ayers was named Lott IMPACT Player of the Week. The UCLA Bruins (2–2) are the Tostitos Fiesta Bowl National Team of the Week for games of the weekend of September 25.
Pacific-10 Conference
Pac-10 vs. BCS matchups
Bowl games
All bowl games involving the Pac-10 aired on ESPN.
Head coaches
Mike Stoops, Arizona
Dennis Erickson, Arizona State
Jeff Tedford, California
Chip Kelly, Oregon
Mike Riley, Oregon State
Jim Harbaugh, Stanford
Rick Neuheisel, UCLA
Lane Kiffin, USC
Steve Sarkisian, Washington
Paul Wulff, Washington State
Awards and honors
Eddie Robinson Coach of the Year and Associated Press College Football Coach of the Year
Chip Kelly, Oregon
Woody Hayes Trophy
Jim Harbaugh, Stanford
Doak Walker Award
LaMichael James, RB, Oregon
Paul Hornung Award
Owen Marecic, FB and LB, Stanford.
National Finalists
Akeem Ayers, LB, UCLA, Butkus Award (most outstanding defensive player)
LaMichael James, RB, Oregon, Heisman Trophy (most outstanding player) and Doak Walker Award (most outstanding running back)
Andrew Luck, QB, Stanford, Heisman Trophy, Maxwell Award (best player), and Davey O'Brien Award (best quarterback)
Owen Marecic, FB/LB, Stanford, William V. Campbell Trophy (top scholar-athlete)
Mike Mohamed, LB, California, William V. Campbell Trophy
All-Americans
Walter Camp Football Foundation All-America:
Running back LaMichael James, Oregon, first team All-America
Quarterback Andrew Luck, Stanford, second team All-America
Center Chase Beeler, Stanford, second team All-America
Linebacker Akeem Ayers, UCLA, second-team All-America
Defensive back Cliff Harris, Oregon, second-team All-America
Kick returner Cliff Harris, Oregon, second-team All-America
Associated Press All-America First Team:
RB LaMichael James, Oregon
OL Chase Beeler, Stanford
DT Stephen Paea, Oregon State
FWAA All-America Team:
Sporting News All-America team:
RB LaMichael James, Soph., Oregon, Offense first-team
OL Chase Beeler, Sr., Stanford, Offense first-team
DT Stephen Paea, Sr., Oregon State, Defense first-team
LB Vontaze Burfict, Soph., Arizona State, Defense first-team
S Rahim Moore, Jr., UCLA, Defense first-team
PR Cliff Harris, Soph., Oregon, Defense first-team
AFCA Coaches' All-Americans First Team:
ESPN All-America team:
All-Pac-10 teams
Offensive Player of the Year: Andrew Luck, QB, Stanford
Pat Tillman Defensive Player of the Year: Stephen Paea, DT, Oregon State
Offensive Freshman of the Year: Robert Woods, WR, USC
Defensive Freshman of the Year: Junior Onyeali, DE, Arizona State
Coach of the Year: Chip Kelly, Oregon
First Team:
ST=special teams player (not a kicker or returner)
All-Academic
First Team:
2011 NFL Draft
References
|
21232638
|
https://en.wikipedia.org/wiki/College%20of%20Engineering%2C%20Cherthala
|
College of Engineering, Cherthala
|
The Government College of Engineering, Cherthala (Malayalam: കോളേജ് ഓഫ് എഞ്ചിനീയറിംഗ്, ചേര്ത്തല) is an engineering college in the state of Kerala, India, established by the Government of Kerala in 2004 under the auspices of the Institute of Human Resources Development (IHRD),
and is recognized by the All India Council for Technical Education, New Delhi. The college is affiliated to APJ Abdul Kalam Technological University.
The college, located at Pallippuram close to the heart of Cherthala, is sufficiently removed from the hustle and bustle to provide a serene environment for higher learning. It is surrounded by vegetation and greenery, offers a panoramic view of the Vembanadu lake, and is close to the newly established InfoPark Smart Space at Pallippuram.
Courses
Four-year B.Tech. degrees in:
Computer Science and Engineering (formerly Computer Engineering) accredited by NBA through academic year 2022-23
Electronics and Communication Engineering (formerly Electronics Engineering)
Electrical and Electronics Engineering (formerly Electrical Engineering)
Two year postgraduate M.Tech course in:
Electronics Engineering (Specialization in Signal Processing)
Computer Science & Engineering (Specialization in Computer & Information Science)
Admission
Undergraduate programmes
Admission under both the merit quota and the management quota is based purely on the rank secured in the All Kerala Engineering Entrance Examination conducted by the Commissioner for Entrance Examinations, Govt. of Kerala. The difference between the merit quota and the management quota lies in the fees paid by candidates (about ₹35,000 for merit holders as of 2020). 15% of the seats are reserved for NRIs; that admission is based on merit in the qualifying examination.
Post graduate programmes
Admission is through the Graduate Aptitude Test in Engineering (GATE), administered and conducted jointly by the Indian Institute of Science and the Indian Institutes of Technology on behalf of the National Coordination Board – GATE, Department of Higher Education, Ministry of Human Resource Development (MHRD), Government of India.
Annual intake
CECTL has an annual intake of 270 students (plus 10% lateral entry students) through Government allotment, divided among the three branches as follows:
B.TECH
Computer Science and Engineering: 90 seats (+10% lateral entry students)
Electronics and Communication Engineering: 60 seats (+10% lateral entry students)
Electrical and Electronics Engineering: 60 seats (+10% lateral entry students)
M.TECH
Computer Science Engineering (Computer & Information Science): 18 seats
Student activities
College Senate
The members of the College Senate are elected by and from the students of the college. The College Senate consists of elected representatives from each class and lady representatives from each year. The College Senate has an executive committee consisting of chairman, Vice-chairman, General Secretary, Treasurer, Editor, Sports Club Secretary and Arts Club Secretary. The tenure of office of the College Senate is one academic year. The objective of the Senate is to train the students in the responsibilities of citizenship, to promote opportunities for development of character among students, to organize debates, seminars and tours and to encourage educational and social activities.
Technical and non-technical organizations
IEEE
IEDC
NSS
Nature Club
Arts and Sport club
AECES
CSI
Arts and sports
Arts and sports festivals are conducted every year by the College Senate. The Senate divides students into four houses, each with a captain and a vice-captain, and the festivals open with a college inauguration ceremony.
Notable achievements in arts and sports
2012-2013 Champions of the intra-IHRD 5s football tournament, hosted by College of Engineering, Chengannur
2013-2014 Runners-up of the CUSAT university football tournament
2013-2014 Runners-up of the inter-college 7s football tournament, at Model Engineering College, Thrikkakara
2014-2015 Champions of the inter-college 5s football tournament, at College of Engineering, Karunagappally
2015-2016 Champions of the intra-IHRD 5s football tournament, at College of Engineering, Chengannur
Training and Placement cell
The Training and Placement Cell (TPC) functions as a launching platform for qualified candidates to make their dreams a reality. TPC is guided by the placement officer along with faculty members from all departments, and is enriched by student members. TPC prepares students to face competitive examinations and interviews through intensive training programs encompassing aptitude tests, group discussions, mock interviews, and the basics of behavioral psychology and body language. TPC also assists students in career planning and employment strategies, invites reputed companies to the college, and organizes campus placement sessions.
Placement Details
See also
Cochin University of Science and Technology
Model Engineering College
College of Engineering Chengannur
College of Engineering Adoor
College of Engineering Karunagappally
College of Engineering Poonjar
College of Engineering Kallooppara
College of Engineering Attingal
College of Engineering kottarakkara
Institute of Human Resources Development
List of Engineering Colleges in Kerala
Cherthala
References
External links
College website
Cochin University of Science And Technology official website
APJ Abdul Kalam Technological University
The Institute of Human Resource Development Kerala official website
Engineering colleges in Kerala
All India Council for Technical Education
Institute of Human Resources Development
Universities and colleges in Alappuzha district
Educational institutions established in 2004
2004 establishments in Kerala
|
53956355
|
https://en.wikipedia.org/wiki/Trojan.Win32.DNSChanger
|
Trojan.Win32.DNSChanger
|
Trojan.Win32.DNSChanger is a backdoor trojan that redirects users to various malicious websites through the means of altering the DNS settings of a victim's computer. The malware strain was first discovered by Microsoft Malware Protection Center on December 7, 2006 and later detected by McAfee Labs on April 19, 2009.
Behaviour
DNS changer trojans are dropped onto infected systems by other malicious software, such as TDSS or Koobface. The trojan is a malicious Windows executable file that cannot spread to other computers on its own. Instead, it performs several actions on behalf of the attacker within a compromised computer, such as changing the DNS settings in order to divert traffic to unsolicited, and potentially illegal and/or malicious, domains.
The Win32.DNSChanger trojan is used by organized crime syndicates to carry out click fraud. The user's browsing activity is manipulated through various means of modification (such as altering the destination of a legitimate link so that it forwards to another site), allowing the attackers to generate revenue from pay-per-click online advertising schemes. The trojan is commonly found as a small file (about 1.5 kilobytes) that is designed to change the NameServer registry key value to a custom IP address or domain encrypted in the body of the trojan itself. As a result of this change, the victim's device contacts the newly assigned DNS server to resolve the names of malicious webservers.
Trend Micro described the following behaviors of Win32.DNSChanger:
Steering unknowing users to malicious websites: These sites can be phishing pages that spoof well-known sites in order to trick users into handing out sensitive information. A user who wants to visit the iTunes site, for instance, is instead unknowingly redirected to a rogue site.
Replacing ads on legitimate sites: Visiting certain sites can serve users with infected systems a different set of ads from those whose systems are not infected.
Controlling and redirecting network traffic: Users of infected systems may not be granted access to download important OS and software updates from vendors like Microsoft and from their respective security vendors.
Pushing additional malware: Infected systems are more prone to other malware infections (e.g., FAKEAV infection).
Alternative aliases
Win32:KdCrypt[Cryp] (Avast)
TR/Vundo.Gen (Avira)
MemScan:Trojan.DNSChanger (Bitdefender Labs)
Win.Trojan.DNSChanger (ClamAV)
variant of Win32/TrojanDownloader.Zlob (ESET)
Trojan.Win32.Monder (Kaspersky Labs)
Troj/DNSCha (Sophos)
Mal_Zlob (Trend Micro)
MalwareScope.Trojan.DnsChange (Vba32 AntiVirus)
Other variants
Trojan.Win32.DNSChanger.al
F-Secure, a cybersecurity company, received samples of a variant that were named PayPal-2.5.200-MSWin32-x86-2005.exe. In this case, the PayPal attribution indicated that a phishing attack was likely. The trojan was programmed to change the DNS server name of a victim's computer to an IP address in the 193.227.xxx.xxx range.
The registry key that is affected by this trojan is:
HKLM\SYSTEM\ControlSet001\Services\Tcpip\Parameters\Interfaces\NameServer
Other registry modifications made involved the creation of the below keys:
HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{random}, DhcpNameServer = 85.255.xx.xxx,85.255.xxx.xxx
HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{random}, NameServer = 85.255.xxx.133,85.255.xxx.xxx
HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\, DhcpNameServer = 85.255.xxx.xxx,85.255.xxx.xxx
HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\, NameServer = 85.255.xxx.xxx,85.255.xxx.xxx
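As an illustration of how such a change could be inspected, the following sketch (Python on Windows using the standard winreg module; the flagged prefixes are simply the ranges quoted above, and real forensics would check the per-interface keys as well) reads the system-wide NameServer value and reports entries in those ranges:

import winreg

SUSPICIOUS_PREFIXES = ("85.255.", "193.227.")  # ranges quoted above

# Read the system-wide NameServer value that the trojan overwrites.
path = r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
    try:
        nameserver, _ = winreg.QueryValueEx(key, "NameServer")
    except FileNotFoundError:
        nameserver = ""

# NameServer may hold several comma- or space-separated addresses.
for server in nameserver.replace(" ", ",").split(","):
    if server.startswith(SUSPICIOUS_PREFIXES):
        print(f"Suspicious DNS server configured: {server}")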
See also
DNSChanger
DNS hijacking
Rove Digital case
Zlob trojan
References
External links
How DNS Changer Trojans Direct Users to Threats by TrendMicro
FBI: Operation Ghost Click (F-Secure)
‘Biggest Cybercriminal Takedown in History’ (Brian Krebs @ krebsonsecurity.com)
Analysis of a DNSChanger file at VirusTotal
Adware
Consumer fraud
Cybercrime
Domain Name System
Hacking in the 2000s
Internet fraud
Internet Protocol based network software
Online advertising
Spamming
Windows trojans
|
19484779
|
https://en.wikipedia.org/wiki/Deep%20Freeze%20%28software%29
|
Deep Freeze (software)
|
Deep Freeze, by Faronics, is a reboot-to-restore software application available for the Microsoft Windows and macOS operating systems which allows system administrators to protect the core operating system and configuration files on a workstation or server by restoring the computer to a saved configuration each time it is restarted.
Deep Freeze can also protect a computer from harmful malware, since it automatically deletes (or rather, no longer "sees") downloaded files when the computer is restarted. The advantage of using Deep Freeze is that it uses very few system resources and thus does not significantly slow down computer performance. The disadvantage is that it does not provide real-time protection, so an infected computer would have to be restarted in order to remove malware.
Limitations and security
Deep Freeze only protects workstations in a "fresh-booted" state. That is, Deep Freeze prevents permanent tampering with protected hard drives/partitions across reboots, but user activity between restarts is not limited by the program. For example, Deep Freeze does not prevent application installation; a user could install a modified version of a Web browser (seemingly harmless to the unknowing user) designed to secretly send users' passwords to a server connected to the Internet. As a workaround, Deep Freeze can be configured to restart after user logout, shut down after a chosen period of inactivity, or restart/shut down at a scheduled time, in an attempt to ensure that no such installations are retained (as rebooting the system returns it to its original, unmodified state).
Deep Freeze cannot protect the operating system and hard drive upon which it is installed if the computer is booted from another medium (such as another bootable partition or internal hard drive, an external hard drive, a USB device, optical media, or network server). In such cases, a user would have real access to the contents of the (supposedly) frozen system. This scenario may be prevented by configuring the CMOS (nonvolatile BIOS memory) on the workstation to boot only to the hard drive to be protected, then password-protecting the CMOS. A further precaution would be to lock the PC case shut with a physical lock or tiedown cable system to prevent access to motherboard jumpers. Failure to take such precautions can compromise the protection provided by the software.
Deep Freeze can protect hard drive partitions larger than 2 TB (using NTFS).
References
Further reading
Moon, Peter (March 10, 2009). "PCs protected by the Freeze". The Australian Financial Review.
Blum, Jonathan (January 14, 2008). "Macworld preview: New tools for small biz". Fortune Small Business.
Ricadela, Aaron (June 28, 2005). "Microsoft Tests Tool For Computer Classrooms". InformationWeek.
External links
Utilities for Windows
Utilities for macOS
Windows security software
MacOS security software
Shareware
|
1959211
|
https://en.wikipedia.org/wiki/Brian%20Scalabrine
|
Brian Scalabrine
|
Brian David Scalabrine (born March 18, 1978), nicknamed the "White Mamba", is an American former professional basketball player who is currently a television analyst for the Boston Celtics of the National Basketball Association (NBA). He is also the co-host of "The Starting Lineup", which airs weekdays from 7 a.m. to 10 a.m. ET on SiriusXM NBA Radio.
Raised in Enumclaw, Washington, Scalabrine attended the University of Southern California after transferring from Highline College. As a member of the USC Trojans men's basketball team, Scalabrine was the top scorer and a leader in field goals and rebounds. He also played at the center position in college.
The New Jersey Nets selected him in the second round of the 2001 NBA draft. The Nets made consecutive NBA Finals his first two years, and Scalabrine played four seasons with the team. In 2005, he signed with the Boston Celtics and won a championship with the team in 2008. The Celtics also appeared in the 2010 NBA Finals. Scalabrine signed with the Chicago Bulls the following season, and played with them until 2012. Throughout his NBA career, Scalabrine served as a backup power forward.
In 2013, Mark Jackson announced that Scalabrine would join his Golden State Warriors coaching staff. In 2014, Scalabrine took a job as an analyst for Celtics games on local Boston broadcasts.
Early life and college
Born in Long Beach, California, Scalabrine was one of four children in his family and graduated from Enumclaw High School at Enumclaw, Washington in 1996. He is of Italian ancestry. He enrolled at Highline College in 1996, played his first year with its basketball team the Thunderbirds, and redshirted his second year. As a freshman at Highline, Scalabrine averaged 16.3 points, 9.6 rebounds, 2.9 assists and 1.2 steals per game. Scalabrine recorded seventeen double-doubles, and led the team in rebounds, blocks, and free throw percentage (75%). The Thunderbirds went 31-1 in the 1996–97 season and won the state junior college championship. Scalabrine was a Northern Division All-Star in 1997 as well as part of the All-Northwest Athletic Association of Community Colleges Championship Tournament Team.
In 1998, he transferred to the University of Southern California (USC). In his first year with the USC Trojans men's basketball team, he was the only player to start all 28 games. He led the Trojans in scoring (14.6 points), rebounding (6.4), and field goals (53.1%). In scoring, blocked shots, and field goals, he was also the only Pac-10 conference player among the top 10 players in those areas. His best game performance was against American University on December 21, 1998: 26 points, seven rebounds, and two blocks. On February 13, 1999, he scored 22 points including an important three-pointer in overtime; the unranked USC won an upset victory over number-six Stanford 86-82 in overtime. He was the 1999 Pac-10 Newcomer of the Year and earned an All-Pac-10 honorable mention.
During his second season with USC, Scalabrine was named to the All-Pac-10 first team and the National Association of Basketball Coaches All-District 15 first team. He also earned a Sporting News All-American honorable mention. Again, he finished as USC's top scorer (17.8 points per game) and field goal shooter (53.1%), and was the second-best scorer in the Pac-10. He also made 40.3% of his three-point attempts. Against the Oregon Ducks, Scalabrine scored 29 points and grabbed 10 rebounds.
USC advanced to the NCAA tournament in 2001, Scalabrine's senior season. In the Elite Eight round, USC lost to Duke 79-69; Scalabrine scored 13 points. Scalabrine graduated with a degree in history.
Career
New Jersey Nets (2001–2005)
After injuring his fifth metatarsal bone during workouts in late September 2001, Scalabrine missed the first ten days of New Jersey Nets training camp. During the second quarter of the final 2001–02 preseason game, against the Detroit Pistons on October 26, 2001, Scalabrine again injured his right foot. He made his NBA debut on January 31, 2002, against the Milwaukee Bucks. As a rookie, Scalabrine averaged 2.1 points, 1.8 rebounds, and 0.8 assists per game. He played in six playoff games his debut season, averaging 0.3 points and 0.5 rebounds. The Nets won the Eastern Conference in the 2001–02 season but lost the 2002 NBA Finals to the Los Angeles Lakers in four games. In a triple-overtime victory over the Detroit Pistons in Game 5 of the 2004 Eastern Conference Semifinals, Scalabrine scored a career-high 17 points. He surpassed that mark with 29 points on January 26, 2005, against the Golden State Warriors. On April 15, 2005, he played a career-high 45 minutes.
During his time with the Nets, Scalabrine gained the nickname "Veal", a play on words based on the dish veal scaloppini.
Boston Celtics (2005–2010)
On August 2, 2005, Scalabrine signed a five-year contract with the Boston Celtics; a month earlier, he and the team had agreed to terms on a deal worth $15 million over the five years.
Scalabrine started nine of 48 games during the 2007–08 season and averaged 10.7 minutes, 1.8 points, and 1.6 rebounds per game. On April 16, 2008, in the final game of the regular season, Scalabrine tied a season high with six rebounds and played 29 minutes. He did not appear in the NBA playoffs; in the 2008 Finals, the Celtics defeated the Lakers in six games.
Chicago Bulls (2010–2011)
On September 21, 2010, Scalabrine agreed to a non-guaranteed contract with the Chicago Bulls. The Bulls visited the Boston Celtics on November 5, 2010, and in double overtime the Bulls won 110-105. Scalabrine played only three minutes that game. He played 18 games with the Bulls and averaged 1.1 points and 0.4 rebounds per game.
Treviso (2011)
On September 22, 2011, during the 2011 NBA lockout, Scalabrine signed with the Italian team Benetton Treviso. He left the team in December 2011 to pursue opportunities in the NBA after the lockout had ended.
Return to Chicago (2011–2012)
On December 12, 2011, Scalabrine re-signed with the Bulls. During the 2011–12 season, he played in 28 games. In September 2012, he was offered a position as an assistant coach for the Bulls under Tom Thibodeau, but instead opted to become a broadcaster for the Boston Celtics.
In 2017, Scalabrine joined the Ball Hogs of the BIG3 basketball league.
Coaching career
In July 2013, Golden State Warriors coach Mark Jackson announced via Twitter that Scalabrine was joining his coaching staff. During the season, Jackson reassigned Scalabrine to the Warriors' D-League affiliate after a difference of opinion on the team's direction.
Broadcasting career
In September 2012, Scalabrine announced that he had turned down an opportunity to become an assistant coach with the Bulls so that he could join Comcast SportsNet New England as a commentator. Scalabrine described the job as "a trial run", and said there was a "small possibility" he would resume his playing career overseas in 2013.
In 2014, Scalabrine wrote an essay to Boston to announce his "homecoming" to become a Comcast SportsNet announcer. His essay spoofed a famous Sports Illustrated story about LeBron James's return to Cleveland.
Personal life
Scalabrine married Kristen Couch in 2003; their wedding ceremony was held in Hawaii. They have two children. He is also a member of the sports philanthropy organization Athletes for Hope.
Fan support
Despite his limited playing time, Scalabrine became a popular player. Bulls fans referred to him as "The White Mamba", a play on Kobe Bryant's nickname of "The Black Mamba".
NBA career statistics
Regular season
GP = games played; GS = games started; MPG = minutes per game; FG% = field goal percentage; 3P% = three-point percentage; FT% = free throw percentage; RPG = rebounds, APG = assists, SPG = steals, BPG = blocks, and PPG = points per game.
Season | Team | GP | GS | MPG | FG% | 3P% | FT% | RPG | APG | SPG | BPG | PPG
2001–02 | New Jersey | 28 | 0 | 10.4 | .343 | .300 | .733 | 1.8 | .8 | .3 | .1 | 2.1
2002–03 | New Jersey | 59 | 7 | 12.3 | .402 | .359 | .833 | 2.4 | .8 | .3 | .3 | 3.1
2003–04 | New Jersey | 69 | 2 | 13.4 | .394 | .244 | .829 | 2.5 | .9 | .3 | .2 | 3.5
2004–05 | New Jersey | 54 | 14 | 21.6 | .398 | .324 | .768 | 4.5 | 1.6 | .6 | .3 | 6.3
2005–06 | Boston | 71 | 1 | 13.2 | .383 | .356 | .722 | 1.6 | .7 | .3 | .3 | 2.9
2006–07 | Boston | 54 | 17 | 19.0 | .403 | .400 | .783 | 1.9 | 1.1 | .4 | .3 | 4.0
2007–08† | Boston | 48 | 9 | 10.7 | .389 | .326 | .750 | 1.6 | .8 | .2 | .2 | 1.8
2008–09 | Boston | 39 | 8 | 12.9 | .421 | .393 | .889 | 1.3 | .5 | .2 | .3 | 3.5
2009–10 | Boston | 52 | 3 | 9.1 | .341 | .327 | .667 | .9 | .5 | .2 | .1 | 1.5
2010–11 | Chicago | 18 | 0 | 4.9 | .526 | .000 | .000 | .4 | .3 | .2 | .2 | 1.1
2011–12 | Chicago | 28 | 0 | 4.4 | .467 | .143 | .500 | .8 | .5 | .2 | .2 | 1.1
Career | | 520 | 61 | 13.0 | .390 | .344 | .783 | 2.0 | .8 | .3 | .2 | 3.1
† Denotes season in which Scalabrine won an NBA championship.
Playoffs
Year | Team | GP | GS | MPG | FG% | 3P% | FT% | RPG | APG | SPG | BPG | PPG
2002 | New Jersey | 6 | 0 | 2.3 | .333 | .000 | .000 | .5 | .0 | .0 | .2 | .3
2003 | New Jersey | 7 | 0 | 2.9 | .500 | .000 | .000 | .6 | .0 | .0 | .0 | .6
2004 | New Jersey | 9 | 0 | 8.1 | .647 | .833 | .500 | 1.3 | .1 | .3 | .0 | 3.3
2005 | New Jersey | 4 | 3 | 15.3 | .182 | .250 | 1.000 | 1.8 | .5 | .3 | .5 | 2.3
2009 | Boston | 12 | 0 | 20.5 | .423 | .448 | 1.000 | 2.2 | 1.0 | .2 | .4 | 5.1
2010 | Boston | 1 | 0 | 1.0 | .000 | .000 | .000 | .0 | .0 | .0 | .0 | .0
Career | | 39 | 3 | 10.6 | .437 | .463 | .786 | 1.3 | .4 | .2 | .2 | 2.7
See also
References
External links
USC Trojans bio page
1978 births
Living people
American expatriate basketball people in Italy
American people of Italian descent
American men's basketball players
Basketball players from Long Beach, California
Basketball players from Washington (state)
Big3 players
Boston Celtics players
Chicago Bulls players
Golden State Warriors assistant coaches
Highline College alumni
Junior college men's basketball players in the United States
National Basketball Association broadcasters
New Jersey Nets draft picks
New Jersey Nets players
Pallacanestro Treviso players
People from Enumclaw, Washington
Power forwards (basketball)
Small forwards
Sportspeople from King County, Washington
Sportspeople from Long Beach, California
USC Trojans men's basketball players
American men's 3x3 basketball players
|
69689190
|
https://en.wikipedia.org/wiki/2016%E2%80%932021%20literary%20phishing%20thefts
|
2016–2021 literary phishing thefts
|
Between 2016 and 2021, multiple prepublication manuscripts were stolen via a phishing scheme that investigators believed was conducted by an industry insider or insiders. In 2022 the FBI arrested Filippo Bernardini, a 29-year-old Italian citizen living in London and working for Simon & Schuster.
Background
Piracy in the publishing industry can have a negative impact on profits and royalties, and some industry professionals take extreme precautions with highly anticipated releases. Translators for some books in The Da Vinci Code series were reported by Vulture to have been "required to work in a basement with security guards clocking trips to the bathroom".
Phishing attempts
In 2016, individuals involved in the publishing industry as authors, editors, agents, and publishers reported successful attempts to trick authors into emailing unpublished manuscripts to addresses impersonating publishing professionals known to those authors. The attempts were made by emailing from a domain name that resembled a legitimate one; the domain names were created using "common phishing techniques", such as using the letters "rn" to mimic the look of the letter "m" in an organizational name such as Macmillan, spelling it instead Macrnillan. The emails ostensibly came from other publishing industry professionals who worked closely with the target on the manuscript in question. In 2020 a cybersecurity firm found that the thief or thieves had registered over 300 domain names, and that the thief's own security measures were amateurish. Some of the domains may have been paid for with stolen credit cards, according to Vulture.
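The "rn"-for-"m" substitution lends itself to simple automated screening. The following Python sketch is a minimal illustration of the idea, not any tool used by investigators: it flags a domain when the domain is not itself legitimate but collapses to a legitimate one once the digraph is normalized. The list of legitimate domains is an assumption for the example.

LEGITIMATE_DOMAINS = {"macmillan.com"}  # illustrative list, assumed for this example

def normalize(domain):
    # Collapse the "rn" digraph to "m", so "macrnillan.com"
    # normalizes to the same string as "macmillan.com".
    return domain.lower().replace("rn", "m")

def is_lookalike(domain):
    # A domain is suspect if it is not itself on the legitimate list
    # but normalizes to the same string as a legitimate domain.
    if domain.lower() in LEGITIMATE_DOMAINS:
        return False
    return normalize(domain) in {normalize(d) for d in LEGITIMATE_DOMAINS}

print(is_lookalike("macrnillan.com"))  # True: "rn" mimics "m"
print(is_lookalike("macmillan.com"))   # False: the genuine domain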
Many of the phishing attempts involved approaching multiple people involved in a particular book's release; in the case of The Man Who Chased His Shadow, the phisher, impersonating the book's Italian translator, emailed the book's publisher and the author's agent within minutes of each other.
The person or persons doing the phishing demonstrated familiarity with the industry and used jargon common within it. In the case of The Man Who Chased His Shadow, an industry insider estimated that the number of people worldwide who knew the necessary details to know whom to impersonate and whom to approach was "only a few dozen." The emails themselves seemed believable; one failed attempt on a William Morris Agency employee raised suspicion only because "her boss would never write 'please' or 'thank you'". An Israeli publisher became suspicious because the request came in Hebrew, which he does not use for work emails. One literary agent found the emails so convincing that they sent multiple manuscripts to the phisher over the course of seven months.
In 2018 the Association of Authors' Representatives warned its members about the phishing scams.
During the coronavirus pandemic, the phishers became "more vicious", according to Vulture, telling one editor who thwarted a phishing attempt, "I hope you die of the Coronavirus." They also started hiring translators to read and report on books they had stolen, then disappearing when payment was due. The thief also began impersonating the contacts of a journalist who was working on a story about the scam, and engaged in other online stalking of the journalist and a colleague. In the summer of 2020 they also began impersonating industry professionals in Hollywood.
Motives
Motives for the phishing attacks were unclear. None of the manuscripts were subsequently sold on the black market or dark web, and no ransoms were demanded. Speculation as to motive included talent scouts or others in the industry or in Hollywood seeking early access to anticipated releases, impatient readers wanting the book solely for their own use, or "pleasure in the act itself". One IT professional speculated that portions of a highly anticipated book might be used to convince readers to enter credit card information online. One agent wondered if the motive could be to sell security software to those who had been targeted. Hackers speculated that the attempts could be a low-risk training program for teaching hacking techniques.
After the arrest, the New York Times wrote, "Early knowledge in a rights department could be an advantage for an employee trying to prove his worth. Publishers compete and bid to publish work abroad, for example, and knowing what’s coming, who is buying what and how much they’re paying could give companies an edge." Other industry professionals were still puzzled, saying that early access to unpublished manuscripts would be of little benefit to a low-level foreign rights specialist like Bernardini.
Fallout
As news of the ongoing scam emails spread in the industry, many publishers increased their security measures to include even very obscure titles.
The attacks surrounding Margaret Atwood's The Testaments were so determined and concerning that her agency delayed sharing the final manuscript with multiple publishers, which delayed the book's global release.
Targets
Thefts or attempts were reported by representatives of Anthony Doerr, Jennifer Egan, Laila Lalami, Taffy Brodesser-Akner, Kevin Kwan, Joshua Ferris, Eka Kurniawan, Sally Rooney, Margaret Atwood, Hanna Bervoets, Ethan Hawke, Ian McEwan, Bong Joon Ho, Michael J. Fox, and Kiley Reid, as well as unknown debut authors. In September 2020 a manuscript was stolen from a Pulitzer Prize-winning author who, according to Forbes, has not been publicly identified. Agencies and publishers in Taipei, Istanbul, Barcelona, Sweden and Israel were targeted. Vulture reported that, as of 2020, at least 200 companies in 30 countries had been targeted or impersonated.
Arrest and charges
The FBI arrested Filippo Bernardini, a 29-year-old Italian citizen living in London, upon his landing at John F. Kennedy International Airport on January 5, 2022. He was charged with federal counts of wire fraud and aggravated identity theft. The Washington Post reported that Bernardini's LinkedIn profile listed London's Simon & Schuster as his employer. Forbes reported he described himself in his profile as a "foreign rights management professional and a translator". The company released a statement saying it was "shocked and horrified to learn today of the allegations of fraud and identity theft by an employee".
Prosecutors with the US Department of Justice alleged that Bernardini had registered "more than 160" domain names similar to those used by legitimate publishers, literary agents, talent scouts, and other industry professionals, and had used them to send emails impersonating editors, agents, scouts, and other industry insiders in order to convince authors to send him prepublication manuscripts. Prosecutors also alleged Bernardini had stolen emails and passwords from industry employees. Combined, the charges of wire fraud and aggravated identity theft are punishable in the US by up to 22 years in prison.
Bernardini pleaded not guilty and was released on condition that he surrender his passport, submit to electronic monitoring, and post bail of US$300,000.
References
2010s crimes
2020s crimes
Fraud
Identity theft incidents
Publishing
|
20556839
|
https://en.wikipedia.org/wiki/Coalition%20for%20Networked%20Information
|
Coalition for Networked Information
|
The Coalition for Networked Information (CNI) is an organization whose mission is to promote networked information technology as a way to further the advancement of intellectual collaboration and productivity.
Overview
The Coalition for Networked Information (CNI), a joint initiative of the Association of Research Libraries (ARL) and EDUCAUSE, promotes the use of digital information technology to advance scholarship and education. In establishing the Coalition under the leadership of founding Executive Director Paul Evan Peters, these sponsor organizations sought to broaden the community’s thinking beyond issues of network connectivity and bandwidth to encompass digital content and advanced applications to create, share, disseminate, and analyze such content in the service of research and education. CNI works on a broad array of issues related to the development and use of digital information in the research and education communities.
CNI fosters connections and collaboration between library and information technology communities, representing the interests of a wide range of member organizations from higher education, publishing, networking and telecommunications, information technology, government agencies, foundations, museums, libraries, and library organizations. Based in Washington, DC, CNI holds semi-annual membership meetings that serve as a bellwether for digital information issues and projects. CNI also hosts invitational conferences, co-sponsors related meetings and conferences, issues reports, advises government agencies and funders, and supports a variety of networked information initiatives.
History
In 1990, the Association of Research Libraries (ARL), Educom, and CAUSE joined together to form CNI as a collaborative project focused on high-speed networking that would integrate the interests of academic and research libraries (ARL) and of computing in higher education (Educom and CAUSE). Educom and CAUSE consolidated their organizations in 1998 to form EDUCAUSE, which is now one half of the partnership that oversees CNI. Structurally, CNI is a program of its founding associations, with administrative oversight provided by ARL; it is not a legally separate entity. CNI’s oversight is provided by the boards and CEOs of the founding organizations, and a steering committee guides its program.
Paul Evan Peters was the founding executive director; Joan K. Lippincott also joined CNI as the associate executive director at that time. In 1997, Clifford Lynch assumed the role of executive director, and continues to serve in that capacity as of 2020; Lippincott retired from the organization in December 2019. CNI’s program has included projects in the areas of architectures and standards for networked information, scholarly communication, economics of networked information, Internet technology and infrastructure, teaching and learning, institutional and professional implications of the networked environment, and government information on the Internet.
References
External links
Coalition for Networked Information website
Organizations established in 1990
Educational organizations based in the United States
Information technology organizations based in North America
Library-related organizations
Scholarly communication
|
17020858
|
https://en.wikipedia.org/wiki/2008%20IAFL%20season
|
2008 IAFL season
|
The 2008 Irish American Football League (IAFL) season was the 22nd season since the league's establishment. The defending champions were the UL Vikings. The first games were played on 30 March (after the 16 March game between DCU Saints and UL Vikings was postponed due to a waterlogged pitch). The Shamrock Bowl was scheduled for 10 August in Cork Institute of Technology's brand-new stadium, and was the first major national event played there.
The 2008 season also saw the first season of the new Development League, dubbed DV8, an eight-a-side league dedicated to helping rookie players and new teams gain experience before joining the league proper.
Results
Week 1
Carrickfergus Knights 18-7 DCU Saints
Dublin Rebels 16-0 Belfast Trojans
Belfast Bulls 0-66 Cork Admirals
Week 2
Dublin Rhinos 0-25 Carrickfergus Knights
Belfast Trojans 0-22 UL Vikings
Tallaght Outlaws 0-98 Cork Admirals
Week 3
Dublin Rebels 64-0 Dublin Rhinos
UL Vikings 22-6 Belfast Bulls
DCU Saints 44-0 Tallaght Outlaws
Week 4
Belfast Bulls 8-24 Belfast Trojans
Cork Admirals 6-0 Dublin Rebels
Week 5
Dublin Rhinos 3-32 DCU Saints
Belfast Trojans 42-0 Carrickfergus Knights
Tallaght Outlaws 0-36 UL Vikings
Week 6
DCU Saints 0-25 Dublin Rebels
Cork Admirals 3-20 UL Vikings
Carrickfergus Knights 14-30 Belfast Bulls
Week 7
Tallaght Outlaws 0-34 Dublin Rhinos
Belfast Trojans 30-14 Belfast Bulls
Week 8
Cork Admirals 60-0 Tallaght Outlaws
Dublin Rhinos 0-52 Dublin Rebels
Carrickfergus Knights 19-44 Belfast Trojans
Week 9
UL Vikings 34-25 Cork Admirals
Dublin Rebels 39-0 DCU Saints
Belfast Bulls 26-14 Carrickfergus Knights
Week 10
UL Vikings 92-0 Tallaght Outlaws
Week 11
DCU Saints 32-6 Dublin Rhinos
Week 12
Cork Admirals 44-32 Carrickfergus Knights
Belfast Bulls 0-72 Dublin Rebels
Belfast Trojans 56-0 Dublin Rhinos
Week 13
Tallaght Outlaws 0-30 Belfast Trojans
Dublin Rebels 32-14 UL Vikings
Week 14
Dublin Rhinos 6-34 Cork Admirals
Carrickfergus Knights 30-0 Tallaght Outlaws (Note: Tallaght forfeited the game and subsequently retired from the league)
DCU Saints 6-0 Belfast Bulls
Week 15
UL Vikings 47-12 DCU Saints
Wildcard Weekend
DCU Saints 2-34 Cork Admirals
Playoffs
Semi-finals
Belfast Trojans 8 - 52 UL Vikings
Cork Admirals 12 - 19 Dublin Rebels
Shamrock Bowl XXII
10 August, CIT Stadium, Cork
Dublin Rebels (H) 12 - 14 UL Vikings (A)
DV8's League
League Positions
Note: W = Wins, L = Losses, T = Ties
DV8's League Table
In the DV8's League, a win is worth four points, scoring three touchdowns is worth one extra point, and losing by fewer than seven points is worth one extra point. For example, a team could lose 24-30 and still gain two points, for scoring three touchdowns and losing by fewer than seven points, as the sketch below illustrates.
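The following Python sketch computes a team's league points under these rules. It is a minimal illustration: the reading that three or more touchdowns (rather than exactly three) earn the bonus, and the handling of ties (not specified above), are assumptions.

def dv8_points(points_for, points_against, touchdowns):
    # Win bonus: four points for a win (the rules above do not say
    # what a tie is worth, so ties are assumed to earn nothing here).
    points = 4 if points_for > points_against else 0
    # Touchdown bonus: one extra point for scoring three touchdowns
    # (assumed here to mean three or more).
    if touchdowns >= 3:
        points += 1
    # Narrow-loss bonus: one extra point for losing by fewer than seven.
    if points_for < points_against and points_against - points_for < 7:
        points += 1
    return points

# The worked example above: a 24-30 loss with three touchdowns
# earns 2 points (touchdown bonus plus narrow-loss bonus).
print(dv8_points(24, 30, touchdowns=3))  # prints 2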
DV8 Results
9 March – Edenderry Soldiers 6-67 Craigavon Cowboys
20 March – Edenderry Soldiers 0-22 Cork Admirals 2nd
5 April – TCD Thunderbolts 12-45 Craigavon Cowboys
6 April – Dublin Dragons 0-24 Cork Admirals 2nd
19 April – Craigavon Cowboys 52-8 TCD Thunderbolts
19 April – Edenderry Soldiers 6-34 Dublin Dragons
20 April – Cork Admirals 2nd 6-12 Dublin Rebels 2nd
28 April – Dublin Rebels 2nd 65-0 Edenderry Soldiers
4 May – TCD Thunderbolts 8-39 Dublin Rebels 2nd
11 May – Dublin Dragons 20-51 TCD Thunderbolts
18 May – Cork Admirals 2nd 14-0 TCD Thunderbolts
25 May – Dublin Rebels 2nd 22-0 TCD Thunderbolts
25 May – Dublin Dragons 60-20 Edenderry Soldiers
1 June – Craigavon Cowboys 51-13 Dublin Dragons
8 June – Dublin Rebels 2nd 25-0 Craigavon Cowboys
14 June – Cork Admirals 2nd 45-13 Dublin Dragons
14 June – Craigavon Cowboys 13-6 Dublin Rebels 2nd
DV8's Playoffs
Wildcard Match
Craigavon Cowboys 0-34 Dublin Rebels
Final
Dublin Rebels @ Cork Admirals
External links
IAFL official website
Irish American Football League
IAFL Season, 2008
2008 in Irish sport
|
8980330
|
https://en.wikipedia.org/wiki/WALL-E
|
WALL-E
|
WALL-E (stylized with an interpunct as WALL·E) is a 2008 American computer-animated science fiction film produced by Pixar Animation Studios and released by Walt Disney Pictures. It was directed and co-written by Andrew Stanton, produced by Jim Morris, and co-written by Jim Reardon. It stars the voices of Ben Burtt, Elissa Knight, Jeff Garlin, John Ratzenberger, Kathy Najimy and Sigourney Weaver, with Fred Willard in the film's (and Pixar's) only prominent live-action role. The ninth feature film produced by the company, WALL-E follows a solitary robot left to clean up garbage on a future, uninhabitable, deserted Earth in 2805. He is visited by EVE, a probe sent by the starship Axiom, with whom he falls in love and whom he pursues across the galaxy.
After directing Finding Nemo, Stanton felt Pixar had created believable simulations of underwater physics and was willing to direct a film set largely in space. WALL-E has minimal dialogue in its early sequences; many of the characters do not have voices, but instead communicate with body language and robotic sounds designed by Burtt. The film incorporates various topics including consumerism, corporatocracy, nostalgia, waste management, human environmental impact and concerns, obesity, and global catastrophic risk. It is also Pixar's first animated film with segments featuring live-action characters. Following Pixar tradition, WALL-E was paired with a short film titled Presto for its theatrical release.
WALL-E was released in the United States on June 27, 2008. The film received critical acclaim for its animation, story, voice acting, characters, visuals, score, use of minimal dialogue, and scenes of romance. It was also commercially successful, grossing $521.3 million worldwide over a $180 million budget. It won the 2008 Golden Globe Award for Best Animated Feature Film, the 2009 Hugo Award for Best Long Form Dramatic Presentation, the final Nebula Award for Best Script, the Saturn Award for Best Animated Film and the Academy Award for Best Animated Feature with five nominations. It is considered by many critics as the best film of 2008, and to be among the best animated films ever made. The film topped Time's list of the "Best Movies of the Decade", and in 2016 was voted 29th among 100 films considered the best of the 21st century by 117 film critics from around the world.
In 2021, the film was selected for preservation in the United States National Film Registry by the Library of Congress as being "culturally, historically, or aesthetically significant".
Plot
In the 29th century, Earth has become a garbage-strewn wasteland due to rampant consumerism and corporate greed; seven centuries earlier, the megacorporation Buy-n-Large (BnL) evacuated humanity to space on giant starliners. Of all the trash compacting robots left by BnL to clean up, only one remains operational: Waste Allocation Load-Lifter: Earth-Class (WALL-E). One day, WALL-E's routine of compressing trash and collecting interesting objects is disrupted by the arrival of an unmanned probe carrying an egg-shaped robot named Extraterrestrial Vegetation Evaluator (EVE), who has been sent to scan the planet for signs of sustainable life. WALL-E is smitten by the sleek, otherworldly robot, and the two begin to connect, until EVE goes into standby when WALL-E shows her his most recent find: a living seedling. Sometime later, the probe returns to collect EVE and the plant, and, with WALL-E clinging on, returns to its mothership, the starliner Axiom.
In the centuries since the Axiom left Earth, its passengers have degenerated into corpulence due to laziness and microgravity, their every whim catered to by machinery. The captain, B. McCrea, is used to sitting back while his robot steering wheel AUTO pilots the ship. McCrea is unprepared to receive the positive probe response, but learns via a pre-recorded message from BnL that placing the plant in the ship's Holo-Detector will trigger a hyperjump back to Earth so humanity can begin recolonization. When McCrea inspects EVE's storage compartment, however, the plant is missing, and EVE blames WALL-E for its disappearance.
EVE is deemed faulty and taken to Diagnostics. Mistaking the process for torture, WALL-E intervenes and inadvertently releases a group of malfunctioning reject-bots, causing him and EVE to be designated as rogues. Frustrated, EVE tries to send WALL-E home in an escape pod, but before she can do so, they witness AUTO's gopherbot GO-4 stowing the plant in a pod set to self-destruct, revealing that WALL-E did not steal the plant. WALL-E attempts to retrieve it, but is launched into space in the pod. EVE uses an emergency exit to chase after WALL-E, and witnesses the pod explode, although both he and the plant survive unscathed. He and EVE reconcile, celebrating with a dance in space around the Axiom.
EVE brings the plant back to McCrea, who has been watching her recordings of Earth and concludes that they can and must save it. However, AUTO refuses, explaining that he has been programmed with the secret directive A113, issued after BnL declared in 2110 that the planet could not be saved, which ordered him to take control of the ship and never return to Earth, and reveals that he ordered GO-4 to get rid of the plant. When McCrea countermands the directive, AUTO and GO-4 mutiny, electrocuting WALL-E's circuit board, putting EVE into standby, throwing them both down the garbage chute, and locking McCrea in his quarters. EVE and WALL-E are nearly ejected into space along with the ship's refuse, but cleaner robot Microbe Obliterator (M-O), who had been following WALL-E's dirt trail across the ship, saves the two by alerting the WALL-A bots and prompting them to abort the ejection. As humans and robots help in securing the plant, WALL-E, EVE, M-O and the reject-bots head to the Holo-Detector, while McCrea and AUTO fight for control of the ship. WALL-E sacrifices himself by allowing himself to be crushed by the Holo-Detector, jamming it open and buying EVE time to successfully insert the plant, which initiates the hyperjump. McCrea eventually overpowers and deactivates AUTO by switching him to Manual Mode.
Arriving back on Earth, EVE repairs WALL-E, but finds that his memory and personality have been erased. Heartbroken, EVE gives WALL-E a goodbye "kiss", which restores him back to his normal self. WALL-E and EVE reunite with M-O and the reject-bots as the inhabitants of the Axiom take their first steps on Earth. During the credits, humans and robots turn the ravaged planet into a paradise, and the plant is shown to have grown into a mighty tree, which EVE and WALL-E rest beneath.
Cast
Ben Burtt as WALL-E (Waste Allocation Load Lifter: Earth-Class), the title character. As a compactor robot who has achieved sentience, he is the only one of his kind shown to be still functioning on Earth. He is a small mobile compactor box with all-terrain treads, three-fingered shovel hands, binocular eyes and retractable solar cells for power. Although working diligently to fulfill his directive to clean up the garbage (all the while accompanied by his cockroach friend named Hal and music playing from his on-board recorder), he is distracted by his curiosity, collecting trinkets of interest from amongst the garbage. He stores and displays these "treasures" such as a birdcage full of rubber ducks, a Rubik's Cube, Zippos, disposable cups filled with plastic cutlery and a golden trophy at his home where he examines and categorizes his finds. He ignores items valued by humans, throwing away a diamond ring but keeping the ring box. He neatly organizes his finds at the home he has created, where he watches a video cassette of Hello, Dolly! via an iPod viewed through a large Fresnel lens, mimicking the dance sequences.
Burtt also voices M-O (Microbe-Obliterator), a tiny, obsessive cleaning robot with a brush on his arms. He spends the majority of his appearances cleaning up the dirt trail WALL-E leaves behind.
Elissa Knight as EVE (Extraterrestrial Vegetation Evaluator: Earth-Class; which WALL-E and M-O pronounce Eva), a sleek probe robot whose directive is to verify that planets can have human habitability; she provides plant life as evidence specimens. She has a glossy white egg-shaped body and blue LED eyes. She moves using antigravity technology and is equipped with scanners, specimen storage and a "quasar ion cannon" in her arm, which she is quick to use.
Jeff Garlin as Captain B. McCrea, the commanding officer—though a mere figurehead—of the Axiom. He is credited simply as "Captain" and his name is only seen on a wall depicting portraits of all the ship's captains.
Fred Willard as Shelby Forthright, the CEO of the Buy-n-Large Corporation and President of Earth, and the only major live-action character, shown only in videos recorded around the time of the Axiom's initial launch in the early 22nd century. Constantly optimistic, Forthright proposed the plan to evacuate Earth's population to space, then clean up the planet so they could return within five years. However, upon discovering that Earth had become too toxic to support life, BnL abandoned the cleanup and recolonization, with Forthright issuing directive A113 to all starliner autopilots, preventing anyone from returning to Earth; the corporation subsequently dissolved. Forthright is the first (and so far the only) live-action character with a speaking role in any Pixar film.
MacInTalk, the text-to-speech program for Apple Macintosh computers, was used for the voice of AUTO (pronounced Otto), the artificial intelligence that is the Axiom's robotic steering wheel and autopilot and handles all true command functions of the ship. AUTO is loyal only to directive A113, to the point of preventing even the captain from deviating from it; he is consequently the only computer not influenced by WALL-E.
John Ratzenberger and Kathy Najimy as John and Mary, respectively. John and Mary both live on the Axiom and are so dependent on their personal video screens and automatic services that they are oblivious to their surroundings, most notably not noticing that the ship features a giant swimming pool. However, they are brought out of their trances after separate encounters with WALL-E, eventually meeting face-to-face for the first time as they observe him and EVE "dancing in space".
Sigourney Weaver as the Axiom's computer. Stanton joked about the role with Weaver, saying, "You realize you get to be 'Mother' now?", referring to the name of the ship's computer in the film Alien, which also starred Weaver.
Production
Writing
Andrew Stanton conceived WALL-E during a lunch with fellow writers John Lasseter, Pete Docter, and Joe Ranft in 1994. Toy Story was near completion and the writers brainstormed ideas for their next projects — A Bug's Life, Monsters, Inc., and Finding Nemo—at this lunch. Stanton asked, "What if mankind had to leave Earth and somebody forgot to turn off the last robot?" Having struggled for many years with making the characters in Toy Story appealing, Stanton found his simple Robinson Crusoe-esque idea of a lonely robot on a deserted planet strong. Stanton made WALL-E a waste collector as the idea was instantly understandable, and because it was a low-status menial job that made him sympathetic. Stanton also liked the imagery of stacked cubes of garbage. He did not find the idea dark because having a planet covered in garbage was for him a childish imagining of disaster.
Stanton and Docter developed the film under the title of Trash Planet for two months in 1995, but they did not know how to develop the story, and Docter chose to direct Monsters, Inc. instead. Stanton came up with the idea of WALL-E finding a plant because the robot's life as the sole inhabitant of a deserted world reminded him of a plant growing among pavements. Before they turned their attention to other projects, Stanton and Lasseter thought about having WALL-E fall in love, as it was the necessary progression away from loneliness. Stanton started writing WALL-E again in 2002 while completing Finding Nemo. He formatted his script in a manner reminiscent of Dan O'Bannon's Alien, which O'Bannon wrote in a style that reminded Stanton of haiku, with visual descriptions done in continuous lines of a few words. Stanton wrote his robot "dialogue" conventionally, but placed it in brackets. In late 2003, Stanton and a few others created a story reel of the first twenty minutes of the film. Lasseter and Steve Jobs were impressed and officially began development, though Jobs stated he did not like the title, originally spelled "W.A.L.-E."
While the first act of WALL-E "fell out of the sky" for Stanton, the rest of the film was originally different: he had wanted aliens to plant EVE to explore Earth, and when WALL-E came to the Axiom, he incited a Spartacus-style rebellion by the robots against the remnants of the human race, which were cruel alien Gels (completely devolved, gelatinous, boneless, legless, see-through, green creatures that resemble Jell-O). James Hicks, a physiologist, had mentioned to Stanton the concept of atrophy and the effects prolonged weightlessness would have on humans living in space for an inordinately long time; this inspired the idea of the humans degenerating into the alien Gels, whose ancestry would have been revealed in a Planet of the Apes-style ending. The Gels also spoke a made-up gibberish language, but Stanton scrapped this idea because he thought it would be too complicated for the audience to understand and could easily distract them from the storyline. The Gels had a royal family, who hosted a dance in a castle on a lake in the back of the ship, and the Axiom curled up into a ball when returning to Earth in this incarnation of the story. Stanton decided this was too bizarre and unengaging, and conceived humanity as "big babies". He developed the metaphorical theme of the humans learning to stand again and "grow[ing] up", wanting WALL-E and EVE's relationship to inspire humanity, because he felt few films explore how utopian societies come to exist. The process of arriving at the depiction of humanity's descendants seen in the finished film was slow. Stanton first decided to put a nose and ears on the Gels so the audience could recognize them. Eventually, fingers, legs, clothes, and other characteristics were added until the filmmakers arrived at the fetus-like concept, which allowed the audience to see themselves in the characters.
In a later version of the film, Auto came to the docking bay to retrieve EVE's plant. The film would have had its first cutaway to the captain at this point, but Stanton moved it, finding it too early to begin shifting away from WALL-E's point of view. As an homage to Get Smart, Auto took the plant into the bowels of the ship, into a room resembling a brain, where he watched videos of Buy n Large's scheme to clean up the Earth falling apart over the years. Stanton removed this to keep some mystery as to why the plant is taken from EVE. The captain appears to be unintelligent, but Stanton wanted him to be merely unchallenged; otherwise he would not have been sympathetic. One sign of how unintelligent the captain was initially depicted is that he wore his hat upside-down, only fixing it before challenging Auto. In the finished film, he merely wears it casually atop his head, tightening it when he truly takes command of the Axiom.
Stanton also moved the moment where WALL-E reveals his plant (which he had snatched from the self-destructing escape pod), from producing it from a closet to immediately after his escape, as it made EVE happier and gave the pair stronger motivation to dance around the ship. Originally, EVE would have been electrocuted by Auto and then quickly saved by WALL-E from ejection at the hands of the Waste Allocation Load Lifter: Axiom-class (WALL-A) robots. He would then have revived her by replacing her power unit with a cigarette lighter he had brought from Earth. Stanton reversed this following a 2007 test screening, as he wanted to show EVE replacing her directive of bringing the plant to the captain with repairing WALL-E, and it made WALL-E even more heroic if he held the holo-detector open despite being badly hurt. Stanton felt half the audience at the screening believed the humans would be unable to cope with living on Earth and would have died out after the film's end. Jim Capobianco, director of the Ratatouille short film Your Friend the Rat, created an end credits animation that continued the story—stylized in different artistic movements throughout history—to clarify the optimistic tone.
Design
WALL-E was the most complex Pixar production since Monsters, Inc. because of the world and the history that had to be conveyed. Whereas most Pixar films have up to 75,000 storyboards, WALL-E required 125,000. Production designer Ralph Eggleston wanted the lighting of the first act on Earth to be romantic, and that of the second act on the Axiom to be cold and sterile. During the third act, the romantic lighting is slowly introduced into the Axiom environment. Pixar studied Chernobyl and the city of Sofia to create the ruined world; art director Anthony Christov was from Bulgaria and recalled Sofia used to have problems storing its garbage. Eggleston bleached out the whites on Earth to make WALL-E feel vulnerable. The overexposed light makes the location look more vast. Because of the haziness, the cubes making up the towers of garbage had to be large, otherwise they would have lost shape (in turn, this helped save rendering time). The dull tans of Earth subtly become soft pinks and blues when EVE arrives. When WALL-E shows EVE all his collected items, all the lights he has collected light up to give an inviting atmosphere, like a Christmas tree. Eggleston tried to avoid the colors yellow and green so WALL-E—who was made yellow to emulate a tractor—would not blend into the deserted Earth, and to make the plant more prominent.
Stanton also wanted the lighting to look realistic and evoke the science fiction films of his youth. He thought that Pixar captured the physics of being underwater with Finding Nemo and so for WALL-E, he wanted to push that for air. While rewatching some of his favorite science fiction films, he realized that Pixar's other movies had lacked the look of 70 mm film and its barrel distortion, lens flare, and racking focus. Producer Jim Morris invited Roger Deakins and Dennis Muren to advise on lighting and atmosphere. Muren spent several months with Pixar, while Deakins hosted one talk and was requested to stay on for another two weeks. Stanton said Muren's experience came from integrating computer animation into live-action settings, while Deakins helped them understand not to overly complicate their camerawork and lighting. 1970s Panavision cameras were used to help the animators understand and replicate handheld imperfections like unfocused backgrounds in digital environments. The first lighting test included building a three-dimensional replica of WALL-E, filming it with a 70 mm camera, and then trying to replicate that in the computer. Stanton cited the shallow lens work of Gus Van Sant's films as an influence, as it created intimacy in each close-up. Stanton chose angles for the virtual cameras that a live-action filmmaker would choose if filming on a set.
Stanton wanted the Axiom's interior to resemble Shanghai and Dubai. Eggleston studied 1960s NASA paintings and the original concept art for Tomorrowland for the Axiom, to reflect that era's sense of optimism. Stanton remarked "We are all probably very similar in our backgrounds here [at Pixar] in that we all miss the Tomorrowland that was promised us from the heyday of Disneyland," and wanted a "jet pack" feel. Pixar also studied the Disney Cruise Line and visited Las Vegas, which was helpful in understanding artificial lighting. Eggleston based his Axiom designs on the futuristic architecture of Santiago Calatrava. He divided the inside of the ship into three sections: the rear's economy class has a basic gray concrete texture, with graphics keeping to the red, blue, and white of the BnL logo. The coach class, with living/shopping spaces, has "S" shapes, as people are always looking for "what's around the corner". Stanton intended to have many colorful signs, but he realized this would overwhelm the audience and went with Eggleston's original idea of a small number of larger signs. The premier class is a large Zen-like spa with colors limited to turquoise, cream, and tan, and leads on to the captain's warm carpeted and wooded quarters and the sleek dark bridge. In keeping with the artificial Axiom, camera movements were modeled after those of the steadicam.
The use of live action was a stepping stone for Pixar, as Stanton was planning to make John Carter of Mars his next project. Storyboarder Derek Thompson noted introducing live action meant that they would make the rest of the film look even more realistic. Eggleston added that if the historical humans had been animated and slightly caricaturized, the audience then would not have been able to recognize how serious their devolution was. Stanton cast Fred Willard as the historical Buy n Large CEO because "[h]e's the most friendly and insincere car salesman I could think of." The CEO says "stay the course", which Stanton used because he thought it was funny. Industrial Light & Magic did the visual effects for these shots.
Animation
WALL-E went undeveloped during the 1990s partly because Stanton and Pixar were not confident enough yet to have a feature-length film with a main character that behaved like Luxo Jr. or R2-D2. Stanton explained there are two types of robots in cinema: "human[s] with metal skin", like the Tin Man, or "machine[s] with function" like Luxo and R2. He found the latter idea "powerful" because it allowed the audience to project personalities onto the characters, as they do with babies and pets: "You're compelled ... you almost can't stop yourself from finishing the sentence 'Oh, I think it likes me! I think it's hungry! I think it wants to go for a walk!'" He added, "We wanted the audience to believe they were witnessing a machine that has come to life." The animators visited recycling stations to study machinery, and also met robot designers, visited NASA's Jet Propulsion Laboratory to study robots, watched a recording of a Mars rover, and borrowed a bomb detecting robot from the San Francisco Police Department. Simplicity was preferred in their performances as giving them too many movements would make them feel human.
Stanton wanted WALL-E to be a box and EVE to be like an egg. WALL-E's eyes were inspired by a pair of binoculars Stanton was given when watching the Oakland Athletics play against the Boston Red Sox. He "missed the entire inning" because he was distracted by them. The director was reminded of Buster Keaton and decided the robot would not need a nose or mouth. Stanton added a zoom lens to make WALL-E more sympathetic. Ralph Eggleston noted this feature gave the animators more to work with and gave the robot a childlike quality. Pixar's studies of trash compactors during their visits to recycling stations inspired his body. His tank treads were inspired by a wheelchair someone had developed that used treads instead of wheels. The animators wanted him to have elbows, but realized this was unrealistic because he is only designed to pull garbage into his body. His arms also looked flimsy when they did a test of him waving. Animation director Angus MacLane suggested they attach his arms to a track on the sides of his body to move them around, based on the inkjet printers his father designed. This arm design contributed to creating the character's posture, so if they wanted him to be nervous, they would lower them.
Stanton wanted EVE to be at the higher end of technology, and asked iPod designer Jonathan Ive to inspect her design; Ive was very impressed. Her eyes are modelled on Lite-Brite toys, but Pixar chose not to make them overly expressive, as it would have been too easy to have her eyes turn into hearts to express love or something similar. Her limited design meant the animators had to treat her like a drawing, relying on posing her body to express emotion. They also found her similar to a manatee or a narwhal, because her floating body resembled an underwater creature. Auto was a conscious homage to HAL 9000 from 2001: A Space Odyssey, and the use of Also sprach Zarathustra for the showdown between Captain McCrea and Auto furthers that. The manner in which he hangs from a wall or ceiling gives him a threatening feel, like a spider. Originally, Auto was designed entirely differently, resembling EVE but masculine and authoritative, and SECUR-T was a more aggressive patrol steward robot. The majority of the robot cast were formed with the Build-a-bot program, in which different heads, arms and treads were combined in over a hundred variations. The humans were modelled on sea lions, due to their blubbery bodies, as well as babies. The filmmakers noticed baby fat is a lot tighter than adult fat and copied that texture for the film's humans.
To animate their robots, the film's story crew and animation crew watched a Keaton and a Charlie Chaplin film every day for almost a year, and occasionally a Harold Lloyd picture. Afterwards, the filmmakers knew all emotions could be conveyed silently. Stanton cited Keaton's "great stone face" as giving them perseverance in animating a character with an unchanging expression. As he rewatched these, Stanton felt that filmmakers—since the advent of sound—relied on dialogue too much to convey exposition. The filmmakers dubbed the cockroach WALL-E keeps as a pet "Hal", in reference to silent film producer Hal Roach (as well as being an additional reference to HAL 9000). They also watched 2001: A Space Odyssey, The Black Stallion and Never Cry Wolf, films that had sound but were not reliant on dialogue. Stanton acknowledged Silent Running as an influence because its silent robots were a forerunner to the likes of R2-D2, and that the "hopeless romantic" Woody Allen also inspired WALL-E.
Sound
Producer Jim Morris recommended Ben Burtt as sound designer for WALL-E because Stanton kept using R2-D2 as the benchmark for the robots. Burtt had completed Star Wars: Episode III – Revenge of the Sith and told his wife he would no longer work on films with robots, but found WALL-E and its substitution of voices with sound "fresh and exciting". He recorded 2,500 sounds for the film, which was twice the average number for a Star Wars film, and a record in his career. Burtt began work in 2005, and experimented with filtering his voice for two years. Burtt described the robot voices as "like a toddler ... universal language of intonation. 'Oh', 'Hm?', 'Huh!', you know?"
During production Burtt had the opportunity to look at the items used by Jimmy MacDonald, Disney's in-house sound designer for many of their classic films. Burtt used many of MacDonald's items on WALL-E. Because Burtt was not simply adding sound effects in post-production, the animators were always evaluating his new creations and ideas, which Burtt found an unusual experience. He worked in sync with the animators, returning their animation after adding the sounds to give them more ideas. Burtt would choose scientifically accurate sounds for each character, but if he could not find one that worked, he would choose a dramatic and unrealistic noise. Burtt would find hundreds of sounds by looking at concept art of characters, before he and Stanton pared it down to a distinct few for each robot.
Burtt saw a hand-cranked electrical generator while watching Island in the Sky, and bought an identical, unpacked device from 1950 on eBay to use for WALL-E moving around. Burtt also used an automobile self-starter for when WALL-E goes fast, and the sound of cars being wrecked at a demolition derby provided for WALL-E's compressing trash in his body. The Macintosh computer chime was used to signify when WALL-E has fully recharged his battery. For EVE, Burtt wanted her humming to have a musical quality. Burtt was only able to provide neutral or masculine voices, so Pixar employee Elissa Knight was asked to provide her voice for Burtt to electronically modify. Stanton deemed the sound effect good enough to properly cast her in the role. Burtt recorded a flying radio-controlled jet plane for EVE's flying, and for her plasma cannon, Burtt hit a slinky hung from a ladder with a timpani stick. He described it as a "cousin" to the blaster noise from Star Wars.
MacInTalk was used because Stanton "wanted Auto to be the epitome of a robot, cold, zeros & ones, calculating, and soulless [and] Stephen Hawking's kind of voice I thought was perfect." Additional sounds for the character were meant to give him a clockwork feel, to show he is always thinking and calculating.
Burtt had visited Niagara Falls in 1987 and used his recordings from his trip for the sounds of wind, and ran around a hall with a canvas bag up to record the sandstorm. For the scene where WALL-E flees from falling shopping carts, Burtt and his daughter went to a supermarket and placed a recorder in their cart. They crashed it around the parking lot and then let it tumble down a hill. To create Hal (WALL-E's pet cockroach)'s skittering, he recorded the clicking caused by taking apart and reassembling handcuffs.
Music
Thomas Newman collaborated with Stanton again on WALL-E, since the two had gotten along well on Finding Nemo, which earned Newman the Annie Award for Best Music in an Animated Feature. He began writing the score in 2005, in the hope that starting the task early would make him more involved with the finished film, though Newman remarked that animation is so dependent on scheduling that he should have begun even earlier, when Stanton and Reardon were writing the script. EVE's theme was arranged for the first time in October 2007. Her theme as she first flies around Earth originally used more orchestral elements, and Newman was encouraged to make it sound more feminine. Newman said Stanton had thought up many ideas for how he wanted the music to sound, and he generally followed them, as he found scoring a partially silent film difficult. Stanton wanted the whole score to be orchestral, but Newman felt limited by this idea, especially in scenes aboard the Axiom, and used electronics too.
Stanton originally wanted to juxtapose the opening shots of space with 1930s French swing music, but he saw The Triplets of Belleville (2003) and did not want to appear as if he were copying it. Stanton then thought about the song "Put On Your Sunday Clothes" from Hello, Dolly!, since he had portrayed the sidekick Barnaby Tucker in a 1980 high school production. Stanton found that the song was about two naive young men looking for love, which was similar to WALL-E's own hope for companionship. Jim Reardon, storyboard supervisor for the film, suggested WALL-E find the film on video, and Stanton included "It Only Takes a Moment" and the clip of the actors holding hands, because he wanted a visual way to show how WALL-E understands love and conveys it to EVE. Hello, Dolly! composer Jerry Herman allowed the songs to be used without knowing what for; when he saw the film, he found its incorporation into the story "genius". Coincidentally, Newman's uncle Lionel had worked on Hello, Dolly!
Newman travelled to London to compose the end credits song "Down to Earth" with Peter Gabriel, who was one of Stanton's favorite musicians. Afterwards, Newman rescored some of the film to include the song's composition, so it would not sound intrusive when played. Louis Armstrong's rendition of "La Vie en rose" was used for a montage where WALL-E attempts to impress EVE on Earth. The script also specified using Bing Crosby's "Stardust" for when the two robots dance around the Axiom, but Newman asked if he could score the scene himself. A similar switch occurred for the sequence in which WALL-E attempts to wake EVE up through various means; originally, the montage would play with the instrumental version of "Raindrops Keep Fallin' on My Head", but Newman wanted to challenge himself and scored an original piece for the sequence.
Themes
The film is recognized as social criticism. Katherine Ellison asserts that "Americans produce nearly 400 million tons of solid waste per year but recycle less than a third of it, according to a recent Columbia University study." Landfills were filling up so quickly that some predicted the UK could run out of landfill space by 2017.
Environment, waste, and nostalgia
In the DVD commentary, Stanton said that he has been asked whether it was his intention to make a movie about consumerism. His answer was that it was not; it was a way to answer the question of how the Earth would get to a state where one robot would be left to continue the cleanup by itself. Nevertheless, some critics have noted an incongruity between the perceived pro-environmental and anti-consumerist messaging of the film and the environmental impacts of the production and merchandising of the film.
In "WALL-E: from environmental adaption to sentimental nostalgia," Robin Murray and Joseph Heumann explain the important theme of nostalgia in this film. Nostalgia is clearly represented by human artifacts, left behind, that WALL-E collects and cherishes, for example Zippo lighters, hubcaps, and plastic sporks. These modern items that are used out of necessity are made sentimental through the lens of the bleak future of Earth. Nostalgia is also expressed through the musical score, as the film opens with a camera shot of outer space that slowly zooms into a waste filled Earth while playing "Put on Your Sunday Clothes", reflecting on simpler and happier times in human history. This film also expresses nostalgia through the longing of nature and the natural world, as it is the sight and feeling of soil, and the plant brought back to the space ship by EVE, that make the captain decide it is time for humans to move back to Earth. WALL-E expresses nostalgia also, by reflecting on romantic themes of older Disney and silent films.
Stanton describes the theme of the film as "irrational love defeats life's programming":
I realized the point I was trying to push with these two programmed robots was the desire for them to try and figure out what the point of living was ... It took these really irrational acts of love to sort of discover them against how they were built ... I realized that that's a perfect metaphor for real life. We all fall into our habits, our routines and our ruts, consciously or unconsciously to avoid living. To avoid having to do the messy part. To avoid having relationships with other people or dealing with the person next to us. That's why we can all get on our cell phones and not have to deal with one another. I thought, 'That's a perfect amplification of the whole point of the movie.' I wanted to run with science in a way that would sort of logically project that.
Technology
Stanton noted many commentators placed emphasis on the environmental aspect of humanity's complacency in the film, because "that disconnection is going to be the cause, indirectly, of anything that happens in life that's bad for humanity or the planet". Stanton said that by taking away the effort of work, the robots also take away humanity's need to put effort into relationships. Christian journalist Rod Dreher saw technology as the complicated villain of the film. The humans' artificial lifestyle on the Axiom has separated them from nature, making them "slaves of both technology and their own base appetites, and have lost what makes them human". Dreher contrasted the hardworking, dirt-covered WALL-E with the sleek, clean robots on the ship. However, it is the humans, not the robots, who make themselves redundant: humans on the ship and on Earth have overused robots and ultra-modern technology. During the end credits, humans and robots are shown working alongside each other to renew the Earth. "WALL·E is not a Luddite film," he said. "It doesn't demonize technology. It only argues that technology is properly used to help humans cultivate their true nature—that it must be subordinate to human flourishing, and help move that along."
Religion
Stanton, who is a Christian, named EVE after the biblical figure because WALL-E's loneliness reminded him of Adam before God created his wife. Dreher noted EVE's biblical namesake and saw her directive as an inversion of that story: EVE uses the plant to tell humanity to return to Earth and move away from the "false god" of BnL and the lazy lifestyle it offers. Dreher also noted a departure from the classical Christian viewpoint, in which Adam is cursed with painful labor, in that WALL-E argues hard work is what makes humans human. Dreher emphasized the false-god parallels to BnL in a scene where a robot teaches infants "B is for Buy n Large, your very best friend", which he compared to modern corporations such as McDonald's creating brand loyalty in children. Megan Basham of World magazine felt the film criticizes the pursuit of leisure, with WALL-E in his stewardship learning to truly appreciate God's creation.
During writing, a Pixar employee remarked to Jim Reardon that EVE was reminiscent of the dove with the olive branch from the story of Noah's Ark, and the story was reworked so that EVE finds the plant that signals humanity's return from its voyage. WALL-E himself has been compared to Prometheus, Sisyphus, and Butades: in an essay discussing WALL-E as representative of Pixar's own artistic striving, Hrag Vartanian compared WALL-E to Butades in the scene where the robot expresses his love for EVE by making a sculpture of her from spare parts. "The Ancient Greek tradition associates the birth of art with a Corinthian maiden who longing to preserve her lover's shadow traces it on the wall before he departed for war. The myth reminds us that art was born out of longing and often means more for the creator than the muse. In the same way Stanton and his Pixar team have told us a deeply personal story about their love of cinema and their vision for animation through the prism of all types of relationships."
Release
WALL-E premiered at the Greek Theatre in Los Angeles on June 23, 2008. Continuing a Pixar tradition, the film was paired with a short film, Presto, for its theatrical release. The film is dedicated to Justin Wright (1981–2008), a Pixar animator who had worked on Ratatouille and died of a heart attack before WALL-E's release.
Walt Disney Imagineering (WDI) built animatronic WALL-E robots to promote the film; they made appearances at the Disneyland Resort, the Franklin Institute, the Miami Science Museum, the Seattle Center, and the Tokyo International Film Festival. Due to safety concerns, the 318 kg robots were strictly controlled, and WDI always needed to know exactly what they would be required to interact with. For this reason, the puppets were generally not allowed to meet and greet children at the theme parks, in case a WALL-E trod on a child's foot; those who wanted a photograph with the character had to make do with a cardboard cutout.
The film was denied a theatrical release in China.
In 2016, Jim Morris noted that the studio had no plans for a sequel, considering WALL-E a finished story with no need for continuation.
Merchandise
Small quantities of merchandise were produced for WALL-E, as Cars items were still popular and many manufacturers were more interested in Speed Racer, which proved a successful product line despite that film's failure at the box office. Thinkway, which created the WALL-E toys, had previously made Toy Story dolls when other toy producers had shown no interest. Among Thinkway's items were a WALL-E that danced when connected to a music player, a toy that could be taken apart and reassembled, and a groundbreaking remote-control WALL-E and EVE whose motion sensors allowed them to interact with players. Plush toys were also produced. The "Ultimate WALL-E" figures did not reach stores until the film's home release in November 2008, at a retail price of almost $200, leading The Patriot-News to deem the figure an item for "hard-core fans and collectors only". On February 4, 2015, Lego announced that a WALL-E model custom-built by lead animator Angus MacLane was the latest design approved for mass production and release as part of Lego Ideas.
Home media
The film was released on Blu-ray Disc and DVD by Walt Disney Studios Home Entertainment on November 18, 2008. Various editions include the short film Presto, another short film, BURN-E, the Leslie Iwerks documentary The Pixar Story, shorts about the history of Buy n Large, behind-the-scenes special features, and a digital copy of the film that can be played through iTunes or Windows Media Player-compatible devices. The release sold 9,042,054 DVD units ($142,633,974) in total, becoming the second-best-selling animated DVD released in 2008 in units sold (behind Kung Fu Panda), the best-selling animated feature in sales revenue, and the third-best-selling among all 2008 DVDs.
WALL-E was released on 4K Blu-ray on March 3, 2020.
Reception
Box office
WALL-E grossed $223.8 million in the United States and Canada and $297.5 million overseas, for a worldwide total of $521.3 million, making it the ninth-highest-grossing film of 2008.
In the US and Canada, WALL-E opened in 3,992 theaters on June 27, 2008. The film grossed $23.1 million on its opening day, the highest of all nine Pixar titles to date. During its opening weekend, it topped the box office with $63,087,526. This was the third-best opening weekend for a Pixar film, and the second-best opening weekend among films released in June. The film grossed $38 million the following weekend, losing its first place to Hancock. WALL-E crossed the $200 million mark by August 3, during its sixth weekend.
WALL-E grossed over $10 million in Japan ($44,005,222), UK, Ireland and Malta ($41,215,600), France and the Maghreb region ($27,984,103), Germany ($24,130,400), Mexico ($17,679,805), Spain ($14,973,097), Australia ($14,165,390), Italy ($12,210,993), and Russia and the CIS ($11,694,482).
Critical response
The American Film Institute named WALL-E as one of the best films of 2008; the jury rationale states:
WALL•E proves to this generation and beyond that the film medium's only true boundaries are the human imagination. Writer/director Andrew Stanton and his team have created a classic screen character from a metal trash compactor who rides to the rescue of a planet buried in the debris that embodies the broken promise of American life. Not since Chaplin's "Little Tramp" has so much story—so much emotion—been conveyed without words. When hope arrives in the form of a seedling, the film blossoms into one of the great screen romances as two robots remind audiences of the beating heart in all of us that yearns for humanity—and love—in the darkest of landscapes.
On Rotten Tomatoes, the film holds a 95% approval rating based on 260 reviews, with an average score of 8.55/10. The website's critical consensus reads, "Wall-E's stellar visuals testify once again to Pixar's ingenuity, while its charming star will captivate younger viewers—and its timely story offers thought-provoking subtext." At Metacritic, which assigns a normalized rating to reviews from mainstream critics, the film has an average score of 95 out of 100 based on 39 reviews, indicating "universal acclaim". Audiences polled by CinemaScore gave the film an average grade of "A" on an A+ to F scale.
IndieWire named WALL-E the third-best film of the year based on its annual survey of 100 film critics, while Movie City News showed that WALL-E appeared in 162 of the 286 critics' top-ten lists it surveyed, the most of any film released in 2008.
Richard Corliss of Time named WALL-E his favorite film of 2008 (and later of the decade), noting that the film succeeded in "connect[ing] with a huge audience" despite the main characters' lack of speech and of "emotional signifiers like a mouth, eyebrows, shoulders, [and] elbows". It "evoke[d] the splendor of the movie past", and Corliss compared WALL-E and EVE's relationship to the chemistry of Spencer Tracy and Katharine Hepburn. Other critics who named WALL-E their favorite film of 2008 included Tom Charity of CNN, Michael Phillips of the Chicago Tribune, Lisa Schwarzbaum of Entertainment Weekly, A. O. Scott of The New York Times, Christopher Orr of The New Republic, Ty Burr and Wesley Morris of The Boston Globe, Joe Morgenstern of The Wall Street Journal, and Anthony Lane of The New Yorker.
Todd McCarthy of Variety called the film "Pixar's ninth consecutive wonder", saying it was imaginative yet straightforward. He said it pushed the boundaries of animation by balancing esoteric ideas with more immediately accessible ones, and that the main difference between the film and other science-fiction projects rooted in an apocalypse was its optimism. Kirk Honeycutt of The Hollywood Reporter declared that WALL-E surpassed the achievements of Pixar's previous eight features and was probably their most original film to date. He said it had the "heart, soul, spirit and romance" of the best silent films. Honeycutt said the film's definitive stroke of brilliance was its use of a mix of archive film footage and computer graphics to trigger WALL-E's romantic leanings. He praised Burtt's sound design, saying, "If there is such a thing as an aural sleight of hand, this is it."
Roger Ebert of the Chicago Sun-Times called WALL-E "an enthralling animated film, a visual wonderment, and a decent science-fiction story", said the scarcity of dialogue would allow it to "cross language barriers" in a manner appropriate to the global theme, and noted it would appeal to adults and children alike. He praised the animation, describing the color palette as "bright and cheerful ... and a little bit realistic", and said Pixar managed to generate a "curious" regard for WALL-E, comparing his "rusty and hard-working and plucky" design favorably to more obvious attempts at creating "lovable" lead characters. He said WALL-E was concerned with ideas rather than spectacle, and that it would stimulate "little thoughts for the younger viewers". He named it one of his twenty favorite films of 2008 and argued it was "the best science-fiction movie in years".
The film was interpreted as tackling a topical, ecologically minded agenda, though McCarthy said it did so with a lightness of touch that let the viewer accept or ignore the message. Kyle Smith of the New York Post wrote that by depicting future humans as "a flabby mass of peabrained idiots who are literally too fat to walk", WALL-E was darker and more cynical than any major Disney feature film he could recall. He compared the humans to the patrons of Disney's theme parks and resorts, adding, "I'm also not sure I've ever seen a major corporation spend so much money to issue an insult to its customers." Maura Judkis of U.S. News & World Report questioned whether this depiction of "frighteningly obese humans" would resonate with children and make them prefer to "play outside rather than in front of the computer, to avoid a similar fate". The interpretation led to criticism of the film by conservative commentators such as Glenn Beck and National Review Online contributors Shannen W. Coffin and Jonah Goldberg (although Goldberg admitted it was a "fascinating" and occasionally "brilliant" production).
A few notable critics argued that the film was vastly overrated, claiming it failed to "live up to such blinding, high-wattage enthusiasm" and that there were "chasms of boredom watching it", in particular as "the second and third acts spiraled into the expected". Other labels included "preachy" and "too long". Reviews sent in by children to CBBC were mixed, with some citing boredom and an inadequate storyline.
Patrick J. Ford of The American Conservative said WALL-E's conservative critics missed lessons in the film that he felt appealed to traditional conservatism. He argued that the mass consumerism in the film was shown to be a product not of big business, but of too close a tie between big business and big government: "The government unilaterally provided its citizens with everything they needed, and this lack of variety led to Earth's downfall." Responding to Coffin's claim that the film points out the evils of mankind, Ford argued the only evils depicted were those that resulted from losing touch with our own humanity, and that fundamentally conservative symbols such as the farm, the family unit, and wholesome entertainment were in the end held aloft by the human characters. He concluded, "By steering conservative families away from WALL-E, these commentators are doing their readers a great disservice."
Director Terry Gilliam praised the film as "A stunning bit of work. The scenes on what was left of planet Earth are just so beautiful: one of the great silent movies. And the most stunning artwork! It says more about ecology and society than any live-action film—all the people on their loungers floating around, brilliant stuff. Their social comment was so smart and right on the button."
Archaeologists have commented on the themes of human evolution that the film explores. Ben Marwick has written that the character of WALL-E resembles an archaeologist in his methodical collection and classification of quotidian human artefacts. He is shown facing a typological dilemma in classifying a spork as either a fork or a spoon, and his nostalgic interest in the human past is further demonstrated by his attachment to repeated viewings of the 1969 film Hello, Dolly!. Marwick notes that the film features major human evolutionary transitions, such as obligate bipedalism (the captain of the spaceship struggles with the autopilot to gain control of the vessel) and the invention of agriculture, as watershed moments in its story. According to Marwick, one prominent message of the film "appears to be that the envelopment by technology that the humans in Wall-E experience paradoxically results in physical and cultural devolution." Scholars such as Ian Tattersall and Steve Jones have similarly discussed scenarios in which elements of modern technology (such as medicine) may have caused human evolution to slow or stop.
In 2021, the film was selected for preservation in the United States National Film Registry by the Library of Congress as being "culturally, historically, or aesthetically significant".
Accolades
WALL-E won the Academy Award for Best Animated Feature and was nominated for Best Original Screenplay, Best Original Score, Best Original Song, Sound Editing, and Sound Mixing at the 81st Academy Awards, losing those categories to Slumdog Millionaire (Original Score, Original Song, Sound Mixing), The Dark Knight (Sound Editing), and Milk (Original Screenplay). Walt Disney Pictures also pushed for an Academy Award for Best Picture nomination, but the film was not nominated, sparking controversy over whether the Academy had deliberately restricted WALL-E to the Best Animated Feature category. Film critic Peter Travers remarked, "If there was ever a time where an animated feature deserved to be nominated for best picture it's Wall-E." Only three animated films, 1991's Beauty and the Beast and Pixar's next two films, 2009's Up and 2010's Toy Story 3, have ever been nominated for the Academy Award for Best Picture. Stanton stated he was not disappointed that the film was restricted to the Best Animated Feature nomination, because he was overwhelmed by the film's positive reception, adding, "The line [between live-action and animation] is just getting so blurry that I think with each proceeding year, it's going to be tougher and tougher to say what's an animated movie and what's not an animated movie."
WALL-E featured prominently in the various 2008 end-of-year critics' awards, particularly in the Best Picture category, where animated films are often overlooked. It won that award, or its equivalent, from the Boston Society of Film Critics (tied with Slumdog Millionaire), the Chicago Film Critics Association, the Central Ohio Film Critics Association, the Online Film Critics Society, and, most notably, the Los Angeles Film Critics Association, where it became the first animated feature to win the group's top award. It was named one of 2008's ten best films by the American Film Institute and the National Board of Review of Motion Pictures.
It won Best Animated Feature Film at the 66th Golden Globe Awards, the 81st Academy Awards, and the Broadcast Film Critics Association Awards 2008. It was nominated for several awards at the 2009 Annie Awards, including Best Feature Film, Animated Effects, Character Animation, Direction, Production Design, Storyboarding, and Voice Acting (for Ben Burtt), but was beaten by Kung Fu Panda in every category. It won Best Animated Feature at the 62nd British Academy Film Awards, where it was also nominated for Best Music and Best Sound. Thomas Newman and Peter Gabriel won two Grammy Awards for "Down to Earth" and "Define Dancing". The film won all three awards it was nominated for by the Visual Effects Society: Best Animation, Best Character Animation (for WALL-E and EVE in the truck), and Best Effects in an Animated Motion Picture. It became the first animated film to win Best Editing for a Comedy or Musical from the American Cinema Editors. In 2009, Stanton, Reardon, and Docter won the Nebula Award for Best Script, beating The Dark Knight and the Stargate Atlantis episode "The Shrine". It won Best Animated Film and was nominated for Best Director at the Saturn Awards.
At the British National Movie Awards, which are voted on by the public, it won Best Family Film, and it was voted Best Feature Film at the British Academy Children's Awards. WALL-E was listed at number 63 in Empire's online poll of the 100 greatest movie characters, conducted in 2008. In early 2010, Time ranked WALL-E first in its "Best Movies of the Decade". In Sight & Sound magazine's 2012 poll of the greatest films of all time, WALL-E was the second-highest-ranking animated film behind My Neighbor Totoro (1988), tied with Spirited Away (2001) at 202nd overall. In a 2016 BBC poll of international critics, it was voted the 29th-greatest film since 2000.
Robotic recreations
In 2012, Mike McMaster, an American robotics hobbyist, began working on his own model of WALL-E; the final product was built with more moving parts than the WALL-E that roams Disneyland. McMaster's four-foot robot made an appearance at the Walt Disney Family Museum and was featured during the opening week of Tested.com, a project headed by Jamie Hyneman and Adam Savage of MythBusters. Since the robot's completion, McMaster and his creation have made dozens of appearances at various events.
That same year, Mike Senna completed his own WALL-E build and also created an EVE; both appeared at a photo opportunity at Disney's D23 Expo in 2015.
See also
Robots
Mars Cube One, a pair of CubeSats nicknamed WALL-E and EVE
References
Further reading
External links
2008 films
2008 computer-animated films
2000s science fiction adventure films
2000s American animated films
2008 science fiction films
2008 romantic comedy films
American films
American adventure comedy films
American animated science fiction films
American science fiction comedy films
American science fiction adventure films
American robot films
American romance films
American satirical films
American animated feature films
Animated films set in the future
Animated romance films
Films about archaeology
Best Animated Feature Academy Award winners
Best Animated Feature BAFTA winners
Best Animated Feature Broadcast Film Critics Association Award winners
Best Animated Feature Film Golden Globe winners
American dystopian films
Environmental films
Films about evolution
Fictional robots
Films about consumerism
Films about solitude
Films scored by Thomas Newman
Films directed by Andrew Stanton
Films produced by Jim Morris
Films set in the 29th century
Generation ships in fiction
Hugo Award for Best Dramatic Presentation, Long Form winning works
Nebula Award for Best Script-winning works
Pixar animated films
Animated post-apocalyptic films
Animated films about robots
Films set in outer space
Films set on spacecraft
Films with screenplays by Pete Docter
Films with screenplays by Andrew Stanton
Films with screenplays by Jim Reardon
American post-apocalyptic films
Social science fiction films
United States National Film Registry films
Walt Disney Pictures films
American children's animated science fiction films
EUnet
EUnet was a very loose collaboration of individual European UNIX sites in the 1980s that evolved into the fully commercial entity EUnet International Ltd in 1996. It was sold to Qwest in 1998.
EUnet played a decisive role in the adoption of TCP/IP in Europe beginning in 1988.
History
The roots of EUnet, originally an abbreviation of European UNIX Network, go back to 1982, under the auspices of the European UNIX Users Group (EUUG, later EurOpen), and to the first international UUCP connections.
FNET was the French branch of EUnet.
Once a central European backbone node existed that was separate from the expensive telecom network, TCP/IP was adopted in place of store-and-forward transfer. This enabled EUnet to interconnect with NSFNET and with CERN's TCP/IP networks, and a connection to the United States was also established.
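The practical difference is easiest to see in how electronic mail was addressed under each regime. The example below is illustrative rather than historical in its final hop, though mcvax, the backbone machine at the Dutch CWI, and unido, the German backbone node, were real EUnet hubs:

    UUCP store and forward (explicit relay route, one dial-up hop at a time):
        mcvax!unido!somehost!user
    TCP/IP (end-to-end destination addressing, routed automatically):
        user@somehost.de

Under UUCP a sender had to know a working chain of neighbouring machines, and a message could take hours or days as each hop waited for its next scheduled call; over TCP/IP only the destination needs naming.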
On January 1, 1990, EUnet began selling Internet access to non-academic customers in the Netherlands, making it one of the first companies to sell Internet access to the general public. EUnet provided local service through a national EUnet business partner in each of many European countries.
In 1990, the Soviet IP-based network RELCOM, operated mostly on DEMOS-powered computers, was connected to EUnet.
In April 1998 the company, together with nearly all of EUnet's national European business partners, was sold to Qwest Communications International, which later merged EUnet into the ill-fated joint venture KPNQwest. In 2000, it was estimated that KPNQwest was carrying more than 50% of European IP traffic. Some of the ISPs operating under the name EUnet today can be traced back to the original EUnet; some cannot.
Most national EUnet affiliates or subsidiaries predated other commercial Internet offerings in the respective countries by many years.
To understand the importance and history of EUnet, it helps to realize that until the early 1990s nearly every European country had a telecommunications monopoly under an incumbent national PTT, and that commercial and non-commercial provision of telecommunications services was prohibited or at best took place in a legal "grey zone". During the same period, as part of an industrial-policy strategy to stop US domination of future network technology, the EC promoted OSI protocols, founding, for example, RARE and associated national "research" network operators (DFN, SURFnet, and SWITCH, to name a few).
Timeline
1982: UUCP links established between four countries (UK, the Netherlands, Denmark, and Sweden).
1984: kremvax April Fools' joke.
1986: FNET, the French branch of EUnet, converted from UUCP to TCP/IP.
1988: First connection in Europe to NSFnet, made by CWI, a Dutch computing centre.
1990: First offerings for "all comers".
1994: GBnet Ltd becomes EUnet GB Ltd.
1994: EUnet GB and EUnet Europe form a pan-European EUnet.
1994: EUnet DE purchased by UUnet.
1995: EUnet GB Ltd sold to PSI.
1996: EUnet International formed by share swaps with seven of the national organisations.
1998: Sale to Qwest for $154.4 million.
People
The following people were involved in EUnet:
Teus Hagen
Daniel Karrenberg
Piet Beertema
Peter Collinson
Jim Omand (EUnet GB – Chairman)
Keld Simonsen
Björn Eriksen
Julf Helsingius
Glenn Kowack
Luc De Vos
Michael Habeler
See also
History of the Internet
Protocol Wars
References
External links
LucDeVos.com
Godfatherof.nl, Piet Beertema
LivingInternet.com, Living Internet Article
CERN.ch, CERN Internet History
Internet technology companies of the Netherlands
KPN
Companies established in 1982
1982 establishments in Europe
Chorus Systèmes SA
Chorus Systèmes SA was a French software company, active from 1986 to 1997, created to commercialise research work done at the Institut national de recherche en informatique et en automatique (INRIA). Its primary product was the Chorus distributed microkernel operating system, created at a time when microkernel technology was thought to hold great promise for the future of operating systems. As such, Chorus was at the centre of many strategic partnerships involving Unix and related systems. The firm was acquired by Sun Microsystems in 1997.
Origins
The Chorus distributed operating system research project began at the French Institut national de recherche en informatique et en automatique (INRIA) in 1979. The project was begun by Hubert Zimmermann, a pioneer of networked computing who devised the OSI reference model, which became a popular way to describe network protocols. In large part the French CYCLADES computer networking project was a precursor of the Chorus work, as essential to the idea of Chorus was to take advantage of what had been learned in networking research in order to add communication and distribution to heretofore monolithic operating system kernels.
Several iterations of the Chorus technology were produced at INRIA between 1980 and 1986, which were referred to by the Chorus creators as Chorus-v0 through Chorus-v2.
Concurrently, there was another INRIA project, called Sol. It had been begun by Michel Gien, who also had a background from CYCLADES; it sought to build a Unix operating system implementation for French minicomputers and microcomputers. Sol used the Pascal programming language rather than C for this, as part of adopting more modern software engineering techniques.
In 1984, the Sol project was merged into the Chorus project, and as one result, the Chorus-v2 iteration adopted the interfaces of Unix System V rather than having its own custom set of interfaces.
History
Beginning years
Microkernel technology was seen as holding great promise for advancing the state of operating systems and distributed computing. Accordingly, Chorus Systèmes SA was founded in 1986 in order to commercialise the results of the INRIA research. The co-founders were Zimmermann and Gien. Having spent a decade or more enmeshed in the politics of publicly funded research work, both felt that it was time to try a startup company, especially since they had seen others they knew do so (such as the American networking pioneer Robert Metcalfe founding 3Com). Some Chorus engineers from INRIA joined them in the new venture. Zimmermann became head of the new company, in a position described at different times as president, chairman, or CEO. Gien was variously described as chief of technology, or as general manager and director of research, for Chorus Systèmes.
At the time, technology startups in France were rare, a point emphasized by the French trade publication 01 Informatique in a profile of the company and by co-founder Gien in retrospect. Thus Chorus Systèmes and system software company ILOG, founded soon after, were in the vanguard. Venture capitalists did not exist in France, but the new firm was able to get funding from European projects and from government contracts. In particular this included funding from INRIA and France Telecom.
The offices of Chorus Systèmes were located at 6 avenue Gustave Eiffel in the town of Saint-Quentin-en-Yvelines in the Île-de-France region outside of Paris.
Chorus Systèmes was able to attract engineering talent from around the world, in part because of the connections the founders had in the research world, in part because of the interesting nature of the work, and in part because people were attracted to the idea of working in the Paris area.
By mid-1989, Chorus Systèmes had some 30 employees.
By arrangement with its financial backers, during its first two years Chorus Systèmes focused solely on improvements to the Chorus technology, with no attempts to garner revenue via consulting or similar activities.
The Chorus-v3 iteration consequently came out of Chorus Systèmes around 1988, improving on the system's real-time and distributed capabilities. Some of the improvements were inspired by work done in other microkernel projects; as an academic paper by two of Chorus's staff members stated, their goal was to "[build] on the experience of state-of-the-art research systems ... while taking into account constraints of the industrial environment." Chorus-v3 also featured a variant of Unix, called MiX, packaged such that, as one Chorus paper put it, "we will refer to the combination of the Chorus Nucleus and the set of Unix System V subsystem servers as the Chorus/MiX operating system."
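The division of labour this naming reflects, a small nucleus that only moves messages between ports while Unix functionality runs as ordinary user-space server tasks, can be sketched briefly in C. The sketch below is illustrative only: the "nucleus" is simulated with an in-process, single-slot port table, and every identifier in it (nucleus_send, FILE_SERVER_PORT, and so on) is invented for the example rather than taken from the actual Chorus Nucleus API.

    /* Illustrative sketch of the microkernel pattern behind systems like
       Chorus/MiX: the nucleus only copies messages between ports; a "Unix"
       service is an ordinary task that answers requests. All names here are
       hypothetical -- this is not the real Chorus interface. */
    #include <stdio.h>
    #include <string.h>

    enum { MSG_OPEN, MSG_REPLY };

    typedef struct {
        int  type;          /* request or reply */
        int  reply_port;    /* port on which the client awaits the answer */
        char payload[64];   /* e.g. a pathname, or a returned handle */
    } message;

    /* A toy "nucleus": one message slot per port. */
    #define NPORTS 8
    static message port_queue[NPORTS];
    static int     port_full[NPORTS];

    static void nucleus_send(int port, const message *m) {
        port_queue[port] = *m;              /* copy the message into the port */
        port_full[port]  = 1;
    }

    static int nucleus_receive(int port, message *m) {
        if (!port_full[port]) return -1;    /* nothing waiting */
        *m = port_queue[port];
        port_full[port] = 0;
        return 0;
    }

    /* A user-space "Unix subsystem" server listening on a well-known port. */
    #define FILE_SERVER_PORT 1

    static void file_server_step(void) {
        message req, rep;
        if (nucleus_receive(FILE_SERVER_PORT, &req) != 0) return;
        if (req.type == MSG_OPEN) {         /* turn the pathname into a handle */
            rep.type = MSG_REPLY;
            rep.reply_port = 0;
            snprintf(rep.payload, sizeof rep.payload, "handle-for:%s", req.payload);
            nucleus_send(req.reply_port, &rep);
        }
    }

    /* A client task: what looks like open() becomes a message exchange. */
    int main(void) {
        message req = { MSG_OPEN, 2, "" };  /* client waits on port 2 */
        message rep;

        strcpy(req.payload, "/etc/motd");
        nucleus_send(FILE_SERVER_PORT, &req);
        file_server_step();                 /* server handles the request */
        nucleus_receive(2, &rep);           /* client collects the reply */
        printf("open(\"/etc/motd\") -> %s\n", rep.payload);
        return 0;
    }

The same picture suggests why such systems advertised distribution as a natural by-product: a client names only a port, so the server behind that port could in principle sit on another node without the client changing.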
Emphasis on Unix
Chorus Systèmes believed it held the key to the technological direction Unix should take, and it had large ambitions in this realm. Indeed, almost from the start of the company's history, Zimmermann proclaimed that the existing Unix technology had reached the end of its useful life and needed a new kernel approach going forward. As part of this, Zimmermann wanted to expand the usage of Unix into new areas and then, within a few years, capture ten percent of that expanded market.
As such, the company's executives met with people from both the Open Software Foundation and Unix International (the two sides of the Unix Wars then taking place) to seek their endorsements of the Chorus microkernel and to navigate their requirements.
Similarly, Chorus Systèmes engaged with a number of hardware vendors in an effort to convince them to adopt the Chorus technology.
In early 1990, GEC Plessey Telecommunications agreed to adopt Chorus for a new generation of its System X product, a digital switching system. At the time it was the biggest deal Chorus Systèmes had made, and was subsequently mentioned in the general press.
Chorus Systèmes also made a deal with Gipsi SA, a maker of X terminals.
During 1990, Unisys agreed to use Chorus as the basis for a Unix operating system. The same year, Intel's Scientific Computers group agreed to use Chorus for its Intel iPSC supercomputer.
These successes were followed in 1991 by ports of the Chorus microkernel to the transputer architecture from Inmos and the ARM3 RISC architecture used by Acorn Computers. The year after that, Tolerance Computer agreed to work with the Chorus microkernel towards making the first fault-tolerant Unix for a microcomputer-level system.
Business aspects
The primary alternative to Chorus in the microkernel space was the Mach software at Carnegie Mellon University. Two other microkernel projects going on at the time were Amoeba from Vrije Universiteit Amsterdam and V at Stanford University.
Chorus and Mach shared many similar features of their outward design, but had differences in areas such as naming and addressing and protection schemes. In some cases this gave Chorus an advantage, because it provided greater flexibility at the kernel mode–user mode boundary.
In any case, Chorus was the only one of these projects that was ready with a commercial product.
In 1990, the company created a United States subsidiary, Chorus Systems Inc., located in Beaverton, Oregon; it initially had seven employees, with plans to double that number. Will Neuhauser was president of the subsidiary. Chorus employees evangelized the technology extensively, including in the United States, but initially the large majority of the company's sales came from Europe.
By 1990, Chorus Systèmes had some $6.5 million in annual revenues.
Over time, Chorus Systèmes received various outside investments. By mid-1991, 63 percent of the company was owned by its founders and employees; 16 percent by Innovacom; and stakes of less than 10 percent each were held by, in descending order, Sofinnova, Crédit Lyonnais, Banexi Ventures, and Banque Hervet.
In 1991, Unix System Laboratories (USL), an offshoot of Unix originator AT&T, forged an arrangement with Chorus Systèmes for cooperative work on the Chorus microkernel technology, with the idea of supporting USL's Unix System V Release 4 on Chorus/MiX and thereby making it more scalable and better suited to parallel and distributed applications. As part of this, USL took a $1 million stake in Chorus Systèmes. Much of the USL Chorus work was done at the USL Europe facility in London. This was part of the larger Ouverture project, a $14 million effort that was itself part of the European Strategic Program on Research in Information Technology (ESPRIT), overseen by the European Commission.
Microkernels also offered the possibility of multiple operating systems running side by side on the same machine. The ability of Chorus to support this soon became of interest to Novell, which had acquired USL and was looking for a way to combine its flagship NetWare product with USL's SVR4-based UnixWare. In 1994 Novell began publicly describing its plans to develop "SuperNOS", a microkernel-based network operating system that would run NetWare's network services alongside UnixWare's application services and could accordingly compete successfully with Microsoft's Windows NT. SuperNOS, which attracted considerable industry attention, was based on the work that had already started between USL and Chorus Systèmes, and a significant number of engineers were assigned to it. The project endured prolonged internal architectural debates, including Gien and Novell's chief scientist Drew Major disagreeing in the trade press about whether the existing Chorus technology was up to the task. In any case, later in 1995, Novell sold the Unix technology to The Santa Cruz Operation (SCO) and SuperNOS was abandoned.
SCO itself had had its own dealings with Chorus Systèmes, going back to a 1992 agreement between the two companies for cooperative work on combining SCO's OpenServer variant of Unix with the Chorus microkernel, for use in real-time processing environments in telecommunications and other areas. The first result of this, a dual-functionality product called Chorus/Fusion for SCO Open Systems Software, was released in 1994. Further work between the two companies took place during the next few years; by 1995, SCO had set up a business unit for the venture and was spending considerable engineering resources on what was by then a re-implementation of OpenServer to run on top of the Chorus microkernel, in what was to be called the SCO Telecommunications OS Platform. But the project was scrapped before it came to fruition.
Other projects
Object-oriented operating systems were another area of active research at the time, and there were several efforts to provide them on top of microkernels. One was GUIDE, a project of the universities of Grenoble, which implemented its object-oriented OS on Chorus, Mach, and regular Unix, and drew comparisons between the three.
Another was COOL and was undertaken by Chorus Systèmes itself. Standing for the Chorus Object-Oriented Layer, the first version of COOL was done in conjunction with INRIA and the SEPT, a research laboratory of France Telecom, and came into being in late 1988. A primary aim of the COOL work was to support distributed groupware applications; with that goal partly in mind, COOL was substantially revised into a two-layer architecture with clusters on the lower layer and objects represented through the higher layer. This revision was developed in partnership with the ISA and Commandos projects under the aegis of ESPRIT and materialised in late 1991. The findings from the COOL project were described in an article in Communications of the ACM in 1993.
Change of focus
Over time, development effort on Chorus shifted towards real-time operating systems for embedded systems. As part of ESPRIT's STREAM project, Chorus was structured into a scaled series of capabilities, ranging from a 10-kilobyte "nanokernel" with a simple executive and memory-management logic up to a full-featured distributed operating system that could run Unix.
Subsequently, the company looked to change direction away from Unix, saying its customers were more interested in the Java software platform and its capabilities on real-time devices. In February 1997, the company announced the Chorus/Jazz product, intended to allow Java applications to run in a distributed, real-time embedded system environment. The basis of Chorus/Jazz was JavaOS, which Chorus Systèmes had licensed from Sun Microsystems, replacing that technology's hardware abstraction layer with the Chorus microkernel. At this point, Chorus Systèmes offered three products for the embedded systems space: Chorus/Micro, for small, hard real-time applications; Chorus/ClassiX, for larger, RT-POSIX-compliant applications; and Chorus/Jazz, in the Java realm.
By 1997, Chorus Systèmes numbered among its customers in the telecommunications area Alcatel-Alsthom, Lucent Technologies, Matra, and Motorola. Its revenues were $10 million.
By this point, Chorus Systèmes was looking to be acquired by another company. A couple of years earlier, SCO had inquired about such a possibility but felt that Chorus Systèmes valued itself too highly. With the Java work going on, however, and a personal connection Gien had with Sun co-founder Bill Joy, an obvious possibility presented itself.
Acquisition by Sun and aftermath
In September 1997, it was announced that Sun Microsystems was acquiring Chorus Systèmes SA. The total amount paid for the company was the equivalent of $26.5 million. The deal was part of an overall desire by Sun to enter the embedded systems market, which was a growing industry that was attracting the attention of analysts and investors. Given the declining interest in microkernels, the industry publication Computergram International considered Chorus Systèmes fortunate to have found a buyer for itself.
The Sun acquisition closed on 21 October 1997. The Chorus technology became part of a new Embedded Systems Software business group at Sun, and Chorus itself was renamed ChorusOS. Some of the work done at Sun included providing a combination of ChorusOS and Sun's Solaris for high-availability systems in the telecommunications market.
Subsequently, Sun went through a restructuring during the early 2000s recession and decided to jettison the ChorusOS technology. Some three dozen Sun employees working on Chorus formed their own company, Jaluna, which applied microkernel-like approaches to the increasingly important domain of virtualization. The company was later renamed VirtualLogix, which was acquired by Red Bend Software in 2010.
References
Further reading
Section 18.3.
Software companies of France
Companies based in Île-de-France
Software companies established in 1986
Software companies disestablished in 1997
French companies established in 1986
French companies disestablished in 1997
Unix history
Berkeley Heights, New Jersey
Berkeley Heights is a township in Union County, New Jersey, United States. A bedroom community in northern-central New Jersey, the township is nestled within the Raritan Valley region in the New York metropolitan area. As of the 2010 United States Census, the township's population was 13,183, reflecting a decline of 224 (-1.7%) from the 13,407 counted in the 2000 Census, which had in turn increased by 1,427 (+11.9%) from the 11,980 counted in the 1990 Census.
Berkeley Heights was originally incorporated as New Providence Township by the New Jersey Legislature on November 8, 1809, from portions of Springfield Township, while the area was still part of Essex County. New Providence Township became part of the newly formed Union County at its creation on March 19, 1857. Portions of the township were taken on March 23, 1869, to create Summit, and on March 14, 1899, to form the borough of New Providence. On November 6, 1951, the name of the township was changed to Berkeley Heights, based on the results of a referendum held that day. The township was named for John Berkeley, 1st Baron Berkeley of Stratton, one of the founders of the Province of New Jersey.
The township has been ranked as one of the state's highest-income communities. Based on data from the American Community Survey for 2013–2017, township residents had a median household income of $147,614, ranked 15th in the state among municipalities with more than 10,000 residents, almost double the statewide median of $76,475.
In Money magazine's 2013 Best Places to Live rankings, Berkeley Heights was ranked 6th in the nation, the highest among the three places in New Jersey included in the top 50 list. The magazine's 2007 list had the township ranked 45th out of a potential 2,800 places in the United States with populations above 7,500 and under 50,000.
In its 2010 rankings of the "Best Places to Live", New Jersey Monthly magazine ranked Berkeley Heights as the 19th best place to live in New Jersey."Best Places To Live 2010", New Jersey Monthly, February 11, 2010. Accessed July 3, 2011. In its 2008 rankings of the "Best Places To Live", New Jersey Monthly magazine ranked Berkeley Heights as the 59th best place to live in New Jersey.
History
The Lenape Native Americans inhabited the region, including the area now known as Berkeley Heights; European accounts of them date back to the 1524 voyage of Giovanni da Verrazzano to what is now lower New York Bay.
The earliest construction in Berkeley Heights began in an area that is now part of the Watchung Reservation, a Union County park that includes a portion of the township.
The first European settler was Peter Willcox, who received a land grant in 1720 from the Elizabethtown Associates. This group bought much of northern New Jersey from the Lenape in the late 17th century. Willcox built a grist and lumber mill across Green Brook.
In 1793, a regional government was formed encompassing present-day Springfield Township, Summit, New Providence, and Berkeley Heights; it was called Springfield Township. Growth continued in the area, and in 1809 Springfield Township was divided into Springfield Township and New Providence Township, the latter including present-day Summit, New Providence, and Berkeley Heights.
In 1845, Willcox's heirs sold the mill to David Felt, a paper manufacturer from New York. Felt built a small village around the mill aptly named Feltville. It included homes for workers and their families, dormitories, orchards, a post office and a general store with a second floor church.
In 1860, Feltville was sold to sarsaparilla makers. Other manufacturing operations continued until Feltville went into bankruptcy in 1882. When residents moved away, the area became known as Deserted Village. Village remains consist of seven houses, a store, the mill and a barn. Deserted Village is listed on the National Register of Historic Places and is undergoing restoration by the Union County Parks Department. Restoration grants of almost $2 million were received from various state agencies. Deserted Village, in the Watchung Reservation, is open daily for unguided walking tours during daylight hours.
On March 23, 1869, Summit Township (now the City of Summit) seceded from New Providence Township. On March 14, 1899, the Borough of New Providence seceded from New Providence Township. Present day Berkeley Heights remained as New Providence Township. Many of the townships and regional areas in New Jersey were separating into small, locally governed communities at that time due to acts of the New Jersey Legislature that made it economically advantageous for the communities to do so.
Due to confusion between the adjacent municipalities of the Borough of New Providence and the Township of New Providence, the township held a referendum in November 1951 and voted to change its name to Berkeley Heights Township. The origin of the township's name has never been fully established, but it is believed to have been taken from an area of town referred to by this moniker, which itself was assumed to derive from Lord John Berkeley, who was co-proprietor of New Jersey from 1664 to 1674.
Early life in Berkeley Heights is documented at the Littell-Lord Farmhouse Museum & Farmstead (31 Horseshoe Road in Berkeley Heights), a museum comprising two houses, one built in the 1750s and the other near the start of the 19th century.
Among the exhibits are a Victorian master bedroom and a Victorian children's room, furnished with period antiques. The children's room also has reproductions of antique toys, which visitors can play with. The museum, which is on the National Register of Historic Places, also includes an outbuilding that was used as a summer kitchen, a corn crib dating to the 19th century and a spring house built around a spring and used for refrigeration. The museum is open 2-4 p.m. on the third Sunday of each month from April through December, or by appointment.
The township owes its rural character to its late development. Until 1941, when the American Telephone and Telegraph Company built the AT&T Bell Laboratories research facility in the township, it was a sleepy farming and resort community.
Berkeley Heights is host to a traditional religious procession and feast carried out by members of Our Lady of Mt. Carmel Society. The feast is capped by one of the largest fireworks shows in the state. The Feast of Mt. Carmel has been a town tradition since 1909.
In 1958, part of a Nike missile battery (NY-73) was installed in Berkeley Heights. The missiles were located in nearby Mountainside, while the radar station was installed in Berkeley Heights. It remained in operation until 1963, and remnants of the site are located adjacent to Governor Livingston High School.
Free Acres
Another early Berkeley Heights community of note is Free Acres, established in 1910 by Bolton Hall, a New York entrepreneur and reformer who believed in the economist Henry George's idea of a single tax. Among the early residents of Free Acres were the actor James Cagney and his wife, Billie.
Residents of Free Acres pay tax to their association, which maintains its streets and swimming pool, approves architectural changes to homes and pays a lump sum in taxes to the municipality.
Geography
According to the United States Census Bureau, the township had a total area of 6.26 square miles (16.21 km2), including 6.22 square miles (16.11 km2) of land and 0.04 square miles (0.10 km2) of water (0.59%).
The township is located partially on the crest of the Second Watchung Mountain and in the Passaic River Valley, aptly named as the Passaic River forms the township's northern border. The township is also located partially in the Raritan Valley region, in which the Green Brook (a tributary of the Raritan River) forms the township's eastern border near the Watchung Reservation. Berkeley Heights is located in northwestern Union County, at the confluence of Union, Morris, and Somerset Counties. Berkeley Heights is bordered by New Providence and Summit to the east, Scotch Plains to the southeast, Chatham to the north, Watchung to the south, and Warren Township and Long Hill Township to the west.Union County Municipal Profiles, Union County, New Jersey. Accessed March 19, 2020.
Unincorporated communities, localities and place names located partially or completely within the township include Benders Corners, Glenside Park, Stony Hill and Union Village.
Downtown
Downtown Berkeley Heights is located along Springfield Avenue, approximately between the intersections with Plainfield Avenue and Snyder Avenue. Downtown is home to more than 20 restaurants which join with the Downtown Beautification Committee to hold an annual Restaurant Week each September. In addition, a post office, the Municipal Building, police station, train station, Walgreens, CVS, Stop & Shop and other shops and services are located in this downtown section.
A brick walk with personalized bricks engraved with the names of many long-time Berkeley Heights residents runs from near the railroad station towards the planned Stratton House development, at the site of the former Kings. A memorial to the victims of the September 11 terrorist attacks adjoins a wooded area alongside Park Avenue, just southwest of downtown.
Certain portions of Berkeley Heights are located in flood zones. Residential homes, as well as some commercial areas along the downtown Springfield Avenue area, are impacted by flooding.
Demographics
2020 Census
The 2020 United States census counted 13,285 people, 4,484 households, and 3,718 families in the township. The population density was 2,135.8 inhabitants per square mile. There were 4,660 housing units, of which 4,484 were occupied.
2010 Census
The Census Bureau's 2006-2010 American Community Survey showed that (in 2010 inflation-adjusted dollars) median household income was $132,089 (with a margin of error of +/- $11,331) and the median family income was $150,105 (+/- $17,689). Males had a median income of $105,733 (+/- $10,158) versus $55,545 (+/- $11,985) for females. The per capita income for the township was $56,737 (+/- $5,135). About 0.8% of families and 1.4% of the population were below the poverty line, including 1.7% of those under age 18 and 0.7% of those age 65 or over.
2000 Census
As of the 2000 United States Census there were 13,407 people, 4,479 households, and 3,717 families residing in the township. The population density was 2,140.7 people per square mile (826.9/km2). There were 4,562 housing units at an average density of 728.4 per square mile (281.4/km2). The racial makeup of the township was 89.65% White, 1.11% African American, 0.08% Native American, 7.87% Asian, 0.61% from other races, and 0.68% from two or more races. Hispanic or Latino of any race were 3.68% of the population.DP-1: Profile of General Demographic Characteristics: 2000 - Census 2000 Summary File 1 (SF 1) 100-Percent Data for Berkeley Heights township, Union County, New Jersey, United States Census Bureau. Accessed May 5, 2013.
There were 4,479 households, out of which 41.5% had children under the age of 18 living with them, 74.1% were married couples living together, 6.9% had a female householder with no husband present, and 17.0% were non-families. 14.8% of all households were made up of individuals, and 7.5% had someone living alone who was 65 years of age or older. The average household size was 2.89 and the average family size was 3.21.
In the township the population was spread out, with 26.8% under the age of 18, 4.2% from 18 to 24, 27.8% from 25 to 44, 24.8% from 45 to 64, and 16.4% who were 65 years of age or older. The median age was 40 years. For every 100 females, there were 91.0 males. For every 100 females age 18 and over, there were 87.4 males.
The median income for a household in the township was $107,716, and the median income for a family was $118,862. Males had a median income of $83,175 versus $50,022 for females. The per capita income for the township was $43,981. About 1.5% of families and 2.1% of the population were below the poverty line, including 1.8% of those under age 18 and 3.1% of those age 65 or over.
Economy
Berkeley Heights is home to the Murray Hill Bell Labs headquarters of Nokia. The transistor, the solar cell, the laser, and the Unix operating system were invented at this facility when it was part of AT&T.
Berkeley Heights is also home to L'Oréal USA's New Jersey headquarters.
In 2003, Summit Medical Group signed a lease to build its main campus on the site of the former D&B Corporation headquarters located on Diamond Hill Road. Summit Medical Group merged with CityMD in 2019 to form Summit Health, which has 2,500 health care providers in the New York City area and Oregon.
Parks and recreation
Located in Berkeley Heights are many municipal parks, including the largest one, Columbia Park (located along Plainfield Avenue). Columbia Park boasts tennis courts, two baseball fields, basketball courts, and a large children's play area. It is operated by the Recreation Commission. In addition to those located at each of the schools, athletic fields are located along Horseshoe Road (Sansone Field) and along Springfield Avenue (Passaic River Park).
There are three swimming clubs located in Berkeley Heights: the Berkeley Heights Community Pool (Locust Avenue), the Berkeley Swim Club (behind Columbia Park), and Berkeley Aquatic (off of Springfield Avenue).
The Watchung Reservation and Passaic River Parkway are in the township and maintained by Union County. The Watchung Reservation has hiking trails, horseback riding trails, a large lake (Lake Surprise), the deserted community of Feltville and picnic areas.
Government
Local government
In accordance with a ballot question passed in November 2005, Berkeley Heights switched from the Township Committee form to a Mayor-Council-Administrator form of government under the Faulkner Act; the township is one of three municipalities (of the 565 statewide) that use this form of government. The switch took effect on January 1, 2007, and in the fall 2006 elections all seats were open. Under the new form of government, the mayor is directly elected to a four-year term. The Township Committee was replaced with a Township Council composed of six members elected to staggered three-year terms. With all six Township Council seats open in 2006, two councilpersons were elected to one-year terms, after which those seats were open for three-year terms in 2007; two other seats were open for two-year terms in 2006, and the final two were open for three-year terms from the beginning. The responsibilities of the Township Administrator are unchanged.
As of 2022, the Mayor of Berkeley Heights is Democrat Angie D. Devanney, whose term of office ends on December 31, 2022. Members of the Township Council are Gentiana Brahimaj (R, 2022), Council Vice President Manuel Couto (R, 2022), Paul Donnelly (R, 2024), John Foster (R, 2024), Council President Jeanne Kingsley (R, 2023), and Jeff Varnerin (R, 2023).2019 Municipal Data Sheet, Berkeley Heights Township. Accessed September 10, 2019.General Election November 2, 2021 Official Results, Union County, New Jersey, updated November 15, 2021. Accessed January 22, 2022.General Election November 5, 2019 Official Results, Union County, New Jersey, updated December 5, 2019. Accessed January 1, 2020.
The Council President serves as Acting Mayor in the Mayor's absence; the Council Vice President serves as Acting Mayor in the absence of both the Mayor and the Council President.
The Berkeley Heights Municipal Building is located at 29 Park Avenue. A new Municipal Complex is under construction at the same location.
Federal, state and county representation
Berkeley Heights is located in the 7th Congressional District and is part of New Jersey's 21st state legislative district.2019 New Jersey Citizen's Guide to Government, New Jersey League of Women Voters. Accessed October 30, 2019.
Politics
As of May 18, 2017, there were a total of 9,558 registered voters in Berkeley Heights Township, of which 2,387 (25.0% vs. 45.2% countywide) were registered as Democrats, 3,368 (35.2% vs. 14.9%) were registered as Republicans and 3,780 (39.5% vs. 39.4%) were registered as Unaffiliated. There were 23 voters registered to other parties. Among the township's 2010 Census population, 68.8% (vs. 53.3% in Union County) were registered to vote, including 94.2% of those ages 18 and over (vs. 70.6% countywide).GCT-P7: Selected Age Groups: 2010 - State -- County Subdivision; 2010 Census Summary File 1 for New Jersey , United States Census Bureau. Accessed May 4, 2013.
In the 2016 presidential election, Democrat Hillary Clinton received 3,482 votes (48.23% vs. 65.94% countywide), ahead of Republican Donald Trump with 3,359 votes (46.53% vs. 30.47% countywide), and other candidates with 378 votes (5.1% vs. 3.6% countywide), among the 7,325 ballots cast by the township's 9,775 voters, for a turnout of 74.9%.
In the 2012 presidential election, Republican Mitt Romney received 3,897 votes (57.3% vs. 32.3% countywide), ahead of Democrat Barack Obama with 2,799 votes (41.1% vs. 66.0%) and other candidates with 76 votes (1.1% vs. 0.8%), among the 6,802 ballots cast by the township's 9,400 registered voters, for a turnout of 72.4% (vs. 68.8% in Union County).Number of Registered Voters and Ballots Cast November 6, 2012 General Election Results – Union County, New Jersey Department of State Division of Elections, March 15, 2013. Accessed May 5, 2013. In the 2008 presidential election, Republican John McCain received 4,011 votes (55.3% vs. 35.2% countywide), ahead of Democrat Barack Obama with 3,094 votes (42.7% vs. 63.1%) and other candidates with 93 votes (1.3% vs. 0.9%), among the 7,248 ballots cast by the township's 9,375 registered voters, for a turnout of 77.3% (vs. 74.7% in Union County). In the 2004 presidential election, Republican George W. Bush received 4,146 votes (57.1% vs. 40.3% countywide), ahead of Democrat John Kerry with 3,019 votes (41.6% vs. 58.3%) and other candidates with 60 votes (0.8% vs. 0.7%), among the 7,258 ballots cast by the township's 9,121 registered voters, for a turnout of 79.6% (vs. 72.3% in the whole county).
In the 2013 gubernatorial election, Republican Chris Christie received 72.2% of the vote (3,145 cast), ahead of Democrat Barbara Buono with 26.4% (1,150 votes), and other candidates with 1.4% (63 votes), among the 4,457 ballots cast by the township's 9,193 registered voters (99 ballots were spoiled), for a turnout of 48.5%. In the 2009 gubernatorial election, Republican Chris Christie received 3,136 votes (60.0% vs. 41.7% countywide), ahead of Democrat Jon Corzine with 1,589 votes (30.4% vs. 50.6%), Independent Chris Daggett with 409 votes (7.8% vs. 5.9%) and other candidates with 32 votes (0.6% vs. 0.8%), among the 5,223 ballots cast by the township's 9,201 registered voters, yielding a 56.8% turnout (vs. 46.5% in the county).
Education
Public schools
The Berkeley Heights Public Schools serves students in pre-kindergarten through twelfth grade. As of the 2018–19 school year, the district, comprising six schools, had an enrollment of 2,582 students and 224.3 classroom teachers (on an FTE basis), for a student–teacher ratio of 11.5:1. Schools in the district (with 2018–19 enrollment data from the National Center for Education Statistics) are:
Mary Kay McMillin Early Childhood Center with 304 students in PreK-1st grade,
Thomas P. Hughes Elementary School with 273 students in grades 2-5,
Mountain Park Elementary School with 230 students in grades 2-5,
William Woodruff Elementary School with 201 students in grades 2-5,
Columbia Middle School with 601 students in grades 6-8,
Governor Livingston High School with 965 students in grades 9-12.New Jersey School Directory for the Berkeley Heights Public Schools, New Jersey Department of Education. Accessed December 29, 2016.
The district's high school serves public school students of Berkeley Heights, along with approximately 300 students from the neighboring Borough of Mountainside who are educated at the high school as part of a sending/receiving relationship with the Mountainside School District, covered by an agreement that runs through the end of the 2021–22 school year.Mustac, Frank. "Contract Signed to Continue Sending Mountainside Students to Governor Livingston High School", TAP into Mountainside, October 12, 2016. Accessed February 5, 2020. "With the Berkeley Heights Board of Education's recent approval of a renegotiated send/receive agreement, new terms are now in place by which the Mountainside School District will be sending its students in grades nine through 12 to Governor Livingston High School.... The new contract runs for five years from July 1, 2017 through June 30, 2022, with a renewal option for an additional five years... The business administrator explained that 30 percent of the Mountainside School District annual budget goes to paying the Berkeley Heights district for sending about 300 students who live in Mountainside to Governor Livingston High School." Governor Livingston provides programs for deaf, hard of hearing and cognitively impaired students, both those in the district and those enrolled from all over north-central New Jersey who attend on a tuition basis.
Governor Livingston was the 30th-ranked public high school in New Jersey out of 305 schools statewide in New Jersey Monthly magazine's September 2018 cover story on the state's "Top Public High Schools".
Private schools
There are four private pre-kindergarten schools in Berkeley Heights: the Westminster Nursery School, at the corner of Plainfield Avenue and Mountain Avenue; the Union Village Nursery, bordering Warren Township at the corner of Mountain Avenue and Hillcrest Road; the Diamond Hill Montessori, along Diamond Hill Road opposite McMane Avenue; and Primrose, on Springfield Avenue.
FlexSchool, a private school for twice-exceptional and gifted fifth through twelfth graders, is the only private secondary school in Berkeley Heights.
Infrastructure
Transportation
Roads and highways
Roadways in the township are maintained by the municipality, Union County and the New Jersey Department of Transportation.
The most significant highway serving Berkeley Heights is Interstate 78, which runs from New York City to Pennsylvania. Other major roads in Berkeley Heights include Springfield Avenue, Mountain Avenue, Snyder Avenue, Plainfield Avenue, and Park Avenue. Springfield Avenue and Mountain Avenue run east–west, Snyder Avenue and Plainfield Avenue run north–south, while Park Avenue runs northeast–southwest. Each of these roads is heavily residential (except parts of Springfield Avenue) with only one travel lane in each direction.
Public transportation
NJ Transit provides service at the Berkeley Heights station on the Gladstone Branch, serving Hoboken Terminal, Newark Broad Street Station, and Penn Station in Midtown Manhattan. Berkeley Heights is also in close proximity to the Summit train station, which provides frequent commuter rail service to New York City.
NJ Transit offers local bus service on the 986 route. Lakeland Bus Lines also provides commuter bus service to the Port Authority Bus Terminal in Midtown Manhattan and a connection to Gladstone.
Freight rail transportation had been provided by Norfolk Southern via off-peak use of New Jersey Transit's Gladstone Branch line until a final run on November 7, 2008, after 126 years of service. The Berkeley Heights plant of Reheis Chemical located on Snyder Avenue was the last freight customer on the Gladstone Branch, receiving shipments of hydrochloric acid.
Newark Liberty International Airport lies to the east of Berkeley Heights.
Healthcare
The Summit Medical Group, located on Mountain Avenue, is the main medical facility in Berkeley Heights.
Public library
Originally opened in 1949, the Berkeley Heights Public Library closed its 290 Plainfield Avenue location to the public and moved to a temporary home at 110 Roosevelt Avenue, otherwise known as the Little Flower Church Rectory. The library is a member of the Infolink region of libraries, the Morris Union Federation (MUF) and the Middlesex Union Reciprocal Agreement Libraries (MURAL).
Police, fire, and emergency services
The Berkeley Heights Police Department is located at the Municipal Building, 29 Park Avenue. This is also the location of the Berkeley Heights Municipal Court.
The Berkeley Heights Volunteer Rescue Squad, founded in 1942, is located at the corner of Snyder Avenue and Locust Avenue. The closest trauma centers are Morristown Medical Center (in Morristown) and University Hospital in Newark. The closest hospital emergency room is Overlook Hospital in Summit. The all-volunteer Rescue Squad provides emergency medical services to the township seven days per week. As of April 2019, the squad had 60 riding members, including college and high school students, of whom 32 were certified EMTs.
The Berkeley Heights Fire Department is a volunteer fire department commanded by Chief Anthony Padovano. In addition to fire suppression, the department has members trained to respond to technical rescue and hazardous materials releases.
Notable people
People who were born in, residents of, or otherwise closely associated with Berkeley Heights include:
Al Aronowitz (1928–2005), rock journalist who claimed that Bob Dylan wrote his famous "Mr. Tambourine Man" in Aronowitz's former Berkeley Heights home.
Steve Balboni (born 1957), former New York Yankee.
BEDlight for BlueEYES, an alternative rock band.
Dennis Boutsikaris (born 1952), actor.Mann, Virginia. "The Good Doctor Next Door", The Record, May 14, 1991. Accessed August 26, 2013. "There are several reasons why "Barney Miller" creator Danny Arnold wanted Dennis Boutsikaris for the lead in his new hospital sitcom Stat (9:30 tonight, Channel 7).... The actor, who was raised in Berkeley Heights and graduated from Hampshire College in Amherst, Mass., began his acting career with John Houseman's Acting Company."
James Cagney (1899–1986), actor who resided in Free Acres.
David Cantor (born 1954), actor.
John Carlini, jazz guitarist.
Ronald Chen (born 1958), former Public Advocate of New Jersey, nominated to fill the position on January 5, 2006, by Governor of New Jersey Jon Corzine.
Christopher Durang (born 1949), playwright and actor.
Cathy Engelbert (born 1965), CEO of Deloitte, first female CEO of a major U.S. accounting firm.
Lauren Beth Gash (born 1960), lawyer and politician who served in the Illinois House of Representatives from 1993 to 2001.
Gina Genovese (born 1959), businesswoman and politician who has served as mayor of Long Hill Township.
Scott M. Gimple (born 1971), television and comic book writer.
Bolton Hall (1854–1938), founder of Free Acres.
MacKinlay Kantor (1904–1977), screenwriter and novelist, formerly resided in Free Acres.
Harry Kelly (1871–1953), anarchist.
Victor Kilian (1891–1979), actor.
P. F. Kluge (born 1942), novelist.
Mary Jo Kopechne (1940–1969), political aide who drowned off Chappaquiddick Island when Senator Ted Kennedy (D-Mass.) drove his car off a bridge on July 18, 1969.
John R. Pierce (1910–2002), communications engineer, scientist, and father of the communications satellite.Kamin, Arthur Z. "State Becomes a Part of Celebrating Marconi's Achievements", The New York Times, October 23, 1994. Accessed July 24, 2013. "The recipient in 1979 was Dr. John R. Pierce, then of the California Institute of Technology who had been with AT&T Bell Laboratories at Murray Hill and at Holmdel. Dr. Pierce had lived in Berkeley Heights and now lives in Palo Alto, Calif."
Jerry Ragonese (born 1986), professional lacrosse player for the Redwoods Lacrosse Club of the Premier Lacrosse League.
Juliette Reilly (born 1993), singer and YouTube personality.
Dennis Ritchie (1941–2011), creator of the C programming language and co-inventor of the UNIX operating system.
Bertha Runkle (1879–1958), novelist and playwright.
Peter Sagal (born 1965), playwright, screenwriter, actor, and host of the National Public Radio game show Wait Wait... Don't Tell Me!.
Jill Santoriello, playwright and author of the Broadway musical A Tale of Two Cities, graduated from Governor Livingston High School.
Thorne Smith (1892–1934), author.
Zenon Snylyk (1923–2002), soccer player.
Betty Wilson (born 1932), politician who served in the New Jersey General Assembly from 1974 to 1976.
References
External links
1809 establishments in New Jersey
Faulkner Act (mayor–council–administrator)
Populated places established in 1809
Townships in Union County, New Jersey
|
1005207
|
https://en.wikipedia.org/wiki/Indent%20%28Unix%29
|
Indent (Unix)
|
indent is a Unix utility that reformats C and C++ code in a user-defined indentation style and coding style. Support for C++ code is minimal.
The original version of indent was written by David Willcox at the University of Illinois in November 1976. It was incorporated into 4.1BSD in October 1982. GNU indent was first written by Jim Kingdon in 1989. The command is available as a separate package for Microsoft Windows as part of the UnxUtils collection of native Win32 ports of common GNU Unix-like utilities.
Examples of usage
The following command
$ indent -st -bap -bli0 -i4 -l79 -ncs -npcs -npsl -fca -lc79 -fc1 -ts4 some_file.c
indents some_file.c in a style resembling the BSD/Allman style and writes the result to standard output.
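To make the effect concrete, here is a hypothetical C fragment (not taken from the indent documentation) before and after running it through the command above:

$ cat some_file.c
int main(int argc, char **argv) { if (argc > 1) { printf("%s\n", argv[1]); } return 0; }

$ indent -st -bap -bli0 -i4 -l79 -ncs -npcs -npsl -fca -lc79 -fc1 -ts4 some_file.c
int main(int argc, char **argv)
{
    if (argc > 1)
    {
        printf("%s\n", argv[1]);
    }
    return 0;
}

The braces land on their own lines at the depth of the controlling statement (-bli0), indentation is four spaces (-i4), the return type stays on the same line as the function name (-npsl), and no space is inserted between a function name and its argument list (-npcs). Exact output may differ slightly between indent versions.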
GNU indent
GNU indent is the GNU Project's version of indent. A different indentation style, the GNU style, is used by default.
GUI
UniversalIndentGUI
References
External links
GNU indent Homepage
clang-format (an alternative to indent)
GNU Project software
Unix programming tools
|
58764024
|
https://en.wikipedia.org/wiki/Carla%20Brodley
|
Carla Brodley
|
Carla E. Brodley is a computer scientist specializing in machine learning. Brodley is a Fellow of the ACM, the Association for the Advancement of Artificial Intelligence (AAAI), and the American Association for the Advancement of Science (AAAS). She is the Dean of Inclusive Computing at Northeastern University, where she serves as the Executive Director of the Center for Inclusive Computing and holds a tenured appointment in the Khoury College of Computer Sciences. Brodley served as dean of Khoury College from 2014 to 2021. She is a proponent of greater enrollment of women and under-represented minorities in computer science.
Education and career
Brodley is a 1985 graduate of McGill University. At McGill, she initially majored in English, quickly switched to economics, and then switched again to a double major in mathematics and computer science after taking, and enjoying, a computer programming course as a sophomore.
After working as a consultant and computer programmer in Boston, she returned to graduate school at the University of Massachusetts Amherst, initially planning only to earn a master's degree in artificial intelligence but continuing there for a Ph.D. under the supervision of Paul Utgoff.
After finishing her doctorate in 1994, she joined the faculty of the Purdue University School of Electrical and Computer Engineering. She moved from Purdue to Tufts University in 2004 and chaired the Tufts department of computer science from 2010 to 2013, while also holding an affiliation with the Clinical and Translational Science Institute at Tufts Medical Center. She moved again from Tufts to Northeastern in 2014.
Recognition
Brodley was named a Fellow of the Association for Computing Machinery in 2016 "for applications of machine learning and for increasing participation of women in computer science". Brodley is also a fellow of the Association for the Advancement of Artificial Intelligence (AAAI), and the American Association for the Advancement of Science (AAAS).
References
External links
Home page
Year of birth missing (living people)
Living people
American computer scientists
American women computer scientists
McGill University Faculty of Science alumni
University of Massachusetts Amherst alumni
Purdue University faculty
Tufts University faculty
Northeastern University faculty
Fellows of the Association for Computing Machinery
20th-century American women scientists
21st-century American women scientists
American women academics
|
67499060
|
https://en.wikipedia.org/wiki/Familiar%20Linux
|
Familiar Linux
|
Familiar Linux is a discontinued Linux distribution for iPAQ devices and other personal digital assistants (PDAs), intended as a replacement for Windows CE. It can use OPIE or GPE Palmtop Environment as the graphical user interface.
Technical details
It is loosely based on the Debian ARM distribution, but uses the ipkg package manager. It contained Python and XFree86.
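As a rough sketch of day-to-day use, ipkg followed the familiar dpkg/apt pattern of refreshing the package index and then installing packages by name from .ipk archives; the package name below is purely illustrative:

$ ipkg update
$ ipkg install gpe-calendar

The lightweight .ipk format suited the very limited storage of the PDA-class devices Familiar targeted.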
History
In May 2000, Alexander Guy took a kernel that had been worked on by Compaq programmers, built a complete Linux distribution around it, and released the first version of Familiar (v0.1).
It was developed as part of the Handhelds.org project.
Reception
According to a review by IBM, Familiar Linux was not for everyone and needed more polish.
References
External links
Linux.com interview with the original author
Linux
ARM Linux distributions
Linux distributions
|
2076483
|
https://en.wikipedia.org/wiki/Chloe%20O%27Brian
|
Chloe O'Brian
|
Chloe O'Brian is a fictional character played by actress Mary Lynn Rajskub on the US television series 24. An analyst at CTU Los Angeles (and later New York), she is Jack Bauer's most trusted colleague, often doing unconventional and unauthorized favors for him, even at personal risk to herself. As O'Brian, Rajskub appeared in 137 episodes of 24, more than any other actor except series star Kiefer Sutherland, who appeared in all 204 episodes of the series. UGO.com named her one of the best TV nerds. AOL named her one of the 100 Most Memorable Female TV Characters.
Characteristics
O'Brian is exceptionally intelligent; in particular, she displays extraordinary mastery of computer science. Spending most of her time behind a computer terminal, she is rarely sent on field assignments; however, she has demonstrated proficiency with weapons in Days 4, 5, and 8. She works very well under pressure, though it is obvious when she is stressed, and she has always demonstrated a degree of social awkwardness. Despite her lack of social graces, Chloe has gained several friends at CTU and has shown a great deal of loyalty to them. She helped CTU Agent Chase Edmunds take care of his daughter from a previous relationship after the child's mother abandoned Chase and the child. She even tried to pass the child off as her own, so as to keep Chase's then-girlfriend, Kim Bauer, from knowing.
Before the start of Day 6, she and Milo Pressman briefly dated. This was a source of hostility between Milo and Morris.
Toward the end of Day 6, it is revealed that Chloe is pregnant with Morris's child.
In interviews to promote the show's return to Fox with the miniseries, 24: Live Another Day, actor Kiefer Sutherland revealed that Rajskub would be reprising Chloe, but hinted that the relationship between her and Jack Bauer would be more adversarial than before. He also suggested that the reason for this was possibly related to her actions during the show's final season.
Appearances
24: Season 3
Introduced in the third season of the series, Chloe O'Brian is a senior analyst at CTU. Her other experience at CTU includes work as an intelligence agent and Internet protocol manager. She received her BSc in computer science from the University of California, Davis. In 24: The Game, it is revealed that before coming to CTU Los Angeles, she worked at CTU Washington, D.C. with Chase Edmunds.
24: Season 4
Chloe continues to work at CTU as an analyst, helping Jack (at the time, not a member of CTU) follow the terrorists while risking her career. She is detained and fired when Erin Driscoll, the Director of CTU Los Angeles (and the person who fired Jack), discovers that Chloe has been helping Jack behind her back.
When Driscoll resigns and Michelle Dessler steps in, Chloe is reinstated, since none of the other CTU employees were as skilled as she was. When the crisis ends later in the day, she, along with Michelle Dessler, Tony Almeida and former President David Palmer, helps Jack fake his death.
24: Season 5
Chloe is given a love interest, a subordinate named Spenser Wolff. She later finds out that Spenser is a mole (albeit an unknowing one) and turns him in immediately; he later states that he is not a mole but was placed at CTU as an Internal Affairs investigator.
Chloe is the only person who has had continual contact with Jack since he faked his death. She is also the only person involved in the plot to fake his death to survive the murder attempts committed by unknown parties (later discovered to be President Charles Logan and Jack's brother, Graem Bauer), narrowly escaping a car bomb and a subsequent attack by the terrorists who planted it.
She is briefly arrested for aiding Jack, though her skills with computers allow her to avoid the consequences of her actions and return to work at CTU. Edgar Stiles does not survive the events of Day 5, and Chloe watches him die through the glass door. She is visibly shaken by the death of one of her closest friends and regrets having been short with him earlier that day.
Chloe is later forced to work directly against her superiors in Homeland Security in order to help Audrey and Jack. She hacks into a CTU server and interferes with a satellite tracking Audrey's car. She is able to slip into the bathroom just before being caught red-handed, but Homeland Security bureaucrat Miles Papazian is very suspicious that she is deliberately interfering with their orders to apprehend Jack, who is at that time subject to a warrant issued by President Logan.
Homeland Security tricks Chloe into calling a pay phone where Audrey is currently located. They track the call to Van Nuys Airport and alert President Logan that they have located Bauer. Chloe is then put into custody by Miles. However, Chloe steals his keycard and escapes from detention. She grabs her laptop and leaves CTU; on the way out she encounters Shari Rothenburg, but blackmails her into staying quiet. Chloe works with Buchanan from his home, but Papazian is able to track her. Karen Hayes contacts Bill to alert them that Papazian has sent a team to arrest Chloe and that she has mere minutes to leave the house.
Chloe is able to escape and continues to aid Jack from a nearby hotel. Together with Hayes, she works to help Jack locate the passenger on a private plane in possession of the audio tape implicating Logan in Palmer's death. Eventually she confirms that it is the co-pilot who has the tape. Hayes and Buchanan bring Chloe back to CTU shortly thereafter to help Jack make an emergency landing, because Logan has ordered an F-18 to shoot down the plane. Bill Buchanan is able to locate a 4,000 ft. strip on a Los Angeles highway to use for a landing. Jack eventually lands safely and escapes from Logan's marine force with the help of Curtis Manning. Once Jack gets the audio tape back to CTU, Chloe sets to work preparing it for the United States Attorney General. However, unbeknownst to Chloe, Miles destroys the digital recording while she is distracted.
In the final hours of Day 5, Chloe aids Jack in preventing Bierko from firing missiles from a Russian submarine. After the mission ends successfully, Jack tells her that he is going to attempt to get a confession from Logan and will need her help. With help from Mike Novick, Chloe is able to get Jack the necessary papers to board the presidential helicopter as a co-pilot. After Jack places a listening device on Logan that records his confession to Martha, Chloe transmits the recording to the Attorney General.
When the crisis ends, Bill brings her something from Edgar's locker, a picture of Chloe and Edgar together. An emotionally spent Chloe leaves CTU for the day with her ex-husband Morris O'Brian, whom she had enlisted to help her at CTU.
24: Season 6
After the failed assassination attempt on Assad via military helicopter, Chloe finds an image of Jack rescuing the terrorist. She shares the information with Bill Buchanan, who concedes that from Jack's perspective, a rescue was the right action. Chloe suggests that if Jack is right, then Fayed should be monitored. Buchanan agrees, and CTU subsequently obtains information that Fayed was indeed behind the latest wave of bombings.
Chloe is later able to recover data from the hard drive of one of Fayed's men showing that he was particularly interested in a specific set of terrorists Fayed demanded be set free. Later, after Jack notices Curtis Manning's demeanor around Assad, he asks her to find out if there is a past connection between the two men. She later confirms that Manning's military unit took heavy losses at Assad's hands and that Assad beheaded two members of Manning's unit on television. Jack uses that information to prevent Manning from killing Assad, though the price is Manning's life.
When chatter is intercepted between Fayed and Darren McCarthy, a profile of the man able to arm the four remaining suitcase nukes is sent, though the message is badly degraded. Morris works on reconstructing the image, while Milo finds information that Morris's brother has been exposed to the radiation from the Valencia bomb and is in a hospital. Chloe informs him, but while Morris wants to go to his brother's side at once, Chloe insists he retrieve the data first. After Morris downloads an illegal program that will speed up the reconstruction of the data, he leaves, and Chloe kisses him goodbye. Chloe then monitors the retrieval, only to find that the engineer is, in fact, Morris. Bill has her call Morris via cell phone with Jack on the line, and Chloe jumps nervously at the sound of gunfire as McCarthy corners and kidnaps Morris from his car, forcing him into his own with the help of McCarthy's girlfriend, Rita.
Chloe works to help locate Morris, who was tortured by Fayed into programming a device which would allow the detonation of the suitcase nukes. Chloe gives Jack the needed information on how to disable the suitcase nuke Fayed left behind after CTU assaulted his safe house. Afterwards, Jack and Chloe have a reunion at CTU where she thanks him for saving Morris's life, and she tells him she's glad Fayed didn't kill him. Jack thanks her for everything. She later visits Morris in the infirmary, but her attempts at compassion are dismissed by Morris's claims that he is responsible for Fayed's ability to arm the remaining suitcase nukes. He tells her to go away. She later goes back to say they have a lead, and asks him to return to duty. He dismisses it as a ruse to get him to go back to work, and admits he's a coward. Chloe retorts that he's pissing her off, and Morris says she can add it to his list of failings. Chloe slaps him, and he tells her if she wants to save somebody, save somebody who is worth saving. She tries to slap him again, but he blocks it. She tells Morris to stop feeling sorry for himself and get back to work.
Later, she finds Morris is not at his work station. She investigates his palmtop and calls his sponsor, who says she hasn't been in touch with Morris for years. Chloe confronts Morris in the men's room. He tells her he has a different sponsor now – whom he did speak to on the phone – and berates Chloe for being 'obsessive' about his contacts. However, when Chloe leaves, he drains what is left of a bottle of whiskey down the sink.
When Milo suspects that Morris has been drinking again, he asks Chloe to check Morris for any signs of alcohol. She reluctantly agrees, walks over to him and kisses him; when he asks "What was that for?", she responds, "Just checking your breath."
Later, she helps Jack steal the bomb's schematics. However, Morris discovers her act and forces her to tell Bill. When Bill orders everyone to help Jack Bauer under presidential order, he excludes her, saying "I don't trust you", but he changes his mind in the next episode, telling her he needs his best people working on this.
Chloe is upset and angry at Morris for forcing her to tell Bill the truth, and the pair argue until Chloe's anger gets the better of her and she throws the fact that he armed nuclear weapons for terrorists back in his face. Chloe insists that she didn't mean it, but Morris puts in for a transfer out of Com so as to no longer work alongside Chloe. She is visibly upset and scolds herself for pushing the issue too far. Later, she attempts to apologize, but Morris tells her that their relationship is over. As Morris goes back to work, Chloe bursts into tears. Minutes later, Nadia notices that Chloe is not at her station, and Morris claims not to know where she is.
Later, Chloe confronts Morris over his decision to break up. Morris tells Chloe that he ended their relationship because he felt that neither of them would be able to move past the fact that he had armed the nuclear weapons. Soon afterwards, CTU comes under attack and the entire staff, including Chloe, is taken hostage. Eventually Nadia, Jack, and Morris attack the men holding them hostage, and Chloe commends Nadia on her bravery.
While at work, Chloe faints and is taken to the CTU medical department. Near the end of the final episode, it is revealed that she is pregnant. Morris, presumably the father, appears pleased at the news, and the two resume their relationship once again (in typical Morris fashion, he dismisses their breakup with a "Sod that!" comment).
24: Season 7
Chloe does not appear in 24: Redemption, the two-hour TV prequel that aired on Sunday, November 23, 2008; she and Bill Buchanan first appear in the third episode of the season.
Actress Mary Lynn Rajskub revealed her role in the upcoming seventh season: “I show up, time has passed and I have a 4-year-old and a wedding ring, [and I'm] calling Morris (Carlo Rota) while I'm busy. So far he's taking care of the baby, but he'll be around." "We're a rogue operation – we are working outside of the government to uncover the conspiracy within the government." When asked by Matobo if she is a federal agent, she replies, "No, I'm a stay-at-home-mom." She and Morris have named their son Prescott.
Kiefer Sutherland commented "Chloe is crankier than ever; the dammit count is pretty high."
Chloe was working with Bill Buchanan and Tony Almeida, who was undercover with Emerson's gang, to uncover the conspiracy within the US government that had been supporting General Juma and his regime in Sangala, Africa. Chloe co-ordinates numerous operations for the team until they become compromised through their efforts to secure the CIP device used by Juma's henchman, Dubaku, to launch attacks on America. After this, Chloe collaborates with the FBI, working from their headquarters in Washington.
Later on, after Dubaku is captured and the threat seemingly ended, Jack is informed by Tony of another impending attack. Jack enlists Chloe's help while he follows a lead on Ryan Burnett, a US-based conspirator named as a traitor in the files retrieved from Dubaku. Jack asks Chloe to erase Burnett's name from the files to buy him time to get to Burnett and interrogate him. Janis Gold, one of the FBI's analysts, becomes suspicious of Chloe's activity and finds out what she has done. Janis reports her to Agent Larry Moss, who has Chloe arrested and detained. Chloe is later released when her husband Morris cuts an immunity deal for her, and the two go home to get some sleep. This plot development was partially to work around Rajskub's pregnancy; of the earlier episodes, Rajskub said, "I sit behind my computer and every time I stand up, they yell cut, and bring in a body double." Chloe's arrest allowed her to be temporarily written out of the show so that Rajskub could go on maternity leave; meanwhile, Morris takes over her role in the story.
Chloe is called back into action by Jack at around 3:30 AM, when CTU's servers are dug out of storage and made available for FBI use; Chloe is the person best suited to integrating them and getting them running. Jack tells her that Tony has betrayed them, but (as of 4 AM) has not informed her of his own condition. Jack eventually informs her of his condition, but asks her to remain focused and help them find Tony and the pathogen. Thanks to Chloe's help, Jack is able to find and capture Tony. However, the tables are turned and Jack is kidnapped by Tony. When Kim recovers a laptop from one of Tony's lackeys, Chloe is able to track Jack's location and save him.
At the 7:30 am mark, Chloe has decided to stay and be there for Jack in his final hours.
24: Season 8
In Season 8, Chloe is re-employed by the revived CTU, but at times struggles with the new hardware, software and interfaces; dialogue between her and Head of CTU Brian Hastings (Mykelti Williamson) indicates that Morris has lost his job and Chloe is keeping the family afloat. She is also subordinate to Dana Walsh (Katee Sackhoff), who holds Chloe's usual position of Head Analyst, causing the insecure Chloe additional stress.
Chloe quickly goes head-to-head with her co-workers when evidence is uncovered that implicates journalist Meredith Reed (Jennifer Westfeldt) in an assassination plot against President Omar Hassan (Anil Kapoor). Chloe is suspicious of the ease with which CTU obtained this evidence, feeling that Reed might be being framed by the actual conspirators. Hastings refuses to follow up on her suspicions, even threatening her job if she takes time to investigate, so she recruits Jack to do it instead. As it turns out, Chloe's instincts prove correct, leading CTU to the actual assassin, and Hastings formally apologizes and commends her actions in the official logs. Later, after Dana's behavior in relation to former associate Kevin Wade (Clayne Crawford) affects her job performance, Chloe is reinstated as Head Analyst, with Dana now reporting to her.
In the 183rd episode of the show (Season 8, 6:00 am – 7:00 am), Chloe surpassed the 115-episode count of Tony Almeida (Carlos Bernard), becoming the character who has appeared in the most episodes of the show other than Jack Bauer. Perhaps appropriately, just after 8 am that day, Chloe is promoted to acting Director of CTU, replacing Hastings. This puts Chloe in an unusual position in regard to Jack: normally she helps him carry out clandestine operations in defiance of their mutual boss, but now she is the boss he is defying. This new dynamic is put to the test within two hours of her promotion, when Jack steals a helicopter to pursue justice in direct defiance of orders from President Allison Taylor (Cherry Jones). Chloe sticks with the responsibilities of her position, ordering pursuit instead of supporting Jack; this is the first time in several years that the two have pursued clashing goals.
At the end of the 8th season (3:00 pm to 4:00 pm), Chloe manages to talk Jack out of assassinating the Russian president in revenge for a friend's murder earlier that day and gets him to agree to expose the conspiracy her way. Jack orders Chloe to shoot him in order to free herself from suspicion and to expose the cover-up of Hassan's murder. Chloe refuses to go through with it until Jack points the gun at his own head, forcing her to either shoot him or let him kill himself. She shoots him in the shoulder. Coordinating with Cole Ortiz, she tries to get evidence collected by Jack that is vital to exposing the cover-up, but is stopped by Jason Pillar and CTU. After Jack is saved from death by President Allison Taylor's order, he calls Chloe and makes her promise to protect his family, and, along with President Taylor, she plans to buy him as much time as she can to flee the country before the Americans and Russians come after him for his actions. Jack thanks her for all she has done for him since she joined CTU and forgives her for her actions during the day. Chloe has the distinction of speaking the final line in the series, saying "shut it down" to Arlo Glass in regard to the CTU drone. As Jack looks one last time toward CTU's monitoring, Chloe gives one final tearful look at Jack on CTU's monitor screen before the clock counts down to zero.
In the Season 8 DVD bonuses, it is revealed that Chloe was arrested soon after the events of Day 8 by the FBI for covering up Jack's escape. She returned in the miniseries 24: Live Another Day, which started airing in May 2014.
24: Live Another Day
In 24: Live Another Day, four years after the end of Day 8, Chloe is situated in London and has taken on a new, darker appearance. Since being arrested for helping Jack escape, she has become a member of the free information movement and is working with a hacker group named Open Cell, which devotes itself to exposing government secrets. Her new goals are a stark contrast to the loyal CTU agent Chloe once was; it is suggested she was betrayed by the American government, and at one point she tells Jack not to judge her "after what [she's] been through."
At the start of the miniseries, she is being detained by the CIA for leaking thousands of classified DOD documents. Jack, allowing himself to be captured to gain access to the facility, frees her and then follows her to the hideout of Open Cell's leader, hacker Adrian Cross (Michael Wincott). Chloe convinces her colleagues to assist Jack in locating Derrick Yates (Joseph Millson), a former member of their movement whom Jack believes to be involved in a planned assassination attempt on President James Heller (William Devane). By this time, Yates had already managed to commandeer a U.S. drone and fire upon a military convoy in Afghanistan, killing two American and two British soldiers. Together, Jack and Chloe track Yates to an apartment complex, but are circumvented by the CIA's efforts to apprehend them. After losing Yates, Jack evades capture and escapes with Chloe.
In the show's third episode, Jack discovers Yates' corpse, left behind by his female companion, Simone Al-Harazi (Emily Berrington). Chloe loses sight of Al-Harazi after being distracted by a passing family, after which she reveals to Jack that Morris and Prescott were killed as a result of her knowledge of Jack's disappearance following Day 8. Together, they return to Open Cell's headquarters in order to establish a cover for Jack to infiltrate the U.S. embassy in London and question Lieutenant Chris Tanner (John Boyega), the man Yates had framed for the drone attack. However, Cross betrays Jack by botching Jack's cover; Chloe senses something amiss and warns him to escape, giving Jack enough time to create a diversion and enter the embassy.
Throughout the rest of the day, Chloe aids Jack as he searches for Margot Al-Harazi. Chloe manages to hack into the video feed from Al-Harazi's drone cameras to fake President Heller's death, and then tracks her with Adrian Cross's help. Jack kills Al-Harazi and stops an attack on Waterloo station, but Chloe refuses to have anything more to do with him, telling Jack it was good working with him again before rejoining Cross.
After Cross receives the override device from Steve Navarro, Chloe tries to steal it but is forced by Cross to give it back. She is shocked to discover that Cross is working with Jack's old enemy Cheng Zhi, who forces Chloe to fix the device and murders Cross; before dying, Cross admits that he had learned the deaths of Morris and Prescott were actually an accident and kept this from her so she wouldn't leave him. A horrified Chloe can only watch as Cheng uses the override device to order the sinking of a Chinese aircraft carrier by a United States submarine to spark a war between the two countries, but she manages to leave behind a recording of Cheng so that Jack knows who is behind everything. As she is being transported, Chloe escapes and contacts Jack. Remorseful over her decisions, Chloe offers Jack her help in guiding him through Cheng's hideout, telling him she's the only friend he has left. Reluctant to trust her, Jack agrees. Chloe runs satellite surveillance for Jack and Belcheck, but they lose contact with her shortly before Jack captures Cheng. Shortly afterwards, Jack gets a call from the Russians offering to trade Chloe, whom they have kidnapped, for him.
The next morning, Jack meets with the Russians and willingly trades himself for Chloe and his family's safety. Before leaving, Jack affirms that Chloe is his best friend and asks her to look after Kim. As Jack leaves with the Russians, a reluctant Chloe drives away with Belcheck.
Project CHLOE
Project CHLOE, a Department of Homeland Security surveillance technology development program aimed at protecting airliners from terrorist missiles, was named after Chloe O'Brian because 24 is former Homeland Security Secretary Michael Chertoff's favorite show.
References
External links
on the official 24 website
24 (TV series) characters
American female characters in television
Fictional characters on the autism spectrum
Fictional government agents
Fictional hackers
Fictional prison escapees
Television characters introduced in 2003
Television sidekicks
|
9530029
|
https://en.wikipedia.org/wiki/Comparison%20of%20audio%20synthesis%20environments
|
Comparison of audio synthesis environments
|
Software audio synthesis environments typically consist of an audio programming language (which may be graphical) and a user environment to design/run the language in. Although many of these environments are comparable in their abilities to produce high-quality audio, their differences and specialties are what draw users to a particular platform. This article compares noteworthy audio synthesis environments, and enumerates basic issues associated with their use.
Subjective comparisons
Audio synthesis environments comprise a wide and varying range of software and hardware configurations. Even different versions of the same environment can differ dramatically. Because of this broad variability, certain aspects of different systems cannot be directly compared. Moreover, some levels of comparison are either very difficult to objectively quantify, or depend purely on personal preference.
Some of the commonly considered subjective attributes for comparison include:
Usability (how difficult is it for beginners to generate some kind of meaningful output)
Learnability (how steep the learning curve is for new, average, and advancing users)
Sound "quality" (which environment produces the most subjectively appealing sound)
Creative flow (in what ways does the environment affect the creative process - e.g. guiding the user in certain directions)
These attributes can vary strongly depending on the tasks used for evaluation.
Some other common comparisons include:
Audio performance (issues such as throughput, latency, concurrency, etc.)
System performance (issues such as bugginess or stability)
Support and community (who uses the system and who provides help, advice, training and tutorials)
System capabilities (what is possible and what is not possible [regardless of effort] with the system)
Interoperability (how well does the system integrate with other systems from different vendors)
Building blocks of sound and sound "quality"
Different audio software often has a slightly different "sound", because the basic building blocks (such as sine waves, pink noise, or the FFT) can be implemented in different ways, each with slightly different aural characteristics. Although listeners may simply prefer one system's "sound" over another, output quality can be assessed more objectively by using audio analyzers in combination with the listener's ears, the aim being to arrive at what most would agree is as "pure" a sound as possible.
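To illustrate why nominally identical building blocks can differ, the following minimal C sketch (hypothetical, not drawn from any environment listed here; the sample rate and frequency are assumed values) compares two textbook sine oscillators: a direct per-sample call to sin() and a cheaper two-term recurrence. Seeded identically, the two slowly diverge through accumulated rounding error:

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double pi = acos(-1.0);
    const double sr = 48000.0;            /* assumed sample rate, Hz */
    const double f  = 440.0;              /* assumed test frequency, Hz */
    const double w  = 2.0 * pi * f / sr;  /* phase increment per sample */

    /* Recurrence oscillator: y[n] = 2*cos(w)*y[n-1] - y[n-2],
       seeded with the exact samples for n = -1 and n = -2. */
    const double k = 2.0 * cos(w);
    double y1 = sin(-w);
    double y2 = sin(-2.0 * w);
    double max_err = 0.0;

    for (long n = 0; n < 10L * 48000L; n++) {  /* ten seconds of audio */
        double direct = sin(w * (double)n);    /* reference oscillator  */
        double y = k * y1 - y2;                /* recurrence oscillator */
        double err = fabs(y - direct);
        if (err > max_err)
            max_err = err;
        y2 = y1;
        y1 = y;
    }

    /* Small but nonzero: the two "sine" generators are not
       bit-identical even though both nominally produce 440 Hz. */
    printf("max divergence over 10 s: %.3e\n", max_err);
    return 0;
}

Real environments face the same kind of trade-off among table-lookup, polynomial, and recurrence oscillators, which is one reason analyzers and critical listening are used together.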
User interface
The interface to an audio system often has a significant influence on the creative flow of the user, not because of what is possible (the stable/mature systems listed here are fully featured enough to be able to achieve an enormous range of sonic/compositional objectives), but because of what is made easy and what is made difficult. This is again very difficult to boil down to a brief comparative statement. One issue may be which interface metaphors are used (e.g. boxes-and-wires, documents, flow graphs, hardware mixing desks).
General
Programming language features
Data interface methods
Interfaces between the language environment and other software or hardware (not user interfaces).
Technical
References
See also
List of music software
Audio programming languages
Electronic music software
Multimedia software comparisons
Software synthesizers
|
1098088
|
https://en.wikipedia.org/wiki/Ted%20Henter
|
Ted Henter
|
Ted Henter is an American computer programmer and businessperson known for having invented the JAWS screen reader for the blind. He studied engineering, but learned computer programming and started his own business after becoming blind in a car accident in 1978, which put an end to a promising career as an international motorcycle racer.
In 1987, he teamed up with businessperson Bill Joyce, and together they founded Henter-Joyce in St. Petersburg, Florida. Henter served as president, leading the operation and providing technology direction, while Joyce acted as a silent partner. Henter-Joyce produced JAWS, a screen reader for personal computers using MS-DOS and, later, Microsoft Windows.
After becoming blind, Henter rediscovered waterskiing and started competing in waterskiing events. He won six of the seven competitions he entered in the United States and won twice in international competition. He retired in 1991 after winning the overall gold medal in the United States and World Championships for Disabled Skiers.
Henter-Joyce merged with Arkenstone and Blazie Engineering in 2000 to form Freedom Scientific. Henter remains on the board of directors of Freedom Scientific, and in 2002 he founded Henter Math to produce software that helps the "pencil-impaired" with mathematics.
References
External links
Biography
Henter Math
American computer programmers
Living people
Blind people from the United States
American water skiers
American motorcycle racers
250cc World Championship riders
Year of birth missing (living people)
|
1363992
|
https://en.wikipedia.org/wiki/Roof%20and%20tunnel%20hacking
|
Roof and tunnel hacking
|
Roof and tunnel hacking is the unauthorized exploration of roof and utility tunnel spaces. The term carries a strong collegiate connotation, stemming from its use at MIT, where the practice has a long history. It is a form of urban exploration. Some participants use it as a means of carrying out collegiate pranks, by hanging banners from high places or, in one notable example from MIT, placing a life-size model police car on top of a university building. Others are interested in exploring inaccessible and seldom-seen places; that such exploration is unauthorized is often part of the thrill. Roofers, in particular, may be interested in the skyline views from the highest points on a campus. On August 1, 2016, Red Bull TV launched the documentary series URBEX – Enter At Your Own Risk, which also chronicles roof and tunnel hacking.
Vadding
Vadding is a verb which has become synonymous with urban exploration. The word comes from MIT where, for a time in the late 1970s, some of the student population was addicted to a computer game called ADVENT (also known as Colossal Cave Adventure). In an attempt to hide the game from system administrators who would delete it if found, the game file was renamed ADV. As the system administrators became aware of this, the filename was changed again, this time to the permutation VAD. The verb vad appeared, meaning to play the game. Likewise, vadders were people who spent a lot of time playing the game.
Thus, vadding and vadders began to refer to people who undertook actions in real life similar to those in the game. Since ADVENT was all about exploring tunnels, the MIT sport of roof and tunnel hacking became known as vadding.
Today, the word vadding is rarely used at MIT (usually only by old-timers) and roof and tunnel hacking has returned as the preferred descriptive term. Those who participate in it generally refer to it simply as "hacking".
Roof hacking
Many buildings at American universities have flat roofs, whereas pitched roofs designed to shed snow or heavy rain present safety challenges for roof hackers. Entry points, such as trapdoors, exterior ladders, and elevators to penthouses that open onto roofs, are usually tightly secured. Roofers bypass locks (by lock picking or other methods), or use unsecured entry points to gain access to roofs. Once there, explorers may take photographs or enjoy the view; pranksters may hang banners or execute other sorts of mischief.
Tunnel hacking
Some universities have utility tunnels to carry steam heat and other utilities. Utility tunnels are usually designed for infrequent access for maintenance and the installation of new utilities, so they tend to be small and often cramped. Sometimes, utilities are routed through much larger pedestrian access tunnels (MIT has a number of such tunnels, reducing the need for large networks of steam tunnels; for this reason, there is only one traditional steam tunnel at MIT, built before many buildings were connected).
Tunnels range from cold, damp, and muddy to unbearably hot (especially during cold weather). Some are large enough to allow a person to walk freely; others are low-ceilinged, forcing explorers to stoop, bend their knees, or even crawl. Even large tunnels may have points where crisscrossing pipes force an explorer to crawl under or climb over a pipe — a highly dangerous activity, especially when the pipe contains scalding high-pressure steam (and may not be particularly well insulated, or may have weakened over the years since installation).
Tunnels also tend to be loud. Background noise may prevent an explorer from hearing another person in the tunnel — who might be a fellow explorer, a police officer, or a homeless person sheltering there. Tunnels may be well lit or pitch-dark, and the same tunnel may have sections of both.
Tunnel access points tend to be in locked mechanical rooms where steam pipes and other utilities enter a building, and through manholes. As with roofs, explorers bypass locks to enter mechanical rooms and the connected tunnels. Some adventurers may open manholes from above with crowbars or specialized manhole-opening hooks.
Shafting
Buildings may have maintenance shafts for passage of pipes and ducts between floors. Climbing these shafts is known as shafting. The practice is similar to buildering, which is done on the outsides of buildings.
Regular use of a shaft can wear down insulation and cause other problems. Hackers sometimes take special trips into the shafts to make repairs with duct tape or other equipment.
A dangerous variant of shafting involves entering elevator shafts, either to ride on the top of the elevators, or to explore the shaft itself. This activity is sometimes called elevator surfing. The elevator is first switched to "manual" mode, before boarding or exiting, and back to "automatic" mode after, to allow normal operation (and avoid detection). Switching elevators, getting too near the ceiling (or under the elevator) or the counterweight (or cables), or otherwise failing to follow safety precautions can lead to death or injury. Crackdowns may increase in both frequency and harshness, both legally and with respect to physical access to coveted locations.
Some shafts (such as those intended for but lacking an elevator) are accessible by use of rope but are not actually climbable by themselves.
Dangers
Legal dangers
Universities generally prohibit roof and tunnel hacking, either by explicit policies or blanket rules against entry into non-public utility spaces. The reasoning behind these policies generally stems from concern for university infrastructure and concern for students. Consequences vary from university to university; those caught may be warned, fined, officially reprimanded, suspended, or expelled. Depending on the circumstances, tunnelers and roofers may be charged with trespassing, breaking and entering, or other criminal charges.
MIT, once a vanguard of roof and tunnel hacking (books have been published on hacks and hacking at MIT), has been cracking down on the activity. In October 2006, three students were caught hacking near a crawl space in the MIT Faculty Club, arrested by the MIT police, and later charged with trespassing, breaking and entering with the intent to commit a felony. The charges raised an outcry among students and alumni who believed that MIT ought to have continued its history of handling hacking-related incidents internally.
Charges against those students were eventually dropped. In June 2008, another graduate student was arrested and faced charges of breaking and entering with intent to commit a felony and possession of burglarious instruments after being caught after-hours in a caged room in a research building's basement.
Risks to building infrastructure
Utility tunnels carry everything from drinking water to power to fiber-optic network cabling. Some roofs have high power radio broadcast or radio reception equipment and weather-surveillance equipment, damage to which can be costly. Roofs and tunnels also may contain switches, valves, and controls for utility systems that were not designed to be accessible to the general public.
Due to security concerns there has been a trend towards installing intrusion alarms to protect particularly hazardous or high-value equipment.
Personal hazards
Roofs are dangerous; aside from the obvious risk of toppling over the edge (especially at night, in inclement weather, or after drinking), students could be injured by high-voltage cabling or by microwave radiation from roof-mounted equipment. In addition, laboratory buildings often vent hazardous gases through exhaust stacks on the roof.
Tunnels can be extremely dangerous — superheated steam pipes are not always completely insulated; when they are insulated, it is occasionally with carcinogenic materials like asbestos. Opening or damaging a steam valve or pipe can be potentially deadly. Steam contains significantly more thermal energy than boiling water, and transfers that energy when it condenses on solid objects such as skin. It is typically provided under high pressure, meaning that comparatively minor pipe damage can fill a tunnel with steam quickly. In 2008, a high-pressure steam pipe exploded in the subbasement of Building 66 at MIT, apparently due to a construction defect. The explosion and ensuing flood caused extensive damage and lethal conditions in the subbasement.
Confined spaces contain a range of hazards, from toxic gases like hydrogen sulfide and carbon monoxide to structures that may flood or entrap an adventurer. An explorer who enters a tunnel via a lock bypass or an inadvertently left-open door may find themselves trapped if the door locks behind them, quite possibly in an area with no cell phone reception and no one within earshot.
See also
Columbia University tunnels
Elevator surfing
Hacks at the Massachusetts Institute of Technology
Hacker (term)
Rooftopping
Urban exploration
References
External links
MIT hacks site; deals primarily with pranks, some of which involve a roof hacking component
Infiltration.org page on college tunnels
A link to urban exploring at the University of Virginia. Contains map of steam tunnels
Daily Princetonian article on a student injured in a fall while exploring a tower in the University's chapel
UCSDSecrets, an introductory blog about UCSD. including its tunnels
Institute Historian, T. F. Peterson, Nightwork: A History of Hacks and Pranks at MIT (revised edition), MIT Press, Cambridge, Massachusetts. 2011. — Extensive documentation, many photographs, special essays
"Abandon Hope, Part 1" and "Abandon Hope, Part 2", a two-part article on the Columbia University tunnels
Types of climbing
Urban exploration
Subterranea (geography)
|
12040277
|
https://en.wikipedia.org/wiki/Jawbone%20%28company%29
|
Jawbone (company)
|
Jawbone was an American privately held wearable technology company headquartered in San Francisco, California. Since June 19, 2017, it has been undergoing liquidation via an assignment for the benefit of creditors. It developed and sold wristbands, portable audio devices, and Bluetooth headsets. Jawbone marketed its wearable products as part of the Internet of things.
History
Alexander Asseily and Hosain Rahman, who met as Stanford University undergraduates, founded Aliph (which would later become Jawbone) in March 1998 in San Francisco.
Aliph
According to later legal documents, the company was originally called AliphCom and formed in March 1998 during the dot-com bubble.
In 2002, Aliph won a contract with DARPA, the U.S. military's research arm, to research ways for combat soldiers to communicate with each other in difficult conditions. The pair began to develop a mobile phone headset designed to suppress background noise.
After undisclosed seed funding, about $1.5 million was raised in June 2002.
In 2006, Aliph released a YouTube demonstration of a wireless version of its Jawbone headset and announced that Yves Béhar would be hired as vice president and creative director.
The company's earliest venture capital investor was the Mayfield Fund, which invested $0.8 million in December 2006.
In January 2007, Aliph revealed its wireless Jawbone headset at the Consumer Electronics Show.
In July 2007, Khosla Ventures made a $5 million investment in the company.
Expansion (2008 to 2010)
At the beginning of 2008, Aliph received another major investment of $30 million from Sequoia Capital.
Aliph announced another Bluetooth headset, the New Jawbone, in May 2008; it became available for sale at the Apple Store in summer 2008. Aliph promoted the New Jawbone by offering a $20 discount to drivers who had been cited for using mobile phones while driving, after the state of California passed legislation to ban the use of handheld phones by drivers.
In April 2009, Aliph announced a third edition of its Bluetooth headset, Jawbone Prime. In January 2010, Aliph announced the Jawbone Icon, and software for users to customize their Jawbone device with free applications and updates. The company announced collaboration with Cisco Systems to use its software and devices with Cisco's IP phones. The partnership included an exclusive Jawbone Icon for Cisco Bluetooth headset.
In 2010, Aliph released its first non-headset product, Jambox: a wireless Bluetooth speaker and speakerphone.
In December 2010, the Jawbone was named a design of the decade by the Industrial Designers Society of America.
New name (2011)
Throughout 2011, Jawbone closed three different rounds of funding – first securing a $49 million investment from venture capital firm Andreessen Horowitz in March, then $70 million from a group of investors advised by JP Morgan Asset Management, and finally closing out the year with an announcement of $40 million combined from Deutsche Telekom, Kleiner Perkins Caufield & Byers, private investor Yuri Milner, and investors advised by JP Morgan Asset Management.
In January 2011, the company released its fifth Bluetooth headset, Jawbone Era, and dropped the name Aliph to officially adopt its “Jawbone” moniker. Later that year, Jawbone unveiled a new Bluetooth headset concept, Icon HD + The Nerd. The company also announced its Companion for Android app, which allows Android mobile phone users to view their headset's remaining battery life on their phone, hear calendar alerts, and dial into conference calls.
That year Jawbone launched LiveAudio for Jambox, a free update to recreate the effects of live music.
Also in 2011, Jawbone announced (and then paused production of) its lifestyle tracking system, UP by Jawbone. The UP wristband and accompanying app were first announced at the TED conference in Scotland in July 2011. Highly anticipated by Jawbone fans and the media, the UP lifestyle tracker and app system launched in November 2011. FastCompany Design reported, “If UP works, it could augur a huge shift in the way we approach weight loss and staying healthy.” Jawbone halted production of the product a month later in response to widespread customer claims of issues with charging, syncing, and in some cases, product failure. The company's guarantee offered purchasers of UP full refunds for any reason, even if they chose to keep their wristbands.
In December 2011, Jawbone teamed up with Snoop Dogg and Brazilian rapper Marcelo D2 on a single titled “Obrigado, Brazil.” The video featured the Jambox.
2012 to 2013
By February 2012, Jawbone was valued at an estimated $1.5 billion.
In May 2012, Jawbone introduced Big Jambox, and in August 2012, custom color combinations for Jambox.
In September 2012, with the iPhone 5, Apple introduced a proprietary Lightning connector, incompatible with previous generations of the iPhone. This prompted a shift from plug-in audio docks to wireless speakers that supported Bluetooth and AirPlay. Jawbone had an advertising campaign and released a YouTube video showcasing exploding speakers with outdated audio docks.
In November 2012, Jawbone released a new UP and a redesigned iOS app for UP. Since original UP users had been refunded (even if they kept the device), they did not receive a new UP. Jawbone also used the intervening time to add new features to its software, making UP a more powerful life-tracking device.
In February 2013, Jawbone completed an acquisition of design firm Visere and MassiveHealth, best known for its crowd-sourced food app, The Eatery.
In March 2013, Jawbone announced that UP would be available internationally. The company also launched the Android app for UP. However, this app was not compatible with most Android tablets, such as the Nexus 7. A month later Jawbone announced software to allow developers to access Jawbone UP data and integrate their apps.
Also in April 2013, Jawbone announced its acquisition of BodyMedia, a maker of wearable health tracking devices.
In May 2013 Jawbone added Marissa Mayer, CEO of Yahoo!, and Robert Wiesenthal, COO of Warner Music Group, to its board of directors.
Mindy Mount (from Microsoft) became president of the company at the same time.
Also in May 2013, Jawbone announced custom colors for the Big Jambox.
In September 2013, Jawbone announced the Mini Jambox, and a water-resistant leather case for it.
In 2012, CEO and founder Rahman was named to Fortune magazine's 40 Under 40. The following year, he was among Fast Company magazine's most creative people and Vanity Fair magazine's New Establishment, and was recognized as one of TIME 100's most influential people of 2014. He spoke at TED, DLD, LeWeb, SXSW and the D:Mobile Conference.
In September 2013, Jawbone raised $93 million in debt financing from Silver Lake, Fortress Investment Group, J.P. Morgan and Wells Fargo, plus $20 million in equity funding from previous investors.
2014 to 2017
In February 2014, a round of investment estimated at $250 million led by the firm of Suhail Rizvi was reported.
Mindy Mount left as president about the same time, after less than a year.
In January 2015 in advance of the BlackRock deal, Alexander Asseily resigned as chairman of the company and from its board of directors.
By early 2015, a lawsuit from manufacturer Flextronics over $20 million of payment disagreements was reportedly settled.
In May 2015 Jawbone filed a lawsuit against Fitbit in California State Court, accusing Fitbit of hiring away employees who took confidential and proprietary information along with them.
In April 2015, the company closed $300 million in debt financing from investment management firm BlackRock and ended previous loan agreements.
After a market research firm estimated Jawbone had only 2.8% of the market for fitness trackers, lay-offs were reported in November 2015.
In April 2016, the United States International Trade Commission sided with Fitbit in a patent dispute. Another round of rulings in August 2016 was mixed, but analysts said "the tide has turned in favor of Fitbit". According to a report by Business Insider, as of September 2016, Jawbone had almost no inventory left and had struggled to pay one of its customer service agencies. By July 2017, The Information reported that "Jawbone is shuttering operation after years of financial pressure. The bluetooth headset-turned-speaker-turned-wearables maker faced stiff competition from the likes of Apple and Fitbit, the latter of which supposedly attempted to buy its rival last year."
In July 2017, Jawbone announced it would liquidate its assets. Since the app is still available for at least some phones (Android) and the servers seem to be running, it is unclear who has access to collected personal data.
Products
UP
Announced in November 2011, UP by Jawbone was introduced as the company's first non-audio activity tracker. It consisted of a flexible rubber-coated wristband and an accompanying iPhone and Android app.
UP allows users to track their sleep, eating habits, and daily activity including steps taken and calories burned. The wristband is water-resistant, with a rechargeable battery. The wristband features a vibration motor that can be programmed as an alarm to wake users, or act as a reminder when users have been sedentary too long. The UP app includes social-networking software to add motivation. Jawbone partnered with fitness-related apps including IFTTT, LoseIt!, Maxwell Health, MapMyFitness, MyFitnessPal, Notch, RunKeeper, Sleepio, Wello and Withings.
In a 2014 test by a sleep specialist, the Jawbone UP "produced an impressive amount of data" which however showed "little resemblance to [the subject's] actual night of sleep".
UP24
In November 2013, Jawbone announced the UP24 and a software update. With similar dimensions to the UP, the UP24 features the ability to sync wirelessly via Bluetooth to the updated app. The UP24 has a 7-day battery life (depending on use), and the previous 3.5mm connector was replaced by a 2.5mm connector. The software provided more real-time information to help motivate users. The app also suggests goals based on the user's habits. Live notifications are provided on the UP24 so users will get push notifications when they get close to their goals. A new activity log gives a snapshot of a user's day and when the UP24 last synced. The 3.0 app also will automatically analyze sleep data from the previous night if users forget to press the button for sleep mode, and lets users edit sleep/wake times.
The UP24 had a slightly different texture on the skin, but the same wrap-around design. The button end had a softer and more rounded piece of metal. The indicator lights appear the same as on the UP.
In early July 2015, PC Magazine listed UP24 as one of the best fitness trackers for 2015.
UP Move
In November 2014, Jawbone released the UP Move with Smart Coach, a guide that processes the user's data in order to provide advice. Unlike other UP products, it is not a wristband, but rather a clip that can be worn on a belt or attached to clothing or wearable accessories.
UP for Groups
In December 2014, Jawbone released UP for Groups, software for corporate wellness programs.
Jawbone's Up for Groups program only shares information in the aggregate with employers.
UP2, UP3, UP4
In April 2015, the UP2 was introduced as a watch-clasp style bracelet unit with additional features over the UP24. It was later relaunched in September 2015 with a hook and loop design after critical reviews by customers. The new features on the UP2 include Smart Alarm, Idle Alert, and Automatic Sleep Detection. The Smart Alarm feature allows the user to be gently awakened with a vibration on the device when the unit detects the optimal moment within 30 minutes of a preset time. Along with the UP3 and UP4, the UP2 was designed with more attention to fashion, offering several available colors, depending on the model. The 2.5mm charging dongle was replaced with a USB-powered magnetic connector.
Almost simultaneously, the UP3 was launched, offering Jawbone's first major hardware upgrade with the addition of a heart monitor. In addition to the functionality of the UP2 and Heart Health Monitoring, the UP3 automatically detects when the user is sleeping.
The UP4 is almost identical to the UP3, with the added feature of being able to make NFC purchases with American Express.
Speakers
Jambox
Announced in November 2010, the wireless, portable Bluetooth speaker and speakerphone Jambox was Jawbone's first product outside of the headset category. Jambox received positive reviews from outlets including The New York Times, Popular Science, and USA Today.
The acoustic technology was licensed from SoundMatters, which had previously released its own similarly-sized portable Bluetooth speaker, the FoxL.
Big Jambox
Announced in May 2012, Big Jambox is Jawbone's second speaker product.
Inside the airtight enclosure are proprietary neodymium drivers and two opposing passive bass radiators, along with a newly designed omnidirectional microphone capable of 360-degree sound input with improved echo-cancellation and full duplex communication. The speaker also has LiveAudio "three-dimensional sound" technology built in, and updates to the speaker's driver system are handled through Jawbone's online interface. Connectivity is via a Bluetooth connection, headphone jack, or audio line out. Big Jambox can also connect to multiple devices at once, and users can control volume and play sequences from their device in addition to the speaker. A 2,600mAh rechargeable battery provides roughly 15 hours of wireless listening time and 500 hours on standby. The Big Jambox speakerphone is intended for more of a conference-room setting, and while the Jambox has a front-facing microphone intended to be used facing a user, Big Jambox sports a top-mounted omni-directional microphone to pick up sound from all angles. Big Jambox is rated as a class one speakerphone.
Big Jambox used digital signal processing (DSP) algorithms to enhance and optimize output. A review by Engadget lauded the high volume levels that Big Jambox was capable of producing at a small size, but criticized the sound quality because of distortions.
Big Jambox aimed to allow listeners to experience three-dimensional sound, a feature that according to Engadget "comes at the cost of the sound getting a bit mushy in some areas" and only works well with some types of source material.
Mini Jambox
Announced in September 2013, Mini Jambox was Jawbone's third speaker product, meant to fit in a pocket or purse. Two neodymium drivers and a passive bass radiator are housed by an extruded aluminum uni-body casing. The speaker comes with LiveAudio and speakerphone capabilities. Along with the speaker is an app which allows the device to be named. The app also allows the user to combine and create playlists from Spotify, Rdio, Deezer, and iTunes.
The Mini Jambox's slimness comes from the design of its extruded aluminum uni-body casing: its external skin is also the internal skeleton. The casing was machined with computer numerical control, which is typically used for internal mechanical details but here allowed a variety of external textures.
Headsets
Jawbone Icon
Launched in January 2010, Jawbone Icon is Jawbone's fourth Bluetooth headset. It was the first Jawbone headset with software that could be updated online.
CNET called the Icon "quite possibly the most innovative Bluetooth headset yet", being one of the first headsets in the world to have a built-in "operating system."
Six designs corresponded with the names of the six original audio apps – The Hero, The Rogue, The Ace, The Catch, The Thinker and The Bombshell. Jawbone expanded the Icon line with four new designs as part of its Icon EarWear Collection, as well as adding a new voice-messaging app to its software platform called “Thoughts.”
By May 2015, the Jawbone Updater would no longer work with legacy devices, including the Icon.
Jawbone Era
In January 2011, the company's fifth Bluetooth headset, Jawbone Era was announced, the first to have a built-in accelerometer and motion sensing software. It functions via motion commands which involve shaking or tapping the headset twice to answer, end, or switch calls. Shaking the headset four times puts the headset in pairing mode.
The Era has almost the same measurements as the Icon. The Era appears rectangular from the front, but is slightly curved to fit to the side of the face. On the top of the headset is a horizontal bar that functions as the multifunction talk button. Right above that is the Micro-USB charging jack.
Icon HD + The Nerd
In August 2011, Jawbone launched the Icon HD and The Nerd USB Bluetooth adapter. The Icon HD had a larger speaker. When paired with The Nerd, the Icon HD can connect to two devices simultaneously (one USB-enabled and one Bluetooth-enabled) and switch between audio and calls.
References
External links
Bluetooth
Science and technology in the San Francisco Bay Area
Privately held companies of the United States
Companies based in San Francisco
Activity trackers
Wearable devices
Fashion accessories
Smart bands
American companies established in 1998
Electronics companies disestablished in 2017
Electronics companies established in 1998
1998 establishments in California
2017 disestablishments in California
|
12107044
|
https://en.wikipedia.org/wiki/Godfried%20Toussaint
|
Godfried Toussaint
|
Godfried Theodore Patrick Toussaint (1944 – July 2019) was a Canadian computer scientist, a professor of computer science, and the head of the computer science program at New York University Abu Dhabi (NYUAD) in Abu Dhabi, United Arab Emirates. He is considered to be the father of computational geometry in Canada. He did research on various aspects of computational geometry, discrete geometry, and their applications: pattern recognition (k-nearest neighbor algorithm, cluster analysis), motion planning, visualization (computer graphics), knot theory (stuck unknot problem), linkage (mechanical) reconfiguration, the art gallery problem, polygon triangulation, the largest empty circle problem, unimodality (unimodal function), and others. Other interests included meander (art), compass and straightedge constructions, instance-based learning, music information retrieval, and computational music theory.
He was a co-founder of the Annual ACM Symposium on Computational Geometry, and the annual Canadian Conference on Computational Geometry.
Along with Selim Akl, he was an author and namesake of the efficient "Akl–Toussaint algorithm" for the construction of the convex hull of a planar point set. This algorithm runs in expected time linear in the size of the input. In 1980 he introduced the relative neighborhood graph (RNG) to the fields of pattern recognition and machine learning, and showed that it contains the minimum spanning tree and is a subgraph of the Delaunay triangulation. Three other well-known proximity graphs are the nearest neighbor graph, the Urquhart graph, and the Gabriel graph. The first is contained in the minimum spanning tree, while the Urquhart graph contains the RNG and is contained in the Delaunay triangulation. Since all these graphs are nested together, they are referred to as the Toussaint hierarchy.
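As an illustration of the discarding step at the heart of the Akl–Toussaint algorithm, the sketch below removes every point that lies strictly inside the quadrilateral spanned by the four extreme points; what survives can then be passed to any standard convex hull routine. This is a minimal Python illustration with invented function names, not a published implementation.

```python
def cross(o, a, b):
    """Signed area of the triangle (o, a, b); positive when the turn o -> a -> b is counter-clockwise."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def akl_toussaint_filter(points):
    """Discard points strictly inside the quadrilateral of the four extreme points."""
    quad = [min(points, key=lambda p: p[0]),   # leftmost
            min(points, key=lambda p: p[1]),   # bottommost
            max(points, key=lambda p: p[0]),   # rightmost
            max(points, key=lambda p: p[1])]   # topmost -- counter-clockwise order
    # A point strictly left of all four directed edges is inside the quadrilateral
    # and can never appear on the convex hull, so it is safe to drop.
    return [p for p in points
            if not all(cross(quad[i], quad[(i + 1) % 4], p) > 0 for i in range(4))]
```

For uniformly distributed inputs most points fall inside the quadrilateral, so the hull routine that follows works on a much smaller set, which is what yields the expected linear running time.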
Biography
Toussaint was born in 1944 in Belgium.
After graduating in 1968 from the University of Tulsa,
he went to the University of British Columbia for graduate study, completing his Ph.D. there in 1972. His dissertation, Feature Evaluation Criteria and Contextual Decoding Algorithms in Statistical Pattern Recognition, was supervised by Robert W. Donaldson.
He joined the McGill University faculty in 1972, and became a professor emeritus there in 2007. After retiring from McGill, he became a professor of computer science and head of the computer science department at New York University Abu Dhabi.
He died in July 2019 in Tokyo, Japan. He was in Tokyo to present his work on "The Levenshtein distance as a measure of mirror symmetry and homogeneity for binary digital patterns" in a special session titled "Design & Computation in Geovisualization" convened by the International Cartographic Association Commission on Visual Analytics at the 2019 International Cartographic Conference.
Mathematical research in music
He spent a year in the Music Department at Harvard University doing research on musical similarity, a branch of music cognition. From 2005 he was also a researcher at the Centre for Interdisciplinary Research in Music Media and Technology in the Schulich School of Music at McGill University. He applied computational geometric and discrete mathematics methods to the analysis of symbolically represented music in general, and rhythm in particular. In 2004 he discovered that the Euclidean algorithm for computing the greatest common divisor of two numbers implicitly generates almost all the most important traditional rhythms of the world. His application of mathematical methods for tracing the roots of Flamenco music was the focus of two Canadian television programs ("Flamenco Forensics", McGill Reporter, January 26, 2006).
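Toussaint's observation can be illustrated in a few lines of Python. The modular-arithmetic formulation below is a common way to compute one rotation of the maximally even pattern that the Euclidean construction produces; it is a sketch, not his original formulation.

```python
def euclidean_rhythm(k, n):
    """Distribute k onsets as evenly as possible among n pulses (1 = onset, 0 = rest)."""
    return [1 if (i * k) % n < k else 0 for i in range(n)]

# E(3, 8) yields [1, 0, 0, 1, 0, 0, 1, 0], the Cuban tresillo -- one of the
# traditional world rhythms the Euclidean construction generates.
print(euclidean_rhythm(3, 8))
```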
Awards
In 2018 he was awarded a Lifetime Achievement Award by the Canadian Association of Computer Science. In 1978 he was the recipient of the Pattern Recognition Society's Best Paper of the Year Award. In 1985 he was awarded a two-year Izaak Walton Killam Senior Research Fellowship by the Canada Council for the Arts. In 1988 he received an Advanced Systems Institute Fellowship from the British Columbia Advanced Systems Institute. In 1995 he was given the Vice-Chancellor's Research Best-Practice Fellowship by the University of Newcastle in Australia. In 1996 he won the Canadian Image Processing and Pattern Recognition Society's Service Award for his "outstanding contribution to research and education in Computational Geometry." In May 2001 he was honored with the David Thomson Award for excellence in graduate supervision and teaching at McGill University. In 2009 he won a Radcliffe Fellowship from the Radcliffe Institute for Advanced Study at Harvard University to carry out a research project on the phylogenetics of the musical rhythms of the world.
Books and book chapters
G. T. Toussaint, The Geometry of Musical Rhythm, Chapman and Hall/CRC, January 2013.
G. T. Toussaint, Computational Geometry, Editor, North-Holland Publishing Company, Amsterdam, 1985.
G. T. Toussaint, Computational Morphology, Editor, North-Holland Publishing Company, Amsterdam, 1988.
E. D. Demaine, B. Gassend, J. O'Rourke, and G. T. Toussaint, "All polygons flip finitely... right?" Surveys on Discrete and Computational Geometry: Twenty Years Later, J. E. Goodman, J. Pach, and R. Pollack, Editors, in Contemporary Mathematics, Vol. 453, 2008, pp. 231–255.
J. O'Rourke and G. T. Toussaint, "Pattern recognition," Chapter 51 in the Handbook of Discrete and Computational Geometry, Eds., J. E. Goodman and J. O'Rourke, Chapman & Hall/CRC, New York, 2004, pp. 1135–1162.
M. Soss and G. T. Toussaint, "Convexifying polygons in 3D: a survey," in Physical Knots: Knotting, Linking, and Folding Geometric Objects in R3, AMS Special Session on Physical Knotting, Linking, and Unknotting, Eds. J. A. Calvo, K. Millett, and E. Rawdon, American Mathematical Society, Contemporary Mathematics Vol. 304, 2002, pp. 269–285.
G. T. Toussaint, "Applications of the Erdős–Nagy theorem to robotics, polymer physics and molecular biology," Año Mundial de la Matematica, Sección de Publicaciones de la Escuela Tecnica Superior de Ingenieros Industriales, Universidad Politecnica de Madrid, 2002, pp. 195–198.
J. O'Rourke and G. T. Toussaint, "Pattern recognition," Chapter 43 in the Handbook of Discrete and Computational Geometry, Eds., J. E. Goodman and J. O'Rourke, CRC Press, New York, 1997, pp. 797–813.
G. T. Toussaint, "Computational geometry and computer vision," in Vision Geometry, Contemporary Mathematics, Volume 119, R. A. Melter, A. Rozenfeld and P. Bhattacharya, Editors, American Mathematical Society, 1991, pp. 213–224.
G. T. Toussaint, "A graph-theoretical primal sketch," in Computational Morphology, G. T. Toussaint, Ed., North-Holland, 1988, pp. 229–260.
G. T. Toussaint, "Movable separability of sets," in Computational Geometry'', G.T. Toussaint, Ed., North-Holland Publishing Co., 1985, pp. 335–375.
References
1944 births
2019 deaths
Belgian computer scientists
Canadian computer scientists
Researchers in geometric algorithms
University of Tulsa alumni
University of British Columbia alumni
McGill University faculty
New York University Abu Dhabi faculty
|
7407288
|
https://en.wikipedia.org/wiki/Bee%20Card
|
Bee Card
|
A Bee Card is a ROM cartridge developed by Hudson Soft as a software distribution medium for MSX computers. Bee Cards are approximately the size of a credit card, but thicker. Compared to most game cartridges, the Bee Card is small and compact. Bee Cards were released in Japan and in Europe, but not in North America because the MSX was unsuccessful in North America. However, Atari Corporation adopted the Bee Card for the Atari Portfolio, a handheld PC released in 1989 in North America. Bee Cards were also used by some Korg synthesizers and workstations as external storage of user content like sound programs or song data. Even though these systems all use Bee Cards, they are incompatible with each other.
Only a small number of MSX software titles were published on Bee Card: six in Japan, and only two in Europe, both in Italy. In order to accept a Bee Card, the cartridge slot of the MSX had to be fitted with a removable adapter: the Hudson Soft BeePack. The first mass-produced Bee Cards, however, were EEPROM telephone cards manufactured by Mitsubishi Plastics; these were first sold in Japan in 1985. The trade names Bee Card and BeePack derive from Hudson Soft's corporate logo, which features a cartoon bee.
MSX software published on Bee Card
Hudson Soft and other software publishers distributed at least eleven MSX software titles on Bee Card.
HuCard
Hudson Soft later collaborated with NEC to develop a video game console called the PC Engine. The companies elected to use Hudson Soft's slim ROM cartridge technology to distribute PC Engine software. Hudson Soft adapted the design for their needs, and produced the HuCard. HuCards are slightly thicker than Bee Cards; also, whereas a Bee Card has 32 pins, a HuCard has 38.
References
Computer-related introductions in 1985
MSX
Solid-state computer storage media
Konami
|
3634262
|
https://en.wikipedia.org/wiki/Blue%20Brain%20Project
|
Blue Brain Project
|
The Blue Brain Project is a Swiss brain research initiative that aims to create a digital reconstruction of the mouse brain. The project was founded in May 2005 by the Brain and Mind Institute of École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland. Its mission is to use biologically-detailed digital reconstructions and simulations of the mammalian brain to identify the fundamental principles of brain structure and function.
The project is headed by the founding director Henry Markram—who also launched the European Human Brain Project—and is co-directed by Felix Schürmann, Adriana Salvatore and Sean Hill. Using a Blue Gene supercomputer running Michael Hines's NEURON software, the simulation involves a biologically realistic model of neurons and an empirically reconstructed model connectome.
There are a number of collaborations, including the Cajal Blue Brain, which is coordinated by the Supercomputing and Visualization Center of Madrid (CeSViMa), and others run by universities and independent laboratories.
Goal
The initial goal of the project, which was completed in December 2006, was the creation of a simulated rat neocortical column, considered by some researchers to be the smallest functional unit of the neocortex, which is thought to be responsible for higher functions such as conscious thought. In humans, each column is about 2 mm in length, has a diameter of about 0.5 mm, and contains about 60,000 neurons. Rat neocortical columns are very similar in structure but contain only 10,000 neurons and 10⁸ synapses. Between 1995 and 2005, Markram mapped the types of neurons and their connections in such a column.
Progress
By 2005, the first cellular model was completed. The first artificial cellular neocortical column of 10,000 cells was built by 2008. By July 2011, a cellular mesocircuit of 100 neocortical columns with a million cells in total was built. A cellular rat brain had been planned for 2014 with 100 mesocircuits totalling a hundred million cells. A cellular human brain equivalent to 1,000 rat brains with a total of a hundred billion cells has been predicted to be possible by 2023.
In November 2007, the project reported the end of the first phase, delivering a data-driven process for creating, validating, and researching the neocortical column.
In 2015, scientists at École Polytechnique Fédérale de Lausanne (EPFL) developed a quantitative model of the previously unknown relationship between astrocytes (a type of glial cell) and neurons. This model describes the energy management of the brain through the function of the neuro-glial vascular unit (NGV). The additional layer of neuro-glial cells is being added to Blue Brain Project models to improve the functionality of the system.
In 2017, Blue Brain Project discovered that neural cliques connected to one another in up to eleven dimensions. The project's director suggested that the difficulty of understanding the brain is partly because the mathematics usually applied for studying networks cannot detect that many dimensions. The Blue Brain Project was able to model these networks using algebraic topology.
In 2018, Blue Brain Project released its first digital 3D brain cell atlas which, according to ScienceDaily, is like "going from hand-drawn maps to Google Earth", providing information about major cell types, numbers, and positions in 737 regions of the brain.
In 2019, Idan Segev, one of the computational neuroscientists working on the Blue Brain Project, gave a talk titled "Brain in the computer: what did I learn from simulating the brain." In his talk, he mentioned that the whole cortex for the mouse brain was complete and virtual EEG experiments would begin soon. He also mentioned that the model had become too computationally heavy for the supercomputers they were using at the time, and that they were consequently exploring methods in which every neuron could be represented as a neural network (see citation for details).
Software
The Blue Brain Project has developed a number of software tools to reconstruct and simulate the mouse brain.
Blue Brain NEXUS
Blue Brain NEXUS is a data integration platform that allows users to search, deposit, and organise data. It is built on the FAIR data principles to provide flexible data management solutions beyond neuroscience studies. It is open-source software, available to everyone on GitHub.
BluePyOpt
BluePyOpt is a tool that is used to build electrical models of single neurons. For this, it uses evolutionary algorithms to constrain the parameters to experimental electrophysiological data. Attempts to reconstruct single neurons using BluePyOpt have been reported by Rosanna Migliore and Stefano Masori. It is open-source software, available to everyone on GitHub.
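As a rough, generic sketch of this kind of evolutionary fitting, the loop below scores candidate conductance parameters by their distance to target electrophysiological features. simulate_features() is a hypothetical stand-in for a real NEURON simulation, and nothing here reflects BluePyOpt's actual API.

```python
import random

TARGET = {"spike_count": 12.0, "ap_amplitude": 80.0}      # assumed experimental features

def simulate_features(params):
    # Hypothetical placeholder: a real workflow would run a NEURON model here.
    return {"spike_count": 40 * params["gNa"], "ap_amplitude": 200 * params["gK"]}

def error(params):
    """Distance between simulated and target features; lower is a better fit."""
    feats = simulate_features(params)
    return sum(abs(feats[k] - TARGET[k]) for k in TARGET)

population = [{"gNa": random.random(), "gK": random.random()} for _ in range(20)]
for _ in range(50):
    population.sort(key=error)                  # selection: keep the best candidates
    parents = population[:5]
    population = parents + [                    # mutation: perturb copies of the parents
        {k: v + random.gauss(0, 0.05) for k, v in random.choice(parents).items()}
        for _ in range(15)]
print(min(population, key=error))
```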
CoreNEURON
CoreNEURON is a supplemental tool to NEURON that allows large-scale simulation by improving memory efficiency and computational speed. It is open-source software, available to everyone on GitHub.
NeuroMorphoVis
NeuroMorphoVis is a visualisation tool for the morphologies of neurons. It is open-source software, available to everyone on GitHub.
SONATA
SONATA is a joint effort between the Blue Brain Project and the Allen Institute for Brain Science to develop a standard data format that enables working across multiple platforms with greater memory efficiency and computational performance. It is open-source software, available to everyone on GitHub.
Funding
The project is funded primarily by the Swiss government and the Future and Emerging Technologies (FET) Flagship grant from the European Commission, and secondarily by grants and donations from private individuals. The EPFL bought the Blue Gene computer at a reduced cost because it was still a prototype and IBM was interested in exploring how applications would perform on the machine. BBP was viewed as a validation of the Blue Gene supercomputer concept.
Related projects
Cajal Blue Brain
The Cajal Blue Brain Project is coordinated by the Technical University of Madrid and uses the facilities of the Supercomputing and Visualization Center of Madrid and its supercomputer Magerit. The Cajal Institute also participates in this collaboration. The main lines of research currently being pursued at Cajal Blue Brain include neurological experimentation and computer simulations. Nanotechnology, in the form of a newly designed brain microscope, plays an important role in its research plans.
Documentary
A 10-part documentary is being made by Noah Hutton, with each installment exploring a year of the project's work at the EPFL. Filming began in 2009, and the documentary is planned to be released in 2020. Other similar research projects are also mentioned.
See also
Artificial brain
Artificial intelligence
Artificial neural network
BRAIN Initiative
CoDi
Cognitive architecture
Cognitive science
Computational neuroscience
Google Brain
Human Brain Project
Neural network
Neuroinformatics
Noogenesis
Outline of brain mapping
Outline of the human brain
Project Joshua Blue
Simulation argument
Simulated reality
Social simulation
Whole brain emulation
References
Further reading
Blue Brain Project site, Lausanne.
FAQ on Blue Brain.
NCS documentation.
NEURON documentation.
Growing a Brain in Switzerland, Der Spiegel, 7 February 2007
Out of the Blue -- Can a thinking, remembering, decision-making, biologically accurate brain be built from a supercomputer?, Seed Magazine, March 2008
Reconstructing the Heart of Mammalian Intelligence Henry Markram's Lecture, March 4, 2008.
The Blue Brain Project Henry Markram's Lecture, Neuro Informatics 2008.
The Blue Brain Project an Interview with Idan Segev.
Simulated brain closer to thought BBC News 22 April 2009
Firing Up the Blue Brain -"We Are 10 Years Away From a Functional Artificial Human Brain" Luke McKinney, July 2009
Henry Markram builds a brain in a supercomputer, TED Conference, July 2009
Indian startup to help copy your brain on computers Silicon India. 1 February 2010
External links
Blue Brain Project
Blue Brain Portal - Knowledge Space for Neuroscience
Out of the Blue, SEEDMAGAZINE.Com
Neuroinformatics
IBM supercomputers
Computational neuroscience
Virtual reality organizations
Cognitive modeling
École Polytechnique Fédérale de Lausanne
|
40584315
|
https://en.wikipedia.org/wiki/Multipath%20TCP
|
Multipath TCP
|
Multipath TCP (MPTCP) is an ongoing effort of the Internet Engineering Task Force's (IETF) Multipath TCP working group that aims to allow a Transmission Control Protocol (TCP) connection to use multiple paths to maximize resource usage and increase redundancy.
In January 2013, the IETF published the Multipath TCP specification as an Experimental standard in RFC 6824. It was replaced in March 2020 by the Multipath TCP v1 specification in RFC 8684.
Benefits
The redundancy offered by Multipath TCP enables inverse multiplexing of resources, and thus increases TCP throughput to the sum of all available link-level channels instead of using a single one as required by standard TCP. Multipath TCP is backward compatible with standard TCP.
Multipath TCP is particularly useful in the context of wireless networks; using both Wi-Fi and a mobile network is a typical use case. In addition to the gains in throughput from inverse multiplexing, links may be added or dropped as the user moves in or out of coverage without disrupting the end-to-end TCP connection.
The problem of link handover is thus solved by abstraction in the transport layer, without any special mechanisms at the network or link layers. Handover functionality can then be implemented at the endpoints without requiring special functionality in the subnetworks, in accordance with the Internet's end-to-end principle.
Multipath TCP also brings performance benefits in datacenter environments. In contrast to Ethernet channel bonding using 802.3ad link aggregation, Multipath TCP can balance a single TCP connection across multiple interfaces and reach very high throughput.
Multipath TCP causes a number of new issues. From a network security perspective, multipath routing causes cross-path data fragmentation that results in firewalls and malware scanners becoming ineffective when they see only one path's traffic. In addition, SSL decryption appliances become less effective, as end-to-end encrypted traffic may be split across paths that no single middlebox observes.
User interface
In order to facilitate its deployment, Multipath TCP presents the same socket interface as TCP. This implies that any standard TCP application can be used above Multipath TCP while in fact spreading data across several subflows.
Some applications could benefit from an enhanced API to control the underlying Multipath TCP stack. Two different APIs have been proposed to expose some of the features of the Multipath TCP stack to applications: an API that extends Netlink on Linux and an enhanced socket API.
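A minimal sketch of this compatibility at the socket level, assuming a Linux kernel with upstream MPTCP support (5.6 or later): the only change from plain TCP is the protocol argument. Python 3.10+ exposes socket.IPPROTO_MPTCP; the fallback constant 262 is the Linux protocol number for MPTCP.

```python
import socket

IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)  # Linux protocol number for MPTCP

# Apart from the third argument, this is ordinary TCP socket code; the kernel
# negotiates MP_CAPABLE with the peer and manages subflows transparently.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP)
s.connect(("example.org", 80))
s.sendall(b"HEAD / HTTP/1.0\r\nHost: example.org\r\n\r\n")
print(s.recv(1024).decode(errors="replace"))
s.close()
```

If the peer does not support Multipath TCP, the connection simply falls back to regular TCP, so the application code is unchanged either way.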
Implementation
In July 2013, the MPTCP working group reported five independent implementations of Multipath TCP, including the reference implementation in the Linux kernel.
The currently available implementations are:
Linux kernel (reference implementation) from Université catholique de Louvain researchers and other collaborators,
FreeBSD (IPv4 only) from Swinburne University of Technology,
F5 Networks BIG-IP LTM,
Citrix Netscaler,
Apple iOS 7, released on September 18, 2013, was the first large-scale commercial deployment of Multipath TCP. Since iOS 7, any application can use Multipath TCP.
Apple Mac OS X 10.10, released on October 16, 2014.
Alcatel-Lucent released MPTCP proxy version 0.9 source code on October 26, 2012.
In July 2014, Oracle reported that an implementation on Solaris was being developed; as of June 2015, work was still in progress.
During the MPTCP WG meeting at IETF 93, SungHoon Seo announced that KT had, since mid-June, been running a commercial service that allows smartphone users to reach 1 Gbit/s using an MPTCP proxy service. Tessares uses the Linux kernel implementation to deploy Hybrid Access Networks.
There is an ongoing effort to push a new Multipath TCP implementation into the mainline Linux kernel.
Use cases
Multipath TCP was designed to be backward compatible with regular TCP. As such, it can support any application. However, some specific deployments leverage the ability of simultaneously using different paths.
Apple uses Multipath TCP to support the Siri application on the iPhone. Siri sends voice samples over an HTTPS session to Apple servers. Those servers reply with the information requested by the users. According to Apple engineers, the main benefits of Multipath TCP with this application are:
User-feedback (Time-to-First-Word) 20% faster in the 95th percentile
5x reduction of network failures
Other deployments use Multipath TCP to aggregate the bandwidth of different networks. For example, several types of smartphones, notably in Korea, use Multipath TCP to bond WiFi and 4G through SOCKS proxies. Another example is the Hybrid Access Networks that are deployed by network operators wishing to combine xDSL and LTE networks. In this deployment, Multipath TCP is used to efficiently balance the traffic over the xDSL and the LTE network.
In the standardisation of converged fixed and mobile communication networks, 3GPP and BBF are interoperating to provide an ATSSS (Access Traffic Selection, Switching, Splitting) feature to support multipath sessions, e.g., by applying Multipath TCP both in the User Equipment (UE) or Residential Gateway (RG) and on the network side.
Multipath TCP options
Multipath TCP uses options that are described in detail in RFC 6824. All Multipath TCP options are encoded as TCP options with Option Kind 30, as reserved by IANA.
The Multipath TCP option has the Kind (30), a variable Length, and a remainder of content that begins with a 4-bit subtype field, for which IANA has created and will maintain a sub-registry entitled "MPTCP Option Subtypes" under the "Transmission Control Protocol (TCP) Parameters" registry. The subtypes defined in RFC 6824 are:
0x0: MP_CAPABLE (Multipath Capable)
0x1: MP_JOIN (Join Connection)
0x2: DSS (Data Sequence Signal)
0x3: ADD_ADDR (Add Address)
0x4: REMOVE_ADDR (Remove Address)
0x5: MP_PRIO (Change Subflow Priority)
0x6: MP_FAIL (Fallback)
0x7: MP_FASTCLOSE (Fast Close)
Values 0x8 through 0xe are currently unassigned.
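A short sketch of decoding the header shared by all of these options; the subtype table is transcribed from the registry above, and the helper name is illustrative.

```python
MPTCP_SUBTYPES = {
    0x0: "MP_CAPABLE", 0x1: "MP_JOIN", 0x2: "DSS", 0x3: "ADD_ADDR",
    0x4: "REMOVE_ADDR", 0x5: "MP_PRIO", 0x6: "MP_FAIL", 0x7: "MP_FASTCLOSE",
}

def parse_mptcp_option(option):
    """Return (length, subtype name) for one raw TCP option, or None if it is not MPTCP."""
    kind, length = option[0], option[1]
    if kind != 30:                      # 30 is the Option Kind IANA reserved for MPTCP
        return None
    subtype = option[2] >> 4            # subtype sits in the high nibble of the third byte
    return length, MPTCP_SUBTYPES.get(subtype, "unassigned/private")

# First three bytes of an MP_CAPABLE option as sent in a SYN (kind 30, length 12, subtype 0):
print(parse_mptcp_option(bytes([30, 12, 0x00])))   # -> (12, 'MP_CAPABLE')
```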
Protocol operation
Simplified description
The core idea of multipath TCP is to define a way to build a connection between two hosts and not between two interfaces (as standard TCP does).
For instance, Alice has a smartphone with 3G and WiFi interfaces (with IP addresses 10.11.12.13 and 10.11.12.14) and Bob has a computer with an Ethernet interface (with IP address 20.21.22.23).
In standard TCP, the connection should be established between two IP addresses. Each TCP connection is identified by a four-tuple (source and destination addresses and ports). Given this restriction, an application can only create one TCP connection through a single link. Multipath TCP allows the connection to use several paths simultaneously. For this, Multipath TCP creates one TCP connection, called subflow, over each path that needs to be used.
The purpose of the different protocol operations (defined in RFC 6824) are:
to handle when and how to add/remove paths (for instance, if a connection is lost or a path becomes congested)
to be compatible with legacy TCP hardware (such as some firewalls that can automatically reject TCP connections if the sequence numbers aren't consecutive)
to define a fair congestion control strategy between the different links and the different hosts (especially with those that don't support MPTCP)
Multipath TCP adds new mechanisms to TCP transmissions:
The subflow system, used to gather multiple standard TCP connections (the paths from one host to another). Subflows are identified during the TCP three-way handshake. After the handshake, an application can add or remove some subflows (subtypes 0x3 and 0x4).
The MPTCP DSS option contains a data sequence number and an acknowledgement number. These allow data received from multiple subflows to be reassembled in the original order, without any corruption (message subtype 0x2).
A modified retransmission protocol handles congestion control and reliability.
Detailed specification
The detailed protocol specification is provided in RFC 8684. Several survey articles provide an introduction to the protocol.
Congestion control
Several congestion control mechanisms have been defined for Multipath TCP. Their main difference from classical TCP congestion control schemes is that they need to react to congestion on the different paths without being unfair to single-path TCP sources that could compete with them on one of the paths. Four Multipath TCP congestion control schemes are currently supported by the Multipath TCP implementation in the Linux kernel, listed below; a sketch of the first follows the list.
The Linked Increase Algorithm defined in RFC 6356
The Opportunistic Linked Increase Algorithm
The wVegas delay based congestion control algorithm
The Balanced Linked Increase Algorithm
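The following is a simplified sketch of the coupled increase used by the Linked Increase Algorithm of RFC 6356, expressed in congestion-window units with one MSS acknowledged per ACK; a real kernel implementation tracks bytes and per-subflow MSS.

```python
def lia_alpha(subflows):
    """Coupling factor alpha from RFC 6356; subflows is a list of (cwnd, rtt) pairs."""
    total = sum(w for w, _ in subflows)
    best = max(w / (rtt * rtt) for w, rtt in subflows)
    return total * best / (sum(w / rtt for w, rtt in subflows) ** 2)

def lia_increase(subflows, i):
    """Window increment for one ACK on subflow i during congestion avoidance.

    Capped at the uncoupled TCP increase 1/w_i so that the connection as a
    whole is never more aggressive than a single TCP flow on its best path.
    """
    total = sum(w for w, _ in subflows)
    return min(lia_alpha(subflows) / total, 1.0 / subflows[i][0])

# Two subflows, e.g. Wi-Fi (cwnd 10, RTT 20 ms) and cellular (cwnd 4, RTT 80 ms):
paths = [(10.0, 0.020), (4.0, 0.080)]
print(lia_increase(paths, 0), lia_increase(paths, 1))
```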
Alternatives
Stream Control Transmission Protocol
Stream Control Transmission Protocol (SCTP) is a reliable in-order datagram stream transport protocol originally intended for telecommunication signaling. It supports concurrent use of multiple access links and allows the application to influence the access interface selections on a datagram stream basis. It also supports mobility via access renegotiation. Hence, SCTP is also a transport layer solution. It offers type 3 flow granularity with concurrency, but with more flow scheduling control than Multipath TCP. It also fully supports mobility in a fashion similar to Multipath TCP.
IMS SIP
Within the IP Multimedia Subsystem (IMS) architecture, the Session Initiation Protocol (SIP) can support the concurrent use of multiple contact IP addresses for the registration of one or more IMS user agents. This allows for the creation of multiple IMS signaling paths. On these signaling paths, signaling messages carry Session Description Protocol (SDP) messaging to negotiate media streams. SDP allows for the (re-)negotiation of the streams of one media session over multiple paths. In turn, this enables application layer multipath transport. From this point of view, IMS can therefore offer application layer multipath support with flow granularity and concurrent access. A multipath extension to the Real-time Transport Protocol (RTP) has been under discussion within the IETF; multipath RTP can offer flow granularity with concurrent access and mobility (via IMS, SDP signaling or the RTP control protocol). More recently, a proposal to extend the Datagram Congestion Control Protocol (DCCP) with a multipath feature, dubbed MP-DCCP, has been discussed at the IETF in the Transport Area Working Group (TSVWG).
Multipath QUIC
The IETF is currently developing the QUIC protocol that integrates the features that are traditionally found in the TCP, TLS and HTTP protocols. Thanks to the flexibility and extensibility of QUIC, it is possible to extend it to support multiple paths and address the same use cases as Multipath TCP. A first design for Multipath QUIC has been proposed, implemented and evaluated.
Other protocols and experiments
At the session layer, the Mobile Access Router project experimented in 2003 with the aggregation of multiple wireless accesses with heterogeneous technologies, transparently balancing traffic between them in response to the perceived performance of each of them.
Parallel access schemes, used to accelerate transfers by taking advantage of HTTP range requests to initiate connections to multiple servers of replicated content, are not equivalent to Multipath TCP, as they involve the application layer and are limited to content of known size.
RFC
RFC 6181 - Threat Analysis for TCP Extensions for Multipath Operation with Multiple Addresses
RFC 6182 - Architectural Guidelines for Multipath TCP Development
RFC 6356 - Coupled Congestion Control for Multipath Transport Protocols
RFC 6824 - TCP Extensions for Multipath Operation with Multiple Addresses (v0; replaced by RFC 8684)
RFC 6897 - Multipath TCP (MPTCP) Application Interface Considerations
RFC 7430 - Analysis of Residual Threats and Possible Fixes for Multipath TCP (MPTCP)
RFC 8041 - Use Cases and Operational Experience with Multipath TCP
RFC 8684 - TCP Extensions for Multipath Operation with Multiple Addresses (v1)
RFC 8803 - 0-RTT TCP Convert Protocol
See also
Transport protocol comparison table
References
External links
The Linux Kernel MultiPath TCP project
A clear article explaining the Linux MPTCP implementation
|
1206461
|
https://en.wikipedia.org/wiki/ChorusOS
|
ChorusOS
|
ChorusOS is a microkernel real-time operating system designed as a message passing computing model. ChorusOS began as the Chorus distributed real-time operating system research project at the French Institute for Research in Computer Science and Automation (INRIA) in 1979. During the 1980s, Chorus was one of two earliest microkernels (the other being Mach) and was developed commercially by startup company Chorus Systèmes SA. Over time, development effort shifted away from distribution aspects to real-time for embedded systems.
In 1997, Sun Microsystems acquired Chorus Systèmes for its microkernel technology, which went toward the new JavaOS. Sun (and subsequently Oracle) no longer supports ChorusOS. The founders of Chorus Systèmes started a new company called Jaluna in August 2002. Jaluna then became VirtualLogix, which was then acquired by Red Bend in September 2010. VirtualLogix designed embedded systems using Linux and ChorusOS (which they named VirtualLogix C5). C5 was described by them as a carrier-grade operating system, and was actively maintained by them.
The latest source tree of ChorusOS, an evolution of version 5.0, was released as open-source software by Sun and is available at the Sun Download Center. The Jaluna project has completed these sources and published them online. Jaluna-1 is described there as a real-time Portable Operating System Interface (RT-POSIX) layer based on FreeBSD 4.1, together with the CDE cross-platform software development environment. ChorusOS is supported by popular Secure Socket Layer and Transport Layer Security (SSL/TLS) libraries such as wolfSSL.
See also
JavaOS
References
External links
Red Bend WEB site
Sun's ChorusOS 4.0.1 Common Documentation Collection
Sun's ChorusOS 5.0 Documentation Collection
Distributed operating systems
French inventions
Microkernel-based operating systems
Microkernels
Real-time operating systems
Sun Microsystems software
|
934683
|
https://en.wikipedia.org/wiki/Digital%20obsolescence
|
Digital obsolescence
|
Digital obsolescence is the risk of data loss arising from the inability to access digital assets, due to the hardware or software required for information retrieval being repeatedly replaced by newer devices and systems, resulting in increasingly incompatible formats. The threat of an eventual "digital dark age", in which large swaths of important cultural and intellectual information stored in archaic formats become irretrievably lost, was met with little concern until the 1990s. Since then, digital preservation efforts in the information and archival fields have implemented protocols and strategies such as data migration and technical audits, while the salvage and emulation of antiquated hardware and software address digital obsolescence by limiting the potential damage to long-term information access.
Background
A false sense of security persists regarding digital documents: because an infinite number of identical copies can be created from original files, many users assume that their documents have a virtually indefinite shelf life. In reality, the mediums utilized for digital information storage and access present unique preservation challenges compared to many of the physical formats traditionally handled by archives and libraries. Paper materials and printed media migrated to film-based microform, for example, can be accessible for centuries if created and maintained under ideal conditions, compared to mere decades of physical stability offered by magnetic tape and disk or optical formats. Therefore, digital media have more urgent preservation concerns than the gradual change in written or spoken language experienced with the printed word.
Little professional thought in the fields of library and archival science was directed toward the topic of digital obsolescence as the use of computerized systems grew more widespread and commonplace, but much discussion began to emerge in the 1990s. Despite this, few options were proposed as genuine alternatives to the standard method of continuously migrating data to increasingly newer storage media, employed since magnetic tape began succeeding paper punch cards as practical data storage in the 1960s and 1970s. These basic migration practices persist into the modern era of hard disk and solid-state drives, as research has shown that many digital storage media last considerably less time in the field than manufacturer claims or laboratory testing would suggest, leading to the facetious observation that "digital documents last forever—or five years, whichever comes first."
The causes of digital obsolescence aren't always purely technical. Capitalistic accumulation and consumerism have been labeled key motivators toward digital obsolescence in society, with newly introduced products frequently assigned greater value than older products. Digital preservation relies on the continuous maintenance and usage of hardware and software formats, which the threat of obsolescence can interfere with. Four types of digital obsolescence exist in the realm of hardware and software access:
Functional obsolescence, or the mechanical failure of a device that prevents information access, which can be the result of damage through rough handling, gradual wear from extended usage, or intentional failure through planned obsolescence;
Postponement obsolescence, or intentionally upgrading some information systems within an institution, but not all of them, that is often implemented as part of a "security through obsolescence" strategy;
Systemic obsolescence, or deliberate design changes made to programs and applications so that newer updates are increasingly incompatible with older versions, forcing the user to purchase newer software editions or hardware;
Technical obsolescence, or the adoption of newer, more accessible technologies with the intention to replace older, often outdated software or hardware, occurring on the side of the consumer or manufacturer.
Examples of digital obsolescence
Because the majority of digital information relies on two factors for curation and retrieval, it is important to separately classify how digital obsolescence impacts digital preservation through both hardware and software mediums.
Hardware
Hardware concerns are two-fold in archival and library fields: in addition to the physical storage medium of magnetic tape, optical disc, or solid-state computer memory, a separate electronic device is often required for information access. And while proper storage can help mitigate some environmental vulnerabilities to storage formats (including dust, humidity, radiation, and temperature) and extend preservation for decades, there are other inevitable endangering factors. Magnetic tape and floppy disks are vulnerable both to the deterioration of the adhesive holding the magnetic data layer to its backing and to the demagnetization of the data layer, commonly called "bit rot"; optical discs are specifically susceptible to physical damage to their readable surface, and to oxidation occurring between improperly sealed outer layers, a process referred to as "disc rot" or, inaccurately, "laser rot" (particularly in reference to LaserDiscs). Older forms of read-only-memory chip-based storage such as cartridges and memory cards encounter their own form of bit rot when the electrons representing individual bits of binary information change polarity (called "flipping") and the data is rendered unreadable.
A format's appropriate playback or recording devices possess their own vulnerabilities. Cassette decks and disk drives rely on the functionality of precision-manufactured moving parts that are susceptible to damage caused by repetitive physical stress and foreign materials like dust and grime. Routine maintenance, calibrations, and cleaning operations can help extend the lifetime of many devices, but broken or failing parts will need repair or replacement: sourcing parts becomes more difficult and expensive as the supply stock for older machines reaches scarcity, and user technical skills grow challenged as newer machines and storage formats use fewer electromechanical parts and more integrated circuits and other complex components.
Only a decade after the 1970s Viking program, NASA personnel discovered that much of the mission data stored on magnetic tapes, including over 3000 unprocessed images of the Martian surface transmitted by the two Viking probes, was inaccessible due to a multitude of factors. While in possession of indecipherable notes written by long-departed or deceased programmers, the agency had replaced and disposed of the computer hardware and source code needed to correctly run the decoding software. Information was eventually recovered after more than a year of reverse engineering how the raw data was encoded onto the tapes, which included consulting with the original engineers of the Viking landers' cameras and imaging hardware. NASA experienced similar issues when attempting to recover and process images from 1960s lunar orbiter missions. Engineers at the Jet Propulsion Laboratory acknowledged in 1990, following a one-year search that located a compatible data tape reader at a United States Air Force base, that a missing part might need to be rebuilt in-house if a replacement could not be sourced from computer salvage yards.
Software
Over the past several decades, a number of once industry-standard file formats and application platforms for data, images, and text have been repeatedly replaced and superseded by newer iterations of software formats and applications, often with increasingly greater degrees of incompatibility between each other and along their own product lines. Such incompatibilities now frequently extend to which version of an operating system is installed on the system (such as instances of Microsoft Works predating Version 4.5 being unable to run on the Windows 2000 operating system and beyond). One example of a developer cancelling an instance of planned obsolescence occurred in 2008, when Microsoft retracted plans for an Office service pack to drop support for a number of older file formats, due to the intensity of public outcry.
Systemic obsolescence in software can be exemplified by the history of the word processor WordStar. Although it was a popular option for WYSIWYG document editing on CP/M and MS-DOS operating systems during the 1980s, a delayed port to Windows caused WordStar to lose significant market share to its competitors WordPerfect and Microsoft Word by 1991. Further development of the Windows version stopped in 1994, and WordStar 7 for MS-DOS was last updated in 1999. Over time, every version of WordStar grew increasingly incompatible with versions of Windows beyond 3.1, to the frustration of long-devoted users, including the authors William F. Buckley, Jr. and Anne Rice.
Digital obsolescence has a prominent effect on the preservation of video game history, since many older games and hardware were regarded by players as ephemeral products, due to the continuous process of computer hardware upgrading and home console generation cycles. Such cycles are often the result of both systemic and technical obsolescence. Some of the oldest computer games, like 1962's Spacewar! for the PDP-1 commercial minicomputer, were developed for hardware platforms so outdated that they are virtually nonexistent today. Many older games of the 1960s and 1970s built for contemporary mainframe terminals and microcomputers can only be played today through software emulation. While video games and other software applications can be orphaned by their parent developers or publishing companies and classified as abandonware, the copyright issues surrounding software are a very complicated hurdle in the path of digital preservation.
A prime example of copyright issues with software arose during preservation efforts for the BBC Domesday Project, a 1986 UK multimedia data collection survey that commemorated the 900th anniversary of the original Domesday Book. While the project's specially customized LaserDisc reader presented its own hardware-based preservation problems, the combination of one million personal copyrights belonging to participating civilians, in addition to corporate claims on the specialized computer hardware, means that publicly accessible digital preservation efforts might be stalled until 2090.
Prevention strategies
Organizations possessing digital archives should perform assessments of their records in order to identify file corruption and reduce the risks associated with file format obsolescence. Such assessments can be accomplished through internal file format action plans, which list digital file types in an archive's holdings and assess the actions taken in order to ensure continued accessibility.
One emerging strategic avenue in combating digital obsolescence is the adoption of open source software, due to source code availability, transparency, and potential adaptability to modern hardware environments. For example, the Apache Software Foundation's OpenOffice application supports a number of legacy word processor formats, including Version 6 of Microsoft Word, and offers basic support for Version 4 of WordPerfect. This contrasts with the criticism directed from the open source community toward Microsoft's own purportedly open Office Open XML format over non-disclosure agreements and translator requirements.
Standard strategies for digital preservation utilized by information institutions are frequently interconnected or otherwise related in function or purpose. Bitstream copying (or data backup) is a foundational operation often employed before many other practices, and facilitates establishing the redundancy of multiple storage locations: refreshing is the transportation of unchanging data, frequently between identical or functionally similar storage formats, while migration converts the format or coding of digital information to enable moving it between different operating systems and hardware generations. Normalization reduces organizational complexity for archival institutions by reducing the number of similar filetypes through conversion, and encapsulation assembles digital information with its associated metadata to guarantee information accessibility. Digital archives employ canonicalization to ensure that key aspects of documents have survived the process of conversion, while a reliance on standards established by regional archival institutions maintains organization within the broader spectrum of the field. Technology preservation (also called the computer museum approach) and digital archeology respectively involve maintaining possession of or access to legacy hardware and software platforms, and the salvaging methods employed to recover digital information from damaged or obsolete media and devices. Following recovery, some data, such as documentation, can be converted to analog backups in the form of physically accessible copies, while executable code can be launched through emulation platforms within modern hardware and software environments designed to simulate obsolete computer systems.
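Several of these practices depend on verifying that bits survive copying, refreshing, and migration unchanged. A minimal fixity-check sketch in Python, using only the standard library (the file paths are purely illustrative):

import hashlib

def checksum(path, algorithm="sha256", chunk_size=1 << 20):
    # Compute a cryptographic digest of a file, reading in chunks
    # so that large archival masters need not fit in memory.
    digest = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Record a digest when the item is ingested...
original = checksum("archive/master/survey_1986.tif")
# ...and verify the copy after refreshing it onto new storage media.
refreshed = checksum("archive/refresh/survey_1986.tif")
if original != refreshed:
    print("Fixity check failed: possible bit rot or copy error")

In practice, institutions store such digests alongside the encapsulated metadata so that every future refresh or migration can be audited against the original.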
Writing in 1999, Jeff Rothenberg was critical of many contemporary preservation procedures and of how inadequately they addressed digital obsolescence, the most prominent problem in long-term digital information storage. Rothenberg disapproved of the reliance on hard copies, arguing that printing digital documents stripped them of their inherent "digital" qualities, including machine readability and dynamic user functionality. Computer museums were also cited as an inadequate practice: only a limited number of locations can realistically maintain obsolete hardware indefinitely, limiting full access to legacy digital documents, and most older data rarely exists in coding formats that take full advantage of their original hardware or software environments. Two digital preservation processes specifically criticized were the implementation of relational database (RDB) standards and an overreliance on migration. While designed for standardization, RDBs and the features of their management systems (RDBMS) often promoted unintentionally tribalistic practices among regional institutions, introducing incompatibilities between RDBs; meanwhile, the ubiquity of file and program migration frequently risked failing to compensate for conversional paradigm shifts between increasingly newer software environments. Emulation, with the digital data supported by an encapsulation of metadata, documentation, and software and emulation environment specifications, was argued to be the most ideal preservation practice in the face of digital obsolescence.
The UK National Archives published a second revision of their Information Assurance Maturity Model (IAMM) in 2009, overviewing digital obsolescence risk management for institutions and businesses. After instructing senior information risk owners on the initial requirements for determining both the potential risk of digital obsolescence and the mitigating actions to counter it, the guide dissects a multi-step process toward maintaining the digital continuity of archival information. Such steps run the gamut from enforcing responsibility for information continuity and confirming the extent of content metadata, to ensuring critical information remains discoverable through institutional usage and that system migration does not affect information accessibility, to guaranteeing IT support and enforcing contingency plans for information survivability through organizational changes.
In 2014, the National Digital Stewardship Alliance recommended developing file format action plans, stating "it is important to shift from more abstract considerations about file format obsolescence to develop actionable strategies for monitoring and mining information about the heterogeneous digital files the organizations are managing". Other important resources for assessment support are the Library of Congress' Sustainability of Digital Formats page, and the UK National Archives' PRONOM online file format registry.
CERN began its Digital Memory Project in 2016, aiming to preserve decades of the organization's media output through standardized initiatives. CERN determined that its solution would require continuous access to metadata, the implementation of an Open Archival Information System (OAIS) archive as soon as possible to reduce costs, and the advance execution of any new system's archiving plan. Building on OAIS and the ISO 16363 certification standard for trustworthy digital repositories (TDR), CERN implemented E-Ternity as the prototype for its compliant digital archive model.
On 1 January 2021, Adobe ended support for its Flash Player and blocked content from running in it, in response to advancements in open standards for the Web. The action, which affected the user experience of millions of websites to varying degrees, had been announced in July 2017. Since January 2018, BlueMaxima's Flashpoint has been one of several Adobe Flash Player preservation projects, salvaging more than 110,000 animations and games.
See also
BBC Domesday Project
Data degradation
Data migration
Digital dark age
Digital data
Digital preservation
Disc rot
Emulation (computing)
OAIS
Obsolescence
Open source software
Video game preservation
References
External links
Chamber of Horrors: Obsolete and Endangered Media
Digital Preservation at ICPSR
The Library of Congress: Sustainability of Digital Formats
The National Archive: PRONOM welcome screen
Wired Magazine: What death can't destroy and how to digitize it
Data management
Digital preservation
Future problems
Obsolescence
Records management
|
63423103
|
https://en.wikipedia.org/wiki/Datera
|
Datera
|
Datera was a global enterprise software company headquartered in Santa Clara, California that developed an enterprise software-defined storage platform. Datera went into liquidation in February 2021.
The Datera Data Services Platform was the company's commercial product, aimed at hyperscale storage environments and cloud service providers wanting to deploy a hybrid-cloud, autonomous, and dynamic data infrastructure. Datera software deployed on industry-standard servers from Dell EMC, Fujitsu, Hewlett Packard Enterprise, Intel, Lenovo, Supermicro, and QUANTA to store blocks and objects in on-premises data centers, and in private cloud and hybrid cloud environments.
History
Datera was co-founded in 2013 by contributors to the open-source LIO SCSI target: Nicholas Bellinger, Claudio Fleiner and Marc Fleischmann. In 2016, Datera emerged from stealth and raised $40 million in funding from Khosla Ventures, Samsung Ventures, Andy Bechtolsheim, and Pradeep Sindhu.
Datera partnered with open source private cloud platform, vScaler in 2017 to deliver scalable private clouds for a range of workloads from high-performance databases to archival storage.
Guy Churchward, the former CEO at DataTorrent and Divisional President of Core Technologies at Dell EMC, joined the Datera board of directors in 2018 and was appointed CEO in December of that year. Flavio Santoni, the former EVP at LSI Corporation and former CEO of Syncsort, was appointed Chief Revenue Officer in January 2017. Narasimha Valiveti, former VP of software engineering at Dell EMC was appointed Chief Product Officer in May 2018.
In January 2019, Datera announced a go-to-market partnership with Hewlett Packard Enterprise as part of the HPE Complete program to allow businesses to procure integrated solutions on a single HPE purchase order. Datera reported 500 percent business growth in the first half of 2019 that was attributed to the HPE partnership.
In October 2019, Datera announced the HPE Datera Cloud Kit in partnership with HPE, a pre-packaged configuration for HPE customers that included a Datera software license, HPE Smart Fabric Orchestrator license, HPE M-Series switches, HPE DL360 servers, cabling, and support for containers, virtual machines, and bare metal applications.
The company was named by CRN as an Emerging Storage Vendor in 2019.
In 2020, Datera achieved a “Veeam-Ready” designation in the repository category. The program (from backup solutions and cloud data management provider, Veeam) signifies the company partners that have met standards via a “qualification and testing program” and are certified to work in conjunction with one another. The platform also received Red Hat OpenStack Certification for the Datera Cinder driver with both Red Hat OpenStack Platform 13 and Red Hat OpenStack Platform 16.
The company announced a partner agreement with Fujitsu to integrate the Datera Data Services Platform into Fujitsu’s product portfolio and bring it to market globally.
On April 15, 2020, Datera co-founder Marc Fleischmann announced in a LinkedIn post that he was leaving Datera to pursue new opportunities.
On February 19, 2021, Blocks & Files reported that Datera had gone into liquidation.
Technology
Datera's principal product was a block, scale-out software-defined storage platform for transaction-intensive workloads that utilized the iSCSI protocol. It provided elastic, primary storage for on-premises data centers, and for private and hybrid cloud environments. Enterprises could combine different hardware and media from multiple vendors to create distributed, scale-out storage clusters for workloads running on bare metal, virtual machines, or containers.
The software also used automation and data orchestration to place data by location, node, and media type while continuously tuning performance for each application; the company stated that this eliminated the need for manual administration. Datera used artificial intelligence and machine learning to assess an organization's data environment and automatically took actions to optimize it through user-defined policies.
Datera natively integrated with OpenStack, VMware vSphere, CloudStack and container orchestration platforms such as Docker, Kubernetes, and Mesos.
References
Computer storage companies
Computer companies established in 2013
Storage software
Storage virtualization
|
83971
|
https://en.wikipedia.org/wiki/Earth%20Simulator
|
Earth Simulator
|
The Earth Simulator (ES), developed by the Japanese government's initiative "Earth Simulator Project", was a highly parallel vector supercomputer system for running global climate models to evaluate the effects of global warming and problems in solid earth geophysics. The system was developed for the Japan Aerospace Exploration Agency, the Japan Atomic Energy Research Institute, and the Japan Marine Science and Technology Center (JAMSTEC) in 1997. Construction started in October 1999, and the site officially opened on 11 March 2002. The project cost 60 billion yen.
Built by NEC, ES was based on their SX-6 architecture. It consisted of 640 nodes with eight vector processors and 16 gigabytes of computer memory at each node, for a total of 5120 processors and 10 terabytes of memory. Two nodes were installed per 1 metre × 1.4 metre × 2 metre cabinet. Each cabinet consumed 20 kW of power. The system had 700 terabytes of disk storage (450 for the system and 250 for the users) and 1.6 petabytes of mass storage in tape drives. It was able to run holistic simulations of global climate in both the atmosphere and the oceans down to a resolution of 10 km. Its performance on the LINPACK benchmark was 35.86 TFLOPS, which was almost five times faster than the previous fastest supercomputer, ASCI White. As of 2020, comparable performance can be achieved by using 4 Nvidia A100 GPUs, each with 9.746 FP64 TFLOPS.
ES was the fastest supercomputer in the world from 2002 to 2004. Its capacity was surpassed by IBM's Blue Gene/L prototype on 29 September 2004.
ES was replaced by the Earth Simulator 2 (ES2) in March 2009. ES2 is an NEC SX-9/E system, with a quarter as many nodes, each of 12.8 times the performance (3.2× the clock speed and four times the processing resources per node), for a peak performance of 131 TFLOPS. With a delivered LINPACK performance of 122.4 TFLOPS, ES2 was the most efficient supercomputer in the world at that point. In November 2010, NEC announced that ES2 topped the Global FFT, one of the measures of the HPC Challenge Awards, with a performance figure of 11.876 TFLOPS.
ES2 was replaced by the Earth Simulator 3 (ES3) in March 2015. ES3 is an NEC SX-ACE system with 5120 nodes and a performance of 1.3 PFLOPS.
ES3, from 2017 to 2018, ran alongside Gyoukou, a supercomputer with immersion cooling that can achieve up to 19 PFLOPS.
System overview
Hardware
The Earth Simulator (ES for short) was developed as a national project by three governmental agencies: the National Space Development Agency of Japan (NASDA), the Japan Atomic Energy Research Institute (JAERI), and the Japan Marine Science and Technology Center (JAMSTEC). The ES is housed in the Earth Simulator Building (approx. 50 m × 65 m × 17 m). The Earth Simulator 2 (ES2) uses 160 nodes of NEC's SX-9E. The upgrade to the Earth Simulator 3 (ES3), which uses 5120 nodes of NEC's SX-ACE, was completed in March 2015.
System configuration
The ES is a highly parallel vector supercomputer system of the distributed-memory type, consisting of 160 processor nodes connected by a fat-tree network. Each processor node is a shared-memory system consisting of 8 vector-type arithmetic processors and a 128-GB main memory system. The peak performance of each arithmetic processor is 102.4 GFLOPS. The ES as a whole thus consists of 1280 arithmetic processors with 20 TB of main memory and a theoretical performance of 131 TFLOPS.
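These totals follow directly from the per-node figures; a quick back-of-the-envelope check in Python (purely illustrative):

nodes = 160
aps_per_node = 8
peak_per_ap_gflops = 102.4
memory_per_node_gb = 128

total_aps = nodes * aps_per_node                     # 1280 arithmetic processors
peak_tflops = total_aps * peak_per_ap_gflops / 1000  # 131.072 TFLOPS
total_memory_tb = nodes * memory_per_node_gb / 1024  # 20 TB of main memory
print(total_aps, peak_tflops, total_memory_tb)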
Construction of CPU
Each CPU consists of a 4-way superscalar unit (SU), a vector unit (VU), and a main memory access control unit on a single LSI chip. The CPU operates at a clock frequency of 3.2 GHz. Each VU has 72 vector registers, each of which has 256 vector elements, along with 8 sets of six different types of vector pipelines: addition/shifting, multiplication, division, logical operations, masking, and load/store. Pipelines of the same type work together on a single vector instruction, while pipelines of different types can operate concurrently.
Processor Node (PN)
The processor node is composed of 8 CPUs and 10 memory modules.
Interconnection Network (IN)
The remote access control unit (RCU) of each node is directly connected to the crossbar switches and controls inter-node data communications at a 64 GB/s bidirectional transfer rate for both sending and receiving data. The total bandwidth of the inter-node network is thus about 10 TB/s.
Processor Node (PN) Cabinet
Each cabinet houses two processor nodes; a node consists of a power supply part, 8 memory modules, and a PCI box with 8 CPU modules.
Software
The following describes the software technologies used in the operating system, job scheduling, and programming environment of ES2.
Operating system
The operating system running on ES, "Earth Simulator Operating System", is a custom version of NEC's SUPER-UX used for the NEC SX supercomputers that make up ES.
Mass storage file system
If a large parallel job running on 640 PNs reads from or writes to one disk installed in a PN, each PN accesses the disk in sequence and performance degrades severely. Although local I/O, in which each PN reads from or writes to its own disk, solves the problem, managing such a large number of partial files is very hard. ES therefore adopts staging and the Global File System (GFS), which offer high-speed I/O performance.
Job scheduling
ES is basically a batch-job system; Network Queuing System II (NQSII) is used to manage the batch jobs.
ES has two types of queues. The S batch queue is designed for single-node batch jobs and is aimed at pre-runs and post-runs of large-scale batch jobs (preparing initial data, processing simulation results, and other tasks), while the L batch queue is for multi-node production runs. Users choose the appropriate queue for their job. Two scheduling strategies apply:
(1) The nodes allocated to a batch job are used exclusively for that batch job.
(2) The batch job is scheduled based on elapsed time instead of CPU time.
Strategy (1) makes it possible to estimate the job termination time and easy to allocate nodes for the next batch jobs in advance. Strategy (2) contributes to efficient job execution: a job can use its nodes exclusively, and the processes in each node can be executed simultaneously. As a result, large-scale parallel programs can be executed efficiently.
PNs of the L-system are prohibited from accessing the user disk, to ensure sufficient disk I/O performance. Therefore, the files used by a batch job are copied from the user disk to the work disk before job execution. This process is called "stage-in". It is important to hide this staging time in the job scheduling.
Main steps of the job scheduling are summarized as follows;
Node Allocation
Stage-in (copies files from the user disk to the work disk automatically)
Job Escalation (rescheduling for the earlier estimated start time if possible)
Job Execution
Stage-out (copies files from the work disk to the user disk automatically)
When a new batch job is submitted, the scheduler searches for available nodes (Step 1). After the nodes and the estimated start time are allocated to the batch job, the stage-in process starts (Step 2). The job then waits until the estimated start time after the stage-in process is finished. If the scheduler finds an earlier start time than the estimated one, it allocates the new start time to the batch job; this process is called "job escalation" (Step 3). When the estimated start time arrives, the scheduler executes the batch job (Step 4). The scheduler terminates the batch job and starts the stage-out process after the job execution is finished or the declared elapsed time is over (Step 5).
To execute a batch job, the user logs into the login server and submits the batch script to ES, then waits until the job execution is done. During that time, the user can see the state of the batch job using a conventional web browser or user commands. Node scheduling, file staging, and other processing are performed automatically by the system according to the batch script.
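The five-step flow above can be rendered as a small Python sketch; every function here is an invented stub standing in for the real NQSII machinery, which is of course far more involved:

from dataclasses import dataclass

@dataclass
class Job:
    name: str
    files: list

def allocate_nodes(job):       # Step 1 (stub: grants two nodes at t=10)
    return ["pn0", "pn1"], 10

def stage_in(job):             # Step 2: copy inputs, user disk -> work disk
    print(f"stage-in: copying {job.files} to the work disk")

def escalate(job, start):      # Step 3 (stub: an earlier slot opened at t=5)
    return min(start, 5)

def execute(job, nodes):       # Step 4: run exclusively on the allocated nodes,
    # bounded by the declared elapsed time, once the start time arrives.
    print(f"running {job.name} exclusively on {nodes}")

def stage_out(job):            # Step 5: copy results, work disk -> user disk
    print("stage-out: copying results back to the user disk")

job = Job("climate_run", ["init.dat"])
nodes, start = allocate_nodes(job)
stage_in(job)
start = escalate(job, start)
execute(job, nodes)
stage_out(job)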
Programming environment
Programming model in ES
The ES hardware has a 3-level hierarchy of parallelism: vector processing in an AP, parallel processing with shared memory in a PN, and parallel processing among PNs via the IN. To fully bring out the high performance of ES, parallel programs must make the most of this parallelism. The 3-level hierarchy can be used in two manners, called hybrid and flat parallelization. In hybrid parallelization, the inter-node parallelism is expressed by HPF or MPI and the intra-node parallelism by microtasking or OpenMP, so the programmer must consider the hierarchical parallelism explicitly. In flat parallelization, both inter- and intra-node parallelism can be expressed by HPF or MPI, and it is not necessary to consider such complicated parallelism. Generally speaking, hybrid parallelization is superior to flat in performance, and vice versa in ease of programming. Note that the MPI libraries and the HPF runtimes are optimized to perform as well as possible in both the hybrid and flat parallelization.
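As a loose illustration of the flat style, in which each arithmetic processor simply hosts one MPI rank, here is a minimal reduction written with mpi4py (assumed to be installed; on ES itself the equivalent would be written in Fortran or C against the vendor MPI library):

# Run with, e.g.: mpirun -n 8 python flat_sum.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each rank owns a slice of the global problem and computes a partial result.
partial = sum(range(rank * 1000, (rank + 1) * 1000))

# In the flat style, inter- and intra-node parallelism are both expressed
# through MPI; a hybrid program would instead thread within each node.
total = comm.reduce(partial, op=MPI.SUM, root=0)
if rank == 0:
    print(f"global sum across {size} ranks: {total}")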
Languages
Compilers for Fortran 90, C and C++ are available. All of them have advanced capabilities for automatic vectorization and microtasking. Microtasking is a sort of multitasking originally provided for Cray supercomputers, and it is also used for intra-node parallelization on ES. Microtasking can be controlled by inserting directives into source programs or by using the compiler's automatic parallelization. (Note that OpenMP is also available in Fortran 90 and C++ for intra-node parallelization.)
Parallelization
Message Passing Interface (MPI)
MPI is a message passing library based on the MPI-1 and MPI-2 standards and provides high-speed communication capability that fully exploits the features of the IXS and shared memory. It can be used for both intra- and inter-node parallelization. An MPI process is assigned to an AP in the flat parallelization, or to a PN that contains microtasks or OpenMP threads in the hybrid parallelization. The MPI libraries are designed and carefully optimized to achieve the highest communication performance on the ES architecture in both parallelization manners.
High Performance Fortran (HPF)
The principal users of ES are considered to be natural scientists who are not necessarily familiar with parallel programming, or who rather dislike it. Accordingly, a higher-level parallel language is in great demand.
HPF/SX provides easy and efficient parallel programming on ES to meet this demand. It supports the HPF 2.0 specification, its approved extensions, HPF/JA, and some extensions unique to ES.
Tools
Integrated development environment (PSUITE)
PSUITE is an integrated development environment combining various tools for developing programs that run on SUPER-UX. Because its tools can be operated through a GUI and are designed to work together, programs can be developed more efficiently and more easily than with earlier development methods.
Debug support
SUPER-UX provides the following strong debugging functions to support program development.
Facilities
Features of the Earth Simulator building
Protection from natural disasters
The Earth Simulator Center has several special features that help to protect the computer from natural disasters. A wire nest hangs over the building to help protect it from lightning, using high-voltage shielded cables to release lightning current into the ground. A special light propagation system utilizes halogen lamps, installed outside the shielded machine room walls, to prevent any magnetic interference from reaching the computers. The building is constructed on a seismic isolation system, composed of rubber supports, that protects the building during earthquakes.
Lightning protection system
Three basic features:
Four poles at the sides of the Earth Simulator Building compose a wire nest that protects the building from lightning strikes.
A special high-voltage shielded cable is used as the inductive wire, releasing lightning current to the earth.
Ground plates are laid about 10 meters away from the building.
Illumination
Lighting: Light propagation system inside a tube
(255 mm diameter, 44 m (49 yd) length, 19 tubes)
Light source: halogen lamps of 1 kW
Illumination: 300 lx at the floor in average
The light sources are installed outside the shielded machine room walls.
Seismic isolation system
11 isolators
(1 ft high, 3.3 ft in diameter, 20-layered rubber supports under the bottom of the ES building)
Performance
LINPACK
The new Earth Simulator system (ES2), which began operation in March 2009, achieved sustained performance of 122.4 TFLOPS and computing efficiency (*2) of 93.38% on the LINPACK Benchmark (*1).
1. LINPACK Benchmark
The LINPACK Benchmark is a measure of a computer's performance and is used as a standard benchmark to rank computer systems in the TOP500 project.
LINPACK is a program for performing numerical linear algebra on computers.
2. Computing efficiency
Computing efficiency is the ratio of sustained performance to peak computing performance. Here, it is the ratio of 122.4 TFLOPS to 131.072 TFLOPS.
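The quoted figure is just this ratio; as a quick check in Python (illustrative only):

sustained_tflops = 122.4
peak_tflops = 131.072            # 1280 APs x 102.4 GFLOPS each
efficiency = sustained_tflops / peak_tflops
print(f"{efficiency:.2%}")       # 93.38%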
Computational performance of WRF on Earth Simulator
WRF (Weather Research and Forecasting Model) is a mesoscale meteorological simulation code developed collaboratively by US institutions, including NCAR (National Center for Atmospheric Research) and NCEP (National Centers for Environmental Prediction). JAMSTEC optimized WRFV2 on the Earth Simulator (ES2), renewed in 2009, and measured its computational performance. As a result, it was successfully demonstrated that WRFV2 can run on ES2 with outstanding and sustained performance.
The numerical meteorological simulation was conducted using WRF on the Earth Simulator for the Earth's hemisphere under the Nature Run model condition. The model's spatial resolution is 4486 by 4486 points horizontally, with a grid spacing of 5 km, and 101 levels vertically. Mostly adiabatic conditions were applied, with a time integration step of 6 seconds.
A very high performance was achieved on the Earth Simulator for high-resolution WRF. While the number of CPU cores used was only 1% of that of the world's fastest-class system, Jaguar (Cray XT5) at Oak Ridge National Laboratory, the sustained performance obtained on the Earth Simulator was almost 50% of that measured on the Jaguar system. The ratio to peak performance on the Earth Simulator was also a record-high 22.2%.
See also
Supercomputing in Japan
Attribution of recent climate change
NCAR
HadCM3
EdGCM
References
External links
ES for kids
2002 in science
Effects of climate change
NEC supercomputers
Numerical climate and weather models
One-of-a-kind computers
Scientific simulation software
Vector supercomputers
64-bit computers
Japan Agency for Marine-Earth Science and Technology
|
1209826
|
https://en.wikipedia.org/wiki/Security%20token
|
Security token
|
A security token is a peripheral device used to gain access to an electronically restricted resource. The token is used in addition to or in place of a password. It acts like an electronic key to access something. Examples include a wireless keycard opening a locked door, or in the case of a customer trying to access their bank account online, the use of a bank-provided token can prove that the customer is who they claim to be.
Some tokens may store cryptographic keys that may be used to generate a digital signature, or biometric data, such as fingerprint details. Some may also store passwords. Some designs incorporate tamper resistant packaging, while others may include small keypads to allow entry of a PIN or a simple button to start a generating routine with some display capability to show a generated key number. Connected tokens utilize a variety of interfaces including USB, near-field communication (NFC), radio-frequency identification (RFID), or Bluetooth. Some tokens have an audio capability designed for vision-impaired people.
Password types
All tokens contain some secret information that is used to prove identity. There are four different ways in which this information can be used:
Static password token: The device contains a password that is physically hidden (not visible to the possessor) but is transmitted for each authentication. This type is vulnerable to replay attacks.
Synchronous dynamic password token: A timer is used to rotate through various combinations produced by a cryptographic algorithm. The token and the authentication server must have synchronized clocks.
Asynchronous password token: A one-time password is generated without the use of a clock, either from a one-time pad or a cryptographic algorithm.
Challenge-response token: Using public key cryptography, it is possible to prove possession of a private key without revealing that key. The authentication server encrypts a challenge (typically a random number, or at least data with some random parts) with a public key; the device proves it possesses a copy of the matching private key by providing the decrypted challenge (a minimal sketch of this exchange follows the list).
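A minimal sketch of the challenge-response flow, using Python's third-party cryptography package (assumed installed) with RSA-OAEP; a real token would keep the private key inside tamper-resistant hardware rather than in host memory:

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# The token's key pair; only the public half ever leaves the device.
token_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
token_public = token_private.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Server side: encrypt a random challenge with the token's public key.
challenge = os.urandom(32)
ciphertext = token_public.encrypt(challenge, oaep)

# Token side: prove possession of the private key by decrypting.
response = token_private.decrypt(ciphertext, oaep)

# Server side: authentication succeeds if the challenge round-trips.
assert response == challenge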
Time-synchronized one-time passwords change constantly at a set time interval, e.g. once per minute. To do this, some sort of synchronization must exist between the client's token and the authentication server. For disconnected tokens, this time-synchronization is done before the token is distributed to the client. Other token types do the synchronization when the token is inserted into an input device. The main problem with time-synchronized tokens is that they can, over time, become unsynchronized. However, some such systems, such as RSA's SecurID, allow the user to resynchronize the server with the token, sometimes by entering several consecutive passcodes. Most also cannot have replaceable batteries and last only up to 5 years before having to be replaced, so there is an additional cost.
Another type of one-time password uses a complex mathematical algorithm, such as a hash chain, to generate a series of one-time passwords from a secret shared key. Each password is observably unpredictable and independent of previous ones, so that an adversary would be unable to guess what the next password may be even with knowledge of all previous passwords. The open-source OATH algorithm is standardized; other algorithms are covered by US patents.
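The standardized OATH algorithms are HOTP (RFC 4226), which derives each code from an HMAC over an incrementing counter, and its time-synchronized variant TOTP (RFC 6238), which replaces the counter with the current time slice. A compact sketch using only Python's standard library:

import hashlib, hmac, struct, time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226: HMAC-SHA1 over the 8-byte counter, then dynamic truncation.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, period: int = 30) -> str:
    # RFC 6238: the counter is the number of elapsed time steps, which is
    # why the token's and the server's clocks must stay synchronized.
    return hotp(secret, int(time.time()) // period)

print(hotp(b"12345678901234567890", 0))  # RFC 4226 test vector: "755224"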
Physical types
Tokens can contain chips with functions varying from very simple to very complex, including multiple authentication methods.
The simplest security tokens do not need any connection to a computer. The tokens have a physical display; the authenticating user simply enters the displayed number to log in. Other tokens connect to the computer using wireless techniques, such as Bluetooth. These tokens transfer a key sequence to the local client or to a nearby access point.
Alternatively, another form of token that has been widely available for many years is a mobile device which communicates using an out-of-band channel (like voice, SMS, or USSD).
Still other tokens plug into the computer, and may require a PIN. Depending on the type of the token, the computer OS will then either read the key from the token and perform a cryptographic operation on it, or ask the token's firmware to perform this operation.
A related application is the hardware dongle required by some computer programs to prove ownership of the software. The dongle is placed in an input device, and the software accesses the I/O device in question to authorize the use of that software.
Commercial solutions are provided by a variety of vendors, each with their own proprietary (and often patented) implementation of variously used security features. Token designs meeting certain security standards are certified in the United States as compliant with FIPS 140, a federal security standard. Tokens without any kind of certification are sometimes viewed as suspect, as they often do not meet accepted government or industry security standards, have not been put through rigorous testing, and likely cannot provide the same level of cryptographic security as token solutions which have had their designs independently audited by third-party agencies.
Disconnected tokens
Disconnected tokens have neither a physical nor logical connection to the client computer. They typically do not require a special input device, and instead use a built-in screen to display the generated authentication data, which the user enters manually themselves via a keyboard or keypad. Disconnected tokens are the most common type of security token used (usually in combination with a password) in two-factor authentication for online identification.
Connected tokens
Connected tokens are tokens that must be physically connected to the computer with which the user is authenticating. Tokens in this category automatically transmit the authentication information to the client computer once a physical connection is made, eliminating the need for the user to manually enter the authentication information. However, in order to use a connected token, the appropriate input device must be installed. The most common types of physical tokens are smart cards and USB tokens, which require a smart card reader and a USB port respectively. Increasingly, FIDO2 tokens, supported by the open specification group FIDO Alliance, have become popular for consumers, with mainstream browser support beginning in 2015 and support from popular websites and social media sites.
Older PC card tokens are made to work primarily with laptops. Type II PC Cards are preferred as a token as they are half as thick as Type III.
The audio jack port is a relatively practical method of establishing a connection between mobile devices, such as an iPhone, iPad or Android device, and other accessories. The best-known such device is Square, a credit card reader for iOS and Android devices.
Some use a special purpose interface (e.g. the crypto ignition key deployed by the United States National Security Agency). Tokens can also be used as a photo ID card. Cell phones and PDAs can also serve as security tokens with proper programming.
Smart cards
Many connected tokens use smart card technology. Smart cards can be very cheap (around ten cents) and contain proven security mechanisms (as used by financial institutions, like cash cards). However, the computational performance of smart cards is often rather limited because of extremely low power consumption and ultra-thin form-factor requirements.
Smart-card-based USB tokens which contain a smart card chip inside provide the functionality of both USB tokens and smart cards. They enable a broad range of security solutions and provide the abilities and security of a traditional smart card without requiring a unique input device. From the computer operating system's point of view such a token is a USB-connected smart card reader with one non-removable smart card present.
Contactless tokens
Unlike connected tokens, contactless tokens form a logical connection to the client computer but do not require a physical connection. The absence of the need for physical contact makes them more convenient than both connected and disconnected tokens. As a result, contactless tokens are a popular choice for keyless entry systems and electronic payment solutions such as Mobil Speedpass, which uses RFID to transmit authentication info from a keychain token. However, there have been various security concerns raised about RFID tokens after researchers at Johns Hopkins University and RSA Laboratories discovered that RFID tags could be easily cracked and cloned.
Another downside is that contactless tokens have relatively short battery lives; usually only 5–6 years, which is low compared to USB tokens which may last more than 10 years. Some tokens however do allow the batteries to be changed, thus reducing costs.
Bluetooth tokens
Bluetooth Low Energy protocols allow for a long-lasting battery lifecycle in wireless transmission.
The transmission of inherent Bluetooth identity data is the lowest-quality means of supporting authentication.
A bidirectional connection for transactional data interchange serves the most sophisticated authentication procedures.
However, automatic transmission power control works against attempts at radial distance estimates. A workaround, apart from the standardized Bluetooth power control algorithm, is to calibrate to the minimally required transmission power.
Bluetooth tokens are often combined with a USB token, thus working in both a connected and a disconnected state. Bluetooth authentication works when closer than 32 feet (10 meters). When the Bluetooth link is not properly operable, the token may be inserted into a USB input device to function.
Another combination is with smart card to store locally larger amounts of identity data and process information as well. Another is a contactless BLE token that combines secure storage and tokenized release of fingerprint credentials.
In the USB mode of operation, signing off requires handling the token while it is mechanically coupled to the USB plug. The advantage of the Bluetooth mode of operation is the option of combining sign-off with distance metrics; respective products are in preparation, following the concept of an electronic leash.
NFC tokens
Near-field communication (NFC) tokens combined with a Bluetooth token may operate in several modes, thus working in both a connected and a disconnected state. NFC authentication works when closer than 1 foot (0.3 meters). The NFC protocol bridges short distances to the reader, while the Bluetooth connection serves for data provision with the token to enable authentication. When the Bluetooth link is not connected, the token may serve the locally stored authentication information in coarse positioning to the NFC reader, relieving users of the need for exact positioning into a connector.
Single sign-on software tokens
Some types of single sign-on (SSO) solutions, like enterprise single sign-on, use the token to store software that allows for seamless authentication and password filling. As the passwords are stored on the token, users need not remember their passwords and therefore can select more secure passwords, or have more secure passwords assigned. Usually most tokens store a cryptographic hash of the password so that if the token is compromised, the password is still protected.
Programmable tokens
Programmable tokens are marketed as "drop-in" replacement of mobile applications such as Google Authenticator (miniOTP). They can be used as mobile app replacement, as well as in parallel as a backup.
Vulnerabilities
Loss and theft
The simplest vulnerability with any password container is theft or loss of the device. The chances of this happening, or happening unnoticed, can be reduced with physical security measures such as locks, an electronic leash, or a body sensor and alarm. Stolen tokens can be made useless by using two-factor authentication: commonly, in order to authenticate, a personal identification number (PIN) must be entered together with the information provided by the token, at the same time as the output of the token.
Attacking
Any system which allows users to authenticate via an untrusted network (such as the Internet) is vulnerable to man-in-the-middle attacks. In this type of attack, a fraudster acts as the "go-between" of the user and the legitimate system, soliciting the token output from the legitimate user and then supplying it to the authentication system themselves. Since the token value is mathematically correct, the authentication succeeds and the fraudster is granted access. Citibank made headline news in 2006 when its hardware-token-equipped business users became the victims of a large Ukrainian-based man-in-the-middle phishing attack.
Breach of codes
In 2012, the Prosecco research team at INRIA Paris-Rocquencourt developed an efficient method of extracting the secret key from several PKCS #11 cryptographic devices, including the SecurID 800. These findings were documented in INRIA Technical Report RR-7944, ID hal-00691958, and published at CRYPTO 2012.
Digital signature
To be trusted like a regular handwritten signature, a digital signature must be made with a private key known only to the person authorized to make the signature. Tokens that allow secure on-board generation and storage of private keys enable secure digital signatures, and can also be used for user authentication, as the private key also serves as proof of the user's identity.
For tokens to identify the user, every token must contain some kind of unique number. Not all approaches fully qualify as digital signatures according to some national laws. Tokens with no on-board keyboard or other user interface cannot be used in some signing scenarios, such as confirming a bank transaction based on the bank account number that the funds are to be transferred to.
See also
Authentication
Hardware security module
Identity management
Initiative For Open Authentication
Mobile signature
Multi-factor authentication
Mutual authentication
One-time pad
Single sign-on
Software token
Authenticator
References
General references
US Personal Identity Verification (PIV)
External links
OATH Initiative for open authentication
Computer access control
Authentication methods
|
40232630
|
https://en.wikipedia.org/wiki/.NET%20Bio
|
.NET Bio
|
.NET Bio is an open source bioinformatics and genomics library created to enable simple loading, saving and analysis of biological data. It was designed for .NET Standard 2.0 and was part of the Microsoft Biology Initiative in the eScience division.
History
.NET Bio was originally built and released by Microsoft Research under the name Microsoft Biology Foundation (MBF) and was later repackaged and released by the Outercurve Foundation as a fully public and open source project under the Apache License 2.0.
Capabilities
The library consists of a set of object-oriented classes written in C# to perform common bioinformatic tasks such as:
Read and write standard alignment and sequence-oriented data files such as FASTA and GenBank.
Access online web services such as NCBI BLAST to search known databases for sequence fragments.
Algorithms for local and global alignments.
Algorithms for sequence assembly, including a parallel DeNovo assembler implementation.
Even though the library itself is written in C#, it may be used from any .NET-compatible language and includes samples of various usages, including IronPython scripting.
See also
Genome Compiler
Open Bioinformatics Foundation
BioJava, BioPerl, BioPython, BioRuby
Bioclipse
References
External links
.NET Bio Website
Original MBF Website
Microsoft Biology Initiative
.NET software
Software that uses Mono (software)
Bioinformatics software
Free and open-source software
Microsoft free software
Software using the Apache license
|
42065078
|
https://en.wikipedia.org/wiki/Alexander%20Ollongren
|
Alexander Ollongren
|
Jhr. Alexander Ollongren (born November 9, 1928) is a professor emeritus at Leiden University. He serves on the Advisory Council of METI (Messaging Extraterrestrial Intelligence).
Personal life
Alexander Ollongren was born on November 9, 1928 on a coffee plantation in Kepahiang, in the southwestern part of Sumatra, Netherlands East Indies. His father, Alexander Ollongren (1901–1989), was born in Russia and was of mixed Finnish and Swedish descent, a member of the Finnish noble family Ållongren. His mother, Selma Hedwig Adèle Jaeger (1901–2000), was of Dutch and German heritage. The family moved to Java in early 1932 and lived in Yogyakarta when the Japanese army occupied the Netherlands East Indies in 1942. In 1945, the family was interned in various Japanese internment camps, most notably Fort van den Bosch in the modern Ngawi Regency. After the war, Ollongren continued his education in Jakarta. The family stayed in Australia for six months in order to recuperate and later moved to the Netherlands, where Ollongren decided to enroll at Leiden University.
He married Gunvor Ulla Marie Lundgren, a Swede, in 1965 in Jönköping. Their children are Karin Hildur (Kajsa) Ollongren, a noted liberal politician and government minister, born in 1967, and Peter Gunnar Ollongren, born in 1970.
Education
His education at Leiden University started with undergraduate and graduate studies in mathematics, Hamiltonian mechanics, physics, and astronomy, after which he gained his MSc degree in 1955. After completing his master's degree, he served almost two years in the military. In 1958, he started his doctoral research in galactic astronomy, supervised by Jan H. Oort and Hendrik C. van de Hulst of the Astronomical Department at Leiden. His research topic was the three-dimensional orbital motions of stars in the galaxy. Characterizing orbital stellar motion in a galaxy could not be done analytically, so a number of sample orbits had to be computed using the rudimentary computers of the time. In cooperation with astronomer Ingrid Torgård (1918–2001) of Lund Observatory in Sweden, the then famous and extremely fast electronic computer BESK in Stockholm was programmed to do the necessary computing. The analysis of the problem, together with the computational results and Ollongren's interpretation of them, earned him a PhD degree in astronomy from Leiden University in 1962.
Career
Leiden University
In 1961, the Leiden University Council decided that the university was in need of an institute to operate and manage a fast electronic computer in order to meet computing demands from a wide range of institutions. Thus, the Central Computing Institute was created. A modern, transistorized computer, built by the Dutch company Electrologica, was installed and Ollongren was appointed Acting Director of the Institute. A year later he became Associate Director of the university computer centre. As demands for computing services were increasing in the university, it became evident that the central computing institute would need more powerful computer facilities. After the appointment of Guus Zoutendijk, mathematician, as General Director in 1964, switching to an IBM mainframe was seriously considered and eventually effected. In the wake of the new orientation, Ollongren was granted a leave of absence.
Yale University
After being invited by Dirk Brouwer, for approximately a year and a half, between 1965 and 1967, Ollongren was a postdoctoral visiting research member in celestial mechanics and lecturer in mathematics at the well-known Research Center of Celestial Mechanics at Yale University, New Haven, Connecticut. While in the United States, he became well acquainted with the programming and use of modern, large-size IBM computing equipment. He then returned to the newly created Department of Applied Mathematics at Leiden University, and in 1968, became a lecturer in numerical mathematics and computer science. A year later, he became an Associate Professor in theoretical computer science, covering aspects of programming languages. In 1971, he was granted another leave of absence, enabling him to accept the position of Visiting Research Member at the IBM Research Laboratory in Vienna, Austria for three months.
Return to Leiden University
In 1980, Ollongren became a Full Professor of computer science at Leiden, specializing in the semantics of programming languages. That same year, he spent a half year sabbatical at the Department of Computer Science and Artificial Intelligence of Linköping University in Sweden. Several years later, the computer science section of the department became the Leiden Institute of Advanced Computer Science (LIACS).
Ollongren retired at the age of 65. He became Emeritus Professor of Leiden University in November 1993, delivering a public lecture called Vix Famulis Audenda Parat, which included an invited speech by 'Alan Turing', enacted by George K. Miley, a university astronomer, in the University's auditorium.
Ollongren is a member of several societies of computer science; astronomy, including the International Astronomical Union; and astronautics.
SETI
After his retirement, he became interested in the academic debate on the Search for ExtraTerrestrial Intelligence (SETI) within the International Astronautical Academy. In particular, he wrote several studies in the field of interstellar communication with extraterrestrials. He also developed a new version of Lincos, a universally comprehensible language based on logic for the purpose of communication with extraterrestrial intelligence. His major contribution to this field is his book Astrolinguistics, published by Springer in 2013.
Further reading
Astrolinguistics, Design of a Linguistic System for Interstellar Communication Based on Logic (New York: Springer, 2013)
Definition of programming languages by interpreting automata (London: Academic Press, 1974)
References
1928 births
Living people
Dutch computer scientists
20th-century Dutch astronomers
Interstellar messages
Jonkheers of the Netherlands
Leiden University faculty
Leiden University alumni
People from Sumatra
Dutch people of Finnish descent
|
43306524
|
https://en.wikipedia.org/wiki/KDE%20Plasma%205
|
KDE Plasma 5
|
KDE Plasma 5 is the fifth and current generation of the graphical workspaces environment created by KDE primarily for Linux systems. KDE Plasma 5 is the successor of KDE Plasma 4 and was first released on 15 July 2014.
It includes a new default theme, known as "Breeze", as well as increased convergence across different devices. The graphical interface was fully migrated to QML, which uses OpenGL for hardware acceleration; this resulted in better performance and reduced power consumption.
Plasma Mobile is a Plasma 5 variant for Linux-based smartphones.
Overview
Software architecture
KDE Plasma 5 is built using Qt 5 and KDE Frameworks 5, predominantly plasma-framework.
It improves support for HiDPI displays and ships a convergent graphical shell, which can adjust itself according to the device in use. 5.0 also includes a new default theme, dubbed Breeze. Qt 5's QtQuick 2 uses a hardware-accelerated OpenGL(ES) scene graph (canvas) to compose and render graphics on the screen, which allows the offloading of computationally expensive graphics rendering tasks onto the GPU, freeing up resources on the system's main CPU.
Windowing systems
KDE Plasma 5 uses the X Window System and Wayland. Support for Wayland was prepared in the compositor and planned for a later release; it was initially made available in the 5.4 release. Stable support for a basic Wayland session was provided in the 5.5 release (December 2015).
Support for NVIDIA proprietary driver for Plasma on Wayland was added in the 5.16 release (June 2019).
Development
Since the split of the KDE Software Compilation into KDE Plasma, KDE Frameworks and KDE Applications, each subproject can develop at its own pace. KDE Plasma 5 is on its own release schedule, with feature releases every four months, and bugfix releases in the intervening months.
Workspaces
The latest Plasma 5 features the following workspaces:
Plasma Desktop for any mouse or keyboard driven computing devices like desktops or laptops
Plasma Mobile for smartphones
Plasma Bigscreen for TVs and set-top boxes incl. voice interaction
Plasma Nano, a minimal shell for embedded and touch-enabled devices, like IoT or automotive
Desktop features
KRunner, a search feature with many available plugins. In addition to launching apps, it can find files and folders, open websites, convert from one currency or unit to another, calculate simple mathematical expressions, and perform numerous other useful tasks.
Flexible desktop and panel layouts composed of individual Widgets (also known as "Plasmoids") which can be individually configured, moved around, replaced with alternatives, or deleted. Each screen's layout can be individually configured. New widgets created by others can be downloaded within Plasma.
Powerful clipboard with a memory of previously-copied pieces of text that can be called up at will.
Systemwide notification system supporting quick reply and drag-and-drop straight from notifications, history view, and a Do Not Disturb mode.
Central location to control playback of media in open apps, your phone (with KDE Connect installed), or your web browser (with Plasma Browser Integration installed)
Activities, which allow you to separate your methods of using the system into distinct workspaces. Each activity can have its own set of favorite and recently used applications, wallpapers, "virtual desktops", panels, window styles, and layout configurations. Activities also couple with the X Session Manager implementation, which keeps track of apps that can be run or shut down along with a given activity via the subSessions functionality that tracks application state (not all applications support this feature, as they do not implement the XSMP protocol).
Encrypted vaults for storing sensitive data.
Night Color, which can automatically warm the screen colors at night, at user-specified times, or manually.
Styling for icons, cursors, application colors, user interface elements, splash screens and more can be changed, with new styles created by others being downloadable from within the System Settings application. Global Themes allow the entire look-and-feel of the system to be changed in one click.
Session Management allows apps which were running when the system shut down to be automatically restarted in the same state they were in before.
Linux distributions using Plasma
Plasma 5 is a default desktop environment (or one of the defaults) on Linux distributions, such as:
ArcoLinux
Fedora – Fedora KDE Plasma Desktop Edition is an official Fedora spin distributed by the project
KaOS
KDE neon
Kubuntu
LliureX
Manjaro – as Manjaro KDE edition
MX Linux
Netrunner
openSUSE
PCLinuxOS
Q4OS
Slackware
Solus Plasma
SteamOS 3.0
Ubuntu Studio – beginning with 20.10
History
The first Technology Preview of Plasma 5 (at that time called Plasma 2) was released on 13 December 2013. On 15 July 2014, the first release version, Plasma 5.0, was published.
In spring 2015, Plasma 5 replaced Plasma 4 in many popular distributions, such as Fedora 22, Kubuntu 15.04, and openSUSE Tumbleweed.
Releases
Feature releases are released every four months (up to 5.8, every three months) and bugfix releases in the intervening months. Following version 5.8 LTS, KDE plans to support each new LTS version for 18 months with bug fixes, while new regular releases will see feature improvements.
Future Planning
Plasma 6: According to the official schedule, "Following 5.24 LTS, our next Plasma 5 LTS will likely be at the same time as the first Plasma 6 release, no date has been set yet".
See also
Comparison of X Window System desktop environments
Gallery
References
External links
Plasma Mobile website
Plasma user wiki
Plasma developer wiki
Free desktop environments
KDE Plasma
Software that uses QML
Unix windowing system-related software
Widget engines
List of ARM Cortex-M development tools
This is a list of development tools for 32-bit ARM Cortex-M-based microcontrollers, covering the Cortex-M0, Cortex-M0+, Cortex-M1, Cortex-M3, Cortex-M4, Cortex-M7, Cortex-M23 and Cortex-M33 cores.
Development toolchains
IDE, compiler, linker, debugger, flashing (in alphabetical order):
Ac6 System Workbench for STM32 (based on Eclipse and the GNU GCC toolchain with direct support for all ST-provided evaluation boards, Eval, Discovery and Nucleo, debug with ST-LINK)
ARM Development Studio 5 by ARM Ltd.
Atmel Studio by Atmel (based on Visual Studio and GNU GCC Toolchain)
Code Composer Studio by Texas Instruments
CoIDE by CooCox (note: website dead since 2018)
Crossware Development Suite for ARM by Crossware
CrossWorks for ARM by Rowley
Dave by Infineon. For XMC processors only. Includes project wizard, detailed register decoding and a code library still under development.
DRT by SOMNIUM Technologies. Based on GCC toolchain and proprietary linker technology. Available as a plugin for Atmel Studio and an Eclipse-based IDE.
Eclipse as IDE, with GNU Tools as compiler/linker, e.g. aided with GNU ARM Eclipse plug-ins
EmBitz (formerly Em::Blocks) – free, fast (non-eclipse) IDE for ST-LINK (live data updates), OpenOCD, including GNU Tools for ARM and project wizards for ST, Atmel, EnergyMicro etc.
Embeetle IDE - free, fast (non-eclipse) IDE. Works both on Linux and Windows.
emIDE by emide – free Visual Studio Style IDE including GNU Tools for ARM
GNU ARM Eclipse – A family of Eclipse CDT extensions and tools for GNU ARM development
GNU Tools (aka GCC) for ARM Embedded Processors by ARM Ltd – free GCC for bare metal
IAR Embedded Workbench for ARM by IAR
ICC by ImageCraft
Keil MDK-ARM by Keil
LPCXpresso by NXP (formerly Red Suite by Code Red Technologies)
mikroC by mikroe
MULTI by Green Hills Software, for all Arm 7, 9, Cortex-M, Cortex-R, Cortex-A
Ride and RKit for ARM by Raisonance
SEGGER Embedded Studio for ARM by SEGGER.
SEGGER Ozone by SEGGER.
STM32CubeIDE by ST - Combines STCubeMX with TrueSTUDIO into a single Eclipse style package
Sourcery CodeBench by Mentor Graphics
TASKING VX-Toolset by Altium
TrueSTUDIO by Atollic
Visual Studio by Microsoft as IDE, with GNU Tools as compiler/linker – e.g. supported by VisualGDB
VXM Design's Buildroot toolchain for Cortex. It integrates GNU toolchain, Nuttx, filesystem and debugger/flasher in one build.
winIDEA/winIDEAOpen by iSYSTEM
YAGARTO – free GCC (no longer supported)
Code::Blocks (EPS edition) (debug with ST-LINK no GDB and no OpenOCD required)
IDE for Arduino ARM boards
Arduino – IDE for Atmel SAM3X (Arduino Due)
Energia – Arduino IDE for Texas Instruments Tiva and CC3200
Debugging tools
JTAG and/or SWD debug interface host adapters (in alphabetical order):
Black Magic Probe by 1BitSquared.
CMSIS-DAP by mbed.
Crossconnect by Rowley Associates.
DSTREAM by ARM Ltd.
Green Hills Probe and SuperTrace Probe.
iTAG by iSYSTEM.
I-jet by IAR.
Jaguar by Crossware.
J-Link by SEGGER. Supports JTAG and SWD; supports ARM7, ARM9, ARM11, Cortex-A, Cortex-M, Cortex-R, Renesas RX and Microchip PIC32. An Eclipse plug-in is available. Supports the GDB, RDI and Ozone debuggers.
J-Trace by SEGGER. Supports JTAG, SWD, and ETM trace on Cortex-M.
JTAGjet by Signum.
LPC-LINK by Embedded Artists (for NXP). This interface is only embedded on NXP LPCXpresso development boards.
LPC-LINK 2 by NXP. This device can be reconfigured to support 3 different protocols: J-Link by SEGGER, CMSIS-DAP by ARM, Redlink by Code Red.
Multilink debug probes, Cyclone in-system programming/debugging interfaces, and a GDB Server plug-in for Eclipse-based ARM IDEs by PEmicro.
OpenOCD, an open-source GDB server, supports a variety of JTAG probes; an OpenOCD Eclipse plug-in is available in the GNU ARM Eclipse Plug-ins.
AK-OPENJTAG by Artekit (Open JTAG-compatible).
AK-LINK by Artekit.
PEEDI by RONETIX
RLink by Raisonance.
ST-LINK/V2 by STMicroelectronics. The ST-LINK/V2 debugger embedded on STM32 Nucleo and Discovery development boards can be converted to the SEGGER J-Link protocol.
TRACE32 Debugger and ETM/ITM Trace by Lauterbach.
ULINK by Keil.
Debugging tools and/or debugging plug-ins (in alphabetical order):
GNU ARM Eclipse J-Link Debugging plug-in.
GNU ARM Eclipse OpenOCD Debugging plug-in.
Memfault Error Analysis for post mortem debugging
Percepio Tracealyzer, RTOS trace visualizer (with Eclipse plugin).
SEGGER SystemView, RTOS trace visualizer.
Real-time operating systems
Commonly referred to as RTOS:
C/C++ software libraries
The following are free C/C++ libraries:
ARM Cortex libraries:
Cortex Microcontroller Software Interface Standard (CMSIS)
CMSIS++: a proposal for the next generation CMSIS, written in C++
libopencm3 (formerly called libopenstm32)
libmaple for STM32F1 chips
LPCOpen for NXP LPC chips
Alternate C standard libraries:
Bionic libc, dietlibc, EGLIBC, glibc, klibc, musl, Newlib, uClibc
FAT file system libraries:
EFSL, FatFs, Petit FatFs
Fixed-point math libraries:
libfixmath, fixedptc, FPMLib
Encryption libraries:
Comparison of TLS implementations
wolfSSL
Non-C/C++ computer languages and software libraries
See also
List of free and open-source software packages
Comparison of real-time operating systems
List of terminal emulators
References
Further reading
External links
Cortex-M development tools
Programming tools
Lists of software
UXu
uXu, or Underground eXperts United, was an underground ezine active from 1991 to 2002. It was founded in 1991 by ex-members of the Swedish Hackers Association and was based in Sweden. The group was influenced by a similar movement in the United States known as Cult of the Dead Cow, or cDc.
The group wrote and published 617 articles in English and more than 100 in Swedish. The first published articles, written in ASCII text, included descriptions of bombs, technology, and the computer scene in Sweden. Over the years these themes expanded to include journal entries, philosophy, song lyrics, and interviews.
External links
The complete works of the uXu 1991 - 2002
Hackers vandalize CIA homepage
The Electronic Intrusion Threat to National Security and Emergency Preparedness Telecommunications
1991 establishments in Sweden
2002 disestablishments in Sweden
Defunct magazines published in Sweden
Hacker magazines
Hacker groups
Magazines established in 1991
Magazines disestablished in 2002
Computer magazines published in Sweden
Swedish-language magazines
Works about computer hacking
Defunct computer magazines
1934 USC Trojans football team
The 1934 USC Trojans football team represented the University of Southern California (USC) in the 1934 college football season. In their tenth year under head coach Howard Jones, the Trojans compiled a 4–6–1 record (1–4–1 against conference opponents), finished in seventh place in the Pacific Coast Conference, and outscored their opponents by a combined total of 120 to 110.
Schedule
References
USC
USC Trojans football seasons
USC Trojans football
List of display servers
This is a list of display servers.
X11
Cygwin/X
KDrive
Low Bandwidth X
MacX
Mir (display server)
MKS X/Server
Multi-Pointer X
Reflection X
RISCwindows
WiredX
X Window System
X-Win32
X.Org Server
X386
Xapollo
XDarwin
Xephyr
XFree86
Xming
Xmove
Xnest
Xnews (X11 server)
Xpra
XQuartz
Xsgi
Xsun
Xvfb
XWinLogon
Wayland
1 A pivotal difference between Android and the other Linux kernel-based operating systems is the C standard library: Android's libbionic does not aim to support POSIX to the same extent as the other libraries. With the help of libhybris it is possible to run Android-only software on other Linux kernel-based operating systems, as long as this software does not depend on subsystems found only in the Android-forked Linux kernel, such as binder, pmem, ashmem, etc. Whether software programmed for Linux can run on Android depends entirely on the extent to which libbionic matches the API of glibc.
2 libinput provides device detection via udev, device handling, and input device event processing and abstraction. It also provides a generic X.Org input driver. libinput support was first merged in Weston 1.5, and it is also used by Mutter.
Other
DirectFB
Quartz Compositor
SPICE
SurfaceFlinger
See also
Display server
Windowing system
References
Display servers
Computer graphics
Communications protocols
Emoji
An emoji (plural emoji or emojis) is a pictogram, logogram, ideogram or smiley embedded in text and used in electronic messages and web pages. The primary function of emoji is to fill in emotional cues otherwise missing from typed conversation. Some examples of emoji are 😂, 😃, 🧘🏻♂️, 🌍, 🌦️, 🍞, 🚗, 📞, 🎉, ❤️, 🍆, 🏁, among many others. Emoji exist in various genres, including facial expressions, common objects, places and types of weather, and animals. They are much like emoticons, but emoji are pictures rather than typographic approximations; the term "emoji" in the strict sense refers to such pictures which can be represented as encoded characters, but it is sometimes applied to messaging stickers by extension. Originally meaning pictograph, the word emoji comes from Japanese e (絵, "picture") + moji (文字, "character"); the resemblance to the English words emotion and emoticon is purely coincidental. The ISO 15924 script code for emoji is Zsye.
Originating on Japanese mobile phones in 1997, emoji became increasingly popular worldwide in the 2010s after being added to several mobile operating systems. They are now considered to be a large part of popular culture in the West and around the world. In 2015, Oxford Dictionaries named the Face with Tears of Joy emoji (😂) the word of the year.
History
Evolution from emoticons (1990s)
The emoji was predated by the emoticon, a concept first put into practice in 1982 by computer scientist Scott Fahlman when he suggested text-based symbols such as :-) and :-( could be used to replace language. Theories about language replacement can be traced back to the 1960s, when Russian novelist and professor Vladimir Nabokov stated in an interview with The New York Times: "I often think there should exist a special typographical sign for a smile — some sort of concave mark, a supine round bracket." The idea did not become mainstream until the 1990s, when Japanese, American and European companies started experimenting with modified versions of Fahlman's idea. Mary Kalantzis and Bill Cope stated this concept was further developed by Bruce Parello, a student at the University of Illinois, on PLATO IV, the first e-learning system, in 1972. The PLATO system was not considered mainstream, however, so Parello's pictograms were only used by a small number of people. Fahlman's emoticons, by contrast, both used common alphabet symbols and aimed to replace language/text to demonstrate emotions, and for that reason they are seen as the true origin of emoticons.
Wingdings, a font invented by Charles Bigelow and Kris Holmes, was first used by Microsoft in 1990. It could be used to send pictographs in rich text messages, but would only load on devices with the Wingdings font installed. In 1995, the French newspaper Le Monde announced that Alcatel would be launching a new phone, the BC 600. Its welcome screen displayed a digital smiley face, replacing the usual text-based "welcome message" often seen on other devices at the time. In 1997, J-Phone launched the SkyWalker DP-211SW, which contained a set of 90 emoji, thought to be the first set of its kind. Its designs, each measuring 12 by 12 pixels, were black and white, depicting numbers, sports, the time, moon phases and the weather. It notably contained the Pile of Poo emoji. The J-Phone model experienced low sales, however, and the emoji set was rarely used.
In 1999, Shigetaka Kurita created 176 emoji as part of NTT DoCoMo's i-mode, used on its mobile platform. They were intended to help facilitate electronic communication, and to serve as a distinguishing feature from other services. Due to their influence, Kurita's designs were once frequently claimed to be the first cellular emoji; however, Kurita has denied this to be the case. According to interviews, he took inspiration from Japanese manga, where characters are often drawn with symbolic representations called manpu (such as a water drop on a face representing nervousness or confusion), and from weather pictograms used to depict the weather conditions at any given time. He also drew inspiration from Chinese characters and street sign pictograms. The DoCoMo i-mode set included facial expressions, such as smiley faces, derived from a Japanese visual style commonly found in manga and anime, combined with kaomoji and smiley elements. Kurita's work is now displayed in the Museum of Modern Art in New York City.
Kurita's emoji were brightly colored, albeit with a single color per glyph. General-use emoji, such as sports, actions and weather, can easily be traced back to Kurita's set. The notable absentee was any pictogram demonstrating emotion: the yellow-faced emoji commonly used today evolved from other emoticon sets and cannot be traced back to Kurita's work. His set was also made up of generic images, much like the J-Phone's. Elsewhere in the 1990s, Nokia phones began including preset pictograms in their text messaging app, defined as "smileys and symbols". A third notable emoji set was introduced by the Japanese mobile phone brand au by KDDI.
Development of emoji sets (2000–2007)
The basic 12-by-12-pixel emoji in Japan grew in popularity across various platforms over the next decade. This was aided by the popularity of DoCoMo i-mode, which for many was the origins of the smartphone. The i-mode service also saw the introduction of emoji in conversation form on messenger apps. By 2004, i-mode had 40 million subscribers, meaning numerous people were exposed to the emoji for the first time between 2000 and 2004. The popularity of i-mode led to other manufacturers competing with similar offerings and therefore developed their own emoji sets. While emoji adoption was high in Japan during this time, the companies failed to collaborate and come up with a uniform set of emoji to be used across all platforms in the country.
The Universal Coded Character Set (Unicode), overseen by the Unicode Consortium and ISO/IEC JTC 1/SC 2, had already been established as the international standard for text representation (ISO/IEC 10646) since 1993, although variants of Shift JIS remained relatively common in Japan. Unicode included several characters which would subsequently be classified as emoji, including some from North American or Western European sources such as DOS code page 437, ITC Zapf Dingbats or the WordPerfect Iconic Symbols set. Unicode's coverage of written characters was extended several times by new editions during the 2000s, with little interest in incorporating the Japanese cellular emoji sets (which were deemed out of scope), although symbol characters which would subsequently be classified as emoji continued to be added. For example, the Unicode 4.0 release contained 16 new emoji, which included direction arrows, a warning triangle, and an eject button. Besides Zapf Dingbats, other dingbat fonts such as Wingdings or Webdings also included additional pictographic symbols in their own custom pi font encodings; unlike Zapf Dingbats, however, many of these would not be available as Unicode emoji until 2014.
The Smiley Company developed The Smiley Dictionary, launched in 2001. The desktop platform let people insert smileys as text when sending emails and writing on a desktop computer. The smiley toolbar offered a variety of symbols and smileys and was used on platforms such as MSN Messenger. Nokia, one of the largest telecoms companies globally at the time, was still referring to today's emoji sets as smileys in 2001. The digital smiley movement was headed by Nicolas Loufrani, the CEO of The Smiley Company. His smiley toolbar, available at smileydictionary.com during the early 2000s, allowed smileys to be sent much as emoji are today.
Beginnings of Unicode emoji (2008–2014)
Mobile providers in both the United States and Europe began discussions on how to introduce their own emoji sets from 2004 onwards. Many companies did not begin to take emoji seriously until Google employees requested that Unicode look into the possibility of a uniform emoji set. Apple quickly followed and began to collaborate with not only Google, but also providers in Europe and Japan. In August 2007, Mark Davis and his colleagues Kat Momoi and Markus Scherer wrote the first draft for consideration by the Unicode Technical Committee (UTC) to introduce emoji into the Unicode standard. The UTC, having previously deemed emoji to be out of scope for Unicode, decided to broaden that scope to enable compatibility with the Japanese cellular carrier formats, which were becoming more widespread. Peter Edberg and Yasuo Kida of Apple Inc. joined the collaborative efforts shortly after, and the official UTC proposal, with them as co-authors, followed in January 2009.
Pending the assignment of standard Unicode code points, Google and Apple implemented emoji support via Private Use Area schemes. Google first introduced emoji in Gmail in October 2008, in collaboration with au by KDDI, and Apple introduced the first release of Apple Color Emoji to iPhone OS on 21 November 2008. Initially, Apple's emoji support was implemented for holders of a SoftBank SIM card; the emoji themselves were represented using SoftBank's Private Use Area scheme and mostly resembled the SoftBank designs. Gmail emoji used their own Private Use Area scheme, in a supplementary Private Use plane.
Separately, a proposal had been submitted in 2008 to add the ARIB extended characters used in broadcasting in Japan to Unicode. This included several pictographic symbols. These were added in Unicode 5.2 in 2009, a year before the cellular emoji sets were fully added; they include several characters which either also appeared amongst the cellular emoji or were subsequently classified as emoji.
After iPhone users in the United States discovered that downloading Japanese apps allowed access to the keyboard, pressure grew to expand the availability of the emoji keyboard beyond Japan. The Emoji application for iOS, which altered the Settings app to allow access to the emoji keyboard, was created by Josh Gare in February 2010. Before the existence of Gare's Emoji app, Apple had intended for the emoji keyboard to only be available in Japan in iOS version 2.2.
Throughout 2009, members of the Unicode Consortium and national standardization bodies of various countries gave feedback and proposed changes to the international standardization of the emoji. The feedback from various bodies in the United States, Europe, and Japan agreed on a set of 722 emoji as the standard set. This would be released in October 2010 in Unicode 6.0. Apple made the emoji keyboard available to those outside of Japan in iOS version 5.0 in 2011. Later, Unicode 7.0 (June 2014) added the character repertoires of the Webdings and Wingdings fonts to Unicode, resulting in approximately 250 more Unicode emoji.
The Unicode emoji whose code points were assigned in 2014 or earlier are therefore taken from several sources. A single character could exist in multiple sources, and characters from a source were unified with existing characters where appropriate: for example, the "shower" weather symbol (☔️) from the ARIB source was unified with an existing umbrella with raindrops character, which had been added for KPS 9566 compatibility. The emoji characters from all three Japanese carriers were in turn unified with the ARIB character. However, the Unicode Consortium groups the most significant sources of emoji into four categories:
UTS #51 and modern emoji (2015–present)
In late 2014, a Public Review Issue was created by the Unicode Technical Committee, seeking feedback on a proposed Unicode Technical Report (UTR) titled "Unicode Emoji". This was intended to improve interoperability of emoji between vendors, and define a means of supporting multiple skin tones. The feedback period closed in January 2015. Also in January 2015, the use of the zero width joiner to indicate that a sequence of emoji could be shown as a single equivalent glyph (analogous to a ligature) as a means of implementing emoji without atomic code points, such as varied compositions of families, was discussed within the "emoji ad-hoc committee".
Unicode 8.0 (June 2015) added another 41 emoji, including articles of sports equipment such as the cricket bat, food items such as the taco, new facial expressions, and symbols for places of worship, as well as five characters (crab, scorpion, lion face, bow and arrow, amphora) to improve support for pictorial rather than symbolic representations of the signs of the Zodiac.
Also in June 2015, the first approved version ("Emoji 1.0") of the Unicode Emoji report was published as Unicode Technical Report #51 (UTR #51). This introduced the mechanism of skin tone indicators, the first official recommendations about which Unicode characters were to be considered emoji, and the first official recommendations about which characters were to be displayed in an emoji font in absence of a variation selector, and listed the zero width joiner sequences for families and couples that were implemented by existing vendors. Maintenance of UTR #51, taking emoji requests, and creating proposals for emoji characters and emoji mechanisms was made the responsibility of the Unicode Emoji Subcommittee (ESC), operating as a subcommittee of the Unicode Technical Committee.
With the release of version 5.0 in May 2017 alongside Unicode 10.0, UTR #51 was redesignated a Unicode Technical Standard (UTS #51), making it an independent specification rather than merely an informative document. As of that release, there were 2,666 Unicode emoji listed. The next version of UTS #51 (published in May 2018) skipped to the version number Emoji 11.0, so as to synchronise its major version number with the corresponding version of the Unicode Standard.
The popularity of emoji has caused pressure from vendors and international markets to add additional designs into the Unicode standard to meet the demands of different cultures. Some characters now defined as emoji are inherited from a variety of pre-Unicode messenger systems not only used in Japan, including Yahoo and MSN Messenger.
Corporate demand for emoji standardization has placed pressures on the Unicode Consortium, with some members complaining that it had overtaken the group's traditional focus on standardizing characters used for minority languages and transcribing historical records. Conversely, the Consortium recognises that public desire for emoji support has put pressure on vendors to improve their Unicode support, which is especially true for characters outside the Basic Multilingual Plane, thus leading to better support for Unicode's historic and minority scripts in deployed software.
Cultural influence
Oxford Dictionaries named the Face with Tears of Joy emoji (😂) its 2015 Word of the Year. Oxford noted that 2015 had seen a sizable increase in the use of the word "emoji" and recognized its impact on popular culture. Oxford Dictionaries President Caspar Grathwohl expressed that "traditional alphabet scripts have been struggling to meet the rapid-fire, visually focused demands of 21st Century communication. It's not surprising that a pictographic script like emoji has stepped in to fill those gaps—it's flexible, immediate, and infuses tone beautifully." SwiftKey found that "Face with Tears of Joy" was the most popular emoji across the world. The American Dialect Society declared the eggplant emoji (🍆) to be the "Most Notable Emoji" of 2015 in their Word of the Year vote.
Some emoji are specific to Japanese culture, such as a bowing businessman (🙇), the shoshinsha mark used to indicate a beginner driver (🔰), a white flower (💮) used to denote "brilliant homework", or a group of emoji representing popular foods: ramen noodles (🍜), dango (🍡), onigiri (🍙), Japanese curry (🍛), and sushi (🍣). Unicode Consortium founder Mark Davis compared the use of emoji to a developing language, particularly mentioning the American use of eggplant (🍆) to represent a phallus. Some linguists have classified emoji and emoticons as discourse markers.
In December 2015 a sentiment analysis of emoji was published, and the Emoji Sentiment Ranking 1.0 was provided. In 2016, a musical about emoji premiered in Los Angeles. The computer-animated The Emoji Movie was released in summer 2017.
In January 2017, in what is believed to be the first large-scale study of emoji usage, researchers at the University of Michigan analysed over 1.2 billion messages input via the Kika Emoji Keyboard and announced that the Face with Tears of Joy was the most popular emoji, with the Heart and Heart Eyes emoji second and third respectively. The study also found that the French use heart emoji the most. People in countries like Australia, France and the Czech Republic used more happy emoji, while people in Mexico, Colombia, Chile and Argentina used more negative emoji in comparison to cultural hubs known for restraint and self-discipline, like Turkey, France and Russia.
There has been discussion among legal experts on whether or not emoji could be admissible as evidence in court trials. Furthermore, as emoji continue to develop and grow as a "language" of symbols, there may also be the potential for emoji "dialects" to form. Emoji are used for more than just showing reactions and emotions: Snapchat has even incorporated emoji into its trophy and friends system, with each emoji carrying a complex meaning.
Emoji that further modern causes
On March 5, 2019, a drop of blood (🩸) emoji was released, which is intended to help break the stigma of menstruation. In addition to normalizing periods, it is also relevant to medical topics such as donating blood and other blood-related activities.
A mosquito (🦟) emoji was added in 2018 to raise awareness for diseases spread by the insect, such as dengue and malaria.
Emoji communication problems
Research has shown that emoji are often misunderstood. In some cases, this misunderstanding is related to how the actual emoji design is interpreted by the viewer; in other cases, the emoji that was sent is not shown in the same way on the receiving side.
The first issue relates to the cultural or contextual interpretation of the emoji. When the author picks an emoji, they think about it in a certain way, but the same character may not trigger the same thoughts in the mind of the receiver (see also Models of communication).
For example, people in China have developed a system for using emoji subversively, so that a smiley face could be sent to convey a despising, mocking, or even obnoxious attitude: the orbicularis oculi (the muscle near the upper eye corner) on the face of the emoji does not move, while the orbicularis oris (the one near the mouth) tightens, which is believed to be a sign of suppressing a smile.
The second problem relates to technology and branding. When an author of a message picks an emoji from a list, it is normally encoded in a non-graphical manner during the transmission, and if the author and the reader do not use the same software or operating system for their devices, the reader's device may visualize the same emoji in a different way. Small changes to a character's look may completely alter its perceived meaning with the receiver. As an example, in April 2020, British actress and presenter Jameela Jamil posted a tweet from her iPhone using the Face with Hand Over Mouth emoji (🤭) as part of a comment on people shopping for food during the COVID-19 pandemic. On Apple's iOS, the emoji expression is neutral and pensive, but on other platforms the emoji shows as a giggling face. Many fans were initially upset thinking that she, as a well off celebrity, was mocking poor people, but this was not her intended meaning.
Researchers from the German Studies Institute at Ruhr-Universität Bochum found that most people can easily understand an emoji when it replaces a word directly – like an icon for a rose instead of the word 'rose' – yet it takes them about 50 percent longer to comprehend the emoji than the word.
Variation and ambiguity
Emoji characters vary slightly between platforms within the limits in meaning defined by the Unicode specification, as companies have tried to provide artistic presentations of ideas and objects. For example, following an Apple tradition, the calendar emoji on Apple products always shows July 17, the date in 2002 on which Apple announced its iCal calendar application for macOS. This led some Apple product users to initially nickname July 17 "World Emoji Day". Other emoji fonts show different dates or do not show a specific one.
Some Apple emoji are very similar to the SoftBank standard, since SoftBank was the first Japanese network on which the iPhone launched. For example, one such emoji is female in the Apple and SoftBank designs but male or gender-neutral in others.
Journalists have noted that the ambiguity of emoji has allowed them to take on culture-specific meanings not present in the original glyphs. For example, the nail polish emoji (💅) has been described as being used in English-language communities to signify "non-caring fabulousness" and "anything from shutting haters down to a sense of accomplishment". Unicode manuals sometimes provide notes on auxiliary meanings of an object to guide designers on how emoji may be used, for example noting that some users may expect the seat emoji (💺) to stand for "a reserved or ticketed seat, as for an airplane, train, or theater".
Controversial emoji
Some emoji have been involved in controversy due to their perceived meanings. Multiple arrests and imprisonments have followed usage of pistol (🔫), knife (🔪), and bomb (💣) emoji in ways that authorities deemed credible threats.
In the lead-up to the 2016 Summer Olympics, the Unicode Consortium considered proposals to add several Olympic-related emoji, including medals and events such as handball and water polo. By October 2015, these candidate emoji included "rifle" (🥆) and "modern pentathlon" (🤻). However, in 2016, Apple and Microsoft opposed these two emoji, and the characters were added without emoji presentations, meaning that software is expected to render them in black-and-white rather than color, and emoji-specific software such as onscreen keyboards will generally not include them. In addition, while the original incarnations of the modern pentathlon emoji depicted its five events, including a man pointing a gun, the final glyph contains a person riding a horse, along with a laser pistol target in the corner.
On August 1, 2016, Apple announced that in iOS 10, the pistol emoji (🔫) would be changed from a realistic revolver to a water pistol. Conversely, the following day, Microsoft pushed out an update to Windows 10 that changed its longstanding depiction of the pistol emoji as a toy ray-gun to a real revolver. Microsoft stated that the change was made to bring the glyph more in line with industry-standard designs and customer expectations. By 2018, most major platforms such as Google, Microsoft, Samsung, Facebook, and Twitter had transitioned their rendering of the pistol emoji to match Apple's water gun implementation. Apple's change of depiction from a realistic gun to a toy gun was criticised by, among others, the editor of Emojipedia, because it could lead to messages appearing differently to the receiver than the sender had intended. Rob Price of Business Insider said it created the potential for "serious miscommunication across different platforms", asking "What if a joke sent from an Apple user to a Google user is misconstrued because of differences in rendering? Or if a genuine threat sent by a Google user to an Apple user goes unreported because it is taken as a joke?"
The eggplant (aubergine) emoji (🍆) has also seen controversy due to its use, almost solely in North America, to represent a penis. Beginning in December 2014, the hashtag #EggplantFridays rose to popularity on Instagram for marking photos featuring clothed or unclothed penises. This became such a popular trend that, beginning in April 2015, Instagram disabled the ability to search for not only that tag, but also other eggplant-containing hashtags, including simply #eggplant and #🍆.
The peach emoji (🍑) has likewise been used as a euphemistic icon for buttocks, with a 2016 Emojipedia analysis revealing that only seven percent of English-language tweets with the peach emoji refer to the actual fruit. In 2016, Apple attempted to redesign the emoji to resemble buttocks less. This was met with fierce backlash in beta testing, and Apple reversed its decision by the time it went live to the public.
In December 2017, a lawyer in Delhi, India, threatened to file a lawsuit against WhatsApp for allowing use of the middle finger emoji (🖕) on the basis that the company is "directly abetting the use of an offensive, lewd, obscene gesture" in violation of the Indian Penal Code.
Emoji implementation
Early implementation in Japan
Various, often incompatible, character encoding schemes were developed by the different mobile providers in Japan for their own emoji sets. For example, the extended Shift JIS representation F797 is used for a convenience store (🏪) by SoftBank, but for a wristwatch (⌚️) by KDDI. All three vendors also developed schemes for encoding their emoji in the Unicode Private Use Area: DoCoMo, for example, used the range U+E63E through U+E757. Versions of iOS prior to 5.1 encoded emoji in the SoftBank private use area.
Unicode support considerations
Most, but not all, emoji are included in the Supplementary Multilingual Plane (SMP) of Unicode, which is also used for ancient scripts, some modern scripts such as Adlam or Osage, and special-use characters such as Mathematical Alphanumeric Symbols. Some systems introduced prior to the advent of Unicode emoji were only designed to support characters in the Basic Multilingual Plane (BMP), on the assumption that non-BMP characters would rarely be encountered, although failure to properly handle characters outside of the BMP precludes Unicode compliance.
The introduction of Unicode emoji created an incentive for vendors to improve their support for non-BMP characters. The Unicode Consortium notes that "[b]ecause of the demand for emoji, many implementations have upgraded their Unicode support substantially", also helping support for minority languages that use those features.
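To make the plane split concrete, here is a minimal Python sketch (standard library only; the choice of U+1F600 is an arbitrary example). Any emoji above U+FFFF must be encoded as a surrogate pair in UTF-16, so software that assumes one 16-bit unit per character miscounts it.

```python
# Minimal sketch: an SMP emoji becomes a surrogate pair in UTF-16.
s = "\U0001F600"                # GRINNING FACE, U+1F600 (outside the BMP)
print(hex(ord(s)))              # 0x1f600
units = s.encode("utf-16-be")
print(len(units) // 2)          # 2 -> two 16-bit code units
print(units.hex())              # 'd83dde00' -> surrogates D83D, DE00
```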
Color support
Any operating system that supports adding additional fonts to the system can add an emoji-supporting font. However, inclusion of colorful emoji in existing font formats requires dedicated support for color glyphs. Not all operating systems have support for color fonts, so in these cases emoji might have to be rendered as black-and-white line art or not at all. There are four different formats used for multi-color glyphs in an SFNT font, not all of which are necessarily supported by a given operating system library or software package such as a web browser or graphical program. This means that color fonts may need to be supplied in several formats to be usable on multiple operating systems, or in multiple applications.
Implementation by different platforms and vendors
Apple first introduced emoji to its desktop operating system with the release of OS X 10.7 Lion in 2011. Users can view emoji characters sent through email and messaging applications, which are commonly shared by mobile users, as well as in any other application. Users can insert emoji symbols using the "Characters" special input panel from almost any native application by selecting the "Edit" menu and pulling down to "Special Characters", or by a keyboard shortcut. The emoji keyboard was first available in Japan with the release of iPhone OS version 2.2 in 2008, and was not officially made available outside of Japan until iOS version 5.0. From iPhone OS 2.2 through iOS 4.3.5 (2011), those outside Japan could access the keyboard but had to use a third-party app to enable it. Apple has revealed that the "face with tears of joy" is the most popular emoji among English-speaking Americans, with the "heart" emoji in second place, followed by the "loudly crying face".
An update for Windows 7 and Windows Server 2008 R2 brought a subset of the monochrome Unicode set to those operating systems as part of the Segoe UI Symbol font. As of Windows 8.1 Preview, the Segoe UI Emoji font is included, which supplies full-color pictographs. The plain Segoe UI font lacks emoji characters, whereas Segoe UI Symbol and Segoe UI Emoji include them. Emoji characters are accessed through the onscreen keyboard's emoji key, or through a physical keyboard shortcut.
Facebook and Twitter replace all Unicode emoji used on their websites with their own custom graphics. Prior to October 2017, Facebook had different sets for the main site and for its Messenger service, where only the former provides complete coverage. Messenger now uses Apple emoji on iOS, and the main Facebook set elsewhere. Facebook reactions are only partially compatible with standard emoji.
Modifiers
Emoji versus text presentation
Unicode defines variation sequences for many of its emoji to indicate their desired presentation.
Specifying the desired presentation is done by following the base emoji with either U+FE0E VARIATION SELECTOR-15 (VS15) for text or U+FE0F VARIATION SELECTOR-16 (VS16) for emoji-style.
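A minimal Python sketch of this mechanism (the heart character is an arbitrary example; which glyph actually appears depends on the renderer's font support):

```python
# U+2764 HEAVY BLACK HEART defaults to text presentation in Unicode's data;
# VS16 requests the colorful emoji glyph, VS15 forces the monochrome text glyph.
BASE = "\u2764"
emoji_style = BASE + "\uFE0F"   # VARIATION SELECTOR-16 -> emoji presentation
text_style = BASE + "\uFE0E"    # VARIATION SELECTOR-15 -> text presentation
print(emoji_style, text_style)
```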
Skin color
Five symbol modifier characters were added with Unicode 8.0 to provide a range of skin tones for human emoji. These modifiers are called EMOJI MODIFIER FITZPATRICK TYPE-1-2, TYPE-3, TYPE-4, TYPE-5, and TYPE-6 (U+1F3FB–U+1F3FF): 🏻 🏼 🏽 🏾 🏿. They are based on the Fitzpatrick scale for classifying human skin color. Human emoji that are not followed by one of these five modifiers should be displayed in a generic, non-realistic skin tone, such as bright yellow (■), blue (■), or gray (■). Non-human emoji are unaffected by the Fitzpatrick modifiers.
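A short Python sketch of the mechanism (thumbs up is an arbitrary example of a human emoji; actual rendering depends on font support):

```python
# Appending one Fitzpatrick modifier (U+1F3FB..U+1F3FF) to a human emoji
# yields a toned variant; with no modifier the generic tone is shown.
THUMBS_UP = "\U0001F44D"
print(THUMBS_UP)                           # generic (e.g. yellow) tone
for modifier in range(0x1F3FB, 0x1F400):   # TYPE-1-2 .. TYPE-6
    print(THUMBS_UP + chr(modifier))
```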
As of Unicode 14.0, Fitzpatrick modifiers can be used with 129 human emoji spread across seven blocks: Dingbats, Emoticons, Miscellaneous Symbols, Miscellaneous Symbols and Pictographs, Supplemental Symbols and Pictographs, Transport and Map Symbols, and Symbols and Pictographs Extended-A.
Joining
Implementations may use a zero-width joiner (ZWJ) between multiple emoji to make them behave like a single, unique emoji character. For example, the sequence U+1F468 (👨), U+200D (ZWJ), U+1F469 (👩), U+200D (ZWJ), U+1F467 (👧) could be displayed as a single emoji depicting a family with a man, a woman, and a girl if the implementation supports it. Systems that do not support it would ignore the ZWJs, displaying only the three base emoji in order (👨👩👧).
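The same sequence sketched in Python (note that the underlying string still contains five code points even when it renders as one glyph):

```python
# MAN + ZWJ + WOMAN + ZWJ + GIRL: one family glyph where supported,
# three separate emoji on renderers that ignore the zero-width joiners.
ZWJ = "\u200D"
family = "\U0001F468" + ZWJ + "\U0001F469" + ZWJ + "\U0001F467"
print(family)
print(len(family))   # 5 code points
```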
Unicode previously maintained a catalog of emoji ZWJ sequences that were supported on at least one commonly available platform. The consortium has since switched to documenting sequences that are recommended for general interchange (RGI). These are clusters that emoji fonts are expected to include as part of the standard.
Unicode blocks
Unicode 14.0 represents emoji using 1,404 characters spread across 24 blocks, of which 26 are Regional Indicator Symbols that combine in pairs to form flag emoji, and 12 (#, * and 0–9) are base characters for keycap emoji sequences (a sketch of both combining mechanisms follows the block summary below):
637 of the 768 code points in the Miscellaneous Symbols and Pictographs block are considered emoji.
242 of the 256 code points in the Supplemental Symbols and Pictographs block are considered emoji.
All of the 88 code points in the Symbols and Pictographs Extended-A block are considered emoji.
All of the 80 code points in the Emoticons block are considered emoji.
104 of the 117 code points in the Transport and Map Symbols block are considered emoji.
83 of the 256 code points in the Miscellaneous Symbols block are considered emoji.
33 of the 192 code points in the Dingbats block are considered emoji.
Additional emoji can be found in the following Unicode blocks: Arrows (8 code points considered emoji), Basic Latin (12), CJK Symbols and Punctuation (2), Enclosed Alphanumeric Supplement (41), Enclosed Alphanumerics (1), Enclosed CJK Letters and Months (2), Enclosed Ideographic Supplement (15), General Punctuation (2), Geometric Shapes (8), Geometric Shapes Extended (13), Latin-1 Supplement (2), Letterlike Symbols (2), Mahjong Tiles (1), Miscellaneous Symbols and Arrows (7), Miscellaneous Technical (18), Playing Cards (1), and Supplemental Arrows-B (2).
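A Python sketch of the two combining mechanisms mentioned above, flag pairs and keycap sequences (the country codes chosen are arbitrary examples):

```python
# A flag is two Regional Indicator Symbols (U+1F1E6..U+1F1FF), one per letter
# of an ISO 3166-1 alpha-2 code; a keycap is base + U+FE0F + U+20E3.
def flag(alpha2):
    return "".join(chr(0x1F1E6 + ord(c) - ord("A")) for c in alpha2.upper())

print(flag("JP"), flag("SE"))   # regional-indicator pairs
print("1\uFE0F\u20E3")          # COMBINING ENCLOSING KEYCAP sequence
```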
Additions
Some vendors, most notably Microsoft, Samsung and HTC, add emoji presentation to some other existing Unicode characters or coin their own ZWJ sequences.
Microsoft displays all Mahjong tiles (U+1F000‥2B, not just 🀄) and alternative card suits (♤, ♡, ♢, ♧) as emoji. They also support additional pencils (✎, ✐) and a heart-shaped bullet (❥).
While only ☝ is officially an emoji, Microsoft and Samsung add the other three pointing directions as well (☜, ☞, ☟).
Both vendors pair the standard checked ballot box emoji ☑ with its crossed variant ☒, but only Samsung also has the empty ballot box ☐.
Samsung almost completely covers the rest of the Miscellaneous Symbols block (U+2600‥FF) as emoji, which includes chess pieces, game die faces, and some traffic signs, as well as genealogical and astronomical symbols, for instance.
HTC supports most additional pictographs from the Miscellaneous Symbols and Pictographs (U+1F300‥5FF) and Transport and Map Symbols (U+1F680‥FF) blocks. Some of them are also shown as emoji on Samsung devices.
The open source projects Emojidex and Emojitwo are trying to cover all of these extensions established by major vendors.
In popular culture
The 2009 film Moon featured a robot named GERTY who communicates using a neutral-toned synthesized voice together with a screen showing emoji representing the corresponding emotional content.
In 2014, the Library of Congress acquired an emoji version of Herman Melville's Moby Dick created by Fred Benenson.
A musical called Emojiland premiered at Rockwell Table & Stage in Los Angeles in May 2016, after selected songs were presented at the same venue in 2015.
In October 2016, the Museum of Modern Art acquired the original collection of emoji distributed by NTT Docomo in 1999.
In November 2016, the first emoji-themed convention, Emojicon, was held in San Francisco.
In March 2017, the first episode of the fifth season of Samurai Jack featured alien characters who communicate in emoji.
In April 2017, the Doctor Who episode "Smile" featured nanobots called Vardy, which communicate through robotic avatars that use emoji (without any accompanying speech output) and are sometimes referred to by the time travelers as "Emojibots".
On July 28, 2017, Sony Pictures Animation released The Emoji Movie, a 3D computer animated movie featuring the voices of Patrick Stewart, Christina Aguilera, Sofía Vergara, Anna Faris, T. J. Miller, and other notable actors and comedians.
On September 3, 2021, the musician Drake released his sixth studio album, Certified Lover Boy, with album cover art featuring twelve emoji of pregnant women in varying clothing colors, hair colors and skin tones.
See also
Pictograph
Emojipedia
iConji
Kaomoji
Emojli
Hieroglyphics
Blob emoji
Notes
References
Further reading
External links
Unicode Technical Report #51: Unicode emoji
The Unicode FAQ – Emoji & Dingbats
Emoji Symbols – the original proposals for encoding of Emoji symbols as Unicode characters
Background data for Unicode proposal
emojitracker – list of most popularly used emoji on the Twitter platform; updated in real-time
Computer-related introductions in 1997
Computer icons
Internet culture
Internet slang
Japanese inventions
Japanese writing system terms
Japanese writing system
Online chat
Pictograms
DOS/V
DOS/V is a Japanese computing initiative started in 1990 to allow DOS on IBM PC compatibles with VGA cards to handle double-byte (DBCS) Japanese text via software alone. It was initially developed from PC DOS by IBM for its PS/55 machines (a localized version of the PS/2), but IBM gave the source code of the drivers to Microsoft, so Microsoft licensed a DOS/V-compatible version of MS-DOS to other companies. Kanji fonts and other locale information are stored on the hard disk rather than on special chips as in the preceding AX architecture. As with AX, its great value for the Japanese computing industry was in allowing compatibility with foreign software. This had not been possible under NEC's proprietary PC-98 system, which was the market leader before DOS/V emerged. DOS/V stands for "Disk Operating System/VGA" (not "version 5"; DOS/V came out at approximately the same time as DOS 5). In Japan, IBM-compatible PCs became popular along with DOS/V, so they are often referred to as "DOS/V machines" or "DOS/V pasocom" even though DOS/V operating systems are no longer common.
The promotion of DOS/V was done by IBM and its consortium called PC Open Architecture Developers' Group (OADG).
Digital Research released a Japanese DOS/V-compatible version of DR DOS 6.0 in 1992.
History
In the early 1980s, IBM Japan developed two x86-based personal computer lines for the Asia-Pacific region, the IBM 5550 and the IBM JX. The 5550 reads Kanji fonts from the disk and draws text as graphic characters on a 1024×768 high-resolution monitor. The JX extends the IBM PCjr and IBM PC architecture, and supports English and Japanese versions of PC DOS with a 720×512 resolution monitor. Neither machine could break NEC's dominant PC-98 in the Japanese consumer market. Because the 5550 was expensive, it was mostly sold to large enterprises that used IBM's mainframes. The JX used the 8088 processor instead of the faster 8086 because IBM thought a consumer-class JX must not surpass the business-class 5550; this damaged its reputation among buyers, whatever its actual speed. Moreover, a software company said IBM was uncooperative in the development of JX software. IBM Japan planned a 100% PC/AT-compatible machine codenamed "JX2", but cancelled it in 1986.
Masahiko Hatori was a developer of the JX's DOS. Through the development of the JX, he learned the skills needed to localize an English computer into Japanese. In 1987, he started developing DOS/V in his spare time at the IBM Yamato Development Laboratory. He thought the 480-line mode of VGA and a processor as fast as the 80386 would realize his idea, but these were expensive hardware as of 1987. In this era, Toshiba released the J-3100 laptop computer, and Microsoft introduced the AX architecture. IBM Japan didn't join the AX consortium. His boss, Tsutomu Maruyama, thought IBM's headquarters wouldn't allow adoption of AX because they requested that IBM Japan use the same standard as worldwide IBM offices. In October 1987, IBM Japan released the PS/55 Model 5535, a proprietary laptop using a special version of DOS. It was more expensive than the J-3100 because its LCD display used a non-standard 720×512 resolution. Hatori thought IBM needed to shift its own proprietary PC to IBM PC compatibles. Maruyama and Nobuo Mii thought Japan's closed PC market needed to change, and that this couldn't be done by IBM alone. In the summer of 1989, they decided to carry out the development of DOS/V, disclose the architecture of the PS/55, and found the PC Open Architecture Developers' Group (OADG).
The DOS/V development team designed the DOS/V to be simple for better scalability and compatibility with original PC DOS. They had difficulty reducing text drawing time. "A stopwatch was a necessity for DOS/V development", Hatori said.
IBM Japan announced the first version of DOS/V, IBM DOS J4.0/V, on 11 October 1990, and shipped it in November 1990. At the same time, IBM Japan released the PS/55 Model 5535-S, a laptop computer with VGA resolution. The announcement letter stated DOS/V was designed for low-end desktops and laptops of the PS/55, but users reported on BBSes that they could run DOS/V on IBM PC clones. The development team unofficially confirmed these reports and fixed incompatibilities in DOS/V. This work was kept secret inside the company because it would cut into PS/55 sales and meet with opposition.
Maruyama and Mii had to convince IBM's branches to agree with the plan. In the beginning of December 1990, Maruyama went to IBM's Management Committee, and presented his plan "The low-end PC strategy in Japan". At the committee, a topic usually took 15 minutes, but his topic took an hour. The plan was finally approved by John Akers.
After the committee, Susumu Furukawa, the president of Microsoft Japan, arranged an agreement with IBM Japan to share the source code of DOS/V. On 20 December 1990, IBM Japan announced that it had founded the OADG and that Microsoft would supply DOS/V to other PC manufacturers. From 1992 to 1994, many Japanese manufacturers began selling IBM PC clones with DOS/V. Some global PC manufacturers entered the Japanese market: Compaq in 1992 and Dell in 1993. Fujitsu released IBM PC clones (the FMV series) in October 1993, and about 200,000 units were shipped in 1994.
The initial goal of DOS/V was to enable Japanese software to run on laptop computers based on IBM's global standards rather than the domestic computer architecture. As of 1989, VGA was not common, but IBM expected that LCD panels with VGA resolution would become affordable within a few years. DOS/V lacked a software library, so IBM Japan asked third-party companies to port their software to it. Since the PS/55 Model 5535-S was released as a laptop terminal for the corporate sector, only a few major business applications had to be supplied for DOS/V.
In March 1991, IBM Japan released the PS/55note Model 5523-S, a lower-priced laptop computer. It was a strategically important product for popularizing DOS/V in the consumer market, and led to the success of subsequent consumer products such as the ThinkPad. However, DOS/V itself sold much better than the 5523-S because advanced users purchased it to build a Japanese language environment on their IBM compatible PCs.
In 1992, IBM Japan released the PS/V (similar to the PS/ValuePoint) and the ThinkPad. They were based on an architecture closer to PC compatibles, and were intended to compete with rivals in the consumer market. As of December 1992, the PS/V was the best-selling DOS/V computer. In January 1993, NEC released a new generation of the PC-98 to take back the initiative, advertising that the scrolling speed of the word processor Ichitaro on the PC-9801BX was faster than on the PS/V 2405-W. Yuzuru Takemura of IBM Japan said, "Let us suppose the movement towards Windows is inevitable. Processors and graphics cards will become faster and faster. If the PC-98 holds its architecture, it will never beat our machine at speed. Windows is developed for the PC/AT architecture. Kanji glyphs are also supplied as a software font. The only thing IBM has to do is tune it up for the video card. On a different architecture, it will be hard to tune up Windows." In 1993, Microsoft Japan released the first retail versions of Windows (Windows 3.1) for both DOS/V and the PC-98. DOS/V contributed to the dawn of IBM PC clones in Japan, yet the PC-98 kept 50% of the market share until 1996; the balance was finally overturned by the release of Windows 95.
Drivers
Three device drivers enable DBCS code page support in DOS on IBM PC compatibles with VGA: the font driver, the display driver and the input assist subsystem driver. The font driver loads a complete set of glyphs from a font file into extended memory. The display driver sets the 640×480 graphics mode on the VGA and allocates about 20 KB of conventional memory for text, called the simulated video buffer. A DOS/V program writes character codes to the simulated video buffer through DOS output functions, or writes them directly and calls a driver function to refresh the screen. The display driver then copies the font bitmap data from extended memory to the actual video memory, corresponding to the simulated video buffer. The input assist subsystem driver communicates with optional input methods and enables text editing in the on-the-spot or below-the-spot styles. Without these drivers installed, DOS/V is equivalent to generic MS-DOS without DBCS code page support. A conceptual sketch of this display path follows the driver list below.
$FONT.SYS – Font driver
$DISP.SYS – Display driver
$IAS.SYS – Input assist subsystem (IAS) with front end processor (FEP) support driver
$PRN.SYS – Printer driver
$PRNUSER.SYS – Printer driver
$PRNESCP.SYS – Printer driver for Epson ESC/P J84
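The refresh step described above can be modeled in a few lines of Python. This is a purely conceptual sketch, not actual driver code: the cell geometry, function name, and data structures are illustrative assumptions, not DOS/V internals.

```python
# Conceptual model: a $DISP.SYS-style refresh copies, for every character cell
# in the simulated text buffer, the glyph bitmap that the font driver loaded
# into memory onto the 640x480 graphics frame buffer.
COLS, ROWS = 80, 25          # assumed text geometry
CELL_W, CELL_H = 8, 19       # assumed cell size in pixels (illustrative only)

def refresh(sim_buffer, font, frame_buffer):
    """Blit one glyph bitmap per cell of the simulated video buffer."""
    for row in range(ROWS):
        for col in range(COLS):
            code = sim_buffer[row * COLS + col]   # character code in this cell
            glyph = font[code]                    # CELL_H ints, one per scanline
            for y, bits in enumerate(glyph):
                for x in range(CELL_W):
                    pixel = (bits >> (CELL_W - 1 - x)) & 1
                    frame_buffer[row * CELL_H + y][col * CELL_W + x] = pixel
```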
Versions
In 1988, IBM Japan released a new model of the PS/55 which was based on the PS/2 with Japanese language support. It is equipped with a proprietary video card, the Display Adapter, which has a high resolution text mode and a Japanese character set stored in a ROM on the card. It supports Japanese DOS K3.3, PC DOS 3.3 (English) and OS/2.
IBM DOS J4.0 was released in 1989. It combines Japanese DOS and PC DOS, which runs Japanese DOS as the Japanese mode (PS/55 mode) and PC DOS as the English mode (PS/2 mode). Although it had two separated modes that needed a reboot to switch between them, IBM Japan called it bilingual. This version requires the PS/55 display adapter.
The first version of DOS/V, IBM DOS J4.0/V (J4.05/V), was released at the end of 1990. The term 'DOS/V' quickly became known in the Japanese computer industry, but DOS/V itself did not spread quickly. As of 1991, some small companies sold American or Taiwanese computers in Japan, but DOS J4.0/V caused some issues on PC compatibles. Its EMS driver only supports IBM's Expanded Memory Adapter. The input method supports neither the US keyboard nor the Japanese AX keyboard, so it locates some keys in the wrong place. PS/55 keyboards were available from IBM, but they had to be used with an AT-to-PS/2 adapter because AX machines (thus PC/AT clones) generally have the older 5-pin DIN connector. Scrolling text with the common Tseng Labs ET4000 graphics controller made the screen unreadable; this was fixed by the new /HS=LC switch of $DISP.SYS in DOS J4.07/V. "Some VGA clones did not correctly implement the CRTC address wraparound. Most likely those were Super VGAs with more video memory than the original VGA (i.e. more than 256 KB). Software relying on the address wraparound was very rare and therefore the functionality was not necessarily correctly implemented in hardware. On the other hand, the split screen technique was relatively well documented and well understood, and commercial software (especially games) sometimes used it. It was therefore likely to be tested and properly implemented in hardware."
IBM Japan released DOS J5.0/V in October 1991, and DOS J5.0 in December 1991. DOS J5.0 combines Japanese DOS and DOS/V. This is the last version developed for the PS/55 display adapter. DOS J5.02/V was released in March 1992. It added official support for the IBM PS/2 and the US English layout keyboard.
The development of MS-DOS 5.0/V was delayed because IBM and Microsoft disputed how to implement the API for input methods. It took a few months to reach an agreement that the OEM adaptation kit (OAK) of MS-DOS 5.0/V would provide both IAS (Input Assist Subsystem) and MKKC (Microsoft Kana-Kanji Conversion). Microsoft planned to add AX application support to DOS/V, but cancelled it because its beta release was strongly criticized by users for lacking compatibility. Some PC manufacturers couldn't wait for Microsoft's DOS/V. Toshiba developed a DOS/V emulator that could run DOS/V applications on a VGA-equipped J-3100 computer. AST Research Japan and Sharp decided to bundle IBM DOS J5.0/V. Compaq developed its own DOS/V drivers, and released its first DOS/V computers in April 1992.
On 10 December 1993, Microsoft Japan and IBM Japan released new versions of DOS/V, MS-DOS 6.2/V Upgrade and PC DOS J6.1/V. Although both were released at the same time, they were separately developed. MS-DOS 6.2/V Upgrade is the only Japanese version of MS-DOS released by Microsoft under its own brand for retail sales. Microsoft Japan continued selling it after Microsoft released MS-DOS 6.22 to resolve patent infringement of DoubleSpace disk compression.
IBM Japan ended support for PC DOS 2000 on 31 January 2001, and Microsoft Japan ended support for MS-DOS on 31 December 2001.
Japanese versions of Windows 2000 and XP include a DOS/V environment in NTVDM; it was removed in Windows Vista.
PC DOS versions
PC DOS versions of DOS/V (J for Japanese, P for Chinese (PRC), T for Taiwanese, H for Korean (Hangul)):
IBM DOS J4.0/V "5605-PNA" (versions 4.00–4.04 were not released for DOS/V)
IBM DOS J4.05/V for PS/55 (announced 1990-10-11, shipped 1990-11-05)
IBM DOS J4.06/V (1991-04)
IBM DOS J4.07/V (1991-07)
IBM DOS J5.0/V "5605-PJA" (1991-10), IBM DOS T5.0/V, IBM DOS H5.0/V
IBM DOS J5.02/V for PS/55 (1992-03)
IBM DOS J5.02A/V
IBM DOS J5.02B/V
IBM DOS J5.02C/V
IBM DOS J5.02D/V (1993-05)
Sony OADG DOS/V (includes IBM DOS J5.0/V and drivers for AX machines)
PC DOS J6.1/V "5605-PTA" (1993-12), PC DOS P6.1/V, PC DOS T6.10/V
PC DOS J6.10A/V (1994-03)
PC DOS J6.3/V "5605-PDA" (1994-05)
PC DOS J6.30A/V
PC DOS J6.30B/V
PC DOS J6.30C/V (1995-06)
PC DOS J7.0/V "5605-PPW" (1995-08), PC DOS P7/V, PC DOS T7/V, PC DOS H7/V
PC DOS J7.00A/V
PC DOS J7.00B/V
PC DOS J7.00C/V (1998-07)
PC DOS 2000 Japanese Edition "04L5610" (1998-07)
MS-DOS versions
MS-DOS versions of DOS/V:
Toshiba Nichi-Ei (日英; Japanese-English) MS-DOS 5.0
Compaq MS-DOS 5.0J/V (1992-04)
MS-DOS 5.0/V (OEM, generic MS-DOS 5.0/V)
MS-DOS 6.0/V
MS-DOS 6.2/V (Retail, 1993-12)
MS-DOS 6.22/V (1994-08)
Fujitsu Towns OS for FM Towns (DOS/V compatibility was added only in late releases)
DR DOS versions
DR DOS versions of DOS/V:
DR DOS 6.0/V (Japanese) (1992-07), DR DOS 6.0/V (Korean)
ViewMAX 2 (Japanese) (1991–1992)
NetWare Lite 1.1J (Japanese) (1992–1997)
Novell DOS 7 (Japanese)?
Personal NetWare J 1.0 (Japanese) (1994–1995)
(DR-DOS 7.0x/V) (2001–2006) (an attempt to build a DR-DOS/V from existing components)
Extensions
IBM DOS/V Extension extends the DOS/V drivers to set up a variety of text modes on certain video adapters. The High-quality Text Mode is the default: 80 columns by 25 rows with large 12×24-pixel characters. The High-density Text Mode (Variable Text; V-Text) offers larger text modes with various font sizes. DOS/V Extension V1.0 included drivers for VGA, XGA, the PS/55 Display Adapter, SVGA (800×600) and ET4000 (1024×768). Some of its drivers were included in PC DOS J6.1/V and later.
IBM DOS/V Extension V1.0 (1993-01) includes V-Text support
IBM DOS/V Extension V2.0 "5605-PXB"
See also
Unicode
List of DOS commands
Kanji CP/M-86 (1984)
(A Japanese magazine on IBM clones)
Notes
References
Further reading
DOS on IBM PC compatibles
1990 software
|
51897712
|
https://en.wikipedia.org/wiki/Paddy%20McGuinness%20%28civil%20servant%29
|
Paddy McGuinness (civil servant)
|
Patrick "Paddy" Joseph McGuinness (born 27 April 1963) is a former senior British civil servant who now advises businesses and governments globally on their resilience, Crisis, Technology, Data and Cyber issues. He was the Deputy National Security Adviser for Intelligence, Security, and Resilience in the Cabinet Office, from 2014 to January 2018.
Early life
Born in Oxford to Professors Rosamond McGuinness and Brian McGuinness, McGuinness went to Ampleforth College and then to Balliol College, Oxford, where he took a BA in modern history. He has a sister, Catherine McGuinness, who chairs the Policy and Resources Committee of the City of London Corporation.
Career in Government service
McGuinness joined the Foreign and Commonwealth Office in 1985. His first overseas posting was as Second Secretary in Sana'a from 1988 to 1991. He then served as First Secretary in Abu Dhabi from 1994 to 1996, as Counsellor in Cairo, Egypt from 1996 to 1999, and in Rome from 2003 to 2006.
McGuinness was appointed Deputy National Security Adviser for Intelligence, Security, and Resilience in 2014, taking over from Oliver Robbins; he was succeeded by Richard Moore. He advised first David Cameron and then Theresa May, and reported to the National Security Adviser, who is Secretary to the National Security Council, alongside the other Deputy National Security Adviser, for Foreign and Defence Policy.
National Cyber Security Programme
As DNSA, McGuinness was the Senior Responsible Officer for the UK's two five-year National Cyber Security Programmes, overseeing the development of and response to the 2016 National Cyber Security Strategy and, through that, the launch of the National Cyber Security Centre.
The Cloud Act
McGuinness was the UK's principal public advocate for the Cloud Act. On 24 May 2017 he became the first serving British official to testify to a Congressional committee when he joined Richard J. Downing of the US Department of Justice before the US Senate Judiciary Committee's subcommittee on Crime and Terrorism, advocating lawful access to data to counter serious organised crime through the Cloud Act, and submitted written evidence to the subcommittee. On 15 June 2017 he appeared before the Judiciary Committee of the House of Representatives. He also published a number of articles in US newspapers and online.
D Notice Committee
McGuinness represented the Cabinet Office on the Defence and Security Media Advisory Committee, formerly known as the D Notice Committee.
Undercover Policing Inquiry
In January 2016 McGuinness provided written testimony to the Undercover Policing Inquiry on the importance of the “Neither Confirm Nor Deny” principle for National Security.
Career since 2018
McGuinness is an adviser at Brunswick Group, advising on crisis and resilience issues and providing senior counsel to clients on business and political risk.
McGuinness is a co-founder of Oxford Digital Health, an Oxford University spin-out providing software as a service to transform healthcare.
McGuinness is a member of the Advisory Board at Glasswall Solutions.
McGuinness was a member of the Oxford Technology and Elections Commission (OxTEC), which reported in October 2019 with a series of recommendations aimed at securing the information infrastructure of elections and creating a trusted environment for the democratic use of technology.
In January 2018, the Sunday Times reported that McGuinness was to advise the State of Qatar on security for the soccer World Cup.
McGuinness was a Special Adviser to the UK Parliament's Joint Committee on the National Security Strategy.
Charitable work
McGuinness is the Chair of Trustees at St Joseph’s Hospice in Hackney, London.
Awards
McGuinness was appointed an Officer of the Order of the British Empire (OBE) in 1997, and a Companion of the Order of St Michael and St George (CMG) in 2014.
References
Living people
British civil servants
1963 births
Companions of the Order of St Michael and St George
Officers of the Order of the British Empire
People educated at Ampleforth College
Alumni of Balliol College, Oxford
Secret Intelligence Service personnel
|
464793
|
https://en.wikipedia.org/wiki/Dead%20code%20elimination
|
Dead code elimination
|
In compiler theory, dead code elimination (also known as DCE, dead code removal, dead code stripping, or dead code strip) is a compiler optimization to remove code which does not affect the program results. Removing such code has several benefits: it shrinks program size, an important consideration in some contexts, and it allows the running program to avoid executing irrelevant operations, which reduces its running time. It can also enable further optimizations by simplifying program structure.
Dead code includes code that can never be executed (unreachable code) and code that only affects dead variables (written to, but never read again), that is, code that is irrelevant to the program.
Examples
Consider the following example written in C.
int foo(void)
{
    int a = 24;
    int b = 25;  /* Assignment to dead variable */
    int c;
    c = a * 4;
    return c;
    b = 24;      /* Unreachable code */
    return 0;
}
Simple analysis of the uses of values would show that the value of b after the first assignment is not used inside foo. Furthermore, b is declared as a local variable inside foo, so its value cannot be used outside foo. Thus, the variable b is dead and an optimizer can reclaim its storage space and eliminate its initialization.
Furthermore, because the first return statement is executed unconditionally, no feasible execution path reaches the second assignment to b. Thus, the assignment is unreachable and can be removed.
If the procedure had a more complex control flow, such as a label after the return statement and a goto elsewhere in the procedure, then a feasible execution path might exist to the assignment to b.
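For example, a hypothetical variant of the function (illustrative only; the name and the flag parameter are not part of the original example) shows how a goto can make the otherwise unreachable assignment reachable:

int foo_with_goto(int flag)
{
    int a = 24;
    int b = 25;
    int c;

    c = a * 4;
    if (flag)
        goto set_b;  /* creates a feasible path past the first return */
    return c;
set_b:
    b = 24;          /* now reachable whenever flag is non-zero */
    return b;
}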
Returning to the original foo: even though some calculations are performed in the function, their values are not stored in locations accessible outside the scope of this function. Furthermore, given that the function returns a constant value (96), it may be simplified to the value it returns (this simplification is called constant folding).
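As a sketch of the combined effect (the exact result depends on the compiler and optimization level), dead code elimination plus constant folding could reduce foo to:

int foo(void)
{
    /* b, the unreachable code, and the intermediate computation
       through a and c have all been removed; 24 * 4 has been
       folded to 96 at compile time. */
    return 96;
}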
Most advanced compilers have options to activate dead code elimination, sometimes at varying levels. A lower level might only remove instructions that cannot be executed. A higher level might also not reserve space for unused variables. A yet higher level might determine instructions or functions that serve no purpose and eliminate them.
A common use of dead code elimination is as an alternative to optional code inclusion via a preprocessor. Consider the following code.
#include <stdio.h>

int main(void) {
    int a = 5;
    int b = 6;
    int c;

    c = a * (b / 2);

    if (0) {  /* DEBUG */
        printf("%d\n", c);
    }

    return c;
}
Because the expression 0 will always evaluate to false, the code inside the if statement can never be executed, and dead code elimination would remove it entirely from the optimized program. This technique is common in debugging to optionally activate blocks of code; using an optimizer with dead code elimination eliminates the need for using a preprocessor to perform the same task.
In practice, much of the dead code that an optimizer finds is created by other transformations in the optimizer. For example, the classic techniques for operator strength reduction insert new computations into the code and render the older, more expensive computations dead. Subsequent dead code elimination removes those calculations and completes the effect (without complicating the strength-reduction algorithm).
Historically, dead code elimination was performed using information derived from data-flow analysis. An algorithm based on static single assignment form (SSA) appears in the original journal article on SSA form by Ron Cytron et al. Robert Shillingsburg (aka Shillner) improved on the algorithm and developed a companion algorithm for removing useless control-flow operations.
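The core of such an approach is a mark-and-sweep pass: instructions with visible side effects (stores, returns, calls) are marked as critical, marking propagates backwards through the definitions their operands depend on, and unmarked instructions are swept away. The following self-contained C sketch illustrates the idea on a toy instruction array; the Inst structure and all names are illustrative inventions for this example, not taken from any real compiler:

#include <stdbool.h>
#include <stdio.h>

#define MAX_OPS 2

/* A toy SSA-like instruction: each operand holds the index of the
   instruction that defines the value it uses (-1 = unused slot). */
typedef struct {
    const char *text;       /* human-readable form, for printing */
    int operands[MAX_OPS];  /* indices of defining instructions  */
    bool critical;          /* has a visible side effect         */
    bool live;              /* set by the marking pass           */
} Inst;

/* Mark pass: seed the worklist with critical instructions, then
   propagate liveness to the definitions they depend on. */
static void mark(Inst *code, int n)
{
    int worklist[64];
    int top = 0;
    for (int i = 0; i < n; i++) {
        if (code[i].critical) {
            code[i].live = true;
            worklist[top++] = i;
        }
    }
    while (top > 0) {
        Inst *inst = &code[worklist[--top]];
        for (int k = 0; k < MAX_OPS; k++) {
            int def = inst->operands[k];
            if (def >= 0 && !code[def].live) {
                code[def].live = true;
                worklist[top++] = def;
            }
        }
    }
}

int main(void)
{
    /* The earlier example: b is dead; everything feeding the
       return survives. */
    Inst code[] = {
        { "a = 24",    { -1, -1 }, false, false },  /* 0           */
        { "b = 25",    { -1, -1 }, false, false },  /* 1: dead     */
        { "c = a * 4", {  0, -1 }, false, false },  /* 2           */
        { "return c",  {  2, -1 }, true,  false },  /* 3: critical */
    };
    int n = (int)(sizeof code / sizeof code[0]);

    mark(code, n);

    /* Sweep pass: keep only live instructions. */
    for (int i = 0; i < n; i++) {
        if (code[i].live) {
            printf("%s\n", code[i].text);
        }
    }
    return 0;
}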
Dynamic dead code elimination
Dead code is normally considered dead unconditionally, so it is reasonable to attempt to remove it through dead code elimination at compile time.
However, in practice it is also common for code sections to represent dead or unreachable code only under certain conditions, which may not be known at compile or assembly time. Such conditions may be imposed by different runtime environments (for example, different versions of an operating system, or different sets and combinations of drivers or services loaded in a particular target environment), which may require different sets of special cases in the code that become conditionally dead code for the other cases. Also, the software (for example, a driver or resident service) may be configurable to include or exclude certain features depending on user preferences, rendering unused code portions useless in a particular scenario. While modular software may be developed to load libraries dynamically on demand, in most cases it is not possible to load only the relevant routines from a particular library, and even where this is supported, a routine may still include code sections which can be considered dead in a given scenario but could not be ruled out at compile time.
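As a minimal illustration (the feature probe and all names are hypothetical), the branch below is dead on any system where the runtime check fails, yet an ahead-of-time compiler cannot remove it:

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical runtime probe: a real driver might query the OS
   version or detect installed hardware instead. */
static bool has_feature_x(void)
{
    return getenv("FEATURE_X") != NULL;  /* known only at run time */
}

void handle_request(void)
{
    if (has_feature_x()) {
        /* Conditionally dead: on targets without feature X this
           branch never runs, but the compiler cannot prove that,
           so only dynamic dead code elimination could drop it. */
        puts("fast path using feature X");
    } else {
        puts("portable fallback path");
    }
}

int main(void)
{
    handle_request();
    return 0;
}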
The techniques used to dynamically detect demand, identify and resolve dependencies, remove such conditionally dead code, and to recombine the remaining code at load or runtime are called dynamic dead code elimination or dynamic dead instruction elimination.
Most programming languages, compilers and operating systems offer little or no support beyond dynamic loading of libraries and late linking; software utilizing dynamic dead code elimination is therefore very rare in conjunction with languages compiled ahead of time or written in assembly language. However, language implementations that perform just-in-time compilation may dynamically optimize for dead code elimination.
Although with a rather different focus, similar approaches are sometimes also utilized for dynamic software updating and hot patching.
See also
Redundant code
Simplification (symbolic computation)
Partial redundancy elimination
Conjunction elimination
Dynamic software updating
Dynamic coupling (computing)
Self-relocation
Software cruft
Tree shaking
Post-pass optimization
Profile-guided optimization
Superoptimizer
Compacting garbage collection
Function multi-versioning
References
Further reading
External links
How to trick C/C++ compilers into generating terrible code?
Compiler optimizations
|
32924172
|
https://en.wikipedia.org/wiki/NovaStor
|
NovaStor
|
NovaStor is a privately held software company based in Agoura Hills, California, with offices in Hamburg, Germany, and Zug, Switzerland.
The company's primary focus is providing backup and recovery software to home users, small and medium-sized businesses, and enterprises. NovaStor and its products are often compared with other backup and data-protection solutions, such as those from Symantec and EMC.
History
NovaStor was founded in 1987. In June 2009, it was bought out by a management-led investor group, which remains the company's majority shareholder.
Products
NovaStor provides backup and recovery software for home users, businesses, and managed service providers. Its core focus is cloud and enterprise solutions, providing technology for onsite and offsite backup and the deployment of cloud-based managed services.
NovaStor product features include:
PC and server backup and restore
Local and offsite backup and restore
Tape and disk imaging software
Disaster recovery
Virtualization
Cloud storage backup enablement
Hard drive recovery and disk booting
See also
List of backup software
References
External links
Software companies based in California
Technology companies based in Greater Los Angeles
Companies based in Agoura Hills, California
American companies established in 1987
Software companies established in 1987
1987 establishments in California
Private equity portfolio companies
Software companies of the United States
|
4867507
|
https://en.wikipedia.org/wiki/W%20%28Unix%29
|
W (Unix)
|
The command w on many Unix-like operating systems provides a quick summary of every user logged into a computer, what each user is currently doing, and what load all the activity is imposing on the computer itself. The command is a one-command combination of several other Unix programs: who, uptime, and ps.
Example
Sample output (which may vary between systems):
$ w
 11:12am  up 608 day(s), 19:56,  6 users,  load average: 0.36, 0.36, 0.37
User      tty     login@   idle   what
smithj    pts/5   8:52am          w
jonesm    pts/23  20Apr06  28     -bash
harry     pts/18  9:01am   9      pine
peterb    pts/19  21Apr06         emacs -nw html/index.html
janetmcq  pts/8   10:12am  3days  -csh
singh     pts/12  16Apr06  5:29   /usr/bin/perl -w perl/test/program.pl
References
External links
Unix user management and support-related utilities
|