id | url | title | text
---|---|---|---
52764404 | https://en.wikipedia.org/wiki/SinaSoft%20Corporation | SinaSoft Corporation | SinaSoft Corporation is an Iranian software company founded in 1985.
SinaSoft is well known for products like Zarnegar (for DOS and Windows), Sayeh, Pishkar, Paradox, Payvand, Kelk (winner of best software product at GITEX '95), Windows 3.1 with Persian capabilities (a.k.a. Windows 3.1 Sina), Windows 95 with Persian capabilities (a.k.a. Windows 95 Sina), Nasher, Garce, and Payvazh.
SinaSoft has been inactive since the early 2000s, and most of its software development and support has been transferred to the Sina Cultural and Software Foundation.
References
External links
SinaSoft website
زرنگار و دیگر هیچ ("Zarnegar and Nothing Else", in Persian), an interview with the leaders of the Sina Cultural and Software Foundation.
Software companies of Iran
Software companies established in 1985
Business software companies
Iranian companies established in 1985 |
67801265 | https://en.wikipedia.org/wiki/Guilded | Guilded | Guilded is a VoIP, instant messaging and digital distribution platform designed by Guilded Inc. and owned by Roblox Corporation. Guilded is based in San Francisco. Users communicate with voice calls, video calls, text messaging, media and files in private chats or as part of communities called "guilds". Guilded was founded by Eli Brown, a former Facebook and Xbox employee. Guilded runs on Windows, Linux, macOS, Android and iOS.
Guilded is a Discord competitor that is primarily focused on video gaming communities, such as those centered on competitive gaming and esports. It provides features intended for video gaming clans, such as scheduling tools and integrated calendars. Guilded is developed by Guilded, Inc., which has been an independent product group within the Roblox Corporation since August 16, 2021.
References
Instant messaging clients
2017 software
Freeware
Android (operating system) software
Internet properties established in 2017
IOS software
MacOS instant messaging clients
Instant messaging clients for Linux
Windows instant messaging clients
Proprietary cross-platform software
Voice over IP clients for Linux
Proprietary freeware for Linux |
9412774 | https://en.wikipedia.org/wiki/Government%20Dayal%20Singh%20College%2C%20Lahore | Government Dayal Singh College, Lahore | Government Dyal Singh Graduate College, Lahore is a college for graduate and post-graduate students affiliated with the Board of Intermediate and Secondary Education, Lahore, and the University of the Punjab in Lahore, Pakistan.
History
The college was founded in Lahore in 1910, in accordance with the will of Dyal Singh, to spread Brahmo ideas. During the socio-cultural reform movement in the Indian subcontinent, Dyal Singh had been influenced by the rational, scientific ideas of the Brahmo Samaj (started by Raja Rammohan Roy). He is known as the founder of The Tribune, and he bequeathed his largely self-earned assets, including buildings in Lahore and land in the Amritsar, Lahore and Gurdaspur districts, worth about Rs. 30 lakh in 1898, to two trusts that established Dyal Singh College and the Dyal Singh Public Library in Lahore.
The college was run by an educational trust whose holdings consisted of Dayal Singh College, the Dayal Singh Library, Dayal Singh Majithia Hall and the Dayal Singh Mansions at The Mall, Lahore, adjacent to the Lahore High Court.
Dyal Singh was a philanthropist and a lover of education. A man of great vision and action, he donated his assets for the propagation of education, giving almost all his property in Lahore for the establishment of the college.
Programs offered
HSSC groups
Compulsory subjects:
English
Urdu
Islamic Education/Pak. Studies
Pre-medical - Physics, Chemistry, Biology
Pre-engineering - Physics, Chemistry, Mathematics
General Science - Statistics, Mathematics, Economics
Computer Science (ICS)
Physics, Mathematics, Computer Science
Statistics, Mathematics, Computer Science
Economics, Mathematics, Computer Science
Commerce group (I.COM) - Accounting, Economics, Business Mathematics, Principles of Commerce
Arts group
Economics, Civics, Education
Economics, Civics, Punjabi
Economics, Psychology, Education
Islamiat, Civics, Education
Islamiat, Civics, Punjabi
Islamiat, Civics, Persian
Islamiat, Civics, Arabic
Islamiat, Civics, Urdu Advance
Islamiat, Education, Arabic
Islamiat, Education, Urdu Advance
Islamiat, Education, Psychology
Islamiat, Psychology, Persian
Physical Education, Civics, Arabic
Physical Education, Civics, Education
Physical Education, Civics, Urdu Advance
Physical Education, Civics, Punjabi
Physical Education, Psychology, Education
Physical Education, Psychology, Persian
History, Psychology, Persian
History, Psychology, Education
BSc. groups
Compulsory subjects:
English
Islamic Education
Pak. Studies
Bachelor of Science
Botany, Zoology, Chemistry
Physics, Chemistry, Mathematics (G)
Economics, Statistics, Mathematics (G)
Statistics, Mathematics, Computer Science
Physics, Mathematics A course, Mathematics B course
Statistics, Mathematics A course, Mathematics B course
B.A groups
Compulsory subjects:
English
Islamic Education/Pak. Studies
Bachelor of Arts
Economics, Education, Statistics (opt)
Economics, Political Science, Statistics (opt)
Economics, Political Science, Persian (opt)
Economics, Punjabi, Arabic (opt)
Islamiat, Arabic, Punjabi (opt)
Islamiat, Education, Punjabi (opt)
Islamiat, Education, Arabic (opt)
Islamiat, Education, Persian (opt)
Islamiat, Political Science, Punjabi (opt)
Islamiat, Political Science, Arabic (opt)
Islamiat, Political Science, Persian (opt)
Islamiat, Punjabi, Urdu (opt)
Islamiat, Punjabi, Arabic (opt)
Islamiat, Punjabi, Persian (opt)
Islamiat, History, Persian (opt)
Islamiat, Persian, Urdu (opt)
Economics, History, Persian (opt)
Economics, Persian, Arabic (opt)
B.COM (IT) group
BC-301 Business Statistics & Mathematics
BC-302 Computer Application in Business
BC-303 Economics
BC-304 Financial Accounting
BC-305 Functional English
BC-306 Introduction to Business
BC-307 Money Banking And Finance
BC-308 Islamic Studies / Ethical Behaviours
Masters
MSc Mathematics
MA English
College uniform
Summer: white shirt and steel-grey trousers or white shalwar qameez
Winter: white shirt and steel-grey trousers or white shalwar qameez, navy blue blazer/sweater/jersey
Location
The college is on Nisbat Road, near Lakshmi Chowk, Lahore, Pakistan.
Notable faculty members
Sadhu T.L. Vaswani, principal, 1912-1915
Prof. Altaf Hussain Chahat, Chairman Department of English, 2000-2006
References
Dyal Singh Majithia
External links
University of the Punjab - Affiliation
Govt. Dyal Singh College Lahore
Panoramio Group
Facebook
Dyal Singh College Delhi
Dyal Singh College Karnal
Dyal Singh Public School Karnal
Dyal Singh Public School Jagadhari
Dayal Singh Trust Library Lahore
Universities and colleges in Lahore
University of the Punjab |
305296 | https://en.wikipedia.org/wiki/Color%20grading | Color grading | Color grading is the process of improving the appearance of an image for presentation in different environments on different devices. Various attributes of an image such as contrast, color, saturation, detail, black level, and white point may be enhanced whether for motion pictures, videos, or still images. Color grading and color correction are often used synonymously as terms for this process and can include the generation of artistic color effects through creative blending and compositing of different images. Color grading is generally now performed in a digital process either in a controlled environment such as a color suite, or in any location where a computer can be used in dim lighting.
The earlier photochemical film process, referred to as color timing, was performed at a film lab during printing by varying the intensity and color of light used to expose the rephotographed image. Since, with this process alone, the user was unable to immediately view the outcome of their changes, the use of a Hazeltine color analyzer was common for viewing these modifications in real time.
Color timing
Color timing is used in reproducing film elements. "Color grading" was originally a lab term for the process of changing the color appearance in film reproduction when going to the answer print or release print in the film reproduction chain. This film grading technique became known as color timing and involved changing the duration of exposure through different filters during film printing. Color timing is specified in printer points, which represent presets in a lab contact printer, where 7 to 12 printer points represent one stop of light. The number of points per stop varied based upon the negative or print stock and the presets at different film labs.
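To make the printer-point arithmetic concrete, here is a minimal Python sketch, assuming 8 points per stop (the real figure varied from 7 to 12, as noted above); the function names are hypothetical:

```python
# A minimal sketch of printer-point arithmetic. The 8 points-per-stop
# default is an assumption for illustration; as noted above, the real
# value varied from 7 to 12 depending on stock and lab presets.

def points_to_stops(point_change: float, points_per_stop: float = 8) -> float:
    """Convert a change in printer points to stops of exposure."""
    return point_change / points_per_stop

def exposure_factor(point_change: float, points_per_stop: float = 8) -> float:
    """One stop doubles (or halves) the light, so the factor is 2**stops."""
    return 2 ** points_to_stops(point_change, points_per_stop)

# Raising a printer light by 4 points at 8 points/stop is half a stop,
# i.e. about 1.41x the light reaching the print stock.
print(points_to_stops(4))   # 0.5
print(exposure_factor(4))   # ~1.414
```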
In a film production, the creative team would meet with the "lab timer", who would watch a running film and make notes dependent upon the team's directions. After the session, the timer would return to the lab and put the film negative on a device (the Hazeltine) which had preview filters with a controlled backlight, picking exact settings of each printer point for each scene. These settings were then punched onto a paper tape and fed to the high-speed printer, where the negative was exposed through a backlight to a print stock. Filter settings were changed on the fly to match the printer lights that were on the paper tape. For complex work such as visual effects shots, "wedges" running through combinations of filters were sometimes processed to aid the choice of the correct grading.
This process is used wherever film materials are being reproduced.
Telecine
With the advent of television, broadcasters quickly realised the limitations of live television broadcasts and turned to broadcasting feature films from release prints directly from a telecine. This was before 1956, when Ampex introduced the VRX-1000, the first Quadruplex videotape recorder (VTR). Live television shows could also be recorded to film and aired at different times in different time zones by filming a video monitor. The heart of this system was the kinescope, a device for recording a television broadcast to film.
The early telecine hardware was the "film chain" for broadcasting from film and utilized a film projector connected to a video camera. As explained by Jay Holben in American Cinematographer Magazine, "The telecine didn't truly become a viable post-production tool until it was given the ability to perform colour correction on a video signal."
How telecine coloring works
In a cathode-ray tube (CRT) system, an electron beam is projected at a phosphor-coated envelope, producing a spot of light the size of a single pixel. This beam is scanned across a film frame from left to right, capturing one horizontal line of the frame at a time. Vertical scanning of the frame is accomplished as the film moves past the CRT's beam. Once this photon beam passes through the film frame, it encounters a series of dichroic mirrors which separate the image into its primary red, green and blue components. From there, each individual beam is reflected onto a photomultiplier tube (PMT), where the photons are converted into an electronic signal to be recorded to tape.
In a charge-coupled device (CCD) telecine, a white light is shone through the exposed film image onto a prism, which separates the image into the three primary colors, red, green and blue. Each beam of colored light is then projected at a different CCD, one for each color. The CCD converts the light into an electronic signal, and the telecine electronics modulate these into a video signal that can then be color graded.
Early color correction on Rank Cintel MkIII CRT telecine systems was accomplished by varying the primary gain voltages on each of the three photomultiplier tubes to vary the output of red, green and blue. Further advancements converted much of the color-processing equipment from analog to digital and then, with the next-generation telecine, the Ursa, the coloring process was completely digital in the 4:2:2 color space. The Ursa Gold brought about color grading in the full 4:4:4 color space.
Color correction control systems started with the Rank Cintel TOPSY (Telecine Operations Programming SYstem) in 1978. In 1984 Da Vinci Systems introduced their first color corrector, a computer-controlled interface that would manipulate the color voltages on the Rank Cintel MkIII systems. Since then, technology has improved to give extraordinary power to the digital colorist. Today there are many companies making color correction control interfaces including Da Vinci Systems, Pandora International, Pogle and more.
Some telecines are still in operation in 2018.
Color correction
Some of the main artistic functions of color correction (digital color grading) include:
Reproducing accurately what was shot
Compensating for variations in the material (i.e., film errors, white balance, varying lighting conditions)
Compensating for the intended viewing environment (dark, dim, bright surrounds)
Optimizing base appearance for inclusion of special visual effects
Establishing a desired artistic 'look'
Enhancing and/or altering the mood of a scene — the visual equivalent to the musical accompaniment of a film; compare also film tinting
Note that some of these functions must be prioritized over others; for example, color grading may be done to ensure that the recorded colors match those of the original scene, whereas other times, the goal may instead be to establish a very artificial stylized look.
Traditionally, color grading was done towards practical goals. For example, in the film Marianne, grading was used so that night scenes could be filmed more cheaply in daylight. Secondary color correction was originally used to establish color continuity; however, the trend today is increasingly moving towards creative goals such as improving the aesthetics of an image, establishing stylized looks, and setting the mood of a scene through color. Due to this trend, some colorists suggest the phrase "color enhancement" over "color correction".
Primary and secondary color grading
Primary color grading affects the whole image by providing control over the color density curves of the red, green, and blue color channels across the entire frame. Secondary correction can isolate a range of hue, saturation and brightness values and alter hue, saturation and luminance only within that range, allowing the grading of secondary colors while having minimal or usually no effect on the remainder of the color spectrum. Using digital grading, objects and color ranges within a scene can be isolated with precision and adjusted. Color tints can be manipulated and visual treatments pushed to extremes not physically possible with laboratory processing. With these advancements, the color correction process has become increasingly similar to well-established digital painting techniques, ushering in a new era of digital cinematography.
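As a rough illustration of the difference, the sketch below applies a primary per-channel gain across a whole frame, then a secondary shift confined to an isolated hue range. It assumes an RGB image with values in [0, 1]; the function names and parameter values are illustrative, not any grading product's API.

```python
# Primary vs. secondary grading on an RGB image in [0, 1] (illustrative).
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def primary_gain(img, gains=(1.0, 1.0, 1.0)):
    """Primary correction: per-channel gain applied to the entire frame."""
    return np.clip(img * np.asarray(gains), 0.0, 1.0)

def secondary_hue_shift(img, hue_range=(0.55, 0.70), shift=0.05):
    """Secondary correction: isolate a hue range (here, blues) and shift
    only those pixels, leaving the rest of the spectrum untouched."""
    hsv = rgb_to_hsv(img)
    in_range = (hsv[..., 0] >= hue_range[0]) & (hsv[..., 0] <= hue_range[1])
    hsv[..., 0] = np.where(in_range, (hsv[..., 0] + shift) % 1.0, hsv[..., 0])
    return hsv_to_rgb(hsv)

frame = np.random.rand(4, 4, 3)                # stand-in for a scanned frame
warm = primary_gain(frame, (1.1, 1.0, 0.9))    # whole-frame warm cast
graded = secondary_hue_shift(warm)             # nudge only the blues
```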
Masks, mattes, power windows
The evolution of digital color grading tools has advanced to the point where the colorist can use geometric shapes (such as mattes or masks in photo software such as Adobe Photoshop) to isolate color adjustments to specific areas of an image. These tools can highlight a wall in the background and color only that wall, leaving the rest of the frame alone, or color everything but that wall. Subsequent color correctors (typically software-based) have the ability to use spline-based shapes for even greater control over isolating color adjustments. Color keying is also used for isolating areas to adjust.
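A minimal sketch of the idea, assuming a soft elliptical window with a simple linear feather (both the shape and the feathering model are illustrative choices):

```python
# A "power window" sketch: a soft geometric mask restricts a correction
# to one region of the frame. All names here are illustrative.
import numpy as np

def elliptical_window(h, w, center, radii, softness=0.1):
    """Return a [0, 1] mask: 1 inside the ellipse, feathering to 0 outside."""
    ys, xs = np.mgrid[0:h, 0:w]
    (cy, cx), (ry, rx) = center, radii
    d = np.sqrt(((ys - cy) / ry) ** 2 + ((xs - cx) / rx) ** 2)
    return np.clip((1.0 + softness - d) / softness, 0.0, 1.0)

def apply_windowed_grade(original, graded, mask):
    """Blend the graded image in only where the window is active."""
    return original * (1.0 - mask[..., None]) + graded * mask[..., None]

img = np.random.rand(100, 160, 3)                      # stand-in frame
brighter = np.clip(img * 1.3, 0.0, 1.0)                # grade to isolate
win = elliptical_window(100, 160, (50, 80), (30, 50))  # window over a "wall"
out = apply_windowed_grade(img, brighter, win)
```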
Inside and outside of area-based isolations, digital filtration can be applied to soften, sharpen or mimic the effects of traditional glass photographic filters in nearly infinite degrees.
Motion tracking
When trying to isolate a color adjustment on a moving subject, the colorist traditionally would have needed to manually move a mask to follow the subject. In its most simple form, motion tracking software automates this time-consuming process using algorithms to evaluate the motion of a group of pixels. These techniques are generally derived from match moving techniques used in special effects and compositing work.
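In its simplest form the idea can be sketched as a template search from one frame to the next; the example below uses OpenCV's normalized cross-correlation, which is far cruder than the trackers in real grading systems, and every name in it is illustrative:

```python
# Follow a patch of pixels between frames so a mask can move with the
# subject. Frames are assumed to be uint8 or float32 NumPy arrays.
import cv2

def track_patch(prev_frame, next_frame, top_left, size):
    """Find where the patch at top_left in prev_frame moved to in next_frame."""
    y, x = top_left
    h, w = size
    template = prev_frame[y:y + h, x:x + w]
    scores = cv2.matchTemplate(next_frame, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, best_xy = cv2.minMaxLoc(scores)   # location of best match, (x, y)
    return best_xy[1], best_xy[0]              # return as (y, x)

# The colorist's window is then translated by the tracked offset on each
# frame, replacing the manual keyframing described above.
```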
Digital intermediate
The evolution of the telecine device into film scanning allowed the digital information scanned from a film negative to be of sufficient resolution to transfer back to film. In the early 1990s, Kodak developed the Cineon Film System to capture, manipulate, and record back to film, a process they called the "digital intermediate"; the term stuck. The first digital intermediate of any form was the Cinesite restoration of Snow White and the Seven Dwarfs in 1993. (Previously, in 1990, for The Rescuers Down Under, the Disney CAPS system had been used to scan artwork, color and composite it, and then record it to film, but this was intermixed with a traditional lab development process over a length of time.)
In the late 1990s, the films Pleasantville and O Brother, Where Art Thou? advanced the technology to the point that the creation of a digital intermediate was practical, which greatly expanded the capabilities of the digital telecine colorist in a traditionally film-oriented world. Since 2010, almost all feature films have gone through the DI process, while manipulation through photochemical processing is rare or used on archival films.
In Hollywood, O Brother, Where Art Thou? was the first film to be wholly digitally graded. The negative was scanned with a Spirit DataCine at 2K resolution, then colors were digitally fine-tuned using a Pandora MegaDef color corrector on a Virtual DataCine. The process took several weeks, and the resulting digital master was output to film again with a Kodak laser recorder to create a master internegative.
Modern motion picture processing typically uses both digital cameras and digital projectors, and calibrated devices are essential to ensure that the appropriate colors appear.
Hardware-based versus software-based systems
Historically, hardware-based systems (da Vinci 2K, Pandora International MegaDEF, etc.) offered better performance but a smaller feature set than software-based systems. Their real-time performance was optimised for particular resolutions and bit depths, whereas software platforms using standard computer industry hardware often trade speed for resolution independence, e.g. Apple's Color (previously Silicon Color Final Touch), ASSIMILATE SCRATCH, Adobe SpeedGrade and SGO Mistika. While hardware-based systems always offer real-time performance, some software-based systems need to pre-render as the complexity of the color grading increases. On the other hand, software-based systems tend to have more features, such as spline-based windows/masks and advanced motion tracking.
The line between hardware and software no longer exists, as many software-based color correctors (e.g. Pablo, Mistika, SCRATCH, Autodesk Lustre, Nucoda Film Master and FilmLight's Baselight) use multiprocessor workstations and a GPU (graphics processing unit) as a means of hardware acceleration. In addition, some newer software-based systems, e.g. Blackmagic Design's DaVinci Resolve, use a cluster of multiple parallel GPUs in one computer system to improve performance at the very high resolutions required for feature film grading. Some color grading software, such as Synthetic Aperture's Color Finesse, runs solely as software and will even run on low-end computer systems. High-speed RAID arrays are an essential part of the process for all systems.
Hardware
Hardware systems are no longer common because of the price/performance of software systems. The control panels are placed in a color suite for the colorist to operate the telecine remotely.
Many telecines were controlled by a Da Vinci Systems color corrector 2k or 2k Plus.
Other hardware systems are controlled by Pandora Int.'s Pogle, often with either a MegaDEF, Pixi, or Revolution color grading system.
For some real-time systems used in "linear" editing, color grading systems required an edit controller. The edit controller controls the telecine and one or more VTRs or other recording/playback devices to ensure frame-accurate film frame editing. There are a number of systems which can be used for edit control. Some color grading products, such as Pandora Int.'s Pogle, have a built-in edit controller. Otherwise, a separate device such as Da Vinci Systems' TLC edit controller would be used.
Older systems include: Renaissance, Classic analog, Da Vinci Systems' The Whiz (1982) and 888; Corporate Communications' System 60XL (1982–1989) and Copernicus-Sunburst; Bosch Fernseh's FRP-60 (1983–1989); Dubner (1978–1985?); and Cintel's TOPSY (1978), Amigo (1983), and ARCAS (1992) systems. All of these older systems work only with standard-definition 525 and 625 video signals and are considered near obsolete today.
Organizations
In 2016, an international professional organization for film colorists, the Colorist Society International, was founded at the NAB Show in Las Vegas.
Gallery
See also
One-light
Color balance
References
External links
Colorist Society International (CSI) - The Professional Body for Colourists
The TKColorist Internet Group
Why Do You Need Color Correction, video by Terence Curren, senior colorist at Apha Dogs, Inc.
What Can a Colorist Learn from a Director of Photography? Interview with Ellie Ann Fenton
Cinematic Color: From Your Monitor to the Big Screen
A Walk By The Digital Film Colorist… Revolution?
Color
Film and video technology
Filmmaking occupations |
44444669 | https://en.wikipedia.org/wiki/San%20Francisco%20%28sans-serif%20typeface%29 | San Francisco (sans-serif typeface) | San Francisco is a neo-grotesque typeface made by Apple Inc. It was first released to developers on November 18, 2014. It is the first new typeface designed at Apple in nearly twenty years and was inspired by Helvetica and DIN.
The macOS Catalina font Galvji is similar to the San Francisco variant SF Pro Text but has lower leading and bigger spacing.
Variants
Note: SF has the codename SFNS in macOS and SFUI in iOS, regardless of the official name.
Some variants have two optical sizes: "display" for large and "text" for small text. Compared to display, the letters in text have larger apertures and more generous letter-spacing. The operating system automatically chooses the display optical size for sizes of at least 20 points, and the text optical size otherwise.
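Expressed as a trivial rule (the 20-point threshold comes from the description above; the function itself is hypothetical):

```python
# A sketch of the optical-size selection rule described above.
def optical_size(point_size: float) -> str:
    return "display" if point_size >= 20 else "text"

print(optical_size(17))  # "text"    - larger apertures, looser spacing
print(optical_size(28))  # "display"
```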
Distributed fonts
SF Compact
The initial font introduced with the Apple Watch and watchOS; it was later rebranded as SF Compact with the introduction of SF UI at WWDC 2015. Unlike SF Pro, its characters' round curves are flatter, allowing the letters to be laid out with more space between them, thereby making the text more legible at the small sizes the Apple Watch screen demands. SF Compact Rounded was introduced in 2016.
SF Compact Text comes with 9 weights with their italics. It had only 6 weights when introduced.
SF Compact Display comes with 9 weights with their italics.
SF Compact Rounded comes with 9 weights. It has the same figure as the "display" version but with rounded corners.
SF Pro/SF UI
UI font for macOS, iOS, iPadOS, and tvOS. In 2017, a revised version, SF Pro, was introduced, supporting an expanded list of weights, optical sizes, glyphs and languages. SF Pro Rounded (codename SFUIRounded) was introduced in 2018.
SF Pro Text comes with 9 weights with their italics. It had only 6 weights when introduced.
SF Pro Display comes with 9 weights with their italics.
SF Pro Rounded comes with 9 weights. It has the same figure as the "display" version but with rounded corners.
SF Mono
A monospaced variant. UI font for the Terminal, Console, and Xcode applications. It was introduced at WWDC 2016.
SF Mono comes with 6 weights with their italics.
Non-distributed fonts
These fonts, for use in different languages, can be found on the Apple website in their corresponding regions of use as variations of SF Pro:
SF Pro AR is an Arabic font; SF Pro JP is a Japanese font. SF Pro KR is a Korean font, and SF Pro TH is a Thai font.
SF Pro SC, SF Pro TC and SF Pro HK are Chinese fonts; they are labeled as the PingFang family.
SF Condensed
A condensed variant of SF Pro.
SF Condensed Text has 6 weights.
SF Condensed Display has 9 weights.
A variant called SF Condensed Photos was formerly used in Photos. SF Condensed Photos has a tighter width and spacing and a smaller aperture than "SF Shields Condensed-Bold"; this is especially visible in the letter "c", which has a smaller gap at its end than in any other variant.
SF Hello
A print-optimized variant with adjusted letter-spacing and intermediate optical sizing between SF Pro Text and SF Pro Display; however, some characters are tweaked. It is restricted to Apple employees and permitted contractors and vendors, and is therefore unavailable for public use.
SF Cash
An alternative condensed variant. It includes a chiseled-style SVG color font titled "SF Cash Chiseled", a plain version titled "SF Cash Plain", and "SF Cash Text Condensed Semibold", which appears more condensed than "SF Condensed Text Semibold".
SF Shields
A compressed variant. "SF Shields Semicondensed-Bold" is narrower than SF Condensed Display-Bold, "SF Shields Condensed-Bold" is narrower than "SF Shields Semicondensed-Bold", and "SF Display Shields Compressed-Bold" has the narrowest style.
SF Camera
Introduced on September 10, 2019 at Apple's keynote; Phil Schiller mentioned it while summarizing the camera updates on iPhone 11 Pro. Different from SF Pro, this variant has a boxier design which gives an industrial and professional look. Its figure and tracking are similar to SF Compact Text.
Other fonts
SF Serif (New York)
A serif variant. It was introduced as SF Serif (codename Serif UI) at WWDC 2018 as the UI font for the redesigned Apple Books app for iOS 12. It was officially released under the name New York on the Apple Developer site on June 3, 2019.
New York Small comes with 6 weights with their italics.
New York Medium comes with 6 weights with their italics.
New York Large comes with 6 weights with their italics.
New York Extra Large comes with 6 weights with their italics.
The font includes OpenType features for lining and old-style figures in both proportional and tabular widths. Although Apple previously had a font with the same name, in bitmap format for the original Macintosh (and later converted to TrueType format), it is unrelated to this design.
Variable Fonts
Apple introduced the OpenType Font Variations feature of their SF fonts at WWDC 2020. It is included as a TrueType font in the installer file on the Developer website.
SF Pro, SF Pro Italic, and SF Compact feature variable weights and variable optical sizes of between "text" and "display".
SF Compact Italic features variable weights but has "text" optical size only.
New York and New York Italic feature variable weights and variable optical sizes between "small" and "extra large".
SF Symbols
SF Symbols refers to the symbols and icons used in Apple's operating systems. To fit Apple's objectives of core functionality and ease of use, these symbols are designed using Apple's visual language and unified design elements. They also use the squircle instead of standard rounded corners for a more comfortable look, similar to what Apple has used in its other designs. By using unified symbols, users get a consistent and intuitive experience when moving between Apple's devices, services, and apps.
Apple's symbols are included as glyphs in the font files of SF Pro, SF Pro Rounded, SF Compact, and SF Compact Rounded (and in their variable font files). Each symbol is available in 3 sizes. The symbols change their thickness and negative space according to the chosen weight, and they also work with the OpenType Font Variations feature. The SF Symbols app gives access to more features, such as refined alignment, multicolor, and localization of symbols. Symbol properties are not fully unified across variants: a few symbols occupy different Unicode positions, so switching between variants can yield different symbols, and some symbols show very small differences in detail between variants.
These symbols are available for developers to use in their apps on Apple platforms only. Developers are allowed to customize it to desired styles and colors, but certain symbols may not be modified and may only be used to refer to its respective Apple services or devices as listed in the license description.
Usage
Since its introduction, San Francisco has gradually replaced most of Apple's other typefaces on their software and hardware products and for overall branding, and has replaced Lucida Grande and Helvetica Neue as the system typeface of macOS and iOS since OS X El Capitan and iOS 9. Apple uses it on its website and for its product wordmarks, where it replaced Myriad Pro. It is also used on the Magic Keyboard, on the keyboard of the 2015 MacBook, and on the 2016 MacBook Pro, replacing VAG Rounded, and serves as Apple's corporate typeface.
Apple restricts the usage of the typeface by others. It is licensed to registered third-party developers only for the design and development of applications for Apple's platforms. Only the SF Pro, SF Compact, SF Mono, SF Arabic, and New York variants are available for download on the Apple Developer website, and they are the only SF variants developers are allowed to use.
The San Francisco Chronicle described the font as having nothing to do with the city and just being "Helvetica on a low-carb diet".
See also
IBM Plex
Roboto
Noto
Segoe
References
External links
San Francisco on Apple's developer website
Apple Inc. typefaces
Sans-serif typefaces
Monospaced typefaces
Typefaces with optical sizes
Computer-related introductions in 2014
Typefaces and fonts introduced in 2014
Cyrillic typefaces
Display typefaces
Greek typefaces
Latin-script typefaces |
42725242 | https://en.wikipedia.org/wiki/SCO%20Forum | SCO Forum | SCO Forum was a technical computer conference sponsored by the Santa Cruz Operation (SCO), briefly by Caldera International, and later The SCO Group that took place during the 1980s through 2000s. It was held annually, most often in August of each year, and typically lasted for much of a week. From 1987 through 2001 it was held in Santa Cruz, California, on the campus of the University of California, Santa Cruz. The scenic location, amongst redwood trees and overlooking Monterey Bay, was considered one of the major features of the conference. From 2002 through 2008 it was held in Las Vegas, Nevada, at one of several hotels on the Las Vegas Strip. Despite the name and location changes, the conference was considered to be the same entity, with both the company and attendees including all instances in their counts of how many they had attended.
During the keynote addresses for the Santa Cruz conferences, SCO would present its vision of the direction of the computer industry and how its products fit into that direction. There were then many highly technical breakout sessions and "birds of a feather" discussions where SCO operating systems and other technologies were explained in detail and customers and partners could engage SCO engineers regarding them. Typically some – attendees came to each Forum. Due to its useful content and its relaxed, fun atmosphere, the Santa Cruz Forum became known as one of the best such conferences in the industry. It was the largest tech event in the Santa Cruz area and made a multi-million dollar impact on the local economy.
During the Las Vegas years, Forum was used to convey the SCO Group's side in the SCO–Linux disputes. It was also used to showcase the company's efforts to revitalize its operating system business and to get into new business areas.
SCO in Santa Cruz years
Aims
The goal of SCO Forum was to spread the company's message and inform its users and partners as to the capabilities and technical characteristics of its products and express optimism about the future path of the company. Representatives from SCO included executives, product managers, development engineers, and others. Attendees from outside SCO included value-added resellers (VARs), channel distributors, application developers, and computer manufacturers.
Forum helped establish a community around SCO, where people reinforced each other in believing that using Unix as a basis for business solutions – by no means a given in those days – was the correct approach, and that SCO provided the right products from both a technical and business aspect to do so.
As such, SCO Forum was considered a popular and very successful event. As Dr. Dobb's Journal later wrote, "SCO Forum was the place to be if you were a Unixhead."
With SCO having built a successful business with its Unix-on-commodity-hardware offering, Forum was used by the company to argue why new competitors in the space, such as Univel and SunSoft, would not be successful. In later years, when Unix itself came under threat, first from Microsoft's Windows NT and then from open source Linux, it was a role of Forum to stress that Unix was not going away and that business success could still be had with it. As SCO CEO Doug Michels said at Forum 1999, "In spite of all the rumors and opinions that Unix would end, it didn't."
New deals between SCO and other companies in the industry were often announced at Forum.
At other times, panel discussions were held to discuss the state of existing partnerships, such as one for Project Monterey, the strategic importance of which was given much attention at the time. Company slogans were advanced, such as "the Internet Way of Computing". On the other hand, failed initiatives announced at previous events, such as SCO's involvement in the Advanced Computing Environment (ACE), were explained away as quickly as possible.
History
The first conference took place in 1987; it was referred to as the SCO XENIX 386 Developer Conference. SCO was looking for a place to hold an event that would bring together developers to exchange ideas, and the university said that it could provide such a spot in late August, before students returned to campus for the fall quarter.
By August 1988, the trade publication InfoWorld was mentioning "SCO Forum '88, a conference for Xenix developers." However, unlike the previous year's conference, this one was not restricted to developers, with resellers invited as well as part of SCO's effort to build a strong reseller base. The conference featured an announcement from SCO partner AT&T about a merged Unix and Xenix OS product.
SCO Forum '89 was also reported on in InfoWorld as well as in PC Week. It was held during August 21–25, as was promoted ahead of time by Newsbytes News Network. It featured third-party vendors announcing new releases of their products. In particular, an agreement with Microsoft to support Word and related products on SCO systems was highlighted. Speakers at Forum '89 included Paul Maritz of Microsoft and Ray Noorda of Novell as well as the company's two founders, Larry Michels and Doug Michels.
Both the company and the conference underwent growth. SCO held other technical and marketing events and seminars during the year and around the world, but Forum was certainly the largest of them. Advertisements for Forum stressed the value attending it would hold for a wide range of industry people – executives, managers, hardware developers, software developers, resellers, distributors, dealers, third-party vendors, and end users, as well as journalists and industry analysts, with session tracks available for each of these audiences.
With the company showing some profitable quarters, anticipating going public, and holding a roughly 75 percent share in the small-to-medium-sized businesses market, SCO Forum92 saw people in attendance, a big jump of about twice the previous year's total. From a third to a half of the attendees were from overseas, reflecting the company's worldwide success. Among these were about thirty attendees from formerly communist Eastern Bloc states.
Advertisements for Forum93 tried to give the conference an almost academic flavor, billing it as an "International Open Systems Symposium" and making reference to the open systems movement then popular. Courses were said to be available in certain "majors" and upon completion would result in the attendee earning a certificate of completion from SCO. But the tenor of the week was set by SCO chief executive Lars Turndal's opening keynote address, where he attempted to soothe anxieties related to the company's past year of management shakeups and poor stock performance.
By 1994, Forum was on UNIX Review's recommended list of shows and conferences for readers to attend, and in a survey of events the magazine characterized it as one of "the industry's leading-edge trade shows".
An increase in technically-oriented, future-focused content was noted for Forum94. Forum94 had one of the more celebrated demonstrations, that of SCO's back-end role in the creation of PizzaNet, which enabled computer users for the first time to order pizza delivery from their local Pizza Hut restaurant via the Internet. That year's conference also witnessed what is said to have been the first-ever scheduled live music concert to be broadcast across the Internet, in an August 23 performance by local band Deth Specula on the Mbone.
SCO had been an original co-sponsor of the UniForum association of Unix users and had long had a close relationship with it. By 1996 attendees to Forum were given a trial membership in UniForum. And by Forum98 there was an explicit UniForum track of breakout sessions available.
In some cases gatherings at Forum led to industry initiatives taking place, such as the 86open effort to form consensus on a common binary file format for x86-based Unix and Unix-like operating systems, which had its initial meeting at SCO's Santa Cruz offices on the final day of Forum97.
Peak Forum attendance was in 1997 and 1998, when about people attended each event. Some 60 different countries were represented.
Forum often featured individuals and groups who came year after year, viewing it as something of an annual pilgrimage and reunion. One such group was of developers and resellers in the United States who qualified for the SCO Advanced Product Center (APC) designation. After first getting together at a "birds of a feather" session at Forum in 1989, they formed an association known as APC Open in 1990, that was renamed to APC International in 1998 and iXorg in 2000. Another such attendee was Dupaco, the founder of which attended every Forum from the beginning and built a multi-million dollar business with SCO Xenix and later products while becoming the sole distributor of SCO products in the Netherlands.
Many writers considered SCO Forum to be unique in the industry.
As a Dataquest columnist said, "It was a veritable treat. Set amidst the verdant redwood trees at the ... (UCSC) campus, it was a Unix feast that lasted for five Californian summer days."
An industry observer for eWeek recalled that both Forum and the company Santa Cruz Operation itself had "reflected the ethos of the community for which it was named" and that "based in the college/beach town of Santa Cruz, Calif., epitomized an industry culture [soon to be] gone."
And as one ZDNet writer stated, "SCO Forum ... is like no conference or industry confab you'll ever attend. Part pep rally, part study session, part sales pitch, and part schmoozefest, Forum has a far different atmosphere than any conventional trade show."
Structure
The conference was sometimes arranged through the Jack Baskin School of Engineering of UC Santa Cruz and typically used classrooms, dining facilities, recreational areas, parking lots, and campus housing, most often at Cowell College and Stevenson College. In one year a dean at UC Santa Cruz sent a memorandum to the campus community stating:
"I know that this conference occupies numerous UCSC facilities and may create confusion about where to park and eat. For some, the noise created by the presence of the conference participants will impact their work environment. However, this event provides many benefits for our campus community, and I hope that you are able to make accommodations in order to minimize the effects of the forum".
Keynote addresses were held each morning in the university quarry, an open-air amphitheater nestled within coastal redwoods. Upwards of attendees would come to these sessions. Doug Michels saw the quarry as an advantage over the dark hotel ballrooms where most conferences presented their over-produced keynote addresses, saying, "It's impossible to give a slide show in the Quarry." Due to the sharp diurnal temperature fluctuations characteristic of the Santa Cruz Mountains, the quarry was often fog-enshrouded and chilly in the morning, but attendees were advised to dress in layers that could be shed later as the fog burned off and the sun shone over Monterey Bay. The quarry had plain wood bleachers for which cushions were provided to sit on. (For Forum 1999 only, which had a decline in attendance from the peak, keynotes were moved to a location on the campus's East Field, where a stage and seating area were constructed. It was cold and foggy there in the morning too, but as CNN reported, that "couldn't dampen the spirits of Unix enthusiasts" in attendance.)
Keynote addresses came not just from SCO executives but from major figures in the industry, including Andy Grove, CEO of Intel, as well as from various executives of partner companies.
Technology observers would also debate such matters as, in SCO Forum 95, the nature of the much-ballyhooed "information superhighway", with writer Clifford Stoll and Electronic Frontier Foundation founder and Grateful Dead lyricist John Perry Barlow reaching divergent conclusions. A third participant, one that was ironic in light of later developments, was Linus Torvalds, who offered his own view of things. Predictions made during Forum keynotes were not always accurate; Torvalds himself said that Linux made a more reliable desktop than Microsoft and that "if Unix decides to ignore the desktop market and tries to be a server, even if it's a server that tries to serve desktops, Unix is eventually going to die. And I think the future is acknowledging that the desktop market is where it's at."
Even business rivals were sometimes represented, with Sun Microsystems CEO Scott McNealy – who was also in the Unix-on-Intel space – speaking at Forum in 1996. McNealy pointed out some areas of common interest in the process of giving what ZDNet recalled several years later as "an extremely entertaining speech".
In addition, guest speakers often included humorists of one kind or another, including such figures as Dilbert cartoonist Scott Adams, who one reporter said "enthralled" the crowd. Another such speaker was author Dave Barry.
There were a hundred or more hardware and software exhibitors at the conference, who would set up labs and demonstrations in college halls. These companies included major systems vendors such as IBM, Compaq Computer and NEC. SCO set up pavilions to demonstrate various advances in networking and client-server computing. The general public was invited to attend the keynote addresses and visit the exhibits and pavilions, after paying a relatively small entrance fee.
Breakout sessions took place during the late morning and in the afternoon after lunch, and were devoted to the most detailed level explanations of SCO products, with tracks devoted towards both technical and marketing audiences. Sample session names included "Tuning and Monitoring Your SCO Internet Server", "Rejuvenate Character Applications with SCO TermVision", and "Retail Business Opportunities II: Over the Counter Profits". In addition, the years 1996 through 1999 saw Thursday and Friday added on for a supplemental "Developer Fast Track" program; these sessions covered hard-core topics such as "DDC 8 Tutorial and Driver Walk-Through", "JDK 1.2 – Benefits for Application Programmers", and "Porting Applications to IPv6". In the evenings after dinner, "birds of a feather" sessions were held in a number of classrooms and other locations, allowing attendees even more direct contact with SCO product managers and development engineers. The SCO Skunkware collection of open source built and packaged for SCO operating systems was an example of something that was spread through birds of a feather sessions at multiple Forums.
Forum made a large, positive economic impact on the town of Santa Cruz, the university, and surrounding Santa Cruz County. In the mid-to-late 1990s this benefit was estimated at $3–4 million. Indeed, the mayor of Santa Cruz would sometimes label the week as "SCO Forum Week" or open the conference on Monday morning, and it was the largest tech industry gathering of any kind in the county.
Hotels and motels in the area would be booked for the week. Some attendees were put up in campus rooms and apartments, a tradition dating back to the early years of Forum when typical SCO developers could not afford anything more.
Resellers who performed the best during the year were rewarded by getting their travel expenses to Forum paid for. The age of attendees was older than usual for technology conferences, with many VARs having well-established businesses. In addition, more women were present at Forum than were typically seen at technology conferences, which one writer partly attributed to the more mature nature of the SCO reseller base.
Fun
As one Linux Journal piece noted, "SCO Forum [is] famous for its fun, casual environment." UNIX Review mentioned Forum as being associated with "the usual Santa Cruzian gaiety". Indeed, the 'having a fun experience' aspect was something that the company's two founders, Larry Michels and Doug Michels, both emphasized.
The environment and the dress code were both casual (although some vendor representatives did not always get the message at first). One first-time attendee termed the week a "romp in the redwoods". Even finding one's way around could be considered enjoyable, as one attendee later recalled: "Had you ever been to one of [the company's] shindigs at the University of Santa Cruz? It was called the SCO Forum and by the fourth day ... you've finally remembered, which giant redwood to go left at and which sandy cliff you should climb to make it back to your dorm."
Side activities at Forum often included a golf tournament, a soccer tournament with international teams, a fun run, beach volleyball, wine tastings in the nearby mountains, and rides on the Santa Cruz, Big Trees and Pacific Railway or Roaring Camp & Big Trees Narrow Gauge Railroad to a barbecue and the spectacular Henry Cowell Redwoods State Park.
Parties were frequent at Forum and local catering companies did quite well. A large contingent of Forum attendees from Latin America made their presence felt in this respect. Parties were also sometimes held off-campus, such as at the Santa Cruz Beach Boardwalk. The existence of official SCO Forum bottles of wine gave further credence to this aspect of the conference.
Typically SCO hosted a Barbecue and Anniversary Celebration on the Tuesday night of Forum, with a band that played until 11pm, after which some attendees carried on in what, as the conference guide said, was "a social event that has become legendary in the computer industry." Many other relaxations took place as well. Name musical acts featured at SCO Forum, for the Tuesday night party and in other time slots, included Tower of Power, The Kingsmen, The Surfaris, Jan and Dean, Jefferson Starship, and appearances over three consecutive years from folk-rock legend Roger McGuinn. Local bands that performed for the Tuesday night party included Big Bang Beat and Dick Bright's SRO.
Caldera interlude
On August 2, 2000, following several months of negotiations, Santa Cruz Operation announced that it would sell its Server Software and Services Divisions to Caldera Systems. The sale came after a series of good financial results had gone sour for SCO as 1999 turned into 2000. As a result, the conference held later that month was called not SCO Forum 2000 but just Forum 2000. Both Doug Michels and Ransom Love, CEO of Caldera Systems, gave keynote addresses.
By August 2001, Caldera International, the name of the merged company, was suffering both from the effects of the dot-com bust and from a lengthy and difficult acquisition process of SCO that had alienated some longtime SCO customers and partners.
Now, for the first time, Forum was explicitly held under the Caldera name. Caldera CEO Ransom Love said he hoped that the event would do even better than before and that he admired the history of the event: "It is unique in the industry because it is not a trade show and people do not go there to be sold something; they go to interact. There are far too few events like that."
Nevertheless, attendance at Caldera Forum 2001 was less than half that of the previous year. Love said at the event, "you have to get through the storm to get through to the beautiful day."
SCO Group in Vegas years
Caldera International continued to encounter significant financial struggles, made worse by the effects of the early 2000s recession.
As a cost-saving measure, in May 2002 the company indicated that the world-wide Forum conference in Santa Cruz would be dropped; instead, there would be smaller events around the world, including one at a different location in the United States.
In June 2002, Caldera International changed management, with Darl McBride taking over as CEO from Ransom Love.
In July 2002 the annual Forum conference was renamed for that year to Caldera GeoFORUM, and its location was moved to an environment that could not have been more different from the redwoods of Santa Cruz – the Las Vegas Strip, at the MGM Grand.
Then during the opening keynote address of the conference, on August 26, 2002, it was announced that Caldera was changing its name back to SCO, in the form of the new name The SCO Group. This reflected recognition of the reality that almost all of the company's revenue was coming from SCO Unix, not the Linux products that had come from Caldera, and that resellers were not making the switch to Linux.
McBride made the announcement in flashy style; as Linux Journal described, "Using a high-tech multimedia show, the Caldera image was shattered into shards by the new SCO Group logo, which is pretty much the same as the old SCO logo."
The announcement was met with a standing ovation from the Forum audience, almost all of whom were longtime SCO resellers. (Some former employees of the Santa Cruz Operation, however, grew to resent the rebirth of the SCO name and said that "it was no longer our SCO." Some industry observers expressed the same lament.) Some new initiatives were announced, such as the SCObiz collaboration with Vista.com, where that company's CEO John Wall gave a keynote showing how SCObiz would give a Web-based e-commerce capability to older SCO-based applications in the small-to-medium-business segment.
By the time the Las Vegas Forum 2003 rolled around, McBride had led the SCO Group in a very different direction, issuing proclamations and lawsuits based upon a belief that SCO Unix intellectual property had been incorporated into Linux in an unlawful and uncompensated manner, and halting sales of the company's own Linux product. The SCO–Linux disputes were fully underway and the SCO Group was mired in controversy.
eWeek magazine reported that in response to pressure from the open source community and Linux vendors, Intel withdrew its sponsorship of Forum 2003 and HP decided not to give a partner keynote address. Nonetheless, HP did sponsor the welcome reception at the hotel, which eWeek said was well attended.
During the opening keynote, held on August 17, 2003, and accompanied by James Bond music (Vegas Forums tended to use Hollywood or Vegas motifs in their opening sessions), McBride, vice president Chris Sontag, and a representative from the law firm Boies Schiller showed what they said were clear examples of SCO's protected Unix code being found in Linux. Despite the prominence of the legal situation, there was also emphasis at this Forum on SCO products and their roadmaps for further development and features.
SCO continued to be the subject of intensely hostile feelings from the open source and Linux community, with the Groklaw website leading the way. SCO would soon become, as Businessweek headlined, "The Most Hated Company In Tech".
SCO Forum 2004, themed "The Power of UNIX", explicitly emphasized the history of SCO Unix and ongoing product development work over the Linux matters.
It attracted some 550 attendees.
McBride said, "It's a quiet show and boring [perhaps for the media] in a good way. It shows we're committed to Unix and we're not just a litigation shop." A new program, SCO Marketplace, was unveiled, that would let developers bid on new development efforts of software that could be used on SCO Unix. When still faced with attention regarding legal issues, McBride said, "when people say we're only about litigation, it really bugs me. We have strong engineering talent, and 95 percent of our company is focused on building strong products, not on intellectual property litigation."
By SCO Forum 2005, the company said that attendance was 374, with invitations going out to only those from North America, and within that, only VARs and distributors and not end-customers. Events would be held in the rest of the world for partners elsewhere.
SCO Forum 2006 saw a move to The Mirage in Las Vegas. It also saw the return of Doug Michels to the SCO Forum stage, with McBride presenting him an award for lifetime achievement. But the main point of emphasis during this Forum was SCO's initiatives in the mobile app and mobile backend as a service spaces, as represented by its Me Inc. mobile software services and EdgeClick mobile application development platform. McBride said, "Today is the coming out party for Me Inc. Over the next few years, we want to be a leading provider of mobile application software to the marketplace. ... This is a seminal moment for us." The Forum 2006 schedule, subtitled "Mobility Everywhere", included some sixteen different breakout and training sessions related to Me Inc. and EdgeClick. One such new product, HipCheck, which allowed the remote monitoring of business-critical servers on Palm Treo smartphones, was given its debut announcement and demonstration at Forum.
As it happened, the mobility initiatives found difficulty gaining traction. For 2007, Forum was renamed to SCO Tec Forum and shortened in length to two full days, with technical breakout sessions replacing most of the keynotes and business sessions.
Just three days after Tec Forum 2007 wrapped, SCO suffered an adverse ruling in the SCO v. Novell case that rejected SCO's belief in its ownership of Unix-related copyrights and undermined much of the rest of its legal position. The following month, SCO Group filed a voluntary petition for reorganization under Chapter 11 of the United States Bankruptcy Code.
The 2008 edition of SCO Tec Forum was first planned to take place in the spring, then in August as usual, and then finally took place during October 19–21, 2008, at the Luxor. This was the 22nd consecutive year of Forum;
some attendees had continued to come to the conference year after year, as illustrated by the aforementioned SCO-focused reseller organization iXorg.
But the company's financial situation continued to deteriorate and this was to be the last SCO Forum.
List of Forums
References
External links
SCO Forum 1999 description and presentations – Internet Archive (partly intact)
Forum 2000 webcasts and presentations – Internet Archive (partly intact)
Caldera Forum 2001 description and presentations – Internet Archive (partly intact)
Caldera/SCO GeoFORUM 2002 keynotes and other geographies – Internet Archive
SCO Forum 2003 wrap-up and presentations – Internet Archive
SCO Forum 2004 wrap-up and presentations – Internet Archive
SCO Forum 2005 presentations – Xinuos
SCO Forum 2006 presentations – Xinuos
SCO Tec Forum 2007 registration page and presentations – Xinuos
SCO Tec Forum 2008 registration page and presentations – Xinuos
Computer conferences
Conferences in the United States
Caldera (company)
Recurring events established in 1987
Recurring events disestablished in 2008
Unix history |
49982918 | https://en.wikipedia.org/wiki/Ministry%20of%20Telecom%20and%20Information%20Technology%20of%20the%20State%20of%20Palestine | Ministry of Telecom and Information Technology of the State of Palestine | The Ministry of Telecom and Information Technology of the State of Palestine is one of the government offices of the Palestinian administration. It was transformed from the Ministry of Telecom and Information Technology of the Palestinian National Authority following the November 29, 2012 United Nations vote to upgrade Palestine to non-member observer state status. As of 2014, the Ministry is headed by Dr. Allam Mousa.
References
External links
Communications ministries
1994 establishments in the Palestinian territories |
41273483 | https://en.wikipedia.org/wiki/Cortex%20Plus | Cortex Plus | The Cortex Plus System is a toolkit RPG system that evolved from Margaret Weis Productions, Ltd's Cortex System. It has been used for four published games and one published preview to date, and its design principles are collected in the Cortex Plus Hacker's Guide, a book of advice on how to create new games using Cortex Plus together with a list of new games, produced via Kickstarter. According to the Hacker's Guide there are three basic 'flavors' of Cortex Plus: Action, Drama, and Heroic.
Of the four games published using this system, Leverage: The Roleplaying Game was nominated for the 2011 Origins Award for best Role Playing Game, and Marvel Heroic Roleplaying won the 2013 award and the award for best support as well as the 2012 ENnie Award for Best Rules and runner up for Best Game.
System
Unlike the Cortex System, Cortex Plus is a roll-and-keep system in which you roll one die from each category and keep the two highest dice in your dice pool. What goes into your dice pool is whatever is considered important for the game being played; because different stories emphasize different things, the implementation of the system has differed for each game so far. All versions of Cortex Plus use standard polyhedral dice and the normal dice notation ranging from d4 (a 4-sided tetrahedral die) to d12 (a 12-sided dodecahedral die), and narratively notable features are given dice from this list, with d6 being the default.
In all cases Cortex Plus uses dice pools ranging from d4 (terrible) to d12 (the best possible). Every die in your pool that rolls a natural 1 (called an "Opportunity") doesn't count toward your total and causes some form of negative consequence: depending on the game, it either creates a complication for the characters to overcome or adds to the Doom Pool that provides the Game Master with resources within the scene. Adding a d4 to your dice pool is considered a penalty because it is unlikely to roll one of your best two die results and has such a high chance of rolling a 1.
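As an illustration of these mechanics, the following minimal Python sketch (not official rules text; the pool composition and function name are our own) simulates one roll, discarding 1s as Opportunities and summing the two highest remaining results:

    import random

    def roll_pool(pool):
        """Roll a Cortex Plus pool given as a list of die sizes,
        e.g. [8, 10, 4] for d8 + d10 + d4. Returns the total of the
        two highest non-1 results plus a count of Opportunities."""
        results = [random.randint(1, sides) for sides in pool]
        opportunities = results.count(1)   # each 1 triggers a complication
        kept = sorted(r for r in results if r != 1)[-2:]  # keep the two highest
        return sum(kept), opportunities

    # A d8 attribute, a d10 skill, and a hindering d4 distinction:
    total, opps = roll_pool([8, 10, 4])

Running such a simulation many times makes the d4 penalty concrete: it rolls a 1 a quarter of the time and rarely places among the two highest results.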
Most versions of Cortex Plus (other than the Smallville Roleplaying Game) give the character a set of three distinctions that they can choose to add to their dice pool either as a d8 to assist them or as a d4 to hinder them; choosing the d4 option earns them a plot point.
Equipment is normally handled by enabling the character to do things (you can't shoot someone without a gun), but where it is especially notable it is given a die rating and added to the dice pool as an asset. Things that get in the way are treated similarly and are added to the dice pool as complications.
Cortex Plus also uses Plot Points, the normal uses of which are to establish something as notable (turning it into an asset), to power a stunt or ability, to add a die to your roll, or to keep an additional die after you have rolled.
Action
Cortex Plus Action is used in both Leverage: The Roleplaying Game and the Firefly Role-Playing Game, and is the most traditional of the three. The important factors are an attribute (Leverage uses the six from the Cortex System, Firefly uses Physical/Mental/Social) and a Skill (the Leverage RPG uses Grifter, Hacker, Hitter, Mastermind, and Thief, which are based on the TV show, while the Firefly RPG has a list of 22). Also in the dice pool can be a distinction, an asset, and a complication affecting the opposition.
The Leverage RPG, as a heist or con game, allows characters to spend a plot point to establish flashback scenes to explain what is really going on and why things are not as bad as they appear.
Drama
Cortex Plus Drama is used in the Smallville Roleplaying Game, which was the first Cortex Plus game. It has the most complicated character generation; the players start by drawing a relationship map that step by step ties the player characters to each other and to the gameworld. Instead of attributes and skills, the important factors are Relationships with other characters and Values (Duty, Glory, Justice, Love, Power, and Truth in Smallville); each relationship and value has a statement attached, and the dice can be used when the character acts in line with the statement. A character can also challenge a value statement and possibly reject it, allowing them to use it three times in the dice pool on that roll, but using a smaller die for the rest of the session every time they want to invoke that value or relationship.
When a character in Smallville loses a contest and doesn't give in, they take stress (Insecure, Afraid, Angry, Exhausted, and Injured), which can be used against them, or which they can use themselves at the cost of increasing the amount of stress they've taken; a character becomes unable to act beyond d12 stress.
Heroic
Cortex Plus Heroic was written for Marvel Heroic Roleplaying and has the largest dice pools. In addition to distinctions, assets, complications, and stress (as used in Drama, with stress in Marvel Heroic being Physical, Mental, and Emotional), Cortex Plus Heroic characters have an Affiliation (Solo, Buddy, Team), at least one power set and possibly more, and some Specialities (which represent a mix of skills, resources, and contacts). The power sets are further detailed with SFX and Limits so they more closely represent the vision of the character, and character creation is largely freeform. Unlike other Cortex Plus games, the default is to keep three dice: two are summed for the total, while the third, known as the Effect Die, is not added to the total and matters only for its size. Marvel Heroic Roleplaying commonly uses large dice pools drawn from seven separate categories (and potentially more than one power set); a sketch of one legal selection from such a pool follows these lists. The dice in Marvel Heroic are:
From the character:
Distinction, which may be invoked either positively for a d8 or negatively for a d4 (more chance of producing a complication, little chance of being one of the best two dice)
Affiliation (Solo/Buddy/Team). This reflects who the character is with and the nature of the scene. Each character has one at d6, one at d8, and one at d10.
Power Sets - representing the character's powers. To better represent a character's abilities these are tweaked with SFX and Limits. One die per power set is used by default, but SFX can increase this.
Speciality, which represents a mix of character skill, knowledge, and connections in their specialty field. Experts are rated d8, and Masters d10. A d10 speciality may be replaced by 2d8 or 3d6, and a d8 speciality may be replaced by 2d6.
There are also three potential dice from the situation:
Asset - a situational advantage or an object that's been created earlier.
Resource - created in a "transition scene", a resource represents a character building something with their skill or calling on their contacts in advance.
Their opponent's state - opponents may have taken stress (i.e. harm) or complications (i.e. temporary disadvantages) from previous actions, and one die for this may be included.
Extra dice may be included through Plot Points, and characters seldom use all those dice.
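Continuing the earlier Python example, the selection sketch referenced above makes one legal choice from a Marvel Heroic pool. It is illustrative only: the greedy rule shown (two highest results for the total, largest remaining die size for the effect) is just one option, since players may deliberately trade total for a larger Effect Die, and the d4 fallback when no die remains is our reading of the rules.

    import random

    def marvel_roll(pool):
        """Roll a Marvel Heroic pool given as a list of die sizes and
        make one legal selection. Dice that roll 1 are Opportunities
        and cannot be kept at all."""
        rolled = [(random.randint(1, sides), sides) for sides in pool]
        usable = [d for d in rolled if d[0] != 1]
        opportunities = len(rolled) - len(usable)
        usable.sort(key=lambda d: d[0], reverse=True)
        total = sum(result for result, _ in usable[:2])   # two highest results
        # The Effect Die is kept for its SIZE, not its rolled number;
        # fall back to a d4 if nothing remains (assumption).
        effect = max((sides for _, sides in usable[2:]), default=4)
        return total, effect, opportunities

    # e.g. d10 affiliation, d8 distinction, d10 power, d8 speciality:
    total, effect, opps = marvel_roll([10, 8, 10, 8])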
Published games using Cortex Plus
Smallville Roleplaying Game
Leverage: The Roleplaying Game
Marvel Heroic Roleplaying
Dragon Brigade Roleplaying Game
Firefly Role-Playing Game
Reception
Reception to Cortex Plus games has been good, with Marvel Heroic Roleplaying winning best rules at the 2012 ENnie Awards. A common theme in reviews is that there are no procedural elements, with rolls instead based on whatever is considered relevant to the situation, and that 1s add narrative complications to the results that would not normally be expected in other role-playing games. Another theme picked up on is the way the system allows balance between characters such as Wolverine and Captain America while having enough meat to distinguish them.
References
Margaret Weis Productions games
Role-playing game systems |
11115658 | https://en.wikipedia.org/wiki/Line%20discipline | Line discipline | A line discipline (LDISC) is a layer in the terminal subsystem in some Unix-like systems. The terminal subsystem consists of three layers: the upper layer to provide the character device interface, the lower hardware driver to communicate with the hardware or pseudo terminal, and the middle line discipline to implement behavior common to terminal devices.
The line discipline glues the low level device driver code with the high level generic interface routines (such as read(2), write(2) and ioctl(2)), and is responsible for implementing the semantics associated with the device. The policy is separated from the device driver so that the same serial hardware driver can be used by devices that require different data handling.
For example, the standard line discipline processes the data it receives from the hardware driver and from applications writing to the device according to the requirements of a terminal on a Unix-like system. On input, it handles special characters such as the interrupt character (typically Control-C) and the erase and kill characters (typically backspace or delete, and Control-U, respectively) and, on output, it replaces all the LF characters with a CR/LF sequence.
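These behaviors of the standard discipline can be toggled from user space through the termios interface on a per-device basis. The following Python sketch is a simplification (Unix-only, and it assumes standard input is a terminal): it switches off the canonical editing, interrupt handling, and LF-to-CR/LF mapping described above, then restores them:

    import sys, termios

    fd = sys.stdin.fileno()
    saved = termios.tcgetattr(fd)          # remember the current settings
    attrs = termios.tcgetattr(fd)          # [iflag, oflag, cflag, lflag, ...]

    attrs[3] &= ~(termios.ICANON | termios.ISIG)  # lflag: no erase/kill editing, no Ctrl-C signal
    attrs[1] &= ~termios.ONLCR                    # oflag: stop mapping LF to CR/LF on output
    termios.tcsetattr(fd, termios.TCSANOW, attrs) # discipline now passes data through nearly raw

    # ... read unprocessed input here ...

    termios.tcsetattr(fd, termios.TCSANOW, saved) # restore normal processing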
A serial port could also be used for a dial-up Internet connection using a serial modem and PPP. In this case, a PPP line discipline would be used; it would accumulate input data from the serial line into PPP input packets, delivering them to the networking stack rather than to the character device, and would transmit packets delivered to it by the networking stack on the serial line.
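This switch of disciplines is what a PPP daemon performs on the serial device before traffic starts. As a concrete sketch, on Linux the change is made with the TIOCSETD ioctl; the Python below is illustrative only (the device path is a placeholder, the numeric constants are taken from the Linux headers, and appropriate privileges are required):

    import fcntl, os, struct

    TIOCSETD = 0x5423        # Linux "set line discipline" ioctl number
    N_TTY, N_PPP = 0, 3      # discipline numbers from the Linux headers

    fd = os.open("/dev/ttyS0", os.O_RDWR | os.O_NOCTTY)   # placeholder device
    fcntl.ioctl(fd, TIOCSETD, struct.pack("i", N_PPP))    # attach the PPP discipline
    # ... serial data is now framed into PPP packets for the network stack ...
    fcntl.ioctl(fd, TIOCSETD, struct.pack("i", N_TTY))    # restore the default discipline
    os.close(fd)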
Some Unix-like systems use STREAMS to implement line disciplines.
References
Computer terminals
Unix |
37920930 | https://en.wikipedia.org/wiki/Thirty%20Flights%20of%20Loving | Thirty Flights of Loving | Thirty Flights of Loving is a first-person adventure video game developed by Brendon Chung's indie video game studio, Blendo Games. It was released in August 2012 for Microsoft Windows, and in November 2012 for OS X. The game employs a modified version of id Software's 1997-era id Tech 2 engine—originally used for Quake 2—and incorporates music composed by Idle Thumbs member Chris Remo. It follows three people as they prepare for an alcohol heist and the aftermath of the operation.
The game is a non-direct sequel to Gravity Bone (2008) and features the same main character—an unnamed spy. It was developed as part of the Kickstarter campaign for the revival of the Idle Thumbs podcast and included a free copy of its predecessor. Thirty Flights of Loving received generally favorable reviews from video game media outlets, scoring 88 out of 100 on aggregate website Metacritic. A follow-up, Quadrilateral Cowboy, was released on July 25, 2016.
Gameplay
Thirty Flights of Loving is a first-person adventure video game that is estimated to take about 15 minutes on average to complete. Using the WASD keys and mouse, the player controls the main character, an unnamed spy who participates in an alcohol-smuggling operation. The player works alongside non-playable characters Anita, a demolitions expert, and Borges, a forger. The game follows the group as they prepare for a heist and experience its aftermath. The robbery is omitted from the game, although it is revealed that it went wrong.
Unlike Gravity Bone, Thirty Flights of Loving employs non-linear storytelling, forcing the player to piece together the narrative. During gameplay, objectives and guidance are provided through the player's interactions with objects. The player has little control over the game mechanics and is only able to move freely and pick up objects as needed to progress. Several optional actions, such as drinking alcohol, are available at several stages of the game.
Story
Thirty Flights of Loving begins with the player walking through a small corridor where individual gameplay elements such as movement and key allocations are explained. After walking through a bar and several more corridors, Anita and Borges are introduced. The three characters then depart on a plane. A smash cut skips the narrative forward to a scene with Anita and Borges lying shot in a room full of crates. The player character lifts Borges and takes him outside to what looks to be an airport. The player is then taken to a dark room with Anita sitting on a chair, peeling and eating oranges. After walking through another corridor, Anita, Borges, and the player join a wedding.
Anita and the player get drunk at a table while the rest of the characters start dancing and flying across the room. The player is then taken again to the room where Anita was peeling oranges, and then back to the room where both she and Borges were lying shot. The player is then shown leaving the airport carrying Borges on a luggage cart. They arrive at a small area where the gunfight sequence takes place, followed by the motorcycle ride sequence, which ends with a crash that leads the player into a museum. In this area, there are several plaques showing the game's name and credits. The player leaves the area and goes into a new one where Bernoulli's principle about low and high air pressures is explained. Then the player is again moved to the motorcycle sequence, where the game ends.
Development
Thirty Flights of Loving was developed by Brendon Chung's video game studio Blendo Games. Chung, who worked as a level designer for Pandemic Studios, has contributed to the development of Full Spectrum Warrior and Lord of the Rings: Conquest. Thirty Flights of Loving was created using a modified version of KMQuake II, a port of id Software's id Tech 2, the graphics engine for Quake 2. It incorporates a gameplay enhancement add-on named Lazarus, developed by David Hyde and Mad Dog. Chung acknowledged that although he has worked with newer, "powerful and flexible" engines, he preferred the older engine because it was released as an open-source platform, "so you can redistribute it for free." The source code of Thirty Flights of Loving itself has been released under version 2 of the GNU General Public License, making it free software.
The game was first conceived as a prototype to Gravity Bone, and was scrapped because it was "too dialogue heavy." However, Chung revived the idea after being contacted by Idle Thumbs to develop a game for their Kickstarter campaign. The main development phase, in which content creation took place, was finished within three months. Several more months were spent polishing the game and fixing software bugs. Chung brought multiple existing assets from Gravity Bone to develop Thirty Flights of Loving, and used a diverse set of tools to create the elements of the game. Blender was picked for the creation of models, while Audacity and Adobe Photoshop were used for audio and texture work. Another tool, GtkRadiant, was used to create the game's levels.
Chung designed the Thirty Flights of Loving environment as a way to present the criminal nature of the group. He intentionally avoided the use of voice-overs, and instead modeled the environment to bridge "the disconnect between the player's knowledge and the player's character's knowledge." The characters Anita and Borges were to be introduced using dialogue, but this was removed. However, montages were later added after Idle Thumbs' crew expressed concerns that the characters' relationships were unclear. Chung included a system to automate the generation of non-playable characters to replace the process of manually scripting every person in the game. He explained that although it allows characters to "randomly wander near waypoints," the software is "occasionally glitchy and behaves badly around staircases." This automation code was originally developed for a surveillance game prototype "that never panned out."
A first-person meal simulator was designed for Thirty Flights of Loving. The sequence included the main characters "enjoy[ing] street noodles." However, the idea was scrapped and replaced with the motorcycle ride featured in the final version. The gunfight scene portrayed in the game was supposed to have a "musical rhythm," inspired by the film Koyaanisqatsi and Baraka. The last level of the game is modeled from the French National Museum of Natural History. Chung explained that when developing levels, he first spends time researching and "learning how things work." He elaborated that researching is important in "how it gives specificity and grounding" to a game. Thirty Flights of Loving is the seventh "Citizen Abel" game developed by Chung. The first two games were coded in 1999, while the following three were written between 2000 and 2004. The sixth game in the series, Gravity Bone (2008), became the first to be published. On the Tone Control podcast, he spoke about how every game he has produced, including Thirty Flights of Loving, takes place in the same shared universe.
Thirty Flights of Loving includes references and Easter eggs, as did Gravity Bone. Films such as Three Days of the Condor and The Conversation, film directors Steven Soderbergh and Quentin Tarantino, games such as Zork and Saints Row: The Third, and animated shows like Animaniacs and TaleSpin are referenced in the campaign. Unlike most of Chung's previous games, Thirty Flights of Loving was not framed around a certain musical composition. It incorporates music composed by Idle Thumbs member Chris Remo, while additional audio was provided by Jared Emerson-Johnson and A.J. Locascio. It makes use of Soundsnap's sound library.
Release
Thirty Flights of Loving was announced in February 2012 as part of the Kickstarter campaign for Idle Thumbs' podcast. The Idle Thumbs team talked to Chung about a possible sequel to Gravity Bone, which was offered as one of the rewards of their Kickstarter campaign. Those who supported the campaign received Thirty Flights of Loving before its official release in August 2012. They also gained access to an exclusive "Goldblum mode" that was not part of the general release. It replaced the character model with ones resembling actor Jeff Goldblum. The game, alongside a free copy of Gravity Bone, was made available to early supporters in July 2012 and to the general public a month later via Steam. A Mac OS X release followed in November 2012.
Reception
Thirty Flights of Loving received generally favorable reviews upon release. On Metacritic, which assigns a normalized rating out of 100 to reviews from mainstream critics, the game received an average score of 88 out of 100, based on 10 reviews. Destructoid's Patrick Hancock awarded the game 9.5 out of 10, stating that "you'll never look at linear storytelling the same way again."
GameSpot's Carolyn Petit wrote that "the pleasure of Thirty Flights of Loving emerges from the things left unshown", allowing the player to infer and imagine the events, such as the heist itself, that are not otherwise shown. Graham Smith of PC Gamer extolled the minimalist storytelling, asserting that Thirty Flights of Loving "tells a better story in 13 minutes than most games do in 13 hours". Mark Brown from Wired UK classified the game as a "brassy, super-short, cubic heist drama," and stated that Chung "spins a memorable yarn, delivers it with confidence and panache [...] with a 15-year-old engine, without voice acting, in 20 minutes."
IGN's Nathan Meunier said the game "gets off to a fascinating start before completely throwing any and all expectations you might form during its first few minutes into the wood chipper." British video game magazine Edge found Thirty Flights of Loving to be "an intriguing psychological thriller that feels like Wes Anderson taking on Hitchcock." The magazine added that the game had a "wonderfully ambiguous" story, crafted by replacing dialogue with "artful framing and shrewd gestures, and booting out cutscenes in favour of prickly jump-cuts." Thirty Flights of Loving was a Narrative Award finalist at the 2013 Independent Games Festival. However, Richard Hofmeier's Cart Life (2011) became the winner.
Sequel
A follow-up to Thirty Flights of Loving, Quadrilateral Cowboy, was developed by Chung. The game takes place in the same universe as Gravity Bone and Thirty Flights of Loving but is not a direct sequel. It follows a hacker who oversees agents who infiltrate buildings and steal documents. Unlike its predecessors, Quadrilateral Cowboy uses id Software's id Tech 4 engine—originally used for Doom 3. According to Chung, the new engine provides "a lot more modern functionality" than the earlier engine used in the first two games.
References
External links
Official website
2012 video games
Kickstarter-funded video games
MacOS games
Single-player video games
Video game sequels
Video games developed in the United States
Video games scored by Chris Remo
Video games with commentaries
Windows games
Exploration video games
Commercial video games with freely available source code |
429909 | https://en.wikipedia.org/wiki/Internet%20culture | Internet culture | Internet culture, or cyberculture, is the culture based on the many manifestations of the use of computer networks for communication, entertainment, business, and recreation. Some features of Internet culture include online communities, gaming, social media, and more, as well as topics related to identity and privacy. Due to the internet's large-scale use and adoption, the impacts of internet culture on society and on non-digital cultures have been widespread. Additionally, due to the all-encompassing nature of the internet and internet culture, different facets of internet culture are often studied individually rather than holistically; examples include social media, gaming, and specific communities.
The cultural history of the internet involves unusually rapid change. The internet evolved in parallel with rapid and sustained technological advances in computing and data communication, with also an increasing breadth of access as the cost structure declined over multiple orders of magnitude.
Each technological era spawned a distinct cultural response.
At its outset, digital culture tilted toward the Anglosphere. Due to computer technology's early reliance on textual coding systems suited mainly to the English language, Anglophone societies—followed by other societies with languages based on Latin script (mostly European)—enjoyed privileged access to digital culture from its early beginnings in the late 1960s, before globally multilingual software became ubiquitous in the 2010s. Additionally, it was not until the advent of inexpensive smartphones that internet culture began to close the societal wealth divide.
Psychologically, electronic and digital culture is engrossing for many participants, to such a degree that it sometimes seems to compete with physical reality. For many years the term "cyberspace" was synonymous with digital culture, having first appeared in fiction in the 1980s in the work of cyberpunk science fiction author William Gibson, notably in his 1984 novel Neuromancer. This work romanticized digital culture as an alternate world of an entirely different social order, with distinct limitations and possibilities. Excessive neglect of the traditional physical and social world in favour of internet culture became codified as a medical condition under the diagnosis of internet addiction disorder. Nevertheless, internet culture seems destined to plunge headlong into the metaverse, the avowed ambition of Facebook since its rebranding as Meta Platforms in October 2021.
Overview
Since the boundaries of cyberculture are difficult to define, the term is used flexibly, and its application to specific circumstances can be controversial. It generally refers at least to the cultures of virtual communities, but extends to a wide range of cultural issues relating to "cyber-topics", e.g. cybernetics, and the perceived or predicted cyborgization of the human body and human society itself. It can also embrace associated intellectual and cultural movements, such as cyborg theory and cyberpunk. The term often incorporates an implicit anticipation of the future.
The Oxford English Dictionary lists the earliest usage of the term "cyberculture" in 1963, when Alice Mary Hilton wrote the following, "In the era of cyberculture, all the plows pull themselves and the fried chickens fly right onto our plates."
This example, and all others up through 1995, are used to support the definition of cyberculture as "the social conditions brought about by automation and computerization." The American Heritage Dictionary broadens the sense in which "cyberculture" is used by defining it as "The culture arising from the use of computer networks, as for communication, entertainment, work, and business". However, both the OED and the American Heritage Dictionary fail to describe cyberculture as a culture within and among users of computer networks. This cyberculture may be a purely online culture or it may span both virtual and physical worlds. This is to say that cyberculture is a culture endemic to online communities; it is not just the culture that results from computer use, but culture that is directly mediated by the computer. Another way to envision cyberculture is as the electronically enabled linkage of like-minded, but potentially geographically disparate (or physically disabled and hence less mobile), persons.
Cyberculture is a wide social and cultural movement closely linked to advanced information science and information technology, their emergence, development and rise to social and cultural prominence between the 1960s and the 1990s. Cyberculture was influenced at its genesis by those early users of the internet, frequently including the architects of the original project. These individuals were often guided in their actions by the hacker ethic. While early cyberculture was based on a small cultural sample, and its ideals, the modern cyberculture is a much more diverse group of users and the ideals that they espouse.
Numerous specific concepts of cyberculture have been formulated by such authors as Lev Manovich, Arturo Escobar and Fred Forest. However, most of these concepts concentrate only on certain aspects, and they do not cover these in great detail. Some authors, aiming at a more comprehensive understanding, distinguish between early and contemporary cyberculture (Jakub Macek), or between cyberculture as the cultural context of information technology and cyberculture (more specifically cyberculture studies) as "a particular approach to the study of the 'culture + technology' complex" (David Lister et al.).
Historical evolution
The cultural antecedent of digital culture was amateur radio (commonly known as ham radio), which at this point was electronic, but not yet digital. By connecting over great distances, ham operators were able to form a distinct cultural community with a strong technocratic foundation, as the radio gear involved was finicky and prone to failure. The area that later became Silicon Valley, where much of modern Internet technology originates, had been an early locus of radio engineering. Alongside the original mandate for robustness and resiliency, the renegade spirit of the early ham radio community later infused the cultural values of decentralization and near-total rejection of regulation and political control that characterized the internet's original growth era, with strong undercurrents of the Wild West spirit of the American frontier.
At its inception in the early 1970s as part of ARPANET, digital networks were small, institutional, arcane, and slow, which confined the majority of use to the exchange of textual information, such as interpersonal messages and source code. Access to these networks was largely limited to a technological elite based at a small number of prestigious universities; the original American network connected one computer in Utah with three in California.
Text on these digital networks was usually encoded in the ASCII character set, which was minimalistic even for established English typography, barely suited to other European languages sharing a Latin script (but with an additional requirement to support accented characters), and entirely unsuitable to any language not based on a Latin script, such as Mandarin, Arabic, or Hindi.
Interactive use was discouraged except for high value activities. Hence a store and forward architecture was employed for many message systems, functioning more like a post office than modern instant messaging; however, by the standards of postal mail, the system (when it worked) was stunningly fast and cheap. Among the heaviest users were those actively involved in advancing the technology, most of whom implicitly shared much the same base of arcane knowledge, effectively forming a technological priesthood.
The origins of social media predate the Internet proper. The first bulletin board system was created in 1978, GEnie was created by General Electric in 1985, the mailing list Listserv appeared in 1986, and Internet Relay Chat was created in 1988. The first official social media site, SixDegrees launched in 1997.
In the 1980s, the network grew to encompass most universities and many corporations, especially those involved with technology, including heavy but segregated participation within the American military–industrial complex. Use of interactivity grew, and the user base became less dominated by programmers, computer scientists and hawkish industrialists, but it remained largely an academic culture centered around institutions of higher learning. It was observed that each September, with an intake of new students, standards of productive discourse would plummet until the established user base brought the influx up to speed on cultural etiquette.
Commercial internet service providers (ISPs) emerged in 1989 in the United States and Australia, opening the door for public participation. Soon the network was no longer dominated by academic culture, and the term eternal September, initially referring to September 1993, was coined as internet slang for the endless intake of cultural newbies.
Commercial use became established alongside academic and professional use, beginning with a sharp rise in unsolicited commercial e-mail, commonly called spam. Around this same time, the network transitioned to support the burgeoning World Wide Web (largely erected on the emerging exchange cultures of free software and open source). Multimedia formats such as audio, graphics, and video became commonplace and began to displace plain text, though multimedia remained painfully slow for dial-up users. The internet also began to internationalize, supporting most of the world's major languages, but support for many languages remained patchy and incomplete into the 2010s.
With the arrival of broadband access, file-sharing services grew rapidly, especially for digital audio (with a prevalence of bootlegged commercial music), beginning with Napster in 1999 and similar projects which effectively catered to music enthusiasts, especially teenagers and young adults, and soon became established as a prototype for rapid evolution into modern social media. Alongside ongoing challenges to traditional norms of intellectual property, the business models of many of the largest internet corporations evolved into what Shoshana Zuboff terms surveillance capitalism. Not only is social media a novel form of social culture, but it is also a novel form of economic culture in which sharing is frictionless but personal privacy has become a scarce good.
In 1998 came the Hampster Dance, the first successful internet meme.
In 1999, Aaron Peckham created Urban Dictionary, an online, crowdsourced dictionary of slang. He had kept the server for Urban Dictionary under his bed.
In 2000, there was great demand for images of a dress that Jennifer Lopez wore. As a result, Google's co-founders created Google Images.
In 2001, Wikipedia was created.
In 2005, YouTube was created because people wanted to find video of Janet Jackson's wardrobe malfunction at the Super Bowl in 2004. YouTube was later acquired by Google in 2006.
In 2009, Bitcoin was created.
Since the 2010s, there has been an enormous rise in woke culture, which heavily emphasizes the importance of race and racism. This movement, having begun around 2014, has been called the "Great Awokening".
Since 2020, Internet culture has been affected by the COVID-19 pandemic.
Since 2021, there has been an unprecedented surge of interest in the concept of the metaverse. In particular, Facebook Inc. renamed itself to Meta Platforms in October 2021, amid the crisis of the Facebook Papers.
Manifestations
Manifestations of cyberculture include various human interactions mediated by computer networks. They can be activities, pursuits, games, places, and metaphors, and include a diverse base of applications. Some are supported by specialized software and others work on commonly accepted internet protocols. Examples include but are not limited to:
Blog
Bulletin Board Systems
Chat
Cybersex
E-Commerce
Games
Internet forums
Internet memes
Microblogs
Online videos
Peer-to-peer file sharing
Social networks
Usenet
Virtual worlds
Wikis
Social impact
The Internet is one of the most popular forms of communication today, with billions of people using it every day. It offers a wide variety of tools for information retrieval and communication between individuals, groups, and mass audiences, and it has created a culture in which many people participate, with countless positive and negative impacts.
Positive
The creation of the Internet has impacted society greatly, giving people the ability to communicate with others online, store information such as files and pictures, and help maintain government functions. As the Internet progressed, digital files, including audio, could be created and shared on it; it became one of the main sources of information, business, and entertainment, and it led to the creation of social media platforms such as Instagram, Twitter, Facebook, and Snapchat. Communicating with others has never been easier, allowing people to connect and interact with each other. The Internet helps maintain relationships by acting as a supplement to physical interactions with friends and family. People are also able to create forums and discuss different topics with each other, which can help form and build relationships and gives people the ability to express their own views freely. Social groups created on the Internet have also been connected to improving and maintaining health in general. Interacting with social groups online can help prevent and possibly treat depression. In response to the rising prevalence of mental health disorders, including anxiety and depression, a 2019 study by Christo El Morr and others demonstrated that York University students in Toronto were extremely interested in participating in an online mental health support community. The study mentions that many students prefer an anonymous online mental health community to a traditional in-person service, due to the social stigmatization of mental health disorders. Overall, online communication with others gives people the sense that they are wanted and welcomed into social groups.
Negative
With access to the Internet becoming easier, a substantial number of disadvantages have emerged. Addiction is a notable issue, as the internet is increasingly relied on for various everyday tasks. A range of different symptoms is connected to addiction, such as withdrawal, anxiety, and mood swings. Addiction to social media is very prevalent among adolescents, but the interaction they have with one another can be detrimental to their health. Rude comments on posts can lower individuals' self-esteem, making them feel unworthy, and may lead to depression. Social interaction online may substitute for face-to-face interactions for some people instead of acting as a supplement. This can negatively impact people's social skills and cause feelings of loneliness. People also risk being cyberbullied when using online applications. Cyberbullying may include harassment, video shaming, impersonation, and much more. A concept called cyberbullying theory is now used to describe the finding that children who use social networking more frequently are more likely to become victims of cyberbullying. Additionally, some evidence shows that too much internet use can stunt memory and attention development in children. The ease of access to information which the internet provides discourages information retention. However, the cognitive consequences are not yet fully known. The staggering amount of information available online can lead to feelings of information overload. Some effects of this phenomenon include reduced comprehension, decision making, and behavior control.
Qualities
First and foremost, cyberculture derives from traditional notions of culture, as the roots of the word imply. In non-cyberculture, it would be odd to speak of a single, monolithic culture. In cyberculture, by extension, searching for a single thing that is cyberculture would likely be problematic. The notion that there is a single, definable cyberculture likely stems from the complete dominance of early cyber territory by affluent North Americans. Writing by early proponents of cyberspace tends to reflect this assumption (see Howard Rheingold).
The ethnography of cyberspace is an important aspect of cyberculture that does not reflect a single unified culture. It "is not a monolithic or placeless 'cyberspace'; rather, it is numerous new technologies and capabilities, used by diverse people, in diverse real-world locations." It is malleable, perishable, and can be shaped by the vagaries of external forces on its users. For example, the laws of physical world governments, social norms, the architecture of cyberspace, and market forces shape the way cybercultures form and evolve. As with physical world cultures, cybercultures lend themselves to identification and study.
There are several qualities that cybercultures share that make them warrant the prefix "cyber-". Some of those qualities are that cyberculture:
Is a community mediated by ICTs.
Is culture "mediated by computer screens".
Relies heavily on the notion of information and knowledge exchange.
Depends on the ability to manipulate tools to a degree not present in other forms of culture (even artisan culture, e.g., a glass-blowing culture).
Allows vastly expanded weak ties and has been criticized for overly emphasizing the same (see Bowling Alone and other works).
Multiplies the number of eyeballs on a given problem, beyond that which would be possible using traditional means, given physical, geographic, and temporal constraints.
Is a "cognitive and social culture, not a geographic one".
Is "the product of like-minded people finding a common 'place' to interact."
Is inherently more "fragile" than traditional forms of community and culture (John C. Dvorak).
Thus, cyberculture can be generally defined as the set of technologies (material and intellectual), practices, attitudes, modes of thought, and values that developed with cyberspace.
Sharing has been argued to be an important quality for the Internet culture.
Identity – "Architectures of credibility"
Cyberculture, like culture in general, relies on establishing identity and credibility. However, in the absence of direct physical interaction, it could be argued that the process for such establishment is more difficult.
One early study, conducted from 1998 to 1999, found that participants viewed information obtained online as slightly more credible than information from magazines, radio, and television. However, the same study found that participants viewed information obtained from newspapers as the most credible, on average. Finally, this study found that individuals' rates of verification of information obtained online were low, and perhaps over-reported depending on the type of information.
How does cyberculture rely on and establish identity and credibility? The relationship is two-way: identity and credibility are both used to define the community in cyberspace and are themselves created within and by online communities.
In some senses, online credibility is established in much the same way that it is established in the offline world; however, since these are two separate worlds, it is not surprising that there are differences in their mechanisms and interactions of the markers found in each.
Following the model put forth by Lawrence Lessig in Code: Version 2.0, the architecture of a given online community may be the single most important factor regulating the establishment of credibility within online communities. Some factors may be:
Anonymous versus Known
Linked to Physical Identity versus Internet-based Identity Only
Unrated Commentary System versus Rated Commentary System
Positive Feedback-oriented versus Mixed Feedback (positive and negative) oriented
Moderated versus Unmoderated
Anonymous versus known
Many sites allow anonymous commentary, where the user-id attached to the comment is something like "guest" or "anonymous user". In an architecture that allows anonymous posting about other works, the only credibility being impacted is that of the product for sale, the original opinion expressed, the code written, the video, or other entity about which comments are made (e.g., a Slashdot post). Sites that require "known" postings can vary widely, from simply requiring some kind of name to be associated with the comment to requiring registration, wherein the identity of the registrant is visible to other readers of the comment. These "known" identities allow and even require commentators to be aware of their own credibility, based on the fact that other users will associate particular content and styles with their identity. By definition, then, all blog postings are "known" in that the blog exists in a consistently defined virtual location, which helps to establish an identity, around which credibility can gather. Conversely, anonymous postings inherently lack credibility. Note that a "known" identity need have nothing to do with a given identity in the physical world.
Linked to physical identity versus internet-based identity only
Architectures can require that physical identity be associated with commentary, as in Lessig's example of Counsel Connect. However, to require linkage to physical identity, many more steps must be taken (collecting and storing sensitive information about a user), and safeguards for that collected information must be established: users must have more trust in the sites collecting the information (yet another form of credibility). Irrespective of safeguards, as with Counsel Connect, using physical identities links credibility across the frames of the internet and real space, influencing the behaviors of those who contribute in those spaces. However, even purely internet-based identities have credibility. Just as Lessig describes linkage to a character or a particular online gaming environment, nothing inherently links a person or group to their internet-based persona, but credibility (similar to "characters") is "earned rather than bought, and because this takes time and (credibility is) not fungible, it becomes increasingly hard" to create a new persona.
Unrated commentary system versus rated commentary system
In some architectures, those who review or offer comments can, in turn, be rated by other users. This technique offers the ability to regulate the credibility of given authors by subjecting their comments to direct "quantifiable" approval ratings.
Positive feedback-oriented versus mixed feedback (positive and negative) oriented
Architectures can be oriented around positive feedback or a mix of both positive and negative feedback. While a particular user may be able to equate fewer stars with a "negative" rating, the semantic difference is potentially important. The ability to actively rate an entity negatively may violate laws or norms that are important in the jurisdiction in which the internet property is important. The more public a site, the more important this concern may be, as noted by Goldsmith & Wu regarding eBay.
Moderated versus unmoderated
Architectures can also be oriented to give editorial control to a group or individual. Many email lists operate in this fashion (e.g., Freecycle). In these situations, the architecture usually allows, but does not require, that contributions be moderated. Further, moderation may take two different forms: reactive or proactive. In the reactive mode, an editor removes posts, reviews, or content that is deemed offensive after it has been placed on the site or list. In the proactive mode, an editor must review all contributions before they are made public.
In a moderated setting, credibility is often given to the moderator. However, that credibility can be damaged by appearing to edit in a heavy-handed way, whether reactive or proactive (as experienced by digg.com). In an unmoderated setting, credibility lies with the contributors alone.
The very existence of an architecture allowing moderation may lend credibility to the forum being used (as in Howard Rheingold's examples from the WELL), or it may take away credibility (as in corporate web sites that post feedback, but edit it highly).
Cyberculture studies
The field of cyberculture studies examines the topics explained above, including the communities emerging within the networked spaces sustained by the use of modern technology. Students of cyberculture engage with political, philosophical, sociological, and psychological issues that arise from the networked interactions of human beings acting in various relations to information science and technology.
Donna Haraway, Sadie Plant, Manuel De Landa, Bruce Sterling, Kevin Kelly, Wolfgang Schirmacher, Pierre Levy, David Gunkel, Victor J. Vitanza, Gregory Ulmer, Charles D. Laughlin, and Jean Baudrillard are among the key theorists and critics who have produced relevant work that speaks to, or has influenced studies in, cyberculture.
Following the lead of Rob Kitchin, in his work Cyberspace: The World in the Wires, cyberculture might be viewed from different critical perspectives. These perspectives include futurism or techno-utopianism, technological determinism, social constructionism, postmodernism, poststructuralism, and feminist theory.
See also
Anonymous
Cicada 3301
Cyber law
Cyberdelic
Cyberpunk
Digitalism
Information ethics
Infosphere
Internet trolls
Netnography
Postliterate society
Technology and society
Techno-progressivism
Technocriticism
Technorealism
References
Further reading
David Gunkel (2001) Hacking Cyberspace, Westview Press,
Clemens Apprich (2017) Technotopia: A Media Genealogy of Net Cultures, Rowman & Littlefield International, London
Sandrine Baranski (2010) La musique en réseau, une musique de la complexité ?, Éditions universitaires européennes La musique en réseau
David J. Bell, Brian D Loader, Nicholas Pleace, Douglas Schuler (2004) Cyberculture: The Key Concepts, Routledge: London.
Donna Haraway (1991) Simians, Cyborgs and Women: The Reinvention of Nature, Routledge, New York, NY
Donna Haraway (1997) Modest Witness Second Millennium FemaleMan Meets OncoMouse, Routledge, New York, NY
N. Katherine Hayles (1999) How We Became Posthuman: Virtual Bodies in Cybernetics, Literature and Informatics, Chicago University Press, Chicago, IL
Jarzombek, Mark (2016) Digital Stockholm Syndrome in the Post-Ontological Age, University of Minnesota Press, Minneapolis, MN
Sherry Turkle (1997) Life on the Screen: Identity in the Age of the Internet, Simon & Schuster Inc, New York, NY
(retrieved February 4, 2009)
(retrieved February 4, 2009)
External links
First Monday, a peer reviewed journal on the internet
Institute of Network cultures
Resource Centre for Cyberculture Studies
Cyberspace
Computer folklore
Subcultures |
53358988 | https://en.wikipedia.org/wiki/Troy%3A%20Fall%20of%20a%20City | Troy: Fall of a City | Troy: Fall of a City is a British-American miniseries based on the Trojan War and the love affair between Paris and Helen. The show tells the story of the 10-year siege of Troy, set in the 13th century BC. It is not an adaptation of Homer's Iliad or Odyssey but rather an original take on the Greek myths, and covers some ground only alluded to in those works. The series was commissioned by BBC One and is a co-production between BBC One and Netflix, with BBC One airing the show from 17 February 2018 in the United Kingdom and Netflix streaming it internationally outside the UK.
Premise
The story of the 10-year siege of Troy by the Greeks is told after Paris, the young prince of Troy, and Helen of Sparta, wife of the Greek king Menelaus, fall in love and leave Sparta together for Troy.
Cast
Louis Hunter as Paris/Alexander
Bella Dayne as Helen of Troy
David Threlfall as Priam
Frances O'Connor as Hecuba
Tom Weston-Jones as Hector
Joseph Mawle as Odysseus
Chloe Pirrie as Andromache
Johnny Harris as Agamemnon
David Gyasi as Achilles
Jonas Armstrong as Menelaus
Alfred Enoch as Aeneas
Aimee-Ffion Edwards as Cassandra
Hakeem Kae-Kazim as Zeus
Chris Fisher as Deiphobus
Christiaan Schoombie as Troilus
Alex Lanipekun as Pandarus
Jonathan Pienaar as Litos
David Avery as Xanthias
Lex King as Aphrodite
Amy Louise Wilson as Briseis
Inge Beckmann as Hera
Shamilla Miller as Athena
Diarmaid Murtagh as Hermes
Thando Hopa as Artemis
Nina Milner as Penthesilea
Grace Hogg-Robinson as Hermione
Jovan Muthray as Kaidas
Lemogang Tsipa as Patroclus
Production
The series was filmed near Cape Town and consists of eight episodes. It is written by David Farr, Nancy Harris, Mika Watkins, and Joe Barton, and directed by Owen Harris and Mark Brozel.
Episodes
Changes from earlier adaptations
The show makes a number of alterations from the original Greek texts, as well as departures from earlier modern adaptations of the legend. For instance, it vilifies Menelaus, proposes a resolution to Briseis' captivity, and omits Aeneas' identity as the son of Aphrodite. The show also omits the final reconciliation between Achilles and Agamemnon from the Iliad, instead replacing this with Agamemnon resorting to "ignoble trickery". It also reimagines the circumstances of the Trojan Horse stratagem by making it filled with grain for the starving city, thus making the Trojans more likely to bring it in. More significantly, it also incorporates myths about the lead-up to the war and about the backgrounds of the major characters that are not found in the Iliad and are not normally included in most modern adaptations.
One of the show's most radical changes from earlier adaptations was its decision to include the Greek gods as human-like characters played by live actors who speak normal dialogue. While the gods are major figures in the original Homeric epics, ever since the mid-twentieth century, adaptations of the Trojan War have nearly always either removed the gods from the story or heavily reduced their role in it. Most twenty-first-century adaptations of the Trojan War, including the film Troy (2004), Alessandro Baricco's Iliad (2004), Margaret George's Helen of Troy (2006), and Alice Oswald's Memorial (2011) omit them entirely. The gods play an active role in the show for the first half of the series, but they recede into the background halfway through after Zeus orders them to stop intervening in the war. Zeus does give this command in the original Iliad, but it is almost immediately violated and eventually repealed entirely.
The most controversial change was the showrunners' decision to cast David Gyasi, a black actor of Ghanaian descent, as Achilles and Nigerian-born Hakeem Kae-Kazim, another black actor, as Zeus. These decisions resulted in almost immediate backlash, as both roles are traditionally portrayed by white actors and historically depicted as white. Tim Whitmarsh, a professor of Greek culture at the University of Cambridge, defended the production, arguing that historical Greeks were "unlikely to be uniformly pale-skinned" and that "dark-skinned North Africans existed" in ancient Greece, citing Memnon of Ethiopia as an example. Whitmarsh also stated that the question of whether ‘black people’ lived in Ancient Greece is itself flawed, as the ancient Greeks did not have a concept of "race". He added: "Our best estimate is that the Greeks would be a spectrum of hair colours and skin types in antiquity. I don't think there's any reason to doubt they were Mediterranean in skin type (lighter than some and darker than other Europeans), with a fair amount of inter-mixing." He also argued that there is no single, absolutely definitive version of the Trojan War story: "Homer's poems are merely one version and the Greeks themselves understood the story could change... There's never been an authentic retelling of the Iliad and the Odyssey – they've always been fluid texts. They're not designed to be set in stone and it's not blasphemous to change them."
Reception
Ratings
The show's ratings were a disappointment to its creators. Despite its Saturday night prime time slot and each episode's £2 million budget, the first episode aired to an audience of only 3.2 million viewers, while other shows in the same time slot have easily surpassed 5 million. By episode four, the viewership had dropped to only 1.6 million.
Critical reception
On review aggregator website Rotten Tomatoes, the series holds a critics' approval rating of 71% based on 15 reviews, and an average rating of 5.67/10, indicating generally favourable reviews. The critics' consensus reads: "Troy: Fall of a City never tries to reinvent the bronze wheel but succeeds in engaging audiences with both royal and divine intrigue, making for a highly enjoyable romp in the lost kingdom."
In a 16 February 2018 review for The Independent, Jacob Stolworthy praised the series for its willingness to alter the myths to explain or remove illogical aspects, as well as Louis Hunter's acting in his lead role as Paris. He also praised the show's elaborate costuming and "its lavish set design, production values and sci-fi soundtrack", commenting, "Viewers are immediately transported to ancient locations (in actuality beautiful Cape Town) in scene one and never relents. If it's escapism you're wanting, series link away." He criticized the first episode, however, for seeming "too tame" in light of the numerous early comparisons to HBO's Game of Thrones.
In an 18 February review for The Guardian, Euan Ferguson praised the show for its faithfulness to the original myths and for its strong portrayal of Helen, which he stated stood in stark contrast to the demure portrayal of the character by Diane Kruger in the 2004 Hollywood blockbuster film Troy, which had starred Brad Pitt as Achilles. Ferguson compared Troy: Fall of a City favourably to Game of Thrones and commented that the show will "hopefully expunge any residual memories of the 2004 Brad Pitt epic". He comments, "...older viewers can marvel at the silked lushness of the sea scenes while revelling in an old tale well told, younger ones can learn a little, about the names of the gods, and the fire-haunted dreams of Cassandra, and about mankind’s ancient rush towards betrayal."
A review from the same day by Rupert Hawksley for The Daily Telegraph tentatively praised Troy: Fall of a City for its more thoughtful, psychologically complicated interpretation of the Trojan War in sharp contrast to the 2004 film Troy, which Hawksley derided as a "shallow flex-fest". Nonetheless, Hawksley criticized the characters' occasionally stilted dialogue. He concluded, "Troy: Fall of a City might just be a fresh, psychologically knotty take on one of the greatest tales of them all." Also on the same day, Camilla Long, reviewing for The Sunday Times, panned the show, writing, "Troy: Fall of a City, a reworking of the oldest drive-by in history, is so far removed from anything Sophocles might recognize, they should have named it The Real Housewives of Ilium."
In a 24 February review for The Spectator, James Walton dismissed the script as "pitched somewhere between a particularly corny Hollywood epic and a play by Ernie Wise", while the dialogue was pronounced "staggeringly creaky and endlessly bathetic." Walton goes on: "‘How did you two get together?’ Paris asked Helen and Menelaus at the banquet given in his honour. [...] Impressively, the dialogue even managed to descend into cliché when nobody was actually using any words — as in the scene where the two defeated goddesses from the beauty contest went for one of those anguished bellows that causes all the nearby birds to fly theatrically from the trees."
A review from 28 February by Rachel Cooke for New Statesman panned the show, complaining that "all the men look as if they're in a Calvin Klein ad", that the dialogue is unrealistic, and that its portrayal of Helen and Paris's relationship is "tediously 21st century". Cooke concludes: "The dialogue is so richly silted with self-help banalities, we might as well be watching a Meghan and Harry biopic as a drama inspired by the greatest of all epic poems. There's also something exceedingly creepy about its retro, soft-porny direction (by Owen Harris); every time Helen takes a shower, you half expect her to whip out a Flake."
In an 8 April review for IndieWire, Steve Greene criticized the show for telling the same story that has been told thousands of times before and offering very little innovation. He concludes: "The result is a series more competent than compelling. The tiny diversions from the norm seem thrilling by comparison". He did, however, offer extended praise for David Gyasi's performance as Achilles and Joseph Mawle's performance as Odysseus and for the show's creators' unusual decision to include the gods in the show.
In an unreservedly positive review for Buffalo News on 26 May 2018, Randy Schiff praised the show for its pace and acting, commenting specifically on Hunter, Dayne, Gyasi, Mawle, O'Connor, and Threlfall's performances. He also lauded the portrayal of Helen as a "stately and intelligent" woman whose "deep desire for independence" is only satisfied once she goes to Troy, where women are valued just as much as men. He also expressed wonderment at the show's portrayal of the Greek deities, writing, "I found myself especially mesmerized by the show's eerie presentation of deities: here, spectacularly partisan goddesses strut across raging battlefields, while a world-weary Zeus (Hakeem Kae-Kazim) remains resolutely neutral amidst the chaos."
Andrea Tallarita defended the show in a 28 June 2018 review for PopMatters, arguing that the show's commercial failure may have been partially a result of the viewing audience's ignorance of the original classical texts, which the show treated with surprising fidelity. She generally praised the show, stating that it has "a dignified life of [its] own", but she criticized the decision to make the gods less involved for the second half of the series, as well as the fact that the show limited itself to only include a small number of especially important deities rather than the vast pantheon appearing in the Iliad, calling this decision "such a wasted opportunity".
References
External links
2018 British television series debuts
2018 British television series endings
2010s British drama television series
English-language television shows
English-language Netflix original programming
BBC television dramas
2010s British television miniseries
Trojan War films
Troy
Television series by Endemol
Television shows filmed in South Africa
Television series based on classical mythology
Cultural depictions of Helen of Troy
2010s Australian drama television series
Television series set in ancient Greece
Agamemnon |
48534076 | https://en.wikipedia.org/wiki/Input%20enhancement%20%28computer%20science%29 | Input enhancement (computer science) | In computer science, input enhancement is the principle that processing a given input to a problem and altering it in a specific way will increase runtime efficiency or space efficiency, or both. The altered input is usually stored and accessed to simplify the problem. By exploiting the structure and properties of the inputs, input enhancement creates various speed-ups in the efficiency of the algorithm.
Searching
Input enhancement when searching has been an essential component of the algorithm world for some time in computer science. The main idea behind this principle is that a search is much faster when the time is taken to create or sort a data structure of the given input before attempting to search for the element in that data structure.
Presorting
Presorting is the technique of sorting an input before attempting to search it. Because a sorting component is added to the runtime of the searching algorithm, rather than multiplied with it, it only competes for the slowest portion of the algorithm. Since the efficiency of algorithms is measured by the slowest component, the addition of the sorting component is negligible if the search is less efficient. Unfortunately, presorting is usually the slowest component of the algorithm. In contrast, a searching algorithm with a presort is almost always faster than one without.
The sorting portion of the algorithm processes the input of the problem before the searching portion of the algorithm is even reached. Having the elements of the input sorted in some sort of order makes the search trivial in practice. The simplest sorting algorithms – insertion sort, selection sort, and bubble sort – all have a worst-case runtime of O(n²), while the more advanced sorting algorithms – heapsort and merge sort, which have a worst-case runtime of O(n log n), and quicksort, which has a worst case of O(n²) but runs in O(n log n) on average – are considerably more efficient. A search algorithm that incorporates presorting inherits the big-O efficiency of the chosen sort.
A simple example of the benefits of presorting can be seen with an algorithm that checks an array for unique elements: If an array of n elements is given, return true if every element in the array is unique, otherwise return false. The pseudocode is presented below:
algorithm uniqueElementSearch(A[0...n]) is
for i := 0 to n – 1 do
for j := i + 1 to n do
if A[i] = A[j] then
return false
return true
Without a presort, in the worst case, this algorithm would require every element to be checked against every other element, with two possible outcomes: either there is no duplicate element in the array, or the last two elements checked are the duplicates. This results in an O(n²) efficiency.
Now compare this to a similar algorithm that utilizes presorting. This algorithm sorts the inputted array, and then checks each pair of elements for a duplicate. The pseudocode is presented below:
algorithm presortUniqueElementSearch(A[0...n]) is
sort(A[0...n])
for i := 0 to n – 1 do
if A[i] = A[i + 1] then
return false
return true
As previously stated, the least efficient part of this algorithm is the sorting of the array, which, if an efficient sort is selected, would run in O(n log n). But after the array is sorted, the array only needs to be traversed once, which would run in O(n). This results in an O(n log n) efficiency.
This simple example demonstrates what is possible with an input enhancement technique such as presorting. The algorithm went from quadratic runtime to linearithmic runtime, which results in speed-ups for large inputs.
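A short Python sketch of both versions, given here for illustration (the function names simply mirror the pseudocode above), makes the difference concrete:

def unique_element_search(a):
    # Brute force: compare every pair - O(n^2) in the worst case.
    n = len(a)
    for i in range(n - 1):
        for j in range(i + 1, n):
            if a[i] == a[j]:
                return False
    return True

def presort_unique_element_search(a):
    # Presort, then scan adjacent pairs - O(n log n) overall.
    a = sorted(a)               # dominant cost: O(n log n)
    for i in range(len(a) - 1):
        if a[i] == a[i + 1]:    # duplicates are now adjacent
            return False
    return True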
In trees
Creating data structures to more efficiently search through data is also a form of input enhancement. Placing data into a tree to store and search through inputs is another popular technique. Trees are used throughout computer science, and many different types of trees - binary search trees, AVL trees, red-black trees, and 2-3 trees, to name just a few - have been developed to properly store, access, and manipulate data while maintaining their structure. Trees are a principal data structure for dictionary implementation.
The benefits of putting data in a tree are great, especially if the data is being manipulated or repeatedly searched through. Binary search trees are the simplest, yet most common, type of tree for this implementation. The insertion, deletion, and searching of items in a tree are all worst-case O(n), but are most often executed in O(log n). This makes the repeated searching of elements even quicker for large inputs. There are many different types of binary search trees that work more efficiently and even self-balance upon addition and removal of items, like the AVL tree, which has a worst-case O(log n) for all searching, inserting, and deletion.
Taking the time to put the inputted data into such a structure will yield great speed-ups for repeated searching of elements, as opposed to searching through data that hasn't been enhanced.
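As a sketch of the idea, here is a minimal (unbalanced) binary search tree in Python, written for illustration only; a production implementation would use a self-balancing variant such as an AVL tree:

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    # Standard BST insertion: O(log n) on average, O(n) in the worst case.
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def search(root, key):
    # Walk down the tree: O(log n) on average.
    while root is not None and root.key != key:
        root = root.left if key < root.key else root.right
    return root is not None

# Input enhancement: pay the build cost once, then each search is fast.
root = None
for x in [8, 3, 10, 1, 6]:
    root = insert(root, x)
assert search(root, 6) and not search(root, 7)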
String matching
String matching is a complex issue in the world of programming now that search engines are at the forefront of the internet and the online world. When given a keyword or a string that needs to be searched for among millions upon millions of words, it would take an unbelievable amount of time to match the string character by character. Input enhancement allows an input to be altered to make this process much faster.
The brute-force algorithm for this problem would perform as follows:
When presented with a string of n characters, often called the key or pattern, the string is compared against the characters of a longer string of m characters, often called the text. If a character matches, the second character of the key is checked against the next text character, and so on, until the whole key matches or a subsequent character does not match, in which case the entire key shifts a single character. This continues until the key is found or the text is exhausted.
This algorithm is extremely inefficient. The maximum number of alignments to check is m-n+1, making the worst-case efficiency O(mn). In the average case, the maximum number of checks is never reached and only a few comparisons are executed per alignment, resulting in an average-case time efficiency of O(m+n).
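A Python rendering of this brute-force search, included for illustration (0-based indices; returns the index of the first match or -1):

def brute_force_match(key, text):
    # Slide the key one position at a time: O(mn) in the worst case.
    n, m = len(key), len(text)
    for i in range(m - n + 1):          # at most m - n + 1 alignments
        k = 0
        while k < n and key[k] == text[i + k]:
            k += 1
        if k == n:                      # every character of the key matched
            return i
    return -1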
Because of the need for more efficient string matching algorithms, several faster algorithms have been developed, most of them utilizing the idea of input enhancement. The key is preprocessed to gather information about what to look for in the text, and that information is stored in order to refer back to it when necessary. Accessing this information takes constant time and greatly increases the runtime efficiency of the algorithms that use it, most famously the Knuth-Morris-Pratt algorithm and the Boyer-Moore algorithm. These algorithms, for the most part, use the same methods to obtain their efficiency, with the main difference being how the key is processed.
Horspool's algorithm
As a demonstration of input enhancement in string matching, one should examine a simplified version of the Boyer-Moore algorithm, Horspool's algorithm. The algorithm starts by aligning the key with the start of the text m and comparing the text character under the key's last position; call this character x. There are four possible cases of what can happen next.
Case 1:
The first possible case is that the character x is not in the key. If this occurs, the entire key can be shifted the length of the key.
Case 2:
The second possible case is that the character x does not match the last character of the key, but x does occur in the key. If this occurs, the key is shifted to align the rightmost occurrence of the character x in the key with the text character x.
Case 3:
The third possible case is that the character x matches with the last character in the key but the other characters don't fully match the key and x doesn't occur again in the key. If this occurs, the entire key can be shifted the length of the key.
Case 4:
The fourth and last possible case is that character x matches the key but the other characters don't fully match the key and x does occur again in the key. If this occurs, the key is shifted to align the rightmost occurrence of the character x.
This may seem like it is no more efficient than the brute-force algorithm, since it has to check all of the characters on every check. However, this is not the case. Horspool's algorithm utilizes a shift table to store the number of characters the algorithm should shift if it runs into a specific character. The input is precomputed into a table with every possible character that can be encountered in the text. The shift size is computed with two options: one, if the character is not in the key (or appears only as its last character), then the shift size is n, the length of the key; or two, if the character appears in the key, then its shift value is the distance from its rightmost occurrence among the first n-1 characters of the key to the last position of the key. The algorithm for the shift table generator is given the key (K[0...n-1]) and an alphabet of s possible characters that could appear in the text as input, and returns the shift table (T[0...s-1]). Pseudocode for the shift table generator and an example of the shift table for the string 'POTATO' are displayed below:
algorithm shiftTableGenerator(K[0...n-1]) is
    for i := 0 to s – 1 do
        T[i] := n
    for j := 0 to n – 2 do
        T[K[j]] := n – 1 – j
    return T
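For the key 'POTATO' (n = 6), these rules yield the following shifts, computed here from the definition above; any character not shown shifts by the full key length:
    character:   P   O   T   A   (all others)
    shift:       5   4   1   2   6
T's shift is 1 rather than 3 because the rightmost of its occurrences among the first n-1 characters ('POTAT') wins.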
After the shift table is constructed in the input enhancement stage, the algorithm lines up the key and starts executing. The algorithm executes until a matching substring of text m is found or the key overlaps the last characters of text m. If the algorithm encounters a pair of characters that do not match, it accesses the table for the character's shift value and shifts accordingly. Horspool's algorithm takes the key (K[0...n-1]) and the text (M[0...m-1]) and outputs either the index of the matching substring or the string “Key not found” depending on the result. Pseudocode for Horspool's algorithm is presented below:
algorithm HorspoolsAlgorithm(K[0...n-1], M[0...m-1]) is
    T := shiftTableGenerator(K[0...n-1])
    i := n – 1
    while i ≤ m – 1 do
        k := 0
        while k ≤ n – 1 and K[n – 1 – k] = M[i – k] do
            k := k + 1
        if k = n then
            return i – n + 1
        else
            i := i + T[M[i]]
    return “Key not found”
Although it may not be evident, the worst-case runtime efficiency of this algorithm is still O(mn). Fortunately, on random texts the expected runtime is linear in the length of the text, and because whole blocks of characters can be skipped, the number of comparisons can approach m/n in the best case. This places Horspool's algorithm, which utilizes input enhancement, in a much faster class than the brute-force algorithm for this problem.
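For illustration, the same algorithm can be written in Python, mirroring the pseudocode above; storing the shift table in a dictionary rather than an alphabet-sized array avoids fixing the alphabet in advance:

def shift_table(key):
    n = len(key)
    table = {}                      # characters absent from the table shift by n
    for j in range(n - 1):          # only the first n-1 characters set shifts
        table[key[j]] = n - 1 - j   # later (more rightward) occurrences win
    return table

def horspool(key, text):
    n, m = len(key), len(text)
    table = shift_table(key)
    i = n - 1                       # text index under the key's last character
    while i <= m - 1:
        k = 0
        while k <= n - 1 and key[n - 1 - k] == text[i - k]:
            k += 1
        if k == n:
            return i - n + 1        # index where the match begins
        i += table.get(text[i], n)  # look up the shift; n if not in the key
    return -1                       # key not found

assert horspool("OTA", "POTATO") == 1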
Related concepts
Input enhancement is often used interchangeably with precomputation and preprocessing. Although they are related, there are several important differences that must be noted.
Precomputing and input enhancement can sometimes be used synonymously. More specifically, precomputation is the calculation of a given input before anything else is done with it. Oftentimes a table is generated to be looked back on during the actual execution of the algorithm. Input enhancement that calculates values and assigns them to elements of the input can be classified as precomputation, but the similarities stop there. There are sections of input enhancement that do not utilize precomputing, so the two terms should not be used interchangeably.
When speaking about altering inputs, the term preprocessing is often misused. In computer science, a preprocessor and preprocessing are entirely different. When preprocessing is used in context, the usual intention is to portray the concept of input enhancement, not that of utilizing a preprocessor. Implementing a preprocessor is the concept in which a program takes an input and processes it into an output to be used by another program entirely. This sounds like input enhancement, but the term preprocessor applies to a program that processes its source input into a format that a compiler can read and can then compile.
References
Levitin, Anany (2012). Introduction to The Design & Analysis of Algorithms (Third Edition). Pearson.
Sebesta, Robert W. (2012). Concepts of Programming Languages (Tenth Edition). Pearson.
Software optimization |
43285 | https://en.wikipedia.org/wiki/Common%20Object%20Request%20Broker%20Architecture | Common Object Request Broker Architecture | The Common Object Request Broker Architecture (CORBA) is a standard defined by the Object Management Group (OMG) designed to facilitate the communication of systems that are deployed on diverse platforms. CORBA enables collaboration between systems on different operating systems, programming languages, and computing hardware. CORBA uses an object-oriented model although the systems that use CORBA do not have to be object-oriented. CORBA is an example of the distributed object paradigm.
Overview
CORBA enables communication between software written in different languages and running on different computers. Implementation details from specific operating systems, programming languages, and hardware platforms are all removed from the responsibility of developers who use CORBA. CORBA normalizes the method-call semantics between application objects residing either in the same address-space (application) or in remote address-spaces (same host, or remote host on a network). Version 1.0 was released in October 1991.
CORBA uses an interface definition language (IDL) to specify the interfaces that objects present to the outer world. CORBA then specifies a mapping from IDL to a specific implementation language like C++ or Java. Standard mappings exist for Ada, C, C++, C++11, COBOL, Java, Lisp, PL/I, Object Pascal, Python, Ruby and Smalltalk. Non-standard mappings exist for C#, Erlang, Perl, Tcl and Visual Basic implemented by object request brokers (ORBs) written for those languages.
The CORBA specification dictates there shall be an ORB through which an application would interact with other objects. This is how it is implemented in practice:
The application initializes the ORB, and accesses an internal Object Adapter, which maintains things like reference counting, object (and reference) instantiation policies, and object lifetime policies.
The Object Adapter is used to register instances of the generated code classes. Generated code classes are the result of compiling the user IDL code, which translates the high-level interface definition into an OS- and language-specific class base for use by the user application. This step is necessary in order to enforce CORBA semantics and provide a clean user process for interfacing with the CORBA infrastructure.
Some IDL mappings are more difficult to use than others. For example, due to the nature of Java, the IDL-Java mapping is rather straightforward and makes usage of CORBA very simple in a Java application. This is also true of the IDL to Python mapping. The C++ mapping requires the programmer to learn datatypes that predate the C++ Standard Template Library (STL). By contrast, the C++11 mapping is easier to use, but requires heavy use of the STL. Since the C language is not object-oriented, the IDL to C mapping requires a C programmer to manually emulate object-oriented features.
In order to build a system that uses or implements a CORBA-based distributed object interface, a developer must either obtain or write the IDL code that defines the object-oriented interface to the logic the system will use or implement. Typically, an ORB implementation includes a tool called an IDL compiler that translates the IDL interface into the target language for use in that part of the system. A traditional compiler then compiles the generated code to create the linkable-object files for use in the application.
This arrangement of generated code, ORB, and object adapter realizes the high-level paradigm for remote interprocess communication using CORBA. The CORBA specification further addresses data typing, exceptions, network protocols, communication timeouts, etc. For example, the server side normally has a Portable Object Adapter (POA) that redirects calls either to the local servants or (to balance the load) to other servers. The CORBA specification leaves various aspects of a distributed system for the application to define, including object lifetimes (although reference-counting semantics are available to applications), redundancy/fail-over, memory management, dynamic load balancing, and application-oriented models such as the separation between display/data/control semantics (e.g. see Model–view–controller).
In addition to providing users with a language and a platform-neutral remote procedure call (RPC) specification, CORBA defines commonly needed services such as transactions and security, events, time, and other domain-specific interface models.
Versions history
This table presents the history of CORBA standard versions.
Servants
A servant is the invocation target containing methods for handling the remote method invocations. In the newer CORBA versions, the remote object (on the server side) is split into the object (which is exposed to remote invocations) and the servant (to which the former part forwards the method calls). There can be one servant per remote object, or the same servant can support several (possibly all) objects associated with the given Portable Object Adapter. The servant for each object can be set or found "once and forever" (servant activation) or dynamically chosen each time the method on that object is invoked (servant location). Both servant locator and servant activator can forward the calls to another server. In total, this system provides a very powerful means to balance the load, distributing requests between several machines. In object-oriented languages, both the remote object and its servant are objects from the viewpoint of object-oriented programming.
Incarnation is the act of associating a servant with a CORBA object so that it may service requests. Incarnation provides a concrete servant form for the virtual CORBA object. Activation and deactivation refer only to CORBA objects, while the terms incarnation and etherealization refer to servants. However, the lifetimes of objects and servants are independent. A servant is normally incarnated before activate_object() is called, but the reverse order is also possible: create_reference() activates an object without incarnating a servant, and servant incarnation is later done on demand with a Servant Manager.
The Portable Object Adapter (POA) is the CORBA object responsible for splitting the server-side remote invocation handler into the remote object and its servant. The object is exposed for the remote invocations, while the servant contains the methods that actually handle the requests. The servant for each object can be chosen either statically (once) or dynamically (for each remote invocation), in both cases allowing call forwarding to another server.
On the server side, the POAs form a tree-like structure, where each POA is responsible for one or more objects being served. The branches of this tree can be independently activated/deactivated, have the different code for the servant location or activation and the different request handling policies.
Features
The following describes some of the most significant ways that CORBA can be used to facilitate communication among distributed objects.
Objects By Reference
This reference is either acquired through a stringified Uniform Resource Locator (URL), a NameService lookup (similar to a Domain Name System (DNS) lookup), or passed in as a method parameter during a call.
Object references are lightweight objects matching the interface of the real object (remote or local). Method calls on the reference result in subsequent calls to the ORB and blocking on the thread while waiting for a reply, success or failure. The parameters, return data (if any), and exception data are marshaled internally by the ORB according to the local language and OS mapping.
Data By Value
The CORBA Interface Definition Language provides the language- and OS-neutral inter-object communication definition. CORBA Objects are passed by reference, while data (integers, doubles, structs, enums, etc.) are passed by value. The combination of objects-by-reference and data-by-value provides the means to enforce strong data typing while compiling clients and servers, yet preserve the flexibility inherent in the CORBA problem space.
Objects By Value (OBV)
Apart from remote objects, CORBA and RMI-IIOP define the concept of the OBV and Valuetypes. The code inside the methods of Valuetype objects is executed locally by default. If the OBV has been received from the remote side, the needed code must be either a priori known for both sides or dynamically downloaded from the sender. To make this possible, the record defining the OBV contains the Code Base, which is a space-separated list of URLs from which this code should be downloaded. The OBV can also have remote methods.
CORBA Component Model (CCM)
CORBA Component Model (CCM) is an addition to the family of CORBA definitions. It was introduced with CORBA 3 and describes a standard application framework for CORBA components. Though not dependent on the language-dependent Enterprise JavaBeans (EJB), it is a more general form of EJB, providing four component types instead of the two that EJB defines. It provides an abstraction of entities that can provide and accept services through well-defined named interfaces called ports.
The CCM has a component container, where software components can be deployed. The container offers a set of services that the components can use. These services include (but are not limited to) notification, authentication, persistence and transaction processing. These are the most-used services any distributed system requires, and, by moving the implementation of these services from the software components to the component container, the complexity of the components is dramatically reduced.
Portable interceptors
Portable interceptors are the "hooks", used by CORBA and RMI-IIOP to mediate the most important functions of the CORBA system. The CORBA standard defines the following types of interceptors:
IOR interceptors mediate the creation of the new references to the remote objects, presented by the current server.
Client interceptors usually mediate the remote method calls on the client (caller) side. If the object Servant exists on the same server where the method is invoked, they also mediate the local calls.
Server interceptors mediate the handling of the remote method calls on the server (handler) side.
The interceptors can attach the specific information to the messages being sent and IORs being created. This information can be later read by the corresponding interceptor on the remote side. Interceptors can also throw forwarding exceptions, redirecting request to another target.
General InterORB Protocol (GIOP)
The GIOP is an abstract protocol by which Object request brokers (ORBs) communicate. Standards associated with the protocol are maintained by the Object Management Group (OMG). The GIOP architecture provides several concrete protocols, including:
Internet InterORB Protocol (IIOP) – The Internet Inter-Orb Protocol is an implementation of the GIOP for use over the Internet, and provides a mapping between GIOP messages and the TCP/IP layer.
SSL InterORB Protocol (SSLIOP) – SSLIOP is IIOP over SSL, providing encryption and authentication.
HyperText InterORB Protocol (HTIOP) – HTIOP is IIOP over HTTP, providing transparent proxy bypassing.
Zipped IOP (ZIOP) – A zipped version of GIOP that reduces the bandwidth usage.
VMCID (Vendor Minor Codeset ID)
Each standard CORBA exception includes a minor code to designate the subcategory of the exception. Minor exception codes are of type unsigned long and consist of a 20-bit "Vendor Minor Codeset ID" (VMCID), which occupies the high order 20 bits, and the minor code proper which occupies the low order 12 bits.
Minor codes for the standard exceptions are prefaced by the VMCID assigned to OMG, defined as the unsigned long constant CORBA::OMGVMCID, which has the VMCID allocated to OMG occupying the high order 20 bits. The minor exception codes associated with the standard exceptions that are found in Table 3–13 on page 3-58 are or-ed with OMGVMCID to get the minor code value that is returned in the ex_body structure (see Section 3.17.1, "Standard Exception Definitions", on page 3-52 and Section 3.17.2, "Standard Minor Exception Codes", on page 3-58).
Within a vendor assigned space, the assignment of values to minor codes is left to the vendor. Vendors may request allocation of VMCIDs by sending email to [email protected]. A list of currently assigned VMCIDs can be found on the OMG website at: http://www.omg.org/cgi-bin/doc?vendor-tags
The VMCIDs 0 and 0xfffff are reserved for experimental use. The VMCID OMGVMCID (Section 3.17.1, "Standard Exception Definitions", on page 3-52) and VMCIDs 1 through 0xf are reserved for OMG use.
The Common Object Request Broker: Architecture and Specification (CORBA 2.3)
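As an illustration of this bit layout, the following Python sketch assumes the conventional OMGVMCID value of 0x4f4d0000; the helper name is invented for the example:

OMGVMCID = 0x4F4D0000             # OMG's VMCID in the high-order 20 bits

def make_minor_code(vmcid_base, minor):
    # Combine a vendor's VMCID base with the 12-bit minor code proper.
    return vmcid_base | (minor & 0xFFF)

code = make_minor_code(OMGVMCID, 2)
vmcid = code & 0xFFFFF000         # high-order 20 bits: the vendor
minor = code & 0x00000FFF         # low-order 12 bits: the subcategory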
Corba Location (CorbaLoc)
Corba Location (CorbaLoc) refers to a stringified object reference for a CORBA object that looks similar to a URL.
All CORBA products must support two OMG-defined URL formats: "corbaloc:" and "corbaname:". The purpose of these is to provide a human-readable and editable way to specify a location where an IOR can be obtained.
An example of corbaloc is shown below:
corbaloc::160.45.110.41:38693/StandardNS/NameServer-POA/_root
A CORBA product may optionally support the "http:", "ftp:" and "file:" formats. The semantics of these is that they provide details of how to download a stringified IOR (or, recursively, download another URL that will eventually provide a stringified IOR). Some ORBs do deliver additional formats which are proprietary for that ORB.
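As a sketch of how such a reference might be resolved from application code, the following assumes the omniORBpy bindings (with their bundled CosNaming stubs) are installed; the host and port are placeholders:

import sys
from omniORB import CORBA
import CosNaming  # naming-service stubs shipped with omniORBpy

# Initialise the ORB and turn a corbaloc string into an object reference.
orb = CORBA.ORB_init(sys.argv, CORBA.ORB_ID)
obj = orb.string_to_object("corbaloc::192.168.1.10:2809/NameService")

# Narrow the generic reference to the interface we expect.
naming = obj._narrow(CosNaming.NamingContext)
if naming is None:
    raise RuntimeError("object is not a NamingContext")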
Benefits
CORBA's benefits include language- and OS-independence, freedom from technology-linked implementations, strong data-typing, high level of tunability, and freedom from the details of distributed data transfers.
Language independence: CORBA was designed to free engineers from limitations of coupling their designs to a particular software language. Currently there are many languages supported by various CORBA providers, the most popular being Java and C++. There are also C++11, C-only, Smalltalk, Perl, Ada, Ruby, and Python implementations, just to mention a few.
OS independence: CORBA's design is meant to be OS-independent. CORBA is available in Java (OS-independent), as well as natively for Linux/Unix, Windows, Solaris, OS X, OpenVMS, HPUX, Android, LynxOS, VxWorks, ThreadX, INTEGRITY, and others.
Freedom from technologies: One of the main implicit benefits is that CORBA provides a neutral playing field for engineers to be able to normalize the interfaces between various new and legacy systems. When integrating C, C++, Object Pascal, Java, Fortran, Python, and any other language or OS into a single cohesive system design model, CORBA provides the means to level the field and allow disparate teams to develop systems and unit tests that can later be joined together into a whole system. This does not rule out the need for basic system engineering decisions, such as threading, timing, object lifetime, etc. These issues are part of any system regardless of technology. CORBA allows system elements to be normalized into a single cohesive system model. For example, the design of a multitier architecture is made simple using Java Servlets in the web server and various CORBA servers containing the business logic and wrapping the database accesses. This allows the implementations of the business logic to change, while the interface changes would need to be handled as in any other technology. For example, a database wrapped by a server can have its database schema change for the sake of improved disk usage or performance (or even whole-scale database vendor change), without affecting the external interfaces. At the same time, C++ legacy code can talk to C/Fortran legacy code and Java database code, and can provide data to a web interface.
Data typing: CORBA provides flexible data typing, for example an "ANY" datatype. CORBA also enforces tightly coupled datatyping, reducing human errors. In a situation where Name-Value pairs are passed around, it is conceivable that a server provides a number where a string was expected. CORBA Interface Definition Language provides the mechanism to ensure that user code conforms to method names, return types, parameter types, and exceptions.
High tunability: Many implementations (e.g. ORBexpress (Ada, C++, and Java implementation) and OmniORB (open source C++ and Python implementation)) have options for tuning the threading and connection management features. Not all ORB implementations provide the same features.
Freedom from data-transfer details: When handling low-level connection and threading, CORBA provides a high level of detail in error conditions. This is defined in the CORBA-defined standard exception set and the implementation-specific extended exception set. Through the exceptions, the application can determine if a call failed for reasons such as "Small problem, so try again", "The server is dead" or "The reference does not make sense." The general rule is: Not receiving an exception means that the method call completed successfully. This is a very powerful design feature.
Compression: CORBA marshals its data in a binary form and supports compression. IONA, Remedy IT, and Telefónica have worked on an extension to the CORBA standard that delivers compression. This extension is called ZIOP and this is now a formal OMG standard.
Problems and criticism
While CORBA delivered much in the way code was written and software constructed, it has been the subject of criticism.
Much of the criticism of CORBA stems from poor implementations of the standard and not deficiencies of the standard itself. Some of the failures of the standard itself were due to the process by which the CORBA specification was created and the compromises inherent in the politics and business of writing a common standard sourced by many competing implementors.
Initial implementation incompatibilities
The initial specifications of CORBA defined only the IDL, not the on-the-wire format. This meant that source-code compatibility was the best that was available for several years. With CORBA 2 and later this issue was resolved.
Location transparency
CORBA's notion of location transparency has been criticized; that is, that objects residing in the same address space and accessible with a simple function call are treated the same as objects residing elsewhere (different processes on the same machine, or different machines). This is a fundamental design flaw, as it makes all object access as complex as the most complex case (i.e., remote network call with a wide class of failures that are not possible in local calls). It also hides the inescapable differences between the two classes, making it impossible for applications to select an appropriate use strategy (that is, a call with 1µs latency and guaranteed return will be used very differently from a call with 1s latency with possible transport failure, in which the delivery status is potentially unknown and might take 30s to time out).
Design and process deficiencies
The creation of the CORBA standard is also often cited for its process of design by committee. There was no process to arbitrate between conflicting proposals or to decide on the hierarchy of problems to tackle. Thus the standard was created by taking a union of the features in all proposals with no regard to their coherence. This made the specification complex, expensive to implement entirely, and often ambiguous.
A design committee composed of a mixture of implementation vendors and customers created a diverse set of interests. This diversity made difficult a cohesive standard. Standards and interoperability increased competition and eased customers' movement between alternative implementations. This led to much political fighting within the committee and frequent releases of revisions of the CORBA standard that some ORB implementors ensured were difficult to use without proprietary extensions. Less ethical CORBA vendors encouraged customer lock-in and achieved strong short-term results. Over time the ORB vendors that encourage portability took over market share.
Problems with implementations
Through its history, CORBA has been plagued by shortcomings in poor ORB implementations. Unfortunately many of the papers criticizing CORBA as a standard are simply criticisms of a particularly bad CORBA ORB implementation.
CORBA is a comprehensive standard with many features. Few implementations attempt to implement all of the specifications, and initial implementations were incomplete or inadequate. As there were no requirements to provide a reference implementation, members were free to propose features which were never tested for usefulness or implementability. Implementations were further hindered by the general tendency of the standard to be verbose, and the common practice of compromising by adopting the sum of all submitted proposals, which often created APIs that were incoherent and difficult to use, even if the individual proposals were perfectly reasonable.
Robust implementations of CORBA have been very difficult to acquire in the past, but are now much easier to find. The SUN Java SDK comes with CORBA built-in. Some poorly designed implementations have been found to be complex, slow, incompatible and incomplete. Robust commercial versions began to appear but for significant cost. As good quality free implementations became available the bad commercial implementations died quickly.
Firewalls
CORBA (more precisely, GIOP) is not tied to any particular communications transport. A specialization of GIOP is the Internet Inter-ORB Protocol or IIOP. IIOP uses raw TCP/IP connections in order to transmit data.
If the client is behind a very restrictive firewall or transparent proxy server environment that only allows HTTP connections to the outside through port 80, communication may be impossible, unless the proxy server in question allows the HTTP CONNECT method or SOCKS connections as well. At one time, it was difficult even to force implementations to use a single standard port – they tended to pick multiple random ports instead. As of today, current ORBs do not have these deficiencies. Due to such difficulties, some users have made increasing use of web services instead of CORBA. These communicate using XML/SOAP via port 80, which is normally left open or filtered through an HTTP proxy inside the organization, for web browsing via HTTP. Recent CORBA implementations, though, support SSL and can be easily configured to work on a single port. Some ORBs, such as TAO, omniORB and JacORB, also support bidirectional GIOP, which gives CORBA the advantage of being able to use callback communication rather than the polling approach characteristic of web service implementations. Also, most modern firewalls support GIOP & IIOP and are thus CORBA-friendly firewalls.
See also
Software engineering
Component-based software engineering
Distributed computing
Portable object
Service-oriented architecture (SOA)
Component-based software technologies
Freedesktop.org D-Bus – current open cross-language cross-platform object model
GNOME Bonobo – deprecated GNOME cross-language object model
KDE DCOP – deprecated KDE interprocess and software componentry communication system
KDE KParts – KDE component framework
Component Object Model (COM) – Microsoft Windows-only cross-language object model
DCOM (Distributed COM) – extension making COM able to work in networks
Common Language Infrastructure – Current .NET cross-language cross-platform object model
XPCOM (Cross Platform Component Object Model) – developed by Mozilla for applications based on it (e.g. Mozilla Application Suite, SeaMonkey 1.x)
IBM System Object Model SOM and DSOM – component systems from IBM used in OS/2 and AIX
Internet Communications Engine (ICE)
Java remote method invocation (Java RMI)
Java Platform, Enterprise Edition (Java EE)
JavaBean
OpenAIR
Remote procedure call (RPC)
Windows Communication Foundation (WCF)
Software Communications Architecture (SCA) – components for embedded systems, cross-language, cross-transport, cross-platform
Language bindings
Language binding
Foreign function interface
Calling convention
Dynamic Invocation Interface
Name mangling
Application programming interface - API
Application binary interface - ABI
Comparison of application virtual machines
SWIG opensource automatic interfaces bindings generator from many languages to many languages
References
Further reading
External links
Official OMG CORBA Components page
Unofficial CORBA Component Model page
Comparing IDL to C++ with IDL to C++11
Corba: Gone But (Hopefully) Not Forgotten
OMG XMI Specification
Component-based software engineering
GNOME
Inter-process communication
ISO standards
Object-oriented programming |
52112692 | https://en.wikipedia.org/wiki/ThetaRay | ThetaRay | ThetaRay is a cyber security and big data analytics company with headquarters in Hod HaSharon, Israel, and offices in New York and Singapore. The company provides a platform for detection of unknown threats and risks to protect critical infrastructure and financial services. The platform is also used to uncover unknown opportunities based on big data. The company utilizes patented mathematical algorithms developed by the company founders.
History
ThetaRay was founded in 2013 by Amir Averbuch and Ronald Coifman. Averbuch is a professor of computer science at Tel Aviv University whose main research focus is big data processing and analysis. Coifman is a professor of mathematics at Yale University and recipient of the 1999 National Medal of Science. His main research focus is efficient computation and numerical analysis. Mark Gazit, an international security expert and serial startup entrepreneur, is co-founder and CEO of ThetaRay.
In June 2013, ThetaRay raised its seed funding from Jerusalem Venture Partners (JVP) as part of their cyber security portfolio. Two months later, General Electric (GE) joined JVP as an investor and ThetaRay launched its Advanced Analytics Platform for big data. It was followed by operational risk solutions for financial organizations in April 2015. In July 2015, ThetaRay opened an office in New York and two months later launched its Credit Risk Detection Model for online lending. In December 2015, ThetaRay and PricewaterhouseCoopers signed a Joint Business Relations agreement. ThetaRay has customers such as ING Group that purchased ThetaRay’s Advanced Analytics solution for fraud detection. ThetaRay opened an office in Singapore in July 2016.
Awards
Winner of the 2014 Red Herring Top 100 Award, sector Security
Winner of the 2014 TiE50 Award
Winner of the 2014 Global Frost & Sullivan Entrepreneurial Company of the Year Award
Named Gartner Cool Vendor in Security for Technology and Service Providers, 2015
See also
Big Data
Cyber Security
References
External links
Official Website
Big data companies
Software companies established in 2013
Software companies of Israel |
24685572 | https://en.wikipedia.org/wiki/Ssh-keygen | Ssh-keygen | ssh-keygen is a standard component of the Secure Shell (SSH) protocol suite found on Unix, Unix-like and Microsoft Windows computer systems used to establish secure shell sessions between remote computers over insecure networks, through the use of various cryptographic techniques. The ssh-keygen utility is used to generate, manage, and convert authentication keys.
Overview
ssh-keygen is able to generate a key using one of several supported digital signature algorithms. With the help of the ssh-keygen tool, a user can create passphrase-protected keys for any of these key types. To provide for unattended operation, the passphrase can be left empty, albeit at increased risk. These keys differ from keys used by the related tool GNU Privacy Guard.
OpenSSH-based client and server programs have been included in Windows 10 since version 1803. The SSH client and key agent are enabled and available by default and the SSH server is an optional Feature-on-Demand.
Key formats supported
Originally, with SSH protocol version 1 (now deprecated) only the RSA algorithm was supported. As of 2016, RSA is still considered strong, but the recommended key length has increased over time.
The SSH protocol version 2 additionally introduced support for the DSA algorithm. DSA is now considered weak and was disabled in OpenSSH 7.0.
Subsequently, OpenSSH added support for a third digital signature algorithm, ECDSA (this key format no longer uses the previous PEM file format for private keys, nor does it depend upon the OpenSSL library to provide the cryptographic implementation).
A fourth format is supported using ed25519, originally developed by independent cryptography researcher Daniel J. Bernstein.
Command syntax
The syntax of the ssh-keygen command is as follows:
ssh-keygen [options]
Some important options of the ssh-keygen command are as follows:
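Commonly used options include -t (key type), -b (key length in bits), -f (output file name), -C (comment), -N (new passphrase), and -p (change the passphrase of an existing private key). For example, the following generates an Ed25519 key pair with a comment (the comment and file name here are placeholders):
 ssh-keygen -t ed25519 -C "[email protected]" -f ~/.ssh/id_ed25519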
Files used by the ssh-keygen utility
The ssh-keygen utility uses various files for storing public and private keys. The files used by ssh-keygen utility are as follows:
$HOME/.ssh/identity: The $HOME/.ssh/identity file contains the RSA private key when using the SSH protocol version 1.
$HOME/.ssh/identity.pub: The $HOME/.ssh/identity.pub file contains the RSA public key for authentication when you are using the SSH protocol version 1. A user should copy its contents in the $HOME/.ssh/authorized_keys file of the remote system where a user wants to log in using RSA authentication.
$HOME/.ssh/id_dsa: The $HOME/.ssh/id_dsa file contains the protocol version 2 DSA authentication identity of the user.
$HOME/.ssh/id_dsa.pub: The $HOME/.ssh/id_dsa.pub file contains the DSA public key for authentication when you are using the SSH protocol version 2. A user should copy its contents in the $HOME/.ssh/authorized_keys file of the remote system where a user wants to log in using DSA authentication.
$HOME/.ssh/id_rsa: The $HOME/.ssh/id_rsa file contains the protocol version 2 RSA authentication identity of the user. This file should not be readable by anyone but the user.
$HOME/.ssh/id_rsa.pub: The $HOME/.ssh/id_rsa.pub file contains the protocol version 2 RSA public key for authentication. The contents of this file should be added to $HOME/.ssh/authorized_keys on all computers where a user wishes to log in using public key authentication.
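For illustration (the user and host names below are placeholders), a public key can be installed on a remote system either with the ssh-copy-id helper or by appending it manually:
 ssh-copy-id -i ~/.ssh/id_ed25519.pub user@remote-host
 cat ~/.ssh/id_ed25519.pub | ssh user@remote-host 'cat >> ~/.ssh/authorized_keys'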
References
External links
Generating an SSH key, a guide from GitHub
ssh-keygen manual from the OpenBSD project
Linux man page from die.net
Operating system security
Unix network-related software
Secure Shell |
57203458 | https://en.wikipedia.org/wiki/Toman%20%28film%29 | Toman (film) | Toman (also known as Zdeněk Toman) is a 2018 Czech historical film by Ondřej Trojan. It focuses on Zdeněk Toman, who led Czechoslovak intelligence from 1945 to 1948. It premiered at Slavonice Film Festival on 2 August 2018.
Plot
The film follows the rise and fall of Zdeněk Toman, Head of Czechoslovak Intelligence from 1945 to 1948.
The film starts in April 1948, when Toman is interrogated by Inspector Putna. The film then moves to March 1945, when Toman was a repatriation officer in Carpathian Ruthenia. He comes into conflict with NKVD officers who insist that nobody will be repatriated from Carpathian Ruthenia as it will become part of the Soviet Union. Toman bribes the NKVD officers and later meets Imrich Rosenberg. Rosenberg asks Toman to help repatriate Jews from Carpathian Ruthenia. Toman agrees to help for money. Toman later organises a meeting of the exiled Czechoslovak government in Košice. He meets Václav Nosek, who agrees to help him with his career. Nosek helps Toman become part of Czechoslovak Intelligence, which is led by the non-communist General Josef Bartík. Toman befriends the prominent Communist Rudolf Slánský, who tasks Toman with getting finances for the Communist Party of Czechoslovakia. Toman uses his contacts in Great Britain to get the finances and also rises in prominence. He replaces Bartík as the Head of Intelligence. He also uses his influence to help Jewish refugees and to help establish Israel. Toman's finances help the Communists win the 1946 election. He meets Klement Gottwald but also becomes an enemy of the Communist clique represented by Bedřich Reicin, which has ties to the Soviet Union. Toman's support for Israel brings him into conflict with the Soviets. Toman is removed from his position during the 1948 coup d'état. Toman is arrested, and his wife commits suicide. Toman manages to escape, and with the help of the Jewish representatives he previously assisted, he flees to Bavaria, where he surrenders to American soldiers.
Cast
Jiří Macháček as Zdeněk Toman
Kateřina Winterová as Pesla Tomanová
Kristýna Boková as Milada Třískalová
Stanislav Majer as Rudolf Slánský
Marek Taclík as Bedřich Reicin
Roman Luknár as Jan Masaryk
Lukáš Latinák as Vlado Clementis
Táňa Pauhofová as Aurélia Tomanová
Lukáš Melník as František Kuracin
Jaromír Dulava as Václav Nosek
Jiří Dvořák as General Bártík
Martin Finger as Adolf Püchler
Aleš Procházka as Klement Gottwald
Radek Holub as Karel Šváb
Ondřej Malý as Aladar Berger
Matěj Ruppert as Marian Kargul
Jaroslav Plesl as Rosenberg
Marián Mitaš as Karel Vaš
Miroslav Táborský as Veselý
Ady Hajdu as Zorin
Václav Neužil jr. as Bedřich Pokorný
Pavel Liška as Evžen Zeman
Lukáš Hlavica as JUDr. Horák
Halka Třešňáková as Josefa Slánská
Petr Vaněk as Gaynor Jacobson
Jaroslav Kubera as Edvard Beneš
Production
Zdeňka Šimandlová received a request for a screenplay about Zdeněk Toman. She worked on it for four years, even after the request was withdrawn. Šimandlová approached Ondřej Trojan with the screenplay in 2010. Trojan became fascinated by the character of Zdeněk Toman and agreed to make a film based on the screenplay. The screenplay was heavily modified to make it more historically accurate. Martin Šmok was brought into the production as a historical adviser to ensure the accuracy of the film.
Trojan started to gather financing for the film. He planned to start filming in 2014, but negotiations with Czech Television took longer than expected. Trojan decided to start once 70% of the financing was secured.
Filming started on 4 April 2017. It took place in Brdy and later moved to Prague. Some parts were filmed at Barrandov Studios. Shooting finished on 30 October 2017. Director and producer Ondřej Trojan announced that the film was expected to have a difficult post-production.
Release
The premiere was scheduled for 12 April 2018 but had to be moved to 4 October 2018 as a result of the problematic post-production. The film eventually had a limited premiere for an accredited audience at Slavonice Film Festival. It was released to cinemas on 4 October 2018.
Reception
The film premiered at Slavonice Film Festival, where it received positive reactions and a long ovation from the audience. František Fuka published his review on 26 September 2018; he had reservations about the film but praised its final part. Mirka Spáčilová called Toman a talking encyclopedia. The film received overall mixed-to-positive reviews from critics, holding 66% at Kinobox.cz. The film was nominated for 13 Czech Lion Awards.
Accolades
References
External links
Toman at CSFD.cz
Toman at cfn.cz
2018 films
Czech films
Czech historical thriller films
Czech-language films
2010s historical thriller films
Films set in the 1940s
Slovak films
Czech Lion Awards winners (films) |
15932162 | https://en.wikipedia.org/wiki/Grisham%20Stadium | Grisham Stadium | Grisham Stadium, Maddox - Musselwhite Track at Historic Trojan Field is a multi-purpose stadium in Carrollton, Georgia, United States. The stadium is home to the many athletic teams at Carrollton High School, and hosts various additional functions for the Carrollton City School District. The stadium is also home to Georgia Storm FC.
History
Trojan Field
With the construction of a new Carrollton High School in 1963, a new athletic stadium was needed adjacent to the school building. The new stadium opened on September 3, 1965, and was named Trojan Field on November 5 of that year.
Hugh Maddox
Hugh G. Maddox was a track and football coach who, during his tenure in the 1950s, secured five consecutive state track titles. He also led the Trojan football team to its first state championship in 1956. In 1971, the Hugh G. Maddox Track at Trojan Field was named in his honor.
Charlie Grisham
Charlie Grisham, previously serving under Maddox, was named head coach of the high school football team beginning in 1958, and led the team to five additional state titles. The stadium was renamed in his honor in 1974.
Craig Musselwhite
In 2021, the Carrollton Board of Education voted to rename the track at Grisham Stadium to Maddox - Musselwhite Track, honoring the then recently retired head coach of track. During his career at the school system, Musselwhite led the track team to five state championships as an assistant coach and eight more as head coach. He also contributed to two consecutive team titles as a student at the school in 1982-1983, and achieved three individual state titles in high jump from his sophomore to senior year.
Other tenants
Georgia Storm FC
Georgia Storm FC joined the NPSL in 2020 and made its debut season in the 2021 Southeast Conference. Grisham Stadium serves as the home field for the Carrollton based team.
West Georgia Wolves
The University of West Georgia, having utilized the stadium for its football team since 1981, would continue to do so until the construction of the college's own counterpart in 2009.
2008 Renovations
In 2008, the local school board administration approved a tax-funded plan for renovations of Grisham Stadium along with the development of a new fine arts center and improvements to the nearby gymnasium. The stadium underwent extensive renovations, which included a state-of-the-art artificial track and field, complete visitor's locker rooms, and updated concessions, along with new home-side grandstands and a three-level press box. A new grand entrance was incorporated in 2008-2009, including a brick memorial walkway. These additions were also linked with the existing "TrojanTron" and Matrix boards, as well as the home fieldhouse, which houses the Trojan Hall of Fame. A bronze Trojan monument also greeted fans upon entering the updated stadium. However, the statue was moved in 2019 to serve as the centerpiece of the newly renovated high school courtyard.
Funding controversy
There was backlash against the school district and its administration at the time after funding for the facilities fell through. Parents and Carrollton community voices expressed displeasure and concern over the construction plans exceeding the budget, and many accused the school system of neglecting students' academic needs and wasting taxpayer money. Others in the community defended the district's decision, citing the need for Carrollton students to have access to athletic facilities. The stadium renovations, having already been completed when costs exceeded limits, forced the school district to delay construction of the arts center and scale down the plans for the gym renovations.
Scoreboard replacement
In 2021, the "TrojanTron" Matrix scoreboard, having been out of operation for years, was replaced with an entirely screen based score board system. The signage on this board and on the press box was also updated to match the newly renamed "Maddox - Musselwhite Track".
Notes
References
American football venues in Georgia (U.S. state)
College football venues
West Georgia Wolves football
High school football venues in the United States
Multi-purpose stadiums in the United States
Buildings and structures in Carroll County, Georgia
Sports venues completed in 2008
2008 establishments in Georgia (U.S. state) |
66342646 | https://en.wikipedia.org/wiki/Losslesscut | Losslesscut | LosslessCut is a free, platform-independent video editing software that supports numerous audio, video and container formats.
LosslessCut is essentially a graphical user interface for the multimedia framework FFmpeg, usable under macOS, Windows and Linux. It thereby supports all formats supported by FFmpeg. The software focuses on lossless editing of video files: by copying the selected image sequences without transcoding or re-rendering, it creates the target file very quickly in comparison to tools that re-encode frames.
Completely lossless copying is achieved only when the source file is cut at the reference frames of a group of pictures. This is visualised while operating the program.
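Conceptually, each cut segment corresponds to an FFmpeg stream-copy invocation; a hand-written equivalent might look like the following (file names and times are illustrative):
 ffmpeg -ss 00:01:00 -i input.mp4 -t 00:00:30 -c copy cut.mp4
Because -c copy writes the selected streams without re-encoding, the cut can only be frame-accurate at keyframes, matching the constraint described above.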
With a size of just under 100 MB, the software is small and portable, so it can be started from an external storage medium without prior installation. The FFmpeg framework needs to be already present on the computer.
Core functions
Essential functions of the software are:
Cutting videos and reassembling scenes in selectable order
Separating audio or subtitle tracks from a video, or adding new tracks
Concatenating multiple tracks with the same codec parameters
Multiplexing into a selectable container format
Saving single images (snapshots) in JPG or PNG format
Adjusting metadata for the rotation or orientation of the video
A zoomable timeline with annotation of the reference frames and jump functions
Displaying thumbnails of the video and the waveform of the audio track
Displaying, naming and reordering the list of cut segments
Automatically saving the cut list in CSV format; importing and exporting cut lists
Displaying the generated FFmpeg command line for individual adjustments
Limitations
Typically, the segment start will be "rounded to the nearest previous keyframe", so the author emphasises that the program is not meant for frame-exact cutting. This limitation is by design: it keeps the cut lossless, i.e. it avoids re-encoding the frames adjacent to a cut for codecs that use interframe motion compensation.
The file handling does not fully follow operating-system conventions; in particular, Softpedia reports awkwardness in that the input selection window does not filter compatible files, and that output is saved in the same location as the source without prompting.
See also
Glossary of video terms
References
External links
Source code in GitHub
Cross-platform free software
Linux software
Free video conversion software
Software that uses FFmpeg
Video editing software for macOS
Video editing software for Linux
Video editing software for Windows |
22495629 | https://en.wikipedia.org/wiki/GlobalSight | GlobalSight | GlobalSight is a free and open source translation management system (TMS) released under the Apache License 2.0. As of version 7.1 it supports the TMX and SRX 2.0 Localization Industry Standards Association standards. It was developed in the Java programming language and uses a MySQL database. GlobalSight also supports computer-assisted translation and machine translation.
History
From 1997 to 2005 it was called Ambassador Suite and was developed and owned by GlobalSight Corp., which according to Red Herring magazine was one of the "ten companies to watch" in 1999. In 2005, Transware Inc. acquired it and continued its development. In May 2008, Welocalize acquired Transware and GlobalSight. In January 2009, after replacing the proprietary technology used in the product (workflow, database, object-relationship mapping, middleware, directory management and scheduling) with open-source components, Welocalize released version 7.1.
Steering committee
The steering committee, formed by representatives of the main companies involved in the project, comprises:
Stephen Roantree from AOL
Mirko Plitt from Autodesk
Jessica Roland from EMC Corporation
Frank Rojas from IBM
Daniel McGowan from Novell
Martin Wunderlich
Melissa Biggs from Sun Microsystems
Tex Texin from XenCraft
Reinhard Schaler from The Rosetta Foundation
Phil Ritchie from VistaTEC
Sultan Ghaznawi from YYZ Translations
Derek Coffey from Welocalize
Other companies involved
In December 2008 there were four Language Service Providers involved in the project: Afghan Translation Service, Applied Language Solutions, Lloyd International Translations and VistaTEC.
Features
According to the Translator and Reviewer Training Guide and the GlobalSight vs. WorldServer comparison, the software has the following features:
Customized workflows, created and edited using graphical workflow editor
Support for both human translation and fully integrated machine translation (MT)
Automation of many traditionally manual steps in the localization process, including: filtering and segmentation, TM leveraging, analysis, costing, file handoffs, email notifications, TM update, target file generation
Translation Memory (TM) management and leveraging, including multilingual TMs, and the ability to leverage from multiple TMs
In-context exact matching, as well as exact and fuzzy matching (see the sketch after this list)
Terminology management and leveraging
Centralized and simplified Translation memory and terminology management
Full support for translation processes that utilize multiple Language Service Providers (LSPs)
Two online translation editors
Support for desktop Computer Aided Translation (CAT) tools such as Trados
Cost calculation based on configurable rates for each step of the localization process
Filters for dozens of filetypes, including Word, RTF, PowerPoint, Excel, XML, HTML, JavaScript, PHP, ASP, JSP, Java Properties, Frame, InDesign, etc.
Concordance search
Alignment mechanism for generating Translation memory from previously translated documents
Reporting
Web services API for programmatic access to GlobalSight functionality and data
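Fuzzy translation-memory leveraging, mentioned above, can be illustrated with a small sketch. The following Python fragment is a generic illustration that assumes a plain dictionary as the TM store; it is not GlobalSight's actual matching engine, and the similarity measure and the 75% threshold are assumptions.

```python
from difflib import SequenceMatcher

def leverage(tm, segment, threshold=0.75):
    """Return the best TM match for a source segment, if good enough."""
    best_score, best_pair = 0.0, None
    for source, target in tm.items():
        score = SequenceMatcher(None, source, segment).ratio()
        if score > best_score:
            best_score, best_pair = score, (source, target)
    if best_score >= threshold:
        kind = "exact" if best_score == 1.0 else "fuzzy"
        return kind, round(best_score, 2), best_pair
    return "no match", round(best_score, 2), None

tm = {"Save the file.": "Enregistrer le fichier."}
print(leverage(tm, "Save this file."))  # a fuzzy match above the threshold
```

Real TMS implementations index the translation memory so candidate segments are retrieved without scanning every entry; the linear scan here is purely for clarity.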
Although a plugin called Crowdsight was intended to extend the functionality and support crowdsourcing, GlobalSight was found unsuitable for crowdsourcing processes that depend on redundant inputs.
Integration with other platforms
In 2011, Globalme Language & Technology released an open source plugin which connects the back end of a Drupal or Wordpress website to GlobalSight. Publishers can send their content directly to GlobalSight using this CMS plugin.
Drupal CMS
In 2014 Globalme and Welocalize published an open source Drupal plugin to provide integration capabilities with the Drupal TMGMT translation management plugin.
See also
Comparison of machine translation applications
References
External links
GlobalSight official website
CMSwithTMS Drupal & Wordpress translation plugin
Translation software
Free software programmed in Java (programming language)
Computer-assisted translation software programmed in Java
Formerly proprietary software
Translation companies
Free software projects
Companies established in 1997
Computer-assisted translation |
67657842 | https://en.wikipedia.org/wiki/Unison%20%28software%29 | Unison (software) | Unison is a file synchronization tool for Windows and various Unix-like systems (including macOS and Linux). It allows two replicas of a collection of files and directories to be stored on different hosts (or different disks on the same host), modified separately, and then brought up to date by propagating the changes in each replica to the other. Because it syncs the replicas directly, Unison is independent of third-party providers.
Features
Unison handles file changes on both sides of the replication; conflicts (the same file changed on both sides) are displayed and can be resolved manually, optionally creating backups of the changed files. Unison allows synchronization over computer networks (LAN, Internet) by direct connection (socket) or tunneled via ssh. By using the rsync algorithm, only the changed blocks of files have to be transferred, thus saving bandwidth.
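The block-level transfer can be sketched as follows. This simplified Python fragment compares digests of fixed-offset blocks of two file versions; it is illustrative only, since the real rsync algorithm additionally uses a rolling weak checksum so that matching blocks can be found at arbitrary offsets, which this sketch omits. The 4 KiB block size is an arbitrary assumption.

```python
import hashlib

BLOCK = 4096  # assumed block size in bytes

def block_digests(path):
    """Map block index -> digest for fixed-size blocks of a file."""
    digests, index = {}, 0
    with open(path, "rb") as f:
        while chunk := f.read(BLOCK):
            digests[index] = hashlib.md5(chunk).hexdigest()
            index += 1
    return digests

def changed_blocks(old_path, new_path):
    """Indices whose digests differ; only these blocks need transfer."""
    a, b = block_digests(old_path), block_digests(new_path)
    return sorted(i for i in set(a) | set(b) if a.get(i) != b.get(i))
```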
Usage
Unison can be called from the command line with parameters or controlled via profile files. It can be run interactively or automatically in batch mode. In batch mode, non-conflicting changes are synchronized automatically, while files with replication conflicts are skipped.
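A minimal unattended invocation might look as follows; the sketch assumes the two roots are a local directory and an ssh-reachable replica, with placeholder paths and host name.

```python
import subprocess

# Run Unison in unattended batch mode: non-conflicting changes are
# propagated, conflicting files are skipped (as described above).
subprocess.run(
    ["unison", "/home/user/docs",
     "ssh://backup.example.org//home/user/docs", "-batch"],
    check=True,
)
```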
After startup, Unison builds the file inventory for each directory or computer and compares the files' timestamps. If it detects changes, the affected files are analyzed in more detail. Unison then creates a replication list with synchronization proposals and marks conflicts that cannot be resolved automatically.
GUI versions of Unison exist for interactive use; under Windows and Linux they are generally based on GTK+. The GUI versions give an easier overview of the replicas and the proposed synchronization. Changes can be selected individually by keyboard or mouse and then applied in bulk.
See also
Comparison of file hosting services
Comparison of file synchronization software
Comparison of online backup services
References
External links
Unison home page
Unison source code
File sharing software for Linux
File copy utilities |
62179466 | https://en.wikipedia.org/wiki/Aniela%20Pawlikowska | Aniela Pawlikowska | Aniela Pawlikowska, known as Lela Pawlikowska (11 July 1901, Lwów – 23 December 1980, London), was a Polish artist, illustrator, and society portrait painter who came to prominence in the United Kingdom in the 1950s and '60s.
Background
Aniela Pawlikowska was born to a family with a rich literary and scientific heritage. Her mother was Maryla Wolska, a Polish poet and the daughter of Wanda Młodnicka, née Monné, muse and fiancée of the painter Artur Grottger and herself a writer and translator. Her father was Wacław Wolski, an engineer, inventor, author on mathematical logic, linguist, an early pioneer of the Polish petroleum industry, and associate of the Canadian petroleum entrepreneur William Henry McGarvey.
Aniela was the youngest of five children. Her older sister was the writer and poet, Beata Obertyńska.
Aniela was home-schooled. One of her tutors was a family friend, the university professor of philosophy and psychology and artist, Władysław Witwicki. Aniela ("Lela") Wolska's artistic talent was noted early; she held her first solo exhibition at age nine. On that occasion, 54 of her works were exhibited by the Lwów Society of Friends of Fine Art (Towarzystwo Przyjaciół Sztuk Pięknych). Her home education did not conclude with a matriculation examination, so she attended lectures on the history of art given at Lwów University by Professors Bołoz-Antoniewicz and as an auditor.
In 1924 Lela Wolska married Michał Pawlikowski, a bibliophile, writer, and publisher, and joined him at the family seat of Medyka, near Przemyśl. They had three daughters and a son. The family also spent much time at their mountain chalet, "Pod Jedlami" ("The Firs"), in Zakopane, which had been designed for Michał's father, Jan Gwalbert Pawlikowski, by the modernist artist Stanisław Witkiewicz. With her husband's active support she continued her studies in art at the Krakow Academy of Fine Arts with Wojciech Weiss and Kazimierz Sichulski.
In the 1930s she exhibited widely in Lwów, Kraków, Warsaw and Zakopane, and abroad in Leipzig, Rome, Florence and Turin. With the outbreak of World War II she took her four small children to Lwów, where she went to live with her sister in the Wolski villa known as "Zaświecie". In May 1940, after the first wave of Soviet deportations of Poles to Siberia had already taken place, she managed to obtain a laissez-passer for herself and the children into the Nazi General Government. They were taken in by a relative, a political activist and poet, on her estate in Goszyce. In April 1942, thanks to the intervention of her husband, who was then in Rome, she was able to travel to Italy to join him.
In Rome she provided for the family by painting portraits of Italian aristocrats and of diplomats stationed there. It was also a time of family tragedy, as the Pawlikowskis' second daughter died of leukemia. By the end of 1946 the family had moved to London to join the thousands of demobilised allied Polish military personnel who were allowed to settle in the United Kingdom now that their home region had been ceded to Soviet Ukraine under the Yalta accords. In London she continued to support the family through portrait commissions and became one of the most sought-after portrait painters in the country.
In 1955 her popularity led to a solo exhibition at London's Parsons Gallery which was deemed one of the cultural events of the year, not least among the Polish emigrant community. Among her sitters were Princess Alexandra of Kent, the daughters of the King of Spain and the wartime SOE agent, Krystyna Skarbek. She continued to work virtually to the end of her life, despite losing the sight in one eye. From 1962 she would visit Poland for several months each year and stay in Zakopane in the family chalet. Her husband was killed in a road traffic accident in 1970.
She died in 1980 in a Polish care home, "Antokol" on the outskirts of London. Her ashes were laid to rest in Poland.
The fate of Medyka
After the Soviet invasion of Poland in 1939, the village of Medyka, and therefore the Pawlikowski estate, found themselves annexed by the Soviet Union. With 1948 border adjustments between the new Polish People's Republic and its neighbour, Medyka found itself once again just over the Polish side of the new border. The Pawlikowski estate became state property and was turned into a State Agricultural Farm (PGR). In the 1960s it was decided to demolish the palace on the grounds that it was a vestige of the earlier "bourgeois hegemony". Fortunately, the family had taken the precaution at the outbreak of war to donate part of its valuable collections to the Ossolineum in Lwów and to take other parts to their chalet in Zakopane. The gamble paid off, as most of the archive managed to survive in scattered form. Pawlikowska and the family never returned to Medyka after the war to see the devastation of the place where they had spent the happiest time of their lives. It was a fate shared by the totality of Polish landowners in the Kresy region of Poland and marked a "caesura" in history and the obliteration not only of a way of life, but also of a centuries-old hugely rich and diverse cultural heritage centred on the city of Lwów.
Works
Pawlikowska's creativity was shaped by her highly traditional and conservative background and was undoubtedly influenced by the patriotic and nationalistic views of her husband, the main champion of her work. At a time of deep crisis for the nation, coupled with exile in the free world, she did not seek to join the avant-garde but sought instead, in her own words, to "make links between the basic elements of art and means of expression with her Polishness". Her inspiration, coupled with the mature style of her creativity, allows her work to be characterised as art déco.
Illustrations
Her public debut, aside from the juvenilia, came with illustrations and graphic designs for the first library edition of her husband's work, Agnieszka albo o Pannie na niedźwiedziu, "Agnes or the Maiden atop the Bear" (Medyka 1925). It was a pastiche of a medieval incunable, in particular of the Balthasar Behem Codex, which she knew well, and it inaugurated the publishing venture of the Medyka Library series, which until it ceased in 1939 produced 15 richly illustrated titles, generally with her work. Among her significant contributions were two volumes by Beata Obertyńska, A Guitar and the Others - Gitara i tamci (Medyka 1926) and The Tale of the Brothers Frost. A Calendar Dream - O Braciach Mroźnych. Sen kalendarzowy (Medyka 1930). Also notable were her gouache colour and black-and-white linocut illustrations for Zofia Kossak-Szczucka's God's Madmen - Szaleńcy Boży (Kraków 1929). In London Pawlikowska illustrated over a dozen titles for Veritas, a Polish religious publishing foundation.
Religious themes
Sacred subjects were central to Pawlikowska's creativity, especially those of Marian inspiration. Her portfolio of ten linocuts, Bogurodzica (Mother of God), overlain with watercolours and gilded, refers to folk woodcarvings and paintings on glass; it was published by Medyka in 1930. Commenting on this art déco series, and on other single works after the war, Pawlikowska said: "those pictures are on a religious theme, but my aim was to express them in Polish, not drawing on any pattern or style, rather perhaps relying on folk art (...) the point was to reflect the world through the Marian calendar and traditions, for example, Our Lady of Sowing, Our Lady of Berries, Our Lady of Herbs... and also to convey it by the simplest artistic means through line and thereby to confer as much expression as possible". A similar intention lies behind the style of the coloured linocut "St. Hubertus" of 1936.
The period after the war saw the creation of several important religious works by Pawlikowska. Among them are the 1947 depiction of "Saint Stanisław Szczepanowski, bishop and martyr" for the altar of the Marian Fathers' chapel at Fawley Court in Buckinghamshire, England, and two paintings whose fate is unknown, one from 1947 and the other dated 1962; reproductions survive of "Prayer for the souls in purgatory" (1947) and "Father Maksymilian Kolbe" (1962). Throughout her career she also designed many Christmas and Easter cards.
Portraiture and landscapes
Pawlikowska devoted her entire life to the study of the human form and of nature. Her sketchbooks were filled with human figures, plants and animals, part of her uninterrupted daily studio practice; she was an amateur botanist and an acute observer. Her landscapes, chiefly in watercolour, were initially a throwback to the 19th century; later they clearly reflected a growing fascination with Japanese painting and the colour experimentation of the interwar period. A separate chapter in her creative work was her studies of interiors, mainly of the palace in Medyka. After the war, knowing she could never return to her home, she immortalised those interiors from memory, substituting artistic style for the eroded detail.
Her portraiture, which was to become the mainstay of the family's émigré life and the support for the chalet in Poland, initially oscillated between a style redolent of secessionism and new experiments with colour and form. However, the pressures of wartime and the difficult period in Italy of necessity turned much of her artwork into a commissioned commodity. It gained popularity in high society, but at the price of a reversion to traditional academic art.
From her émigré years, only a handful of works survive that were untainted by the loss of hope of ever returning to her homeland or by the increasing rigour of having to earn her living. Among them are three still lifes and a painting of roses. They are outstanding works, torn out of the daily drudgery, and they mark the artist's farewell to creative freedom: "Roses", as though painted in haste in thick oils; the grotesque "Pinocchio and the doll" (circa 1943) in disharmonious colour; "Black pudding on straw" (1960); and finally "Clay pots" (1970), the last expressions of the artist's soul. Of her commissions in England, a certain "relic" remains in the form of a pastel drawing of the infant head of the future Diana, Princess of Wales.
See also
List of Poles
Notes
References
Bibliography
Świat Leli Pawlikowskiej. Prace z lat 1915-1965. Catalogue of the exhibition in the Krakow National Museum ed. M. Romanowska, Kraków 1997
Lela Pawlikowska w Medyce, ed. M. Trojanowska, Przemyśl 2002
Marta Trojanowska. Dama z Medyki z Londynu. Lela Pawlikowska 1901-1980, Przemyśl 2005
Marta Trojanowska. Dama z Medyki z Londynu. Lela Pawlikowska 1901-1980 Selection of works including oil, woodcut, watercolour and witty illustrations
External links
Selection of Pawlikowska's drawings from the National Museum in Cracow
Pawlikowska Exhibition in Kraków Cloth Hall: "Lela Pawlikowska. Drawings" 16-23.11.2016. in English
1901 births
1980 deaths
Artists from Lviv
Polish women illustrators
20th-century Polish painters
Polish women painters
Polish portrait painters
Artists from Kraków
20th-century Polish women artists
Polish emigrants to the United Kingdom |
45324674 | https://en.wikipedia.org/wiki/MKVToolNix | MKVToolNix | MKVToolNix is a collection of tools for the Matroska media container format by Moritz Bunkus including mkvmerge. The free and open source Matroska libraries and tools are available for various platforms including Linux and BSD distributions, macOS and Microsoft Windows. The tools can be also downloaded from video software distributors and FOSS repositories.
Applications
MKVToolNix was reviewed by the Linux Journal, Linux Format, the ICTE Journal, and Softpedia among others. The tools are cited in patents for a "Universal container for audio data". A "portable" Windows edition exists, but is not available in the PortableApps format.
Components
MKVToolNix GUI is a Qt GUI for mkvmerge and the successor of mmg.
mkvmerge merges multimedia streams into a Matroska file.
mkvinfo lists all elements contained in a Matroska file.
mkvextract extracts specific parts from a Matroska file to other formats.
mkvpropedit analyzes and modifies some Matroska file properties. (A command-line sketch follows this list.)
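As a minimal illustration of the command-line tools, the following Python sketch muxes elementary streams into a Matroska file and then extracts the first track again. The file names are placeholders; the invocations follow the tools' documented syntax.

```python
import subprocess

# Mux a raw H.264 video stream and an AAC audio stream into one
# Matroska file, then extract track 0 back out as a raw stream.
subprocess.run(
    ["mkvmerge", "-o", "movie.mkv", "video.h264", "audio.aac"],
    check=True,
)
subprocess.run(
    ["mkvextract", "movie.mkv", "tracks", "0:video_copy.h264"],
    check=True,
)
```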
See also
Converting video on Wikimedia Commons
List of open-source codecs
References
External links
mkvtoolnix project on GitLab
Free multimedia software
Software that was ported from wxWidgets to Qt
Video software that uses Qt
Free software programmed in C++ |
41272005 | https://en.wikipedia.org/wiki/Game%20engine%20recreation | Game engine recreation | Game engine recreation is a type of video game engine remastering process whereby a new game engine is written from scratch as a clone of the original with the ability to load the original game's data files such as music, textures, scripts, shaders, levels, and more. The new engine should read these data files and, in theory, load and understand them in a way that is indistinguishable from the original. The result of a proper engine clone is often the ability to play a game on modern systems that the old game could no longer run on. It also opens the possibility of community collaboration, as many engine remake projects tend to be open source. Game engine recreation can be beneficial to game publishers because the legal use of a re-creation still requires the original data files, as a player must still purchase the original game in order to legally play the re-created game (as detailed in this list of game engine recreations).
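Because a recreated engine must parse the original game's proprietary data files, much of the work consists of reverse engineering those formats. The following minimal Python sketch reads a hypothetical asset-archive header; the 12-byte layout and the "PAK0" magic are invented for illustration and correspond to no specific game.

```python
import struct

def read_archive_header(path):
    # Hypothetical header: 4-byte magic, 32-bit version, 32-bit entry
    # count, little-endian. Real layouts are recovered by reverse
    # engineering the original executable and its data files.
    with open(path, "rb") as f:
        magic, version, count = struct.unpack("<4sII", f.read(12))
    if magic != b"PAK0":
        raise ValueError("not a recognised archive")
    return version, count
```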
Motivation
Game engine recreations are made to allow the usage of classical games with newer operating system versions, recent hardware or even completely different operating systems than originally intended. Another motivation is the ability to fix engine bugs which is often hard or impossible with the original engines (with notable exceptions, see community patch) once a software has become unsupported abandonware, with the source code not available.
Methods
Top down
When a game engine is recreated with a top-down development methodology, the game's general functionality is programmed first and the structure is defined. Then, in later steps, the resulting engine is adapted to the specific detailed behaviour of the original game, often by reverse engineering, debugging and profiling the original. An example is OpenRA, a clean-room re-implementation based on specifications contributed by the community, without disassembling the original executable; this results in a game engine whose behavior differs from the original. Another example is the Total Annihilation engine remake Spring Engine, which went on to be used for many more games. Typically, this approach yields only an approximation of the original behaviour, not "clock-cycle-exact" identical behaviour. On the positive side, running code exists sooner, and the resulting source code is less tied to one specific game and can be reused as a general game engine for other games.
Bottom up
Unlike top-down game engine recreations, bottom-up disassembled/decompiled versions for a specific game can often replicate the behaviour of the original exactly. In this approach the game core is recreated bottom-up by reverse engineering the original disassembled binary executable, CPU instruction by instruction. During development this has the disadvantage that no running prototype exists for a long time. Also on the negative side, the resulting code is very specifically tied to that single game, often ugly ("pseudo-assembly code"), and can hardly be reused as a general game engine. Examples are CSBWin and OpenTTD. Most often the result is not called a "game engine" but a "game recreation" or "game clone". MAME is an example of a video game engine emulation project that follows this philosophy for accurate representation of the games.
Source code ports
Occasionally, as was the case with some of the engines/game cores in ScummVM, the original developers have helped the projects by supplying the original source code (these can then be called source ports). This is the best case, optimal for accuracy and for minimizing the effort. An example is Beneath a Steel Sky.
Alternatives
Emulation of classical systems or operating systems is an alternative to an engine recreation; for instance, DOSBox is a notable emulator of the PC/MS-DOS environment. Static recompilation is another approach based on the original binary executable, potentially leading to better performance than emulation; an example is the 2014 ARM architecture version of StarCraft for the Pandora. Yet another alternative is a source port, in the rare cases where the source code is available; examples are Jagged Alliance 2 and Homeworld (more examples in the List of commercial video games with available source code).
See also
List of game engine recreations
References |
11485403 | https://en.wikipedia.org/wiki/1961%20New%20York%20Yankees%20season | 1961 New York Yankees season | The 1961 New York Yankees season was the 59th season for the team in New York, and its 61st season overall. The team finished with a record of 109–53, eight games ahead of the Detroit Tigers, and won their 26th American League pennant. New York was managed by Ralph Houk. The Yankees played their home games at Yankee Stadium. In the World Series, they defeated the Cincinnati Reds in 5 games. This season was best known for the home run chase between Roger Maris and Mickey Mantle, with the former beating Babe Ruth's single season record by hitting 61.
The 1961 Yankees are often mentioned as a candidate for the unofficial title of greatest baseball team in history.
Offseason
December 14, 1960: Bob Cerv was drafted from the Yankees by the Los Angeles Angels in the 1960 MLB expansion draft.
January 16, 1961: Mickey Mantle became the highest-paid baseball player by signing a $75,000 contract.
Prior to 1961 season: Art López was signed as an amateur free agent by the Yankees.
Prior to 1961 season: Ole Miss Rebels football quarterback Jake Gibbs was signed as an amateur free agent by the Yankees.
Regular season
The 1961 season was notable for the race between center fielder Mickey Mantle and right fielder Roger Maris to break Babe Ruth's record of 60 home runs in a season (set in 1927). Maris eventually broke the record, hitting his 61st home run on October 1, the season's final day. During the season, Maris had seven multi-home run games; in a doubleheader against the Chicago White Sox, he hit four home runs.
1961 was an expansion year, with the American League increasing from eight to ten teams, the first expansion in the 61-year history of the league. The old schedule of 154 games (seven opponents multiplied by 22 games apiece) was replaced by 162 games (nine opponents multiplied by 18 games apiece), which led to some controversy because of the eight extra games in which Maris could try to hit 61.
Ultimately, when Maris broke Ruth's record in game 162, baseball commissioner Ford Frick instituted "The Asterisk", which designated that Maris had accomplished the feat only in a longer season, and disallowed any reference to him as the record-holder. When commissioner Fay Vincent removed "The Asterisk" in 1991, Maris was finally given credit as the single-season home run record-holder. However, Maris had died in 1985, never knowing that the record belonged to him.
In addition to the individual exploits of Maris and Mantle, the '61 Yankees hit a major league record 240 home runs. The record stood until 1996 when the Baltimore Orioles, with the added benefit of the designated hitter, hit 257 home runs as a team.
Roger Maris
In 1961, the American League expanded from eight to ten teams, generally watering down the pitching, but leaving the Yankees pretty much intact. Yankee home runs began to come at a record pace. One famous photograph lined up six 1961 Yankee players, including Mantle, Maris, Yogi Berra, Elston Howard, Johnny Blanchard, and Bill Skowron, under the nickname "Murderers Row", because they hit a combined 207 home runs that year. The title "Murderers Row", originally coined in 1918, had most famously been used to refer to the Yankees side of the late 1920s.
As mid-season approached, it seemed quite possible that either Maris or Mantle, or perhaps both, would break Babe Ruth's 34-year-old home run record. Unlike the home run race of 1998, in which the competition between Mark McGwire and Sammy Sosa was given extensive positive media coverage, sportswriters in 1961 began to play the "M&M Boys" against each other, inventing a rivalry where none existed, as Yogi Berra has testified in recent interviews.
The 1961 home run race between Maris and Mantle was dramatized in the 2001 film 61*, filmed under the direction of Billy Crystal.
Roger Maris 61 Home Runs
The Yankees played one tie game which was later made up, and hence took 163 games to achieve 162 decisions.
Season standings
Record vs. opponents
Monthly record
Record vs. American League
Notable transactions
May 8, 1961: Lee Thomas, Ryne Duren, and Johnny James were traded by the Yankees to the Los Angeles Angels for Bob Cerv and Tex Clevenger.
July 1, 1961: Roy White was signed as an amateur free agent by the Yankees.
Roster
Game log
|- style="text-align:center;background-color:#ffbbbb"
| 1 || April 11 || Twins || 6–0 || Ramos (1–0) || Ford (0–1) || || 14,607 || 0–1
|- style="text-align:center;background-color:#bbffbb"
| 2 || April 15 || Athletics || 5–3 || Turley (1–0) || Daley (0–1) || Stafford (1) || 11,802 || 1–1
|- style="text-align:center;background-color:#bbffbb"
| 3 || April 17 || Athletics || 3–0 || Ford (1–1) || Walker (0–1) || || 1,947 || 2–1
|- style="text-align:center;background-color:#bbffbb"
| 4 || April 20 || Angels || 7–5 || Ditmar (1–0) || Grba (0–1) || Stafford (2) || || 3–1
|- style="text-align:center;background-color:#bbffbb"
| 5 || April 20 || Angels || 4–2 || Turley (2–0) || Garver (0–1) || Arroyo (1) || 7,059 || 4–1
|- style="text-align:center;background-color:#bbffbb"
| 6 || April 21 || @ Orioles || 4–2 || Ford (2–1) || Barber (1–1) || || 12,368 || 5–1
|- style="text-align:center;background-color:#ffbbbb"
| 7 || April 22 || @ Orioles || 5–3 || Wilhelm (1–0) || Duren (0–1) || || 12,536 || 5–2
|- style="text-align:center;background-color:#ffffcc"
| 8 || April 22 || @ Orioles || 5 – 5 || || || || 14,126 || 5–2
|- style="text-align:center;background-color:#ffbbbb"
| 9 || April 23 || @ Orioles || 4–1 || Estrada (1–1) || McDevitt (0–1) || Hall (1) || 18,704 || 5–3
|- style="text-align:center;background-color:#ffbbbb"
| 10 || April 24 || @ Tigers || 4–3 || Lary (3–0) || Turley (2–1) || || 5,662 || 5–4
|- style="text-align:center;background-color:#bbffbb"
| 11 || April 26 || @ Tigers || 13 – 11 || Arroyo (1–0) || Aguirre (0–1) || || 4,676 || 6–4
|- style="text-align:center;background-color:#bbffbb"
| 12 || April 27 || Indians || 4–3 || Ditmar (2–0) || Antonelli (0–2) || || 8,897 || 7–4
|- style="text-align:center;background-color:#bbffbb"
| 13 || April 29 || Indians || 4–2 || Terry (1–0) || Perry (2–1) || Arroyo (2) || 14,624 || 8–4
|- style="text-align:center;background-color:#bbffbb"
| 14 || April 30 || @ Senators || 4–3 || Ford (3–1) || Donovan (0–4) || Arroyo (3) || || 9–4
|- style="text-align:center;background-color:#ffbbbb"
| 15 || April 30 || @ Senators || 2–1 || Woodeshick (1–1) || Sheldon (0–1) || Burnside (1) || 21,904 || 9–5
|- style="text-align:center;background-color:#bbffbb"
| 16 || May 2 || @ Twins || 6 – 4 || Coates (1–0) || Pascual (2–1) || Arroyo (4) || 16,669 || 10–5
|- style="text-align:center;background-color:#bbffbb"
| 17 || May 3 || @ Twins || 7–3 || Turley (3–1) || Ramos (2–1) || || 18,158 || 11–5
|- style="text-align:center;background-color:#bbffbb"
| 18 || May 4 || @ Twins || 5–2 || Ford (4–1) || Kaat (1–2) || Coates (1) || 18,179 || 12–5
|- style="text-align:center;background-color:#bbffbb"
| 19 || May 5 || @ Angels || 5–4 || McDevitt (1–1) || Clevenger (2–1) || Arroyo (5) || 17,801 || 13–5
|- style="text-align:center;background-color:#ffbbbb"
| 20 || May 6 || @ Angels || 5–3 || Grba (2–2) || Ditmar (2–1) || || 19,865 || 13–6
|- style="text-align:center;background-color:#ffbbbb"
| 21 || May 7 || @ Angels || 5–3 || Kline (1–0) || Coates (1–1) || Clevenger (1) || 19,722 || 13–7
|- style="text-align:center;background-color:#ffbbbb"
| 22 || May 9 || @ Athletics || 5–4 || Herbert (2–1) || Arroyo (1–1) || Archer (1) || 13,623 || 13–8
|- style="text-align:center;background-color:#bbffbb"
| 23 || May 10 || @ Athletics || 9–4 || Clevenger (3–1) || Daley (3–4) || || 15,986 || 14–8
|- style="text-align:center;background-color:#ffbbbb"
| 24 || May 12 || Tigers || 4–3 || Lary (5–1) || Coates (1–2) || || 23,556 || 14–9
|- style="text-align:center;background-color:#ffbbbb"
| 25 || May 13 || Tigers || 8–3 || Regan (3–0) || Turley (3–2) || || 18,036 || 14–10
|- style="text-align:center;background-color:#bbffbb"
| 26 || May 14 || Tigers || 5 – 4 || Coates (2–2) || Aguirre (1–2) || || || 15–10
|- style="text-align:center;background-color:#bbffbb"
| 27 || May 14 || Tigers || 8–6 || Coates (3–2) || Bunning (2–3) || || 40,968 || 16–10
|- style="text-align:center;background-color:#ffbbbb"
| 28 || May 16 || Senators || 3–2 || Woodeshick (2–1) || Stafford (0–1) || Sisler (5) || 10,050 || 16–11
|- style="text-align:center;background-color:#ffbbbb"
| 29 || May 17 || Senators || 8–7 || Burnside (1–2) || Ditmar (2–2) || Gabler (1) || 6,197 || 16–12
|- style="text-align:center;background-color:#ffbbbb"
| 30 || May 19 || @ Indians || 9–7 || Latman (3–0) || Clevenger (3–2) || Allen (2) || 21,240 || 16–13
|- style="text-align:center;background-color:#ffbbbb"
| 31 || May 20 || @ Indians || 4–3 || Funk (4–2) || Stafford (0–2) || || 8,431 || 16–14
|- style="text-align:center;background-color:#bbffbb"
| 32 || May 21 || Orioles || 4–2 || Ford (5–1) || Estrada (2–3) || || || 17–14
|- style="text-align:center;background-color:#ffbbbb"
| 33 || May 21 || Orioles || 3–2 || Barber (5–3) || Sheldon (0–2) || Wilhelm (5) || 47,890 || 17–15
|- style="text-align:center;background-color:#bbffbb"
| 34 || May 22 || Orioles || 8–2 || Coates (4–2) || Fisher (1–5) || Arroyo (6) || 16,923 || 18–15
|- style="text-align:center;background-color:#bbffbb"
| 35 || May 24 || Red Sox || 3–2 || Terry (2–0) || Nichols (0–1) || || 7,673 || 19–15
|- style="text-align:center;background-color:#bbffbb"
| 36 || May 25 || Red Sox || 6–4 || Ford (6–1) || Muffett (0–4) || Arroyo (7) || 13,087 || 20–15
|- style="text-align:center;background-color:#ffbbbb"
| 37 || May 28 || White Sox || 14–9 || Lown (2–2) || Arroyo (1–2) || Pierce (2) || || 20–16
|- style="text-align:center;background-color:#bbffbb"
| 38 || May 28 || White Sox || 5–3 || Coates (5–2) || McLish (2–5) || || 44,435 || 21–16
|- style="text-align:center;background-color:#ffbbbb"
| 39 || May 29 || @ Red Sox || 2–1 || Delock (3–1) || Ford (6–2) || || 21,804 || 21–17
|- style="text-align:center;background-color:#bbffbb"
| 40 || May 30 || @ Red Sox || 12–3 || Stafford (1–2) || Conley (2–4) || Coates (2) || 19,582 || 22–17
|- style="text-align:center;background-color:#bbffbb"
| 41 || May 31 || @ Red Sox || 7–6 || Sheldon (1–2) || Muffett (0–5) || McDevitt (1) || 17,318 || 23–17
|- style="text-align:center;background-color:#ffbbbb"
| 42 || June 1 || @ Red Sox || 7–5 || Monbouquette (4–5) || Turley (3–3) || Stallard (1) || 5,257 || 23–18
|- style="text-align:center;background-color:#bbffbb"
| 43 || June 2 || @ White Sox || 6–2 || Ford (7–2) || McLish (2–6) || || 38,410 || 24–18
|- style="text-align:center;background-color:#ffbbbb"
| 44 || June 3 || @ White Sox || 6 – 5 || Hacker (1–0) || Ditmar (2–3) || || 16,480 || 24–19
|- style="text-align:center;background-color:#bbffbb"
| 45 || June 4 || @ White Sox || 10–1 || Stafford (2–2) || Pierce (1–5) || || 28,362 || 25–19
|- style="text-align:center;background-color:#bbffbb"
| 46 || June 5 || Twins || 6–2 || Coates (6–2) || Lee (0–2) || Arroyo (8) || || 26–19
|- style="text-align:center;background-color:#bbffbb"
| 47 || June 5 || Twins || 6–1 || Sheldon (2–2) || Stobbs (0–2) || || 23,103 || 27–19
|- style="text-align:center;background-color:#bbffbb"
| 48 || June 6 || Twins || 7–2 || Ford (8–2) || Kralick (4–4) || Arroyo (9) || 17,129 || 28–19
|- style="text-align:center;background-color:#bbffbb"
| 49 || June 7 || Twins || 5–1 || Terry (3–0) || Ramos (3–7) || || 9,016 || 29–19
|- style="text-align:center;background-color:#bbffbb"
| 50 || June 8 || Athletics || 6–1 || Stafford (3–2) || Bass (4–3) || || || 30–19
|- style="text-align:center;background-color:#ffbbbb"
| 51 || June 8 || Athletics || 9–6 || Archer (3–1) || McDevitt (1–2) || || 13,157 || 30–20
|- style="text-align:center;background-color:#bbffbb"
| 52 || June 9 || Athletics || 8–6 || Arroyo (2–2) || Herbert (3–6) || || 22,418 || 31–20
|- style="text-align:center;background-color:#bbffbb"
| 53 || June 10 || Athletics || 5–3 || Ford (9–2) || Nuxhall (4–2) || || 17,272 || 32–20
|- style="text-align:center;background-color:#bbffbb"
| 54 || June 11 || Angels || 2–1 || Terry (4–0) || McBride (5–4) || || || 33–20
|- style="text-align:center;background-color:#bbffbb"
| 55 || June 11 || Angels || 5–1 || Sheldon (3–2) || Grba (5–5) || Arroyo (10) || 37,378 || 34–20
|- style="text-align:center;background-color:#bbffbb"
| 56 || June 12 || Angels || 3–1 || Stafford (4–2) || Bowsfield (2–2) || || 16,363 || 35–20
|- style="text-align:center;background-color:#ffbbbb"
| 57 || June 13 || @ Indians || 7–2 || Perry (5–4) || Coates (6–3) || Funk (5) || 21,704 || 35–21
|- style="text-align:center;background-color:#bbffbb"
| 58 || June 14 || @ Indians || 11–5 || Ford (10–2) || Bell (4–6) || Arroyo (11) || 25,095 || 36–21
|- style="text-align:center;background-color:#bbffbb"
| 59 || June 15 || @ Indians || 3 – 2 || Terry (5–0) || Funk (7–5) || || 23,350 || 37–21
|- style="text-align:center;background-color:#ffbbbb"
| 60 || June 16 || @ Tigers || 4–2 || Regan (7–2) || Stafford (4–3) || || 51,744 || 37–22
|- style="text-align:center;background-color:#ffbbbb"
| 61 || June 17 || @ Tigers || 12–10 || Foytack (4–4) || Daley (4–9) || Fox (4) || 51,509 || 37–23
|- style="text-align:center;background-color:#bbffbb"
| 62 || June 18 || @ Tigers || 9–0 || Ford (11–2) || Lary (10–4) || Arroyo (12) || 44,459 || 38–23
|- style="text-align:center;background-color:#ffbbbb"
| 63 || June 19 || @ Athletics || 4–3 || Archer (5–1) || Arroyo (2–3) || || 16,715 || 38–24
|- style="text-align:center;background-color:#bbffbb"
| 64 || June 20 || @ Athletics || 6–2 || Stafford (5–3) || Nuxhall (4–3) || Coates (3) || 19,928 || 39–24
|- style="text-align:center;background-color:#bbffbb"
| 65 || June 21 || @ Athletics || 5–3 || Daley (5–9) || Shaw (3–6) || Arroyo (13) || 19,416 || 40–24
|- style="text-align:center;background-color:#bbffbb"
| 66 || June 22 || @ Athletics || 8–3 || Ford (12–2) || Bass (4–6) || Arroyo (14) || 17,254 || 41–24
|- style="text-align:center;background-color:#ffbbbb"
| 67 || June 23 || @ Twins || 4–0 || Pascual (5–9) || Turley (3–4) || || 30,940 || 41–25
|- style="text-align:center;background-color:#bbffbb"
| 68 || June 24 || @ Twins || 10–7 || Sheldon (4–2) || Cueto (0–2) || Arroyo (15) || 35,199 || 42–25
|- style="text-align:center;background-color:#bbffbb"
| 69 || June 25 || @ Twins || 8–4 || Stafford (6–3) || Kralick (6–5) || Coates (4) || 35,152 || 43–25
|- style="text-align:center;background-color:#bbffbb"
| 70 || June 26 || @ Angels || 8–6 || Ford (13–2) || Donohue (1–2) || Arroyo (16) || 18,870 || 44–25
|- style="text-align:center;background-color:#ffbbbb"
| 71 || June 27 || @ Angels || 7–6 || Bowsfield (4–2) || Daley (5–10) || Grba (2) || 16,108 || 44–26
|- style="text-align:center;background-color:#ffbbbb"
| 72 || June 28 || @ Angels || 5–3 || Duren (3–8) || Turley (3–5) || Donohue (3) || 14,674 || 44–27
|- style="text-align:center;background-color:#bbffbb"
| 73 || June 30 || @ Senators || 5–1 || Ford (14–2) || Donovan (3–8) || || 28,019 || 45–27
|- style="text-align:center;background-color:#bbffbb"
| 74 || July 1 || @ Senators || 7–6 || Arroyo (3–3) || Sisler (1–3) || || 16,015 || 46–27
|- style="text-align:center;background-color:#bbffbb"
| 75 || July 2 || @ Senators || 13–4 || Daley (6–10) || Burnside (1–5) || Arroyo (17) || 19,794 || 47–27
|- style="text-align:center;background-color:#bbffbb"
| 76 || July 4 || Tigers || 6–2 || Ford (15–2) || Mossi (9–2) || || || 48–27
|- style="text-align:center;background-color:#ffbbbb"
| 77 || July 4 || Tigers || 4 – 3 || Lary (12–4) || Stafford (6–4) || Fox (6) || 74,246 || 48–28
|- style="text-align:center;background-color:#bbffbb"
| 78 || July 5 || Indians || 6–0 || Sheldon (5–2) || Bell (5–9) || || 24,377 || 49–28
|- style="text-align:center;background-color:#bbffbb"
| 79 || July 6 || Indians || 4–0 || Stafford (7–4) || Stigman (2–2) || || 37,136 || 50–28
|- style="text-align:center;background-color:#bbffbb"
| 80 || July 7 || Red Sox || 14–3 || Daley (7–10) || Conley (3–7) || || 29,199 || 51–28
|- style="text-align:center;background-color:#bbffbb"
| 81 || July 8 || Red Sox || 8–5 || Ford (16–2) || Delock (5–5) || Arroyo (18) || 23,381 || 52–28
|- style="text-align:center;background-color:#bbffbb"
| 82 || July 9 || Red Sox || 3–0 || Sheldon (6–2) || Monbouquette (8–7) || || || 53–28
|- style="text-align:center;background-color:#ffbbbb"
| 83 || July 9 || Red Sox || 9–6 || Schwall (7–2) || Terry (5–1) || Earley (1) || 47,875 || 53–29
|- style="text-align:center;background-color:#bbcaff"
| – || July 11 || 30th All-Star Game || colspan=7 | National League vs. American League (Candlestick Park, San Francisco) NL defeats AL, 5–4
|- style="text-align:center;background-color:#bbffbb"
| 84 || July 13 || @ White Sox || 6–2 || Stafford (8–4) || Wynn (7–2) || Arroyo (19) || 43,960 || 54–29
|- style="text-align:center;background-color:#ffbbbb"
| 85 || July 14 || @ White Sox || 6–1 || Pizarro (5–3) || Sheldon (6–3) || || 43,450 || 54–30
|- style="text-align:center;background-color:#bbffbb"
| 86 || July 15 || @ White Sox || 9 – 8 || Arroyo (4–3) || Hacker (2–2) || || 37,730 || 55–30
|- style="text-align:center;background-color:#bbffbb"
| 87 || July 16 || @ Orioles || 2–1 || Daley (8–10) || Barber (10–7) || || 38,487 || 56–30
|- style="text-align:center;background-color:#bbffbb"
| 88 || July 17 || @ Orioles || 5–0 || Ford (17–2) || Pappas (6–5) || || 44,332 || 57–30
|- style="text-align:center;background-color:#bbffbb"
| 89 || July 18 || @ Senators || 5–3 || Arroyo (5–3) || McClain (7–9) || || 17,695 || 58–30
|- style="text-align:center;background-color:#ffbbbb"
| 90 || July 19 || @ Senators || 8–4 || Daniels (5–5) || Daley (8–11) || || || 58–31
|- style="text-align:center;background-color:#ffbbbb"
| 91 || July 19 || @ Senators || 12–2 || Donovan (6–8) || Downing (0–1) || || 27,176 || 58–32
|- style="text-align:center;background-color:#bbffbb"
| 92 || July 21 || @ Red Sox || 11–8 || Arroyo (6–3) || Earley (1–4) || || 32,186 || 59–32
|- style="text-align:center;background-color:#bbffbb"
| 93 || July 22 || @ Red Sox || 11–9 || Arroyo (7–3) || Conley (4–9) || || 25,089 || 60–32
|- style="text-align:center;background-color:#ffbbbb"
| 94 || July 23 || @ Red Sox || 5–4 || Schwall (10–2) || Daley (8–12) || || 28,575 || 60–33
|- style="text-align:center;background-color:#bbffbb"
| 95 || July 25 || White Sox || 5–1 || Ford (18–2) || Baumann (7–8) || Arroyo (20) || || 61–33
|- style="text-align:center;background-color:#bbffbb"
| 96 || July 25 || White Sox || 12–0 || Stafford (9–4) || Pizarro (6–4) || || 46,240 || 62–33
|- style="text-align:center;background-color:#bbffbb"
| 97 || July 26 || White Sox || 5–2 || Sheldon (7–3) || Herbert (7–9) || || 22,366 || 63–33
|- style="text-align:center;background-color:#bbffbb"
| 98 || July 27 || White Sox || 4–3 || Terry (6–1) || Pierce (5–7) || Arroyo (21) || 20,529 || 64–33
|- style="text-align:center;background-color:#ffbbbb"
| 99 || July 28 || Orioles || 4–0 || Brown (8–3) || Daley (8–13) || || 39,623 || 64–34
|- style="text-align:center;background-color:#bbffbb"
| 100 || July 29 || Orioles || 5–4 || Ford (19–2) || Fisher (4–10) || || 42,990 || 65–34
|- style="text-align:center;background-color:#ffbbbb"
| 101 || July 30 || Orioles || 4–0 || Barber (12–8) || Stafford (9–5) || || || 65–35
|- style="text-align:center;background-color:#ffbbbb"
| 102 || July 30 || Orioles || 2–1 || Pappas (7–6) || Daley (8–14) || Hall (3) || 57,180 || 65–36
|- style="text-align:center;background-color:#bbcaff"
| – || July 31 || 31st All-Star Game || colspan=7 | National League vs. American League (Fenway Park, Boston) AL tied NL, 1–1
|- style="text-align:center;background-color:#bbffbb"
| 103 || August 2 || Athletics || 6–5 || Arroyo (8–3) || Archer (7–6) || || || 66–36
|- style="text-align:center;background-color:#bbffbb"
| 104 || August 2 || Athletics || 12–5 || Terry (7–1) || Ditmar (2–6) || Reniff (1) || 23,616 || 67–36
|- style="text-align:center;background-color:#ffbbbb"
| 105 || August 3 || Athletics || 6–1 || Shaw (7–9) || Daley (8–15) || || 12,584 || 67–37
|- style="text-align:center;background-color:#bbffbb"
| 106 || August 4 || Twins || 8 – 5 || Arroyo (9–3) || Pleis (3–2) || || 24,109 || 68–37
|- style="text-align:center;background-color:#bbffbb"
| 107 || August 5 || Twins || 2–1 || Coates (7–3) || Kralick (10–7) || || 18,880 || 69–37
|- style="text-align:center;background-color:#bbffbb"
| 108 || August 6 || Twins || 7 – 6 || Reniff (1–0) || Moore (4–4) || || || 70–37
|- style="text-align:center;background-color:#bbffbb"
| 109 || August 6 || Twins || 3–2 || Sheldon (8–3) || Schroll (0–1) || || 39,408 || 71–37
|- style="text-align:center;background-color:#bbffbb"
| 110 || August 7 || Angels || 4–1 || Daley (9–15) || McBride (9–8) || || 13,944 || 72–37
|- style="text-align:center;background-color:#bbffbb"
| 111 || August 8 || Angels || 5 – 4 || Arroyo (10–3) || Fowler (5–5) || || 24,084 || 73–37
|- style="text-align:center;background-color:#bbffbb"
| 112 || August 9 || Angels || 2–0 || Coates (8–3) || Bowsfield (8–4) || || 17,261 || 74–37
|- style="text-align:center;background-color:#bbffbb"
| 113 || August 10 || Angels || 3–1 || Ford (20–2) || Donohue (4–5) || Arroyo (22) || 15,575 || 75–37
|- style="text-align:center;background-color:#bbffbb"
| 114 || August 11 || @ Senators || 12–5 || Terry (8–1) || McClain (7–13) || Reniff (2) || 22,601 || 76–37
|- style="text-align:center;background-color:#ffbbbb"
| 115 || August 12 || @ Senators || 5–1 || Donovan (8–8) || Stafford (9–6) || || 15,870 || 76–38
|- style="text-align:center;background-color:#ffbbbb"
| 116 || August 13 || @ Senators || 12–2 || Daniels (7–6) || Daley (9–16) || || || 76–39
|- style="text-align:center;background-color:#bbffbb"
| 117 || August 13 || @ Senators || 9–4 || Coates (9–3) || Kutyna (6–4) || || 27,368 || 77–39
|- style="text-align:center;background-color:#ffbbbb"
| 118 || August 15 || White Sox || 2–1 || Pizarro (8–5) || Ford (20–3) || || 49,059 || 77–40
|- style="text-align:center;background-color:#bbffbb"
| 119 || August 16 || White Sox || 5–4 || Terry (9–1) || Lown (6–5) || || 29,728 || 78–40
|- style="text-align:center;background-color:#bbffbb"
| 120 || August 17 || White Sox || 5–3 || Stafford (10–6) || Baumann (9–10) || Arroyo (23) || 25,532 || 79–40
|- style="text-align:center;background-color:#ffbbbb"
| 121 || August 18 || @ Indians || 5–1 || Grant (12–6) || Coates (9–4) || || 37,840 || 79–41
|- style="text-align:center;background-color:#bbffbb"
| 122 || August 19 || @ Indians || 3 – 2 || Ford (21–3) || Locke (4–2) || Arroyo (24) || 23,398 || 80–41
|- style="text-align:center;background-color:#bbffbb"
| 123 || August 20 || @ Indians || 6–0 || Terry (10–1) || Perry (9–11) || || || 81–41
|- style="text-align:center;background-color:#bbffbb"
| 124 || August 20 || @ Indians || 5–2 || Sheldon (9–3) || Bell (8–13) || || 56,307 || 82–41
|- style="text-align:center;background-color:#ffbbbb"
| 125 || August 22 || @ Angels || 4–3 || McBride (10–10) || Stafford (10–7) || || 19,930 || 82–42
|- style="text-align:center;background-color:#bbffbb"
| 126 || August 23 || @ Angels || 8 – 6 || Arroyo (11–3) || Donohue (4–6) || || 19,773 || 83–42
|- style="text-align:center;background-color:#ffbbbb"
| 127 || August 24 || @ Angels || 6–4 || Morgan (6–2) || Coates (9–5) || || 19,819 || 83–43
|- style="text-align:center;background-color:#bbffbb"
| 128 || August 25 || @ Athletics || 3–0 || Terry (11–1) || Archer (8–10) || || 30,830 || 84–43
|- style="text-align:center;background-color:#bbffbb"
| 129 || August 26 || @ Athletics || 5–1 || Stafford (11–7) || Walker (5–11) || || 32,149 || 85–43
|- style="text-align:center;background-color:#bbffbb"
| 130 || August 27 || @ Athletics || 8–7 || Ford (22–3) || Shaw (8–12) || Arroyo (25) || 34,065 || 86–43
|- style="text-align:center;background-color:#ffbbbb"
| 131 || August 29 || @ Twins || 3–0 || Pascual (12–13) || Terry (11–2) || || 40,118 || 86–44
|- style="text-align:center;background-color:#bbffbb"
| 132 || August 30 || @ Twins || 4–0 || Stafford (12–7) || Kaat (7–13) || || 41,357 || 87–44
|- style="text-align:center;background-color:#ffbbbb"
| 133 || August 31 || @ Twins || 5–4 || Kralick (12–9) || Sheldon (9–4) || || 33,709 || 87–45
|- style="text-align:center;background-color:#bbffbb"
| 134 || September 1 || Tigers || 1–0 || Arroyo (12–3) || Mossi (14–4) || || 65,566 || 88–45
|- style="text-align:center;background-color:#bbffbb"
| 135 || September 2 || Tigers || 7–2 || Terry (12–2) || Lary (19–8) || Arroyo (26) || 50,261 || 89–45
|- style="text-align:center;background-color:#bbffbb"
| 136 || September 3 || Tigers || 8–5 || Arroyo (13–3) || Staley (2–5) || || 55,676 || 90–45
|- style="text-align:center;background-color:#bbffbb"
| 137 || September 4 || Senators || 5–3 || Reniff (2–0) || Daniels (8–10) || || || 91–45
|- style="text-align:center;background-color:#bbffbb"
| 138 || September 4 || Senators || 3–2 || Daley (10–16) || Burnside (1–7) || || 34,683 || 92–45
|- style="text-align:center;background-color:#bbffbb"
| 139 || September 5 || Senators || 6–1 || Coates (10–5) || McClain (8–16) || || 16,917 || 93–45
|- style="text-align:center;background-color:#bbffbb"
| 140 || September 6 || Senators || 8–0 || Ford (23–3) || Cheney (1–3) || || 12,295 || 94–45
|- style="text-align:center;background-color:#bbffbb"
| 141 || September 7 || Indians || 7–3 || Terry (13–2) || Stigman (2–4) || || 18,549 || 95–45
|- style="text-align:center;background-color:#bbffbb"
| 142 || September 8 || Indians || 9–1 || Stafford (13–7) || Bell (9–15) || || 41,762 || 96–45
|- style="text-align:center;background-color:#bbffbb"
| 143 || September 9 || Indians || 8–7 || Arroyo (14–3) || Funk (11–10) || || 37,161 || 97–45
|- style="text-align:center;background-color:#bbffbb"
| 144 || September 10 || Indians || 7–6 || Coates (11–5) || Locke (4–4) || Arroyo (27) || || 98–45
|- style="text-align:center;background-color:#bbffbb"
| 145 || September 10 || Indians || 9–3 || Daley (11–16) || Perry (10–14) || || 57,824 || 99–45
|- style="text-align:center;background-color:#bbffbb"
| 146 || September 12 || @ White Sox || 4 – 3 || Terry (14–2) || Pierce (9–9) || || 36,166 || 100–45
|- style="text-align:center;background-color:#ffbbbb"
| 147 || September 14 || @ White Sox || 8–3 || Herbert (10–12) || Sheldon (9–5) || Hacker (7) || || 100–46
|- style="text-align:center;background-color:#ffbbbb"
| 148 || September 14 || @ White Sox || 4–3 || Kemmerer (3–3) || Arroyo (14–4) || || 18,120 || 100–47
|- style="text-align:center;background-color:#bbffbb"
| 149 || September 15 || @ Tigers || 11–1 || Ford (24–3) || Mossi (14–7) || || || 101–47
|- style="text-align:center;background-color:#ffbbbb"
| 150 || September 15 || @ Tigers || 4–2 || Kline (7–8) || Daley (11–17) || || 42,267 || 101–48
|- style="text-align:center;background-color:#ffbbbb"
| 151 || September 16 || @ Tigers || 10–4 || Lary (21–9) || Terry (14–3) || || 35,820 || 101–49
|- style="text-align:center;background-color:#bbffbb"
| 152 || September 17 || @ Tigers || 6 – 4 || Arroyo (15–4) || Fox (4–2) || || 44,219 || 102–49
|- style="text-align:center;background-color:#ffbbbb"
| 153 || September 19 || @ Orioles || 1–0 || Barber (17–11) || Ford (24–4) || || || 102–50
|- style="text-align:center;background-color:#bbffbb"
| 154 || September 19 || @ Orioles || 3–1 || Daley (12–17) || Brown (10–6) || || 31,317 || 103–50
|- style="text-align:center;background-color:#bbffbb"
| 155 || September 20 || @ Orioles || 4–2 || Terry (15–3) || Pappas (12–9) || || 21,032 || 104–50
|- style="text-align:center;background-color:#ffbbbb"
| 156 || September 21 || @ Orioles || 5–3 || Fisher (10–12) || Stafford (13–8) || || 22,089 || 104–51
|- style="text-align:center;background-color:#bbffbb"
| 157 || September 23 || @ Red Sox || 8–3 || Ford (25–4) || Schwall (15–6) || Arroyo (28) || 28,128 || 105–51
|- style="text-align:center;background-color:#ffbbbb"
| 158 || September 24 || @ Red Sox || 3–1 || Monbouquette (14–13) || Arroyo (15–5) || || 30,802 || 105–52
|- style="text-align:center;background-color:#bbffbb"
| 159 || September 26 || Orioles || 3–2 || Sheldon (10–5) || Fisher (10–13) || || 19,401 || 106–52
|- style="text-align:center;background-color:#ffbbbb"
| 160 || September 27 || Orioles || 3–2 || Barber (18–12) || Stafford (13–9) || Hall (4) || 7,594 || 106–53
|- style="text-align:center;background-color:#bbffbb"
| 161 || September 29 || Red Sox || 2–1 || Sheldon (11–5) || Monbouquette (14–14) || || 21,485 || 107–53
|- style="text-align:center;background-color:#bbffbb"
| 162 || September 30 || Red Sox || 3–1 || Terry (16–3) || Schwall (15–7) || Coates (5) || 19,061 || 108–53
|- style="text-align:center;background-color:#bbffbb"
| 163 || October 1 || Red Sox || 1–0 || Stafford (14–9) || Stallard (2–7) || Arroyo (29) || 23,154 || 109–53
|-
| Source:
Postseason Game log
|- align="center" bgcolor="bbffbb"
| 1 || October 4 || Reds || 2–0 || Ford (1–0) || O'Toole (0–1) || || 62,397 || 1–0
|- align="center" bgcolor="ffbbbb"
| 2 || October 5 || Reds || 2–6 || Jay (1–0) || Terry (0–1) || || 63,083 || 1–1
|- align="center" bgcolor="bbffbb"
| 3 || October 7 || @ Reds || 3–2 || Arroyo (1–0) || Purkey (0–1) || || 32,589 || 2–1
|- align="center" bgcolor="bbffbb"
| 4 || October 8 || @ Reds || 7–0 || Ford (2–0) || O'Toole (0–2) || Coates (1) || 32,589 || 3–1
|- align="center" bgcolor="bbffbb"
| 5 || October 9 || @ Reds || 13–5 || Daley (1–0) || Jay (1–1) || || 32,589 || 4–1
Player stats
Batting
Starters by position
Note: Pos = Position; G = Games played; AB = At bats; H = Hits; Avg. = Batting average; HR = Home runs; RBI = Runs batted in
Other batters
Note: G = Games played; AB = At bats; H = Hits; Avg. = Batting average; HR = Home runs; RBI = Runs batted in
Pitching
Starting pitchers
Note: G = Games pitched; IP = Innings pitched; W = Wins; L = Losses; ERA = Earned run average; SO = Strikeouts
Other pitchers
Note: G = Games pitched; IP = Innings pitched; W = Wins; L = Losses; ERA = Earned run average; SO = Strikeouts
Relief pitchers
Note: G = Games pitched; W = Wins; L = Losses; SV = Saves; ERA = Earned run average; SO = Strikeouts
1961 World Series
Awards and honors
Roger Maris, American League MVP
Roger Maris, Associated Press Athlete of the Year
Whitey Ford, Cy Young Award
Whitey Ford, Babe Ruth Award
1961 All-Star Game
Whitey Ford, starter, pitcher
Tony Kubek, starter, shortstop
Mickey Mantle, starter, center field
Roger Maris, starter, right field
Luis Arroyo, reserve
Yogi Berra, reserve
Elston Howard, reserve
Bill Skowron, reserve
League leaders
Whitey Ford, led league in innings: (283)
Whitey Ford, led league in games started: (39)
Whitey Ford, led league in batters faced: (1,159)
Roger Maris, Major League Baseball home run champion, (61)
Franchise records
Roger Maris, Yankees single season record, home runs in a season: (61)
Mickey Mantle, Yankees single season record, home runs by a center fielder: (54)
Team leaders
Home runs – Roger Maris (61)
RBI – Roger Maris (142)
Batting average – Elston Howard (.348)
Hits – Bobby Richardson (173)
Stolen bases – Mickey Mantle (12)
Walks – Mickey Mantle (126)
Wins – Whitey Ford (25)
Earned run average – Luis Arroyo (2.19)
Strikeouts – Whitey Ford (209)
Farm system
Harlan affiliation shared with Chicago White Sox
Notes
References
1961 New York Yankees
1961 World Series
1961 New York Yankees at Baseball Almanac
New York Yankees seasons
New York Yankees
New York Yankees
20th century in the Bronx
American League champion seasons
World Series champion seasons
Yankee Stadium (1923) |
3161727 | https://en.wikipedia.org/wiki/Windows%20thumbnail%20cache | Windows thumbnail cache | On Microsoft Windows operating systems, starting with the Internet Explorer 4 Active Desktop Update for Windows 95 to 98, a thumbnail cache is used to store thumbnail images for Windows Explorer's thumbnail view. This speeds up the display of images as these smaller images do not need to be recalculated every time the user views the folder.
Purpose
Windows stores thumbnails of graphics files, and of certain document and movie files, in the thumbnail cache file, including the following formats: JPEG, BMP, GIF, PNG, TIFF, AVI, PDF, PPTX, DOCX, HTML, and many others. Its purpose is to prevent intensive disk I/O, CPU processing, and long load times when a folder that contains a large number of files is set to display each file as a thumbnail. This effect is most clearly seen when accessing a DVD containing thousands of photos without a Thumbs.db file, with the view set to show thumbnails next to the filenames. Thumbnail caching was introduced in Windows 2000, where the thumbnails were stored in the image file's alternate data stream if the operating system was installed on a drive with the NTFS file system; a separate Thumbs.db file was created if Windows 2000 was installed on a FAT32 volume. Windows Me also created Thumbs.db files. From Windows XP onward, thumbnail caching, and thus the creation of Thumbs.db, can optionally be turned off; in Windows XP only, this is done from the Windows Explorer Tools menu, under Folder Options, by checking "Do not cache thumbnails" on the View tab. Under Windows 2000, Windows Me, and Windows XP, a context-menu command to force refreshing a thumbnail is available by right-clicking the image in the Thumbnail view of Windows Explorer.
Thumbs.db
Thumbs.db files are stored in each directory that contains thumbnails on Windows systems. The file is created locally among the images, however, preventing system-wide use of the data and creating additional data load on removable devices. Windows XP Media Center Edition also creates ehthumbs.db, which holds previews of video files. Each thumbnail created in a directory is represented in this database file as a small JPEG file, regardless of the file's original format. The images are resized to 96×96 pixels by default, or, for non-square images, to a proportional miniature of their original shape with 96 pixels on the longer side; the size can be controlled by a setting in the Windows Registry. Each folder in which a thumbnail view has been initiated (that is, where a Thumbnails or Filmstrip view has been displayed in Windows Explorer) will have a Thumbs.db file. Folders with pictures also display previews on their icon when displayed in Thumbnail mode – the first four images in the folder at 40×40 pixels (or proportionally shaped), with a 1-pixel divider overlaid on a standard large folder icon. The Thumbs.db file is stored in the Compound File Binary Format, the same format that many Microsoft Office products use.
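Because Thumbs.db is an ordinary Compound File Binary Format container, its contents can be inspected with generic tooling. The following is a minimal Python sketch, assuming the third-party olefile package (pip install olefile) and a Thumbs.db file in the working directory; it only enumerates the streams (one per cached thumbnail, plus a catalog stream mapping streams to file names) rather than decoding the JPEG data.

    # Sketch: list the streams inside a legacy Thumbs.db container.
    # Assumes the third-party "olefile" package is installed.
    import olefile

    def list_thumbnail_streams(path="Thumbs.db"):
        ole = olefile.OleFileIO(path)
        try:
            # Each stream corresponds to one cached thumbnail or the catalog.
            for entry in ole.listdir():
                name = "/".join(entry)
                print(f"{name}: {ole.get_size(name)} bytes")
        finally:
            ole.close()

    if __name__ == "__main__":
        list_thumbnail_streams()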
Centralized thumbnail cache
Beginning with Windows Vista, thumbnail previews are stored in a centralized location on the system. This provides the system with access to images independent of their location, and addresses issues with the locality of Thumbs.db files. The cache is stored at %userprofile%\AppData\Local\Microsoft\Windows\Explorer as a number of files with the label thumbcache_xxx.db (numbered by size), as well as an index used to find thumbnails in each sized database.
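As an illustration, the short Python sketch below enumerates the per-size database files in the location described above; the path and the thumbcache_*.db naming pattern are taken from this article, and the script merely lists the files without parsing their contents. It assumes a Windows environment where the LOCALAPPDATA environment variable is set.

    # Sketch: enumerate the centralized thumbnail cache databases (Windows only).
    import os
    from pathlib import Path

    def list_thumbcache_files():
        explorer_dir = (Path(os.environ["LOCALAPPDATA"])
                        / "Microsoft" / "Windows" / "Explorer")
        for db in sorted(explorer_dir.glob("thumbcache_*.db")):
            print(f"{db.name}: {db.stat().st_size:,} bytes")

    if __name__ == "__main__":
        list_thumbcache_files()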
However, when browsing network shares with write permission, Windows Vista and Windows 7 store a Thumbs.db file in the remote directory instead of using the (local) central thumbnail cache. This can cause issues when deleting remote shares, as the directory will become locked for a period of time when selected as Windows Explorer automatically creates a remote Thumbs.db file.
Creating Thumbs.db files on remote shares can be disabled with a Group Policy setting.
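Where the Group Policy editor is unavailable, the equivalent registry-based policy can be set programmatically. The sketch below uses Python's standard winreg module; the value name DisableThumbsDBOnNetworkFolders is the commonly documented per-user policy setting, but treat it as an assumption and verify against current Microsoft documentation (or prefer Group Policy in managed environments).

    # Sketch: per-user policy that stops Explorer writing Thumbs.db on shares.
    # Assumption: the documented value name "DisableThumbsDBOnNetworkFolders".
    import winreg

    KEY_PATH = r"Software\Policies\Microsoft\Windows\Explorer"

    def disable_network_thumbs_db():
        # Create the policy key if absent, then set the DWORD value to 1.
        with winreg.CreateKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
            winreg.SetValueEx(key, "DisableThumbsDBOnNetworkFolders",
                              0, winreg.REG_DWORD, 1)

    if __name__ == "__main__":
        disable_network_thumbs_db()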
As forensic evidence
Law-enforcement agencies have used this file to prove that illicit photos were previously stored on the hard drive. For example, the FBI used the "thumbs.db" file in 2008 as evidence of viewing depictions of child pornography.
In 2013, research was conducted into the digital forensic implications of thumbnail caches and the recovery of partial thumbnail cache files. It identified that, whilst there is a standard definition of a thumbnail cache, the structure and the forensic artifacts recoverable from one vary significantly between operating systems. The work also showed that thumbcache_256.db contains non-standard thumbnail cache records which can store interesting data such as network place names and allocated drive letters.
See also
.DS_Store
Quick Look
References
External links
Thumbcache Viewer – open-source thumbcache_*.db viewer
Thumbs Viewer – open-source viewers for both Thumbs.db (legacy mode) and Thumbnail Cache (modern)
Vinetto is a forensics tool to examine Thumbs.db files.
– Description of thumbs.db file
Prevent the creation of thumbs.db files via Group Policy (Windows 7)
Windows files |
11190001 | https://en.wikipedia.org/wiki/OneDrive | OneDrive | Microsoft OneDrive (formerly SkyDrive) is a file hosting service that Microsoft operates. First launched in August 2007, it enables registered users to share and synchronize their files. OneDrive also works as the storage back-end of the web version of Microsoft Office. OneDrive offers 5 GB of storage space free of charge, with 100 GB, 1 TB, and 6 TB storage options available either separately or with Office 365 subscriptions.
The OneDrive client app adds file synchronization and cloud backup features to the device on which it is installed. The app comes bundled with Microsoft Windows and is available for macOS, Android, iOS, Windows Phone, Xbox 360, Xbox One, and Xbox Series X and S. In addition, Microsoft Office apps directly integrate with OneDrive.
History
At its launch, the service, known as Windows Live Folders at the time (with a codename of SkyDrive), was provided as a limited beta available to a few testers in the United States. On August 1, 2007, the service was expanded to a wider audience. Shortly thereafter, on August 9, 2007, the service was renamed Windows Live SkyDrive and made available to testers in the United Kingdom and India. SkyDrive was initially available in 38 countries and regions, later expanded to 62. On December 2, 2008, the capacity of an individual SkyDrive account was upgraded from 5 GB to 25 GB, and Microsoft added a separate entry point called Windows Live Photos which allowed users to access their photos and videos stored on SkyDrive. This entry point allowed users to add "People tags" to their photos, download photos into Windows Photo Gallery or as a ZIP file, and view Exif metadata such as camera information for the uploaded photos. Microsoft also added the ability to have full-screen slide shows for photos using Silverlight.
SkyDrive was updated to "Wave 4" release on June 7, 2010, and added the ability to work with Office Web Apps (now known as Office Online), with versioning. In this update, due to the discontinuation of Windows Live Toolbar, the ability to synchronise and share bookmarked web links between users via SkyDrive was also discontinued. However, users were still able to use Windows Live Mesh, which replaced the previous Windows Live Favorites, to synchronize their favorites between computers until its discontinuation in February 2013.
In June 2010, users of Office Live Workspace, released in October 2007, were migrated to Windows Live Office. The migration included all existing workspaces, documents, and sharing permissions. The merger of the two services was a result of Microsoft's decision to merge its Office Live team into Windows Live in January 2009, as well as several deficiencies with Office Live Workspace, which lacked high-fidelity document viewing and did not allow files to be edited from within the web browser. Office Live Workspace also did not offer offline collaboration and co-authoring functionality – instead documents were "checked out" and "checked in", though the service did integrate with SharedView for real-time screen sharing.
On June 20, 2011, Microsoft overhauled the user interface for SkyDrive, built using HTML5 technologies. The updated version featured caching, hardware acceleration, HTML5 video, quick views, cleaner arrangement of photos and infinite scrolling. Microsoft also doubled the file size limit from 50 MB to 100 MB per file. With this update, Microsoft consolidated the different entry points for SkyDrive, such as Windows Live Photos and Windows Live Office, into one single interface. Files and folders shared with a user, including those in Windows Live Groups, were also accessible in the new interface. On November 29, 2011, Microsoft updated SkyDrive to make sharing and file management easier, as well as HTML5 and other updates. This update also allowed users to see how much storage they had (and how much they had used), a feature that had been removed in the previous update as part of the redesign.
On December 3, 2011, Microsoft released SkyDrive apps for iOS and Windows Phone, which are available in the App Store and Windows Phone Store respectively. On April 22, 2012, Microsoft released a SkyDrive desktop app for Windows Vista, 7 and 8, as well as macOS, allowing users to synchronize files on SkyDrive, much like Windows Live Mesh, and to "fetch" files on their computer via the web browser. In addition, SkyDrive also provided additional storage available for purchase and reduced the free storage space for new users to 7 GB (from 25 GB). Existing users were offered a free upgrade to retain their 25 GB of free storage. The updated SkyDrive also allowed files up to 2 GB in size (uploaded via the SkyDrive desktop app). The update also brought additional features such as Open Document Format (ODF) capability, URL shortening services and direct sharing of files to Twitter.
On August 14, 2012, Microsoft announced a new update for SkyDrive which brought changes and improvements to SkyDrive.com, SkyDrive for Windows desktop and OS X, and the SkyDrive API as part of Live Connect. For SkyDrive.com, the updates brought a new "modern" design for the web service consistent with Outlook.com, and along with the UI update the service also received improvements such as instant search, contextual toolbar, multi-select in thumbnail view, drag-and-drop files into folders, and sorting improvements. For the SkyDrive for Windows desktop and macOS applications, the update brought new performance improvements to photo uploads and the sync experience. The update also improved the SkyDrive API with the removal of file type restrictions, ability to upload images in their full resolution, as well as a new SkyDrive file picker for opening and saving files. On August 28, 2012, Microsoft released a SkyDrive app for Android on Google Play store. On September 18, 2012, Microsoft also introduced a recycle bin feature on SkyDrive and announced that SkyDrive will allow users to create online surveys via Excel Web App.
Sky lawsuit and OneDrive renaming
Microsoft became involved in a lawsuit with British television broadcaster Sky UK for using the word "Sky", resulting in a High Court ruling in June 2013 that the service's brand breached Sky's trademark. On July 31, 2013, in a joint press release between Sky and Microsoft, it was announced that a settlement had been reached and as a result the 'SkyDrive' name would be changed to 'OneDrive'. Sky allowed Microsoft to continue using the brand "for a reasonable period of time to allow for an orderly transition to a new brand". The change was made on most platforms on February 19, 2014, following an announcement on January 27.
On June 18, 2015, Microsoft launched an improved design of OneDrive for the web.
In 2015, Microsoft removed the unlimited storage plan for Office 365 Home, Personal and University packages, reduced the free OneDrive storage from 15 GB to 5 GB, and replaced the paid 100 GB and 200 GB plans with a $1.99 per month 50 GB plan. These changes caused major controversy with users, some of whom petitioned Microsoft to reverse the plans. By November 21, 2015, in response to Microsoft's November 2 announcement, over 70,000 people had taken to the official OneDrive uservoice to voice their concerns. According to Microsoft these changes were a response to people abusing the service by using OneDrive to store PC backups, movie collections, and DVR recordings.
Storage
Quota
The service offers 5 GB of free storage for new users. Additional storage is available for purchase.
The amount of storage available has changed several times. Initially, the service provided 7 GB of storage and, for one year, an additional 3 GB of free storage to students. Users who signed up to OneDrive prior to April 22, 2012 were able to opt-in for a limited time offer of 25 GB of free storage upgrade. The service is built using HTML5 technologies, and files up to 300 MB can be uploaded via drag and drop into the web browser, or up to 10 GB via the OneDrive desktop application for Microsoft Windows and OS X. From September 23, 2013 onwards, in addition to 7 GB of free storage (or 25 GB for users eligible for the free upgrade), power users who required more storage could choose from one of four paid storage plans.
Users in some regions may need to have a certain payment card or PayPal account to pay. The paid storage plan is renewed automatically each year unless Microsoft or the user cancels the service.
Upon the re-launch as OneDrive, monthly payment plans were introduced, along with the ability to earn up to 5 GB of free storage for referring new users to OneDrive (500 MB each), and 3 GB if users enable automatic uploads of photos using the OneDrive mobile apps on smartphones. Subscribers to Office 365's home-oriented plans also receive additional storage for use with the service, with 20 GB per user.
In June 2014 it was announced that OneDrive's default storage would increase to 15 GB, putting it in line with its competitor Google Drive. An additional 15 GB were offered for activating camera roll backup on a mobile device, putting it ahead of Google Drive until November 2015, when this bonus was cancelled. The amount of additional storage for Office 365 subscribers also increased to 1 TB. Microsoft reduced the price of OneDrive storage subscriptions at that time.
In October 2014 Microsoft announced that it would offer unlimited OneDrive storage to all Office 365 subscribers. However, on November 3, 2015, the 1 TB cap was reinstated. Microsoft additionally announced the planned replacement of its 100 GB and 200 GB plans with a new 50 GB plan in early 2016, and the reduction of free storage from 15 GB to 5 GB. Any current accounts over this limit could keep the increased storage for at least 12 months. Following calls for Microsoft to reverse the reduction decision, Microsoft announced on December 11 of the same year that it would allow existing users to request to have up to 30 GB of free storage unaffected by the reduction, and said it would fully refund customers of Office 365 not satisfied with the 1 TB cap, among other redress.
In June 2019, alongside the announcement for the Personal Vault, Microsoft announced that it would increase the OneDrive standalone storage plan from 50 GB to 100 GB at no additional charge, and that it would be giving Office 365 subscribers a new option to add more storage as they need it.
Versioning
OneDrive initially did not store previous versions of files, except for Microsoft Office formats. In July 2017, however, the Microsoft OneDrive team announced that version history support for all file types was the top requested feature; as such, OneDrive would keep older versions of all files for up to 30 days.
Recycle bin
OneDrive implements a "recycle bin"; files the user chooses to delete are stored there for a time, without counting as part of the user's allocation, and can be reinstated until they are ultimately purged from OneDrive.
Download as ZIP files
Entire folders can be downloaded as a single ZIP file with OneDrive. For a single download, there is a limit of 15 GB; the total ZIP file size limit is 20 GB; and up to 10,000 files can be included in a ZIP file.
Files On-Demand
On Windows 10, OneDrive can utilize Files On-Demand, where files synchronized with OneDrive show up in file listings, but do not require any disk space. As soon as the content of the file is required, the file is downloaded in the background.
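Placeholder files created by Files On-Demand can be distinguished from fully downloaded ones by their Win32 file attributes. The Python sketch below is a hedged illustration: it assumes the documented FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS flag (0x00400000) marks files whose content is still cloud-only, and it runs on Windows, where os.stat exposes raw attributes via st_file_attributes.

    # Sketch: detect a OneDrive Files On-Demand placeholder (Windows only).
    import os

    # Assumption: flag value per the Win32 file-attribute documentation.
    FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS = 0x00400000

    def is_cloud_only(path):
        attrs = os.stat(path).st_file_attributes
        return bool(attrs & FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS)

    # Example (hypothetical path):
    # print(is_cloud_only(r"C:\Users\me\OneDrive\report.docx"))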
Editing
Office for the web
Microsoft added Office for the web (known at the time as Office Web Apps, later renamed to Office Online and again to just Office) capability to OneDrive in its "Wave 4" update, allowing users to upload, create, edit and share Word, Excel, PowerPoint and OneNote documents directly within a web browser. In addition, Office for the web allows multiple users to simultaneously co-author Excel documents in a web browser, and co-author OneNote documents with another web user or the desktop application. Users can also view the version history of Office documents stored on OneDrive.
Formats
OneDrive allows the viewing of documents in Portable Document Format (PDF), and in the Open Document Format (ODF), an XML-based file format supported by a number of word processing applications, including Microsoft Office, LibreOffice, Apache OpenOffice and Corel's WordPerfect. OneDrive's search function supports search within PDF documents.
OneDrive includes an online text editor that allows users to view and edit files in plain text format, such as text files and batch files. Syntax highlighting and code completion is available for a number of programming and markup languages, including C#, Visual Basic, JavaScript, Windows PowerShell, CSS, HTML, XML, PHP and Java. This online editor includes a find-and-replace feature and a way to manage file merging conflicts.
Photos and videos
OneDrive can use geo-location data for photos uploaded to the service, and will automatically display a map of the tagged location. OneDrive also allows users to tag people in photos uploaded via the web interface or via Windows Photo Gallery. OneDrive also has support for the UWP app, Microsoft Photos.
Photos uploaded to OneDrive can be played as an automatic slideshow. Images uploaded to OneDrive are recognized as 360° images, viewable from within OneDrive, if they were taken in panoramic mode with popular models of 360° cameras.
Client apps
Microsoft has released OneDrive client applications for Android, iOS, Windows 8, Windows 10, Windows 10 Mobile, Windows Phone, Xbox 360, and Xbox One that allow users to browse, view and organize files stored on their OneDrive cloud storage. In addition, Microsoft also released desktop applications for Microsoft Windows (Vista and later) and macOS (10.7 Lion and later) that allow users to synchronize their entire OneDrive storage with their computers for offline access, as well as between multiple computers. The OneDrive client for Windows allows users to "fetch" the contents of their PCs via the web browser, provided the user enabled this option; macOS users can fetch from a PC, but not vice versa. The Android, iOS and Windows Phone 8 versions also allow camera photos to automatically be uploaded to OneDrive. Upon the re-branding as OneDrive, the Xbox One app also added achievements.
In addition to the client apps, OneDrive is integrated into Windows 8.1 and later, Microsoft Office 2010 and later, as well as the Office and Photos hub in Windows Phone, enabling users to access documents, photos and videos stored on their OneDrive account. OneDrive in Windows 8.1 can sync user settings and files, through either the included OneDrive app (originally called SkyDrive, until the name was changed with a Windows update) or File Explorer, deprecating the previous Windows client. Along with the use of reparse points, these changes allow files to be accessed directly from OneDrive as if they are stored locally. The OneDrive app was also updated to include a local file manager. Unlike on Windows 8, use of OneDrive on Windows 8.1 requires the user's Windows account be linked to a Microsoft account; the previous OneDrive desktop client (which did not have this requirement) no longer works on Windows 8.1. Additionally, the Fetch feature does not work on Windows 8.1.
In an update on July 4, 2017, OneDrive desktop client started showing an error message to the effect that the local OneDrive folder must be located on an NTFS volume only. Other file systems, including the older FAT32 and exFAT, as well as the newer ReFS were not supported. Microsoft further commented that this was always the requirement; it had merely fixed a bug in which the warning was not displayed. Microsoft also denied this feature having anything to do with the forthcoming OneDrive Files On-Demand.
Integration with Microsoft Office
Microsoft Office, starting with Microsoft Office 2010 and Microsoft Office for Mac 2011, allows users to directly open or save documents to OneDrive, or simultaneously edit shared documents with other users. Changes are synchronized when a document is saved and, where conflicts occur, the saving user can choose which version to keep; users can also use several different desktop and web programs to edit the same shared document.
Microsoft OneNote users can sync one or more of their notebooks using OneDrive. Once a notebook is selected for sharing, OneDrive copies the notebook from the user's computer to OneDrive, and that online copy then becomes the original for all future changes. The originating copy remains on the user's hard drive but is no longer updated by OneNote. Users can switch back to an offline-only version of the notebook by manually changing its location in OneNote, but unpredictable results may occur, including the OneNote application crashing and loss of notebook data under certain conditions. Under such circumstances, re-sharing the Notebook to OneDrive may result in recovery of the lost data.
Personal Vault
In September 2019, Microsoft announced Personal Vault. It is a protected area in OneDrive where users can store their most important or sensitive files and photos without sacrificing the convenience of anywhere access. Personal Vault requires a strong authentication method or a second step of identity verification, such as fingerprint, face, PIN, or a code sent via email or SMS. Personal Vault is not available in the macOS app.
Interoperability
OneDrive allows users to embed their Word, Excel and PowerPoint documents into other web pages. These embedded documents allow anyone who visits these web pages to interact with them, such as browsing an embedded PowerPoint slideshow or performing calculations within an embedded Excel spreadsheet. In addition, Microsoft has released a set of APIs for OneDrive via Live Connect to enable developers to develop web services and client apps utilizing OneDrive's cloud storage. This allows users of these web services and client apps to browse, view, upload or edit files stored on OneDrive. A software development kit (SDK) is available for .NET Framework, iOS, Android and Python, with a limited set of APIs for web apps and Windows.
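As a hedged illustration of the kind of client these APIs enable, the Python sketch below lists the contents of a OneDrive root folder through the Microsoft Graph REST endpoint, which superseded the Live Connect API for consumer OneDrive. It assumes a valid OAuth 2.0 access token obtained separately (for example via MSAL) and the third-party requests package; token acquisition is not shown.

    # Sketch: list the root folder of a user's OneDrive via Microsoft Graph.
    import requests

    GRAPH_URL = "https://graph.microsoft.com/v1.0/me/drive/root/children"

    def list_drive_root(access_token):
        resp = requests.get(
            GRAPH_URL,
            headers={"Authorization": f"Bearer {access_token}"},
            timeout=30,
        )
        resp.raise_for_status()
        for item in resp.json().get("value", []):
            kind = "folder" if "folder" in item else "file"
            print(f"{kind}: {item['name']}")

    # Usage (token acquisition not shown): list_drive_root("<access token>")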
OneDrive is already interoperable with a host of web services, including:
Outlook.com: Allows users to:
Directly upload Office documents and photos within Outlook.com, store them on OneDrive and share them with other users.
Directly save Office documents within Outlook.com to OneDrive, and view or edit these documents directly within the web browser.
Edit Office documents within the web browser using Office Online and reply directly back to the sender with the edits made.
Facebook, Twitter and LinkedIn: Enables users to quickly share their files with their contacts on these social networks. OneDrive maintains an access control list of all users with permissions to view or edit the files, including those users on social networks.
Bing: Save & Share feature allows users to save search histories into a OneDrive folder.
Windows Live Groups: Before being discontinued, Windows Live Groups provided each group with 1 GB of storage space on OneDrive to be shared between the group members. Group members were allowed to access, create, modify and delete files within the group's OneDrive folders, along with the other functionality that OneDrive provides. However, these features eventually became native to OneDrive.
Samsung Gallery: Users can sync their photos and videos from the gallery of Samsung Devices to OneDrive through the partnership of Microsoft and Samsung.
Privacy concerns
Data stored on OneDrive is subject to monitoring by Microsoft, and any content that is in violation of Microsoft's Code of Conduct is subject to removal and may lead to temporary or permanent shutdown of the account. This has led to privacy concerns in relation to data stored on OneDrive. Microsoft has responded by indicating that "strict internal policies [are] in place to limit access to a user’s data", and that advanced mechanisms, such as Microsoft's automated PhotoDNA scanning tool, are utilized to ensure users abide by the Code of Conduct and that their account does not contain files in contravention thereof, such as partial human nudity (including art or drawings), or any online surveys.
OneDrive for Business
Microsoft has a similarly named but unrelated software plus service offering called OneDrive for Business (previously SkyDrive Pro). While OneDrive is a personal storage service on the web, OneDrive for Business is a managed cloud storage for business users that replaces SharePoint Workspace. The physical medium on which the information is stored can be either hosted on-premises or purchased as service subscription from Microsoft.
See also
Comparison of file hosting services
Comparison of online backup services
References
External links
Windows Live
Cloud storage
Data synchronization
Email attachment replacements
File hosting
File sharing services
Online backup services
Web applications
Proprietary cross-platform software
Android (operating system) software
Internet properties established in 2007
IOS software
Windows Phone software
File hosting for macOS
File hosting for Windows
OneDrive
Windows components
Microsoft websites
Universal Windows Platform apps
Companies' terms of service |
10033351 | https://en.wikipedia.org/wiki/Applied%20Research%20in%20Patacriticism | Applied Research in Patacriticism | The Applied Research in Patacriticism (ARP) was a digital humanities lab based at the University of Virginia founded and run by Jerome McGann and Johanna Drucker. ARP's open-source tools include Juxta, IVANHOE, and Collex. Collex is the social software and faceted browsing backbone of the NINES federation. ARP was funded by the Mellon Foundation.
Projects
IVANHOE
IVANHOE is an open source electronic role-playing game for educational use. It was developed by ARP at the University of Virginia. It is so named because Sir Walter Scott's novel, Ivanhoe, was used as the source text for the very first IVANHOE game. IVANHOE is notable as an example of the use of ludic or game-related techniques in higher education in the humanities.
NINES
NINES is the Networked Infrastructure for Nineteenth-century Electronic Scholarship, a scholarly organization in British and American nineteenth-century studies supported by ARP, a software development group assembling a suite of critical and editorial tools for digital scholarship. It was founded in 2003 by Jerome McGann at the University of Virginia.
NINES serves as a clearinghouse for peer-reviewed digital resources, which can be collected, annotated, and re-used in online "exhibits." It is powered by open-source Collex software.
In 2011, the NINES model was expanded to include a sister site in eighteenth-century studies, called 18thConnect.
Collex
Collex is an open source social software and faceted browsing tool designed for digital humanities. It includes folksonomy features and was developed at ARP. The first release of Collex is used in the NINES initiative, but it is a generalizable tool that can be applied to other subject domains. Collex is an early example of a scholar-driven Library 2.0 initiative and, like NINES, was conceived as a response to economic problems in tenure and academic publishing.
Juxta
Juxta is an open-source tool for performing bibliographical collations for scholarly use in textual criticism. It was developed by ARP at the University of Virginia under the direction of textual theorist Jerome McGann. The original application was a Java-based client available for free download.
In October 2012, the Research and Development team at NINES released Juxta Commons, a fully online version of the software.
References
Selected bibliography
Jerome McGann, Texts in N-Dimensions and Interpretation in a New Key, in: Text Technology 12,2 (2003)
Johanna Drucker, Designing Ivanhoe, in: Text Technology 12,2 (2003)
Chandler Sansing, Case Study and Appeal: Building the Ivanhoe Game for Classroom Flexibility, in: Text Technology 12,2 (2003)
Bethany Nowviskie, Subjectivity in the Ivanhoe Game:Visual and Computational Strategies, in: Text Technology 12,2 (2003)
External links
ARP tools on the NINES website
ARP website
IVANHOE website and development blog
NINES informational website
Collex development blog
NINES resources in Collex
NINES whitepaper by Bethany Nowviskie and Jerome McGann
Juxta software website
University of Virginia
Textual scholarship |
286880 | https://en.wikipedia.org/wiki/National%20University%20of%20Singapore | National University of Singapore | The National University of Singapore (NUS) is a national collegiate research university in Queenstown, Singapore. Founded in 1905 as the Straits Settlements and Federated Malay States Government Medical School, NUS has consistently been regarded as one of the most prestigious academic institutions in the world, as well as in the Asia-Pacific region. It plays a key role in the further development of modern technology and science, offering a global approach to education and research, with a focus on the expertise and perspectives of Asia. In 2022, the QS World University Rankings ranked NUS 11th in the world and first in Asia.
NUS is the oldest autonomous university in the country. It is largely a comprehensive research university, offering degree programmes in a wide range of disciplines at both the undergraduate and postgraduate levels, including in the sciences, medicine and dentistry, design and environment, law, arts and social sciences, engineering, business, computing, and music.
NUS's main campus is located in the southwestern part of Singapore, adjacent to the Kent Ridge subzone of Queenstown, accommodating an area of . The Duke–NUS Medical School, a postgraduate medical school jointly established with Duke University, is located at the Outram campus; its Bukit Timah campus houses the Faculty of Law and Lee Kuan Yew School of Public Policy; and the Yale-NUS College, a liberal arts college established in collaboration with Yale University that is scheduled to be merged with the University Scholars Programme in 2025 to form NUS College, is located at University Town (commonly known as UTown).
NUS has one Nobel laureate, one Tang Prize laureate and one Vautrin Lud Laureate affiliated as alumni, faculty members, or researchers.
History
In September 1904, Tan Jiak Kim led a group of representatives of the Chinese and other non-European communities to petition the Governor of the Straits Settlements, Sir John Anderson, to establish a medical school in Singapore. Anderson noted that earlier petitions had been unsuccessful because of concerns over whether there would be a sufficient number of students and enough support from the local community. Tan, who was the first president of the Straits Chinese British Association, managed to raise 87,077 Straits dollars from the community, including a personal donation of $12,000. On 3 July 1905, the medical school was founded and was known as the Straits Settlements and Federated Malay States Government Medical School. At Anderson's direction, the school was temporarily housed in a recently vacated block of a government-run asylum in Pasir Panjang, with the Government providing the staff required to run it.
In 1912, the medical school received an endowment of $120,000 from King Edward VII Memorial Fund, started by physician Lim Boon Keng. Subsequently, on 18 November 1913, the name of the school was changed to King Edward VII Medical School. In 1921, it was again changed to King Edward VII College of Medicine to reflect its academic status.
In 1928, Raffles College, a separate institution from the medical school, was established to promote education in arts and social sciences.
University of Malaya (1949–1962)
In 1949, Raffles College was merged with King Edward VII College of Medicine to form University of Malaya on 8 October 1949. The two institutions were merged to provide for the higher education needs of the Federation of Malaya.
The growth of University of Malaya was very rapid during the first decade of its establishment and resulted in the setting up of two autonomous divisions in 1959, one located in Singapore and the other in Kuala Lumpur.
Nanyang University (1955–1980)
In 1955, Nanyang University (abbreviated Nantah, 南大) was established with the backing of the Chinese community in Singapore.
University of Singapore (1962–1980)
In 1960, the governments of then Federation of Malaya and Singapore indicated their desire to change the status of the divisions into that of a national university. Legislation was passed in 1961, establishing the former Kuala Lumpur division as the University of Malaya, while the Singapore division was renamed the University of Singapore on 1 January 1962.
Present form
The National University of Singapore (NUS) was formed with the merger of the University of Singapore and Nanyang University on 6 August 1980. This was done in part due to the government's desire to pool the two institutions' resources into a single, stronger entity and promote English as Singapore's main language of education. The original crest of Nanyang University with three intertwined rings was incorporated into the new coat-of-arms of NUS.
NUS began its entrepreneurial education endeavours in the 1980s, with the setting up of the Centre for Management of Innovation and Technopreneurship in 1988. In 2001, this was renamed the NUS Entrepreneurship Centre (NEC), and became a division of NUS Enterprise. NEC is currently headed by Wong Poh Kam and its activities are organised into four areas, including a business incubator, experiential education, entrepreneurship development and entrepreneurship research.
NUS has 17 faculties and schools across three campus locations in Singapore – Kent Ridge, Bukit Timah and Outram.
Education
NUS has a semester-based modular system for conducting courses. It adopts features of the British system, such as small group teaching (tutorials) on top of regular two-hour lectures, and of the American system (course credits). Students may transfer between courses within their first two semesters, enroll in cross-faculty modules or take up electives from different faculties (compulsory for most degrees). Other cross-disciplinary study programmes include double-degree undergraduate degrees in Arts & Social Sciences and Engineering; Arts & Social Sciences and Law; Business and Engineering; and Business and Law. At the start of every semester, students are allocated a set number of points which they use to bid for enrolment in the modules of their choice. NUS has 17 faculties and schools across three campuses, including a music conservatory.
NUS offers many programmes for student enrichment, with one of these being the Undergraduate Research Opportunities Program (UROP). Students are expected to spend several hours each week on their projects during the semester, and to work full-time on them during the vacation. UROP is offered in the following Faculties/Schools/Residential College: Arts & Social Sciences, Computing, Dentistry, Engineering, Law, Medicine, Science, University Scholars Programme, and Tembusu College.
University rankings
In 2021, the QS World University Rankings ranked NUS 11th in the world and first in Asia Pacific. The Times Higher Education World University Rankings 2022 placed NUS at 21st in the world and third in Asia Pacific, while it placed 25th and 24th in the reputation rankings in prior years. In 2020, it was ranked 29th among the universities around the world by SCImago Institutions Rankings. NUS placed 28th for 2019–2020 and 32nd for 2020–2021 globally in the Informatics Institute/METU University Ranking by Academic Performance.
NUS was ranked first in Singapore and Asia Pacific, and 22nd in the world according to the 2018 Times Higher Education World University Rankings, and first in Asia Pacific and 11th in the world according to the 2018 QS World University Rankings. In 2018, NUS was named the world's fourth most international university in the Times Higher Education World University Rankings. In the QS Graduate Employability Rankings 2018, an annual ranking of university graduates' employability, NUS was ranked 30th in the world.
Faculties and schools
Business
The NUS Business School was founded as the Department of Business Administration in 1965. It has seven departments: Accounting, Strategy and Policy, Decision Sciences, Finance, Management and Organisation, Marketing, and Real Estate.
Graduate programmes offered include the Master of Business Administration (MBA) which currently rank 18th globally according to the Financial Times Global Rankings 2018, NUS MBA Double Degree (conducted jointly with Peking University), UCLA-NUS Executive MBA Programme, Asia-Pacific Executive MBA (English and Chinese), S3 Asia MBA (conducted jointly with Fudan University and Korea University).
Computing
The School of Computing (SoC), established in 1998, has two departments – Computer Science and Information Systems and Analytics. The department of Computer Science offers two undergraduate degree programmes – Computer Science and Information Security, while the department of Information Systems and Analytics offers the other two - Information Systems and Business Analytics. The School of Computing also offers a Master's program in Computer Science, with the option of completing it via coursework, a significant project, or a dissertation.
Dentistry
The Faculty of Dentistry had its early beginnings in 1929 as a Department of Dentistry within the King Edward VII College of Medicine. The faculty conducts a four-year dental course leading to the Bachelor of Dental Surgery (BDS) degree. The undergraduate programme comprises two pre-clinical (first two years) and two clinical years. The Faculty of Dentistry is organised into five academic departments covering the disciplines of Oral Sciences: Oral and Maxillofacial Surgery; Endodontics; Operative Dentistry and Prosthodontics; Periodontics; and Orthodontics and Paediatric Dentistry.
Design and Engineering
The interdisciplinary College of Design and Engineering (CDE) was launched in 2021, bringing together two pre-existing faculties, the School of Design and Environment (SDE) and the Faculty of Engineering (FoE).
Design and Environment
The School of Design and Environment has three departments: Department of Architecture, Department of the Built Environment, and the Division of Industrial Design.
Engineering
The Faculty of Engineering was launched in 1968. It is the largest faculty in the university and consists of several divisions/departments: Biomedical Engineering; Chemical & Biomolecular Engineering; Civil & Environmental Engineering; Electrical & Computer Engineering; Engineering Science Programme; Industrial Systems Engineering & Management (ISEM); Materials Science & Engineering; and Mechanical Engineering. The Department of Industrial Systems Engineering & Management (ISEM) resulted from a merger between the Department of Industrial & Systems Engineering and Division of Engineering & Technology Management in 2017.
The NUS Faculty of Engineering was ranked sixth in the world by the Academic Ranking of World Universities for Engineering/Technology and Computer Sciences. It has also been ranked seventh in the world in the subject category of Engineering and Technology by the 2017 QS World University Subject Rankings and 2016-2017 Times Higher Education World University Subject Rankings.
Duke–NUS Medical School
The Duke–NUS Medical School (Duke–NUS) is a graduate medical school in Singapore. The school was set up in April 2005 as the Duke–NUS Graduate Medical School, Singapore's second medical school, after the Yong Loo Lin School of Medicine, and before the Lee Kong Chian School of Medicine. The Duke–NUS Medical School is a collaboration between Duke University in North Carolina, United States and the National University of Singapore.
Humanities and Sciences
The interdisciplinary College of Humanities and Sciences (CHS) was launched in 2020, bringing together two of the largest pre-existing faculties, the Faculty of Arts and Social Sciences (FASS) and the Faculty of Science (FoS). The two faculties continue to operate independently, but will jointly admit students into CHS from the academic year beginning 2021 onwards. For CHS, students will be required to spend one-third of their four-year curriculum on interdisciplinary modules offered by both faculties, while the minimum requirement for a major course will be reduced from half to one-third of the total curriculum, thus allowing for cross-disciplinary double-degree programmes offered by both faculties to be completed within four years.
Arts and social sciences
FASS majors are organized into three divisions: Asian Studies, Humanities, and Social Sciences.
Undergraduate degrees include the Bachelor of Arts (BA, to be abolished as all students in CHS will be on the four-year Honours track), Bachelor of Arts with Honours (BA (Hons)) and Bachelor of Social Sciences with Honours (BSocSci (Hons)).
Science
The Faculty of Science comprises the departments of Biological Sciences, Chemistry, Food Science & Technology, Mathematics, Pharmacy, Physics, and Statistics & Applied Probability and offers degree programmes in Chemistry, Computational Biology (suspended temporarily), Data Science & Analytics, Environmental Studies (Environmental Biology), Food Science & Technology, Life Sciences, Mathematics, Pharmaceutical Science, Pharmacy, Physics, Quantitative Finance, and Statistics. The first female Dean of the Faculty of Science was Gloria Lim, who was appointed in 1973. She served a four-year term and was reappointed in 1979, but resigned after one year to allow Koh Lip Lin to continue his post. In 1980, the University of Singapore merged with Nanyang University to form NUS, resulting in overlapping posts.
Integrative sciences and engineering
NUS Graduate School for Integrative Sciences and Engineering (NGS) was established in 2003. The principal purpose of NGS is "to promote integrative PhD research encompassing both laboratory work and coursework programmes which not only transcend traditional subject boundaries but also provides students with a depth of experience about science and the way it is carried out".
NGS' PhD programmes are firmly anchored in cross-disciplinary research. It offers a spectrum of research areas spanning science, engineering, related aspects of medicine, and interactive & digital media.
NGS also offers the following PhD degree programmes.
Joint NUS-Karolinska PhD Programme
NUS PhD-MBA
Law
The NUS Faculty of Law was first established as a Department of Law in the then University of Malaya in 1956. The first law students were admitted to the Bukit Timah campus of the university the following year. In 1980, the faculty shifted to the Kent Ridge campus, but in 2006 it relocated back to the Bukit Timah site.
Apart from the traditional LLB which runs for four years, the law school also offers double honours degrees in Business Administration & Law, Economics & Law, Law & Life Sciences, and a concurrent degree programme in Law & Public Policy. For graduate students, the law school offers coursework LLM specialisations in areas such as Corporate and Financial Services Law, Intellectual Property & Technology Law, International & Comparative Law, Maritime Law and Asian Legal Studies.
Medicine
The Yong Loo Lin School of Medicine at NUS was first established as the Straits Settlements and Federated Malay States Government Medical School in 1905. The School comprises departments such as the Alice Lee Centre for Nursing Studies, Anaesthesia, Anatomy, Biochemistry, Diagnostic Radiology, Epidemiology and Public Health, Medicine, Microbiology, Obstetrics & Gynaecology, Ophthalmology, Orthopaedic Surgery, Otolaryngology, Paediatrics, Pathology, Pharmacology, Physiology, Psychological Medicine and Surgery.
The School uses the British undergraduate medical system, offering a full-time undergraduate programme leading to the Bachelor of Medicine and Bachelor of Surgery (MBBS). For Nursing, the Bachelor of Science (Nursing) conducted by the Alice Lee Centre for Nursing Studies is offered. The department also offers postgraduate Master of Nursing, Master of Science (Nursing) and Doctor of Philosophy programmes.
Music
The Yong Siew Toh Conservatory of Music (YSTCM) is a collaboration between NUS and the Peabody Institute of Johns Hopkins University. Singapore's first conservatory of music, YSTCM was founded as the Singapore Conservatory of Music in 2001. The School was renamed Yong Siew Toh Conservatory of Music in recognition of a gift from the family of the late Dr Yong Loo Lin in memory of his daughter.
Public health
The Saw Swee Hock School of Public Health is Singapore's first and only tertiary education institution for public health. It traces its beginnings to the University of Malaya's Department of Social Medicine and Public Health, formed in 1948. The school collaborates with partners including the London School of Hygiene and Tropical Medicine, Karolinska Institutet, Harvard School of Public Health and University of Michigan School of Public Health.
Public policy
The Lee Kuan Yew School of Public Policy was formally established in 2004 as an autonomous graduate school of NUS. Although the School was formally launched in 2004, it inherited NUS' Public Policy Programme, which was established in 1992 in partnership with Harvard University's John F Kennedy School of Government.
University Scholars Programme
The University Scholars Programme (USP) is an undergraduate academic programme established in 2001 in NUS. Each year, USP admits around 240 undergraduates from across seven faculties and schools in NUS.
The USP education focuses on strengthening core academic and professional skills – writing and critical thinking, analytical and quantitative reasoning, the ability to ask the right questions and pursue research, and the habit of reflecting upon ideas within a broad intellectual landscape. This is done through an intensive and rigorous multi-disciplinary curriculum, and a rich offering of local and international programmes. USP's focus on core skills complements the students' strengths in their major disciplines, enabling them to make substantial connections across fields, enhancing their intellectual depth and breadth.
USP students reside in Cinnamon College at the NUS University Town. Alongside a vibrant student life, the residential college is a space for discussions on diverse issues, allowing students to develop meaningful engagement with real-world matters.
Yale-NUS College
The Yale-NUS College is a liberal arts college in Singapore which opened in August 2013 as a joint project of Yale University and the National University of Singapore. It exists as an autonomous college within NUS, allowing it greater freedom to develop its own policies while tapping on the existing facilities and resources of the main university. Students who graduate receive a Bachelor of Arts (Honours) or a Bachelor of Science (Honours) degree from Yale-NUS College awarded by NUS. Pericles Lewis, a former professor at Yale, was appointed as the founding president in 2012.
In August 2021, NUS announced that it was going to merge Yale-NUS College with the University Scholars Programme to form a new honours college by 2025; in January 2022, it was announced that the new college would be named NUS College. The merger would mark the dissolution of NUS' partnership with Yale University. With the announcement it was also revealed that the last class of Yale-NUS College students would be those admitted for the 2021–2022 academic year, after which Yale-NUS College would still operate for several years but would no longer accept new students.
Teaching centres
NUS has a variety of teaching centres including:
Centre for Development of Teaching and Learning (CDTL), which is the NUS academic development unit and in that capacity seeks to support teaching so as to improve student learning.
Centre for Instructional Technology (CIT), which provides for the exploration, development and application of digital and audio-visual technologies to support and enhance teaching and learning. This is done through the NUS-developed Integrated Virtual Learning Environment and by developing new applications/services and incorporating multimedia content in courses for academia.
Centre for English Language Communication (CELC).
Institute of Systems Science (ISS), which offers professional information technology continuing education to managers and IT practitioners.
Centre for Teaching and Learning (CTL) at Yale-NUS College supports academic development in the distinctive pedagogy of liberal arts education. Efforts focus on many areas, especially team-based learning, student-centred learning, grading and assessment, effective classroom discussion, impactful feedback, and intercultural engagement across all the divisions – Sciences, Social Sciences, and Humanities.
NUS High School of Mathematics and Science
NUS High School of Mathematics and Science is a school specialising in mathematics and science, and provides secondary and pre-tertiary education to many students with an inclination to these fields.
Research
Among the major research focuses at NUS are biomedical and life sciences, physical sciences, engineering, nanoscience and nanotechnology, materials science and engineering, infocommunication and infotechnology, humanities and social sciences, and defence-related research.
One of several niche research areas of strategic importance to Singapore being undertaken at NUS is bioengineering. Initiatives in this area include bioimaging, tissue engineering and tissue modulation. Another new field which holds much promise is nanoscience and nanotechnology.
Apart from higher-performance but lower-maintenance materials for manufacturing, defence, transportation, space and environmental applications, this field also heralds the development of accelerated biotechnical applications in medicine, health care and agriculture.
Research institutes and centres
Currently, NUS hosts 21 university-level research institutes and centres (RICs) in various fields such as research on Asia, risk management, logistics, engineering sciences, mathematical sciences, biomedical and life sciences, nanotechnology to marine studies. NUS currently hosts four Research Centres of Excellence, namely, the Cancer Science Institute of Singapore, Centre for Quantum Technologies, Mechanobiology Institute and Institute for Functional Intelligent Materials.
Besides university-level RICs, NUS also has close affiliation with many national research centres and institutes.
A special mention is required for The Logistics Institute – Asia Pacific, which is a collaborative effort between NUS and the Georgia Institute of Technology for research and education programmes in logistics. NUS announced its most recent research institute, the Next Age Institute, a partnership with Washington University in St. Louis, in February 2015.
Major research facilities
Comparative Medicine was set up to provide professional and technical service for laboratory animal care, veterinary medical services, and animal research project support for NUS staff and students.
National University Medical Institutes focuses its efforts on the development of centralised research facilities and services for the Yong Loo Lin School of Medicine at NUS and developing research programmes in cancer and cardiovascular diseases.
Entrepreneurship
NUS began its entrepreneurial education endeavours in the 1980s, with the setting up of the Centre for Management of Innovation and Technopreneurship in 1988. In 2001, this was renamed the NUS Entrepreneurship Centre (NEC), and became a division of NUS Enterprise. NUS Enterprise is the entrepreneurial arm of NUS. Its activities include a business incubator, entrepreneurial education, entrepreneurship outreach and technology commercialisation.
The NUS Overseas Colleges (NOC) programme was started in 2001, giving students the opportunity to experience, live, work and study in an entrepreneurial hub. Participants of the programme either spend 6 months or a year overseas, taking courses at partner universities and working in start-ups. The program is offered at twelve different locations: Silicon Valley and New York in the United States, Toronto in Canada, Shanghai, Beijing and Shenzhen in China, Stockholm in Sweden, Munich in Germany, Lausanne in Switzerland, Israel, Singapore and Southeast Asia (Jakarta and Vietnam).
The NUS Industry Liaison Office (ILO) is another department that is involved in the creation of deep tech start-ups. It manages the university's technology transfer and promotes research collaborations with industry and partners. ILO manages NUS intellectual property, commercialises its intellectual assets and facilitates the spinning off of technologies into start-up companies.
Campus facilities and resources
IT and computing services
The IT facilities and network are generally provided by the university's central IT department, the Computer Centre. NUSNET is used in research, teaching, learning and administration. In 2004, a campus-wide grid computing network based on UD Grid MP was deployed, connecting at least 1,000 computers. This became one of the largest such virtual supercomputing facilities in the region.
NUS used Internet2 technology to make distance learning possible. Students from Singapore and Massachusetts Institute of Technology were able to learn and interact in one virtual classroom.
Library services
The NUS Libraries comprises 8 libraries, namely the Central Library, Chinese Library, CJ Koh Law Library, Hon Sui Sen Memorial Library, the Medical Library, Music Library, Science Library and East Asian Institute Library. Its primary clients are NUS and NUS-affiliated research institutes, students, teaching faculty, research and administrative staff members, as well as a sizeable group of external members. Its collection encompasses subjects in architecture, building and real estate, business, dentistry, engineering, computer science, the humanities and social sciences, law, medicine, music, nursing and science. As of June 2017, the collection held 2,354,741 unique titles and 26,074 microform resources.
NUS University Town
The NUS University Town (UTown) opened in August 2011. Located across the NUS Kent Ridge campus, it was built on the site of a former golf course. This is where 2,400 undergraduate students, 1,700 graduate students and 1,000 researchers work, live, and learn in close proximity. There are four residential colleges: Cinnamon College, Tembusu College, College of Alice & Peter Tan, and Residential College 4 – initially named Cinnamon, Tembusu, Angsana and Khaya, respectively. An Education Resource Centre, the Stephen Riady Centre and a Graduate Residence are also located here.
Transportation
The university has a dedicated bus system called the Internal Shuttle Bus which operates both within the Kent Ridge campus and between the Kent Ridge and Bukit Timah campuses.
Student accommodation
NUS has three types of student accommodation: Halls of residence; Student residences; and Residential colleges. There are about 6,000 residential places distributed between Halls of Residence and Student Residences on campus, in addition to around 4,100 students who live in the residential colleges and graduate residences at University Town. There are free Internal Shuttle Buses that ply the entire campus seven days a week.
Halls of residence
NUS has 7 Halls of Residence with about 3,000 residential places. Halls have their own interest groups and regular student productions, in addition to university-wide student co-curricular activities. Halls compete with each other in the Inter-Hall Games.
The Halls of Residence are:
Eusoff Hall
Kent Ridge Hall
King Edward VII Hall
Raffles Hall
Sheares Hall
Temasek Hall
Prince George's Park House
Student residences
NUS has two student residences for undergraduate and graduate students with clusters of 11 to 15 single rooms with their own kitchen and bathroom facilities. Kitchen and dining areas are equipped with basic cooking appliances. The NUS University Town houses the UTown Residences for residents with the option of both apartments and single rooms.
The student residences are:
Prince George's Park Residences
UTown Residences
Residential colleges
The more recent residential developments at NUS are residential colleges, modeled after the residential college systems of universities in the United Kingdom and the United States. Like halls, residential colleges have unique co-curricular activities. Residential colleges also have their own academic programmes, with general education (GE) requirements different from each other and from the rest of the university. The academic programmes in residential colleges take place in small-sized seminar classes.
Cinnamon College
Cinnamon College houses the University Scholars Programme (USP). USP students and faculty are accommodated in 600 rooms.
USP students take modules at the college and follow the current USP curriculum. They are required to take twelve multi-disciplinary modules specially designed for USP students, including Writing and Critical Thinking, Quantitative Reasoning Foundation, and the University Scholars Seminar. Students have various options to fulfil their USP advanced curriculum requirements that include individual research with faculty mentors, and industrial and entrepreneurial attachments.
Tembusu College
Tembusu College is one of the first two Residential Colleges in NUS University Town, an extension to the main NUS campus at Kent Ridge. Tembusu houses mainly undergraduates, in addition to resident faculty, distinguished visiting scholars and a few graduate fellows.
The college offers five multi-disciplinary modules fulfilling the "University-Level Requirements" (2 General Education modules, 2 Breadth modules, and 1 Singapore Studies module) which most NUS undergraduates must read to graduate. Students read the rest of their modules in their home faculties. A University Town Residential Programme Certificate is issued to eligible students, along with the regular degree scroll. Students from non-modular faculties (i.e. Law, Medicine and Dentistry) also belong to the college, but with coursework tailored to their specific programmes. The Rector of Tembusu College is Singapore's Ambassador-at-Large and former United Nations Ambassador Tommy Koh, who is also the former Dean of the NUS Faculty of Law.
College of Alice & Peter Tan
The College of Alice & Peter Tan (CAPT) is a Residential College for all NUS undergraduates. In addition to providing a two-year academic programme (the University Town College Programme), CAPT is distinguished by the vision of helping students engage with the community within and outside of NUS. It consciously weaves the theme of active citizenship and community engagement through its curriculum and other aspects of the student experience.
Residential College 4
Residential College 4 (RC4) is the newest residential college in NUS University Town to offer the University Town College Programme (UTCP). RC4 aims to nurture a generation of "systems citizens" in Singapore, employing systems thinking and system dynamics modelling to elicit mental models and capture the complexity of problems such as population dynamics, sustainability, disease, and healthcare. In addition, RC4 hosts many student interest groups catering to the different preferences of its residents, from groups focused on the arts, fine arts, literature and inner engineering to the various events and activities held in RC4, allowing students to explore avenues that enrich the living experience for themselves and the communities around them.
Ridge View Residential College
Ridge View Residential College (RVRC) was formally established in April 2014, housed in the former Ridge View Residences. It is the only residential college that is situated outside University Town. The low-rise interconnected buildings are nestled against the backdrop of the Kent Ridge Forest, visually distinct with their brick-clad exteriors, open courtyards and heritage trees. The site was the former location for Kent Ridge Hall until November 2002. As the college program gradually evolved and the student community grew, construction began in November 2015 for a new building to complement the needs of the college. The new Annex building was completed in February 2017.
List of principal officers
The following table is a list of the principal officers of the National University of Singapore's predecessors. Note that the office of the President of Raffles College was renamed Principal of Raffles College from 1938.
Alumni
Since its inception in 1905, NUS has had many distinguished alumni from Singapore and Malaysia, including four Singaporean prime ministers and presidents, two Malaysian prime ministers, and many politicians, judges, business executives, educators and local celebrities. It counts among its graduates heads of state Abdul Razak Hussein, Benjamin Sheares, Goh Chok Tong, Mahathir Mohamad and S. R. Nathan. A number of its graduates are also notable politicians, such as Rais Yatim, Malaysia's former minister for information, communications and culture, Ng Eng Hen, Singapore's minister of defence, and S. Jayakumar, Singapore's former deputy prime minister, United Nations representative, and minister for law, home affairs, labour, and foreign affairs.
Business leaders such as the former chairman of the Singapore Exchange and the Singapore Tourism Board Chew Choon Seng, the CEO of the Hyflux Group Olivia Lum, the CEO of Temasek Holdings Ho Ching, the chairman of SPRING Singapore Philip Yeo, and the CEO of Razer Inc. Min-Liang Tan are alumni of NUS.
In international politics, the school has produced the director-general of the World Health Organization Margaret Chan, the former president of the United Nations Security Council Kishore Mahbubani, and the vice-president of the International Olympic Committee Ng Ser Miang.
In Singapore's legal sector, NUS served as Singapore's only law school for half a century, until the Singapore Management University's law school was set up in 2007. Most of Singapore's judges therefore come from the school, as do prominent legal figures such as K. Shanmugam, Singapore's minister for law and for home affairs and former minister for foreign affairs, the fourth Chief Justice of Singapore Sundaresh Menon, and the third Chief Justice of Singapore Chan Sek Keong.
In academia, NUS alumni include Yoke San Reynolds, former vice-president of finance at the University of Virginia and Cornell University, and Wang Gungwu, former vice-chancellor of the University of Hong Kong.
See also
National University Hospital
Nanyang University
S*, a collaboration between seven universities and the Karolinska Institutet for training in bioinformatics and genomics
References
External links
National University of Singapore official site
ASEAN University Network
Educational institutions established in 1980
Education in Singapore
1980 establishments in Singapore
Queenstown, Singapore
Tanglin
Autonomous Universities in Singapore |
4672881 | https://en.wikipedia.org/wiki/Bandwidth%20management | Bandwidth management | Bandwidth management is the process of measuring and controlling the communications (traffic, packets) on a network link, to avoid filling the link to capacity or overfilling the link, which would result in network congestion and poor performance of the network. Bandwidth is described by bit rate and measured in units of bits per second (bit/s) or bytes per second (B/s).
Bandwidth management mechanisms and techniques
Bandwidth management mechanisms may be used to further engineer performance and include:
Traffic shaping (rate limiting):
Token bucket (a minimal sketch is given after this list)
Leaky bucket
TCP rate control - artificially adjusting TCP window size as well as controlling the rate of ACKs being returned to the sender
Scheduling algorithms:
Weighted fair queuing (WFQ)
Class based weighted fair queuing
Weighted round robin (WRR)
Deficit weighted round robin (DWRR)
Hierarchical Fair Service Curve (HFSC)
Congestion avoidance:
RED, WRED - lessen the possibility of port queue buffer tail-drops, which lowers the likelihood of TCP global synchronization
Policing (marking/dropping the packet in excess of the committed traffic rate and burst size)
Explicit congestion notification
Buffer tuning - adjusts the way a router allocates buffers from its available memory, helping to prevent packet drops during temporary bursts of traffic
Bandwidth reservation protocols / algorithms
Resource Reservation Protocol (RSVP) - the means by which applications communicate their requirements to the network in an efficient and robust manner
Constraint-based Routing Label Distribution Protocol (CR-LDP)
Top-nodes algorithm
Traffic classification - categorising traffic according to some policy in order that the above techniques can be applied to each class of traffic differently
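The token bucket mentioned above is the classic rate-limiting algorithm: tokens accrue at a fixed rate up to a burst capacity, and traffic is admitted only while tokens remain. The following is a minimal Python sketch of the idea; the class name, parameters and byte-per-token convention are illustrative assumptions, not part of any standard API.

```python
import time

class TokenBucket:
    """Token bucket rate limiter: tokens accrue at `rate` per second,
    up to `capacity`; a packet needing `n` tokens is admitted only if
    enough tokens are available, otherwise it is dropped (or queued)."""

    def __init__(self, rate, capacity):
        self.rate = rate            # token refill rate (tokens/second)
        self.capacity = capacity    # maximum burst size (tokens)
        self.tokens = capacity      # start with a full bucket
        self.last = time.monotonic()

    def allow(self, n=1):
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n
            return True             # packet conforms: transmit
        return False                # packet exceeds the rate: drop or queue

# Example: 1000 tokens/s with bursts of up to 500 tokens,
# where one token stands for one byte of payload.
bucket = TokenBucket(rate=1000, capacity=500)
print(bucket.allow(200))  # True  - within the burst allowance
print(bucket.allow(400))  # False - bucket temporarily exhausted
```

The leaky bucket differs only in that it drains the queue at a constant rate, smoothing bursts rather than permitting them.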
Link performance
Issues which may limit the performance of a given link include:
TCP determines the capacity of a connection by flooding it until packets start being dropped (Slow-start)
Queueing in routers results in higher latency and jitter as the network approaches (and occasionally exceeds) capacity
TCP global synchronization when the network reaches capacity results in waste of bandwidth
Burstiness of web traffic requires spare bandwidth to rapidly accommodate the bursty traffic
Lack of widespread support for explicit congestion notification and quality of service management on the Internet
Internet Service Providers typically retain control over queue management and quality of service at their end of the link
Window shaping allows higher-end products to reduce traffic flows, which reduces queue depth and allows more users to share bandwidth fairly
Tools and techniques
A packet sniffer is a program or a device that eavesdrops on network traffic by capturing the information traveling over a network (see the sketch after this list)
Network traffic measurement
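As a small illustration of the packet-sniffing concept above, the sketch below uses Scapy, a widely used third-party Python packet-capture library (one possible tool among many, not one prescribed here); capturing live traffic typically requires administrator privileges.

```python
from scapy.all import sniff  # Scapy: third-party packet capture/crafting library

def show(packet):
    # Print a one-line summary of each captured packet (addresses, protocol)
    print(packet.summary())

# Capture 10 packets from the default interface and summarise them;
# running this usually requires root/administrator privileges.
sniff(count=10, prn=show)
```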
See also
Bandwidth cap
Bandwidth management is a subset of network management and performance management
Bandwidth management using NetFlow and IPFIX data
Bandwidth throttling
Customer service unit, a device to balance the data rate on a user's telecommunication equipment
INASP runs bandwidth management training workshops and produces reports
Network congestion avoidance lists some techniques for prevention and management of congestion on routers
Network traffic measurement is a subset of network monitoring
Traffic shaping and rate limiting are bandwidth management (traffic control) techniques
References
"Deploying IP and MPLS QoS for Multiservice Networks: Theory and Practice" by John Evans, Clarence Filsfils (Morgan Kaufmann, 2007, )
External links
Bandwidth Management Tools, Strategies, and Issues
TechSoup for Libraries: Bandwidth Management
The True Price of Bandwidth Monitoring
Network performance
de:Netzwerk-Scheduler
Sniffers Basics and Detection |
12970258 | https://en.wikipedia.org/wiki/Storm%20botnet | Storm botnet | The Storm botnet or Storm worm botnet (also known as Dorf botnet and Ecard malware) is a remotely controlled network of "zombie" computers (or "botnet") that have been linked by the Storm Worm, a Trojan horse spread through e-mail spam. At its height in September 2007, the Storm botnet was running on anywhere from 1 million to 50 million computer systems, and accounted for 8% of all malware on Microsoft Windows computers. It was first identified around January 2007, having been distributed by email with subjects such as "230 dead as storm batters Europe," giving it its well-known name. The botnet began to decline in late 2007, and by mid-2008 had been reduced to infecting about 85,000 computers, far less than it had infected a year earlier.
As of December 2012, the original creators of Storm have not been found. The Storm botnet has displayed defensive behaviors that indicated that its controllers were actively protecting the botnet against attempts at tracking and disabling it, by specifically attacking the online operations of some security vendors and researchers who had attempted to investigate it. Security expert Joe Stewart revealed that in late 2007, the operators of the botnet began to further decentralize their operations, in possible plans to sell portions of the Storm botnet to other operators. It was reportedly powerful enough to force entire countries off the Internet, and was estimated to be capable of executing more instructions per second than some of the world's top supercomputers. The United States Federal Bureau of Investigation considered the botnet a major risk to increased bank fraud, identity theft, and other cybercrimes.
Origins
First detected on the Internet in January 2007, the Storm botnet and worm are so-called because of the storm-related subject lines its infectious e-mail employed initially, such as "230 dead as storm batters Europe." Later provocative subjects included "Chinese missile shot down USA aircraft", and "U.S. Secretary of State Condoleezza Rice has kicked German Chancellor Angela Merkel." It is suspected by some information security professionals that well-known fugitive spammers, including Leo Kuvayev, may have been involved in the operation and control of the Storm botnet. According to technology journalist Daniel Tynan, writing under his "Robert X. Cringely" pseudonym, a great portion of the fault for the existence of the Storm botnet lay with Microsoft and Adobe Systems. Other sources state that Storm Worm's primary method of victim acquisition was through enticing users via frequently changing social engineering (confidence trickery) schemes. According to Patrick Runald, the Storm botnet had a strong American focus, and likely had agents working to support it within the United States. Some experts, however, believe the Storm botnet controllers were Russian, some pointing specifically at the Russian Business Network, citing that the Storm software mentions a hatred of the Moscow-based security firm Kaspersky Lab, and includes the Russian word "buldozhka," which means "bulldog."
Composition
The botnet, or zombie network, comprises computers running Microsoft Windows as their operating system. Once infected, a computer becomes known as a bot. This bot then performs automated tasks—anything from gathering data on the user, to attacking web sites, to forwarding infected e-mail—without its owner's knowledge or permission. Estimates indicate that 5,000 to 6,000 computers are dedicated to propagating the spread of the worm through the use of e-mails with infected attachments; 1.2 billion virus messages had been sent by the botnet through September 2007, including a record 57 million on August 22, 2007 alone. Lawrence Baldwin, a computer forensics specialist, was quoted as saying, "Cumulatively, Storm is sending billions of messages a day. It could be double digits in the billions, easily." One of the methods used to entice victims to infection-hosting web sites is the offer of free music from artists such as Beyoncé Knowles, Kelly Clarkson, Rihanna, The Eagles, Foo Fighters, R. Kelly, and Velvet Revolver. Signature-based detection, the main defense of most computer systems against virus and malware infections, is hampered by the large number of Storm variants.
Back-end servers that control the spread of the botnet and Storm worm automatically re-encode their distributed infection software twice an hour for new transmissions, making it difficult for anti-virus vendors to stop the virus and infection spread. Additionally, the location of the remote servers which control the botnet is hidden behind a constantly changing DNS technique called 'fast flux', making it difficult to find and stop virus hosting sites and mail servers. In short, the name and location of such machines are frequently changed and rotated, often on a minute by minute basis. The Storm botnet's operators control the system via peer-to-peer techniques, making external monitoring and disabling of the system more difficult. There is no central "command-and-control point" in the Storm botnet that can be shut down. The botnet also makes use of encrypted traffic. Efforts to infect computers usually revolve around convincing people to download e-mail attachments which contain the virus through subtle manipulation. In one instance, the botnet's controllers took advantage of the National Football League's opening weekend, sending out mail offering "football tracking programs" which did nothing more than infect a user's computer. According to Matt Sergeant, chief anti-spam technologist at MessageLabs, "In terms of power, [the botnet] utterly blows the supercomputers away. If you add up all 500 of the top supercomputers, it blows them all away with just 2 million of its machines. It's very frightening that criminals have access to that much computing power, but there's not much we can do about it." It is estimated that only a fraction of the total capacity and power of the Storm botnet is currently being used.
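To make the "fast flux" idea concrete, the following is a minimal sketch (not drawn from any Storm analysis) of how an observer might watch a domain's advertised addresses rotate over time; the domain name is a placeholder and the polling interval is arbitrary.

```python
import socket
import time

def resolve(hostname):
    """Return the set of IPv4 addresses currently advertised for hostname."""
    infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
    return {info[4][0] for info in infos}

# Poll a (hypothetical) domain and watch the advertised addresses change.
# A benign domain keeps a stable set; a fast-flux domain cycles through
# many compromised hosts using very short DNS TTLs.
seen = set()
for _ in range(5):
    addrs = resolve("example.com")  # placeholder domain
    new = addrs - seen
    if new:
        print("new addresses:", sorted(new))
    seen |= addrs
    time.sleep(30)  # short TTLs mean answers can change between polls
```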
Computer security expert Joe Stewart detailed the process by which compromised machines join the botnet: attempts to join the botnet are made by launching a series of EXE files on the compromised machine, in stages. Usually, they are named in a sequence from game0.exe through game5.exe, or similar. It will then continue launching executables in turn. They typically perform the following:
game0.exe - Backdoor/downloader
game1.exe - SMTP relay
game2.exe - E-mail address stealer
game3.exe - E-mail virus spreader
game4.exe - Distributed Denial of Service (DDoS) attack tool
game5.exe - Updated copy of Storm Worm dropper
At each stage the compromised system will connect into the botnet; fast flux DNS makes tracking this process exceptionally difficult.
This code is run from %windir%\system32\wincom32.sys on a Windows system, via a kernel rootkit, and all connections back to the botnet are sent through a modified version of the eDonkey/Overnet communications protocol.
Method
The Storm botnet and its variants employ a variety of attack vectors, and a variety of defensive steps exist as well. The Storm botnet was observed to be defending itself, and attacking computer systems that scanned for Storm virus-infected computer systems online. The botnet will defend itself with DDoS counter-attacks, to maintain its own internal integrity. At certain points in time, the Storm worm used to spread the botnet has attempted to release hundreds or thousands of versions of itself onto the Internet, in a concentrated attempt to overwhelm the defenses of anti-virus and malware security firms. According to Joshua Corman, an IBM security researcher, "This is the first time that I can remember ever seeing researchers who were actually afraid of investigating an exploit." Researchers are still unsure if the botnet's defenses and counterattacks are a form of automation, or manually executed by the system's operators. "If you try to attach a debugger, or query sites it's reporting into, it knows and punishes you instantaneously. [Over at] SecureWorks, a chunk of it DDoS-ed [distributed-denial-of-service attacked] a researcher off the network. Every time I hear of an investigator trying to investigate, they're automatically punished. It knows it's being investigated, and it punishes them. It fights back", Corman said.
Spameater.com as well as other sites such as 419eater.com and Artists Against 419, both of which deal with 419 spam e-mail fraud, have experienced DDoS attacks, temporarily rendering them completely inoperable. The DDoS attacks consisted of making massed parallel network calls to those and other target IP addresses, overloading the servers' capacities and preventing them from responding to requests. Other anti-spam and anti-fraud groups, such as the Spamhaus Project, were also attacked. The webmaster of Artists Against 419 said that the website's server succumbed after the attack increased to over 100 Mbit/s. Similar attacks were perpetrated against over a dozen anti-fraud site hosts. Jeff Chan, a spam researcher, stated, "In terms of mitigating Storm, it's challenging at best and impossible at worst since the bad guys control many hundreds of megabits of traffic. There's some evidence that they may control hundreds of Gigabits of traffic, which is enough to force some countries off the Internet."
The Storm botnet's systems also take steps to defend themselves locally, on victims' computer systems. The botnet, on some compromised systems, creates a computer process on the Windows machine that notifies the Storm systems whenever a new program or other processes begin. Previously, the Storm worm would locally tell other programs—such as anti-virus or anti-malware software—simply not to run. However, according to IBM security research, versions of Storm also now simply "fool" the local computer system into thinking it has run the hostile program successfully, but in fact, they are not doing anything. "Programs, including not just AV exes, dlls and sys files, but also software such as the P2P applications BearShare and eDonkey, will appear to run successfully, even though they didn't actually do anything, which is far less suspicious than a process that gets terminated suddenly from the outside", said Richard Cohen of Sophos. Compromised users, and related security systems, will assume that security software is running successfully when it in fact is not.
On September 17, 2007, a Republican Party website in the United States was compromised, and used to propagate the Storm worm and botnet. In October 2007, the botnet took advantage of flaws in YouTube's captcha application on its mail systems, to send targeted spam e-mails to Xbox owners with a scam involving winning a special version of the video game Halo 3. Other attack methods include using appealing animated images of laughing cats to get people to click on a trojan software download, and tricking users of Yahoo!'s GeoCities service to download software that was claimed to be needed to use GeoCities itself. The GeoCities attack in particular was called a "full-fledged attack vector" by Paul Ferguson of Trend Micro, and implicated members of the Russian Business Network, a well-known spam and malware service. On Christmas Eve in 2007, the Storm botnet began sending out holiday-themed messages revolving around male interest in women, with such titles as "Find Some Christmas Tail", "The Twelve Girls of Christmas", and "Mrs. Claus Is Out Tonight!" and photos of attractive women. It was described as an attempt to draw more unprotected systems into the botnet and boost its size over the holidays, when security updates from protection vendors may take longer to be distributed. A day after the e-mails with Christmas strippers were distributed, the Storm botnet operators immediately began sending new infected e-mails that claimed to wish their recipients a "Happy New Year 2008!"
In January 2008, the botnet was detected for the first time to be involved in phishing attacks against major financial institutions, targeting both Barclays and Halifax.
Encryption and sales
Around October 15, 2007, it was uncovered that portions of the Storm botnet and its variants could be for sale. This is being done by using unique security keys in the encryption of the botnet's Internet traffic and information. The unique keys will allow each segment, or sub-section of the Storm botnet, to communicate with a section that has a matching security key. However, this may also allow people to detect, track, and block Storm botnet traffic in the future, if the security keys have unique lengths and signatures. Computer security vendor Sophos has agreed with the assessment that the partitioning of the Storm botnet indicated likely resale of its services. Graham Cluley of Sophos said, "Storm's use of encrypted traffic is an interesting feature which has raised eyebrows in our lab. Its most likely use is for the cybercriminals to lease out portions of the network for misuse. It wouldn't be a surprise if the network was used for spamming, distributed denial-of-service attacks, and other malicious activities." Security experts reported that if Storm is broken up for the malware market, in the form of a "ready-to-use botnet-making spam kit", the world could see a sharp rise in the number of Storm related infections and compromised computer systems. The encryption only seems to affect systems compromised by Storm from the second week of October 2007 onwards, meaning that any of the computer systems compromised after that time frame will remain difficult to track and block.
Within days of the discovery of this segmenting of the Storm botnet, spam e-mail from the new subsection was uncovered by major security vendors. On the evening of October 17, security vendors began seeing new spam with embedded MP3 sound files, which attempted to trick victims into investing in a penny stock, as part of an illegal pump-and-dump stock scam. It was believed that this was the first-ever spam e-mail scam that made use of audio to fool victims. Unlike nearly all other Storm-related e-mails, however, these new audio stock scam messages did not include any sort of virus or Storm malware payload; they were simply part of the stock scam.
In January 2008, the botnet was detected for the first time to be involved in phishing attacks against the customers of major financial institutions, targeting banking establishments in Europe including Barclays, Halifax and the Royal Bank of Scotland. The unique security keys used indicated to F-Secure that segments of the botnet were being leased.
Claimed decline of the botnet
On September 25, 2007, it was estimated that a Microsoft update to the Windows Malicious Software Removal Tool (MSRT) may have helped reduce the size of the botnet by up to 20%. The new patch, as claimed by Microsoft, removed Storm from approximately 274,372 infected systems out of 2.6 million scanned Windows systems. However, according to senior security staff at Microsoft, "the 180,000+ additional machines that have been cleaned by MSRT since the first day are likely to be home user machines that were not notably incorporated into the daily operation of the 'Storm' botnet," indicating that the MSRT cleaning may have been symbolic at best.
As of late October 2007, some reports indicated that the Storm botnet was losing the size of its Internet footprint, and was significantly reduced in size. Brandon Enright, a University of California at San Diego security analyst, estimated that the botnet had by late October fallen to a size of approximately 160,000 compromised systems, from Enright's previous estimated high in July 2007 of 1,500,000 systems. Enright noted, however, that the botnet's composition was constantly changing, and that it was still actively defending itself against attacks and observation. "If you're a researcher and you hit the pages hosting the malware too much… there is an automated process that automatically launches a denial of service [attack] against you", he said, and added that his research caused a Storm botnet attack that knocked part of the UC San Diego network offline.
The computer security company McAfee is reported as saying that the Storm Worm would be the basis of future attacks. Craig Schmugar, a noted security expert who discovered the Mydoom worm, called the Storm botnet a trend-setter, which has led to more usage of similar tactics by criminals. One such derivative botnet has been dubbed the "Celebrity Spam Gang", due to their use of similar technical tools as the Storm botnet controllers. Unlike the sophisticated social engineering that the Storm operators use to entice victims, however, the Celebrity spammers make use of offers of nude images of celebrities such as Angelina Jolie and Britney Spears. Cisco Systems security experts stated in a report that they believe the Storm botnet would remain a critical threat in 2008, and said they estimated that its size remained in the "millions".
As of early 2008, the Storm botnet also found business competition in its black hat economy, in the form of Nugache, another similar botnet which was first identified in 2006. Reports have indicated a price war may be underway between the operators of both botnets, for the sale of their spam E-mail delivery. Following the Christmas and New Year's holidays bridging 2007–2008, the researchers of the German Honeynet Project reported that the Storm botnet may have increased in size by up to 20% over the holidays. The MessageLabs Intelligence report dated March 2008 estimates that over 20% of all spam on the Internet originates from Storm.
Present state of the botnet
The Storm botnet was sending out spam for more than two years until its decline in late 2008. One factor in this—on account of making it less interesting for the creators to maintain the botnet—may have been the Stormfucker tool, which made it possible to take control over parts of the botnet.
Stormbot 2
On April 28, 2010, McAfee made an announcement that the so-called "rumors" of a Stormbot 2 were verified. Mark Schloesser, Tillmann Werner, and Felix Leder, the German researchers who did a lot of work in analyzing the original Storm, found that around two-thirds of the "new" functions are a copy and paste from the last Storm code base. The only thing missing is the P2P infrastructure, perhaps because of the tool which used P2P to bring down the original Storm. Honeynet blog dubbed this Stormbot 2.
See also
Alureon
Bagle (computer worm)
Botnet
Conficker
E-mail spam
Gameover ZeuS
Helpful worm
Internet crime
Internet security
McColo
Operation: Bot Roast
Rustock botnet
Regin (malware)
Srizbi botnet
Zombie (computer science)
ZeroAccess botnet
Zeus (malware)
References
External links
"The Storm worm: can you be certain your machine isn't infected?" The target page is no longer on this website.
"TrustedSource Storm Tracker": Top Storm domains and latest web proxies The target page is no longer on this website.
Internet security
Multi-agent systems
Distributed computing projects
Spamming
Botnets |
36089423 | https://en.wikipedia.org/wiki/SolveIT%20Software | SolveIT Software | SolveIT Software Pty Ltd is a provider of advanced planning and scheduling enterprise software for supply and demand optimisation and predictive modelling. The company is based in Adelaide, South Australia, and 70% of its turnover is generated from software deployed in the mining and bulk material handling sectors.
History
The company was set up in 2005 by four academics who were also experienced business people, all recent immigrants to Australia. The team was headed by ex-Ernst & Young consultant Matthew Michalewicz, who had moved to Adelaide in 2004 after selling his last company, NuTech Solutions. The other three partners were Zbigniew Michalewicz PhD, Martin Schmidt and Constantin Chiriac, all four of whom were co-authors of the book Adaptive Business Intelligence.
The company first developed an optimization and predictive modeling platform based on artificial intelligence, and then built its supply chain applications for planning, scheduling, and demand forecasting on this platform. Early customers included Orlando Wines, ABB Grain, the Fosters wine brands and later Pernod Ricard, all of which were also located in the Barossa Valley region.
In 2008, Rio Tinto Iron Ore asked the company to improve its mining planning and scheduling operations based at Pilbara. SolveIT succeeded in applying its advanced planning and scheduling product, based on non-linear optimization, to the Rio Tinto mine scheduling problem, after many other vendors had failed over a period of ten years.
With 30 employees at this point, it then won an additional contract in the mining sector with the BHP Mitsubishi Alliance, leading to subsequent tender wins in the sector, including: BHP, CBH Group, Fortescue Metals Group, Hills Holdings, Pacific National, and Xstrata.
On 3 September 2012, SolveIT announced it was acquired by Schneider Electric, a global specialist in energy management.
Operations
Headquartered in Adelaide, the company has over 150 staff based across operational offices in: Melbourne; Brisbane; Perth; and Chişinău, Moldova.
The company develops advanced planning and scheduling business optimisation software, which helps manage complex operations using artificial intelligence. Most of the products were initially developed around the key South Australian industries of wine and grain handling, and today SolveIT has a specialist mining division due to early adoption of the company's software within the mining market. The software helps companies accurately predict and plan their production, supply chain, shipping and currency hedging.
Due to the scientific optimisation components embedded in the company's software products, it sometimes uses a prize-based system to recruit the required high-level talent. In 2011, the company ran a competition based on a magic square problem; it was won by University of Nottingham graduate Yuri Bykov, who developed a program that solved a constrained version of a 2600 by 2600 magic square within a minute.
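For context, a magic square is an n-by-n grid of the numbers 1 to n² in which every row, column and main diagonal has the same sum. The sketch below shows the classic Siamese construction for odd orders; it only illustrates the underlying puzzle, not the constrained variant or the heuristic search method used in the winning entry.

```python
def siamese_magic_square(n):
    """Construct an n-by-n magic square for odd n using the Siamese method:
    start in the middle of the top row and repeatedly move up-and-right,
    wrapping around the edges; drop down one row when the cell is taken."""
    assert n % 2 == 1, "the Siamese method works for odd orders only"
    square = [[0] * n for _ in range(n)]
    row, col = 0, n // 2
    for k in range(1, n * n + 1):
        square[row][col] = k
        r, c = (row - 1) % n, (col + 1) % n
        if square[r][c]:            # occupied: move down instead
            row = (row + 1) % n
        else:
            row, col = r, c
    return square

# Every row, column and main diagonal of an order-n magic square
# sums to the magic constant n*(n*n + 1)/2 (65 for n = 5).
sq = siamese_magic_square(5)
assert all(sum(r) == 65 for r in sq)
```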
In 2011, the company won the Australian National iAward in the e-Logistics and Supply Chain category for its Supply Chain Network Optimiser (SCNO). In February 2012, SolveIT and Schneider Electric became co-organisers of the Integrated Planning and Optimisation Summit, held at the Adelaide Convention Centre.
Products/Services
Advanced Planning & Scheduling (APS): Enterprise software for optimising complex planning and scheduling activities, especially those that are heavily constrained or require multi-stage processing. To allow for optimisation of a wide variety of planning and scheduling activities, APS can include variable inputs covering specific constraints, business rules, and processes. It is deployed in planning for production, workforce, maintenance and equipment scheduling.
Supply Chain Network Optimisation (SCNO): a whole-of-supply-chain system based on proprietary non-linear optimisation, prediction, and what-if analysis. Capable of managing and optimising complete operational and strategic supply chain activities, it claims to cut transportation costs, working capital requirements, and stock-outs. SCNO was named as a finalist for the 2012 Supply Chain Distinction Awards, held in Berlin in June 2012.
Demand Planning & Forecasting (DPF): based on multiple prediction algorithms and techniques, DPF can be applied to: replenishment; promotions and incentives; pricing and discounting; cross selling and upselling; product churn.
Predictive Modelling: based on a combination of both classical forecasting methods and new, non-traditional prediction technologies, which are tuned to each customer's business requirements and circumstances. The service includes project planning, data validation, model training and calibration, onsite support, and parallel running.
Technology Platforms: based on proprietary adaptive technologies that work competitively and cooperatively together as a "hybrid system", it claims to produce more accurate forecasts and optimal schedules and plans in less time. The technology is at the centre of Adaptive Business Intelligence, which allows the systems and predictive models to "learn" from previous experiences and "self-validate" as changes occur in the customer's marketplace.
Services: cover supply chain consulting, IT roadmaps, and data mining and analytics.
The company's mining division provides integrated planning, scheduling and optimisation for the whole traditional mining supply chain network of mine, process plant, transport network, port and trading desk. Variables into the systems allow for asset management, workforce variability, maintenance, accommodation and market factors.
Publications
References
External links
Company website
Adaptive Business Intelligence®
Software companies of Australia
Business software companies
Data analysis software
Data mining and machine learning software
ERP software companies
Production and manufacturing software
Companies based in Adelaide
Software companies established in 2005
Evolutionary computation
Australian brands
Schneider Electric
Australian companies established in 2005 |
11499117 | https://en.wikipedia.org/wiki/Microsoft%20PixelSense | Microsoft PixelSense | Microsoft PixelSense (formerly called Microsoft Surface) is an interactive surface computing platform that allows one or more people to use and touch real-world objects, and share digital content at the same time. The PixelSense platform consists of software and hardware products that combine vision based multitouch PC hardware, 360-degree multiuser application design, and Windows software to create a natural user interface (NUI).
Overview
Microsoft Surface 1.0, the first version of PixelSense, was announced on May 29, 2007, at the D5 Conference. It shipped to customers in 2008 as an end-to-end solution with Microsoft producing and selling the combined hardware/software platform. It is a 30-inch (76 cm) 4:3 rear projection display (1024×768) with integrated PC and five near-infrared (IR) cameras that can see fingers and objects placed on the display. The display is placed in a horizontal orientation, giving it a table-like appearance. The product and its applications are designed so that several people can approach the display from all sides to simultaneously share and interact with digital content. The cameras’ vision capabilities enable the product to see a near-IR image of what’s placed on the screen, captured at approximately 60 times per second. The Surface platform processing identifies three types of objects touching the screen: fingers, tags, and blobs. Raw vision data is also available and can be used in applications. The device is optimized to recognize 52 simultaneous multitouch points of contact. Microsoft Corporation produced the hardware and software for the Microsoft Surface 1.0 product. Sales of Microsoft Surface 1.0 were discontinued in 2011 in anticipation of the release of the Samsung SUR40 for Microsoft Surface and the Microsoft Surface 2.0 software platform.
Microsoft and Samsung partnered to announce the current version of PixelSense, the Samsung SUR40 for Microsoft Surface (“SUR40”), at the Consumer Electronics Show (CES) in 2011. Samsung began shipping the new SUR40 hardware with the Microsoft Surface 2.0 software platform to customers in early 2012.
The Samsung SUR40 is a 40-inch (102 cm) 16:9 LED backlit LCD (1920×1080) with integrated PC and PixelSense technology, which replaces the cameras in the previous product. PixelSense technology enables Samsung and Microsoft to reduce the thickness of the product from 22 in (56 cm) to 4 in (10 cm). The size reduction enables the product to be placed horizontally, and adds the capability to be mounted vertically while retaining the ability to recognize fingers, tags, blobs and utilize raw vision data. Samsung produces the hardware and Microsoft produces the software platform for the SUR40.
Target market
PixelSense is designed primarily for use by commercial customers to use in public settings. People interact with the product using direct touch interactions and by placing objects on the screen. Objects of a specific size and shape, or with tag patterns, can be uniquely identified to initiate a preprogrammed response by the computer. The device does not require the use of a traditional PC mouse or keyboard, and generally does not require training or foreknowledge to operate. Additionally, the system is designed to interact with several people at the same time so that content can be shared without the limitations of a single-user device. These combined characteristics place the Microsoft Surface platform in the category of so-called natural user interface (NUI), the apparent successor to the graphical user interface (GUI) systems popularized in the 1980s and 1990s.
Microsoft states that sales of PixelSense are targeted toward the following industry verticals: retail, media and entertainment, healthcare, financial services, education, and government. PixelSense is available for sale in over 40 countries, including United States, Canada, Austria, Belgium, Denmark, France, Germany, Ireland, Italy, Norway, Netherlands, Qatar, Saudi Arabia, Spain, Sweden, Switzerland, United Arab Emirates (UAE), United Kingdom (UK), Australia, Korea, India, Singapore, and Hong Kong.
History
The idea for the product was initially conceptualized in 2001 by Steven Bathiche of Microsoft Hardware and Andy Wilson of Microsoft Research.
In October 2001, DJ Kurlander, Michael Kim, Joel Dehlin, Bathiche and Wilson formed a virtual team to bring the idea to the next stage of development.
In 2003, the team presented the idea to the Microsoft Chairman Bill Gates, in a group review. Later, the virtual team was expanded and a prototype nicknamed T1 was produced within a month. The prototype was based on an IKEA table with a hole cut in the top and a sheet of architect vellum used as a diffuser. The team also developed some applications, including pinball, a photo browser, and a video puzzle. Over the next year, Microsoft built more than 85 prototypes. The final hardware design was completed in 2005.
A similar concept was used in the 2002 science fiction movie Minority Report. As noted in the DVD commentary, the director Steven Spielberg stated the concept of the device came from consultation with Microsoft during the making of the movie. One of the film's technology consultant's associates from MIT later joined Microsoft to work on the project.
The technology was unveiled under the "Microsoft Surface" name by Microsoft CEO Steve Ballmer on May 30, 2007, at The Wall Street Journal's 'D: All Things Digital' conference in Carlsbad, California. Surface Computing is part of Microsoft's Productivity and Extended Consumer Experiences Group, which is within the Entertainment & Devices division. The first few companies slated to deploy it were Harrah's Entertainment, Starwood, T-Mobile and a distributor, International Game Technology.
On April 17, 2008, AT&T became the first retailer to sell the product. In June 2008 Harrah’s Entertainment launched Microsoft Surface at Rio iBar and Disneyland launched it in Tomorrowland, Innoventions Dream Home. On August 13, 2008, Sheraton Hotels introduced it in their hotel lobbies at 5 locations. On September 8, 2008, MSNBC began using it to work with election maps for the 2008 U.S. Presidential Election on air.
On June 18, 2012, the product was re-branded under the name "Microsoft PixelSense" as a result of the company adopting the Surface brand for its newly unveiled series of tablet PCs.
Features
Microsoft identifies four main components of the PixelSense interface: direct interaction, multi-touch contact, a multi-user experience, and object recognition. Direct interaction refers to the user's ability to simply reach out and touch the interface of an application in order to interact with it, without the need for a mouse or keyboard. Multi-touch contact refers to the ability to have multiple contact points with an interface, unlike with a mouse, where there is only one cursor. Multi-user experience is a benefit of multi-touch: several people can orient themselves on different sides of the surface to interact with an application simultaneously. Object recognition refers to the device's ability to recognize the presence and orientation of tagged objects placed on top of it.
The technology allows non-digital objects to be used as input devices. In one example, a normal paint brush was used to create a digital painting in the software. This is made possible by the fact that, in using cameras for input, the system does not rely on restrictive properties required of conventional touchscreen or touchpad devices such as the capacitance, electrical resistance, or temperature of the tool used (see Touchscreen).
In the old technology, the computer's "vision" was created by a near-infrared, 850-nanometer-wavelength LED light source aimed at the surface. When an object touched the tabletop, the light was reflected to multiple infrared cameras with a net resolution of 1024×768, allowing the system to sense and react to items touching the tabletop.
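As a loose illustration of this kind of vision-based sensing (a generic sketch, not Microsoft's implementation; the simulated frame, threshold value and library choice are all assumptions), the following thresholds an infrared-style intensity frame and locates bright contact regions:

```python
import numpy as np
from scipy import ndimage

# Simulated 8-bit IR frame: mostly dark, with two bright "finger" spots
frame = np.zeros((48, 64), dtype=np.uint8)
frame[10:14, 20:24] = 200   # first contact
frame[30:35, 40:44] = 220   # second contact

mask = frame > 128                     # threshold: bright pixels are contacts
labels, count = ndimage.label(mask)    # group connected bright pixels
centers = ndimage.center_of_mass(frame, labels, range(1, count + 1))
print(count, "contacts at", [(round(r), round(c)) for r, c in centers])
```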
The system ships with basic applications, including photos, music, virtual concierge, and games, that can be customized for the customers.
A feature that comes preinstalled is the "Attract" application, an image of water with leaves and rocks within it. By touching the screen, users can create ripples in the water, much like a real stream. Additionally, the pressure of touch alters the size of the ripple created, and objects placed into the water create a barrier that ripples bounce off, just as they would in a real pond.
The technology used in newer devices allows recognition of fingers, tag, blob, raw data, and objects that are placed on the screen, allowing vision-based interaction without the use of cameras. Sensors in the individual pixels in the display register what is touching the screen.
Hardware specifications
Microsoft Surface 1.0
Software development kit (SDK): Microsoft Surface 1.0
Form factor usage: Tables and counters
Display + vision input technology: Rear projection DLP w/cameras
Price: Starting at $10,000 USD
Weight: 198 lbs (90 kg)
Physical dimensions (L × W × H): 42.5 × 27 × 21 in (108 × 68.6 × 53.3 cm)
CPU: Intel Core 2 Duo E6400 2.13 GHz processor
Graphics (GPU): ATI Radeon X1650 – 256 MB
Memory: 2 GB DDR2
Storage (hard drive): 160 GB HDD
Display size: 30 in (76.2 cm) diagonal
Display resolution: 1024×768 – 4:3 aspect ratio
Extensions (ports): XGA (DE-15) video out, RGB analog component video out, RCA analog component audio out, 4 USB ports
Networking: Wi-Fi 802.11g, Bluetooth, and Ethernet 10/100
Operating system: Windows Vista (32-bit)
Samsung SUR40 with Microsoft PixelSense
Software development kit (SDK): Microsoft Surface 2.0
Form factor usage: Tables, counters, kiosks and walls
Display + vision input technology: Thin LCD w/PixelSense technology
Price: Starting at $8,400 USD
Weight: 80 lbs (36 kg)
Physical dimensions (L × W × H): 42.7 × 27.5 × 4 in (108.5 × 69.9 × 10.2 cm)
CPU: AMD Athlon II X2 245e 2.9 GHz dual-core processor
Graphics (GPU): AMD Radeon HD 6570M – 1 GB GDDR5
Memory: 4 GB DDR3
Storage (hard drive): 320 GB HDD
Display size: 40 in (101.6 cm) diagonal
Display resolution: 1920×1080 – 16:9 aspect ratio
Extensions (ports): HDMI input & output, S/PDIF 5.1 digital audio surround sound out, RCA analog component audio out, 3.5 mm TRS (stereo mini-jack) audio out, 4 USB ports
Networking: Wi-Fi 802.11n, Bluetooth, and Ethernet 10/100/1000
Operating system: Windows 7 Professional for Embedded Systems (64-bit)
Applications development
Microsoft provides the free Microsoft Surface 2.0 Software Development Kit (SDK) for developers to create NUI touch applications for devices with PixelSense and Windows 7 touch PCs.
Applications for PixelSense can be written in Windows Presentation Foundation or XNA. The development process is much like normal Windows 7 development, but custom WPF controls had to be created due to the unique interface of the system. Developers already proficient in WPF can utilize the SDK to write PixelSense applications for deployment in large hotels, casinos, and restaurants.
Related Microsoft research projects
Microsoft Research has published information about a related technology dubbed SecondLight. Still in the research phase, this project augments secondary images onto physical objects on or above the main display.
See also
AudioCubes
DiamondTouch
reacTable
Lemur Input Device
Microsoft Surface Hub
Multi-Pointer X (MPX)
Multi-touch
Philips Entertaible
TouchLight
Surface computing
SixthSense
References
External links
Developing for Microsoft Surface on Microsoft Developer Network
Surface
Graphical user interfaces
History of human–computer interaction
Digital audio players
Commercial computer vision systems
Object recognition and categorization
Infrared imaging
Surface computing
Computer-related introductions in 2007
Articles containing video clips
et:Microsoft Surface
fr:Microsoft Surface |
4583491 | https://en.wikipedia.org/wiki/JOELib | JOELib | JOELib is computer software, a chemical expert system used mainly to interconvert chemical file formats. Because of its strong relationship to informatics, this program belongs more to the category cheminformatics than to molecular modelling. It is available for Windows, Unix and other operating systems supporting the programming language Java. It is free and open-source software distributed under the GNU General Public License (GPL) 2.0.
History
JOELib and OpenBabel were derived from the OELib Cheminformatics library.
Logo
The project logo is just the word JOELib in the Tengwar script of J. R. R. Tolkien. The letters are grouped as JO-E-Li-b. Vowels are usually grouped together with a consonant, but two following vowels must be separated by a helper construct.
Major features
Chemical expert system
Query and substructure search (based on SMARTS, a substructure-search extension of the simplified molecular-input line-entry system (SMILES)); a short illustration is given after this list
Clique detection
QSAR
Data mining
Molecule mining, special case of Structured Data Mining
Feature–descriptor calculation
Partition coefficient, log P
Rule-of-five
Partial charges
Fingerprint calculation
etc.
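As a short illustration of SMARTS-based substructure search, the sketch below uses RDKit, a different open-source cheminformatics toolkit, chosen here only because JOELib itself is a Java library; the molecule and pattern are arbitrary examples, not part of JOELib's API.

```python
from rdkit import Chem

mol = Chem.MolFromSmiles("c1ccccc1O")      # phenol, written as a SMILES string
pattern = Chem.MolFromSmarts("[OX2H][c]")  # SMARTS: hydroxyl oxygen on an aromatic carbon
print(mol.HasSubstructMatch(pattern))      # True: phenol contains the pattern
```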
Chemical file formats
Chemical table file: MDL Molfile, SD format
SMILES
Gaussian
Chemical Markup Language
MOPAC
See also
OpenBabel - C++ version of JOELib-OELib
Jmol
Chemistry Development Kit (CDK)
Comparison of software for molecular mechanics modeling
Blue Obelisk
Molecule editor
List of free and open-source software packages
References
The Blue Obelisk-Interoperability in Chemical Informatics, Rajarshi Guha, Michael T. Howard, Geoffrey R. Hutchison, Peter Murray-Rust, Henry Rzepa, Christoph Steinbeck, Jörg K. Wegner, and Egon L. Willighagen, J. Chem. Inf. Model.; 2006;
External links
at SourceForge
Algorithm dictionary
Free science software
Free software programmed in Java (programming language)
Computational chemistry software
Science software for Linux |
25513182 | https://en.wikipedia.org/wiki/Mobile%20membranes | Mobile membranes | Membrane systems have been inspired by the structure and the functioning of living cells. They were introduced and studied by Gh. Păun under the name of P systems [24]; some applications of membrane systems are presented in [15]. Membrane systems are essentially models of distributed, parallel and nondeterministic systems. Here we motivate and present the mobile membranes. Mobile membranes represent a variant of membrane systems inspired by the biological movements given by endocytosis and exocytosis. They have the expressive power of both P systems and process calculi with mobility, such as mobile ambients [11] and brane calculi [10]. Computations with mobile membranes can be defined over specific configurations (like in process calculi), while they also represent a rule-based formalism (like P systems).
The model is characterized by two essential features:
A spatial structure consisting of a hierarchy of membranes (which do not intersect) with objects associated to them. A membrane without any other membranes inside is called elementary.
The general rules describing the evolution of the structure: endocytosis (moving an elementary membrane inside a neighbouring membrane) and exocytosis (moving an elementary membrane outside the membrane where it is placed). More specific rules are given by pinocytosis (engulfing zero external membranes) and phagocytosis (engulfing just one external elementary membrane).
The computations are performed in the following way: starting from an initial structure, the system evolves by applying the rules in a nondeterministic and maximally parallel manner. A rule is applicable when all the involved objects and membranes appearing in its left hand side are available. The maximally parallel way of using the rules means that in each step a maximal multiset of rules is applied, namely a multiset of rules such that no further rule can be added to the set. A halting configuration is reached when no rule is applicable. The result is represented by the number of objects associated to a specified membrane.
Mobile membranes represent a formalism which describes the movement of membranes inside a spatial structure by applying rules from a given set of rules $R$. The mobility is provided by consumption and rewriting of objects. In terms of computation, the work is performed using membrane configurations. The set of membrane configurations (ranged over by $M$, $N$) is defined by using the free monoid $V^*$ (ranged over by $u$, $v$) generated by a finite alphabet $V$ of objects (ranged over by $a$, $b$): a configuration is either a multiset of objects $u$, a membrane $[\,M\,]_h$ enclosing a configuration, or a juxtaposition $M\,N$ of configurations.
If $M$ and $N$ are two membrane configurations, $M$ reduces to $N$ (denoted by $M \to N$) if there exists a rule in the set of rules $R$ applicable to the configuration $M$ such that the new configuration $N$ is obtained. When applying the rules of $R$, the following inference rules are also used, allowing a reduction to take place inside any larger configuration:
if $M \to M'$ then $M\,N \to M'\,N$; if $M \to M'$ then $[\,M\,]_h \to [\,M'\,]_h$.
When describing a computation of a system of mobile membranes, an initial configuration and a set of rules are given. The rules used in this paper describe evolution (object rewriting), endocytosis (moving an elementary membrane inside a neighbouring membrane), exocytosis (moving an elementary membrane outside the membrane where it is placed), pinocytosis (engulfing zero external membranes), and phagocytosis (engulfing just one external elementary membrane).
Computability Power of Mobile Membranes
A specific feature of the mobile membranes is that this new rule-based model is appropriate for proving computability results in terms of Turing machines, rather than by reduction to the lambda calculus as in the case of process calculi with mobility. In this section, four classes of membrane systems inspired by biological facts are defined, and it is shown that their computational power depends on the initial configuration and on the set of rules used.
Simple Mobile Membranes
The systems of simple mobile membranes (SM) are defined over the set of configurations, and evolve using endocytosis and exocytosis rules, namely moving a membrane inside a neighbouring membrane, or outside the membrane where it is placed, respectively. The evolution from one configuration to another is made using rules from the set of rules $R$ defined as follows:
$[\,u \to v\,]_h$, for $h \in H$, $u \in V^+$, $v \in V^*$; (local object evolution)
$u \to v$, for $u \in V^+$, $v \in V^*$; (global object evolution)
$[\,u\,]_h\,[\ ]_m \to [\,[\,w\,]_h\,]_m$, for $h, m \in H$; (endocytosis)
$[\,[\,u\,]_h\,]_m \to [\,w\,]_h\,[\ ]_m$, for $h, m \in H$; (exocytosis)
where $u \in V^+$ and $w \in V^*$ are multisets of objects, and the rules may be applied inside arbitrary membrane configurations $M$, $N$.
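As an informal illustration of how such rules act on a membrane structure, the following is a minimal Python sketch of one endocytosis step over a tree of membranes; the representation and the function are ad hoc illustrations, not part of any published simulator, and a full system would apply such steps nondeterministically and in a maximally parallel way.

```python
from dataclasses import dataclass, field
from collections import Counter

@dataclass
class Membrane:
    label: str
    objects: Counter = field(default_factory=Counter)
    children: list = field(default_factory=list)

def endocytosis(parent, h, m, trigger, rewrite):
    """Move elementary child membrane h of `parent` inside sibling m,
    consuming the `trigger` multiset in h and producing `rewrite`:
    schematically, [u]_h [ ]_m -> [[w]_h]_m as in the rules above."""
    src = next(c for c in parent.children if c.label == h)
    dst = next(c for c in parent.children if c.label == m)
    if src.children or any(src.objects[a] < n for a, n in trigger.items()):
        return False                       # h not elementary, or u not present
    src.objects -= Counter(trigger)        # consume u
    src.objects += Counter(rewrite)        # produce w
    parent.children.remove(src)
    dst.children.append(src)               # h is now nested inside m
    return True

# Example: membranes h and m float inside the skin; an object `a` in h
# triggers endocytosis of h into m, rewriting a into b.
skin = Membrane("skin", children=[Membrane("h", Counter("a")), Membrane("m")])
print(endocytosis(skin, "h", "m", {"a": 1}, {"b": 1}))  # True
print([c.label for c in skin.children])                 # ['m']
```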
Turing completeness can be obtained by using nine membranes together with the operations of endocytosis and exocytosis [21]. In [17] it is proven that four mobile membranes are enough to get the power of a Turing machine, while in [4] the number of membranes is decreased to three.
$PsSM_n(levol, endo, exo)$ denotes the family of all sets generated inside a given membrane by simple mobile membranes of degree at most $n$, using local evolution rules ($levol$), endocytosis and exocytosis rules. Whenever global evolution rules ($gevol$) are used, the parameter $levol$ is replaced by $gevol$. If a type of rules is not used, then its name is omitted from the list. The number of membranes does not increase during the computation, but it can decrease by sending membranes out of the system; in this case, the subscript $n$ denotes the family of sets of vectors of natural numbers computed by using at most $n$ membranes. $PsRE$ denotes the family of Turing computable sets of vectors generated by arbitrary grammars.
It is proved in [17] that $PsSM_4(gevol, endo, exo) = PsRE$, namely that simple mobile membranes of degree four generate all Turing computable sets of vectors. The research line initiated in membrane computing is to find membrane systems with a minimal set of ingredients which are powerful enough to achieve the full power of Turing machines. In this way, the previous result presented in [17] is improved by decreasing the number of membranes to three.
Moreover, this is achieved by using local evolution rules instead of global evolution rules.
Theorem. $PsSM_3(levol, endo, exo) = PsRE$.
The proof of this result uses a similar technique to that used in [4].
Enhanced Mobile Membranes
The systems of enhanced mobile membranes are a variant of simple membrane systems proposed in [1] for describing some biological mechanisms of the immune system. The operations governing the mobility of the systems of enhanced mobile membranes are endocytosis (endo), exocytosis (exo), forced endocytosis (fendo) and forced exocytosis (fexo). The evolution from one configuration to another is made using rules from the set of rules $R$ defined as follows:
$[\,u\,]_h\,[\ ]_m \to [\,[\,w\,]_h\,]_m$, for $h, m \in H$; (endocytosis)
$[\,[\,u\,]_h\,]_m \to [\,w\,]_h\,[\ ]_m$, for $h, m \in H$; (exocytosis)
$[\ ]_h\,[\,u\,]_m \to [\,[\ ]_h\,w\,]_m$, for $h, m \in H$; (forced endocytosis)
$[\,[\ ]_h\,u\,]_m \to [\ ]_h\,[\,w\,]_m$, for $h, m \in H$; (forced exocytosis)
where $u \in V^+$ and $w \in V^*$ are multisets of objects, and $M$ is an arbitrary membrane configuration.
The computational power of the systems of enhanced mobile membranes using these four operations was studied in [20], where it is proved that twelve membranes provide computational universality, while in [4] the result is improved by reducing the number of membranes to nine. It is worth noting that, unlike the previous results, the rewriting of objects by means of context-free rules is not used in any of these results (or their proofs).
The interplay between these four operations is quite powerful, and the computational power of a Turing machine is obtained using twelve membranes without using the context-free evolution of objects [20].
The family of all sets generated inside a given membrane by enhanced mobile membranes of degree at most $n$ using rules $endo$, $exo$, $fendo$, $fexo$ is denoted by $PsEM_n(endo, exo, fendo, fexo)$.
Theorem. $PsEM_{12}(endo, exo, fendo, fexo) = PsRE$.
When proving the result of the previous theorem, the authors did not use an optimal construction of a membrane system. In what follows, it is proven that using the same types of rules (endo, exo, fendo, fexo) a membrane system can be constructed using only nine membranes instead of twelve. Whether this is an optimal construction remains an open problem.
Theorem. $PsEM_9(endo, exo, fendo, fexo) = PsRE$.
The proof is similar to that presented in [4].
Mutual Mobile Membranes
Following the approach presented in [3], "systems of mutual mobile membranes" are defined as a variant of systems of simple mobile membranes in which the endocytosis and the exocytosis work only when the involved membranes "agree" on the movement; this agreement is described by using dual objects $a$ and $\bar{a}$ in the involved membranes. The operations governing the mobility of the systems of mutual mobile membranes are mutual endocytosis (mutual endo) and mutual exocytosis (mutual exo). The evolution from one configuration to another is made using rules from the set of rules $R$ defined as follows:
for ; (mutual endocytosis)
for ; (mutual exocytosis)
where is a multiset and is an arbitrary membrane configuration.
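Continuing the toy sketch given earlier for simple mobility, mutual endocytosis differs only in its precondition: the moving membrane must carry an object and the target membrane its co-object, and both are consumed by the move (the encoding of dual objects as two plain strings is an illustrative assumption):

# Toy sketch of mutual endocytosis, reusing the Membrane class from the earlier sketch;
# `obj` and `co_obj` stand for the dual objects on which both membranes must agree.
def mutual_endo(parent, a, b, obj, co_obj):
    ma, mb = parent.child(a), parent.child(b)
    if obj in ma.objects and co_obj in mb.objects:  # both membranes must agree
        ma.objects.remove(obj)                      # both dual objects are consumed
        mb.objects.remove(co_obj)
        parent.children.remove(ma)
        mb.children.append(ma)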
It is enough to consider the biologically inspired operations of mutual endocytosis and mutual exocytosis and three membranes to get the full computational power of a Turing machine [6]. Three is also the minimum number of membranes needed to properly discuss the movement provided by endocytosis and exocytosis: the configurations correspond to a system of two membranes moving inside a skin membrane.
The family of all sets generated inside a given membrane by mutual mobile membranes of degree using mutual endocytosis rules (mendo) and mutual exocytosis rules (mexo) is denoted by . Therefore, the result can be formulated as follows.
Theorem. .
In systems of simple mobile membranes with local evolution rules and mobility rules, it is known that systems of degree three have the same power as a Turing machine, while in systems of enhanced mobile membranes using only mobility rules the degree of systems having the same power as a Turing machine increases to nine. In the proofs for systems of simple and enhanced mobile membranes, only one object appears on the left-hand side of each mobility rule. By using multisets instead of single objects, and synchronization by objects and co-objects, it is proved that it is enough to consider only systems of three mutual mobile membranes together with the operations of mutual endocytosis and mutual exocytosis to get the full computational power of a Turing machine.
The proof is similar to the proof of computational universality for the systems of enhanced mobile membranes [20].
Mutual Membranes with Objects on Surface
Membrane systems [24] and brane calculus [10] start from the same observations; however, they are built having in mind different goals: membrane systems investigate formally the computational nature and power of various features of membranes, while the brane calculus is able to give a faithful and intuitive representation of the biological reality. In [12] the initiators of these two formalisms describe the goals they had in mind: "While membrane computing is a branch of natural computing which tries to abstract computing models, in the Turing sense, from the structure and the functioning of the cell, making use especially of automata, language, and complexity theoretic tools, brane calculi pay more attention to the fidelity to the biological reality, have as a primary target systems biology, and use especially the framework of process algebra."
In [2], systems of mutual membranes with objects on surface are defined, following the idea of adding objects on membranes and using the biologically inspired rules pino/exo/phago coming from [12,14,18,19]. Objects and co-objects are used in the phago and exo rules in order to illustrate the fact that both involved membranes agree on the movement.
The evolution from one configuration to another is made using rules from the set of rules defined as follows:
, for (pino)
, for a (exo)
, for (phago)
where is a multiset and , are arbitrary membrane configurations.
The computational power of systems of mutual membranes with objects on surface controlled by pairs of rules is investigated: pino/exo or phago/exo, proving that they are universal even when using a small number of membranes. These cases were already investigated in [19]; however, better results are provided here by reducing the number of membranes. A summary of the results (existing and new ones) is given in what follows:
The multiplicity vector of the multiset from all membranes is considered as the result of the computation. Thus, the result of a halting computation consists of all the vectors describing the multiplicity of objects from all the membranes; a non-halting computation provides no output. The number of objects from the right-hand side of a rule is called its weight. The family of all sets generated by systems of mutual membranes with objects on surface using, at any moment during a halting computation, at most membranes, and any of the rules of weight at most respectively, is denoted by . When one of the parameters is not bounded, it is replaced with a .
It is proven in [19] that systems of eight membranes with objects on surface, using pino and exo operations of weight four and three, are universal. The number of membranes can be reduced from eight to three. However, in order to do this, the weight of the pino and exo operations is increased by one, namely from four and three to five and four. This means that in order to construct a universal system of mobile membranes with objects on surface by using pino and exo operations, one needs to decide whether to minimize the number of membranes or the weights of the operations.
Theorem. , for all .
It is proven in [19] that systems of nine membranes with objects on surface, using phago and exo operations of weight four and three (or five and two), are universal. The number of membranes can be reduced from nine to four, but in order to do this the weight of the phago and exo operations is increased from four and three (or five and two) to six and three. When constructing a Turing complete system of mobile membranes with objects on surface by using phago and exo operations, the same problem appears as when using pino and exo operations: namely, choosing between minimizing the number of membranes and minimizing the weights of the operations.
Theorem. , for all .
Expressive Power of Mobile Membranes
In what follows it is shown that mobile membranes have at least the expressive power of mobile ambients and brane calculi by encoding mobile ambients and brane calculi in certain systems of mobile membranes.
Embedding Mobile Ambients into Mobile Membranes
The mobile membranes and the mobile ambients [11] have similar structures and common concepts. Both have a hierarchical structure representing locations, intend to describe mobility, and are used in describing various biological phenomena [10,15]. The mobile ambients are suitable for representing the movement of ambients through ambients and the communication which takes place inside the boundaries of ambients. Membrane systems are suitable for representing the evolution of objects and the movement of objects and membranes through membranes. A comparison between these two models (mobile ambients and mobile membranes) is provided, together with an encoding of ambients into membranes. This embedding is essentially presented in [5].
Safe ambients represent a variant of mobile ambients in which any movement of an ambient takes place only if both participants agree. The mobility is provided by the consumption of certain pairs of capabilities. The safe ambients differ from mobile ambients by the addition of co-actions: while in mobile ambients a movement is initiated only by the moving ambient and the target ambient has no control over it, in safe ambients both participants must agree by using a matching between an action and a co-action. A short description of pure safe ambients (SA) is given below; more information can be found in [22,23]. Given an infinite set of names (ranged over by ), the set of SA-processes (denoted by ) together with their capabilities (denoted by ) are defined as follows:
Process is an inactive mobile ambient. A movement is provided by the capability , followed by the execution of . An ambient represents a bounded place labelled by in which a SA-process is executed. is a parallel composition of mobile ambients and . creates a new unique name within the scope of . The structural congruence over ambients is the least congruence such that is a commutative monoid.
The operational semantics of the pure safe ambient calculus is defined in terms of a reduction relation by the following axioms and rules.
Axioms:
;
;
.
Rules:
;
.
denotes the reflexive and transitive closure of the binary relation .
A translation from the set of safe ambients to the set of membrane configurations is given formally as follows:
Definition. A translation is given by
, where is
An object is placed near the membrane structure to prevent the consumption of capability objects in a membrane system corresponding to a mobile ambient which cannot evolve further.
Proposition. Structurally congruent ambients are translated into structurally congruent membrane systems; moreover, structurally congruent translated membrane systems correspond to structurally congruent ambients:
iff .
Two membrane systems and with only one object are related whenever there is a sequence of rules from the particular set of rules used in [7] such that applying the rules from this set to the membrane configuration yields the membrane configuration .
Proposition. If and are two ambients and is a membrane system such that and , then there exists a set of rules applicable to such that , and .
Proposition. Let and be two membrane systems with only one object, and an ambient such that . If there is a set of rules applicable to such that , then there exists an ambient with and . The number of pairs of non-star objects consumed in membrane systems is equal to the number of pairs of capabilities consumed in ambients.
Theorem. (Operational correspondence)
If , then .
If , then there exists such that and .
Embedding Brane Calculus into Mobile Membranes
A fragment of brane calculus called PEP, together with mutual mobile membranes with objects on surface as a variant of systems with mobile membranes, is considered. The model of mobile membranes with objects on surface is inspired by a model of membrane system introduced in [12] having objects attached to membranes. A simulation of the PEP fragment of brane calculus by using mutual membranes with objects on surface is presented. This approach is related to some other papers studying the relationship between membrane systems and brane calculus [8,9,14,18,19].
As it is expressed in [24], "at the first sight the role of objects placed on membranes is different in membrane and brane systems: in membrane computing the focus is on the evolution of objects themselves, while in brane calculi the objects ("proteins") mainly control the evolution of membranes". By defining an encoding of the PEP fragment of brane calculus into mutual membranes with objects on surface, it is shown that the difference between the two models is not significant. Another difference regarding the semantics of the two formalisms is expressed in [8]: "whereas brane calculi are usually equipped with an interleaving, sequential semantics (each computational step consists of the execution of a single instruction), the usual semantics in membrane computing is based on maximal parallelism (a computational step is composed of a maximal set of independent interactions)".
Brane calculus [10] deals with membranes representing the sites of activity; thus a computation happens on the membrane surface. The operations of the two basic brane calculi, pino, exo, phago (PEP) and mate, drip, bud (MBD), are directly inspired by biological processes such as endocytosis, exocytosis and mitosis. The latter operations can be simulated using the former ones [10].
Membranes are formed of patches, where a patch can be composed of other patches . An elementary patch consists of an action followed, after its consumption, by another patch . Actions often come in complementary pairs which cause the interaction between membranes. The names are used to pair up actions and co-actions. Cardelli motivates the replication operator as modelling the notion of a "multitude" of components of the same kind, which is in fact a standard situation in biology [10]. The replication operator is not used here because a membrane system cannot be defined without knowing exactly the initial membrane structure. denotes the set of brane systems defined above. Some abbreviations can be made: as , as , and as .
The structural congruence relation is a way of rearranging the system such that the interacting parts come together, as illustrated
in what follows:
In what follows the reduction rules of the calculus are presented:
The action creates an empty bubble within the membrane where the action resides; imagine that the original membrane buckles inwards and pinches off. The patch on the empty bubble is a parameter of . The exo action , which comes with a complementary co-action , models the merging of two nested membranes, which starts with the membranes touching at a point. In this smooth, continuous process, the subsystem gets expelled to the outside, and all the residual patches of the two membranes become contiguous. The phago action , which also comes with a complementary co-action , models a membrane (the one with ) "eating" another membrane (the one with ). Again, the process has to be smooth and continuous, so that it is biologically implementable. It proceeds by the membrane wrapping around the other membrane and joining itself on the other side. Hence, an additional layer of membrane is created around the eaten membrane: the patch on that membrane is specified by the parameter of the co-phago action (similar to the parameter of the pino action).
A translation from the set of brane processes to the set of membrane configurations is given formally as follows:
Definition. A translation is given by
where is defined as:
The rules of the systems of mutual membranes with objects on surface (MMOS) are presented in what follows.
where is a multiset and , are arbitrary membrane configurations.
The next result claims that two PEP systems which are structurally equivalent are translated into systems of mutual membranes with objects on surface which are structurally equivalent.
Proposition. If is a PEP system and is a system of mutual membranes with objects on surface, then there exists such that and , whenever .
Proposition. If is a PEP system and is a system of mutual membranes with objects on surface, then there exists such that whenever .
Remark. In the last proposition it is possible that . Suppose . By translation it is obtained that . It is possible to have or such that , but .
Proposition. If is a PEP system and is a system of mutual membranes with objects on surface, then there exists such that and , whenever .
Proposition. If is a PEP system and is a system of mutual membranes with objects on surface, then there exists such that whenever .
The following remark is a consequence of the fact that a formalism using an interleaving semantic is translated into a formalism working in parallel.
Remark. The last proposition allows . Let us assume . By translation, it is obtained that , such that . It can be observed that there exist such that , but .
These results are presented together with their proofs in [2].
References
1. B. Aman, G. Ciobanu. Describing the Immune System Using Enhanced Mobile Membranes. Electr. Notes in Theoretical Computer Science, vol.194(3), 5-18, 2008.
2. B. Aman, G. Ciobanu. Membrane Systems with Surface Objects. Proc. of the International Workshop on Computing with Biomolecules (CBM 2008), 17-29, 2008.
3. B. Aman, G. Ciobanu. Resource Competition and Synchronization in Membranes. Proceedings of SYNASC08, IEEE Computer Society, 145-151, 2009.
4. B. Aman, G. Ciobanu. Simple, Enhanced and Mutual Mobile Membranes. Transactions on Computational Systems Biology XI, LNBI vol.5750, 26-44, 2009.
5. B. Aman, G. Ciobanu. Translating Mobile Ambients into P Systems. Electr. Notes in Theoretical Computer Science, vol.171(2), 11-23, 2007.
6. B. Aman, G. Ciobanu. Turing Completeness Using Three Mobile Membranes. Lecture Notes in Computer Science, vol.5715, 42-55, 2009.
7. B. Aman, G. Ciobanu. On the Relationship Between Membranes and Ambients. Biosystems, vol.91(3), 515-530, 2008.
8. N. Busi. On the Computational Power of the Mate/Bud/Drip Brane Calculus: Interleaving vs. Maximal Parallelism. Lecture Notes in Computer Science, vol.3850, Springer, 144-158, 2006.
9. N. Busi, R. Gorrieri. On the Computational Power of Brane Calculi. Third Workshop on Computational Methods in Systems Biology, 106-117, 2005.
10. L. Cardelli. Brane Calculi. Interactions of Biological Membranes. Lecture Notes in BioInformatics, vol.3082, Springer, 257-278, 2004.
11. L. Cardelli, A. Gordon. Mobile Ambients. Lecture Notes in Computer Science, vol.1378, Springer, 140-155, 1998.
12. L. Cardelli, Gh. Păun. A Universality Result for a (Mem)brane Calculus Based on Mate/Drip Operations. Intern. J. Foundations of Computer Science, vol.17(1), 49-68, 2006.
13. L. Cardelli, S. Pradalier. Where Membranes Meet Complexes. BioConcur, 2005.
14. M. Cavaliere, S. Sedwards. Membrane Systems with Peripheral Proteins: Transport and Evolution. Electr. Notes in Theoretical Computer Science, vol.171(2), 37-53, 2007.
15. G. Ciobanu, Gh. Păun, M.J. Pérez-Jiménez. Applications of Membrane Computing. Springer, 2006.
16. J. Dassow, Gh. Păun. Regulated Rewriting in Formal Language Theory. Springer-Verlag, 1990.
17. S.N. Krishna. The Power of Mobility: Four Membranes Suffice. Lecture Notes in Computer Science, vol.3526, Springer, 242-251, 2005.
18. S.N. Krishna. Membrane Computing with Transport and Embedded Proteins. Theoretical Computer Science, vol.410, 355-375, 2009.
19. S.N. Krishna. Universality Results for P Systems Based on Brane Calculi Operations. Theoretical Computer Science, vol.371, 83-105, 2007.
20. S.N. Krishna, G. Ciobanu. On the Computational Power of Enhanced Mobile Membranes. Lecture Notes in Computer Science, vol.5028, 326-335, 2008.
21. S.N. Krishna, Gh. Păun. P Systems with Mobile Membranes. Natural Computing, vol.4(3), 255-274, 2005.
22. F. Levi, D. Sangiorgi. Controlling Interference in Ambients. Proceedings POPL'00, ACM Press, 352-364, 2000.
23. F. Levi, D. Sangiorgi. Mobile Safe Ambients. ACM TOPLAS, vol.25, 1-69, 2003.
24. Gh. Păun. Membrane Computing. An Introduction. Springer-Verlag, Berlin, 2002.
25. Gh. Păun. Membrane Computing and Brane Calculi (Some Personal Notes). Electr. Notes in Theoretical Computer Science, vol.171, 3-10, 2007.
Cell biology |
38309380 | https://en.wikipedia.org/wiki/TrackingPoint | TrackingPoint | TrackingPoint is an applied technology company based in Austin, Texas. In 2011, it created a long-range rifle system that was the first precision guided firearm.
Formed by John McHale in February 2011, the company created its first PGF prototype in March 2011. The company offered its first product in January 2013 and a second, the AR Series semi-automatic smart rifle, in January 2014.
Variants of the company's bolt-action rifles use .338 Lapua Magnum and .300 Winchester Magnum ammunition. Semi-automatic variants are available in 7.62 NATO, 5.56 NATO and .300 BLK.
In September 2016, the company began selling the M1400, a squad-level .338 Lapua bolt-action rifle that can hit targets out to . It can also acquire and hit targets traveling at within 2.5 seconds. The rifle is long with a barrel weighing . It can be used with the company's ShotGlass wearable glasses, which transmit what the scope is seeing to the shooter's eye.
In January 2014, the U.S. Army purchased six TrackingPoint fire control systems to begin exploring purported key target acquisition and aiming technologies. The Army has integrated the system onto the XM2010 Enhanced Sniper Rifle for military testing.
In 2018, TrackingPoint introduced the ShadowTrak 6 bolt-action rifle with the 6.5mm Creedmoor cartridge that can hit targets out to , and can hit targets traveling at in 1 second. Weighing , it can fire Hornady ammunition: the 147gr ELD-M (a match-type bullet) or the 143gr ELD-X (designed for hunting).
In November 2018, Talon Precision Optics, of Jacksonville, Florida bought TrackingPoint.
Technology
TrackingPoint's precision guided firearms system uses several component technologies:
Networked Tracking Scope: The core engine that tracks the target, calculates range and the ballistic solution, and works in concert with the shooter and guided trigger to release the shot.
Barrel Reference System: A fixed reference point that enables the networked tracking scope to make adjustments and retain zero over time. The barrel reference system is factory calibrated to a laser reference.
Guided Trigger: The rifle's trigger is hard-wired to the networked tracking scope. The networked tracking scope controls the trigger weight to eliminate trigger squeeze and shot timing errors.
Field Software Upgradeable: Software can be uploaded to the scope to add capability.
Heads Up Display (HUD): The HUD indicates range, wind, reticle, video storage gauge, zoom, and battery life, plus LRF icon, Wi-Fi on/off icon, compass icon, cant wheel, inclination wheels and off-screen indicators.
Recording: An integrated camera captures video and still images from the networked tracking scope and heads up display. Recorded images can be downloaded to a smartphone or tablet from the scope and transmitted via email or social media.
In 2015, computer security experts Runa Sandvik and Michael Auger demonstrated that naive software design left the rifle's aiming computer open to remote hacking when its WiFi capability was turned on. Malicious third parties could disrupt the rifle's accuracy by altering parameters such as bullet weight. They showed they could change the bullet weight from to , which would make the rifle inaccurate. A skilled hacker would also be able to acquire root access, after which they could permanently disable ("brick") the computer in various ways, such as erasing all of its software.
References
External links
TrackingPoint website.
Companies based in Austin, Texas
Software companies based in Texas
Companies established in 2011
Firearm manufacturers of the United States
Software companies of the United States |
7926016 | https://en.wikipedia.org/wiki/Charles%20White%20%28American%20football%29 | Charles White (American football) | Charles Raymond White (born January 22, 1958) is a former professional American football player who was a running back in the National Football League (NFL) for nine seasons during the 1980s.
He played college football for the University of Southern California, where he was an All-American and the winner of the Heisman Trophy. A first-round pick (27th overall) in the 1980 NFL Draft, he played professionally for the Cleveland Browns and the Los Angeles Rams of the NFL.
Early life
White was born in Los Angeles, California. He graduated from San Fernando High School in San Fernando, California, where as a track and field athlete he won the 330 yard low hurdles at the CIF California State Meet over future Olympic Gold medalist Andre Phillips. He was also a standout high school football player.
College career
White attended the University of Southern California, where he played for the USC Trojans football team. In 1978, White won the W.J. Voit Memorial Trophy as the outstanding college football player on the Pacific Coast. In 1979, he received the Heisman Trophy, Maxwell Award, Walter Camp Award, and was named UPI Player of the Year. He is the second player in Rose Bowl history (of four, total) to be honored as Player of the Game twice (1979 and 1980).
College statistics
* Includes bowl games.
Professional career
White was selected in the 1st round, 27th overall pick in the 1980 NFL Draft by the Cleveland Browns. After four disappointing seasons in Cleveland, where he rushed for a total of 942 yards and had a 3.4 yards per carry average, White was released before the start of the 1985 season. White later acknowledged that he struggled with cocaine addiction during this period.
After his release from the Browns in 1985, he reunited with his college coach, John Robinson, who was now coaching the Los Angeles Rams. White would play for the Rams for three seasons, 1985–1987. In 1987, he rushed for a league-leading 1,387 yards and 11 touchdowns, which earned him a Pro Bowl selection and the NFL Comeback Player of the Year Award.
White finished his NFL career with 3,075 rushing yards, and 23 rushing touchdowns, along with 114 receptions, 860 yards, and one receiving touchdown.
American Gladiators
In its third and fourth seasons, American Gladiators held special "Pro Football Challenge of Champions" shows. White participated in and won both, each time coming from behind in the "Eliminator" thanks to slip-ups by his opponents. He also competed in the sixth season's USC vs. Notre Dame alumni special, which he won as well, giving him a 3-0 record on the show.
Personal life and drug use
During his years at USC, White struggled with cocaine and marijuana use. In a 1987 Sports Illustrated article, he admitted to smoking marijuana daily at USC and to snorting his first line of cocaine a few weeks before the 1977 Rose Bowl. He met fellow USC student Judi McGovern and the two dated throughout their time at USC, eventually marrying and having a daughter. However, White continued his cocaine use through college and on into his early NFL career with the Browns. White checked into drug rehab in 1982 and was clean for three years. Even so, the Browns cut him in 1985 and he was picked up on waivers by the Los Angeles Rams, where he was reunited with John Robinson, his former college coach at USC.
White soon had a short relapse into cocaine, but got clean again until one night in August 1987, when he and a friend did lines of cocaine until White was arrested. However, Robinson bailed him out of jail and agreed to keep him on the team if he stayed clean. White responded with the best season of his career in the strike-affected 1987 season, running for 339 yards in the three "scab games" after the Rams traded Eric Dickerson, and then for 100 yards in five straight games afterwards.
White and McGovern eventually divorced. White sold his 1979 Heisman trophy in 2008 to settle tax debts. White has five children, three daughters and two sons.
Post-playing career
In 1993, White joined USC as running backs coach and today is a computer consultant.
See also
List of college football yearly rushing leaders
References
External links
1958 births
Living people
American football running backs
Cleveland Browns players
Los Angeles Rams players
National Football League replacement players
USC Trojans football coaches
USC Trojans football players
All-American college football players
College Football Hall of Fame inductees
Heisman Trophy winners
National Conference Pro Bowl players
San Fernando High School alumni
Players of American football from Los Angeles
Track and field athletes from California
Sports coaches from Los Angeles |
4123645 | https://en.wikipedia.org/wiki/MPlayer.com | MPlayer.com | Mplayer, referred to as Mplayer.com by 1998, was a free online PC gaming service and community that operated from late 1996 until early 2001. The service at its peak was host to a community of more than 20 million visitors each month and offered more than 100 games. Some of the more popular titles available were action games like Quake, Command & Conquer, and Rogue Spear, as well as classic card and board for more casual gamers. Servers and matchmaking was provided through a proprietary client. Initially, the service was subscription-based, but by early 1997, they became the first major multiplayer community to offer games to be played online through their network for free. This was done by relying on advertisement-based revenues.
Mplayer was a unit of Mpath Interactive, a Silicon Valley-based startup. The demand for online gaming in the late 1990s resulted in huge growth for the service. They became known for supplying a range of features integrated through their software, including their very successful voice chat feature. This feature proved so popular that it was later split off as a VoIP service to cater to non-gamers, dubbed HearMe, which would eventually become the new name of the company. The company was listed on NASDAQ as MPTH and later HEAR.
Despite the growth of its gaming unit, Mplayer was never profitable. HearMe continued to refocus itself on VoIP technologies and, in late 2000, sold off Mplayer to competitor GameSpy. In addition, some technologies were sold to 4anything.com. HearMe survived the buyout and continued to operate independently. Mplayer was taken offline and integrated into GameSpy Arcade in 2001. HearMe shut down in mid-2001.
Story
The company first began as Mpath Interactive, a venture capital start-up co-founded in early 1995 by Brian Apgar, Jeff Rothschild and Brian Moriarty, based in Cupertino, California. It was later renamed to HearMe. Mpath Interactive later moved to Mountain View, California, after acquiring Catapult Entertainment, Inc., and their online gaming service XBAND. Mplayer began as a division in October 1996 to provide online gaming to subscribed users. A few months prior to launching Mplayer, Mpath announced their goal for the service in a job description: Not only will people go to the Internet for information, they will also go to it to meet and interact with other people. Mplayer, scheduled to debut 1996, will bring the excitement of real-time multi-player gaming to the Internet's World Wide Web for the first time. It will feature popular PC-based games from well-known game publishers. Mplayer's features will include voice-capable games and chat rooms where players can converse as they play the games, watch games in progress and choose teams or opponents. In February 1997, they began to offer internet play for free for their major commercial games such as Quake, as well as card and board games such as Scrabble and Spades. In this, they were one of the first major commercial communities on the internet to offer such a service. They continued to add many new games to their offering. The slogan that was used from its founding was "Wanna Play?" By the end of 1998, the company had a staff of 111 employees, and about 80 by late 2000. The company was listed on NASDAQ beginning April 29, 1999 as MPTH, which changed to HEAR by late September of the same year.
Revenues
Games were at first offered over Mplayer by subscription. In addition to the gaming service, Mpath also launched a "preferred" ISP service, WebBullet, reselling InterRamp ISP accounts on the PSINet network, the same backbone on which Mplayer.com's production services were hosted. By 1997, their growth allowed the service to be offered for free, supported by its advertising network, which eventually became known as the Mplayer Entertainment Network. However, the subscription model was retained, known as Plus, which gave special privileges to members who subscribed. The yearly rate was USD $39.95, or $29.95 for two years; this gave access to certain games, the rating and ranking system in Quake and Quake II, as well as online tournaments. Subscriptions had previously been $20 per month, but upon changing its business model to offer many services for free, Mplayer decided to switch to a yearly rate so that it would not have to market to its subscribers every month in order to keep them.
While certain releases were kept as "Plus Only" features for a brief time, in many cases the Plus game rooms were simply games hosted on Mplayer's own servers. With the rapid growth of Quake fans and the increased server load, Mplayer opened the door to the QuakeWorld network, exponentially increasing the number of available game servers and offering players a chance to get a faster connection to a game. The downside was that there was very little control of cheat codes in these systems. Mplayer tried to increase the appeal of the Plus subscription by offering "secure" games hosted on Mplayer-owned servers, along with rankings and customizable clan skins.
With the Internet user demographic changing, a growing market emerged for classic games, with Scrabble and Battleship leading the charge. Mplayer turned more into an aggregator, hoping to attract as many users as possible with free, ad-supported games and software, including Checkers, Othello, and Chess.
Despite this, the company had been losing money, $11.9 million in 1998 alone, and by late 1999 had yet to break even. Mpath was forced to look toward different venues. Proprietary technologies that were developed as features for Mplayer, known internally as POP.X, were later licensed to third parties. This was meant to help other companies create their own internet communities using existing technology. Third parties that licensed this technology included companies like Electronic Arts and Fujitsu. HearMe, the internal audio chat feature in Mplayer that was later split off, eventually accounted for 50% of all of the company's revenues.
Growth
Mplayer began when online gaming was still in its infancy. That, along with the initial subscription fee required to use its service, limited its early growth. Mplayer gained popularity after making its service available for free to all users in early 1997, and by early 1998 had attracted more than 125,000 monthly visitors and 400,000 total members. The entire network had averaged 800,000 hours of gameplay each month, with each member averaging 15 sessions a month of 35 minutes each. By the end of that year, Mplayer had 2 million total registered users. By March 1999, Mplayer had over 3 million total users, and over 80,000 unique daily visitors, averaging over 300 minutes of gameplay each. Mplayer saw some of its biggest growth during this period, with more than 200 million total minutes of gameplay per month beginning in 1999. According to internal data from HearMe at the time, Mplayer.com was the tenth most popular site on the internet in terms of total monthly usage time.
The huge growth of Mplayer was closely associated with the growth of the internet in the late 1990s that culminated in the dot-com boom. This was seen on their first day of being publicly traded, when their IPO price nearly doubled. By the time of the buyout by GameSpy not long after, the service had over 10 million registered members, and 20 million unique visitors per month.
HearMe
HearMe.com was launched in January 1999 following the success of Mplayer. Mpath intended to expand its market beyond entertainment, using money being made through Mplayer to create a VoIP communications network. The technology used was based on the lucrative audio chat software used within Mplayer. HearMe.com's website featured gratis voice and video-conferencing chatrooms, as well as free HTML (ActiveX) code that would allow one to add a voice-chat module directly to their own website and speak with visitors in real time. The new business became successful to the point where the entire company decided to refocus itself on this market, and this unit was not part of the buyout. In late September 1999, Mpath Interactive bought Resounding Technology, Inc., maker of Roger Wilco, another audio chat program. HearMe continued to release updates of the software until mid-2001, when HearMe went out of business. However, in late 2000 a deal with PalTalk had emerged, whereby PalTalk assumed all rights to HearMe's technology. It was later implemented into GameSpy Arcade.
Games
Mplayer offered a variety of game types to play online, including fast-paced action games, sports games, and card and board games, amongst other types. Until late 1997, Mplayer had a lineup of about 20 games, with some of the more popular ones being Quake, Red Alert, Diablo, and Scrabble. In October 1997, it was announced that they would add more than 30 new games to their roster, making it the largest offering of any online gaming service at the time. The company wanted to diversify its market, and brought in many new types of games, such as Cavedog's Total Annihilation and a host of new card games to attract more casual gamers. In a deal with Sports Illustrated, Mpath introduced an entirely new section of games dedicated to sports. The new section was meant to accommodate sports gamers, as well as online tournaments and sports news and statistics.
The main commercial games were divided by channels into action, strategy, sims, and role-playing. Their popularity generally came down to the individual game rather than the type of game. Indeed, some games would often be too underpopulated to support matchmaking, while other, more popular games would have a thriving community of hundreds or thousands of gamers. Competition among online matchmaking services for computer games had been increasing by the late 1990s. Mpath attempted to ensure that it stayed up to date with the latest and most popular games being released. Some games like Quake II, Daikatana and Unreal were all heavily promoted as being available for online play even before their launch.
A popular feature was the ability to download shareware versions of some games and play them online. For some games, this was supported by publishers as a means to promote their games at retail. In other cases, Mplayer arranged deals with developers to attract gamers with demos of popular games such as Quake and Unreal. The card and board games offered were supplied straight from Mplayer for free through their own software.
Game community and market
Competition
When Mplayer launched, there were few major online gaming services, but by the late 1990s it had numerous competitors. Notable competitors were Heat.net (built on a licensed version of Mplayer's core technology), Total Entertainment Network, Microsoft's Internet Gaming Zone (later MSN Gaming Zone), GameSpy3D, Kali, Blizzard's Battle.net, and Sierra's Won.net. Furthermore, Mplayer's offering of card and board games had been countered by numerous sites across the internet, including services from Yahoo! and GameStorm.
Marketing
Mplayer's first business model in online gaming was to charge gamers to play. However, success was limited, and the company shortly afterwards changed its marketing direction toward offering online play for free with supported advertising. The CEO of Mpath Interactive at the time, Paul Matteucci, put it: "It wasn't until we really got it – that it was about building a community around the games – that Mplayer.com took off," speaking on making the games free. It was from here that their model began to be based more around the actual community of gamers, and Mplayer saw its number of players climb several-fold.
Soon after, Mplayer had become a well-known player in the online gaming industry. As such, most of their marketing was geared toward attracting new gamers through a broader offering of games, as well as taking advantage of the large community they already had. The former can be seen in the hype surrounding the release of high-profile games of the time such as Unreal and Quake II, both of which were to be offered online through Mplayer.com. The company built a family-friendly image in order to appeal to both kids and adults, with chat rooms which were monitored to limit profanity. They also used their Plus service to cater to the more hardcore gamers who did not mind the extra fee. One source describes their presence at E3 2000: E-3 2000, the Electronic Entertainment Expo, held at the Los Angeles Convention Center in May 2000, was a multimedia extravaganza. Nowhere was this more apparent than at the room-sized exhibit housing Mplayer.com, the premier on-line multiplayer gaming service. And, if the multimedia electronic action didn't grab your attention, the exhibit itself was sure to. Here was a mega multimedia presentation all its own. The exhibit, costing tens of thousands of dollars to design, fabricate, and install, occupied on three raised floors, where fanatic gamers battled it out on a dozen big-screen overhead monitors...The design and construction represented an engineering marvel. Nothing had been left to chance in the exhibit's design.
By creating such an extravagant exhibit at E3, the largest gaming exposition, they sent a message that Mplayer was a major player in the gaming industry. Even at this late date, months before the buyout by GameSpy, Mpath was still aggressively marketing Mplayer. This was despite criticisms that splitting off HearMe had taken the company's focus away from gaming.
Software
Service was provided through proprietary software, a channel-based lobby and matchmaking client known as gizmo. The design and interface of gizmo were outsourced to two design companies, Good Dog Design and Naima Productions. Upon launching the program, users would choose from a list of games, which would then take them to a universal lobby for that game. From there, users could create their own game channel that would be displayed to everyone. They could also join a created game. The lobby would show a list of rooms, ordered from least to most latency. Green rooms indicated games that were fast enough to be playable, while red rooms were unplayable. The rocket icon indicated the game had been launched. This would bring them to a second, private chatroom before entering the game. The channel creator acted as the moderator, who could launch the game, ban players in the lobby, and change game settings, but could also make someone else a moderator. In some games like Quake, players could join the game after it was launched, but for most this was not possible.
Features
Mpath integrated many features into Mplayer in an attempt to stay competitive and support its community. Most of these features came with an update to gizmo in December 1997; among them were voice chat, a chalkboard system in game channels (viewable by anyone) known as ScribbleTalk, a built-in browser known as WebViewer, personal messaging, as well as a ratings and rankings system for Plus members. The voice chat only allowed one person to speak at a time, but became extremely successful, to the point where half of all Mplayer's service usage was from voice chat. Mpath soon after split off a division to focus on VoIP technologies in early 1999, catering to non-gamers. Ranked games were played in a separate lobby from normal games. Ranking was determined by how well you played relative to your opponent's rank. In some games, this rank was only provisional until you played a certain number of games. Later on, the rank icon only appeared after enough games were played. Users could also customize their profiles by choosing a portrait from a set of pictures and editing their profile with HTML; however, this feature was removed in later versions of gizmo.
GameSpy buyout
Despite its success in attracting users, Mplayer was still in financial trouble in late 2000, and it had been speculated the division would be sold off, possibly to Sega, owner of Heat.net. However, it was announced in December 2000 that GameSpy, an Irvine-based gaming site founded in 1996, had made a deal to acquire Mplayer from HearMe. The two companies had fully merged by June 2001. Included in the deal were the Mplayer POP.X business unit and gaming service, its Globalrankings system, which ranked players in games, and the Mplayer Entertainment Network, its advertising network. This was all sold off by HearMe for USD $20 million and a 10% stake in GameSpy. HearMe was willing to sell off its entertainment division to focus on its more profitable VoIP unit, while GameSpy wanted Mplayer's userbase for its own multiplayer gaming community. There was also the belief at GameSpy that HearMe had been neglecting the service in favor of its other ventures. At the time, GameSpy was looking to start over from its GameSpy3D service with GameSpy Arcade, which was then in beta. Only a few months after the acquisition, many features from Mplayer had been added to the new service.
References
GameSpy
Online video game services
Internet properties established in 1996 |
11185654 | https://en.wikipedia.org/wiki/Astrolog | Astrolog | Astrolog is an open-source astrological software program that has been available online free of charge since 1991. It has been (as of 2021) authored by Walter Pullen since its creation, and was originally distributed via postings to the Usenet newsgroup .
Astrolog can create horoscopes, natal charts, and calculate current planetary positions in sidereal, traditional, and heliocentric formats. It can be used to relate astrological house and sign dispositors and other nonstandard systems. Astrolog uses 23 different house systems, can calculate 18 different aspects such as trines, squares, semisquares, and rarer ones such as biquintiles, quatronoviles, and sesquiquadrates. Astrolog computes the traditional planets, along with asteroids such as Ceres, Juno and Vesta, trans-Neptunian objects such as Eris, Haumea and Makemake, and uranian objects such as Vulkanus and Russian Proserpina. It can show nearly 50 fixed stars, and animate the chart in real time synchronized with the OS clock. Astrolog can do forms of locational astrology such as astrocartography. Recent versions use the tz database for time zone and daylight saving time detection.
All versions of Astrolog have been distributed with source code, and the most recent versions are free software under the GNU General Public License. Several different parties have contributed to the core program or to alternate versions of it. For example, Astrolog started with the early astrological formulas implemented by Michael Erlewine. Later versions of it made use of the Placalc and then Swiss Ephemeris libraries produced by Astrodienst AG. Other parties contributed PostScript graphics, and integrated ephemeris files for minor planetary objects. Astrolog has been ported to many different platforms, where versions exist for Unix, MS-DOS, Microsoft Windows, Macintosh, OS/2, and Amiga, among others. Astrolog has also been distributed as sample packages with versions of SUSE Linux.
Some consider Astrolog to rival commercial programs in quality. In November 1995, The Mountain Astrologer reviewed a number of software programs, where Astrolog was the one freeware program included, described as "good enough to be worthy of review with the main commercial programs". In the 1990s, Astrolog was mentioned a number of times in American Astrology's "The New Astrology" article by Ken Irving, because at the time it was the only readily available program that could compute Gauquelin sectors as described by astrologer Michel Gauquelin.
All versions of Astrolog include a command line interface. Most criticism of the program concerns its command switches being confusing to memorize or use. Windows and Macintosh versions eventually included a standard GUI. The command switch interface allows all of Astrolog's features to be accessed from shell scripts or invoked by other programs. There are a number of online servers that use Astrolog as a back end to generate charts for the user.
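As a sketch of how such a back-end integration might look, the Python fragment below shells out to the astrolog binary and captures its text output; the switch names passed in are hypothetical placeholders (Astrolog's real switches should be taken from its helpfile), and the binary is assumed to be installed on the PATH:

# Minimal sketch of driving Astrolog from a server-side script.
# The switches passed to generate_chart() below are hypothetical placeholders.
import subprocess

def generate_chart(switches):
    """Run the astrolog binary with the given command switches and return its text output."""
    result = subprocess.run(
        ["astrolog"] + switches,   # assumes the binary is installed and on the PATH
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout

# Hypothetical invocation; substitute real switches from Astrolog's documentation.
print(generate_chart(["-some-chart-switch", "1", "1", "2000"]))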
External links
Astrolog website with version 7.30
References
Astrology software
Cross-platform free software
Free software programmed in C++
Software using the GPL license
1991 software
890353 | https://en.wikipedia.org/wiki/PyGTK | PyGTK | PyGTK is a set of Python wrappers for the GTK graphical user interface library. PyGTK is free software and licensed under the LGPL. It is analogous to PyQt/PySide and wxPython, the Python wrappers for Qt and wxWidgets, respectively. Its original author is GNOME developer James Henstridge. There are six people in the core development team, with various other people who have submitted patches and bug reports. PyGTK has been selected as the environment of choice for applications running on One Laptop Per Child systems.
PyGTK will be phased out with the transition to GTK version 3 and be replaced with PyGObject, which uses GObject Introspection to generate bindings for Python and other languages on the fly. This is expected to eliminate the delay between GTK updates and corresponding language binding updates, as well as reduce maintenance burden on the developers.
Syntax
The Python code below will produce a 200x200 pixel window with the words "Hello World" inside.
import gtk

def create_window():
    # Create a top-level window of the requested size.
    window = gtk.Window()
    window.set_default_size(200, 200)
    # Quit the GTK main loop when the window is closed.
    window.connect("destroy", gtk.main_quit)
    # Add a text label and make both widgets visible.
    label = gtk.Label("Hello World")
    window.add(label)
    label.show()
    window.show()

create_window()
gtk.main()
Notable applications that have used PyGTK
PyGTK has been used in a number of notable applications, some examples:
Anaconda installer
BitTorrent
Deluge
Emesene
Exaile
Flumotion
Gajim
gDesklets
Gedit (for optional Python subsystem and plugins)
GIMP (for optional Python scripts)
GNOME Sudoku
Gramps
Gwibber (microblogging client)
Jokosher
puddletag
PyMusique
Pybliographer
Tryton
ROX Desktop (includes ROX-Filer)
SoundConverter
Ubiquity (Ubuntu installer)
Ubuntu Software Center
Wing IDE
Comix
PyGObject
PyGObject provides a wrapper for use in Python programs when accessing GObject libraries. GObject is an object system used by GTK, GLib, GObject, GIO, GStreamer and other libraries.
Like the GObject library itself, PyGObject is licensed under the GNU LGPL, so it is suitable for use in both free software and proprietary applications. It is already in use in many applications ranging from small single-purpose scripts to large full-featured applications.
PyGObject can dynamically access any GObject libraries that use GObject Introspection. It replaces the need for separate modules such as PyGTK, GIO and python-gnome to build a full GNOME 3.0 application. Once new functionality is added to GObject library it is instantly available as a Python API without the need for intermediate Python glue.
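As an illustration, the earlier "Hello World" example looks roughly like this when written against PyGObject's dynamic bindings (a minimal sketch assuming GTK 3 and the gi package are installed):

import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk

# Create the window and quit the main loop when it is closed.
window = Gtk.Window(title="Hello World")
window.set_default_size(200, 200)
window.connect("destroy", Gtk.main_quit)

# Add a text label, then show the window together with its children.
label = Gtk.Label(label="Hello World")
window.add(label)
window.show_all()

Gtk.main()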
Notable applications that use PyGObject
PyGObject has replaced PyGTK, but it has taken a considerable amount of time for many programs to be ported. Most of the software listed here has an older version which used PyGTK.
Ex Falso
Gramps
Meld
Pitivi
PyChess
Quod Libet
See also
PyQt (Python wrapper for the Qt toolkit)
PySide (Alternative Python wrapper for the Qt toolkit)
wxPython (Python wrapper for the wx widgets collection)
References
External links
PyGTK Homepage
PyGTK FAQ
PyGTK Tutorial
PyGTK Notebook A Journey Through Python Gnome Technologies by Peter Gill
PyGTK at Python wiki
PyGObject Homepage
PyGObject tutorial
GTK language bindings
Python (programming language) libraries
Software that uses PyGObject
Software that uses PyGTK
Widget toolkits |
19682699 | https://en.wikipedia.org/wiki/Chinese%20New%20Year | Chinese New Year | Chinese New Year is the festival that celebrates the beginning of a new year on the traditional lunisolar and solar Chinese calendar. In Chinese and other East and Southeast Asian cultures, the festival is commonly referred to as the Spring Festival (Chinese: / ; pinyin: ) as the spring season in the lunisolar calendar traditionally starts with lichun, the first of the twenty-four solar terms which the festival celebrates around the time of the Chinese New Year. Marking the end of winter and the beginning of the spring season, observances traditionally take place from New Year’s Eve, the evening preceding the first day of the year to the Lantern Festival, held on the 15th day of the year. The first day of Chinese New Year begins on the new moon that appears between 21 January and 20 February.
Chinese New Year is one of the most important holidays in Chinese culture, and has strongly influenced Lunar New Year celebrations of China's 56 ethnic groups, such as the Losar of Tibet (), and of China's neighbours, including the Korean New Year (), and the of Vietnam, as well as in Okinawa. It is also celebrated worldwide in regions and countries that house significant Overseas Chinese or Sinophone populations, especially in Southeast Asia. These include Brunei, Cambodia, Indonesia, Malaysia, Myanmar, the Philippines, Singapore, Thailand, and Vietnam. It is also prominent beyond Asia, especially in Australia, Canada, Mauritius, New Zealand, Peru, South Africa, the United Kingdom, and the United States, as well as various European countries.
The Chinese New Year is associated with several myths and customs. The festival was traditionally a time to honor deities as well as ancestors. Within China, regional customs and traditions concerning the celebration of the New Year vary widely, and the evening preceding the New Year's Day is frequently regarded as an occasion for Chinese families to gather for the annual reunion dinner. It is also traditional for every family to thoroughly clean their house, in order to sweep away any ill fortune and to make way for incoming good luck. Another custom is the decoration of windows and doors with red paper-cuts and couplets. Popular themes among these paper-cuts and couplets include good fortune or happiness, wealth, and longevity. Other activities include lighting firecrackers and giving money in red paper envelopes.
Dates in Chinese lunisolar calendar
The lunisolar Chinese calendar determines the date of Chinese New Year. The calendar is also used in countries that have been influenced by, or have relations with, China – such as Japan, Korea, Taiwan, and Vietnam, though occasionally the date celebrated may differ by one day or even one moon cycle due to using a meridian based on a different capital city in a different time zone or different placements of intercalary months.
The Chinese calendar defines the lunar month containing the winter solstice as the eleventh month, meaning that Chinese New Year usually falls on the second new moon after the winter solstice (rarely the third if an intercalary month intervenes). In more than 96 percent of years, Chinese New Year's Day is the date of the new moon closest to lichun () on 4 or 5 February, and the first new moon after dahan (). In the Gregorian calendar, the Chinese New Year begins at the new moon that falls between 21 January and 20 February.
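The dating rule above is mechanical enough to express in a few lines of code. The sketch below assumes the relevant new-moon dates (in China Standard Time) are already known, here hard-coded for 2023 from published lunar tables, and simply picks the first new moon in the 21 January to 20 February window:

from datetime import date

def chinese_new_year(new_moons):
    """Return the first new moon falling between 21 January and 20 February.

    `new_moons` is an assumed, precomputed list of new-moon dates (China
    Standard Time) around the start of a Gregorian year; real code would
    derive them from an astronomical ephemeris."""
    for d in new_moons:
        if date(d.year, 1, 21) <= d <= date(d.year, 2, 20):
            return d
    return None

# New-moon dates around the start of 2023, in China Standard Time.
moons_2023 = [date(2023, 1, 22), date(2023, 2, 20), date(2023, 3, 22)]
print(chinese_new_year(moons_2023))  # prints 2023-01-22, Chinese New Year 2023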
Mythology
According to tales and legends, Chinese New Year started with a mythical beast called the Nian (a beast that lives under the sea or in the mountains) during the annual Spring Festival. The Nian would eat villagers, especially children in the middle of the night. One year, all the villagers decided to hide from the beast. An older man appeared before the villagers went into hiding and said that he would stay the night and would get revenge on the Nian. The old man put red papers up and set off firecrackers. The day after, the villagers came back to their town and saw that nothing had been destroyed. They assumed that the old man was a deity who came to save them. The villagers then understood that Yanhuang had discovered that the Nian was afraid of the color red and loud noises. Then the tradition grew when New Year was approaching, and the villagers would wear red clothes, hang red lanterns, and red spring scrolls on windows and doors and used firecrackers and drums to frighten away the Nian. From then on, Nian never came to the village again. The Nian was eventually captured by Hongjun Laozu, an ancient Taoist monk. After that, Nian retreated to a nearby mountain. The name of the mountain has long been lost over the years.
There is also a saying that the beast is "Xi", rather than Nian. Spring Festival included New Year's Eve and New Year's Day. Xi is a kind of faint monster, while Nian is not related to animal beasts in terms of meaning; it is more like a ripe harvest. There is no record of the beast Nian in ancient texts; it appears only in Chinese folklore. The word "Nian" is composed of the words "he" and "Qian". It means that the grain is rich and the harvest is good. The farmers review the harvest at the end of the year and are also full of expectations for the coming year.
History
Before the new year celebration was established, ancient Chinese gathered and celebrated the end of harvest in autumn. However, this was not the Mid-Autumn Festival, during which Chinese gathered with family to worship the Moon. In the Classic of Poetry, a poem written during Western Zhou (1045 BC - 771 BC) by an anonymous farmer, described the traditions of celebrating the 10th month of the ancient solar calendar, which was in autumn. According to the poem, during this time people clean millet-stack sites, toast guests with mijiu (rice wine), kill lambs and cook their meat, go to their masters' home, toast the master, and cheer the prospect of living long together. The 10th-month celebration is believed to be one of the prototypes of Chinese New Year.
The records of the first Chinese new year celebration can be traced to the Warring States period (475 BC – 221 BC). In the Lüshi Chunqiu, an exorcism ritual to expel illness, called "Big Nuo" (大儺), was recorded in the state of Qin as being carried out on the last day of the year. Later, Qin unified China and the Qin dynasty was founded, and the ritual spread. It evolved into the practice of cleaning one's house thoroughly in the days preceding Chinese New Year.
The first mention of celebrating at the start of a new year was recorded during the Han dynasty (202 BC – 220 AD). In the book Simin Yueling (四民月令), written by the Eastern Han agronomist Cui Shi (崔寔), a celebration was described: "The starting day of the first month, is called Zheng Ri. I bring my wife and children, to worship ancestors and commemorate my father." Later he wrote: "Children, wife, grandchildren, and great-grandchildren all serve pepper wine to their parents, make their toast, and wish their parents good health. It's a thriving view." The practice of worshipping ancestors on New Year's Eve is maintained by Chinese people to this day.
Han Chinese also started the custom of visiting acquaintances' homes and wishing each other a happy new year. In Book of the Later Han, volume 27, a county officer was recorded as going to his prefect's house with a government secretary, toasting the prefect, and praising the prefect's merit.
During the Jin dynasty (266 – 420 AD), people started the New Year's Eve tradition of all-night revelry called shousui (守歲). It was described in Western Jin general Zhou Chu's article Fengtu Ji (風土記): "At the ending of a year, people gift and wish each other, calling it Kuisui (饋歲); people invited others with drinks and food, calling it Biesui (別歲); on New Year's Eve, people stayed up all night until sunrise, calling it Shousui (守歲)." The article used the word chu xi (除夕) to indicate New Year's Eve, and the name is still used to this day.
The Northern and Southern dynasties book Jingchu Suishiji described the practice of firing bamboo in the early morning of New Year's Day, which became a New Year tradition of the ancient Chinese. Poet and chancellor of the Tang dynasty Lai Gu also described this tradition in his poem Early Spring (早春): "新曆才将半纸开,小亭猶聚爆竿灰", meaning "Another new year just started as a half opening paper, and the family gathered around the dust of exploded bamboo pole". The practice was used by ancient Chinese people to scare away evil spirits, since the bamboo would crack loudly or explode when fired.
During the Tang dynasty, people established the custom of sending bai nian tie (拜年帖), which are New Year's greeting cards. It is said that the custom was started by Emperor Taizong of Tang. The emperor wrote "普天同慶" (whole nation celebrate together) on gold leaves and sent them to his ministers. Word of the emperor's gesture spread, and later it became the custom of people in general, who used Xuan paper instead of gold leaves. Another theory is that bai nian tie was derived from the Han dynasty's name tag, "門狀" (door opening). As imperial examinations became essential and reached their heyday under the Tang dynasty, candidates curried favour to become pupils of respected teachers, in order to get recommendation letters. After obtaining good examination marks, a pupil went to the teacher's home with a men zhuang (门状) to convey their gratitude. Therefore, eventually men zhuang became a symbol of good luck, and people started sending them to friends on New Year's Day, calling them by a new name, bai nian tie (拜年帖, New Year's Greetings).
The chunlian (spring couplet) is traditionally said to have originated with Meng Chang, an emperor of the Later Shu (935 – 965 AD) during the Five Dynasties and Ten Kingdoms period, who wrote: "新年納餘慶,嘉節號長春" (Enjoying past legacies in the new year, the holiday foreseeing the long-lasting spring). As described by Song dynasty official Zhang Tangying in his book Shu Tao Wu, volume 2: on the day of New Year's Eve, the emperor ordered the scholar Xin Yinxun to write the couplets on peach wood and hang them on the emperor's bedroom door. It is believed that placing the couplets on the door to the home in the days preceding the new year became widespread during the Song dynasty. The famous Northern Song politician, litterateur, philosopher, and poet Wang Anshi recorded the custom in his poem "元日" (New Year's Day).
The poem Yuan Ri (元日) also includes the word "爆竹" (bao zhu, exploding bamboo), which is believed to be a reference to firecrackers, instead of the previous tradition of firing bamboo, both of which are called the same in the Chinese language. After gunpowder was invented in the Tang dynasty and widely used under the Song dynasty, people modified the tradition of firing bamboo by filling the bamboo pole with gunpowder, which made for louder explosions. Later under the Song, people discarded the bamboo and started to use paper to wrap the gunpowder in cylinders, in imitation of the bamboo. The firecracker was still called "爆竹", thus equating the new and old traditions. It is also recorded that people linked the firecrackers with hemp rope and created the "鞭炮" (bian pao, gunpowder whip) in the Song dynasty. Both "爆竹" and "鞭炮" are still used by present-day people to celebrate the Chinese New Year and other festive occasions.
It was also during the Song dynasty that people started to give money to children in celebration of a new year. The money was called sui nian qian (随年钱), meaning "the money based on age". In the chapter "Ending of a year" (歲除) of Wulin jiushi (武林舊事), the writer recorded that concubines of the emperor prepared a hundred and twenty coins for princes and princesses, to wish them long lives.
The new year celebration continued under the Yuan dynasty, when people also gave nian gao (年糕, year cakes) to relatives.
The tradition of eating Chinese dumplings jiaozi (餃子) was established under the Ming dynasty at the latest. It is described in the book Youzhongzhi (酌中志): "People get up at 5 in the morning of new year's day, burn incense and light firecrackers, throw door latch or wooden bars in the air three times, drink pepper and thuja wine, eat dumplings. Sometimes put one or two silver currency inside dumplings, and whoever gets the money will attain a year of fortune." Modern Chinese people also put other food that is auspicious into dumplings: such as dates, which prophesy a flourishing new year; candy, which predicts sweet days; and nian gao, which foretells a rich life.
In the Qing dynasty, the name ya sui qian (壓歲錢, New Year's Money) was given to the lucky money given to children at the new year. The book Qing Jia Lu (清嘉錄) recorded: "elders give children coins threaded together by a red string, and the money is called Ya Sui Qian." The name is still used by modern Chinese people. The lucky money was presented in one of two forms: one was coins strung on red string; the other was a colorful purse filled with coins.
In 1928, the ruling Kuomintang party decreed that Chinese New Year would fall on 1 January of the Gregorian calendar, but this was abandoned due to overwhelming popular opposition. In 1967, during the Cultural Revolution, official Chinese New Year celebrations were banned in China. The State Council of the People's Republic of China announced that the public should "change customs" and have a "revolutionized and fighting Spring Festival", and since people needed to work on Chinese New Year's Eve, they did not need holidays during Spring Festival day. The old celebrations were reinstated in 1980.
Naming
While "Chinese New Year" remains the official name for the festival in Taiwan, the name "Spring Festival" was adopted by the People's Republic of China instead. On the other hand, the overseas Chinese diaspora mostly prefer the term "Lunar New Year", while "Chinese New Year" remains a popular and convenient translation for people of non-Chinese cultural backgrounds. Along with the Han Chinese in and outside Greater China, as many as 29 of the 55 ethnic minority groups in China also celebrate Chinese New Year. Korea, Vietnam, Singapore, Malaysia, Indonesia and the Philippines celebrate it as an official festival.
Public holiday
Chinese New Year is observed as a public holiday in some countries and territories where there is a sizable Chinese population. Since Chinese New Year falls on a different date of the Gregorian calendar every year, and hence on different days of the week, some of these governments opt to shift working days in order to accommodate a longer public holiday. In some countries, a statutory holiday is added on the following work day if the New Year (as a public holiday) falls on a weekend, as in 2013, when New Year's Eve (9 February) fell on a Saturday and New Year's Day (10 February) on a Sunday. Depending on the country, the holiday may be termed differently; common names in English are "Chinese New Year", "Lunar New Year", "New Year Festival", and "Spring Festival".
For New Year celebrations that are lunar but are outside of China and Chinese diaspora (such as Korea's Seollal and Vietnam's Tết), see the article on Lunar New Year.
For other countries and regions where Chinese New Year is celebrated but not an official holiday, see the table below.
Festivities
During the festival, people around China prepare different gourmet dishes for their families and guests. Influenced by local cultures, foods from different regions look and taste very different. Among the best known are dumplings from northern China and tangyuan from southern China.
Preceding days
On the eighth day of the lunar month prior to Chinese New Year, the Laba holiday, a traditional porridge, Laba porridge, is served in remembrance of an ancient festival, called La, that occurred shortly after the winter solstice. Pickles such as Laba garlic, which turns green from vinegar, are also made on this day. For those who practice Buddhism, the Laba holiday is also considered Bodhi Day. Layue is a term often associated with Chinese New Year, as it refers to the sacrifices held in honor of the gods in the twelfth lunar month; hence the cured meats of Chinese New Year are known as larou. The porridge was prepared by the women of the household at first light, with the first bowl offered to the family's ancestors and the household deities. Every member of the family was then served a bowl, with leftovers distributed to relatives and friends. It is still served as a special breakfast on this day in some Chinese homes. The concept of the "La month" is similar to Advent in Christianity. Many families eat vegetarian food on Chinese New Year's Eve; the garlic and preserved meat are eaten on Chinese New Year's Day.
On the days immediately before the New Year celebration, Chinese families give their homes a thorough cleaning. There is a Cantonese saying "Wash away the dirt on nin ya baat", but the practice is not restricted to nin ya baat (the 28th day of month 12). It is believed the cleaning sweeps away the bad luck of the preceding year and makes their homes ready for good luck. Brooms and dust pans are put away on the first day so that the newly arrived good luck cannot be swept away. Some people give their homes, doors and window-frames a new coat of red paint; decorators and paper-hangers do a year-end rush of business prior to Chinese New Year. Homes are often decorated with paper cutouts of Chinese auspicious phrases and couplets. Purchasing new clothing and shoes also symbolizes a new start. Any haircuts need to be completed before the New Year, as cutting hair on New Year is considered bad luck due to the homonymic nature of the word "hair" (fa) and the word for "prosperity". Businesses are expected to pay off all debts outstanding for the year before New Year's Eve, extending to debts of gratitude. Thus it is a common practice to send gifts and rice to close business associates and extended family members.
In many households where Buddhism or Taoism is observed, home altars and statues are cleaned thoroughly, and decorations used to adorn altars over the past year are taken down and burned a week before the new year starts, on Little New Year, to be replaced with new decorations. Taoists (and Buddhists to a lesser extent) will also "send gods back to heaven"; an example would be burning a paper effigy of Zao Jun, the Kitchen God, the recorder of family functions. This is done so that the Kitchen God can report to the Jade Emperor on the family household's transgressions and good deeds. Families often offer sweet foods (such as candy) in order to "bribe" the deities into reporting good things about the family.
Prior to the Reunion Dinner, a prayer of thanksgiving is held to mark the safe passage of the previous year. Confucianists take the opportunity to remember their ancestors, and those who had lived before them are revered. Some people do not give a Buddhist prayer due to the influence of Christianity, with a Christian prayer offered instead.
Chinese New Year's Eve
The day before Chinese New Year is usually marked by a dinner feast at which special meats are served as the main course and as an offering for the New Year. This meal is comparable to Thanksgiving dinner in the U.S. and remotely similar to Christmas dinner in other countries with a high percentage of Christians.
In northern China, it is customary to make jiaozi, or dumplings, after dinner to eat around midnight. Dumplings symbolize wealth because their shape resembles a Chinese sycee. In contrast, in the South, it is customary to make a glutinous new year cake (niangao) and send pieces of it as gifts to relatives and friends in the coming days. Niángāo literally means "new year cake", with a homophonous meaning of "increasingly prosperous year in year out".
After dinner, some families may visit local temples hours before midnight to pray for success by lighting the first incense of the year; in modern practice, however, many households hold parties to celebrate. Traditionally, firecrackers were lit to ward off evil spirits once the household doors were sealed; the doors were not to be reopened until dawn, in a ritual called "opening the door of fortune". A tradition of staying up late on Chinese New Year's Eve is known as shousui, which is still practised as it is thought to add to one's parents' longevity.
First day
The first day, known as the Spring Festival, is for the welcoming of the deities of the heavens and Earth at midnight. Lighting fireworks, burning bamboo sticks and firecrackers, and performing lion dances are traditional practices to ward off evil spirits.
Lighting fires and using knives are considered taboo on this day, so all food to be consumed has to be cooked beforehand. Using a broom and swearing are also considered taboo, as is breaking any dinnerware without appeasing the deities.
Traditions observed on the first day involve family gatherings, specifically visits to the oldest and most senior members of the extended family, usually parents, grandparents and great-grandparents, and the exchange of Mandarin oranges as a courtesy symbolizing wealth and good luck. Married members of the family also give red envelopes containing cash, known as lai see (Cantonese), angpow (Hokkien and Teochew) or hongbao (Mandarin), to junior members of the family, mostly children and teenagers, as a form of blessing and to suppress the aging and challenges associated with the coming year. Business managers may also give bonuses in the form of red packets to employees to symbolize a smooth-sailing career. The money can be of any amount, preferably ending with the digit 8, which sounds like the word for prosperity, but packets with odd-numbered denominations or without money are avoided as bad luck, especially those involving the number 4, which sounds like the word for death.
While fireworks and firecrackers are traditionally very popular, some regions have banned them due to concerns over fire hazards. For this reason, various city governments (e.g., Kowloon, Beijing and Shanghai, for a number of years) issued bans on fireworks and firecrackers in certain precincts of the city. As a substitute, large-scale fireworks displays have been launched by governments in Hong Kong and Singapore. An exception is made in Hong Kong, however, for the indigenous inhabitants of the walled villages of the New Territories, on a limited scale.
Second day
On the second day, entitled "a year's beginning", married daughters visit their birth parents, relatives and close friends, renewing family ties and relationships. (Traditionally, married daughters didn't have the opportunity to visit their birth families frequently.)
The second day also sees offerings of money and sacrifices to the God of Wealth, symbolizing a rewarding time after the hardship of the preceding year. During the days of imperial China, "beggars and other unemployed people circulate[d] from family to family, carrying a picture [of the God of Wealth] shouting, "Cai Shen dao!" [The God of Wealth has come!]." Householders would respond with "lucky money" to reward the messengers. Business people of the Cantonese dialect group will hold a 'Hoi Nin' prayer to start their business on the second day of Chinese New Year, praying that their business will thrive in the coming year.
As this day is believed to be The Birthday of Che Kung, a deity worshipped in Hong Kong, worshippers go to Che Kung Temples to pray for his blessing. A representative from the government asks Che Kung about the city's fortune through kau cim.
Third day
The third day is known as "red mouth" (Chikou). Chikou is also called "Chigou's Day". Chigou, literally "red dog", is an epithet of "the God of Blazing Wrath". Rural villagers continue the tradition of burning paper offerings over trash fires. It is considered an unlucky day to have guests or go visiting. Hakka villagers in rural Hong Kong in the 1960s called it the Day of the Poor Devil and believed everyone should stay at home. This is nevertheless also considered a propitious day to visit the temple of the God of Wealth and have one's future told.
Fourth day
In those communities that celebrate Chinese New Year for 15 days, the fourth day is when corporate "spring dinners" kick off and business returns to normal. Other areas that have a longer Chinese New Year holiday will celebrate and welcome the gods that were previously sent on this day.
Fifth day
This day is the God of Wealth's birthday. In northern China, people eat jiaozi, or dumplings, on the morning of powu. In Taiwan, businesses traditionally re-open on the next day (the sixth day), accompanied by firecrackers.
It is also common in China that on the 5th day people will shoot off firecrackers to get Guan Yu's attention, thus ensuring his favor and good fortune for the new year.
Sixth day
The sixth day is Horse's Day, on which people drive away the Ghost of Poverty by throwing out the garbage stored up during the festival. The ways vary but basically have the same meaning—to drive away the Ghost of Poverty, which reflects the general desire of the Chinese people to ring out the old and ring in the new, to send away the previous poverty and hardship and to usher in the good life of the New Year.
Seventh day
The seventh day, traditionally known as Renri (the common person's birthday), is the day when everyone grows one year older. In some overseas Chinese communities in Southeast Asia, such as Malaysia and Singapore, it is also the day when tossed raw fish salad, yusheng, is eaten for continued wealth and prosperity.
For many Chinese Buddhists, this is another day to avoid meat, the seventh day commemorating the birth of Sakra, lord of the devas in Buddhist cosmology who is analogous to the Jade Emperor.
Eighth day
Another family dinner is held to celebrate the eve of the birth of the Jade Emperor, the ruler of heaven. People normally return to work by the eighth day, so store owners host a lunch or dinner with their employees, thanking them for the work they have done throughout the year.
Ninth day
The ninth day is traditionally known as the birthday of the Jade Emperor of Heaven, and many people offer prayers of thanks in the Taoist pantheon. The day is commonly known as Ti Kong Dan, Ti Kong Si or Pai Ti Kong, and is especially important to Hokkiens, second only to the first day of the Chinese New Year.
A prominent requisite offering is sugarcane. Legend holds that the Hokkien were spared from a massacre by Japanese pirates by hiding in a sugarcane plantation between the eighth and ninth days of the Chinese New Year, coinciding with the Jade Emperor's birthday. "Sugarcane" is a near homonym to "thank you" in the Hokkien dialect.
In the morning (traditionally anytime between midnight and 7 am), Taiwanese households set up an altar table with three layers: one top layer containing offertories of six vegetables (noodles, fruits, cakes, tangyuan, vegetable bowls, and unripe betel), all decorated with paper lanterns, and two lower levels containing five sacrifices and wines, to honor the deities below the Jade Emperor. The household then kneels three times and kowtows nine times to pay obeisance and wish him a long life.
Incense, tea, fruit, vegetarian food or roast pig, and gold paper, are served as a customary protocol for paying respect to an honored person.
Tenth day
The nation celebrates the Jade Emperor's birthday on this day.
Fifteenth day
The fifteenth day of the new year is celebrated as the Lantern Festival, also known as the Yuanxiao Festival, the Shangyuan Festival, and Chap Goh Meh (in Hokkien). Rice dumplings, or tangyuan, sweet glutinous rice balls served in a soup, are eaten on this day. Candles are lit outside houses as a way to guide wayward spirits home. Families may walk the streets carrying lanterns.
In China and Malaysia, this day is celebrated by individuals seeking a romantic partner, akin to Valentine's Day. Nowadays, single women write their contact number on mandarin oranges and throw them in a river or a lake after which single men collect the oranges and eat them. The taste is an indication of their possible love: sweet represents a good fate while sour represents a bad fate.
This day often marks the end of the Chinese New Year festivities.
Traditional food
A reunion dinner (nián yè fàn) is held on New Year's Eve, during which family members gather for a celebration. The venue will usually be in or near the home of the most senior member of the family. The New Year's Eve dinner is very large and sumptuous and traditionally includes dishes of meat (namely, pork and chicken) and fish. Most reunion dinners also feature a communal hot pot, as it is believed to signify the coming together of the family members for the meal. Most reunion dinners (particularly in the Southern regions) also prominently feature specialty meats (e.g. wax-cured meats like duck and Chinese sausage) and seafood (e.g. lobster and abalone) that are usually reserved for this and other special occasions during the remainder of the year. In most areas, fish is included, but not eaten completely (and the remainder is stored overnight), as the Chinese phrase "may there be surpluses every year" sounds the same as "let there be fish every year". Eight individual dishes are served to reflect the belief of good fortune associated with the number. If a death was experienced in the family during the previous year, seven dishes are served.
Other traditional foods consist of noodles, fruits, dumplings, spring rolls, and tangyuan, also known as sweet rice balls. Each dish served during Chinese New Year represents something special. The noodles used to make longevity noodles are usually very thin, long wheat noodles; they are longer than normal noodles, and are usually fried and served on a plate, or boiled and served in a bowl with broth. The noodles symbolize the wish for a long life. The fruits typically selected are oranges, tangerines, and pomelos, as they are round and "golden" in color, symbolizing fullness and wealth. Their lucky-sounding names also bring good luck and fortune. The Chinese pronunciation for orange, 橙 (chéng), sounds the same as the Chinese for "success" (成). One of the ways of writing tangerine (桔 jú) contains the Chinese character for luck (吉 jí). Pomelos are believed to bring constant prosperity: pomelo in Chinese, 柚 (yòu), sounds similar to "to have" (有 yǒu), disregarding the tone, and exactly like "again" (又 yòu). Dumplings and spring rolls symbolize wealth, whereas sweet rice balls symbolize family togetherness.
Red packets for the immediate family are sometimes distributed during the reunion dinner. These packets contain money in an amount that reflects good luck and honorability. Several foods are consumed to usher in wealth, happiness, and good fortune. Several of the Chinese food names are homophones for words that also mean good things.
Many places in China still follow the tradition of eating only vegetarian food on the first day of the New Year, as it is believed that doing so will bring joy and peace into their lives for the whole year.
Like many other New Year dishes, certain ingredients take special precedence over others, as their names sound similar to words for prosperity, good luck, or even counting money.
Practices
Red envelopes
Traditionally, red envelopes or red packets (Mandarin: hongbao; Hakka: fung bao; Cantonese: lai see) are passed out during the Chinese New Year's celebrations, from married couples or the elderly to unmarried juniors or children. During this period, red packets are also known as "yasuiqian" (壓歲錢, which evolved from 壓祟錢, literally "the money used to suppress or put down the evil spirit"). According to legend, a demon named Sui patted a child on the head three times on New Year's Eve, and the child would have a fever. The parents wrapped coins in red paper and placed them next to their children's pillows. When Sui came, the flash of the coins scared him away. From then on, every New Year's Eve, parents would wrap coins in red paper to protect their children.
Red packets almost always contain money, usually varying from a couple of dollars to several hundred. Chinese superstitions favour amounts that begin with even numbers, such as 8 (八, bā), a homophone for "wealth", and 6 (六, liù), a homophone for "smooth", but not the number 4 (四, sì), as it is a homophone of "death" and is, as such, considered unlucky in Asian culture. Odd numbers are also avoided, as they are associated with cash given during funerals (帛金). It is also customary for bills placed inside a red envelope to be new.
The act of asking for red packets is normally called (Mandarin) 討紅包 tǎo-hóngbāo or 要利是, or (Cantonese) 逗利是. A married person would not turn down such a request, as it would mean that he or she would be "out of luck" in the new year. Red packets are generally given by established married couples to the younger non-married children of the family. It is customary and polite for children to wish elders a happy new year and a year of happiness, health and good fortune before accepting the red envelope. Red envelopes are then kept under the pillow and slept on for seven nights after Chinese New Year before opening, because that symbolizes good luck and fortune.
In Taiwan in the 2000s, some employers also gave red packets as a bonus to maids, nurses or domestic workers from Southeast Asian countries, although whether this is appropriate is controversial.
In the mid-2010s, Chinese messaging apps such as WeChat popularized the distribution of red envelopes in a virtual format via mobile payments, usually within group chats. In 2017, it was estimated that over 100 billion of these virtual red envelopes would be sent over the New Year holiday.
Mythology
According to legend, a monster named sui (祟) comes out on New Year's Eve and touches the heads of sleeping children. The child, frightened by the touch, wakes up with a fever, and the fever eventually leaves the child intellectually impaired. Hence, families would light up their homes and stay awake, leading to a tradition of 守祟, guarding against sui harming their children.
A folklore tale about sui tells of an elderly couple with a precious son. On the night of New Year's Eve, fearing that sui would come, they took out eight pieces of copper coin to play with their son in order to keep him awake. Their son was very sleepy, however, so they let him go to sleep after placing a red paper bag containing the copper coins under his pillow. The couple stayed with him for the whole night. Suddenly, the doors and windows were blown open by a strange wind, and even the candlelight was extinguished: it was a sui. When the sui was about to reach out and touch the child's head, the pillow suddenly brightened with golden light and the sui was scared away, and so the exorcising power of "copper money wrapped in red paper" became known throughout old China (see also Chinese numismatic charms). The money was thereafter called "ya sui qian (壓歲錢)", the money to suppress sui.
Another tale is that a huge demon was terrorising a village and there was nobody in the village who was able to defeat the demon; many warriors and statesmen had tried with no luck. A young orphan stepped in, armed with a magical sword that was inherited from his ancestors, and battled the demon, eventually killing it. Peace was finally restored to the village, and the elders all presented the brave young man with a red envelope filled with money to repay the young orphan for his courage and for ridding the village of the demon.
Gift exchange
In addition to red envelopes, which are usually given from older people to younger people, small gifts (usually food or sweets) are also exchanged between friends or relatives (of different households) during Chinese New Year. Gifts are usually brought when visiting friends or relatives at their homes. Common gifts include fruits (typically oranges, but never pears), cakes, biscuits, chocolates, and candies. Gifts are preferably wrapped with red or golden paper, which symbolises good luck.
Certain items should not be given, as they are considered taboo. Taboo gifts include:
items associated with funerals (i.e. handkerchiefs, towels, chrysanthemums, items colored white and black)
items that show that time is running out (i.e. clocks and watches)
sharp objects that symbolize cutting a tie (i.e. scissors and knives)
items that symbolize that you want to walk away from a relationship (examples: shoes and sandals)
mirrors
homonyms for unpleasant topics (examples: "clock" sounds like "the funeral ritual" or "the end of life", green hats because "wear a green hat" sounds like "cuckold", "handkerchief" sounds like "goodbye", "pear" sounds like "separate", "umbrella" sounds like "disperse", and "shoe" sounds like a "rough" year).
Markets
Markets or village fairs are set up as the New Year is approaching. These usually open-air markets feature new year related products such as flowers, toys, clothing, and even fireworks and firecrackers. It is convenient for people to buy gifts for their new year visits as well as their home decorations. In some places, the practice of shopping for the perfect plum tree is not dissimilar to the Western tradition of buying a Christmas tree.
Hong Kong filmmakers also release "New Year celebration films", mostly comedies, at this time of year.
Fireworks
Bamboo stems filled with gunpowder, burnt to create small explosions, were once used in ancient China to drive away evil spirits. In modern times, this method has evolved into the use of firecrackers during the festive season. Firecrackers are usually strung on a long fused string so they can be hung down. Each firecracker is rolled up in red paper, as red is auspicious, with gunpowder at its core. Once ignited, the firecracker lets out a loud popping noise and, as they are usually strung together by the hundreds, the firecrackers are known for their deafening explosions, which are thought to scare away evil spirits. The burning of firecrackers also signifies a joyful time of year and has become an integral aspect of Chinese New Year celebrations. Since the 2000s, firecrackers have been banned in various countries and towns.
Music
"Happy New Year!" () is a popular children's song for the New Year holiday. The melody is similar to the American folk song, Oh My Darling, Clementine. Another popular Chinese New Year song is Gong Xi Gong Xi()
.
Movies
Watching Chinese New Year films is an expression of Chinese cultural identity. During the New Year holidays in the past, stage bosses would gather the most popular actors from various troupes and have them perform repertoires from the Qing dynasty. Nowadays people prefer celebrating the new year with their family by watching these movies together.
Clothing
The color red is commonly worn throughout Chinese New Year; traditional beliefs held that red could scare away evil spirits and bad fortune. The wearing of new clothes is another custom during the festival: the new clothes symbolize a new beginning in the year and having enough things to use and wear in the time ahead.
Family portrait
In some places, the taking of a family portrait is an important ceremony after the relatives are gathered. The photo is taken at the hall of the house or taken in front of the house. The most senior male head of the family sits in the center.
Symbolism
As with all cultures, Chinese New Year traditions incorporate elements that are symbolic of deeper meaning. One common example of Chinese New Year symbolism is the red diamond-shaped fu characters (福), which are displayed on the entrances of Chinese homes. This sign is usually seen hanging upside down, since the Chinese word dao (倒, "upside down") is homophonous or nearly homophonous with dao (到, "to arrive") in all varieties of Chinese. Therefore, it symbolizes the arrival of luck, happiness, and prosperity.
For the Cantonese-speaking people, if the fu sign is hung upside down, the implied dao (upside down) sounds like the Cantonese word for "pour", producing "pour the luck [away]", which would usually symbolize bad luck; this is why the fu character is not usually hung upside-down in Cantonese communities.
Red is the predominant color used in New Year celebrations. Red is the emblem of joy, and this color also symbolizes virtue, truth and sincerity. On the Chinese opera stage, a painted red face usually denotes a sacred or loyal personage and sometimes a great emperor. Candies, cakes, decorations and many things associated with the New Year and its ceremonies are colored red. The Chinese word for "red" (hóng) is homophonous in Mandarin with the word for "prosperous". Therefore, red is an auspicious color and has an auspicious sound.
According to Chinese tradition, the year of the pig is a generally unlucky year for the public, so decisions are to be re-evaluated carefully before a conclusion is reached; being cautious in this way is believed to give a person more control over the year's events.
Nianhua
Nianhua are a form of Chinese colored woodblock printing, used for decoration during Chinese New Year.
Flowers
The following are popular floral decorations for the New Year and are available at new year markets.
{| class="wikitable"
!Floral Decor ||Meaning
|-
|Plum Blossom || symbolizes luckiness
|-
|Kumquat || symbolizes prosperity
|-
|Calamondin || symbolizes luck
|-
|Narcissus || symbolizes prosperity
|-
|Bamboo || a plant used for any time of year
|-
|Sunflower || means to have a good year
|-
|Eggplant || a plant said to heal sicknesses
|-
|Chom Mon Plant || a plant said to bring tranquility
|}
In general, chrysanthemums should not be put in the home during the new year, except those in lucky colors such as red and yellow, because chrysanthemums are normally used for ancestral veneration.
Icons and ornaments
{| class="wikitable"
! style="width:150px;"|Icons || Meaning || Illustrations
|-
| Lanterns || These lanterns differ from those of the Mid-Autumn Festival in general: they are red in color and tend to be oval in shape. These are the traditional Chinese paper lanterns. The lanterns used on the fifteenth day of the Chinese New Year for the Lantern Festival are bright, colorful, and come in many different sizes and shapes.
|
|-
| Decorations || Decorations generally convey a New Year greeting. They are not advertisements. Faichun, also known as Huichun—Chinese calligraphy of auspicious Chinese idioms on typically red posters—are hung on doorways and walls. Other decorations include a New year picture, Chinese knots, and papercutting and couplets. ||
|-
| Dragon dance and Lion dance || Dragon and lion dances are common during Chinese New Year. It is believed that the loud beats of the drum and the deafening sounds of the cymbals together with the face of the Dragon or lion dancing aggressively can evict bad or evil spirits. Lion dances are also popular for opening of businesses in Hong Kong and Macau. ||
|-
| Fortune gods || Cai Shen Ye, Che Kung, etc. ||
|-
| Red envelope || Typically given to children, the elderly and dragon/lion dance performers while exchanging auspicious greetings. ||
|}
Spring travel
Traditionally, families gather together during the Chinese New Year. In modern China, migrant workers travel home to have reunion dinners with their families on Chinese New Year's Eve. Owing to the large number of interprovincial travelers, special arrangements are made by railways, buses and airlines starting from 15 days before New Year's Day. This 40-day period is called chunyun, and is known as the world's largest annual migration. More interurban trips are taken in mainland China in this period than the total population of China.
In Taiwan, spring travel is also a major event. The majority of transportation in western Taiwan is in a north–south direction: long-distance travel between urbanized north and hometowns in the rural south. Transportation in eastern Taiwan and that between Taiwan and its islands is less convenient. Cross-strait flights between Taiwan and mainland China began in 2003 as part of Three Links, mostly for "Taiwanese businessmen" to return to Taiwan for the new year.
Festivities outside Greater China
Chinese New Year is also celebrated annually in many countries with significant Chinese populations, including countries throughout Asia, Oceania, and North America. Sydney, London, and San Francisco claim to host the largest New Year celebration outside of Asia and South America.
Southeast Asia
Chinese New Year is a national public holiday in many Southeast Asian countries and considered to be one of the most important holidays of the year.
Malaysia
Chinese New Year's Eve is typically a half-day holiday in Malaysia, and Chinese New Year is a two-day public holiday. The biggest celebrations take place in Kuala Lumpur, George Town, Johor Bahru and Ipoh.
Singapore
In Singapore, Chinese New Year is officially a two-day public holiday and is accompanied by various festive activities. One of the main highlights is the Chinatown celebrations. In 2010, these included a Festive Street Bazaar, nightly staged shows at Kreta Ayer Square and a lion dance competition. The Chingay Parade also features prominently in the celebrations. It is an annual street parade in Singapore, well known for its colorful floats and wide variety of cultural performances. The highlights of the parade for 2011 included a fire party, multi-ethnic performances and an unprecedented travelling dance competition.
Philippines
In the Philippines, Chinese New Year is considered the most important festival for Filipino-Chinese, and its celebration has also extended to the non-Chinese Filipino majority. In 2012, Chinese New Year was included among the public holidays in the Philippines, though only New Year's Day itself is a holiday. (The festival is known as Sin-nî in Philippine Hokkien.)
Indonesia
In Indonesia, the Chinese New Year is officially named Hari Tahun Baru Imlek, or Sin Cia in Hokkien. It was celebrated as one of the official national religious holidays by Chinese Indonesians from 18 June 1946 to 1 January 1953, through a government regulation signed by President Sukarno on 18 June 1946. It was then celebrated unofficially by ethnic Chinese from 1953 to 1967, based on a government regulation signed by Vice President Muhammad Hatta on 5 February 1953, which annulled the previous regulation and, among other things, removed the Chinese New Year as a national religious holiday. From 6 December 1967 until 1998, the spiritual practice of celebrating the Chinese New Year by Chinese families was restricted specifically to inside the Chinese home. This restriction was imposed by the Indonesian government through a Presidential Instruction, Instruksi Presiden No. 14 Tahun 1967, signed by President Suharto, and ended when the regime changed and President Suharto was overthrown. The celebration was conducted unofficially by the Chinese community from 1999 to 2000. On 17 January 2000, President Abdurrahman Wahid issued a Presidential Decree, Keputusan Presiden RI No 6 Tahun 2000, annulling Instruksi Presiden No. 14 Tahun 1967. On 19 January 2001, the Ministry of Religious Affairs (Kementerian Agama Republik Indonesia) issued a decree, Keputusan Menteri Agama RI No 13 Tahun 2001 tentang Imlek sebagai Hari Libur Nasional, designating Hari Tahun Baru Imlek as a facultative holiday for the Chinese community. Through a Presidential Decree, it was officially declared a one-day public religious holiday as of 9 April 2002 by President Megawati.

The Indonesian government authorizes only the first day of the Chinese New Year as a public religious holiday, and it is specifically designated for Chinese people; the remaining 14 days are celebrated only by ethnic Chinese families. In Indonesia, the Chinese year is named as a year of Kǒngzǐ (孔子), or Kongzili in Indonesian. Every year, the Ministry of Religious Affairs sets the specific dates of religious holidays based on input from religious leaders. The Chinese New Year is the only national religious holiday in Indonesia that was enacted specifically by a Presidential Decree, in this case Keputusan Presiden Republik Indonesia (Keppres RI) No 19 Tahun 2002, dated 9 April 2002. The celebration of the Chinese New Year as a religious holiday is specifically intended for Chinese people in Indonesia (tradisi masyarakat Cina yang dirayakan secara turun temurun di berbagai wilayah di Indonesia, dan umat Agama Tionghoa) and is not intended to be celebrated by Indonesian indigenous peoples (Masyarakat Pribumi Indonesia).
Cities and towns with big Chinese populations and Chinatowns, including Jakarta, Medan, Singkawang, Pangkal Pinang, Binjai, Bagansiapiapi, Tanjungbalai, Pematangsiantar, Selat Panjang, Pekanbaru, Tanjung Pinang, Batam, Ketapang, Pontianak, Sungailiat, Tanjung Pandan, Manggar, Toboali, Muntok, Lubuk Pakam, Bandung, Semarang, Surabaya, Rantau Prapat, Tebing Tinggi, Sibolga, Dumai, Panipahan, Bagan Batu, Tanjung Balai Karimun, Palembang, Bengkayang, and Tangerang, always hold their own New Year's celebrations every year, with parades and fireworks. Many shopping malls decorate their buildings with lanterns, Chinese characters, and lions or dragons, with red and gold as the main colors. Lion dances are a common sight around Chinese houses, temples, and shophouses. Usually, Buddhist, Confucian, and Taoist Chinese will burn a big incense stick made of aloeswood, decorated with a dragon, in front of their house. The Chinese temples are open 24 hours on the first day, and also distribute red envelopes and sometimes rice, fruit, or sugar to the poor.
Thailand
Thailand has one of the largest populations of Chinese descent, and Chinese New Year festivities are held throughout the country, especially in provinces where many people of Chinese descent live, such as Nakhon Sawan, Suphan Buri and Phuket; the celebrations are also promoted as tourist attractions. In the capital, Bangkok, there is a great celebration in the Chinatown on Yaowarat Road, which is usually closed to traffic and turned into a pedestrian street; a member of the royal family, such as Princess Maha Chakri Sirindhorn, often presides over the opening ceremony each year. The festival is divided into three days: the first day is Wan chai (pay day), on which people go out to shop for offerings; the second day is Wan wai (worship day), a day of worshipping the gods and ancestral spirits, divided into three periods (dawn, late morning and afternoon); and the third day is Wan tieow (holiday), on which everyone leaves the house to travel or to visit and bless relatives or respected people. Red clothes are often worn, as the color is believed to bring auspiciousness to life.
Chinese New Year is observed by Thai Chinese and parts of the private sector, usually celebrated for three days starting on the day before Chinese New Year's Eve. It is observed as a public holiday in Narathiwat, Pattani, Yala, Satun and Songkhla Provinces. For the year 2021 only, the government declared Chinese New Year a government holiday; it applied mostly to civil servants, while financial institutions and private businesses could decide whether or not to observe it.
Australia and New Zealand
With one of the largest Chinese populations outside of Asia, Sydney also claims to have the largest Chinese New Year celebrations outside of Asia, with over 600,000 people attending the celebrations in Chinatown annually. The events there span over three weeks, including the launch celebration, outdoor markets, evening street food stalls, top Chinese opera performances, dragon boat races, a film festival and multiple parades incorporating Chinese, Japanese, Korean and Vietnamese performers. More than 100,000 people attend the main parade, with over 3,500 performers. The festival also attracts international media coverage, reaching millions of viewers in Asia. The festival in Sydney is organized in partnership with a different Chinese province each year. Apart from Sydney, other state capital cities in Australia also celebrate Chinese New Year due to the large number of Chinese residents, including Brisbane, Adelaide, Melbourne (Box Hill) and Perth. The common activities are the lion dance, the dragon dance, New Year markets, and food festivals. In the Melbourne suburb of Footscray, Victoria, a Lunar New Year celebration initially focusing on the Vietnamese New Year has expanded into a celebration of the Chinese New Year as well as of the April New Year celebrations of the Thais, Cambodians, Laotians and other Asian Australian communities who celebrate the New Year in either January/February or April.
The city of Wellington hosts a two-day weekend festival for Chinese New Year, and a one-day festival is held in Dunedin, centred on the city's Chinese gardens.
North America
Many cities in North America sponsor official parades for Chinese New Year. Among the cities with such parades are New York City (Manhattan; Flushing, Queens; and Brooklyn), San Francisco, Los Angeles, Boston, Chicago, Mexico City, Toronto, and Vancouver. However, even smaller cities that are historically connected to Chinese immigration, such as Butte, Montana, have recently hosted parades.
New York
Multiple groups in New York City cooperate to sponsor a week-long Lunar New Year celebration. The festivities include a cultural festival, a music concert, fireworks on the Hudson River near the Chinese Consulate, and special exhibits. One of the key celebrations is the Chinese New Year parade, with floats and fireworks, taking place along the streets in Lower Manhattan. In June 2015, New York City Mayor Bill de Blasio declared that the Lunar New Year would be made a public school holiday.
California
The San Francisco Chinese New Year Festival and Parade is the oldest and one of the largest events of its kind outside of Asia, and one of the largest Asian cultural events in North America.
The festival incorporates Grant and Kearny Streets into its street festival and parade route, respectively. The use of these streets traces its lineage back to early parades beginning the custom in San Francisco. In 1849, with the discovery of gold and the ensuing California Gold Rush, over 50,000 people had come to San Francisco to seek their fortune or just a better way of life. Among those were many Chinese, who had come to work in the gold mines and on the railroad. By the 1860s, the residents of San Francisco's Chinatown were eager to share their culture with their fellow San Francisco residents who may have been unfamiliar with (or hostile towards) it. The organizers chose to showcase their culture by using a favorite American tradition – the parade. They invited a variety of other groups from the city to participate, and they marched down what today are Grant Avenue and Kearny Street carrying colorful flags, banners, lanterns, drums, and firecrackers to drive away evil spirits.
In San Francisco, over 100 units participate in the annual Chinese New Year Parade held since 1958. The parade is attended by some 500,000 people along with another 3 million TV viewers.
Europe
United Kingdom
In London, celebrations take place in Chinatown, Leicester Square, and Trafalgar Square. Festivities include a parade, cultural feast, fireworks, concerts and performances. The celebration attracts between 300,000 and 500,000 people yearly according to the organisers.
France
In Paris, celebrations have been held since the 1980s in several districts over the course of a month, with many performances; the main one of the three parades, with 40 groups and 4,000 performers, is alone attended by more than 200,000 people in the 13th arrondissement.
Netherlands
Celebrations have been held officially in The Hague since 2002. Other celebrations are held in Amsterdam and in Rotterdam.
India and Pakistan
Many celebrate the festival in Chinatown, Kolkata, India, where a significant community of people of Chinese origin exists. In Kolkata, Chinese New Year is celebrated with lion and dragon dances.
In Pakistan, the Chinese New Year is also celebrated among the sizable Chinese expatriate community that lives in the country. During the festival, the Chinese embassy in Islamabad arranges various cultural events in which Pakistani arts and cultural organizations and members of the civil society also participate.
Greetings
The Chinese New Year is often accompanied by loud, enthusiastic greetings, known as auspicious words or phrases (Cantonese: Kat Lei Seut Wa). New Year couplets printed in gold letters on bright red paper, referred to as chunlian or fai chun, are another way of expressing auspicious new year wishes. They probably predate the Ming dynasty (1368–1644), but did not become widespread until then. Today, they are ubiquitous with Chinese New Year.
Some of the most common greetings include:
Xin nian kuai le / San nin fai lok (Hakka: Sin Ngen Kai Lok; Taishanese: Slin Nen Fai Lok): a more contemporary greeting reflective of Western influences, it literally translates as "Happy new year", the greeting more common in the West. It is written in English as "xin nian kuai le". In northern parts of China, people traditionally say guònián hǎo (過年好) instead of xīnnián kuàilè (新年快樂), to differentiate it from the international new year, and 過年好 can be used from the first day to the fifth day of Chinese New Year. However, this greeting is considered very short and therefore somewhat discourteous.
Gong xi fa cai / Gong hei fat choi (Hokkien: Kiong hee huat chai, POJ: Kiong-hí hoat-châi; Cantonese: Gung1 hei2 faat3 coi4; Hakka: Gung hee fatt choi), which loosely translates to "Congratulations and be prosperous". It is spelled variously in English, such as "Gung hay fat choy", "gong hey fat choi", or "Kung Hei Fat Choy". Often mistakenly assumed to be synonymous with "Happy New Year", its usage dates back several centuries. While the first two words of this phrase had a much longer historical significance (legend has it that the congratulatory messages were traded for surviving the ravaging beast of Nian; in practical terms it may also have meant surviving the harsh winter conditions), the last two words were added later as ideas of capitalism and consumerism became more significant in Chinese societies around the world. The saying is now commonly heard in English-speaking communities for greetings during Chinese New Year in parts of the world where there is a sizable Chinese-speaking community, including overseas Chinese communities that have been resident for several generations, relatively recent immigrants from Greater China, and those who are transit migrants (particularly students).
Numerous other greetings exist, some of which may be exclaimed out loud to no one in particular in specific situations. For example, as breaking objects during the new year is considered inauspicious, one may then immediately say suìsuì píng'ān, meaning "everlasting peace year after year". Suì, meaning "age", is homophonous with suì meaning "shatter", a demonstration of the Chinese love of wordplay in auspicious phrases. Similarly, niánnián yǒu yú, a wish for surpluses and bountiful harvests every year, plays on the word yú, which can also mean "fish", making it a catchphrase for fish-based Chinese new year dishes and for paintings or graphics of fish that are hung on walls or presented as gifts.
The most common auspicious greetings and sayings consist of four characters, such as the following:
, – "May your wealth [gold and jade] come to fill a hall"
, – "May you realize your ambitions"
, – "Greet the New Year and encounter happiness"
, – "May all your wishes be fulfilled"
, – "May your happiness be without limit"
, – "May you hear [in a letter] that all is well"
, – "May a small investment bring ten-thousandfold profits"
, – "May your happiness and longevity be complete"
, – "When wealth is acquired, precious objects follow"
These greetings or phrases may also be used just before children receive their red packets, when gifts are exchanged, when visiting temples, or even when tossing the shredded ingredients of yusheng, which is particularly popular in Malaysia and Singapore. Children and their parents can also pray in the temple, in hopes of receiving good blessings for the new year to come.
Children and teenagers sometimes jokingly use the phrase "Gong xi fa cai, hong bao na lai", roughly translated as "Congratulations and be prosperous, now give me a red envelope!". In Hakka, the saying is more commonly "Gung hee fatt choi, hung bao diu loi", a mixture of the Cantonese and Mandarin variants of the saying.
Back in the 1960s, children in Hong Kong used to say "Gung Hei Fat Choy, Lai Si Tau Loi, Tau Ling M Ngoi" (Cantonese), which was recorded in the pop song Kowloon Hong Kong by the Reynettes in 1966. Later, in the 1970s, children in Hong Kong used a saying roughly translated as "Congratulations and be prosperous, now give me a red envelope, fifty cents is too little, don't want a dollar either". It basically meant that they disliked small change: coins, which were called "hard substance" in Cantonese. Instead, they wanted "soft substance", which was either a ten-dollar or a twenty-dollar note.
See also
Other celebrations of Lunar New Year in China:
Tibetan New Year (Losar)
Mongolian New Year (Tsagaan Sar)
Celebrations of Lunar New Year in other parts of Asia:
Buryat New Year (Sagaalgan)
Korean New Year (Seollal)
Japanese New Year (Shōgatsu)
Mongolian New Year (Tsagaan Sar)
Vietnamese New Year (Tết)
Similar Asian Lunisolar New Year celebrations that occur in April:
Burmese New Year (Thingyan)
Cambodian New Year (Chaul Chnam Thmey)
Lao New Year (Pii Mai)
Sri Lankan New Year (Aluth Avuruddu)
Thai New Year (Songkran)
Chinese New Year Gregorian Holiday in Malaysia
Malaysia Chinese New Year (Tahun Baru Cina)
Indonesian Chinese New Year (Imlek)
Lunar New Year fireworks display in Hong Kong
The Birthday of Che Kung
Notes
References
Further reading
External links
New Year celebrations
Public holidays in Cambodia
Public holidays in China
Public holidays in Indonesia
Public holidays in Malaysia
Public holidays in the Philippines
Public holidays in Singapore
Public holidays in Taiwan
Public holidays in Thailand
East Asia
Southeast Asia
Observances set by the Chinese calendar
Winter events in China
Buddhist holidays
Taoist holidays
Chinese-Australian culture
Articles containing video clips |
67133553 | https://en.wikipedia.org/wiki/Umar%20Javeed%2C%20Sukarma%20Thapar%2C%20Aaqib%20Javeed%20vs.%20Google%20LLC%20%26%20Ors. | Umar Javeed, Sukarma Thapar, Aaqib Javeed vs. Google LLC & Ors. | Umar Javeed, Sukarma Thapar, Aaqib Javeed vs. Google LLC and Ors. is a 2019 court case in which Google and Google India Private Limited were accused of abuse of dominance in the Android operating system in India. The Competition Commission of India found that Google abused its dominant position by requiring device manufacturers wishing to pre-install apps to adhere to a compatibility standard on Android.
Plaintiffs
In 2018, Aaqib Javeed was briefly an intern with the Competition Commission of India, New Delhi while studying law at the University of Kashmir. He was due to graduate in 2019. Sukarma Thapar was working as a research associate at Competition Commission of India. She graduated from the Indian Law School in 2015. Umar Javeed was working as a research associate at the Competition Commission of India, New Delhi. In 2014, he graduated from the University of Kashmir.
Allegations
Google has signed mobile application distribution agreements (MADAs) with original equipment manufacturers (OEMs) of Android devices. Under these agreements, Android OEMs were required to pre-install all the Google applications (apps) and Google services on Android devices before shipping/distributing them to the Indian market.
Google apps are smartphone applications and Google services are proprietary application programming interface (API) services, collectively referred to as Google Mobile Services (GMS), which includes a number of bundled apps and services: Google Play, Google Search, YouTube, Google Maps, Gmail, Google Drive, Google Chrome, Google Play Music, Google Play Movies, Hangouts, Google Duo and Google Photos. APIs are non-graphical services that run in the background; they function as messengers that allow software to talk to other software inside and outside the system, and as building blocks that allow any number of systems, data locations and digital devices to communicate with one another across a digital network. Some Google APIs (such as the Google Maps APIs, Play Games API, Location APIs, cloud drive APIs, etc.) and other APIs which are exclusively available through Google Play Services are part of GMS.
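The messenger role of an API can be illustrated with a minimal sketch. The names below (LocationService, MapsApp) are hypothetical stand-ins, not Google's actual GMS interfaces; the point is only that the app never talks to hardware or other programs directly, but exchanges requests and responses through the service's API.

```python
# A minimal sketch of the "messenger" role an API plays between two pieces
# of software. LocationService and MapsApp are hypothetical stand-ins,
# not Google's actual GMS interfaces.

class LocationService:
    """Stands in for a platform service that exposes an API but no GUI."""

    def last_known_position(self):
        # A real service would query GPS or network hardware; this sketch
        # just returns a fixed coordinate.
        return (28.6139, 77.2090)  # New Delhi


class MapsApp:
    """Stands in for an app that never reads hardware directly and instead
    sends requests through the service's API."""

    def __init__(self, location_api):
        self.location_api = location_api

    def show_current_position(self):
        lat, lon = self.location_api.last_known_position()
        return f"Centering map on ({lat}, {lon})"


app = MapsApp(LocationService())
print(app.show_current_position())
```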
GMS is proprietary to Google. Unlike Android, it is not licensed as open source or released into the public domain; it is a non-free suite. The source code is the base of a program and its central point of control, and only the owner of proprietary software has the right to access it, so GMS cannot be used by other parties without negotiating with Google. Android itself, by contrast, is a licensable smart mobile operating system: open-source software that is free to use, develop, modify and redistribute.
For OEMs to get a GMS licence from Google in order to install GMS on Android products such as smartphones, Google requires them to pre-load all the Google apps and services contained in the bundle as pre-installed software; Google does not allow OEMs to pick and choose from the applications and services in the bundle. While signing a MADA was optional, original equipment manufacturers had to pre-install all of Google's Android applications and services in order to obtain any part of GMS. It was argued that this hinders the development of rival mobile applications and services in the market.
Before signing a MADA, OEMs had to enter into an anti-fragmentation agreement (AFA) with Google, which prevented them from developing and marketing an Android fork on other devices. This restricted access to potentially superior versions of the Android operating system.
The MADA and AFA agreements made it possible for Google to promote and monetize YouTube as an online video platform, giving Google AdSense a winner-takes-all position by leveraging YouTube's dominance in the advertising market. The unavailability of YouTube Premium's ad-free service in the Indian market has a direct impact on Google's AdSense revenue: YouTube content is distributed on more than 300 million Android smartphones in India, and 85 percent of it is consumed on Android smartphones covered by MADA and AFA agreements.
Google's counterarguments
It was argued that the alleged restrictions and conditions in this case did not cause foreclosure in the Indian markets and did not fall within the anti-competitive agreements provisions of the Competition Act, 2002, and hence were not anti-competitive practices. Some of the arguments were as follows:
Google has not, as alleged, required OEMs to sign mobile application distribution agreements; such agreements are optional for OEMs who wish to pre-install GMS on their Android devices. In exchange for a free-of-cost licence to GMS, the OEM agrees to place the Google Search widget, the Play icon, and a folder with a selection of other Google apps such as Chrome on the default home screen. (See case document, pp. 10–11)
Pre-installation of GMS does not restrict Android OEMs from installing competitors' apps and services on their products (such as smartphones).
Users of Android smartphones can install other apps and can also disable pre-loaded apps installed through the GMS suite.
Google has laid down the requirements of anti-fragmentation agreements because modifying or rebuilding the Android source code beyond its limits would create an imbalance in the whole ecosystem.
Outcome
The Competition Commission of India ordered an investigation into abuse of dominance allegations against Google:
"In this regard, the Competition Commission of India is of the prima-facie opinion that the mandatory pre-installation of the entire Google Mobile Services suite under MADAs amounts to the imposition of an unfair condition on the device manufacturers and thereby in contravention of Section 4(2)(a)(i) of the Act. It also amounts to prima facie leveraging of Google's dominance in Play Store to protect the relevant market such as Google Search (Online Search Engine) in contravention of Section 4(2)(e) of the Competition Act, 2002. Mobile search has emerged as a key gateway for users to access information and Android is a key delivery channel for mobile search. Search Engines have Data-driven effects. To enhance search engine optimization and the Search algorithm, it requires enough data, which in turn requires a sufficient number of queries from Users who rely more and more on mobile search."
The impugned conduct of Google may help perpetuate its dominance in the online search market through Google Search while resulting in denial of market access for competing search apps, in contravention of Section 4(2)(c) of the Competition Act, 2002. The Commission warranted an in-depth investigation against the opposite parties in this case. Google's plea that the MADA pre-installation conditions are not exclusive or exclusionary could also be properly examined during the investigation. It was held that Google has a dominant position in the mobile operating system market: Google's Android holds 80 per cent of India's mobile operating system market.
Timeline
In December 2018 and January 2019, Google submitted its submission to the Indian Competition Commission in both confidential and non-confidential versions.
In June 2019, the Competition Commission of India sent letters to smartphone OEMs inquiring about the terms of their agreements with Google LLC and Google India Private Limited, to verify whether Google had imposed restrictions on their use of the company's mobile apps over the eight years since 2011.
In July 2019, a former senior member of the Competition Commission of India told Reuters, "The developments will be watched eagerly as the case involves many intricacies and its implications will be world over".
Basic concept in this case
Data is treated as a non-monetary consideration under competition law. Online platforms may claim that their services are free for users; however, they are not, because customers provide their data in exchange for the services. User data allows online platforms to improve their products and services while also providing advertisers with highly targeted advertising options. App stores, such as Google Play (Android), the App Store (iOS), and the Windows Store, are online digital distribution platforms that connect users of smart mobile devices with mobile application developers.
Online platforms, according to the European Commission, are software-based facilities offering two- or multi-sided markets where providers and users of content, commodities, and services can meet. Online platforms, according to the German Monopolies Commission, are intermediaries that bring together multiple groups of users in order for them to interact economically or socially. Hence, Android is a multi-sided platform, since it brings together more than two distinct groups, such as app developers, phone manufacturers, mobile carriers, customers, and app suppliers, who all benefit from their mutual interaction.
Mobile operating systems
The three main types of mobile operating system (OS), with examples of each:
Type A: Manufacturer-built proprietary operating systems, where the operating system developer (the owner) is also the hardware manufacturer.
Apple iOS: The iPhone operating system from Apple, derived from Mac OS X. Apple did not support third-party application development until the release of iOS 2.0 in July 2008. It is one of the most popular smartphone platforms and enjoys the highest operating profits in the U.S., due to its integrated, end-to-end approach. Since July 2008, iOS has had an app store and third-party developers, and the value of the developers' contribution has been the backbone of the platform in this transition period of technological change: "Apple paid out nearly $50 billion in revenue to developers".
BlackBerry OS: Originally made for business purposes, RIM's BlackBerry was the first smartphone OS to bring the smartphone into people's daily lives, and it was a sensation in the early 2000s. It is a closed-source proprietary system. RIM's App World was launched in April 2009, but the platform has been losing market share rapidly in the face of strong competition.
Type B: Third-party proprietary operating systems, where the operating system developer (the owner) licenses its operating system, usually for a fee, to third-party hardware manufacturers (original equipment manufacturers, OEMs). In this method, similar to Microsoft's personal computer operating system (Windows) model, the devices have a consistent appearance and behaviour. There is little scope for customization of the operating system by the OEMs.
Microsoft Windows Phone (Mobile)
Type C: Free and open source operating systems where the operating system developer (owner) will release the operating system via the open source licence method. Open source operating systems are developed by a company, a group of companies or a community of developers. Customization of the operating system is usually allowed to a certain degree (within the parameters of the licence agreement).
Android: A software platform and operating system for mobile devices based on the Linux kernel, developed initially by Google and later by the Open Handset Alliance (OHA). Its officially supported native language is Java; applications can be written in other languages as well, but are ultimately compiled to ARM native code.
Symbian
See also
Microsoft Corp. v. Commission
References
Competition law
Google litigation
Competition law by country |
10132799 | https://en.wikipedia.org/wiki/Pete%20Shaw%20%28author%29 | Pete Shaw (author) | Pete Shaw is a British author, broadcaster, programmer and theatrical producer.
Early life
Shaw attended school in Stanwell, Middlesex. It was while at Stanwell Secondary School that he was introduced by a school friend to Tim Hartnell, the co-owner of Interface Publications, the other owner being his school friend's mother, Sue North.
His first published computer program was The Elephant's Graveyard, written for the Sinclair ZX81 and published in the magazine ZX Computing in August 1982.
Career
Shaw's first book, Games for your ZX Spectrum, followed at the end of 1982; it was published by Virgin Books in conjunction with Interface Publications and was an early title in a series of Games for your... books published by Virgin. Shaw himself wrote three more books for the series, including Games for your Oric, More Games for your Oric and Games for your Sinclair QL. Shaw also co-wrote two books designed to teach the adventure game genre, Creating Adventures on your ZX Spectrum (Interface Publications, 1983) and Creating Adventures on your BBC Micro (Interface Publications, 1985).
Shaw's books mainly comprised type-in program listings for home computers, which were designed to teach as you type, since the programs contained many comments on how the listings worked.
Before he even left school, Shaw was a regular contributor to ZX Computing and Home Computer Weekly, and on leaving Stanwell Secondary School he turned down a sought-after place at Isleworth Art College to work full-time for the newly launched computer magazine Your Spectrum.
It was at Your Spectrum (which later was relaunched as Your Sinclair) that Shaw picked up the nickname Troubleshootin' Pete due to his regular column in the magazine in which he would answer reader questions that had been posed over the YS Helpline. Shaw's official title at Your Sinclair was Editorial Assistant when he first joined, but he was promoted to Deputy Editor within a year of joining the staff.
Your Sinclair was published by Dennis Publishing, and Shaw also contributed to other Dennis magazines including Your 64, Computer Shopper and MacUser.
While still Deputy Editor of Your Sinclair, Shaw also contributed to a weekly Capital Radio children's show called XYZ on Air, broadcast every Sunday and hosted by DJ Kelly Temple. The show was an eclectic mix of music, interviews, features and the 'Computerworld' slot hosted by Pete Shaw. It was Shaw's association with Capital Radio that brought about The Capital Radio Book of Computers and Simple Programming (NeatQuest, 1985), co-written with Kelly Temple and Your Spectrum's original Editor, Roger Munford.
Before 1985 was out, Shaw had written at least eleven computer technical books, published around the world and in several languages.
Bibliography
Games for your ZX Spectrum (Virgin Books, 1982)
Re-released as Spiele für Ihren ZX Spectrum (Huber, Germany), Spelletjes voor Je ZX Spectrum deel 1 (Netherlands), Games for your Timex-Sinclair 2000 (Dell, USA), Giochiamo con ZX Spectrum (Giochi Elettronici, Italy)
Games for your Oric (Virgin Books, 1983)
Getting Started on your Oric (Futura Publishing, 1983)
Creating Adventures on the ZX Spectrum (Interface Publications, 1983)
Re-released as Fantastische Avonturen voor je ZX Spectrum deel 1 (Netherlands), Novas Aventuras no Seu ZX Spectrum (Editorial Presenca, Portugal)
More Games for your Oric (Virgin Books, 1984)
Games for your Sinclair QL (Virgin Books, 1984)
More Games for your Sinclair QL (Virgin Books, 1984)
Games QL Computers Play (Corgi Books, 1985)
Creating Adventures on your BBC Micro (Interface Publications, 1985)
Fantastic Adventures for your ZX Spectrum (Interface Publications, 1985)
The Capital Radio Book of Computers and Computer Programming (NeatQuest, 1985)
The V-Book: The Complete Guide to Flying with Virgin Atlantic (V-Flyer, 2004)
Since 2002
In 2002 Shaw produced Patrick Wilde's play You Couldn't Make It Up at the Gilded Balloon in Edinburgh. The show was Wilde's follow-up to What's Wrong with Angry?, which had debuted a decade earlier at the Lost Theatre in London. You Couldn't Make It Up was a black comedy dealing with issues of sexuality, the agendas of TV and film production, and male rape. The year after its premiere in Edinburgh, in 2003, Shaw brought the show to the New End Theatre, Hampstead, London.
In 2006 Shaw collaborated with Sir Tim Rice to produce his musical Blondel at the Pleasance Theatre in Islington. Blondel was the first musical Rice wrote outside of his successful working partnership with Andrew Lloyd Webber. Telling the tale of the medieval minstrel Blondel, the musical is set in two acts. Shaw also created the poster artwork for the revival of Blondel.
In May 2008 Shaw took a new play by Matt Ian Kelly called Lightning Strikes to Dublin as part of the Dublin Gay Theatre Festival. In August the same year, Shaw took What's Wrong with Angry? and Boys of the Empire to the Edinburgh Festival with Glenn Chandler, the latter of which he transferred to London at the King's Head Theatre for a limited run over Christmas 2008.
In 2011 he worked with photographer Paul Reiffer to create the website for The Grapes, famously owned by Evgeny Lebedev, Sean Mathias and Ian McKellen.
In 2011 Shaw also collaborated with producer Danielle Tarento on Drowning on Dry Land, which led to a string of further shows including Parade (2011), Noel & Gertie (2011), Burlesque (2011), The Pitchfork Disney (2012), Mack & Mabel (2012), Victor/Victoria (2012), Taboo (2012–2013), Titanic (2013), Dogfight (2014), Man to Man (2014), The Grand Tour (2015), Grand Hotel (2015) and Dogfight In Concert (2015).
In 2017 he was producing in his own right again, taking Ian Lindsay's Chinese Whispers to Greenwich Theatre for a two-week run in July. The production starred Mark Farrelly, Steve Nallon, Peter Hardy, Matt Ian Kelly and Owl Young, with a cameo by Dermot Agnew.
Shaw is the owner of the Internet publication Broadway Baby, a reviews-based website particularly focused on fringe theatre. He has increased Broadway Baby's traffic considerably since its creation in 2004, and it is now the largest reviewer at the Edinburgh Festival Fringe by some margin. He also started the Virgin Atlantic customer site V-Flyer.com in 2003, which regularly receives over 150,000 readers per month.
During the Covid-19 pandemic of 2020, Shaw wrote and released a mobile app called Boozr. The app is a global database of pubs, bars and clubs with social networking features that allow users to connect with their friends.
He continues to write computer programs on a freelance basis.
Theatrical productions
You Couldn't Make It Up (The Gilded Balloon, Edinburgh 2002) Producer
You Couldn't Make It Up (New End Theatre, London 2003) Producer
Diana & Ross (The Gilded Balloon, Edinburgh 2004) Graphic Designer
Blondel (Pleasance Islington, London 2006) Co-Producer with Tim Rice
Lightning Strikes (Project Arts Centre, Dublin 2008) Producer
Boys of the Empire (C venues, Edinburgh 2008) Co-Producer with Glenn Chandler
What's Wrong with Angry? (C venues, Edinburgh 2008) Co-Producer with Glenn Chandler
Boys of the Empire (King's Head Theatre, Islington, London 2008) Co-Producer with Glenn Chandler
What's Wrong with Angry? (King's Head Theatre, Islington, London 2009) Graphic Design & Sound Design
Scouts in Bondage (King's Head Theatre, Islington, London 2009) Producer
Searching for Eden (C venues, Edinburgh 2009) Graphic Designer
Rat Pack Live (C venues, Edinburgh 2009) Graphic Designer
Blues Brothers Live (C venues, Edinburgh 2009) Graphic Designer
Broadway Baby Revue (C venues, Edinburgh 2009) Producer
The Best of Times (Tristan Bates Theatre, London 2010) Graphic Designer
Rat Pack Live (C venues, Edinburgh 2010) Graphic Designer
Blues Brothers Live (C venues, Edinburgh 2010) Graphic Designer
Elvis Live (C venues, Edinburgh 2010) Graphic Designer
Fame, The Musical (C venues, Edinburgh 2010) Graphic Designer
Jump (Pleasance, Edinburgh 2010) Graphic Designer
The Crying Cherry (C venues, Edinburgh 2010) Graphic Designer
Company (Southwark Playhouse, Southwark, London 2011) Marketing Consultant
Drowning on Dry Land (Jermyn Street Theatre, West End, London 2011) Marketing Consultant
What Goes Up (C venues, Edinburgh 2011) Graphic Designer
Parade (Southwark Playhouse, Southwark, London 2011) Marketing Consultant
Noel & Gertie (Cockpit Theatre, Marylebone, London 2011) Marketing Consultant
Burlesque (Jermyn Street Theatre, West End, London 2011) Marketing Consultant
The Pitchfork Disney (Arcola Theatre, Dalston, London 2012) Production Team
Mack & Mabel (Southwark Playhouse, Southwark, London 2012) Production Team
Taboo (Brixton Club House, Brixton, London 2012) Website Designer
Victor/Victoria (Southwark Playhouse, Southwark, London 2012) Production Team
Titanic (Southwark Playhouse, Southwark, London 2013) Production Team
Parsifal and the Cup of Miracles (Gloucester Theatre 2014) Graphic Designer
Dogfight (Southwark Playhouse, Southwark, London 2014) Production Team
The Blue Flower (Gloucester Theatre 2014) Graphic Designer
My Lifelong Love (Garrick Theatre, West End, London 2014) Graphic Designer
Man To Man (Park Theatre, Finsbury Park, London 2014) Graphic Designer
The Mikado (Charing Cross Theatre, West End, London 2014) Graphic Designer
The Grand Tour (Finsborough Theatre, London 2015) Production Team
Gods & Monsters (Southwark Playhouse, Southwark, London 2015) Production Team
Yarico (London Theatre Workshop, Fulham 2015) Production Team with John and Jodie Kidd
Grand Hotel (Southwark Playhouse, Southwark, London 2015) Production Team
Dogfight In Concert (St James Theatre, West End, London 2015) Production Team
The Tinderbox (Charing Cross Theatre, West End, London 2015) Graphic Designer
Piaf (Charing Cross Theatre, West End, London 2015) Graphic Designer
Grey Gardens (Southwark Playhouse, Southwark, London 2016) Production Team
In The Bar Of A Tokyo Hotel (Charing Cross Theatre, West End, London 2016) Graphic Designer
Titanic (Charing Cross Theatre, West End, London 2016) Production Team
Brazil (New Town Theatre, Edinburgh Fringe 2016) Graphic Designer
Allegro (Southwark Playhouse, Southwark, London 2016) Production Team
Radio Times (Charing Cross Theatre, West End, London 2016) Production Team
Ragtime (Charing Cross Theatre, West End, London 2016) Production Team
O Come, All Ye Divas (Charing Cross Theatre, West End, London 2016) Production Team
Death Takes A Holiday (Charing Cross Theatre, West End, London 2016) Production Team
Chinese Whispers (Greenwich Theatre, Greenwich, London 2016) Producer
Mother Courage And Her Children (Southwark Playhouse, Southwark, London 2017) Production Team
Le Grand Mort (Trafalgar Studios, West End, London 2017) Production Team
References
External links
The YS Rock 'N' Roll Years – Unofficial site, dedicated to archiving games reviews and feature articles from the magazine.
YRUA? The Your Spectrum Unofficial Archive – Archive of articles from Your Sinclair's forerunner, Your Spectrum.
Your Sinclair: A Celebration – Fan-written website detailing both YS and YS-related material.
Broadway Baby – Theatrical Reviews Site
V-Flyer – Virgin Atlantic travel site operated by Pete Shaw
Boozr – Boozr App
Living people
Video game programmers
English theatre managers and producers
English male journalists
1966 births |
48004170 | https://en.wikipedia.org/wiki/Visual%20Risk | Visual Risk | Visual Risk was a treasury management software provider and consulting company headquartered in Sydney, Australia. Visual Risk is now part of GTreasury, and its software and services now operate under the GTreasury name. The company provided treasury and risk management software and specialized in market risk management.
History
Visual Risk was one of Australia's original fintech companies. The treasury software provider was founded by Richard Hughes and Paul Nailand, who served as its managing directors, and was headquartered in Sydney.
In December 2015 Visual Risk won Best Risk Management Solution at Treasury Management International's 2015 awards, which are considered a benchmark in the industry.
In April 2018 GTreasury acquired Visual Risk and integrated Visual Risk's Risk Management software into its own offering to bolster its existing treasury and cash management tool.
Product
Visual Risk is a modular treasury risk management system consisting of five modules: risk analytics, asset-liability management, treasury management, hedge accounting, and cash and liquidity. The modules can be used separately or as a fully integrated system. In addition, Visual Risk provides a reporting dashboard. The software is deployed either locally or via the cloud.
Partnerships
In 2014, Visual Risk partnered with KPMG Australia, under which Visual Risk would analyze KPMG's hedging transactions and client exposures.
References
Risk management companies
Cloud computing providers |
4321829 | https://en.wikipedia.org/wiki/David%20Harel | David Harel | David Harel (born 12 April 1950) is a computer scientist, currently serving as President of the Israel Academy of Sciences and Humanities. He has been on the faculty of the Weizmann Institute of Science in Israel since 1980, and holds the William Sussman Professorial Chair of Mathematics. Born in London, England, he was Dean of the Faculty of Mathematics and Computer Science at the institute for seven years.
Biography
Harel is best known for his work on dynamic logic, computability, database theory, software engineering and modelling biological systems. In the 1980s he invented the graphical language of Statecharts for specifying and programming reactive systems, which has been adopted as part of the UML standard. Since the late 1990s he has concentrated on a scenario-based approach to programming such systems, launched by his co-invention (with W. Damm) of Live Sequence Charts. He has published expository accounts of computer science, such as his award-winning 1987 book "Algorithmics: The Spirit of Computing" and his 2000 book "Computers Ltd.: What They Really Can't Do", and has presented series on computer science for Israeli radio and television. He has also worked on other diverse topics, such as graph layout, computer science education, biological modeling and the analysis and communication of odors.
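The core reactive-system idea behind statecharts can be sketched as a table of states and event-driven transitions. The hypothetical media-player example below is a deliberately flat simplification: it omits the hierarchy, orthogonal (concurrent) states and broadcast events that distinguish Harel's statecharts from plain finite state machines.

```python
# A deliberately flat sketch of the reactive-system idea behind statecharts:
# named states plus event-driven transitions, here for a hypothetical media
# player. Harel's statecharts add hierarchy, orthogonal (concurrent) states
# and broadcast events, none of which are modeled in this simplification.

TRANSITIONS = {
    # (current state, event) -> next state
    ("idle", "play"): "playing",
    ("playing", "pause"): "paused",
    ("paused", "play"): "playing",
    ("playing", "stop"): "idle",
    ("paused", "stop"): "idle",
}


def react(state, event):
    """Return the next state; events with no defined transition are ignored."""
    return TRANSITIONS.get((state, event), state)


state = "idle"
for event in ["play", "pause", "play", "stop"]:
    state = react(state, event)
    print(f"{event} -> {state}")
```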
Harel completed his PhD at MIT between 1976 and 1978. In 1987, he co-founded the software company I-Logix, which in 2006 became part of IBM. He has advocated building a full computer model of the Caenorhabditis elegans nematode, which was the first multicellular organism to have its genome completely sequenced. The eventual completeness of such a model depends on his updated version of the Turing test. He is a fellow of the ACM, the IEEE, the AAAS, and the EATCS, and a member of several international academies. Harel is active in a number of peace and human rights organizations in Israel.
Awards and honors
1986 Stevens Award for Software Development Methods
1992 ACM Karlstrom Outstanding Educator Award
1994 ACM Fellow
1995 IEEE Fellow
2004 Israel Prize, for computer science
2005 Doctor Honoris Causa, University of Rennes, France
2006 ACM SIGSOFT Outstanding Research Award
2006 Member of the Academia Europaea
2006 Doctor (Laura) Honoris Causa, University of Milano-Bicocca, 18 May 2006
2006 Fellow Honoris Causa, Open University of Israel
2007 ACM Software System Award
2010 Emet Prize
2010 Member of the Israel Academy of Sciences and Humanities
2012 Doctor Honoris Causa, Eindhoven University of Technology, The Netherlands
2014 International Member of the US National Academy of Engineering
2014 International Honorary Member of the American Academy of Arts and Sciences
2019 International Member of the US National Academy of Sciences.
2020 Fellow of the Royal Society (FRS)
2021 Foreign Member of the Chinese Academy of Sciences
See also
List of Israel Prize recipients
Members of the Israel Academy of Sciences and Humanities
References
External links
David Harel's home page at the Weizmann Institute of Science.
David Harel's page at the Israel Academy of Sciences and Humanities.
1950 births
Living people
Mathematicians from London
Israeli computer scientists
Israel Prize in computer sciences recipients
Israeli Jews
Fellows of the American Academy of Arts and Sciences
Fellows of the American Association for the Advancement of Science
Fellows of the Association for Computing Machinery
Fellow Members of the IEEE
Fellows of the Royal Society
Formal methods people
Graph drawing people
Members of Academia Europaea
Systems biologists
Software engineering researchers
Unified Modeling Language
Weizmann Institute of Science faculty
Foreign associates of the National Academy of Sciences
Foreign associates of the National Academy of Engineering |
5239949 | https://en.wikipedia.org/wiki/Fantasoft | Fantasoft | Fantasoft was a computer game company which programmed and promoted a number of shareware games with a primary focus on the Apple Macintosh platform. Fantasoft has been dormant since about 2005. It was founded by Sean Sayrs, Peter Hagen, and Tim Phillips. Fantasoft was created to develop, market, and distribute the shareware game Realmz, which was MacUser Shareware Game of the Year in 1995–96. Following the success of Realmz, Fantasoft created or marketed other Macintosh and Microsoft Windows platform games, most notably Spiderweb Software's early Exile series.
Developed:
Realmz
Final Star (canceled)
New Centurions
Published:
Enigma Software:
Squish
Peregrine (canceled)
Spiderweb Software:
Exile: Escape from the Pit
Exile II: Crystal Souls
Exile III: Ruined World
Flying Mikros Interactive:
Monkey Shines
Monkey Shines 2: Gorilla Warfare
Alien Attack
Jelly Software:
DOWN
Freemen Software:
King of Parking
Rain'Net
Bugs Bannis
The Alchemist Guild:
Lance (canceled)
Coach Potato Software:
CommishWare 99
Notes
Realmz review, Brian Rumsey (Low End Mac Gaming), July 11, 2000
Rlmz.org Download and Play Realmz by Fantasoft
External links
Fantasoft Games website
Video game companies of the United States |
5532455 | https://en.wikipedia.org/wiki/Interactive%20design | Interactive design | Interactive design is a user-oriented field of study that focuses on meaningful communication of media through cyclical and collaborative processes between people and technology. Successful interactive designs have simple, clearly defined goals, a strong purpose and an intuitive screen interface.
Interactive design compared to interaction design
In some cases interactive design is equated with interaction design; however, in the specialized study of interactive design there are defined differences.
To assist in this distinction, interaction design can be thought of as:
Making devices usable, useful, and fun, with a focus on efficient and intuitive hardware
A fusion of product design, computer science, and communication design
A process of solving specific problems under a specific set of contextual circumstances
The creation of form for the behavior of products, services, environments, and systems
Making dialogue between technology and user invisible, i.e. reducing the limitations of communication through and with technology.
About connecting people through various products and services,
Whereas interactive design can be thought of as:
Giving purpose to interaction design through meaningful experiences
Consisting of six main components: user control, responsiveness, real-time interactions, connectedness, personalization, and playfulness
Focuses on the use and experience of the software
Retrieving and processing information through on-demand responsiveness
Acting upon information to transform it
The constant changing of information and media, regardless of changes in the device
Providing interactivity through a focus on the capabilities and constraints of human cognitive processing
While both definitions indicate a strong focus on the user, the difference arises from the purposes of interactive design and interaction design. In essence interactive design involves the creation of meaningful uses of hardware and systems, while interaction design is the design of those hardware and systems. Interaction design without interactive design provides only hardware or an interface. Interactive design without interaction design cannot exist, for there is no platform for it to be used by the user.
History
Fluxus
Interactive design is heavily influenced by the Fluxus movement, which focuses on a "do-it-yourself" aesthetic, anti-commercialism and an anti-art sensibility. Fluxus differs from Dada in its richer set of aspirations. Fluxus is not a modern-art movement or an art style; rather, it is a loose international organization consisting of many artists from different countries. There are 12 core ideas that form Fluxus:
Globalism
Unity of Art and Life
Intermedia
Experimentalism
Chance
Playfulness
Simplicity
Implicativeness
Exemplativism
Specificity
Presence in time
Musicality
Computers
The birth of the personal computer gave users the ability to become more interactive with what they were able to input into the machine. This was mostly due to the invention of the mouse. With an early prototype created in 1963 by Douglas Engelbart, the mouse was conceptualized as a tool to make the computer more interactive.
The Internet and Interactive Design
With the increasing use of the Internet, the advent of interactive media and computing, and eventually the emergence of digital interactive consumer products, the two cultures of design and engineering gravitated towards a common interest in flexible use and user experience. The most important characteristic of the Internet is its openness to communication between people: everyone can readily communicate and interact with what they want on the Internet. In recent decades the notion of interactive design gained popularity along with the Internet environment. Stuart Moulthrop demonstrated interactive media using hypertext and helped establish the genre of hypertext fiction on the Internet; his philosophies contributed to hypertext improvements and to the media revolution that accompanied the development of the Internet. A short history of hypertext follows. In 1945, the concept of hypertext originated with Vannevar Bush in his article As We May Think. A computer game called Adventure responded to users' needs with an early hypertextual narrative in the mid-1970s. Douglas Engelbart and Theodor Holm Nelson, who conceived Xanadu, pioneered hypertext systems, and a follow-on system called FRESS was developed in the 1970s; these efforts had immense political ramifications. By 1987, Computer Lib and Dream Machines had been published by Microsoft Press, and Nelson had joined Autodesk, which announced plans to support Xanadu as a commercial product. Xanadu is a project that has declared itself an improvement over the World Wide Web, with a mission statement asserting that today's popular software simulates paper and that the World Wide Web trivializes the original hypertext model with one-way, ever-breaking links and no management of versions or contents. In the late 1980s, Apple Computer began giving away HyperCard, which was relatively cheap and simple to operate. In the early 1990s, the hypertext concept finally received some attention from humanist academics, as seen in Jay David Bolter's Writing Space (1991) and George Landow's Hypertext.
Advertising
Upon the transition from analogue to digital technology, one sees a further transition from digital technology to interactive media in advertising agencies. This transition caused many of the agencies to reexamine their business and try to stay ahead of the curve. Although it is a challenging transition, the creative potential of interactive design lies in combining almost all forms of media and information delivery: text, images, film, video and sound, and that in turn negates many boundaries for advertising agencies, making it a creative haven.
Hence, with this constant motion forward, agencies such as R/GA have established a routine to keep up. Founded in 1977 by Richard and Robert Greenberg, the company has reconstructed its business model every nine years. Starting from a computer-assisted animation camera, it is now an "Agency for the Digital World". Robert Greenberg explains: "the process of changing models is painful because you have to be ready to move on from the things that you're good at". This is one example of how to adapt to such a fast-paced industry, and one major conference that stays on top of things is the How Interactive Design Conference, which helps designers make the leap towards the digital age.
Interactive new media art
Nowadays, following the development of science and technology, various new media appear in different areas, such as art, industry and science. Most technologies described as "new media" are digital, often having characteristics of being manipulable, networkable, dense, compressible, and interactive (like the internet, video games and mobiles). In industry, companies no longer focus on products themselves; they focus more on human-centered design. Therefore, interactivity has become an important element of new media. Interactivity is not merely a computer and a video signal presenting to each other; it refers more to communication and response between viewers and works.
According to Selnow’s (1988) theory, interactivity has three levels:
Communicative Recognition: This communication is specific to the partner. Feedback is based on recognition of the partner. When a learner inputs information into a computer and the computer responds specifically to that input, there is mutual recognition. The menu format allows mutual recognition.
Feedback: The responses are based on previous feedback. As the communication continues, the feedback progresses to reflect understanding. When a learner refines a search query and the computer responds with a refined list, message exchange is progressing.
Information Flow: There is an opportunity for a two-way flow of information. It is necessary that both the learner and the computer have means of exchanging information. The search engine tool allows for learner input via the keyboard, and the computer responds with written information.
New media has been described as the “mixture between existing cultural conventions and the conventions of software. For instance newspapers and television, they have been produced from traditional outlets to forms of interactive multimedia.” New media can allow audiences access to content anytime, anywhere, on any digital device. It also promotes interactive feedback, participation, and community creation around the media content.
New media is a vague term covering a whole slew of things. The Internet and social media are both forms of new media; any type of technology that enables digital interactivity is a form of new media, with video games and Facebook as examples. New media art is simply art that utilizes these new media technologies, such as digital art, computer graphics, computer animation, virtual art, Internet art, and interactive art. New media art is very focused on the interactivity between the artist and the spectator.
Many new media art works, such as Jonah Brucker-Cohen and Katherine Moriwaki's UMBRELLA.net and Golan Levin et al.'s Dialtones: A Telesymphony, involve audience participation. Other works of new media art require audience members to interact with the work but not to participate in its production. In interactive new media art, the work responds to audience input but is not altered by it. Audience members may click on a screen to navigate through a web of linked pages, or activate motion sensors that trigger computer programs, but their actions leave no trace on the work itself. Each member of the audience experiences the piece differently based on the choices he or she makes while interacting with the work. In Olia Lialina's My Boyfriend Came Back From The War, for example, visitors click through a series of frames on a Web page to reveal images and fragments of text. Although the elements of the story never change, the way the story unfolds is determined by each visitor's own actions.
References
Further reading
Iuppa, Nicholas. (2001) Interactive Design for New Media and the Web Boston, Focal Print.
Software design
zh:互動設計 |
9766625 | https://en.wikipedia.org/wiki/Bradley%20Willman | Bradley Willman | Bradley Willman (born 1980) is an anti-pedophile activist from Canada who engaged in private investigations using the Internet to expose pedophiles. At one time, he had unfettered access to between 2,000 and 3,000 computers that had been used to visit websites of interest to pedophiles as the result of his use of a Trojan horse. Willman's actions helped put California Superior Court judge Ronald Kline in prison for more than two years in 2007 for possession of child pornography. However, the legality of Willman's use of the Trojan horse was a basis for appeal by the judge.
Biography
Early life
Willman was born in 1980 in Langley, British Columbia, Canada.
Private investigator
Willman identifies himself as a "private computer cop" and previously used the aliases "Garbie" and "Omnipotent." In the late 1990s, Willman began tracking and investigating people who downloaded child porn from the Internet. At some time before 2000, Willman devised a Trojan horse, a type of computer program, that he used to conduct his investigations.
The Trojan horse was posted to websites of interest to pedophiles in a way that made it appear to be a picture file attached to a message, when it was actually a program file. When a visitor to the pedophile website downloaded the file, the visitor unwittingly downloaded the Trojan horse, and would typically be unaware that it was running on his or her computer. Willman had no control over who chose to download the Trojan horse. Once it had been downloaded, Willman could access, search, and retrieve files from the infected computer. Willman stated, "My whole intent, after the program started working as I expected it to, was to help kids." Willman estimated that his Trojan horse gave him access to between 2,000 and 3,000 computers that had been used to visit websites of interest to pedophiles, including those of police officers, military personnel, social workers, priests, and judges.
Pre-Kline investigations
In March 2000, Willman helped the Royal Canadian Mounted Police with a child molestation case. Some time after August 2000, Willman anonymously provided to a private watchdog group documents related to a Kentucky state investigation of child molestation and production of child pornography. Prior to January 2001, the United States Customs Service contacted Willman to see whether he had discovered any useful information related to a suspected Russian child pornography ring.
United States v. Kline
In early May 2000, California Superior Court Judge Ronald C. Kline (1941–) downloaded the Trojan horse to his computer. The Trojan horse allowed Willman, who was in British Columbia, to monitor the contents of Kline's computer in California. Between May 2000 and mid-April 2001, Willman copied portions of Kline's diary from Kline's computer.
In April 2001, Willman anonymously forwarded excerpts of a diary found on Kline's hard drives, with a final entry dated March 18, 2001, to the website of the pedophile watchdog PedoWatch. PedoWatch passed the diary on to the San Bernardino County Sheriff's Office, which forwarded it to the California Attorney General's office, which in turn sent it on to the City of Irvine Police Department.
The search
In November 2001, Irvine police searched Kline's Irvine home and found more than 1,500 pornographic computer images of young children on the judge's home computer. Child porn also was found on Kline's courthouse computer. This event drew national attention and ultimately led Kline to drop his bid for reelection as judge, largely as a result of efforts by talk radio hosts John and Ken of KFI-AM (640).
After Kline was charged in November 2001 by the United States federal government with possession of child pornography, a man who said the judge had molested him when he was 14 years old, a quarter-century earlier, came forward, and California state child molestation charges were filed against Kline as well. The state child molestation charges were dismissed in July 2003 after Stogner v. California held that a California extension of the statute of limitations for sex-related child abuse could not be applied to previously time-barred prosecutions.
Legal proceedings
In the United States, evidence that derives from an illegal government search generally cannot be used in court against a defendant by the government. However, evidence that derives from an illegal private search, such as one by a citizen tipster, may be used in court by the government, even though the search was illegal. When an individual does the illegal search, a court answers the private/government search question by determining whether the individual was an "instrument or agent" of the government. If "the government knew of and acquiesced in the intrusive conduct, and ... the party performing the search intended to assist law enforcement efforts," then the party performing the illegal search is considered an agent of the government.
On March 17, 2003, a federal judge ruled that Willman was working as a government informant when he invaded Judge Kline's computers because (i) Willman thought of himself as an agent for law enforcement and (ii) Willman's motivation for the invasion was to act for law enforcement purposes. Since the judge ruled that this violated Kline's United States 4th Amendment right to privacy against illegal searches by the government, the judge suppressed some of the prosecution's strongest evidence against Kline. Specifically, the judge suppressed all evidence seized from Kline's home and his home computer, including excerpts from a computer diary about his sexual desires and more than 1,500 pornographic photos of young boys.
On appeal to the United States Court of Appeals for the Ninth Circuit, the appeals court disagreed with the federal judge and found that it was not enough for Willman to act with the intent to assist the government. There needed to have been some degree of governmental knowledge and acquiescence in Willman's actions to find that Willman was acting as a government agent. Thus, the appeals court reversed the district court's order suppressing the evidence that was found as the fruit of Willman's illegal, but private, search. In March 2005, the United States Supreme Court declined to hear the case, which ended Kline's appeals.
The admission
With the 9th Circuit and United States Supreme Court actions, Willman's evidence was now back in play. In December 2005, former Orange County Superior Court Judge Ronald C. Kline pleaded guilty to possessing child pornography on his home computer, ending a four-year legal battle during which Kline was under house arrest. A trial conviction could have brought Kline a 30-year prison term, but a plea agreement limited prison time to a possible 27 to 33 months.
In June 2006, the state Commission on Judicial Performance gave Kline the most serious punishment it could give a former judge by barring him from receiving work from state courts.
In February 2007, Kline was sentenced to 27 months in prison for possessing child pornography.
See also
Internet crime
Online predator
To Catch a Predator
References
External links
Case of Judge Ronald Kline.
JuliePosey.com
Predator-Hunter.com
1980 births
People associated with computer security
American computer criminals
Living people
People from Langley, British Columbia (city)
Private detectives and investigators
Anti-pedophile activism |
186266 | https://en.wikipedia.org/wiki/ITunes | iTunes | iTunes is a software program that acts as a media player, media library, mobile device management utility, and the client app for the iTunes Store. Developed by Apple Inc., it is used to purchase, play, download, and organize digital multimedia on personal computers running the macOS and Windows operating systems, and can be used to rip songs from CDs, as well as play content with the use of dynamic, smart playlists. Options for sound optimizations exist, as well as ways to wirelessly share the iTunes library.
Originally announced by CEO Steve Jobs on January 9, 2001, iTunes' original and main focus was music, with a library offering organization, collection, and storage of users' music collections. Starting in 2005, Apple expanded on the core music features with support for digital video, podcasts, e-books, and mobile apps purchased from the iOS App Store.
Until the release of iOS 5 in 2011, all iPhones, iPod Touches and iPads required iTunes for activation and updating mobile apps. Newer iOS devices have less reliance on iTunes in order to function, though it can still be used to back up the contents of mobile devices, as well as to share files with personal computers.
Though well received in its early years, iTunes soon received increasingly significant criticism for a bloated user experience, with Apple adopting an all-encompassing feature-set in iTunes rather than sticking to its original music-based purpose. On June 3, 2019, Apple announced that iTunes in macOS Catalina would be replaced by separate apps, namely Music, Podcasts, and TV. Finder would take over the device management capabilities. This change would not affect Windows or older macOS versions.
History
SoundJam MP, released by Casady & Greene in 1998, was renamed "iTunes" when Apple purchased it in 2000. The primary developers of the software moved to Apple as part of the acquisition, and simplified SoundJam's user interface, added the ability to burn CDs, and removed its recording feature and skin support. The first version of iTunes, promotionally dubbed "World’s Best and Easiest To Use Jukebox Software," was announced on January 9, 2001. Subsequent releases of iTunes often coincided with new hardware devices, and gradually included support for new features, including "smart playlists", the iTunes Store, and new audio formats.
Platform availability
Apple released iTunes for Windows in 2003.
On April 26, 2018, iTunes was released on Microsoft Store for Windows 10, primarily to allow it to be installed on Windows 10 devices configured to only allow installation of software from Microsoft Store. Unlike Windows versions for other platforms, it is more self-contained due to technical requirements for distribution on the store (not installing background helper services such as Bonjour), and is updated automatically through the store rather than using Apple Software Update.
Music library
iTunes features a music library. Each track has attributes, called metadata, that can be edited by the user, including changing the name of the artist, album, and genre, year of release, artwork, among other additional settings. The software supports importing digital audio tracks that can then be transferred to iOS devices, as well as supporting ripping content from CDs. iTunes supports WAV, AIFF, Apple Lossless, AAC, and MP3 audio formats. It uses the Gracenote music database to provide track name listings for audio CDs. When users rip content from a CD, iTunes attempts to match songs to the Gracenote service. For self-published CDs, or those from obscure record labels, iTunes will normally only list tracks as numbered entries ("Track 1" and "Track 2") on an unnamed album by an unknown artist, requiring manual input of data.
File metadata is displayed in users' libraries in columns, including album, artist, genre, composer, and more. Users can enable or disable different columns, as well as change view settings.
Special playlists
Introduced in 2004, "Party Shuffle" selected tracks to play randomly from the library, though users could press a button to skip a song and go to the next in the list. The feature was later renamed "iTunes DJ", before being discontinued altogether, replaced by a simpler "Up Next" feature that notably lost some of "iTunes DJ"'s functionality.
Introduced in iTunes 8 in 2008, "Genius" can automatically generate a playlist of songs from the user's library that "go great together". "Genius" transmits information about the user's library to Apple anonymously, and evolves over time to enhance its recommendation system. It can also suggest purchases to fill out "holes" in the library. The feature was updated with iTunes 9 in 2009 to offer "Genius Mixes", which generated playlists based on specific music genres.
"Smart playlists" are a set of playlists that can be set to automatically filter the library based on a customized list of selection criteria, much like a database query. Multiple criteria can be entered to manage the smart playlist. Selection criteria examples include a genre like Christmas music, songs that haven't been played recently, or songs the user has listened to the most in a time period.
Library sharing
Through a "Home Sharing" feature, users can share their iTunes library wirelessly. Computer firewalls must allow network traffic, and users must specifically enable sharing in the iTunes preferences menu. iOS applications also exist that can transfer content without Internet. Additionally, users can set up a network-attached storage system, and connect to that storage system through an app.
Artwork printing
To compensate for the "boring" design of standard CDs, iTunes can print custom-made jewel case inserts. After burning a CD from a playlist, one can select that playlist and bring up a dialog box with several print options, including different "Themes" of album artworks.
Sound processing
iTunes includes sound processing features, such as equalization, "sound enhancement" and crossfade. There is also a feature called Sound Check, which automatically adjusts the playback volume of all songs in the library to the same level.
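The idea behind Sound Check-style volume adjustment can be sketched as a per-track gain toward a common reference level. The reference value and measured levels below are assumptions made for illustration; Apple's actual loudness analysis is more involved and not public.

```python
# A sketch of the idea behind Sound Check-style volume adjustment: compute
# a per-track playback gain that nudges every track toward one reference
# level. The reference value and measured levels are assumptions for
# illustration; Apple's actual loudness analysis is not public.

REFERENCE_DB = -16.0  # assumed target average level, in dBFS

measured_levels = {  # hypothetical per-track average levels
    "quiet_ballad.m4a": -23.5,
    "loud_single.m4a": -9.8,
}


def playback_gain_db(track_level_db):
    """Gain in dB to apply at playback so the track meets the reference."""
    return REFERENCE_DB - track_level_db


for name, level in measured_levels.items():
    print(f"{name}: apply {playback_gain_db(level):+.1f} dB")
```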
Video
In May 2005, video support was introduced to iTunes with the release of iTunes 4.8, though it was limited to bonus features part of album purchases. The following October, Apple introduced iTunes 6, enabling support for purchasing and viewing video content purchased from the iTunes Store. At launch, the store offered popular shows from the ABC network, including Desperate Housewives and Lost, along with Disney Channel series That's So Raven and The Suite Life of Zack and Cody. CEO Steve Jobs told the press that "We’re doing for video what we’ve done for music — we’re making it easy and affordable to purchase and download, play on your computer, and take with you on your iPod."
In 2008, Apple and select film studios introduced "iTunes Digital Copy", a feature on select DVDs and Blu-ray discs allowing a digital copy in iTunes and associated media players.
Podcasts
In June 2005, Apple updated iTunes with support for podcasts. Users can subscribe to podcasts, change update frequency, define how many episodes to download and how many to delete.
Similar to songs, "Smart playlists" can be used to control podcasts in a playlist, setting criteria such as date and number of times listened to.
Apple is credited for being the major catalyst behind the early growth of podcasting.
Books
In January 2010, Apple announced the iPad tablet, and along with it, a new app for it called iBooks (now known as Apple Books). The app allowed users to purchase e-books from the iTunes Store, manage them through iTunes, and transfer the content to their iPad.
Apps
On July 10, 2008, Apple introduced native mobile apps for its iOS operating system. On iOS, a dedicated App Store application served as the storefront for browsing, purchasing and managing applications, whereas iTunes on computers had a dedicated section for apps rather than a separate app. In September 2017, Apple updated iTunes to version 12.7, removing the App Store section in the process. However, the following month, iTunes 12.6.3 was also released, retaining the App Store, with 9to5Mac noting that the secondary release was positioned by Apple as "necessary for some businesses performing internal app deployments".
iTunes Store
Introduced on April 28, 2003, the iTunes Music Store allowed users to buy and download songs, with 200,000 tracks available at launch. In its first week, customers bought more than one million songs. Music purchased was protected by FairPlay, an encryption layer referred to as digital rights management (DRM). The use of DRM, which limited devices capable of playing purchased files, sparked efforts to remove the protection mechanism. Eventually, after an open letter to the music industry by CEO Steve Jobs in February 2007, Apple introduced a selection of DRM-free music in the iTunes Store in April 2007, followed by its entire music catalog without DRM in January 2009.
In October 2005, Apple announced that movies and television shows would become available through its iTunes Store, employing the DRM protection.
iTunes U
In May 2007, Apple announced the launch of "iTunes U" via the iTunes Store, which delivers university lectures from top U.S. colleges.
With iTunes version 12.7 in August 2017, iTunes U collections became a part of the Podcasts app.
On June 10, 2020, Apple formally announced that iTunes U would be discontinued at the end of 2021.
iTunes in the Cloud and iTunes Match
In June 2011, Apple announced "iTunes in the Cloud", in which music purchases were stored on Apple's servers and made available for automatic downloading on new devices. For music the user owns, such as content ripped from CDs, the company introduced "iTunes Match", a feature that can upload content to Apple's servers, match it to its catalog, change the quality to 256 kbit/s AAC format, and make it available to other devices.
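The match-or-upload workflow described above can be sketched as follows. Keying the catalog on plain (title, artist) metadata is an assumption made purely for illustration; the real service's matching signals and identifiers are not public.

```python
# A sketch of the match-or-upload workflow described above. Keying the
# catalog on plain (title, artist) metadata is an assumption made purely
# for illustration; the real service's matching signals and identifiers
# are not public.

CATALOG = {
    ("Blue Monday", "New Order"): "catalog-id-0042",  # hypothetical entry
}


def match_or_upload(title, artist):
    key = (title, artist)
    if key in CATALOG:
        # Matched: the cloud can serve its own 256 kbit/s AAC copy.
        return f"matched -> serve {CATALOG[key]} as 256 kbit/s AAC"
    # No match (e.g. a self-ripped rarity): upload the user's own file.
    return "no match -> upload the user's copy"


print(match_or_upload("Blue Monday", "New Order"))
print(match_or_upload("Garage Demo", "Unknown Artist"))
```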
Internet radio and music streaming
When iTunes was first released, it came with support for the Kerbango Internet radio tuner service. In June 2013, the company announced iTunes Radio, a free music streaming service. In June 2015, Apple announced Apple Music, its paid music streaming service, and subsequently rebranded iTunes Radio as Beats 1, a radio station accompanying Apple Music.
iPhone connectivity
iTunes was used to activate early iPhone models. Beginning with the iPhone 3G in June 2008, activation did not require iTunes, making use of activation at the point of sale. Later iPhone models can be activated and set up on their own, without requiring the use of iTunes.
Ping
With the release of iTunes 10 in September 2010, Apple announced iTunes Ping, which CEO Steve Jobs described as "social music discovery". It had features reminiscent of Facebook, including profiles and the ability to follow other users. Ping was discontinued in September 2012.
Criticism
Security
The Telegraph reported in November 2011 that Apple had been aware of a security vulnerability since 2008 that would let unauthorized third parties install "updates" to users' iTunes software. Apple fixed the issue before the Telegraph's report and told the media that "The security and privacy of our users is extremely important", though this was questioned by security researcher Brian Krebs, who told the publication that "A prominent security researcher warned Apple about this dangerous vulnerability in mid-2008, yet the company waited more than 1,200 days to fix the flaw."
Software bloat
iTunes has been repeatedly accused of being bloated as part of Apple's efforts to turn it from a music player to an all-encompassing multimedia platform. Former PC World editor Ed Bott accused the company of hypocrisy in its advertising attacks on Windows for similar practices.
Starting with macOS 10.15 Catalina, the role of iTunes has been taken over by independent apps for Apple Music, Apple TV, and Podcasts, with iPhone, iPod, and iPad management moving into Finder.
See also
iTunes Festival
iTunes Store
iTunes version history
AirPlay
List of audio conversion software
Comparison of iPod managers
Dazzboard
Distribution Into iTunes
FairPlay
Feed aggregators:
Feed aggregators, comparison
Feed aggregators, List
Media players, comparison
Music visualization
References
External links
– official site
Apple Inc. services
Online music database clients
Computer-related introductions in 2001
Products and services discontinued in 2019
2001 software
Apple Inc. software
IOS software
IPod software
Mobile device management software
MacOS CD ripping software
Podcasting software
Internet properties established in 2001
Internet properties disestablished in 2019
Jukebox-style media players
Macintosh media players
MacOS media players
Music streaming services
Tag editors
Transactional video on demand
Windows CD ripping software
Windows CD/DVD writing software
Windows media players |
2577631 | https://en.wikipedia.org/wiki/NetOps | NetOps | NetOps is defined as the operational framework consisting of three essential tasks, Situational Awareness (SA), and Command & Control (C2) that the Commander (CDR) of US Strategic Command (USSTRATCOM), in coordination with DoD and Global NetOps Community, employs to operate, manage and defend the Global Information Grid (GIG) to ensure information superiority for the United States.
DoD Instruction (DoDI) 8410.02 defines NetOps as the DoD-wide operational, organizational, and technical capabilities for operating and defending the Global Information Grid. NetOps includes, but is not limited to, enterprise management, net assurance, and content management. NetOps provides Combatant Commanders (COCOMs) with GIG Situational Awareness to make informed Command and Control decisions. GIG SA is gained through the operational and technical integration of enterprise management and defense actions and activities across all levels of command (strategic, operational and expeditionary forces).
The three essential tasks are as follows:
GIG Enterprise Management (GEM)
GIG Net Assurance (GNA)
GIG Content Management (GCM)
The synergy achieved by each integrated relationship between any two of the essential tasks (GEM, GNA, and GCM) produces the following NetOps desired effects in support of the overall goal of NetOps which is to provide the right information to the edge:
Assured System and Network Availability
Assured Information Protection
Assured Information Delivery
The element of NetOps known as Situational Awareness (SA) is primarily the ability to improve the quality and timeliness of collaborative decision-making. To be effective, much of the SA must be shared in near-real-time by the decision-makers, who have the ability to take this information, conduct critical analysis, and act on those decisions with regard to employment, protection and defense of the GIG.
This shared Situational Awareness is derived from common reporting requirements using functionally standardized management tools and common data information exchange formats across the Defense Department. These capabilities collect (or receive), and fuse (enterprise management, network defense and configuration management) data in a real time or near real-time fashion to produce defined views of the mission critical GIG information of concern to a commander or NetOps center.
The DoD NetOps Community strives to obtain common visibility of network resources so that it can manage, anticipate and mitigate problems, ensuring uninterrupted availability and protection of the GIG, and provide for graceful degradation, self-healing, failover, diversity, and elimination of critical failure points. Through effective visibility, the NetOps community endeavors to attain the three goals of NetOps: Assured System and Network Availability, Assured Information Protection and Assured Information Delivery.
Joint Task Force Global Network Operations (JTF-GNO)
JTF-GNO directs the operation and defense of the GIG to assure timely and secure Net-Centric capabilities across strategic, operational, and expeditionary boundaries in support of full spectrum warfighting, intelligence, and business missions for the Defense Department.
Background
In 1998, the Department of Defense recognized a growing cyber threat and in response created the Joint Task Force — Computer Network Defense (JTF-CND), which achieved Initial Operational Capability (IOC) on 30 December 1998 and Full Operational Capability (FOC) by June 1999.
In the fall of 2000, in accordance with DoD doctrine, JTF-CND became the Joint Task Force — Computer Network Operations (JTF-CNO). In October 2002, the new Unified Command Plan (UCP), Change 2, re-aligned JTF-CNO under the United States Strategic Command (USSTRATCOM).
The JTF-CNO began its largest and most comprehensive transformation in April 2004 when the Commander of US Strategic Command approved the Joint Concept of Operations for GIG Network Operations. This “NetOps CONOPS” provided the common framework and command and control structure to conduct the USSTRATCOM Unified Command Plan (UCP) - assigned mission of Global Network Operations (NETOPS), combining the disciplines of Enterprise Management (EM) and Network Management (NM), Computer Network Defense (CND), and Information Dissemination Management (IDM).
The Secretary of Defense signed a delegation of authority letter on 18 June 2004, designating the Director, DISA as the new Commander of the Joint Task Force-Global Network Operations. With this designation, the new command assumed the responsibility for directing the operation and defense of the GIG.
This transformation enhanced the JTF GNO's mission and objectives in achieving the Joint Vision 2020 Objective Force and the evolving concept of Net-Centricity.
As new concepts such as Network-centric warfare and Joint Vision 2010 arrived in the mid 1990s, it became clear that the center of gravity for U.S. military warfighting capability was shifting towards the network. A corresponding capability was required to move beyond managing the network as a back-office system into a domain of warfighting.
NetOps was originally developed under the leadership of then United States Pacific Command J6 Brigadier General James Bryan during the stand-up of the USCINCPAC Theater C4I Coordination Center (TCCC) at Camp H. M. Smith, Hawaii in 1999. The TCCC initiative consisted of two distinct components: the technology that formed the vision of the GIG and the NetOps initiative, and the partnerships that made it a reality.
Through its working relationships with DISA, the Service Components, Sub-Unified Commands, JTFs, other CINC TCCC's, and the Joint Staff, USCINCPAC TCCC made the initial strides towards achieving Information Superiority and true enterprise-level processes. The USCINCPAC TCCC was a pilot program for the Assistant Secretary of Defense for Command, Control, Communications, and Intelligence (ASD/C3I) NetOps concept. The NetOps concept began with the development of the architectural framework for NetOps, and a USCINCPAC developed Concept of Operations (CONOPS) outlining the key players and their roles and responsibilities necessary to develop the NetOps construct in the Pacific Theater.
The original NetOps construct consisted of Network Management (NM), Information Assurance (IA), and Information Dissemination Management (IDM). Today the construct has evolved into GIG Enterprise Management (GEM), GIG Net Assurance (GNA), and GIG Content Management (GCM), which roughly equates to the intent of the original NetOps concept.
NetOps Vision
“We must change the paradigm in which we talk and think about the network; we must ‘fight’ rather than ‘manage’ the network and operators must see themselves as engaged at all times, ensuring the health and operation of this critical weapons system.” Donald Rumsfeld, United States Secretary of Defense (2001 - 2006)
"The US government and the US military must become still more joint, more agile, more decentralized, more networked, and better arranged to share information and coordinate actions." Donald Rumsfeld, United States Secretary of Defense (2001 - 2006)
“This version of the NetOps CONOPS documents the lessons learned by Joint Task Force-Global Network Operations and the NetOps community through operations, exercises, and other events. We will continue to work with the NetOps Community as we translate the concepts set forth in this document into doctrine, policy and joint tactics, techniques, and procedures that strengthen the operations and defense of the Global Information Grid in support of warfighter business and intelligence operations.” General James E. Cartwright, Commander, United States Strategic Command (2004 – August 2007)
“The Strategic Vision for the JTF GNO is to lead an adaptive force that assures the availability, delivery, and protection of the Global Information Grid. The NetOps framework, effects, and organizational relationships described herein formulate a foundation for the operational future of the GIG, but these will not happen automatically, nor will they occur without significant effort from the entire NetOps Community. Attaining the vision will require cooperation, innovation, and execution from all mission partners and everyone who touches the GIG.” (From "Joint Task Force-Global Network Operations Strategic Plan, An Adaptive Force Ensuring Information Delivery", February 2006. The adaptive force assures availability, delivery and protection of infrastructure, systems, and information.) LtGen Charles E. Croom, Commander, JTF GNO.
There is also a paradigm shift occurring in NetOps from 1.0 to 2.0, or DevOps. NetOps transformation is part of a new wave of automation assistance for network operators, and several methodologies exist to guide it. One prominent methodology is aptly named DIRE NetOps. It focuses on Documentation, Isolation, Repair, and Escalation to guide the user through the transformation process, ensuring that high-value tasks are supported with automation.
Mission
The NetOps mission is to operate and defend the GIG. Unlike many missions with a defined completion date, NetOps has been established as a standing Joint Force mission necessitating dedicated leadership and resources to execute.
NetOps provides assured NetCentric services to the DoD in support of the full spectrum of warfighting operations, intelligence, and business missions throughout the GIG enterprises, seamlessly, end-to-end. An objective of NetCentric services is to quickly get information to decision makers, with adequate context, to make better decisions affecting the mission and to project their decisions forward to their forces for action.
If the decision maker is not getting the needed net-centric services, the GIG NetOps community must collaboratively determine who must take action and how information flow can be optimized. This requires NetOps personnel to have a shared SA as well as the technologies, procedures, and collaborative organizational structures to rapidly assess and respond to system and network degradations, outages, or changes in operational priorities. All functions required to most effectively support GIG operations will be holistically managed.
The effectiveness of NetOps will be measured in terms of availability and reliability of net-centric services, across all domains, in adherence to agreed-upon service levels and policies. The method for service assurance in a NetCentric collaborative environment is to establish operational thresholds, compliance monitoring, and a clear understanding of the capabilities between enterprise service/resource providers and consumers through Service Level Agreements (SLAs).
Proper instrumentation of the GIG will enable monitoring of adherence to these SLAs, as well as enable timely decision-making, service prioritization, resource allocation, root cause, and mission impact assessment. Subsequent TTPs and SLAs will be formalized with appropriate implementation policies to enforce compliance.
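As a rough illustration of the compliance-monitoring idea described above (the service names, figures, and record format here are hypothetical, not drawn from any DoD system), a monitor only needs to compare measured service levels against the agreed SLA floors and flag breaches, sketched in Python:

    # Hypothetical records: (service, measured availability %, agreed SLA floor %)
    measurements = [
        ("messaging", 99.95, 99.9),
        ("imagery-feed", 98.70, 99.5),
        ("voice", 99.99, 99.9),
    ]

    def sla_report(records):
        # Flag any service whose measured availability falls below its floor.
        for service, measured, floor in records:
            status = "OK" if measured >= floor else "BREACH"
            print(f"{service:13s} measured={measured:6.2f}%  floor={floor:5.2f}%  {status}")

    sla_report(measurements)

A real implementation would feed such checks from instrumentation of the GIG rather than static records, and would route breaches into the shared situational-awareness picture.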
See also
Joint Task Force-Global Network Operations
United States Strategic Command
United States Cyber Command
Global Information Network Architecture
Information Assurance Vulnerability Alert
References
External links
United States Strategic Command Official Website
Joint Task Force Global Network Operations (JTF GNO) - Requires PKI/CAC
Defense Information Systems Agency (DISA) Information Assurance/NetOps
DoD IA Policy Chart - Build and Operate a Trusted GIG
Warfare post-1945
United States Department of Defense information technology
Joint military units and formations of the United States |
11157848 | https://en.wikipedia.org/wiki/FLUKA | FLUKA | FLUKA (FLUktuierende KAskade) is a fully integrated Monte Carlo simulation package for the interaction and transport of particles and nuclei in matter.
FLUKA has many applications in particle physics, high energy experimental physics and engineering, shielding, detector and telescope design, cosmic ray studies, dosimetry, medical physics, radiobiology. A recent line of development concerns hadron therapy.
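The Monte Carlo method that FLUKA implements at scale can be illustrated with a toy transport problem (an independent Python sketch, not FLUKA's input format or API): each particle history samples a random free-path length, and bulk quantities emerge from averaging many histories. Here the fraction of particles crossing a slab of assumed thickness and mean free path is estimated and checked against the analytic exponential attenuation law:

    import math
    import random

    def transmitted_fraction(thickness, mean_free_path, n_particles=100_000):
        # Count particles whose first interaction point lies beyond the slab;
        # free-path lengths are exponentially distributed.
        survived = 0
        for _ in range(n_particles):
            if random.expovariate(1.0 / mean_free_path) > thickness:
                survived += 1
        return survived / n_particles

    t, mfp = 5.0, 2.0  # arbitrary units
    print(transmitted_fraction(t, mfp))   # close to the analytic value
    print(math.exp(-t / mfp))             # analytic value: ~0.0821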
FLUKA software code is used by Epcard, which is a software program for simulating radiation exposure on airline flights.
References
Further reading
External links
Official site of FLUKA collaboration
FLUKA on the CERN bulletin
Physics software used to fight cancer
Fortran software
Physics software
Monte Carlo molecular modelling software
Science software for Linux
Linux-only proprietary software
CERN software
Monte Carlo particle physics software
Proprietary commercial software for Linux |
57793246 | https://en.wikipedia.org/wiki/Data%20center%20security | Data center security | Data center security is the set of policies, precautions and practices adopted at a data center to avoid unauthorized access and manipulation of its resources. The data center houses the enterprise applications and data, hence why providing a proper security system is critical. Denial of service (DoS), theft of confidential information, data alteration, and data loss are some of the common security problems afflicting data center environments.
Overview
According to the Cost of a Data Breach Survey, in which 49 U.S. companies in 14 different industry sectors participated:
39% of companies say negligence was the primary cause of data breaches
Malicious or criminal attacks account for 37 percent of total breaches.
The average cost of a breach is $5.5 million.
The need for a secure data center
Physical security is needed to protect the value of the hardware therein.
Data protection
The cost of a breach of security can have severe consequences on both the company managing the data center and on the customers whose data are copied. The 2012 breach at Global Payments, a processing vendor for Visa, where 1.5 million credit card numbers were stolen, highlights the risks of storing and managing valuable and confidential data. As a result, Global Payments' partnership with Visa was terminated; it was estimated that they lost over $100 million.
Insider attacks
Defenses against exploitable software vulnerabilities are often built on the assumption that "insiders" can be trusted. Studies show that internal attacks tend to be more damaging because of the variety and amount of information available inside organizations.
Vulnerabilities and common attacks
The quantity of data stored in data centers has increased, partly due to the concentrations created by cloud computing.
Threats
Some of the most common threats to data centers:
DoS (Denial of Service)
Data theft or alteration
Unauthorized use of computing resources
Identity theft
Vulnerabilities
Common vulnerabilities include:
Implementation: Software design and protocol flaws, coding errors, and incomplete testing
Configuration: Use of defaults, elements inappropriately configured
Exploitation of out-of-date software
Many "worm" attacks on data centers exploited well-known vulnerabilities:
CodeRed
Nimda and
SQL Slammer
Exploitation of software defaults
Many systems are shipped with default accounts and passwords, which are exploited for unauthorized access and theft of information.
Common attacks
Common attacks include:
Scanning or probing: One example of a probe- or scan-based attack is a port scan, whereby "requests to a range of server port addresses on a host" are used to find "an active port" and then cause harm via "a known vulnerability of that service". This reconnaissance activity often precedes an attack; its goal is to gain access by discovering information about a system or network.
DoS (Denial of service): A denial-of-service attack occurs when legitimate users are unable to access information systems, devices, or other network resources due to the actions of a malicious cyber threat actor. This type of attack generates a large volume of data to deliberately consume limited resources such as bandwidth, CPU cycles, and memory blocks.
Distributed Denial of Service (DDoS): This kind of attack is a particular case of DoS where a large number of systems are compromised and used as sources of traffic in a synchronized attack. In this kind of attack, the hacker does not use only one IP address but thousands of them.
Unauthorized access: When someone other than an account owner uses privileges associated with a compromised account to access restricted resources, using a valid account or a backdoor.
Eavesdropping: Etymologically, eavesdropping means secretly listening to a conversation. In the networking field, it is the unauthorized interception of information (usernames, passwords) that travels over the network. User logons are the most common signals sought.
Viruses and worms: These are malicious code that, when executed, produce undesired results. Worms are self-replicating malware, whereas viruses, which can also replicate, need some kind of human action to cause damage.
Internet infrastructure attacks: This kind of attack targets the critical components of the Internet infrastructure rather than individual systems or networks.
Trust exploitation: These attacks exploit the trust relationships that computer systems have to communicate.
Session hijacking also known as cookie hijacking: Consists of stealing a legitimate session established between a target and a trusted host. The attacker intercepts the session and makes the target believe it is communicating with the trusted host.
Buffer overflow attacks: When a program writes beyond the memory buffer space it had reserved, the result is memory corruption affecting the data stored in the overflowed memory areas.
Layer 2 attacks: This type of attack exploits the vulnerabilities of data link layer protocols and their implementations on layer 2 switching platforms.
SQL injection: Also known as code injection, this is where incomplete validation of input to a data-entry form allows an attacker to supply input that causes harmful instructions to be executed (a minimal demonstration follows this list).
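The following minimal demonstration uses Python's built-in sqlite3 module (the table and values are invented for illustration); it shows how string-pasted input subverts a query, and how a parameterized query treats the same input purely as data:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

    malicious = "x' OR '1'='1"

    # Vulnerable: attacker-controlled text becomes part of the SQL statement,
    # so the injected OR clause matches every row.
    rows = conn.execute(
        f"SELECT * FROM users WHERE name = '{malicious}'").fetchall()
    print("vulnerable query returned:", rows)      # leaks alice's row

    # Safe: the placeholder keeps the input out of the SQL grammar.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = ?", (malicious,)).fetchall()
    print("parameterized query returned:", rows)   # []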
Network security infrastructure
The network security infrastructure includes the security tools used in data centers to enforce security policies. The tools include packet-filtering technologies such as ACLs, firewalls and intrusion detection systems (IDSs) both network-based and host-based.
ACLs (Access Control List)
ACLs are filtering mechanisms explicitly defined based on packet header information to permit or deny traffic on specific interfaces. ACLs are used in multiple locations within the Data Center such as the Internet Edge and the intranet server farm. The following describes standard and extended access lists:
Standard ACLs: the simplest type of ACL, filtering traffic solely based on source IP addresses. Standard ACLs are typically deployed to control access to network devices for network management or remote access. For example, one can configure a standard ACL in a router to specify which systems are allowed to Telnet to it. Standard ACLs are not a recommended option for traffic filtering due to their lack of granularity. Standard ACLs are configured with a number between 1 and 99 in Cisco routers.
Extended ACLs:
Extended ACL filtering decisions are based on the source and destination IP addresses, Layer 4 protocols, Layer 4 ports, ICMP message type and code, type of service, and precedence. In Cisco routers, one can define extended ACLs by name or by a number in the 100 to 199 range.
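The difference in granularity can be modeled in a few lines of Python (an illustrative model, not Cisco IOS syntax; addresses and ports are invented): standard-ACL rules look only at the source address, extended-ACL rules may also match protocol and port fields, and in both cases rules are evaluated top-down with an implicit deny at the end:

    from dataclasses import dataclass

    @dataclass
    class Packet:
        src_ip: str
        protocol: str   # "tcp", "udp", "icmp", ...
        dst_port: int

    # Each rule is (action, match predicate); the first match wins.
    standard_acl = [
        ("permit", lambda p: p.src_ip.startswith("10.1.1.")),   # source only
    ]
    extended_acl = [
        ("permit", lambda p: p.protocol == "tcp" and p.dst_port == 443),
        ("deny",   lambda p: p.protocol == "tcp" and p.dst_port == 23),  # no Telnet
    ]

    def evaluate(acl, packet):
        for action, matches in acl:
            if matches(packet):
                return action
        return "deny"   # an implicit deny ends every ACL

    pkt = Packet("10.1.1.7", "tcp", 23)
    print(evaluate(standard_acl, pkt))   # permit (source matches)
    print(evaluate(extended_acl, pkt))   # deny (Telnet blocked)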
Firewalls
A firewall is a sophisticated filtering device that separates LAN segments, giving each segment a different security level and establishing a security perimeter that controls the traffic flow between segments. Firewalls are most commonly deployed at the Internet Edge, where they act as a boundary to the internal networks. They are expected to have the following characteristics:
Performance: the main goal of a firewall is to separate the secured and the unsecured areas of a network. Firewalls are then placed in the primary traffic path, potentially exposed to large volumes of data. Hence, performance becomes a natural design factor to ensure that the firewall meets the particular requirements.
Application support: Another important aspect is the ability of a firewall to control and protect a particular application or protocol, such as Telnet, FTP, and HTTP. The firewall is expected to understand application-level packet exchanges to determine whether packets follow the application's expected behavior and, if they do not, to deny the traffic.
There are different types of firewalls based on their packet-processing capabilities and their awareness of application-level information:
Packet-filtering firewalls
Proxy firewalls
Stateful firewalls
Hybrid firewalls
IDSs
IDSs are real-time systems that can detect intruders and suspicious activities and report them to a monitoring system. They can be configured to block or mitigate intrusions in progress and ultimately immunize the systems against future attacks. They have two fundamental components:
Sensors: Appliances and software agents that analyze the traffic on the network or the resource usage on end systems to identify intrusions and suspicious activities.
IDS management: Single- or multi-device system used to configure and administer sensors and to additionally collect all the alarm information generated by the sensors. The sensors are equivalent to surveillance tools, and IDS management is the control center watching the information produced by the surveillance tools.
Layer 2 security
Cisco Layer 2 switches provide tools to prevent common Layer 2 attacks (scanning or probing, DoS, DDoS, etc.). The following are some security features covered by Layer 2 security:
Port Security
ARP Inspection
Private VLANs
Private VLANs and Firewalls
Security measures
The process of securing a data center requires both a comprehensive system-analysis approach and an ongoing process that improves the security levels as the Data Center evolves. The data center is constantly evolving as new applications or services become available. Attacks are becoming more sophisticated and more frequent. These trends require a steady evaluation of security readiness.
A key component of the security-readiness evaluation is the policies that govern the application of security in the network, including the data center. The application includes both design best practices and implementation details. As a result, security is often considered a key component of the main infrastructure requirement. Since a key responsibility of data centers is to ensure the availability of services, data center management systems often consider how security affects traffic flows, failures, and scalability. Because security measures may vary depending on the data center design, the use of unique features, compliance requirements, or the company's business goals, there is no set of specific measures that covers all possible scenarios.
There exist in general two types of data center security: physical security and virtual security.
Physical security
The physical security of a data center is the set of protocols built into the data center facilities in order to prevent any physical damage to the machines storing the data. Those protocols should be able to handle everything ranging from natural disasters to corporate espionage to terrorist attacks.
To prevent physical attacks, data centers use techniques such as:
CCTV security network: locations and access points with 90-day video retention.
24×7 on-site security guards
Network operations center (NOC) Services and technical team
Anti-tailgating/Anti-pass-back turnstile gate. Only permits one person to pass through after authentication.
Single entry point into co-location facility.
Minimization of traffic through dedicated data halls, suites, and cages.
Further access restriction to private cages
Three-factor authentication
SSAE 16 compliant facilities.
Checking the provenance and design of hardware in use
Reducing insider risk by monitoring activities and keeping their credentials safe
Monitoring of temperature and humidity
Fire prevention with zoned dry-pipe sprinkler
Natural disaster risk-free locations
Virtual security
Virtual security is security measures put in place by the data centers to prevent remote unauthorized access that will affect the integrity, availability or confidentiality of data stored on servers.
Virtual or network security is a hard task to handle, as there exist many ways it could be attacked. The worst part is that it keeps evolving year after year. For instance, an attacker could decide to use malware (or similar exploits) in order to bypass the various firewalls and access the data. Older systems may also put security at risk, as they do not contain modern methods of data security.
Virtual attacks can be prevented with techniques such as:
Heavy encryption of data in transit or at rest: 256-bit SSL encryption for web applications, 1024-bit RSA public keys for data transfers, and AES 256-bit encryption for files and databases.
Audit logs of the activities of all users.
Secured usernames and passwords: encrypted via 256-bit SSL, requirements for complex passwords, scheduled password expiration, prevention of password reuse.
Access based on the level of clearance.
AD/LDAP integration.
Control based on IP addresses.
Encryption of session ID cookies in order to identify each unique user.
Two-factor authentication availability.
Third party penetration testing performed annually
Malware prevention through firewalls and automated scanners
References
Computer network security
Data breaches
Data centers
Data security
Information management |
810178 | https://en.wikipedia.org/wiki/Freedom%20Downtime | Freedom Downtime | Freedom Downtime is a 2001 documentary film sympathetic to the convicted computer hacker Kevin Mitnick, directed by Emmanuel Goldstein and produced by 2600 Films.
The documentary centers on the fate of Mitnick, who is claimed to have been misrepresented in the feature film Takedown (2000), produced by Miramax and adapted from the book of the same name by Tsutomu Shimomura and John Markoff, which is based on disputed events. The film also documents a number of computer enthusiasts who drive across the United States searching for Miramax representatives and demonstrating their discontent with certain aspects of the bootleg script of Takedown they had acquired. One of their major points of criticism was that the script ended with Mitnick being sentenced to a long prison term, while in reality, at the time of the film's production, Mitnick had not yet even had a trial but was nonetheless incarcerated for five years without bail in a high-security facility. Freedom Downtime also touches on what happened to other hackers after being sentenced. The development of the Free Kevin movement is also covered.
Several notable and iconic figures from the hacking community appear in the movie, including Phiber Optik (Mark Abene), Bernie S (Ed Cummings), Alex Kasper, and director Emmanuel Goldstein (Eric Corley). Freedom Downtime tries to communicate a different view of the hacker community from that usually shown by the mainstream media, with hackers being depicted as curious people who rarely intend to cause damage, driven by a desire to explore and conduct pranks. The film questions the rationality of placing computer hackers who went "over the line" in the same environment as serious felons.
It also contains interviews with people related to Mitnick and hacker culture in general. The authors of Cyberpunk, ex-couple Katie Hafner and John Markoff, appear in very different roles. While Hafner's empathy for Mitnick is shown to have grown, Markoff continues to defend his critical book and articles in The New York Times about the hacker. The narrator, director Goldstein (a hacker himself), ridicules Markoff by pointing out his factual errors during the interview. Reba Vartanian, Mitnick's grandmother, also appears in a number of interview segments. Furthermore, lawyers, friends, and libertarians give their view of the story. Footage and interviews from the DEF CON and Hackers on Planet Earth conventions try to dispel some hacker myths and confirm others.
The film premiered at H2K, the 2000 H.O.P.E. convention. After that the film saw a limited independent theatrical release and was shown at film festivals. It was released on VHS and sold via the 2600 website.
In June 2004, a DVD was released. The DVD includes a wealth of extra material spread over two discs, including three hours of extra footage, an interview with Kevin Mitnick from January 2003 (shortly after his supervised release ended), and various hidden easter eggs. It also includes subtitles in 20 languages, provided by volunteers.
References
External links
Watch/Download Freedom Downtime from Archive.org
Watch/Download Freedom Downtime from Defcon.org
2001 films
American documentary films
American films
2600: The Hacker Quarterly
Documentary films about the Internet
Hacker culture
Works about computer hacking |
16466075 | https://en.wikipedia.org/wiki/3793%20Leonteus | 3793 Leonteus | 3793 Leonteus is a large Jupiter trojan from the Greek camp, approximately in diameter. It was discovered on 11 October 1985, by American astronomer couple Carolyn and Eugene Shoemaker at the Palomar Observatory in California, United States. The D-type Jovian asteroid belongs to the 30 largest Jupiter trojans and has a rotation period of 5.6 hours. It was named after the hero Leonteus from Greek mythology.
Orbit and classification
Leonteus is a dark Jovian asteroid orbiting in the leading Greek camp at Jupiter's L4 Lagrangian point, 60° ahead of its orbit in a 1:1 resonance (see Trojans in astronomy). It is also a non-family asteroid in the Jovian background population.
It orbits the Sun at a distance of 4.8–5.7 AU once every 11 years and 11 months (4,350 days; semi-major axis of 5.22 AU). Its orbit has an eccentricity of 0.09 and an inclination of 21° with respect to the ecliptic. The asteroid was first observed as at the McDonald Observatory in November 1951. The body's observation arc begins with its observation as at Goethe Link Observatory in October 1961, or 24 years prior to its official discovery observation at Palomar.
Physical characteristics
In both the Tholen- and SMASS-like taxonomy of the Small Solar System Objects Spectroscopic Survey (S3OS2), Leonteus is a D-type asteroid. It is also a D-type in the SDSS-based taxonomy, while the Collaborative Asteroid Lightcurve Link (CALL) assumes it to be of carbonaceous composition.
Rotation period
Since 1994, several rotational lightcurves have been obtained from photometric observations by Stefano Mottola and Anders Erikson using the Dutch 0.9-metre and Bochum 0.61-metre telescopes at La Silla Observatory in Chile, as well as by American photometrist Robert Stephens at the Center for Solar System Studies in California.
Analysis of Mottola's best-rated lightcurve from June 1994 gave a rotation period of 5.6 hours.
Diameter and albedo
According to the surveys carried out by the Infrared Astronomical Satellite IRAS, the Japanese Akari satellite and the NEOWISE mission of NASA's Wide-field Infrared Survey Explorer, Leonteus measures between 86.26 and 112.05 kilometers in diameter and its surface has an albedo between 0.042 and 0.072.
CALL derives an albedo of 0.0784 and a diameter of 86.38 kilometers based on an absolute magnitude of 8.7.
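These figures are consistent with the standard conversion between absolute magnitude H, geometric albedo p, and diameter D for asteroids, D = (1329 km / √p) · 10^(−H/5), as a quick check in Python shows:

    import math

    def diameter_km(albedo, H):
        # Standard asteroid H-p-D relation: D = (1329 / sqrt(p)) * 10**(-H / 5)
        return 1329 / math.sqrt(albedo) * 10 ** (-H / 5)

    print(round(diameter_km(0.0784, 8.7), 2))   # ~86.4 km, matching CALL's value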
Naming
This minor planet was named from Greek mythology after Leonteus, a hero of the Trojan War, who attempted to win a competition among the Greek warriors to see who could throw an iron meteorite the farthest. However, he lost the game to his associate, Polypoites, after whom the minor planet 3709 Polypoites is named. The official naming citation was published by the Minor Planet Center on 27 August 1988 ().
Notes
References
External links
Asteroid Lightcurve Database (LCDB), query form
Dictionary of Minor Planet Names, Google books
Discovery Circumstances: Numbered Minor Planets (1)-(5000) – Minor Planet Center
003793
Discoveries by Carolyn S. Shoemaker
Discoveries by Eugene Merle Shoemaker
Minor planets named from Greek mythology
Named minor planets
19851011 |
10849824 | https://en.wikipedia.org/wiki/Label%20switching | Label switching | Label switching is a technique of network relaying to overcome the problems perceived by traditional IP-table switching (also known as traditional layer 3 hop-by-hop routing). Here, the switching of network packets occurs at a lower level, namely the data link layer rather than the traditional network layer.
Each packet is assigned a label number, and switching takes place after examination of the label assigned to each packet. Label switching is much faster than conventional IP routing. Newer technologies such as Multiprotocol Label Switching (MPLS) use label switching, and the established ATM protocol also uses label switching at its core.
According to RFC 2475 (An Architecture for Differentiated Services, December 1998):
"Examples of the label switching (or virtual circuit) model include Frame Relay, ATM, and MPLS. In this model path forwarding state and traffic management or quality of service (QoS) state is established for traffic streams on each hop along a network path. Traffic aggregates of varying granularity are associated with a label switched path at an ingress node, and packets/cells within each label switched path are marked with a forwarding label that is used to look up the next-hop node, the per-hop forwarding behavior, and the replacement label at each hop. This model permits finer granularity resource allocation to traffic streams, since label values are not globally significant but are only significant on a single link; therefore resources can be reserved for the aggregate of packets/cells received on a link with a particular label, and the label switching semantics govern the next-hop selection, allowing a traffic stream to follow a specially engineered path through the network."
A related topic is "Multilayer Switching," which discusses silicon-based wire-speed routing devices that examine not only layer 3 packet information, but also layer 4 (transport) and layer 7 (application) information.
References
See also
Virtual circuit
Computer networking |
1088297 | https://en.wikipedia.org/wiki/List%20of%20objects%20at%20Lagrange%20points | List of objects at Lagrange points | This is a list of known objects which occupy, have occupied, or are planned to occupy any of the five Lagrange points of two-body systems in space.
Sun–Earth Lagrange points
L1
Sun–Earth L1 is the Lagrange point located approximately 1.5 million kilometers from Earth towards the Sun.
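That distance follows from the standard approximation for the collinear points of the restricted three-body problem, r ≈ a·(m/3M)^(1/3), valid when the secondary mass m is much smaller than the primary mass M; a quick check in Python:

    # Approximate Earth-L1/L2 distance via the Hill-sphere formula.
    a = 1.496e8    # Earth-Sun distance, km
    m = 5.972e24   # Earth mass, kg
    M = 1.989e30   # Sun mass, kg

    r = a * (m / (3 * M)) ** (1 / 3)
    print(f"{r:,.0f} km")   # ~1.5 million km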
Past probes
International Cometary Explorer, formerly the International Sun–Earth Explorer 3 (ISEE-3), was diverted out of L1 in 1983 for a comet rendezvous mission, and is currently in heliocentric orbit. The Sun–Earth L1 is also the point to which the Reboot ISEE-3 mission was attempting to return the craft as the first phase of a recovery mission (as of September 25, 2014, all efforts had failed and contact was lost).
NASA's Genesis probe collected solar wind samples at L1 from December 3, 2001, to April 1, 2004, when it returned the sample capsule to Earth. It returned to L1 briefly in late 2004 before being pushed into heliocentric orbit in early 2005.
LISA Pathfinder (LPF) was launched on 3 December 2015, and arrived at L1 on 22 January 2016, where, among other experiments, it tested the technology needed by (e)LISA to detect gravitational waves. LISA Pathfinder used an instrument consisting of two small gold alloy cubes.
The Chang'e 5 orbiter (during its extended mission): after ferrying lunar samples back to Earth in 2020, the transport module was sent to L1, where it is permanently stationed to conduct limited Earth–Sun observations.
Present probes
The Solar and Heliospheric Observatory (SOHO) in a halo orbit
The Advanced Composition Explorer (ACE) in a Lissajous orbit
WIND (at L1 since 2004)
The Deep Space Climate Observatory (DSCOVR), designed to image the sunlit Earth in 10 wavelengths (EPIC) and monitor total reflected radiation (NISTAR). Launched on 11 February 2015, began orbiting L1 on 8 June 2015 to study the solar wind and its effects on Earth. DSCOVR is unofficially known as GORESAT, because it carries a camera always oriented to Earth and capturing full-frame photos of the planet similar to the Blue Marble. This concept was proposed by then-Vice President of the United States Al Gore in 1998 and was a centerpiece in his 2006 film An Inconvenient Truth.
Planned probes
Aditya-L1
Interstellar Mapping and Acceleration Probe slated for launch in late 2024
NEO Surveyor
SWFO-L1
The ESA Advanced Telescope for High ENergy Astrophysics (ATHENA)
Lagrange mission (ESA). One spacecraft in L1 and one in L5.
L2
Sun–Earth L2 is the Lagrange point located approximately 1.5 million kilometers from Earth in the direction opposite the Sun. Spacecraft at the Sun–Earth L2 point are in a Lissajous orbit until decommissioned, when they are sent into a heliocentric graveyard orbit.
Past probes
2001 – 2010: NASA's Wilkinson Microwave Anisotropy Probe (WMAP) observed the cosmic microwave background. It was moved to a heliocentric orbit to avoid posing a hazard to future missions.
2003 – 2004: NASA's WIND. The spacecraft then went to Earth orbit, before heading to L1.
2009 – 2013: The ESA Herschel Space Observatory exhausted its supply of liquid helium and was moved from the Lagrangian point in June 2013.
2009 – 2013: At the end of its mission ESA's Planck spacecraft was put into a heliocentric orbit and passivated to prevent it from endangering any future missions.
2011 – 2012: CNSA's Chang'e 2. Chang'e 2 was then placed onto a heliocentric orbit that took it past the near-Earth asteroid 4179 Toutatis.
Present probes
The ESA Gaia probe
The joint Russian-German high-energy astrophysics observatory Spektr-RG
The joint NASA, ESA and CSA James Webb Space Telescope (JWST)
Planned probes
The ESA Euclid mission, to better understand dark energy and dark matter by accurately measuring the acceleration of the universe.
The NASA Nancy Grace Roman Space Telescope (WFIRST)
The ESA PLATO mission, which will find and characterize rocky exoplanets.
The JAXA LiteBIRD mission.
The ESA ARIEL mission, which will observe the atmospheres of exoplanets.
The joint ESA-JAXA Comet Interceptor
The NASA Advanced Technology Large-Aperture Space Telescope, which would replace the Hubble Space Telescope.
Cancelled probes
The ESA Eddington mission
The NASA Terrestrial Planet Finder mission (may be placed in an Earth-trailing orbit instead)
L3
L3 is the Sun–Earth Lagrange point located on the side of the Sun opposite Earth, slightly outside the Earth's orbit.
There are no known objects in this orbital location.
L4
L4 is the Sun–Earth Lagrange point located close to the Earth's orbit 60° ahead of Earth.
Asteroid 2010 TK7 is the first discovered tadpole orbit companion to Earth, orbiting L4; like Earth, its mean distance to the Sun is about one astronomical unit.
Asteroid 2020 XL5 is the second Earth trojan, confirmed in November 2021, oscillating around L4 in a tadpole orbit and expected to remain there for at least 4000 years, until destabilized by Venus.
STEREO A (Solar TErrestrial RElations Observatory – Ahead) made its closest pass to L4 in September 2009, on its orbit around the Sun, slightly faster than Earth.
OSIRIS-REx passed near the L4 point and performed a survey for asteroids between 9 and 20 February 2017.
L5
L5 is the Sun–Earth Lagrange point located close to the Earth's orbit 60° behind Earth.
One asteroid, in a horseshoe companion orbit with Earth, is currently proximal to L5 but at a high inclination.
STEREO B (Solar TErrestrial RElations Observatory – Behind) made its closest pass to L5 in October 2009, on its orbit around the Sun, slightly slower than Earth.
The Spitzer Space Telescope is in an Earth-trailing heliocentric orbit, drifting away from Earth at about 0.1 AU per year. In about 2013–15 it passed L5 in its orbit.
Hayabusa2 passed near L5 during the spring of 2017, and imaged the surrounding area to search for Earth trojans on 18 April 2018.
Proposed
Lagrange mission (ESA). One spacecraft in L5 and one in L1.
Earth–Moon Lagrange points
L2
THEMIS
Chang'e 5-T1
Queqiao relay satellite
L4 and L5
Kordylewski clouds
Future location of TDRS-style communication satellites to support satellite
Past probes
Hiten was the first spacecraft to demonstrate a low-energy trajectory, passing by L4 and L5 to achieve lunar orbit at a very low fuel expense compared to usual orbital techniques. Hiten did not find any conclusive increase in dust density at the Lagrange points.
Proposed objects
Exploration Gateway Platform
In his 1976 book The High Frontier: Human Colonies in Space, Dr. Gerard O'Neill proposed the establishment of gigantic space islands at L5, whose inhabitants would convert lunar material into huge solar power satellites; the L5 Society took its name from this proposal. Many works of fiction, most notably the Gundam series, involve colonies at these locations.
Sun–Venus Lagrange points
L4
Sun–Mars Lagrange points
Asteroids at the L4 and L5 Sun–Mars Lagrangian points are sometimes called Mars trojans, with a lower-case t, as "Trojan asteroid" was originally defined as a term for Lagrangian asteroids of Jupiter. They may also be called Mars Lagrangian asteroids.
L4
L5
5261 Eureka
Additional candidate objects (not confirmed as true Lagrangian asteroids)
Source: Minor Planet Center
Sun–Jupiter Lagrange points
Asteroids at the L4 and L5 Sun–Jupiter Lagrangian points are known as Jupiter Trojan asteroids or simply Trojan asteroids.
L4
Trojan asteroids, Greek camp
L5
Trojan asteroids, Trojan camp
Saturn–Tethys Lagrange points
L4
Telesto
L5
Calypso
Saturn–Dione Lagrange points
L4
Helene
L5
Polydeuces, which follows a "tadpole" orbit around L5
Sun–Uranus Lagrange points
L3
83982 Crantor, which follows a horseshoe orbit around L3
L4
Sun–Neptune Lagrange points
Minor planets at the L4 and L5 Sun–Neptune Lagrangian points are called Neptune trojans, with a lower-case t, as "Trojan asteroid" was originally defined as a term for Lagrangian asteroids of Jupiter.
Data from: Minor Planet Center
L4
385571 Otrera
385695 Clete
L5
See also
Trojan (celestial body)
Co-orbital configuration
Footnotes
Trojans (astronomy)
Space lists |
66166043 | https://en.wikipedia.org/wiki/Proxmox%20Backup%20Server | Proxmox Backup Server | Proxmox Backup Server (short Proxmox BS) is an open-source backup software project supporting virtual machines, containers, and physical hosts. The Bare-metal server is based on the Debian Linux distribution, with some extended features, such as out-of-the-box ZFS support and Linux kernel 5.4 LTS.
Proxmox Backup Server is licensed under the GNU Affero General Public License, version 3.
Technology
Proxmox Backup Server is written mostly in Rust and implements data deduplication to reduce the storage space needed. Data is split into chunks.
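A minimal sketch of the idea in Python (chunk size, hashing, and storage layout are simplified here and are not the server's actual on-disk format): chunks are keyed by their digest, so a chunk shared by many backups is stored only once, and each backup index is just a list of digests.

    import hashlib

    CHUNK_SIZE = 4 * 1024 * 1024   # illustrative fixed-size chunking

    def store_backup(data: bytes, chunk_store: dict) -> list:
        """Split data into chunks keyed by digest; duplicates are stored once."""
        index = []
        for off in range(0, len(data), CHUNK_SIZE):
            chunk = data[off:off + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            chunk_store.setdefault(digest, chunk)   # deduplication happens here
            index.append(digest)                    # a backup is a list of digests
        return index

Because later backups of mostly unchanged data produce mostly identical chunks, each new backup adds only the chunks whose digests are not yet in the store.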
History
Development of Proxmox Backup originally began in October 2018 to provide more efficient backups for the virtualization platform Proxmox Virtual Environment than the integrated vzdump backup tool, which only allows full backups. In July 2020, the first public beta was announced. Its first stable release was announced in November 2020.
Operation
Proxmox Backup uses a client-server model where the server stores the backup data. The client tool works on most modern Linux systems. The software is installed bare-metal with an ISO image, which includes management tools and a web-based GUI. Administrators can manage the system via a Web browser or a command-line interface (CLI). Proxmox Backup Server also provides a REST API for third party tools.
Proxmox Backup Server supports incremental backups, data deduplication, Zstandard compression and authenticated encryption (AE). The first backup is a full backup, and subsequent backups are sent incrementally from the client to the Proxmox Backup Server, where data is deduplicated.
For the Proxmox VE platform, the Proxmox Backup client is tightly integrated; the backup storage is configurable as a storage backend on a Proxmox VE node and supports deduplicated backups of QEMU virtual machines and LXC containers. The platform also leverages QEMU dirty-bitmaps, which allows for fast backups from the Proxmox VE client to the server, as the disk images do not need to be scanned for changes.
Backups can be stored on-premises or synchronized to remote locations with Remotes, and multiple, unrelated hosts can use the same backup server. All client-server traffic is transferred over TLS 1.3 to protect against eavesdropping. To further protect backup data at rest, optional encryption of all backed-up data is available using AES-256 in Galois/Counter Mode. As the backup server cannot access the backup data without the matching encryption keys, it can even be an untrusted host.
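Authenticated encryption with AES-256 in Galois/Counter Mode, the mode named above, can be demonstrated with the widely used Python cryptography package (a generic illustration of the primitive, not Proxmox's actual key-management code):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)   # AES-256 key, held by the client
    aesgcm = AESGCM(key)

    nonce = os.urandom(12)   # 96-bit GCM nonce; must never repeat under one key
    ciphertext = aesgcm.encrypt(nonce, b"backup chunk bytes", None)

    # A server storing only nonce + ciphertext cannot read or undetectably
    # alter the data; decryption and verification need the client-held key.
    assert aesgcm.decrypt(nonce, ciphertext, None) == b"backup chunk bytes"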
Data retention policies can be defined in Proxmox Backup Server. Removing expired data is done in two phases: first, prune removes the indices of backups which are no longer needed, and then a garbage-collection process runs to physically delete the orphaned data chunks.
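The two phases can be sketched as a mark-and-sweep over the structures from the deduplication example above (simplified; the real server adds safeguards that are omitted here): prune deletes only backup indices, and garbage collection then removes the chunks that no surviving index references:

    def prune_and_gc(snapshots: dict, chunk_store: dict, keep: set):
        # Phase 1 (prune): drop the indices of expired snapshots.
        for name in list(snapshots):
            if name not in keep:
                del snapshots[name]
        # Phase 2 (garbage collection): sweep chunks no index references.
        referenced = {d for index in snapshots.values() for d in index}
        for digest in list(chunk_store):
            if digest not in referenced:
                del chunk_store[digest]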
See also
Bacula
Amanda
References
External links
Backup software
Free backup software
Software using the GNU AGPL license |
4177264 | https://en.wikipedia.org/wiki/XMK%20%28operating%20system%29 | XMK (operating system) | The eXtreme Minimal Kernel (XMK) is a real-time operating system (RTOS) that is designed for minimal RAM/ROM use. It achieves this goal, though it is almost entirely written in the C programming language. As a consequence it can be easily ported to any 8-, 16-, or 32-bit microcontroller.
XMK comes as two independent packages: the XMK Scheduler, which contains the core kernel (everything necessary to run a multithreaded embedded application), and the Application Programming Layer (APL), which provides higher-level functions atop the XMK Scheduler API.
The XMK distribution contains no standard libraries such as libc; these are expected to be part of the development tools for target systems.
External links
XMK: eXtreme Minimal Kernel project home page (broken link)
Real-time operating systems
Embedded operating systems |
25907924 | https://en.wikipedia.org/wiki/Primal%20Carnage | Primal Carnage | Primal Carnage is an asymmetrical multiplayer game developed by Lukewarm Media and released by Reverb Publishing. The game pits a group of armed humans against predatory dinosaurs in various combat scenarios. Human gameplay takes the form of a first-person shooter, whilst the dinosaurs are controlled from a third-person perspective. Lukewarm Media, an indie development team, announced the game in February 2010, and eventually released it on October 29, 2012. Primal Carnage received "mixed or average reviews" according to Metacritic.
A prequel game, Primal Carnage: Genesis, was announced in 2013, but was put on hold shortly thereafter due to disagreements within Lukewarm Media. A complete rebuild of the original game was in development as of 2014. Circle 5 Studios took over the series later that year, and eventually published the rebuild as a sequel and paid update in 2015, under the name Primal Carnage: Extinction.
Gameplay
Primal Carnage is an action and online asymmetrical multiplayer game that pits humans against dinosaurs. Both teams have their own set of playable characters, divided into classes. Team members work together, using their own unique abilities to succeed. Gameplay is viewed from a third-person perspective when playing as a dinosaur, while human players experience the game as a first-person shooter. Players on both teams have the option of seeing their fellow teammates through walls. Dinosaurs eat humans to regain health, while humans must reach certain areas to replenish health and ammunition. Dinosaurs can hide in bushes and wait to attack humans, who generally take refuge in a few select, open areas such as a helipad while they defend against the dinosaurs.
Team members
The game has five human characters, with weapons such as shotguns, sniper rifles, and flamethrowers. One character can throw flares to blind nearby dinosaurs, and others can trap the animals in nets or tranquilize them.
The game debuted with five playable dinosaurs, including Tyrannosaurus, Carnotaurus, and Dilophosaurus. Like humans, dinosaurs also have their own abilities as well, activated by roaring. A dinosaur's abilities can be used to aid themselves or their fellow team members. The Tyrannosaurus can consume humans in one bite, and can offer a health bonus to nearby dinosaurs. Because it is the most powerful dinosaur, the number of Tyrannosaurus players is limited in each game.
The Carnotaurus has the ability to charge into humans, injuring them. The Dilophosaurus can blind humans by spitting venom at them. Another dinosaur is the fictional Novaraptor, which has the ability to jump and can pounce on humans. The fifth playable animal is the Pteranodon, a member of the pterosaur group, which is commonly mistaken for dinosaurs. The Pteranodon can fly and swoop down to snatch humans before dropping them to their death. It can also locate humans from above and relay those locations to other dinosaur players.
Players who pre-ordered the game received a feathered raptor as a bonus playable character. Several new creatures were added in 2013, including Spinosaurus, Cryolophosaurus, Oviraptor, and the pterosaur Tupandactylus.
Maps and game modes
The game initially had five maps, including a loading dock. It launched with one game mode: Team Deathmatch. Get to the Chopper, a free downloadable content (DLC) pack, was released in 2013. It consists of a new jungle level set during a storm. Human players race down a linear path to reach a helicopter and escape, while dinosaur players try to stop them. The pack also introduced the Spinosaurus. A new game mode, Capture the Egg, was added later in 2013. It plays similarly to capture the flag, with human players sneaking into dinosaur nesting areas.
Development and release
Primal Carnage was developed by Lukewarm Media, an independent developer consisting of 21 people from around the world. The game was announced in February 2010, and was expected to release for Microsoft Windows and Linux during the fourth quarter of the year. It was originally developed using the Unigine game engine, although the development team switched to Unreal Development Kit (a free version of Unreal Engine 3) later in 2010. This made the release of a Linux version unlikely.
The game moved to open beta testing on October 8, 2012, and received a wide release on October 29, 2012, by Reverb Publishing. It was the first game by Lukewarm Media to be published. The game was available for download through its official website, as well as online retailers such as Steam and GamersGate.
Reception
Primal Carnage received "mixed or average reviews" according to Metacritic. At launch, some players complained of glitches, such as start-up problems and crashing. Critics also noted that Primal Carnage contained only one game mode at launch and believed that it would need more features in the near future to survive. Maxwell McGee, writing for GameSpot, stated that the game "hits on a fun design, but stumbles in execution. A lack of content and some technical issues leave this game feeling like a $15 beta rather than an official release". Mike Sharkey of GameSpy wrote, "What's there is terrific, there just needs to be more of it. Here's hoping it doesn't go extinct before it evolves into something really great". Xav de Matos of Joystiq called it an "entertaining experience" despite missing "some key elements and polish". Lukewarm Media issued updates to correct the technical issues.
Some critics found the team abilities to be adequately balanced, and considered Primal Carnage superior to recent games such as Dino D-Day and Orion: Dino Beatdown. Others found the levels reminiscent of the Jurassic Park films. Leif Johnson of IGN wrote that it "may not be the most visually stunning game around, but it works well with what it has".
Carlos Leiva of Vandal praised the game's concept of humans against dinosaurs, while CD-Action found the idea to be the game's only unique offering. de Matos wrote that "squaring off against a speedy raptor or trying to outmaneuver a giant Tyrannosaurus and somehow making it out alive is thrilling". Johnson found the game design basic and outdated, and was disappointed by the different camera perspective when playing as a dinosaur, writing that while it "allows for some gruesomely satisfying kills, it sometimes interferes with their execution". PC Gamer called it "a Jurassic Park worth visiting once, but not for a long stay".
Other games
In 2013, Lukewarm Media announced plans for a prequel game known as Primal Carnage: Genesis. It would be significantly different from the first installment, playing as a story-driven, single-player game. However, the project was put on hold later in 2013, due to disagreements within the company regarding the project's large scope.
As of 2014, Lukewarm Media was working on a complete rebuild of the original game known as Primal Carnage 2.0, which was to be released as a free update. However, Circle 5 Studios took over the series in 2014, and published the rebuild as a sequel and paid update in 2015, under the title Primal Carnage: Extinction. It was co-developed by Pub Games.
A virtual reality game, Primal Carnage: Onslaught, was released on December 29, 2016, as an Early Access title on Steam. It was developed by Pub Games and published by Circle 5 Studios.
References
External links
Official website
2012 video games
Cancelled Linux games
Dinosaurs in video games
First-person shooters
Indie video games
Unreal Engine games
Video games developed in the United States
Video games set on islands
Windows games
Windows-only games |
35647449 | https://en.wikipedia.org/wiki/Rhett%20Ellison | Rhett Ellison | Rhett Marshall Ellison (born October 3, 1988) is a former American football tight end and fullback who played eight seasons in the National Football League (NFL). He played college football at USC and was drafted by the Minnesota Vikings in the fourth round of the 2012 NFL Draft. He also played for the New York Giants for three seasons.
High school career
Ellison played high school football at Saint Francis High School in Mountain View, California. As a junior in 2005, he made 27 tackles and 4 sacks, plus caught 26 passes for 301 yards (11.6 avg.) with 5 touchdowns. As a senior, he had 49 tackles, 2 sacks and 1 interception on defense and 31 receptions for 394 yards (12.7 avg.) with 2 touchdowns on offense. His 2006 season honors included 2006 Super Prep All-Farwest, Prep Star All-West, Long Beach Press-Telegram Best of the Rest, San Francisco Chronicle All-Metro honorable mention and San Jose Mercury News All-Area first team playing linebacker and tight end.
Both major recruiting rating agencies of the time, Rivals.com and Scout.com, rated Ellison as a three-star (out of five) recruit and in the top 20 among tight ends in his class. He was recruited by several BCS conference programs and, besides USC, received athletic scholarship offers from Arizona State, Cal, Oregon and Virginia Tech. He committed to USC on February 6, 2007, the day before National Signing Day, as part of a recruiting class that ranked in the top two in the country in 2007.
College career
Ellison redshirted the 2007 season, his freshman year with the Trojans.
As a redshirt freshman in 2008, Ellison served as an often-used backup tight end. Overall for the season, while appearing in 9 games (all but Oregon, Arizona State, Washington State and Arizona), he had 4 receptions for 58 yards (14.5 avg.) and a tackle. He started the UCLA and Penn State games at fullback and split time between fullback and tight end in those games. He broke his right foot prior to the Oregon game and missed the next 4 contests while recuperating. In 2009, Ellison appeared in all 13 games, serving as a regularly used backup tight end while also making plays on special teams as a sophomore. For the season, he had 6 catches for 41 yards (6.8 avg.) with 1 touchdown, plus made 4 tackles. As a junior in 2010, Ellison started all season at tight end, hauling in 21 catches for 239 yards (11.4 avg.) with 3 touchdowns, and also made 3 tackles. He made 2010 All-Pac-12 honorable mention.
Equally proficient as a blocker and as a pass catcher, Ellison started at tight end for a second season as a senior in 2011, while also seeing time at fullback. He was a team captain in 2011, and an award, the Rhett Ellison "Machine" Trojan Way Leadership Award, was created to honor his leadership and determination.
Professional career
Ellison was invited to the 2012 NFL Combine as a fullback.
Minnesota Vikings
Ellison was taken in the fourth round of the 2012 NFL Draft, 128th overall, by the Minnesota Vikings. He was not expecting to be drafted, and as such was on a river excursion rather than watching the draft when the Vikings called to let him know he was about to be selected; the surprise was such that he cried with joy. Trojan teammate Matt Kalil, the fourth overall pick, was selected by the Vikings in the first round of the same draft. Ellison is the second player of Māori heritage to play for the Vikings, after David Dixon.
On November 24, 2013, Ellison scored his first career receiving touchdown against the Green Bay Packers. In 2016, he scored the third rushing touchdown by a tight end in franchise history.
New York Giants
On March 10, 2017, Ellison signed a four-year, $18 million contract with the New York Giants. He started the year as the backup tight end to Evan Engram. He also took snaps at fullback. On September 10, 2017, in the Giants' season-opening 19–3 loss to the Dallas Cowboys on NBC Sunday Night Football, Ellison had one reception for nine yards in his Giants debut. In a Week 5 loss to the Tampa Bay Buccaneers, he caught his first touchdown as a Giant. On December 27, 2019, Ellison was placed on injured reserve with a concussion. He finished the season with 18 catches for 167 yards and one touchdown through 10 games.
Retirement
On March 9, 2020, Ellison announced his retirement from football.
Personal life
He is the son of three-time Super Bowl champion Riki Ellison, a former USC and NFL linebacker. Ellison is of partial Māori, specifically Ngāi Tahu, descent and is the grandnephew of the first captain of the All Blacks, Thomas Ellison, who led New Zealand on their first tour to Australia in 1893. He is also related to former New Zealand rugby union players Jacob Ellison and Tamati Ellison. Ellison completed a bachelor's degree in international relations from USC and a master's degree in communication management. Ellison is married to fashion model Raina Hein, runner-up of season 14 of America's Next Top Model, and the couple have two children together.
References
External links
USC Trojans bio
NFL Combine bio
1988 births
Living people
American football fullbacks
American people of New Zealand descent
Minnesota Vikings players
New York Giants players
Ngāi Tahu
New Zealand players of American football
People from Portola Valley, California
Players of American football from California
Sportspeople from the San Francisco Bay Area
USC School of International Relations alumni
USC Trojans football players
Ellison family |
31592197 | https://en.wikipedia.org/wiki/Anti-keylogger | Anti-keylogger | An anti-keylogger (or anti–keystroke logger) is a type of software specifically designed for the detection of keystroke logger software; often, such software will also incorporate the ability to delete or at least immobilize hidden keystroke logger software on a computer. In comparison to most anti-virus or anti-spyware software, the primary difference is that an anti-keylogger does not make a distinction between a legitimate keystroke-logging program and an illegitimate keystroke-logging program (such as malware); all keystroke-logging programs are flagged and optionally removed, whether they appear to be legitimate keystroke-logging software or not. The anti-keylogger is efficient in managing malicious users. It can detect the keyloggers and terminate them from the system.
Use of anti-keyloggers
Keyloggers are sometimes part of malware packages downloaded onto computers without the owners' knowledge. Detecting the presence of a keylogger on a computer can be difficult. So-called anti-keylogging programs have been developed to thwart keylogging systems, and these are often effective when used properly.
Anti-keyloggers are used both by large organizations and by individuals in order to scan for and remove (or in some cases simply immobilize) keystroke-logging software on a computer. Software developers generally advise that anti-keylogging scans be run on a regular basis in order to reduce the amount of time during which a keylogger may record keystrokes. For example, if a system is scanned once every three days, there is a maximum of only three days during which a keylogger could be hidden on the system and recording keystrokes.
Public computers
Public computers are extremely susceptible to the installation of keystroke-logging software and hardware, and there are documented instances of this occurring. Because any number of people can gain access to such a machine, both a hardware keylogger and a software keylogger can be secretly installed in a matter of minutes. Anti-keyloggers are often run daily to ensure that public computers are not infected with keyloggers and are safe for public use.
Gaming usage
Keyloggers have been prevalent in the online gaming industry, where they are used to secretly record a gamer's access credentials (user name and password) during account login; this information is sent back to the hacker, who can later sign in to the account and change the password, thus stealing it.
World of Warcraft has been of particular interest to game hackers and has been the target of numerous keylogging viruses. Anti-keyloggers are used by many members of the World of Warcraft and other gaming communities in an effort to keep their gaming accounts secure.
Financial institutions
Financial institutions have become the target of keyloggers, particularly those institutions which do not use advanced security features such as PIN pads or screen keyboards. Anti-keyloggers are used to run regular scans of any computer on which banking or client information is accessed, protecting passwords, banking information, and credit card numbers from identity thieves.
Personal use
The most common use of an anti-keylogger is by individuals wishing to protect their privacy while using their computer; uses range from protecting financial information used in online banking to passwords, personal communication, and virtually any other information which may be typed into a computer. Keyloggers are often installed by people known to the computer's owner, and in many cases have been installed by an ex-partner hoping to spy on their former partner's activities, particularly chat.
Types
Signature-based
This type of software maintains a signature base: strategic information that uniquely identifies each keylogger, with the list covering as many known keyloggers as possible. Some vendors make an effort to keep an up-to-date listing available for download by customers. Each time a 'System Scan' is run, the software compares the contents of the hard disk drive, item by item, against the list, looking for any matches.
This type of software is rather widespread, but it has its own drawbacks. The biggest drawback of signature-based anti-keyloggers is that they only protect against keyloggers on the signature-base list, leaving the system vulnerable to unknown or unrecognized keyloggers. A criminal can download one of many well-known keyloggers, change it just enough, and the anti-keylogger will not recognize it.
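A minimal sketch of this matching process, in Python, may make the approach concrete. It is illustrative only: the signature entries below are hypothetical placeholders, and real products match richer signatures (byte patterns, strings, behavior metadata) rather than simple whole-file hashes.

```python
import hashlib
from pathlib import Path

# Hypothetical signature base: digests of known keylogger binaries.
# The entries are placeholders, not real malware signatures.
KNOWN_KEYLOGGER_HASHES = {
    "9f2b6d41c0e8placeholder": "ExampleLogger 1.2",
    "4ac81f07bd93placeholder": "SpyKeys 0.9",
}

def scan_directory(root: str) -> list[tuple[Path, str]]:
    """Compare every file under `root`, item by item, against the list."""
    matches = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
        except OSError:
            continue  # unreadable file; a real scanner would log this
        if digest in KNOWN_KEYLOGGER_HASHES:
            matches.append((path, KNOWN_KEYLOGGER_HASHES[digest]))
    return matches

for path, name in scan_directory("/tmp/demo"):
    print(f"Flagged {path}: matches signature for {name}")
```

The sketch also shows the weakness described above: because a match requires an exact signature, even a trivially modified keylogger produces a different digest and slips past the scan.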
Heuristic analysis
This software does not use signature bases; instead, it uses a checklist of known features, attributes, and methods that keyloggers are known to use.
It analyzes the working methods of all the modules on a PC, blocking the activity of any module that behaves like a keylogger. Though this method gives better keylogging protection than signature-based anti-keyloggers, it has its own drawbacks. One of them is that this type of software also blocks non-keyloggers. Several 'non-harmful' software modules, either part of the operating system or part of legitimate apps, use processes which keyloggers also use, which can trigger a false positive. Usually all the non-signature-based anti-keyloggers give the user the option to unblock selected modules, but this can cause difficulties for inexperienced users who are unable to discern good modules from bad modules when manually choosing to block or unblock.
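The scoring idea behind heuristic analysis can be sketched as follows. This is again a simplified, hypothetical illustration in Python; the traits, weights, and threshold are assumptions, not any vendor's actual checklist.

```python
# Hypothetical behaviour checklist: traits a module may exhibit, each
# weighted by how strongly it suggests keylogging.
SUSPICIOUS_TRAITS = {
    "installs_keyboard_hook": 5,   # e.g. registers a global keystroke hook
    "writes_hidden_log_file": 3,
    "starts_at_boot": 1,
    "sends_network_traffic": 2,
}
BLOCK_THRESHOLD = 6

def assess(module: str, observed: set[str], whitelist: set[str]) -> str:
    """Score a module against the checklist and decide whether to block it."""
    if module in whitelist:
        return "allowed (whitelisted by user)"
    score = sum(SUSPICIOUS_TRAITS.get(trait, 0) for trait in observed)
    return "blocked" if score >= BLOCK_THRESHOLD else "allowed"

# A legitimate macro utility may hook the keyboard too, producing a
# false positive that the user must resolve by unblocking (whitelisting).
traits = {"installs_keyboard_hook", "starts_at_boot"}
print(assess("macro_tool.exe", traits, set()))               # blocked
print(assess("macro_tool.exe", traits, {"macro_tool.exe"}))  # allowed (whitelisted by user)
```

The false positive in the example, and the manual unblocking needed to resolve it, correspond to the drawbacks described above.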
See also
Keystroke logger
Hardware keylogger
References
Computer security software
Surveillance |
66375280 | https://en.wikipedia.org/wiki/Twitch%20Sings | Twitch Sings | Twitch Sings was a free-to-play karaoke video game developed by Harmonix and published by live streaming service Twitch. It was released on April 13, 2019 for Microsoft Windows and macOS.
Twitch Sings' servers closed on January 1, 2021. Twitch stated that they made the decision to close the game to "invest in broader tools and music services."
Development
In October 2018, during TwitchCon, Twitch announced that it had developed a karaoke video game in collaboration with Harmonix. During the opening keynote, Twitch CEO Emmett Shear stated, "We believe in a new category of game that’s made to be streamed, where the audience isn’t just nice to have, they’re crucial to the experience, where the driver’s seat is big enough for your whole community. So we teamed up with Harmonix and built a game ourselves." Attendees at the convention were among the first to test out the game. An open beta was launched later that month.
Twitch Sings was released to the public on April 13, 2019 for Microsoft Windows and macOS. Mobile versions for iOS and Android were originally planned, but were never released. Over 2,000 songs were available at launch.
On September 4, 2020, Twitch announced that they would be closing Twitch Sings by 2021. The platform removed videos and clips relating to the game on December 1, 2020, citing contractual obligations. Twitch Sings servers were fully shut down on January 1, 2021.
Gameplay
Twitch Sings featured both single-player and multiplayer game modes. The main objective of the game was to sing a song as accurately as possible. A pitch meter helped players stay on key. Players could sing solo songs live or create a duet with a fellow creator. In order to perform a duet, a player would record one half of a song and send a video of the performance to another player, who would sing the second half. The game's software then merged the two videos into one song. In a later update, Twitch added the ability to sing duets in real time through a new party mode feature. Broadcasters were able to stream directly from the game. This allowed stream audiences to interact in various ways, such as voting on singing challenges for the broadcaster to attempt, along with sending in virtual ovations if they enjoyed the performance.
Players were able to adjust their game experience in a variety of ways. They could either use their webcam or a customizable avatar to represent themselves once in game. A number of different voice effects and world maps were also offered to players.
References
2019 video games
Karaoke video games
Harmonix games
Twitch (service)
Video games developed in the United States
Windows games
MacOS games
Music video games
Products and services discontinued in 2021 |
87577 | https://en.wikipedia.org/wiki/Saab%20JAS%2039%20Gripen | Saab JAS 39 Gripen | The Saab JAS 39 Gripen (; English: griffin) is a light single-engine multirole fighter aircraft manufactured by the Swedish aerospace and defense company Saab AB. The Gripen has a delta wing and canard configuration with relaxed stability design and fly-by-wire flight controls. Later aircraft are fully NATO interoperable. As of 2020, more than 271 Gripens of all models, A–F, have been built.
In 1979, the Swedish government began development studies for an aircraft capable of fighter, attack, and reconnaissance missions to replace the Saab 35 Draken and 37 Viggen in the Swedish Air Force. A new design from Saab was selected and developed as the JAS 39. The first flight occurred in 1988, with delivery of the first serial production airplane in 1993. It entered service with the Swedish Air Force in 1996. Upgraded variants, featuring more advanced avionics and adaptations for longer mission times, began entering service in 2003.
To market the aircraft internationally, Saab formed partnerships and collaborative efforts with overseas aerospace companies. On the export market, early models of the Gripen achieved moderate success, with sales to nations in Central Europe, South Africa, and Southeast Asia. Bribery was suspected in some of these procurements, but authorities closed the investigation in 2009.
A major redesign of the Gripen series, designated JAS 39E/F Gripen but previously referred to as Gripen NG or Super JAS, began deliveries to the Swedish and Brazilian Air Forces in 2019. The changes from the C-series to E-series include a larger body, a more powerful engine, an increased weapons payload capability, and new cockpit, avionics architecture, and electronic warfare system.
Development
Origins
In the late 1970s, Sweden sought to replace its aging Saab 35 Draken and Saab 37 Viggen. The Swedish Air Force required an affordable Mach 2 aircraft with good short-field performance for a defensive dispersed basing plan in the event of invasion; the plan included 800 m long by 17 m wide rudimentary runways that were part of the Bas 90 system. One goal was for the aircraft to be smaller than the Viggen while equalling or improving on its payload-range characteristics. Early proposals included the Saab 38, also called B3LA, intended as an attack aircraft and trainer, and the A 20, a development of the Viggen that would have capabilities as a fighter, attack and sea reconnaissance aircraft. Several foreign designs were also studied, including the General Dynamics F-16 Fighting Falcon, the McDonnell Douglas F/A-18 Hornet, the Northrop F-20 Tigershark and the Dassault Mirage 2000. Ultimately, the Swedish government opted for a new fighter to be developed by Saab.
In 1979, the government began a study calling for a versatile platform capable of "JAS", standing for Jakt (air-to-air), Attack (air-to-surface), and Spaning (reconnaissance), indicating a multirole, or swingrole, fighter aircraft that can fulfill multiple roles during the same mission. Several Saab designs were reviewed, the most promising being "Project 2105" (redesignated "Project 2108" and, later, "Project 2110"), recommended to the government by the Defence Materiel Administration (Försvarets Materielverk, or FMV). In 1980, Industrigruppen JAS (IG JAS, "JAS Industry Group") was established as a joint venture by Saab-Scania, LM Ericsson, Svenska Radioaktiebolaget, Volvo Flygmotor and Försvarets Fabriksverk, the industrial arm of the Swedish armed forces.
The preferred aircraft was a single-engine, lightweight single-seater, embracing fly-by-wire technology, canards, and an aerodynamically unstable design. The powerplant selected was the Volvo-Flygmotor RM12, a licence-built derivative of the General Electric F404−400; engine development priorities were weight reduction and lowering component count. On 30 June 1982, with approval from the Riksdag, the FMV issued contracts worth SEK 25.7 billion to Saab, covering five prototypes and an initial batch of 30 production aircraft. By January 1983, a Viggen was converted to a flying test aircraft for the JAS 39's intended avionics, such as the fly-by-wire controls. Via a public competition, the JAS 39 received the name Gripen (griffin), the heraldic animal on Saab's logo.
Testing, production, and improvements
Saab rolled out the first Gripen on 26 April 1987, marking the company's 50th anniversary. The first flight, originally planned for 1987, was delayed by 18 months due to issues with the flight control system. On 9 December 1988, the first prototype (serial number 39-1) took its 51-minute maiden flight with pilot Stig Holmström at the controls. During the test programme, concern surfaced about the aircraft's avionics, specifically the fly-by-wire flight control system (FCS) and the relaxed stability design. On 2 February 1989, this issue led to the crash of the prototype during an attempted landing at Linköping; the test pilot Lars Rådeström walked away with a broken elbow. The cause of the crash was identified as pilot-induced oscillation, caused by problems with the FCS's pitch-control routine.
In response to the crash Saab and US firm Calspan introduced software modifications to the aircraft. A modified Lockheed NT-33A was used to test these improvements, which allowed flight testing to resume 15 months after the accident. On 8 August 1993, production aircraft 39102 was destroyed in an accident during an aerial display in Stockholm. Test pilot Rådeström lost control of the aircraft during a roll at low altitude when the aircraft stalled, forcing him to eject. Saab later found the problem was high amplification of the pilot's quick and significant stick command inputs. The ensuing investigation and flaw correction delayed test flying by several months, resuming in December 1993.
The first order included an option for another 110, which was exercised in June 1992. Batch II consisted of 96 one-seat JAS 39As and 14 two-seat JAS 39Bs. The JAS 39B variant is 66 cm (26 in) longer than the JAS 39A to accommodate a second seat, which also necessitated the deletion of the cannon and a reduced internal fuel capacity. By April 1994, five prototypes and two series-production Gripens had been completed, but a beyond-visual-range (BVR) missile had not yet been selected. A third batch was ordered in June 1997, composed of 50 upgraded single-seat JAS 39Cs and 14 JAS 39D two-seaters, known as 'Turbo Gripen', with NATO compatibility for exports. Batch III aircraft, delivered between 2002 and 2008, possess more powerful and updated avionics, in-flight refuelling capability via retractable probes on the aircraft's starboard side, and an On-Board Oxygen-generating system (OBOGS) for longer duration missions. In-flight refuelling was tested via a specially equipped prototype (39‐4) used in successful trials with a Royal Air Force VC10 in 1998.
Teaming agreements
During the 1995 Paris Air Show, Saab Military Aircraft and British Aerospace (BAe, now BAE Systems) announced the formation of the joint-venture company Saab-BAe Gripen AB, with the goal of adapting, manufacturing, marketing and supporting the Gripen worldwide. The deal involved the conversion of the A and B series aircraft to the "export" C and D series, which developed the Gripen for compatibility with NATO standards. This co-operation was extended in 2001 with the formation of Gripen International to promote export sales. In December 2004, Saab and BAE Systems announced that BAE was to sell a large portion of its stake in Saab, and that Saab would take full responsibility for marketing and export orders of the Gripen. In June 2011, Saab announced that an internal investigation revealed evidence of acts of corruption by BAE Systems, including money laundering, in South Africa, one of the Gripen's customers.
On 26 April 2007, Norway signed a NOK150 million joint-development agreement with Saab to co-operate in the development programme of the Gripen, including the integration of Norwegian industries in the development of future versions of the aircraft. In June of the same year, Saab also entered an agreement with Thales Norway A/S concerning the development of communications systems for the Gripen fighter. This order was the first awarded under the provisions of the Letter of Agreement signed by the Norwegian Ministry of Defence and Gripen International in April 2007. As a result of the United States diplomatic cables leak in 2010, it was revealed that US diplomats had become concerned with co-operation between Norway and Sweden on the topic of the Gripen, and had sought to exert pressure against a Norwegian purchase of the aircraft.
In December 2007, as part of Gripen International's marketing efforts in Denmark, a deal was signed with Danish technology supplier Terma A/S that let them participate in an Industrial Co-operation programme over the next 10–15 years. The total value of the programme was estimated at over DKK10 billion, and was partly dependent on a procurement of the Gripen by Denmark. Subsequently, Denmark elected to procure the F-35 Joint Strike Fighter.
Controversies, scandals, and costs
Developing an advanced multi-role fighter was a major undertaking for Sweden. The predecessor Viggen, despite being less advanced and less expensive, had been criticized for occupying too much of Sweden's military budget and was branded "a cuckoo in the military nest" by critics as early as 1971. At the 1972 party congress of the Social Democrats, the dominant party in Swedish politics since the 1950s, a motion was passed to stop any future projects to develop advanced military aircraft. In 1982, the Gripen project passed in the Riksdag by a margin of 176 for and 167 against, with the entire Social Democratic party voting against the proposal due to demands for more studies. A new bill was introduced in 1983 and a final approval was given in April 1983 with the condition that the project was to have a predetermined fixed-price contract, a decision that would later be criticized as unrealistic due to later cost overruns.
According to Annika Brändström, in the aftermath of the 1989 and 1993 crashes, the Gripen risked a loss of credibility and the weakening of its public image. There was public speculation that failures to address technical problems exposed in the first crash had directly contributed to the second crash and thus had been avoidable. Brändström observed that media elements had called for greater public accountability and explanation of the project; ill-informed media analysis had also distorted public knowledge of the Gripen. The sitting Conservative government quickly endorsed and supported the Gripen – Minister of Defense Anders Björck issued a public reassurance that the project was very positive for Sweden. In connection to the Gripen's marketing efforts to multiple countries, including South Africa, Austria, the Czech Republic and Hungary, there were reports of widespread bribery and corruption by BAE Systems and Saab. In 2007, Swedish journalists reported that BAE had paid bribes equivalent to millions of dollars. Following criminal investigations in eight countries, only one individual in Austria, Alfons Mensdorf-Pouilly, was prosecuted for bribery. The scandal tarnished the international reputation of the Gripen, BAE Systems, Saab, and Sweden.
The Gripen's cost has been subject to frequent attention and speculation. In 2008, Saab announced reduced earnings for that year, partly attributing this to increased marketing costs for the aircraft. In 2008, Saab disputed Norway's cost calculations for the Gripen NG as overestimated and in excess of real world performance with existing operators. A 2007 report by the European Union Institute for Security Studies stated the total research and development costs of Gripen were €1.84 billion. According to a study by Jane's Information Group in 2012, the Gripen's operational cost was the lowest among several modern fighters; it was estimated at $4,700 per flight hour. The Swedish Ministry of Defense estimated the cost of the full system, comprising 60 Gripen E/F, at SEK 90 billion distributed over the period 2013–42. The Swedish Armed Forces estimated that maintaining 100 C/D-model aircraft until 2042 would cost SEK 60 billion, while buying aircraft from a foreign supplier would cost SEK 110 billion.
JAS 39E/F and other developments
A two-seat aircraft, designated "Gripen Demo", was ordered in 2007 as a testbed for various upgrades. It was powered by the General Electric F414G, a development of the Boeing F/A-18E/F Super Hornet's engine. The Gripen NG's maximum takeoff weight was increased from 14,000 to 16,000 kg (30,900–35,300 lb), and internal fuel capacity was increased by 40 per cent by relocating the undercarriage, which also allowed two additional hardpoints to be added on the fuselage underside. Its combat radius when carrying six AAMs and drop tanks was also increased. The PS-05/A radar is replaced by the new Raven ES-05 active electronically scanned array (AESA) radar, which is based on the Vixen AESA radar family from Selex ES (since 2016 Finmeccanica, then Leonardo S.p.A.). The Gripen Demo's maiden flight was conducted on 27 May 2008. On 21 January 2009, the Gripen Demo flew at Mach 1.2 without reheat to test its supercruise capability. The Gripen Demo served as a basis for the Gripen E/F, also referred to as the Gripen NG (Next Generation) and MS (Mission System) 21.
Saab studied a variant of the Gripen capable of operating from aircraft carriers in the 1990s. In 2009, it launched the Sea Gripen project in response to India's request for information on a carrier-based aircraft. Brazil may also require new carrier aircraft. Following a meeting with Ministry of Defence (MoD) officials in May 2011, Saab agreed to establish a development center in the UK to expand on the Sea Gripen concept. In 2013, Saab's Lennart Sindahl stated that development of an optionally manned Gripen E capable of flying unmanned operations was being explored by the firm; further development of optionally manned and carrier versions would require customer commitment. On 6 November 2014, the Brazilian Navy expressed interest in a carrier-based Gripen.
In 2010, Sweden awarded Saab a four-year contract to improve the Gripen's radar and other equipment, integrate new weapons, and lower its operating costs. In June 2010, Saab stated that Sweden planned to order the Gripen NG, designated JAS 39E/F, which was to enter service in 2017 or earlier depending on export orders. On 25 August 2012, following Switzerland's intention to buy 22 of the E/F variants, Sweden announced it planned to buy 40–60 Gripen E/Fs. The Swedish government decided to purchase 60 Gripen Es on 17 January 2013. Subsequent to a national referendum in 2014, Switzerland decided not to procure replacement fighters and postponed its procurement process.
In July 2013, assembly began on the first pre-production Gripen E. Originally 60 JAS 39Cs were to be retrofitted as JAS 39Es by 2023, but this was revised so that Gripen Es would have new-built airframes and some reused parts from JAS 39Cs. The first production aircraft was to be delivered in 2018. In March 2014, Saab revealed the detailed design and indicated plans to receive military type certification in 2018. The first Gripen E was rolled out on 18 May 2016. Saab delayed the first flight from 2016 to 2017 to focus on civilian-grade software certification; high-speed taxi tests began in December 2016. In September 2015, Saab Aeronautics head Lennart Sindahl announced that an electronic warfare version of the Gripen F two-seater was under development. On 15 June 2017, Saab completed the Gripen E's first flight. The Gripen E subsequently attained supersonic flight and was to commence load tests. The development flight-test programme with pre-production Gripen Es continued after internal deliveries to both the Swedish and Brazilian air forces. On 24 November 2021, Saab announced that the first six Gripen E fighters were ready to be delivered to the Swedish and Brazilian air forces.
Design
Overview
The Gripen is a multirole fighter aircraft, intended as a lightweight and agile aerial platform with advanced, highly adaptable avionics. It has canard control surfaces that contribute a positive lift force at all speeds, and generous lift from the delta wing; a conventional rear stabiliser, by contrast, produces negative lift at high speeds, increasing induced drag. Being intentionally unstable and employing digital fly-by-wire flight controls to maintain stability removes many flight restrictions, improves manoeuvrability, and reduces drag. The Gripen also has good short takeoff performance, being able to maintain a high sink rate and being strengthened to withstand the stresses of short landings. A pair of air brakes are located on the sides of the rear fuselage; the canards also angle downward to act as air brakes and decrease landing distance. It is capable of flying at a 70–80 degree angle of attack.
To enable the Gripen to have a long service life, roughly 50 years, Saab designed it to have low maintenance requirements. Major systems such as the RM12 engine and PS-05/A radar are modular to reduce operating cost and increase reliability. The Gripen was designed to be flexible, so that newly developed sensors, computers, and armaments could be integrated as technology advances. The aircraft was estimated to be roughly 67% sourced from Swedish or European suppliers and 33% from the US.
One key aspect of the Gripen programme that Saab have been keen to emphasize has been technology-transfer agreements and industrial partnerships with export customers. The Gripen is typically customized to customer requirements, enabling the routine inclusion of local suppliers in the manufacturing and support processes. A number of South African firms provide components and systems – including the communications suite and electronic warfare systems – for the Gripens operated by the South African Air Force. Operators also have access to the Gripen's source code and technical documentation, allowing for upgrades and new equipment to be independently integrated. Some export customers intend to domestically assemble the Gripen; it has been proposed that Brazilian aerospace manufacturer Embraer may produce Gripens for other export customers as well.
Avionics and sensors
All of the Gripen's avionics are fully integrated using five MIL-STD-1553B digital data buses, in what is described as "sensor fusion". The total integration of the avionics makes the Gripen a "programmable" aircraft, allowing software updates to be introduced over time to increase performance and allow for additional operational roles and equipment. The Ada programming language was adopted for the Gripen, and is used for the primary flight controls on the final prototypes from 1996 onwards and all subsequent production aircraft. The Gripen's software is continuously being improved to add new capabilities, in contrast to the preceding Viggen, which was updated only on an 18-month cycle.
Much of the data generated from the onboard sensors and by cockpit activity is digitally recorded throughout the length of an entire mission. This information can be replayed in the cockpit or easily extracted for detailed post-mission analysis using a data transfer unit that can also be used to insert mission data to the aircraft. The Gripen, like the Viggen, was designed to operate as one component of a networked national defence system, which allows for automatic exchange of information in real-time between Gripen aircraft and ground facilities. According to Saab, the Gripen features "the world's most highly developed data link". The Gripen's Ternav tactical navigation system combines information from multiple onboard systems such as the air data computer, radar altimeter, and GPS to continuously calculate the Gripen's location.
The Gripen entered service using the PS-05/A pulse-Doppler X band multi-mode radar, developed by Ericsson and GEC-Marconi, which is based on the latter's advanced Blue Vixen radar for the Sea Harrier that also served as the basis for the Eurofighter's CAPTOR radar. The all-weather radar is capable of locating and identifying targets 120 km (74 mi) away, and automatically tracking multiple targets in the upper and lower spheres, on the ground and sea or in the air. It can guide several beyond visual range air-to-air missiles to multiple targets simultaneously. Saab stated the PS-05/A is able to handle all types of air defence, air-to-surface, and reconnaissance missions, and is developing a Mark 4 upgrade to it. The Mark 4 version has a 150% increase in high-altitude air-to-air detection ranges, detection and tracking of smaller targets at current ranges, 140% improvement in air-to-air mode at low altitude, and full integration of modern weapons such as the AIM-120C-7 AMRAAM, AIM-9X Sidewinder, and MBDA Meteor missiles.
The future Gripen E/F will use a new Active Electronically Scanned Array (AESA) radar, Raven ES-05, based on the Vixen AESA radar family from Selex ES. Among other improvements, the new radar is to be capable of scanning over a greatly increased field of view and improved range. In addition, the new Gripen integrates the Skyward-G Infra-red search and track (IRST) sensor, which is capable of passively detecting thermal emissions from air and ground targets in the aircraft's vicinity. The sensors of the Gripen E are claimed to be able to detect low radar cross-section (RCS) targets at beyond visual range. Targets are tracked by a "best sensor dominates" system, either by onboard sensors or through the Transmitter Auxiliary Unit (TAU) data link function of the radar.
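The "best sensor dominates" principle can be illustrated with a brief, purely hypothetical sketch in Python; the sensor names, quality scores, and selection logic below are illustrative assumptions, not Saab's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Track:
    sensor: str                            # e.g. "radar", "irst", "datalink"
    position: tuple[float, float, float]   # target position estimate, common frame
    quality: float                         # hypothetical track-quality score, 0..1

def fuse(tracks: list[Track]) -> Track:
    """'Best sensor dominates': keep the highest-quality estimate of the target."""
    return max(tracks, key=lambda t: t.quality)

# One target, reported by three sources; the passive IRST track is best here.
target_tracks = [
    Track("radar", (41.2, 7.9, 10.1), 0.72),
    Track("irst", (41.5, 8.1, 10.0), 0.88),
    Track("datalink", (40.8, 8.0, 10.3), 0.55),  # track shared by another aircraft
]
best = fuse(target_tracks)
print(f"Dominant sensor: {best.sensor}, position estimate: {best.position}")
```

In this simplified form, whichever source currently reports the most reliable track of a target, whether an onboard sensor or a track received over the data link, supplies the estimate that is used.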
Cockpit
The primary flight controls are compatible with the Hands On Throttle-And-Stick (HOTAS) control principle – the centrally mounted stick, in addition to flying the aircraft, also controls the cockpit displays and weapon systems. A triplex, digital fly-by-wire system is employed on the Gripen's flight controls, with a mechanical backup for the throttle. Additional functions, such as communications, navigational and decision support data, can be accessed via the Up Front Control Panel, directly above the central cockpit display. The Gripen includes the EP-17 cockpit display system, developed by Saab to provide pilots with a high level of situational awareness and reduce pilot workload through intelligent information management. The Gripen features a sensor fusion capability: information from onboard sensors and databases is combined and automatically analysed, and useful data is presented to the pilot via a wide field-of-view Head-Up Display, three large multi-function colour displays, and optionally a Helmet Mounted Display System (HMDS).
Of the three Multi-Function Displays (MFD), the central display is for navigational and mission data, the display to the left of the center shows aircraft status and electronic warfare information, and the display to the right of the center has sensory and fire control information. In two-seat variants, the rear seat's displays can be operated independently of the pilot's own display arrangement in the forward seat; Saab has promoted this capability as being useful during electronic warfare and reconnaissance missions, and while carrying out command and control activities. In May 2010, Sweden began equipping their Gripens with additional onboard computer systems and new displays. The MFDs are interchangeable and designed for redundancy in the event of failure; flight information can be presented on any of the displays.
Saab and BAE developed the Cobra HMDS for use in the Gripen, based on the Striker HMDS used on the Eurofighter. By 2008, the Cobra HMDS was fully integrated on operational aircraft, and is available as an option for export customers; it has been retrofitted into older Swedish and South African Gripens. The HMDS provides control and information on target cueing, sensor data, and flight parameters, and is optionally equipped for night time operations and with chemical/biological filtration. All connections between the HMDS and the cockpit were designed for rapid detachment, for safe use of the ejection system.
Engine
All in-service Gripens as of January 2014 are powered by the Volvo RM12 turbofan engine (Volvo Aero is now GKN Aerospace Engine Systems), a licence-manufactured derivative of the General Electric F404, fed by a Y-duct with splitter plates; changes include increased performance and improved reliability to meet single-engine use safety criteria, as well as greater resistance to bird strike incidents. Several subsystems and components were also redesigned to reduce maintenance demands. By November 2010, the Gripen had accumulated over 143,000 flight hours without a single engine-related failure or incident; Rune Hyrefeldt, head of Military Program management at Volvo Aero, stated: "I think this must be a hard record to beat for a single-engine application".
The JAS 39E and F variants under development are to adopt the F414G powerplant, a variant of the General Electric F414. The F414G can produce 20% greater thrust than the current RM12 engine, enabling the Gripen to supercruise (fly at supersonic speed without the use of afterburners) at a speed of Mach 1.1 while carrying an air-to-air combat payload. In 2010, Volvo Aero stated it was capable of further developing its RM12 engine to better match the performance of the F414G, and claimed that developing the RM12 would be a less expensive option. Prior to Saab's selection of the F414G, the Eurojet EJ200 had also been under consideration for the Gripen; proposed implementations included the use of thrust vectoring.
Equipment and armaments
The Gripen is compatible with a number of different armaments, beyond the aircraft's single 27 mm Mauser BK-27 cannon (omitted on the two-seat variants), including air-to-air missiles such as the AIM-9 Sidewinder, air-to-ground missiles such as the AGM-65 Maverick, and anti-ship missiles such as the RBS-15. In 2010, the Swedish Air Force's Gripen fleet completed the MS19 upgrade process, enabling compatibility with a range of weapons, including the long-range MBDA Meteor missile, the short-range IRIS-T missile and the GBU-49 laser-guided bomb. Speaking on the Gripen's selection of armaments, Saab's campaign director for India Edvard de la Motte stated that: "If you buy Gripen, select where you want your weapons from. Israel, Sweden, Europe, US… South America. It's up to the customer".
In flight, the Gripen typically carries a substantial load of assorted armaments and equipment. Equipment includes external sensor pods for reconnaissance and target designation, such as Rafael's LITENING targeting pod, Saab's Modular Reconnaissance Pod System, or Thales' Digital Joint Reconnaissance Pod. The Gripen has an advanced and integrated electronic warfare suite, capable of operating in an undetectable passive mode or of actively jamming hostile radar; a missile approach warning system passively detects and tracks incoming missiles. In November 2013, it was announced that Saab would be the first to offer the BriteCloud expendable active jammer developed by Selex ES. In June 2014, the Enhanced Survivability Technology Modular Self Protection Pod, a defensive missile countermeasure pod, performed its first flight on the Gripen.
Saab describes the Gripen as a "swing-role aircraft", stating that it is capable of "instantly switching between roles at the push of a button". The human/machine interface changes when switching between roles, being optimized by the computer in response to new situations and threats. The Gripen is also equipped to use a number of different communications standards and systems, including SATURN secure radio, Link-16, ROVER, and satellite uplinks. Equipment for performing long range missions, such as an air-to-air refuelling probe and Onboard Oxygen Generation System (OBOGS), was integrated on the Gripen C/D.
Usability and maintenance
During the Cold War, the Swedish Armed Forces were to be ready to defend against a possible invasion. This scenario required defensive force dispersal of combat aircraft in the Bas 90 system to maintain an air defence capacity. Thus, a key design goal during the Gripen's development was the ability to take off from snow-covered landing strips of only 800 m; another was a short turnaround time of just ten minutes, during which a team composed of one technician and five conscripts would be able to re-arm, refuel, and perform basic inspections and servicing before returning the aircraft to flight for air-to-air missions. For air-to-ground missions, the turnaround time with the same team is slightly longer, at twenty minutes.
During the design process, great priority was placed on facilitating and minimizing aircraft maintenance; in addition to a maintenance-friendly layout, many subsystems and components require little or no maintenance at all. Aircraft are fitted with a Health and Usage Monitoring System (HUMS) that monitors the performance of various systems, and provides information to technicians to assist in servicing it. Saab operates a continuous improvement programme; information from the HUMS and other systems can be submitted for analysis. According to Saab, the Gripen provides "50% lower operating costs than its best competitor".
A 2012 Jane's Aerospace and Defense Consulting study compared the operational costs of a number of modern combat aircraft, concluding that Gripen had the lowest cost per flight hour (CPFH) when fuel used, pre-flight preparation and repair, and scheduled airfield-level maintenance together with associated personnel costs were combined. The Gripen had an estimated CPFH of US$4,700 whereas the next lowest, the F-16 Block 40/50, had a 49% higher CPFH at $7,000.
Operational history
Sweden
The Swedish Air Force placed a total order for 204 Gripens in three batches. The first delivery occurred on 8 June 1993, when 39102 was handed over to the Flygvapnet during a ceremony at Linköping; the last of the first batch was handed over on 13 December 1996. The Air Force received its first Batch II example on 19 December 1996. Instead of the fixed-price agreement of Batch I, Batch II aircraft were paid for under a "target price" concept: any cost underruns or overruns would be split between FMV and Saab.
The JAS 39 entered service with the Skaraborg Wing (F 7) on 1 November 1997. The final Batch III aircraft was delivered to FMV on 26 November 2008. This was accomplished at 10% less than the agreed-upon price for the batch, putting the JAS 39C flyaway cost at under US$30 million. This batch of Gripens was equipped for in-flight refuelling from specially equipped TP84s. In 2007, a programme was started to upgrade 31 of the air force's JAS 39A/B fighters to JAS 39C/Ds. The SwAF had a combined 134 JAS 39s in service in January 2013. In March 2015, the Swedish Air Force received its final JAS 39C.
On 29 March 2011, the Swedish parliament approved the Swedish Air Force for a 3-month deployment to support the UN-mandated no-fly zone over Libya. Deployment of eight Gripens, ten pilots, and other personnel began on 2 April. On 8 June 2011, the Swedish government announced an agreement to extend the deployment for five of the Gripens. By October 2011, Gripens had flown more than 650 combat missions, almost 2,000 flight hours, and delivered approximately 2,000 reconnaissance reports to NATO. Journalist Tim Hepher suggested that the Libyan operations might stimulate sales of the Gripen and other aircraft.
In November 2012, Lieutenant Colonel Lars Helmrich of the Swedish Air Force testified to the Riksdag regarding the Gripen E. He stated that the current version of the Gripen would be outdated in air-to-air combat by 2020. With 60 Gripens judged to be the minimum required to defend Swedish airspace, the Swedish Air Force wanted 60–80 Gripens upgraded to the E/F standard by 2020.
On 25 August 2012, the Swedish government announced that 40–60 JAS 39E/F Gripens were expected to be procured and in service by 2023. On 11 December 2012, the Riksdag approved the purchase of 40 to 60 JAS 39E/Fs with an option to cancel if at least 20 aircraft were not ordered by other customers. On 17 January 2013, the government approved the deal for 60 JAS 39Es to be delivered between 2018 and 2027. On 3 March 2014, the Swedish defence minister stated that another 10 JAS 39Es might be ordered; this was later confirmed by the government.
There are also plans to keep some of the Gripen C/D active after 2025. This was recommended by the Swedish defence advisory committee in 2019.
In 2006, Swedish Gripen aircraft participated in Red Flag – Alaska, a multinational air combat exercise hosted by the United States Air Force. Gripens flew simulated combat sorties against F-16 Block 50s, Eurofighter Typhoons and F-15Cs, scoring ten kills with no losses, including a Eurofighter Typhoon and five F-16 Block 50s on the first day of the exercise. Three Swedish Gripen Cs also took part in a war game in Sweden against five Royal Norwegian Air Force F-16 Block 50 fighters. The Gripens and F-16s flew three combat sorties; the Gripens scored five kills in each sortie against the Norwegian F-16s, while in the last sortie an F-16 scored a kill against a Gripen.
Czech Republic
When the Czech Republic became a NATO member in 1999, the need to replace their existing Soviet-built MiG-21 fleet with aircraft compatible with NATO interoperability standards became apparent. In 2000, the Czech Republic began evaluating a number of aircraft, including the F-16, F/A-18, Mirage 2000, Eurofighter Typhoon and the Gripen. One major procurement condition was the industrial offset agreement, set at 150% of the expected purchase value. In December 2001, having reportedly been swayed by Gripen International's generous financing and offset programme, the Czech government announced that the Gripen had been selected. In 2002, the deal was delayed until after parliamentary elections had taken place; alternative means of air defence were also studied, including leasing the aircraft.
On 14 June 2004, it was announced that the Czech Republic was to lease 14 Gripen aircraft, modified to comply with NATO standards. The agreement also included the training of Czech pilots and technicians in Sweden. The first six were delivered on 18 April 2005. The lease was for an agreed period of 10 years at a cost of €780 million; the 14 ex-Swedish Air Force aircraft included 12 single-seaters and two JAS 39D two-seat trainers. In September 2013, the Defence and Security Export Agency announced that a follow-up agreement with the Czech Republic had been completed to extend the lease by 14 years, until 2029; leased aircraft shall also undergo extensive modernization, including the adoption of new datalinks. The lease also has an option of eventually acquiring the fighters outright. In 2014, the lease was extended to 2027 and the Saab service contract was extended to 2026.
In November 2014, Czech Air Force commander General Libor Štefánik proposed leasing a further six Gripens due to Russia's deteriorating relationship with the West; a Ministry of Defence spokesperson stated that the notion was the commander's personal vision and fleet expansion was not on the agenda for years to come. In 2015, the service decided to upgrade its fleet to the MS20 configuration. The MS20 upgrade was completed in 2018.
Hungary
Following Hungary's membership of NATO in 1999, there were several proposals to achieve a NATO-compatible fighter force. Considerable attention went into studying second-hand aircraft options as well as modifying the nation's existing MiG-29 fleet. In 2001, Hungary received several offers of new and used aircraft from various nations, including Sweden, Belgium, Israel, Turkey, and the US. Although the Hungarian government initially intended to procure the F-16, in November 2001 it was in the process of negotiating a 10-year lease contract for 12 Gripen aircraft, with an option to purchase the aircraft at the end of the lease period.
As part of the procurement arrangements, Saab had offered an offset deal valued at 110 per cent of the cost of the 14 fighters. Initially, Hungary had planned to lease several Batch II aircraft; however, the inability to conduct aerial refuelling and weapons compatibility limitations had generated Hungarian misgivings. The contract was renegotiated and was signed on 2 February 2003 for a total of 14 Gripens, which had originally been A/B standard and had undergone an extensive upgrade process to the NATO-compatible C/D 'Export Gripen' standard. The last aircraft deliveries took place in December 2007.
While the Hungarian Air Force operates a total of 14 Gripen aircraft under lease, in 2011, the country reportedly intended to purchase these aircraft outright. However, in January 2012, the Hungarian and Swedish governments agreed to extend the lease period for a further ten years; according to Hungarian Defence Minister Csaba Hende, the agreement represented considerable cost savings.
Two Gripens were lost in crashes in May and June 2015, leaving 12 Gripens in operation. From 2017, Hungary is back to operating 14 fighters.
In August 2021, a contract was signed with Saab to modernize the Gripen fleet of the Hungarian Air Force. The radar will be upgraded to PS-05/Mk4 and the software will be upgraded to MS 20 Block 2 level. New weapons are to be added to the arsenal of the Hungarian Gripens; IRIS-T missiles were ordered in December 2021.
South Africa
In 1999, South Africa signed a contract with BAe/Saab for the procurement of 26 Gripens (C/D standard) with minor modifications to meet their requirements. Deliveries to the South African Air Force commenced in April 2008. By April 2011, 18 aircraft (nine two-seater aircraft and nine single-seaters) had been delivered. While the establishment of a Gripen Fighter Weapon School at Overberg Air Force Base in South Africa had been under consideration, in July 2013 Saab ruled out the option due to a lack of local support for the initiative; Thailand is an alternative location being considered, as well as the Čáslav Czech air base.
Between April 2013 and December 2013, South African contractors held prime responsibility for maintenance work on the Gripen fleet as support contracts with Saab had expired; this arrangement led to fears that extended operations may not be possible due to a lack of proper maintenance. In December 2013, Armscor awarded Saab a long-term support contract for the company to perform engineering, maintenance, and support services on all 26 Gripens through 2016. On 13 March 2013, South African Defense Minister Nosiviwe Mapisa-Nqakula stated that "almost half of the SAAF Gripens" have been stored because of an insufficient budget to keep them flying. In September 2013, the SAAF decided not to place a number of its Gripens in long-term storage; instead all 26 aircraft would be rotated between flying cycles and short-term storage. Speaking in September 2013, Brigadier-General John Bayne testified that the Gripen met the SAAF's minimum requirements, as the country faced no military threats.
Thailand
In 2007, Thailand's Parliament authorized the Royal Thai Air Force to spend up to 34 billion baht (US$1.1 billion) as part of an effort to replace Thailand's existing Northrop F-5 fleet. In February 2008, the Thai Air Force ordered six Gripens (two single-seat C-models and four two-seat D-models) from Saab; deliveries began in 2011. Thailand ordered six more Gripen Cs in November 2010; deliveries began in 2013. Thailand may eventually order as many as 40 Gripens. In 2010, Thailand selected the Surat Thani Airbase as the main operating base for its Gripens. The first of the six aircraft were delivered on 22 February 2011.
Saab delivered three Gripens in April 2013, and three more in September 2013. In September 2013, Air Force Marshal Prajin Jantong stated that Thailand was interested in purchasing six more aircraft in the near future, pending government approval. Thai Supreme Commander General Thanasak Patimapragorn has stated that the Air Force intends for the Gripen's information systems to be integrated with Army and Navy systems. The armed forces were to officially inaugurate the Gripen Integrated Air Defence System during 2014.
During the Falcon Strike 2019 exercise with the People's Liberation Army Air Force, Royal Thai Air Force Gripen C/Ds scored 25 kills against J-11A aircraft for only two losses in simulated within-visual-range combat, and 41 kills against J-11As for the loss of five Gripen Cs in simulated beyond-visual-range combat.
United Kingdom
The Empire Test Pilots' School (ETPS) in the United Kingdom has used the Gripen for advanced fast jet training of test pilots under a "wet lease" arrangement since 1999. It operates a Gripen D aircraft.
Brazil
In October 2008, Brazil selected three finalists for its F-X2 fighter programme: the Dassault Rafale B/C, the Boeing F/A-18E/F Super Hornet, and the Gripen NG. The Brazilian Air Force initially planned to procure at least 36 and possibly up to 120 later, to replace its Northrop F‐5EM and Dassault Mirage 2000C aircraft. In February 2009, Saab submitted a tender for 36 Gripen NGs. In early 2010, the Brazilian Air Force's final evaluation report reportedly placed the Gripen ahead, a decisive factor being lower unit cost and operational costs. Amid delays due to financial constraints, there were reports in 2010 of the Rafale's selection, and in 2011 of the F/A-18's selection. On 18 December 2013, President Dilma Rousseff announced the Gripen NG's selection. Key factors were domestic manufacturing opportunities, full Transfer of Technology (ToT), participation in its development, and potential exports to Africa, Asia and Latin America; Argentina and Ecuador are interested in procuring Gripens via Brazil, and Mexico is considered an export target. Another factor was the distrust of the US due to the NSA surveillance scandal. The Gripen is not immune to foreign pressure: the UK could use its roughly 30% share of the Gripen's components to veto a sale to Argentina over the Falkland Islands dispute; Argentina is therefore considering other fighters instead.
On 24 October 2014, Brazil and Sweden signed a 39.3 billion SEK (US$5.44 bn, R$13 bn) contract for 28 Gripen E (single-seat version) and 8 Gripen F (dual-seat version) fighters for delivery from 2019 to 2024, to be maintained until 2050; the Swedish government will provide a subsidized 25-year, 2.19% interest rate loan for the buy. At least 15 aircraft are to be assembled in Brazil, and Brazilian companies are to be involved in production; Gripen Fs are to be delivered later. An almost US$1 billion price increase since selection is due to developments requested by Brazil, such as the "Wide Area Display" (WAD), a panoramic 19-by-8-inch touchscreen display. The compensation package is set at US$9 billion, or 1.7 times the order value. The Brazilian Navy is interested in the Gripen Maritime to replace its Douglas A-4KU Skyhawk carrier-based fighters. In 2015, Brazil and Sweden finalised the deal to develop the Gripen F, designated F-39 by Brazil.
The first Brazilian F-39E Gripen flight took place on 26 August 2019, from Saab's factory airfield in Linköping, Sweden. The aircraft was handed over to the Brazilian Air Force on 10 September 2019 for the flight-test programme. It arrived in Brazil on 20 September 2020 and was then transported by land to Navegantes International Airport. On 24 September, it took off for the Embraer unit in Gavião Peixoto, in São Paulo state, to start the test programme for flight-control systems, weapons integration, communication systems and more. The fighters will be part of the 1st Air Defense Group (1º GDA), based at Anápolis Air Force Base. Deliveries of operational fighters were to begin in 2021. According to Saab executive Eddy De La Motte, the first F-39F will be delivered in 2023. In 2021, Brazil started supersonic flight tests of the F-39E at altitudes above 16,000 feet. According to Saab executive Mikael Franzén, Brazil would start to receive the first production aircraft with IRST from November 2021. The Brazilian Air Force has a requirement for 108 Gripens, to be delivered in three batches.
On 1 February 2022, the Commander of the Brazilian Air Force, Carlos de Almeida Baptista Júnior, told the newspaper Folha de S. Paulo that Brazil was in the initial planning phase for negotiations with Saab over a new batch of 30 Gripen E/Fs: "our capacity planning takes us today, by our employment assumptions, to 66 Gripens in operation". This planning phase was expected to be finished by mid-2022. Brazil's intention to negotiate had been confirmed by Saab's chief executive Håkan Buskhe as early as February 2019. The confirmation followed rumors in the specialized defense media in Brazil that the Air Force saw the Lockheed Martin F-35 as an ideal candidate to continue the branch's modernization in the coming years, after the Gripen's recent failed bids in Finland and Switzerland; these rumors were denied by Baptista Júnior.
Potential operators
Austria
Austria is considering replacing its Eurofighter Typhoon fighters with the Gripen on grounds of obsolescence and cost, as they are all Tranche 1 aircraft and would have to be upgraded.
Botswana
In 2014, Saab opened an office in Botswana. The country is interested in buying eight surplus Gripen C/Ds, with possible extension to 16, to replace the Botswana Defence Force Air Wing's (BDF) 14 ex-Royal Canadian Air Force CF-5 fighters used since 1996. BDF officials reportedly believe the Gripen acquisition is a done deal.
Canada
Canada is a level 3 industrial partner in the Lockheed Martin F-35 Lightning II development program; however, an open fighter competition was launched in December 2017. The Royal Canadian Air Force announced in February 2018 that Saab was a contestant along with the F-35. The competition is heavily dependent on industrial benefits for Canadian companies; in May 2019, Saab offered to build Gripens in Canada, akin to the Brazilian arrangement.
In June 2019, Saab stated it was ready to offer 88 Gripen Es to Canada with full transfer of technology, and that it could integrate American and other non-Saab equipment so that the aircraft would be interoperable with the US military. Saab also stated the Gripen E was built for Arctic conditions. In January 2021, Saab offered to build two aerospace centers in Canada as part of the technology-transfer proposal. On 1 December 2021, the Canadian government confirmed that the Super Hornet did not meet its requirements and reduced the competitors to the F-35 and the Gripen.
Colombia
Saab has offered 15 Gripen C/D or E/F aircraft to Colombia, with possible deliveries during 2018–21 depending on the variant selected.
India
The Gripen was a contender for the Indian MRCA competition for 126 multirole combat aircraft. In April 2008, Gripen International offered the Next Generation Gripen for India's tender and opened an office in New Delhi to support its efforts in the Indian market. On 4 February 2009, Saab announced that it had partnered with India's Tata Group to develop the Gripen to fit India's needs. The Indian Air Force (IAF) conducted extensive evaluations of the Gripen's flight performance, logistics capability, weapons systems, advanced sensors and weapons firing. In April 2011, the IAF rejected the bid in favour of the Eurofighter Typhoon and the Dassault Rafale. Allegedly, IAF officials, while happy with the improved capabilities of Gripen NG, identified its high reliance on US-supplied hardware, including electronics, weaponry and the GE F414 engine, as a factor that may hamper exports.
In 2015, after the Rafale order was cut back to just 36 aircraft, Saab indicated a willingness to set up joint production of the Gripen in India. In October 2016, Saab, among other manufacturers, reportedly received an informal request-for-information query, opening a new competition for a single-engine fighter to replace the IAF's Soviet-built MiG-21 and MiG-27 fleets; Saab had already submitted an unsolicited bid. In November 2017, Saab pledged full Gripen E technology transfer to India if awarded the contract. The Gripen is now competing with six other types in a fresh tender, referred to in the Indian media as MMRCA 2.0, for the procurement of 114 multi-role combat aircraft.
Indonesia
In July 2016, Saab Indonesia confirmed having submitted a proposal earlier in the year in response to an Indonesian Air Force requirement. The proposal included the initial acquisition of 16 Gripen C/Ds for US$1.5 billion, to replace the Northrop F-5E Tiger IIs in service with the Indonesian Air Force since the 1980s. Saab expressed the intention for the bid to comply "100%" with Indonesia's Defence Industry Law 2012 (or Law Number 16), which requires foreign contractors to work with local industry, collaborating on production and sharing technology. Saab also indicated that the bid could substitute the E version for the C/D versions if Indonesia were willing to accept a longer delivery time. Competing aircraft responding to the requirement include the F-16V, Su-35, Rafale and Eurofighter Typhoon.
Philippines
In September 2016, Saab announced its intention to open an office in Manila to support its campaign to sell the Gripen to fill the Philippine Air Force's requirement for 12 multirole fighters; Saab also intends to offer ground infrastructure, integrated C2 systems and datalinks, similar to the capabilities of the Royal Thai Air Force. In 2018, Saab renewed its sales push. The Philippine Department of National Defense has indicated that it is more likely to buy six Gripen C/D MS20s than the US-offered F-16V Block 70/72.
Others
Other countries that have expressed interest in Gripen include:
Argentina (E/F from Brazil, subject to UK veto)
Ecuador (C/D, or E/F from Brazil)
Estonia
Kenya (C/D)
Latvia
Lithuania
Malaysia (C/D)
Mexico (C/D, or E/F from Brazil)
Namibia (C/D)
Peru (C/D, or E/F from Brazil)
Portugal (C/D)
Serbia
Slovenia
Uruguay (C/D or E/F from Brazil)
Vietnam
Ireland
Saab's head of exports Eddy de La Motte has stated that the Gripen's chances have improved as nations waver in their commitments to the F-35. In September 2013, Saab's CEO Håkan Buskhe said he envisioned Gripen sales to reach 400 or 450 aircraft.
Failed bids
Belgium
Sweden withdrew from the Belgian F-16 replacement competition due to foreign policy incompatibility.
Bulgaria
After the Bulgarian Air Force expressed interest in the Gripen, the Gerdzhikov caretaker cabinet announced on 26 April 2017 that a state commission had chosen Saab's fighter, planning for an initial batch of eight Gripens at up to 1.5 billion BGN (ca. €745 million), to be delivered in the 2018–20 timeframe, with a planned follow-up batch of another eight fighters. Competing bids were used USAF F-16A/Bs to be refurbished and modernised to MLU standard by the Portuguese firm OGMA (similar to the arrangement of Bulgaria's neighbour Romania) and used Italian Tranche 1 Eurofighter Typhoons, with the US/Portuguese offer finishing second and the Italian offer third. According to the deputy prime minister and minister of defence Stefan Yanev, the main reason for the Gripen's selection was the favourable financial terms offered by Saab, including a lease option and offset agreements: Saab's price was about one billion BGN for the aircraft alone ($834 million), while the US/Portuguese bid quoted about one and a half billion BGN for the aircraft alone. The decision was pending Bulgarian parliamentary approval. The second-place offer was retained as a back-up option if negotiations with Saab failed; finances for the program were budgeted until the end of 2017. The fighters would replace both the MiG-29 fighters of Graf Ignatievo Air Base and the Su-25 attack aircraft of Bezmer Air Base, as well as the already retired Su-22 reconnaissance aircraft.
In October 2018, potential suppliers responded to a renewed tender for aircraft, consisting of new F-16V Viper aircraft from Lockheed Martin, new F/A-18E/F Super Hornet aircraft from Boeing, used Eurofighter Typhoon aircraft from Italy and used Saab JAS 39 Gripen C/D from Sweden. France, Germany, Israel and Portugal did not respond to requests for used Eurofighter Typhoons and F-16 variants.
In December 2018, Saab submitted an improved offer to supply 10 new Gripen C/Ds instead of the previously proposed 8. However, in December 2018, the Bulgarian Ministry of Defence selected the US offer for 8 F-16V for an estimated 1.8 billion lev ($1.05 billion) as the preferred option, and recommended the government start talks with the US.
On 3 June 2019, the U.S. State Department approved the possible sale of 8 F-16 aircraft to Bulgaria. The cost of the contract was estimated at $1.67 billion. On 10 July 2019, Bulgaria approved the acquisition of eight F-16V Block 70/72 fighters for US$1.25bn.
The deal was vetoed by the Bulgarian President, Rumen Radev on 23 July 2019, citing the need to find a broader consensus for the deal, sending the deal back to parliament. On 26 July the deal was again approved by parliament, overruling the veto, and was approved by Radev.
In April 2020, Lockheed Martin was contracted by the U.S. government to produce F-16Vs for Bulgaria, estimated to be completed in 2027.
Croatia
On 24 October 2015, Sweden announced its Gripen C/D bid for Croatia's fighter replacement requirement, following a June request for information from the Croatian Ministry of Defence for between 8 and 12 new-build aircraft to replace Croatia's fleet of MiG-21bis aircraft. Croatia's long-term development plan (LTDP) would run from 2015 to 2024 and was scheduled to have funding available for a replacement aircraft in 2019. On 29 March 2018, the Croatian government chose Israel's bid of 12 F-16C/D Barak 2020 fighters over the Gripen; this sale was halted in January 2019 after the US failed to approve Israel's sale of the modified aircraft to Croatia. Sweden submitted another response in September 2020, following a second RFP issued in the spring of 2020 identifying Croatia's requirement for twelve fighters. The second RFP opened the competition to both new and secondhand aircraft. On 28 May 2021, the Prime Minister of Croatia, Andrej Plenković, announced that the Croatian Government would buy 12 used French Rafale F3R fighters for the Croatian Air Force.
Denmark
In 2007, the Defence Ministers of Sweden and Denmark signed a Memorandum of Understanding to evaluate the Gripen as a replacement for Denmark's fleet of 48 F-16s. Denmark also requested the development of Gripen variants featuring more powerful engines, larger payloads, longer range, and additional avionics; this request contributed to Saab's decision to proceed with the JAS 39E/F's development. Denmark repeatedly delayed the purchase decision; in 2013, Saab indicated that the Gripen was one of four contenders for the Danish purchase, alongside Boeing's Super Hornet, Lockheed Martin's F-35 Joint Strike Fighter, and the Eurofighter. Denmark is a level-3 partner in the JSF programme, and has already invested US$200 million. The final selection was to be made in mid-2015, when an order for 24 to 30 fighters was expected. The Swedish government announced the Gripen's withdrawal from the Danish competition on 21 July 2014, having chosen not to respond to the invitation to tender. In May 2016, Denmark announced its intention to purchase 27 F-35 fighters.
On 9 June 2016, the Danish defence committee agreed to purchase 27 F-35As to replace its F-16s for US$3 billion. In May 2019, Danish Minister of Defence Claus Hjort Frederiksen stated that Denmark is considering stationing fighter jets in Greenland to counter Russia's expanding military presence in the Arctic region. In an additional interview with Ritzau, the minister said that to provide air defense of Greenland would require at least four fighter planes, which would require Denmark to make an additional purchase. In January 2020, Lockheed Martin announced that assembly had begun on L-001, the first of 27 F-35As destined for the Royal Danish Air Force.
According to DR, the Danish public-service broadcaster, the US spied on the other contenders, Danish ministries and the defense industry to gain an advantage in the procurement process.
Finland
The Gripen's first export bid was to Finland, where it competed against the F-16, F/A-18, MiG-29 and Mirage 2000 to replace the Finnish Air Force's J 35 Draken and MiG-21 fleets. In May 1992, the McDonnell Douglas F/A-18C/D was announced as the winner on performance and cost grounds. The Finnish Minister of Defence, Elisabeth Rehn, stated that delays in the Gripen's development schedule had hurt its chances in the competition.
In June 2015, a working group set up by the Finnish MoD proposed starting a program to replace the Finnish Air Force's current F/A-18 Hornet fleet; it recognized five potential types: the Boeing F/A-18E/F Super Hornet, Dassault Rafale, Eurofighter Typhoon, Lockheed Martin F-35, and Saab Gripen. In December 2015, the Finnish MoD sent a letter to Britain, France, Sweden and the US, informing them that the HX Fighter Program had been launched to replace the Hornet with multi-role fighters by around 2025; the letter mentioned the Gripen as a potential fighter. A Request for Information (RFI) for the HX Fighter Program was sent in April 2016, and five responses were received by November 2016; an official request for quotations was to be sent to all five responding manufacturers in early 2018.
On 29 January 2020, the Gripen E prototype 39-10 landed at Tampere–Pirkkala Airport to participate in HX Challenge, the flight-evaluation phase of the Finnish HX fighter procurement programme. It was later followed by the Gripen NG demonstrator 39-7 (a sensor testbed), while a GlobalEye participated in the trials from Linköping in Sweden. Saab announced it had successfully completed the planned tests to demonstrate the capabilities of the Gripen and GlobalEye. On 31 January 2020, Saab submitted a revised offer in response to the revised Request for Quotation for the Finnish HX programme, with follow-on best-and-final-offer (BAFO) activity anticipated to continue through April 2021. The down-selection was scheduled to occur in 2021.
Saab's submitted BAFO for the HX Fighter Program included 64 JAS 39Es with an option for JAS 39Fs, two GlobalEye AEW&C aircraft, and an extensive weapons package. More than 20% of the proposal price relating to the Gripen is for weapons, such as the Meteor, IRIS-T, KEPD 350, SPEAR, EAJP (Electronic Attack Jammer Pod), and LADM (Lightweight Air-launched Decoy Missile).
On 5 December 2021, the Finnish newspaper Iltalehti reported that several foreign and security policy sources stated the Finnish Defense Forces recommendation for the F-35 as Finland's next fighter. These sources pointed to the F-35's capability and expected long lifespan as key reasons. On 10 December 2021, the selection of Lockheed Martin's F-35A was officially confirmed by the Finnish government.
Netherlands
In July 2008, the Netherlands announced it would evaluate Gripen NG together with four other competitors; in response, Saab offered 85 aircraft to the Royal Netherlands Air Force in August 2008. On 18 December 2008, it was reported that the Netherlands had evaluated the F-35 as having a better performance-price relation than the Gripen NG. On 13 January 2009, NRC Handelsblad claimed that, according to Swedish sources, Saab had offered to deliver 85 Gripens for €4.8 billion to the Dutch Air Force, about 1 billion euro cheaper than budgeted for the F-35.
Norway
On 18 January 2008, the Norwegian Ministry of Defence issued a Request for Binding Information (RBI) to the Swedish Defence Material Administration, who issued an offer for 48 Gripens in April 2008. On 20 November 2008, the selection of the F-35 Lightning II for the Royal Norwegian Air Force was announced, stating that the F-35 is the only candidate to meet all operational requirements; media reports claimed the requirements were tilted in the F-35's favour. Saab and Sweden's defence minister Sten Tolgfors stated that Norway's cost calculations were flawed; the offer being for 48 Gripens over 20 years, but Norway had extrapolated it to operating 57 aircraft over 30 years, thus doubling the cost; cost projections also failed to relate to the Gripen's operational costs. Norway also calculated greater attrition losses than what Sweden considered reasonable. According to Tolgfors, Norway's decision complicated further export deals.
In December 2010 leaked United States diplomatic cables revealed that the United States deliberately delayed Sweden's request for access to a US AESA radar until after Norway's selection. The cables also indicated that Norwegian consideration of the Gripen "was just a show" and that Norway had decided to purchase the F-35 due to "high-level political pressure" from the US.
Pakistan
Pakistan was interested in the Gripen C/D, but the sale was denied by Sweden in 2004.
Poland
The Gripen C/D was a contender in Poland's competition, begun in 2001, for 48 multirole fighters for the Polish Air Force. On 27 December 2002, the Polish Defence Minister announced the selection of the F-16C/D Block 50/52+. According to Stephen Larrabee, the selection was heavily influenced by Lockheed Martin's lucrative offset agreement (totaling $3.5 billion and 170% offset, against Gripen International's €3.2 billion with 146% offset) and by a political emphasis on Poland's strategic relationship with the US and NATO. Both Gripen International and Dassault Aviation (which offered the Mirage 2000-5 Mk 2) described the decision as political. According to a former Polish military defence vice-minister, the JAS 39 offer was better and included research participation proposals.
In 2014, Poland planned to purchase 64 multirole combat aircraft from 2021 as part of its modernisation plans to replace the ageing fleet of Sukhoi Su-22M4 'Fitter-K' ground attack aircraft and Mikoyan MiG-29 'Fulcrum-A' fighters. On 23 November 2017, the Armament Inspectorate announced it was starting the acquisition process. By 22 December 2017, five entities had expressed interest in the potential procurement, referred to as Harpia (harpy eagle), including: Saab AB with Gripen NG, Lockheed Martin with F-35, Boeing Company with F/A-18, Leonardo SpA with Eurofighter Typhoon and Fights-On Logistics with second hand F-16s. In May 2019, the Polish Defense Ministry formally requested to buy 32 F-35A for $4 billion with delivery from 2023 to 2026 with an option for 32 more from 2027.
Slovakia
On 30 August 2014, the Czech Republic, Slovakia and Sweden signed a letter of intent agreeing to co-operate on using the Gripen, which might lead to its acquisition by the Slovak Air Force. The letter of intent laid the foundation for bilateral co-operation around a common airspace surveillance of Slovakia and the Czech Republic. Slovakia sought to replace its MiG-29 fighters and the Gripen has been reported as the aircraft of choice, although the requirement would go to open competition. They may seek to lease fighters rather than buy, as did neighbouring Hungary and the Czech Republic.
In February 2018, the Slovak Ministry of Defence announced the launch of a new study to examine bids from the US and Swedish governments for the F-16V Viper and the Gripen to replace Slovakian MiG-29s. On 11 July 2018, the Slovakian Defense ministry announced that it will purchase 14 F-16V Block 70/72s instead of Gripen Cs. The F-16V package includes ammunition, training and logistics for a total of €1.589 billion (US$1.85 billion). Political opposition, among them former Defence Minister Ľubomír Galko, expressed criticism that the deal lacked transparency.
On 12 December 2018, Slovakia signed a contract to acquire 14 F-16 Block 70/72s. All are to be delivered by the end of 2023.
Switzerland
In January 2008, the Swiss Defence Material Administration invited Gripen International to bid to replace the nation's F-5 fleet. Saab responded with an initial proposal on 2 July 2008; the other contenders were the Dassault Rafale and Eurofighter Typhoon. On 30 November 2011, the Swiss government announced its decision to buy 22 Gripen NG aircraft for 3.1 billion Swiss francs. In 2012, a confidential report on the Swiss Air Force's 2009 tests of the three contenders was leaked; it had rated the Gripen as performing substantially below both the Rafale and the Eurofighter, assessing it as satisfactory for reconnaissance but unsatisfactory for combat air patrol and strike missions. However, the version evaluated was the JAS 39C/D, while the version bid was the Gripen NG, which was considered satisfactory in all roles. The parliamentary security commission found that the Gripen offered the most risks, but voted to go ahead as it was the cheapest option.
On 25 August 2012, the plan to order was confirmed by both Swedish and Swiss authorities. Deliveries were expected to run from 2018 to 2021 at a fixed price of CHF 3.126 billion (US$3.27 billion) including development costs, mission planning systems, initial spares and support, training, and certification; the Swedish government also guaranteed the price, performance and operational suitability. 8 JAS 39Cs and 3 JAS 39Ds were to be leased from 2016 to 2020 to train Swiss pilots and allow the F-5s to be retired. In 2013, Saab moved to increase Swiss industry offsets above 100% of the deal value after the Swiss parliament's upper house voted down the deal's financing. On 27 August 2013, the National Council's Security Commission approved the purchase, followed by the lower and upper houses of the parliament's approval in September 2013. Elements of the left and center of the political spectrum often criticized the Gripen as unnecessary and too expensive. On 18 May 2014, 53.4% of Swiss voters voted against the plan in a national referendum. Reportedly, objectors questioned the role of fighter aircraft in general, and the relevance of alternatives such as UAVs, surface-to-air missiles, or cyberwarfare capabilities.
In 2015, Switzerland was set to relaunch the F-5E/F, and now also F/A-18C/D, replacement programme; the Gripen was again considered the favourite. In March 2018, Swiss officials named contenders in its Air 2030 programme that includes not only combat aircraft but also ground-based air defense systems: The Gripen, Dassault Rafale, Eurofighter Typhoon, Boeing F/A-18E/F Super Hornet and Lockheed Martin F-35. In January 2019, Saab submitted a formal proposal for 30 to 40 Gripen Es to Armasuisse. It was due to perform evaluation flights for Swiss personnel at Payerne Air Base in June 2019. However, in June 2019, Saab did not participate at Payerne with the Gripen E because it was not considered ready to perform all tests.
Others
The Gripen was one of the candidates to replace the Austrian Air Force's ageing Saab 35 Drakens; the Eurofighter Typhoon was selected in 2003, but is being considered again due to costs.
The Swedish government decided not to enter the Belgian contest.
Oman ended up with the Eurofighter Typhoon.
Romania decided to acquire used F-16s instead.
The Gripen was one of the aircraft evaluated by the Chilean Air Force in 1999. Chile ultimately selected the F-16 over the Gripen, the Boeing F/A-18, and the Dassault Mirage 2000-5.
There were plans to begin licensed production of the Gripen in Lviv, Ukraine. However, these plans have stalled since 2014.
Variants
A-series single seater, JAS 39A or Gripen A: initial version that entered service with the Swedish Air Force in 1996. A number have been upgraded to the C standard.
A-series two seater, JAS 39B or Gripen B: two-seat version of the 39A for training, specialised missions and type conversion. To fit the second crew member and life support systems, the internal cannon and an internal fuel tank were removed and the airframe lengthened 0.66 m (2 ft 2 in).
C-series single seater, JAS 39C, or Gripen C: NATO-compatible version with extended capabilities in terms of armament, electronics, etc. Can be refuelled in flight. The variant was first delivered on 6 September 2002.
C-series two seater, JAS 39D, or Gripen D: two-seat version of the 39C, with similar alterations as the 39B.
E-series single seater, Gripen NG: improved version following on from the Gripen Demo technology demonstrator. Changes from the JAS 39C/D include the more powerful F414G engine, the Raven ES-05 AESA radar, increased fuel capacity and payload, two additional hardpoints, and other improvements. These improvements have reportedly raised the Gripen NG's operating cost to an estimated 24,000 Swiss francs (US$27,000) per flight hour, and its flyaway cost to 100 million Swiss francs (US$113M).
E-series single seater, JAS 39E or Gripen E: single-seat production version developed from the Gripen NG program, priced at US$85 million a unit. Sweden and Brazil have ordered the variant. Brazil's designation for this variant is F-39E.
E-series two seater, JAS 39F or Gripen F: two-seat version of the E variant. Eight were ordered by Brazil, to be developed and assembled in São Bernardo do Campo, Brazil. It is planned for pilot training and combat, optimized for back-seat air battle management, including jamming, information warfare and network attack, as well as weapon systems officer and electronic warfare roles. Brazil's designation for the variant is F-39F.
Proposals
Gripen Aggressor: 'red team' weaponless variant of the Gripen C (and possibly D) intended for the UK's Air Support to Defence Operational Training (ASDOT) requirement, and part of the US Air Force's adversary air (AdAir) opportunity.
Gripen Maritime: proposed carrier-based version based on the Gripen E-series, whose development was reported to be underway; Brazil and India have shown interest. This variant has also been named Sea Gripen. In July 2017, the Brazilian Navy began studying the Gripen Maritime for naval purposes as it looks to replace its fleet of Douglas A-4 Skyhawk aircraft.
Gripen UCAV: proposed unmanned combat aerial vehicle (UCAV) variant of the Gripen E.
Gripen EA: proposed electronic warfare (EW) ‘Growler’ or Electronic Attack variant of the Gripen F.
Operators
There are Gripens in service as of 2016.
The Brazilian Air Force operates 5 F-39Es, with another 23 F-39Es and 8 F-39Fs on order; a total of 72 E/F aircraft are planned to be ordered.
'Adelphi' Squadron of the 1st Air Defense Group
The Czech Air Force has 14 Gripens on lease; these include 12 single-seat C models and two two-seat D models, in operation as of 2016.
211. taktická letka (211th Tactical Squadron)
The Hungarian Air Force operates 14 Gripens (12 C-models and 2 D-models) on a lease-and-buy arrangement as of February 2017.
'Puma' Harcászati Repülőszázad ('Puma' Tactical Fighter Squadron at 59th Air Base)
The South African Air Force (SAAF) ordered 26 aircraft; 17 single-seat C-models and nine two-seat D-models. The first delivery, a two-seater, took place on 30 April 2008. It has 17 Cs and nine Ds in service as of 2016.
No. 2 Squadron
The Swedish Air Force operates 74 JAS 39Cs, 24 Ds and 2 Es, and as of 2016 had ordered 60 Es, with 10 more aircraft planned to be ordered. It originally ordered 204 aircraft, including 28 two-seaters. Sweden leases 28 to the Czech and Hungarian Air Forces.
Skaraborg Wing
Blekinge Wing
Norrbotten Wing
The Royal Thai Air Force has eight JAS 39Cs and four JAS 39Ds in use as of 2016. In October 2013, the Thai government announced its intention to purchase another six Gripens.
701 Fighter Squadron, Wing 7
The Empire Test Pilots' School operates Gripens for training. ETPS instructor pilots and students undergo simulator training with the Swedish Air Force, and go on to fly the two-seater Gripen at Saab in Linköping, in two training campaigns per year (Spring and Autumn). The agreement was renewed in 2008.
Aircraft on display
Second prototype JAS 39–2 is on display at the Swedish Air Force Museum, Linköping.
Single seat JAS 39A serial 39113 is displayed at the Skaraborg Wing.
The Swedish government has donated one Swedish Air Force JAS 39A to Thailand for display at the Royal Thai Air Force Museum in Don Mueang, Bangkok.
Accidents and incidents
Gripen aircraft have been involved in at least 10 incidents, including nine hull-loss accidents, with one loss of life.
The first two crashes, in 1989 and 1993 respectively, occurred during public displays of the Gripen and resulted in considerable negative media reports. The first crash was filmed by a Sveriges Television news crew and led to critics calling for development to be cancelled. The second crash occurred in an empty area on the island of Långholmen during the 1993 Stockholm Water Festival with tens of thousands of spectators present. The decision to display the Gripen over large crowds was publicly criticized, and was compared to the 1989 crash. Both the 1989 and 1993 crashes were related to flight control software issues and pilot-induced oscillation (PIO); the flight control system was corrected by 1995. The first and only fatal crash occurred on 14 January 2017 at Hat Yai International Airport, Thailand, during an airshow for Thai Children's Day; the pilot did not survive. The most recent crash occurred on 21 August 2018 at Kallinge Airport near the southern Swedish town of Ronneby; the pilot ejected successfully. The subsequent investigation by the Swedish Accident Investigation Authority concluded, from DNA analysis of remains found in the engine, that the aircraft had collided with great cormorants (Phalacrocorax carbo).
Specifications
JAS 39C/D
JAS 39E/F
See also
Notes
References
Citations
External links
Mega Pit Stops
Siivet – Wings
1980s Swedish fighter aircraft
Delta-wing aircraft
Saab JAS 39 Gripen
Single-engined jet aircraft
Relaxed-stability aircraft
Aircraft first flown in 1988
Fourth-generation jet fighter |
425978 | https://en.wikipedia.org/wiki/Commodore%20Datasette | Commodore Datasette | The Commodore 1530 (C2N) Datasette, later also Datassette (a portmanteau of data and cassette), is Commodore's dedicated magnetic tape data storage device. Using compact cassettes as the storage medium, it provides inexpensive storage to Commodore's 8-bit home/personal computers, notably the PET, VIC-20, and C64. A physically similar model, Commodore 1531, was made for the Commodore 16 and Plus/4 series computers.
Features
Typical compact cassette interfaces of the late 1970s used a small controller in the computer to convert digital data to and from analog tones. The interface was then connected to the cassette deck using normal sound wiring such as RCA jacks or 3.5 mm phone jacks. This sort of system was used on the Apple II and TRS-80 Color Computer, as well as many S-100 bus systems, allowing them to be used with any cassette player with suitable connections.
In the Datasette, instead of writing two tones to tape to indicate bits, patterns of square waves are used, including a parity bit. Programs are written twice to tape for error correction; if an error is detected when reading the first recording, the computer corrects it with data from the second. The Datasette has built-in analog-to-digital converters and audio filters to convert the computer's digital data into analog sound and vice versa. Connection to the computer is done via a proprietary edge connector (Commodore 1530) or mini-DIN connector (1531). The absence of recordable audio signals on this interface makes the Datasette and clones the only cassette recorders usable with Commodore computers, until aftermarket converters made the use of ordinary recorders possible.
Because of its digital format, the Datasette is both more reliable than other data cassette systems and very slow, transferring data at around 50 bytes per second. After the Datasette's launch, however, special turbo tape software appeared, providing much faster tape operation (loading and saving). Such software was integrated into most commercial prerecorded applications (mostly games), as well as being available separately for loading and saving users' homemade programs and data. These programs were only widely used in Europe, as the US market had long since moved on to disks.
Datasettes can typically store about 100 kB per 30-minute tape side. The use of turbo tape and other fast loaders increased this to roughly 1,000 kB.
The Datasette has only one connection cable, with a 0.156 in-spacing PCB edge connector at the computer end. All input/output signals to the Datasette are digital, so all digital-to-analog conversion, and vice versa, is handled within the unit. Power is also supplied through this cable. The pinout is ground, +5 V, motor, read, write, key-sense. The sense signal monitors the play, rewind, and fast-forward buttons but cannot differentiate between them. A mechanical interlock prevents any two of them from being pressed at the same time. The motor power is derived from the computer's unregulated supply via a transistor circuit.
Encoding
To decode the recorded data, the zero-crossings of the analog signal from positive to negative voltage are measured. The time between these positive-to-negative crossings is then compared to a threshold to determine whether the interval since the last crossing is short (0) or long (1). The shorter periods also have a lower amplitude on tape.
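This classification step can be illustrated with a short sketch in Python; the threshold and interval values below are illustrative placeholders, not the Datasette's actual timing constants:

THRESHOLD_US = 300  # hypothetical boundary between "short" and "long", in microseconds

def intervals_to_bits(intervals_us):
    """Classify each positive-to-negative zero-crossing interval:
    shorter than the threshold -> 0, longer -> 1."""
    return [0 if t < THRESHOLD_US else 1 for t in intervals_us]

# Example: six measured intervals in microseconds.
print(intervals_to_bits([210, 450, 190, 460, 220, 440]))  # [0, 1, 0, 1, 0, 1]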
A circuit in the tape unit transforms the analog signal into a logical 1 or 0, which is then transmitted to the computer via the tape connector. Inside the computer, the first Complex Interface Adapter (6526) in the C64 senses when the signal goes from one to zero. This event, called a trigger, causes an interrupt request, which can either be handled by an interrupt handler or detected by testing bit 4 of location $DC0D.
Inside the tape unit, the read-head signal is fed into an operational amplifier (1), whose output is DC-filtered. Op-amp (2) amplifies the signal and feeds an RC filter. Op-amp (3) amplifies the signal again, followed by another DC filter. Op-amp (4) amplifies the signal to the point of clipping the roughly sinusoidal waveform. The positive and negative rails of all the op-amps are wired to +5 V DC and GND, so the clipped signal fits into the TTL voltage window of the Schmitt-trigger stage that in turn feeds the digital cassette port.
The timing granularity of this measurement is set by the C64's clock and differs slightly between the PAL and NTSC versions of the machine; together with the number of cycles spent per bit, it determines the achievable data rate.
Once the bits can be decoded, they are fed into a shift register and continuously compared to a special bit sequence, which can also be viewed as a byte. A match means that the stream is byte-synchronized. The first byte compared against is called the lead-in byte; once it has been matched, the following byte is compared to the sync byte as well.
An example: Turbo Tape 64 has a lead-in byte $02 (binary 00000010), sync byte $09 (binary 00001001) and a following sync sequence of $08, $07, $06, $05, $04, $03, $02, $01.
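The synchronization logic can be modelled in a few lines of Python. The sketch below uses the Turbo Tape 64 lead-in and sync bytes quoted above, but the bitstream is fabricated for the example, and a real loader would go on to verify the $08 to $01 countdown as well:

LEAD_IN, SYNC = 0x02, 0x09  # Turbo Tape 64 values from the text

def synchronize(bits):
    """Shift bits into an 8-bit register until it equals the lead-in byte,
    then require the next full byte to be the sync byte.  Returns the index
    of the first bit after the sync byte, or None if no sync is found."""
    reg = 0
    for i, bit in enumerate(bits):
        reg = ((reg << 1) | bit) & 0xFF   # 8-bit shift register
        if reg == LEAD_IN and i + 8 < len(bits):
            nxt = 0
            for b in bits[i + 1:i + 9]:   # assemble the following byte
                nxt = (nxt << 1) | b
            if nxt == SYNC:
                return i + 9              # stream is now byte-synchronized
    return None

# Two noise bits, then lead-in $02 (00000010), sync $09 (00001001), payload.
stream = [1, 0] + [0, 0, 0, 0, 0, 0, 1, 0] + [0, 0, 0, 0, 1, 0, 0, 1] + [1, 1]
print(synchronize(stream))  # -> 18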
Models
PET, VIC-20, C64/128
There are at least four main models of the 1530/C2N Datassette:
The original modified Sanyo M1540A cassette drive, built into the earliest models of PET in 1977. This was a standard shoebox tape recorder with a corner of the case removed and modified electronics; a Commodore PCB was installed internally in place of the Sanyo electronics. To disguise the Sanyo brand, Commodore simply fitted a Commodore badge over the original logo.
The second built-in Datassette in the PET 2001: another standard consumer model (sold in some markets as CCE CCT1020) modified with a Commodore PCB. Black cassette lid, five white keys, no tape counter, no SAVE LED
Black body original-shape model, black cassette lid, five black keys, no tape counter, no SAVE LED
White body original-shape model, black cassette lid, five black keys, with tape counter, no SAVE LED
White body new-shape model, silver cassette lid, six black keys, with white tape counter SAVE LED on left side
White body new-shape model, silver cassette lid, six black keys, with tape counter and a red SAVE LED on the right
As above but with black pattern and silvery Commodore logo, six black keys, tape counter and a red SAVE LED on right side
The first two external models were made as PET peripherals, and styled after the PET 2001 built-in tape drive. The latter two were styled and marketed for the VIC-20 and C64. All 1530s are compatible with all those computers, as well as the C128.
In addition to this, some models came with a small hole above the keys, to allow access to the adjustment screw of the tape head azimuth position. A small screwdriver can thus easily be used to affect the adjustment without disassembling the Datassette's chassis.
Confusingly, the Datassette at various times was sold both as the C2N DATASETTE UNIT Model 1530 and as the 1530 DATASSETTE UNIT Model C2N. Note the difference in spelling (one S versus two) used on the original product packaging.
As with the Datasette hardware, the recording format is compatible across computers; the VIC-20, for example, can read PET cassettes.
C16/116 and Plus/4
Similar in physical appearance to the 1530/C2N models is the Commodore 1531, made for the Commodore 16 and Plus/4 series computers. This has a Mini-DIN connector in place of the PCB edge connector. This can be used with a C64/128 via an adaptor, which was supplied by Commodore with some units.
Black/charcoal body new shape model, silver cassette lid, six light gray keys, with tape counter and a red SAVE LED
Popularity
The Datasette was more popular outside than inside the United States. U.S. Gold, which imported American computer games to Britain, often had to wait until they were converted from disk because most British Commodore 64 owners used tape, while the US magazine Compute!'s Gazette reported that by 1983 "90 percent of new Commodore 64 owners bought a disk drive with their computer". Computer Gaming World reported in 1986 that British cassette-based software had failed in the United States because "97% of the Commodore systems in the USA have disk drives"; by contrast, MicroProse reported in 1987 that 80% of its 100,000 sales of Gunship in the UK were on cassette. In the United States disk drives quickly became standard, despite the 1541 costing roughly five times as much as a Datasette. In most parts of Europe, the Datasette was the medium of choice for several years after its launch, although floppy disk drives were generally available. The inexpensive and widely available audio cassettes made the Datasette a good choice for the budget-aware home computer mass market.
See also
Famicom Data Recorder
Fast loader
IBM cassette tape
Kansas City standard
Magnetic tape data storage
References
External links
Similar Commodore tape drives
Datasette photos
Description of tape format with conversion utilities and code
C2N232 project to build a hardware adaptor/software program to archive Commodore Datasette files to a modern computer.
DC2N Homepage Digital C2N replacement project.
Sketchup model of the Commodore Datasette 1530.
CBM storage devices
CBM hardware
Home computer peripherals
Commodore 64
Commodore VIC-20
Tape-based computer storage |
2631038 | https://en.wikipedia.org/wiki/Sparse%20file | Sparse file | In computer science, a sparse file is a type of computer file that attempts to use file system space more efficiently when the file itself is partially empty. This is achieved by writing brief information (metadata) representing the empty blocks to the data storage media instead of the actual "empty" space which makes up the block, thus consuming less storage space. The full block size is written to the media as the actual size only when the block contains "real" (non-empty) data.
When reading sparse files, the file system transparently converts metadata representing empty blocks into "real" blocks filled with null bytes at runtime. The application is unaware of this conversion.
Most modern file systems support sparse files, including most Unix variants and NTFS. Apple's HFS+ does not provide support for sparse files, but in OS X, the virtual file system layer supports storing them in any supported file system, including HFS+. Apple File System (APFS), announced in June 2016 at WWDC, also supports them. Sparse files are commonly used for disk images, database snapshots, log files and in scientific applications.
Advantages
The advantage of sparse files is that storage space is only allocated when actually needed: Storage capacity is conserved, and large files can occasionally be created even if insufficient free space for the original file is available on the storage media. This also reduces the time of the first write as the system doesn't have to allocate blocks for the "skipped" space. If the initial allocation requires writing all zeros to the space, it also keeps the system from having to write over the "skipped" space twice.
For example, a virtual machine image with max size of 100 GB that has 2 GB of files actually written would require the full 100 GB when backed by pre-allocated storage, yet only 2 GB on a sparse file. If the file system supports hole punching and the guest operating system issues TRIM commands, deleting files on the guest will accordingly reduce space needed.
Disadvantages
Disadvantages are that sparse files may become fragmented; file system free space reports may be misleading; filling up file systems containing sparse files can have unexpected effects (such as disk-full or quota-exceeded errors when merely overwriting an existing portion of a file that happened to have been sparse); and copying a sparse file with a program that does not explicitly support them may copy the entire, uncompressed size of the file, including the zero sections which are not allocated on the storage media, thus losing the benefits of the sparse property. Sparse files are also not fully supported by all backup software or applications. However, the VFS implementation sidesteps the prior two disadvantages. Loading sparse executables (exe or dll) on 32-bit Windows takes much longer, since such a file cannot be memory-mapped in the limited 4 GB address space and is not cached, as there is no code path for caching 32-bit sparse executables (Windows on 64-bit architectures can map sparse executables). On NTFS, sparse files (or rather their non-zero areas) cannot be compressed: NTFS implements sparseness as a special kind of compression, so a file may be either sparse or compressed.
Sparse files in Unix
Sparse files are typically handled transparently to the user, but the differences between a normal file and a sparse file become apparent in some situations.
Creation
The Unix command
dd of=sparse-file bs=5M seek=1 count=0
will create a file of five mebibytes in size, but with no data stored on the media (only metadata). (GNU dd has this behavior because it calls ftruncate to set the file size; other implementations may merely create an empty file.)
Similarly the truncate command may be used, if available:
truncate -s 5M <filename>
On Linux, an existing file can be converted to sparse by:
fallocate -d <filename>
There is no portable way to punch holes; the relevant system call is fallocate(FALLOC_FL_PUNCH_HOLE) on Linux and fcntl(F_FREESP) on Solaris.
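For illustration, the Linux call can be reached from a script via ctypes. The following is a minimal Python sketch, assuming a 64-bit Linux system with glibc; the flag values are taken from <linux/falloc.h>, and punching a hole requires combining the flag with FALLOC_FL_KEEP_SIZE:

import ctypes, ctypes.util, os

FALLOC_FL_KEEP_SIZE = 0x01   # from <linux/falloc.h>
FALLOC_FL_PUNCH_HOLE = 0x02  # must be ORed with KEEP_SIZE

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

def punch_hole(path, offset, length):
    """Deallocate `length` bytes at `offset`, leaving a hole behind."""
    fd = os.open(path, os.O_WRONLY)
    try:
        if libc.fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                          ctypes.c_long(offset), ctypes.c_long(length)) != 0:
            err = ctypes.get_errno()
            raise OSError(err, os.strerror(err))
    finally:
        os.close(fd)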
Detection
The -s option of the ls command shows the occupied space in blocks.
ls -ls sparse-file
Alternatively, the du command prints the occupied space, while ls prints the apparent size.
In some non-standard versions of du, the option --block-size=1 prints the occupied space in bytes instead of blocks, so that it can be compared to the ls output:
du --block-size=1 sparse-file
ls -l sparse-file
Also, the tool filefrag from e2fsprogs package can be used to show block allocation details of the file.
filefrag -v sparse-file
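The apparent-size-versus-allocated-space comparison can also be made programmatically. The following is a small Python sketch using only the standard library; st_blocks is counted in 512-byte units, and SEEK_HOLE is available only on platforms that support it (e.g. Linux and Solaris):

import os

# Create a 5 MiB file that is entirely a hole, as with truncate(1).
with open("sparse-file", "wb") as f:
    f.truncate(5 * 1024 * 1024)

st = os.stat("sparse-file")
print("apparent size :", st.st_size)          # 5242880 bytes
print("allocated size:", st.st_blocks * 512)  # typically 0 here

# On file systems that support it, SEEK_HOLE finds the first hole.
with open("sparse-file", "rb") as f:
    print("first hole at:", f.seek(0, os.SEEK_HOLE))  # 0 for an all-hole file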
Copying
Normally the GNU version of cp is good at detecting whether a file is sparse, so
cp sparse-file new-file
creates new-file, which will be sparse. However, GNU cp does have a --sparse option. This is especially useful if a file containing long zero blocks is saved in a non-sparse way (i.e. the zero blocks have been written to the storage media in full). Storage space can be conserved by doing:
cp --sparse=always file1 file1_sparsed
Some cp implementations, like FreeBSD's cp, do not support the --sparse option and will always expand sparse files. A partially viable alternative on those systems is to use rsync with its own --sparse option instead of cp. Unfortunately --sparse cannot be combined with --inplace.
Via standard input
cp --sparse=always /proc/self/fd/0 new-sparse-file < somefile
See also
Comparison of file systems
References
External links
NTFS Sparse Files For Programmers
Creating sparse files in Windows Server using fsutil
Creating sparse files in Solaris using mkfile(1M)
View the Size of the Sparse File of a Database Snapshot
SEEK_HOLE or FIEMAP: Detecting holes in sparse files
virtsync is a commercial solution to rsync's --sparse and --inplace issue.
SparseChecker - Utility that allows to manage the sparse files on NTFS file system
Phantom - a program to convert files to sparse files to reduce storage consumption
ArchLinux Wiki: Sparse file
Computer files |
5634280 | https://en.wikipedia.org/wiki/Audit%20%28disambiguation%29 | Audit (disambiguation) | An audit is an independent evaluation of an organization, process, project, product or system.
Audit, auditor or auditing may also refer to:
Types of audit
Academic audit, the completion of a course of study for which no assessment is completed or grade awarded
Conformity assessment audit (ISO, HACCP, JCAHO)
Environmental audit
Energy audit
First Amendment audits, social movement involving photographing or filming from a public space
Financial audit, the examination by an independent third party of the financial statements of a company
Clinical audit, a process of the United Kingdom's National Health Service
Internal audit
Performance audit, an examination of a program, function, operation or the management systems and procedures of a governmental or non-profit entity
Quality audit, a systematic, independent examination of a quality system
Helpdesk and incident reporting auditing
Computing
Audit (telecommunication) - multiple meanings
audit trail
Information technology security audit - a process that can verify that certain standards have been met
Configuration audit (as part of configuration management)
Information technology audit - an examination of the controls within an entity's Information technology infrastructure
Software audit (disambiguation) - multiple meanings
Auditor Security Collection, a Linux distribution which was merged into BackTrack
Religion
Auditing (Scientology), a procedure in Scientology
Saint Auditor (Nectarius of Auvergne), Christian martyr of the 4th century
Other uses
Auditor, the head of a Student Society, especially in Ireland
Auditors of Reality, characters in the Discworld novels
Auditing |
34625392 | https://en.wikipedia.org/wiki/Final%20Cut%20Pro%20X | Final Cut Pro X | Final Cut Pro, previously Final Cut Pro X, is professional non-linear video editing software published by Apple Inc. as part of their Pro Apps family of software programs. It was released on June 21, 2011 for sale in the Mac App Store. It is the successor to Final Cut Pro. It was renamed "Final Cut Pro" in November 2020, coinciding with the release of macOS Big Sur.
Features
Final Cut Pro X shares some of its code and its interface design philosophy with Apple's consumer video editing software, iMovie.
Interface
Event browser: Replacing “bins” in other NLEs, the event browser is where the original media is found and can be searched and sorted by various forms of metadata. Keyword ranges, favorite and rejected ranges, and smart collections allow for faster sorting of a large number of clips.
Magnetic timeline: As an alternative to the track-based timelines found in traditional NLEs, Final Cut's magnetic timeline uses clip connections to keep connected clips and secondary storylines in sync with clips located on the primary storyline. By default, clips move around each other "magnetically", filling in any gaps and avoiding clip collisions by automatically bumping clips out of the way vertically. The magnetic connections are also user-definable.
Roles: In order to separate and organize different audio types on the magnetic timeline, editors can designate what "role" each clip plays. Introduced in version 10.0.1, Roles can be assigned to clips as an alternate way of creating organizational functionality. A Role (or Sub-Role) is assigned to clips to identify what they are (for example Video, Titles, Dialogue, Effects, Music). Upon sharing a master file of the project, the various Roles can be split out as stems or in a multitrack file for broadcast delivery or other distribution needs.
Content auto-analysis: Found in the import window and event browser is the option to analyze media for shot type and facial recognition or fix potential problems like audio loudness, audio hum, channel grouping, background noise, color balance, pulldown removal, and stabilization. This process generates metadata that can automatically be organized as Keywords and can be grouped into Smart Collections.
Synchronized clips: Video and audio clips recorded on separate devices can be synchronized automatically by timecode, audio waveforms, or markers and combined into a single clip.
Compound clips: Nested sequences from the original Final Cut Pro have been replaced by compound clips. A selection of video and audio clips can be nested into a single compound clip. This compound clip can be opened in its own timeline or broken apart for further editing. It can also be reused in different projects.
Closed captions: Introduced in version 10.4.1, closed captions can be created right in the timeline or imported into the timeline from an external file.
Multicam editing: Introduced in version 10.0.3, multiple camera angles can be synchronized automatically and combined into a multicam clip. Once in the timeline, a multicam clip can be cut up into different angles by using the angle viewer. A multicam clip can be opened in the angle editor where new angles can be added, synched, relabeled, and rearranged at any time.
Auditions: Clips can be grouped together in the event browser or on the timeline as auditions. Once in the timeline, an audition allows the user to choose between different clips in their edit while the timeline ripples automatically in order to preview two or more different versions of a cut.
3D titles: Introduced in version 10.2.0, text can be extruded, textured, lit, and shaded with materials and environments in 3D. This allows users to create titles like those found in Hollywood movies directly in the application.
360 degree video editing: Introduced in version 10.4. import and edit 360° equirectangular video in a wide range of formats and frame sizes.
Advanced color grading
High dynamic range
Technical features
While inheriting the name from its predecessor, Final Cut Pro, Final Cut Pro X is a completely re-written application. As a native 64-bit application it takes advantage of more than 4GB of RAM. It utilizes all CPU cores with Grand Central Dispatch. Open CL support allows GPU accelerated processing for improved performance for playback, rendering, and transcoding. It is resolution-independent, supporting images sizes from SD to beyond 4K. Final Cut Pro X supports playback of many native camera and audio formats. It can also transcode video clips to the Apple ProRes codec for improved performance. Many tasks are performed in the background such as auto-saving, rendering, transcoding, and media management, allowing the user an uninterrupted experience. Final Cut Pro X was developed for macOS only.
Workflow integration
Motion 5: Titles, motion graphics, and effects generated in Motion 5 can be published to Final Cut Pro X. Inside Final Cut Pro X, editors can modify the parameters and contents of the effects, as long as permission for such modifications is turned on in the Motion 5 project file.
Adobe Photoshop: In Final Cut Pro X 10.0.3 and later, the editor can import Photoshop projects onto the storyline similarly to a still image. A Photoshop project with layers is treated similarly to a compound clip, and the layers are preserved after being imported into Final Cut Pro X. Individual layers of the Photoshop project can be toggled on or off inside Final Cut Pro X by double-clicking the imported project and going into the compound clip editing panel. Other adjustments to the Photoshop project should be performed in Adobe Photoshop, with updates happening in real time inside Final Cut Pro X.
Development
Final Cut Pro X was made available for purchase and download on the Mac App Store on June 21, 2011 along with new versions of Motion and Compressor. Since then the application has been updated a few times each year with new features, bug fixes, native codec support, and overall improvements with stability.
Many features found in Final Cut Pro 7 that were missing from Final Cut Pro X at its initial release have subsequently been added, such as XML import and export, server support, multicam editing, chapter markers, and broadcast monitor support.
Because Final Cut Pro X abandoned a track-based timeline in favor of the Magnetic Timeline, there were initially limitations in exporting audio stems for broadcast and distribution needs. On September 9, 2011, version 10.0.1 was released with a new solution to this problem: the introduction of Video and Audio Roles. Clips are assigned Roles, and upon sharing a Project the user can export multitrack QuickTime files or stems. AAF export is done with a third-party app, X2Pro, which uses FCPXML. Another method is to use Logic Pro X to make an AAF.
With version 10.0.6 released on October 23, 2012, Apple introduced native Redcode Raw support as well as MXF support through a third party plugin. MXF was eventually natively supported by version 10.1.4.
Prior to the introduction of version 10.1.0 Project and Event Libraries were separate folders. Events contained all the original media and Project Libraries contained the actual edited Projects on timelines. The Project and Event Libraries were stored in a user's Movie folder or on the root level of an external hard drive. These Libraries automatically opened in Final Cut Pro X depending on which hard drives were mounted. That all changed on December 19, 2013, when Project and Event Libraries were merged into a new Library model. Libraries contained Events which in turn contained Projects. And unlike before Libraries could be opened and closed by the user. Media could be stored internally in the Library or kept outside the Library. On June 27, 2014 media management was further refined with the release of version 10.1.2.
During the NAB Show 2015 Apple released version 10.2.0. 3D Titles were introduced directly in Final Cut Pro X as well as its companion application, Motion. The Color Board merged with a new Color Correction effect to allow for more flexibility in stacking layers of effects. The ability to apply a Keying or Shape Mask directly to any effect was also introduced.
Version 10.3 introduced an entirely new interface and an improved magnetic timeline, along with support for iXML metadata when importing audio, greatly improved audio editing, the ability to show audio roles in lanes, Rec. 2020 color import, editing, and export, and MXF support.
Version 10.4.0 introduced color wheels and color curves, 360° video editing, and High Dynamic Range (HDR) video.
During the NAB Show 2018, Apple released version 10.4.1. Closed captioning was introduced, as well as support for ProRes RAW. Version 10.4.1 requires macOS 10.13.2 High Sierra.
Release history
For a complete overview of the changes made, see Apple's release notes.
Reception
Final Cut Pro X was announced in April 2011 simultaneously at the Los Angeles Final Cut Pro Users Group held at Bally's Las Vegas and at the NAB Show in the Las Vegas Convention Center and released in June 2011.
The reaction was extremely mixed, with veteran film editor Walter Murch initially saying, "I can't use this," citing a lack of features compared to Final Cut Pro 7. In a 2015 interview, Murch was much less critical of the tool and suggested that he was interested in using it.
Features noted as essential to professional video production but missing from the initial release of FCPX included edit decision list (EDL), XML and Open Media Framework Interchange (OMF) support, the ability to import projects created in previous releases of Final Cut Pro, a multicam editing tool, third-party I/O hardware output, and videotape capture beyond FireWire video devices. Many of these were addressed within the first six months of the product's life. EDL export, a product of the early days of videotape editing, is now supported through third-party software, and projects can be passed to Pro Tools by creating an AAF (a newer version of OMF) through X2Pro.
Since then, in the view of some users, many of Final Cut Pro X's initial shortcomings have been fixed.
Apple announced in April 2018 that Final Cut Pro X had more than 2.5 million users.
Ecosystem
Since its release, FCP X has supported the construction of effect, transition, and title plugins published as custom-built effects from Apple Motion. This has led to a third-party ecosystem of developers building effects ranging from simple color corrections to complex templates. Third-party plug-ins can also be created through Apple's FxPlug SDK. Because Projects, Events, and Libraries are stored in a database format, many third-party developers have built workflow tools that utilize FCPXML.
Feature films and television shows edited with Final Cut Pro X
Film
Young Detective Dee: Rise of the Sea Dragon (2013)
Loreak (2014)
Focus (2015)
Well Wishes (2015)
What Happened, Miss Simone? (2015)
La isla del viento (2015)
600 Miles (2015)
Minimalism: A Documentary About the Important Things (2015)
Just Let Go (2015)
An Autumn Without Berlin (2015)
The Chosen (2015)
Amama (2015)
Saved by Grace (2016)
Whiskey Tango Foxtrot (2016)
Saturday's Warrior (2016)
Voice from the Stone (2016)
El Hombre de las Mil Caras (2016)
Bokeh (2016)
Everything Else (2016)
La Historia de Jan (2016)
Geostorm (2017)
The Unknown Soldier (2017)
Flesh And Blood (2017)
Daisy Winters (2017)
Handia (2017)
Escape Room (2017)
Brothers' Nest (2018)
La llum d'Elna (2017)
Dead Envy (2018)
Gabriel (2018)
Off The Tracks (Documentary) (2018)
Dantza (2018)
Scramble the Seawolves (Documentary) (2018)
Jezebel (2019)
The Banker (2020)
Faith Based (2020)
Wild Amsterdam (2018) (Documentary)
Fragments of Truth (Documentary) (2018)
The Isle (2018)
Unhinged (2017)
Psychosynthesis (2019)
Shadows on the Road (2018)
Against the Clock (2019)
Chasing Molly (2019)
Follow Me (2020)
Budapest Heist (2020)
Monsters of Man (2020)
Blood Red Sky (2021)
Television
BBC Have I Got News For You (2020)
BBC News (2014)
Trailer Park Boys (2012)
Leverage (2012)
George to the Rescue (2013)
The Hong Kong Affair (2013)
Drag Queens of London (2014)
O.J. Speaks: The Hidden Tapes (2015)
Paramedics: Emergency Response (2015)
Challenger Disaster: Lost Tapes (2016)
Rebellion (2016)
Sex on the Edge (2016)
Conquistadores Adventum (2017)
Diana: In Her Own Words (2017)
Students on the Edge (2018)
La Peste (2018)
Dogs Of Berlin (2018)
Apollo: Missions to the Moon (2019)
Matchday: Inside FC Barcelona (2019)
Salvados (2019)
La Línea: Shadow of Narco (2020)
Q: Into the Storm (2021)
References
Apple Inc. software
Video editing software
Video editing software for macOS |
1868994 | https://en.wikipedia.org/wiki/Xpdf | Xpdf | Xpdf is a free and open-source PDF viewer for operating systems supported by the Qt toolkit. Versions prior to 4.00 were written for the X Window System and Motif.
Functions
Xpdf runs on nearly any Unix-like operating system. Binaries are also available for Windows. Xpdf can decode LZW and read encrypted PDFs. The official version obeys the DRM restrictions of PDF files, which can prevent copying, printing, or converting some PDF files. There are patches that make Xpdf ignore these DRM restrictions, and these restrictions are patched out by the Debian distribution.
Xpdf includes several programs that don't need an X Window System, including some that extract images from PDF files or convert PDF to PostScript or text. These programs run on DOS, Windows, Linux and Unix.
Xpdf is also used as a back-end for other PDF reader front-ends such as KPDF and GPDF, and its engine, without the X11 display components, is used in PDF viewers including BePDF on BeOS, !PDF on RISC OS, and PalmPDF on Palm OS and Windows Mobile.
Two versions exist for AmigaOS: a port of Xpdf, which needs a limited X11 environment called Cygnix on the host system, and Apdf. AmigaOS 4 includes AmiPDF, a PDF viewer based on version 3.01 of Xpdf; both Apdf and AmiPDF are native and need no X11.
xpdf-utils
The associated package "xpdf-utils" or "poppler-utils" contains tools such as pdftotext and pdfimages.
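As a rough illustrative sketch, these command-line tools can also be driven from C via the standard library; the file names below are placeholders, and the example assumes the tools are installed and on the PATH:

#include <stdlib.h>

int main(void) {
    /* Extract the text layer, then export embedded images as JPEG files. */
    int rc = system("pdftotext input.pdf output.txt");
    if (rc == 0)
        rc = system("pdfimages -j input.pdf image-prefix");
    return rc;
}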
Exploit
A vulnerability in the Xpdf implementation of the JBIG2 file format, re-used in Apple's iOS phone operating software, was used by the Pegasus spyware to implement a zero-click attack on iPhones by constructing an emulated computer architecture inside a JBIG2 stream. Apple fixed this "FORCEDENTRY" vulnerability in iOS 14.8 in September 2021.
See also
Poppler, a GPL-licensed fork of the xpdf-3.0 rendering library designed for easier reuse in other programs
List of PDF software
Notes and references
Sources
External links
Free PDF readers
Technical communication tools
X Window programs
Amiga software
AmigaOS 4 software
Software that uses Motif (software) |
2286665 | https://en.wikipedia.org/wiki/Enterprise%20asset%20management | Enterprise asset management | Enterprise asset management (EAM) involves the management of the maintenance of physical assets of an organization throughout each asset's lifecycle. EAM is used to plan, optimize, execute, and track the needed maintenance activities with the associated priorities, skills, materials, tools, and information. This covers the design, construction, commissioning, operations, maintenance and decommissioning or replacement of plant, equipment and facilities.
"Enterprise" refers to the scope of the assets in an Enterprise across departments, locations, facilities and, potentially, supporting business functions. Various assets are managed by the modern enterprises at present. The assets may be fixed assets like buildings, plants, machineries or moving assets like vehicles, ships, moving equipments etc. The lifecycle management of the high value physical assets require regressive planning and execution of the work.
History
EAM arose as an extension of the computerized maintenance management system (CMMS) which is usually defined as a system for the computerisation of the maintenance of physical assets.
Enterprise asset management software
Enterprise asset management software is computer software that handles every aspect of running a public works or asset-intensive organization. Enterprise asset management (EAM) software applications include features such as asset life-cycle management, preventive maintenance scheduling, warranty management, integrated mobile wireless handheld options and a portal-based software interface. The rapid development and availability of mobile devices has also affected EAM software, which now often supports mobile enterprise asset management.
See also
Building lifecycle management
References
Sources
Hastings, Nicholas Anthony John. Physical Asset Management. Springer, 2010.
Pascual, R. El Arte de Mantener. Pontificia Universidad Católica de Chile, Santiago, Chile, 2015.
Asset management
Business software
Wireless locating |
32476167 | https://en.wikipedia.org/wiki/IBM%20cloud%20computing | IBM cloud computing | IBM cloud computing is a set of cloud computing services for business offered by the information technology company IBM. IBM Cloud includes infrastructure as a service (IaaS), software as a service (SaaS) and platform as a service (PaaS) offered through public, private and hybrid cloud delivery models, in addition to the components that make up those clouds.
Overview
IBM offers three hardware platforms for cloud computing, each with built-in support for virtualization. On the software side, IBM offers the WebSphere application infrastructure, which supports programming models and open standards for virtualization.
The management layer of the IBM cloud framework includes IBM Tivoli middleware. Management tools provide capabilities to regulate images with automated provisioning and de-provisioning, monitor operations and meter usage while tracking costs and allocating billing. The last layer of the framework provides integrated workload tools. Workloads for cloud computing are services or instances of code that can be executed to meet specific business needs. IBM offers tools for cloud based collaboration, development and test, application development, analytics, business-to-business integration, and security.
History
IBM cloud computing emerged from the union of its mainframe computing and virtualization technologies. Often described as the original virtualization company, IBM made its first experiments in virtualization in the 1960s with the development of the virtual machine (VM) on the CP-40 and CP-67 operating systems. CP-67, a hypervisor used for software testing and development, enabled memory sharing across VMs while giving each user their own virtual memory space. With the machine partitioned into separate VMs, mainframes could run multiple applications and processes at the same time, making the hardware more efficient and cost-effective. IBM began selling VM technology for the mainframe in 1972.
In February 1990, IBM released the RS/6000 (which later became known as IBM Power Systems) based servers. The servers, in combination with the IBM mainframe, were built for complex and mission-critical virtualization. Power systems servers include PowerVM hypervisors with live partition mobility and active memory sharing. Live migration was introduced with POWER6 in May 2007. Next, IBM looked to implement standardization and automation in their technology in order to keep up with the proliferation of data produced by increasingly efficient hardware and data centers. This combination of virtualization, standardization and automation led to the development of IBM cloud computing.
IBM began to develop a strategy for cloud computing in 2007, announcing that it planned to build clouds for enterprise clients and provide services to fill what it regarded as gaps in existing cloud environments. In October 2007, IBM announced a partnership with Google to promote cloud computing in universities. In addition to donating hardware and machines, the two companies also provided a curriculum to teach students about cloud computing.
IBM claimed in April 2011 that 80% of Fortune 500 companies were using IBM cloud, and that their software and services were used by more than 20 million end-user customers, with clients including American Airlines, Aviva, Carfax, Frito-Lay, IndiaFirst Life Insurance Company, and 7-Eleven.
On 4 June 2013 IBM announced its acquisition of SoftLayer, to form an IBM Cloud Services Division.
By March 4, 2014 IBM acquired Cloudant.
IBM Cloud
The IBM SmartCloud brand includes infrastructure as a service, software as a service and platform as a service offered through public, private and hybrid cloud delivery models. IBM places these offerings under three umbrellas: SmartCloud Foundation, SmartCloud Services and SmartCloud Solutions.
SmartCloud Foundation consists of the infrastructure, hardware, provisioning, management, integration and security that serve as the underpinnings of a private or hybrid cloud. Built using those foundational components, PaaS, IaaS and backup services make up SmartCloud Services. Running on this cloud platform and infrastructure, SmartCloud Solutions consist of a number of collaboration, analytics and marketing SaaS applications.
IBM also builds cloud environments for clients that are not necessarily on the SmartCloud Platform. For example, features of the SmartCloud platform—such as Tivoli management software or IBM Systems Director virtualization—can be integrated separately as part of a non-IBM cloud platform. The SmartCloud platform consists solely of IBM hardware, software, services and practices.
IBM SmartCloud Enterprise and SmartCloud Enterprise+ compete with products like those of Rackspace and Amazon Web Services. Erich Clementi, vice president of Global Technology Services at IBM, said in 2012 that the goal with SmartCloud Enterprise and SmartCloud Enterprise+ was to provide an Amazon EC2-like experience primarily for test and development purposes and to provide a more robust experience for production workloads.
In 2011, IBM SmartCloud integrated Hadoop-based InfoSphere BigInsights for big data, Green Hat for software testing and Nirvanix for cloud storage. In 2012, the then new CEO Virginia Rometty said the company planned to spend $20 billion on acquisitions by 2015.
Users may build their own private cloud or purchase services hosted on the IBM cloud. Users may also purchase IBM hardware, software and services to build their customized cloud environment.
By 2014, the name SmartCloud was replaced with products that have a prefix of IBM Cloud. A product called IBM Cloud Manager with OpenStack was IBM's integration of OpenStack along with a multitude of value additions that would serve enterprise customers. A product called IBM Cloud Orchestrator would serve the orchestration needs of an enterprise. The aforementioned SmartCloud products have been discontinued.
By 2016, the aforementioned product called IBM Cloud Manager with OpenStack was discontinued, although the services organization may be using other versions of OpenStack for large scale cloud deployments.
Public, private and hybrid cloud models
IBM offers cloud delivery options including solely private cloud, solely public cloud, and variations in between. Private, public and hybrid clouds are not strictly distinct, as IBM allows the option to build a customized cloud out of a combination of public cloud and private cloud elements. Companies that prefer to keep all data and processes behind their own firewall can use private cloud services managed by their own IT staff. A company may also choose pay-as-you-go pricing. Hybrid cloud options allow for some processes to be hosted and managed by IBM, while others are kept on a private cloud or on a VPN or VLAN. IBM also offers planning and consultation throughout the deployment process. IBM offers five cloud provision models:
Private cloud, owned and operated by the customer
Private cloud, owned by the customer, but operated by IBM (or another provider)
Private cloud, owned and operated by IBM (or another provider)
Virtual private cloud services (based on multi-tenanted support for individual enterprises)
Public cloud services (based on the provision of functions to individuals)
The majority of cloud users choose a hybrid cloud model, with some workloads being served by internal systems, some from commercial cloud providers and some from public cloud service providers.
On August 25, 2011, IBM announced the release of a new hybrid cloud model orchestrated by IBM WebSphere Cast Iron integration of on- and off-premises resources. Enterprises can use Cast Iron integration to link their public cloud appliances—hosted on environments like Amazon EC2, Google Apps, Salesforce.com, Oracle CRM, SugarCRM and a number of others—to their existing systems or in-house, private cloud environments. Cast Iron Integration aims to reduce the time and effort needed for customized coding, in favor of simple workload provisioning through Tivoli Management Framework.
The IBM public cloud offering, SmartCloud Enterprise, was launched on April 7, 2011. SCE is hosted IaaS with service-level agreements (SLAs), and can be offered in a private, public or hybrid model. The environment is hosted on IBM servers (System p or System x), with a standard set of software images to choose from.
For customers who perceive the security risk of cloud computing adoption as too high, IBM offers private cloud services. IDEAS International wrote in a white paper, "IBM believes that its clients are currently more comfortable with private clouds than public or hybrid clouds, and that many are ready to deploy fundamental business applications in private clouds." For building strictly private clouds, IBM offers IBM Workload Deployer and CloudBurst as ready-to-deploy "cloud in a box" products. CloudBurst provides blade servers, middleware and virtualization for an enterprise to build its own cloud-ready virtual machines. Workload Deployer connects an enterprise's existing servers to virtualization components and middleware in order to help deploy standardized virtual machines designed by IBM.
For customers who prefer to perform their own integration of private clouds, IBM offers a choice of hardware and software building blocks, along with recommendations and a reference architecture, prior to deployment. Clients may choose from IBM virtualization-enabled servers, middleware and SaaS applications.
Cloud standards
IBM participates in several cloud standards initiatives within various standards development organizations involved in cloud service models IaaS, PaaS and SaaS, all of which work toward improvements in cloud interoperability and security.
IBM is a member of The Open Group, a council that works for the development of open, vendor-neutral IT standards and certifications. Other members of the group include HP, Oracle, SAP and numerous others. IBM contributed the Cloud Computing Reference Architecture in February 2011 to The Open Group as the basis of an industry-wide cloud architecture. IBM's CCRA is based on real-world input from many cloud implementations across IBM. It is intended to be used as a blueprint/guide for architecting cloud implementations, driven by functional and non-functional requirements of the respective cloud implementation. HP and Microsoft have also published Cloud Computing Reference Architectures.
Within the IaaS space, IBM is a member of the Cloud Management Work Group (CMWG) within the Distributed Management Task Force (DMTF), which released a draft version of their IaaS APIs, called the Cloud Infrastructure Management Interface (CIMI), on September 14, 2011. The CIMI APIs define a logical model for the management of resources within the Infrastructure as a Service domain. With these APIs, clients can create, manage and connect machines, volumes and networks.
For PaaS and SaaS standards, IBM, Red Hat, Cisco, Citrix, EMC and others contribute to the Topology and Orchestration Specification for Cloud Applications (TOSCA) technical committee within Organization for the Advancement of Structured Information Standards (OASIS), which aims to provide a standardized way of managing the lifecycle of cloud services, for portability of cloud based applications. TOSCA's goal is to advance an interoperability standard that will make it easier to deploy cloud applications without vendor lock-in, while maintaining application requirements for security, governance, and compliance.
IBM participates in a number of cloud security related standards including the DMTF Cloud Auditing Data Federation (CADF) working group, and the OASIS Identity in the Cloud (IDCloud) technical committee. CADF is designed to address the need for a cloud provider to provide specific audit event, log and report information on a per-tenant and application basis. IDCloud aims to addresses the serious security challenges posed by identity management in cloud computing and investigates the need for profiles to achieve interoperability within current standards.
IBM founded the Cloud Standards Customer Council (CSCC) in April 2011, with the Object Management Group (OMG) Kaavo, Rackspace and Software AG, as an end user advocacy group that aimed to accelerate adoption of cloud services and eliminate barriers to security and interoperability associated with the transition to the cloud. In addition to contributing standards requirements to various standards development organizations (SDO), the CSCC also creates guides that companies can use on their own path to cloud adoption.
Timeline
March 2018
Cloudant migrated to IBM Cloud
July 2016
IBM Cloud PowerVC Manager
October 2014
IBM Cloud Manager with OpenStack
IBM Cloud Orchestrator
March 2014
Cloudant acquired
October 2011
IBM SmartCloud Application Services
IBM SmartCloud Foundation
IBM SmartCloud Ecosystem
IBM SmartCloud Enterprise+
August 2011
Launch of hybrid cloud with Cast Iron Cloud Integration
July 2011
IBM opens two cloud data centers in Japan
IBM Smarter Commerce
June 2011
IBM SmartCloud Archive launch
IBM SmartCloud Virtualized Server Recovery
IBM SmartCloud Managed Backup
April 2011
Launch of IBM SmartCloud
Launch of IBM SmartCloud Enterprise
IBM Workload Deployer
IBM joins Cloud Standards Customer Council
November 2010
IBM Federal Community Cloud for government organizations
October 2010
IBM Service Delivery Manager
IBM CloudBurst v2.1 (with POWER7-based hardware)
IBM Blueworks Live
July 2010
Announcement of cloud computing data center in Ehningen, Germany
IBM Smart Business Desktop Cloud
February 2010
IBM opens cloud computing data center in Raleigh, North Carolina
November 2009
IBM Smart Business Development and Test on the IBM Cloud
IBM Smart Analytics Cloud
October 2009
IBM Smart Business Storage Cloud
June 2009
IBM Smart Business Services
IBM Cloudburst (later renamed IBM Workload Deployer)
January 2009
LotusLive collaboration suite
See also
AppScale
Amazon Web Services
Engine Yard
ENlight Cloud
Force.com
GoGrid
Google App Engine
Google Compute Engine
Heroku
HP Cloud
Jelastic
Microsoft Azure
Nodejitsu
OpenShift
OpenStack
Oracle Cloud
Rackspace
SAP Cloud Platform
Skytap
VMware
References
External links
IBM cloud services
Cloud computing providers
Cloud computing
Cloud infrastructure
Cloud platforms
Cloud storage
Computer-related introductions in 2010 |
473394 | https://en.wikipedia.org/wiki/Printf%20format%20string | Printf format string | printf format string refers to a control parameter used by a class of functions in the input/output libraries of C and many other programming languages. The string is written in a simple template language: characters are usually copied literally into the function's output, but format specifiers, which start with a % character, indicate the location and method to translate a piece of data (such as a number) to characters.
"printf" is the name of one of the main C output functions, and stands for "print formatted". printf format strings are complementary to scanf format strings, which provide formatted input (parsing). In both cases these provide simple functionality and fixed format compared to more sophisticated and flexible template engines or parsers, but are sufficient for many purposes.
Many languages other than C copy the printf format string syntax closely or exactly in their own I/O functions.
Mismatches between the format specifiers and type of the data can cause crashes and other vulnerabilities. The format string itself is very often a string literal, which allows static analysis of the function call. However, it can also be the value of a variable, which allows for dynamic formatting but also a security vulnerability known as an uncontrolled format string exploit.
History
Early programming languages such as Fortran used special statements with completely different syntax from other calculations to build formatting descriptions. In this example, the format is specified on line 601, and the WRITE command refers to it by line number:
WRITE OUTPUT TAPE 6, 601, IA, IB, IC, AREA
601 FORMAT (4H A= ,I5,5H B= ,I5,5H C= ,I5,
& 8H AREA= ,F10.2, 13H SQUARE UNITS)
ALGOL 68 had a more function-like API, but still used special syntax (the $ delimiters surround special formatting syntax):
printf(($"Color "g", number1 "6d,", number2 "4zd,", hex "16r2d,", float "-d.2d,", unsigned value"-3d"."l$,
"red", 123456, 89, BIN 255, 3.14, 250));
But using the normal function calls and data types simplifies the language and compiler, and allows the implementation of the input/output to be written in the same language. These advantages outweigh the disadvantages (such as a complete lack of type safety in many instances) and in most newer languages I/O is not part of the syntax.
C's printf has its origins in BCPL's writef function (1966). In comparison to C and printf, *N is a BCPL language escape sequence representing a newline character (for which C uses the escape sequence \n), and the order of the format specification's field width and type is reversed in writef:
WRITEF("%I2-QUEENS PROBLEM HAS %I5 SOLUTIONS*N", NUMQUEENS, COUNT)
Probably the first copying of the syntax outside the C language was the Unix printf shell command, which first appeared in Version 4, as part of the port to C.
Format placeholder specification
Formatting takes place via placeholders within the format string. For example, if a program wanted to print out a person's age, it could present the output by prefixing it with "Your age is ", and using the signed decimal specifier character %d to denote that we want the integer for the age to be shown immediately after that message, we may use the format string:
printf("Your age is %d", age);
Syntax
The syntax for a format placeholder is
%[parameter][flags][width][.precision][length]type
Parameter field
This is a POSIX extension and not in C99. The Parameter field can be omitted or can be:
{| class="wikitable"
|-
! Character
! Description
|-
| n$
| n is the number of the parameter to display using this format specifier, allowing the parameters provided to be output multiple times, using varying format specifiers or in different orders. If any single placeholder specifies a parameter, all the rest of the placeholders MUST also specify a parameter. For example, printf("%2$d %2$#x; %1$d %1$#x", 16, 17) produces 17 0x11; 16 0x10.
|}
This feature mainly sees its use in localization, where the order of occurrence of parameters vary due to the language-dependent convention.
On the non-POSIX Microsoft Windows, support for this feature is placed in a separate _printf_p function.
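A minimal sketch of positional parameters on a POSIX system (the strings and values are arbitrary illustrations):

#include <stdio.h>

int main(void) {
    /* The same two arguments are consumed in different orders,
       as a localized format string might require (POSIX only). */
    printf("%1$s has %2$d items\n", "inventory", 42);
    printf("%2$d items are in %1$s\n", "inventory", 42);
    return 0;
}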
Flags field
The Flags field can be zero or more (in any order) of:
{| class="wikitable"
|-
! Character
! Description
|-
| - (minus)
|Left-align the output of this placeholder. (The default is to right-align the output.)
|-
| + (plus)
|Prepends a plus for positive signed-numeric types: positive = +, negative = -. (The default doesn't prepend anything in front of positive numbers.)
|-
| (space)
|Prepends a space for positive signed-numeric types: positive = (space), negative = -. This flag is ignored if the + flag exists. (The default doesn't prepend anything in front of positive numbers.)
|-
| 0 (zero)
|When the 'width' option is specified, prepends zeros for numeric types. (The default prepends spaces.) For example, printf("%4X", 3) produces "   3", while printf("%04X", 3) produces "0003".
|-
| ' (apostrophe)
| The integer or exponent of a decimal has the thousands grouping separator applied.
|-
| # (hash)
| Alternate form: For g and G types, trailing zeros are not removed. For f, F, e, E, g, G types, the output always contains a decimal point. For o, x, X types, the text 0, 0x, 0X, respectively, is prepended to non-zero numbers.
|}
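A small illustrative sketch of the flags above (expected output shown in comments, assuming a typical C99 implementation):

#include <stdio.h>

int main(void) {
    printf("[%d]\n", 42);    /* [42]    default                 */
    printf("[%+d]\n", 42);   /* [+42]   '+' forces a sign       */
    printf("[% d]\n", 42);   /* [ 42]   space reserved for sign */
    printf("[%05d]\n", 42);  /* [00042] '0' pads with zeros     */
    printf("[%#x]\n", 42);   /* [0x2a]  '#' alternate form      */
    return 0;
}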
Width field
The Width field specifies a minimum number of characters to output, and is typically used to pad fixed-width fields in tabulated output, where the fields would otherwise be smaller, although it does not cause truncation of oversized fields.
The width field may be omitted, or a numeric integer value, or a dynamic value when passed as another argument when indicated by an asterisk *. For example, printf("%*d", 5, 10) will result in "   10" being printed, with a total width of 5 characters.
Though not part of the width field, a leading zero is interpreted as the zero-padding flag mentioned above, and a negative value is treated as the positive value in conjunction with the left-alignment flag also mentioned above.
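A brief illustrative sketch of the width behaviors just described (values arbitrary):

#include <stdio.h>

int main(void) {
    printf("[%5d]\n", 10);    /* [   10] minimum width of 5           */
    printf("[%-5d]\n", 10);   /* [10   ] left-aligned via '-'         */
    printf("[%*d]\n", 5, 10); /* [   10] width taken from an argument */
    return 0;
}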
Precision field
The Precision field usually specifies a maximum limit on the output, depending on the particular formatting type. For floating point numeric types, it specifies the number of digits to the right of the decimal point that the output should be rounded. For the string type, it limits the number of characters that should be output, after which the string is truncated.
The precision field may be omitted, or a numeric integer value, or a dynamic value when passed as another argument when indicated by an asterisk *. For example, printf("%.*s", 3, "abcdef") will result in abc being printed.
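A small hedged illustration of precision behavior (values chosen arbitrarily; expected output in comments):

#include <stdio.h>

int main(void) {
    printf("%.2f\n", 3.14159);    /* 3.14   - rounded to two decimals     */
    printf("%.3s\n", "abcdef");   /* abc    - string truncated to 3 chars */
    printf("%.*f\n", 4, 3.14159); /* 3.1416 - precision from an argument  */
    return 0;
}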
Length field
The Length field can be omitted or be any of:
{| class="wikitable"
|-
! Character
! Description
|-
| hh
| For integer types, causes printf to expect an int-sized integer argument which was promoted from a char.
|-
| h
| For integer types, causes printf to expect an int-sized integer argument which was promoted from a short.
|-
| l
| For integer types, causes printf to expect a long-sized integer argument.
For floating point types, this is ignored. float arguments are always promoted to double when used in a varargs call.
|-
| ll
| For integer types, causes printf to expect a long long-sized integer argument.
|-
| L
| For floating point types, causes printf to expect a long double argument.
|-
| z
| For integer types, causes printf to expect a size_t-sized integer argument.
|-
| j
| For integer types, causes printf to expect an intmax_t-sized integer argument.
|-
| t
| For integer types, causes printf to expect a ptrdiff_t-sized integer argument.
|}
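A short sketch of the common length modifiers in use (expected output depends on the values, shown in comments):

#include <stdio.h>

int main(void) {
    long l = 2147483647L;
    long long ll = 9007199254740993LL;
    size_t sz = sizeof ll;

    printf("%ld\n", l);   /* 'l'  matches a long argument         */
    printf("%lld\n", ll); /* 'll' matches a long long argument    */
    printf("%zu\n", sz);  /* 'z'  matches a size_t argument (C99) */
    return 0;
}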
Additionally, several platform-specific length options came to exist prior to widespread use of the ISO C99 extensions:
{| class="wikitable"
|-
! Characters
! Description
|-
| I
| For signed integer types, causes printf to expect a ptrdiff_t-sized integer argument; for unsigned integer types, causes printf to expect a size_t-sized integer argument. Commonly found in Win32/Win64 platforms.
|-
| I32
| For integer types, causes printf to expect a 32-bit (double word) integer argument. Commonly found in Win32/Win64 platforms.
|-
| I64
| For integer types, causes printf to expect a 64-bit (quad word) integer argument. Commonly found in Win32/Win64 platforms.
|-
| q
| For integer types, causes printf to expect a 64-bit (quad word) integer argument. Commonly found in BSD platforms.
|}
ISO C99 includes the inttypes.h header file, which contains a number of macros for use in platform-independent coding. These must be outside double-quotes, e.g. printf("%" PRId64 "\n", t);
Example macros include:
{| class="wikitable"
|-
! Macro
! Description
|-
| PRId32
| Typically equivalent to I32d (Win32/Win64) or d
|-
| PRId64
| Typically equivalent to I64d (Win32/Win64), lld (32-bit platforms) or ld (64-bit platforms)
|-
| PRIi32
| Typically equivalent to I32i (Win32/Win64) or i
|-
| PRIi64
| Typically equivalent to I64i (Win32/Win64), lli (32-bit platforms) or li (64-bit platforms)
|-
| PRIu32
| Typically equivalent to I32u (Win32/Win64) or u
|-
| PRIu64
| Typically equivalent to I64u (Win32/Win64), llu (32-bit platforms) or lu (64-bit platforms)
|-
| PRIx32
| Typically equivalent to I32x (Win32/Win64) or x
|-
| PRIx64
| Typically equivalent to I64x (Win32/Win64), llx (32-bit platforms) or lx (64-bit platforms)
|}
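A minimal sketch using two of these macros; string-literal concatenation joins the pieces around the macro:

#include <stdio.h>
#include <inttypes.h>

int main(void) {
    int64_t t = -9000000000;
    uint32_t u = 4000000000u;

    /* The PRI* macros expand to the correct length/type characters
       for the current platform, so the same code works everywhere. */
    printf("%" PRId64 "\n", t);
    printf("%" PRIu32 "\n", u);
    return 0;
}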
Type field
The Type field can be any of:
{| class="wikitable"
|-
! Character
! Description
|-
| %
|Prints a literal % character (this type doesn't accept any flags, width, precision, length fields).
|-
| d, i
| int as a signed integer. %d and %i are synonymous for output, but are different when used with scanf for input (where using %i will interpret a number as hexadecimal if it's preceded by 0x, and octal if it's preceded by 0.)
|-
| u
| Print decimal unsigned int.
|-
| f, F
| double in normal (fixed-point) notation. f and F only differ in how the strings for an infinite number or NaN are printed (inf, infinity and nan for f; INF, INFINITY and NAN for F).
|-
| e, E
| double value in standard form (d.ddd e±dd). An E conversion uses the letter E (rather than e) to introduce the exponent. The exponent always contains at least two digits; if the value is zero, the exponent is 00. In Windows, the exponent contains three digits by default, e.g. 1.5e002, but this can be altered by the Microsoft-specific _set_output_format function.
|-
| g, G
| double in either normal or exponential notation, whichever is more appropriate for its magnitude. g uses lower-case letters, G uses upper-case letters. This type differs slightly from fixed-point notation in that insignificant zeroes to the right of the decimal point are not included. Also, the decimal point is not included on whole numbers.
|-
| x, X
| unsigned int as a hexadecimal number. x uses lower-case letters and X uses upper-case.
|-
| o
| unsigned int in octal.
|-
| s
|null-terminated string.
|-
| c
| char (character).
|-
| p
| void* (pointer to void) in an implementation-defined format.
|-
| a, A
| double in hexadecimal notation, starting with 0x or 0X. a uses lower-case letters, A uses upper-case letters. (C++11 iostreams have a hexfloat that works the same).
|-
| n
| Print nothing, but writes the number of characters written so far into an integer pointer parameter. In Java this prints a newline.
|}
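A compact sketch exercising several of the conversions above (expected output in comments):

#include <stdio.h>

int main(void) {
    printf("%d %u %o %x %X\n", -42, 42u, 42u, 255u, 255u); /* -42 42 52 ff FF */
    printf("%f %e %g\n", 1.5, 1.5, 1.5); /* 1.500000 1.500000e+00 1.5 */
    printf("%s %c %%\n", "text", 'A');   /* text A % */
    return 0;
}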
Custom format placeholders
There are a few implementations of printf-like functions that allow extensions to the escape-character-based mini-language, thus allowing the programmer to have a specific formatting function for non-builtin types. One of the most well-known is the (now deprecated) glibc register_printf_function(). However, it is rarely used due to the fact that it conflicts with static format string checking. Another is Vstr custom formatters, which allows adding multi-character format names.
Some applications (like the Apache HTTP Server) include their own printf-like function, and embed extensions into it. However, these all tend to have the same problems that register_printf_function() has.
The Linux kernel printk function supports a number of ways to display kernel structures using the generic %p specification, by appending additional format characters. For example, %pI4 prints an IPv4 address in dotted-decimal form. This allows static format string checking (of the %p portion) at the expense of full compatibility with normal printf.
Most languages that have a printf-like function work around the lack of this feature by just using the %s format and converting the object to a string representation.
Vulnerabilities
Invalid conversion specifications
If there are too few function arguments provided to supply values for all the conversion specifications in the template string, or if the arguments are not of the correct types, the results are undefined and the program may crash. Implementations are inconsistent about whether syntax errors in the string consume an argument and what type of argument they consume. Excess arguments are ignored. In a number of cases, the undefined behavior has led to "format string attack" security vulnerabilities. In most C or C++ calling conventions arguments may be passed on the stack, which means that in the case of too few arguments printf will read past the end of the current stack frame, thus allowing the attacker to read the stack.
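A minimal sketch of the unsafe pattern (log_message and its input are hypothetical):

#include <stdio.h>

void log_message(const char *user_input) {
    /* UNSAFE: the input becomes the format string, so input such as
       "%x %x %x" reads values off the stack (and %n can even write). */
    printf(user_input);

    /* Safe: the format string is a literal; the input is mere data. */
    printf("%s", user_input);
}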
Some compilers, like the GNU Compiler Collection, will statically check the format strings of printf-like functions and warn about problems (when using the flags -Wall or -Wformat). GCC will also warn about user-defined printf-style functions if the non-standard "format" attribute is applied to the function.
Field width versus explicit delimiters in tabular output
Using only field widths to provide for tabulation, as with a format like %8d%8d%8d for three integers in three 8-character columns, will not guarantee that field separation will be retained if large numbers occur in the data. Loss of field separation can easily lead to corrupt output. In systems which encourage the use of programs as building blocks in scripts, such corrupt data can often be forwarded into and corrupt further processing, regardless of whether the original programmer expected the output would only be read by human eyes. Such problems can be eliminated by including explicit delimiters, even spaces, in all tabular output formats. Simply changing the dangerous example from before to %7d %7d %7d addresses this, formatting identically until numbers become larger, but then explicitly preventing them from becoming merged on output due to the explicitly included spaces. Similar strategies apply to string data.
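A small sketch of the failure mode and the fix (values arbitrary; behavior shown in comments):

#include <stdio.h>

int main(void) {
    /* Without delimiters the columns merge once a value overflows
       its 8-character width; an explicit space keeps them apart. */
    printf("%8d%8d%8d\n", 1, 2, 123456789);   /* "       1       2123456789" - merged   */
    printf("%7d %7d %7d\n", 1, 2, 123456789); /* fields remain separated by spaces      */
    return 0;
}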
Memory write
Although an outputting function on the surface, printf allows writing to a memory location specified by an argument via %n. This functionality is occasionally used as a part of more elaborate format string attacks.
The %n functionality also makes printf accidentally Turing complete even with a well-formed set of arguments. A game of tic-tac-toe written in the format string is a winner of the 27th IOCCC.
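For completeness, a benign sketch of %n with a literal format string (it stores the count of characters written so far):

#include <stdio.h>

int main(void) {
    int count = 0;
    /* After "hello" has been written, %n stores 5 into count. */
    printf("hello%n world\n", &count);
    printf("count = %d\n", count); /* count = 5 */
    return 0;
}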
Programming languages with printf
Languages that use format strings that deviate from the style in this article (such as AMPL and Elixir), languages that inherit their implementation from the JVM or other environment (such as Clojure and Scala), and languages that do not have a standard native printf implementation but have external libraries which emulate printf behavior (such as JavaScript) are not included in this list.
awk (via sprintf)
C
C++ (also provides overloaded shift operators and manipulators as an alternative for formatted output – see iostream and iomanip)
Objective-C
D
F#
G (LabVIEW)
GNU MathProg
GNU Octave
Go
Haskell
J
Java (since version 1.5) and JVM languages
Julia (via its Printf standard library; Formatting.jl library adds Python-style general formatting and "c-style part of this package aims to get around the limitation that @sprintf has to take a literal string argument.")
Lua (string.format)
Maple
MATLAB
Max (via the sprintf object)
Mythryl
PARI/GP
Perl
PHP
Python (via the % operator)
R
Raku (via printf, sprintf, and fmt)
Red/System
Ruby
Tcl (via format command)
Transact-SQL (via xp_sprintf)
Vala (via print() and string.printf())
The printf utility command, sometimes built in to the shell, such as with some implementations of the KornShell (ksh), Bourne again shell (bash), or Z shell (zsh). These commands usually interpret C escapes in the format string.
See also
Format (Common Lisp)
C standard library
Format string attack
iostream
ML (programming language)
printf debugging
printf (Unix)
printk (print kernel messages)
scanf
string interpolation
References
External links
C++ reference for printf
gcc printf format specifications quick reference
The specification in Java 1.5
GNU Bash builtin
Articles with example C code
C standard library
Unix software |
900407 | https://en.wikipedia.org/wiki/DigiPen%20Institute%20of%20Technology | DigiPen Institute of Technology | DigiPen Institute of Technology is a private, for-profit university in Redmond, Washington. It also has campuses in Singapore and Bilbao, Spain. DigiPen offers bachelor's and master's degree programs in Computer Science, Animation, Video Game Development, Game Design, Sound Design, and Computer Engineering.
DigiPen also offers summer programs for students in grades K-12, online courses and year-long high school programs.
History
In 1988, DigiPen was founded by Claude Comair in Vancouver, British Columbia, Canada as a research and development institute for computer science and animation. Comair continues to serve as president and CEO.
In 1990, DigiPen began offering its first dedicated educational program, in 3D computer animation, through the Vancouver Film School, and began collaborating with Nintendo of America to create a post-secondary program for video game programming. With Nintendo's support, DigiPen Applied Computer Graphics School accepted its first class of video game programming students in 1994.
In 1996, the Washington State Higher Education Coordinating Board (HECB) granted DigiPen the authorization to award degree programs in the United States. DigiPen's first offered degree program was the Bachelor of Science in Real-Time Interactive Simulation.
In 1998, DigiPen Institute of Technology opened its campus in Redmond, Washington as a joint campus between DigiPen and Nintendo Software Technology. Redmond became DigiPen's home.
In 2002, DigiPen received national accreditation from the ACCSC and began offering its first master's program, a Master of Science in Computer Science. DigiPen graduated the last classes of its associate programs, thereafter offering only undergraduate and postgraduate programs.
In 2008, DigiPen Institute of Technology opened its campus in Singapore in conjunction with Singapore's Economic Development Board. Also in 2008, DigiPen's Research & Development arm created an Artificial Intelligence system regarding human behavioral modeling and simulation, titled B-HIVE, for The Boeing Company and their Phantom Works division. B-HIVE and its associated patents were commended as Boeing's "Supplier Technology of the Year" in 2008.
In 2010, DigiPen relocated its main campus to an independent location, still in Redmond, Washington. DigiPen Institute of Technology Singapore joined the public university Singapore Institute of Technology.
In 2011, DigiPen Institute of Technology opened its campus in the Greater Bilbao area, in the municipality of Zierbena.
In 2015, DigiPen's Singapore campus moved to the Singapore Polytechnic campus, while Singapore Institute of Technology’s joint campus began development. DigiPen's Bachelor of Science in Computer Engineering received ABET accreditation.
Campuses
Redmond, Washington, United States
DigiPen's main campus is located at 9931 Willows Road in Redmond. It offers 9 undergraduate and 2 postgraduate degree programs. It has approximately 1200 full-time students with a faculty-to-student ratio of 1:11 and an average class size of 22. International students make up 13% of the total student population, and 24% of students are women. There are approximately 50 student-run organizations on campus. King County is home to a significant number of technology and game development companies.
Singapore
DigiPen's Singapore campus is DigiPen's first international campus since its establishment in the United States. DigiPen opened its Singapore campus in conjunction with Singapore's Economic Development Board in 2008. Currently, DigiPen operates as an Overseas University Affiliate (Third Party Education Service Provider) for the public university Singapore Institute of Technology. Therefore, DigiPen's Singapore campus arranges courses for the students of Singapore Institute of Technology only, and does not enroll students directly or issue undergraduate certificates independently. DigiPen's Singapore campus offers 5 undergraduate degree programs, including a Bachelor of Engineering with Honours in Systems Engineering (ElectroMechanical Systems) which is not offered in any other campus. Moreover, this campus regularly offers Specialist Diploma Programs (unaccredited) jointly with Workforce Singapore. DigiPen's Singapore campus has approximately 900 full-time students from Singapore Institute of Technology.
Bilbao, Spain
DigiPen's Europe campus is located in Ribera de Zorrotzaurre, 2 in the city of Bilbao. It offers 2 undergraduate degree programs, with approximately 200 full-time students.
International university partnerships
Keimyung University in Daegu, South Korea and DigiPen Institute of Technology have a collaboration where students local to South Korea have the option to spend a program's first 5 semesters in Daegu, taught by DigiPen Faculty members, and the remaining semesters in Redmond.
Thammasat University in Bangkok, Thailand and DigiPen Institute of Technology have a collaboration where students local to Thailand have the option to spend a program's first 2-4 semesters in Bangkok and the remaining semesters in Redmond.
Academics
Primary educational paths
CS – Standard computer science education with an emphasis on high-performance programming.
RTIS – Real Time Interactive Simulation (core game engine architecture and programming)
GD – Game Design (gameplay architecture, programming and scripting)
FA – Fine Arts (digital art and animation).
E – Engineering (computer hardware and software architecture)
MSD – Music and Sound Design
DA – Digital Audio
Accreditation
DigiPen is accredited by the Accrediting Commission of Career Schools and Colleges. DigiPen's Bachelor of Science in Computer Science in Real-Time Interactive Simulation and Bachelor of Science in Computer Engineering are accredited by the Accreditation Board for Engineering and Technology.
Research and development
DigiPen Research & Development performs research for The Boeing Company and has received the commendations of Boeing Supplier of the Year Award in Technology in 2008 and Boeing Performance Excellence Award in 2008, 2013, and 2014. DigiPen has a professional relationship with Phantom Works and BR&T.
DigiPen Research & Development is active in the research in Formula 1 and INDYCAR, and is technical sponsor of Renault F1 (2008–present) and Andretti Autosport (2015–present).
Criticism
DigiPen has been criticized for asserting ownership over the copyright of work performed by their students.
In a 2021 video entitled "DigiPen: The College That Teaches Crunch Culture", gaming journalist and YouTuber Jim Sterling, citing a series of anonymous interviews they had conducted with former DigiPen students, accused the school of conditioning students into the crunch culture of the larger video game industry and criticized practices such as overburdening students with impossible workloads and class requirements, failure to communicate grades and academic standing with students, and hiring and maintaining professors with outdated industry experience who reportedly engaged in yelling, intimidation, and other abusive behavior towards students. According to Sterling, DigiPen was reached out to for comment a week before the video's release and did not respond.
Rankings
In 2015, Business Insider published a list of the 50 best computer science and engineering schools in USA in which DigiPen was ranked 50th. In that same year, the Princeton Review ranked DigiPen No. 3 on its annual list of "Top 25 Undergraduate Schools to Study Game Design," as well as No. 5 on its list of "Top 25 Graduate Schools to Study Game Design".
In 2019, Animation Career Review ranked DigiPen as the sixth best Video Game University in the United States. In that same year, Princeton Review ranked DigiPen No.4 on its annual list of "Top 50 Undergraduate Schools to Study Game Design", as well as No. 6 on its list of "Top 25 Graduate Schools to Study Game Design".
DigiPen has consistently ranked as a top US undergraduate school for game design in the Princeton Review and Animation Career Review since 2010 and 2015 respectively. However, apart from the Business Insider list in 2015, no other ranking in later years (such as the Times Higher Education World University Rankings 2019 for computer science) has ranked DigiPen as a computer science and engineering school.
Notable faculty
Ellen Beeman - fantasy and science fiction author, television screenwriter, and computer game designer/producer. Beeman is a Senior Lecturer of Game Software Design and Production at DigiPen.
Notable alumni
Nate Martin - "Founding Father of Escape Rooms", Co-founder & CEO of Puzzle Break
Patrick Hackett - Co-creator of Tilt Brush.
Kim Swift - designer on the Portal team
Aubrey Edwards - video game developer and professional wrestling referee
References
External links
Official website
Engineering universities and colleges in Washington (state)
Private universities and colleges in Washington (state)
Video game universities
Educational institutions established in 1988
For-profit universities and colleges in the United States
Universities and colleges in King County, Washington
Education in Redmond, Washington
1988 establishments in Washington (state) |
27125334 | https://en.wikipedia.org/wiki/NixOS | NixOS | NixOS is a Linux distribution built on top of the Nix package manager. It uses declarative configuration and allows reliable system upgrades. Several official package "channels" are offered, including the current Stable release and an Unstable channel that follows the latest development. NixOS has tools dedicated to DevOps and deployment tasks.
History
In 2003, Eelco Dolstra started NixOS as a research project. In 2015, the Stichting NixOS was founded aiming to support projects like NixOS that implement the purely functional deployment model.
Versions
NixOS follows a cadenced release schedule, with two releases per year. Releases used to happen around March and September but, starting with 21.05, NixOS targets May and November instead. Each version number has the format "YY.MM"; for instance, "20.03" was the version released in March 2020. Each version of NixOS also has a name, such as "Markhor" for release 20.03.
Features
Declarative configuration model
In NixOS, the entire operating system – the kernel, applications, system packages, configuration files, and so on – is built by the Nix package manager from a description in a functional build language. This means that building a new configuration cannot overwrite previous configurations.
A NixOS system is configured by writing a specification of the functionality that the user wants on their machine in a global configuration file. For instance, here is a minimal configuration of a machine running an SSH daemon:
{
boot.loader.grub.device = "/dev/sda";
fileSystems."/".device = "/dev/sda1";
services.sshd.enable = true;
}
After changing the configuration file, the system can be updated using the nixos-rebuild switch command. This command does everything necessary to apply the new configuration, including downloading and compiling packages and generating configuration files.
Reliable upgrades
Since Nix files are pure and declarative, evaluating them will always produce the same result, regardless of what packages or configuration files are on the system. Thus, upgrading a system is as reliable as reinstalling from scratch.
Atomic upgrades
NixOS has a transactional approach to configuration management making configuration changes such as upgrades atomic. This means that if the upgrade to a new configuration is interrupted – say, the power fails half-way through – the system will still be in a consistent state: it will either boot in the old or the new configuration. In other systems, a machine might end up in an inconsistent state, and may not even boot anymore.
Rollbacks
If after a system update the new configuration is undesirable, it can be rolled back using a special command (nixos-rebuild switch --rollback). Every system configuration version automatically shows up at the system boot menu. If the new configuration crashes or does not boot properly, an older version can be selected. Rollbacks are lightweight operations that do not involve files being restored from copies.
Reproducible system configurations
NixOS's declarative configuration model makes it easy to reproduce a system configuration on another machine. Copying the configuration file to the target machine and running the system update command generates the same system configuration (kernel, applications, system services, and so on) except for parts of the system not managed by the package manager such as user data.
Source-based model with binary cache
The Nix build language used by NixOS specifies how to build packages from source. This makes it easy to adapt the system to user needs. However, building from source being a slow process, the package manager automatically downloads pre-built binaries from a cache server when they are available. This gives the flexibility of a source-based package management model with the efficiency of a binary model.
Consistency
The Nix package manager ensures that the running system is consistent with the logical specification of the system, meaning that it will rebuild all packages that need to be rebuilt. For instance, if the kernel is changed then the package manager will ensure that external kernel modules will be rebuilt. Similarly, when a library is updated it ensures that all the system packages use the new version, even packages statically linked to it.
Multi-user package management
There is no need for special privileges to install software in NixOS. In addition to the system-wide profile, every user has a dedicated profile in which they can install packages. Nix also allows multiple versions of a package to coexist, so different users can have different versions of the same package installed in their respective profiles. If two users install the same version of a package, only one copy will be built or downloaded. Nix's security model ensures that this is safe, because only users explicitly trusted by the system configuration are allowed to use build parameters that would let them control the content of a derivation's output (such as adding impurities to the sandbox or using an untrusted substituter). Without those parameters, paths can only be substituted from a substituter trusted by the system, or by a local sandboxed build, which is implicitly trusted.
Implementation
NixOS is based on the Nix package manager that stores all packages in isolation from each other in the package store.
Installed packages are identified by a cryptographic hash of all input used for their build. Changing the build instructions of a package modifies its hash and that will result in a different package installed in the package store. This system is also used to manage configuration files ensuring that newer configurations are not overwriting older ones.
An implication of this is that NixOS doesn't follow the Filesystem Hierarchy Standard. The only exceptions are a symlink /bin/sh to the version of bash in the Nix store (like this: /nix/store/s/5rnfzla9kcx4mj5zdc7nlnv8na1najvg-bash-4.3.43/); and while NixOS does have an /etc directory to keep system-wide configuration files, most files in that directory are symlinks to generated files in /nix/store such as /nix/store/s2sjbl85xnrc18rl4fhn56irkxqxyk4p-sshd_config. Not using global directories such as /bin is part of what allows multiple versions of a package to coexist.
Reception
Jesse Smith reviewed NixOS 15.09 for DistroWatch Weekly. Smith wrote:
DistroWatch Weekly also has a review of NixOS 17.03, written by Ivan Sanders.
See also
Nix package manager – The package manager upon which NixOS is based
GNU Guix System – An operating system built on GNU Guix that is inspired by Nix
References
External links
Nix packages repository
Unofficial NixOS Wiki
Operating system security
X86-64 Linux distributions
Linux distributions
Source-based Linux distributions |
7066791 | https://en.wikipedia.org/wiki/Disk%20encryption | Disk encryption | Disk encryption is a technology which protects information by converting it into unreadable code that cannot be deciphered easily by unauthorized people. Disk encryption uses disk encryption software or hardware to encrypt every bit of data that goes on a disk or disk volume. It is used to prevent unauthorized access to data storage.
The expression full disk encryption (FDE) (or whole disk encryption) signifies that everything on the disk is encrypted, but the master boot record (MBR), or similar area of a bootable disk, with code that starts the operating system loading sequence, is not encrypted. Some hardware-based full disk encryption systems can truly encrypt an entire boot disk, including the MBR.
Transparent encryption
Transparent encryption, also known as real-time encryption and on-the-fly encryption (OTFE), is a method used by some disk encryption software. "Transparent" refers to the fact that data is automatically encrypted or decrypted as it is loaded or saved.
With transparent encryption, the files are accessible immediately after the key is provided, and the entire volume is typically mounted as if it were a physical drive, making the files just as accessible as any unencrypted ones. No data stored on an encrypted volume can be read (decrypted) without using the correct password/keyfile(s) or correct encryption keys. The entire file system within the volume is encrypted (including file names, folder names, file contents, and other meta-data).
To be transparent to the end-user, transparent encryption usually requires the use of device drivers to enable the encryption process. Although administrator access rights are normally required to install such drivers, encrypted volumes can typically be used by normal users without these rights.
In general, every method in which data is seamlessly encrypted on write and decrypted on read, in such a way that the user and/or application software remains unaware of the process, can be called transparent encryption.
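As a conceptual sketch of this interception pattern (the sector interface is hypothetical, and the toy XOR merely stands in for a real cipher such as AES-XTS; it provides no actual security):

#include <stdint.h>
#include <stddef.h>

#define SECTOR_SIZE 512

/* Toy stand-in for a real tweakable block cipher (e.g. AES-XTS). */
static void toy_cipher(uint8_t *buf, uint64_t sector, uint8_t key) {
    for (size_t i = 0; i < SECTOR_SIZE; i++)
        buf[i] ^= key ^ (uint8_t)sector;
}

/* The driver encrypts data on its way to disk and decrypts it on
   reads, so the applications above only ever see plaintext. */
void on_write(uint8_t *buf, uint64_t sector, uint8_t key) {
    toy_cipher(buf, sector, key); /* encrypt in place, then hand to the disk driver */
}

void on_read(uint8_t *buf, uint64_t sector, uint8_t key) {
    toy_cipher(buf, sector, key); /* decrypt in place after the disk driver fills buf */
}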
Disk encryption vs. filesystem-level encryption
Disk encryption does not replace file encryption in all situations. Disk encryption is sometimes used in conjunction with filesystem-level encryption with the intention of providing a more secure implementation. Since disk encryption generally uses the same key for encrypting the whole drive, all of the data can be decrypted when the system runs. However, some disk encryption solutions use multiple keys for encrypting different volumes. If an attacker gains access to the computer at run-time, the attacker has access to all files. Conventional file and folder encryption instead allows different keys for different portions of the disk. Thus an attacker cannot extract information from still-encrypted files and folders.
Unlike disk encryption, filesystem-level encryption does not typically encrypt filesystem metadata, such as the directory structure, file names, modification timestamps or sizes.
Disk encryption and Trusted Platform Module
Trusted Platform Module (TPM) is a secure cryptoprocessor embedded in the motherboard that can be used to authenticate a hardware device. Since each TPM chip is unique to a particular device, it is capable of performing platform authentication. It can be used to verify that the system seeking the access is the expected system.
A limited number of disk encryption solutions have support for TPM. These implementations can wrap the decryption key using the TPM, thus tying the hard disk drive (HDD) to a particular device. If the HDD is removed from that particular device and placed in another, the decryption process will fail. Recovery is possible with the decryption password or token.
Although this has the advantage that the disk cannot be removed from the device, it might create a single point of failure in the encryption. For example, if something happens to the TPM or the motherboard, a user would not be able to access the data by connecting the hard drive to another computer, unless that user has a separate recovery key.
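The wrapping step itself can be sketched in software. In the sketch below, assuming the third-party pyca/cryptography package, AES key wrap (RFC 3394) stands in for the TPM's sealing operation, and device_key is a hypothetical stand-in for a key that in reality never leaves the TPM chip:

import os
from cryptography.hazmat.primitives.keywrap import (
    InvalidUnwrap, aes_key_unwrap, aes_key_wrap)

device_key = os.urandom(32)   # stand-in for a key sealed inside this machine's TPM
disk_key = os.urandom(32)     # the symmetric key that actually encrypts the drive

# Only the wrapped blob is stored on the drive; on its own it is useless.
wrapped = aes_key_wrap(device_key, disk_key)

# On the original device, the blob is unwrapped at boot.
assert aes_key_unwrap(device_key, wrapped) == disk_key

# Moved to another machine (hence a different device key), unwrapping fails,
# which is why a separate recovery password or token is needed.
try:
    aes_key_unwrap(os.urandom(32), wrapped)
except InvalidUnwrap:
    print("unwrap failed: the drive is tied to the original device")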
Implementations
There are multiple tools available in the market that allow for disk encryption. However, they vary greatly in features and security. They are divided into three main categories: software-based, hardware-based within the storage device, and hardware-based elsewhere (such as CPU or host bus adaptor). Storage devices with hardware-based full disk encryption built in are called self-encrypting drives and have no impact on performance whatsoever. Furthermore, the media-encryption key never leaves the device itself and is therefore not available to any virus in the operating system.
The Trusted Computing Group Opal Storage Specification provides industry accepted standardization for self-encrypting drives. External hardware is considerably faster than the software-based solutions, although CPU versions may still have a performance impact, and the media encryption keys are not as well protected.
All solutions for the boot drive require a pre-boot authentication component, which is available for all types of solutions from a number of vendors. In all cases, the authentication credentials are typically a major potential weakness, since the symmetric cryptography itself is usually strong.
Password/data recovery mechanism
Secure and safe recovery mechanisms are essential to the large-scale deployment of any disk encryption solutions in an enterprise. The solution must provide an easy but secure way to recover passwords (most importantly data) in case the user leaves the company without notice or forgets the password.
Challenge–response password recovery mechanism
Challenge–response password recovery mechanism allows the password to be recovered in a secure manner. It is offered by a limited number of disk encryption solutions.
Some benefits of challenge–response password recovery:
No need for the user to carry a disc with recovery encryption key.
No secret data is exchanged during the recovery process.
No information can be sniffed.
Does not require a network connection, i.e. it works for users that are at a remote location.
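A minimal sketch of such a challenge–response exchange, using only the Python standard library; the enrollment step, the helpdesk's identity check, and the response format are simplifying assumptions:

import hashlib
import hmac
import secrets

# Provisioned to both the client machine and the helpdesk at enrollment.
recovery_secret = secrets.token_bytes(32)

def respond(secret: bytes, challenge: str) -> str:
    # The response proves knowledge of the secret without revealing it.
    return hmac.new(secret, challenge.encode(), hashlib.sha256).hexdigest()[:16]

# 1. The locked client displays a random challenge, read aloud to the helpdesk.
challenge = secrets.token_hex(8)

# 2. The helpdesk, after verifying the caller's identity, computes the response.
response = respond(recovery_secret, challenge)

# 3. The client recomputes the response and compares; on a match it can
#    release or reset the disk key. Only the challenge and the short response
#    cross the phone line, so no secret data is exchanged.
assert hmac.compare_digest(response, respond(recovery_secret, challenge))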
Emergency recovery information (ERI)-file password recovery mechanism
An emergency recovery information (ERI) file provides an alternative for recovery if a challenge–response mechanism is infeasible, for example because of the cost of helpdesk operators in small companies, or because of implementation challenges.
Some benefits of ERI-file recovery:
Small companies can use it without implementation difficulties.
No secret data is exchanged during the recovery process.
No information can be sniffed.
Does not require a network connection, i.e. it works for users that are at a remote location.
Security concerns
Most full disk encryption schemes are vulnerable to a cold boot attack, whereby encryption keys can be stolen by cold-booting a machine already running an operating system, then dumping the contents of memory before the data disappears. The attack relies on the data remanence property of computer memory, whereby data bits can take up to several minutes to degrade after power has been removed. Even a Trusted Platform Module (TPM) is not effective against the attack, as the operating system needs to hold the decryption keys in memory in order to access the disk.
Full disk encryption is also vulnerable when a computer is stolen while suspended. As wake-up does not involve a BIOS boot sequence, it typically does not ask for the FDE password. Hibernation, in contrast, goes through a BIOS boot sequence and is safe.
All software-based encryption systems are vulnerable to various side channel attacks such as acoustic cryptanalysis and hardware keyloggers. In contrast, self-encrypting drives are not vulnerable to these attacks since the hardware encryption key never leaves the disk controller.
Also, most full disk encryption schemes don't protect against data tampering (or silent data corruption, i.e. bitrot); they provide privacy, but not integrity. Block cipher-based encryption modes used for full disk encryption are not themselves authenticated encryption, because of concerns about the storage overhead needed for authentication tags. Thus, if data on the disk were tampered with, it would decrypt to garbled random data when read, and with luck errors may be indicated depending on which data is tampered with (for OS metadata, by the file system; for file data, by the corresponding program that processes the file). One way to mitigate these concerns is to use file systems with full data-integrity checks via checksums (like Btrfs or ZFS) on top of full disk encryption. However, cryptsetup has started to experimentally support authenticated encryption.
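The distinction can be demonstrated directly: under an authenticated mode such as AES-GCM, flipping a single ciphertext bit is detected at decryption time, whereas the unauthenticated block modes typical of full disk encryption would silently return garbled plaintext. A minimal sketch, assuming the third-party pyca/cryptography package:

import os
from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key, nonce = os.urandom(32), os.urandom(12)
aead = AESGCM(key)
ciphertext = aead.encrypt(nonce, b"filesystem block contents", None)

tampered = bytearray(ciphertext)
tampered[0] ^= 0x01               # simulate bitrot or deliberate tampering

try:
    aead.decrypt(nonce, bytes(tampered), None)
except InvalidTag:
    # The authentication tag no longer matches, so decryption refuses
    # to return the corrupted plaintext.
    print("tampering detected")

The price of this detection is exactly the storage overhead mentioned above: a per-block authentication tag (16 bytes per block for AES-GCM).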
Full disk encryption
Benefits
Full disk encryption has several benefits compared to regular file or folder encryption, or encrypted vaults. The following are some benefits of disk encryption:
Nearly everything, including the swap space and the temporary files, is encrypted. Encrypting these files is important, as they can reveal important confidential data. With a software implementation, however, the bootstrapping code cannot be encrypted. For example, BitLocker Drive Encryption leaves an unencrypted volume to boot from, while the volume containing the operating system is fully encrypted.
With full disk encryption, the decision of which individual files to encrypt is not left up to users' discretion. This is important for situations in which users might not want or might forget to encrypt sensitive files.
Immediate data destruction, such as simply destroying the cryptographic keys (crypto-shredding), renders the contained data useless. However, if security towards future attacks is a concern, purging or physical destruction is advised.
The boot key problem
One issue to address in full disk encryption is that the blocks where the operating system is stored must be decrypted before the OS can boot, meaning that the key has to be available before there is a user interface to ask for a password. Most full disk encryption solutions utilize pre-boot authentication by loading a small, highly secure operating system which is strictly locked down and hashed against system variables to check the integrity of the pre-boot kernel. Some implementations, such as BitLocker Drive Encryption, can make use of hardware such as a Trusted Platform Module to ensure the integrity of the boot environment, and thereby frustrate attacks that target the boot loader by replacing it with a modified version. This ensures that authentication can take place in a controlled environment without the possibility of a bootkit being used to subvert the pre-boot decryption.
With a pre-boot authentication environment, the key used to encrypt the data is not decrypted until an external key is input into the system.
Solutions for storing the external key include:
Username / password
Using a smartcard in combination with a PIN
Using a biometric authentication method such as a fingerprint
Using a dongle to store the key, assuming that the user will not allow the dongle to be stolen with the laptop or that the dongle is encrypted as well
Using a boot-time driver that can ask for a password from the user
Using a network interchange to recover the key, for instance as part of a PXE boot
Using a TPM to store the decryption key, preventing unauthorized access of the decryption key or subversion of the boot loader
Using a combination of the above
All these possibilities have varying degrees of security; however, most are better than an unencrypted disk.
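For the common password case, the usual construction is to stretch the passphrase into a key-encrypting key (KEK) that unwraps a randomly generated volume key, in the style of LUKS key slots. A minimal sketch, assuming the third-party pyca/cryptography package and illustrative parameters:

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from cryptography.hazmat.primitives.keywrap import aes_key_unwrap, aes_key_wrap

def kek_from_passphrase(passphrase: bytes, salt: bytes) -> bytes:
    # Deliberately slow derivation makes guessing the passphrase expensive.
    return PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                      salt=salt, iterations=600_000).derive(passphrase)

# At setup time: a random volume key is generated and stored only in wrapped form.
salt = os.urandom(16)
volume_key = os.urandom(32)
wrapped = aes_key_wrap(kek_from_passphrase(b"correct horse", salt), volume_key)

# At boot: the pre-boot prompt re-derives the KEK and unwraps the volume key.
unwrapped = aes_key_unwrap(kek_from_passphrase(b"correct horse", salt), wrapped)
assert unwrapped == volume_key

A consequence of this indirection is that changing the password only requires re-wrapping the volume key, not re-encrypting the whole disk.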
See also
Comparison of disk encryption software
Digital forensics
Disk encryption hardware
Disk encryption software
Disk encryption theory
Encryption
Filesystem-level encryption
Hardware-based full disk encryption
In re Boucher
Single sign-on
References
Further reading
External links
Presidential Mandate requiring data encryption on US government agency laptops
On-The-Fly Encryption: A Comparison – Reviews and lists the different features of disk encryption systems (archived version from January 2013)
All about on-disk/full-disk encryption on one page – covers the use of dm-crypt/LUKS on Linux, starting with theory and ending with many practical examples about its usage (archived version from September 2015).
Buyer's Guide to Full Disk Encryption – Overview of full-disk encryption, how it works, and how it differs from file-level encryption, plus an overview of leading full-disk encryption software. |
61847462 | https://en.wikipedia.org/wiki/Alwin%20Walther | Alwin Walther | Alwin Oswald Walther (born 6 May 1898 in Reick; died 4 January 1967 in Darmstadt) was a German mathematician, engineer and professor. He is one of the pioneers of mechanical computing technology in Germany.
Life
Alwin Walther was born in May 1898 in Reick near Dresden. From 1916 to 1919 Walther served in the military; he was wounded twice and received the Iron Cross 1st Class. From 1919 to 1922 he studied mathematics at the Technical University of Dresden and the University of Göttingen. In 1922, he received his doctorate (Dr. rer. tech., corresponding to today's Dr.-Ing.) from the University of Göttingen under the supervision of Gerhard Kowalewski and . From 1922 to 1928, he was an assistant and later senior assistant to Richard Courant at the Mathematical Institute of the University of Göttingen. In 1924, he habilitated and became a Privatdozent; the year before, he had stayed in Copenhagen for scientific purposes. From 1926 to 1927 he was a Rockefeller Fellow in Copenhagen and Stockholm. On 1 April 1928 Walther became a full professor of mathematics at the Technische Hochschule Darmstadt and director of the Institute for Applied Mathematics, which he built up. In 1955, he was a visiting professor at the University of California, Berkeley.
Alwin Walther, Heinz Billing, Helmut Schreyer, Konrad Zuse and Alan Turing met in Göttingen in 1947. In the form of a colloquium, British experts (including John R. Womersley, Arthur Porter and Alan Turing) interviewed Walther, Billing, Schreyer and Zuse.
Walther retired on 30 September 1966. A few months later he died after a short illness at the age of 68 years in Darmstadt.
Work
Walther attached great importance to questions of the practical application of mathematics and was one of the first to adapt mathematics to the requirements of engineers. In the early 1930s he developed the "System Darmstadt" slide rule, which was widely used in engineering.
On his initiative, the German Computing Centre in Darmstadt and the International Computing Centre in Rome were built.
Walther was a nominator in two nominations for the Nobel Prize in Physics: Peter Debye (1930) and Enrico Fermi (1936).
Peter Schnell, founder of Software AG, Rudolf Zurmühl and Helmut Hoelzer, the inventor and constructor of the world's first electronic analog computer, were his students.
From 1952 to 1955 he was chairman of the Gesellschaft für Angewandte Mathematik und Mechanik (GAMM). From 1958 he was board member of the Association for Computing Machinery (ACM) and from 1959 to 1962 he was vice president of the newly founded International Federation for Information Processing (IFIP).
Alwin Walther was active for many years in the Association of Friends of the Technische Universität Darmstadt. In March 1933 he became its deputy secretary, and from the following year until the late 1940s he served as its treasurer. In 1950, the general assembly appointed him an honorary member of the Association.
Institute for Applied Mathematics
In 1928, Alwin Walther built up the Institute for Applied Mathematics at the Technische Hochschule Darmstadt, the first institute for applied mathematics in Germany. The focus of the institute was the development of electronic computing. As early as the late 1930s, he set up a computing station in his institute whose computing capacity was unique in Europe at the time. At this computing station, two decades before the invention of programming languages, algorithms were tested and used successfully to process problems from industry.
In Germany, the beginnings of computer science go back to the Institute for Applied Mathematics of the TH Darmstadt. In 1956, the first programming lectures and internships in Germany were offered at the TH Darmstadt.
The Institute for Applied Mathematics contributed to Zuse's Z4 by providing parts and components.
In 1951, development of the digital electronic computing machine "Darmstädter Elektronischer Rechenautomat" (DERA), built with vacuum-tube technology, was started. Around the same time, Walther procured a computer of the highest performance class, an IBM 650, for the TH Darmstadt, which thus became the first university in Germany to have a mainframe computer.
Due to the reputation that the TH Darmstadt had at the time in computer science research, the first international congress on computer science held in German-speaking countries took place in October 1955 at the TH Darmstadt.
Awards
1950: Honorary member of the Association of Friends of the Technische Universität Darmstadt.
1959: Silver Medal of the City of Paris on the occasion of his preparation for a UNESCO Congress for Information Processing
1963: Silver merit plaque of the city Darmstadt
1963: Honorary doctorate from the TU Dresden
Publications (selection)
Einführung in die mathematische Behandlung naturwissenschaftlicher Fragen, Springer, Berlin, 1928.
Unterricht und Forschung im Institut für Praktische Mathematik (IPM) der Technischen Hochschule Darmstadt, Der mathematische Unterricht in der Bundesrepublik Deutschland, Kapitel XVIII, S. 260–274.
Fastperiodische Folgen und Potenzreihen mit fastperiodischen Koeffizienten; Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg; 6,217–234 (1928).
Alwin-Walther-Medal
From 1997 to 2010, the departments of computer science and mathematics at the Technische Universität Darmstadt awarded an Alwin-Walther-Medal for outstanding achievements, as well as for exceptional research and development work in the fields of computer science or applied mathematics.
Literature
Melanie Hanel: Normalität unter Ausnahmebedingungen. Die TH Darmstadt im Nationalsozialismus, Darmstadt 2014.
Technische Universität Darmstadt: Technische Bildung in Darmstadt. Die Entwicklung der Technischen Hochschule 1836–1996, Volume 4, Darmstadt 1998.
Wilhelm Barth: Alwin Walther – Praktische Mathematik und Computer an der THD, in: Technische Hochschule Darmstadt Jahrbuch 1978/79, Darmstadt, 1979, S. 29–34.
Alwin Walther: Pionier des Wissenschaftlichen Rechnens, Wissenschaftliches Kolloquium anlässlich des hundertsten Geburtstages, 8. Mai 1998, Darmstadt 1999, published by Hans-Jürgen Hoffmann.
Christa Wolf and Marianne Viefhaus: Verzeichnis der Hochschullehrer der TH Darmstadt, Darmstadt 1977.
External links
Klaus Biener: Alwin Walther – Pionier der Praktischen Mathematik. cms-journal Nr. 18, August 1999, Humboldt-University of Berlin.
Alwin Walther presents DERA in 1963 and answers questions about the future, a film on YouTube (Video, 37 min).
Notes and references
1967 deaths
1898 births
Technische Universität Darmstadt faculty
20th-century German mathematicians |
31555577 | https://en.wikipedia.org/wiki/Skylake%20%28microarchitecture%29 | Skylake (microarchitecture) | Skylake is the codename used by Intel for a processor microarchitecture that was launched in August 2015 succeeding the Broadwell microarchitecture. Skylake is a microarchitecture redesign using the same 14 nm manufacturing process technology as its predecessor, serving as a tock in Intel's tick–tock manufacturing and design model. According to Intel, the redesign brings greater CPU and GPU performance and reduced power consumption. Skylake CPUs share their microarchitecture with Kaby Lake, Coffee Lake, Cannon Lake, Whiskey Lake, and Comet Lake CPUs.
Skylake is the last Intel platform on which Windows earlier than Windows 10 will be officially supported by Microsoft, although enthusiast-created modifications exist that allow Windows 8.1 and earlier to continue to receive Windows Updates on later platforms.
Some of the processors based on the Skylake microarchitecture are marketed as 6th-generation Core.
Intel officially declared end of life and discontinued Skylake LGA 1151 CPUs on March 4, 2019.
Development history
Skylake's development, as with previous processors such as Banias, Dothan, Conroe, Sandy Bridge, and Ivy Bridge, was primarily undertaken by Intel Israel at its engineering research center in Haifa, Israel. The final design was largely an evolution of Haswell, with minor improvements to performance and several power-saving features added. A major priority of Skylake's design was a microarchitecture that scales to power envelopes as low as 4.5 W, for embedding within tablet computers and notebooks in addition to higher-power desktop computers and servers.
In September 2014, Intel announced the Skylake microarchitecture at the Intel Developer Forum in San Francisco, and that volume shipments of Skylake CPUs were scheduled for the second half of 2015. The Skylake development platform was announced to be available in Q1 2015. During the announcement, Intel also demonstrated two computers with desktop and mobile Skylake prototypes: the first was a desktop testbed system, running the latest version of 3DMark, while the second computer was a fully functional laptop, playing 4K video.
An initial batch of Skylake CPU models (6600K and 6700K) was announced for immediate availability during the Gamescom on August 5, 2015, unusually soon after the release of its predecessor, Broadwell, which had suffered from launch delays. Intel acknowledged in 2014 that moving from 22 nm (Haswell) to 14 nm (Broadwell) had been its most difficult process to develop yet, causing Broadwell's planned launch to slip by several months; yet, the 14 nm production was back on track and in full production as of Q3 2014. Industry observers had initially believed that the issues affecting Broadwell would also cause Skylake to slip to 2016, but Intel was able to bring forward Skylake's release and shorten Broadwell's release cycle instead. As a result, the Broadwell architecture had an unusually short run.
Overclocking of unsupported processors
Officially Intel supported overclocking of only the K and X versions of Skylake processors. However, it was later discovered that other non-K chips could be overclocked by modifying the base clock value – a process made feasible by the base clock applying only to the CPU, RAM, and integrated graphics on Skylake. Through beta UEFI firmware updates, some motherboard vendors, such as ASRock (which prominently promoted it under the name Sky OC) allowed the base clock to be modified in this manner.
When overclocking unsupported processors using these UEFI firmware updates, several issues arise:
C-states are disabled, therefore the CPU will constantly run at its highest frequency and voltage
Turbo-boost is disabled
Integrated graphics are disabled
AVX2 instruction performance is poor, approximately 4-5 times slower due to the upper 128-bit half of the execution units and data buses not being taken out of their power saving states
CPU core temperature readings are incorrect
These issues are partly caused by the power management of the processor needing to be disabled for base clock overclocking to work.
In February 2016, however, an ASRock firmware update removed the feature. On February 9, 2016, Intel announced that it would no longer allow such overclocking of non-K processors, and that it had issued a CPU microcode update that removes the function. In April 2016, ASRock started selling motherboards that allow overclocking of unsupported CPUs using an external clock generator.
Operating system support
In January 2016, Microsoft announced that it would end support of Windows 7 and Windows 8.1 on Skylake processors effective July 17, 2017; after this date, only the most critical updates for the two operating systems would be released for Skylake users if they have been judged not to affect the reliability of the OS on older hardware, and Windows 10 would be the only Microsoft Windows platform officially supported on Skylake, as well as all future Intel CPU microarchitectures beginning with Skylake's successor Kaby Lake. Terry Myerson stated that Microsoft had to make a large investment in order to reliably support Skylake on older versions of Windows, and that future generations of processors would require further investments. Microsoft also stated that due to the age of the platform, it would be challenging for newer hardware, firmware, and device driver combinations to properly run under Windows 7.
On March 18, 2016, in response to criticism over the move, primarily from enterprise customers, Microsoft announced revisions to the support policy, changing the cutoff for support and non-critical updates to July 17, 2018 and stating that Skylake users would receive all critical security updates for Windows 7 and 8.1 through the end of extended support. In August 2016, citing "a strong partnership with our OEM partners and Intel", Microsoft stated that it would continue to fully support Windows 7 and 8.1 on Skylake through the end of their respective lifecycles. In addition, an enthusiast-created modification was released that disabled the Windows Update check and allowed Windows 8.1 and earlier to continue to be updated on this and later platforms.
As of Linux kernel 4.10, Skylake mobile power management is supported, with most package C-states seeing some use. Linux 4.11 enables frame-buffer compression for the integrated graphics chipset by default, which lowers power consumption.
Skylake is fully supported on OpenBSD 6.2 and later, including accelerated graphics.
For Windows 11, only the high-end Skylake-X processors are officially listed as compatible. All other Skylake processors are not officially supported due to security concerns. However, it is still possible to manually upgrade using an ISO image (as Windows 10 users on those processors won't be offered the upgrade to Windows 11 via Windows Update), or to perform a clean installation as long as the system has Trusted Platform Module (TPM) 2.0 enabled, but the user must accept that they will not be entitled to receive updates, and that damage caused by using Windows 11 on an unsupported configuration is not covered by the manufacturer's warranty.
Features
Like its predecessor, Broadwell, Skylake is available in five variants, identified by the suffixes S (SKL-S), X (SKL-X), H (SKL-H), U (SKL-U), and Y (SKL-Y). SKL-S and SKL-X contain overclockable K and X variants with unlocked multipliers. The H, U and Y variants are manufactured in ball grid array (BGA) packaging, while the S and X variants are manufactured in land grid array (LGA) packaging using a new socket, LGA 1151 (LGA 2066 for Skylake X). Skylake is used in conjunction with Intel 100 Series chipsets, also known as Sunrise Point.
The major changes between the Haswell and Skylake architectures include the removal of the fully integrated voltage regulator (FIVR) introduced with Haswell. On the variants that will use a discrete Platform Controller Hub (PCH), Direct Media Interface (DMI) 2.0 is replaced by DMI 3.0, which allows speeds of up to 8 GT/s.
Skylake's U and Y variants support one DIMM slot per channel, while H and S variants support two DIMM slots per channel. Skylake's launch and sales lifespan occur at the same time as the ongoing SDRAM market transition, with DDR3 SDRAM memory gradually being replaced by DDR4 memory. Rather than working exclusively with DDR4, the Skylake microarchitecture remains backward compatible by interoperating with both types of memory. Accompanying the microarchitecture's support for both memory standards, a new SO-DIMM type capable of carrying either DDR3 or DDR4 memory chips, called UniDIMM, was also announced.
Skylake's few P variants have a reduced on-die graphics unit (12 execution units enabled instead of 24) compared with their direct counterparts. In contrast, with Ivy Bridge CPUs the P suffix was used for CPUs with a completely disabled on-die video chipset.
Other enhancements include Thunderbolt 3.0, SATA Express, Iris Pro graphics with Direct3D feature level 12_1 with up to 128 MB of L4 eDRAM cache on certain SKUs. The Skylake line of processors retires VGA support, while supporting up to five monitors connected via HDMI 1.4, DisplayPort 1.2 or Embedded DisplayPort (eDP) interfaces. HDMI 2.0 (4K@60 Hz) is only supported on motherboards equipped with Intel's Alpine Ridge Thunderbolt controller.
The Skylake instruction set changes include Intel MPX (Memory Protection Extensions) and Intel SGX (Software Guard Extensions). Future Xeon variants will also have Advanced Vector Extensions 3.2 (AVX-512F).
Skylake-based laptops were predicted to use wireless technology called Rezence for charging, and other wireless technologies for communication with peripherals. Many major PC vendors agreed to use this technology in Skylake-based laptops; however, no laptops were released with the technology as of 2019.
The integrated GPU of Skylake's S variant supports on Windows DirectX 12 Feature Level 12_1, OpenGL 4.6 with latest Windows 10 driver update (OpenGL 4.5 on Linux) and OpenCL 3.0 standards, as well as some modern hardware video encoding/decoding formats such as VP9 (GPU accelerated decode only), VP8 and HEVC (hardware accelerated 8-bit encode/decode and GPU accelerated 10-bit decode).
Intel also released unlocked (capable of overclocking) mobile Skylake CPUs.
Unlike previous generations, Skylake-based Xeon E3 no longer works with a desktop chipset that supports the same socket, and requires either the C232 or the C236 chipset to operate.
Known issues
Short loops with a specific combination of instruction use may cause unpredictable system behavior on CPUs with hyperthreading. A microcode update was issued to fix the issue.
Skylake is vulnerable to Spectre attacks.
In fact, it is more vulnerable than other processors because it uses indirect branch speculation not just on indirect branches but also when the return prediction stack underflows.
The latency of the PAUSE instruction, which is commonly used in spinlock loops, has been increased dramatically (from the usual 10 cycles to 141 cycles in Skylake); this can cause performance issues with older programs or libraries using pause instructions. Intel documents the increased latency as a feature that improves power efficiency.
Architecture changes compared to Broadwell microarchitecture
CPU
Improved front-end, deeper out-of-order buffers, improved execution units, more execution units (a third vector integer ALU (VALU), for five ALUs in total), more load/store bandwidth, improved hyper-threading (wider retirement), and speedups of AES-GCM and AES-CBC by 17% and 33%, respectively.
Up to four cores as the default mainstream configuration and up to 18 cores for X-series
AVX-512: F, CD, VL, BW, and DQ for some future Xeon variants, but not Xeon E3
Intel MPX (Memory Protection Extensions)
Intel SGX (Software Guard Extensions)
Intel Speed Shift
Larger Re-order buffer (224 entries, up from 192)
L1 cache size unchanged at 32 KB instruction and 32 KB data cache per core.
L2 cache was changed from 8-way to 4-way set associative
Voltage regulator module (FIVR) is moved back to the motherboard
Enhancements of Intel Processor Trace: fine grained timing through CYC packets (cycle-accurate mode) and support for IP (Instruction Pointer) address filtering.
64 to 128 MB L4 eDRAM cache on certain SKUs
GPU
Skylake's integrated Gen9 GPU supports Direct3D 12 at the feature level 12_1
Full fixed function HEVC Main/8bit encoding/decoding acceleration. Hybrid/Partial HEVC Main10/10bit decoding acceleration. JPEG encoding acceleration for resolutions up to 16,000×16,000 pixels. Partial VP9 encoding/decoding acceleration.
I/O
LGA 1151 socket for mainstream desktop processors and LGA 2066 socket for enthusiast gaming/workstation X-series processors
100-series chipset (Sunrise Point)
X-series uses X299-series chipset
DMI 3.0 (From DMI 2.0)
Support for both DDR3L SDRAM and DDR4 SDRAM in mainstream variants, using custom UniDIMM SO-DIMM form factor with up to 64 GB of RAM on LGA 1151 variants. Usual DDR3 memory is also supported by certain motherboard vendors even though Intel doesn't officially support it.
Support for 16 PCI Express 3.0 lanes from CPU, 20 PCI Express 3.0 lanes from PCH (LGA 1151), 44 PCI Express 3.0 lanes for Skylake-X
Support for Thunderbolt 3 (Alpine Ridge)
Other
Thermal design power (TDP) up to 95 W (LGA 1151); up to 165 W (LGA 2066)
14 nm manufacturing process
Configurations
Skylake processors are produced in five main families: Y, U, H, S, and X. Multiple configurations are available within each family:
List of Skylake processor models
Mainstream desktop processors
Common features of the mainstream desktop Skylake CPUs:
DMI 3.0 and PCIe 3.0 interfaces
Dual channel memory support in the following configurations: DDR3L-1600 1.35 V (32 GB maximum) or DDR4-2133 1.2 V (64 GB maximum). DDR3 is unofficially supported through some motherboard vendors
16 PCI-E 3.0 lanes
The Core-branded processors support the AVX2 instruction set. The Celeron and Pentium-branded ones support only SSE4.1/4.2
350 MHz base graphics clock rate
High-end desktop processors (Skylake-X)
Common features of the high performance Skylake-X CPUs:
Quad channel memory support for DDR4-2400 (on the i7-7800X) or DDR4-2666 (on all other CPUs) up to 128 GB
28 (for the i7-7800X and i7-7820X) to 44 (for all other CPUs) PCI-E 3.0 lanes
In addition to the AVX2 instruction set, they also support the AVX-512 instructions
No built-in iGPU (integrated graphics processor)
Turbo Boost Max Technology 3.0 for up to 2/4 threads workloads for CPUs that have 8 cores and more (7820X, 7900X, 7920X, 7940X, 7960X, 7980XE, and all 9th generation chips)
A different cache hierarchy (when compared to client Skylake CPUs or previous architectures)
Xeon High-end desktop processors (Skylake-X)
Is Xeon instead of Core
Uses C621 Chipset
Xeon W-3175X is the only Xeon with a multiplier unlocked for overclocking
Mobile processors
See also Server, Mobile below for mobile workstation processors.
Workstation processors
All models support: MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, AVX, AVX2, AVX-512, FMA3, MPX, Enhanced Intel SpeedStep Technology (EIST), Intel 64, XD bit (an NX bit implementation), Intel VT-x, Intel VT-d, Turbo Boost (excluding W-2102 and W-2104), Hyper-threading (excluding W-2102 and W-2104), AES-NI, Intel TSX-NI, Smart Cache.
PCI Express lanes: 48
Supports up to 8 DIMMs of DDR4 memory, maximum 512 GB.
Server processors
E3-series server chips all have a 9 GT/s system bus and a maximum memory bandwidth of 34.1 GB/s with dual-channel memory. Unlike their predecessors, the Skylake Xeon CPUs require a C230-series (C232/C236) or C240-series (C242/C246) chipset to operate, with integrated graphics working only with the C236 and C246 chipsets. Mobile counterparts use the CM230- and CM240-series chipsets.
Skylake-SP (14 nm) Scalable Performance
Xeon Platinum supports up to 8 sockets. Xeon Gold supports up to 4 sockets. Xeon Silver and Bronze support up to 2 sockets.
−M: 1536 GB RAM per socket instead of 768 GB RAM for non−M SKUs
−F: integrated OmniPath fabric
−T: High thermal-case and extended reliability
Support for up to 12 DIMMs of DDR4 memory per CPU socket.
Xeon Platinum, Gold 61XX, and Gold 5122 have two AVX-512 FMA units per core. Xeon Gold 51XX (except 5122), Silver, and Bronze have a single AVX-512 FMA unit per core.
Xeon Bronze and Silver (dual processor)
Xeon Bronze 31XX has no HT or Turbo Boost support.
Xeon Bronze 31XX supports DDR4-2133 MHz RAM. Xeon Silver 41XX supports DDR4-2400 MHz RAM.
Xeon Bronze 31XX and Xeon Silver 41XX support two UPI links at 9.6 GT/s.
Xeon Gold (quad processor)
Xeon Gold 51XX and F SKUs have two UPIs at 10.4 GT/s. Xeon Gold 61XX has three UPIs at 10.4 GT/s.
Xeon Gold 51XX support DDR4-2400 MHz RAM (except 5122). Xeon Gold 5122 and 61XX support DDR4-2666 MHz RAM.
Xeon Platinum (octal processor)
Xeon Platinum non-F SKUs have three UPIs at 10.4 GT/s. Xeon Platinum F-SKUs have two UPIs at 10.4 GT/s.
Xeon Platinum supports DDR4-2666 MHz RAM.
See also
List of Intel CPU microarchitectures
References
External links
Skylake microarchitecture
Intel microarchitectures
Transactional memory
X86 microarchitectures |
1437137 | https://en.wikipedia.org/wiki/Quake%20Army%20Knife | Quake Army Knife | QuArK (aka Quake Army Knife), is a free and open-source program for developing 3D assets for a large variety of video games, mostly first-person shooters using engines similar to or based on the Quake engine by id Software. QuArK runs on Microsoft Windows.
Overview
QuArK is released under the GNU General Public License and has the ability to edit maps (either directly or through an intermediate compiler process), and can import, export, manipulate and convert models, sounds, textures and various other game assets, or create any of these assets from scratch. It is also possible to move or change dynamic game objects without the need to recompile the whole map which makes the fine-tuning of details quicker. QuArK uses external compilers (like Q3Map2) to produce the actual level-file used by the game. These compilers can be fully configured using their command-line parameters, and once done, QuArK remembers these settings so they can be used every time.
The interface is based upon VCL and includes a multitude of flyover hints and other forms of in-program documentation. It also offers multiple editor layouts, including 2D wireframe and 3D textured views, where it is possible to see how the map or model will look in-game. This view can be rendered with a built-in software renderer, or with Glide, OpenGL or Direct3D. Views have three modes: wireframe, solid color and textured, with support for transparency and lighting in OpenGL mode.
QuArK is a brush-based editor that works by adding brushes into an empty space, building the map block by block. To assist, more advanced features are available, including constructive solid geometry functions such as brush subtraction (illustrated below). Additionally, for engines that support it, Bézier surfaces can be used to create curved surfaces. QuArK also has a built-in leak finder to prevent holes in the map. Items can be added to a map simply by selecting them from a list of available entities, and their properties can be edited once they are placed in the map.
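The brush-subtraction idea is easiest to see in one dimension: cutting one interval out of another leaves the surrounding fragments, and a 3D editor applies the same splitting per axis to turn one brush into several smaller ones. The following simplified Python sketch illustrates the concept and is not QuArK's actual implementation:

def subtract_interval(brush, cutter):
    # Return the pieces of the interval `brush` left after removing `cutter`.
    b0, b1 = brush
    c0, c1 = cutter
    pieces = []
    if c0 > b0:                      # fragment to the left of the cut
        pieces.append((b0, min(c0, b1)))
    if c1 < b1:                      # fragment to the right of the cut
        pieces.append((max(c1, b0), b1))
    return [(lo, hi) for lo, hi in pieces if lo < hi]

# Cutting (4, 6) out of a (0, 10) brush leaves two fragments:
print(subtract_interval((0, 10), (4, 6)))    # [(0, 4), (6, 10)]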
Along with support for most games based on engines developed by id Software, QuArK also has support for other game engines such as Source, Genesis3D, 6DX, Crystal Space, Torque, and Sylphis 3D.
It is possible to add plug-ins, written in Python, to extend the capabilities of the editor, or to make changes to the official Python files to alter the way QuArK's functions work. More information about this can be found in the QuArK Infobase.
QuArK itself has very low system requirements, although a lot of additional resources are taken up by the loaded game data. That amount depends on the game-mode selected and the size and complexity of the map or model being edited. QuArK supports the Win32 platform, including Windows 95, 98, ME, NT 4, 2000 and XP. It also runs on 64-bit operating systems (in 32-bit mode), Windows Vista and higher, and it can run under Unix-based platforms by using the Wine compatibility layer.
Usage and popularity
QuArK is one of the three most notable level editors for Quake, together with Radiant and Worldcraft. QuArK is one of the two most popular editors for Quake II, GtkRadiant being the other. QuArK is the most popular tool to access WAD files. QuArK is probably the second most popular tool for level editing for Half-Life, after the official Valve Hammer Editor. QuArK is also used as a mapping tool in scientific studies.
History
QuArK started out as a Delphi program called "Quakemap", written by Armin Rigo in 1996. Initially it could only edit maps for Quake, but editing capabilities for QuakeC, sounds and compiled maps were added in version 2, which was released in October 1996.
In 1997 a contest was held to rename the software, and QuArK, which stands for "Quake Army Knife", was selected. The name references the game engine series it supported, the Quake engines, as well as the Swiss Army knife, because it could not only edit maps but also included a model editor and texture browser. Version 3.0 was the first release under this name.
QuArK soon expanded to support Hexen II with version 4.0, and Quake II not much later. With the release of version 5.0 in 1998, Python support was added for plugin capabilities.
The latest stable version of QuArK was 6.3, released in January 2003. Since then, however, many new alpha and beta versions have been released that add new features and support for many new games.
Ports
There were plans to make a C++ version of QuArK that reuses the existing Python files, plans to port the program to macOS and Linux, and plans to do a complete rewrite of QuArK in C++ and Python, but development on all these projects has ceased.
Utilities
QuArK comes with several stand-alone utilities:
QuArKSAS: The QuArK Steam Access System, or QuArKSAS, is a command-line program that allows the user to extract files from the Steam filesystem.
grnreader.exe: Used to convert .gr2 files into QuArK-loadable .ms files.
NVDXT: Nvidia's DXT converter, used to create .dds files.
Unofficial packages
There are several unofficial packages available:
3D Development Pack is a custom installer to allow people to quickly and easily develop a 3D game using QuArK. It combines QuArK, Lazarus and GLScene.
Quark For GLScene is an install for QuArK that includes OpenBSP as the default compiler and doesn't need Quake installed.
The Garage Games website offered a custom installer that installs QuArK with some additional files so that it is ready to use and configured for the Torque Game Engine.
Notes
References
1996 software
Free 3D graphics software
Free software programmed in Delphi
Free software programmed in Python
Id Tech
Quake (series)
Video game level editors
Video game modification tools
Windows-only free software |
67127822 | https://en.wikipedia.org/wiki/Joplin%20(software) | Joplin (software) | Joplin is a free and open-source note-taking application for desktop and mobile, written in JavaScript using Electron. It runs on Unix-like operating systems (including macOS and Linux) and Microsoft Windows, as well as iOS, Android, and Linux/Windows terminals. Joplin's workflow and feature set are most often compared to Evernote.
History
Joplin is named after the ragtime composer and pianist Scott Joplin.
The first public desktop application release was version 0.10.19, on November 20, 2017.
A Web Clipper for Chrome was introduced in December 2017 and the Firefox extension was released in May 2018.
A new Joplin Cloud service was introduced in 2021, along with an on-premises Joplin Server application. Both products can be used to sync notes, to-dos, notebooks and note data across devices, as well as share notes or notebooks with other Joplin users, or even publish content to the web.
Features
Notes in markdown format
Markdown extension plug-ins
Storage in plain-text files
Optional client-side encryption
Organisation in notebooks and sub-notebooks
Tagging system
"Offline-first", notes are always accessible locally, and can be synced on demand
Note synchronization with Joplin Cloud, Nextcloud, Dropbox, OneDrive, WebDAV, or (networked) file system
See also
Comparison of note-taking software
References
External links
Joplin Forum
Demonstration videos:
Joplin Is An Open Source Alternative To Evernote
Joplin, a free, open source, self hosted syncing note taking alternative to Evernote and OneNote
Free and open-source Android software
Free note-taking software
Free software programmed in JavaScript
iOS software
Linux software
MacOS software
Windows software |
1828959 | https://en.wikipedia.org/wiki/PDFCreator | PDFCreator | PDFCreator is an application for converting documents into Portable Document Format (PDF) format on Microsoft Windows operating systems. It works by creating a virtual printer that prints to PDF files, and thereby allows practically any application to create PDF files by choosing to print from within the application and then printing to the PDFCreator printer.
Since 2009, PDFCreator has included closed source adware, toolbars and other software that is installed by default and can be deselected.
Implementation
The application is written in C# and released to the public at no charge. It works with 64-bit and 32-bit Windows versions, including Windows 11. The actual PDF generation is handled by Ghostscript, which is included in the setup packages.
Besides being installed as a virtual printer, PDFCreator can be associated with .ps files to manually convert PostScript to PDF format.
PDFCreator can convert to the following file formats: PDF (including PDF/A (PDF/A-1b, PDF/A-2b and PDF/A-3b) and PDF/X (X-3:2002, X-3:2003 and X-4)), PNG, JPEG, TIFF and TXT. It also allows users to digitally sign PDF documents.
Since version 0.8.1 RC9 (2005), PDFCreator has allowed any COM-enabled application to make use of its functionality. The business editions of PDFCreator allow users to write their own C# scripts with access to the entire job data. These custom scripts can be integrated directly before and after the conversion. They have full access to the .NET Framework and can reference compatible external libraries.
PDFCreator allows the user to disable printing, copying of text or images and modifying the original document. The user can also choose between two types of passwords, user and owner, to restrict PDF files in several ways. The former is required to open the PDF file, while the latter is necessary in order to change permissions and password. Encryption can be either Low (128 Bit), Medium (128 Bit AES) or High (256 Bit AES), with the latter only being available in the PDFCreator Business editions.
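PDFCreator's own code is not shown here, but the user/owner-password model it exposes is standard PDF encryption; the same distinction can be sketched with the third-party pypdf library (the file names are placeholders):

from pypdf import PdfReader, PdfWriter

writer = PdfWriter()
for page in PdfReader("input.pdf").pages:
    writer.add_page(page)

# The user password is required just to open the file; the owner password
# additionally unlocks permission changes (printing, copying, editing).
writer.encrypt(user_password="open-me",
               owner_password="admin-only",
               algorithm="AES-256")

with open("encrypted.pdf", "wb") as f:
    writer.write(f)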
PDFCreator provides the possibility for automating certain tasks, for example with the help of user tokens. These placeholders for values, like today’s date, username, or e-mail address can be helpful when printing many similar files like invoices.
With PDFCreator, users can verify themselves as the author of a document by means of digital signatures. This feature is part of all PDFCreator editions, including PDFCreator Free.
Starting with version 0.9.6, there is full support for Windows Vista and version 0.9.7 provides support for Windows 7.
Starting with version 3.0.0, PDFCreator ended support for Windows XP.
Starting with version 4.4.0 there is full support for Windows 11.
Adware toolbar controversy
Between 2009 and 2013 the installation package included a closed-source browser toolbar that was considered by many users to be malicious software. Although the toolbar was technically an optional component, the opt-out procedure was a two-step process prior to version 1.2.3, which many considered intentionally confusing. In addition to the spyware activity described below, the toolbar allowed one-click creation of PDFs from the current webpage and included a search tool. As of version 1.2.3, the opt-out procedure only required unchecking one checkbox during the installation process.
Starting with version 0.9.7 (February 2009), PDFCreator included an adware toolbar. The end-user license agreement for the PDFForge Toolbar by Spigot, Inc. (versions prior to 0.9.7 had a different, optional toolbar called "PDFCreator Toolbar") described the behavior of the toolbar software.
Pdfforge, which created PDFCreator, published an FAQ entry addressing concerns about the toolbar.
Since that time various versions of PDFCreator have included adware toolbars and other software with the installer, which many virus scanners identify as problematic or undesired software.
In March 2012 the company announced that the toolbar had been discontinued as of version 1.3.0.
As of 23 March 2012, PDFCreator included the MyStart toolbar by Incredibar. On 13 June 2012, PDFCreator once again included another controversial bundled software package, which tests as spyware, called SweetIM. In July 2012 the project disabled reviews and ratings on its sourceforge repository.
On 30 August 2012, PDFCreator version 1.5.0 was released, which included an installer for the "AVG Security Toolbar". There was an option to disable installation of the "AVG Security Toolbar", but it was not clearly identified. Furthermore, installation of PDFCreator required acceptance of the AVG EULA even when installation of the "AVG Security Toolbar" was disabled.
On 23 October 2012, PDFCreator version 1.5.1 was released, which includes an installer for iClaro Search. Unlike previously bundled adware, once installed, iClaro cannot be removed using the "Add/Remove Programs" option.
On 14 January 2013, PDFCreator version 1.6.2 was released, which includes an installer for the Install Entrusted Toolbar. The setup screen for the Install Entrusted Toolbar has a single option, "Express (recommended)", shown in black font.
In October 2013, PDFCreator was stealthily installing more software, including Amazon's Internet Explorer toolbar, without notifying the user.
Awards
The now defunct OpenCD project chose PDFCreator as the best free software package for creating PDF files in Windows.
In August 2008, InfoWorld magazine recognized PDFCreator with an Open Source Software Award from the field of more than 50 available open source or free PDF creation applications.
Both of these awards predate the inclusion of the contested spyware.
See also
List of PDF software
List of virtual printer software
References
External links
Adware
PDF software
Windows-only freeware
Software that bundles malware |
54259098 | https://en.wikipedia.org/wiki/Hexnode | Hexnode | Hexnode is a software company headquartered in San Francisco, specialising in providing a Unified Endpoint Management solution to SMBs. After January 8, 2020, perpetual licenses are no longer offered.
In June 2019, Hexnode was included in the Now Tech report from Forrester Research, a global research and advisory firm. Hexnode was also recognized in Gartner's 2021 Midmarket Context: Magic Quadrant for UEM.
Hexnode MDM
Hexnode's Mobile Device Management software offers cloud-based services and supports Android, iOS (including Windows PCs and phones), macOS and tvOS. Hexnode MDM, the flagship product of the company, enables centralized device management, allows policy creation and restricts device features. The software is equipped with features such as management tools, remote setup, and remote lock and wipe.
Hexnode MDM also provides a solution to lock down devices into kiosk mode. Running on iOS, Android and Windows devices, the kiosk mode is bundled with security options such as a kiosk browser and Wi-Fi access permissions, in addition to other device management features.
Amongst the other prominent features include content management, expense management, FileVault, Remote view and control, BitLocker, app management, web filtering, Hexnode messenger and others.
See also
List of Mobile Device Management software
List of Kiosk software
References
Software companies based in California
Software companies of the United States
2013 establishments in the United States
2013 establishments in California
Software companies established in 2013
Companies established in 2013 |
52368984 | https://en.wikipedia.org/wiki/Biometric%20tokenization | Biometric tokenization | Biometric tokenization is the process of substituting a stored biometric template with a non-sensitive equivalent, called a token, that lacks extrinsic or exploitable meaning or value. The process combines the biometrics with public-key cryptography to enable the use of a stored biometric template (e.g., fingerprint image on a mobile or desktop device) for secure or strong authentication to applications or other systems without presenting the template in its original, replicable form.
Biometric tokenization in particular builds upon the longstanding practice of tokenization for sequestering secrets in this manner by having the secret, such as user credentials like usernames and passwords or other Personally Identifiable Information (PII), be represented by a substitute key in the public sphere.
The technology is most closely associated with authentication to online applications such as those running on desktop computers, mobile devices, and Internet of Things (IoT) nodes. Specific use cases include secure login, payments, physical access, management of smart, connected products such as connected homes and connected cars, as well as adding a biometric component to two-factor authentication and multi-factor authentication.
Origins
With the September 9, 2014 launch of its Apple Pay service, Cupertino, Calif.-based Apple, Inc. initiated the conversation surrounding the use of biometric-supported tokenization of payment data for point-of-sale retail transactions. Apple Pay tokenizes mobile users' virtualized bank card data in order to wirelessly transmit a payment, represented as a token, to participating retailers that support Apple Pay (e.g. through partnerships and supported hardware). Apple Pay leverages the proprietary Touch ID fingerprint scanner on its iPhone line together with, aside from cryptography, the added security of its Apple A7 system on a chip, which includes a Secure Enclave hardware feature that stores and protects the data from the Touch ID fingerprint sensor. Apple Pay, at least for payments, is thus credited with innovating in the space of biometric tokenization, even though the use case was limited to payment convenience and security, restricted to the company's own hardware and software, and despite the fact that executives did not publicly utter the phrase "biometric tokenization" or speak about the underlying technology.
While biometric tokenization and Apple Pay are similar, biometric tokenization as it is known today, and particularly under that name, is an authentication feature that goes beyond payment convenience and security. Another distinctive feature is that biometric tokenization can be implemented on other operating systems, such as OS X, Microsoft Windows and Google Android, for password-less login to desktop and mobile applications.
Mechanics
Biometric tokenization, like its non-biometric counterpart, tokenization, utilizes end-to-end encryption to safeguard data in transit. With biometric tokenization, a user initiates his or her authentication by first accessing or unlocking biometrics such as fingerprint recognition, facial recognition, speech recognition, iris recognition or retinal scan, or a combination of these biometric modalities. The user's unique qualities are generally stored in one of two ways: either on-device in a trusted execution environment (TEE) or trusted platform module (TPM), or on a server the way other data are stored.
Biometric tokenization champions typically prefer biometric templates to be encrypted and stored in TEEs or TPMs so as to prevent large-scale data breaches such as the June 2015 breach at the U.S. Office of Personnel Management. Biometric tokenization, when aided by on-device storage of user data, also can preserve internet privacy, because user data are stored individually inside single devices rather than aggregated on ostensibly vulnerable servers. Moving biometric user credentials off of servers and onto devices, whether for two-factor authentication or unqualified authentication, is a tenet of the Fast Identity Online (FIDO) Alliance, an industry consortium concerned with replacing passwords with decentralized biometrics.
The next step in biometric tokenization, after the unlocking of user credentials in the trusted area of the device, is for the credentials to be tokenized, with the token containing the precise data required for the action (e.g. login or payment). This access token can be time-stamped, as in the case of one-time passwords or session tokens, so as to be useful for a specific time period, though it need not be. With biometric tokenization this token is then validated by means of joint client-side and server-side validation, which occurs through a challenge-response token exchange. The user is then logged in, authenticated, or otherwise granted access.
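That challenge–response validation can be sketched with public-key cryptography: a device-held private key, released by the biometric unlock, signs a server-issued challenge, and the server verifies the signature against the public key registered at enrollment. A minimal sketch, assuming the third-party pyca/cryptography package (the enrollment and transport details are simplifying assumptions):

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Enrollment: the key pair is created on-device; only the public key
# is sent to the server.
device_private_key = ec.generate_private_key(ec.SECP256R1())
server_public_key = device_private_key.public_key()

# Login: the server issues a fresh challenge...
challenge = os.urandom(32)

# ...the device signs it once the biometric unlock releases the private key...
token = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ...and the server verifies the token; the biometric template itself
# never leaves the device.
server_public_key.verify(token, challenge, ec.ECDSA(hashes.SHA256()))
print("token validated; user authenticated")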
Information Security
In order to achieve the highest level of privacy and protection when calculating and transmitting sensitive information, biometric tokenization leverages existing encryption algorithms, authentication protocols, as well as hardware trust zones. Combining some or all of these methods maximizes the level of protection needed to uphold the integrity of the process and security of data that could otherwise expose users to a breach of trust on a mass scale.
Encryption Algorithms in Use
ECDSA
RSA
ange
White-box cryptography
Software Obfuscation
Authentication Protocols in Use
Universal 2nd Factor (U2F)
Universal Authentication Framework (UAF)
Temporary OTP
Hardware Trust Zones in Use
Trusted Execution Environment
ARM TrustZone
Secure Enclave
References
Data security
Biometrics |
189297 | https://en.wikipedia.org/wiki/MSN%20TV | MSN TV | MSN TV (formerly WebTV) was a web access product consisting of a thin client device which used a television for display (instead of using a computer monitor), and the online service that supported it. The device design and service was developed by WebTV Networks, Inc., a company started in 1995. The WebTV product was announced in July 1996 and later released on September 18, 1996. In April 1997, the company was purchased by Microsoft Corporation and in July 2001, was rebranded to MSN TV and absorbed into MSN.
While most thin clients developed in the mid-1990s were positioned as diskless workstations for corporate intranets, WebTV was positioned as a consumer product, primarily targeting those looking for a low-cost alternative to a computer for Internet access. The WebTV and MSN TV devices allowed a television set to be connected to the Internet, mainly for web browsing and e-mail. The WebTV/MSN TV service, however, also offered its own exclusive services: a "walled garden" newsgroup service; news and weather reports; storage for user bookmarks (Favorites); IRC (and, for a time, MSN Chat) chatrooms; a Page Builder service that let WebTV users create and host webpages that could be shared with others via a link; the ability to play background music from a predefined list of songs while surfing the web; dedicated sections for aggregated content covering various topics (entertainment, romance, stocks, etc.); and, a few years after Microsoft bought out WebTV, integration with MSN Messenger and Hotmail. The setup included a thin client in the form of a set-top box, a remote, a network connection using dial-up (or, with the introduction of Rogers Interactive TV and the MSN TV 2, the option to use broadband), and a wireless keyboard, which was sold optionally up until the 2000s.
The WebTV/MSN TV service lasted for 17 years, shutting down on September 30, 2013, and allowing subscribers to migrate their data well before that date arrived.
The original WebTV network relied on a Solaris backend and telephone lines to deliver service to customers via dial-up, with "frontend servers" that talked directly to the boxes using a custom protocol, the WebTV Protocol (WTVP), to authenticate users and deliver content. For the MSN TV 2, however, a completely new service based on IIS servers and regular HTTP/HTTPS was used.
History
Concept
Co-founder Steve Perlman is credited with the idea for the device. He first combined computer and television as a high-school student when he decided his home PC needed a graphics display. He went on to build software for companies such as Apple and Atari. While working at General Magic, the idea of bringing TVs and computers together resurfaced.
One night, Perlman was browsing the web and came across a Campbell's soup website with recipes. He thought that the people who might be interested in what the site had to offer were not using the web. It occurred to him that if a device enabled the television audience to augment their viewing with information or commercial offers received through the television, then perhaps the web address could act as the signal and the television cable as the conduit.
Early history
A Silicon Valley startup, WebTV Networks was founded in July 1995. Perlman brought along co-founders Bruce Leak and Phil Goldman shortly after conceiving the basic concept. The company operated out of half of a former BMW car dealership building on Alma Street in Palo Alto, California, which was being used for storage by the Museum of American Heritage. WebTV had been able to obtain the space for very low rent, but it was suboptimal for technology development.
Before incorporation, the company referred to itself as Artemis Research to disguise the nature of its business. The info page of its original website explained that it was studying "sleep deprivation, poor diet and no social life for extended periods on humans and dwarf rabbits". The dwarf rabbit reference was an inside joke among WebTV's hard-working engineers—Phil Goldman's pet house rabbit Bowser (inspiration for the General Magic logo) was often found roaming the WebTV building late into the night while the engineers were working—although WebTV actually received inquiries from real research groups conducting similar studies and seeking to exchange data.
The company hired many engineers and a few business development employees early on, having about 30 total employees by October 1995. Two early employees of Artemis were from Apple Inc: Andy Rubin, creator of the Android cell phone OS, and Joe Britt. Both men would later be two of the founders of Danger, Inc. (originally Danger Research).
WebTV Networks' business model was to license a reference design to consumer electronics companies for a WebTV Internet Terminal, a set-top box that attached to a telephone line and automatically connected to the Internet through a dial-up modem. The consumer electronics companies' income was derived from selling the WebTV set-top box. WebTV's income was derived from operating the WebTV Service, the Internet-based service to which the set-top boxes connected and for which it collected a fee from WebTV subscribers. The service provided features such as HTML-based email, and proxied websites, which were reformatted by the service before they were sent to set-top box, to make them display more efficiently on a television screen.
WebTV closed its first round of financing, US$1,500,000, from Marvin Davis in September 1995, which it used to develop its prototype set-top box, using proprietary hardware and firmware. The company also used the financing to develop the online service that the set-top boxes connected to. WebTV leveraged their limited startup funds by licensing a reference design for the appliance to Sony and Philips. Eventually other companies would also become licensees and WebTV would profit on the monthly service fees. After 22 months, the company was sold to Microsoft for $425 million, with each of the three founders receiving $64 million.
Barely surviving to reach announcement
By the spring of 1996 WebTV Networks employed approximately 70 people, many of them finishing their senior year at nearby Stanford University, or former employees of either Apple Computer or General Magic. WebTV had started negotiating with Sony to manufacture and distribute the WebTV set-top box, but negotiations had taken much longer than WebTV had expected, and WebTV had used up its initial funding. Steve Perlman liquidated his assets, ran up his credit cards and mortgaged his house to provide bridge financing while seeking additional venture capital. Because Sony had insisted upon exclusive distribution rights for the first year, WebTV had no other distribution partner in place, and just before WebTV was to close venture capital financing from Brentwood Associates, Sony sent WebTV a certified letter stating it had decided not to proceed with WebTV. It was a critical juncture for WebTV, because the Brentwood financing had been predicated on the expectation of a future relationship with Sony, and if Brentwood had decided to not proceed with the financing after being told that Sony had backed out, WebTV would have gone bankrupt and Perlman would have lost everything. But Brentwood decided to proceed with the financing despite losing Sony's involvement, and further financing from Paul Allen's Vulcan Ventures soon followed.
WebTV then proceeded to close a non-exclusive WebTV set-top box distribution deal with Philips, which provided competitive pressure causing Sony to change its mind, to resume its relationship with WebTV and also to distribute WebTV.
WebTV was announced on July 10, 1996, generating a large wave of press attention as not only the first television-based use of the World Wide Web, but also as the first consumer-electronics device to access the World Wide Web without a personal computer. After the product's announcement, the company closed additional venture financing, including investments from Microsoft Corporation, Citicorp, Seagate Technology, Inc., Soros Capital, L.P., St. Paul Venture Capital and Times Mirror Company.
The launch
WebTV was launched on September 18, 1996, within one year after its first round of financing, with WebTV set-top boxes in stores from Sony and Philips, and WebTV's online service running from servers in its tiny office, still based in the former BMW dealership.
The initial price for the WebTV set-top box was US$349 for the Sony version and US$329 for the Philips version, with a wireless keyboard available for about an extra US$50. The monthly service fee initially was US$19.95 per month for unlimited Web surfing and e-mail.
There was little difference between the first Sony and Philips WebTV set-top boxes except the housing and packaging. The WebTV set-top box had very limited processing and memory resources (just a 112 MHz MIPS CPU, 2 megabytes of RAM, 2 megabytes of ROM, and 1 megabyte of flash memory) and relied on a 33.6 kbit/s dial-up modem connection to the WebTV Service, where powerful servers provided the back-end support needed to give subscribers a full web-browsing and email experience.
Initial sales were slow. By April 1997, WebTV had only 56,000 subscribers, but the pace of subscriber growth accelerated after that, achieving 150,000 subscribers by Autumn 1997, about 325,000 subscribers by April 1998 and about 800,000 subscribers by May 1999. WebTV achieved profitability by Spring 1998, and grossed over US$1.3 billion in revenue through its first 8 years of operation. In 2005 WebTV was still grossing US$150 million per year in revenue with 65% gross margin.
WebTV briefly classified as a weapon
Because WebTV utilized strong encryption, specifically the 128-bit encryption (not SSL) used to communicate with its proprietary service, upon launch in 1996, WebTV was classified as "munitions" (a military weapon) by the United States government and was therefore barred from export under United States security laws at the time. Because WebTV was widely distributed in consumer electronic stores under the Sony and Philips brands for only US$325, its munitions classification was used to argue that the US should no longer consider devices incorporating strong encryption to be munitions, and should permit their export. Two years later, in October 1998, WebTV obtained a special exemption permitting its export, despite the strong encryption, and shortly thereafter, laws concerning export of cryptography in the United States were changed to generally permit the export of strong encryption.
Microsoft takes notice
In February 1997, in an investor meeting with Microsoft, Steve Perlman was approached by Microsoft's Senior Vice President for Consumer Platforms Division, Craig Mundie. Despite the fact that the initial WebTV sales had been modest, Mundie expressed that Microsoft was impressed with WebTV and saw significant potential both in WebTV's product offering and in applying the technology to other Microsoft consumer and video product offerings. Microsoft offered to acquire WebTV, build a Microsoft campus in Silicon Valley around WebTV, and establish WebTV as a Microsoft division to develop television-based products and services, with Perlman as the division's president.
Discussions proceeded rapidly, involving Bill Gates, then CEO of Microsoft, personally. Gates called Perlman at his home on Easter Sunday in March 1997, and Perlman described to Gates WebTV's next generation products in development, which would be the first consumer devices to incorporate hard disks, including the WebTV Plus, and the WebTV Digital Video Recorders. Gates' interest was piqued, and negotiations between Microsoft and WebTV rapidly proceeded to closure, with both sides working around the clock to get the deal done. Negotiation time was so short that the hour lost due to the change to Daylight Saving Time the night before the planned announcement, which the parties had neglected to factor into their schedule, almost left them without enough time to finish the deal.
On April 6, 1997, 20 months after WebTV's founding, and only six weeks after negotiations with Microsoft began, during a scheduled speech at the National Association of Broadcasters conference in Las Vegas, Nevada, Craig Mundie announced that Microsoft had acquired WebTV. The acquisition price was US$503 million, but WebTV was so young a company that most of the employees' stock options had yet to be vested. As such, the vested shares at the time of the announcement amounted to US$425 million, and that was the acquisition price announced.
Subsequent to the acquisition, WebTV became a Silicon Valley-based division of Microsoft, with Steve Perlman as its president. The WebTV division began developing most of Microsoft's television-based products, including the first satellite Digital Video Recorders (the DishPlayer for EchoStar's Dish Network and UltimateTV for DirecTV), Microsoft's cable TV products, the Xbox 360 hardware, and Microsoft's Mediaroom IPTV platform.
In May 1999, America Online announced that it was going to compete directly with Microsoft in delivering Internet over television sets by introducing AOL TV.
In June 1999, Steve Perlman left Microsoft and started Rearden, a business incubator for new companies in media and entertainment technology.
MSN TV rebranding
In July 2001, six years after WebTV's founding, Microsoft rebranded WebTV as MSN TV. Contracts were terminated with all other licensed manufacturers of the WebTV hardware except RCA, leaving them as the sole manufacturer of further hardware. Promotion of the WebTV brand ended.
In later years, the number of consumers on dial-up access declined, and because the Classic and Plus clients were restricted to dial-up, their subscriber count fell with it. Because the WebTV client was subsidized hardware, the company had always required an individual subscription for each box; once the subsidies ended, MSN began offering free use of MSN TV boxes to computer users who subscribed to MSN, as an incentive not to depart for discount dial-up ISPs.
Broadband MSN TV
In 2001, Rogers Cable partnered with Microsoft to introduce "Rogers Interactive TV" in Canada. The service enabled Rogers' subscribers to access the Web via their TV sets, create their own websites, shop online, chat, and access e-mail. This initiative was the first broadband implementation of MSN TV.
In late 2004, Microsoft introduced MSN TV 2. Codenamed the "Deuce", it was capable of broadband access, and it introduced a revamped user interface and new capabilities. These included offline viewing of media (provided the user was already logged in), audio and video streaming (broadband only), Adobe Reader, support for viewing Microsoft Office documents (namely Microsoft Word), Windows Media Player, the ability to access Windows computers on a home network so the box could function as a media player, and even mouse support, although the latter was most likely unintentional at first. MSN TV 2 also kept some key features from the first generation of WebTV/MSN TV, such as its MIDI engine and the ability to play background music while browsing. MSN TV 2 used a different online service from the original WebTV/MSN TV, but it offered many of the same services, such as chatrooms, instant messaging, weather, news, aggregated "info centers", and newsgroups, and, like that service, still required a subscription. For those with broadband, the fee was US$99 yearly.
For inexpensive devices, the cost of licensing an operating system is substantial. For Microsoft, however, its own operating system was already a sunk cost, so when it released the MSN TV 2 model it adopted a standard PC architecture and used a customized version of Windows CE as the operating system. This allowed the MSN TV 2 to keep current more easily and inexpensively.
Discontinuation
By late 2009, MSN TV hardware was no longer being sold by Microsoft, although service continued for existing users for the next four years. Attempting to go to the "Buy MSN TV" section on the MSN TV website at the time resulted in the following message being shown:
"Sorry, MSN TV hardware is no longer available for purchase from Microsoft. Microsoft continues to support the subscription service for existing WebTV and MSN TV customers."
On July 1, 2013, an email was sent out to subscribers stating that the MSN TV service would be shutting down on September 30, 2013. During that time, subscribers were advised to convert any accounts on the first-generation service to Microsoft accounts and to migrate any favorites and other data they had on their MSN TV accounts to SkyDrive. Once September 30, 2013 finally arrived, the WebTV/MSN TV service fully closed. Existing customers were offered MSN Dial-Up Internet Access accounts with a promotion. Customer service was available for non-technical and billing questions until January 15, 2014.
Technology
Set-top box
Since the WebTV set-top box was a dedicated web-browsing appliance that did not need to be based on a standard operating system, the cost of licensing an operating system could be avoided. All first-generation boxes featured a 64-bit MIPS RISC CPU, boot ROM and flash ROM storage on all Classic and New Plus models, RAM, and a smart card reader, which was never significantly utilized. The web browser that ran on the set-top box was compatible with both Netscape Navigator and Microsoft Internet Explorer standards. The first WebTV Classic set-top boxes from Sony and Philips had a 33.6k modem and 2 MB each of RAM, boot ROM, and flash ROM. Later models had 56k modems and increased ROM/RAM capacity. The WebTV set-top boxes leveraged the service's server-side caching proxy, which reformatted and compressed web pages before sending them to the box, a capability generally unavailable to dial-up ISP users at the time and one WebTV had to develop itself. For web browsing, given WebTV's thin-client software, there was no need for a hard disk, but by keeping the browser in non-volatile memory, upgrades could be downloaded from the WebTV service onto the set-top box.
The WebTV set-top box was designed so that at a specified time, it would check to see if there was any email waiting. If there was, it would illuminate a red LED on the device so the consumer would know it was worth connecting to pick up their mail.
A second model, the "Plus", was introduced a year later. It featured a TV tuner for watching television in a PIP (picture-in-picture) window, let users capture video stills from the tuner or composite inputs as JPEGs that could be uploaded to a WebTV discussion post, an email, or a "scrapbook" on the user's account for later use, and could schedule a VCR in a manner similar to what TiVo offered several years later. The Plus also included a 56k modem and support for ATVEF, a technology that let users download special script-laden pages to interact with television shows, and original models had a 1.1 GB hard drive for storage in place of the ROM chips used in the previous Classic models, mainly to accommodate large nightly downloads of television schedules. Around Fall 1998, plans for a "Derby" revision of the WebTV Plus were announced, rumored to have a faster CPU and more memory. By early 1999, the only Derby unit produced was a Sony revision of its INT-W200 Plus model, with no substantial hardware changes beyond a CPU upgrade at the same clock speed and a switch to a softmodem. As chip prices dropped, later versions of the Plus used an M-Systems DiskOnChip flash ROM instead and increased RAM capacity to 16 MB.
WebTV produced reference designs of models incorporating a disk-based personal video recorder and a satellite tuner for EchoStar's Dish Network (referred to as the DishPlayer) and for DirecTV (called UltimateTV). In 2001, EchoStar sued Microsoft for failing to support the WebTV DishPlayer. EchoStar subsequently sought to acquire DirecTV and was the presumptive acquirer, but EchoStar was ultimately blocked by the Federal Communications Commission. While EchoStar's lawsuit against Microsoft was in process, DirecTV (presumptively acquired and controlled by EchoStar) dropped UltimateTV (thus ending Microsoft's satellite product initiatives) and picked TiVo's DirecTV product as its only Digital Video Recorder offering.
As an ease-of-use design consideration, WebTV decided early on to reformat pages rather than make users scroll sideways. As entry-level PCs evolved from a VGA resolution of 640x480 to an SVGA resolution of 800x600, and web site dimensions followed suit, reformatting PC-sized web pages to fit the 560-pixel width of a United States NTSC television screen became less satisfactory. The WebTV browser also translated HTML frames into tables in order to avoid the need for a mouse.
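The actual transcoder was proprietary and ran server-side; the toy Python fragment below only illustrates the frames-to-tables idea, collapsing a frameset into a single 560-pixel-wide table. For simplicity it emits one linked cell per frame source, whereas the real browser rendered the frames' contents inline.

    import re

    def frames_to_table(html):
        """Toy transcoder: replace a frameset with a one-column, 560px-wide
        table so the page can be walked with a remote instead of a mouse."""
        sources = re.findall(r'<frame\b[^>]*src="([^"]+)"', html, flags=re.I)
        rows = "\n".join('  <tr><td><a href="%s">%s</a></td></tr>' % (s, s)
                         for s in sources)
        return '<table width="560">\n%s\n</table>' % rows

    page = ('<frameset cols="30%,70%">'
            '<frame src="menu.html"><frame src="body.html"></frameset>')
    print(frames_to_table(page))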
In Japan, WebTV had a small run starting around late 1997, with a few "Classic" Japanese units released with hard drives and twice the RAM of the American Classic and old Plus units of the time. In Spring 1999, customers were given the option of using Sega's Dreamcast video game console, which came with a built-in modem, to access WebTV. This was possible because Sega and Microsoft collaborated on a port of the WebTV technology to the Dreamcast, using the Windows CE abstraction layer supported on the console and what is believed to be a version of the Internet Explorer 2.0 browser engine. The Japanese service ended in March 2002.
Security
Security was a persistent issue with the WebTV/MSN TV service, primarily because the proprietary URLs used to perform certain actions on the service had very little verification in place and, for a while, could easily be executed through the URL panel on the set-top box. Starting around 1998, self-proclaimed WebTV hackers figured out ways to exploit these vulnerable URLs. The results included, among other things: access to internal sections of the production WebTV service such as "Tricks," which hosted several pages designed to troubleshoot the WebTV box and service; the ability to remotely change the settings of a subscriber's box; and even the ability to remotely perform actions on any account, including deleting it, since the service did not verify whether such requests came from the account holder. These hackers also found ways to connect to internal WebTV services and discovered content previously unknown to the public, including a version of Doom for WebTV Plus units that could at one point be downloaded from one of these services.

As WebTV hacking picked up, WebTV Networks tried hard to confine rogue users to the production service, going as far as terminating anyone involved in unauthorized usage of the WebTV service, regardless of motive. The most notable of these terminations was of WebTV user Matt Squadere, known by his internet handle MattMan69, whose access (and that of others) was terminated without warning for connecting to the internal WebTV services "TestDrive" and "Weekly," which was possible by accessing the Tricks section of WebTV with a password being shared around at the time. Squadere was specifically terminated after he accessed TestDrive a second time and reported it to WebTV Networks' 1-800 number, for which he had initially been rewarded with a WebTV shirt. Around the same time, WebTV changed its privacy policy without warning subscribers beforehand, legally giving it the right to terminate any user for any reason without explanation. This caused a major uproar among subscribers over the fairness and ethics of WebTV Networks' legal agreements.

After this incident, it appears that later WebTV hacking endeavors were kept secret among those well known in the hacking scene and were not reported to WebTV Networks directly, supposedly so that already-discovered methods could continue to be used before being closed off. This included findings on the more technical workings of the WebTV service, such as protocol security and service URLs that were still exploitable. Some of the remaining hacks were also used to target unsuspecting WebTV users. One example, concerning unauthorized access to a WebTV account, is documented in the "Tricks/Hacks Archive" section of Matt Squadere's current site, "WebTV MsnTV Secret Pics":
...I chose my victims by reading the News Groups. I would look for those punks that liked to talk sh*t, you know the ones that swore up and down that they could hack your account of fry your webtv unit but in reality they couldn't even access the home page. I would also target those lamers that thought they were cool cause they knew how to send a e-mail bomb that could power off your webtv box and thought they were king shit, lmao!
WebTV/MSN TV was also the victim of a virus written in July 2002 by 43-year-old David Jeansonne, dubbed "NEAT", which changed the local dial-up access number on victims' boxes to 911, which would then be dialed the next time the box connected. It was sent to 18 MSN TV users as an email attachment with the subject "NEAT", disguised as a tool for changing the colors and fonts of the MSN TV user interface. Some of the initial victims supposedly forwarded it to 3 other users, bringing the total victim count to 21. At least 10 of the victims reported police showing up at their homes as a result of their boxes dialing 911. There were also claims that the virus could mass-mail itself, although this was never properly confirmed while the virus was prevalent. Its author was arrested in February 2004 and charged with cyberterrorism.
Protocols
With the first generation of the WebTV/MSN TV service, a few protocols were used to communicate with the service, but the main one, carrying the majority of service traffic, was WTVP, the WebTV Protocol. It is a TCP-based protocol, essentially a proprietary variant of HTTP 1.0 able to serve both standard web content and specialized service content to WebTV/MSN TV users. It introduced its own protocol extensions, including 128-bit RC4-based message encryption, ticket-based authorization, a proprietary challenge-response authentication scheme used both to verify clients logging in to the service and to supply them with session keys for message encryption, and persistent connections. The protocol was supported by all first-generation WebTV/MSN TV devices and the Sega Dreamcast release of WebTV up until the September 2013 discontinuation of the service (March 2002 in Japan). Another protocol believed to have been used by the service, dubbed "Mail Notify," is a UDP-based protocol thought to have taken part in delivering e-mail notifications to WebTV boxes; its existence has only been confirmed in a leaked Microsoft document, and it is not currently clear how it operated or whether it ran client-side or as a server-side component.

WTVP had extremely minimal documentation in WebTV's prime. Only in 2019, six years after the service shut down, did more attempts at documenting it appear, initially in the form of a third-party proof-of-concept server dubbed the "WebTV Server Emulator," which implemented only the bare minimum of the service and documented little about it. It has proven difficult to find WebTV staff who remember technical details of the protocol, let alone any with current contact information, and the few members of the WebTV hacking scene who know how the protocols work have been hesitant to release further significant information when asked. As of 2021, the "WebTV/MSN TV Wiki" project, run by someone outside WebTV staff and its hacking scene, has attempted to explore WTVP and other technical parts of WebTV/MSN TV in a more detailed and concise fashion. Open-source software projects have also since been started that aim to recreate a working WebTV/MSN TV service while documenting as much of the service protocols as possible. With a lack of resources on technical WebTV information and few people interested enough to work out how the whole service operated and share their findings publicly, however, progress on documenting these protocols has been very slow.
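WTVP's exact framing remains only partially documented, but the cipher it used, RC4, is public. The Python sketch below implements textbook RC4 purely to illustrate the kind of 128-bit stream encryption the protocol layered over its HTTP-like messages; the session key and plaintext shown are placeholders, not a reconstruction of how WebTV derived keys or formatted requests.

    def rc4(key, data):
        """Textbook RC4: key scheduling (KSA), then the PRGA keystream XORed
        over the message. Encrypting and decrypting are the same operation."""
        S = list(range(256))
        j = 0
        for i in range(256):                       # KSA
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        out, i, j = bytearray(), 0, 0
        for byte in data:                          # PRGA
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            out.append(byte ^ S[(S[i] + S[j]) % 256])
        return bytes(out)

    session_key = bytes(range(16))        # 128-bit placeholder key
    ciphertext = rc4(session_key, b"illustrative service payload")
    assert rc4(session_key, ciphertext) == b"illustrative service payload"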
The MSN TV 2's service was completely separate from the original one and ran on entirely different infrastructure. Like the first-generation service's protocols, it had next to no public documentation during its original run. Unlike the original iteration of WebTV/MSN TV, however, the MSN TV 2 attracted hardly any talent willing to study how it worked, partly because by its release, people involved in WebTV/MSN TV hacking had started losing interest and some felt the MSN TV 2 was not worth the time to hack. As a result, the MSN TV 2 service is almost entirely undocumented. From what little information has recently been disclosed by the very few people from the original hacking scene who stuck around for the MSN TV 2, the service ran on IIS servers and used standard HTTP/HTTPS web services and webpages to communicate with the set-top boxes. XML is also believed to have been one of the formats used by the MSN TV 2 service, although this cannot be fully confirmed at present.
WebTV/MSN TV client hardware
Models
Confirmed
Not Confirmed
Hacking attempts
In February 2006, Chris Wade analyzed the proprietary BIOS of the MSN TV 2 set-top box and created a sophisticated memory patch which allowed it to be flashed and used to boot Linux. An open-source solution for enabling TV output on the MSN TV 2 and similar devices was made available in 2009. There were also recorded attempts to make use of unused IDE pins on the MSN TV 2's motherboard to attach a hard drive, most likely to add storage beyond the 64 MB provided by the default CompactFlash card. Outside of these attempts, though, little was done in the realm of hacking WebTV/MSN TV hardware.
See also
Microsoft Venus
Set-top box
SmartTV
AOL TV
Google TV (smart TV platform)
Caldera DR-WebSpyder
References
External links
.
"WebTV/MSN TV Wiki", focused on documenting all information about the WebTV/MSN TV product and service
.
Interactive television
Streaming television
MSN
Set-top box
Thin clients
Products and services discontinued in 2013
Telecommunications-related introductions in 1996
Computer-related introductions in 1996 |
53721959 | https://en.wikipedia.org/wiki/Yunhao%20Liu | Yunhao Liu | Yunhao Liu is a Chinese computer scientist. He is the Dean of Global Innovation Exchange (GIX) at Tsinghua University.
Liu was named Fellow of the Association for Computing Machinery (ACM) in 2015 for contributions to sensor networks and Fellow of the Institute of Electrical and Electronics Engineers (IEEE) in 2015 for contributions to wireless sensor networks and systems.
He is the Editor-in-Chief of ACM Transactions on Sensor Network and is the Honorary Chair of the ACM China Council.
Biography
Yunhao Liu received his B.S. degree from Automation Department at Tsinghua University in 1995, an M.A. degree from Graduate School of Translation and Interpreting at Beijing Foreign Studies University in 1997, and an M.S. and Ph.D. degree from Department of Computer Science and Engineering at Michigan State University in 2003 and 2004.
From 2004 to 2011, he was assistant professor, associate professor, and postgraduate director in the Department of Computer Science and Engineering at the Hong Kong University of Science and Technology (HKUST). He was Professor in the School of Information Science and Technology at Tsinghua University from 2011 through 2013, and served as Chang Jiang Professor and Dean of the School of Software at Tsinghua University from 2013 to 2017. In 2018 he joined Michigan State University, where he now serves as MSU Foundation Professor; from 2018 to 2019 he was chairperson designee in its Department of Computer Science and Engineering.
Honors and awards
Yunhao received Hong Kong Best Innovation & Research Grand Award for building the world's earliest Coal Mine Surveillance with Wireless Sensor Networks in 2007. He received the First Class Ministry of Education Nature Science Award for Location and Localizability study in wireless networks in 2010.
In 2011, Liu was awarded the State Natural Science Award for his contributions to wireless localization theory and practice. In the same year, he also received one of the five Distinguished Young Scholar Awards in Computer Science from the National Natural Science Foundation of China. In 2013, Liu received the ACM Presidential Award for his contribution to spreading the word about, and sharing, the value that ACM offers to China's vast computing community.
In 2014, the 20th Annual International Conference on Mobile Computing and Networking (ACM MobiCom) awarded its best paper award to Liu's group for the paper "Tagoram: Real-Time Tracking of Mobile RFID Tags to High Precision Using COTS Devices"; it was the first time the conference had given its highest award to an Asian institution. Yunhao and his students designed and developed Tagoram and successfully deployed the system in Terminal One of Beijing Capital International Airport and at Sanya Phoenix International Airport. The prototype ran for over a year and consumed 110,000 RFID tags involving 53 destination airports, 93 airlines, and 1,094 flights. Based on the observation that tag diversity is key to localization performance, they designed and implemented the Differential Augmented Hologram (DAH) localization scheme, which eliminated the impact of tag diversity on localization accuracy and improved localization performance to the millimeter level, 40 times better than the prior state of the art. The Tagoram system also won the Gold Award at the Soft China Forum 2013, the highest honor in the Chinese software industry.
In 2015, Liu was named Fellow of the Association for Computing Machinery (ACM) for contributions to sensor networks, and Fellow of the Institute of Electrical and Electronics Engineers (IEEE) for contributions to wireless sensor networks and systems.
In 2016, Liu was awarded the IOT Young Achievement Award for his contribution to the Internet of Things by China Computer Federation.
In 2017, Liu was named CSE Distinguished Alumni by Michigan State University.
In 2018, Liu was named MSU Foundation Professor by Michigan State University.
In 2021, Liu was awarded the Best Student Paper Award of SIGCOMM 2021, and the Best Paper of Sensys 2021.
Notable publications
Books:
Liu, Yunhao, and Zheng Yang. Location, localization, and localizability: location-awareness technology for wireless networks. Springer Science & Business Media, 2010.
Liu, Yunhao. "Introduction to internet of things." Beijing. The Science Press, 2010.
Selected journal articles:
Lionel, M. N., Liu, Y., Lau, Y. C., & Patil, A. P. (2004). LANDMARC: indoor location sensing using active RFID. Wireless networks, 10(6), 701-710.
Yang, Z., Zhou, Z., & Liu, Y. (2013). From RSSI to CSI: Indoor localization via channel response. ACM Computing Surveys, 46(2), 25.
Wu, C., Yang, Z., Liu, Y., & Xi, W. (2012). WILL: Wireless indoor localization without site survey. IEEE Transactions on Parallel and Distributed Systems, 24(4), 839-848.
Li, M., & Liu, Y. (2009). Underground coal mine monitoring with wireless sensor networks. ACM Transactions on Sensor Networks (TOSN), 5(2), 10.
Li, M., & Liu, Y. (2010). Rendered path: range-free localization in anisotropic sensor networks with holes. IEEE/ACM Transactions on Networking (ToN), 18(1), 320-332.
Liu, Y., He, Y., Li, M., Wang, J., Liu, K., & Li, X. (2012). Does wireless sensor network scale? A measurement study on GreenOrbs. IEEE Transactions on Parallel and Distributed Systems, 24(10), 1983-1993.
Liu, Y., Liu, K., & Li, M. (2010). Passive diagnosis for wireless sensor networks. IEEE/ACM Transactions on Networking (TON), 18(4), 1132-1144.
Yang, Z., & Liu, Y. (2009). Quality of trilateration: Confidence-based iterative localization. IEEE Transactions on Parallel and Distributed Systems, 21(5), 631-640.
Liu, Y., Xiao, L., Liu, X., Ni, L. M., & Zhang, X. (2005). Location awareness in unstructured peer-to-peer systems. IEEE Transactions on Parallel and Distributed Systems, 16(2), 163-174.
Liu, Y., Zhao, Y., Chen, L., Pei, J., & Han, J. (2011). Mining frequent trajectory patterns for activity monitoring using radio frequency tag arrays. IEEE Transactions on Parallel and Distributed Systems, 23(11), 2138-2149.
Yang, Z., & Liu, Y. (2011). Understanding node localizability of wireless ad hoc and sensor networks. IEEE Transactions on Mobile Computing, 11(8), 1249-1260.
Dong, D., Li, M., Liu, Y., Li, X. Y., & Liao, X. (2011). Topological detection on wormholes in wireless ad hoc and sensor networks. IEEE/ACM Transactions on Networking (TON), 19(6), 1787-1796.
Liu, Y., Xiao, L., & Ni, L. (2007). Building a scalable bipartite P2P overlay network. IEEE Transactions on Parallel and distributed systems, 18(9), 1296-1306.
Liu, Y., Mao, X., He, Y., Liu, K., Gong, W., & Wang, J. (2013). CitySee: Not only a wireless sensor network. IEEE Network, 27(5), 42-47.
Yin, Z., Wu, C., Yang, Z., & Liu, Y. (2017). Peer-to-peer indoor navigation using smartphones. IEEE Journal on Selected Areas in Communications, 35(5), 1141-1153.
Qian, K., Wu, C., Yang, Z., Liu, Y., He, F., & Xing, T. (2018). Enabling contactless detection of moving humans with dynamic speeds using CSI. ACM Transactions on Embedded Computing Systems (TECS), 17(2), 52.
Zhou, Z., Shangguan, L., Zheng, X., Yang, L., & Liu, Y. (2017). Design and implementation of an RFID-based customer shopping behavior mining system. IEEE/ACM transactions on networking, 25(4), 2405-2418.
Yang, L., Li, Y., Lin, Q., Jia, H., Li, X. Y., & Liu, Y. (2017). Tagbeat: Sensing mechanical vibration period with COTS RFID systems. IEEE/ACM Transactions on Networking (TON), 25(6), 3823-3835.
Ma, Q., Zhang, S., Zhu, T., Liu, K., Zhang, L., He, W., & Liu, Y. (2016). PLP: Protecting location privacy against correlation analyze attack in crowdsensing. IEEE transactions on mobile computing, 16(9), 2588-2598.
Duan, C., Yang, L., Lin, Q., & Liu, Y. (2018). Tagspin: High accuracy spatial calibration of RFID antennas via spinning tags. IEEE Transactions on Mobile Computing, 17(10), 2438-2451.
Selected conference articles:
Yang, Q., Li, Z., Liu, Y., Long, H., Huang, Y., He, J., ... & Zhai, E. Mobile Gaming on Personal Computers with Direct Android Emulation. In Proceedings of ACM MobiCom 2019.
Yan, Y., Li, Z., Chen, Q. A., Wilson, C., Xu, T., Zhai, E., ... & Liu, Y. Understanding and Detecting Overlay-based Android Malware at Market Scales. In Proceedings of ACM MobiSys 2019.
Xiao, A., Liu, Y., Li, Y., Qian, F., Li, Z., Bai, S., ... & Xin, X. An In-depth Study of Commercial MVNO: Measurement and Optimization. In Proceedings of ACM MobiSys 2019.
Dang, F., Li, Z., Liu, Y., Zhai, E., Chen, Q. A., Xu, T., ... & Yang, J. Understanding Fileless Attacks on Linux-based IoT Devices with HoneyCloud. In Proceedings of ACM MobiSys 2019.
Zheng, Y., Zhang, Y., Qian, K., Zhang, G., Liu, Y., Wu, C., & Yang, Z. Zero-Effort Cross-Domain Gesture Recognition with Wi-Fi. In Proceedings of ACM MobiSys 2019.
Zhang, K., Wu, C., Yang, C., Zhao, Y., Huang, K., Peng, C., ... & Yang, Z. ChromaCode: A Fully Imperceptible Screen-Camera Communication System. In Proceedings of ACM MobiCom 2018.
Qian, K., Wu, C., Zhang, Y., Zhang, G., Yang, Z., & Liu, Y. Widar2.0: Passive human tracking with a single wi-fi link. In Proceedings of ACM MobiSys 2018.
Liu, C., Zhang, L., Liu, Z., Liu, K., Li, X., & Liu, Y. Lasagna: towards deep hierarchical understanding and searching over mobile sensing data. In Proceedings of ACM MobiCom 2016.
Yang, L., Li, Y., Lin, Q., Li, X. Y., & Liu, Y. Making sense of mechanical vibration period with sub-millisecond accuracy using backscatter signals. In Proceedings of ACM MobiCom 2016.
Yang, L., Lin, Q., Li, X., Liu, T., & Liu, Y. See through walls with cots rfid system!. In Proceedings of ACM MobiCom 2015.
Zhang, L., Bo, C., Hou, J., Li, X. Y., Wang, Y., Liu, K., & Liu, Y. Kaleido: You can watch it but cannot record it. In Proceedings of ACM MobiCom 2015.
Shangguan, L., Yang, Z., Liu, A. X., Zhou, Z., & Liu, Y. Relative Localization of RFID Tags using Spatial-Temporal Phase Profiling. In Proceedings of USENIX NSDI 2015.
Zhu, T., Ma, Q., Zhang, S., & Liu, Y. Context-free attacks using keyboard acoustic emanations. In Proceedings of ACM CCS 2014.
Yang, L., Chen, Y., Li, X. Y., Xiao, C., Li, M., & Liu, Y. Tagoram: Real-time tracking of mobile RFID tags to high precision using COTS devices. In Proceedings of ACM MobiCom 2014. [Best Paper Award]
Zhang, L., Li, X. Y., Huang, W., Liu, K., Zong, S., Jian, X., ... & Liu, Y. It starts with igaze: Visual attention driven networking with smart glasses. In Proceedings of ACM MobiCom 2014.
Yang, Z., Wu, C., & Liu, Y. Locating in fingerprint space: wireless indoor localization with little human intervention. In Proceedings of ACM MobiCom 2012.
Mo, L., He, Y., Liu, Y., Zhao, J., Tang, S. J., Li, X. Y., & Dai, G. Canopy closure estimates with greenorbs: Sustainable sensing in the forest. In Proceedings of ACM SenSys 2009.
Yang, Z., Liu, Y., & Li, X. Y. Beyond trilateration: On the localizability of wireless ad-hoc networks. In Proceedings of IEEE INFOCOM 2009.
Li, M., & Liu, Y. Underground structure monitoring with wireless sensor networks. In Proceedings ACM IPSN 2007.
Liao, X., Jin, H., Liu, Y., Ni, L. M., & Deng, D. Anysee: Peer-to-peer live streaming. In Proceedings of IEEE INFOCOM 2006.
Liu, Y., Liu, X., Xiao, L., Ni, L. M., & Zhang, X. Location-aware topology matching in P2P systems. In Proceedings of IEEE INFOCOM 2004.
Professional services
Editor for:
ACM Transactions on Sensor Network (Editor-in-Chief, 2017–Present)
IEEE/ACM Transactions on Networking (Associate Editor, 2012-2016)
IEEE Transactions on Parallel and Distributed Systems (Associate Editors-in-Chief, 2011-2015)
Tsinghua Science and Technology (Associate Editor-in-Chief)
Program Committee Member for:
IEEE INFOCOM 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020
IEEE ICDCS 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2018
ACM MobiCom 2009, 2010, 2011, 2012, 2015, 2018, 2019, 2020
Chair for:
IEEE INFOCOM 2020 (PC Co-Chair)
ACM EWSN 2019 (General Co-Chair)
ACM TURC 2017-2019 (General Co-Chair)
Microsoft 21 Century Computing 2015 (General Co-Chair)
IEEE TrustCom 2014 (PC Chair)
IEEE RTAS 2012 (General Chair)
IEEE MASS 2011 (PC Chair)
IEEE ICDCS 2010, 2012 (PC Vice Chair)
WWW 2008 (General Vice Chair)
References
External links
Yunhao Liu at Michigan State University.
Yunhao Liu at Tsinghua University (Chinese).
Yunhao Liu - Google Scholar.
People of ACM - Yunhao Liu.
Fellows of the Association for Computing Machinery
Fellow Members of the IEEE
Computer scientists
Scientists from Beijing
Living people
Educators from Beijing
1971 births |
9174778 | https://en.wikipedia.org/wiki/Microprocessor%20development%20board | Microprocessor development board | A microprocessor development board is a printed circuit board containing a microprocessor and the minimal support logic needed for an electronic engineer, or anyone wanting to become acquainted with the microprocessor, to learn to program it. It also served users of the microprocessor as a way to prototype applications for products.
Unlike a general-purpose system such as a home computer, usually a development board contains little or no hardware dedicated to a user interface. It will have some provision to accept and run a user-supplied program, such as downloading a program through a serial port to flash memory, or some form of programmable memory in a socket in earlier systems.
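As a sketch of the serial-download provision just mentioned, the Python fragment below streams an Intel HEX image line by line over a serial port using the third-party pyserial package. The port name, baud rate, and the board's expectation of plain HEX records are assumptions made for illustration; any given board's loader defines its own conventions.

    # Requires the third-party package:  pip install pyserial
    import serial

    def download_hex(port, hex_path, baud=9600):
        """Stream an Intel HEX image to a board's resident loader,
        one record per line."""
        with serial.Serial(port, baud, timeout=1) as link, open(hex_path) as image:
            for record in image:
                link.write(record.strip().encode("ascii") + b"\r\n")
            link.flush()

    # download_hex("/dev/ttyUSB0", "program.hex")   # hypothetical port and file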
History
A development board existed solely to provide a system for learning to use a new microprocessor, not for entertainment, so everything superfluous was left out to keep costs down. Even an enclosure was not supplied, nor a power supply: the board would only be used in a "laboratory" environment, so it did not need an enclosure, and it could be powered by the typical bench power supply an electronic engineer already had available.
Microprocessor training development kits were not always produced by microprocessor manufacturers. Many systems that can be classified as microprocessor development kits were produced by third parties; one example is the Sinclair MK14, which was inspired by National Semiconductor's official SC/MP development board, the "NS introkit".
Although these development boards were not designed for hobbyists, they were often bought by them because they were the earliest cheap microcomputer devices available. Hobbyists often added all kinds of expansions, such as more memory or a video interface. It was very popular to use (or write) an implementation of Tiny BASIC. The most popular microprocessor board, the KIM-1, received the most attention from the hobby community because it was much cheaper than most other development boards, more software was available for it (Tiny BASIC, games, assemblers), and cheap expansion cards could add memory or other functionality. Magazines like "Kilobaud Microcomputing" published more articles describing home-brew software and hardware for the KIM-1 than for other development boards.
Today some chip producers still release "test boards" to demonstrate their chips and to serve as a "reference design". Their significance is much smaller now than in the days when such boards (the KIM-1 being the canonical example) were the only low-cost way to get hands-on experience with microprocessors.
Features
The most important feature of the microprocessor development board was the ROM-based built-in machine-language monitor, also sometimes called a "debugger". Often the name of the board was related to the name of this monitor program; the KIM-1, for example, took its name from its "Keyboard Input Monitor", so called because the ROM-based software allowed entry of programs without the rows of cumbersome toggle switches that older systems used. The popular 6800-based systems often used a monitor whose name contained the word "bug", for "debugger", such as the popular "MIKBUG".
Input was normally done via a hexadecimal keypad, using the machine-language monitor program, and the display typically consisted only of a 7-segment display. Backup storage for written assembler programs was primitive: typically only a cassette-tape interface was provided, or the serial Teletype interface was used to read (or punch) a paper tape.
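The monitors themselves were small native machine-code programs; the Python sketch below merely mimics the examine/deposit interaction style of such a hex-keypad monitor (the KIM-1's keypad had AD, DA and GO keys) over a simulated memory array. The command names and 256-byte memory size are illustrative assumptions.

    memory = bytearray(256)   # simulated RAM; a real monitor walked the address bus

    def monitor():
        """Tiny examine/deposit loop: 'AD 10' selects address 0x10,
        'DA EA' deposits 0xEA and advances, 'GO' would start execution."""
        addr = 0
        while True:
            entry = input("%02X %02X > " % (addr, memory[addr])).split()
            if not entry:
                continue
            key = entry[0].upper()
            if key == "AD":                       # set the current address
                addr = int(entry[1], 16) % len(memory)
            elif key == "DA":                     # deposit a byte, step forward
                memory[addr] = int(entry[1], 16) & 0xFF
                addr = (addr + 1) % len(memory)
            elif key == "GO":                     # real hardware would jump here
                break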
Often the board had some kind of expansion connector that brought out all the necessary CPU signals, so that an engineer could build and test an experimental interface or other electronic device.
External interfaces on the bare board were often limited to a single RS-232 or current loop serial port, so a terminal, printer, or Teletype could be connected.
List of historical development boards
8085AAT, an Intel 8085 microprocessor training unit from Paccom
CDP18S020 evaluation board for the RCA CDP1802 microprocessor
EVK 300 6800 single board from American Microsystems (AMI)
Explorer/85 expandable learning system based on the 8085, by Netronics's research and development ltd.
ITT Experimenter, which used switches and LEDs, and an Intel 8080
JOLT was designed by Raymond M. Holt, co-founder of Microcomputer Associates, Incorporated.
KIM-1 the development board for the MOS Technology/Rockwell/Synertek 6502 microprocessor. The name KIM is short for "keyboard input monitor"
SYM-1 a slightly improved KIM-1 with better software, more memory, and I/O. Also known as the VIM
AIM-65 an improved KIM-1 with an alphanumerical LED display, and a built-in printer.
The KIM-1 also led to some unofficial copies, such as the Super-KIM, the Junior from the magazine Elektor, and the MCS Alpha 1
LC80 by Kombinat Mikroelektronik Erfurt
MAXBOARD development board for the Motorola 6802.
MEK6800D2 the official development board for the Motorola 6800 microprocessor. The name of the monitor software was MIKBUG
MicroChroma 68 color graphics kit. Developed by Motorola to demonstrate their new 6847 video display processor. The monitor software was called TVBUG
Motorola EXORciser development system (rack based) for the Motorola 6809
Microprofessor I (MPF-1) Z80 development and training system by Acer
Tangerine Microtan 65 6502 development system with VDU, that could be expanded to a more capable system.
MST-80B 8080 training system by the Lawrence Livermore National Laboratory
NS introkit by National Semiconductor featuring the SC/MP, the predecessor to the Sinclair MK14
NRI microcomputer, a system developed to teach computer courses by McGraw-Hill and the National Radio Institute (NRI)
MK14 Training system for the SC/MP microprocessor from Sinclair Research Ltd.
SDK-80 Intel's development board for their 8080 microprocessor
SDK-51 Intel's development board for their Intel MCS-51
SDK-85 Intel's development board for their 8085 microprocessor
SDK-86 Intel's development board for their 8086 microprocessor
Siemens Microset-8080 boxed system based on an 8080.
Signetics Instructor 50 based on the Signetics 2650.
SGS-ATES Nanocomputer z80.
RCA Cosmac Super Elf, an 1802 learning system from RCA with an RCA 1861 Video Display Controller.
TK-80 the development board for NEC's clone of Intel's i8080, the μPD 8080A
TM 990/100M evaluation board for the Texas Instruments TMS9900
TM 990/180M evaluation board for the Texas Instruments TMS9800
XPO-1 Texas Instruments development system for the PPS-4/1 line of microcontrollers
DSP evaluation boards
A DSP evaluation board, sometimes also known as a DSP starter kit (DSK) or a DSP evaluation module, is an electronic board with a digital signal processor used for experiments, evaluation and development. Applications are developed in DSP starter kits using software usually referred to as an integrated development environment (IDE). Texas Instruments and Spectrum Digital are two companies who produce these kits.
Two examples are the DSK 6416 by Texas Instruments, based on the TMS320C6416 fixed point digital signal processor, a member of C6000 series of processors that is based on VelociTI.2 architecture, and the DSK 6713 by Texas Instruments, which was developed in cooperation with Spectrum Digital, based on the TMS320C6713 32-bit floating point digital signal processor, which allows for programming in C and assembly.
See also
Embedded system
Intel System Development Kit
Single-board computer
Single-board microcontroller
References
Early microcomputers
History of computing hardware
Telecommunications engineering |
635094 | https://en.wikipedia.org/wiki/Prix%20Ars%20Electronica | Prix Ars Electronica | The Prix Ars Electronica is one of the best known and longest running yearly prizes in the field of electronic and interactive art, computer animation, digital culture and music. It has been awarded since 1987 by Ars Electronica (Linz, Austria).
In 2005, the Golden Nica, the highest prize, was awarded in six categories: "Computer Animation/Visual Effects," "Digital Musics," "Interactive Art," "Net Vision," "Digital Communities" and the "u19" award for "freestyle computing." Each Golden Nica came with a prize of €10,000, apart from the u19 category, where the prize was €5,000. In each category, there are also Awards of Distinction and Honorary Mentions.
The Golden Nica is a replica of the Greek Nike of Samothrace. It is a handmade wooden statuette, plated with gold, so each trophy is unique: approximately 35 cm high, with a wingspan of about 20 cm, all on a pedestal. "Prix Ars Electronica" is a phrase composed of French, Latin and Spanish words, loosely translated as "Electronic Arts Prize."
Golden Nica winners
Computer animation / film / vfx
The "Computer Graphics" category (1987–1994) was open to different kinds of computer images. The "Computer Animation" (1987–1997) was replaced by the current "Computer Animation/Visual Effects" category in 1998.
Computer Graphics
1987 – Figur10 by Brian Reffin Smith, UK
1988 – The Battle by David Sherwin, US
1989 – Gramophone by Tamás Waliczky, HU
1990 – P-411-A by Manfred Mohr, Germany
1991 – Having encountered Eve for the second time, Adam begins to speak by Bill Woodard, US
1992 – RD Texture Buttons by Michael Kass and Andrew Witkin, US
1993 – Founders Series by Michael Tolson, US
1994 – Jellylife / Jellycycle / Jelly Locomotion by Michael Joaquin Grey, US
Computer Animation
1987 – Luxo Jr. by John Lasseter, US
1988 – Red's Dream by John Lasseter, US
1989 – Broken Heart by Joan Staveley, US
1990 – Footprint by Mario Sasso and Nicola Sani, IT
1991 – Panspermia by Karl Sims, US
1992 – Liquid Selves / Primordial Dance by Karl Sims, US
1993 – Lakmé by Pascal Roulin, BE
1994 – Jurassic Park by Dennis Muren, Mark Dippé and Steve Williams, US/CA
Distinction: Quarxs by Maurice Benayoun, FR
Distinction: K.O. Kid by Marc Caro, FR
1995 – God's Little Monkey by David Atherton and Bob Sabiston, US
1996 – Toy Story by John Lasseter, Lee Unkrich and Ralph Eggleston, US
1997 – Dragonheart by Scott Squires, Industrial Light & Magic (ILM), US
Computer Animation/Visual Effects
1998 – The Sitter by Liang-Yuan Wang, TW
Titanic by Robert Legato and Digital Domain, US
1999 – Bunny by Chris Wedge, US
What Dreams May Come by Mass Illusions, POP, Digital Domain, Vincent Ward, Stephen Simon and Barnet Bain, US
2000 – Maly Milos by Jakub Pistecky, CA
Maaz by Christian Volckman, FR
2001 – Le Processus by Xavier de l'Hermuzière and Philippe Grammaticopoulos, FR
2002 – Monsters, Inc. by Andrew Stanton, Lee Unkrich, Pete Docter and David Silverman, US
2003 – Tim Tom by Romain Segaud and Cristel Pougeoise, FR
2004 – Ryan by Chris Landreth, US.
Distinction: Parenthèse from Francois Blondeau, Thibault Deloof, Jérémie Droulers, Christophe Stampe, France
Distinction: Birthday Boy from Sejong Park, Australia
2005 – Fallen Art by Tomek Baginski, Poland.
Distinction: The Incredibles from Pixar
Distinction: City Paradise by Gaëlle Denis (UK), Passion Pictures (FR)
2006 – 458nm by Jan Bitzer, Ilija Brunck, Tom Weber, Filmakademie Baden-Württemberg, Germany.
Distinction: Kein platz Für Gerold by Daniel Nocke / Studio Film Bilder, Germany
Distinction: Negadon, the monster from Mars, by Jun Awazu, Japan
2007 – Codehunters by Ben Hibon, (UK)
2008 – Madame Tutli-Putli by Chris Lavis, Maciek Szczerbowski. (Directors), Jason Walker (Special Visual Effects), National Film Board of Canada
2009 – HA'Aki by Iriz Pääbo, National Film Board of Canada
2010 – Nuit Blanche by Arev Manoukian (Director), Marc-André Gray (Visual Effects Artist), National Film Board of Canada
2011 – Metachaos by Alessandro Bavari (IT)
2012 – Rear Window Loop by Jeff Desom (LU)
Distinction: Caldera by Evan Viera/Orchid Animation (US)
Distinction: Rise of the Planet of the Apes by Weta Digital (NZ)/Twentieth Century Fox
2013 – Forms by Quayola (IT), Memo Akten (TR)
Distinction: Duku Spacemarines by La Mécanique du Plastique (FR)
Distinction: Oh Willy… by Emma De Swaef (BE), Marc James Roels (BE) / Beast Animation
2014 – Walking City by Universal Everything (UK)
2015 – Temps Mort by Alex Verhaest (BE)
Distinction: Bär by Pascal Floerks (DE)
Distinction: The Reflection of Power by Mihai Grecu (RO/HU)
Digital Music
This category is for those making electronic music and sound art through digital means. From 1987 to 1998 the category was known as "Computer music." Two Golden Nicas were awarded in 1987, and none in 1990. There was no Computer Music category in 1991.
1987 – Peter Gabriel and Jean-Claude Risset
1988 – Denis Smalley
1989 – Kaija Saariaho
1990 – None
1991 – Category omitted
1992 – Alejandro Viñao
1993 – Bernard Parmegiani
1994 – Ludger Brümmer
1995 – Trevor Wishart
1996 – Robert Normandeau
1997 – Matt Heckert
1998 – Peter Bosch and Simone Simons (joint award)
1999 – Come to Daddy by Aphex Twin (Richard D. James) and Chris Cunningham (joint award)
Distinction: Birthdays by Ikue Mori (JP)
Distinction: Mego (label), Hotel Paral.lel by Christian Fennesz, Seven Tons For Free by Peter Rehberg (a.k.a. Pita)
2000 – 20' to 2000 by Carsten Nicolai
Distinction: Minidisc by Gescom
Distinction: Outside the Circle of Fire by Chris Watson
2001 – Matrix by Ryoji Ikeda
2002 – Man'yo Wounded 2001 by Yasunao Tone
2003 – Ami Yoshida, Sachiko M and Utah Kawasaki (joint award)
2004 – Banlieue du Vide by Thomas Köner
2005 – TEO! A Sonic Sculpture by Maryanne Amacher
2006 – L'île ré-sonante by Eliane Radigue
2007 – Reverse-Simulation Music by Mashiro Miwa
2008 – Reactable by Sergi Jordà (ES), Martin Kaltenbrunner (AT), Günter Geiger (AT) and Marcos Alonso (ES)
2009 – Speeds of Time versions 1 and 2 by Bill Fontana (US)
2010 – rheo: 5 horizons by Ryoichi Kurokawa (JP)
2011 – Energy Field by Jana Winderen (NO)
2012 – "Crystal Sounds of a Synchrotron" by Jo Thomas (GB)
2013 – frequencies (a) by Nicolas Bernier (CA)
Distinction: SjQ++ by SjQ++ (JP)
Distinction: Borderlands Granular by Chris Carlson (US)
2015 – Chijikinkutsu by Nelo Akamatsu (JP)
Distinction: Drumming is an elastic concept by Josef Klammer (AT)
Distinction: Under Way by Douglas Henderson (DE)
2017 – Not Your World Music: Noise In South East Asia by Cedrik Fermont (CD/BE/DE), Dimitri della Faille (BE/CA)
Distinction: Gamelan Wizard by Lucas Abela (AU), Wukir Suryadi (ID) und Rully Shabara (ID)
Distinction: Corpus Nil by Marco Donnarumma (DE/IT)
Hybrid art
2007 – SymbioticA
2008 – Pollstream – Nuage Vert by Helen Evans (FR/UK) and Heiko Hansen (FR/DE) HeHe
2009 – Natural History of the Enigma by Eduardo Kac (US)
2010 – Ear on Arm by Stelarc (AU)
2011 – May the Horse Live in me by Art Orienté Objet (FR)
2012 – Bacterial radio by Joe Davis (US)
2013 – Cosmopolitan Chicken Project, Koen Vanmechelen (BE)
2015 – Plantas Autofotosintéticas, Gilberto Esparza (MX)
2017 – K-9_topology, Maja Smrekar (SI)
[the next idea] voestalpine Art and Technology Grant
2009 – Open_Sailing by Open_Sailing Crew led by Cesar Harada.
2010 – Hostage by Frederik De Wilde.
2011 – Choke Point Project by P2P Foundation (NL).
2012 – qaul.net – tools for the next revolution by Christoph Wachter & Mathias Jud
2013 – Hyperform by Marcelo Coelho (BR), Skylar Tibbits (US), Natan Linder (IL), Yoav Reaches (IL)
Honorary Mentions: GravityLight by Martin Riddiford (GB), Jim Reeves (GB)
2014 – BlindMaps by Markus Schmeiduch, Andrew Spitz and Ruben van der Vleuten
2015 – SOYA C(O)U(L)TURE by XXLab (ID) – Irene Agrivina Widyaningrum, Asa Rahmana, Ratna Djuwita, Eka Jayani Ayuningtias, Atinna Rizqiana
Interactive Art
Prizes in the category of interactive art have been awarded since 1990. This category applies to many categories of works, including installations and performances, characterized by audience participation, virtual reality, multimedia and telecommunication.
1990 – Videoplace installation by Myron Krueger
1991 – Think About the People Now project by Paul Sermon
1992 – Home of the Brain installation by Monika Fleischmann and Wolfgang Strauss
1993 – Simulationsraum-Mosaik mobiler Datenklänge (smdk) installation by Knowbotic Research
1994 – A-Volve environment by Christa Sommerer and Laurent Mignonneau
1995 – the concept of Hypertext, attributed to Tim Berners-Lee
1996 – Global Interior Project installation by Masaki Fujihata
1997 – Music Plays Images X Images Play Music concert by Ryuichi Sakamoto and Toshio Iwai
1998 – World Skin, a Photo Safari in the Land of War installation by Jean-Baptiste Barrière and Maurice Benayoun
1999 – Difference Engine #3 by construct and Lynn Hershman
2000 – Vectorial Elevation, Relational Architecture #4 installation by Rafael Lozano-Hemmer
2001 – polar installation by Carsten Nicolai and Marko Peljhan
2002 – n-cha(n)t installation by David Rokeby
2003 – Can You See Me Now? participatory game by Blast Theory and Mixed Reality Lab
2004 – Listening Post installation by Ben Rubin and Mark Hansen
2005 – MILKproject installation and project by Esther Polak, Ieva Auzina and RIXC – Riga Centre for New Media Culture
2006 – The Messenger installation by Paul DeMarinis
2007 – Park View Hotel installation by Ashok Sukumaran
2008 – Image Fulgurator by Julius von Bismarck (Germany)
2009 – Nemo Observatorium by Laurence Malstaf (Belgium)
2010 – The Eyewriter by Zachary Lieberman, Evan Roth, James Powderly, Theo Watson, Chris Sugrue, Tempt1
2011 – Newstweek by Julian Oliver (NZ) and Danja Vasiliev (RU)
2012 – Memopol-2 by Timo Toots (EE)
2013 – Pendulum Choir By Michel Décosterd (CH), André Décosterd (CH)
Distinction – Rain Room by rAndom International (GB)
Distinction – Voices of Aliveness by Masaki Fujihata (JP)
2014 – Loophole for All by Paolo Cirio (IT)
2016 – Can you hear me? by Mathias Jud (DE), Christoph Wachter (CH)
Internet-related categories
In the categories "World Wide Web" (1995–96) and ".net" (1997–2000), interesting web-based projects were awarded, based on criteria like web-specificity, community-orientation, identity and interactivity. In 2001, the category became broader under the new name "Net Vision / Net Excellence", with rewards for innovation in the online medium.
World Wide Web
1995 – Idea Futures by Robin Hanson
1996 – Digital Hijack by etoy
Second prizes: HyGrid by SITO and Journey as an exile
.net
1997 – Sensorium by Taos Project
1998 – IO_Dencies Questioning Urbanity by Knowbotic Research
1999 – Linux by Linus Torvalds
2000 – In the Beginning... Was the Command Line (excerpts) by Neal Stephenson
Net Vision / Net Excellence
2001 – Banja by Team cHmAn and "PrayStation" by Joshua Davis
2002 – Carnivore by Radical Software Group and "They Rule" by Josh On and Futurefarmers
2003 – Habbo Hotel and Noderunner by Yury Gitman and Carlos J. Gomez de Llarena
2004 – Creative Commons
2005 – Processing by Benjamin Fry, Casey Reas and the Processing community
2006 – The Road Movie by exonemo
Digital Communities
A category begun in 2004 with support from SAP (and with a separate ceremony in New York City two months before the main Ars Electronica ceremony) to celebrate the 25th birthday of Ars Electronica. Two Golden Nicas were awarded that year.
2004 – Wikipedia and The World Starts With Me
Distinction:
Open-Clothes
2005 – Akshaya, an information technology development program in India
Distinction: Free Software Foundation (USA) and Telestreet – NewGlobalVision (Italy)
2006 – canal*ACCESSIBLE
Distinction:
Codecheck (Roman Bleichenbacher CH)
Proyecto Cyberela – Radio Telecentros (CEMINA)
Honorary Mentions:
Arduino (Arduino)
Charter97.org – News from Belarus
CodeTree
MetaReciclagem
Mountain Forum
Northfield.org
Pambazuka News (Fahamu)
Semapedia
stencilboard.at (Stefan Eibelwimmer (AT), Günther Kolar (AT))
The Freecycle Network
The Organic City
UgaBYTES Initiative (UG)
2007 – Overmundo
2008 – 1 kg more
Distinction: PatientsLikeMe and Global Voices Online
2009 – HiperBarrio by Álvaro Ramírez and Gabriel Jaime Vanegas
Distinction:
piratbyran.org
wikileaks.org
Honorary Mentions:
hackmeeting.org
pad.ma
Maneno
femalepressure.net
metamute.org
ubu.com
canchas.org
feraltrade.org
flossmanuals.net
wikiartpedia.org
changemakers.net
vocesbolivianas.org
2010 – Chaos Computer Club
2011 – Fundacion Ciudadano Inteligente
Distinction:
Bentham Papers Transcription Initiative (Transcribe Bentham) (UK)
X_MSG
2012 – Syrian people know their way
2013 – El Campo de Cebada by El Campo de Cebada (ES)
Distinction: Refugees United by Christopher Mikkelsen (DK), David Mikkelsen (DK)
Distinction: Visualizing Palestine by Visualizing Palestine (PS)
2014 – Project Fumbaro Eastern Japan by Takeo Saijo (JP)
See also
List of computer-related awards
References
External links
ARS ELECTRONICA ARCHIVE - PRIX
Past winners
Past winners (in German, more detailed)
Prix Ars Electronica 1987–1990
Awards established in 1987
Arts awards in Austria
Animation awards
Computer-related awards
Digital media
New media art festivals |
10734234 | https://en.wikipedia.org/wiki/Strat-O-Matic%20Football | Strat-O-Matic Football | Strat-O-Matic Pro Football is a tabletop board game that was first produced by the Strat-O-Matic game company in 1968. The game is a statistically based sports game that simulates the play of American football. Each player's statistics are gathered, analyzed, and then converted into numerical results which reflect each player's production for a given year. These numerical results are placed on a set of cards, with each team having its own set. In addition, a team's defensive ratings for a season are converted into card data that determines how many yards may be gained against that defense.
Gameplay
The original game is played with dice and cards. In playing a typical game, each athlete is represented by a player card, on which are printed various ratings and result tables for dice rolls. A player, who may play solitaire or against another player, is in charge of making strategic and personnel decisions for his/her team, while determining the results of his/her decisions by cross-referencing dice rolls with a system of printed charts and tables. A game of Strat-O-Matic Pro Football takes approximately 60 minutes to play.
Strat-O-Matic Pro Football is also available as a personal computer game faithfully adapted from the board game. One of the main features of the computer game is the inclusion of team-specific computer managers which will call plays and make player substitutions to simulate all of the real-life NFL teams Strat-O-Matic has released on the computer game (approximately 20 seasons from 1957 through the present). These computer managers present a competitive challenge to those who may find it difficult to find a live opponent for the board game. In addition, computer game players of Strat-O-Matic Pro Football may play head-to-head online and there are many customer-created leagues set up specifically for head-to-head play.
Although Strat-O-Matic Football is a simplification of the complicated game it simulates, there is a great deal of strategy involved. For each play, the person or computer controlling the offense and the person or computer controlling the defense each secretly choose a play to call from among a handful of plays; the defense also chooses any player movement. If the defensive play and/or player movement are effective against the offensive play chosen (which is the defensive team's goal), the play is less likely to succeed for the offense. For example, if the defense calls "pass" and the offense calls a passing play, the play is less likely to succeed; further, if the defense double-teams a specific receiver and the pass is intended for that receiver, the play is significantly less likely to succeed, though double-teaming may leave other types of plays more open. Field goals, punts, and kickoffs are also handled individually within the game, and while there is, as in the NFL, strategy in deciding when to use these plays (for example, whether on fourth down to attempt a field goal, to punt, or to go for a first down), there is little strategy associated with the kicking plays themselves.
Each game is decided by an approximately equal amount of strategy, player talent (cards and ratings of players used in the game), and luck. A coach may win a game with a lot of luck but will likely not be successful in the long term without either good strategy or good players (or both).
When two humans are playing a game of Strat-O-Matic Football there is a significant amount of “cat-and-mouse” type strategy. For example, coach A may call a long pass on 3rd-and-2 yards to go in order to take advantage of his belief that coach B may weaken his long pass defense (by emptying his long pass zone) in an effort to strengthen his defense against shorter yardage plays (specifically runs and flat/lookin passes) on that specific play. However, coach B may be careful not to empty his long pass zone in this situation in order not to give up the big play. On the next 3rd-and-2 both coaches will be aware of what occurred on the previous situation and may adjust their strategy accordingly. Coaches who have good intuition in predicting the plays their opponents will call may gain a competitive advantage.
There are many coaching styles of playing the game which leads to diversity in playing games against different opponents. Some coaches like to call very few risky offensive plays as they call mostly runs and low risk flat/lookin passes while other coaches like to call a significant number of risky and potentially rewarding long passes. Some coaches call defensive plays with less risk such as calling run or pass without moving players (i.e. without weakening one zone to strengthen another) while other coaches gamble more on defense. This diversity is considered to be a strong point of the game as a whole and is a testament to the realism of the game as this coaching style diversity also exists in the NFL.
Modes of play
There are two primary modes of play for Strat-O-Matic Computer Football:
Play solo against the computer manager either setting up a league or simply playing a non-league game;
Play online against others live: one participant (the “host”) gives his IP address to the other participant, who enters it to join the host, either within a league or in a non-league game.
League types
There are two primary types of leagues (either solo or online):
Stock league in which NFL teams in their entirety are used without any modification to team rosters. For example, many people play solo stock leagues in which they play an entire season for one or more NFL teams (usually for their favorite NFL team) while the computer auto-plays all of the other games in that season. As another example there are many stock leagues for online play in which each participant controls one NFL team for an entire season.
Draft league in which teams are created from a shared pool of NFL players. For example, there are many draft leagues for online play in which each participant drafts from a pool of all NFL players from a particular season (often the most recently completed NFL season) and participants compete in the league with the team of players they drafted.
Alternate versions
In 1976 Strat-O-Matic produced the first version of its college football game. This board game is similar to the pro version in play calling, outcomes, timing and use of the 20 card "split" deck. The differences between the pro and college game are listed below.
Once the offensive and defensive plays have been called, the result is derived from the differential of natural numbers (offense minus defense) on two simultaneously played cards. The difference is then used to enter the proper cell of the team sheet (odd numbers use the offensive sheet, even the defensive) to get the result; a simplified sketch of this mechanic follows the list. Thus the randomness of the dice roll in the pro version is eliminated. However, since the offensive and defensive players can both play numbers 1 to 10 (in addition to the play call), the basic game play calling is more complex than that of the pro version;
No individual players are used per se. Each team is represented by two sheets - a red and blue two sided scrimmage sheet and a special teams sheet (kickoffs, returns, etc.). In spite of this, Strat-O-Matic did include a brochure listing the lead offensive playmakers for the given team and given year;
Solitaire play is much more difficult in the first college version (because of the absence of dice). The game is really designed to be played face to face by two to four players;
Finally, the first college edition features the opportunity to play teams from different times. The game includes teams from the 1950s, 1960s and 1970s and is designed so that the matchups accurately represent the strengths of schedules.
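A minimal Python sketch of the card-differential mechanic described above, using made-up sheet entries (the actual team sheets are proprietary and far more detailed), might look like this:

```python
# Hypothetical team-sheet fragments mapping a differential to an outcome.
# Per the rules above: odd differentials read the offensive sheet,
# even differentials the defensive sheet.
OFFENSIVE_SHEET = {1: "gain 4 yards", 3: "gain 12 yards", 5: "incomplete pass"}
DEFENSIVE_SHEET = {0: "no gain", 2: "gain 2 yards", 4: "loss of 3 yards"}

def resolve_play(offense_card: int, defense_card: int) -> str:
    """Resolve one snap from two simultaneously played cards (numbers 1-10)."""
    diff = abs(offense_card - defense_card)
    sheet = OFFENSIVE_SHEET if diff % 2 == 1 else DEFENSIVE_SHEET
    return sheet.get(diff, "no gain")   # placeholder for cells not shown here

print(resolve_play(7, 4))   # differential 3 -> offensive sheet -> "gain 12 yards"
```

In the real game each coach chooses which number to play rather than drawing at random, which is what makes the college version's play calling a contest of prediction.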
The company only produced the version for a couple of years and replaced it with the second version which plays almost exactly as the pro football game.
Plays
There are five primary running plays and five primary passing plays for the offense to choose from and the offense must also choose the intended target for the play: the ball carrier for a running play or the intended receiver for a passing play.
The five running plays are End Run Left, Off Tackle Left, Linebuck, Off Tackle Right, and End Run Right. Choice of left or right makes a difference in the yardage gained or lost on a play only in two ways: a) determining which offensive and defensive players are involved in trying to block for or tackle the ball carrier and b) whether or not the defense has strengthened its run defense on that side. For example, if the offensive coach believes that the defensive coach will strengthen his run defense against End Run Left then if the offensive coach still wants to run an End Run he should choose End Run Right.
Each of the three types of runs (End Run, Off Tackle, and Linebuck) uses its own columns on the ball carrier’s cards and on the defensive cards. Some ball carriers are very poor on Linebucks compared to the other two run types, some other ball carriers are very poor on End Runs compared to the other two run types, and some ball carriers are fairly balanced. Choice of which running play to use is often based upon running behind the better offensive linemen and avoiding running towards the better defensive linemen; for example, if the middle of the offensive line (Center and both Guards) is its strength then Linebuck may be a good choice unless the offensive coach believes that the defensive coach may strengthen his defense against the Linebuck.
The five passing plays are Flat Pass Left, Lookin, Flat Pass Right, Short Pass, and Long Pass. Flat Pass Left, Lookin, and Flat Pass Right are the same type of passing play and use the same columns for resolution; choice of left, right, or middle (lookin) makes a difference in play resolution in two ways: a) determining which defender is involved in attempting to defend the play and b) whether or not the defense has strengthened (or weakened) its flat/lookin defense in that specific target zone. Flat/lookin passes are designed to gain approximately 4-10 yards when successful although they may go for much more or much less (including negative yardage for flat passes but not for lookin passes) depending upon the defensive play called, player movement, player talent (cards and ratings), and the luck of the dice on the play. Short passes are designed to gain 10-15 yards when successful although they may go for more yardage. Long passes are designed to gain 25 or more yards when successful. Flat/lookin passes are the least risky of the three pass types as interception and sack chances are low while long passes incur the greatest risk of interception or sack of the three pass types.
There are four defensive plays to choose from: Run, Pass, Short Yardage, and Run-Key. Run and Pass are the primary plays; Short Yardage focuses even more than Run on supporting against a running play (while further weakening the defense against a pass), and Run-Key attempts to completely stop a specific ball carrier (while very significantly weakening the defense against a pass). In addition, many variations of defensive player movement are allowed: all linebackers and the free safety may move into some adjacent zones (or farther in some cases), including blitzing, and may double-team receivers (within limits). For example, a defensive coach may call pass (which does not guard well against a run) and move one or more linebackers to the line of scrimmage to blitz (attempting to sack the quarterback for a loss of yards) while also strengthening support against the run; the advantage of blitzing linebackers while calling pass is the increased chance of a sack and the stronger run support, but the disadvantage is that each blitzing linebacker vacates his flat or lookin zone, leaving that type of pass with significantly less support. As another example, the free safety, who is generally responsible for helping defend the long pass, may instead strengthen another zone such as a flat or lookin zone. The offense cannot change the play it has already selected, but it may call more long passes in the future to exploit the open long pass zone if the defense moves its free safety again.
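The interplay of calls amounts to a simultaneous guessing game. A toy Python model of that structure (the probabilities here are invented for illustration and are not taken from the published charts):

```python
import random

# Invented baseline success rates per offensive play type.
BASE_SUCCESS = {"end run": 0.50, "linebuck": 0.50,
                "flat pass": 0.55, "long pass": 0.35}

def play_succeeds(offense_call: str, defense_call: str) -> bool:
    """Both coaches commit secretly; a correct defensive guess hurts the offense."""
    p = BASE_SUCCESS[offense_call]
    offense_is_pass = "pass" in offense_call
    defense_guessed_pass = defense_call == "pass"
    if offense_is_pass == defense_guessed_pass:
        p -= 0.20          # defense called the right play type
    else:
        p += 0.10          # defense called the wrong play type
    return random.random() < p

print(play_succeeds("long pass", "run"))   # a long pass against a run call
```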
Similarities and differences with regard to fantasy football
Strat-O-Matic Football draft leagues appeal to many fans of fantasy football. In a fantasy football league participants compile a team of NFL players and as the current NFL season progresses the success of participants' teams is measured by the performance of the NFL players on their team. In a Strat-O-Matic Football draft league participants compile a team of NFL players whose cards and ratings measuring their in-game performance are determined from a prior NFL season – usually the most recently completed season although there are many draft leagues which use much older seasons instead based upon the preferences of members within the league – and participants play games using Strat-O-Matic Football. In this way participants are not only building their team of NFL players but they are also greatly affecting the outcome with the strategies they use during the game.
In a Strat-O-Matic Football draft league participants craft their teams by drafting individual players based in large part on their own preferences. For example, one league participant may focus on drafting a team with a good running game (by drafting good running backs and offensive linemen early in the draft) while another participant may focus on drafting a good run defense and/or a good pass defense; other participants may focus on a more balanced approach. It is up to individual participant to decide how to craft his team during the draft and often this decision is based upon the participant’s style of play during a game.
Both Strat-O-Matic Football and fantasy football have many “keeper” draft leagues in which NFL players are retained from one year to the next. In a keeper league participants who are able to evaluate and draft good players over the long term will excel. For example, if a keeper draft league participant drafts a player who is young and remains good for several years, then that participant will gain the benefit of the player's performances for those years unless the participant trades the player to another participant. One additional appeal of Strat-O-Matic Football keeper leagues is that it gives the participants an additional interest in watching the performances of their players during NFL games. A great majority of online Strat-O-Matic Football draft leagues are keeper leagues.
Strat-O-Matic also has a college football computer game which uses the same computer game engine as the pro game with rules modified to match the rules of college football. All of the 1-A and 1-AA teams are represented in the college game and while each player on each team has his own card or rating these cards and ratings are based more on team performance than on individual performance (unlike the pro game in which each player's card or rating represents his individual play for that season). Just as in the pro game Strat-O-Matic's college football game may be played against the computer or online against a live opponent in a league or non-league game. Unlike the pro game the college game teams do not each have their own customized computer manager but each team does have an appropriate one out of a couple dozen overall computer managers which will approximately reflect the team's real-life performance.
Notes
External links
Official company website
StratFanForum.com
SOMFootball.com: Portal website for all things related to Strat-O-Matic Football
Board games introduced in 1968
Sports board games
Fantasy sports |
45415985 | https://en.wikipedia.org/wiki/Poly-1 | Poly-1 | The Poly-1 was a desktop computer designed in New Zealand for educational use.
Background
The Poly-1 was developed in 1980 by Neil Scott and Paul Bryant, who at the time were teaching electronics engineering at Wellington Polytechnic (now Massey University's Wellington campus), which the computer was named after. As with the Acorn BBC Micro in Britain, Scott and Bryant saw the increasing need for a fully integrated computer to serve the New Zealand school market, which had the blessing of then Education Minister Merv Wellington. After Scott and Bryant gathered a team of engineers and designers, DFC New Zealand Limited and Lower Hutt-based Progeni Systems — founded by Perce Harpham in 1968 — formed a joint venture, Polycorp, to market and build the Poly-1, which entered production in 1981.
A distinctive fibreglass casing was designed to house the computer and monitor as an all-in-one unit, in a similar fashion to the Commodore PET. The Poly-1 came standard with a colour display and 64KB of RAM.
The Proteus was available as an accessory to make the Poly-1 network-capable, allowing up to 32 of the computers to be linked.
One of the earliest clients for the Poly in classroom networks was Rotorua Boys' High School; one of its staff, Derek Williams, was seconded in 1984 to work as a computer programmer and developer of educational software for Progeni Systems Ltd on the FORGE Computer Learning System for New Zealand schools, under the supervision of Emeritus Professor John Tiffin. FORGE was also used for training by the Victorian Fire Brigade in Australia and, for the first time, allowed New Zealand educators to design and deliver curricula on class computer networks.
There remains interest in the Poly with an ongoing Poly Preservation Project.
Decline
Despite strong support from teachers for the Poly-1, the Muldoon Government reneged on a NZD$10m Ministry of Education agreement to purchase 1000 units over 5 years, after coming under pressure from Cabinet ministers and lobbyists who favoured economic deregulation. In particular, the then Minister of Regional Development, Warren Cooper, remarked that he and his colleagues "could see no reason why Government should spend money so that teachers could do even less work".
The Poly-1 cost up to several thousand dollars per unit, and aggressive undercutting from Apple Computer further weakened the Poly-1's place in the market. However, a sizeable consignment was still able to be sold to the Australian Defence Force and a number of local organisations.
A Poly 2 model with a separate keyboard and monitor, and a Poly C designed for the Chinese market, were produced, but in much smaller numbers. Plans to sell to China fell through after the Tiananmen Square protests of 1989. Additionally, by the late 1980s, the IBM PC was increasingly becoming dominant.
The Poly-1 was discontinued in 1989, and the following year Progeni was liquidated, after the collapse of DFC New Zealand and the subsequent bailout of the Bank of New Zealand, to which Progeni still owed debts.
Specifications
CPU: Motorola 6809 with 6840 clock
Networking: Motorola 6854 Proteus, max 32 interconnected
See also
Aamber Pegasus
References
External links
The Poly Preservation Project
Kiwi Nuggets Forum – Poly 1 Educational Computer
University of Auckland – Computing History Displays: Fourth Floor – Computers Made in New Zealand
Old-Computers.com – POLYCORP Poly 1 Educational Computer
University of Hawaii – Neil Scott
Terry Stewart's (Tezza's) Classic Computing Collection
Perce Harpham – The Story of Progeni
Computer-related introductions in 1981
Information technology in New Zealand
6809-based home computers |
2008418 | https://en.wikipedia.org/wiki/Isochronous%20media%20access%20controller | Isochronous media access controller | Isochronous media access controller (I-MAC) is a media access control whereby data must be transferred isochronously—in other words, the data must be transmitted at a steady rate, without interruption.
Media access control
Telecommunications equipment |
25530924 | https://en.wikipedia.org/wiki/RNDIS | RNDIS | The Remote Network Driver Interface Specification (RNDIS) is a Microsoft proprietary protocol used mostly on top of USB. It provides a virtual Ethernet link to most versions of the Windows, Linux, and FreeBSD operating systems. A partial RNDIS specification is available from Microsoft, but Windows implementations have been observed to issue requests not included in that specification, and to have undocumented constraints.
The protocol is tightly coupled to Microsoft's programming interfaces and models, most notably the Network Driver Interface Specification (NDIS), which are alien to operating systems other than Windows. This complicates implementing RNDIS on non-Microsoft operating systems, but Linux, FreeBSD, NetBSD and OpenBSD implement RNDIS natively.
The USB Implementers Forum (USB-IF) defines at least three non-proprietary USB communications device class (USB CDC) protocols with comparable "virtual Ethernet" functionality; one of them (CDC-ECM) predates RNDIS and is widely used for interoperability with non-Microsoft operating systems, but does not work with Windows.
Most versions of Android include RNDIS USB functionality. For example, most Samsung smartphones have the capability and use RNDIS over USB to operate as a virtual Ethernet card that connects the host PC to the mobile or Wi-Fi network in use by the phone, effectively working as a mobile broadband modem or a wireless card for mobile hotspot tethering.
See also
Ethernet over USB
Notes and references
External links
Overview of Remote NDIS (RNDIS)
Galaxy S9 Tactical Edition support RNDIS protocol
Microsoft application programming interfaces
Computer networking |
1014472 | https://en.wikipedia.org/wiki/Naim%20%28software%29 | Naim (software) | naim is a messaging and chat program written by Daniel Reed in C; it supports the protocols AIM, ICQ, IRC, and RPI's Lily CMC protocols. Unlike most messaging clients, it is not graphical; it runs from the console using the ncurses library. naim is free software, licensed under the GNU GPL.
naim is a multiplatform program. It is primarily aimed at Unix-like systems.
naim uses the AOL instant messenger TOC protocol instead of the OSCAR protocol. This means naim lacks some features other instant messaging services have.
References
Further reading
Martin Brown (7 September 2005) Free IRC clients, Choosing the best IRC client for your needs, Free Software Magazine issue 7, web page 3
Robert Shingledecker, John Andrews, Christopher Negus, The official Damn Small Linux book: the tiny adaptable Linux that runs on anything, Prentice Hall, 2008, , p. 64
External links
naim web site
AIM (software) clients
Internet Relay Chat clients
Free Internet Relay Chat clients
Instant messaging clients for Linux
Free instant messaging clients
Software that uses ncurses |
551232 | https://en.wikipedia.org/wiki/Software%20Park%20Thailand | Software Park Thailand | Software Park Thailand is a government agency under the National Science and Technology Development Agency. It was established to stimulate the development of the Thai software industry. It maintains a close association with the private sector. It is in Pak Kret, Nonthaburi Province.
Other software parks in Thailand
E-saan Software Park. Khon Kaen Province
MISOLIMA Software and Technology Park, Chiang Mai Province
Software Park Phuket Province
Samui Software Park. Ko Samui
Nakhon Ratchasima Province Software Park
See also
Thailand Science Park
National Science and Technology Development Agency
Thailand Board of Investment
References
External links
Software Park Thailand
MISOLIMA Software and Technology Park
Software Park Phuket
Software Park Korat
Information technology in Thailand
National Science and Technology Development Agency |
794541 | https://en.wikipedia.org/wiki/STU-III | STU-III | STU-III (Secure Telephone Unit - third generation) is a family of secure telephones introduced in 1987 by the NSA for use by the United States government, its contractors, and its allies. STU-III desk units look much like typical office telephones, plug into a standard telephone wall jack and can make calls to any ordinary phone user (with such calls receiving no special protection, however). When a call is placed to another STU-III unit that is properly set up, one caller can ask the other to initiate secure transmission. They then press a button on their telephones and, after a 15-second delay, their call is encrypted to prevent eavesdropping. There are portable and militarized versions and most STU-IIIs contained an internal modem and RS-232 port for data and fax transmission. Vendors were AT&T (later transferred to Lucent Technologies), RCA (Now L-3 Communications, East) and Motorola.
The STU-III is no longer in service with the U.S. Government, with the last cryptographic keys for the units expiring on December 31, 2009. It has been replaced by the STE (Secure Terminal Equipment) and other equipment using the more modern Secure Communications Interoperability Protocol (SCIP).
Versions
STU-III/Low Cost Terminal (LCT) designed for use in office environment by all types of users. (Motorola Sectel 1500, Lucent Technologies/GD 1100 and 1150)
STU-III/Cellular Telephone (CT) is interoperable with all STU-III versions. Works in all continental US mobile network and in most of the foreign cellular networks.
STU-III/Allied (A) specialized version of the STU-III/LCT that is compatible with the STU-II. It retains all basic STU-III functions and capabilities and incorporates STU-II BELLFIELD KDC, STU-II net, and STU-II multipoint modes of operation.
STU-III/Remote Control Interface (R or RCU)
STU-III/MultiMedia Terminal (MMT)
STU-III/Inter Working Function (IWF)
STU-III/Secure Data Device (SDD)
STU-III/CipherTAC 2000 (CTAC)
Security
Most STU-III units were built for use with what NSA calls Type 1 encryption. This allows them to protect conversations at all security classification levels up to Top Secret, with the maximum level permitted on a call being the lower clearance level of the two persons talking. At the height of the Commercial COMSEC Endorsement Program, Type 2, 3, and 4 STU-IIIs were manufactured, but they saw little commercial success.
Two major factors in the STU-III's success were the Electronic Key Management System (EKMS) and the use of a removable memory module in a plastic package in the shape of a house key, called a KSD-64A. The EKMS is believed to be one of the first widespread applications of asymmetric cryptography. It greatly reduced the complex logistics and bookkeeping associated with ensuring each encryption device has the right keys and that all keying material is protected and accounted for.
The KSD-64A contains a 64kbit EEPROM chip that can be used to store various types of keying and other information. A new (or zeroized) STU-III must first have a "seed key" installed. This key is shipped from NSA by registered mail or Defense Courier Service. Once the STU-III has its seed key, the user calls an 800-number at NSA to have the seed key converted into an operational key. A list of compromised keys is downloaded to the STU-III at this time. The operational key is supposed to be renewed at least once a year.
The operational key is then split into two components, one of which replaces the information on the KSD-64A, at which point it becomes a Crypto Ignition Key or CIK. When the CIK is removed from the STU-III telephone neither unit is considered classified. Only when the CIK is inserted into the STU-III on which it was created can classified information be received and sent.
When a call "goes secure", the two STU-III's create a unique key that will be used to encrypt just this call. Each unit first makes sure that the other is not using a revoked key and if one has a more up-to-date key revocation list it transmits it to the other. Presumably the revocation lists are protected by a digital signature generated by NSA.
While there have been no reports of STU-III encryption being broken, there have been claims that foreign intelligence services can recognize the lines on which STU-IIIs are installed and that un-encrypted calls on these lines, particularly what was said while waiting for the "go secure" command to complete, have provided valuable information.
Use
Hundreds of thousands of STU-III sets were produced and many were still in use as of 2004. STU-III replaced earlier voice encryption devices, including the KY-3 (1960s), the STU-I (1970) and the STU-II (1975). The STU-II had some 10,000 users. These, in turn, replaced less secure voice scramblers. Unlike earlier systems, the STU-III's encryption electronics are completely contained in the desk set. Further, the reduced bandwidth required by a STU-III permitted it to be used for encrypted voice communications even over limited conduits such as the commercial maritime communication satellites of the day. The STU-III is no longer in use, having been replaced by the STE (Secure Terminal Equipment) or OMNI, more modern, all digital systems that overcome many of the STU-III's problems, including the 15 second delay.
Operational difficulties in using STU-III phones hindered coordination between the Federal Aviation Administration and NORAD during the September 11, 2001 attacks on New York and Washington. See Communication during the September 11 attacks.
STE succeeded STU-III in the 1990s. Similar to STU-III, an STE unit physically resembles an ordinary telephone. Besides connecting to a regular wall phone jack (Public Switched Telephone Network), the STE was originally designed to be connected to Integrated Services Digital Network (ISDN) lines. As a result, in addition to having secured voice conversations, users can also use an STE unit for classified data and fax transmissions. Transfer rate of an STE is also considerably higher (STU-III: up to 9 kbit/s; STE: up to 128 kbit/s). Lastly, an STE unit is backward compatible with an STU-III unit when both units are connected to the PSTN.
The heart of an STE unit is the Fortezza Plus (KOV-14) Crypto Card, which is a PCMCIA card. It contains both the cryptographic algorithms and the key(s) used for encryption. Cryptographic algorithms include BATON, FIREFLY, and the SDNS signature algorithm. When the Crypto Card is removed from the STE unit, neither the phone nor the card is considered classified. BATON is a block cipher developed by the NSA with a block size of 128 bits and key size of 320 bits. FIREFLY, on the other hand, is a key distribution protocol developed by the NSA. The FIREFLY protocol uses public key cryptography to exchange keys between two participants of a secured call.
Both STU-III and STE are built on technologies that are proprietary, and details of the cryptographic algorithms (BATON and FIREFLY) are classified. Although the secrecy of the algorithms does not make the device less secure, it does limit the usage to within the U.S. government and its allies. Within the Department of Defense, Voice over IP (VoIP) has slowly emerged as an alternative solution to STU-III and STE. The high bandwidth of IP networks makes VoIP attractive because it results in voice quality superior to STU-III and STE. To secure VoIP calls, VoIP phones are connected to classified IP networks (e.g. Secret Internet Protocol Router Network – SIPRNET).
Both allies and adversaries of the United States are interested in STU-III, STE, and other secured voice technologies developed by the NSA. To date, there has not been any reported cryptanalysis on the encryption algorithms used by the STU-III and STE. Any breaks in these algorithms could jeopardize national security.
Information about STU-III is very limited despite the fact that it is out of production. Because of the sensitive nature of the subject, there are few relevant documents available. The majority of the information available originates from the manufacturers (e.g. L-3 Communications) of STU-III and STE.
See also
SIGSALY
STU-I
STU-II
KY-57
KG-84
SCIP
Secure Terminal Equipment
References
External links
STU-III Handbook
STU-III Description, Technical Specification, Pictures
Report on VOIP and Secure Communications
The NAVY INFOSEC WebSite on STU-III and STE
National Security Agency encryption devices
Secure telephones |
69987734 | https://en.wikipedia.org/wiki/Personio | Personio | Personio is a German software company headquartered in Munich. The company develops software that simplifies or automates HR management processes for smaller companies. As a business-to-business (B2B) company, Personio had more than 5,000 customers in Germany and other countries as of 2021, most of which were small and medium-sized enterprises. With a valuation of $6.3 billion, Personio was one of the most valuable unicorns in Germany.
History
Personio was founded in 2015 by Hanno Renner, Ignaz Forstmeier, Roman Schumacher and Arseniy Vershinin, all of whom had studied at the Technical University of Munich. The company emerged from a program of the Center for Digital Technology and Management. The cloud-based software was primarily used by other start-up companies in the early days, and was thus able to cover a market niche. Personio's software is provided for a subscription payment and can simplify or completely automate processes in the area of human resource management.
In October 2021, Personio announced a new round of financing in the amount of 230 million euros. Investors included various venture capitalists from the United States such as Greenoaks Capital Partners, Altimeter and Alkeon Capital.
References
German companies established in 2015
Companies based in Munich
Software companies established in 2015
Software companies of Germany |
69153575 | https://en.wikipedia.org/wiki/Ministry%20of%20Information%20Technology%20%28Maharashtra%29 | Ministry of Information Technology (Maharashtra) | The Ministry of Information Technology is a ministry of the Government of Maharashtra. It is responsible for preparing annual plans for the development of Maharashtra state.
The Ministry is headed by a cabinet-level minister. Uddhav Thackeray, the Chief Minister of Maharashtra, is the current Minister of Information Technology.
Head office
List of Cabinet Ministers
List of Ministers of State
List of all ministries
Ministry of General Administration (Maharashtra)
Ministry of Information and Public Relations (Maharashtra)
Ministry of Information Technology (Maharashtra)
Ministry of Law and Judiciary (Maharashtra)
Ministry of Home Affairs (Maharashtra)
Ministry of Public Works (Excluding Public Undertakings) (Maharashtra)
Ministry of Public Works (Including Public Undertakings) (Maharashtra)
Ministry of Finance (Maharashtra)
Ministry of Planning (Maharashtra)
Ministry of Revenue (Maharashtra)
Ministry of State Excise (Maharashtra)
Ministry of Food, Civil Supplies and Consumer Protection (Maharashtra)
Ministry of Special Assistance (Maharashtra)
Ministry of Social Justice (Maharashtra)
Ministry of Forests Department (Maharashtra)
Ministry of Environment and Climate Change (Maharashtra)
Ministry of Tourism (Maharashtra)
Ministry of Skill Development and Entrepreneurship (Maharashtra)
Ministry of Food and Drug Administration (Maharashtra)
Ministry of Animal Husbandry Department (Maharashtra)
Ministry of Agriculture (Maharashtra)
Ministry of Labour (Maharashtra)
Ministry of Water Resources (Maharashtra)
Ministry of Command Area Development (Maharashtra)
Ministry of Public Health (Maharashtra)
Ministry of Energy (Maharashtra)
Ministry of Rural Development (Maharashtra)
Ministry of Urban Development (Maharashtra)
Ministry of School Education (Maharashtra)
Ministry of Medical Education (Maharashtra)
Ministry of Higher and Technical Education (Maharashtra)
Ministry of Industries (Maharashtra)
Ministry of Textiles (Maharashtra)
Ministry of Protocol (Maharashtra)
Ministry of Housing (Maharashtra)
Ministry of Cultural Affairs (Maharashtra)
Ministry of Minority Development and Aukaf (Maharashtra)
Ministry of Marathi Language (Maharashtra)
Ministry of Woman and Child Development (Maharashtra)
Ministry of Water Supply (Maharashtra)
Ministry of Parliamentary Affairs (Maharashtra)
Ministry of Dairy Development (Maharashtra)
Ministry of Sports and Youth Welfare (Maharashtra)
Ministry of Disaster Management (Maharashtra)
Ministry of Relief & Rehabilitation (Maharashtra)
Ministry of Other Backward Classes (Maharashtra)
Ministry of Socially and Educationally Backward Classes (Maharashtra)
Ministry of Vimukta Jati (Maharashtra)
Ministry of Nomadic Tribes (Maharashtra)
Ministry of Special Backward Classes Welfare (Maharashtra)
Ministry of Khar Land Development (Maharashtra)
Ministry of Earthquake Rehabilitation (Maharashtra)
Ministry of Majority Welfare Development (Maharashtra)
Ministry of Ex. Servicemen Welfare (Maharashtra)
Ministry of Sanitation (Maharashtra)
Ministry of Tribal Development (Maharashtra)
Ministry of Employment Guarantee (Maharashtra)
Ministry of Horticulture (Maharashtra)
Ministry of Co-operation (Maharashtra)
Ministry of Marketing (Maharashtra)
Ministry of Transport (Maharashtra)
Ministry of Fisheries Department (Maharashtra)
Ministry of Ports Development (Maharashtra)
Ministry of Soil and Water Conservation (Maharashtra)
Ministry of Mining Department (Maharashtra)
References
Government ministries of Maharashtra |
68367 | https://en.wikipedia.org/wiki/Computer%20chess | Computer chess | Computer chess includes both hardware (dedicated computers) and software capable of playing chess. Computer chess provides opportunities for players to practice even in the absence of human opponents, and also provides opportunities for analysis, entertainment and training.
Computer chess applications that play at the level of a chess master or higher are available on hardware from supercomputers to smart phones. Standalone chess-playing machines are also available. Stockfish, GNU Chess, Fruit, and other free open source applications are available for various platforms.
Computer chess applications, whether implemented in hardware or software, utilize different strategies than humans to choose their moves: they use heuristic methods to build, search and evaluate trees representing sequences of moves from the current position and attempt to execute the best such sequence during play. Such trees are typically quite large, thousands to millions of nodes. The computational speed of modern computers, capable of processing tens of thousands to hundreds of thousands of nodes or more per second, along with extension and reduction heuristics that narrow the tree to mostly relevant nodes, make such an approach effective.
The first chess machines capable of playing chess or reduced chess-like games were software programs running on digital computers early in the vacuum-tube computer age (1950s). The early programs played so poorly that even a beginner could defeat them. Within 40 years, in 1997, chess engines running on super-computers or specialized hardware were capable of defeating even the best human players. By 2006, programs running on desktop PCs had attained the same capability. In 2006, Monroe Newborn, Professor of Computer Science at McGill University, declared: "the science has been done". Nevertheless, solving chess is not currently possible for modern computers due to the game's extremely large number of possible variations.
Computer chess was once considered the "Drosophila of AI", the edge of knowledge engineering. But the field is now considered a scientifically completed paradigm, and playing chess is a mundane computing activity.
Availability and playing strength
Chess machines/programs are available in several different forms: stand-alone chess machines (usually a microprocessor running a software chess program, but sometimes as a specialized hardware machine), software programs running on standard PCs, web sites, and apps for mobile devices. Programs run on everything from super-computers to smartphones. Hardware requirements for programs are minimal; the apps are no larger than a few megabytes on disk, use a few megabytes of memory (but can use much more, if it is available), and any processor 300 MHz or faster is sufficient. Performance will vary modestly with processor speed, but sufficient memory to hold a large transposition table (up to several gigabytes or more) is more important to playing strength than processor speed.
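The large memory footprint goes mostly to the transposition table, a hash map from position keys to previously computed search results, so that positions reached by different move orders are not searched twice. A bare-bones Python sketch (real engines use Zobrist hashing, fixed-size tables and replacement policies, all omitted here):

```python
# Maps a 64-bit position hash to (depth searched, score, bound type, best move).
transposition_table: dict[int, tuple[int, int, str, str]] = {}

def tt_probe(key: int, depth: int):
    """Return a stored result only if it was searched at least as deeply."""
    entry = transposition_table.get(key)
    if entry is not None and entry[0] >= depth:
        return entry
    return None

def tt_store(key: int, depth: int, score: int, bound: str, best_move: str) -> None:
    transposition_table[key] = (depth, score, bound, best_move)
```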
Most available commercial chess programs and machines can play at super-grandmaster strength (Elo 2700 or more), and take advantage of multi-core and hyperthreaded computer CPU architectures. Top programs such as Stockfish have surpassed even world champion caliber players. Most chess programs comprise a chess engine connected to a GUI, such as Winboard or Chessbase. Playing strength, time controls, and other performance-related settings are adjustable from the GUI. Most GUIs also allow the player to set up and to edit positions, to reverse moves, to offer and to accept draws (and resign), to request and to receive move recommendations, and to show the engine's analysis as the game progresses.
There are a few chess engines such as Sargon, IPPOLIT, Stockfish, Crafty, Fruit, Leela Chess Zero and GNU Chess which can be downloaded (or source code otherwise obtained) from the Internet free of charge.
Types and features of chess software
Perhaps the most common type of chess software are programs that simply play chess. A human player makes a move on the board, the AI calculates and plays a subsequent move, and the human and AI alternate turns until one player resigns. The chess engine, which calculates the moves, and the graphical user interface (GUI) are sometimes separate programs. Different engines can be connected to the GUI, permitting play against different styles of opponent. Engines often have a simple text command-line interface, while GUIs may offer a variety of piece sets, board styles, or even 3D or animated pieces. Because recent engines are so capable, engines or GUIs may offer some way of handicapping the engine's ability, to improve the odds for a win by the human player. Universal Chess Interface (UCI) engines such as Fritz or Rybka may have a built-in mechanism for reducing the Elo rating of the engine (via UCI's UCI_LimitStrength and UCI_Elo parameters). Some versions of Fritz have a Handicap and Fun mode for limiting the current engine or changing the percentage of mistakes it makes or changing its style. Fritz also has a Friend Mode where during the game it tries to match the level of the player.
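UCI is a line-oriented text protocol spoken over the engine's standard input and output, and UCI_LimitStrength and UCI_Elo are the standard option names. A minimal Python sketch of a GUI-style exchange, assuming a Stockfish binary is installed and on the PATH:

```python
import subprocess

engine = subprocess.Popen(["stockfish"], stdin=subprocess.PIPE,
                          stdout=subprocess.PIPE, text=True)

def send(command: str) -> None:
    engine.stdin.write(command + "\n")
    engine.stdin.flush()

send("uci")                                           # handshake
send("setoption name UCI_LimitStrength value true")   # cap playing strength
send("setoption name UCI_Elo value 1500")             # roughly club level
send("position startpos moves e2e4")                  # position after 1.e4
send("go movetime 1000")                              # think for one second

for line in engine.stdout:
    if line.startswith("bestmove"):                   # e.g. "bestmove e7e5"
        print(line.strip())
        break
send("quit")
```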
Chess databases allow users to search through a large library of historical games, analyze them, check statistics, and formulate an opening repertoire. Chessbase (for PC) is a common program for these purposes amongst professional players, but there are alternatives such as Shane's Chess Information Database (Scid) for Windows, Mac or Linux, Chess Assistant for PC, Gerhard Kalab's Chess PGN Master for Android or Giordano Vicoli's Chess-Studio for iOS.
Programs such as Playchess allow users to play games against other players over the Internet.
Chess training programs teach chess. Chessmaster had playthrough tutorials by IM Josh Waitzkin and GM Larry Christiansen. Stefan Meyer-Kahlen offers Shredder Chess Tutor based on the Step coursebooks of Rob Brunia and Cor Van Wijgerden. World champions Magnus Carlsen's Play Magnus company released a Magnus Trainer app for Android and iOS. Chessbase has Fritz and Chesster for children. Convekta provides a large number of training apps such as CT-ART and its Chess King line based on tutorials by GM Alexander Kalinin and Maxim Blokh.
There is also software for handling chess problems.
Computers versus humans
After discovering refutation screening—the application of alpha–beta pruning to optimizing move evaluation—in 1957, a team at Carnegie Mellon University predicted that a computer would defeat the world human champion by 1967. It did not anticipate the difficulty of determining the right order to evaluate moves. Researchers worked to improve programs' ability to identify killer heuristics, unusually high-scoring moves to reexamine when evaluating other branches, but into the 1970s most top chess players believed that computers would not soon be able to play at a Master level. In 1968 International Master David Levy made a famous bet that no chess computer would be able to beat him within ten years, and in 1976 Senior Master and professor of psychology Eliot Hearst of Indiana University wrote that "the only way a current computer program could ever win a single game against a master player would be for the master, perhaps in a drunken stupor while playing 50 games simultaneously, to commit some once-in-a-year blunder".
In the late 1970s chess programs suddenly began defeating highly skilled human players. The year of Hearst's statement, Northwestern University's Chess 4.5 at the Paul Masson American Chess Championship's Class B level became the first to win a human tournament. Levy won his bet in 1978 by beating Chess 4.7, but it achieved the first computer victory against a Master-class player at the tournament level by winning one of the six games. In 1980 Belle began often defeating Masters. By 1982 two programs played at Master level and three were slightly weaker.
The sudden improvement without a theoretical breakthrough was unexpected, as many did not expect that Belle's ability to examine 100,000 positions a second—about eight plies—would be sufficient. The Spracklens, creators of the successful microcomputer program Sargon, estimated that 90% of the improvement came from faster evaluation speed and only 10% from improved evaluations. New Scientist stated in 1982 that computers "play terrible chess ... clumsy, inefficient, diffuse, and just plain ugly", but humans lost to them by making "horrible blunders, astonishing lapses, incomprehensible oversights, gross miscalculations, and the like" much more often than they realized; "in short, computers win primarily through their ability to find and exploit miscalculations in human initiatives".
By 1982, microcomputer chess programs could evaluate up to 1,500 moves a second and were as strong as mainframe chess programs of five years earlier, able to defeat a majority of amateur players. While only able to look ahead one or two plies more than at their debut in the mid-1970s, doing so improved their play more than experts expected; seemingly minor improvements "appear to have allowed the crossing of a psychological threshold, after which a rich harvest of human error becomes accessible", New Scientist wrote. While reviewing SPOC in 1984, BYTE wrote that "Computers—mainframes, minis, and micros—tend to play ugly, inelegant chess", but noted Robert Byrne's statement that "tactically they are freer from error than the average human player". The magazine described SPOC as a "state-of-the-art chess program" for the IBM PC with a "surprisingly high" level of play, and estimated its USCF rating as 1700 (Class B).
At the 1982 North American Computer Chess Championship, Monroe Newborn predicted that a chess program could become world champion within five years; tournament director and International Master Michael Valvo predicted ten years; the Spracklens predicted 15; Ken Thompson predicted more than 20; and others predicted that it would never happen. The most widely held opinion, however, stated that it would occur around the year 2000. In 1989, Levy was defeated by Deep Thought in an exhibition match. Deep Thought, however, was still considerably below World Championship level, as the reigning world champion, Garry Kasparov, demonstrated in two strong wins in 1989. It was not until a 1996 match with IBM's Deep Blue that Kasparov lost his first game to a computer at tournament time controls in Deep Blue versus Kasparov, 1996, game 1. This game was, in fact, the first time a reigning world champion had lost to a computer using regular time controls. However, Kasparov regrouped to win three and draw two of the remaining five games of the match, for a convincing victory.
In May 1997, an updated version of Deep Blue defeated Kasparov 3½–2½ in a return match. A documentary mainly about the confrontation was made in 2003, titled Game Over: Kasparov and the Machine.
With increasing processing power and improved evaluation functions, chess programs running on commercially available workstations began to rival top flight players. In 1998, Rebel 10 defeated Viswanathan Anand, who at the time was ranked second in the world, by a score of 5–3. However, most of those games were not played at normal time controls. Out of the eight games, four were blitz games (five minutes plus five seconds Fischer delay for each move); these Rebel won 3–1. Two were semi-blitz games (fifteen minutes for each side) that Rebel won as well (1½–½). Finally, two games were played as regular tournament games (forty moves in two hours, one hour sudden death); here it was Anand who won ½–1½. In fast games, computers played better than humans, but at classical time controls – at which a player's rating is determined – the advantage was not so clear.
In the early 2000s, commercially available programs such as Junior and Fritz were able to draw matches against former world champion Garry Kasparov and classical world champion Vladimir Kramnik.
In October 2002, Vladimir Kramnik and Deep Fritz competed in the eight-game Brains in Bahrain match, which ended in a draw. Kramnik won games 2 and 3 by "conventional" anti-computer tactics – play conservatively for a long-term advantage the computer is not able to see in its game tree search. Fritz, however, won game 5 after a severe blunder by Kramnik. Game 6 was described by the tournament commentators as "spectacular". Kramnik, in a better position in the early middlegame, tried a piece sacrifice to achieve a strong tactical attack, a strategy known to be highly risky against computers who are at their strongest defending against such attacks. True to form, Fritz found a watertight defense and Kramnik's attack petered out leaving him in a bad position. Kramnik resigned the game, believing the position lost. However, post-game human and computer analysis has shown that the Fritz program was unlikely to have been able to force a win and Kramnik effectively sacrificed a drawn position. The final two games were draws. Given the circumstances, most commentators still rate Kramnik the stronger player in the match.
In January 2003, Kasparov played Junior, another chess computer program, in New York City. The match ended 3–3.
In November 2003, Kasparov played X3D Fritz. The match ended 2–2.
In 2005, Hydra, a dedicated chess computer with custom hardware and sixty-four processors and also winner of the 14th IPCCC in 2005, defeated seventh-ranked Michael Adams 5½–½ in a six-game match (though Adams' preparation was far less thorough than Kramnik's for the 2002 series).
In November–December 2006, World Champion Vladimir Kramnik played Deep Fritz. This time the computer won; the match ended 2–4. Kramnik was able to view the computer's opening book. In the first five games Kramnik steered the game into a typical "anti-computer" positional contest. He lost one game (overlooking a mate in one), and drew the next four. In the final game, in an attempt to draw the match, Kramnik played the more aggressive Sicilian Defence and was crushed.
There was speculation that interest in human–computer chess competition would plummet as a result of the 2006 Kramnik-Deep Fritz match. According to Newborn, for example, "the science is done".
Human–computer chess matches showed the best computer systems overtaking human chess champions in the late 1990s. For the 40 years prior to that, the trend had been that the best machines gained about 40 points per year in Elo rating while the best humans only gained roughly 2 points per year. The highest rating obtained by a computer in human competition was Deep Thought's USCF rating of 2551 in 1988, and FIDE no longer accepts human–computer results in their rating lists. Specialized machine-only Elo pools have been created for rating machines, but such numbers, while similar in appearance, are not directly comparable. In 2016, the Swedish Chess Computer Association rated the computer program Komodo at 3361.
Chess engines continue to improve. By 2009, chess engines running on relatively slow hardware had reached grandmaster level. A mobile phone won a category 6 tournament with a performance rating of 2898: chess engine Hiarcs 13, running inside Pocket Fritz 4 on the mobile phone HTC Touch HD, won the Copa Mercosur tournament in Buenos Aires, Argentina with 9 wins and 1 draw on August 4–14, 2009. Pocket Fritz 4 searches fewer than 20,000 positions per second. This is in contrast to supercomputers such as Deep Blue that searched 200 million positions per second.
Advanced Chess is a form of chess developed in 1998 by Kasparov in which a human plays against another human, and both have access to computers to enhance their strength. Kasparov argued that the resulting "advanced" player would be stronger than a human or computer alone; this has been borne out on numerous occasions at Freestyle Chess events.
Players today are inclined to treat chess engines as analysis tools rather than opponents. Chess grandmaster Andrew Soltis stated in 2016 "The computers are just much too good" and that world champion Magnus Carlsen won't play computer chess because "he just loses all the time and there's nothing more depressing than losing without even being in the game."
Computer methods
Since the era of mechanical machines that played rook and king endings and electrical machines that played other games like hex in the early years of the 20th century, scientists and theoreticians have sought to develop a procedural representation of how humans learn, remember, think and apply knowledge, and the game of chess, because of its daunting complexity, became the "Drosophila of artificial intelligence (AI)". The procedural resolution of complexity became synonymous with thinking, and early computers, even before the chess automaton era, were popularly referred to as "electronic brains". Several different schemata were devised starting in the latter half of the 20th century to represent knowledge and thinking, as applied to playing the game of chess (and other games like checkers):
search-based (brute force vs. selective search)
search within search-based schemata (minimax/alpha-beta, Monte Carlo tree search)
evaluation within search-based schemata (machine learning, neural networks, Texel tuning, genetic algorithms, gradient descent, reinforcement learning)
knowledge-based (PARADISE, endgame tablebases)
Using "ends-and-means" heuristics a human chess player can intuitively determine optimal outcomes and how to achieve them regardless of the number of moves necessary, but a computer must be systematic in its analysis. Most players agree that looking at least five moves ahead (ten plies) when necessary is required to play well. Normal tournament rules give each player an average of three minutes per move. On average there are more than 30 legal moves per chess position, so a computer must examine a quadrillion possibilities to look ahead ten plies (five full moves); one that could examine a million positions a second would require more than 30 years.
The earliest attempts at procedural representations of playing chess predated the digital electronic age, but it was the stored program digital computer that gave scope to calculating such complexity. Claude Shannon, in 1949, laid out the principles of algorithmic solution of chess. In that paper, the game is represented by a "tree", or digital data structure of choices (branches) corresponding to moves. The nodes of the tree were positions on the board resulting from the choices of move. The impossibility of representing an entire game of chess by constructing a tree from first move to last was immediately apparent: there are an average of 36 moves per position in chess and an average game lasts about 35 moves to resignation (60-80 moves if played to checkmate, stalemate, or other draw). There are 400 positions possible after the first move by each player, about 200,000 after two moves each, and nearly 120 million after just 3 moves each.
So a limited lookahead (search) to some depth, followed by using domain-specific knowledge to evaluate the resulting terminal positions was proposed. A kind of middle-ground position, given good moves by both sides, would result, and its evaluation would inform the player about the goodness or badness of the moves chosen. Searching and comparing operations on the tree were well suited to computer calculation; the representation of subtle chess knowledge in the evaluation function was not. The early chess programs suffered in both areas: searching the vast tree required computational resources far beyond those available, and what chess knowledge was useful and how it was to be encoded would take decades to discover.
The developers of a chess-playing computer system must decide on a number of fundamental implementation issues. These include:
Graphical user interface (GUI) – how moves are entered and communicated to the user, how the game is recorded, how the time controls are set, and other interface considerations;
Board representation – how a single position is represented in data structures;
Search techniques – how to identify the possible moves and select the most promising ones for further examination;
Leaf evaluation – how to evaluate the value of a board position, if no further search will be done from that position.
Adriaan de Groot interviewed a number of chess players of varying strengths, and concluded that both masters and beginners look at around forty to fifty positions before deciding which move to play. What makes the former much better players is that they use pattern recognition skills built from experience. This enables them to examine some lines in much greater depth than others by simply not considering moves they can assume to be poor. More evidence for this being the case is the way that good human players find it much easier to recall positions from genuine chess games, breaking them down into a small number of recognizable sub-positions, rather than completely random arrangements of the same pieces. In contrast, poor players have the same level of recall for both.
The equivalent of this in computer chess is the evaluation function used for leaf evaluation, which corresponds to the human players' pattern recognition skills, and the use of machine learning techniques in training them, such as Texel tuning, stochastic gradient descent, and reinforcement learning, corresponds to the building of experience in human players. This allows modern programs to examine some lines in much greater depth than others by using forward pruning and other selective heuristics to simply not consider moves the program assumes to be poor through their evaluation function, in the same way that human players do. The only fundamental difference between a computer program and a human in this sense is that a computer program can search much deeper than a human player could, allowing it to search more nodes and bypass the horizon effect to a much greater extent than is possible with human players.
Graphical user interface
Computer chess programs usually support a number of common de facto standards. Nearly all of today's programs can read and write game moves as Portable Game Notation (PGN), and can read and write individual positions as Forsyth–Edwards Notation (FEN). Older chess programs often only understood long algebraic notation, but today users expect chess programs to understand standard algebraic chess notation.
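To illustrate the FEN format mentioned above, the following minimal Python sketch (an illustration, not a full parser) expands the piece-placement field of a FEN string into an 8×8 array; the remaining FEN fields, which give the side to move, castling rights, en passant square and move counters, are ignored here.

```python
# Minimal sketch: expand the board field of a FEN string into an 8x8 array.
# Only the piece-placement field is handled here.

def fen_board(fen: str):
    rows = fen.split()[0].split("/")         # first field lists ranks 8..1
    board = []
    for row in rows:
        rank = []
        for ch in row:
            if ch.isdigit():
                rank.extend("." * int(ch))   # digits encode runs of empty squares
            else:
                rank.append(ch)              # letters encode pieces
        board.append(rank)
    return board

start = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"
for rank in fen_board(start):
    print(" ".join(rank))
```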
Starting in the late 1990s, programmers began to develop engines and graphical user interfaces separately: an engine (often with a command-line interface) calculates which moves are strongest in a position, while a graphical user interface (GUI) provides the player with a chessboard they can see and pieces that can be moved. Engines communicate their moves to the GUI using a protocol such as the Chess Engine Communication Protocol (CECP) or the Universal Chess Interface (UCI). By dividing chess programs into these two pieces, developers can write only the user interface, or only the engine, without needing to write both parts of the program. (See also chess engine.)
Developers have to decide whether to connect the engine to an opening book and/or endgame tablebases or leave this to the GUI.
Board representations
The data structure used to represent each chess position is key to the performance of move generation and position evaluation. Methods include pieces stored in an array ("mailbox" and "0x88"), piece positions stored in a list ("piece list"), collections of bit-sets for piece locations ("bitboards"), and Huffman-coded positions for compact long-term storage.
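The bitboard idea can be illustrated in a few lines of Python: each piece set is a 64-bit integer with one bit per square, so whole-board operations reduce to single bitwise instructions. This is a toy sketch under simplified assumptions, not production move generation; a real engine would, for example, also mask pushes against occupied squares.

```python
# Illustrative bitboard sketch: one 64-bit integer per piece type/colour,
# bit i set when a piece stands on square i (a1 = 0 ... h8 = 63).

def square(file: int, rank: int) -> int:
    return rank * 8 + file            # 0..63

def bit(sq: int) -> int:
    return 1 << sq

# White pawns on their starting rank (rank index 1):
white_pawns = 0
for f in range(8):
    white_pawns |= bit(square(f, 1))

# Set-wise operations replace per-square loops: pushing every pawn one rank
# forward is a single 8-bit shift of the whole set (masked to 64 bits).
single_pushes = (white_pawns << 8) & 0xFFFF_FFFF_FFFF_FFFF

print(f"pawns:  {white_pawns:064b}")
print(f"pushes: {single_pushes:064b}")
```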
Search techniques
Computer chess programs consider chess moves as a game tree. In theory, they examine all moves, then all counter-moves to those moves, then all moves countering them, and so on, where each individual move by one player is called a "ply". This evaluation continues until a certain maximum search depth is reached or the program determines that a final "leaf" position (e.g. checkmate) has been reached.
Minimax search
One particular type of search algorithm used in computer chess is the minimax search algorithm, where at each ply the "best" move by the player is selected; one player is trying to maximize the score, the other to minimize it. By this alternating process, one particular terminal node whose evaluation represents the searched value of the position will be arrived at. Its value is backed up to the root, and that evaluation becomes the valuation of the position on the board. This search process is called minimax.
A naive implementation of the minimax algorithm can only search to a small depth in a practical amount of time, so various methods have been devised to greatly speed the search for good moves. Alpha–beta pruning, a system of defining upper and lower bounds on possible search results and searching until the bounds coincided, is typically used to reduce the search space of the program.
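The following self-contained Python sketch shows minimax with alpha-beta pruning over an abstract game tree. The `children` and `evaluate` callbacks are hypothetical stand-ins for a real engine's move generator and evaluation function; the toy demo encodes leaves as integers.

```python
# Minimal sketch of minimax with alpha-beta pruning over an abstract game.
# `children(pos)` and `evaluate(pos)` are hypothetical callbacks standing in
# for a real engine's move generator and evaluation function.

def alphabeta(pos, depth, alpha, beta, maximizing, children, evaluate):
    moves = children(pos)
    if depth == 0 or not moves:
        return evaluate(pos)
    if maximizing:
        value = float("-inf")
        for child in moves:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:          # beta cutoff: opponent avoids this line
                break
        return value
    else:
        value = float("inf")
        for child in moves:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, children, evaluate))
            beta = min(beta, value)
            if beta <= alpha:          # alpha cutoff
                break
        return value

# Toy demo on an explicit tree: leaves are integers, inner nodes are lists.
tree = [[3, 5], [6, [9, 1]], [1, 2]]
kids = lambda p: p if isinstance(p, list) else []
evalf = lambda p: p if isinstance(p, int) else 0
print(alphabeta(tree, 8, float("-inf"), float("inf"), True, kids, evalf))  # 6
```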
In addition, various selective search heuristics, such as quiescence search, forward pruning, search extensions and search reductions, are also used. These heuristics are triggered based on certain conditions in an attempt to weed out obviously bad moves (history moves) or to investigate interesting nodes (e.g. check extensions, passed pawns on the seventh rank, etc.). These selective search heuristics have to be used very carefully, however. If the search is over-extended, the program wastes too much time looking at uninteresting positions; if too much is pruned or reduced, there is a risk of cutting out interesting nodes.
Monte Carlo tree search
Monte Carlo tree search (MCTS) is a heuristic search algorithm which expands the search tree based on random sampling of the search space. A version of Monte Carlo tree search commonly used in computer chess is PUCT, Predictor and Upper Confidence bounds applied to Trees.
DeepMind's AlphaZero and Leela Chess Zero use MCTS instead of minimax. Such engines use batching on graphics processing units in order to calculate their evaluation functions and policy (move selection), and therefore require a parallel search algorithm, as calculations on the GPU are inherently parallel. The minimax and alpha-beta pruning algorithms used in computer chess are inherently serial algorithms, so they would not work well with batching on the GPU. On the other hand, MCTS is a good alternative, because the random sampling used in Monte Carlo tree search lends itself well to parallel computing, which is why nearly all engines which support calculations on the GPU use MCTS instead of alpha-beta.
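A sketch of the PUCT child-selection rule used by such engines is shown below. The constants and value scaling vary between engines, so this is illustrative only; the common core is a mean-value term plus a prior-weighted exploration term.

```python
# Sketch of PUCT child selection as popularised by AlphaZero-style engines.
# Each child carries a visit count N, a total value W, and a policy prior P
# supplied by the network; c_puct balances exploration against exploitation.
import math

def puct_select(children, c_puct=1.5):
    total_visits = sum(c["N"] for c in children)
    def score(c):
        q = c["W"] / c["N"] if c["N"] else 0.0            # mean value so far
        u = c_puct * c["P"] * math.sqrt(total_visits) / (1 + c["N"])
        return q + u
    return max(children, key=score)

children = [
    {"move": "e2e4", "N": 10, "W": 6.0, "P": 0.5},
    {"move": "d2d4", "N": 2,  "W": 1.5, "P": 0.3},
    {"move": "g1f3", "N": 0,  "W": 0.0, "P": 0.2},   # unvisited, prior-driven
]
print(puct_select(children)["move"])                 # d2d4 on these numbers
```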
Other optimizations
Many other optimizations can be used to make chess-playing programs stronger. For example, transposition tables are used to record positions that have been previously evaluated, to save recalculation of them. Refutation tables record key moves that "refute" what appears to be a good move; these are typically tried first in variant positions (since a move that refutes one position is likely to refute another). The drawback is that transposition tables at deep ply depths can get quite large – tens to hundreds of millions of entries. IBM's Deep Blue transposition table in 1996, for example, had 500 million entries. Transposition tables that are too small can result in spending more time searching for non-existent entries due to thrashing than the time saved by entries found. Many chess engines use pondering, searching to deeper levels on the opponent's time, similar to human beings, to increase their playing strength.
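The transposition-table idea is commonly paired with Zobrist hashing, in which each (piece, square) pair is assigned a fixed random 64-bit number and a position's key is the XOR of the numbers for its occupied squares. The Python sketch below is a minimal illustration, not any particular engine's implementation.

```python
# Sketch of a transposition table keyed by Zobrist hashes. Because the key is
# an XOR of per-(piece, square) numbers, it can be updated incrementally as
# pieces move rather than recomputed from scratch.
import random

random.seed(0)                          # fixed keys for reproducibility
PIECES = "PNBRQKpnbrqk"
ZOBRIST = {(p, sq): random.getrandbits(64) for p in PIECES for sq in range(64)}

def zobrist_key(position):              # position: {square 0..63: piece letter}
    key = 0
    for sq, piece in position.items():
        key ^= ZOBRIST[(piece, sq)]
    return key

transposition_table = {}                # key -> (depth, score, bound type)

position = {4: "K", 60: "k", 12: "P"}   # white Ke1, black Ke8, white Pe2
key = zobrist_key(position)
transposition_table[key] = (8, 0.25, "exact")   # store a search result
print(transposition_table.get(key))             # a later probe avoids re-search
```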
Faster hardware and additional memory can also improve chess program playing strength. Hyperthreaded architectures can improve performance modestly if the program is running on a single core or a small number of cores. Most modern programs are designed to take advantage of multiple cores to do parallel search. Other programs are designed to run on a general-purpose computer and allocate move generation, parallel search, or evaluation to dedicated processors or specialized co-processors.
History
The first paper on search was by Claude Shannon in 1950. He predicted the two main possible search strategies which would be used, which he labeled "Type A" and "Type B", before anyone had programmed a computer to play chess.
Type A programs would use a "brute force" approach, examining every possible position for a fixed number of moves using a pure naive minimax algorithm. Shannon believed this would be impractical for two reasons.
First, with approximately thirty moves possible in a typical real-life position, he expected that searching the approximately 10⁹ positions involved in looking three moves ahead for both sides (six plies) would take about sixteen minutes, even in the "very optimistic" case that the chess computer evaluated a million positions every second. (It took about forty years to achieve this speed.) A later search algorithm called alpha–beta pruning, a system of defining upper and lower bounds on possible search results and searching until the bounds coincided, reduced the branching factor of the game tree logarithmically, but even then it was not feasible for chess programs of the time to overcome the exponential explosion of the tree.
Second, it ignored the problem of quiescence: only positions that lie at the end of an exchange of pieces or other important sequence of moves ('lines') should be evaluated. He expected that adapting type A to cope with this would greatly increase the number of positions needing to be looked at and slow the program down still further.
This led naturally to what is referred to as "selective search" or "type B search", using chess knowledge (heuristics) to select a few presumably good moves from each position to search, and prune away the others without searching. Instead of wasting processing power examining bad or trivial moves, Shannon suggested that type B programs would use two improvements:
Employ a quiescence search (a sketch appears after this list).
Employ forward pruning; i.e. only look at a few good moves for each position.
This would enable them to look further ahead ('deeper') at the most significant lines in a reasonable time.
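A minimal negamax-style quiescence search is sketched below. The `evaluate`, `captures` and `apply_move` callbacks are hypothetical stand-ins for a real engine's evaluation and capture generation; real implementations add refinements such as delta pruning and static exchange evaluation.

```python
# Minimal quiescence search sketch: at nominal depth zero, keep searching
# capture moves until the position is "quiet", so tactical exchanges are
# never cut off mid-sequence.

def quiescence(pos, alpha, beta, evaluate, captures, apply_move):
    stand_pat = evaluate(pos)        # score if the side to move "does nothing"
    if stand_pat >= beta:
        return beta                  # already too good: cutoff
    alpha = max(alpha, stand_pat)
    for move in captures(pos):       # search only forcing moves
        score = -quiescence(apply_move(pos, move), -beta, -alpha,
                            evaluate, captures, apply_move)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha

# Toy demo: positions are integers equal to their evaluation for the side to
# move; from position 0 there is one capture worth a pawn, then all is quiet.
print(quiescence(0, float("-inf"), float("inf"),
                 evaluate=lambda p: p,
                 captures=lambda p: [100] if p == 0 else [],
                 apply_move=lambda p, m: -(p + m)))   # prints 100
```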
However, early attempts at selective search often resulted in the best move or moves being pruned away. As a result, little or no progress was made over the next 25 years, which were dominated by this first iteration of the selective search paradigm. The best program produced in this early period was Mac Hack VI in 1967; it played at about the same level as the average amateur (C class on the United States Chess Federation rating scale).
Meanwhile, hardware continued to improve, and in 1974, brute force searching was implemented for the first time in the Northwestern University Chess 4.0 program. In this approach, all alternative moves at a node are searched, and none are pruned away. They discovered that the time required to simply search all the moves was much less than the time required to apply knowledge-intensive heuristics to select just a few of them, and the benefit of not prematurely or inadvertently pruning away good moves resulted in substantially stronger performance.
In the 1980s and 1990s, progress was finally made in the selective search paradigm, with the development of quiescence search, null move pruning, and other modern selective search heuristics. These heuristics made far fewer mistakes than their predecessors, and the occasional mistake was found to be worth the time they saved, since pruning allowed deeper searches; they were widely adopted by many engines. While many modern programs do use alpha-beta search as a substrate for their search algorithm, these additional selective search heuristics mean that modern programs no longer do a "brute force" search. Instead they rely heavily on selective search heuristics to extend lines the program considers good and to prune and reduce lines the program considers bad, to the point where most of the nodes on the search tree are pruned away, enabling modern programs to search very deep.
In 2006, Rémi Coulom created Monte Carlo tree search, another kind of type B selective search. In 2007, an adaptation of Monte Carlo tree search called Upper Confidence bounds applied to Trees, or UCT for short, was created by Levente Kocsis and Csaba Szepesvári. In 2011, Chris Rosin developed a variation of UCT called Predictor + Upper Confidence bounds applied to Trees, or PUCT for short. PUCT was then used in AlphaZero in 2017, and later in Leela Chess Zero in 2018.
Knowledge versus search (processor speed)
In the 1970s, most chess programs ran on supercomputers such as the Control Data Cyber 176 or the Cray-1, an indication that during that developmental period for computer chess, processing power was the limiting factor in performance. Most chess programs struggled to search to a depth greater than 3 ply. It was not until the hardware chess machines of the 1980s that a relationship between processor speed and knowledge encoded in the evaluation function became apparent.
It has been estimated that doubling the computer speed gains approximately fifty to seventy Elo points in playing strength.
Leaf evaluation
For most chess positions, computers cannot look ahead to all possible final positions. Instead, they must look ahead a few plies and compare the possible positions, known as leaves. The algorithm that evaluates leaves is termed the "evaluation function", and these algorithms are often vastly different between different chess programs. Evaluation functions typically evaluate positions in hundredths of a pawn (called a centipawn), where by convention a positive evaluation favors White and a negative evaluation favors Black. However, some evaluation functions output win/draw/loss percentages instead of centipawns.
Historically, handcrafted evaluation functions consider material value along with other factors affecting the strength of each side. When counting up the material for each side, typical values for pieces are 1 point for a pawn, 3 points for a knight or bishop, 5 points for a rook, and 9 points for a queen. (See Chess piece relative value.) The king is sometimes given an arbitrarily high value such as 200 points (as in Shannon's paper) to ensure that a checkmate outweighs all other factors. In addition to points for pieces, most handcrafted evaluation functions take many factors into account, such as pawn structure, the fact that a pair of bishops is usually worth more, centralized pieces being worth more, and so on. The protection of kings is usually considered, as well as the phase of the game (opening, middle or endgame). Machine learning techniques such as Texel tuning, stochastic gradient descent, and reinforcement learning are usually used to optimise handcrafted evaluation functions.
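A toy version of such a handcrafted evaluation function is sketched below: material on the conventional 1/3/3/5/9 scale in centipawns, plus a small centralisation bonus. It is an illustration of the structure only; real evaluation terms are far more numerous and carefully tuned.

```python
# Minimal handcrafted evaluation sketch: material count plus a toy
# centralisation bonus, in centipawns; positive scores favour White.

PIECE_VALUES = {"P": 100, "N": 300, "B": 300, "R": 500, "Q": 900, "K": 0}
CENTER = {27, 28, 35, 36}           # d4, e4, d5, e5 as 0..63 square indices

def evaluate(position):             # {square 0..63: letter}, uppercase = White
    score = 0
    for sq, piece in position.items():
        value = PIECE_VALUES[piece.upper()]
        if sq in CENTER:
            value += 10             # toy centralisation bonus
        score += value if piece.isupper() else -value
    return score

# White: Ke1, Qd1, Pe4.  Black: Ke8, Ra8.
position = {4: "K", 3: "Q", 28: "P", 60: "k", 56: "r"}
print(evaluate(position))           # 900 + (100 + 10) - 500 = 510
```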
Most modern evaluation functions make use of neural networks. The most common evaluation function in use today is the efficiently updatable neural network, which is a shallow neural network whose inputs are piece-square tables. Piece-square tables are a set of 64 values corresponding to the squares of the chessboard, and there typically exists a piece-square table for every piece and colour, resulting in 12 piece-square tables and thus 768 inputs into the neural network. In addition, some engines use deep neural networks in their evaluation function. Neural networks are usually trained using some reinforcement learning algorithm, in conjunction with supervised learning or unsupervised learning.
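The 768-feature input encoding described above can be sketched as follows; this is an illustration of the board-to-input mapping only. Real NNUE feature sets (such as HalfKP) additionally condition features on the friendly king's square and update the first-layer accumulator incrementally as moves are made.

```python
# Sketch of the 768-feature input encoding described above: one binary input
# per (piece type, colour, square) combination, i.e. 12 x 64 planes flattened.

PIECES = "PNBRQKpnbrqk"             # 6 white + 6 black piece letters

def nnue_inputs(position):          # position: {square 0..63: piece letter}
    features = [0.0] * (12 * 64)
    for sq, piece in position.items():
        features[PIECES.index(piece) * 64 + sq] = 1.0
    return features

position = {4: "K", 60: "k", 12: "P"}
x = nnue_inputs(position)
print(len(x), sum(x))               # 768 inputs, 3 of them active
```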
The output of the evaluation function is a single scalar, quantized in centipawns or other units, which is, in the case of handcrafted evaluation functions, a weighted summation of the various factors described, or in the case of neural network based evaluation functions, the output of the head of the neural network. The evaluation putatively represents or approximates the value of the subtree below the evaluated node as if it had been searched to termination, i.e. the end of the game. During the search, an evaluation is compared against evaluations of other leaves, eliminating nodes that represent bad or poor moves for either side, to yield a node which by convergence, represents the value of the position with best play by both sides.
Endgame tablebases
Endgame play had long been one of the great weaknesses of chess programs because of the depth of search needed. Some otherwise master-level programs were unable to win in positions where even intermediate human players could force a win.
To solve this problem, computers have been used to analyze some chess endgame positions completely, starting with king and pawn against king. Such endgame tablebases are generated in advance using a form of retrograde analysis, starting with positions where the final result is known (e.g., where one side has been mated) and seeing which other positions are one move away from them, then which are one move from those, etc. Ken Thompson was a pioneer in this area.
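The backward sweep at the heart of retrograde analysis can be illustrated on an explicit toy game graph, as in the Python sketch below. This is a deliberate simplification: it only propagates a shortest distance-to-mate through known predecessor links, whereas a real tablebase generator enumerates chess positions and alternates win/loss sweeps between the two sides.

```python
# Conceptual sketch of retrograde analysis on an explicit game graph.
# The "game" here is just a dict from position to successor positions,
# with known mates marked in advance.
from collections import deque

successors = {
    "A": ["B", "C"], "B": ["D"], "C": ["D", "E"],
    "D": [], "E": [],                # terminal positions
}
mate = {"D"}                         # D is checkmate

# Build the reverse graph, then sweep backwards from the mates.
predecessors = {}
for pos, succs in successors.items():
    for s in succs:
        predecessors.setdefault(s, []).append(pos)

distance = {p: 0 for p in mate}              # "mated in 0"
queue = deque(mate)
while queue:
    pos = queue.popleft()
    for pred in predecessors.get(pos, []):
        if pred not in distance:             # pred can move into a known mate
            distance[pred] = distance[pos] + 1
            queue.append(pred)

print(distance)   # {'D': 0, 'B': 1, 'C': 1, 'A': 2}
```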
The results of the computer analysis sometimes surprised people. In 1977 Thompson's Belle chess machine used the endgame tablebase for a king and rook against king and queen and was able to draw that theoretically lost ending against several masters (see Philidor position#Queen versus rook). This was despite not following the usual strategy to delay defeat by keeping the defending king and rook close together for as long as possible. Asked to explain the reasons behind some of the program's moves, Thompson was unable to do so beyond saying the program's database simply returned the best moves.
Most grandmasters declined to play against the computer in the queen versus rook endgame, but Walter Browne accepted the challenge. A queen versus rook position was set up in which the queen can win in thirty moves, with perfect play. Browne was allowed 2½ hours to play fifty moves, otherwise a draw would be claimed under the fifty-move rule. After forty-five moves, Browne agreed to a draw, being unable to force checkmate or win the rook within the next five moves. In the final position, Browne was still seventeen moves away from checkmate, but not quite that far away from winning the rook. Browne studied the endgame, and played the computer again a week later in a different position in which the queen can win in thirty moves. This time, he captured the rook on the fiftieth move, giving him a winning position.
Other positions, long believed to be won, turned out to take more moves against perfect play to actually win than were allowed by chess's fifty-move rule. As a consequence, for some years the official FIDE rules of chess were changed to extend the number of moves allowed in these endings. After a while the rule reverted to fifty moves in all positions: more such positions were discovered, complicating the rule still further, and the extensions made no difference in human play, since humans could not play the positions perfectly anyway.
Over the years, other endgame database formats have been released including the Edward Tablebase, the De Koning Database and the Nalimov Tablebase which is used by many chess programs such as Rybka, Shredder and Fritz. Tablebases for all positions with six pieces are available. Some seven-piece endgames have been analyzed by Marc Bourzutschky and Yakov Konoval. Programmers using the Lomonosov supercomputers in Moscow have completed a chess tablebase for all endgames with seven pieces or fewer (trivial endgame positions are excluded, such as six white pieces versus a lone black king). In all of these endgame databases it is assumed that castling is no longer possible.
Many tablebases do not consider the fifty-move rule, under which a game where fifty moves pass without a capture or pawn move can be claimed to be a draw by either player. This results in the tablebase returning results such as "Forced mate in sixty-six moves" in some positions which would actually be drawn because of the fifty-move rule. One reason for this is that if the rules of chess were to be changed once more, giving more time to win such positions, it would not be necessary to regenerate all the tablebases. It is also very easy for the program using the tablebases to notice and take account of this 'feature', and in any case a program using an endgame tablebase will choose the move that leads to the quickest win (even if it would fall foul of the fifty-move rule with perfect play). If playing an opponent not using a tablebase, such a choice will give good chances of winning within fifty moves.
The Nalimov tablebases, which use state-of-the-art compression techniques, require 7.05 GB of hard disk space for all five-piece endings. To cover all the six-piece endings requires approximately 1.2 TB. It is estimated that a seven-piece tablebase requires between 50 and 200 TB of storage space.
Endgame databases featured prominently in 1999, when Kasparov played an exhibition match on the Internet against the rest of the world. A seven-piece queen and pawn endgame was reached with the World Team fighting to salvage a draw. Eugene Nalimov helped by generating the six-piece tablebase for endings where both sides had two queens, which was used heavily to aid analysis by both sides.
Opening book
Chess engines, like human beings, may save processing time as well as select strong variations as expounded by the masters, by referencing an opening book stored in a disk database. Opening books cover the opening moves of a game to variable depth, depending on opening and variation, but usually to the first 10–12 moves (20–24 ply). Since the openings have been studied in depth by the masters for centuries, and some are mapped out well into the middlegame, the valuations of specific variations by the masters will usually be superior to the general heuristics of the program.
While at one time playing an out-of-book move in order to put the chess program onto its own resources might have been an effective strategy, because chess opening books were tailored to the program's playing style and programs had notable weaknesses relative to humans, that is no longer true today. The opening books stored in computer databases are most likely far more extensive than those of even the best prepared humans, and playing an early out-of-book move may result in the computer finding the unusual move in its book and saddling the opponent with a sharp disadvantage. Even if it does not, playing out-of-book may be much better for tactically sharp chess programs than for humans who have to discover strong moves in an unfamiliar variation over the board.
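Mechanically, a book lookup is simple: the moves played so far index into a stored table of recommended replies, and the engine falls back to its own search once the game leaves the table. The Python sketch below is a toy illustration; real books are large disk databases, often weighted by master-game statistics.

```python
# Toy opening book: map the sequence of moves played so far to the replies
# the book recommends; None means "out of book, fall back to search".

BOOK = {
    (): ["e2e4", "d2d4"],
    ("e2e4",): ["c7c5", "e7e5"],
    ("e2e4", "c7c5"): ["g1f3"],
}

def book_move(moves_so_far):
    candidates = BOOK.get(tuple(moves_so_far))
    return candidates[0] if candidates else None

print(book_move(["e2e4"]))          # c7c5
print(book_move(["d2d4", "g8f6"]))  # None -> the engine's own search takes over
```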
Computer chess rating lists
CEGT, CSS, SSDF, WBEC, REBEL, FGRL, and IPON maintain rating lists allowing fans to compare the strength of engines. Various versions of Stockfish, Komodo, Leela Chess Zero, and Fat Fritz dominate the rating lists in the early 2020s.
CCRL (Computer Chess Rating Lists) is an organisation that tests computer chess engines' strength by playing the programs against each other. CCRL was founded in 2006 to promote computer-computer competition and tabulate results on a rating list.
The organisation runs three different lists: 40/40 (40 minutes for every 40 moves played), 40/4 (4 minutes for every 40 moves played), and 40/4 FRC (same time control but Chess960). Pondering (or permanent brain) is switched off and timing is adjusted to the AMD64 X2 4600+ (2.4 GHz) CPU by using Crafty 19.17 BH as a benchmark. Generic, neutral opening books are used (as opposed to the engine's own book) up to a limit of 12 moves into the game alongside 4 or 5 man tablebases.
History
The pre-computer age
The idea of creating a chess-playing machine dates back to the eighteenth century. Around 1769, the chess-playing automaton called The Turk, created by the Hungarian inventor Farkas Kempelen, became famous before being exposed as a hoax. Before the development of digital computing, serious trials based on automata, such as El Ajedrecista of 1912, which played a king and rook versus king ending, were too complex and limited to be useful for playing full games of chess. The field of mechanical chess research languished until the advent of the digital computer in the 1950s.
Early software age: selective search
Since then, chess enthusiasts and computer engineers have built, with increasing degrees of seriousness and success, chess-playing machines and computer programs. One of the few chess grandmasters to devote himself seriously to computer chess was former World Chess Champion Mikhail Botvinnik, who wrote several works on the subject. He also held a doctorate in electrical engineering. Working with relatively primitive hardware available in the Soviet Union in the early 1960s, Botvinnik had no choice but to investigate software move selection techniques; at the time only the most powerful computers could achieve much beyond a three-ply full-width search, and Botvinnik had no such machines. In 1965 Botvinnik was a consultant to the ITEP team in a US-Soviet computer chess match (see Kotok-McCarthy).
The later software age: full-width search
One developmental milestone occurred when the team from Northwestern University, which was responsible for the Chess series of programs and won the first three ACM Computer Chess Championships (1970–72), abandoned type B searching in 1973. The resulting program, Chess 4.0, won that year's championship, and its successors went on to come in second in both the 1974 ACM Championship and that year's inaugural World Computer Chess Championship, before winning the ACM Championship again in 1975, 1976 and 1977. The type A implementation turned out to be just as fast: in the time it used to take to decide which moves were worthy of being searched, it was possible just to search all of them. In fact, Chess 4.0 set the paradigm that is still followed by essentially all modern chess programs today.
The rise of chess machines
In 1978, an early rendition of Ken Thompson's hardware chess machine Belle entered and won the North American Computer Chess Championship over the dominant Northwestern University Chess 4.7.
The microcomputer revolution
Technological advances by orders of magnitude in processing power have made the brute force approach far more incisive than was the case in the early years. The result is that a very solid, tactical AI player aided by some limited positional knowledge built in by the evaluation function and pruning/extension rules began to match the best players in the world. It turned out to produce excellent results, at least in the field of chess, to let computers do what they do best (calculate) rather than coax them into imitating human thought processes and knowledge. In 1997 Deep Blue, a brute-force machine capable of examining 500 million nodes per second, defeated World Champion Garry Kasparov, marking the first time a computer had defeated a reigning world chess champion at standard time controls.
Super-human chess
In 2016, NPR asked experts to characterize the playing style of computer chess engines. Murray Campbell of IBM stated that "Computers don't have any sense of aesthetics... They play what they think is the objectively best move in any position, even if it looks absurd, and they can play any move no matter how ugly it is." Grandmasters Andrew Soltis and Susan Polgar stated that computers are more likely to retreat than humans are.
The neural network revolution
While neural networks have been used in the evaluation functions of chess engines since the late 1980s, with programs such as NeuroChess, Morph, Blondie25, Giraffe, AlphaZero, and MuZero, neural networks did not become widely adopted by chess engines until the arrival of efficiently updatable neural networks in the summer of 2020. Efficiently updatable neural networks were originally developed in computer shogi in 2018 by Yu Nasu, and were first ported to a derivative of Stockfish called Stockfish NNUE on 31 May 2020, then integrated into the official Stockfish engine on 6 August 2020, before other chess programmers began to adopt neural networks into their engines.
Some people, such as the Royal Society's Venki Ramakrishnan, believe that AlphaZero led to the widespread adoption of neural networks in chess engines. However, AlphaZero influenced very few engines to begin using neural networks, and those tended to be new experimental engines such as Leela Chess Zero, which began specifically to replicate the AlphaZero paper. The deep neural networks used in AlphaZero's evaluation function required expensive graphics processing units, which were not compatible with existing chess engines. The vast majority of chess engines only use central processing units, and computing and processing information on GPUs requires special backend libraries such as Nvidia's CUDA, which none of the engines had access to. Thus the vast majority of chess engines, such as Komodo and Stockfish, continued to use handcrafted evaluation functions until efficiently updatable neural networks were ported to computer chess in 2020; these did not require GPUs or libraries like CUDA at all. Even then, the neural networks used in computer chess are fairly shallow, and the deep reinforcement learning methods pioneered by AlphaZero are still extremely rare in computer chess.
Timeline
1769 – Wolfgang von Kempelen builds the Turk. Presented as a chess-playing automaton, it is secretly operated by a human player hidden inside the machine.
1868 – Charles Hooper presents the Ajeeb automaton which also has a human chess player hidden inside.
1912 – Leonardo Torres y Quevedo builds El Ajedrecista, a machine that could play King and Rook versus King endgames.
1941 – Predating comparable work by at least a decade, Konrad Zuse develops computer chess algorithms in his Plankalkül programming formalism. Because of the circumstances of the Second World War, however, they were not published, and did not come to light, until the 1970s.
1948 – Norbert Wiener's book Cybernetics describes how a chess program could be developed using a depth-limited minimax search with an evaluation function.
1950 – Claude Shannon publishes "Programming a Computer for Playing Chess", one of the first papers on the algorithmic methods of computer chess.
1951 – Alan Turing is first to publish a program, developed on paper, that was capable of playing a full game of chess (dubbed Turochamp).
1952 – Dietrich Prinz develops a program that solves chess problems.
1956 – Los Alamos chess is the first program to play a chess-like game, developed by Paul Stein and Mark Wells for the MANIAC I computer.
1956 – John McCarthy invents the alpha–beta search algorithm.
1957 – The first programs that can play a full game of chess are developed, one by Alex Bernstein and one by Russian programmers using a BESM.
1958 – NSS becomes the first chess program to use the alpha–beta search algorithm.
1962 – The first program to play credibly, Kotok-McCarthy, is published at MIT.
1963 – Grandmaster David Bronstein defeats an M-20 running an early chess program.
1966–67 – The first chess match between computer programs is played. Moscow Institute for Theoretical and Experimental Physics (ITEP) defeats Kotok-McCarthy at Stanford University by telegraph over nine months.
1967 – Mac Hack VI, by Richard Greenblatt et al. introduces transposition tables and employs dozens of carefully tuned move selection heuristics; it becomes the first program to defeat a person in tournament play. Mac Hack VI played about C class level.
1968 – Scottish chess champion David Levy makes a 500 pound bet with AI pioneers John McCarthy and Donald Michie that no computer program would win a chess match against him within 10 years.
1970 – Monty Newborn and the Association for Computing Machinery organize the first North American Computer Chess Championships in New York.
1971 – Ken Thompson, an American Computer scientist at Bell Labs and creator of the Unix operating system, writes his first chess-playing program called "chess" for the earliest version of Unix.
1974 – David Levy, Ben Mittman and Monty Newborn organize the first World Computer Chess Championship which is won by the Russian program Kaissa.
1975 – After nearly a decade of only marginal progress since the high-water mark of Greenblatt's MacHack VI in 1967, Northwestern University Chess 4.5 is introduced featuring full-width search, and innovations of bitboards and iterative deepening. It also reinstated a transposition table as first seen in Greenblatt's program. It was thus the first program with an integrated modern structure and became the model for all future development. Chess 4.5 played strong B-class and won the 3rd World Computer Chess Championship the next year. Northwestern University Chess and its descendants dominated computer chess until the era of hardware chess machines in the early 1980s.
1976 – In December, Canadian programmer Peter R. Jennings releases Microchess, the first game for microcomputers to be sold.
1977 – In March, Fidelity Electronics releases Chess Challenger, the first dedicated chess computer to be sold. The International Computer Chess Association is founded by chess programmers to organize computer chess championships and report on research and advancements on computer chess in their journal. Also that year, Applied Concepts released Boris, a dedicated chess computer in a wooden box with plastic chess pieces and a folding board.
1978 – David Levy wins the bet made 10 years earlier, defeating Chess 4.7 in a six-game match by a score of 4½–1½. The computer's victory in game four is the first defeat of a human master in a tournament.
1979 – Frederic Friedel organizes a match between IM David Levy and Chess 4.8, which is broadcast on German television. Levy and Chess 4.8, running on a CDC Cyber 176, the most powerful computer in the world, fought a grueling 89-move draw.
1980 – Fidelity computers win the World Microcomputer Championships each year from 1980 through 1984. In Germany, Hegener & Glaser release their first Mephisto dedicated chess computer. The USCF prohibits computers from competing in human tournaments except when represented by the chess systems' creators. The Fredkin Prize, offering $100,000 to the creator of the first chess machine to defeat the world chess champion, is established.
1981 – Cray Blitz wins the Mississippi State Championship with a perfect 5–0 score and a performance rating of 2258. In round 4 it defeats Joe Sentef (2262) to become the first computer to beat a master in tournament play and the first computer to gain a master rating.
1984 – The German Company Hegener & Glaser's Mephisto line of dedicated chess computers begins a long streak of victories (1984–1990) in the World Microcomputer Championship using dedicated computers running programs ChessGenius and Rebel.
1986 – Software Country (see Software Toolworks) released Chessmaster 2000 based on an engine by David Kittinger, the first edition of what was to become the world's best selling line of chess programs.
1987 – Frederic Friedel and physicist Matthias Wüllenweber found Chessbase, releasing the first chess database program. Stuart Cracraft releases GNU Chess, one of the first 'chess engines' to be bundled with a separate graphical user interface (GUI).
1988 – HiTech, developed by Hans Berliner and Carl Ebeling, wins a match against grandmaster Arnold Denker 3½–½. Deep Thought shares first place with Tony Miles in the Software Toolworks Championship, ahead of former world champion Mikhail Tal and several grandmasters including Samuel Reshevsky, Walter Browne and Mikhail Gurevich. It also defeats grandmaster Bent Larsen, making it the first computer to beat a GM in a tournament. Its rating for performance in this tournament of 2745 (USCF scale) was the highest obtained by a computer player.
1989 – Deep Thought demolishes David Levy in a 4-game match 0–4, bringing to an end his famous series of wagers starting in 1968.
1990 – On April 25, former world champion Anatoly Karpov lost in a simul to Hegener & Glaser's Mephisto Portorose M68030 chess computer.
1991 – The ChessMachine based on Ed Schröder's Rebel wins the World Microcomputer Chess Championship
1992 – ChessMachine wins the 7th World Computer Chess Championship, the first time a microcomputer beat mainframes. GM John Nunn releases Secrets of Rook Endings, the first book based on endgame tablebases developed by Ken Thompson.
1993 – Deep Thought-2 loses a four-game match against Bent Larsen. Chess programs running on personal computers surpass Mephisto's dedicated chess computers to win the Microcomputer Championship, marking a shift from dedicated chess hardware to software on multipurpose personal computers.
1995 – Fritz 3, running on a 90 MHz Pentium PC, beats the Deep Thought-2 dedicated chess machine, and programs running on several supercomputers, to win the 8th World Computer Chess Championship in Hong Kong. This marks the first time a chess program running on commodity hardware defeats specialized chess machines and massive supercomputers, indicating a shift in emphasis from brute computational power to algorithmic improvements in the evolution of chess engines.
1996 – IBM's Deep Blue loses a six-game match against Garry Kasparov, 2–4.
1997 – Deep(er) Blue, a highly modified version of the original, wins a six-game match against Garry Kasparov, 3.5-2.5.
2000 – Stefan Meyer-Kahlen and Rudolf Huber draft the Universal Chess Interface, a protocol for GUIs to talk to engines that would gradually become the main form new engines would take.
2002 – Vladimir Kramnik draws an eight-game match against Deep Fritz.
2003 – Kasparov draws a six-game match against Deep Junior and draws a four-game match against X3D Fritz.
2004 – a team of computers (Hydra, Deep Junior and Fritz) wins 8½–3½ against a strong human team formed by Veselin Topalov, Ruslan Ponomariov and Sergey Karjakin, who had an average Elo rating of 2681. Fabien Letouzey releases the source code for Fruit 2.1, an engine quite competitive with the top closed-source engines of the time. This leads many authors to revise their code, incorporating the new ideas.
2005 – Rybka wins the IPCCC tournament and very quickly afterwards becomes the strongest engine.
2006 – The world champion, Vladimir Kramnik, is defeated 4–2 by Deep Fritz.
2009 – Pocket Fritz 4, running on a smartphone, wins Copa Mercosur, an International Master level tournament, scoring 9½/10 and earning a performance rating of 2900. A group of pseudonymous Russian programmers release the source code of Ippolit, an engine seemingly stronger than Rybka. This becomes the basis for the engines Robbolito and Ivanhoe, and many engine authors adopt ideas from it.
2010 – Before the World Chess Championship 2010, Topalov prepares by sparring against the supercomputer Blue Gene with 8,192 processors capable of 500 trillion (5 × 10¹⁴) floating-point operations per second. Rybka developer, Vasik Rajlich, accuses Ippolit of being a clone of Rybka.
2011 – The ICGA strips Rybka of its WCCC titles.
2017 – AlphaZero, a neural net-based digital automaton, beats Stockfish 28–0, with 72 draws, in a 100-game match.
2018 – Efficiently updatable neural network (NNUE) evaluation is invented for computer shogi.
2019 – Leela Chess Zero (LCZero v0.21.1-nT40.T8.610), a chess engine based on AlphaZero, defeats Stockfish 19050918 in a 100-game match with the final score 53.5 to 46.5 to win TCEC season 15.
2020 – NNUE is added to Stockfish evaluation, noticeably increasing its strength.
Categorizations
Dedicated hardware
These chess playing systems include custom hardware with approx. dates of introduction (excluding dedicated microcomputers):
Belle 1976
Bebe, a strong bit-slice processor 1980
HiTech 1985
ChipTest 1985
Deep Thought 1987
Deep Thought 2 (Deep Blue prototype) ~1994
Deep Blue 1996, 1997
Hydra, predecessor was called Brutus 2002
AlphaZero 2017 (used Google's Tensor Processing Units for neural networks, but the hardware is not specific to Chess or games)
MuZero 2019 (similar hardware to its predecessor AlphaZero, non-specific to Chess or e.g. Go), learns the rules of Chess
Commercial dedicated computers
In the late 1970s to early 1990s, there was a competitive market for dedicated chess computers. This market changed in the mid-1990s when computers with dedicated processors could no longer compete with the fast processors in personal computers.
Boris in 1977 and Boris Diplomat in 1979, chess computers including pieces and board, sold by Applied Concepts Inc.
Chess Challenger, a line of chess computers sold by Fidelity Electronics from 1977 to 1992. These models won the first four World Microcomputer Chess Championships.
ChessMachine, an ARM-based dedicated computer, which could run two engines:
"The King", which later became the Chessmaster engine, was also used in the TASC R30 dedicated computer.
Gideon, a version of Rebel, in 1992 became the first microcomputer to win the World Computer Chess Championship.
Excalibur Electronics sells a line of beginner strength units.
Mephisto, a line of chess computers sold by Hegener & Glaser. The units won six consecutive World Microcomputer Chess Championships.
Novag sold a line of tactically strong computers, including the Constellation, Sapphire, and Star Diamond brands.
Phoenix Chess Systems makes limited edition units based around StrongARM and XScale processors running modern engines and emulating classic engines.
Saitek sells mid-range units of intermediate strength. They bought out Hegener & Glaser and its Mephisto brand in 1994.
Recently, some hobbyists have been using the Multi Emulator Super System to run the chess programs created for Fidelity or Hegener & Glaser's Mephisto computers on modern 64-bit operating systems such as Windows 10. The author of Rebel, Ed Schröder, has also adapted three of the Mephisto programs he wrote for Hegener & Glaser to work as UCI engines.
DOS programs
These programs run on MS-DOS, and can be run on 64-bit Windows 10 via emulators such as DOSBox or QEMU:
Chessmaster 2000
Colossus Chess
Fritz 1–3
Kasparov's Gambit
Rebel
Sargon
Socrates II
Notable theorists
Well-known computer chess theorists include:
Georgy Adelson-Velsky, a Soviet and Israeli mathematician and computer scientist
Hans Berliner, American computer scientist and world correspondence chess champion, design supervisor of HiTech (1988)
Mikhail Botvinnik, Soviet electrical engineer and world chess champion, wrote Pioneer
Alexander Brudno, Russian computer scientist, first elaborated the alpha-beta pruning algorithm
Feng-hsiung Hsu, the lead developer of Deep Blue (1986–97)
Professor Robert Hyatt developed Cray Blitz and Crafty
Danny Kopec, American professor of computer science and International Chess Master, co-developed the Bratko–Kopec test
Alexander Kronrod, Soviet computer scientist and mathematician
Professor Monroe Newborn, chairman of the computer chess committee for the Association for Computing Machinery
Claude E. Shannon, American computer scientist and mathematician
Alan Turing, English computer scientist and mathematician
Solving chess
The prospects of completely solving chess are generally considered to be rather remote. It is widely conjectured that there is no computationally inexpensive method to solve chess even in the very weak sense of determining with certainty the value of the initial position, and hence the idea of solving chess in the stronger sense of obtaining a practically usable description of a strategy for perfect play for either side seems unrealistic today. However, it has not been proven that no computationally cheap way of determining the best move in a chess position exists, nor even that a traditional alpha–beta searcher running on present-day computing hardware could not solve the initial position in an acceptable amount of time. The difficulty in proving the latter lies in the fact that, while the number of board positions that could happen in the course of a chess game is huge (on the order of at least 10⁴³ to 10⁴⁷), it is hard to rule out with mathematical certainty the possibility that the initial position allows either side to force a mate or a threefold repetition after relatively few moves, in which case the search tree might encompass only a very small subset of the set of possible positions. It has been mathematically proven that generalized chess (chess played with an arbitrarily large number of pieces on an arbitrarily large chessboard) is EXPTIME-complete, meaning that determining the winning side in an arbitrary position of generalized chess provably takes exponential time in the worst case; however, this theoretical result gives no lower bound on the amount of work required to solve ordinary 8×8 chess.
Martin Gardner's Minichess, played on a 5×5 board with approximately 10¹⁸ possible board positions, has been solved; its game-theoretic value is 1/2 (i.e. a draw can be forced by either side), and the forcing strategy to achieve that result has been described.
Progress has also been made from the other side: as of 2012, all 7 and fewer pieces (2 kings and up to 5 other pieces) endgames have been solved.
Chess engines
A "chess engine" is software that calculates and orders which moves are the strongest to play in a given position. Engine authors focus on improving the play of their engines, often just importing the engine into a graphical user interface (GUI) developed by someone else. Engines communicate with the GUI by standardized protocols such as the nowadays ubiquitous Universal Chess Interface developed by Stefan Meyer-Kahlen and Franz Huber. There are others, like the Chess Engine Communication Protocol developed by Tim Mann for GNU Chess and Winboard. Chessbase has its own proprietary protocol, and at one time Millennium 2000 had another protocol used for ChessGenius. Engines designed for one operating system and protocol may be ported to other OS's or protocols.
Chess engines are regularly matched against each other at dedicated chess engine tournaments.
Chess web apps
In 1997, the Internet Chess Club released its first Java client for playing chess online against other people inside one's web browser. This was probably one of the first chess web apps. Free Internet Chess Server followed soon after with a similar client. In 2004, the International Correspondence Chess Federation opened up a web server to replace their email-based system. Chess.com started offering Live Chess in 2007. Chessbase/Playchess has long had a downloadable client, and added a web-based client in 2013.
Another popular web app is tactics training. The now defunct Chess Tactics Server opened its site in 2006, followed by Chesstempo the next year, and Chess.com added its Tactics Trainer in 2008. Chessbase added a tactics trainer web app in 2015.
Chessbase took their chess game database online in 1998. Another early chess game database was Chess Lab, which started in 1999. New In Chess had initially tried to compete with Chessbase by releasing a NICBase program for Windows 3.x, but eventually decided to give up on software and instead focus on their online database, starting in 2002.
One could play against the engine Shredder online from 2006. In 2015, Chessbase added a play Fritz web app, as well as My Games for storing one's games.
Starting in 2007, Chess.com offered the content of the training program, Chess Mentor, to their customers online. Top GMs such as Sam Shankland and Walter Browne have contributed lessons.
See also
Computer checkers
Computer Go
Computer Othello
Computer shogi
Notes
References
Sources
(This book actually covers computer chess from the early days through the first match between Deep Blue and Garry Kasparov.)
Mastering the Game: A History of Computer Chess at Computer History Museum
Bill Wall's Computer Chess History Timeline
Further reading
New Architectures in Computer Chess – Thesis on How to Build A Chess Engine
Lasar, Matthew (2011). "Brute force or intelligence? The slow rise of computer chess". Ars Technica.
Newborn, Monty (1996). Outsearching Kasparov, American Mathematical Society's Proceeding of Symposia in Applied Mathematics: Mathematical Aspects of Artificial Intelligence, v. 55, pp 175–205, 1998. Based on paper presented at the 1996 Winter Meeting of the AMS, Orlando, Florida, Jan 9–11, 1996.
Newborn, Monty (2000). Deep Blue's contribution to AI, Annals of Mathematics and Artificial Intelligence, v. 28, pp. 27–30, 2000.
Newborn, Monty (2006). Theo and Octopus at the 2006 World Championship for Automated Reasoning Programs, Seattle, Washington, August 18, 2006
External links
List of chess engine ratings and game files in PGN format
Mastering the Game: A History of Computer Chess at the Computer History Museum
ACM Computer Chess by Bill Wall
"Computer Chess" by Edward Winter
Computer Chess Information and Resources – blog following the creation of a computer chess engine
Defending Humanity's Honor, an article by Tim Krabbé about "anti-computer style" chess
A guide to Endgame Tablebases
GameDev.net – Chess Programming by François-Dominic Laramée Part 1 2 3 4 5 6
Colin Frayn's Computer Chess Theory Page
"Play chess with God" – for playing chess against Ken Thompson's endgame database
Chess programming wiki
Computer Chess Club Forums
The Strongest Computer Chess Engines Over Time
Media
The History of Computer Chess: An AI Perspective – a full lecture featuring Murray Campbell (IBM Deep Blue Project), Edward Feigenbaum, David Levy, John McCarthy, and Monty Newborn. at Computer History Museum
Electronic games
Game artificial intelligence |
40870618 | https://en.wikipedia.org/wiki/Sean%20M.%20Joyce | Sean M. Joyce | Sean M. Joyce (born 1961) was the 14th Deputy Director of the Federal Bureau of Investigation.
Early life and education
A Brockton, Massachusetts native, Joyce earned an undergraduate degree in Business Administration and Computer Science from Boston College followed by an MBA from the Tuck School of Business of Dartmouth College. Joyce worked as an Analyst at Raytheon and an Experienced Senior at Arthur Andersen prior to joining the FBI in 1987.
Career
Public Sector Experience
Joyce began his career as an FBI special agent in 1987. Following the completion of training at the FBI Academy in Quantico, Virginia, he was assigned to the Dallas Division where he investigated Violent Crimes. He later investigated Colombian narcotics matters in the Miami Division and in 1994, Joyce was selected as a member of the Bureau’s Hostage Rescue Team.
In 1998, Joyce returned to the Dallas Field Office and became a special agent and SWAT team leader. He earned the Attorney General’s Award for Exceptional Service in 2004 for his work on a Dallas Division counterterrorism squad. He received the same award again a year later for his work on another counterterrorism matter.
Joyce was designated legal attaché to Prague in August 2005, and in 2007 received an award for investigative excellence for his work there. Later in 2007, he was assigned to the Washington Field Office as an assistant special agent in charge. The following year he was named section chief of the Counterterrorism Division’s International Terrorism Operations Section, with responsibility for international terrorism matters within the United States.
In 2009, Joyce was appointed assistant director of the FBI’s International Operations Division. As assistant director, he was responsible for employees in 75 foreign and domestic locations in support of the FBI’s international mission to defeat national security and criminal threats by building a global network of trusted partners and strengthening the FBI’s international capabilities.
In April 2010, Joyce was appointed executive assistant director (EAD) of the FBI’s National Security Branch (NSB), composed of the Counterterrorism Division, Counterintelligence Division, Directorate of Intelligence, and the Weapons of Mass Destruction Directorate.
Joyce was appointed the 14th Deputy Director of the FBI in September 2011. In this position, Joyce had direct oversight of the FBI's 36,000 employees and its $8 billion annual budget. With more than 26 years of service in the FBI, Joyce brought a wide range of operational and leadership experience. He was an integral part of transforming the FBI into an intelligence-driven organization. In addition, he spearheaded several strategic initiatives, including ‘next generation cyber’, a cross-organizational initiative to maintain the FBI’s world leadership in law enforcement and domestic intelligence. He also established a framework to operate and evaluate the FBI’s 56 domestic field offices.
Private Sector Experience
Joyce is currently a Principal in PricewaterhouseCoopers' (PwC) Advisory Practice, where he is the Global and U.S. Cybersecurity, Privacy and Forensics Leader and a member of the U.S. Advisory Leadership Team.
During his time at PwC, Joyce has worked with clients in various sectors, providing strategic guidance, investigative support, help with technological change, incident and breach response, and cybersecurity advice. Most notably, Joyce has consulted on some of the most prominent cyber breaches, providing guidance and expertise to top executives. Joyce has also briefed many boards and senior executives on the challenges posed by the digital revolution, including the threat landscape, best practices in governance and lines of defense, and how to use cybersecurity and resiliency as business enablers.
As the Global Cybersecurity, Privacy and Forensics Leader, Joyce is responsible for overseeing more than 4,500 staff globally. In addition, he has spearheaded several strategic initiatives, including the Cybersecurity and Privacy Innovation Institute, which provides key research, perspectives, and analysis on trends affecting the industry.
Additionally, Joyce previously led the US and Global Financial Crimes Unit for PwC, focusing on the interplay between cybersecurity, anti-money laundering and sanctions, fraud, and anti-bribery/anti-corruption.
Prior to rejoining PwC, Joyce was the Chief Trust Officer at Airbnb, where he led design specialists, product managers, engineers, and data scientists to help grow and defend the platform. He also had responsibility for privacy and community policy, and was a member of the Airbnb Executive Committee.
Joyce is also a member of the Aspen Institute Cybersecurity Working Group, a cross-sector public-private forum dedicated to addressing cybersecurity challenges.
Honors and Awards
Joyce is a 2013 recipient of the Director of National Intelligence Distinguished Service Medal, the CIA Director’s Award, the DIA Director’s Award, the FBI Meritorious Medal, the Attorney General’s Award for Exceptional Service, and the 2011 Presidential Rank Award, among other honors.
External links
Airbnb Executive Resigned Last Year Over Chinese Request for More Data Sharing - The Wall Street Journal, 20 November 2020.
Sean Joyce brings intensity to No. 2 job at FBI - The Washington Post, 6 November 2011.
References
Deputy Directors of the Federal Bureau of Investigation
Living people
Tuck School of Business alumni
Boston College alumni
1961 births |
49650476 | https://en.wikipedia.org/wiki/Source-specific%20routing | Source-specific routing | Source-specific routing, also called source-address dependent routing (SADR), is a routing technique in which a routing decision is made by looking at the source address of a packet in addition to its destination address. The main application of source-specific routing is to allow a cheap form of multihoming without the need for provider-independent addresses or any cooperation from upstream ISPs.
The problem
In traditional next-hop routing, a packet is routed according to its destination only, towards the closest router that announces a route that matches that destination. Consider a multihomed end-user network connected to two ISPs, BT&T and PacketCast; such a network will typically have two edge routers, each of which is connected to one ISP.
Both edge routers announce a default route, meaning that they are willing to accept packets destined for the Internet. If a packet with a source in BT&T's network is routed through PacketCast's edge router, PacketCast will assume it is a spoofed packet and drop it, in accordance with BCP 38 (ingress filtering).
Multihoming with source-specific routing
With source-specific routing, each edge router announces a source-specific default route: a route that applies to packets destined to the Internet but only if their source is in a given prefix. The effect is that each edge router only attracts packets that have a source address in that provider's prefix.
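The forwarding decision can be thought of as a lookup keyed on both prefixes: among the entries whose destination and source prefixes both match the packet, the most specific destination wins, with the most specific source as a tie-breaker (a common ordering for source-specific routing). The following Python sketch is purely illustrative: the prefixes and next-hop names are hypothetical (reusing the BT&T and PacketCast example above), and it is not the lookup algorithm of any particular router.

import ipaddress

# Hypothetical source-specific table: (destination prefix, source prefix, next hop).
# "::/0" is the default route; the source prefix restricts who may use it.
ROUTES = [
    (ipaddress.ip_network("::/0"), ipaddress.ip_network("2001:db8:1::/48"), "BT&T edge"),
    (ipaddress.ip_network("::/0"), ipaddress.ip_network("2001:db8:2::/48"), "PacketCast edge"),
]

def lookup(dst, src):
    """Return the next hop for a (destination, source) pair, or None."""
    dst, src = ipaddress.ip_address(dst), ipaddress.ip_address(src)
    candidates = [(d, s, hop) for (d, s, hop) in ROUTES if dst in d and src in s]
    if not candidates:
        return None
    # Most specific destination first; among equal destinations, most specific source.
    _, _, hop = max(candidates, key=lambda r: (r[0].prefixlen, r[1].prefixlen))
    return hop

# Packets sourced from BT&T's (hypothetical) prefix are attracted to BT&T's
# edge router, so they are not dropped by PacketCast's ingress filter.
print(lookup("2001:db8:ffff::1", "2001:db8:1::42"))  # -> BT&T edge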
Desirable host changes
With source-specific routing, each host interface has multiple addresses, one per provider-dependent prefix. For outgoing traffic, host software must choose the right source address. Various techniques for doing that have been suggested, at the network layer, above the network layer (see Shim6), or by using multipath techniques at the higher layers (see Multipath TCP and Multipath Mosh).
Support in routing protocols
On a network with a single edge router, it is possible to implement source-specific routing by manual manipulation of routing tables. With multiple routers, explicit support for source-specific routing is required in the routing protocol.
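On Linux, for instance, this manual configuration is usually expressed with policy routing: an "ip rule" selects a routing table based on the packet's source prefix, and that table contains the provider's default route. The sketch below only prints the commands instead of executing them; the prefixes, gateways, interfaces, and table numbers are invented for the example.

# Illustrative only: generate Linux policy-routing commands for two providers.
providers = [
    # (source prefix,   gateway,   interface, table)
    ("2001:db8:1::/48", "fe80::1", "eth0", 100),  # hypothetical "BT&T"
    ("2001:db8:2::/48", "fe80::2", "eth1", 200),  # hypothetical "PacketCast"
]

for prefix, gateway, ifname, table in providers:
    # Packets sourced from this provider's prefix consult its own table...
    print(f"ip -6 rule add from {prefix} table {table}")
    # ...whose single entry is a default route via that provider's gateway.
    print(f"ip -6 route add default via {gateway} dev {ifname} table {table}")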
As of early 2016, there are two routing protocols that implement support for source-specific routing:
The Babel routing protocol has support for source-specific routing for both IPv4 and IPv6; this is implemented for IPv6 in babeld and in BIRD (earlier versions of babeld supported source-specific routing for IPv4);
There exists an implementation of IS-IS with support for source-specific routing for IPv6 only.
The IETF Homenet protocol suite requires support for source-specific routing in its routing protocol.
References
Routing
Multihoming |
36821209 | https://en.wikipedia.org/wiki/List%20of%20role-playing%20game%20software | List of role-playing game software | Role-playing game software, as opposed to role-playing video games, is software intended to assist in developing and running role-playing games. It does not allow the game to be played entirely within the computer. Such software assists in the drawing of maps, the creation of player characters and non-player characters, the generation of monsters, and the provision of dice rolls and their results. The software may be specific to a single role-playing game system, or flexible enough to be applied to multiple game models.
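As a small illustration of the dice-rolling assistance such tools provide, the Python sketch below parses the common tabletop notation ("3d6+2": roll three six-sided dice and add two) and reports both the individual rolls and the total; it is a generic example rather than code from any particular product.

import random
import re

def roll(spec):
    """Roll dice written in 'NdS+M' notation, e.g. '3d6+2' or 'd20'."""
    m = re.fullmatch(r"(\d*)d(\d+)([+-]\d+)?", spec.strip())
    if not m:
        raise ValueError(f"bad dice spec: {spec!r}")
    count = int(m.group(1) or 1)     # 'd20' means one die
    sides = int(m.group(2))
    modifier = int(m.group(3) or 0)  # optional '+2' or '-1'
    rolls = [random.randint(1, sides) for _ in range(count)]
    return rolls, sum(rolls) + modifier

rolls, total = roll("3d6+2")
print(f"rolled {rolls} for a total of {total}")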
Software
References
Role-playing game software |
17403206 | https://en.wikipedia.org/wiki/Opkg | Opkg | opkg (open package management) is a lightweight package management system based upon ipkg. It is written in C and resembles Advanced Package Tool (APT)/dpkg in operation. It is intended for use on embedded Linux devices and is used in this capacity in the OpenEmbedded and OpenWrt projects.
Opkg was originally forked from ipkg by the Openmoko project. More recently, development of opkg has moved from its old Google Code repository to the Yocto Project, where it is actively maintained again.
Opkg packages use the .ipk extension.
References
External links
Free package management systems
Free software programmed in C
Linux package management-related software
Linux-only free software |
7038564 | https://en.wikipedia.org/wiki/V7 | V7 | V7 may refer to:
Electronics
Vivo V7, a smartphone by Vivo
Science and technology
Chemicals
ATC code V07 All other non-therapeutic products, a subgroup of the Anatomical Therapeutic Chemical Classification System
Communications
V.7, an ITU-T recommendation for data communication
Computing
Version 7 Unix, a reference to the seventh edition of Research Unix from 1979
UNIX V7, a brand mark by The Open Group for compliance with the Single UNIX Specification, Version 4 (SUSv4)
Transportation
Automobiles
Brilliance V7, a Chinese mid-size SUV
Changan Alsvin V7, a Chinese subcompact sedan
Hanteng V7, a Chinese mid-size MPV
Luxgen V7, a Taiwanese minivan
Aviation
Volotea, by IATA code
Motorcycles
Moto Guzzi V7, an Italian motorcycle
Other
V7 (political alliance), a political alliance in Suriname
The Marshall Islands, by ITU callsign prefix
V7, notation for a major-minor dominant seventh chord built on the fifth degree |