Stac Electronics
https://en.wikipedia.org/wiki/Stac%20Electronics
Stac Electronics, originally incorporated as State of the Art Consulting and later shortened to Stac, Inc., was a technology company founded in 1983. It is known primarily for its Lempel–Ziv–Stac lossless compression algorithm and Stacker disk compression utility for compressing data for storage.
History
The original founders included five Caltech graduate students in Computer Science (Gary Clow, Doug Whiting, John Tanner, Mike Schuster and William Dally), two engineers from the industry (Scott Karns and Robert Monsour) and two board members from the industry (Robert Johnson of Southern California Ventures and Hugh Ness of Scientific Atlanta). The first employee was Bruce Behymer, a Caltech undergraduate in Engineering and Applied Science.
Originally headquartered in Pasadena, California, and later in Carlsbad, California, the company received venture capital funding to pursue a business plan as a fabless chip company selling application-specific standard products to the tape drive industry. The plan included expansion into the disk drive market, which was much larger than the tape drive market; following the success of Cirrus in that market, this planned expansion was the real basis for the venture capitalists' interest in Stac.
As part of the application engineering to adapt its data compression chips for use in disk drives, the company implemented a DOS driver that transparently compressed data written to a PC hard disk and transparently decompressed it upon subsequent reads. In doing so, they discovered that, given the speed difference between the PC processor and disk drive access times, it was possible to perform the data compression in software, obviating the need for the data compression chip in every disk drive that the company had planned to produce. This DOS driver was written in x86 assembly language under contract by Paul Houle.
In 1990 the company released Stacker, a disk compression utility. The product was highly successful, due to the relatively small capacities (20 to 80 megabytes) and high prices of contemporary hard drives, at a time when larger software packages such as Microsoft's new Windows user interface were becoming popular. On average, Stacker doubled disk capacity, and usually increased disk performance by compressing the data before writing and after reading, compensating for the relative slowness of the drives. Stac sold several million units of Stacker over the product's lifetime.
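The core idea can be illustrated with a toy sliding-window compressor. The Python sketch below shows the general LZ77 family of techniques to which Lempel–Ziv–Stac belongs; it is only an illustration under stated assumptions, not Stac's actual LZS bit format, the Stacker driver code, or a performance-realistic implementation, and the function names are invented for the example.

def lz_compress(data: bytes, window: int = 2048, min_match: int = 3):
    # Emit ('lit', byte) or ('match', offset, length) tokens.
    tokens, i = [], 0
    while i < len(data):
        best_len, best_off = 0, 0
        for j in range(max(0, i - window), i):  # scan the sliding window
            length = 0
            while (i + length < len(data)
                   and data[j + length] == data[i + length]
                   and length < 255):
                length += 1
            if length > best_len:
                best_len, best_off = length, i - j
        if best_len >= min_match:
            tokens.append(('match', best_off, best_len))
            i += best_len
        else:
            tokens.append(('lit', data[i]))
            i += 1
    return tokens

def lz_decompress(tokens) -> bytes:
    out = bytearray()
    for tok in tokens:
        if tok[0] == 'lit':
            out.append(tok[1])
        else:  # copy byte by byte so overlapping matches work
            _, off, length = tok
            for _ in range(length):
                out.append(out[-off])
    return bytes(out)

sample = b"the rain in spain stays mainly in the plain " * 4
assert lz_decompress(lz_compress(sample)) == sample

Repetitive data compresses well because later occurrences of a phrase are replaced by short back-references into the window, which is the same effect that let Stacker roughly double the usable capacity of typical DOS disks.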
They also released a hardware product, the STAC Coprocessor Card, which was claimed not only to improve the compression of files but also to decrease the time needed to compress them.
After 1994
At some time prior to 1996, the company relocated its main office from Carlsbad to Carmel Valley, in San Diego, and maintained a programming group in Estonia. After settling the lawsuit with Microsoft, Stac attempted to expand its product portfolio in the utility software segment by adding additional storage and communication titles through internal development and acquisition. The company scrambled to replace the revenues lost after the market for hard drive compression software collapsed with the inclusion of DoubleSpace in MS-DOS and the rapid decline in hard disk cost per megabyte. Using the funds from its IPO (1992) and the settlement with Microsoft, Stac acquired a remote desktop software product called "ReachOut". It acquired a server image backup product, "Replica", and internally developed a network backup product for workstations and laptops, and marketed this product first as "Replica NDM" and later as "eSupport Essentials". Much of the technology pioneered in Stac's network backup offering ultimately found its way into today's online backup solutions.
Meanwhile, Stac's original chip business continued to grow. To realize shareholder value, its chip subsidiary, Hifn, was spun off in 1998 in a primary public offering.
Stac then renamed the remaining utility software company to "Previo", and repositioned itself as a help desk and support organization tool provider. This effort was pursued while the dot-com bubble was bursting, and in 2002 management elected to take the unusual step of selling Stac's remaining technology assets (to Altiris) and returning its remaining cash to shareholders before dissolving.
Controversy
Microsoft lawsuit
In 1993, Microsoft released MS-DOS 6.0, which included a disk compression program called DoubleSpace. Microsoft had previously been in discussions with Stac to license its compression technology, and had discussions with Stac engineers and examined Stac's code as part of the due diligence process. Stac, in an effort led by attorney Morgan Chu, sued Microsoft for infringement of two of its data compression patents, and won; in 1994, a California jury ruled the infringement by Microsoft was not willful, but awarded Stac $120 million in compensatory damages, coming to about $5.50 per copy of MS-DOS 6.0 that had been sold. The jury also agreed with a Microsoft counterclaim that Stac had misappropriated the Microsoft trade secret of a pre-loading feature that was included in Stacker 3.1, and simultaneously awarded Microsoft $13.6 million on the counterclaim.
While Microsoft prepared an appeal, Stac obtained a preliminary injunction from the court stopping the sales of all MS-DOS products that included DoubleSpace; by this time Microsoft had already started shipping an "upgrade" of MS-DOS to its OEM customers that removed DoubleSpace. By the end of 1994, Microsoft and Stac settled all pending litigation by agreeing that Microsoft would make a $39.9 million investment in Stac Electronics, and additionally pay Stac about $43 million in royalties on their patents.
See also
Hifn
Novell DOS 7, OpenDOS 7.01, DR-DOS 7.02 and higher
PC DOS 7 and PC DOS 2000
DOS Protected Mode Services
Multimedia Stacker
Disk compression
File Allocation Table
Comparison of file systems
References
External links
Previo support page on Altiris site
Microsoft KB showing installation notes for both the software and hardware for Windows 3.x (copy on GitHub)
Defunct software companies of the United States
Software companies based in California
Technology companies based in Greater Los Angeles
Software companies established in 1983
Technology companies disestablished in 2002
1983 establishments in California
2002 disestablishments in California
Companies based in Pasadena, California
Companies based in Carlsbad, California
Defunct manufacturing companies based in Greater Los Angeles
Microsoft criticisms and controversies
Vegas Pro
https://en.wikipedia.org/wiki/Vegas%20Pro
Vegas Pro is a video editing software package for non-linear editing (NLE) originally published by Sonic Foundry, then by Sony Creative Software, and now by Magix. The software runs on the Windows operating system.
Originally developed as audio editing software, it became a full NLE for video and audio from version 2.0. Vegas features real-time multitrack video and audio editing on unlimited tracks, resolution-independent video sequencing, complex effects and compositing tools, 24-bit/192 kHz audio support, VST and DirectX plug-in effect support, and Dolby Digital surround sound mixing. On 24 May 2016, Sony announced that it had sold Vegas (and most of its "Creative Software" line) to MAGIX, which would continue supporting and developing the software.
Features
VEGAS does not require any specialized hardware to run properly, allowing it to operate on almost any standard Windows computer.
In areas of compositing and motion graphics, Vegas provides a broad tool set including 3D track motion compositing with control over z-depth, and spatial arrangement of visual planes including plane intersection.
Much of the visual effects processing in Vegas follows an audio-like paradigm. Effects can be applied at any stage of the visual signal flow: event level, track level and output level, much as reverb, delay and flange effects are applied in a digital audio system such as Pro Tools, Cubase or Sonar. Master output effects can also be controlled and manipulated over time by the use of Master Bus track automation envelopes.
One major omission of Vegas is that, although it started life as a multitrack audio NLE, it has no MIDI capability at all (apart from control-desk and synchronisation). This restricts its use for audio production and focuses the product on the post-production video NLE market.
VEGAS features integration with 24p DV. It is also one of the few NLEs which can convert other formats to 24p (or any format to any other format) without a plugin or third-party application, and it is the only proprietary NLE that allows multiple instances of the application to be opened simultaneously. Clips and sequences can be copied and pasted between instances of Vegas. One instance can render a sequence in the background while the user continues to edit in a different instance of Vegas in the foreground. VEGAS provides sophisticated compositing including green screen, masking, and keyframe animation. Nesting allows a prior project to be included in another project, modularizing the editing process so that an array of tracks and edits becomes one track for further editing. Any changes to the nested project are reflected in the later project. Nesting is especially helpful in large, complex or special-effects projects, as the final rendering suffers no generation loss.
Unlike many other editors, MAGIX VEGAS Pro supports scripting, which provides task automation, a simplified workflow, and greater efficiency and productivity. Free and paid pre-written scripts are available from the VEGAS community on the web.
Version history
Each release of Vegas is sold standalone; however, upgrade discounts are sometimes provided.
Vegas Beta
Sonic Foundry introduced a sneak preview version of Vegas Pro on 11 June 1999, billing it as a "Multitrack Media Editing System".
Vegas 1.0
Released on 23 July 1999 at the NAMM Show in Nashville, Tennessee, Vegas was an audio-only tool with a particular focus on rescaling and resampling audio. It supported popular formats such as DivX and the RealNetworks RealSystem G2 file formats.
Vegas Video beta (Vegas 2.0 beta)
Released on 10 April 2000, this was the first version of Vegas to include video-editing tools.
Vegas Video (Vegas 2.0)
Released on 12 June 2000.
Vegas Video LE 3.0
Vegas Video 3.0
Released on 3 December 2001.
This release offered:
New Video Effects - Lens Flare, Light Rays, Film FX, Color Curves, Mirror, Remap, Deform, Convolution, Linear Blur, Black Restore, Levels, Unsharp Mask, Color Grad, and Timecode Burn filter.
Batch Capture with Automatic Scene Detection - Capture DV with automatic scene detection, batch capture, tape logging, still image capture and thumbnail previews.
Red Book Audio CD Mastering with CD Architect Technology - Burn professional-quality Red Book audio CD masters directly from the Vegas timeline, with ISRC, UPC, and PQ list support.
New Sonic Foundry DV Codec - This high-quality DV codec developed by Sonic Foundry offers pristine colors, sharp images, artifact-free compositing, and DV chromakeying.
DV Print-To-Tape From The Timeline - Print finished projects, with bars and tone, to DV cameras and decks from the Vegas timeline.
Windows Media File Editing - Create and edit Windows Media files in Vegas Video 3.0.
New MPEG Encoding Tools - The new MPEG plug-in in Vegas Video produces superior MPEG-2 files for DVD productions, with significantly faster render times.
Dynamic RAM Previewing - Temporary render-free RAM previews allow quick analysis and tweaking of complex video FX without time-consuming rendering.
VideoCD and Data CD Burning - Burn your project directly to VideoCD for playback on most DVD players, or data CD for playback on your computer's CD-ROM.
Vegas 4.0
Released on 6 February 2003.
This release included:
Advanced Color Correction Tools
Searchable Media Pool Bins
Vectorscope, Histogram, Parade and Waveform Monitoring
Application Scripting
Improved Ripple Editing
Motion Blur and Supersampling Envelopes
5.1 Surround Mixing
Dolby® Digital AC-3 Encoding certified and tested by Dolby Laboratories
DirectX® Audio Plug-In Effects Automation
ASIO Driver Support
Windows Media™ 9 Support, including Surround Encoding
DVD Authoring with AC-3 File Import Capabilities
Integration with DVD Architect Via Chapter Marker Passing
Vegas 4.0b
Released in April 2003; added HD editing and 24p support.
Vegas 4.0e
Released in November 2003; this was the first release of Vegas under the ownership of Sony. Sonic Foundry had sold Vegas, alongside Sound Forge and other programs, to Sony Pictures Digital for US$18 million earlier that year.
Vegas 5.0
Released in April 2004.
Vegas 6.0
Released on 18 April 2005.
Vegas 7.0
Released in September 2006. Version 7 is the final Vegas release to include Windows 2000 support.
Vegas Pro 8.0
Released on 10 September 2007.
This was the first version to use the "Sony Vegas Pro" branding instead of the regular "Sony Vegas" branding.
It also moved the timeline to the bottom by default, although the user could still move it back to the top.
Vegas Pro 8.1
Vegas Pro 8.1 is the first version of Vegas Pro to be ported to 64-bit systems.
Vegas Pro 9.0
On 11 May 2009, Sony Creative Software released Sony Vegas Pro 9.0 with greater support for digital cinema including:
Support for 4K resolution
Native support for pro camcorder formats such as Red and XDCAM EX
The final release of Sony Vegas Pro 9.0 was Vegas Pro 9.0e (released on 13 May 2010), which added features such as a new white balance video FX.
In 2009, Sony Creative Software purchased the Velvetmatter Radiance suite of video FX plug-ins and these are included in Vegas 9. As a result, they are no longer available as a separate product from Velvetmatter.
Vegas Pro 10
Sony Vegas Pro 10, released on 11 October 2010, introduced many new features such as:
Stereoscopic 3D Editing
Comprehensive Closed Captioning
GPU acceleration
Elastique Pitch Method
Support for OpenFX plugins
Version 10 is the final Vegas Pro release to include Windows XP support.
Vegas Pro 11
Sony announced Vegas Pro 11 on 9 September 2011, and it was released on 17 October 2011. Updated features include GPGPU acceleration of video decoding, effects, playback, compositing, pan/crop, transitions, and motion. Other improvements were to include enhanced text tools, enhanced stereoscopic/3D features, RAW photo support, and new event synchronization mechanisms. In addition, Vegas Pro 11 comes pre-loaded with "NewBlue" Titler Pro, a 2D and 3D titling plug-in.
Vegas Pro 12
Sony released Vegas Pro 12 on 9 November 2012. Updated features include enhanced 4K support, more visual effects, and faster encoding performance. Vegas Pro 12 is dedicated to 64-bit versions of Windows.
Vegas Pro 13
Sony released Vegas Pro 13 on 11 April 2014.
It brought new collaboration tools and streamlined workflows to professional content producers faced with a wide variety of multimedia production tasks. This was the final Vegas Pro release under Sony's ownership. The last Sony Vegas Pro 13 build was #453; MAGIX later released a rebranded version as build #545.
Available in three new configurations:
Vegas Pro 13 Edit: Video and Audio Production
Vegas Pro 13: Video, Audio, and Blu-ray Disc Creation
Vegas Pro 13 Suite: Editing, Disc Authoring, and Visual Effects
Vegas Pro 14
MAGIX released Vegas Pro 14 on 20 September 2016. This was the first release of Vegas Pro under the ownership of MAGIX. It features advanced 4K upscaling, various bug fixes, a higher video velocity limit, RED camera support and various other improvements. It was also the last version of Vegas Pro to ship with the light theme set by default.
Vegas Pro 15
Released on 28 August 2017, Vegas Pro 15 featured major UI changes which claimed to bring usability improvements and customization. It was the first version of VEGAS Pro to have a dark theme, and it also allowed more efficient editing, including new shortcuts to speed up common tasks. Vegas Pro 15 includes support for Intel Quick Sync Video (QSV) and other technologies, as well as various other features. It also introduced the V-shaped VEGAS Pro icon.
Vegas Pro 16
Released on 27 August 2018, Vegas Pro 16 has some new features including file backup, motion tracking, improved video stabilization, 360° editing and HDR support.
Vegas Pro 17
Released on 5 August 2019. It contains these new features:
Nested timelines
Improved video stabilization
Planar motion tracking/video tracking
Smart Split Edit
Dynamic storyboard and timeline interaction
Bézier masking OFX-Plugin
Lens correction plug-in
Improved Picture-In-Picture OFX plug-in
Automatic slideshow creator
Screen capture
Improved multicamera editing
Improved color grading
Length show
Experimental MKV reader
Vegas Pro 18
Released on 3 August 2020. New Features:
Motion Tracker Panel
Improved Video FX, Transitions and Media Generator windows
8-Bit (full range) pixel format
Black Bar Fill plug-in
Denoiser plug-in
Flicker Control plug-in
Style Transfer plug-in
Integrated graphics card driver update check
The Lens Correction FX gained an additional zoom factor
Export and Import of VEGAS Pro preferences
Reworked screen capture utility VEGAS Capture
Incremental Save
A more detailed render progress dialog
Swap video files
New Video Scopes options
VEGAS Prepare
VEGAS Hub explorer window
Alternate High DPI mode
Logarithmic Exposure adjustment
Some more legacy features were hidden by default; they can be re-enabled under Preferences > Deprecated Features
Event edge handles
Vegas Pro 18 suffered from serious stability issues, leaving it with a 35% positive review score on the distribution platform Steam.
Vegas Pro 19
Released on 18 August 2021. New features:
Improved user interface
Improved color grading
Improved effects
New cloud-integrated content management and acquisition.
Live streaming
Reception
Major broadcasters have utilized the software, including Nightline with Ted Koppel. Several film festival winners have used Vegas to cut their features. It is also used by many small to medium Internet content creators due to its ease of use, popularity, and the availability of tutorials on the software.
Related products
The consumer-level Vegas Movie Studio version (formerly titled VideoFactory and Screenblast) shares the same interface and underlying code base as the professional Vegas version, but does not include professional features such as advanced compositing tools or advanced DVD/Blu-ray Disc authoring. In earlier releases, the video editing portion of the professional suite could be purchased separately from Sony's DVD and Blu-ray Disc authoring software, DVD Architect Pro (previously called DVD Architect; DVD Architect Studio is the consumer version); a package called "Vegas + DVD" later became available during the Vegas 7 era. Since the release of Vegas Pro 8.0, DVD Architect Studio Pro 4.5 and Vegas Pro 8.0, along with Boris FX LTD and Magic Bullet Movie Looks HD, have been bundled together and may not be purchased individually.
Catalyst Production Suite is a new lineup of video preparation and editing software released by Sony Creative Software.
References
Further reading
External links
Video editing software
Vegas
Windows-only software
Postage stamps and postal history of Malta
https://en.wikipedia.org/wiki/Postage%20stamps%20and%20postal%20history%20of%20Malta
The postal history of Malta began in the early modern period, when pre-adhesive mail was delivered to foreign destinations by privately owned ships for a fee. The earliest known letter from Malta, sent during the rule of the Order of St John, is dated 1532. The first formal postal service on the islands was established by the Order in 1708, with the post office being located at the Casa del Commun Tesoro in Valletta. The first postal markings on mail appeared later on in the 18th century.
The postal service was reformed in 1798 during the French occupation of Malta, and the islands were taken over by the British in 1800. In the early 19th century, two separate post offices were established in Malta: the Island Post Office and the Packet Office, with the latter forming part of the British Post Office. Their operations were amalgamated in 1849, and British postage stamps began to be used in Malta in August 1857. Malta's first postage stamp—the Halfpenny Yellow—was issued in 1860 for use on local mail, while letters sent to foreign destinations continued to be franked with British stamps.
In 1885, the Malta Post Office was set up and Halfpenny Yellows and British stamps were no longer valid in Malta. A set of six definitive stamps along with several types of postal stationery were issued. Malta continued to issue stamps and stationery throughout the 19th, 20th and 21st centuries. At some points from the 1880s to the 1980s, postage stamps or dual-purpose postage and revenue stamps were also valid for fiscal use, but at times separate revenue stamps were issued. Postage due stamps were issued between 1925 and the 1990s.
In 1995, the private limited company Posta Limited was set up to run the postal service. The public limited company MaltaPost took over in 1998, and was gradually privatized between 2002 and 2008.
Postal history
Hospitaller rule and French occupation (1530–1800)
The earliest known information about mail in Malta is from when the islands were under Hospitaller rule. It is believed that prior to the Order's arrival in Malta in 1530, correspondence between Malta and Sicily was carried on private vessels such as speronaras for a fee. The earliest known letter from Malta is dated 14 June 1532; Grand Master Philippe Villiers de L'Isle-Adam sent it to the Bishop of Auxerre, who was also the French ambassador in Rome.
After the plague epidemic of 1675–1676, mail was disinfected at the Barriera in the capital Valletta. At the time it was feared that paper could carry the disease.
In 1708, Grand Master Ramón Perellós established the Commissary of Posts, which was Malta's first proper postal service. A post office was established within the Casa del Commun Tesoro in Valletta. At this point, fixed tariffs based on weight, the number of sheets and the destination of a letter were introduced. The first postal markings on Maltese mail appeared sometime in the second half of the 18th century.
During Hospitaller rule, most correspondence sent to or from Malta was with the Italian states and France. There was also mail sent to and from Greece and Spain, but this was not as frequent.
On 18 June 1798, shortly after the successful French invasion of Malta, Napoleon issued an article which was meant to reorganise Malta's postal system such that postal charges covered the expenses of running the postal service. At this point, a handstamp reading Malte was introduced. The text in this postal marking is set within an irregular shape resembling a loaf, and it is therefore commonly nicknamed as the "Loaf of Bread" handstamp.
Early British rule (1799–1884)
After a few months of French occupation, a rebellion broke out and the British, Portuguese and Neapolitans assisted the Maltese rebels. On 7 October 1799 Alexander Ball, the British Civil Commissioner of Malta, issued a notice which set up a mail delivery service in the rebel-held parts of Malta. Some rooms in San Anton Palace were used as a post office, and mail was sent to Sicily every week. After Malta had become a British protectorate in 1800, the Island Post Office was established as a government department handling inland mail and ship letters, and it continued to be housed at the Casa del Commun Tesoro. The Chief Secretariat's Office took responsibility of the Post Office in 1815. Starting from 1819, the Island Post Office used various handstamps reading Malta P Paid, Malta Post Paid or Malta Post Office on mail.
On 3 July 1806, the Packet Office was established in order to create a regular mail service between Britain and Malta. This was part of the British Post Office, and it was a separate entity from the Island Post Office. The Packet Office was initially also housed in the Casa del Commun Tesoro, but in separate rooms from the Island Post Office. Private vessels known as packet boats were contracted to carry mail from Falmouth, Cornwall to Malta via Gibraltar. The first such journey was made by the vessel Cornwallis, which left Falmouth on 19 July 1806 and arrived in Malta on 20 August. The Packet Agent in Malta introduced its own handstamps on mail (reading Malta) in early 1807. The Falmouth–Gibraltar–Malta packet boat service was extended to Corfu in 1819, and further packet boat links were introduced to Messina in 1824 and Genoa in 1826. In the early 19th century, mail to destinations such as Sicily or Mahón, Menorca could also be sent via merchant or naval ships.
Throughout the 19th century, mail was disinfected by slitting letters open and soaking them in vinegar or exposing them to fumes of a mixture of substances. By 1812, this was carried out at the Profumo Office of the Lazzaretto. During the plague epidemic of 1813, the local postal service continued to function and it was the only means of communication between people in areas where movement was restricted. Disinfected mail began to be marked with wax seals starting from around 1816, and later with handstamps from around 1837. Disinfection of mail on a large scale ceased in the 1880s, but it was carried out in rare cases until at least 1936.
In 1842, the Packet Office moved to the Banca Giuratale, and in March 1849 the Island Post Office also moved to this building, which came to be known as the General Post Office. On 1 April 1849, a Postmaster became responsible for both offices, and their operations were combined although they officially remained separate until 1884. On 10 June 1853, an experimental free daily postal service for letters and newspapers was introduced between Valletta, the Three Cities, Gozo and some of the larger towns. Prepayment of postage became compulsory on 1 March 1858, shortly after British postage stamps had been introduced in Malta.
In 1859, it was decided that a Malta postage stamp would be issued for local mail, and the Halfpenny Yellow was subsequently issued on 1 December 1860. At this point, daily delivery was introduced in Valletta, Floriana and Sliema, and letter boxes were introduced in Valletta. The new stamp was only valid locally, and mail addressed to foreign destinations continued to be franked with British stamps. The United Kingdom joined the General Postal Union (later known as the Universal Postal Union, UPU) on 1 July 1875, and since at the time the postal services in Malta were deemed to be part of the British Post Office, the islands were effectively also part of the union. Malta claims to have been the first British colony to join the UPU, although due to its status as a colony it did not gain full membership until independence almost a century later.
Malta Post Office (1885–1995)
In 1879, Governor Arthur Borton made a proposal to transfer control of postal services to the Government of Malta. This occurred on 1 January 1885, when the Island Post Office and the Packet Office were officially merged into a single Post Office. At this point, the Halfpenny Yellows and British stamps were withdrawn and new Malta stamps and postal stationery were issued to replace them. On 17 May 1886, the General Post Office moved from the Banca Giuratale to Palazzo Parisio, also in Valletta. From 14 August 1889, letter carriers were given a numbered handstamp which they would apply to the mail that they were delivering. This created accountability since it made it easy to identify the postman responsible in the case of misdelivered mail. These markings, known as postmen's personal handstamps, remained in use until 1949.
By around 1891, police stations in various villages had begun to sell postage stamps. The postal system was reorganised in 1894, and this included the establishment of postal districts. Circular postmarks for various towns and villages were introduced in 1900, but most were withdrawn in 1921. Some of the larger post offices such as those at Cospicua, Notabile, Sliema and Victoria, Gozo retained their postmarks.
Postal services in Malta were disrupted during World War II, and postal censorship was introduced at this point. Malta was heavily bombarded during the war, and several post offices were destroyed or damaged due to aerial bombardment. The Cospicua post office was hit in June 1940, and its staff and equipment were relocated to a temporary post office in Paola, which was itself bombed on 12 February 1942. It was relocated once again to Żejtun where it remained until 1946. Palazzo Parisio was bombed on 24 April 1942 and the GPO temporarily moved elsewhere in Valletta until returning to its former location on 16 January 1943. During this period, some of the GPO services were transferred to the primary school of Ħamrun.
Demand for postal services increased drastically after the war, and many new post offices were established in towns and villages between the 1950s and the 1980s. The first postage meter was installed in Malta in 1961. Malta became independent in 1964, and the country was accepted into the UPU on 21 May 1965.
The Parcel Post Office moved to a new building in Victory Square in Valletta on 12 November 1963. Palazzo Parisio remained in use by the postal authorities until 4 July 1973, when the GPO moved across the street to Auberge d'Italie and the Central Mail Room, the registered letter branch and the Poste Restante moved to the former Garrison Chapel (a building now housing the Malta Stock Exchange). While it was the GPO, parts of Auberge d'Italie also housed other government departments.
Following the murder of Karin Grech by a letter bomb in 1977, mail addressed to people who were perceived to be at risk of a similar attack was checked for explosives. Once an item was certified as harmless, a marking with the letter X was applied to it before being delivered to the recipient. Such markings are known as "Karin Grech crosses" by philatelists.
Alphanumeric postal codes were introduced in Malta in 1991.
Posta Ltd (1995–1998)
On 1 October 1995, the private limited company Posta Ltd was set up to run the General Post Office. This was done after the British Postal Consultancy Service recommended in 1994 that the postal service should be run commercially. In 1996, the company made losses when sending bulk mail to Germany after the latter increased its tariffs; Posta Ltd had signed a contract with the phantom company Euromail Ltd that did not take the rate increase into account, and Euromail profited from Posta's losses.
In October 1997, the company moved to new premises at 305, Qormi Road in Marsa, and the Parcel Post Office, Central Mail Room and Philatelic Bureau were transferred there from Valletta.
MaltaPost (since 1998)
The public limited company MaltaPost plc was established on 16 April 1998, and it took over operations of Malta's postal service on 1 May of the same year. On 31 January 2002, MaltaPost was partially privatized when the government sold 35% to Transend Worldwide Ltd, a subsidiary company of New Zealand Post. In September 2007 the government sold 25% of its shareholding in MaltaPost to Lombard Bank, which effectively became the majority shareholder in the company with 60% shareholding. The other 40% were sold to the public in January 2008.
Malta entered the European Union (EU) in 2004, and since then there have been significant changes in the legislation which regulates Malta's postal services. The mail distribution system was also restructured, and in 2007 MaltaPost changed the country's postcodes. As required by EU legislation, the postal services sector was liberalised on 1 January 2013, allowing other entities apart from MaltaPost to provide postal services in the country. As of 2020, MaltaPost remains Malta's only Universal Service Provider of postal services. Apart from transporting mail and other standard postal activities, the company's post offices also provide services such as payments of bills and the sale of stationery.
On 17 June 2016, MaltaPost opened the Malta Postal Museum in Valletta.
Postage stamps
Use of British stamps in Malta (1857–1884)
Between 1855 and 1856, some mail sent by British military personnel during the Crimean War, franked with British stamps, was cancelled at Malta with a wavy-lines grid postmark.
British postage stamps became available to the general public in Malta on 18 August 1857 and they remained valid until 31 December 1884. These can be identified by coded postal obliterators reading M or A25, which were introduced in 1857 and 1859 respectively. The M postmark was withdrawn by early 1861 and a number of different A25 postmarks remained in use until 1904.
Most contemporary British stamps with face values up to 10/- can be found used in Malta. These include line-engraved issues such as the Halfpenny Rose Red, Penny Red, Three Halfpence Red and Two Pence Blue, the embossed stamps and surface-printed issues such as the Penny Venetian Red, Penny Lilac and the 1883–1884 Lilac and Green issue. Some British postal fiscals also exist used in Malta.
Malta stamps under British rule (1860–1964)
Queen Victoria and King Edward VII definitives (1860–1911)
Malta's first postage stamp, the Halfpenny Yellow, was issued on 1 December 1860. It was only valid for local letters, and British stamps had to be used for mail addressed to foreign destinations. The stamp was printed by De La Rue initially on blued unwatermarked paper. It was reprinted 29 times over the course of over two decades, resulting in stamps from the printings having differences in the colour shade, paper, watermark or perforation.
At the end of 1884, a series of definitive stamps depicting Queen Victoria was issued, and they became valid for use on 1 January 1885 when control of the postal services was fully transferred to the local colonial government. The ½d value of this issue had the design of the 1860 stamp but was printed in green, while the other stamps (with denominations of 1d, 2d, 2½d, 4d and 1/-) had designs which incorporated the Maltese cross. The colours of all six stamps were based on UPU regulations. A 5/- value in a larger format was added in 1886.
Four new definitive stamps were issued in 1899. Instead of depicting the monarch, these stamps featured a Gozo boat (4½d), a Hospitaller galley (5d), the national personification Melita (2/6) and St Paul's Shipwreck (10/-). A ¼d value depicting the Grand Harbour was added in 1901 for the postal rate of local printed matter. In 1902, there was a shortage of 1d stamps, so stocks of the Queen Victoria 2½d stamp of 1885 were overprinted at the Government Printing Office in Valletta. One stamp in each sheet had a misspelled overprint, and it is believed that this error was produced deliberately.
Between 1903 and 1904, a set of seven stamps with face values between ½d and 1/- was issued. The frame of the stamps was based on the halfpenny yellow but they included the portrait of the new monarch Edward VII with a crown added on top. The 1886 5/- and the 1899–1901 pictorials remained in use. A new watermark was introduced in October 1904 and it was used on subsequent reprints of the 1899–1901 pictorials and 1903–1904 definitives. From 1907 to 1911 there were colour changes, and some bicoloured stamps were reissued in one colour. A 5/- value depicting Edward VII was issued in March 1911 replacing the 1886 issue.
King George V definitives (1914–1930)
After George V became king in 1910, new definitives which depicted his portrait were issued between 1914 and 1920. These stamps had standard keytype designs which were used in other British colonies, and their inscription reflected that they were equally valid for postal and fiscal use. The lower values between ¼d and 1/- had a smaller size than the 2/- and 5/- high values. A pictorial 4d value which was similar to the 1901 ¼d was also issued, and the 2/6 of 1899 was reprinted with a new watermark in 1919. That year, a 10/- value with a modified version of the 1899 St Paul's Shipwreck design was released. A total of 1,530 copies were issued, and today it is Malta's most expensive stamp.
Two war tax stamps with denominations of ½d and 3d were issued in 1917 and 1918, during World War I. In late 1921 and early 1922, some of the 1914–1919 issues were reprinted with a new watermark (including the 10/- value, but this is not as scarce as the 1919 issue). The 2d value of this issue had a new design. On 15 April 1922, this stamp was also issued locally surcharged after there was a shortage of ¼d stamps.
In 1921, Malta was granted a limited form of self-government, which led to the establishment of a Senate and Legislative Assembly. To commemorate this, 1899–1922 definitive stamps were locally overprinted and were issued between 12 January and 29 April 1922. Between August 1922 and 1926, a set of stamps known as the Melita issue was released, with face values between ¼d and £1. The stamps featured allegorical representations and were designed by the Maltese artists Edward Caruana Dingli and Gianni Vella. In 1926, separate revenue stamps began to be issued again, so the Melita stamps up to 10/- were reissued with an overprint. Two sheets of the 3d value were discovered with the overprint inverted, and uncertainty about the error's issue led to a political scandal in 1930.
A new series of postage stamps was issued between 1926 and 1927. They were printed by Waterlow and Sons instead of De La Rue (which had printed all of Malta's stamps since 1860). The values of ¼d to 6d depict George V and the coat of arms of Malta, while the values of 1/- to 10/- have engraved designs. Air mail was introduced on 1 April 1928, and the 6d stamp was issued overprinted as Malta's first airmail stamp. In 1928, it was decided that dual-purpose postage and revenue stamps would be issued again instead of having separate issues. The 1926–1927 stamps were issued with overprints in 1928, and a series of stamps with amended inscriptions was issued in 1930.
King George VI and Queen Elizabeth II definitives (1938–1964)
In 1938, a definitive set of stamps with denominations between ¼d and 10/- depicting the new monarch George VI and pictorial scenes was issued. Some of the scenes were reused from the 1926–1930 definitives, whilst some values had completely new designs. The stamps were printed by Waterlow. The ¼d value was the only stamp which did not depict the monarch, but instead was based on the 1901 ¼d stamp, modernised to show how the Grand Harbour looked in the 1930s and with the GRI monogram representing the monarch. Six stamps from this set were reissued in new colours in 1943.
In 1947 Malta was granted self-government again, and in 1948 this was commemorated with an overprint on the George VI definitives. In 1953, six stamps were reissued with colour changes, still bearing the overprint. Most of these did not exist in these colours without the overprint. A variety of the 1½d green exists in which the overprint is albino, such that it appears to be omitted. This is one of Malta's rarest postage stamp errors.
A definitive issue of stamps with denominations from ¼d to £1 depicting Queen Elizabeth II and various pictorial scenes was issued between January 1956 and January 1957. The ¼d to 2/- stamps were printed by Bradbury Wilkinson and Company, while those between 2/6 and £1 were printed by Waterlow. The stamps depicted monuments, churches and historic sites in Malta such as the Great Siege Monument, Auberge de Castille, Les Gavroches and monuments in Saint John's Co-Cathedral, along with scrolls presented to Malta by George VI and President Franklin D. Roosevelt during World War II. In 1963 and 1964, the 1d and 2d denominations of this issue were released with a different watermark.
Omnibus and commemorative issues (1935–1964)
Malta participated in all Crown Agents omnibus issues prior to independence, issuing stamps with common designs that were used in many colonies of the British Empire. Malta issued such stamps for the silver jubilee of George V (1935), the coronation of George VI (1937), victory at the end of World War II (1946), the Royal Silver Wedding (1949), the 75th anniversary of the UPU (1949), the coronation of Elizabeth II (1953), Freedom from Hunger (1963) and the Red Cross centenary (1963).
In 1950, Malta issued a set of three stamps on the occasion of the visit of Princess Elizabeth (later Elizabeth II) to Malta. Other commemorative issues followed, with issues for the 7th centenary of the Scapular in 1951, and for another royal visit by Elizabeth II and the centenary of the dogma of the Immaculate Conception both in 1954. In 1960, a set of stamps on stamps was issued for the centenary of the Halfpenny Yellow, and in 1962 a set of stamps was issued to commemorate the Great Siege of Malta of 1565.
A set of stamps commemorating the award of the George Cross to Malta was issued in 1957, designed by the Maltese artist Emvin Cremona, who would design hundreds of Malta stamps until the 1970s. Other early stamp designs by Cremona include issues commemorating technical education (1958), other issues commemorating the George Cross award (1958, 1959 and 1961), the 19th centenary of St Paul's shipwreck (1960), an Anti-Brucellosis Congress (1964) and the First European Congress of Catholic Doctors (1964); the latter was issued two weeks before Malta became independent. Most commemorative issues from the mid-1950s until independence were printed by Harrison and Sons, but a few were printed by Bradbury Wilkinson or Waterlow.
Independence and republic (since 1964)
Malta achieved independence as the State of Malta on 21 September 1964, and on that day a set of six stamps was issued to commemorate the event. The stamps depicted a dove representing peace along with a British crown, a Papal tiara and the United Nations emblem. A definitive set depicting scenes from Malta's history was issued in 1965, and two additional values were issued in 1970. The independence and definitive sets were designed by Cremona, as were most stamps until the 1970s. Many stamps of the mid-1960s exist with flaws or errors such as missing colours, most notably the 1965–1970 definitive. Some stamps from this definitive were also issued in postage stamp booklets in 1970 and 1971.
Malta adopted the Maltese pound or lira in 1972, and this event was commemorated by stamps showing the new coins. Some stamps of the 1965 definitive were subsequently surcharged in 1972 and 1977, and a definitive set in the new currency was issued in 1973. This set showed various scenes while the top value of £M2 was larger and depicted the country's coat of arms. Malta became a republic on 13 December 1974 and stamps commemorating this were issued in early 1975. The coat of arms was changed at this point, and a new version of the £M2 definitive was issued in 1976 to reflect this.
Other decimal currency definitives depicted the history of Maltese industry (1981), the Maltese Islands' natural and artistic heritage (1991), and flowers (1999–2006). The 1981 and 1991 definitives were each issued at one go, and in 1994 some stamps from the 1991 definitive were issued in booklets. The flowers definitives were issued in installments, and they included Malta's only self-adhesive stamps which were issued exclusively in booklets printed by Cartor Security Printers in 2003 and 2004. In 2002, postage labels which were dispensed from vending machines were also issued.
From 1964 to 1972, most stamps were printed by Harrison or De La Rue, but some were printed by the Government Printing Works in Rome, by Joh. Enschedé, by a printing works in Vienna, by the Government Printer of Israel or by Format International Security Printers. Stamps were printed locally by Printex Limited from 1972 to 1999, and by Bundesdruckerei in Germany between 1999 and 2004 (except for the aforementioned booklet stamps by Cartor). Since 2004, stamps have once again been printed in Malta by Printex. All Malta stamps since independence have a watermark consisting of multiple Maltese crosses, with the exception of the 1999–2004 stamps by Bundesdruckerei or Cartor which had no watermark.
The number of commemorative or pictorial stamps issued each year increased drastically since Malta's independence. Christmas stamps have been issued annually since 1964. Between 1969 and 2001, these were semi-postal since they were sold at a premium over face value to raise money for charity. Malta issued its first miniature sheet in 1971 along with that year's Christmas stamps, and such sheets have been issued regularly since then. Malta has also issued EUROPA stamps since 1971, and since 2006 they have also been issued in stamp booklets as well as sheets.
A set of five stamps depicting films shot in Malta was meant to be issued on 27 February 2002, but due to unclear circumstances the stamps were never officially released. Despite this, some stamps ended up in collectors' hands and a set of the unissued stamps sold for £5,500 at an auction in 2018.
When Malta joined the European Union (EU) during the 2004 enlargement, a joint stamp issue was released by nine of the ten new EU members including Malta. The joint issue was prepared at MaltaPost's initiative, and the stamps issued by the nine countries had a common design by the Maltese artist Jean Pierre Mizzi. This was Malta's first joint issue, and a number of other joint issues have been released since then. Malta has issued SEPAC stamps since 2007. MaltaPost introduced personalised stamps in 2005, and continues to offer this service as of 2020.
Malta adopted the euro as its currency on 1 January 2008. Stamps issued shortly before and after the changeover (between 22 December 2006 and 28 June 2008) were denominated in both the lira and euro currencies in accordance with guidelines issued by the National Euro Changeover Committee. Pre-2006 stamps denominated only in lira remained valid for use until 31 January 2008, and they could be exchanged for euro-denominated stamps until March 2008, after which they were invalidated. The dual currency stamps issued since December 2006 remain valid for use today along with euro-denominated stamps.
A definitive set which commemorated events in Malta's history was issued in 2009, and two further stamps were added to the set in 2012 after postal rates had increased. Malta issued its largest set on 10 August 2012, when 88 stamps depicting ships which had taken part in Operation Pedestal were issued.
Postage due stamps
Malta used postage due markings on mail throughout the 19th and early 20th centuries, but only issued its first set of postage due stamps on 16 April 1925. The first set consisted of ten imperforate provisional stamps with denominations ranging from ½d to 1/6, and they were typeset at the Government Printing Office in Valletta. The stamps had simple numeral designs, and they were printed such that tête-bêche pairs occurred within each sheet.
On 20 July 1925, a new set of postage due stamps which had been printed by Bradbury Wilkinson in Britain was issued in Malta. These had the same face values as the provisional issue, and they had a design featuring a Maltese cross and were printed in different colours. This design continued to be used until after independence, and many reprints were made resulting in the stamps existing with various different paper types, perforations, watermarks and colour shades.
Decimal currency postage due stamps inscribed Taxxa Postali (Maltese for "postal tax") were issued in 1973, and their design consisted of a numeral overlaid on a Maltese lace background. A final set of postage dues was issued in 1993, depicting a Neolithic spiral design from one of the megalithic temples found in the islands.
Postal stationery
Malta has issued postal stationery items since 1885. Postal cards (including reply cards), newspaper wrappers and registered envelopes were the first postal stationery items to be issued, concurrently with the first definitive stamps, while pre-stamped envelopes and aerogrammes were introduced later on. Most of these were discontinued at various points during the 20th century, but as of 2020 pre-stamped envelopes are still in use while pre-stamped postal cards are still issued regularly for philatelic purposes.
Postal cards
Between 1885 and 1917, various ½d and 1d postal cards were issued for local and foreign rates respectively. These cards bore an imprinted stamp which depicted the reigning monarch: initially Victoria and later Edward VII or George V. Some of the cards exist in two versions: as a single card and as a reply card (the latter consisting of two cards attached together, intended to allow recipients to send a reply without paying any postage themselves). In 1927 and 1936, postal cards which included imprinted versions of the contemporary George V postage stamps were issued, followed by cards with imprinted versions of the George VI pictorial definitives in 1938 and 1944.
Since 1980, Malta has issued postal cards for philatelic purposes. Most of them have an imprinted version of EUROPA stamps, but there are a few exceptions which have actual adhesive stamps affixed instead of imprinted versions. Cards were issued almost annually for international philatelic exhibitions between 1980 and 2006, and for local philatelic exhibitions since 2007. In 1989 an additional postal card was issued to commemorate the 25th anniversary of independence, and in 2009 cards were issued for both an international and a local exhibition.
Occasion cards which usually bear an imprinted stamp have also been issued since 2001. These are similar to the philatelic postal cards, and they commemorate an event, anniversary or a philatelic exhibition.
Newspaper wrappers
Newspaper wrappers were issued between 1885 and 1913. Only three wrappers were issued, and they were all denominated ½d and depicted the reigning monarch (Victoria, Edward VII or George V).
Registered envelopes
Registered envelopes were issued between 1885 and 1995. Colonial-era envelopes came in two different sizes and they had an imprinted stamp which depicted the reigning monarch: Victoria, Edward VII, George V, George VI or Elizabeth II. This imprinted stamp only covered the registration fee, while the actual postage had to be paid by additional postage stamps. They were printed by either McCorquodale & Co Ltd or De La Rue, who imprinted the name of their company on the envelopes. Elizabeth II envelopes initially remained in use after independence, and in 1972 they were overprinted with denominations in cents, the new Malta pound being divided into 100 cents from May 1972. From 1974 to 1995, a number of envelopes with the imprinted stamp showing the coat of arms of Malta were issued. The country's coat of arms changed twice during this period, first in 1975 and then in 1988, and in both cases envelopes were issued with the old symbols obliterated, usually also being replaced by the new emblem imprinted next to or affixed over the older one.
Pre-stamped envelopes
Malta issued its first pre-stamped envelopes in 1900. These came in three different sizes, and they had an imprinted 1d stamp depicting Queen Victoria. These proved to be unpopular, and no further pre-stamped envelopes were issued until this type of postal stationery was reintroduced by MaltaPost on 16 September 2002. The envelopes came in two designs, with imprinted stamps for local mail depicting a luzzu and those for foreign mail depicting a map. These envelopes were popular, and they were issued in a number of different sizes and formats. On 20 May 2006, a new design depicting a Neolithic spiral design was introduced for the local envelopes. These were also issued in a number of sizes and formats.
Another new design of local pre-stamped envelopes was introduced in 2011. The imprinted stamps depict a spiral design similar to 2006, along with an architectural element from one of the megalithic temples. Various denominations and formats of these envelopes have been issued since then, and they remain in use as of 2020. MaltaPost also allows companies to order personalised versions with their logo printed on the envelope.
Aerogrammes
Malta issued two pre-stamped aerogrammes in 1971, and they had imprinted versions of definitive stamps which had been issued a year earlier.
Philately
Stamp collecting existed in Malta soon after the Halfpenny Yellow was first issued in the 1860s. The hobby was popular on the islands during the British colonial period, and philatelic organisations such as the Malta Philatelic Society Zeitun existed in the 1930s and 1940s. There are a number of such organisations today, including the Malta Philatelic Society which was set up in 1966, the Gozo Philatelic Society which was set up in 1999 and the Żejtun Philatelic Group which was set up in 2002.
The Malta Study Circle is a United Kingdom-based group with an interest in studying and exchanging information on Malta's stamps and postal history. The organisation was originally established in 1948, but it lapsed by 1952 before being revived by the philately expert Robson Lowe in 1955. Since then, it has published a number of study papers and regular newsletters about various aspects of Malta's philately.
Annual philatelic exhibitions are held by the Malta Philatelic Society and Gozo Philatelic Society in conjunction with MaltaPost. In the 2010s, such exhibitions also began to include other non-stamp collections.
See also
Designers of Maltese stamps
Revenue stamps of Malta
Postage stamps and postal history of the Sovereign Military Order of Malta
Notes
References
Bibliography
Further reading
External links
MaltaPost official website
MaltaPost Philatelic Bureau official website
Malta Philatelic Society
Gozo Philatelic Society
Malta Study Circle
Communications in Malta
Philately of Malta
Postal system of Malta
Institute for Logic, Language and Computation
https://en.wikipedia.org/wiki/Institute%20for%20Logic%2C%20Language%20and%20Computation
The Institute for Logic, Language and Computation (ILLC) is a research institute of the University of Amsterdam, in which researchers from the Faculty of Science and the Faculty of Humanities collaborate. The ILLC's central research area is the study of fundamental principles of encoding, transmission and comprehension of information. Emphasis is on natural and formal languages, but other information carriers, such as images and music, are studied as well.
Research at the ILLC is interdisciplinary, and aims at bringing together insights from various disciplines concerned with information and information processing, such as logic, mathematics, computer science, computational linguistics, cognitive science, artificial intelligence, and philosophy. It is organized into three groups: Logic & Computation (project leader: Yde Venema), Logic & Language (project leader: Robert van Rooij), and Language & Computation (project leader: Jelle Zuidema), united by the key themes of Explainable and Ethical AI; Interpretable Machine Learning for Natural Language Processing; Cognitive Modelling; Logic, Games and Social Agency; and Quantum Information and Computation. The ILLC is involved in several international collaborations, among them the Joint Research Centre for Logic (JRC), a special collaborative partnership between Tsinghua University and the University of Amsterdam.
In addition to its research activities, the ILLC runs the Graduate Programme in Logic, comprising a PhD programme and the MSc in Logic, an international, top-ranked and interdisciplinary MSc degree in logic. In September 2018, the institute opened the Minor in Logic and Computation, welcoming local and international bachelor students. The programme of the Minor in Logic and Computation consists of 30 EC, chosen from a list of high-profile courses organised according to four themes: Mathematics, Philosophy, Theoretical Computer Science, and Computational Linguistics and AI.
History
The ILLC started off in 1986 as Instituut voor Taal, Logica en Informatie (ITLI; Institute for Language, Logic and Information). In the beginning, it was an informal association of staff members from the Faculty of Mathematics and Computer Science and the Faculty of Philosophy, and was joined by computational linguists from the Faculty of Humanities in 1989.
In 1991 the institute was officially established as a University Research Institute. During 1991–1996 the programming research group of the Faculty of Mathematics and Computer Science was also part of the institute. The Applied Logic Lab from the Faculty of Social Sciences was part of the ILLC from 1996 to 2003. Other groups in computer science and cognitive science associated themselves with the institute in 1996.
The ILLC is rooted in the Amsterdam logic research tradition dating back to the early twentieth century (including researchers such as L.E.J. Brouwer, Arend Heyting, and Evert Willem Beth). It considers Beth's Instituut voor Grondslagenonderzoek en Filosofie der Exacte Wetenschappen (founded in 1952) as its precursor.
Directors
Members
Other notable members and past members include:
Renate Bartsch
Harry Buhrman
Peter van Emde Boas
Henkjan Honing
Luca Incurvati
Theo Janssen
Dick de Jongh
Michiel van Lambalgen
Benedikt Löwe
Remko Scha
Anne Troelstra
Jouko Väänänen
Paul Vitányi
See also
Korteweg-de Vries Institute for Mathematics
Centrum Wiskunde & Informatica
External links
Research institutes in the Netherlands
Cognitive science research institutes
Computer science institutes in the Netherlands
Logic organizations
University of Amsterdam |
6123526 | https://en.wikipedia.org/wiki/SIP%20URI%20scheme | SIP URI scheme | The SIP URI scheme is a Uniform Resource Identifier (URI) scheme for the Session Initiation Protocol (SIP) multimedia communications protocol. A SIP address is a URI that addresses a specific telephone extension on a voice over IP system. Such an extension could be a number on a private branch exchange, or an E.164 telephone number dialled through a specific gateway. The scheme was defined in RFC 3261.
Operation
A SIP address is written in [email protected] format in a similar fashion to an email address. An address like:
sip:[email protected]
instructs a SIP client to use DNS NAPTR and SRV records to look up the SIP server associated with the DNS name voip-provider.example.net and connect to that server. If those records are not found, but the name is associated with an IP address, the client will directly contact the SIP server at that IP address on port 5060, by default using the UDP transport protocol. It will ask the server (which may be a gateway) to be connected to the destination user at 1-999-123-4567. The gateway may require the user to REGISTER using SIP before placing this call. If a destination port is provided as part of the SIP URI, the NAPTR/SRV lookups are not used; rather, the client directly connects to the specified host and port.
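A minimal sketch of this client-side lookup, assuming the third-party dnspython package (a full SIP stack follows the complete RFC 3263 procedure, and the SRV weight handling is simplified here):

import dns.resolver

def locate_sip_server(host, port=None, transport="udp"):
    # An explicit port in the URI bypasses the NAPTR/SRV lookups entirely.
    if port is not None:
        return [(host, port)]
    try:
        # SRV lookup, e.g. _sip._udp.voip-provider.example.net
        answers = dns.resolver.resolve(f"_sip._{transport}.{host}", "SRV")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        # No SRV record: contact the host directly on the default port 5060.
        return [(host, 5060)]
    # Lower priority is preferred; higher weight is preferred within a priority.
    ordered = sorted(answers, key=lambda r: (r.priority, -r.weight))
    return [(str(r.target).rstrip("."), r.port) for r in ordered]

print(locate_sip_server("voip-provider.example.net"))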
As a SIP address is text, much like an e-mail address, it may contain non-numeric characters. As the client may be a SIP phone or other device with just a numeric, telephone-like keypad, various schemes exist to associate an entirely numeric identifier with a publicly reachable SIP address. These include the iNum Initiative (which issues E.164-formatted numbers, with the corresponding SIP address formed by appending '@sip.inum.net' to the number), SIP Broker-style services (which associate a numeric *prefix with the SIP domain name) and the e164.org and e164.arpa domain name servers (which map numbers to addresses through DNS lookups on the reversed digits).
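The number-to-name mapping used by e164.arpa-style directories follows a fixed transformation: the digits are reversed, dot-separated and given a suffix, and DNS records at the resulting name then point to the SIP address. A short illustration (the number is an example only):

def enum_query_name(e164_number, suffix="e164.arpa"):
    # +1-999-123-4567 -> 7.6.5.4.3.2.1.9.9.9.1.e164.arpa
    digits = [c for c in e164_number if c.isdigit()]
    return ".".join(reversed(digits)) + "." + suffix

print(enum_query_name("+1-999-123-4567"))
# A NAPTR lookup at the printed name returns the SIP address, if one is registered.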
SIP addresses may be used directly in configuration files (for instance, in Asterisk (PBX) installations) or specified through the web interface of a voice-over-IP gateway provider (usually as a call forwarding destination or an address book entry). Systems which allow speed dial from a user's address book using a vertical service code may allow a short numeric code (like *75xx) to be translated to a pre-stored alphanumeric SIP address.
Spam and security issues
In theory, the owner of a SIP-capable telephone handset could publish a SIP address from which they could be freely and directly reached worldwide, in much the same way that SMTP e-mail recipients may be contacted from anywhere at almost no cost to the message sender. Anyone with a broadband connection could install a softphone (such as Ekiga) and call any of these SIP addresses for free.
In practice, various forms of network abuse are discouraging creation and publication of openly reachable SIP addresses:
The electronic spam which has rendered SMTP the "spam mail transport protocol" could potentially make published sip: addresses unusable as they become flooded with VoIP spam, usually automatic announcement devices delivering pre-recorded advertisements. Unlike mailto:, sip: establishes a voice call which interrupts the human recipient in real time with a ringing telephone.
SIP is vulnerable to Caller ID spoofing as the displayed name and number, much like the return address on e-mail, is supplied by the sender and not authenticated.
Servers supporting inbound sip: connections are routinely targeted with unauthorised REGISTER attempts with random numeric usernames and passwords, a brute-force attack intended to impersonate individual off-premises extensions on the local PBX.
Servers supporting inbound sip: connections are also targeted with unsolicited attempts to reach outside numbers, usually premium-rate destinations such as caller-pays-airtime mobile exchanges in foreign countries.
In the server logs, this looks like:
[Oct 23 15:04:02] NOTICE[4539]: chan_sip.c:21614 handle_request_invite: Call from '' to extension '011972599950423' rejected because extension not found in context 'default'.
[Oct 23 15:04:04] NOTICE[4539]: chan_sip.c:21614 handle_request_invite: Call from '' to extension '9011972599950423' rejected because extension not found in context 'default'.
[Oct 23 15:04:07] NOTICE[4539]: chan_sip.c:21614 handle_request_invite: Call from '' to extension '7011972599950423' rejected because extension not found in context 'default'.
[Oct 23 15:04:08] NOTICE[4539]: chan_sip.c:21614 handle_request_invite: Call from '' to extension '972599950423' rejected because extension not found in context 'default'.
These lines show an attempt to call a Palestinian mobile telephone (Israel country code +972) by randomly trying 9- (a common code for an outside line from an office PBX), 011- (the overseas call prefix in the North American Numbering Plan) and 7- (on the off-chance a PBX uses it instead of 9- for an outside line). Security tools such as firewalls or fail2ban must therefore be deployed to prevent unauthorised outside call attempts; many VoIP providers also disable overseas calls to all but countries specifically requested as enabled by the subscriber.
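A minimal sketch of the kind of log screening such tools automate, matching NOTICE lines like those quoted above and counting rejected extensions so that repeated probing can be blocked (the log path and report size are illustrative assumptions):

import re
from collections import Counter

PATTERN = re.compile(r"handle_request_invite: Call from '.*' to extension "
                     r"'(\d+)' rejected because extension not found")

def count_rejected(lines):
    # Tally how often each unknown extension has been probed.
    hits = Counter()
    for line in lines:
        match = PATTERN.search(line)
        if match:
            hits[match.group(1)] += 1
    return hits

with open("/var/log/asterisk/messages") as log:   # illustrative path
    for extension, count in count_rejected(log).most_common(10):
        print(count, "rejected attempts to extension", extension)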
SIPS URI scheme
The SIPS URI scheme adheres to the syntax of the SIP URI, differing only in that the scheme is sips rather than sip. The default port for SIPS is 5061 unless a port is explicitly specified in the URI.
SIPS allows resources to specify that they should be reached securely. It mandates that each hop over which the request is forwarded up to the target domain must be secured with TLS. The last hop from the proxy of the target domain to the user agent has to be secured according to local policies.
SIPS protects against attackers who try to eavesdrop on the signaling link. It does not provide real end-to-end security, since encryption is only hop-by-hop and every intermediate proxy has to be trusted.
See also
Federated VoIP and telephone number mapping
e164.arpa
Security Descriptions for SDP
Mikey key exchange method
ZRTP end-to-end key exchange proposal
References
URI schemes
Internet protocols |
49068293 | https://en.wikipedia.org/wiki/Symphonic%20Choirs | Symphonic Choirs | Symphonic Choirs is a vocal synthesizer and vocal library software created by EastWest, designed to imitate an entire vocal choir. The content was created by producers Doug Rogers and Nick Phoenix with recording engineer Keith O. Johnson for EastWest. Recorded in a real concert hall, the software initially had two styles of producing a result, the first being the "PLAY" engine version and the second being the "WordBuilder". The WordBuilder works by the user typing in what they want the software to recreate and it playing back the words.
It has vocal samples for male, female and young boys' choirs. It covers the ranges Soprano, Alto, Tenor and Bass and has audio engine effects and outputs. The SATB sections offer Normal, Legato, Staccato, and Slurred articulations, while the young boys choir offers Normal and Legato articulations. It was recorded with 3 microphones, allowing for results that give the impression of the choir being in different positions. The software only contains choir-themed samples, so it is not designed to sing any other genre of music. Other samples include solo sounds in tones such as whispers covering Soprano, Alto and choir boy, though these samples do not use WordBuilder. The software also works with Kompakt, allowing the user to layer sounds and build choirs to suit their needs; although the software can be used without Kompakt, the user will not be able to access the full capabilities of the sample library.
In the PLAY version 2.1.1 update released in 2010, WordBuilder and PLAY were integrated into a single interface. The "Choirs (VOTA) Expansion" pack was also later created using samples from Quantum Leap sound libraries. The samples contain heavy vibrato vowels that allow the loudest Symphonic Choirs samples to crossfade into the FFF samples of Mark Wherry's "Voices of the Apocalypse". They were also built with the WordBuilder in mind. The expansion also includes the sample patches "Angels" and "Demons" from "Voices of the Apocalypse". Unlike the original software, the expansion is recorded with a single microphone.
Reception
When reviewing the original version of the software, Sound on Sound reviewer John Walden called the quality "magnificent" and believed it was suitable for hobbyist, educational and professional markets.
Audio Pro Central felt that the software was expensive, but still cheaper than hiring an entire choir for the same session, and stated that its results were "breathtaking". However, it noted the software's limitations beyond choir synthesis.
TechRadar noted the quantity of samples crammed into the software (38 GB spread across 9 DVDs) and that it was part of a growing trend at the time of release of amassing large amounts of sampled sound into a product. As a result, a computer running a 64-bit OS was needed, and a simple Windows XP system with 2 GB may not be sufficient. Other criticisms included the wobbliness of some of the solo samples, and that while the basses were "gutsy" and the sopranos "ethereal", it was difficult to make the samples switch roles. Focusing on the high price, the reviewer gave the product an overall mixed review, noting that some seeking such a product might be better served if the software offered a quarter of the content at a lower price.
Electronic Musician reviewer David M. Rubin felt the software was "high-end", noting its flexibility. He admitted that the phonetic system takes considerable learning and that the results may not be as good as an experienced choir director can deliver, but noted how well it worked as a virtual choir. He noted that at the time of his review there were no solo tenor or bass samples and that the WordBuilder did not work with the solo samples, but gave an overall good review. He also cited the software's heavy processor demands as a drawback.
References
Speech synthesis software
Singing software synthesizers |
34504206 | https://en.wikipedia.org/wiki/FireMonkey | FireMonkey | FireMonkey (abbreviated FMX) is a cross-platform GUI framework developed by Embarcadero Technologies for use in Delphi or C++Builder, using C++ or Object Pascal to build cross platform applications for Windows, macOS, iOS, and Android. A 3rd party library, FMX Linux, enables the building of FireMonkey applications on Linux.
History
FireMonkey is based on VGScene, which was designed by Eugene Kryukov of KSDev from Ulan-Ude, Russia as a next generation vector-based GUI. In 2011, VGScene was sold to the American company Embarcadero Technologies. Kryukov continued to be involved in the development of FireMonkey. Along with the traditional Windows only Visual Component Library (VCL), the cross-platform FireMonkey framework is included as part of Delphi, C++Builder and RAD Studio since version XE2. FireMonkey started out as a vector based UI framework, but evolved to be a bitmap or raster based UI framework to give greater control of the look to match target platform appearances.
Overview
FireMonkey is a cross-platform UI framework, and allows developers to create user interfaces that run on Windows, macOS, iOS and Android. It is written to use the GPU where possible, and applications take advantage of the hardware acceleration features available in Direct2D on Windows Vista, Windows 7, Windows 8 and Windows 10, OpenGL on macOS, OpenGL ES on iOS and Android, and on Windows platforms where Direct2D is not available (Windows XP for example) it falls back to GDI+.
Applications and interfaces developed with FireMonkey are separated into two categories, HD and 3D. An HD application is a traditional two-dimensional interface; that is, UI elements on the screen. It is referred to as HD because FireMonkey utilizes multi-resolution bitmaps in its dynamic style system to take advantage of high-DPI displays. The second type, a 3D interface, provides a 3D scene environment useful for developing visualisations. The two can be freely mixed, with 2D elements (normal UI controls such as buttons) in a 3D scene, either as an overlay or in the 3D space, and 3D scenes integrated into the normal 2D "HD" interface. The framework has inbuilt support for effects (such as blurs and glows, as well as others) and animation, allowing modern WPF-style fluid interfaces to be easily built. It also supports native themes, so that a FireMonkey application can look very close to native on each platform. Native controls can be used on Windows, macOS, iOS and Android through both third-party libraries and the ControlType property.
FireMonkey is not only a visual framework but a full software development framework, and retains many features available with VCL. The major differences are:
Cross-platform compatibility
Any visual component can be a child of any other visual component, allowing for creation of hybrid components
Built-in styling support (now also available in VCL)
Use of Single precision floating point numbers for position, etc. instead of integers.
Support for GPU shader based visual effects (such as Glow, Inner Glow, Blur for example) and animation of visual components
Due to the framework being cross-platform compatible, the same source code and form design can be used to deploy to the various platforms it supports. It natively supports 32-bit and 64-bit executables on Windows, 32-bit executables on macOS, 32-bit and 64-bit executables on iOS, and 32-bit and 64-bit executables on Android. FireMonkey includes platform services that adapt the user interface to the correct behavior and appearance on each target platform.
Since its introduction in XE2, there have been numerous improvements in many areas of the framework and it is being actively developed and improved. For example, macOS development is integrated tightly into the IDE, requiring a Mac only for deployment. Numerous components such as sensors, touch and GPS have been added, especially useful for those developing mobile apps. There have been significant performance and underlying tech improvements, too.
Features
Graphics
FireMonkey uses hardware acceleration when available on Windows, macOS, iOS, and Android. Direct2D or OpenGL can be used on Windows Vista, Windows 7, Windows 8 and Windows 10. On Windows platforms where Direct2D is not available (Windows XP for example) it falls back to GDI+. OpenGL is used on macOS. OpenGL ES is used on iOS and Android.
Styles
All controls in FireMonkey can be styled via the styling system. This is accomplished by attaching a TStyleBook component to the form and loading a style, which is then applied to the form. On some platforms certain controls can also be set to use an OS-provided control implementation in contrast to the self-drawn FireMonkey version. This sometimes adds further features while removing some features provided by FireMonkey's own implementation.
Platform Services
In addition to visual components, FireMonkey provides a loosely coupled way of accessing platform-specific features independent of the platform. This also shows up as platform default behaviors. For example, the TabPosition property of the TTabControl has a value of PlatformDefault that automatically moves the tabs to the top on Android and to the bottom on iOS, in line with the design guidelines for those platforms.
References
Computer libraries
Graphical user interfaces
Pascal (programming language) libraries
Pascal (programming language) software |
12797117 | https://en.wikipedia.org/wiki/PowerVM | PowerVM | PowerVM, formerly known as Advanced Power Virtualization (APV), is a chargeable feature of IBM POWER5, POWER6, POWER7, POWER8, POWER9 and Power10 servers and is required for support of micro-partitions and other advanced features. Support is provided for IBM i, AIX and Linux.
Description
IBM PowerVM has the following components:
A "VET" code, which activates firmware required to support resource sharing and other features.
Installation media for the Virtual I/O Server (VIOS), which is a service partition providing sharing services for disk and network adapters.
Installation media for Lx86, x86 binary translation software, which allows Linux applications compiled for the Intel x86 platform to run in POWER-emulation mode. A supported Linux distribution is a co-requisite for use of this feature.
IBM PowerVM comes in three editions:
IBM PowerVM Express
Only supported on "Express" servers (e.g. Power 710/730, 720/740, 750 and Power Blades).
Limited to three partitions, one of which must be a VIOS partition.
No support for Multiple Shared Processor Pools.
Withdrawn from marketing August 1, 2014.
This is primarily intended for "sandbox" environments
IBM PowerVM Standard
Supported on all POWER6, POWER7 and POWER8 systems.
Unrestricted use of partitioning – 10 LPARs per core (20 LPARs per core for POWER7+ and POWER8 servers), up to a maximum of 1,000 per system (a worked example follows this list).
Multiple Shared Processor Pools (on POWER7 and POWER8 systems only).
This is the most common edition in use on production systems.
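A worked example of how the per-core multiplier and the system-wide cap quoted above interact (the core counts below are illustrative assumptions):

def max_lpars(cores, lpars_per_core, system_cap=1000):
    # The per-core multiplier applies until the system-wide cap is reached.
    return min(cores * lpars_per_core, system_cap)

print(max_lpars(16, 10))   # 16-core system at 10 LPARs per core -> 160
print(max_lpars(64, 20))   # 64-core system at 20 LPARs per core -> capped at 1,000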
IBM PowerVM Enterprise
Supported on POWER7 and POWER8 systems only.
As PowerVM Standard with the addition of Live Partition Mobility (which allows running virtual machines to migrate to another system) and Active Memory Sharing (which intelligently reallocates physical memory between multiple running virtual machines).
See also
Comparison of platform virtualization software
IBM High Availability Cluster Multiprocessing
Linux on Power
Kernel-based Virtual Machine - a Linux-based hypervisor for which PowerPC support is being developed
References
External links
IBM PowerVM Wiki
IBM PowerVM Editions Formerly Advanced POWER Virtualization (APV)
IBM PowerVM Editions Support
Overview of PowerVM
IBM Redbooks | PowerVM Virtualization on IBM System p: Introduction and Configuration
IBM Redbooks | PowerVM Virtualization on IBM System p: Managing and Monitoring
Virtual I/O Server Commands Reference
Virtualization software
IBM software |
40819493 | https://en.wikipedia.org/wiki/Mark%20Weatherford | Mark Weatherford | Mark Weatherford is an American cybersecurity professional who has held a variety of executive level positions in both the public and private sectors. He was appointed as the first deputy under secretary for cybersecurity at the US Department of Homeland Security from 2011 to 2013. He is currently the Global Information Security Strategist for Booking Holdings.
Weatherford is a graduate of the University of Arizona in Tucson, Arizona, and received his master's degree from the Naval Postgraduate School in Monterey, California. He holds the Certified Information Systems Security Professional (CISSP) certification.
He is a former US Navy cryptologic officer and led the Navy’s Computer Network Defense operations and the Naval Computer Incident Response Team (NAVCIRT).
Before joining the DHS, he served (2010–11) as the vice president and chief security officer of the North American Electric Reliability Corporation (NERC), where he directed the organization’s critical infrastructure and cybersecurity program for electric utilities across North America. He was also appointed by Governor Arnold Schwarzenegger as the state of California's first Chief Information Security Officer in the Office of Information Security (2008–09), and was also the first Chief Information Security Officer (CISO) for the State of Colorado (2004–07), where he was appointed by both Governor Bill Owens and Governor Bill Ritter. Most notably, he helped establish the state’s first cybersecurity program and spearheaded some of the nation's first cybersecurity legislation aimed to protect citizens.
After leaving the DHS, he was a principal with the Chertoff Group in Washington DC, and senior vice president and chief cybersecurity strategist of vArmour.
Weatherford was one of Information Security magazine's "Security 7 Award" winners in 2008 and was awarded SC Magazine's "CSO of the Year" award in 2010. In 2012 and 2013 he was named one of the "10 Most Influential People in Government Information Security" by GovInfoSecurity. He is a member of the Marysville High School, Marysville, California, Hall of Fame and was inducted into the Information Systems Security Association (ISSA) International Hall of Fame in October 2018.
References
External links
Chertoff Group
Obama administration personnel
United States Department of Homeland Security officials
University of Arizona alumni
Naval Postgraduate School alumni
People associated with computer security
Living people
1956 births
People from Marysville, California |
5155293 | https://en.wikipedia.org/wiki/Construction%20field%20computing | Construction field computing | Construction field computing is the use of handheld devices that augment the construction superintendent's ability to manage the operations on a construction site. These information appliances (IA) must be portable devices which can be carried or worn by the user, and have the computational and connectivity capacity to perform the tasks of communication management. Data entry and retrieval must be simple so that the user can manipulate the device while simultaneously moving, observing events, studying materials, checking quality, or performing other tasks required. Examples of these devices are the PDA, the tablet PC, modern tablet devices (including the iPad and Android tablets) and the smartphone.
Usage of information appliances in construction
Superintendents are often moving about the construction site or between various sites. Their responsibilities cover a wide variety of tasks such as:
Comparing planned to constructed conditions.
Carrying out in-field quality inspections (punch lists, or snagging as it is called in the UK)
Capturing data about such defects and communicating it to the relevant sub-contractors.
Coordinating and scheduling events and material delivery.
Monitoring jobsite conditions and correcting safety deficiencies, improving efficiency, and ensuring quality.
Recording and documenting work progress, labor, inspections, compliance to specifications, etc.
Communicating direction to specialty contractors, laborers, suppliers, etc.
Clarifying plans and specifications, resolving differing conditions, adapting methods and materials to site-specific requirements.
These tasks require that information is readily accessible and easily communicated to others and the company database. Since construction sites are unique, the device and system must be adaptable and flexible. Durability, predictability, and perceived value by the field management will determine the system's acceptance and thus proper use. Construction personnel are not well known for adapting to new technologies, but they do embrace methods that are proven to lighten work load and increase income.
While construction industry field personnel were quick to adopt new technologies such as the fax machine and mobile phone, they have been slower to embrace the PC, tablet PC, PDA, and other devices. Disruptive technology is usually difficult in construction field use for several reasons:
Field managers have often risen 'through the ranks' and learned from their predecessors how to run a construction site.
They commonly do not have exposure to higher education and the new methods and technologies being developed.
Field work is demanding and varied, so suitable and durable technologies are difficult to build. Failure of the device or system means schedule slip and increased costs, so that relying on the new technology can be untenable.
Laborers generally are not highly educated and willing to invest time in change.
Construction is very schedule driven and time for training or learning new methods is rare. The learning curve is perceived to be too steep to justify implementing new technologies or methods not mandated by governing authorities or required by the contract documents.
Overcoming these issues is imperative for most construction firms. The augmentation and automation of managerial practices is required to make the construction process more efficient. The information appliance makes it possible for field supervisors to access needed information, communicate requirements to others, and document the process effectively.
Connectivity and computing power
An effective device will be able to process data into actionable information and transmit data and information to other devices or persons. The device may not actually perform the final computation or communication itself, but it must appear as though it does. Extra steps to upload and download information will be perceived as a nuisance or waste of time by the user and cause the device to be underutilized. Some IAs are self-contained in that they have the computing capacity and software to perform required tasks independent of a server or other devices. Others rely on connectivity with other devices and/or a server to perform required functions.
Communication with home office
Fat client
A fat client refers to a device that has sufficient speed and size to run programs and is loaded locally with software needed for operation. It can stand alone. Some advantages of this type of system are:
No need for continuous Wi-Fi, Bluetooth, cell connection or other connectivity instrument for continuous operation. Data and operating functions are fully self-contained.
User generally has more control over the interface so is able to adapt it to his/her own preferences and operational needs.
User has perception of governance. The idea that another device or a central server controls the process is disturbing to many free-spirited construction personnel.
Can be faster operationally than thin client since the device does not need to wait for transmitting data and server access time. With a good connection signal, however, this difference will most likely not be noticeable.
Thin client
A thin client refers to a device that acts as a terminal or interface with a server or other device. Sometimes called dumb terminals, these devices do not have sufficient computing or data storage capacity to process information themselves, but only allow the user to access the software and data needed by them. Some advantages of this type of system are:
Real-time data. User both provides and has access to most current information. Information entered is immediately available to other users of the information through the central database. Updates from all contributors to the database are available immediately to the user.
Automated and instantaneous action. Communication or directives are performed and recorded instantly rather than occurring when the information is uploaded by the user.
More central control over information, operating systems, and the manner in which data is collected.
Lower software and hardware costs since only minimal computing capacity is needed on the thin client device and separate software for each unit is not required.
Thin client device is not useful as a stand-alone so that it is not attractive to fencers and thus a less likely target of theft. (Thieves generally know which items are worth stealing.) Stolen devices are unlikely to be usable to non-authorized personnel. Therefore, proprietary information is protected.
Poka-yoke or error reduction strategies for data entry are more easily accomplished with a standardized system that cannot be easily altered by the user. A well-designed system will identify likely errors such as letters entered where numbers are required, or answers inconsistent with a field's expected values (a small sketch follows this list).
More fully automates data transmission. Synchronization is not required as a separate activity which can be forgotten or take longer than the user wants to wait for.
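A small sketch of the poka-yoke style checks referred to in the list above, rejecting obviously bad field entries before they reach the central database; the field names and rules are illustrative assumptions:

def validate(field, value):
    rules = {
        "crew_size": lambda v: v.isdigit() and 1 <= int(v) <= 500,   # numbers only, plausible range
        "pass_fail": lambda v: v.lower() in ("pass", "fail"),        # restricted answer set
        "inspection_date": lambda v: len(v.split("-")) == 3,         # coarse YYYY-MM-DD shape check
    }
    check = rules.get(field)
    return True if check is None else check(value)

print(validate("crew_size", "12"))       # True
print(validate("crew_size", "twelve"))   # False -> prompt the user to re-enter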
Transparency and user friendliness
Computer transparency is important in construction because of the requirements listed in the 'usages' section in this article. Especially important here are the lack of training in computer science and the need to remain focused on the job-site activities. Any suitable device and system should support the user without their understanding of the technical aspects of the computer or system. Data operations must require no knowledge of the database schema. Response to commands or input should be immediate and reversible so that the user can quickly experiment and learn by doing without causing damage to the system or data. The user should feel that access to information and control over the system are not unduly limited. These traits will reduce user anxiety and encourage usage and acceptance of new technologies and systems.
User friendliness is needed due to the varied level of knowledge of the user. The functional options should be clearly labeled and structured such that the user can understand them by viewing the screen and by intuition. Usability will in large part determine whether or not the device and/or system is utilized. An ineffective tool is not only useless, but can be deleterious in that it takes time from the user without returning value. Even if the user only perceives the operation of the device to be worthless, he/she will be 'demotivated' to use the device and do so improperly or insufficiently. This will render the activity useless, fulfilling the expectation of failure.
Portable devices available
Laptop and tablet PC
Laptop or notebook computers are small enough to be carried around the construction site, but too big to do so comfortably. Other disadvantages include:
They must be set down on a suitable surface to be used.
Require the use of both hands for proper operation.
Most laptops are not durable enough to stand up to the dirt, moisture, and rough handling prevalent on most job sites. Those that are 'construction resistant' are too expensive and a target of theft.
The tablet PC is basically a scaled-down laptop with a touch screen that accepts handwriting rather than keyboard entry. Some do have keyboards, but they must be set down to operate and thus suffer the same problem of not being usable and portable at the same time. They can be carried with one hand and used with the other, thus allowing for ambulatory use. They are generally the size of a clipboard or notepad carried by many superintendents and are a good replacement for those devices due to the automation advantages of the computer, but they are still too big to be worn so that the user can move throughout the jobsite easily (up and down ladders) and have hands available for other uses (measuring, or manipulating objects to demonstrate technique or effect).
PDAs
The PDA has been accepted by many construction firms to aid the management of punch lists, safety inspections, and maintenance work. They can be thin or thick devices, but are often a combination of the two, having connectivity, but containing programs to operate even when out of range of WiFi or coverage. PDAs are durable, inexpensive, and very portable (being worn on a clip or carried in a pocket). The small screen size and limited ability to quickly enter data are the drawbacks of this device.
Smart phones
A smart phone is basically a mobile phone and PDA combined into one IA. Other functionalities such as a digital camera and voice recorder are common. Data entry, as with the PDA, is by stylus or keypad and cumbersome. Many have web browsing capabilities, but the small screen size limits the utility of this function to viewing email, weather reports, or some web content. Both PDAs and smart phones have calendars, task lists, and phone lists, but these are most useful to superintendents when coupled with the phone functions, as is the case with the latter. Popular devices include the BlackBerry, Treo, Pocket PC, iPhone and Droid, all popular because of their web/email abilities and ease of use.
Modern Tablets
With the advent of modern tablets including the iPad and Android tablets the area of mobile IT in construction is moving into a new era. These devices overcome many of the limitations of the rugged PDAs in terms of data entry, data access in field and its timely communication to others who need to act on it. In addition they cost a fraction of the cost of the rugged devices or rugged slate PCs.
Cameras and peripherals
Digital cameras are often included in smart phones and PDAs, but their resolution is lower than the dedicated camera device. Most are not truly IA because they do not readily communicate with other devices or process information. Most brands share information by USB or flash memory card which is removed from the camera and inserted in complementary devices (most PDAs accept these cards). Other methods, such as Infrared Communications or Bluetooth are also available.
Software availability
A wide variety of software applications for each of the devices listed above are available. The user must determine system requirements and then ensure that software is available to perform the needed functions on a specific device.
Input/output features of IAs
Some IAs (such as the total station) are made specifically for construction use, but they are for very specific applications and will not be considered here as the purpose of this article concerns general construction site management. Traditional input and output methods of keyboard, mouse, and screen are not suitable for the portable IA due to size constraints. These features are very important and must be considered.
Input methods
Data, queries, commands, or responses must be entered into the computer through some sort of interface. Following are some of the methods useful in portable IAs.
Touch screen
The touch screen and stylus is effective for construction applications as it allows handwriting recognition for those who do not feel comfortable with keyboards or the small size of keyboards on portable IAs. They are also free hand entry so that the user can sketch and draw notes and measurements directly onto the screen. The digital image of the sketch can be transmitted to others or converted to another format after uploading to a printer or other computer.
Key pad
These devices are the standard entry method for phones and easy to understand but are a slow means for alphanumeric data entry. They may be suitable for numeric entry into data fields. The user enters numbers on the keypad in response to prompts on the IA screen so this method is only suitable for entry of quantifiable standard information.
Voice/speech recognition
Voice Recognition is the ability to respond to verbal commands. Speech recognition refers to the capacity of the IA to convert voice entry into data. Both have been difficult to use in many construction applications due to ambient noise, construction jargon which varies by region, trade, and company, and because of speech patterns of the individual user. This method is slower than keyboard entry for the experienced user.
Output methods
Screen
The touch screen is the standard display method for tablet PCs and smaller IAs. Color may or may not be important to the user but can be an aid in directing the user. The screen size is perhaps the most important consideration. Organization of the screen is challenging on the small screen and the user should be considered when designing the interface. See sections 2.2 and 2.3 of Ben Shneiderman's "Designing the User Interface" concerning design of the interface.
Voice
Voice or tonal output from the IA can be effective as a reminder, warning, or indication of an action performed, but tends to be irritating to the user if it is the principal method of interaction. Verbal output is slower than viewing the information, and it is difficult for a person to pay attention to and understand spoken information. Systems such as the JAWS screen reader for the visually impaired do exist, but are not practical for construction site users and applications. See Shneiderman, section 9.4.
Future possibilities
Eyetap is a technology being developed that may have construction applications in the future. It allows the user to receive input from the computer superimposed over the scene in view. A diptych screen may be utilized to increase overall screen size and input area. Other 'Star Trek' devices and methods are being developed but the usable products have yet to be 'beamed down' to planet Earth.
Other considerations
Usability
The section on transparency in this article discusses some requirements for usability. Before implementing a system, a study of how it will add value to the user must be done. Having more data from the field is not directly beneficial to the site superintendent. Reducing the time it takes to do 'paperwork' is valued. Sending an image taken with the IA camera directly to an architect for clarification does save time and effort and will be valued by the user.
Training
The system and device should be designed such that they encourage experimentation and usage. Immediate response to input and adaptability based on level of experience are better than sitting through a training seminar to get a few dry donuts. Training is best accomplished by showing and encouraging use. A 'guru' or user expert within the company may be the best way to resolve questions and concerns. See "The Social Life of Information".
Scalability and continual change
Technologies will evolve and the user will face new challenges with each change. It is usually wiser to adopt a small system that works and then later add features. This gradual adoption reduces anxiety and increases acceptance and use. Work by Linda V. Orr discusses methods to reduce anxiety for new computer users.
Integration of systems
Consideration should be given to ensure that IA devices and different software packages communicate with each other so that information is not lost or re-entry is not required by the user. Users do not always know or care what software is being used or which database is being accessed and do not understand why they must enter the same information again. For example, once the date has been entered, the user will be frustrated to be prompted to enter it again during the same session. Taking this further, the user may be frustrated to have to enter a date since there is a calendar function on the IA being used.
Web Based Mobile IT
These systems from leading vendors are not simply about mobility in the field and cutting the report-writing workload of field managers. Cloud-based tablet and PC systems provide not just mobile capture of, and access to, data in the field; for the first time they also computerise the quality function and move it to the cloud. The use of powerful relational databases at the back end permits the data captured in the field to be analysed. This can be used, for instance, to rate sub-contractors and to feed into continuous quality improvement.
Benefits of Mobile IT in Construction
Studies from the UK organisation COMIT and others have shown there is a valuable return on investment from using this technology. Firms have reported faster delivery of projects, project handover with zero defects, reduced costs in managing sub-contractors and report writing, and the value of having a secure audit trail.
See also
Human-Computer Interaction
Lean manufacturing - especially waste reduction in management practices.
Mobile computing
Object-oriented analysis and design
Palmtop
Portable (disambiguation)
Portable computer
Wearable computer
External links
Discussion concerning ubiquitous networks
Information appliances
Construction |
17730804 | https://en.wikipedia.org/wiki/Seioglobal | Seioglobal | Seioglobal, formerly "SSG" / Safesoft Global (Chinese: 晟峰成略, Hanyu Pinyin: Sheng Feng Cheng Lve), registered as Shanghai Seio Software Technology Co., Ltd and also commonly known locally as "Seio", is a multinational IT services corporation based in Shanghai with a focus on providing information technology consulting and software development services to Forbes Global 2000 companies and multinational corporations. It also develops Offshore Development Centers (ODC) for multinational corporations via its "almost no language barrier" talent model, and is one of the few companies in its industry in China that owns a 30,000 sqm technology software park (the Jiaxing Software Park; refer to List of technology centers), in addition to its own training and development center.
History
Established on January 27, 2003, at its headquarters, Shanghai, the company developed Information Technology Outsourcing (ITO) and Business Process Outsourcing (BPO) solutions via applying security practices, structured global sourcing methodologies based on the Capability Maturity Model Integration (CMMI).
The company initially traded as "Sheng-feng Co., Ltd." with a registered capital of 4.5 million US dollars and rapidly grew to become one of China's leading IT service companies. As of December 2007, the company had a US$17.92 million revenue base (up from US$1.75 million in 2003), had won several exceptional quality awards within China for ITO leadership, had 8 offices globally (6 in strategic locations within China), and planned to open a U.S. office in 2009.
The company also reported an 89% cumulative CAGR since its founding in 2003, had matured to Capability Maturity Model Integration (CMMI) Level 3, and had been taking steps to attain CMMI Level 5 certification. It is best known in China for its unique organization logo story and its rapid organizational growth (from just 15 employees), and is known locally as 'China's youth IT dreams company' because it was built upon the entrepreneurship spirit.
On 28 July 2008, due to business development growth and to better serve international markets, once known as SSG (Safesoft Global), the company was re-branded to Seioglobal.
The new brand, Seioglobal, was reported to represent developments in its decentralized culture, a commitment to quicker delivery of the organization's services, and streamlined operations. Its headquarters and operations were also relocated to a new high-rise building offering newer amenities in downtown Shanghai.
Critical Times
In its early stages, the company had to make complex strategic decisions that would lay down the foundation for its development. In doing so, the organization foresaw the need for rapid organizational growth in order to survive the oncoming competitive times. During these times, the company was heavily driven by the entrepreneurship spirit.
However, it was not satisfied with this, and reorganized the company via its Japanese roots to diversify, focusing on intensifying independent innovation in Shanghai's surrounding second-tier cities such as Wuxi and Jiaxing.
Upgrading for competition
To develop and attract software development and information services to China's developing metropolitan cities, the company quickly adhered to the nation's "Eleventh Five-Year Plan". Such actions were to shape the company's strategy for the challenging years ahead, and in 2003 the company invested in several strategic second-tier metropolitan cities in China and opened offices in Tokyo and Osaka in Japan.
Philanthropy
The company strives to be socially responsible and, on May 21, 2008, initiated a donation campaign for the quake-hit areas of the Sichuan basin affected by the May 12 earthquake; Sichuan is one of the provinces in which it has branch offices.
See also
Software industry in China
China Software Industry Association
References
External links
Official Company Website
Official Parent / Group Company Website
Official Jiaxing Technology Software Park Website
Outsourcing companies
Multinational companies
International information technology consulting firms
Software companies of China
Chinese brands
Science and technology in the People's Republic of China
Companies based in Shanghai
Companies established in 2003
Engineering companies of China |
30034 | https://en.wikipedia.org/wiki/Tim%20Berners-Lee | Tim Berners-Lee | Sir Timothy John Berners-Lee (born 8 June 1955), also known as TimBL, is an English computer scientist best known as the inventor of the World Wide Web. He is a Professorial Fellow of Computer Science at the University of Oxford and a professor at the Massachusetts Institute of Technology (MIT). Berners-Lee proposed an information management system on 12 March 1989, then implemented the first successful communication between a Hypertext Transfer Protocol (HTTP) client and server via the Internet in mid-November.
Berners-Lee is the director of the World Wide Web Consortium (W3C), which oversees the continued development of the Web. He co-founded (with his then wife-to-be Rosemary Leith) the World Wide Web Foundation. He is a senior researcher and holder of the 3Com founder's chair at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). He is a director of the Web Science Research Initiative (WSRI) and a member of the advisory board of the MIT Center for Collective Intelligence. In 2011, he was named as a member of the board of trustees of the Ford Foundation. He is a founder and president of the Open Data Institute and is currently an advisor at social network MeWe.
In 2004, Berners-Lee was knighted by Queen Elizabeth II for his pioneering work.
He devised and implemented the first Web browser and Web server, and helped foster the Web's subsequent explosive development. He currently directs the W3 Consortium, developing tools and standards to further the Web's potential. In April 2009, he was elected as Foreign Associate of the National Academy of Sciences.
He was named in Time magazine's list of the 100 Most Important People of the 20th century and has received a number of other accolades for his invention. He was honoured as the "Inventor of the World Wide Web" during the 2012 Summer Olympics opening ceremony in which he appeared working with a vintage NeXT Computer. He tweeted "This is for everyone" which appeared in LED lights attached to the chairs of the audience. He received the 2016 Turing Award "for inventing the World Wide Web, the first web browser, and the fundamental protocols and algorithms allowing the Web to scale".
Early life and education
Berners-Lee was born on 8 June 1955 in London, England, the eldest of the four children of Mary Lee Woods and Conway Berners-Lee; his brother Mike is a professor of ecology and climate change management. His parents were computer scientists who worked on the first commercially built computer, the Ferranti Mark 1. He attended Sheen Mount Primary School, and then went on to attend south-west London's Emanuel School from 1969 to 1973, at the time a direct grant grammar school, which became an independent school in 1975. A keen trainspotter as a child, he learnt about electronics from tinkering with a model railway. He studied at The Queen's College, Oxford, from 1973 to 1976, where he received a first-class Bachelor of Arts degree in physics. While at university, Berners-Lee made a computer out of an old television set, which he bought from a repair shop.
Career and research
After graduation, Berners-Lee worked as an engineer at the telecommunications company Plessey in Poole, Dorset. In 1978, he joined D. G. Nash in Ferndown, Dorset, where he helped create typesetting software for printers.
Berners-Lee worked as an independent contractor at CERN from June to December 1980. While in Geneva, he proposed a project based on the concept of hypertext, to facilitate sharing and updating information among researchers. To demonstrate it, he built a prototype system named ENQUIRE.
After leaving CERN in late 1980, he went to work at John Poole's Image Computer Systems, Ltd, in Bournemouth, Dorset. He ran the company's technical side for three years. The project he worked on was a "real-time remote procedure call" which gave him experience in computer networking. In 1984, he returned to CERN as a fellow.
In 1989, CERN was the largest Internet node in Europe and Berners-Lee saw an opportunity to join hypertext with the Internet.
Berners-Lee wrote his proposal in March 1989 and, in 1990, redistributed it. It then was accepted by his manager, Mike Sendall, who called his proposals "vague, but exciting". He used similar ideas to those underlying the ENQUIRE system to create the World Wide Web, for which he designed and built the first web browser. His software also functioned as an editor (called WorldWideWeb, running on the NeXTSTEP operating system), and the first Web server, CERN HTTPd (short for Hypertext Transfer Protocol daemon).
Berners-Lee published the first web site, which described the project itself, on 20 December 1990; it was available to the Internet from the CERN network. The site provided an explanation of what the World Wide Web was, how people could use a browser and set up a web server, and how to get started with a website of one's own. On 6 August 1991, Berners-Lee first posted, on Usenet, a public invitation for collaboration with the WorldWideWeb project.
In a list of 80 cultural moments that shaped the world, chosen by a panel of 25 eminent scientists, academics, writers and world leaders, the invention of the World Wide Web was ranked number one, with the entry stating, "The fastest growing communications medium of all time, the Internet has changed the shape of modern life forever. We can connect with each other instantly, all over the world."
In 1994, Berners-Lee founded the W3C at the Massachusetts Institute of Technology. It comprised various companies that were willing to create standards and recommendations to improve the quality of the Web. Berners-Lee made his idea available freely, with no patent and no royalties due. The World Wide Web Consortium decided that its standards should be based on royalty-free technology, so that they easily could be adopted by anyone.
Berners-Lee participated in Curl Corp's attempt to develop and promote the Curl programming language.
In 2001, Berners-Lee became a patron of the East Dorset Heritage Trust, having previously lived in Colehill in Wimborne, East Dorset. In December 2004, he accepted a chair in computer science at the School of Electronics and Computer Science, University of Southampton, Hampshire, to work on the Semantic Web.
In a Times article in October 2009, Berners-Lee admitted that the initial pair of slashes ("//") in a web address were "unnecessary". He told the newspaper that he easily could have designed web addresses without the slashes. "There you go, it seemed like a good idea at the time," he said in his lighthearted apology.
Policy work
In June 2009, then-British prime minister Gordon Brown announced that Berners-Lee would work with the UK government to help make data more open and accessible on the Web, building on the work of the Power of Information Task Force. Berners-Lee and Professor Nigel Shadbolt are the two key figures behind data.gov.uk, a UK government project to open up almost all data acquired for official purposes for free reuse. Commenting on the opening up of Ordnance Survey data in April 2010, Berners-Lee said: "The changes signal a wider cultural change in government based on an assumption that information should be in the public domain unless there is a good reason not to—not the other way around." He went on to say: "Greater openness, accountability and transparency in Government will give people greater choice and make it easier for individuals to get more directly involved in issues that matter to them."
In November 2009, Berners-Lee launched the World Wide Web Foundation (WWWF) in order to campaign to "advance the Web to empower humanity by launching transformative programs that build local capacity to leverage the Web as a medium for positive change".
Berners-Lee is one of the pioneer voices in favour of net neutrality, and has expressed the view that ISPs should supply "connectivity with no strings attached", and should neither control nor monitor the browsing activities of customers without their expressed consent. He advocates the idea that net neutrality is a kind of human network right: "Threats to the Internet, such as companies or governments that interfere with or snoop on Internet traffic, compromise basic human network rights." Berners-Lee participated in an open letter to the US Federal Communications Commission (FCC). He and 20 other Internet pioneers urged the FCC to cancel a vote on 14 December 2017 to uphold net neutrality. The letter was addressed to Senator Roger Wicker, Senator Brian Schatz, Representative Marsha Blackburn and Representative Michael F. Doyle.
Berners-Lee joined the board of advisors of start-up State.com, based in London. As of May 2012, he is president of the Open Data Institute, which he co-founded with Nigel Shadbolt in 2012.
The Alliance for Affordable Internet (A4AI) was launched in October 2013 and Berners-Lee is leading the coalition of public and private organisations that includes Google, Facebook, Intel and Microsoft. The A4AI seeks to make Internet access more affordable so that access is broadened in the developing world, where only 31% of people are online. Berners-Lee will work with those aiming to decrease Internet access prices so that they fall below the UN Broadband Commission's worldwide target of 5% of monthly income.
Berners-Lee holds the founders chair in Computer Science at the Massachusetts Institute of Technology, where he heads the Decentralized Information Group and is leading Solid, a joint project with the Qatar Computing Research Institute that aims to radically change the way Web applications work today, resulting in true data ownership as well as improved privacy. In October 2016, he joined the Department of Computer Science at Oxford University as a professorial research fellow and as a fellow of Christ Church, one of the Oxford colleges.
From the mid-2010s Berners-Lee initially remained neutral on the emerging Encrypted Media Extensions (EME) proposal, with its controversial Digital Rights Management (DRM) implications. In March 2017 he felt he had to take a position, which was to support the EME proposal. He argued for EME's virtues whilst noting that DRM was inevitable. As W3C director, he went on to approve the finalised specification in July 2017. His stance was opposed by some, including the Electronic Frontier Foundation (EFF), the anti-DRM campaign Defective by Design and the Free Software Foundation. Concerns raised included that the decision ran against the Internet's open philosophy in favour of commercial interests, and that users risked being forced to use a particular web browser to view specific DRM content. The EFF raised a formal appeal which did not succeed, and the EME specification became a formal W3C recommendation in September 2017.
On 30 September 2018, Berners-Lee announced his new open-source startup Inrupt to fuel a commercial ecosystem around the Solid project, which aims to give users more control over their personal data and lets them choose where the data goes, who's allowed to see certain elements and which apps are allowed to see that data.
In November 2019 at the Internet Governance Forum in Berlin Berners-Lee and the WWWF launched Contract for the Web, a campaign initiative to persuade governments, companies and citizens to commit to nine principles to stop "misuse", with the warning that "if we don't act now – and act together – to prevent the web being misused by those who want to exploit, divide and undermine, we are at risk of squandering [its potential for good]".
Awards and honours
Berners-Lee has received many awards and honours. He was knighted by Queen Elizabeth II in the 2004 New Year Honours "for services to the global development of the Internet", and was invested formally on 16 July 2004.
On 13 June 2007, he was appointed to the Order of Merit (OM), an order restricted to 24 (living) members. Bestowing membership of the Order of Merit is within the personal purview of the Queen and does not require recommendation by ministers or the Prime Minister.
He was elected a Fellow of the Royal Society (FRS) in 2001. He was also elected as a member into the American Philosophical Society in 2004 and the National Academy of Engineering in 2007.
He has been conferred honorary degrees from a number of universities around the world, including Manchester (his parents worked on the Manchester Mark 1 in the 1940s), Harvard and Yale.
In 2012, Berners-Lee was among the British cultural icons selected by artist Sir Peter Blake to appear in a new version of his most famous artwork, the Beatles' Sgt. Pepper's Lonely Hearts Club Band album cover, created to mark Blake's 80th birthday and to celebrate the British cultural figures he most admires.
In 2013, he was awarded the inaugural Queen Elizabeth Prize for Engineering. On 4 April 2017, he received the 2016 ACM Turing Award "for inventing the World Wide Web, the first web browser, and the fundamental protocols and algorithms allowing the Web to scale".
Personal life
Berners-Lee has said "I like to keep work and personal life separate."
Berners-Lee married Nancy Carlson, an American computer programmer, in 1990. She was also working in Switzerland at the World Health Organization. They had two children and divorced in 2011. In 2014, he married Rosemary Leith at the Chapel Royal, St. James's Palace in London. Leith is a Canadian Internet and banking entrepreneur and a founding director of Berners-Lee's World Wide Web Foundation. The couple also collaborate on venture capital to support artificial intelligence companies.
Berners-Lee was raised as an Anglican, but he turned away from religion in his youth. After he became a parent, he became a Unitarian Universalist (UU). When asked whether he believes in God, he stated: "Not in the sense of most people, I'm atheist and Unitarian Universalist."
The web's source code was auctioned by Sotheby's in London during 23–30 June 2021 as a non-fungible token (NFT) created by Berners-Lee. It sold for US$5,434,500, and the proceeds were reported to be destined for initiatives supported by Berners-Lee and his wife, Rosemary Leith.
References
Further reading
Tim Berners-Lee's publications
Tim Berners-Lee and the Development of the World Wide Web (Unlocking the Secrets of Science) (Mitchell Lane Publishers, 2001)
Tim Berners-Lee: Inventor of the World Wide Web (Ferguson's Career Biographies), Melissa Stewart (Ferguson Publishing Company, 2001), children's biography
How the Web was Born: The Story of the World Wide Web, James Gillies, Robert Cailliau (Oxford University Press, 2000)
Weaving the Web: The Original Design and Ultimate Destiny of the World Wide Web by Its Inventor, Tim Berners-Lee, Mark Fischetti (Paw Prints, 2008)
"Man Who Invented the World Wide Web Gives it New Definition", Compute Magazine, 11 February 2011
BBC2 Newsnight – Transcript of video interview of Berners-Lee on the read/write Web
Technology Review interview
External links
Tim Berners-Lee on the W3C site
List of Tim Berners-Lee publications on W3C site
First World Wide Web page
Interview with Tim Berners Lee
1955 births
Living people
People from Barnes, London
People educated at Emanuel School
Alumni of The Queen's College, Oxford
Academics of the University of Southampton
Fellows of Christ Church, Oxford
Members of the Department of Computer Science, University of Oxford
People associated with CERN
English computer scientists
English expatriates in the United States
English inventors
English Unitarians
Fellows of the American Academy of Arts and Sciences
Fellows of the British Computer Society
Fellows of The Queen's College, Oxford
Fellows of the Royal Academy of Engineering
Fellows of the Royal Society
Hypertext Transfer Protocol
Internet pioneers
Knights Commander of the Order of the British Empire
MacArthur Fellows
Massachusetts Institute of Technology faculty
Members of the Order of Merit
Foreign associates of the National Academy of Engineering
Turing Award laureates
Foreign associates of the National Academy of Sciences
Royal Medal winners
Semantic Web people
UNESCO Niels Bohr Medal recipients
Unitarian Universalists
Former Anglicans
Webby Award winners
World Wide Web Consortium
Recipients of the Order of the Cross of Terra Mariana, 1st Class
Fellows of the Royal Society of Arts
English atheists
MIT Computer Science and Artificial Intelligence Laboratory people
Members of the American Philosophical Society
Honorary Fellows of the British Academy |
32421587 | https://en.wikipedia.org/wiki/Collaborative%20decision-making%20software | Collaborative decision-making software | Collaborative decision-making (CDM) software is a software application or module that helps to coordinate and disseminate data and reach consensus among work groups.
CDM software coordinates the functions and features required to arrive at timely collective decisions, enabling all relevant stakeholders to participate in the process.
The selection of communication tools is important for high-end collaborative efforts, and online collaboration tools differ considerably from one another, with some still relying on older forms of Internet-based communication. Managing and working in virtual teams is not an easy task, but it has been done for decades. The most important activity for any virtual team is decision making: virtual teams have to discuss, analyse and find solutions to problems collectively through continuous brainstorming sessions. The emerging integration of social networking and business intelligence (BI) has markedly improved decision making by directly linking the information in BI systems with input gathered collectively through social software.
Organizations now depend on business intelligence (BI) tools so that their employees can make better decisions based on the information those tools process. Applying social software to the BI-supported decision-making process provides a significant opportunity to tie information directly to the decisions made throughout the company.
History
Technology researchers have worked on and explored automated decision support systems (DSS) for around 40 years. The research began with the building of model-driven DSS in the late 1960s and advanced through financial planning systems, spreadsheet-based DSS and group decision support systems (GDSS) in the early and mid-1980s. Data warehouses, management information systems, online analytical processing (OLAP) and business intelligence emerged in the late 1980s and mid-1990s, around the same time that knowledge-driven DSS and web-based DSS were evolving significantly. The field of automated decision support continues to take up new technologies and to create new applications.
In the 1960s, researchers deliberately began examining the use of automated quantitative models to assist with decision making and planning. Automated decision support systems became more practical with the advancement of minicomputers, timeshare operating systems and distributed computing; the history of implementing such systems begins in the mid-1960s. In a technology field as diverse as DSS, chronicling history is neither neat nor linear: different people view the field of decision support systems from different vantage points and report different accounts of what happened and what was important. As technology emerged, new automated decision support applications were created and studied, and researchers used multiple frameworks to build and understand them. Today the history of DSS can be organised into five broad categories: communications-driven, data-driven, document-driven, knowledge-driven and model-driven decision support systems. Model-driven spatial decision support systems (SDSS) were developed in the late 1980s, and by 1995 the SDSS concept had become recognised in the literature. Data-driven spatial DSS are also quite common. In general, a data-driven DSS emphasises access to and manipulation of a time series of internal company data, and sometimes external and current data; executive information systems are examples of data-driven DSS. The earliest examples of these systems were called data-oriented DSS, analysis information systems and retrieval-only DSS. Communications-driven DSS use network and communications technologies to facilitate decision-relevant collaboration and communication. In these systems, communications technologies are the dominant architectural component; tools used include groupware, video conferencing and computer-based bulletin boards.
In 1989, Lotus introduced a groupware application called Notes and broadened the focus of GDSS to include enhancing communication, collaboration and coordination among groups of people. In general, groupware, bulletin boards, audio and videoconferencing are the primary technologies for communications-driven decision support. In the last couple of years, voice and video carried over the Internet protocol have greatly expanded the possibilities for synchronous communications-driven DSS. A document-driven DSS uses computer storage and processing technologies to provide document retrieval and analysis. Large document databases may include scanned documents, hypertext documents, images, sounds and video. Content and document management expanded in the 1970s and 1980s as an important, widely used automated means of presenting and processing pieces of text. Examples of documents that might be retrieved by a document-driven DSS are policies and procedures, product specifications, catalogues and corporate historical documents, including minutes of meetings and correspondence. A search engine is a primary decision-aiding tool associated with document-driven DSS. Knowledge-driven DSS can suggest or recommend actions to managers. These DSS are person-computer systems with specialised problem-solving expertise: the "expertise" consists of knowledge about a particular domain, understanding of problems within that domain, and "skill" at solving some of those problems. These systems have been called suggestion DSS and knowledge-based DSS.
Web-based DSS began in roughly 1995, when the widespread Web and global Internet provided a technology platform for further extending the capabilities and deployment of automated decision support. The release of the HTML 2.0 specification, with form tags and tables, was a turning point in the development of web-based DSS. In 1995, a number of papers were presented on using the Web and Internet for decision support at the third International Conference of the International Society for Decision Support Systems (ISDSS). In addition to web-based, model-driven DSS, researchers were reporting web access to data warehouses. DSS Research Resources was started as an online collection of bookmarks. By 1995, the World Wide Web was recognised by a number of software developers and academics as a serious platform for implementing all types of decision support systems. In 1996–97, corporate intranets were developed to support information exchange and knowledge management. The primary decision-support tools included ad hoc query and reporting tools, optimisation and simulation models, online analytical processing (OLAP), data mining and data visualisation. Enterprise-wide DSS using database technologies were especially popular among large organisations. In 1999, vendors introduced new web-based analytical applications, and many DBMS vendors shifted their focus to web-based analytical applications and business intelligence solutions. In 2000, application service providers (ASPs) began hosting the application software and technical infrastructure for decision support capabilities. The year 2000 also saw the rise of the portal: more sophisticated "enterprise knowledge portals" were introduced by vendors, combining information portals, knowledge management, business intelligence and communications-driven DSS in an integrated web environment.
Decision support applications and research have concentrated on data-oriented systems, management expert systems, multidimensional data analysis, query and reporting tools, online analytical processing (OLAP), business intelligence, group DSS, conferencing and groupware, document management, spatial DSS and executive information systems as the underlying technologies emerge, converge and diverge. The study of decision support systems is an applied discipline that draws on knowledge, and especially theory, from other disciplines. Consequently, many DSS researchers investigate questions that have been examined because they were of concern to people who were building and using specific DSS; much of the broad DSS knowledge base therefore provides generalisations and directions for building more effective DSS.
CDM and Business Intelligence
Web 2.0 collaboration tools have met the expectations of mass collaboration by moving past the limits of Web 1.0 collaboration tools. These tools provide a user-controlled environment built on social software in an inexpensive and flexible way, and collaboration 2.0 technologies are being adopted quickly in the corporate world. Social and collaborative business intelligence (BI) became widely recognised as a sub-category within the BI space in 2009. Social and collaborative BI, a type of CDM software, harnesses the functions and philosophies of social networking and social Web 2.0 technologies, applying them to reporting and analytics at the enterprise level, to facilitate better and faster fact-based decision-making. Such a platform, like Web 2.0 technologies, is designed around the premise that anyone should be able to share content and contribute to discussion, anywhere and at any time. Since 2010 there has been a tendency to incorporate features from social networks into business intelligence solutions, and a wide range of business applications are expected to follow this fundamental change in the coming years.
International Data Corporation (IDC) predicted that 2011 would be the year where the trend of embedding social media style features into BI solutions would make its mark, and that virtually all types of business applications would undergo a fundamental transformation. IDC also believed the emerging CDM software market would grow quickly, forecasting revenues of nearly $2 billion by 2014, with a compound annual growth rate of 38.2 percent between the years 2009 and 2014. CDM software, in the context of BI, is the ability to share and institutionalize information, analysis and insight, which would otherwise be lost.
Business intelligence (BI) has been widely used to manage and refine vast stores of information. Many organisations have applied BI in order to refine their own data for better understanding and decision making, and BI also has applications in statistical analysis, predictive modelling and optimisation. The various reports generated by these products play a major role in decision making, which matters because the consequences of a decision affect the growth and performance of the organisation. Collaborative decision making (CDM) joins social software with business intelligence; this combination can dramatically improve the quality of decision making by directly connecting the information contained in BI systems with collaborative input gathered through the use of social software. User organisations could assemble such a system from existing social software, BI platforms and basic tagging functionality. CDM is an emerging component of many application types, including BI, human resources (HR), talent management and suites, but it is also a behaviour brought about by the use of Web 2.0 applications. At the forefront of this trend is the fact that BI is being integrated with shared, cloud-based applications. The virtual world Second Life is also emerging as a platform for collaborative decision making; its key advantages are "breaking down space" and the ability to blend synchronous and asynchronous activities. For meetings and events, the benefit is having all the relevant information and people on demand, which removes the constraints of schedule and geography. Service-oriented architecture (SOA) has played an essential part in making this a reality. BI pervades an entire organisation and, if used correctly, can positively influence decisions that affect every functional area.
Collaborative decision making (CDM) is also the name of a joint government/industry initiative aimed at improving air traffic flow management through increased information exchange among aviation community stakeholders. That CDM effort comprises representatives from government, general aviation, airlines, private industry and academia who work together to create technological and procedural solutions to the air traffic flow management (ATFM) challenges faced by the national airspace system (NAS). New techniques are being used to maximise understanding and improve collaborative decision making in areas such as design reviews, construction planning and integrated operations.
Today's BI tools do a good job of extracting the right information for the right people, but a lack of accountability in the decision-making process still leads organisations to poor choices. Although a great deal of money is invested in business intelligence software and data warehouse technology, the output can still produce bad business decisions, leaving a gap between the level of information available in business intelligence and the quality and transparency of decision making. The problem has become so prevalent that it gave rise to collaborative decision-making (CDM) software, a new approach to making complex business decisions that closely links information and reports with input gathered from social media-style collaboration tools. CDM platforms give users easy access to relevant BI data sources as well as the ability to tag and search those sources for future reference and accountability. The decision itself is linked to the BI software inputs, the collaboration tools, and the methods and practices that were used to make it.
The need to make complex decisions efficiently with the power of information systems drove the use of business intelligence in collaborative decision making. The quality of the decisions depends on the effective use of BI and information integration in the business, which includes capturing BI value, effective practice with BI applications, and knowledgeable business staff with expertise in BI and IT.
Benefits and potential
The concept of social and collaborative BI has been hailed by many as the answer to the persistent problem that, despite increasing investment in BI, many organizations are failing to utilize reporting and analytics effectively and continue to make poor business decisions, resulting in low ROI.
Gartner predicts that CDM platforms will stimulate a new approach to complex decision making by linking the information and reports gleaned from BI software with the latest social media collaboration tools.
Gartner's prognostic report, The Rise of Collaborative Decision Making, predicts that this new technology will minimize the cost and lag in the decision-making process, leading to improved productivity, operational efficiencies and ultimately, better, more timely decisions.
Recent McKinsey Global and Aberdeen Group research has indicated that organizations with collaborative technologies respond to business threats and complete key projects faster, experiencing decreased time to market for new products as well as improved employee satisfaction.
Components
There are three major functions that combine together to enable effective enterprise collaboration and networking based on reporting and analytics, and form the basis of a CDM platform. These are the ability to:
Discuss and overlay knowledge on business data
Share knowledge and content
Collectively decide the best course of action
Discussing and overlaying knowledge on business data
Most decision-making and discussion surrounding business processes occurs outside organizational BI platforms, opening a gap between human insight and the business data itself. Business decisions should be made alongside business data to ensure steadfast, fact-based decision-making.
An open-access discussion forum integrated into the BI solution allows users to discuss the results of data analysis, connecting the right people with the right data. Users are able to overlay human knowledge, insight and provide context to the data in reports.
A social layer within a BI solution improves the efficiency of business interaction regarding reporting and analytics compared to traditional avenues of communication such as faxes, phone calls and face-to-face meetings by:
Being recordable: Conversations are automatically recorded, creating a searchable history of all interaction and eliminating the need to revisit points previously made
Eliminating logistical hurdles: The need for complex and costly travel arrangements is significantly reduced, with geographically dispersed stakeholders able to participate in the exchange of information faster
Enabling all relevant stakeholders to participate: All relevant stakeholders can contribute to discussion at their convenience
Key features of a CDM forum
Collaborative decision-making (CDM) features are social media features which, when combined with BI applications, allow wider distribution and discussion of information through a number of key capabilities: annotations, discussions, tagging, embedding and recording decisions. Annotations help others accept and interpret the data, making it more meaningful; for instance, when users create or analyse reports within the BI environment, they can add commentary and annotations to give the data context, so business leaders can be confident that they fully understand the information on which decisions are based. Open-access discussions allow contributors to post their ideas as well as to read, consider and build on the proposals of others; this can be a valuable way of seeking the input of other stakeholders, because integrating CDM tools within the BI environment makes it possible to hold discussions in full view of the relevant data. Tagging enables users to highlight related information in a flexible manner, making it easy for other users to examine and retrieve useful, practical data. The ability to embed information held in a BI solution into other applications is vital for making sure that accurate information is made available to decision-makers in a timely manner; embedded information can be seen and commented on by several users, so ideas and suggestions can be shared and discussed in real time. Lastly, BI solutions can support decision-making that helps groups attain explicit, measurable goals and objectives, such as an improved product overview or a more profitable supply chain.
Sharing knowledge and content
The digital era is often described as the Information Age. But the value of information resides in its ability to be shared.
A CDM module allows information relating to reporting and analytics to be shared in three ways, by:
Cataloguing: A social layer within a BI solution allows users to create a searchable history by tagging and cataloging past discussions and reports within shared folders inside the BI portal. Tagging allows users to quickly and easily file report, annotation and discussion content under multiple categories for quick and easy retrieval.
Distributing: The ability to export entire files/reports from the BI portal keeps all relevant decision-makers properly informed. Likewise, sharing direct links to external information in a threaded discussion within the CDM platform adds necessary detail, context and perspective to discussion.
Embedding: A CDM layer within a BI tool enables users to embed reports and vital contextual content across platforms – wherever it is needed for decision-making.
A CDM module does this in two ways:
Within the BI tool's social layer or enterprise portals (intranet system) via a web services application programming interface (API)
Outside the enterprise, on any platform, via a YouTube-style JavaScript export, enabling users to embed live interactive reports or other information by simply copying the JavaScript fragment into any HTML page
Collectively deciding the best course of action
Collaborative decision making (CDM) systems are defined as interactive computer-based systems which facilitate the solution of ill-structured problems by a set of decision makers working together as a team. Their main objective is to augment the effectiveness of decision groups through the interactive sharing of information between group members and the computer. CDM combines social software with business intelligence; this combination can radically improve the quality of decision-making by directly connecting the information held in BI systems with collaborative input gathered through the use of social software. This has also been described as collaborative BI, delivered as a collaborative decision-making (CDM) module, which applies the purposes and philosophies of social networking and Web 2.0 technologies to reporting and analytics. Implemented properly, collaborative BI can form important connections between people, data, process and technology, closing the gap between insight and action by supporting people's natural decision-making processes. For an organisation to attain genuinely collaborative BI, it must also adopt a collaborative mindset and sustain a culture of organisation-wide data sharing and data access. This breaks down departmental silos, enabling faster, better and more effective decision-making, and it is regarded as a strict precondition for success: if an organisation has a culture where people are rewarded for hoarding information and for being experts without sharing, then that organisation is not ready. Technology alone will not make an organisation collaborative if it does not already support the idea of teams from different business units working together on shared projects.
Technology factors that underpin enterprise CDM
A BI CDM module is underpinned by three factors.
1 Ease of use: CDM software follows the Web 2.0 self-service mindset. The collaborative components within the BI solution cater for a diversity of user ability and skill levels to ensure knowledge does not remain departmentalized.
2 Fully integrated: Users must be able to discuss their analysis alongside their BI content. Picture this scenario: You’re using your BI tool to search for data on last month's sales results from the Americas. You find a startling anomaly – sales have skyrocketed compared to previous months. Why? What has been done differently? How can you replicate the results? If the CDM platform is within the BI tool, you can immediately start the investigation, inviting others into the conversation in full view of the data. There's no need to set up meetings and discussions in isolation from your data set. The collaborative process remains clearly documented in a single open-access space, and discussion remains on topic – the underlying information (data) is right there. To enable successful CDM, both your collaborative platform and information should be in the one place.
3 Web-based: Being Web-based, the collaborative platform allows all relevant stakeholders to follow and contribute to discussion as it unfolds, regardless of location, time difference or device used to access it.
Notable CDM modules in the Business Intelligence space
Social BI and CDM software is still in its infancy according to Gartner, and remains underutilized. However, a handful of vendors in the BI marketplace offer CDM modules, including:
IBM Cognos (Optional add on)
While the offerings listed above are larger BI systems with upgrades for CDM features, there have emerged some dedicated web based, software-as-a-service CDM offerings, including:
1000minds
Altova MetaTeam
D-Sight
Loomio
References
Business software
Decision support systems
Group decision-making
Collaborative software |
1268625 | https://en.wikipedia.org/wiki/OpenGL%20ES | OpenGL ES | OpenGL for Embedded Systems (OpenGL ES or GLES) is a subset of the OpenGL computer graphics rendering application programming interface (API) for rendering 2D and 3D computer graphics such as those used by video games, typically hardware-accelerated using a graphics processing unit (GPU). It is designed for embedded systems like smartphones, tablet computers, video game consoles and PDAs. OpenGL ES is the "most widely deployed 3D graphics API in history".
The API is cross-language and multi-platform. The libraries GLUT and GLU are not available for OpenGL ES. OpenGL ES is managed by the non-profit technology consortium Khronos Group. Vulkan, a next-generation API from Khronos, is made for simpler high performance drivers for mobile and desktop devices.
Versions
Several versions of the OpenGL ES specification now exist. OpenGL ES 1.0 is drawn up against the OpenGL 1.3 specification, OpenGL ES 1.1 is defined relative to the OpenGL 1.5 specification and OpenGL ES 2.0 is defined relative to the OpenGL 2.0 specification. This means that, for example, an application written for OpenGL ES 1.0 should be easily portable to the desktop OpenGL 1.3; as the OpenGL ES is a stripped-down version of the API, the reverse may or may not be true, depending on the particular features used.
OpenGL ES comes with its own version of shading language (OpenGL ES SL), which is different from OpenGL SL.
Versions 1.0 and 1.1 both have Common (CM) and Common-Lite (CL) profiles; the difference is that the Common-Lite profile supports only fixed-point data types, whereas the Common profile supports both fixed-point and floating-point.
OpenGL ES 1.0
OpenGL ES 1.0 was released publicly July 28, 2003. OpenGL ES 1.0 is based on the original OpenGL 1.3 API, with much functionality removed and a little bit added. One significant difference between OpenGL and OpenGL ES is that OpenGL ES eliminated the glBegin/glEnd bracketing of immediate-mode rendering. Other significant differences are that the calling semantics for primitive rendering functions were changed in favor of vertex arrays, and fixed-point data types were introduced for vertex coordinates. Attributes were also added to better support the computational abilities of embedded processors, which often lack a floating point unit (FPU). Many other functions and rendering primitives were removed in version 1.0 to produce a lightweight interface, including:
quad and polygon rendering primitives,
texgen, line and polygon stipple,
polygon mode and antialiased polygon rendering are not supported, although rendering using multisample is still possible (rather than alpha border fragments),
ARB_Image pixel class operation are not supported, nor are bitmaps or 3D textures,
several of the more technical drawing modes are eliminated, including frontbuffer and accumulation buffer. Bitmap operations, specifically copying pixels (individually) is not allowed, nor are evaluators, nor (user) selection operations,
display lists and feedback are removed, as are push and pop operations for state attributes,
some material parameters were removed, including back-face parameters and user defined clip planes.
The current version of the specification is 1.0.0.2.
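As an illustration of the vertex-array style described above, the following minimal C++ sketch draws a single triangle through the ES 1.x fixed-function API using GL_FIXED coordinates. It is a sketch only: it assumes an EGL context and surface are already current (that setup is omitted), and the function name is illustrative rather than part of any standard.

    // Minimal sketch: one triangle with OpenGL ES 1.x vertex arrays.
    // Assumes a current EGL context; window/surface setup is omitted.
    #include <GLES/gl.h>

    // GL_FIXED values are 16.16 fixed point: value * 65536.
    static const GLfixed kTriangle[] = {
        -65536, -65536, 0,   // (-1.0, -1.0, 0.0)
         65536, -65536, 0,   // ( 1.0, -1.0, 0.0)
             0,  65536, 0    // ( 0.0,  1.0, 0.0)
    };

    void drawTriangle() {
        glClear(GL_COLOR_BUFFER_BIT);
        // No glBegin/glEnd: geometry is always submitted through vertex arrays.
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FIXED, 0, kTriangle);
        glDrawArrays(GL_TRIANGLES, 0, 3);
        glDisableClientState(GL_VERTEX_ARRAY);
    }

The GL_FIXED path lets the same code run on processors without an FPU; on hardware with floating-point support, GL_FLOAT can be used instead.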
OpenGL ES 1.1
OpenGL ES 1.1 added features such as mandatory support for multitexture, better multitexture support (including combiners and dot product texture operations), automatic mipmap generation, vertex buffer objects, state queries, user clip planes, and greater control over point rendering.
The current version of the specification is 1.1.12.
OpenGL ES 2.0
OpenGL ES 2.0 was publicly released in March 2007. It is roughly based on OpenGL 2.0, but it eliminates most of the fixed-function rendering pipeline in favor of a programmable one in a move similar to the transition from OpenGL 3.0 to 3.1. Control flow in shaders is generally limited to forward branching and to loops where the maximum number of iterations can easily be determined at compile time. Almost all rendering features of the transform and lighting stage, such as the specification of materials and light parameters formerly specified by the fixed-function API, are replaced by shaders written by the graphics programmer. As a result, OpenGL ES 2.0 is not backward compatible with OpenGL ES 1.1. Some incompatibilities between the desktop version of OpenGL and OpenGL ES 2.0 persisted until OpenGL 4.1, which added the GL_ARB_ES2_compatibility extension.
The current version of the specification is 2.0.25.
The Khronos Group has written a document describing the differences between OpenGL ES 2.0 and ordinary OpenGL 2.0.
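To illustrate the move to a fully programmable pipeline, here is a minimal, hedged C++ sketch of the shader plumbing every OpenGL ES 2.0 application needs: compiling a vertex and a fragment shader, linking a program, and drawing with a generic vertex attribute. EGL/context setup and error checking are omitted, and the shader sources and function names are illustrative assumptions, not taken from the specification.

    #include <GLES2/gl2.h>

    static const char* kVertexSrc =
        "attribute vec4 aPosition;\n"
        "void main() { gl_Position = aPosition; }\n";

    static const char* kFragmentSrc =
        "precision mediump float;\n"
        "void main() { gl_FragColor = vec4(1.0, 0.5, 0.0, 1.0); }\n";

    static GLuint compile(GLenum type, const char* src) {
        GLuint shader = glCreateShader(type);
        glShaderSource(shader, 1, &src, nullptr);
        glCompileShader(shader);
        return shader;  // a real program would check GL_COMPILE_STATUS here
    }

    GLuint buildProgram() {
        GLuint program = glCreateProgram();
        glAttachShader(program, compile(GL_VERTEX_SHADER, kVertexSrc));
        glAttachShader(program, compile(GL_FRAGMENT_SHADER, kFragmentSrc));
        glLinkProgram(program);
        return program;  // a real program would check GL_LINK_STATUS here
    }

    void drawTriangle(GLuint program) {
        static const GLfloat verts[] = { -1.f, -1.f,  1.f, -1.f,  0.f, 1.f };
        glUseProgram(program);
        GLint pos = glGetAttribLocation(program, "aPosition");
        glEnableVertexAttribArray(static_cast<GLuint>(pos));
        glVertexAttribPointer(static_cast<GLuint>(pos), 2, GL_FLOAT, GL_FALSE, 0, verts);
        glDrawArrays(GL_TRIANGLES, 0, 3);
    }

Because there is no fixed-function fallback, even this trivial orange triangle requires both shader stages, which is the main practical difference from ES 1.x.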
OpenGL ES 3.0
The OpenGL ES 3.0 specification was publicly released in August 2012. OpenGL ES 3.0 is backwards compatible with OpenGL ES 2.0, enabling applications to incrementally add new visual features to applications. OpenGL 4.3 provides full compatibility with OpenGL ES 3.0. Version 3.0 is also the basis for WebGL 2.0.
The current version of the specification is 3.0.6.
New functionality in the OpenGL ES 3.0 specification includes:
multiple enhancements to the rendering pipeline to enable acceleration of advanced visual effects including: occlusion queries, transform feedback, instanced rendering and support for four or more rendering targets,
high quality ETC2 / EAC texture compression as a standard feature, eliminating the need for a different set of textures for each platform,
a new version of the GLSL ES shading language with full support for integer and 32-bit floating point operations;
greatly enhanced texturing functionality including guaranteed support for floating point textures, 3D textures, depth textures, vertex textures, NPOT textures, R/RG textures, immutable textures, 2D array textures, swizzles, LOD and mip level clamps, seamless cube maps and sampler objects,
an extensive set of required, explicitly sized texture and render-buffer formats, reducing implementation variability and making it much easier to write portable applications.
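One of the pipeline enhancements in the list above, instanced rendering, can be sketched as follows. This is a hedged fragment rather than a complete program: it assumes a current ES 3.0 context, a linked program whose attribute 1 is a per-instance vec2 offset, and base geometry already configured in attribute 0; all names are illustrative.

    #include <GLES3/gl3.h>

    void drawInstances(GLuint offsetBuffer, GLsizei instanceCount) {
        // Attribute 1 holds one vec2 offset per instance rather than per vertex.
        glBindBuffer(GL_ARRAY_BUFFER, offsetBuffer);
        glEnableVertexAttribArray(1);
        glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, nullptr);
        glVertexAttribDivisor(1, 1);   // advance this attribute once per instance

        // A single call draws the same mesh instanceCount times.
        glDrawArraysInstanced(GL_TRIANGLES, 0, 3, instanceCount);
    }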
OpenGL ES 3.1
The OpenGL ES 3.1 specification was publicly released in March 2014.
New functionality in OpenGL ES 3.1 includes:
Compute shaders
Independent vertex and fragment shaders
Indirect draw commands
OpenGL ES 3.1 is backward compatible with OpenGL ES 2.0 and 3.0, thus enabling applications to incrementally incorporate new features. The current version of the specification is 3.1 (November 2016).
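A minimal C++ sketch of the headline 3.1 feature, compute shaders, is shown below. It compiles and dispatches a compute program that doubles every float in a shader storage buffer; buffer creation, error checking and context setup are omitted, and the kernel source and function names are illustrative assumptions.

    #include <GLES3/gl31.h>

    static const char* kComputeSrc =
        "#version 310 es\n"
        "layout(local_size_x = 64) in;\n"
        "layout(std430, binding = 0) buffer Data { float values[]; };\n"
        "void main() {\n"
        "    uint i = gl_GlobalInvocationID.x;\n"
        "    values[i] = values[i] * 2.0;\n"
        "}\n";

    void runCompute(GLuint storageBuffer, GLuint elementCount) {
        GLuint shader = glCreateShader(GL_COMPUTE_SHADER);
        glShaderSource(shader, 1, &kComputeSrc, nullptr);
        glCompileShader(shader);

        GLuint program = glCreateProgram();
        glAttachShader(program, shader);
        glLinkProgram(program);

        glUseProgram(program);
        glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, storageBuffer);
        glDispatchCompute(elementCount / 64, 1, 1);          // one work group per 64 items
        glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);      // make the writes visible
    }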
OpenGL ES 3.2
The OpenGL ES 3.2 specification was publicly released in August 2015.
New capabilities in OpenGL ES 3.2 include:
Geometry and tessellation shaders to efficiently process complex scenes on the GPU.
Floating point render targets for increased flexibility in higher precision compute operations.
ASTC compression to reduce the memory footprint and bandwidth used to process textures.
Enhanced blending for sophisticated compositing and handling of multiple color attachments.
Advanced texture targets such as texture buffers, multisample 2D array and cube map arrays.
Debug and robustness features for easier code development and secure execution.
The current version of the specification is 3.2.6 (July 2019).
Further extensions are developed or in development in Mesa ahead of a possible next OpenGL ES version (see Mesamatrix).
The next-generation API is Vulkan.
Platform usage
For a complete list of companies and their conformant products, see the Khronos Group's conformance list.
OpenGL ES 1.0
OpenGL ES 1.0 added an official 3D graphics API to the Android and Symbian operating systems, as well as QNX. It is also supported by the PlayStation 3 as one of its official graphics APIs (the other one being the low-level libgcm library), with Nvidia's Cg in lieu of GLSL. The PlayStation 3 also includes several features of the 2.0 version of OpenGL ES.
OpenGL ES 1.1
The 1.1 version of OpenGL ES is supported by:
Android 1.6
Apple iOS for iPad, iPhone, and iPod Touch
RIM's BlackBerry 5.0 operating system series (only BlackBerry Storm 2, BlackBerry Curve 8530 and later models have the needed hardware)
BlackBerry PlayBook
BlackBerry BB10
Various Nokia phones such as Nokia N95, N93, N93i, and N82.
The Palm webOS, using the Plug-in Development Kit
Nintendo 3DS
OpenGL ES 2.0
Supported by:
The Android platform since Android 2.0 through NDK and Android 2.2 through Java
AmigaOS on AmigaOne with Warp3D Nova and compatible Radeon HD graphics card.
Apple iOS 5 or later in iPad, iPad Mini, iPhone 3GS or later, and iPod Touch 3rd generation or later
BlackBerry devices with BlackBerry OS 7.0 and Blackberry 10, as well as the BlackBerry PlayBook
Google Native Client
Intel HD Graphics 965G / X3000 and higher (Linux)
Nvidia (Android), Curie NV40+: Linux, Windows
Various Nokia phones (such as Symbian^3 based Nokia N8, MeeGo based Nokia N9, and Maemo based Nokia N900)
Palm webOS, using the Plug-in Development Kit
The Pandora console
The Raspberry Pi
The Odroid
Various Samsung mobile phones (such as the Wave)
Web browsers (WebGL)
The GCW Zero console
The PlayStation Vita portable console
The PlayStation 4 console
OpenGL ES 3.0
Supported by:
Android since version 4.3, on devices with appropriate hardware and drivers, including:
Nexus 7 (2013)
Nexus 4
Nexus 5
Nexus 10
HTC Butterfly S
HTC One/One Max
LG G2
LG G Pad 8.3
The Raspberry Pi 4
Samsung Galaxy J5
Samsung Galaxy J5 (2016)
Samsung Galaxy S4 (Snapdragon version)
Samsung Galaxy S5
Samsung Galaxy Note 3
Samsung Galaxy Note 10.1 (2014 Edition)
Sony Xperia M
Sony Xperia Z/ZL
Sony Xperia Z1
Sony Xperia Z Ultra
Sony Xperia Tablet Z
iOS since version 7, on devices including:
iPhone 5S
iPad Air
iPad mini with Retina display
BlackBerry 10 OS since version 10.2, on devices including:
BlackBerry Z3
BlackBerry Z30
BlackBerry Passport
Supported by some recent versions of these GPUs:
Adreno 300 and 400 series (Android, BlackBerry 10, Windows 10, Windows RT)
Mali T600 series onwards (Android, Linux, Windows 7)
PowerVR Series6 (iOS, Linux)
Vivante (Android, OS X 10.8.3, Windows 7)
Nvidia (Android), Tesla G80+: Linux, Windows 7+
Intel HD Graphics Sandy Bridge and higher (Linux)
AMD TeraScale and current GCN architectures (Windows, Linux)
LLVMpipe and Softpipe: soft drivers in Mesa
VIRGL: virtual Driver for virtual machines in 2018 with Mesa 18.1 (See Mesamatrix.net)
OpenGL ES 3.1
Supported by Windows, Linux, Android (since version 5.0) on devices with appropriate hardware and drivers, including:
Adreno 400 series
Adreno 500 series (Mesa 18.1 for Linux and Android)
AMD TeraScale and current GCN architectures (Windows, Linux (r600, radeonSI))
Intel HD Graphics for Intel Atom Z3700 series (Android)
Intel HD Graphics for Intel Celeron N and J series (Android)
Intel HD Graphics for Intel Pentium N and J series (Android)
Intel HD Graphics Haswell and higher (Linux Mesa; the earlier Ivy Bridge generation supports nearly everything except stencil texturing)
Mali T6xx (midgard) series onwards (Android, Linux)
Nvidia GeForce 400 series onwards (Windows, Linux)
Nvidia Tegra K1 (Android, Linux)
Nvidia Tegra X1 (Android)
PowerVR Series 6, 6XE, 6XT, 7XE and 7XT (Linux, Android)
Vivante GC2000 series onwards (optional with GC800 and GC1000)
panfrost: ARM panfrost support (Linux Mesa 22.0)
v3d: Mesa driver for the Broadcom GPUs used in Raspberry Pi boards (Linux)
VIRGL: virtual Driver for virtual machines in 2018 with Mesa 18.1 (See Mesamatrix.net)
LLVMpipe: software driver in Mesa 20.2 (Linux)
softpipe: software driver in Mesa 20.3 (Linux)
Zink: emulation driver in Mesa 21.1 (Linux)
d3d12: WSL2 Linux driver for Microsoft Windows 10+ (Mesa 22.0)
Android Extension Pack
Android Extension Pack (AEP) is a set of OpenGL ES 3.1 extensions, all bundled into a single extension introduced by Google in 2014. This allows applications to use all of the features of the set of extensions, while only testing for the presence of a single one. The AEP was officially added to Android Lollipop to provide extra features like tessellation over what was officially in the GLES 3.1 revision. OpenGL ES 3.2 update is largely made up of the AEP additions, which are already present in desktop OpenGL.
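Because the AEP is advertised as a single extension string, an application can probe for the whole bundle with one comparison. The following hedged C++ sketch shows one way to do this on an ES 3.1 context, using the indexed extension query introduced in ES 3.0; the helper function name is an illustrative assumption.

    #include <GLES3/gl31.h>
    #include <cstring>

    // Returns true if the current context advertises the Android Extension Pack.
    bool hasAndroidExtensionPack() {
        GLint count = 0;
        glGetIntegerv(GL_NUM_EXTENSIONS, &count);
        for (GLint i = 0; i < count; ++i) {
            const char* ext =
                reinterpret_cast<const char*>(glGetStringi(GL_EXTENSIONS, i));
            if (ext && std::strcmp(ext, "GL_ANDROID_extension_pack_es31a") == 0)
                return true;   // the single check covers the whole feature set
        }
        return false;
    }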
OpenGL ES 3.2
OpenGL ES 3.2, incorporating the Android Extension Pack (AEP), "boasts a small number of improvements over last year’s OpenGL ES 3.1. Both make use of similar features from the AEP. From the AEP, OpenGL ES 3.2 compliant hardware will support Tessellation for additional geometry detail, new geometry shaders, ASTC texture compression for a smaller memory bandwidth footprint, floating point render targets for high accuracy compute processes, and new debugging features for developers. These high-end features are already found in the group’s full OpenGL 4 specification."
Supported by Windows, Linux and Android (possible since version 6.0; for 7.0+, Vulkan 1.0 and OpenGL ES 3.2 are required) on devices with appropriate hardware and drivers, including:
Adreno 420 and newer (Android, Linux (freedreno))
AMD GCN-architecture (Windows, Linux (Mesa 18.2 with radeonSI))
Intel HD Graphics Skylake and higher (Linux)
Mali-T760 and newer (Android, Linux)
Nvidia GeForce 400 series (Fermi) and newer (Windows, Linux)
VIRGL: virtual Driver for virtual machines in 2018 with Mesa 18.1 (See Mesamatrix.net)
LLVMpipe: software driver in Mesa 20 (Linux)
Zink: Vulkan emulation driver in Mesa 21.2 (Linux)
Deprecation in Apple devices
OpenGL ES (and OpenGL) is deprecated in Apple's operating systems, but still works at least up to iOS 12.
The Future
There is currently no plan for a new core version of OpenGL ES, as Vulkan is expected to displace it in embedded and mobile applications. Development of extensions to OpenGL ES continued as of 2017.
OpenGL compatibility
A few libraries have been created to emulate OpenGL calls using GL ES:
Nvidia offers a 2-clause BSD licensed library called Regal, originally started by Cass Everitt. It was last updated in 2016. Regal is used for example by Google's NaCl.
The MIT licensed GL4ES emulates OpenGL 2.1/1.5 using GL ES 2.0/1.1. It is based on glshim.
See also
Direct3D – Windows API for high-performance 3D graphics, with 3D acceleration hardware support
DirectX – Windows API for handling tasks related to graphics and video
Metal – low level, high-performance 3D accelerated graphics library for Apple devices
OpenSL ES – API for audio on embedded systems, developed by the Khronos Group
ANGLE (software) – Google developed library to turn OpenGL ES calls into those of DirectX or Vulkan
References
Further reading
External links
Public bug tracking
OpenGL ES Conformant companies
Public forums
List of OpenGL ES compatible devices
OpenGL home page
OpenGL ES 1.1 & 2.0 Emulator from ARM
OpenGL ES 3.0 Emulator from ARM
3D graphics APIs
Es |
4907231 | https://en.wikipedia.org/wiki/GRASP%20%28object-oriented%20design%29 | GRASP (object-oriented design) | General Responsibility Assignment Software Patterns (or Principles), abbreviated GRASP, is a set of "nine fundamental principles in object design and responsibility assignment" first published by Craig Larman in his 1997 book Applying UML and Patterns.
The different patterns and principles used in GRASP are controller, creator, indirection, information expert, low coupling, high cohesion, polymorphism, protected variations, and pure fabrication. All these patterns solve some software problem common to many software development projects. These techniques have not been invented to create new ways of working, but to better document and standardize old, tried-and-tested programming principles in object-oriented design.
Larman states that "the critical design tool for software development is a mind well educated in design principles. It is not UML or any other technology." Thus, the GRASP principles are really a mental toolset, a learning aid to help in the design of object-oriented software.
Patterns
In object-oriented design, a pattern is a named description of a problem and solution that can be applied in new contexts; ideally, a pattern advises us on how to apply its solution in varying circumstances and considers the forces and trade-offs. Many patterns, given a specific category of problem, guide the assignment of responsibilities to objects.
Information expert
Problem: What is a basic principle by which to assign responsibilities to objects?
Solution: Assign responsibility to the class that has the information needed to fulfill it.
Information expert (also expert or the expert principle) is a principle used to determine where to delegate responsibilities such as methods, computed fields, and so on.
Using the principle of information expert, a general approach to assigning responsibilities is to look at a given responsibility, determine the information needed to fulfill it, and then determine where that information is stored.
This will lead to placing the responsibility on the class with the most information required to fulfill it.
Related Pattern or Principle: Low Coupling, High Cohesion
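A minimal C++ sketch of the idea, using the common sale/line-item example; the class names are illustrative, not prescribed by the article. Each class is given the responsibility for which it already holds the information.

    #include <vector>

    class SalesLineItem {
    public:
        SalesLineItem(double unitPrice, int quantity)
            : unitPrice_(unitPrice), quantity_(quantity) {}
        // The line item knows its own price and quantity, so it computes its subtotal.
        double subtotal() const { return unitPrice_ * quantity_; }
    private:
        double unitPrice_;
        int quantity_;
    };

    class Sale {
    public:
        void add(const SalesLineItem& item) { items_.push_back(item); }
        // The Sale holds the line items, so it is the information expert for the total.
        double total() const {
            double sum = 0.0;
            for (const auto& item : items_) sum += item.subtotal();
            return sum;
        }
    private:
        std::vector<SalesLineItem> items_;
    };

Placing total() anywhere else would force that other class to pull the line items out of Sale, increasing coupling.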
Creator
The creation of objects is one of the most common activities in an object-oriented system. Which class is responsible for creating objects is a fundamental property of the relationship between objects of particular classes.
Problem: Who creates object A?
Solution: In general, assign class B the responsibility to create object A if one, or preferably more, of the following apply:
Instances of B contain or compositely aggregate instances of A
Instances of B record instances of A
Instances of B closely use instances of A
Instances of B have the initializing information for instances of A and pass it on creation.
Related Pattern or Principle: Low Coupling, Factory pattern
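A brief illustrative C++ sketch (class names are assumptions made for the example): Sale compositely aggregates its line items and has their initializing data, so the creator principle assigns it the responsibility of creating them.

    #include <vector>

    struct SalesLineItem {
        SalesLineItem(double unitPrice, int quantity)
            : unitPrice(unitPrice), quantity(quantity) {}
        double unitPrice;
        int quantity;
    };

    // Sale contains its line items and holds their initializing data,
    // so by the Creator principle Sale is the class that creates them.
    class Sale {
    public:
        void makeLineItem(double unitPrice, int quantity) {
            items_.emplace_back(unitPrice, quantity);   // class B creates object A
        }
    private:
        std::vector<SalesLineItem> items_;
    };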
Controller
The controller pattern assigns the responsibility of dealing with system events to a non-UI class that represents the overall system or a use case scenario. A controller object is a non-user interface object responsible for receiving or handling a system event.
Problem: Who should be responsible for handling an input system event?
Solution: A use case controller should be used to deal with all system events of a use case, and may be used for more than one use case. For instance, for the use cases Create User and Delete User, one can have a single class called UserController, instead of two separate use case controllers.
The controller is defined as the first object beyond the UI layer that receives and coordinates ("controls") a system operation. The controller should delegate the work that needs to be done to other objects; it coordinates or controls the activity. It should not do much work itself. The GRASP Controller can be thought of as being a part of the application/service layer (assuming that the application has made an explicit distinction between the application/service layer and the domain layer) in an object-oriented system with common layers in an information system logical architecture.
Related Pattern or Principle: Command, Facade, Layers, Pure Fabrication
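An illustrative C++ sketch of a use case controller for the Create User and Delete User example above; the domain class and method names are assumptions made for the example, not part of GRASP itself.

    #include <string>

    // Domain-layer class; the controller delegates the real work to it.
    class UserRegistry {
    public:
        void add(const std::string& name)    { /* persist the new user */ (void)name; }
        void remove(const std::string& name) { /* delete the user */ (void)name; }
    };

    // Use case controller: the first object beyond the UI layer that receives
    // the "create user" and "delete user" system events and coordinates the work.
    class UserController {
    public:
        explicit UserController(UserRegistry& registry) : registry_(registry) {}
        void onCreateUser(const std::string& name) { registry_.add(name); }
        void onDeleteUser(const std::string& name) { registry_.remove(name); }
    private:
        UserRegistry& registry_;
    };

The controller only coordinates; keeping the actual behaviour in UserRegistry preserves a thin application layer over the domain layer.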
Indirection
The indirection pattern supports low coupling and reuses potential between two elements by assigning the responsibility of mediation between them to an intermediate object. An example of this is the introduction of a controller component for mediation between data (model) and its representation (view) in the model-view-controller pattern. This ensures that coupling between them remains low.
Problem: Where to assign responsibility, to avoid direct coupling between two (or more) things? How to de-couple objects so that low coupling is supported and reuse potential remains higher?
Solution: Assign the responsibility to an intermediate object to mediate between other components or services so that they are not directly coupled.
The intermediary creates an indirection between the other components.
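A hedged C++ sketch of indirection: an intermediate adapter object mediates between a sale and whatever external tax service is in use, so the two sides are not directly coupled. The names are illustrative.

    // The Sale depends only on this intermediate interface, never on a
    // concrete third-party tax service, so the two sides stay decoupled.
    class TaxCalculatorAdapter {
    public:
        virtual ~TaxCalculatorAdapter() = default;
        virtual double taxOn(double amount) const = 0;
    };

    // One possible service hidden behind the indirection.
    class FlatRateTaxAdapter : public TaxCalculatorAdapter {
    public:
        double taxOn(double amount) const override { return amount * 0.20; }
    };

    class Sale {
    public:
        explicit Sale(const TaxCalculatorAdapter& tax) : tax_(tax) {}
        double totalWithTax(double subtotal) const {
            return subtotal + tax_.taxOn(subtotal);   // coupled only to the adapter
        }
    private:
        const TaxCalculatorAdapter& tax_;
    };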
Low coupling
Coupling is a measure of how strongly one element is connected to, has knowledge of, or relies on other elements. Low coupling is an evaluative pattern that dictates how to assign responsibilities for the following benefits:
lower dependency between the classes,
change in one class having a lower impact on other classes,
higher reuse potential.
High cohesion
High cohesion is an evaluative pattern that attempts to keep objects appropriately focused, manageable and understandable. High cohesion is generally used in support of low coupling. High cohesion means that the responsibilities of a given set of elements are strongly related and highly focused on a rather specific topic. Breaking programs into classes and subsystems, if correctly done, is an example of activities that increase the cohesive properties of named classes and subsystems. Alternatively, low cohesion is a situation in which a set of elements, of e.g., a subsystem, has too many unrelated responsibilities. Subsystems with low cohesion between their constituent elements often suffer from being hard to comprehend, reuse, maintain and change as a whole.
Polymorphism
According to the polymorphism principle, responsibility for defining the variation of behaviors based on type is assigned to the type for which this variation happens. This is achieved using polymorphic operations. The user of the type should use polymorphic operations instead of explicit branching based on type.
Problem: How to handle alternatives based on type? How to create pluggable software components?
Solution: When related alternatives or behaviors vary by type (class), assign responsibility for the behavior—using polymorphic operations—to the types for which the behavior varies. (Polymorphism has several related meanings. In this context, it means "giving the same name to services in different objects".)
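An illustrative C++ sketch: the behaviour that varies by type, how a payment is authorized, is assigned to the payment types themselves through a virtual operation, so callers never branch on a type code. The class names are assumptions for the example.

    #include <memory>
    #include <vector>

    class Payment {
    public:
        virtual ~Payment() = default;
        virtual bool authorize(double amount) = 0;   // same name, varying behaviour
    };

    class CreditPayment : public Payment {
    public:
        bool authorize(double amount) override { (void)amount; /* contact card network */ return true; }
    };

    class CashPayment : public Payment {
    public:
        bool authorize(double amount) override { (void)amount; return true; }   // nothing to verify
    };

    bool authorizeAll(std::vector<std::unique_ptr<Payment>>& payments, double amount) {
        for (auto& payment : payments)
            if (!payment->authorize(amount)) return false;   // no switch on type
        return true;
    }

Adding a new payment type only requires a new subclass; authorizeAll remains unchanged, which is what makes the component pluggable.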
Protected variations
The protected variations pattern protects elements from the variations on other elements (objects, systems, subsystems) by wrapping the focus of instability with an interface and using polymorphism to create various implementations of this interface.
Problem: How to design objects, subsystems, and systems so that the variations or instability in these elements does not have an undesirable impact on other elements?
Solution: Identify points of predicted variation or instability; assign responsibilities to create a stable interface around them.
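A short illustrative C++ sketch: the predicted point of variation (how users are notified) is wrapped in a stable interface, so client code is protected from changes in the implementations behind it. The names are illustrative assumptions.

    #include <string>

    // Predicted point of instability: how the system notifies users may change.
    class Notifier {
    public:
        virtual ~Notifier() = default;
        virtual void notify(const std::string& message) = 0;
    };

    class EmailNotifier : public Notifier {
    public:
        void notify(const std::string& message) override { /* send an e-mail */ (void)message; }
    };

    class SmsNotifier : public Notifier {
    public:
        void notify(const std::string& message) override { /* send an SMS */ (void)message; }
    };

    // Clients depend only on the stable interface and are unaffected by the variation.
    void onOrderShipped(Notifier& notifier) { notifier.notify("Your order has shipped"); }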
Pure fabrication
A pure fabrication is a class that does not represent a concept in the problem domain, made up specially to achieve low coupling, high cohesion, and the reuse potential that follows from them (when a solution suggested by the information expert pattern would not). This kind of class is called a "service" in domain-driven design.
Related Pattern or Principle: Low Coupling, High Cohesion
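An illustrative C++ sketch: PersistentStorage is a made-up class that corresponds to no concept in the sales domain; it exists purely to keep persistence code out of Sale, preserving Sale's cohesion and keeping coupling low. The names are illustrative assumptions.

    // Domain class, kept cohesive: it knows nothing about databases or files.
    class Sale {
    public:
        double total() const { return total_; }
    private:
        double total_ = 0.0;
    };

    // Pure fabrication: invented solely to take on the persistence responsibility.
    class PersistentStorage {
    public:
        void save(const Sale& sale) {
            // translate the object into a database row or file record and write it
            (void)sale;
        }
    };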
See also
Anemic domain model
Design pattern (computer science)
Design Patterns (book)
SOLID (object-oriented design)
References
Software design
Programming principles |
462843 | https://en.wikipedia.org/wiki/Chinese%20wall | Chinese wall | A Chinese wall or ethical wall is an information barrier protocol within an organization designed to prevent exchange of information or communication that could lead to conflicts of interest. For example, a Chinese wall may be established to separate people who make investments from those who are privy to confidential information that could improperly influence the investment decisions. Firms are generally required by law to safeguard insider information and ensure that improper trading does not occur.
Etymology
Bryan Garner's Dictionary of Modern Legal Usage states that the metaphor title "derives of course from the Great Wall of China", although an alternative explanation links the idea to the screen walls of Chinese internal architecture.
The term was popularized in the United States following the stock market crash of 1929, when the U.S. government legislated information separation between investment bankers and brokerage firms, in order to limit the conflict of interest between objective company analysis and the desire for successful initial public offerings. Rather than prohibiting one company from engaging in both businesses, the government permitted the implementation of Chinese wall procedures.
A leading note on the subject published in 1980 in the University of Pennsylvania Law Review titled "The Chinese Wall Defense to Law-Firm Disqualification" perpetuated the use of the term.
Objections to the term Chinese wall
There have been disputes about the use of the term for some decades, particularly in the legal and banking sectors. The term can be seen both as culturally insensitive and an inappropriate reflection on Chinese culture and trade, which are now extensively integrated into the global market.
In Peat, Marwick, Mitchell & Co. v. Superior Court (1988), Presiding Justice Harry W. Low, a Chinese American, wrote a concurring opinion specifically in order "to express my profound objection to the use of this phrase in this context". He called the term a "piece of legal flotsam which should be emphatically abandoned", and suggested "ethics wall" as a more suitable alternative. He maintained that the "continued use of the term would be insensitive to the ethnic identity of the many persons of Chinese descent".
Alternative terms
Alternative phrases include "screen", "firewall", "cone of silence", and "ethical wall".
"Screen", or the verb "to screen", is the preferred term of the American Bar Association Model Rules of Professional Conduct. The ABA Model Rules define screening as "the isolation of a lawyer from any participation in a matter through the timely imposition of procedures within a firm that are reasonably adequate under the circumstances to protect information that the isolated lawyer is obligated to protect under these Rules or other law", and suitable "screening procedures" have been approved where paralegals have moved from one law firm to another and have worked on cases for their former employer which may conflict with the interests of their current employer and the clients they represent.
Usage in specific industries
Finance
A Chinese wall is commonly employed in investment banks, between the corporate-advisory area and the brokering department. This separates those giving corporate advice on takeovers from those advising clients about buying shares and researching the equities themselves.
The "wall" is thrown up to prevent leaks of corporate inside information, which could influence the advice given to clients making investments, and allow staff to take advantage of facts that are not yet known to the general public.
The phrase "already over the wall" is used by equity research personnel to refer to rank-and-file personnel who operate without an ethics wall at all times. Examples include members of the Chinese wall department, most compliance personnel, attorneys and certain NYSE-licensed analysts. The term "over the wall" is used when an employee who is not normally privy to wall-guarded information somehow obtains sensitive information. Breaches considered semi-accidental were typically not met with punitive action during the heyday of the "dot-com" era. These and other instances involving conflicts of interest were rampant during this era. A major scandal was exposed when it was discovered that research analysts were encouraged to blatantly publish dishonest positive analyses on companies in which they, or related parties, owned shares, or on companies that depended on the investment banking departments of the same research firms. The U.S. government has since passed laws strengthening the use of ethics walls, such as Title V of the Sarbanes-Oxley Act, in order to prevent such conflicts of interest.
Ethics walls are also used in the corporate finance departments of the "Big Four" and other large accountancy and financial services firms. They are designed to insulate sensitive documentation from the wider firm in order to prevent conflicts.
Journalism
The term is used in journalism to describe the separation between the editorial and advertising arms. The Chinese wall is regarded as breached for "advertorial" projects.
Insurance
The term is used in property and casualty insurance to describe the separation of claim handling where both parties to a claim (e.g. an airport and an airline) have insurance policies with the same insurer. The claim handling process needs to be segregated within the organisation to avoid a conflict of interest.
This also occurs when an unidentified or uninsured motorist is involved in an auto collision. In this case, two loss adjusters take on the claim, one representing the insured party and another representing the uninsured or unidentified motorist. While they both work under the same policy, they must each investigate and negotiate to determine fault and what, if anything, is covered under the policy. In this case, a Chinese wall is erected between the two adjusters.
Law
Chinese walls may be used in law firms to address a conflict of interest, for example to separate one part of the firm representing a party on a deal or litigation from another part of the firm with contrary interests or with confidential information from an adverse party. Under UK law, a firm may represent competing parties in a suit, but only in strictly defined situations and when individual fee earners do not act for both sides. In United States law firms, the use of Chinese walls is no longer permitted except within very narrow exceptions. The American Bar Association Model Rules of Professional Conduct (2004) state: "While lawyers are associated in a firm, none of them shall knowingly represent a client when any one of them practicing alone would be prohibited from doing so by Rules 1.7 or 1.9, unless the prohibition is based on a personal interest of the prohibited lawyer and does not present a significant risk of materially limiting the representation of the client by the remaining lawyers in the firm." Although ABA rules are only advisory, most U.S. states have adopted them or have even stricter regulations in place.
Government procurement
Chinese or ethical walls may be required where a company which already has a contract with a public body intends to bid for a new contract, in circumstances where the public body concerned wishes to maintain a fair competitive procedure and avoid or minimise the advantage which the existing contractor may have over other potential bidders. In some cases UK public sector terms of contract require the establishment of "ethical wall arrangements" approved by the customer as a pre-condition for involvement in the procurement process for additional goods and services or for a successor contract.
Computer science
In computer science, the concept of a Chinese wall is used both by the operating system for computer security and by the US judicial system for protection against copyright infringement. In computer security it concerns the stability of the operating system's software. The same concept is also involved in a significant business matter: the licensing of each of a computer's many software and hardware components.
Any hardware component that requires direct software interaction has a license for the hardware itself and a license for the software "driver" running in the operating system. Reverse engineering software can involve writing a driver for a piece of hardware so that it works in an operating system unsupported by the hardware's manufacturer, adding functionality or improving performance beyond what the manufacturer provides in a supported operating system, or restoring the use of hardware whose driver has disappeared altogether. A reverse-engineered driver allows people outside the manufacturer to continue developing support for the hardware.
Reverse engineering
There is a case-law mechanism called "clean room design" that is employed to avoid copyright infringement when reverse engineering a proprietary driver.
It involves two separate engineering groups separated by a Chinese wall. One group works with the hardware to reverse engineer the original algorithms and documents only its findings. The other group writes new code based solely on that documentation. Once the new code begins to function in tests on the hardware, it can be refined and developed over time.
This method insulates the new code from the old code, so that the reverse engineering is less likely to be considered by a jury as a derived work.
Computer security
The basic model used to provide both privacy and integrity for data is the "Chinese wall model" or the "Brewer and Nash model". It is a security model where read/write access to files is governed by membership of data in conflict-of-interest classes and datasets.
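To make the rule concrete, here is a minimal sketch in Python of the Brewer and Nash access check, assuming a toy in-memory model; the names, conflict classes, and structure are illustrative inventions, not taken from any particular implementation. The idea is that a subject may access a dataset only if it has not already accessed a different dataset in the same conflict-of-interest class.

```python
# Minimal sketch of the Brewer-Nash ("Chinese wall") read-access rule.
# All names and classes below are invented for illustration.

from collections import defaultdict

# Conflict-of-interest classes group datasets belonging to competing clients.
CONFLICT_CLASSES = {
    "BankA": "banks",
    "BankB": "banks",
    "OilCo": "energy",
}

history = defaultdict(set)  # subject -> datasets already accessed

def may_access(subject, dataset):
    """Allow access unless the subject already holds data from a competitor."""
    for seen in history[subject]:
        same_class = CONFLICT_CLASSES[seen] == CONFLICT_CLASSES[dataset]
        if same_class and seen != dataset:
            return False
    return True

def access(subject, dataset):
    """Record the access if it is permitted."""
    if may_access(subject, dataset):
        history[subject].add(dataset)
        return True
    return False

print(access("analyst1", "BankA"))  # True  - first access
print(access("analyst1", "OilCo"))  # True  - different conflict class
print(access("analyst1", "BankB"))  # False - competitor of BankA
```

In the full model, write access is further restricted so that information cannot leak indirectly through a subject that reads from one company and writes into data visible to a competitor.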
See also
Brewer and Nash model
Conflict of interest
Insider trading
Glass–Steagall Act
Global Settlement
Mad Men: "Chinese Wall"
References
Investment banking
Informal legal terminology
Journalism standards
Conflict of interest mitigation
Data security
Metaphors referring to objects
Reverse engineering |
230339 | https://en.wikipedia.org/wiki/Robert%20Tappan%20Morris | Robert Tappan Morris | Robert Tappan Morris (born November 8, 1965) is an American computer scientist and entrepreneur. He is best known for creating the Morris worm in 1988, considered the first computer worm on the Internet.
Morris was prosecuted for releasing the worm, and became the first person convicted under the then-new Computer Fraud and Abuse Act (CFAA).
He went on to cofound the online store Viaweb, one of the first web applications, and later the venture capital funding firm Y Combinator, both with Paul Graham.
He later joined the faculty in the department of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology (MIT), where he received tenure in 2006. He was elected to the National Academy of Engineering in 2019.
Early life
Morris was born in 1965 to parents Robert Morris and Anne Farlow Morris. The senior Robert Morris was a computer scientist at Bell Labs, who helped design Multics and Unix; and later became the chief scientist at the National Computer Security Center, a division of the National Security Agency (NSA).
Morris grew up in the Millington section of Long Hill Township, New Jersey, and graduated from Delbarton School in 1983.
Morris attended Harvard University, and later went on to graduate school at Cornell University. During his first year there, he designed a computer worm (see below) that disrupted many computers on what was then a fledgling internet. This led to him being indicted a year later.
After serving his conviction term, he returned to Harvard to complete his Doctor of Philosophy (Ph.D.) under the supervision of H.T. Kung. He finished in 1999.
Morris worm
Morris' computer worm was developed in 1988, while he was a graduate student at Cornell University. He released the worm from MIT, rather than from Cornell. The worm exploited several vulnerabilities to gain entry to targeted systems, including:
A hole in the debug mode of the Unix sendmail program
A buffer overflow or overrun hole in the fingerd network service
The transitive trust created by people setting up network logins with no password requirement for remote execution (rexec) and Remote Shell (rsh), termed rexec/rsh
The worm was programmed to check each computer it found to determine if the infection was already present. However, Morris believed that some system administrators might try to defeat the worm by instructing the computer to report a false positive. To compensate for this possibility, Morris programmed the worm to copy itself anyway, 14% of the time, no matter what the response was to the infection-status interrogation.
This level of persistence was a design flaw: it created system loads that brought it to the attention of administrators, and disrupted the target computers. During the ensuing trial, it was estimated that the cost in "potential loss in productivity" caused by the worm and efforts to remove it from different systems ranged from $200 to $53,000.
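As a rough illustration of the mechanism described above (a hypothetical sketch, not the worm's original code), the reinfection decision can be modeled as a simple probabilistic check; the function name and structure here are my own.

```python
# Illustrative sketch of the reinfection decision described above.
# Even when a target reports it is already infected, a copy is installed
# anyway about 1 time in 7, so copies accumulate on busy hosts.

import random

def should_install(already_infected, reinfect_probability=0.14):
    """Decide whether to install a copy on a target host."""
    if not already_infected:
        return True
    return random.random() < reinfect_probability

# Rough expectation: after N probes of an already-infected machine,
# it carries about 0.14 * N extra copies.
extra_copies = sum(should_install(True) for _ in range(1000))
print(extra_copies)  # typically around 140
```

Because a copy was installed regardless of the answer roughly one time in seven, repeated probes caused copies to pile up on already-infected machines, which is what produced the crippling load described above.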
Criminal prosecution
In 1989, Morris was indicted for violating United States Code Title 18 (), the Computer Fraud and Abuse Act (CFAA). He was the first person to be indicted under this act. In December 1990, he was sentenced to three years of probation, 400 hours of community service, and a fine of $10,050 plus the costs of his supervision. He appealed, but the motion was rejected the following March. Morris' stated motive during the trial was "to demonstrate the inadequacies of current security measures on computer networks by exploiting the security defects [he] had discovered." He completed his sentence as of 1994.
Later life and work
Morris' principal research interest is computer network architectures, including work on distributed hash tables such as Chord and wireless mesh networks such as Roofnet.
He is a longtime friend and collaborator of Paul Graham. In addition to cofounding two companies with Morris, Graham dedicated his book ANSI Common Lisp to him, and named the programming language that generates the online stores' web pages RTML (Robert T. Morris Language) in his honor. Graham lists Morris as one of his personal heroes, saying "he's never wrong."
Timeline
1983 – Graduated from Delbarton School in Morristown, New Jersey
1987 – Received his Bachelor of Arts (B.A.) from Harvard University.
1988 – Released the Morris worm (when he was a graduate student at Cornell University)
1989 – Indicted under the Computer Fraud and Abuse Act (CFAA) of 1986 on July 26, 1989; the first person to be indicted under the Act
1990 – Convicted in United States v. Morris
1995 – Cofounded Viaweb, a start-up company that made software for building online stores (with Paul Graham)
1998 – Viaweb sold for $49 million to Yahoo, which renamed the software Yahoo! Store
1999 – Received Ph.D. in Applied Sciences from Harvard for thesis titled Scalable TCP Congestion Control
1999 – Appointed as an assistant professor at MIT
2005 – Cofounded Y Combinator, a seed-stage startup venture capital funding firm, that provides seed money, advice, and connections at two 3-month programs per year (with Paul Graham, Trevor Blackwell, and Jessica Livingston)
2006 – Awarded tenure at MIT
2006 – Technical advisor for Cisco Meraki
2008 – Released the programming language Arc, a Lisp dialect (with Paul Graham)
2010 – Awarded the 2010 Special Interest Group in Operating Systems (SIGOPS) Mark Weiser award
2015 – Elected a Fellow of Association for Computing Machinery (ACM, 2014) for "contributions to computer networking, distributed systems, and operating systems."
2019 – Elected to National Academy of Engineering
See also
List of convicted computer criminals
References
Further reading
A Report on the Internet Worm
External links
, at MIT
1965 births
Living people
American computer programmers
American computer scientists
Computer systems researchers
Cornell University alumni
Place of birth missing (living people)
Delbarton School alumni
MIT School of Engineering faculty
Computer security academics
American computer criminals
American technology company founders
People from Long Hill Township, New Jersey
Lisp (programming language) people
American computer businesspeople
Y Combinator people
Harvard School of Engineering and Applied Sciences alumni
People convicted of cybercrime
People charged with computer fraud |
16769210 | https://en.wikipedia.org/wiki/Ne0h | Ne0h | ne0h is a Canadian hacker who received mass media attention in 1999 because of his affiliation with the hacker group globalHell, and was featured in Kevin Mitnick's book The Art of Intrusion and Tom Parker's book Cyber Adversary Characterization: Auditing the Hacker Mind.
His real identity is unknown.
References
External links
List of gLobalHell attacks 1998–1999
Web Archive of the first media attention toward ne0h's hacks John Vranesevich, AntiOnline September 6, 1999
Hacker Shows Just How Easy it Really is Attrition Staff June 18, 1999
Highlights of Kevin Mitnick's new book The Art Of Intrusion Adaptation, Profiles in Deception November 29, 2004
Highlights of Tom Parkers book Cyber Adversary Characterization Auditing the Hacker Mind
Canadian computer criminals
Living people
Hacking (computer security)
Year of birth missing (living people) |
49831 | https://en.wikipedia.org/wiki/Light%20pen | Light pen | A light pen is a computer input device in the form of a light-sensitive wand used in conjunction with a computer's cathode-ray tube (CRT) display.
It allows the user to point to displayed objects or draw on the screen in a similar way to a touchscreen but with greater positional accuracy. A light pen can work with any CRT-based display, but its ability to be used with LCDs was unclear (though Toshiba and Hitachi displayed a similar idea at the "Display 2006" show in Japan).
A light pen detects the change in brightness of nearby screen pixels when they are scanned by the cathode-ray tube's electron beam and reports the timing of this event to the computer. Since a CRT scans the entire screen one pixel at a time, the computer can keep track of when the beam is expected to scan each location on the screen and infer the pen's position from the latest timestamp.
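The timing-to-position calculation can be illustrated with a short sketch, assuming idealized raster timing; the resolution and refresh-rate numbers below are example values rather than those of any particular display, and blanking intervals are ignored for simplicity.

```python
# Illustrative sketch (not from any real driver): recovering a light pen's
# screen position from the time at which it saw the CRT beam pass.
# All timing numbers are assumed example values for a 60 Hz raster display.

LINES_PER_FRAME = 525      # assumed total raster lines per frame
VISIBLE_LINES = 480        # assumed visible portion of the raster
PIXELS_PER_LINE = 640      # assumed horizontal resolution
FRAME_RATE = 60.0          # assumed refresh rate, Hz

FRAME_TIME = 1.0 / FRAME_RATE              # seconds per full frame
LINE_TIME = FRAME_TIME / LINES_PER_FRAME   # seconds per scan line

def pen_position(time_since_vsync):
    """Map the pen's brightness-pulse timestamp (seconds after the start
    of the frame) to an approximate (x, y) pixel position."""
    line = int(time_since_vsync // LINE_TIME)        # which scan line the beam was on
    time_into_line = time_since_vsync % LINE_TIME    # how far along that line
    x = int(time_into_line / LINE_TIME * PIXELS_PER_LINE)
    y = line
    if y >= VISIBLE_LINES or x >= PIXELS_PER_LINE:
        return None                                  # pulse arrived during blanking
    return (x, y)

# Example: a pulse 3.1 ms into the frame lands roughly a fifth of the way down the screen.
print(pen_position(0.0031))
```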
History
The first light pen, at that time still called a "light gun", was created around 1945–1955 as part of the Whirlwind I project at MIT, where it was used to select discrete symbols on the screen. It was later used in the SAGE project for tactical real-time control of a radar-networked airspace.
One of the first more widely deployed uses was in the Situation Display consoles of the AN/FSQ-7 for military airspace surveillance. This is not very surprising, given its relationship with the Whirlwind projects. See Semi-Automatic Ground Environment for more details.
During the 1960s, light pens were common on graphics terminals such as the IBM 2250 and were also available for the IBM 3270 text-only terminal.
Light pen usage was expanded in the early 1980s to music workstations such as the Fairlight CMI and personal computers such as the BBC Micro. IBM PC compatible CGA, HGC and some EGA graphics cards also featured a connector compatible with a light pen, as did early Tandy 1000 computers, the Thomson MO5 computer family, the Atari 8-bit, Commodore 8-bit, some MSX computers and Amstrad PCW home computers. For the MSX computers, Sanyo produced a light pen interface cartridge.
Because the user was required to hold their arm in front of the screen for long periods of time (potentially causing "gorilla arm") or to use a desk that tilts the monitor, the light pen fell out of use as a general-purpose input device.
See also
CueCat
Digital pen
Light gun
Pen computing
Stylus (computing)
Notes
References
External links
Computing input devices
History of human–computer interaction
Pointing devices |
55241036 | https://en.wikipedia.org/wiki/Filip%20Cabinet | Filip Cabinet | The Filip Cabinet was the Cabinet of Moldova led by Pavel Filip from January 2016 to June 2019.
Overview
After the dismissal of the previous cabinet, no consensus was reached by the three pro-European parliamentary parties - the Liberal Democratic Party of Moldova (PLDM), the Democratic Party of Moldova (PDM), and the Liberal Party (PL) - and a new political crisis began. Early on 21 December 2015, Vlad Plahotniuc announced his return to politics and to the Democratic Party, stating that he "will participate directly in the process of forming a new parliamentary majority [...] which also will succeed to gain all the needed votes for the election of the president in the next March" (2016). A few hours later, on the morning of 21 December 2015, a group of 14 communist MPs announced that they were leaving the communist faction in parliament to form a new parliamentary group, the Social Democratic Platform for Moldova (Platforma Social Democrată Pentru Moldova); PDM and the group of former communist MPs then began discussions on forming a new parliamentary majority.
On 13 January 2016 the newly formed parliamentary majority nominated Vlad Plahotniuc for the office of Prime Minister of Moldova; however, the president of Moldova, Nicolae Timofti, rejected the candidacy with a rationale stating that "there are reasonable suspicions that Mr. Vladimir Plahotniuc does not meet the criteria of integrity necessary for his appointment as Prime Minister, considering also that by the Decision no. 5 of 15.02.2013 of the Parliament of the Republic of Moldova, published in the Official Gazette on 22.02.2013, Mr. Vladimir Plahotniuc expressed a vote of mistrust in his capacity as first deputy chairman of the Parliament, accusing him of involvement in illegal activities that are prejudicial to the image of the Parliament, and the Republic of Moldova".
Nicolae Timofti later designated Ion Păduraru for the office of Prime Minister, but after Păduraru withdrew, the parliamentary majority nominated Pavel Filip, the former Minister of Information Technology and Communications. During the parliamentary session, a group of protesters gathered in front of the Parliament building, demanding that the vote be interrupted. After the president appointed Filip as Prime Minister of Moldova and the Filip Cabinet was inaugurated that night, the protests resumed. The cabinet headed by Filip was approved by 57 of 101 MPs: all 20 Democratic Party MPs, 13 Liberal Party MPs, 14 former communist (PCRM) MPs, 8 PLDM MPs, and another 2 former PLDM MPs.
On 13 December 2016, PL chairman Mihai Ghimpu withdrew political support from Anatol Șalaru, the Minister of Defense. After the newly elected president Igor Dodon was sworn in, he signed the decree dismissing Șalaru from the position of Minister of Defense on 27 December. From then on, deputy minister Gheorghe Galbura served as acting Minister of Defense.
On 15 March 2017, the formally independent Minister of Agriculture and Food Industry, Eduard Grama (a person close to the Liberal Democratic Party of Moldova), was detained in an alleged corruption case. A few days later Grama resigned from office, and on 20 March President Dodon formally signed his dismissal decree.
In late April 2017, anti-corruption prosecutors and CNA officers detained the liberal Minister of Transport and Roads Infrastructure, Iurie Chirinciuc, on suspicion of acts of corruption.
After the liberal mayor of Chișinău, Dorin Chirtoacă, was detained by anti-corruption prosecutors and National Anti-Corruption Center officers on 25 May 2017, PL chairman Mihai Ghimpu announced on 26 May that the Liberal Party was leaving the government coalition. On 29 May, three liberal ministers - Deputy Prime Minister for Social Affairs Gheorghe Brega, Minister of Education Corina Fusu, and Minister of Environment Valeriu Munteanu - together with their liberal deputy ministers, announced their resignations. The next day President Dodon signed the decrees for their dismissals, along with one more decree dismissing the liberal Minister of Transport and Roads Infrastructure, Iurie Chirinciuc, who was at that time under house arrest.
In the summer of 2017 a reform of the government was carried out; following the transfers of powers, 9 of the 16 ministries remained, and no liberal ministers remained in the composition of the cabinet. The reform was criticized by the parliamentary opposition. In September 2017 President Dodon twice rejected the cabinet's nominee for the office of Minister of Defence, Eugen Sturza (of the European People's Party of Moldova), citing a "lack of competence in domain". On 24 October 2017, the President of the Parliament of Moldova, Andrian Candu, acting as President of Moldova, signed the decree appointing Sturza as Minister of Defence.
The cabinet was reshuffled on 20 December 2017. Six ministers were dismissed in an attempt to reform the government.
Composition
The Başkan (Governor) of Gagauzia is elected by universal, equal, direct, secret and free suffrage on an alternative basis for a term of four years. The same person may serve as governor for no more than two consecutive terms. The Başkan of Gagauzia is confirmed as a member of the Moldovan government by a decree of the President of Moldova.
Achievements
"The First House" Program
In November 2017, the First House Program was launched. The purpose of the program is to facilitate access to housing by allowing individuals, especially young families, to contract partially state-guaranteed bank loans. The program became functional in March 2018. Analysts had earlier commented that the program favours the owners of construction companies by guaranteeing them more customers; the Finance Minister declined to comment on these assumptions. So far, some 202 people have received credits for house purchases under the "First House" program.
In May 2018, the First House 2 program was started, exclusively for civil servants with at least one year of service in state institutions. About 5,000 public servants could conclude such contracts in that year, as the state proposed to allocate 20 million lei for the program. The aim of the program is to motivate young people to work in budgetary institutions.
In July 2018, the First House Program was expanded to make it more accessible to families with more children. The "First Home" 3 project provides for the gradual offsetting of the mortgage loan from the state budget, from 10 to 100%, depending on the number of children in the family.
The "Good Roads for Moldova" Program
In 2018, the Government undertook to ensure the repair and construction of 1,200 km of roads in rural areas within the national program "Good Roads for Moldova", regardless of the political color of the localities. The Government approved a budget of 972 million lei for the program, with repair works to be carried out in over 1,200 villages. The Ministry of Economy and Infrastructure launched the "Good Roads for Moldova" online map, which contains up-to-date information for each district about the amount of work to be done, the stage of implementation of the project, how many road sections have been restored, are in progress or are still to be repaired, the volume of investment, the type of works performed, and so on. The online map can be viewed at www.drumuribune.md.
Government Reform
On July 26, 2017, the Government of the Republic of Moldova entered a new stage, following Parliament's vote of July 21, 2017 on the new structure of the Executive. Following the transfers of competences, 9 ministries remained of the previous 16. The government reform generated considerable criticism of the process of drafting and implementing the initiative.
The new list of ministries results from renaming and from the takeover of areas of activity, as follows:
The Ministry of Economy took over the fields of activity of the Ministry of Transport and Road Infrastructure and the Ministry of Information Technology and Communications, as well as the construction field from the Ministry of Regional Development and Construction, and was renamed the Ministry of Economy and Infrastructure;
The Ministry of Culture took over the fields of activity of the Ministry of Education and the Ministry of Youth and Sports, as well as the research field from the Academy of Sciences of Moldova, and was renamed the Ministry of Education, Culture and Research;
The Ministry of Labor, Social Protection and Family took over the fields of activity of the Ministry of Health and was renamed the Ministry of Health, Labor and Social Protection;
The Ministry of Regional Development and Construction took over the fields of activity of the Ministry of Agriculture and Food Industry and the Ministry of Environment and was renamed the Ministry of Agriculture, Regional Development and Environment.
At its meeting of August 30, 2017, the Government approved the regulation for the organization and functioning of the new ministries, containing the organizational chart, the structure of the central office, the areas of competence and the staffing limits. All enterprises that are state-owned or have majority state capital were removed from the ministries' subordination so that ministers could focus more on policies; these enterprises are now the responsibility of the Public Property Agency, which is subordinated to the Government.
Information Technology Park "Moldova IT Park"
On January 1, 2017, Law no. 77 on information technology parks entered into force. On October 26, 2017, 15 ATIC companies submitted to the Ministry of Economy and Infrastructure of Moldova an application for the establishment of "Moldova IT Park". Subsequently, on December 20, 2017, the Government approved the regulation for the organization and operation of the park administration and the regulation for the registration of residents.
On January 1, 2018, Moldova's first IT park, "Moldova IT Park", was created for a period of 10 years, during which it plans to attract about 400 IT companies from the Republic of Moldova. The main purpose of the new structure is to provide an organizational platform with a set of innovative mechanisms and facilities to boost the growth of the information technology industry, create new jobs and attract local and foreign investment. The administrator is appointed by the Government for a term of 5 years.
The director of the "Ritlabs" Company, Maxim Masiutin, said that the IT Park offers attractive conditions for employees who will receive the full salary indicated in the individual labor contract without any taxes. In turn, programmer Vitalie Esanu, an IT expert, said that such schemes create opportunities for corruption.
In the first four months after the launch of "Moldova IT Park", 164 residents were registered. Among the facilities and tax incentives granted to residents of IT parks, the biggest attractions for current and potential residents are the single tax of 7% of sales revenue and the removal of bureaucratic barriers.
The advantages of Moldova's IT business environment were presented in Iași, Romania, at the PinAwards regional forum. During the event, the Republic of Moldova was presented as an attractive destination for IT companies, with the benefits of the law on IT parks highlighted.
Launching the Service 112
"112" is the unique number for emergency calls, active in all EU Member States.
It operates on a non-stop basis and can be called absolutely free of charge by every citizen from the fixed and mobile phones.
The "112 Service" project was launched in May 2012. The new system automates all processes through state-of-the-art software. The Republic of Moldova is the second country to use this modern software. The service 112 is tasked with managing much more complex cases for rapid interventions.
On 29 March 2018, the Unique National Service for Emergency Calls 112 was launched. Prime Minister Pavel Filip said at the launch event: "It is a beautiful project, initiated in 2012. It is a soul initiative for me. Its implementation lasted several years, yet we launch the Service nine months ahead of what was expected. Through it, we will provide citizens with modern services just like in the countries of the European Union."
The Government approved the Interaction Rules between the Single National Service for Emergency Calls 112 and Emergency Specialised Services to ensure immediate intervention by rescuers, doctors or police officers to provide the necessary assistance.
The Regulation provides for a clear delimitation of the duties of the 112 and Emergency Specialised Services - the Emergency Medical Service, the General Police Inspectorate and the General Emergency Inspectorate.
The basic responsibilities of the 112 Service include the reception, management and processing of emergency calls throughout the Republic of Moldova, the completion of emergency call records, and the centralization, storage of and access to data managed under the Automated Information System of the 112 Service.
References
External links
Cabinet of Ministers
Filip Cabinet @ alegeri.md
Moldova cabinets
Coalition governments
2016 establishments in Moldova
Cabinets established in 2016 |
4712705 | https://en.wikipedia.org/wiki/Taft%20High%20School%20%28Texas%29 | Taft High School (Texas) | Taft High School is a public high school in the city of Taft, Texas, in San Patricio County, United States, and is classified as a 3A school by the UIL. It is a part of the Taft Independent School District located in east central San Patricio County. In 2013, the school was rated "Met Standard" by the Texas Education Agency.
Extracurricular activities
Taft participates in numerous activities, mostly as part of the University Interscholastic League in District 30 2A, after being realigned from District 30 3A following the 2003-2004 school year. Taft is the second largest school in District 30, behind George West High School. It also has a branch of the National Honor Society and an active Student Council.
Sports/athletics
Through UIL Taft participates in American football, cross country, volleyball, basketball, baseball, softball, track and field, and golf. Taft also participates in powerlifting through the Texas High School Powerlifting Association attending the state competition many times over the past decade. The varsity basketball team has been state ranked for the past two years. The current Athletic Director is Pete Guajardo. The Girls Coordinator is Tasha Wilson.
Academic competitions
Taft participates in the majority of UIL academic events, led for over 14 years by Ms. Mary-Jean Wolter. In 2006, the UIL team placed 3rd overall at the district competition in Falfurrias, with six students advancing to regional competition. Sarah Bailey and Lee Dykes came within points of advancing to state in Computer Applications and Computer Science respectively. In 2007, the team placed 4th overall at the district competition, with four students advancing to regionals in eight events.
Computer science
The Computer Science team, district champion for two consecutive years (2005 and 2006), also competes in the hands-on Texas Computer Education Association (TCEA) competition. In 2007 Lee Dykes and Thomas Putnam qualified for the state TCEA programming competition. The team of Lee Dykes, Thomas Putnam, and Amanda Harbison competed in the UIL 2A South Regional Competition in 2006, and the team of Lee Dykes, Thomas Putnam, Joel Reyes, and Domingo Hiracheta was slated to compete as a wildcard in the same competition in 2007. Lee Dykes was district champion in 2006 and placed 2nd at the 2007 district competition behind the previous state champion, advancing to state after placing 3rd at the regional level.
In 2008, Domingo Hiracheta also advanced to State level in TCEA as a one-man team, behind a three-man team from Port Aransas, as well as placing third individually in Computer Science Regionals for UIL, qualifying for State.
Robotics
In 2007, Taft High School participated for the first time in the BEST Robotics competition. Led by Domingo Hiracheta, the first-year rookie team swept the competition floor, winning the Regional Finals by 200 points. The team won a total of five trophies, among them Rookie Team of the Year and Most Agile Robot.
Theatre
The school hosts an active drama team which holds an annual talent show and participates in one act play competition. In 2006, Ariana Cancino was also selected to be part of the 2006 All-State one-act play crew. Many years the team travels to the spring Texas Educational Theater Association Theaterfest conference. The theater teacher is Kerri Ramos.
Band
The "Band with Pride", Taft High School Band had received 21 consecutive UIL sweepstakes, until the 2006-2007 school year, which entails receiving a first division rating in marching, concert performance, and sight reading. Most years it attends three Battle of the Bands-style pre-UIL competitions. Band members are also encouraged to participate in try-outs for All-Region, All-State, and All-State Jazz bands created by the Association of Texas Small School Bands. Every third year the band travels to Florida to perform in Walt Disney World's Magic Music Days.
Winterguard
The band includes an active winter guard, under director Meisha Hinojosa, that has placed in state competition in recent years with many of its members also placing in State Solo and Ensemble competitions.
In 2007-2008, the Taft Ultimate Impact Winterguard won state in the Novice Red Division (300).
In 2008-2009, the Winterguard won state in Scholastic A (Bring Me To Life).
References
External links
Taft Independent School District website
Greatschools.net
Schools in San Patricio County, Texas
Public high schools in Texas |
61194805 | https://en.wikipedia.org/wiki/Snappy%20Gifts | Snappy Gifts | Snappy Gifts is a multinational company based in New York. The company, founded in 2015, provides companies with a system to offer workers personalized gifts.
History
Snappy Gifts was founded in 2015 by Dvir Cohen and Hani Goldstein in San Francisco and later moved its headquarters to New York. Initially, the company raised 1.6 million dollars and started off focusing on "personal client gifting" but in 2017 shifted its business model to corporate gifting while offering an enterprise version of its platform.
The company later opened an additional branch in Tel Aviv.
In 2017, Snappy was included in Retail Accelerator XRC Labs' third cohort of startups.
In late 2018, a company survey detailing the top 25 worst corporate gifts was featured in Business News Daily and Fortune Magazine.
By 2019, the company raised an additional 8.5 million dollars in a funding round which was led by 83North and Hearst Ventures.
In September 2019, Snappy Gifts was listed in Forbes Magazine's list of "10 Most Promising Young Israeli Startups in New York".
Snappy Gifts partners with many globally-known companies such as Microsoft, Adobe, Comcast and Uber and also with human resources firms such as TriNet, ADP, BambooHR, HR Uncubed, and Crain's Best Places to Work.
In early 2021, Snappy Gifts was ranked in first place in Inc. magazine's list of Top 250 Fastest-Growing Private Companies in the New York Metro Area. In May of that year, Snappy Gifts raised another 70 million dollars in a series C funding round.
The Snappy platform
The Snappy Gifts system, which is available to both mobile and desktop users, provides companies with software for personalized gifting based on employee data such as age, gender, and location, and can also be synced to provide gift recommendations tied to time-specific events such as birthday celebrations and work anniversaries. Gifts are sent via mobile text message or email. The recipient gets a "virtual scratch card" which reveals the gift their employer has pre-chosen for them. The employee then has the option to accept it or swap it, while the employer later receives an email with the purchase request. Snappy Gifts are sourced through retailers and brands such as Amazon, Birchbox, Cloud9Living and Best Buy. The system is also designed with a "thank you" note feature which allows managers to see the immediate impact on their employees.
References
External links
American companies established in 2015
Companies based in Tel Aviv
Software companies based in New York City
Software companies of Israel
2015 establishments in California
Human resource management software
Employment compensation
Employee relations
Business software
Software companies of the United States
2015 establishments in New York City
Software companies established in 2015 |
14350709 | https://en.wikipedia.org/wiki/AC3D | AC3D | AC3D is a 3D design program which has been available since 1994. The software is used by designers for modeling 3D graphics for games and simulations - most notably it is used by the scenery creators at Laminar Research on the X-Plane (simulator). The .ac format has also been used in FlightGear for scenery objects and aircraft models.
History
Initially developed on the Amiga, the code was then ported to Silicon Graphics workstations which used the GL graphics library. At that time, the user interface was implemented using X-Window/Motif.
A Linux port was released onto the internet in 1994 (the GL graphics were replaced with OpenGL). A Microsoft Windows port followed when the X-Window interface was dropped in favor of the portable Tcl/Tk scripting library.
In 2002, Inivis Limited purchased the full intellectual property rights to AC3D and continues to develop and market the software. They decided to keep the name AC3D for the software.
In 2005, a Mac OS X version of AC3D was released.
Modeling
AC3D's modeling is polygon/subdivision-surface based. Unlike some other 3D software, AC3D refers to 'surfaces' rather than 'polygons'. An AC3D surface can be a polygon, polygon-outline or line. An AC3D object is a collection of surfaces.
3D files
AC3D can load and save a wide variety of 3D file formats but primarily uses its own .ac file format, which is plain text (ASCII).
Inivis is the first 3rd party vendor to offer officially sanctioned support for the Second Life sculpted prim format; exporters for other 3D software packages exist, but are solely user-supported.
Scripting and plugins
Extra functionality can be added to AC3D via Tcl/Tk scripts and/or C/C++ dynamic libraries (plug-ins). A software development kit (SDK) is available to licensed users.
References
External links
Inivis Limited
AC3D file format
3D graphics software
Computer-aided design software for Linux
3D computer graphics software for Linux
Proprietary commercial software for Linux
Software that uses Tk (software) |
1581035 | https://en.wikipedia.org/wiki/ILOVEYOU | ILOVEYOU | ILOVEYOU, sometimes referred to as Love Bug or Love Letter for you, is a computer worm that infected over ten million Windows personal computers on and after 5 May 2000. It started spreading as an email message with the subject line "ILOVEYOU" and the attachment "LOVE-LETTER-FOR-YOU.TXT.vbs." At the time, Windows computers often hid the latter file extension ("VBS," a type of interpreted file) by default because it is an extension for a file type that Windows knows, leading unwitting users to think it was a normal text file. Opening the attachment activates the Visual Basic script. First, the worm inflicts damage on the local machine, overwriting random files (including Office files and image files; however, it hides MP3 files instead of deleting them). Then, the worm copies itself to all addresses in the Windows Address Book used by Microsoft Outlook, allowing it to spread much faster than any other previous email worm.
Onel de Guzman, a then-24-year-old resident of Manila, Philippines, created the malware. Because there were no laws in the Philippines against making malware at the time of its creation, the Philippine Congress enacted Republic Act No. 8792, otherwise known as the E-Commerce Law, in July 2000 to discourage future iterations of such activity. However, the Constitution of the Philippines bans ex post facto laws, and as such, de Guzman could not be prosecuted.
Creation
ILOVEYOU was created by Onel de Guzman, a college student in Manila, Philippines, who was 24 years old at the time. De Guzman, who was poor and struggling to pay for Internet access at the time, created the computer worm intending to steal other users' passwords, which he could use to log in to their Internet accounts without needing to pay for the service. He justified his actions on his belief that Internet access is a human right and that he was not actually stealing.
The worm used the same principles that de Guzman had described in his undergraduate thesis at AMA Computer College. He stated that the worm was very easy to create, thanks to a bug in Windows 95 that would run code in email attachments when the user clicked on them. Originally designing the worm to only work in Manila, he removed this geographic restriction out of curiosity, which allowed the worm to spread worldwide. De Guzman did not expect this worldwide spread.
Description
On the machine system level, ILOVEYOU relied on the scripting engine system setting (which runs scripting language files such as .vbs files) being enabled, and took advantage of a feature in Windows that hid file extensions by default, which malware authors would use as an exploit. Windows parsed file names from right to left, stopping at the first period character, and showed only the elements to the left of it. The attachment, which had two periods, could thus display the inner fake "TXT" file extension. True text files are considered innocuous, as they are incapable of running executable code. The worm used social engineering to entice users to open the attachment (out of actual desire to connect or simple curiosity) to ensure continued propagation. Systemic weaknesses in the design of Microsoft Outlook and Microsoft Windows were exploited to allow malicious code to gain complete access to the operating system, secondary storage, and system and user data, simply through unwitting users clicking on an icon.
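A small sketch, assuming a simplified model of the "hide extensions for known file types" behaviour (not actual Windows code), shows why the attachment's double extension was deceptive: only the final, registered extension is removed from the displayed name.

```python
# Illustrative sketch of extension hiding; the set of "known" types below
# is an assumed subset chosen for the example.

HIDDEN_EXTENSIONS = {".vbs", ".exe", ".txt"}

def displayed_name(filename):
    """Return the name as shown when known extensions are hidden."""
    base, dot, ext = filename.rpartition(".")
    if dot and ("." + ext.lower()) in HIDDEN_EXTENSIONS:
        return base          # drop only the last, "known" extension
    return filename

print(displayed_name("LOVE-LETTER-FOR-YOU.TXT.vbs"))  # -> LOVE-LETTER-FOR-YOU.TXT
```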
Spread
Messages generated in the Philippines began to spread westwards through corporate email systems. Because the worm used mailing lists as its source of targets, the messages often appeared to come from acquaintances and were therefore often regarded as "safe" by their victims, providing further incentive to open them. Only a few users at each site had to access the attachment to generate millions more messages that crippled mail systems and overwrote millions of files on computers in each successive network.
Impact
The worm originated in the Pandacan neighborhood of Manila in the Philippines on May 4, 2000, thereafter following daybreak westward across the world as employees began their workday that Friday morning, moving first to Hong Kong, then to Europe, and finally the United States. The outbreak was later estimated to have caused US$5.5–8.7 billion in damages worldwide, with an estimated cost of US$10–15 billion to remove the worm. Within ten days, over fifty million infections had been reported, and it is estimated that 10% of Internet-connected computers in the world had been affected. The damage cited was mostly the time and effort spent getting rid of the infection and recovering files from backups. To protect themselves, the Pentagon, the CIA, the British Parliament and most large corporations decided to completely shut down their mail systems. At the time, it was one of the world's most destructive computer-related disasters.
The events inspired the song "E-mail" on the Pet Shop Boys' UK top-ten album of 2002, Release, the lyrics of which play thematically on the human desires which enabled the mass destruction of this computer infection.
Architecture
De Guzman wrote the ILOVEYOU script (the attachment) in Microsoft Visual Basic Scripting (VBS), which ran in Microsoft Outlook and was enabled by default. The script adds Windows Registry data for automatic startup on system boot.
The worm searches connected drives and replaces files with extensions JPG, JPEG, VBS, VBE, JS, JSE, CSS, WSH, SCT, DOC, HTA, MP2, and MP3 with copies of itself, while appending the additional file extension VBS. However, MP3s and other sound-related files would be hidden rather than overwritten.
The worm propagates itself by sending one copy of the payload to each entry in the Microsoft Outlook address book (Windows Address Book). It also downloads the Barok trojan renamed for the occasion as "WIN-BUGSFIX.EXE."
The fact that the worm was written in VBS allowed users to modify it. A user could easily change the worm to replace essential files and destroy the system, allowing more than 25 variations of ILOVEYOU to spread across the Internet, each doing different kinds of damage. Most of the variations had to do with what file extensions were affected by the worm. Others modified the email subject to target a specific audience, like the variant "Cartolina" in Italian or "BabyPic" for adults. Some others only changed the credits to the author, which were initially included in the standard version of the virus, removing them entirely or referencing false authors. Still, others overwrote "EXE" and "COM" files. The user's computer would then be unbootable upon restarting.
Some mail messages sent by ILOVEYOU:
VIRUS ALERT!!
Important! Read Carefully!!
Investigation
On 5 May 2000, two young Filipino programmers named Reonel Ramones and Onel De Guzman became targets of a criminal investigation by agents of the Philippines' National Bureau of Investigation (NBI). Local Internet service provider Sky Internet had reported receiving numerous contacts from European computer users alleging that malware (in the form of the "ILOVEYOU" worm) had been sent via the ISP's servers.
De Guzman attempted to hide the evidence by removing his computer from his apartment, but he accidentally left some disks behind that contained the worm, as well as information that implicated Michael Buen as a possible co-conspirator.
After surveillance and investigation by Darwin Bawasanta of Sky Internet, the NBI traced a frequently appearing telephone number to Ramones' apartment in Manila. His residence was searched and Ramones was arrested and placed under investigation by the Department of Justice (DOJ). Onel De Guzman was also charged in absentia.
At that point, the NBI were unsure what felony or crime would apply. It was suggested they be charged with violating Republic Act 8484 (the Access Device Regulation Act), a law designed mainly to penalise credit card fraud, since both used pre-paid (if not stolen) Internet cards to purchase access to ISPs. Another idea was that they be charged with malicious mischief, a felony (under the Philippines Revised Penal Code of 1932) involving damage to property. The drawback here was that one of its elements, aside from damage to property, was intent to damage, and De Guzman had claimed during custodial investigations that he might have unwittingly released the worm. At a press conference organised by his lawyer on 11 May, he said "It is possible" when asked whether he might have done so.
To show intent, the NBI investigated AMA Computer College, where De Guzman had dropped out at the very end of his final year. They found that, for his undergraduate thesis, he had proposed the implementation of a trojan to steal Internet login passwords. This way, he claimed, would allow users to finally be able to afford an Internet connection. The proposal was rejected by the college of Computer Studies board, leading De Guzman to claim that his professors were close-minded.
Aftermath
Since there were no laws in the Philippines against writing malware at the time, both Ramones and de Guzman were released with all charges dropped by state prosecutors. To address this legislative deficiency, the Philippine Congress enacted Republic Act No. 8792, otherwise known as the E-Commerce Law, in July 2000, months after the worm outbreak.
In 2012, the Smithsonian Institution named ILOVEYOU the tenth-most virulent computer virus in history.
De Guzman did not want public attention. His last known public appearance was at the 2000 press conference, where he obscured his face and allowed his lawyer to answer most questions; his whereabouts remained unknown for 20 years afterward. In May 2020, investigative journalist Geoff White revealed that while researching his cybercrime book Crime Dot Com, he had found Onel de Guzman working at a mobile phone repair stall in Manila. De Guzman admitted to creating and releasing the virus. He claimed he had initially developed it to steal Internet access passwords, since he could not afford to pay for access. He also stated that he created it alone, clearing the two others who had been accused of co-writing the worm.
See also
Christmas Tree EXEC
Code Red worm
Computer virus
Nimda (computer worm)
Timeline of notable computer viruses and worms
References
External links
The Love Bug - A Retrospect
ILOVEYOU Virus Lessons Learned Report, Army Forces Command
Radsoft: The ILOVEYOU Roundup
"No 'sorry' from Love Bug author" at The Register
CERT Advisory CA-2000-04 Love Letter Worm
Computer worms
Email worms
Communications in the Philippines
2000 in the Philippines
Hacking in the 2000s
2000 introductions |
41597819 | https://en.wikipedia.org/wiki/SpaceEngine | SpaceEngine | SpaceEngine (stylized as "Space Engine") is an interactive 3D planetarium and astronomy software developed by Russian astronomer and programmer Vladimir Romanyuk. It creates a 1:1 scale three-dimensional planetarium representing the entire observable universe from a combination of real astronomical data and scientifically-accurate procedural generation algorithms. Users can travel through space in any direction or speed, and forwards or backwards in time. SpaceEngine is in beta status and up to version 0.980, released in July 2016, it was and still is available as a freeware download for Microsoft Windows. Version 0.990 beta was the first paid edition, released in June 2019 on Steam. The program has full support for VR headsets.
Properties of objects, such as temperature, mass, radius, spectrum, etc., are presented to the user on the HUD and in an accessible information window. Users can observe celestial objects ranging from small asteroids or moons to large galaxy clusters, similar to other simulators such as Celestia. The default version of SpaceEngine includes over 130,000 real objects, including stars from the Hipparcos catalog, galaxies from the NGC and IC catalogs, many well-known nebulae, and all known exoplanets and their stars.
Functionality
The proclaimed goal of SpaceEngine is scientific realism, and to reproduce every type of known astronomical phenomenon. It uses star catalogs along with procedural generation to create a cubical universe 10 billion parsecs (32.6 billion light-years) on each side, centered on the barycenter of the Solar System. Within the software, users can use search tools to filter through astronomical objects based on certain characteristics. In the case of planets and moons, specific environmental types, surface temperatures, and pressures can be used to filter through the vast amount of different procedurally generated worlds.
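The procedural side can be illustrated with a generic sketch of coordinate-seeded generation; this is not SpaceEngine's actual algorithm, and the attribute names and value ranges are invented. Deriving a pseudo-random seed from an object's coordinates lets the same objects reappear identically on every visit without storing the whole universe.

```python
# Generic illustration of coordinate-seeded procedural generation.
# Not SpaceEngine's code; all ranges and probabilities are invented.

import hashlib
import random

def star_at(x, y, z):
    """Deterministically generate star properties for a grid cell."""
    seed = int.from_bytes(hashlib.sha256(f"{x},{y},{z}".encode()).digest()[:8], "big")
    rng = random.Random(seed)
    return {
        "mass_solar": round(rng.uniform(0.08, 20.0), 2),   # invented range
        "temperature_K": rng.randint(2500, 30000),          # invented range
        "has_planets": rng.random() < 0.7,                  # invented probability
    }

# The same coordinates always give back the same star.
print(star_at(12, -7, 3))
print(star_at(12, -7, 3))  # identical output
```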
SpaceEngine also has a built-in flight simulator (currently in Alpha) which allows for users to spawn in a selection of fictional spacecraft which can be flown in an accurate model of orbital mechanics and also an atmospheric flight model when entering the atmospheres of the various planets and moons. The spacecraft range from small SSTO spaceplanes, to large interstellar spacecraft which are all designed with realism in mind, featuring radiators, fusion rockets, and micrometeorite shields. Interstellar spacecraft simulate the hypothetical Alcubierre drive.
Catalog objects
The real objects that SpaceEngine includes are the Hipparcos catalog for stars, the NGC and IC catalogs for galaxies, all known exoplanets, and prominent star clusters, nebulae, and Solar System objects including some comets.
Wiki and locations
The software has its own built-in "wiki" database which gives detailed information on all celestial objects and enables a player to create custom names and descriptions for them. It also has a locations database where a player can save any position and time in the simulation and load it again in the future.
Limitations
Although objects that form part of a planetary system move, and stars rotate about their axes and orbit each other in multiple star systems, stellar proper motion is not simulated, and galaxies are at fixed locations and do not rotate.
Most real-world spacecraft such as Voyager 2 are not provided with SpaceEngine.
Interstellar light absorption is not modeled in SpaceEngine.
Development
Development of SpaceEngine began in 2005, with its first public release in June 2010. The software is written in C++. The engine uses OpenGL as its graphical API and uses shaders written in GLSL. As of the release of version 0.990, the shaders have been encrypted to protect against plagiarism. Plans have been made to start opening them in a way that allows the community to develop special content for the game, with ship engine effects being made available to users who have purchased the game.
On May 27, 2019, the Steam store page for SpaceEngine was made public in preparation for the release of the first paid version, 0.990 beta.
SpaceEngine is currently only available for Windows PCs; however, Romanyuk has plans for the software to support macOS and Linux in the future.
See also
Celestia
Space flight simulation game
List of space flight simulation games
Planetarium software
List of observatory software
List of games with Oculus Rift support
Gravity (software)
References
External links
Russian website
SpaceEngine Forum
2010 video games
Astronomy software
Science software for Windows
Steam Greenlight games
Video game engines
Articles containing video clips
Video games developed in Russia
Windows games
Windows-only games |
3537964 | https://en.wikipedia.org/wiki/Historicity%20of%20the%20Homeric%20epics | Historicity of the Homeric epics | The extent of the historical basis of the Homeric epics has been a topic of scholarly debate for centuries.
While researchers of the 18th century had largely rejected the story of the Trojan War as fable, the discoveries made by Heinrich Schliemann at Hisarlik reopened the question in modern terms, and the subsequent excavation of Troy VIIa and the discovery of the toponym "Wilusa" in Hittite correspondence has made it plausible that the Trojan War cycle was at least remotely based on a historical conflict of the 12th century BC, even if the poems of Homer are removed from the event by more than four centuries of oral tradition.
History
In antiquity, educated Greeks accepted the truth of human events depicted in the Iliad and Odyssey, even as philosophical scepticism was undermining faith in divine intervention in human affairs. In the time of Strabo, topographical disquisitions discussed the identity of sites mentioned by Homer. This continued when Greco-Roman culture was Christianised: Eusebius of Caesarea offered universal history reduced to a timeline, in which Troy received the same historical weight as Abraham, with whom Eusebius' Chronologia began, ranking the Argives and Mycenaeans among the kingdoms ranged in vertical columns, offering biblical history on the left (verso), and secular history of the kingdoms on the right (recto). Jerome's Chronicon followed Eusebius, and all the medieval chroniclers began with summaries of the universal history of Jerome.
With such authorities accepting it, post-Roman Europeans continued to accept Troy and the events of the Trojan War as historical. Geoffrey of Monmouth's pseudo-genealogy traced a Trojan origin for royal Briton descents in Historia Regum Britanniae. Merovingian descent from a Trojan ancestor was embodied in a literary myth first established in Fredegar's chronicle (2.4, 3.2.9), to the effect that the Franks were of Trojan stock and adopted their name from King Francio, who had built a new Troy on the banks of the river Rhine (modern Treves). However, even before the so-called Age of Enlightenment of the 18th century these supposed facts of the medieval concept of history were doubted by Blaise Pascal: "Homer wrote a romance, for nobody supposes that Troy and Agamemnon existed any more than the apples of the Hesperides. He had no intention to write history, but only to amuse us." During the 19th century the stories of Troy were devalued as fables by George Grote.
The discoveries made by Heinrich Schliemann at Hisarlik revived the question during modern times, and recent discoveries have resulted in more discussion. According to Jeremy B. Rutter, archaeological finds thus far can neither prove nor disprove whether Hisarlik VIIa was sacked by Mycenaean Greeks sometime between 1325 and 1200 BC.
No text or artefact found on the site itself clearly identifies the Bronze Age site by name. This is due probably to the levelling of the former hillfort during the construction of Hellenistic Ilium (Troy IX), destroying the parts that most likely contained the city archives. A single seal of a Luwian scribe has been found in one of the houses, proving the presence of written correspondence in the city, but not a single text. Research by Anatolian specialists indicates that what is called "Troy" was in the Late Bronze Age known to the Hittites as the kingdom of Wilusa, and that it appears that there were several armed conflicts in the area at the end of the Late Bronze Age, although this does not identify the combatants.
The bilingual toponymy of Troy/Ilion is well established in the Homeric tradition. The Mycenaean Greeks of the 13th century BC had colonized the Greek mainland and Crete, and were beginning to make forays into Anatolia. Philologist Joachim Latacz identifies the "Achaioi" of the Iliad with the inhabitants of Ahhiyawa. He posits that in all probability the Iliad preserved, through oral hexameters, the memory of one or more acts of aggression perpetrated by the Ahhiyawans against Wilusa in the 13th century BC.
Status of the Iliad
The more that is known about Bronze Age history, the clearer it becomes that this is not a yes-or-no question but one of educated assessment of how much historical knowledge is present in Homer, and whether it represents a retrospective memory of Dark Age Greece, as Finley concludes, or of Mycenaean Greece, which is the dominant view of A Companion to Homer, A.J.B. Wace and F.H. Stubbings, eds. (New York/London: Macmillan 1962). The particular narrative of the Iliad is not an account of the war, but a tale of the psychology, the wrath, vengeance and death of individual heroes, which assumes common knowledge of the Trojan War as a back-story. No scholar now assumes that the individual events of the tale (many of which involve divine intervention) are historical fact; equally, however, no scholar claims that the story is entirely devoid of memories of Mycenaean times.
However, in addressing a separate controversy, the Oxford Professor of Greek Martin L. West indicated that such an approach "misconceives" the problem, and that Troy probably fell to a much smaller group of attackers in a much shorter time.
The Iliad as essentially legendary
Some archaeologists and historians, most notably, until his death in 1986, Moses I. Finley, maintain that none of the events in Homer's works is historical. Others accept that there may be a foundation of historical events in the Homeric narrative, but say that in the absence of independent evidence it is not possible to separate fact from myth.
Finley in The World of Odysseus presents a picture of the society represented by the Iliad and the Odyssey, avoiding the question as "beside the point that the narrative is a collection of fictions from beginning to end". Finley was in a minority when his World of Odysseus first appeared in 1954. With the understanding that war was the normal state of affairs, Finley observed that a ten-year war was out of the question, indicating Nestor's recall of a cattle-raid in Elis as a norm, and identifying the scene in which Helen points out to Priam the Achaean leaders in the battlefield, as "an illustration of the way in which one traditional piece of the story was retained after the war had ballooned into ten years and the piece had become rationally incongruous."
Finley, for whom the Trojan War is "a timeless event floating in a timeless world", analyzes the question of historicity, aside from invented narrative details, into five essential elements: 1. Troy was destroyed by a war; 2. the destroyers were a coalition from mainland Greece; 3. the leader of the coalition was a king named Agamemnon; 4. Agamemnon's overlordship was recognized by the other chieftains; 5. Troy, too, headed a coalition of allies. Finley does not find any evidence for any of these elements.
Beyond narrative detail, Finley pointed out that, apart from some correlation of Homeric placenames with Mycenaean sites, there is the fact that the heroes lived at home in palaces (oikoi) unknown in Homer's day; far from offering a nostalgic recall of the Mycenaean age, Finley asserts, "the catalog of his errors is very long".
His arms bear a resemblance to the armour of his time, quite unlike the Mycenaean, although he persistently casts them in antiquated bronze, not iron. His gods had temples, and the Mycenaeans built none, whereas the latter constructed great vaulted tombs to bury their chieftains in and the poet cremates his. A neat little touch is provided by the battle chariots. Homer had heard of them, but he did not really visualize what one did with chariots in a war. So his heroes normally drove from their tents a mile or less away, carefully dismounted, and then proceeded to battle on foot.
What the poet believed he was singing about was the heroic past of his own Greek world, Finley concludes.
During recent years scholars have suggested that the Homeric stories represented a synthesis of many old Greek stories of various Bronze Age sieges and expeditions, fused together in the Greek memory during the "dark ages" which followed the end of the Mycenean civilization. In this view, no historical city of Troy existed anywhere: the name perhaps derives from a people called the Troies, who probably lived in central Greece. The identification of the hill at Hisarlık as Troy is, in this view, a late development, following the Greek colonisation of Asia Minor during the 8th century BC.
It is also worth comparing the details of the Iliadic story to those of older Mesopotamian literature—most notably, the Epic of Gilgamesh. Names, set scenes, and even major parts of the story, are strikingly similar. Some academics believe that writing first came to Greece from the east, via traders, and these older poems were used to demonstrate the uses of writing, thus heavily influencing early Greek literature.
The Iliad as essentially historical
Another opinion is that Homer was heir to an unbroken tradition of oral epic poetry reaching back some 500 years into Mycenaean times. The case is set out in The Singer of Tales by Albert B. Lord, citing earlier work by folklorist and mythographer Milman Parry. In this view, the poem's core could represent a historical campaign that took place towards the end of the Mycenaean era. Much legendary material may have been added, but in this view it is meaningful to ask for archaeological and textual evidence corresponding to events referred to in the Iliad. Such a historical background would explain the geographical knowledge of Hisarlık and the surrounding area, which could alternatively have been obtained, in Homer's time, by visiting the site. Some verses of the Iliad have been argued to predate Homer's time, and could conceivably date back to the Mycenaean era. Such verses only fit the poem's meter if certain words are pronounced with a /w/ sound, which had vanished from most Greek dialects by the 7th century BC.
The Iliad as partly historical
As noted above, it is most likely that the Homeric tradition interweaves elements of historical fact with elements of fiction. Homer describes a city, presumably of the Bronze Age, near Mount Ida in northwest Turkey; such a city did exist, at the mound of Hisarlık.
Homeric evidence
[Map of Bronze Age Greece as described in Homer's Iliad]
The Catalogue of Ships mentions a great variety of cities, some of which, including Athens, were inhabited both in the Bronze Age and in Homer's time, and some of which, such as Pylos, were not rebuilt after the Bronze Age. This suggests that the names of no-longer-existing towns were remembered from an older time: it is unlikely that Homer could otherwise have successfully named a diverse list of important Bronze Age cities that were, in his time, only a few blocks of rubble on the surface, often without even names. Furthermore, the cities enumerated in the Catalogue are given in geographical clusters, revealing a sound knowledge of Aegean topography. Some evidence is equivocal: locating the Bronze Age palace of Sparta, the traditional home of Menelaus, under the modern city has been challenging, though archaeologists have discovered at least one Mycenaean-era site about 7.5 miles outside of Sparta.
Mycenaean evidence
Likewise, some Homeric names appear in the Mycenaean Greek Linear B tablets, including Achilles (Linear B a-ki-re-u), a name which was also common in the classical period, noted on tablets from both Knossos and Pylos. The Achilles of the Linear B tablet is a shepherd, not a king or warrior, but the very fact that the name is an authentic Bronze Age name is significant. These names in the Homeric poems presumably remember, if not necessarily specific people, at least an older time when people's names were not the same as they were when the Homeric epics were written down. Some story elements from the tablets also appear in the Iliad.
Hittite evidence
The first person to point to the Hittite texts as a possible primary source was the Swiss scholar Emil Forrer in the 1920s and 1930s. In discussing an ethnic group called the Ahhiyawa in these texts, Forrer drew attention to the place names Wilusa and Taruisa, which he argued were the Hittite way of writing Wilios (Ϝίλιος, old form of Ιlios) and Troia (Troy). He also noted the mention of a Wilusan king Alaksandu, who had concluded a treaty with the Hittite king Muwatalli; the name of this king closely resembled Alexandros/Alexander, the alternative name of Paris, the son of king Priam. Other identifications Forrer offered included Priam with Piyama-Radu, and Eteocles, king of Orchomenos, with one Tawagalawa. However, despite his arguments, many scholars dismissed Forrer's identification of Wilusa-(W)ilios/Troia-Taruisa as either improbable or at least unprovable, since until recently the known Hittite texts provided no clear indication where the kingdom of Wilusa was located beyond somewhere in Western Anatolia.
General scholarly opinion about this identification changed with the discovery of a text join to the Manapa-Tarhunda letter, which located Wilusa beyond the Seha River near the Lazpa land. Modern scholars identify the Seha with the Classical Caicus River, which is the modern Bakırçay, and the Lazpa land is the more familiar isle of Lesbos. As Trevor Bryce observes, "This must considerably strengthen the possibility that the two were directly related, if not identical."
Despite this evidence, the surviving Hittite texts do not provide an independent account of the Trojan War. The Manapa-Tarhunda letter is about a member of the Hittite ruling family, Piyama-Radu, who gained control of the kingdom of Wilusa, and whose only serious opposition came from the author of this letter, Manapa-Tarhunda. King Muwatalli of the Hittites was the opponent of this king of Troy, and the result of Muwatalli's campaign is not recorded in the surviving texts. The Ahhiyawa, generally identified with the Achaean Greeks, are mentioned in the Tawagalawa letter as the neighbors of the kingdom of Wilusa, and who provided a refuge for the troublesome renegade Piyama-Radu. The Tawagalawa letter mentions that the Hittites and the Ahhiyawa fought a war over Wilusa.
Geological evidence
In November 2001, geologist John C. Kraft from the University of Delaware presented the results of investigations into the geology of the region that had begun in 1977. The geologists compared the present geology with the landscapes and coastal features described in the Iliad and other classical sources, notably Strabo's Geographia. They concluded that there is a broad consistency between the location of Troy identified as Hisarlik (and other locations such as the Greek camp), the geological evidence, and the descriptions of the topography and accounts of the battle in the Iliad.
See also
Historical Troy uncovered
Homeric Question
Historicity of the Exodus
Notes
References
External links
(Dartmouth College) Prehistoric Archaeology of the Aegean: 27. Troy VII and the Historicity of the Trojan War
The Greek Age of Bronze "Trojan War"
Hawkins, J.D., "Evidence from Hittite Records", Archaeology, Vol. 57, Number 3, May/June 2004
Iliad
Wilusa
Trojan War
Troad
Greek mythology studies
Iliad
Homeric scholarship |
11185636 | https://en.wikipedia.org/wiki/Information%20security%20management | Information security management | Information security management (ISM) defines and manages controls that an organization needs to implement to ensure that it is sensibly protecting the confidentiality, availability, and integrity of assets from threats and vulnerabilities. The core of ISM includes information risk management, a process which involves the assessment of the risks an organization must deal with in the management and protection of assets, as well as the dissemination of the risks to all appropriate stakeholders. This requires proper asset identification and valuation steps, including evaluating the value of confidentiality, integrity, availability, and replacement of assets. As part of information security management, an organization may implement an information security management system and other best practices found in the ISO/IEC 27001, ISO/IEC 27002, and ISO/IEC 27035 standards on information security.
Risk management and mitigation
Managing information security in essence means managing and mitigating the various threats and vulnerabilities to assets, while at the same time balancing the management effort expended on potential threats and vulnerabilities by gauging the probability of them actually occurring. A meteorite crashing into a server room is certainly a threat, for example, but an information security officer will likely put little effort into preparing for such a threat.
After appropriate asset identification and valuation has occurred, risk management and mitigation of risks to those assets involves the analysis of the following issues:
Threats: Unwanted events that could cause the deliberate or accidental loss, damage, or misuse of information assets
Vulnerabilities: How susceptible information assets and associated controls are to exploitation by one or more threats
Impact and likelihood: The magnitude of potential damage to information assets from threats and vulnerabilities and how serious of a risk they pose to the assets; cost–benefit analysis may also be part of the impact assessment or separate from it
Mitigation: The proposed method(s) for minimizing the impact and likelihood of potential threats and vulnerabilities
Once a threat and/or vulnerability has been identified and assessed as having sufficient impact/likelihood to information assets, a mitigation plan can be enacted. The mitigation method chosen largely depends on which of the seven information technology (IT) domains the threat and/or vulnerability resides in. The threat of user apathy toward security policies (the user domain) will require a much different mitigation plan than one used to limit the threat of unauthorized probing and scanning of a network (the LAN-to-WAN domain).
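A minimal sketch of this impact-and-likelihood assessment is shown below. The asset names, threat descriptions, five-point scales, scoring rule (impact multiplied by likelihood) and the "mitigate above a threshold" cut-off are illustrative assumptions rather than requirements of any standard; they simply show how a risk register can rank threats so that limited mitigation effort goes where it matters most.

```python
# Hypothetical risk-register sketch: risk score = impact x likelihood on 1-5 scales.
# Asset names, threats, scores and the mitigation threshold are illustrative
# assumptions, not taken from ISO/IEC 27001 or any other standard.
from dataclasses import dataclass


@dataclass
class Risk:
    asset: str        # information asset at risk
    threat: str       # unwanted event that could harm the asset
    impact: int       # 1 (negligible) .. 5 (severe)
    likelihood: int   # 1 (rare) .. 5 (almost certain)

    @property
    def score(self) -> int:
        return self.impact * self.likelihood


register = [
    Risk("customer database", "SQL injection against the web front end", 5, 3),
    Risk("customer database", "meteorite destroys the server room", 5, 1),
    Risk("staff laptops", "device lost or stolen without disk encryption", 3, 4),
]

# Rank risks so that limited mitigation effort goes to the highest scores first.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    action = "mitigate now" if risk.score >= 10 else "accept / monitor"
    print(f"{risk.score:>2}  {risk.threat:<45} -> {action}")
```

The ranking mirrors the point made earlier: the meteorite is a high-impact but very unlikely threat and so attracts little effort, while more probable threats rise to the top of the list.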
Information security management system
An information security management system (ISMS) represents the collation of all the interrelated/interacting information security elements of an organization so as to ensure policies, procedures, and objectives can be created, implemented, communicated, and evaluated to better guarantee an organization's overall information security. This system is typically influenced by the organization's needs, objectives, security requirements, size, and processes. An ISMS includes and lends itself to effective risk management and mitigation strategies. Additionally, an organization's adoption of an ISMS largely indicates that it is systematically identifying, assessing, and managing information security risks and "will be capable of successfully addressing information confidentiality, integrity, and availability requirements." However, the human factors associated with ISMS development, implementation, and practice (the user domain) must also be considered to best ensure the ISMS' ultimate success.
Implementation and education strategy components
Implementing effective information security management (including risk management and mitigation) requires a management strategy that takes note of the following:
Upper-level management must strongly support information security initiatives, allowing information security officers the opportunity "to obtain the resources necessary to have a fully functional and effective education program" and, by extension, information security management system.
Information security strategy and training must be integrated into and communicated through departmental strategies to ensure all personnel are positively affected by the organization's information security plan.
A privacy training and awareness "risk assessment" can help an organization identify critical gaps in stakeholder knowledge and attitude towards security.
Proper evaluation methods for "measuring the overall effectiveness of the training and awareness program" ensure policies, procedures, and training materials remain relevant.
Policies and procedures that are appropriately developed, implemented, communicated, and enforced "mitigate risk and ensure not only risk reduction, but also ongoing compliance with applicable laws, regulations, standards, and policies."
Milestones and timelines for all aspects of information security management help ensure future success.
Without sufficient budgetary considerations for all the above—in addition to the money allotted to standard regulatory, IT, privacy, and security issues—an information security management plan/system cannot fully succeed.
Relevant standards
Standards that are available to assist organizations with implementing the appropriate programs and controls to mitigate threats and vulnerabilities include the ISO/IEC 27000 family of standards, the ITIL framework, the COBIT framework, and O-ISM3 2.0. The ISO/IEC 27000 family represents some of the most well-known standards governing information security management and the ISMS and is based on global expert opinion. They lay out the requirements for best "establishing, implementing, deploying, monitoring, reviewing, maintaining, updating, and improving information security management systems." ITIL acts as a collection of concepts, policies, and best practices for the effective management of information technology infrastructure, service, and security, differing from ISO/IEC 27001 in only a few ways. COBIT, developed by ISACA, is a framework for helping information security personnel develop and implement strategies for information management and governance while minimizing negative impacts and controlling information security and risk management, and O-ISM3 2.0 is The Open Group's technology-neutral information security model for enterprise.
See also
Certified Information Systems Security Professional
Chief information security officer
Security information management
References
External links
ISACA
The Open Group
Information management
Information technology management
Security |
36469866 | https://en.wikipedia.org/wiki/Cannon%20Fodder%20%28video%20game%29 | Cannon Fodder (video game) | Cannon Fodder is a shoot 'em up developed by Sensible Software and published by Virgin Interactive Entertainment for the Amiga in 1993. Virgin ported the game to home computer systems MS-DOS, the Atari ST and the Archimedes, and the consoles Jaguar, Mega Drive, SNES and 3DO. The game is military-themed and based on shooting action with squad-based tactics. The player directs troops through numerous missions, battling enemy infantry, vehicles and installations.
Cannon Fodder has a darkly humorous tone which commentators variously praised and condemned. Its creators intended it to convey an anti-war message, which some reviewers recognised, but the Daily Star and a number of public figures derided the game. In other respects, reviewers highly praised the game, which widely achieved scores of over 90% in Amiga magazines. Amiga Action awarded it an unprecedented score, calling it the best game of the year.
Gameplay
Cannon Fodder is a military-themed action game with strategy and shoot 'em up elements. The player controls a small squad of up to five soldiers. These soldiers are armed with machine guns which kill enemy infantry with a single round. The player's troops are similarly fragile, and while they possess superior fire-power at the game's outset, the enemy infantry becomes more powerful as the game progresses. As well as foot soldiers, the antagonists include vehicles such as Jeeps, tanks and helicopters as well as missile-armed turrets. The player must also destroy buildings which spawn enemy soldiers. For these targets, which are invulnerable to machine gun fire, the player must utilise secondary, explosive weaponry: grenades and rockets. Ammunition for these weapons is limited and the player must find supply crates to replenish their troops. Wasting these weapons can potentially result in the player not having enough to fulfil the mission objectives. The player can opt to shoot crates - destroying enemy troops and buildings in the ensuing explosion - at less risk to their soldiers than retrieving them, but again at a greater risk of depleting ammunition.
The player proceeds through 23 missions divided into several levels each, making 72 levels in all. There are various settings including jungle, snow and desert, some with unique terrain features and vehicles such as igloos and snowmobiles. The player must also contend with rivers (crossing which soldiers are slowed and cannot fire) and quicksand as well as mines and other booby traps. In addition to shooting action, the game features strategy elements and employs a point-and-click control system more common to strategy than action games. As the player's troops are heavily outnumbered and easily killed, they must use caution, as well as careful planning and positioning. To this end, they can split the squad into smaller units to take up separate positions or risk fewer soldiers when moving into dangerous areas.
Development
Cannon Fodder was developed by Sensible Software, a small independent developer then of several years' standing, which had become one of the most prominent Amiga developers. Cannon Fodder - its working title from early in the development - was created after such successes as Wizkid, Mega Lo Mania and especially Sensible Soccer and was developed by six people in a "small, one room office". It was rooted in Mega Lo Mania, the "basic idea" being - according to creator Jon Hare - a strategy game in which the player "could send groups on missions, but that was all really." The group nonetheless wanted to introduce action elements into the strategy ideas of Mega Lo Mania, giving the player "more direct" control, though retaining the mouse control and icons uncommon to shoot 'em ups.
In accordance with habit Sensible's personnel eschewed storyboards when developing the starting point, instead writing descriptions of the concept and core gameplay functions. Sensible made an early decision to employ its signature "overhead" camera. Development of the basic scrolling and movement was another early step. Individual programmers then worked on various parts of the design, with the team play-testing rigorously as it went, often discarding the results of its experiments: "The reason we make good games is that if we put something in that turns out crap, we're not afraid to chuck it out", said graphics designer Stuart Cambridge. Hare elaborated: "[We] constructively criticise what comes out, gradually getting rid of the naff ideas and keeping any good stuff. We change it again and again and again until we get what we want." A point of pride was the realistic behaviour of the homing missile code, while the rural setting of some of the levels was inspired by Emmerdale Farm. Earlier works-in-progress employed larger numbers of icons than would be featured in the final version. The mechanics also had more depth: individual soldiers had particular attributes - such as being necessary to use certain weapons or vehicles - and a greater capacity to act independently, both removed in favour of "instant" action rather than "war game" play. Final touches were the additions of the last vehicles and introductory screens.
The designers named each of the game's several hundred otherwise identical protagonists, who were also awarded gravestones (varying according to the soldier's attained in-game rank) displayed on a screen between levels. Of this "personalisation", Hare said: "The graves show that people died, and their names mean they're not just faceless sacrifices". The theme was a departure from Sensible Software's usual non-violent games, and Hare stated "I'm only happy with this one because it makes you think 'Yes, people really die'. We're not glamourising anything, I don't think." He said it was inspired by "all wars ever" and was "meant to be an anti-war thing." He felt it would make gamers "realise just how senseless war is" and for this reason was "the game we've always wanted to write". CU Amiga however perceived "a fairly sick sense of humour" and predicted "The mix of satire and violence is bound to get some people pretty heated about the way such a serious subject is treated".
Production of the game began in early 1991 but was then delayed as programmer Jools Jameson worked on Mega Drive conversions of other games. The proposed Cannon Fodder had been part of a four-game deal with Robert Maxwell's software publisher, which was liquidated after the businessman's death. Unusually for an independent developer, Sensible had little difficulty in finding publishers, and after work resumed on the game it concluded a deal with Virgin in May 1993. The creators chose Virgin as it "seemed like a good bet" (Hare), as well as because of the straightforwardness of UK head Tim Chaney. Several months before its release, elements of the game were combined with Sensible Soccer to create Sensible Soccer Meets Bulldog Blighty. This modified Sensible Soccer demo featured a mode of play that replaced the ball with a timed hand grenade. One magazine described it as a "1944 version of Sensible Soccer", though The Daily Telegraph compared it to the Christmas-time football match of 1914.
Reception
Critical reception
Amiga Action said the game was "easily" the best of the year. The magazine compared it favourably to Sensible Soccer, saying the latter was "arguably the most playable and enjoyable game ever seen [...] At least, that was probably the best, as at this moment in time I believe this release [Cannon Fodder] to better it in terms of, well, everything." The reviewer also wrote: "Only last month in my Frontier review, I stated that I had only given that game 93% because no game would ever be less than seven per cent from perfection. Sensible Software have forced me to eat those words almost immediately." Amiga Computing praised the theme track as one of the best of the year. The reviewer could not find fault with the game and claimed to have given himself a headache in the attempt. He called it "one of the most playable games you will ever play and also one of the most fun. A rootin' tootin' shoot 'em up of the highest order." Amiga Down Under said the game "rates highly on sheer entertainment value - a game you'll want to come back to time and time again". The reviewer's "only complaint" was that some of the missions verged "on the near-impossible". Amiga Format said "if there's one thing Cannon Fodder has in spades, it's addictability and killer playability." The magazine also praised the graphics ("Great backgrounds. Excellent characters. Plenty of necessary detail") and sound ("A theme tune that sticks in your head, sound effects that work a dream - superb.") The reviewer said the game was "Extremely thought provoking" and "a highly enjoyable foray into the intelligent side of wandering around the place doing it to them before they do it to you." He also said that despite the controversy, Cannon Fodder was "possibly the most anti-war game I've seen in a while."
Amiga Power's reviewer said "playing this game is now more important to me than eating, sleeping, or any other bodily function." He praised the "groovetastic" UB40-inspired theme song, realistic sound effects and "intuitive" controls and said "I can't find anything wrong with the game". He nevertheless wrote that the lack of a two-player mode prevented it from scoring 100%, as well as joking that "It's got a finite number of levels" and "Not even real life is worth 100%." The game went on to be ranked the sixth best game of all time in the history of the magazine. AUI also noted the lack of a two-player option and said "23 missions will be completed sooner rather than later". The magazine however said "It is hard to criticise anything in Cannon Fodder" and "these are petty matters when compared to the sheer enjoyment of playing the game." CU Amiga wrote: "Cannon Fodder is the best thing since gunpowder. It's bloody brilliant. It's better than sex." The magazine praised the "toe-tapping" theme tune and "attention to sonic detail" in the sound effects. It noted the lack of order options but said "on the whole it's a very playable, very tough shoot 'em up". The One called the game "Sheer, unadulterated brilliance" and said "Cannon Fodder is quite simply one of the best strategy/action/shoot 'em ups to appear for ages." The writer praised the "damn fine humour" and said "I can't find anything wrong with it". Amiga Format reported sales figures of over 100,000 for the game. Reviewing the Jaguar version, GamePro criticized that the control mechanics are imprecise and slow without the option of mouse control and that the small character graphics make it difficult to follow the action. However, they complimented the sound effects and concluded that "Despite its foibles, Cannon Fodder is a challenging contest that'll have you planning new strategies to overcome your failures." Like GamePro, the four reviewers of Electronic Gaming Monthly complained that the characters are too small, but highly praised the fun, addictive gameplay and the sense of humor, and remarked that "Cannon Fodder stands high above the crowd of average Jaguar games." They later awarded Cannon Fodder Best Atari Jaguar Game of 1995. Citing the high level of bloodshed and selection of vehicles, a reviewer for Next Generation assessed, "Put simply, this is the best Jaguar game ever. Period."
The 3DO version also received overall positive reviews from Electronic Gaming Monthly, Next Generation, and GamePro, with the critics largely reiterating the same praises and criticisms they gave to the Jaguar version. Electronic Gaming Monthly's review team, however, additionally remarked that the difficulty slope and accessibility make the game appealing to players of all skill levels. Next Generation reviewed the PC version of the game, and stated that "This sui generis froth of strategy and action swings the needle on the old 'Just-One-More-Game' meter just a little higher."
In 2004, readers of Retro Gamer voted Cannon Fodder as the 61st top retro game. Cannon Fodder was included in the 2011 list of the best violent video games of all time by The Daily Telegraph, with a comment that "this got a lot of people very upset" but "it was a relentlessly playable game, filled with dark humour and cartoony graphics." That same year, Wirtualna Polska ranked it as the fifth best Amiga game.
Controversy
The game drew criticism in the Daily Star for its juxtaposition of war and humour, its showcasing in London on Remembrance Day and especially its use of iconography closely resembling the remembrance poppy. The newspaper quoted The Royal British Legion, Liberal Democrat MP Menzies Campbell and Viscount Montgomery of Alamein, who called the game offensive to "millions", "monstrous" and "very unfortunate" respectively. Virgin Interactive initially defended the use of the poppy as an anti-war statement, which the Daily Star in turn dismissed as a "publicity writer's hypocrisy". The magazine Amiga Power became involved in the controversy because of its planned reuse of the poppy on the cover of an issue also to be released on Armistice Day. This had been changed in response to criticism in the Daily Star's original article, but the newspaper published another piece focussing on a perceived inflammatory retort by Amiga Power editor Stuart Campbell: "Old soldiers? I wish them all dead." The article featured further quotes from the British Legion. The magazine apologised for including the comment although Campbell himself felt he was "entitled to an opinion" regardless of its insensitivity. The game was ultimately released with a soldier rather than a poppy on the box art, though the poppy was still displayed on the game's title screen. Amiga Power also changed its cover after "legal scrapes with the British Legion over whether a poppy is just a flower or a recognisable symbol of a registered charity." Stuart Campbell elsewhere pointed out that the game was ironically anti-war, while contemporary Amiga Format reviewer Tim Smith also praised the game as intelligently anti-war. Metro later acknowledged "a relatively profound statement on the futility of war" which had been unrecognised by the Daily Star. Kieron Gillen defended the game as ironic and anti-war in a retrospective. Amiga Computing reported the publicity as "perhaps the best advertising campaign" for which Virgin could have hoped.
See also
Cannon Fodder 2
Cannon Fodder 3
Syndicate - a contemporary video game with similar squad-based tactical shooting and strategy elements
References
External links
1993 video games
3DO Interactive Multiplayer games
Acorn Archimedes games
Action video games
Amiga games
Anti-war video games
Atari Jaguar games
Atari ST games
Black comedy video games
Cancelled PlayStation Portable games
Censored video games
Amiga CD32 games
DOS games
Game Boy Color games
Games commercially released with DOSBox
Mobile games
Real-time tactics video games
Satirical video games
Sega Genesis games
Sensible Software
Shoot 'em ups
Single-player video games
Super Nintendo Entertainment System games
Video game controversies
Video games developed in the United Kingdom
Video games with isometric graphics
Video games scored by Allister Brimble
Video games scored by Jon Hare
Video games scored by Richard Joseph
Virgin Interactive games |
64030253 | https://en.wikipedia.org/wiki/Amplitude%20Problem | Amplitude Problem | Juan Irming (born December 14, 1974), also known as Amplitude Problem, is a Swedish-American musician and producer currently based in Los Angeles. While the former hacker has had a long career in music beginning in the underground demoscene in Europe, he is best known for his chiptune, synthwave, and nerdcore tracks.
Biography
1974 - 1992: Early Life
Juan Irming was born to Swedish parents on the Spanish island Mallorca in 1974. In the '80s, Irming began to frequent a local computer store in Malmö called Computer Corner. The store had an Atari 130XE in the window that Irming was able to program with scrolling text displaying the store name. The store owners were so impressed, they brought the preteen on as a paid programmer. It was at this time Irming discovered his life-long passion for computer engineering which eventually led him to the demoscene, phone hacking (phreaking), early cracking underground, and computer hacker subculture in Europe.
Amplitude Problem's track Computer Corner was a tribute to the store's impact on his formative years as a young hacker. The music video for the song was created by German pixel artist Valenberg, the artist for the point-and-click cyberpunk video game VirtuaVerse.
1992 - Present: Immigration to US
Juan Irming immigrated to the United States from Sweden in the early '90s to attend Musicians Institute in Hollywood. After his graduation, he took time to focus on fatherhood and his programming career. Following in his footsteps, his son, Maxwell "Max James" Irming, also earned his degree at Musicians Institute and has pursued a professional career in software engineering and music production. The duo performed on the main stage together at the 2019 DEF CON 27 gathering in Las Vegas.
Irming resides in Los Angeles, California and currently works as a senior level software engineer and continues to produce music.
Music
1986 - 1993: Demoscene and early career
Juan Irming, known at the time as 7an, began in the European demoscene in the late '80s as a member and composer for the Atari ST demo crew and hacking group SYNC. Irming's "tracker music" placed first in several demo music contests. It was during this era that Irming began to make extensive use of vintage home computer hardware such as the Commodore 64, Atari 130XE, and the Atari ST in his productions, favoring the SID6581 and YM2149 computer sound chips to create a distinctive style of chiptune music.
2014 - Present: Chiptune, Synthwave, and Nerdcore
In 2014, Juan Irming, under the new moniker Amplitude Problem, began producing music for American hacker and nerdcore rapper YTCracker and other artists. Amplitude Problem has composed and produced video game-inspired synth and chiptune tracks for a number of projects and records. Amplitude Problem produced YTCracker's cyberpunk album Introducing Neals, which was released on Guy Fawkes Day, November 5, 2014.
Amplitude Problem's Crime of Curiosity (2019) features well-known American hacker Loyd Blankenship, also known as The Mentor, reading his essay The Hacker Manifesto (originally titled The Conscience of a Hacker) which serves as a guideline and moral code of hackers around the world. Crime of Curiosity is also featured as the official soundtrack to the demoscene history book THE CRACKERS: The Art of Cracking from 1984–1994.
Amplitude Problem produced the album Blue Bots Dots, which was released in 2015 and garnered favorable reviews. In 2017, he followed up with the nu jazz album The Frequency Modulators Orchestra, Vol. 1, which critics have described as "an artificial world of high-tech joy, pop music for Donkey Kong to jam to." Tracks from the album aired on FM jazz radio stations across the United States.
In 2020, Irming launched a folktronica side-project titled Cybard, with a cover of the video game Assassin's Creed Valhalla soundtrack and a full-length album, CMLXXXIV. This was followed up by the release of a cover of the video game Valheim soundtrack. Both games heavily feature Viking and Norse themes.
Amplitude Problem has been featured on albums alongside artists like Mitch Murder, Lazerhawk and GUNSHIP. He has appeared live at events and venues such as DEF CON, Comic Con, and Game On Expo with YTCracker, Dual Core, and MC Frontalot. Influences include first-generation chip composers Rob Hubbard, Ben Daglish, Martin Galway and Maniacs of Noise in addition to electronic, industrial, jazz and hip hop acts such as Damokles, Kraftwerk, Jean-Michel Jarre, Alphaville, Depeche Mode, Nine Inch Nails, Herbie Hancock and Public Enemy. He has called his own work "retro-future music for geeks, cyberpunks and the occasional normal human being."
Craigslist Hack Marketing Stunt
On November 23, 2014 at 8pm, there was a massive breach on the American classifieds website Craigslist. The attack was allegedly part of a guerrilla marketing campaign for YTCracker's album Introducing Neals on which Amplitude Problem was the lead producer. Visitors of the Craigslist website were forcibly redirected to hacking website DigitalGangster.com, with a following redirect to a YouTube video which Gizmodo describes as a "very strange animated rap video [that] filled your ears with lyrics about freedom, privacy, and net neutrality".
YTCracker denied claims that he was behind the attack and noted he'd likely be accused due to his previous involvement in the criminal hacking scene. It is not yet known which individual or group performed the hack.
Discography
Hold Down the Sun (2020)
DEF CON 27 Soundtrack (2019) (various artists)
World Builder (2019) (solo effort)
Crime of Curiosity (2019) (featuring Inverse Phase and other demoscene/hacking artists)
Descendants of Funk (2018) (various artists)
Collision Theory (2017) (various artists)
The Frequency Modulators Orchestra, Vol. 1 (2017) (solo effort)
Hear the Living Dead (2016) (various artists)
Synchron Assembly (2016) (solo effort)
Chip Wars (2015) (various artists)
I Fight for the Users (2015) (various artists)
Grid Knights (2015) (various artists)
Coastal Keys (2015) (various artists)
Carpenter (2015) (various artists)
Blue Bots Dots (2015) (solo effort)
The Next Peak (2015) (various artists)
Introducing Neals (2014) (composition and production for YTCracker)
References
External links
1974 births
Nerdcore artists
Swedish musicians
Synthwave musicians
Living people
Chiptune musicians
Hacker culture
Demosceners
Cyberpunk culture |
20368922 | https://en.wikipedia.org/wiki/Vivek%20Kundra | Vivek Kundra | Vivek Kundra (born October 9, 1974) is a former American administrator who served as the first chief information officer of the United States from March, 2009 to August, 2011 under President Barack Obama. He is currently the chief operating officer at Sprinklr, a provider of enterprise customer experience management software based in NYC. He was previously a visiting Fellow at Harvard University.
He previously served in D.C. Mayor Adrian Fenty's cabinet as the District's chief technology officer and in Virginia Governor Tim Kaine's cabinet as Assistant Secretary of Commerce and Technology.
Early life and education
Kundra was born in New Delhi, India, on October 9, 1974. He moved to Tanzania with his family at the age of one, when his father joined a group of professors and teachers to provide education to local residents. Kundra learned Swahili as his first language, in addition to Hindi and English. His family moved to the Washington, D.C. metropolitan area when he was eleven.
Kundra attended college at the University of Maryland, College Park, where he received a degree in psychology. He earned a master's degree in information technology, from University of Maryland University College. Additionally, he is a graduate of the University of Virginia's Sorensen Institute for Political Leadership.
Career
Kundra is currently the chief operating officer at Sprinklr, a provider of enterprise customer experience management software based in NYC.
Previously, Kundra served as director of Infrastructure Technology for Arlington County, Virginia, starting September 11, 2001.
Governor Tim Kaine appointed Kundra in January 2006 to the post of Assistant Secretary of Commerce and Technology for Virginia, the first dual cabinet role in the state's history.
Mayor Adrian Fenty appointed him on March 27, 2007, to the cabinet post of chief technology officer (CTO) for the District of Columbia. Kundra worked on developing programs to spur open source and crowdsourced applications using publicly accessible Web services from the District of Columbia. Building on the work of Suzanne Peck, who preceded him as DC's CTO and created the D.C. Data Catalog, he used that data as the source material for an initiative called Apps for Democracy. The contest yielded 47 web, iPhone and Facebook applications from residents in 30 days. Mayor Fenty stated that the program cost the District "50 thousand dollars total and we estimate that we will save the district millions of dollars in program development costs". This cost-benefit was claimed by the D.C. government as savings in internal operational and contractual costs. Taking a page from Kundra, this initiative was mirrored by New York City's mayor Michael Bloomberg in launching a "BigApps" contest housed at NYC BigApps as well as New York City's DataMine. The city of San Francisco launched a data portal similar to DC's in 2009.
Kundra won recognition for the project management system he implemented for the District government. The system imagined projects as publicly traded companies, project schedules as quarterly reports, and user satisfaction as stock prices. Buying or selling a stock corresponded to adding resources to a project or taking them away. The goal of management was to optimize the project portfolio for return on investment. The system effectively replaced subjective judgments about projects with objective, data driven analytics.
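The approach can be illustrated with a small sketch. The project names, metrics, weights and buy/sell thresholds below are invented for the example; the District's actual system is not documented here at this level of detail. The sketch simply shows the general idea of blending schedule performance (the "quarterly report") and user satisfaction into a single "stock price" that drives decisions to add or withdraw resources.

```python
# Illustrative sketch of a "stock market" view of an IT project portfolio.
# Project names, the 60/40 weighting and the buy/sell thresholds are
# hypothetical; they are not drawn from the District of Columbia's system.

projects = {
    # name: (schedule_performance, user_satisfaction), each on a 0.0-1.0 scale
    "permit-tracking system": (0.92, 0.81),
    "payroll modernisation":  (0.55, 0.40),
    "open-data portal":       (0.88, 0.95),
}


def stock_price(schedule: float, satisfaction: float) -> float:
    """Blend the two 'quarterly report' metrics into a single price out of 100."""
    return round(100 * (0.6 * schedule + 0.4 * satisfaction), 1)


for name, (schedule, satisfaction) in projects.items():
    price = stock_price(schedule, satisfaction)
    if price >= 80:
        decision = "buy (add resources)"
    elif price >= 60:
        decision = "hold"
    else:
        decision = "sell (withdraw resources)"
    print(f"{name:<24} price={price:>5}  {decision}")
```

In this toy version, a falling "price" signals a project whose resources should be reallocated, which is the portfolio-optimisation behaviour the paragraph above describes.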
Kundra's efforts to use cloud-based web applications in the D.C. government have also been considered innovative. Following the D.C. example driven by Kundra, the city of Los Angeles is now taking steps to adopt the cloud computing model for its IT needs. A D.C. spokeswoman said that the District of Columbia paid $479,560 for the Enterprise Google Apps license, which is $3.5 million less than what it had planned to spend on an alternative plan. Since its deployment in July 2008, Google Apps has been available to 38,000 D.C. city employees, but only 1,000–2,000 are actively using Google Docs. Only 200 employees are actively using Gmail. In late 2010, hoping to spur use of Gmail, the city ran a pilot program, selecting about 300 users and having them use the Google product for three months. Google participated closely in the project, but Gmail ultimately didn't pass the "as good or better" test with the users, who preferred Exchange/Outlook. In July 2011, the General Services Administration (GSA) became the first federal agency to migrate its email services for 17,000 employees and contractors to the cloud-based Google Apps for Government, saving $15.2 million over 5 years. As of July 2011, government agencies in 42 states are leveraging cloud-based messaging and collaboration services.
The first major cloud project during his tenure was GSA's migration of e-mail/Lotus Notes to Gmail and to Salesforce.com's platform. GSA awarded a contract for e-mail in December 2010 and a five-year contract to Salesforce in August 2011. A September 2012 Inspector General report found the savings and cost analysis not verifiable and recommended GSA update its cost analysis. The GSA office of the CIO was unable to provide documentation supporting its analysis regarding the initial projected savings for government staffing and contractor support. The audit found that the agency could neither verify those savings nor clearly determine if the cloud migration is meeting agency expectations, despite initial claims of 50% cost savings.
Kundra also moved the city's geographic information systems department to a middle school.
Federal Chief Information Officer (CIO)
Before his appointment as CIO, Mr. Kundra served as technology adviser on President Barack Obama's transition team. Kundra was officially named by President Obama on March 5, 2009, to the post of Federal CIO.
The Federal Chief Information Officer is responsible for directing the policy and strategic planning of federal information technology investments as well as for oversight of federal technology spending. Until Kundra, the position had previously been more limited within the Office of Management and Budget where a federal chief information officer role had been created by the E-Government Act of 2002. The Federal CIO establishes and oversees enterprise architecture to ensure system interoperability and information sharing and maintains information security and privacy across the federal government. According to President Obama, as Chief Information Officer, Kundra "will play a key role in making sure our government is running in the most secure, open, and efficient way possible". To further President Obama's overall technology agenda, Vivek Kundra, Jeffrey Zients, the Chief Performance Officer, and Aneesh Chopra, the chief technology officer, will work closely together. Kundra and Chopra previously worked in Governor Tim Kaine's administration.
Kundra made it a priority to focus on the following areas:
Cybersecurity
Ensuring openness and transparency
Innovation
Lowering the cost of government
Participatory democracy
One of his first projects was the launch of Data.gov, a site for access to raw government data. Another project launched by Kundra in June 2009 was the Federal IT Dashboard, which gives an assessment (in terms of cost, schedule and CIO ranking) of many large government IT projects.
Democratizing data
Kundra launched the Data.gov platform on May 21, 2009 with the goal of providing public access to raw datasets generated by the Executive Branch of the Federal Government to enable public participation and private sector innovation. Data.gov draws conceptual parallels from the DC Data Catalog launched by Kundra when he was CTO of Washington, D.C., where he published vast amounts of datasets for public use. Immediately after the Data.gov launch, the Apps for America contest by the Sunlight Foundation challenged the American people to develop innovative solutions using Data.gov. San Francisco, the City of New York, the State of California, the State of Utah, the State of Michigan, and the Commonwealth of Massachusetts have launched public access websites modeled after Data.gov. Internationally, over 46 countries have launched open data sites patterned after Data.gov, some using the U.S. Data.gov software which was made open source and made available on GitHub. Additionally, states, cities and counties have launched sites, notably some cities in Canada, and the UK is following suit.
IT Dashboard
On June 30, 2009, at the Personal Democracy Forum in New York, Vivek Kundra unveiled the IT Dashboard, which tracks over $76 billion in Federal IT spending. The IT Dashboard is part of USASpending.gov, which tracks all government spending. The IT Dashboard is designed to provide CIOs of individual government agencies, the public and agency leaders unprecedented visibility into the operations and performance of Federal IT investments (spending), and the ability to provide and receive direct feedback to those directly accountable. In January 2010, Kundra followed up the work on the IT Dashboard with TechStat accountability sessions. These sessions are designed to turn around, halt or terminate at-risk and failing IT projects in the federal government. They allow agency CIOs, CFOs, and other key stakeholders to find solutions for IT projects that are over budget, behind schedule, or under-performing.
Cloud computing
Kundra launched the federal government's cloud computing strategy and the cloud computing portal Apps.gov at NASA's Ames Research Center, Moffett Field in California, on September 15, 2009. Apps.gov is a service provided by the GSA where federal agencies can subscribe to IT services. Kundra saw the cloud as an alternative to hardware investments, as a means to reduce IT costs, and as a way to shift the focus of federal IT from infrastructure management to strategic projects. This initiative aims to use commercially derived technologies to promote software tools, vast data storage and data sharing, and to foster collaboration across all federal agencies. Howard Schmidt, White House cybersecurity coordinator, will work closely with the Federal CIO and CTO with respect to cloud initiatives and has the responsibility of orchestrating all cybersecurity activities across the government.
On December 9, 2010, Kundra published the "25 Point Implementation Plan to Reform Federal Information Technology Management", which included Cloud First as one of its top priorities for achieving IT efficiency. Cloud First required each agency to identify three cloud initiatives. He announced his decision to leave the federal government and join Harvard University within seven months of publishing this strategy, too short a period for any of the Cloud First initiatives to have demonstrated cost savings. After five months at Harvard, he left to join Salesforce.com, a cloud SaaS and PaaS provider.
Management reforms
Kundra published a 25-point implementation plan to reform how the federal government manages information technology. The execution plan followed his decision to reevaluate some of the government's most troubled IT projects. Of 38 projects reviewed, four have been canceled, 11 have been rescoped, and 12 have cut the time for delivery of functionality by more than half, from two to three years down to an average of 8 months, achieving a total of $3 billion in lifecycle budget reductions, according to whitehouse.gov.
Suspension
On March 13, 2009, Kundra was placed on indefinite leave following an FBI raid on his former D.C. office and the arrest of two individuals in relation to a bribery investigation. Kundra returned to duties after five days with no finding of wrongdoing on his part.
Post-Obama administration career
Kundra left his post as chief information officer in August 2011 to accept an academic fellowship at Harvard University, conducting research at both the Berkman Center for Internet & Society and the Joan Shorenstein Center on the Press, Politics and Public Policy.
In January 2012 Kundra joined Salesforce.com as Executive Vice President of Emerging Markets. In February 2017 he joined Outcome Health as EVP of Provider Solutions, and was then promoted to Chief Operating Officer in July. Kundra left Outcome Health in November 2017, soon after its major investors filed a lawsuit against its founders alleging improper practices and the misleading of advertisers and investors. On May 16, 2018 Kundra joined the private start-up Sprinklr as chief operating officer.
Professional recognition
In May 2011, Kundra was selected by EMC Corporation for their Data Hero Visionary Award for his pioneering work under the Obama Administration to reform how the Federal government manages and uses information technology. EMC states that, "Kundra has led the nation to seek innovative solutions to lower the cost of government operations, while exploring ways to fundamentally change the way the public sector and the public interact".
In March 2011, Kundra was selected by the World Economic Forum as a Young Global Leader for his professional accomplishments, commitment to society and potential to contribute to shaping the future of the world.
Kundra was awarded the 2010 National Cyber Security Leadership Award by the SANS Institute for uncovering more than $300 million each year in wasted federal spending on ineffective certification and accreditation reporting and demonstrating an alternative approach called "continuous monitoring" that provides more effective security for federal systems at lower costs.
Kundra was named Chief of the Year on December 21, 2009, by InformationWeek for driving unprecedented change in federal IT.
Kundra was named by InfoWorld among the top 25 CTO's in the country.
He was also selected as a 2008 MIT Sloan CIO Symposium Award Finalist on 'Balancing Innovation and Cost Leadership'. Both organizations cited the "stock market" approach to IT portfolio management that Kundra implemented for the District of Columbia. The system measured project performance and allocated IT investments similar to the way the public companies trade on the stock market.
Harvard Kennedy School of Government's Ash Institute also awarded the Innovations in American Government Award (2009) to "District of Columbia's Data Feeds: Democratization of Government Data". The project spearheaded by Kundra, Mayor Fenty, and CPO David Gragan was cited for "increase in civic participation, government accountability, and transparency in D.C. government practices" through sites like the Digital Public Square and the DC Data Catalog.
Kundra was recognized as the 2008 Government Sector IT Executive of the Year by the Tech Council of Maryland. The organization cited Kundra's efforts to increase public access to government information and services through live data feeds and data sets. Kundra was also a recipient of the Federal 100 Award for significant contributions to the federal information technology community.
See also
Government 2.0
References
External links
Federal CIO Council
'Democratizing Data and Putting it in the Public Domain', November 20, 2008
Live On Video: Federal CIO Vivek Kundra In His Own Words, InformationWeek Wolfe's Den blog - March 6, 2009
Kundra on Democratizing Data Government Technology Magazine, March 2009
The Most Influential Global Indians in Technology Dataquest
1974 births
Chief information officers
Tanzanian emigrants to the United States
Living people
Obama administration personnel
Technology evangelists
Indian emigrants to the United States
University of Maryland, College Park alumni
Washington, D.C. Democrats
People from New Delhi
American chief technology officers
American politicians of Indian descent
Berkman Fellows |
40918000 | https://en.wikipedia.org/wiki/Thomas%20M.%20Siebel%20Center%20for%20Computer%20Science | Thomas M. Siebel Center for Computer Science | The Thomas M. Siebel Center for Computer Science is a $50 million integrated research and educational facility designed by Bohlin Cywinski Jackson, located on the Urbana campus of the University of Illinois at Urbana-Champaign (UIUC). The Siebel Center houses the Department of Computer Science, which currently shares the distinction of being the fifth best computer science department in the nation, after Stanford University, the University of California, Berkeley, Carnegie Mellon University, and the Massachusetts Institute of Technology, which are all tied for first. The center has over 225,000 square feet (21,000 m²) of research, office, and laboratory space, an undergraduate population of 1,790, over 900 graduate students, and 80 faculty and research members. The Siebel Center claims to be the first "Computing Habitat", featuring a fully interactive environment and an intelligent building system. The facility is equipped with computer-controlled locks, proximity and location sensors, cameras to track room activity, and other sensory and control features.
The building is dedicated to Thomas Siebel in recognition of his donation to the University that funded a portion of the construction.
References
External links
About the Siebel Center
Thomas M. Siebel Center for Computer Science
Facilities and Services
Siebel Center Facilities
College of Engineering
Siebel Center for Computer Science |
2361042 | https://en.wikipedia.org/wiki/Windows%20Server%202008 | Windows Server 2008 | Windows Server 2008 is the third release of the Windows Server operating system produced by Microsoft as part of the Windows NT family of operating systems. It was released to manufacturing on February 4, 2008, and generally to retail on February 27, 2008. Derived from Windows Vista, Windows Server 2008 is the successor of Windows Server 2003, which is derived from the Windows XP codebase, released nearly five years earlier.
On January 12, 2016, Microsoft ended support for all Internet Explorer versions older than Internet Explorer 11, which had been released in 2013 for Windows 7; because Internet Explorer 11 was never made available for Windows Server 2008, Internet Explorer 9 remained the last supported version on that operating system. Extended support for Windows Server 2008 itself ended on January 14, 2020.
Extended Security Updates (ESU) are available until January 10, 2023 (January 9, 2024 for Azure customers).
Windows Server 2008 is the final version that supports IA-32-based (32-bit) processors. Its successor, Windows Server 2008 R2, requires a 64-bit processor in any supported architecture (x86-64 or Itanium).
History
The operating system was originally known as Windows Server Codename "Longhorn"; Microsoft chairman Bill Gates announced its official title (Windows Server 2008) during his keynote address at WinHEC on May 16, 2007.
Beta 1 was released on July 27, 2005; Beta 2 was announced and released on May 23, 2006, at WinHEC 2006; and Beta 3 was released publicly on April 25, 2007. Release Candidate 0 was released to the general public on September 24, 2007, and Release Candidate 1 on December 5, 2007. Windows Server 2008 was released to manufacturing on February 4, 2008, and officially launched on the 27th of that month.
Features
Windows Server 2008 is built from the same codebase as Windows Vista and thus it shares much of the same architecture and functionality. Since the codebase is common, Windows Server 2008 inherits most of the technical, security, management and administrative features new to Windows Vista such as the rewritten networking stack (native IPv6, native wireless, speed and security improvements); improved image-based installation, deployment and recovery; improved diagnostics, monitoring, event logging and reporting tools; new security features such as BitLocker and address space layout randomization (ASLR); the improved Windows Firewall with secure default configuration; .NET Framework 3.0 technologies, specifically Windows Communication Foundation, Microsoft Message Queuing and Windows Workflow Foundation; and the core kernel, memory and file system improvements. Processors and memory devices are modeled as Plug and Play devices to allow hot-plugging of these devices. This allows the system resources to be partitioned dynamically using dynamic hardware partitioning - each partition has its own memory, processor and I/O host bridge devices independent of other partitions.
Server Core
Windows Server 2008 includes a variation of installation called Server Core. Server Core is a significantly scaled-back installation where no Windows Explorer shell is installed. It also lacks Internet Explorer, and many other non-essential features. All configuration and maintenance is done entirely through command-line interface windows, or by connecting to the machine remotely using Microsoft Management Console (MMC). Notepad and some Control Panel applets, such as Regional Settings, are available.
A Server Core installation can be configured for several basic roles, including the domain controller (Active Directory Domain Services), Active Directory Lightweight Directory Services (formerly known as Active Directory Application Mode), DNS Server, DHCP server, file server, print server, Windows Media Server, Internet Information Services 7 web server and Hyper-V virtual server roles. Server Core can also be used to create a cluster with high availability using failover clustering or network load balancing.
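On the original Windows Server 2008 release, roles are typically added to a Server Core installation from the command line using the oclist and ocsetup tools. The Python sketch below is purely illustrative of that flow: the role name shown ("DNS-Server-Core-Role") is an example that should be confirmed against oclist output, and in practice the commands would usually just be typed at the console (commonly as "start /w ocsetup <name>" so the prompt waits for completion).

```python
"""Illustrative sketch: adding a role to a Server 2008 Server Core installation.

Assumptions: the oclist/ocsetup tools available on Server Core are used, and
"DNS-Server-Core-Role" is an example component name that should be verified
against oclist output first (ocsetup treats names as case-sensitive).
"""
import subprocess

def list_components() -> str:
    # oclist prints every optional component and whether it is installed.
    result = subprocess.run(["oclist"], capture_output=True, text=True, check=True)
    return result.stdout

def install_role(name: str) -> None:
    # ocsetup may return before installation finishes; from a console this is
    # usually invoked as "start /w ocsetup <name>" so the prompt waits for it.
    subprocess.run(["ocsetup", name], check=True)

if __name__ == "__main__":
    print(list_components())
    install_role("DNS-Server-Core-Role")  # example role name -- verify with oclist
```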
Andrew Mason, a program manager on the Windows Server team, noted that a primary motivation for producing a Server Core variant of Windows Server 2008 was to reduce the attack surface of the operating system, and that about 70% of the security vulnerabilities in Microsoft Windows from the prior five years would not have affected Server Core.
Active Directory
The Active Directory domain functionality that was retained from Windows Server 2003 was renamed to Active Directory Domain Services (ADDS).
Active Directory Federation Services (ADFS) enables enterprises to share credentials with trusted partners and customers, allowing a consultant to use their company user name and password to log in on a client's network.
Active Directory Lightweight Directory Services (AD LDS), formerly known as Active Directory Application Mode (ADAM)
Active Directory Certificate Services (ADCS) allow administrators to manage user accounts and the digital certificates that allow them to access certain services and systems. Identity Integration Feature Pack is included as Active Directory Metadirectory Services.
Active Directory Rights Management Services (ADRMS)
Read-only domain controllers (RODCs), intended for use in branch office or other scenarios where a domain controller may reside in a low physical security environment. The RODC holds a non-writeable copy of Active Directory, and redirects all write attempts to a full domain controller. It replicates all accounts except sensitive ones. In RODC mode, credentials are not cached by default. Also, local administrators can be designated to log on to the machine to perform maintenance tasks without requiring administrative rights on the entire domain.
Restartable Active Directory allows ADDS to be stopped and restarted from the Management Console or the command-line without rebooting the domain controller. This reduces downtime for offline operations and reduces overall DC servicing requirements with Server Core. ADDS is implemented as a Domain Controller Service in Windows Server 2008.
All of the Group Policy improvements from Windows Vista are included. Group Policy Management Console (GPMC) is built-in. The Group Policy objects are indexed for search and can be commented on.
Policy-based networking with Network Access Protection, improved branch management and enhanced end user collaboration. Policies can be created to ensure greater quality of service for certain applications or services that require prioritization of network bandwidth between client and server.
Granular password settings within a single domain - the ability to implement different password policies for administrative accounts on a "group" and "user" basis, instead of a single set of password settings applying to the whole domain.
Failover Clustering
Windows Server 2008 offers high availability to services and applications through Failover Clustering. Most server features and roles can be kept running with little to no downtime.
In Windows Server 2008, the way clusters are qualified changed significantly with the introduction of the cluster validation wizard, a feature integrated into failover clustering. With the cluster validation wizard, an administrator can run a set of focused tests on a collection of servers that are intended to be used as nodes in a cluster. This cluster validation process tests the underlying hardware and software directly, and individually, to obtain an accurate assessment of how well failover clustering can be supported on a given configuration.
This feature is only available in Enterprise and Datacenter editions of Windows Server.
Disk management and file storage
The ability to resize hard disk partitions without stopping the server, even the system partition. This applies only to simple and spanned volumes, not to striped volumes. (A short scripted sketch of performing such resize operations with diskpart follows this list.)
Shadow Copy based block-level backup which supports optical media, network shares and Windows Recovery Environment.
DFS enhancements - SYSVOL on DFS-R, Read-only Folder Replication Member. There is also support for domain-based DFS namespaces that exceed the previous size recommendation of 5,000 folders with targets in a namespace.
Several improvements to Failover Clustering (high-availability clusters).
Internet Storage Naming Server (iSNS) enables central registration, deregistration and queries for iSCSI hard drives.
Self-healing NTFS: In Windows versions prior to Windows Vista, if the operating system detected corruption in the file system of an NTFS volume, it marked the volume "dirty"; to correct errors on the volume, it had to be taken offline. With self-healing NTFS, an NTFS worker thread is spawned in the background which performs a localized fix-up of damaged data structures, with only the corrupted files/folders remaining unavailable without locking out the entire volume and needing the server to be taken down. S.M.A.R.T. detection techniques were added to help determine when a hard disk may fail.
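As noted above for the online partition-resize feature, the same shrink and extend operations can also be driven from a script. The sketch below is illustrative only: it pipes standard diskpart commands (select volume, shrink, extend) to the diskpart utility; the volume number and megabyte values are placeholders that must be checked with "list volume" first, and the script must run from an elevated prompt.

```python
"""Illustrative sketch: online volume resize via scripted diskpart commands.

The volume number and megabyte values below are placeholders; verify the
target with "list volume" before running, and run from an elevated prompt.
"""
import subprocess

def run_diskpart(commands: str) -> str:
    # diskpart reads commands from standard input, one per line.
    result = subprocess.run(["diskpart"], input=commands, capture_output=True,
                            text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    # Shrink volume 2 by up to 1024 MB, then grow it again by 1024 MB.
    print(run_diskpart("select volume 2\nshrink desired=1024\nexit\n"))
    print(run_diskpart("select volume 2\nextend size=1024\nexit\n"))
```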
Hyper-V
Hyper-V is hypervisor-based virtualization software, forming a core part of Microsoft's virtualization strategy. It virtualizes servers on an operating system's kernel layer. It can be thought of as partitioning a single physical server into multiple small computational partitions. Hyper-V includes the ability to act as a Xen virtualization hypervisor host allowing Xen-enabled guest operating systems to run virtualized. A beta version of Hyper-V shipped with certain x86-64 editions of Windows Server 2008, prior to Microsoft's release of the final version of Hyper-V on 26 June 2008 as a free download. Also, a standalone variant of Hyper-V exists; this variant supports only x86-64 architecture. While the IA-32 editions of Windows Server 2008 cannot run or install Hyper-V, they can run the MMC snap-in for managing Hyper-V.
Windows System Resource Manager
Windows System Resource Manager (WSRM) is integrated into Windows Server 2008. It provides resource management and can be used to control the amount of resources a process or a user can use based on business priorities. Process Matching Criteria, which is defined by the name, type or owner of the process, enforces restrictions on the resource usage by a process that matches the criteria. The CPU time, the bandwidth it can use, the number of processors it can run on, and the memory allocated to a process can be restricted. Restrictions can be set to be imposed only on certain dates as well.
Server Manager
Server Manager is a new roles-based management tool for Windows Server 2008. It is a combination of Manage Your Server and Security Configuration Wizard from Windows Server 2003. Server Manager is an improvement of the Configure my server dialog that launches by default on Windows Server 2003 machines. However, rather than serve only as a starting point for configuring new roles, Server Manager gathers together all of the operations users would want to conduct on the server, such as getting a remote deployment method set up and adding more server roles, and provides a consolidated, portal-like view of the status of each role.
Protocol and cryptography
Support for 128- and 256-bit AES encryption for the Kerberos authentication protocol.
New cryptography (CNG) API which supports elliptic curve cryptography and improved certificate management.
Secure Socket Tunneling Protocol, a new Microsoft proprietary VPN protocol.
AuthIP, a Microsoft proprietary extension of the IKE cryptographic protocol used in IPsec VPN networks.
Server Message Block 2.0 protocol in the new TCP/IP stack provides a number of communication enhancements, including greater performance when connecting to file shares over high-latency links and better security through the use of mutual authentication and message signing.
Miscellaneous
Fully componentized operating system.
Improved hot patching, a feature that allows non-kernel patches to occur without the need for a reboot.
Support for being booted from Extensible Firmware Interface (EFI)-compliant firmware on x86-64 systems.
Dynamic Hardware Partitioning supports hot-addition or replacement of processors and memory, on capable hardware.
Windows Deployment Services (WDS), replacing Automated Deployment Services and Remote Installation Services. Windows Deployment Services supports an enhanced multicast feature when deploying operating system images.
Internet Information Services 7 - Increased security, Robocopy deployment, improved diagnostic tools, delegated administration.
Windows Internal Database, a variant of SQL Server Express 2005, which serves as a common storage back-end for several other components such as Windows System Resource Manager, Windows SharePoint Services and Windows Server Update Services. It is not intended to be used by third-party applications.
An optional "desktop experience" component provides the same Windows Aero user interface as Windows Vista, both for local users, as well as remote users connecting through Remote Desktop.
Removed features
The Open Shortest Path First (OSPF) routing protocol component in Routing and Remote Access Service was removed.
Services for Macintosh, which provided file and print sharing via the now deprecated AppleTalk protocol, has been removed. Services for Macintosh were initially removed in Windows XP but were available in Windows Server 2003.
NTBackup is replaced by Windows Server Backup, and no longer supports backing up to tape drives. As a result of NTBackup removal, Exchange Server 2007 does not have volume snapshot backup functionality; however Exchange Server 2007 SP2 adds back an Exchange backup plug-in for Windows Server Backup which restores partial functionality. Windows Small Business Server and Windows Essential Business Server both include this Exchange backup component.
The POP3 service has been removed from Internet Information Services 7.0. The SMTP (Simple Mail Transfer Protocol) service is not available as a server role in IIS 7.0; it is a server feature managed through IIS 6.0.
NNTP (Network News Transfer Protocol) is no longer part of Internet Information Services 7.0.
ReadyBoost, which is available in Windows Vista, is not supported in Windows Server 2008.
Support lifecycle
Support for the RTM version of Windows Server 2008 ended on July 12, 2011, and that release no longer receives security updates; users must run a later service pack to continue receiving them. Windows Server 2008 with Service Pack 2 continued to be supported with security updates until January 14, 2020, the same end-of-support date as Windows 7.
Microsoft planned to end support for Windows Server 2008 on January 12, 2016. However, in order to give customers more time to migrate to newer Windows versions, particularly in developing or emerging markets, Microsoft decided to extend support until January 14, 2020. Microsoft announced that the Extended Security Updates (ESU) service will expire on January 10, 2023 (and on January 9, 2024, for Azure customers).
Windows Server 2008 can be upgraded to Windows Server 2008 R2 on 64-bit systems only.
Editions
Most editions of Windows Server 2008 are available in x86-64 and IA-32 variants. These editions come on two DVDs: one for installing the IA-32 variant and the other for the x64 variant. Windows Server 2008 for Itanium-based Systems supports IA-64 processors. The IA-64 variant is optimized for high-workload scenarios like database servers and Line of Business (LOB) applications. As such, it is not optimized for use as a file server or media server. Windows Server 2008 is the last 32-bit Windows server operating system.
Editions of Windows Server 2008 include:
Windows Server 2008 Foundation (codenamed "Lima"; x86-64) for OEMs only
Windows Server 2008 Standard (IA-32 and x86-64)
Windows Server 2008 Enterprise (IA-32 and x86-64)
Windows Server 2008 Datacenter (IA-32 and x86-64)
Windows Server 2008 for Itanium-based Systems (IA-64)
Windows Web Server 2008 (IA-32 and x86-64)
Windows HPC Server 2008 (codenamed "Socrates"; replacing Windows Compute Cluster Server)
Windows Storage Server 2008 (codenamed "Magni"; IA-32 and x86-64)
Windows Small Business Server 2008 (codenamed "Cougar"; x86-64) for small businesses
Windows Essential Business Server 2008 (codenamed "Centro"; x86-64) for medium-sized businesses - this edition was discontinued in 2010.
The Microsoft Imagine program, known as DreamSpark at the time, used to provide verified students with the 32-bit variant of Windows Server 2008 Standard Edition, but that version has since been removed. However, the program still provides the R2 release.
The Server Core feature is available in the Web, Standard, Enterprise and Datacenter editions.
Windows Server 2008 Foundation was released on May 21, 2009.
Updates
Windows Server 2008 shares most of its updates with Windows Vista due to being based on that operating system's codebase. A workaround was found that allowed the installation of updates for Windows Server 2008 on Windows Vista, adding three years of security updates to that operating system (Support for Windows Vista ended on April 11, 2017, while support for Windows Server 2008 ended on January 14, 2020).
Service Pack 2
Due to the operating system being based on the same codebase as Windows Vista and being released on the same day as the initial release of Windows Vista Service Pack 1, the RTM release of Windows Server 2008 already includes the updates and fixes of Service Pack 1.
Service Pack 2 was initially announced on October 24, 2008 and released on May 26, 2009. Service Pack 2 added new features, such as Windows Search 4.0, support for Bluetooth 2.1, the ability to write to Blu-ray discs, and simpler Wi-Fi configuration. Windows Server 2008 specifically received the final release of Hyper-V 1.0, improved backwards compatibility with Terminal Server license keys and an approximate 10% reduction in power usage with this service pack.
Windows Vista and Windows Server 2008 share the same service pack update binary because the codebases of the two operating systems are unified - Windows Vista and Windows Server 2008 are the first Microsoft client and server operating systems to share the same codebase since the release of Windows 2000. The predecessors to Windows Vista and Windows Server 2008, Windows XP and Windows Server 2003, had unique codebases that used their own updates and service packs.
Platform Update
On October 27, 2009, Microsoft released the Platform Update for Windows Server 2008 and Windows Vista. It backports several APIs and libraries introduced in Windows Server 2008 R2 and Windows 7 to Windows Server 2008 and Windows Vista, including the Ribbon API, DirectX 11, the XPS library, the Windows Automation API and the Portable Device Platform. A supplemental update was released in 2011 to provide improvements and bug fixes.
Internet Explorer 9
Windows Server 2008 shipped with Internet Explorer 7, the same version that shipped with Windows Vista. The last supported version of Internet Explorer for Windows Server 2008 is Internet Explorer 9, released in 2011. Internet Explorer 9 was continually updated with cumulative monthly update rollups until support for Internet Explorer 9 on Windows Server 2008 ended on January 14, 2020.
.NET Framework
The latest officially supported version of the .NET Framework is 4.6, released on October 15, 2015.
TLS 1.1 and 1.2 support
In July 2017, Microsoft released an update to add TLS 1.1 and 1.2 support to Windows Server 2008; however, both protocols are disabled by default after the update is installed.
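After the update is installed, the protocols are usually switched on through per-protocol SCHANNEL registry values. The Python sketch below is illustrative only: the registry path and the Enabled/DisabledByDefault value names are the commonly documented ones and should be confirmed against Microsoft's guidance; the script must run elevated, and a reboot is needed for the change to take effect.

```python
"""Illustrative sketch: enabling TLS 1.1/1.2 after installing the July 2017 update.

The SCHANNEL registry locations and value names below are the commonly
documented ones; treat them as assumptions and confirm against Microsoft's
guidance before changing a production server. Run elevated; reboot afterwards.
"""
import winreg

BASE = r"SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols"

def enable_protocol(protocol: str) -> None:
    for side in ("Client", "Server"):
        key_path = rf"{BASE}\{protocol}\{side}"
        # KEY_WRITE lets us both create the subkey and set its values.
        # (A 32-bit Python on a 64-bit system would also need KEY_WOW64_64KEY
        # to write to the 64-bit registry view.)
        with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, key_path, 0,
                                winreg.KEY_WRITE) as key:
            winreg.SetValueEx(key, "Enabled", 0, winreg.REG_DWORD, 1)
            winreg.SetValueEx(key, "DisabledByDefault", 0, winreg.REG_DWORD, 0)

if __name__ == "__main__":
    enable_protocol("TLS 1.1")
    enable_protocol("TLS 1.2")
```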
SHA-2 signing support
Starting in March 2019, Microsoft began transitioning to exclusively signing Windows updates with the SHA-2 algorithm. As a result, Microsoft released several updates throughout 2019 to add SHA-2 signing support to Windows Server 2008.
Monthly update rollups
In June 2018, Microsoft announced that they would be moving Windows Server 2008 to a monthly update model beginning with updates released in September 2018 - two years after Microsoft switched the rest of their supported operating systems to that model.
With the new update model, instead of updates being released as they became available, only two update packages were released on the second Tuesday of every month until Windows Server 2008 reached its end of life - one package containing security and quality updates, and a smaller package that contained only the security updates. Users could choose which package they wanted to install each month. Later in the month, another package would be released which was a preview of the next month's security and quality update rollup.
Installing the preview rollup package released for Windows Server 2008 on March 19, 2019, or any later released rollup package, will update the operating system kernel's build number from version 6.0.6002 to 6.0.6003. This change was made so Microsoft could continue to service the operating system while avoiding “version-related issues”.
The last free security update rollup packages were released on January 14, 2020.
Extended Security Updates
Windows Server 2008 is eligible for the Extended Security Updates program. This program allows volume license customers to purchase, in yearly installments, security updates for the operating system until at most January 10, 2023. The licenses are paid for on a per-machine basis. If a user purchases an Extended Security Updates license in a later year of the program, they must pay for any previous years of Extended Security Updates as well. Extended Security Updates are released only as they become available.
Windows Server 2008 R2
A second release of Windows Server 2008, called the Windows 7-based Windows Server 2008 R2 was released to manufacturing on July 22, 2009 and became generally available on October 22, 2009. New features added in Windows Server 2008 R2 include new virtualization features, new Active Directory features, Internet Information Services 7.5 and support for up to 256 logical processors. It is the first server operating system by Microsoft to exclusively support 64-bit processors.
A service pack for Windows 7 and Windows Server 2008 R2, formally designated Service Pack 1, was released in February 2011.
System requirements
System requirements for Windows Server 2008 are as follows:
Scalability
Windows Server 2008 supports the following maximum hardware specifications:
See also
BlueKeep (security vulnerability)
Comparison of Microsoft Windows versions
Comparison of operating systems
History of Microsoft Windows
List of operating systems
Microsoft Servers
Notes
References
Further reading
External links
Windows Server Performance Team Blog
2008 software
IA-32 operating systems
X86-64 operating systems |
5321416 | https://en.wikipedia.org/wiki/Live%20USB | Live USB | A live USB is a portable USB-attached external data storage device containing a full operating system that can be booted from. The term is reminiscent of USB flash drives but may encompass an external hard disk drive or solid-state drive, though they may be referred to as "live HDD" and "live SSD" respectively. They are the evolutionary next step after live CDs, but with the added benefit of writable storage, allowing customizations to the booted operating system. Live USBs can be used in embedded systems for system administration, data recovery, or test driving, and can persistently save settings and install software packages on the USB device.
Many operating systems, including Windows XP Embedded and a large portion of Linux and BSD distributions, can run from a USB flash drive, and Windows 8 Enterprise has a feature titled Windows To Go for a similar purpose.
Background
To repair a computer with booting issues, technicians often use lightweight operating systems on bootable media and a command-line interface. The development of the first live CDs with graphical user interface made it feasible for non-technicians to repair malfunctioning computers. Most Live CDs are Linux-based, and in addition to repairing computers, these would occasionally be used in their own right as operating systems.
Personal computers introduced USB booting in the early 2000s, with Macintosh computers introducing the functionality in 1999, beginning with the Power Mac G4 with AGP graphics and the slot-loading iMac G3 models. Intel-based Macs carried this functionality over with booting macOS from USB. Specialized USB-based booting was proposed by IBM in 2004 with Reincarnating PCs with Portable SoulPads and Boot Linux from a FireWire device.
Benefits and limitations
Live USBs share many of the benefits and limitations of live CDs, and also incorporate their own.
Benefits
In contrast to live CDs, the data contained on the booting device can be changed and additional data stored on the same device. A user can carry their preferred operating system, applications, configuration, and personal files with them, making it easy to share a single system between multiple users.
Live USBs provide the additional benefit of enhanced privacy because users can easily carry the USB device with them or store it in a secure location (e.g. a safe), reducing the opportunities for others to access their data. On the other hand, a USB device is easily lost or stolen, so data encryption and backup is even more important than with a typical desktop system.
The absence of moving parts in USB flash devices allows true random access, thereby avoiding the rotational latency and seek time of hard drives or optical media, meaning small programs will start faster from a USB flash drive than from a local hard disk or live CD. However, as USB devices typically achieve lower data transfer rates than internal hard drives, booting from older computers that lack support for USB 2.0 or newer can be very slow.
Limitations
LiveUSB OSes like Ubuntu Linux apply all filesystem writes to a casper filesystem overlay (casper-rw) that, once full or out of flash drive space, becomes unusable and the OS ceases to boot.
USB controllers on add-in cards (e.g. ISA, PCI, and PCI-E) are almost never capable of being booted from, so systems that do not have native USB controllers in their chipset (e.g. such as older ones before USB) likely will be unable to boot from USB even when USB is enabled via such an add-in card.
Some computers, particularly older ones, may not have a BIOS that supports USB booting. Many which do support USB booting may still be unable to boot the device in question. In these cases a computer can often be "redirected" to boot from a USB device through use of an initial bootable CD or floppy disk.
Some Intel-based Macintosh computers have limitations when booting from USB devices – while the Extensible Firmware Interface (EFI) firmware can recognize and boot from USB drives, it can do this only in EFI mode. When the firmware switches to "legacy" BIOS mode, it no longer recognizes USB drives. Non-Macintosh systems, notably Windows and Linux, may not be typically booted in EFI mode and thus USB booting may be limited to supported hardware and software combinations that can easily be booted via EFI. However, programs like Mac Linux USB Loader can alleviate the difficulties of the task of booting a Linux-live USB on a Mac. This limitation could be fixed by either changing the Apple firmware to include a USB driver in BIOS mode, or changing the operating systems to remove the dependency on the BIOS.
Due to the additional write cycles that occur on a full-blown installation, the life of the flash drive may be slightly reduced. This does not apply to systems specifically designed for live use, which keep all changes in RAM until the user logs off. A write-locked SD card (known as a Live SD, the solid-state counterpart to a live CD) in a USB flash card reader adapter is an effective way to avoid any duty cycles on the flash medium from writes and circumvent this problem. The SD card as a WORM device has an essentially unlimited life. An OS such as Linux can then run from the live USB/SD card and use conventional media for writing, such as magnetic disks, to preserve system changes.
Setup
Various applications exist to create live USBs; examples include Universal USB Installer, Rufus, Fedora Live USB Creator, and UNetbootin. There are also software applications available that can be used to create a multiboot live USB; some examples include YUMI Multiboot Bootable USB Creator and Ventoy. A few Linux distributions and live CDs have ready-made scripts which perform the steps below automatically. In addition, on Knoppix and Ubuntu extra applications can be installed, and a persistent file system can be used to store changes. A base install ranges from as little as 16 MiB (Tiny Core Linux) to a large DVD-sized install (4 gigabytes).
To set up a live USB system for commodity PC hardware, the following steps must be taken (a minimal scripted alternative using a hybrid ISO image is sketched after this list):
A USB flash drive needs to be connected to the system, and be detected by it
One or more partitions may need to be created on the USB flash drive
The "bootable" flag must be set on the primary partition on the USB flash drive
An MBR must be written to the primary partition of the USB flash drive
The partition must be formatted (most often in FAT32 format, but other file systems can be used too)
A bootloader must be installed to the partition (most often using syslinux when installing a Linux system)
A bootloader configuration file (if used) must be written
The necessary files of the operating system and default applications must be copied to the USB flash drive
Language and keyboard files (if used) must be written to the USB flash drive
The system BIOS must support booting from USB devices (although there are ways to get around this; starting the boot from an actual CD or DVD can allow the user to choose whether the medium can later be written to, and Write Once Read Many discs allow certainty that the live system will be clean the next time it is rebooted).
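For many modern distributions these manual steps can be skipped entirely, because the downloadable image is a "hybrid" ISO that already contains a partition table and bootloader and can simply be written verbatim to the raw USB device, which is essentially what dd and most live-USB creators do. The sketch below assumes such a hybrid image; the file and device paths are placeholders, the write destroys whatever is on the target device, and root privileges are required.

```python
"""Minimal sketch: write a hybrid ISO image to a raw USB device (dd-style).

Assumptions: the ISO is a hybrid image that boots when copied verbatim, and
/dev/sdX is the whole USB device (not a partition). The paths are placeholders;
writing to the wrong device destroys its contents. Run with root privileges.
"""
import os

def write_image(iso_path: str, device_path: str, block_size: int = 4 * 1024 * 1024) -> None:
    with open(iso_path, "rb") as src, open(device_path, "wb") as dst:
        while True:
            chunk = src.read(block_size)
            if not chunk:
                break
            dst.write(chunk)
        dst.flush()
        os.fsync(dst.fileno())  # make sure the data reaches the device before unplugging

if __name__ == "__main__":
    write_image("distro-live.iso", "/dev/sdX")  # placeholder names -- adjust before use
```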
Knoppix live CDs have a utility that, on boot, allows users to declare their intent to write the operating system's file structures either temporarily, to a RAM disk, or permanently, on disk or flash media to preserve any added configurations and security updates. This can be easier than recreating the USB system but may be moot since many live USB tools are simple to use.
Full installation
An alternative to a live solution is a traditional operating system installation with the elimination of swap partitions. Such an installation uses space more efficiently, since a live installation still carries the operating system's installer image on the media, including copies of software that may have been removed from the persistent file. However, a full installation is not without disadvantages: because of the additional write cycles that occur on a full installation, the life of the flash drive may be slightly reduced. To mitigate this, some live systems are designed to store changes in RAM until the user powers down the system, at which point the changes are written to the drive. Another factor is the speed of the storage device: if the flash drive is slow, performance can be comparable to legacy computers even on machines with modern parts. One way to solve this is to use a USB hard drive, as they generally give better performance than flash drives regardless of the connector.
Microsoft Windows
Although many live USBs rely on booting an open-source operating system such as Linux, it is possible to create live USBs for Microsoft Windows by using Diskpart or WinToUSB.
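As a rough illustration of the Diskpart route, the sketch below generates a script of standard diskpart commands (clean, create partition primary, active, format, assign) and runs it with "diskpart /s". The disk number is a placeholder that must be checked with "list disk" first, since clean erases the selected disk; copying the Windows installation files onto the prepared drive and writing boot code are separate steps not shown.

```python
"""Illustrative sketch: preparing a USB drive with a diskpart script.

The disk number below is a placeholder and must be verified with diskpart's
"list disk" first -- the "clean" command erases the selected disk. Copying
Windows files onto the prepared drive and making it bootable are separate
steps not covered here. Requires an elevated prompt.
"""
import subprocess
import tempfile

DISKPART_SCRIPT = """\
select disk 1
clean
create partition primary
select partition 1
active
format fs=ntfs quick
assign
exit
"""

def prepare_usb() -> None:
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as script:
        script.write(DISKPART_SCRIPT)
        script_path = script.name
    # diskpart /s runs the commands from the script file.
    subprocess.run(["diskpart", "/s", script_path], check=True)

if __name__ == "__main__":
    prepare_usb()
```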
See also
Boot disk
dd (Unix)
Disk cloning
Extensible Firmware Interface
External hard disk
extlinux
initramfs
ISO file
Lightweight Linux distribution
List of live CDs
List of tools to create Live USB systems
List of Linux distributions that run from RAM
Live USB creator
Multiboot Specification
Comparison of Linux Live CDs
Partitionless
Persistence (computer science)
Portable Apps
Portable-VirtualBox
PXE
Self-booting diskette
UNetbootin
Virtualization
References
External links
The Differences Between Persistent Live USB and Full Linux Install on USB
Universal USB Installer
Partitionless Installation
Tutorial – How to Set your BIOS to boot from CD or USB
HOW TO: Create a working Live USB
Debian Live project
How to create a Live USB in Ubuntu
Casper
USB |
238781 | https://en.wikipedia.org/wiki/ThinkPad | ThinkPad | ThinkPad is a line of business-oriented laptop computers and tablets designed, developed and marketed by Lenovo, and formerly IBM. The line was originally sold by IBM until 2005, when a part of the company's business was acquired by Lenovo. ThinkPads have a distinct black, boxy design language, inspired by a Japanese bento lunchbox, which originated in 1990 and is still used in some models. Most models also feature a red-colored trackpoint on the keyboard, which has become an iconic and distinctive design characteristic associated with the ThinkPad line.
The ThinkPad line was first developed at the IBM Yamato Facility in Japan, and the first ThinkPads were released in October 1992. It has seen significant success in the business market. ThinkPad laptops have been used in outer space and for many years were the only laptops certified for use on the International Space Station. ThinkPads have also for several years been one of the preferred laptops used by the United Nations.
History
The ThinkPad was developed to compete with Toshiba and Compaq, who had created the first two portable notebooks, with an emphasis on sales to the Harvard Business School. The task of creating a notebook was given to the Yamato Facility in Japan, headed by Arimasa Naitoh, a Japanese engineer and product designer who had joined IBM in the 1970s and is now known as the "Father of ThinkPad".
The name "ThinkPad" was a product of IBM's corporate history and culture. Thomas J. Watson, Sr., first introduced "THINK" as an IBM slogan in the 1920s. With every minicomputer and mainframe IBM installed (almost all were leased, not sold), a blue plastic sign was placed atop the operator's console, with the text "Think" printed on an aluminium plate.
For decades IBM had also distributed small notepads with the word "THINK" emblazoned on a brown leatherette cover to customers and employees. The name "ThinkPad" was suggested by IBM employee Denny Wainwright, who had one such notepad in his pocket. The name was opposed by the IBM corporate naming committee as all the names for IBM computers were numeric at that time, but "ThinkPad" was kept due to praise from journalists and the public.
Early models
In April 1992, IBM announced the first ThinkPad model, the 700, later renamed the 700T after the release of three newer models, the 300, (new) 700 and 700C in October 1992. The 700T was a tablet computer.
This machine was the first product produced under IBM's new "differentiated product personality" strategy, a collaboration between Richard Sapper and Tom Hardy, head of the corporate IBM Design Program. Development of the 700C also involved a close working relationship between Sapper and Kazuhiko Yamazaki, lead notebook designer at IBM's Yamato Design Center in Japan and liaison between Sapper and Yamato engineering.
This 1990–1992 "pre-Internet" collaboration between Italy and Japan was facilitated by a special Sony digital communications system that transmitted high-res images over telephone lines. This system was established in several key global Design Centers by Hardy so IBM designers could visually communicate more effectively and interact directly with Sapper for advice on their projects. For his innovative design management leadership during ThinkPad development, Hardy was named "innovator of the Year 1992" by PC Magazine.
The first ThinkPad tablet, a PenPoint-based device formally known as the IBM 2521 ThinkPad, was positioned as a developer's release. The ThinkPad tablet became available for purchase by the general public in October of the same year.
IBM marketed the ThinkPad creatively, through methods such as early customer pilot programs, numerous pre-launch announcements, and an extensive loaner program designed to showcase the product's strengths and weaknesses, including loaning a machine to archaeologists excavating the ancient Egyptian city of Leontopolis. The resulting report documented the ThinkPad's excellent performance under difficult conditions; "The ThinkPad is an impressive machine, rugged enough to be used without special care in the worst conditions Egypt has to offer."
The first ThinkPads were very successful, collecting more than 300 awards for design and quality.
Acquisition by Lenovo
In 2005, technology company Lenovo purchased the IBM personal computer business and the ThinkPad as a flagship brand along with it. Speaking about the purchase of IBM's personal computer division, Liu Chuanzhi said, "We benefited in three ways from the IBM acquisition. We got the ThinkPad brand, IBM's more advanced PC manufacturing technology and the company's international resources, such as its global sales channels and operation teams. These three elements have shored up our sales revenue in the past several years."
Although Lenovo acquired the right to use the IBM brand name for five years after its acquisition of IBM's personal computer business, Lenovo only used it for three years.
Today Lenovo manufactures and markets Think-branded products while IBM is mostly responsible for overseeing servicing and repairs for the Think line of products produced by Lenovo.
Both IBM and Lenovo play a key role in the design of their "Think" branded products.
Most of the Think line of products are designed at the Yamato Labs in Japan.
Manufacturing
The majority of ThinkPad computers since the 2005 acquisition of the brand by Lenovo have been made in Mexico, Slovakia, India and China.
Lenovo also employs ~300 people at a combined manufacturing and distribution centre near its American headquarters. Each device made in this facility is labelled with a red-white-and-blue sticker proclaiming "Whitsett, North Carolina." In 2012, Lenovo produced a short run of special edition anniversary ThinkPads in Yonezawa, Yamagata, in partnership with NEC, as part of a larger goal to move manufacturing away from China and into Japan.
In 2014, although sales rose 5.6 percent from the previous year, Lenovo lost its position as the top commercial notebook maker. However, the company celebrated a milestone in 2015 with the shipment of the 100 millionth unit of its ThinkPad line.
Design
The design language of the ThinkPad has remained very similar throughout the entire lifetime of the brand. Almost all models are solid black inside and out, with a boxy, right-angled external case design. Some newer Lenovo models incorporate more curved surfaces in their design. Many ThinkPads have incorporated magnesium, carbon fiber reinforced plastic or titanium into their chassis.
The industrial design concept was created in 1990 by Italy-based designer Richard Sapper, a corporate design consultant of IBM and, since 2005, Lenovo. The design was based on the concept of a traditional Japanese bento lunchbox, which revealed its nature only after being opened. According to later interviews with Sapper, he also characterized the simple ThinkPad form to be as elementary as a simple, black cigar box and with similar proportions, with the same observation that it offers a 'surprise' when opened.
Since 1992, the ThinkPad design has been regularly updated, developed and refined over the years by Sapper and the respective teams at IBM and later Lenovo. On the occasion of the 20th anniversary of ThinkPad's introduction, David Hill authored and designed a commemorative book about ThinkPad design titled ThinkPad Design: Spirit & Essence.
Features and technologies
Several unique features have appeared in the ThinkPad line, like drive protection, pointing stick or TPM chips.
While few features remain unique to the series, several laptop technologies originated on ThinkPads:
Current
Lenovo Vantage
Originally known as "IBM Access" and later "ThinkVantage", Lenovo Vantage is a suite of computer management applications. This software provides additional support for system management (backup, encryption, system driver installation and upgrades, system monitoring, and more). Some of its older features have since been replaced by built-in Windows 10 features.
TPM chips
IBM was the first company that supported a TPM module. Modern ThinkPads still have this feature.
ThinkShutter
ThinkShutter is the branding of a webcam privacy shutter present in some ThinkPad notebook computers. It is a simple mechanical sliding cover that allows the user to obstruct the webcam's view. Some add-on webcams and other laptop brands provide a similar feature. IdeaPad notebooks carry the TrueBlock branding for their privacy shutters.
Spill-resistant keyboards
Some ThinkPad models have a keyboard membrane and drain holes (P series, classic T series and T###p models), and some have a solid rubber or plastic membrane (like the X1 series and current T and X series) without drain holes.
UltraNav
The first ThinkPad 700 was equipped with the signature TrackPoint red dot pointing stick invented by Ted Selker. By 2000 the touchpad pointer had become more popular for laptops due to innovations by Synaptics, so IBM introduced UltraNav, a complementary combination of TrackPoint and touchpad designed by Dave Sawin, Hiroaki Yasuda, Fusanobo Nakamura, and Mitsuo Horiuchi, to please all users.
A roll cage frame and stainless steel hinges with 180° or 360° opening angle
The "roll cage" is an internal frame designed to minimize motherboard flex (current P series and T##p series); other high-end models use a magnesium composite case instead. The display modules lack a magnesium frame, and some 2012-2016 models have a common issue with a cracked plastic lid. 180° hinges are typical; 360° hinges are a basic feature of the Yoga line.
OLED screens
Introduced in 2018 as a high-end display option for some models.
The Active Protection System
An option for some ThinkPads that still use the 2.5" drive bay; these systems use an accelerometer sensor to detect when a ThinkPad is falling and shut down the hard disk drive to prevent damage.
Biometric fingerprint reader and NFC Smart card reader options
The fingerprint reader was introduced as an option by IBM in 2004. ThinkPads were among the first laptops to include this feature.
Internal WWAN modules and Wi-Fi 3x3 MIMO
Mobile broadband support is a common feature on most ThinkPad models released after 2006; 3x3 MIMO support is a common feature on most high-end models.
Some additional features (docking stations, UltraBay, accessory support) are listed in the Accessories section.
Past
ThinkLight
An external keyboard light: an LED located at the top of the LCD screen which illuminates the keyboard from above. It has since been replaced by an internal keyboard backlight.
ThinkBridge
A feature of some 2013-2018 T, W and X series ThinkPads only: an internal secondary battery (succeeding the secondary UltraBay battery) that supports hot-swapping of the primary battery.
7-row Keyboards
Original IBM keyboard design (1992-2012) — The original keyboard offered in the ThinkPad line until 2012, when it was swapped out for the chiclet style keyboard now used today.
IBM TrackWrite keyboard design — A unique keyboard designed by John Karidis, introduced by IBM in 1995 and used in the ThinkPad 701 series. When the machine is closed the keyboard is folded inwards, making the machine more compact; when the machine is open and in use, it slides out, giving the user a normal-sized keyboard. That keyboard, referred to as the butterfly keyboard, is widely considered a design masterpiece and is in the permanent collection of the Museum of Modern Art in New York City.
The ThinkPad 760 series also included an unusual keyboard design; the keyboard was elevated by two arms riding on small rails on the side of the screen, tilting the keyboard to achieve a more ergonomic design.
Chiclet-style keyboard (2012-current) — The keyboard adopted by Lenovo in 2012 in place of the original IBM keyboard design. It does not support the ThinkLight to illuminate the keyboard, instead using a keyboard backlight. (Some ThinkPad models during the transition period between the classic IBM design and the Lenovo chiclet design could be outfitted with both the backlit chiclet-style keyboard and the ThinkLight.)
FlexView AFFS or IPS screens
A line of high-end displays introduced in 2004, with wide viewing angles and optional high resolution (up to 1600x1200 or, rarely, 2048x1536 pixels on 15" panels). It was partially dropped in 2008 (after the partial collapse of display supplier BOE-Hydis) and reintroduced as an ordinary IPS screen option in 2013.
Batteries
Some Lenovo laptops (such as the X230, W530 and T430) block third-party batteries. Lenovo calls this feature "Battery Safeguard". It was first introduced on some models in May 2012. Laptops with this feature scan for security chips that only ThinkPad-branded batteries contain. Affected ThinkPads flash a message stating "Genuine Lenovo Battery Not Attached" when third-party batteries are used.
Operating systems
The ThinkPad has shipped with Microsoft Windows from its inception until present day. Alongside MS-DOS, Windows 3.1x was the default operating system on the original ThinkPad 700.
IBM and Microsoft's joint operating system, known as OS/2, although not as popular, was also made available as an option from the ThinkPad 700 in 1992, and was officially supported until the T43 in 2005.
IBM took its first steps toward ThinkPads with an alternative operating system, when they quietly certified the 390 model for SUSE Linux in November 1998. The company released its first Linux-based unit with the ThinkPad A20m in July 2000. This model, along with the closely-released A21m, T21 and T22 models, came preinstalled with Caldera OpenLinux. IBM shifted away from preinstalled Linux on the ThinkPad after 2002, but continued to support other distributions such as Red Hat Linux, SUSE Linux Enterprise, and Turbolinux by means of customer installations on A30, A30p, A31p models. This continued through the Lenovo transition with the T60p, until September 2007.
The following year, ThinkPads began shipping with Linux again, when the R61 and T61 were released with SUSE Linux Enterprise as an option. This was shortlived, as Lenovo discontinued that practice in 2009. ThinkPad hardware continued to be certified for Linux.
In 2020, Lenovo shifted into much heavier support of Linux when they announced the ThinkPad X1 Carbon Gen 8, the P1 Gen 2, and the P53 would come with Fedora Linux as an option. This was the first time that Fedora Linux was made available as a preinstalled option from a major hardware vendor. Following that, Lenovo then began making Ubuntu available as a preinstalled option across nearly thirty different notebook and desktop models, and Fedora Linux on all of its P series lineup.
A small number of ThinkPads are preinstalled with Google's Chrome OS. On these devices, Chrome OS is the only officially supported operating system where installation of Windows and other operating systems requires putting the device into developer mode.
Use in space
ThinkPads have been used heavily in space programs. NASA purchased more than 500 ThinkPad 750 laptops for flight qualification, software development, and crew training, and astronaut (and senator) John Glenn used ThinkPad laptops on his spaceflight mission STS-95 in 1998.
ThinkPad models used on Shuttle missions include:
ThinkPad 750 (first use in December 1993 supporting the Hubble repair mission)
ThinkPad 750C
ThinkPad 755C
ThinkPad 760ED
ThinkPad 760XD (ISS Portable Computing System)
ThinkPad 770
ThinkPad A31p (ISS Portable Computing System)
ThinkPad T61p
ThinkPad P52
ThinkPad T490
ThinkPad P15
The ThinkPad 750 flew aboard the Space Shuttle Endeavour during a mission to repair the Hubble Space Telescope on 2 December 1993, running a NASA test program which checked if radiation in the space environment caused memory anomalies or other unexpected problems. ThinkPads were also used in conjunction with a joystick for the Portable In-Flight Landing Operations Trainer (PILOT).
ThinkPads have also been used on space stations. At least three ThinkPad 750C were left in the Spektr module of Mir when it depressurized, and the 755C and 760ED were used as part of the Shuttle–Mir Program, the 760ED without modifications. Additionally, for several decades ThinkPads were the only laptops certified for use on the International Space Station.
ThinkPads used aboard the space shuttle and International Space Station feature safety and operational improvements for the environment they must operate in. Modifications include Velcro tape to attach to surfaces, upgrades to the CPU and video card cooling fans to accommodate for microgravity (in which warmer air does not rise) and lower density of the cabin air, and an adapter for the station's 28 volt DC power.
Throughout 2006, a ThinkPad A31p was being used in the Service Module Central Post of the International Space Station and seven ThinkPad A31p laptops were in service in orbit aboard the International Space Station. As of 2010, the Space Station was equipped with ThinkPad A31 computers and 32 ThinkPad T61p laptops. All laptops aboard the ISS are connected to the station's LAN via Wi-Fi and are connected to the ground at 3 Mbit/s up and 10 Mbit/s down, comparable to home DSL connection speeds.
Since a new contract with HP in 2016 provided a small number of modified ZBook laptops for ISS use, ThinkPads are no longer the only laptops flown on the ISS but are the predominant laptop present there.
ThinkPads in the United Nations
For several years ThinkPads have been one of the preferred laptop brands used by the United Nations.
The models found in the UN today include:
L480
T480
T480s
T14
X1 Carbon Gen 6, Gen 7 and Gen 8
X380 Yoga
X390 Yoga
Certain ThinkVision monitors (T24v) are also used with ThinkPad docking stations.
Popularity
ThinkPads have enjoyed cult popularity for many years. There are large communities on the Internet dedicated to the line, where people discuss it, share photos and videos of their own ThinkPads, and so on. Older ThinkPad models remain popular among enthusiasts and collectors, who still see them as durable, highly usable machines, despite no longer being modern. They have gained a reputation for being reliable, or even "indestructible". Newer models also remain popular among consumers and businesses (as of 2021), though Lenovo has received some backlash in recent years over the apparent declining quality of the ThinkPad line (as well as its other lines in general), with many customers unhappy with the build quality and reliability of their devices.
Aftermarket parts have been developed for some models, such as the X60 and X200, for which custom motherboards with more modern processors have been created.
In January 2015, Lenovo celebrated one hundred million ThinkPads being sold. They also announced some new ThinkPad products for the occasion.
Reviews and awards
Laptop Magazine in 2006 called the ThinkPad the highest-quality laptop computer keyboard available. It was ranked first in reliability and support in PC Magazine's 2007 Survey.
The ThinkPad was the PC Magazine 2006 Reader's Choice for PC based laptops, and ranked number 1 in Support for PC based laptops. The ThinkPad Series was the first product to receive PC World's Hall of Fame award.
The Enderle Group's Rob Enderle said that the constant thing about ThinkPad is that the "brand stands for quality" and that "they build the best keyboard in the business."
The ThinkPad X Tablet-series was PC Magazine Editor's Choice for tablet PCs. The ThinkPad X60s was ranked number one in ultraportable laptops by PC World. It lasted 8 hours and 21 minutes on a single charge with its 8-cell battery. The Lenovo ThinkPad X60s Series is on PC World's Top 100 Products of 2006. The 2005 PC World Reliability and Service survey ranked ThinkPad products ahead of all other brands for reliability.
In the 2004 survey, they were ranked second (behind eMachines). Lenovo was named the most environment-friendly company in the electronics industry by Greenpeace in 2007 but has since dropped to place 14 of 17 as of October 2010.
The IBM/Lenovo ThinkPad T60p received the Editor's Choice award for Mobile Graphic Workstation from PC Magazine. Lenovo ThinkPad X60 is the PC Magazine Editor's Choice among ultra-portable laptops. The Lenovo ThinkPad T400-Series was on PC World's Top 100 Products of 2009.
Current model lines
ThinkPad Yoga (2013–current)
The ThinkPad Yoga is an Ultrabook-class convertible device that functions as both a laptop and tablet computer. The Yoga gets its name from the consumer-oriented IdeaPad Yoga line of computers with the same form factor. The ThinkPad Yoga has a backlit keyboard that flattens when flipped into tablet mode. This was accomplished on the first-generation X1 Yoga with a platform surrounding the keys that rises until level with the keyboard buttons, a locking mechanism that prevents key presses, and feet that pop out to prevent the keyboard from directly resting on flat surfaces. On later X1 Yoga generations, the keys themselves retract into the chassis, so the computer rests on fixed small pads. The touchpad is disabled in this configuration. Lenovo implemented this design in response to complaints about its earlier Yoga 13 and 11 models being awkward to use in tablet mode. A reinforced hinge was required to implement this design. Other than its convertible form factor, the ThinkPad Yoga retains standard ThinkPad features such as a black magnesium-reinforced chassis, island keyboard, a red TrackPoint, and a large touchpad.
Tablets
ThinkPad Tablet
Released in August 2011, the ThinkPad Tablet is the first in Lenovo's line of business-oriented Tablets with the ThinkPad brand. The tablet has been described by Gadget Mix as a premium business tablet. Since the Tablet is primarily business-oriented, it includes features for security, such as anti-theft software, the ability to remotely disable the tablet, SD card encryption, layered data encryption, and Cisco Virtual Private Network (VPN).
Additionally, the ThinkPad Tablet is able to run software such as IBM's Lotus Notes Traveler. The stylus could be used to write notes on the Tablet, which also included software to convert this handwritten content to text. Another feature on the Tablet was a drag-and-drop utility designed to take advantage of the Tablet's touch capabilities. This feature could be used to transfer data between USB devices, internal storage, or an SD card.
Slashgear summarized the ThinkPad Tablet by saying, "The stylus and the styling add up to a distinctive slate that doesn't merely attempt to ape Apple's iPad."
ThinkPad Tablet 2
In order to celebrate the 20th anniversary of the ThinkPad, Lenovo held a large party in New York where it announced several products, including the Tablet 2. Lenovo said that the ThinkPad Tablet 2 would be available on 28 October 2012, when Windows 8 was released. The ThinkPad Tablet 2 runs the Windows 8 Professional operating system and is able to run any desktop software compatible with this version of Windows.
The Tablet 2 is based on the Clover Trail version of the Intel Atom processor that has been customized for tablets. The Tablet 2 has 2 gigabytes of RAM and a 64GB SSD. The Tablet 2 has a 10.1-inch IPS display with a 16:9 aspect ratio and a resolution of . In a preview, CNET wrote, "Windows 8 looked readable and functional, both in Metro and standard Windows-based interfaces." A mini-HDMI port is included for video output. An 8-megapixel rear camera and a 2-megapixel front camera are included along with a noise-canceling microphone in order to facilitate video conferencing.
ThinkPad 8
Announced and released in January 2014, the ThinkPad 8 is based on Intel's Bay Trail Atom Z3770 processor, with 2 GB of RAM and up to 128 GB of built-in storage. ThinkPad 8 has an 8.3-inch IPS display with a 16:10 aspect ratio and a resolution of pixels. Other features include an aluminum chassis, micro-HDMI port, 8-megapixel back camera (with flash), and optional 4G connectivity. It runs Windows 8 as an operating system.
ThinkPad 10
Announced in May 2014, Lenovo ThinkPad 10 is a successor to the ThinkPad Tablet 2 and was scheduled to launch in the summer of 2014 along with accessories such as a docking station and external detachable magnetic keyboards. It used Windows 8.1 Pro as its operating system. It was available in 64 and 128GB variants with 1.6GHz quad-core Intel Atom Baytrail processor and 2GB or 4GB of RAM. It optionally supported both 3G and 4G (LTE). Display resolution was announced to be , paired with a stylus pen.
ThinkPad X1 Tablet
The ThinkPad X1 Tablet is a fanless tablet powered by Core M CPUs. It is available with 4, 8, or 16 GB of LPDDR3 RAM and a SATA or PCIe NVMe SSD of up to 1 TB. Its IPS screen supports touch and pen input.
ThinkPad 11e (2014–current)
The ThinkPad 11e is a "budget" laptop computer for schools and students, with an 11-inch screen and no TrackPoint. The 11e Yoga is a convertible version of the 11e.
E Series (2011–current)
The E Series is a low-cost ThinkPad line, designed for small business mass-market requirements, and currently contains only 14" and 15" sub-lines. The E Series replaced Lenovo's Edge Series, though in some countries (as of May 2019) models are still offered under both the "ThinkPad Edge" and "E series" names. The E Series also lacks materials such as magnesium and carbon fibre, used in other members of the ThinkPad family, in its construction.
L Series (2010–current)
The L Series replaced the former R Series, and is positioned as a mid-range ThinkPad offering with mainstream Intel Core i3/i5/i7 CPUs. The L Series has three sub-lines: the long-running 14" and 15.6" models (at launch in 2010 the line had two models, the L412 and the L512), and, as of 2018, a 13" L380, which replaces the ThinkPad 13.
T series (2000–current)
The T series is the most popular and most well-known line of ThinkPad. Being the successor of the 600 series, it historically had high-end features, such as magnesium alloy roll-cages, high-density IPS screens known as FlexView (discontinued after the T60 series), 7-row keyboards, screen latches, the Lenovo UltraBay, and ThinkLight. Models included both 14.1-inch and 15.4-inch displays available in 4:3 and 16:10 aspect ratios.
Since 2012, the entire ThinkPad line has been given a complete overhaul, with modifications such as the removal of the separate TrackPoint buttons (removed with the xx40 series in 2014, then reintroduced with the xx50 series in 2015), removal of the separate audio control buttons, removal of the screen latch, and removal of the LED indicator lights. Models starting from the xx40 series featured a Power Bridge battery system, which combined a lower-capacity built-in battery with a higher-capacity external battery, enabling the user to swap the external battery without putting the computer into hibernation. However, beginning with the 2019 xx90 series models, the external battery was removed in favor of a single internal battery. Also, non-widescreen displays are no longer available, with the 16:9 aspect ratio as the only remaining choice.
The Tx20 series ThinkPads came in two editions: 15" (T520) or a 14" (T420). These are the last ThinkPads to use the classic 7-row keyboard, with the exception of the Lenovo ThinkPad 25th anniversary edition released on Oct. 5, 2017, which was based on the ThinkPad T470.
Over time, the purpose of the T series has shifted. Initially, the T series ThinkPad was meant to have high-end business features and carry a 10–20% markup over the other ThinkPads. Starting with the T400, the T series became less of a high-end business laptop and more of a mobile workstation, similar to the W-series or P-series ThinkPads, achieving comparable performance to the W-series in a 5–10% smaller profile. In 2013, the T440 introduced another major shift, making the T series more of an all-around office machine than a mobile workstation. By today's standards, the T series is thicker than most of its competitors.
X Series (2000–current)
The X Series is the main high-end ultraportable ThinkPad line, offering a lightweight, highly portable laptop with moderate performance. The current sub-lines for the X series include:
13" X13 (mainstream);
X13 Yoga (convertible sub-line),
14" X1 Carbon (premium sub-line),
X1 Yoga (premium convertible sub-line), and
15" X1 Extreme (premium sub-line).
The daughter line includes the X1 Tablet (not to be confused with the 2005-2013 X Series tablets).
The current mainstream "workhorse" models are the X13 and X13 Yoga, the 13" successors of the classic, discontinued 12" line of Lenovo X Series ThinkPads.
The premium thin-and-light line originally consisted of the 13.3" ThinkPad models (the X300/X301) with an UltraBay optical drive and a removable battery; it has since been replaced by the modern premium 14"/15" X1-series ultrabook line, such as the X1 Carbon, X1 Yoga, and X1 Extreme sub-series.
Discontinued mainstream lines such as the 12" X200(s), X201(s), and X220 models could be ordered with all of the high-end ThinkPad features (such as the TrackPoint, ThinkLight, a 7-row keyboard, a docking port, a hot-swappable HDD, a solid magnesium case, and an optional slice battery). The discontinued 12.5" X220 and X230 still featured a roll cage, a ThinkLight, and an optional premium IPS display (the first IPS display on a non-tablet ThinkPad since the T60p), but the 7-row keyboard was offered only with the X220. However, they lacked the lid latch mechanism present on the earlier X200 and X201. The discontinued slim 12" line contained only the X200s and X201s, which had low-power CPUs and high-resolution displays, and the X230s, which had a low-power CPU. The 12.5" X series ThinkPads from the X240 onward had a simpler design, and the last 12" model, the X280, retained only the TrackPoint, a partially magnesium case, and a simplified docking port.
The obsolete low-cost 11.6" netbook-line X100e and X120e were all plastic, lacking both the latch and the ThinkLight, and used a variant of the island keyboard (known as a chiclet keyboard) found on the Edge series. The X100e was also offered in red, with blue and white available in some countries. These were more like high-end netbooks, whereas the X200 series were full ultraportables, featuring Intel Core (previously Core 2 and Celeron) series CPUs rather than AMD netbook CPUs.
X Series models with the "Tablet" suffix are an outdated variant of the 12" X Series, with low-voltage CPUs and a flip-screen resistive touchscreen. They include the traditional ThinkPad features, and have been noted for using a higher-quality AFFS-type screen with better viewing angles compared to the screens used on other ThinkPads.
P Series (2015–current)
The P Series line of laptops replaced Lenovo's W Series and reintroduced 17" screens to the ThinkPad line. The P Series (excluding models with 's' suffix) is designed for engineers, architects, animators, etc. and comes with a variety of "high-end" options. All P Series models come included with fingerprint readers. The ThinkPad P Series includes features such as dedicated magnesium roll cages, more indicator LED lights, and high-resolution displays.
Z series (2022)
The Z series currently consists of two models: the 13-inch model, Z13, and the 16-inch model, Z16. It was introduced in January 2022 and will be available for purchase in May 2022; the Z13 model will start at $1549, while the Z16 model will start at $2099. The series is marketed towards business customers, as well as a generally younger audience. The Verge wrote: "Lenovo is trying to make ThinkPads cool to the kids. The company has launched the ThinkPad Z-series, a thin and light ThinkPad line with funky colors, eco-friendly packaging, and a distinctly modern look." The series features a new sleek, contemporary, thin metal design, which differs greatly from other recent, more traditional-looking ThinkPad models. The Z13 model was introduced in three new colors—black, silver, and black vegan leather with bronze accents—while the Z16 is only available in one of them, silver. The laptops are equipped with new AMD Ryzen Pro processors. Other notable features include 1080p webcams, OLED displays, new, redesigned touchpads, spill-resistant keyboards, Dolby Atmos speaker systems, and Windows 11 with Windows Hello support.
Accessories
Lenovo also makes a range of accessories meant to complement and enhance the experience of using a ThinkPad device. These include:
ThinkPad Stack (2015–current)
The ThinkPad Stack line of products includes accessories designed for portability and interoperability. This line includes external hard drives, a wireless router, a power bank, and a Bluetooth 4.0 speaker. Each Stack device includes rubber feet, magnets, and pogo-pin power connections that allow the use of a single cable. The combined weight of all the Stack devices is slightly less than two pounds. The Stack series was announced in January 2015 at the International CES. The Stack series of accessories was expanded at the 2016 International CES to include a 720p resolution projector with 150 lumens of brightness and a wireless charging station.
The Stack has a "blocky, black, and rectangular" look with the ThinkPad logo. It shares a common design language with ThinkPad laptop computers.
Dock Stations (1993–current)
Current docking stations (or docks) add much of the functional abilities of a desktop computer, including multiple display outputs, additional USB ports, and occasionally other features. This allows the ThinkPads to be connected and disconnected from various peripherals quickly and easily.
Recent docks connect via a proprietary connector located on the underside of the laptops; or via USB-C.
UltraBay (1995–2014)
An internal, replaceable (hot-swappable) drive bay that supports a range of optional components, such as CD/DVD/Blu-ray drives, hard drive caddies, additional batteries, or device cradles.
Slice batteries (2000-2012)
Some classic models (IBM and early Lenovo T and X series) can support an additional slice battery instead of the UltraBay additional battery.
UltraPort (2000–2002)
ThinkPad USB 3.0 Secure Hard Drive
An external USB 3.0/2.0 hard drive designed by Lenovo in 2009. It requires the input of a user-set 4-digit PIN to access its data.
These drives are manufactured for Lenovo by Apricorn, Inc.
ThinkPad keyboards (external)
IBM and Lenovo have made several USB and Bluetooth keyboards with integrated UltraNav and TrackPoint pointing devices. Notable models include:
SK-8845
SK-8835
SK-8855
ThinkPad Compact USB Keyboard (current model)
ThinkPad Compact Bluetooth Keyboard (current model)
ThinkPad TrackPoint Keyboard II (current model)
ThinkPad mice
ThinkPad mice come in several varieties, ranging from Bluetooth to wired models, including some with a built-in TrackPoint-style stick marketed as a ScrollPoint.
ThinkPad stands
ThinkPlus laptop stands (Asian markets only)
ThinkPlus charger
A GaN charger with a USB-C output.
They are mostly sold under the "thinkplus" branding in Asia (notably Southeast Asia) and are popular there.
Historical models
ThinkPad 235
The Japan-only ThinkPad 235 (or Type 2607) was the progeny of the IBM/Ricoh RIOS project. Also known as Clavius or Chandra2, it contains unusual features like the presence of three PCMCIA slots and the use of dual camcorder batteries as a source of power. It features an Intel Pentium MMX 233 MHz CPU, support for up to 160 MB of EDO memory, and a built-in hard drive with UDMA support. Hitachi marketed the Chandra2 as the Prius Note 210.
ThinkPad 240
The ultraportable ThinkPad 240 (X, Z) started with an Intel Celeron processor and went up to the 600 MHz Intel Pentium III. In models using the Intel 440BX chipset, the RAM was expandable to a maximum of 320 MB with a BIOS update. Models had a screen and an key pitch (a standard key pitch is ). They were also one of the first ThinkPad series to contain a built-in Mini PCI card slot (form factor 3b). The 240s had no optical disc drive and used an external floppy drive. An optional extended battery sticks out the bottom like a bar and props up the back of the laptop. Weighing in at , these were the smallest and lightest ThinkPads ever made.
300 Series
The 300-series (300, 310, 340, 345, 350, 360, 365, 370, 380, 385, 390 (all with various sub-series)) was a long-running value series ranging from the 386SL/25 processor all the way to the Pentium III 450. The 300 series was offered as a slightly lower-priced alternative to the 700 series, with a few exceptions.
The ThinkPad 360P and 360PE were low-end versions of the ThinkPad 750P, and were unique in the 300 series in that they could be used as a regular laptop or transformed into a tablet by flipping the monitor on top of itself. Retailing for $3,699 in 1995, the 360PE featured a touch-sensitive monitor operated with a stylus; the machine could run operating systems that supported the touch screen, such as PenDOS 2.2.
500 Series
The 500-series (500, 510, 560 (E, X, Z), 570 (E)) was the main line of ultraportable ThinkPads. Ranging from the 486SLC2-50 Blue Lightning to the Pentium III 500, these machines had only a hard disk on board. Any other drives were external (or, in the 570's case, in the UltraBase). They weighed in at around .
600 Series
The 600-series (600, 600E, and 600X) are the direct predecessors of the T series. The 600-series packed an SVGA or XGA TFT LCD, a Pentium MMX, Pentium II or III processor, a full-sized keyboard, and an optical bay into a package weighing roughly . IBM was able to create this light, fully featured machine by using lightweight but strong carbon fiber composite plastics. The battery shipped with some 600-series models had a manufacturing defect that left it vulnerable to memory effect and resulted in poor battery life, but this problem can be avoided by use of a third-party battery.
700 Series
The 700-series was a high-end ThinkPad line. The released models (700T, 710T and 730T tablets; 700, 701, 720, 730, 750, 755, 760, 765, 770 laptops with various sub-models) could be configured with the best screens, largest hard drives and fastest processors available in the ThinkPad range, and some features could be found only on 700 series models. The first successful ThinkPad, introduced in 1992, was the 700T, a tablet PC without a keyboard or mouse.
800 Series
The ThinkPad 800-series (800/820/821/822/823/850/851/860) were unique as they were based on the PowerPC architecture rather than the Intel x86 architecture. Most of the 800 Series laptops used the PowerPC 603e CPU, at speeds of 100 MHz, or 166 MHz in the 860 model. The PowerPC ThinkPad line was considerably more expensive than the standard x86 ThinkPads — even a modestly configured 850 cost upwards of $12,000. All of the PowerPC ThinkPads could run Windows NT 3.51 and 4.0, AIX 4.1.x, and Solaris Desktop 2.5.1 PowerPC Edition.
WorkPad
Based on ThinkPad design although branded WorkPad, the IBM WorkPad z50 was a Handheld PC running Windows CE, released in 1999.
i Series (1998–2002)
The ThinkPad i Series was introduced by IBM in 1999 and was geared towards a multimedia focus, with many models featuring independent integrated CD players and multimedia access buttons. The 1400 and 1500 models were designed by Acer for IBM under contract (and are thus nicknamed the AcerPad) and featured hardware similar to that found in Acer laptops (including ALi chipsets, three-way audio jacks, and internal plastics painted with copper paint). Some of the i Series ThinkPads, particularly the Acer-developed models, are prone to broken hinges and stress damage on the chassis.
Notable ThinkPads in the i Series lineup are the S3x (S30/S31) models, featuring a unique keyboard and lid design that allows a standard-size keyboard to fit in a chassis that otherwise could not support the protruding keyboard. These models were largely available only in Asia Pacific. IBM offered an optional piano-black lid on these models (designed by the Yamato Design lab). This is the only ThinkPad since the 701C to feature a special design to accommodate a keyboard that is physically larger than the laptop, and also the only ThinkPad (aside from the Z61) to deviate from the standard matte lid.
A Series (2000–2004)
The A-series was developed as an all-around productivity machine, equipped with hardware powerful enough to make it a desktop replacement. Hence it was the biggest and heaviest ThinkPad series of its time, but also had features not even found in a T-series of the same age. The A-series was dropped in favor of the G-series and R-series.
The A31 was released in 2002 as a desktop replacement system equipped with a Pentium 4-M processor clocked at 1.6, 1.8, 1.9, or 2.0 GHz (the maximum supported is 2.6 GHz), an ATI Mobility Radeon 7500, 128 or 256 MB of PC2100 RAM (officially upgradable to 1 GB, unofficially to 2 GB), IBM High Rate Wireless (PRISM 2.5-based, modifiable to support WPA-TKIP), and a 20, 30, or 40 GB hard disk drive.
R Series (2001–2010, 2018-2019)
The R Series was a budget line, beginning with the R30 in 2001 and ending with the R400 and R500 presented in 2008.
The successors of the R400 and R500 models are the ThinkPad L series L412 and L512 models.
A notable model is the R50p with an optional 15" IPS LCD screen (introduced in 2003).
The R series was reintroduced in 2018 (for the Chinese market only) with the same hardware as E series models, but with an aluminum display cover, a discrete GPU, a TPM chip, and a fingerprint reader.
G Series (2003–2006)
The G-series consisted of only three models, the G40, G41 and G50. Being large and heavy machines, equipped with powerful desktop processors, this line of ThinkPads consequently served mainly as replacements for desktop computers.
Z Series (2005–2007)
The Z series was released as a high-end multimedia laptop; as a result this was the first ThinkPad to feature a widescreen (16:10 aspect ratio) display. The Z-Series was also unique in that certain models featured an (optional) titanium lid. Integrated WWAN and a webcam were also found on some configurations. The series has only ever included the Z60 (Z60m and Z60t) and Z61 (Z61m, Z61t and Z61p); the latter of which is the first Z-Series ThinkPad with Intel "Yonah" Dual-Core Technology. The processor supports Intel VT-x; this is disabled in the BIOS but can be turned on with a BIOS update. Running fully virtualised operating systems via Xen or VMware is therefore possible. Despite the Z61 carrying the same number as the T61, the hardware of the Z61 is closer to a T60 (and likewise the Z60 being closer to a T43).
ThinkPad Reserve Edition (2007)
The "15-year anniversary" ThinkPad model (based on the X60s laptop).
This model was initially known inside Lenovo as the "Scout". This was the name of the horse ridden by Tonto, the sidekick from the 1950s television series The Lone Ranger. Lenovo envisioned the Scout as a very high-end ThinkPad that would be analogous to a luxury car. Each unit was covered in fine leather embossed with its owner's initials. Extensive market research was conducted on how consumers would perceive this form factor. It was determined that they appreciated that it emphasised warmth, nature, and human relations over technology. The Scout was soon renamed the ThinkPad Reserve Edition. It came bundled with premium services including a dedicated 24-hour technical support hotline that would be answered immediately. It was released in 2007 and sold for $5,000 in the United States.
SL Series (2008–2010)
The SL Series was launched in 2008 as a low-end ThinkPad aimed mainly at small businesses. These lacked several traditional ThinkPad features, such as the ThinkLight, magnesium alloy roll cage, UltraBay, and lid latch, and used a 6-row keyboard with a different layout than the traditional 7-row ThinkPad keyboard; also, SL-series models have IdeaPad-based firmware. Models offered included 13.3" (SL300), 14" (SL400 and SL410), and 15.6" (SL500 and SL510).
W Series (2008–2015)
The W-series laptops were introduced by Lenovo as workstation-class laptops with their own letter designation, descended from prior ThinkPad T series models suffixed with 'p' (e.g. T61p), and are geared towards CAD users, photographers, power users, and others who need a high-performance system for demanding tasks. The W-series laptops were launched in 2008, at the same time as the Intel Centrino 2, marking an overhaul of Lenovo's product lineup. The first two W-series laptops introduced were the W500 and the W700.
Previously available were the W7xx series (17" widescreen model), the W500 (15.4" 16:10 ratio model), the W510 (15.6" 16:9 ratio model), and W520 (15.6" 16:9 ratio model). The W700DS and the W701DS both had two displays: a 17" main LCD and a 10" slide-out secondary LCD. The W7xx series were also available with a Wacom digitizer built into the palm rest. These high-performance workstation models offered more high-end components, such as quad core CPUs and higher-end workstation graphics compared to the T-series, and were the most powerful ThinkPad laptops available. Until the W540, they retained the ThinkLight, UltraBay, roll cage, and lid latch found on the T-series. The W540 release marked the end of the lid latch, ThinkLight, and hot-swappable UltraBays found in earlier models.
The ThinkPad W-series laptops from Lenovo are described by the manufacturer as being "mobile workstations", and suit that description by being physically on the larger side of the laptop spectrum, with screens ranging from 15" to 17" in size. Most W-series laptops offer high-end quad-core Intel processors with an integrated GPU as well as an Nvidia Quadro discrete GPU, utilizing Nvidia Optimus to switch between the two GPUs as required. Notable exceptions are the W500, which has ATI FireGL integrated workstation-class graphics, and the W550s, which is an Ultrabook-specification laptop with only a dual-core processor. The W-series laptops offer ISV certifications from various vendors such as Adobe Systems and Autodesk for CAD and 3D modeling software.
The ThinkPad W series has been discontinued and replaced by the P series mobile workstations.
Edge Series (2010)
The Edge Series was released early in 2010 as small business and consumer-end machines. The design was a radical departure from the traditional black, boxy ThinkPad design, with glossy surfaces (an optional matte finish on later models), rounded corners, and silver trim. They were also offered in red, a first for the traditionally black ThinkPads. Like the SL, this series was targeted towards small businesses and consumers, and lacks the roll cage, UltraBay, lid latch, and ThinkLight of traditional ThinkPads (though the 2011 E220s and E420s had ThinkLights). This series also introduced an island-style keyboard with a significantly different layout.
Models included 13.3" (Edge 13), 14" (Edge 14), and 15.6" (Edge 15) sizes. An 11.6" (Edge 11) model was offered, but not available in the United States. The latest models of E series can be offered with Edge branding, but this naming is optional and uncommon.
S Series (2012–2014)
The S Series is positioned as a mid-range ThinkPad offering, containing ultrabooks derived from the Edge Series. As of August 2013, the S Series includes S531 and S440 models; their cases are made of aluminum and magnesium alloy, available in silver and gunmetal colors.
ThinkPad Twist (2012)
The Lenovo ThinkPad Twist (S230u) is a laptop/tablet computer hybrid aimed at high-end users. The Twist gets its name from its screen's ability to twist in a manner that converts the device into a tablet. The Twist has a 12.5" screen and makes use of Intel's Core i7 processor and SSD technology in lieu of a hard drive.
In a review for Engadget Dana Wollman wrote, "Lately, we feel like all of our reviews of Windows 8 convertibles end the same way. The ThinkPad Twist has plenty going for it: a bright IPS display, a good port selection, an affordable price and an unrivaled typing experience. Like ThinkPads past, it also offers some useful software features for businesses lacking dedicated IT departments. All good things, but what's a road warrior to do when the battery barely lasts four hours? Something tells us the Twist will still appeal to Lenovo loyalists, folks who trust ThinkPad's build quality and wouldn't be caught dead using any other keyboard. If you're more brand-agnostic, though, there are other Windows 8 convertibles with comfortable keyboards – not to mention, sharper screens, faster performance and longer battery life."
ThinkPad Helix (2013–2015)
The Helix is a convertible laptop satisfying both tablet and conventional notebook users. It uses a "rip and flip" design that allows the user to detach the display and then replace it facing in a different direction. It sports an 11.6" Full HD (1920 × 1080) display, with support for Windows 8 multi-touch. As all essential processing hardware is contained in the display assembly and it has multitouch capability, the detached monitor can be used as a standalone tablet computer. The Helix's high-end hardware and build quality, including Gorilla Glass, stylus-based input, and Intel vPro hardware-based security features, are designed to appeal to business users.
In a review published in Forbes Jason Evangelho wrote, "The first laptop I owned was a ThinkPad T20, and the next one may very likely be the ThinkPad Helix which Lenovo unveiled at CES 2013. In a sea of touch-inspired Windows 8 hardware, it's the first ultrabook convertible with a form factor that gets everything right. The first batch of Windows 8 ultrabooks get high marks for their inspired designs, but aren't quite flexible enough to truly be BYOD (Bring Your Own Device) solutions. Lenovo's own IdeaPad Yoga came close, but the sensation of feeling the keyboard underneath your fingers when transformed into tablet mode was slightly jarring. Dell's XPS 12 solved that problem with its clever rotating hinge design, but I wanted the ability to remove the tablet display entirely from both of those products."
ThinkPad 13 (2016–2017)
The ThinkPad 13 (Also known as the Thinkpad S2 in Mainland China) is a "budget" model with a 13-inch screen. Versions running Windows 10 and Google's Chrome OS were options. The most powerful configuration had a Skylake Core i7 processor and a 512GB SSD. Connectivity includes HDMI, USB 3.0, OneLink+, USB Type-C, etc. It weighs and is thick. As of 2017, a second generation Ultrabook model has been released with up to a Kaby Lake Core i7 processor and a FHD touchscreen available in certain countries. This lineup was merged into the L-Series in 2018, with the L380 being the successor to the 13 Second Generation.
25th anniversary Retro ThinkPad (2017)
Lenovo released the 25th anniversary Retro ThinkPad 25 in October 2017. The model is based on the T470, the main differences being the 7-row "Classic" keyboard with the layout found on the −20 Series, and a logo with a splash of colour reminiscent of the IBM era. The last ThinkPad models with the 7-row keyboard were introduced in 2011.
A Series (2017–2018)
In September 2017, Lenovo announced two ThinkPad models featuring AMD's PRO chipset technology – the A275 and A475. This revived the A Series nameplate, not seen since the early 2000s when ThinkPads were under IBM's ownership; however, the "A" moniker likely emphasised the use of AMD technology rather than the comparative product segment (workstation class) of the previous line.
While this was not the first time Lenovo had offered an AMD-derived ThinkPad, it was the first released as an alternative premium offering to the established T Series and X Series ThinkPads, which use Intel chipsets instead.
A275 and A475
The A275 is a 12.5" ultraportable based on the Intel-derived X270 model. Weighing in at 2.9 pounds (1.31 kg), this model features AMD Carrizo or Bristol Ridge APUs, AMD Radeon R7 graphics, and AMD DASH (Desktop and mobile Architecture for System Hardware) for enterprise computing.
The A475 is a 14" mainstream portable computer based on the Intel-derived T470 model. Weighing 3.48 pounds (1.57 kg), like the A275 it features AMD Carrizo or Bristol Ridge APUs, AMD Radeon R7 graphics, and AMD DASH (Desktop and mobile Architecture for System Hardware) for enterprise computing.
A285 and A485
The A285 is a 12.5" laptop which is an upgraded version of the A275. Weighing in at , this model uses an AMD Raven Ridge APU with integrated Vega graphics, specifically the Ryzen 5 Pro 2500U. The laptop also contains a discrete Trusted Platform Module (dTPM) for data encryption and password protection, supporting TPM 2.0. Optional security features include a fingerprint scanner and a smart card reader. The display's native resolution can be either or depending on the configuration.
The A485 is a 14" laptop which is an upgraded version of the A475. Weighing , this model uses AMD's Raven Ridge APUs with integrated Vega graphics, and unlike the A285 it can use multiple models of Raven Ridge APUs. The laptop also contains a discrete Trusted Platform Module (dTPM) for data encryption and password protection, supporting TPM 2.0. Optional security features include a fingerprint scanner and a smart card reader. The display's native resolution can be either or depending on the configuration.
Rivals of ThinkPad
Many companies produce laptops similar to Lenovo/IBM ThinkPads, targeting the same market audience. These laptops often offer features similar to those of ThinkPad computers, such as a pointing stick or active hard drive protection. The ThinkPad series' main rivals have long been the Dell Latitude and HP EliteBook laptops.
Dell:
Dell Latitude 7xxx: Rivals ThinkPad T and X series
Dell Latitude 5xxx: Rivals ThinkPad L and E series
Dell Latitude 3xxx: Rivals ThinkPad E series
Dell Vostro 3xxx: Rivals ThinkPad E series
Dell XPS 9xxx: Indirectly rivals ThinkPad X1 series
Dell Precision 7xxx: Rivals ThinkPad P1 series
HP:
HP EliteBook 6xx: Rivals ThinkPad L series
HP EliteBook 8xx: Rivals ThinkPad T and X series
HP EliteBook 1040: Rivals ThinkPad X1 Carbon
HP Elite Dragonfly: Rivals ThinkPad X1 Nano
HP ZBook Firefly: Rivals ThinkPad P14, P15 and T15p
HP ZBook Power: Indirectly rivals ThinkPad P15 and P1
HP ZBook Studio: Rivals ThinkPad P1
HP ZBook Fury: Rivals ThinkPad P15
HP ProBook 4xx and 6xx: Rivals ThinkPad L and E series
Acer:
Acer TravelMate P6: Rivals ThinkPad X1 Carbon
Acer TravelMate P4: Rivals ThinkPad T14 and T14s
Acer TravelMate P2: Rivals ThinkPad L series
Acer TravelMate Spin B3: Rivals ThinkPad 11e Yoga
Acer Swift 7: Indirectly rivals ThinkPad X1 Nano
Fujitsu:
Fujitsu LifeBook U9xxx: Rivals ThinkPad X1 series
Fujitsu LifeBook U7xxx: Rivals ThinkPad T and X series
Fujitsu LifeBook U5xxx: Rivals ThinkPad L series
Fujitsu LifeBook U3xxx: Rivals ThinkPad E series
Dynabook (formerly Toshiba):
Dynabook Portégé: Rivals ThinkPad X series
Dynabook Tecra Xxx: Rivals ThinkPad T series
VAIO (formerly made by Sony):
VAIO Z: Rivals ThinkPad X1 Carbon and T14s
VAIO SX: Indirectly rivals ThinkPad L13
Apple:
Apple MacBook Pro: Indirectly rivals Thinkpad X1 Extreme, X1 Carbon, Z series and T14s
Apple MacBook Air: Indirectly rivals ThinkPad Z series, X1 Nano and X1 Carbon
Asus:
ASUSPRO Pxxx: Rivals ThinkPad T series
Asus Zenbook: Rivals ThinkPad X1 series
Asus ExpertBook: Rivals ThinkPad L and E series
Microsoft:
Microsoft Surface Pro: Rivals ThinkPad X1 Tablet
Microsoft Surface Laptop: Indirectly rivals ThinkPad X1 Carbon and X1 Nano
Huawei:
Huawei MateBook X Pro: Rivals ThinkPad X1 Carbon
Huawei MateBook X: Indirectly rivals ThinkPad X1 Nano
See also
ThinkBook
IBM/Lenovo ThinkCentre and ThinkStation desktops
List of IBM products
HP EliteBook
Dell Latitude and Precision
Fujitsu Lifebook and Celsius
Acer TravelMate
References
External links
ThinkPad models on ThinkWiki
Withdrawn models Specs Books
Think
Consumer electronics brands
Computer-related introductions in 1992
Products introduced in 1992
2005 mergers and acquisitions
Divested IBM products |
4095896 | https://en.wikipedia.org/wiki/Leap%20%28computer%20worm%29 | Leap (computer worm) | The Oompa-Loompa malware, also called OSX/Oomp-A or Leap.A, is an application-infecting, LAN-spreading worm for Mac OS X, discovered by the Apple security firm Intego on February 14, 2006. Leap cannot spread over the Internet, and can only spread over a local area network reachable using the Bonjour protocol. On most networks this limits it to a single IP subnet.
Delivery and infection
The Leap worm is delivered over the iChat instant messaging program as a gzip-compressed tar file called . For the worm to take effect, the user must manually invoke it by opening the tar file and then running the disguised executable within.
The executable is disguised with the standard icon of an image file, and claims to show a preview of Apple's next OS. Once it is run, the worm will attempt to infect the system.
For non-"admin" users, it will prompt for the computer's administrator password in order to gain the privilege to edit the system configuration. It does not infect applications on disk, but rather infects them as they are loaded, using a system facility called "apphook".
Leap only infects Cocoa applications, and it does not infect applications owned by the system (including the apps that come pre-installed on a new machine), but only apps owned by the user who is currently logged in. Typically, that means apps that the current user has installed by drag-and-drop, rather than by Apple's installer system. When an infected app is launched, Leap tries to infect the four most recently used applications. If those four don't meet the above criteria, then no further infection takes place at that time.
Payload
Once activated, Leap then attempts to spread itself via the user's iChat Bonjour buddy list. It does not spread using the main iChat buddy list, nor over XMPP. (By default, iChat does not use Bonjour and thus cannot transmit this worm.)
Leap does not delete data, spy on the system, or take control of it, but it does have one harmful effect: due to a bug in the worm itself, an infected application will not launch. This is helpful in that it prevents people from continuing to launch the infected program.
Protection and recovery
A common method of protecting against this type of computer worm is to avoid launching files from untrusted sources. An existing admin account can be "declawed" by unchecking the box "Allow this user to administer this computer." (At least one admin account must remain on the system in order to install software and change vital system settings, even if it is an account created solely for that purpose.)
Recovering after a Leap infection involves deleting the worm files and replacing infected applications with fresh copies. It does not require re-installing the OS, since system-owned applications are immune.
References
External links
Intego Analysis - OSX/Leap.A aka OSX/Oompa-Loompa
Macworld- Mac Security: Antivirus
Macworld test of Leap A, with recovery tips
Leap-A malware: what you need to know
Computer worms
MacOS malware |
46245067 | https://en.wikipedia.org/wiki/Lingotek | Lingotek | Lingotek is a cloud-based translation services provider, offering translation management software and professional linguistic services for web content, software platforms, product documentation and electronic documents.
Company History
Lingotek was founded in 2006 and received $1.7 million in Series A-1 venture capital funding from Canopy Ventures and Flywheel Ventures to develop language search engine technologies.
In 2007, the software development and translation solutions company secured $1.6 million in Series A-2 financing. The A-2 round was led by Canopy Ventures of Lindon, Utah, contributing $1 million. Previous investors including Flywheel Ventures also participated in the A-2 round. The funding was to expand its sales and marketing efforts and further increase Lingotek's presence in the language translation market.
On July 16, 2008, Lingotek received a strategic investment with In-Q-Tel, a strategic, not-for-profit investment firm that works to identify, adapt, and deliver innovative technology solutions to support the mission of the Central Intelligence Agency and the broader U.S. Intelligence Community. Launched by the CIA in 1999 as a private, independent organization, the In-Q-Tel function is to identify and partner with companies developing technologies that serve the national security interests of the United States. In exchange, Lingotek was to provide a platform to facilitate more efficient, faster language translation. The agreement funded the development and enhancement of new translation solutions, including breakthrough global collaboration translation technology. The In-Q-Tel investment was also part of a Series B funding round, with participation by Flywheel Ventures, and Canopy Ventures. The funds were used to expand business operations, distribution, and further develop Lingotek's language technology capabilities.
Products
In 2006, Lingotek was the first U.S. company to launch a fully online, web-based, computer-assisted translation (CAT) system, and it pioneered the integration of translation memories (TM) with mainframe-powered machine translation (MT). While the translation products of more than a dozen European companies appeared in the U.S. market, the only standalone tool developed in the U.S. to directly support human translators was Lingotek, launched in 2006. The developer was based in Utah and came from within the LDS Church, which uses Lingotek as its preferred tool for its crowdsourced translation. While Lingotek was originally marketed to government entities, translation companies, and freelance translators, the current marketing effort is focused on larger corporations with translation needs.
In August, 2006, Lingotek launched a beta version of its collaborative language translation service that enhanced a translator's efficiency by quickly finding meaning-based translated material for reuse. Branded as the Lingotek Collaborative Translation Platform, the service was based on three tiers of translation: automatic, community, and professional. Lingotek's language search engine indexed linguistic knowledge from a growing repository of multilingual content and language translations, instead of web pages. Users could then access its database of previously translated material to find more specific combinations of words for re-use. Such meaning-based searching maintained better style, tone, and terminology. Lingotek ran within most popular web browsers, including initial support for Internet Explorer and Firefox. Lingotek supported Microsoft Office, Microsoft Word, Rich Text Format (RTF), Open Office, HTML, XHTML, and Microsoft Excel formats, thereby allowing users to upload such documents directly into Lingotek. Lingotek also supported existing translation memory files that were Translation Memory eXchange (TMX)-compliant memories, thus allowing users to import TMX files into both private and public indices.
In June 2007, Lingotek began offering free access to its language search engine and other web 2.0-based translation-related software tools. Free access to the language search engine included both open and closed translation memory (TM). The Lingotek project management system helped project managers track translation projects in real-time. The system's alignment tool, glossary capabilities, version tracking and other tools were all included and available at no charge to all Lingotek users.
In 2008, Lingotek moved to Amazon Web Services's on-demand cloud computing platform. The Lingotek TMS has long been used internally and for government agencies. The company's first version of the product was a hosted, multi-tenant web app in 2006, which moved into the Amazon cloud in 2008. Though chiefly known in the industry as a technology-enabled LSP, Lingotek has been licensing the system separately, to customers that don't have a language services relationship. Due to its connectivity, scalability, security, and strong feature set, this is an option that enterprises should consider for their own translation environments. Lingotek's system was developed specifically with mass collaboration in mind.
The company introduced software-as-a-service (SaaS) collaborative translation technology in 2009, which combined the workflow and computer-aided translation (CAT) capabilities of human and machine translation into one application. Organizations can upload new projects, assign translators (paid or unpaid), check the status of current projects in real time, and download completed documents from any computer with web access.
In 2010, Lingotek re-positioned its Collaborative Translation Platform (CTP) as a software-as-a-service (SaaS) product which combined machine translation, real-time community translation, and management tools.
Lingotek's cloud-based CAT system was available on the market in 2012. The translation system can process text files and offers comprehensive support for the localization of web page files in HTML. In addition, the Lingotek CAT tools can handle several file types, including:
Microsoft Office, Microsoft Word, Microsoft PowerPoint, and Microsoft Excel;
Adobe FrameMaker files;
Files with standardized localization formats: XML Localisation Interchange File Format (.xliff), .ttx (TRADOStag XML) files, and .po (portable object)
Java properties files;
OpenDocument files;
Windows resource files;
Mac OS and OS X; and
TMX (Translation Memory eXchange).
Lingotek's stand‐alone translation management system (TMS) can be used to manage translation workflow for many different types of assets, from documents to websites. Since different types of content require different workflows, and often different service providers, Lingotek's enterprise TMS enables operators to manage not just the translation process, but also the vendor supply chain.
In 2010, Lingotek created a solution that integrated the Collaborative Translation Platform with other applications. The Lingotek - Inside API (application programming interface) allows users to translate content in web applications such as SharePoint, Drupal, Salesforce.com, Jive Social CRM, and Oracle universal content management (UCM).
Lingotek translation connectors work in conjunction with other content creation tools such as Drupal and WordPress, that integrate with its TMS. In 2016, the company was named a Top 30 Drupal Contributor.
In 2014, Lingotek's TMS added multi-vendor translation, which enables brands to choose any translation agency for in-workflow translation.
Awards and recognition
Lingotek was named Comparably Best Places to Work 2017 and received an award for Utah Best in State Language Services 2017. In 2016, Lingotek was identified as a Top 30 Drupal Contributor. The company was a Bronze Winner of the 2015 Edison Award Verbal Communications; named a CIO Microsoft 100 Solution Provider by CIO Review; a Gartner Cool Vendor of the Year (2012), and received the Stevie Award for Best New Product or Service of the Year – Software as a Service (2010). In 2006, Lingotek was named Most Innovative Product by Utah Valley Entrepreneurial Forum.
References
Translation companies |
2437747 | https://en.wikipedia.org/wiki/Zhangjiang%20Hi-Tech%20Park | Zhangjiang Hi-Tech Park | The Zhangjiang Hi-Tech Park is a technology park in the Pudong district of Shanghai, China. It is operated by Zhangjiang Hi-Tech Park Development Co., Ltd. The park specializes in research in life sciences, software, semiconductors, and information technology.
As of 2009, there were 110 research and development institutions, 3,600 companies and 100,000 workers located in the technology park. In some circles the park is also known as China's Silicon Valley.
History
The Zhangjiang Hi-Tech Park was established in July 1992. It is situated in the Pudong New Area with a total area of . As of 2018, it has bases such as the National Shanghai Biomedical Science and Technology Industry Base, National Information Industry Base, National Integrated Circuit Industry Base, National Semiconductor Lighting Industry Base, National 863 Information Security Fruit Industrialization (Eastern) Base, National Software Industry Base, National Software Export Base, National Cultural Industry Model Base, and National Online Games and Animation Industry Development Base. It also contains the National Torch Entrepreneurship Park and the National Overseas Student Pioneering Park. The park is made up of the following areas: the Technical Innovation Zone, the Hi-Tech Industry Zone, the Scientific Research and Education Zone, and the Residential Zone.
The park's center area now has 400 research and development institutions. In 2013, Shanda opened sales of a real estate investment project in the park and accepted payment for apartments with bitcoin. Shanda World opened in the park in 2018.
In August 1999, the Shanghai Municipal Committee and Municipal Government developed a strategy and accompanying report called "Focus on Zhangjiang." The report identified that investments from the IC industry, the software industry, and the biomedical industry would be targeted. They were seen as the industries which should have leading roles in innovation and that would drive future economic growth and higher employment in Zhangjiang Town and the Hi-Tech Park.
The park is classified as a Special Economic Zone.
Presence
Major companies that have a presence in the park include life science firms GSK, Roche, Eli Lilly, Pfizer, Novartis, GE, and AstraZeneca. Internet technology firms include Hewlett-Packard, Lenovo, Intel, and Infineon. Software firms include IBM, Citibank, eBay, Tata Consultancy Services, Infosys, and SAP AG. Chemical companies include Wison Group, DSM, Henkel, Dow, Dupont, and Rohm and Haas. Semiconductor firms include Semiconductor Manufacturing International Corporation (SMIC), Hua Hong NEC, Grace Semiconductor, Spreadtrum, and VeriSilicon. Other firms present include Asia-Pacific Software, Sony, Bearing Point, Kyocera, Cognizant, TCS China, Satyam and Applied Materials. There are also a multitude of biotech firms, over a hundred of them being domestically owned companies.
ShanghaiTech University, founded in 2013, aims to be the academic center of the Zhangjiang Hi-Tech Park, alongside satellite campuses of Fudan University and Shanghai Jiao Tong University.
Location
Road links
Zhangjiang Hi-Tech Park can be reached via the inner or outer ring roads that serve the Shanghai metropolitan area. The park is 3.6 km from Nanpu Bridge and 13 km from People's Square. It is 9 km from The Bund.
Longdong Avenue on the park's northern boundary is the main road connecting the inner ring road and Shanghai Pudong International Airport. Luonan Avenue on the park's western boundary is the feeder road connecting the inner ring road and outer ring road.
Air links
Zhangjiang Hi-Tech Park is located in the Pudong district. It is 21 km from Pudong Airport and 25 km from Hongqiao Airport.
Rail links
Zhangjiang Hi-Tech Park can be reached by taking Line 2 of the Shanghai Metro to Zhangjiang Hi-Tech Park station, Jinke Road, or Guanglan Road. An extension of Line 13 goes through the center of Zhangjiang. The Zhangjiang Tram system, which runs inside the zone and connects to the metro line, is also available.
References
External links
Official website
China wants to rule on AI and American internet giants are jumping in
Operator of the Park
K+R Planning & Urban Design
1992 establishments in China
Economy of Shanghai
Geography of Shanghai
Special Economic Zones of China |
27362474 | https://en.wikipedia.org/wiki/6002%20Eetion | 6002 Eetion | 6002 Eetion, provisional designation 1988 RO, is a mid-sized Jupiter trojan from the Trojan camp, approximately 40 kilometers in diameter. It was discovered by Poul Jensen at the Brorfelde Observatory in 1988 and numbered in June 1994. The dark Jovian asteroid has a rotation period of 12.9 hours. In 2021, it was named from Greek mythology after King Eetion, who was killed by Achilles during the raid on Thebe.
Discovery
Eetion was discovered on 8 September 1988 by Danish astronomer Poul Jensen at the Brorfelde Observatory near Holbæk, Denmark, who on the very same night also discovered another Jupiter trojan and several other main-belt asteroids.
Orbit and classification
Eetion is a dark Jovian asteroid in a 1:1 orbital resonance with Jupiter. It is located in the trailing Trojan camp at the gas giant's L5 Lagrangian point, 60° behind its orbit. It is also a non-family asteroid of the Jovian background population.
It orbits the Sun at a distance of 4.7–5.7 AU once every 11 years and 11 months (4,361 days; semi-major axis of 5.22 AU). Its orbit has an eccentricity of 0.09 and an inclination of 16° with respect to the ecliptic. A first precovery was taken at Palomar Observatory in September 1953, extending the body's observation arc by 35 years prior to its official discovery observation at Brorfelde.
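The quoted period and semi-major axis are mutually consistent. As a rough check (a worked illustration using Kepler's third law, not a figure taken from the source itself), the orbital period in years follows from the semi-major axis in astronomical units:

   T = a^{3/2} = 5.22^{3/2} ≈ 11.9 yr ≈ 4,360 d

which matches the stated period of about 11 years and 11 months (4,361 days).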
Numbering and naming
This minor planet was numbered by the Minor Planet Center on 23 June 1994. On 29 November 2021, the IAU's Working Group Small Body Nomenclature named it from Greek mythology after King Eetion of Thebe Hypoplakia, father of Andromache and father-in-law of Hector. Eetion was killed by Achilles during the raid on Thebe.
Physical characteristics
This Jupiter trojan is assumed to be a carbonaceous C-type asteroid.
Rotation period
In February 1993, Eetion was observed by astronomers Stefano Mottola and Mario Di Martino with the ESO 1-metre telescope and its DLR MkII CCD camera at La Silla in Chile. The photometric observations were used to build a lightcurve showing a rotation period of 12.9 hours. It was the body's first rotation period determined in the literature.
Diameter and albedo
According to the survey carried out by NASA's Wide-field Infrared Survey Explorer with its subsequent NEOWISE mission, Eetion measures 40.4 kilometers in diameter and its surface has an albedo of 0.075, while the Collaborative Asteroid Lightcurve Link assumes a standard albedo for a carbonaceous asteroid of 0.057 and calculates a diameter of 42.23 kilometers, based on an absolute magnitude of 10.6.
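The two size estimates differ only because of the assumed albedo. As a rough illustration (this conversion is the standard asteroid relation between diameter, albedo, and absolute magnitude, not a formula quoted in the source), the diameter D in kilometres follows from the absolute magnitude H and geometric albedo p_V as:

   D = (1329 / √p_V) × 10^(−H/5) = (1329 / √0.057) × 10^(−10.6/5) ≈ 42.2 km

which reproduces the CALL figure of 42.23 kilometers.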
References
External links
Asteroid Lightcurve Database (LCDB), query form (info )
Discovery Circumstances: Numbered Minor Planets (5001)-(10000) – Minor Planet Center
Asteroid (6002) 1988 RO at the Small Bodies Data Ferret
006002
Discoveries by Poul Jensen (astronomer)
Minor planets named from Greek mythology
Named minor planets
19880908 |
378274 | https://en.wikipedia.org/wiki/Damn%20Small%20Linux | Damn Small Linux | Damn Small Linux (DSL) was a computer operating system for the x86 family of personal computers. It is free and open-source software under the terms of the GNU GPL and other free and open source licenses. It was designed to run graphical user interface applications on older PC hardware, for example, machines with 486 and early Pentium microprocessors and very little random-access memory (RAM). DSL is a Live CD with a size of 50 megabytes (MB). What originally began as an experiment to see how much software could fit in 50 MB eventually became a full Linux distribution. It can be installed on storage media with small capacities, like bootable business cards, USB flash drives, various memory cards, and Zip drives.
History
DSL was originally conceived and maintained by John Andrews. For five years the community included Robert Shingledecker who created the MyDSL system, DSL Control Panel and other features. After issues with the main developers, Robert was, by his account, exiled from the project. He currently continues his work on Tiny Core Linux which he created in April 2008.
DSL was originally based on Model-K, a 22 MB stripped down version of Knoppix, but soon after was based on Knoppix proper, allowing much easier remastering and improvements.
System requirements
DSL supports only x86 PCs. The minimum system requirements are a 486 processor and 8 MB of RAM. DSL has been demonstrated browsing the web with Dillo, running simple games and playing music on systems with a 486 processor and 16 MB of RAM. The system requirements are higher for running Mozilla Firefox and optional add-ons such as the OpenOffice.org office suite.
Features
As of its release on November 18, 2008, version 4.4.10 was the current release of DSL. It includes the following software:
Text editors: Beaver, Nano, Vim
File managers: DFM, emelFM
Graphics: mtPaint (raster graphics editor), xzgv (image viewer)
Multimedia: gphone, XMMS with MPEG-1 and Video CD (VCD) support
Office: Siag Office (spreadsheet program), Ted (word processor) with spell checker, Xpdf (viewer for Portable Document Format (PDF) documents)
Internet:
Web browsers: Dillo, Firefox, Netrik
Sylpheed (E-mail client)
naim (AOL Instant Messenger (AIM), ICQ, and IRC client)
AxyFTP (File Transfer Protocol (FTP) client), BetaFTPD (FTP server)
Monkey (web server)
Server Message Block (SMB) client
Rdesktop (Remote Desktop Protocol (RDP) client, Virtual Network Computing (VNC) viewer
Others: Dynamic Host Configuration Protocol (DHCP) client, Secure Shell (SSH) and secure copy protocol (SCP) client and server; Point-to-Point Protocol (PPP), Point-to-Point Protocol over Ethernet (PPPoE), Asymmetric Digital Subscriber Line (ADSL) support; FUSE, Network File System (NFS), SSH Filesystem (SSHFS) support; UnionFS; generic and Ghostscript printing support; PC card, Universal Serial Bus (USB), Wi-Fi support; calculator, games, system monitor; many command-line tools
DSL has built-in scripts to download and install Advanced Packaging Tool (APT). Once APT is enabled, a user can install packages from Debian's repositories. Also, DSL hosts software ranging from large applications like OpenOffice.org and GNU Compiler Collection (GCC), to smaller ones such as aMSN, by means of the MyDSL system, which allows convenient one-click download and installing of software. Files hosted on MyDSL are called extensions. As of June 2008, the MyDSL servers were hosting over 900 applications, plugins, and other extensions.
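For illustration only (a sketch that assumes APT has already been enabled through DSL's built-in script, that the commands are run as root, and that uses rsync purely as an arbitrary example package), installing software from Debian's repositories then uses the standard Debian tools:

   apt-get update          # refresh the Debian package lists
   apt-get install rsync   # install an example package from the repositories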
Boot options
Boot options are also called "cheat codes" in DSL. Automatic hardware detection may fail, or the user may want to use something other than the default settings (language, keyboard, VGA, fail safe graphics, text mode...). DSL allows the user to enter one or more cheat codes at the boot prompt. If nothing is entered, DSL will boot with the default options. Cheat codes affect many auto-detection and hardware options. Many cheat codes also affect the GUI. The list of cheat codes can be seen at boot time and also at the DSL Wiki.
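As a purely illustrative example (the particular combination of codes is hypothetical; the authoritative list is the one shown at boot time and on the DSL Wiki), several cheat codes can be combined on a single line at the boot prompt:

   boot: dsl lang=us toram vga=normal

Here lang=us selects the US keyboard layout, toram copies the whole system into RAM, and vga=normal forces a standard VGA console mode.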
The MyDSL system
MyDSL is handled and maintained mostly by Robert Shingledecker and hosted by many organizations, such as ibiblio and Belgium's BELNET. There are 2 areas of MyDSL: regular and testing. The regular area contains extensions that have been proven stable enough for everyday use and is broken down into different areas such as apps, net, system, and uci (Universal Compressed ISO - Extensions in .uci format are mounted as a separate file system to minimize RAM use). The testing area is for newly submitted extensions that theoretically work well enough, but may have any number of bugs.
Versions and ports
Release timeline
Flavours
The standard flavour of DSL is the Live CD. There are also other versions available:
'Frugal' installation: DSL's 'cloop' image is installed, as a single file, to a hard disk partition. This is likely more reliable and secure than a traditional hard drive installation, since the cloop image cannot be directly modified; any changes made are only stored in memory and discarded upon rebooting.
'dsl-version-embedded.zip': Includes QEMU for running DSL inside Windows or Linux (see the sketch after this list).
'dsl-version-initrd.iso': Integrates the normally-separate cloop image into the initrd image; this allows network booting using PXE. Like a regular toram boot, it requires at least 128 MB of RAM.
'dsl-version-syslinux.iso': Boots using syslinux floppy image emulation instead of isolinux; for very old PCs that cannot boot with isolinux.
'dsl-version-vmx.zip': A virtual machine hard drive image that can be run in VirtualBox, VMware Workstation or VMware Player.
DSL-N: A larger version of DSL that exceeds the 50 MB limit of business-card CDs. DSL-N uses version 2 of the GTK+ widget toolkit and version 2.6 of the Linux kernel. The latest release of DSL-N, 0.1RC4, is 95 MB in size. It is not actively maintained.
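As a minimal sketch of running the Live CD image under emulation (a generic QEMU invocation given only for illustration; it is not the launcher script bundled in the embedded zip, and the ISO file name is merely an example), the embedded and virtual machine flavours above correspond roughly to:

   qemu-system-i386 -m 128 -cdrom dsl-4.4.10.iso -boot d

where -m 128 gives the guest 128 MB of RAM and -boot d boots from the emulated CD-ROM drive.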
One can also boot DSL using a boot-floppy created from one of the available floppy images ('bootfloppy.img'; 'bootfloppy-grub.img'; 'bootfloppy-usb.img'; or 'pcmciabootfloppy.img') on very old computers, where the BIOS does not support the El Torito Bootable CD Specification. The DSL kernel is loaded from the floppy disk into RAM, after which the kernel runs DSL from the CD or USB drive.
Ports and derivatives
DSL was ported to the Xbox video game console as X-DSL. X-DSL requires a modified Xbox. It can run as a Live CD or be installed to the Xbox hard drive. Users have also run X-DSL from a USB flash drive, using the USB adaptor included with Phantasy Star Online, which plugs into the memory card slot and includes one USB 1.1 port. X-DSL boots into a X11-based GUI; the Xbox controller can be used to control the mouse pointer and enter text using a virtual keyboard. X-DSL has a Fluxbox desktop, with programs for E-mail, web browsing, word processing and playing music. X-DSL can be customized by downloading extensions from the same MyDSL servers as DSL.
Linux distributions derived from Damn Small Linux include Hikarunix, used for a CD image released in 2005 that runs the game of Go, and Damn Vulnerable Linux.
Live USB
A Live USB of Damn Small Linux can be created manually or with applications like UNetbootin. See List of tools to create Live USB systems for a full list.
Status
Due to infighting among the project's originators and main developers, DSL development seemed to be at a standstill for a long time, and the future of the project was uncertain, much to the dismay of many of its users. On July 8, 2012, John Andrews (the original developer) announced that a new release was being developed. The DSL website, including the forums, which had been inaccessible, was back as well. The first release candidate of the new 4.11 was released on August 3, 2012, followed by a second on September 26. The damnsmalllinux.org site was inaccessible again from sometime in 2015 until February 2016. As of March 27, 2016, it was accessible again for some time, but as of February 10, 2019, it was inaccessible yet again; as of 2021 it was accessible.
See also
Comparison of Linux distributions
Lightweight Linux distribution
List of Linux distributions
List of Linux distributions that run from RAM
Tiny Core Linux, the project Robert Shingledecker began
References
External links
Damn Small Linux website
USB DSL tutorial
DistroWatch interview
Archive.org's DSL ISO Archive
Reviews
IBM developerWorks review
OSNews review (2004), OSNews review (2011)
Tech Source From Bohol review
Review of version 4.4.10 at IT Reviews
Knoppix
LiveDistro
Light-weight Linux distributions
Live USB
Debian-based distributions
Lua (programming language)-scripted software
Lightweight Unix-like systems
Linux distributions without systemd
Linux distributions |
45315201 | https://en.wikipedia.org/wiki/Nike-X | Nike-X | Nike-X was an anti-ballistic missile (ABM) system designed in the 1960s by the United States Army to protect major cities in the United States from attacks by the Soviet Union's intercontinental ballistic missile (ICBM) fleet during the Cold War. The X in the name referred to its experimental basis and was supposed to be replaced by a more appropriate name when the system was put into production. This never came to pass; in 1967 the Nike-X program was canceled and replaced by a much lighter defense system known as Sentinel.
The Nike-X system was developed in response to limitations of the earlier Nike Zeus system. Zeus' radars could only track single targets, and it was calculated that a salvo of only four ICBMs would have a 90% chance of hitting a Zeus base. The attacker could also use radar reflectors or high-altitude nuclear explosions to obscure the warheads until they were too close to attack, making a single-warhead attack highly likely to succeed. Zeus would have been useful in the late 1950s when the Soviets had only a few dozen missiles, but would be of little use by the early 1960s when it was believed they would have hundreds.
The key concept that led to Nike-X was that the rapidly thickening atmosphere below altitude disrupted the reflectors and explosions. Nike-X intended to wait until the enemy warheads descended below this altitude and then attack them using a very fast missile known as Sprint. The entire engagement would last only a few seconds and could take place as low as . To provide the needed speed and accuracy, as well as deal with multi-warhead attacks, Nike-X used a new radar system and building-filling computers that could track hundreds of objects at once and control salvos of many Sprints. Many dozens of warheads would need to arrive at the same time to overwhelm the system.
Building a complete deployment would have been extremely expensive, on the order of the total yearly budget of the Department of Defense. Robert McNamara, the Secretary of Defense, believed that the cost could not be justified and worried it would lead to a further nuclear arms race. He directed the teams to consider deployments where a limited number of interceptors might still be militarily useful. Among these, the I-67 concept suggested building a lightweight defense against very limited attacks. When the People's Republic of China exploded their first H-bomb in June 1967, I-67 was promoted as a defense against a Chinese attack, and this system became Sentinel in October. Nike-X development, in its original form, ended.
History
Nike Zeus
In 1955 the US Army began considering the possibility of further upgrading their Nike B surface-to-air missile (SAM) as an anti-ballistic missile to intercept ICBMs. Bell Labs, the primary contractor for Nike, was asked to study the issue. Bell returned a report stating that the missile could be upgraded to the required performance relatively easily, but the system would need extremely powerful radar systems to detect the warhead while it was still far enough away to give the missile time to launch. All of this appeared to be within the state of the art, and in early 1957 Bell was given the go-ahead to develop what was then known as Nike II. Considerable interservice rivalry between the Army and Air Force led to the Nike II being redefined and delayed several times. These barriers were swept aside in late 1957 after the launch of the R-7 Semyorka, the first Soviet ICBM. The design was further upgraded, given the name Zeus, and assigned the highest development priority.
Zeus was similar to the two Nike SAM designs that preceded it. It used a long-range search radar to pick up targets, separate radars to track the target and interceptor missiles in flight, and a computer to calculate intercept points. The missile itself was much larger than earlier designs, with a range of up to , compared to Hercules' . To ensure a kill at altitude, where there was little atmosphere to carry a shock wave, it mounted a 400 kiloton (kT) warhead. The search radar was a rotating triangle wide, able to pick out warheads while still over away, an especially difficult problem given the small size of a typical warhead. A new transistorized digital computer offered the performance needed to calculate trajectories for intercepts against warheads traveling over .
The Zeus missile began testing in 1959 at White Sands Missile Range (WSMR) and early launches were generally successful. Longer range testing took place at Naval Air Station Point Mugu, firing out over the Pacific Ocean. For full-scale tests, the Army built an entire Zeus base on Kwajalein Island in the Pacific, where it could be tested against ICBMs launched from Vandenberg Air Force Base in California. Test firings at Kwajalein began in June 1962; these were very successful, passing within hundreds of yards of the target warheads, and in some tests, low-flying satellites.
Zeus problems
Zeus had initially been proposed in an era when ICBMs were extremely expensive and the US believed that the Soviet fleet contained a few dozen missiles. At a time when the US deterrent fleet was based entirely on manned bombers, even a small number of missiles aimed at Strategic Air Command's (SAC) bases presented a serious threat. Two Zeus deployment plans were outlined. One was a heavy defensive system that would provide protection over the entire continental United States, but require as many as 7000 Zeus missiles. McNamara supported a much lighter system that would use only 1200 missiles.
Technological improvements in warheads and missiles in the late 1950s greatly reduced the cost of ICBMs. After the launch of Sputnik, Pravda quoted Nikita Khrushchev claiming they were building them "like sausages". This led to a series of intelligence estimates that predicted the Soviets would have hundreds of missiles by the early 1960s, creating the so-called "missile gap". It was later shown that the number of Soviet missiles did not reach the hundreds until the late 1960s, and at the time they had only four.
Zeus used mechanically steered radars, like the Nike SAMs before it, limiting the number of targets it could attack at once. A study by the Weapons Systems Evaluation Group (WSEG) calculated that the Soviets had a 90 percent chance of successfully hitting a Zeus base by firing only four warheads at it. These did not even have to land close in order to destroy the base; an explosion within several miles would destroy its radars, which were very difficult to harden. If the Soviets did have hundreds of missiles, they could easily afford to use some to attack the Zeus sites.
Additionally, technical problems arose that appeared to make the Zeus almost trivially easy to defeat. One problem, discovered in tests during 1958, was that nuclear fireballs expanded to very large sizes at high altitudes, rendering everything behind them invisible to radar. This was known as nuclear blackout. By the time an enemy warhead passed through the fireball, about above the base, it would only be about eight seconds from impact. That was not enough time for the radar to lock on and fire a Zeus before the warhead hit its target.
It was also possible to deploy radar decoys to confuse the defense. Decoys are made of lightweight materials, often strips of aluminum or mylar balloons, which can be packed in with the reentry vehicle (RV), adding little weight. In space, these are ejected to create a threat tube a few kilometers across and tens of kilometers long. Zeus had to get within about to kill a warhead, which could be anywhere in the tube. The WSEG suggested that a single ICBM with decoys would almost certainly defeat Zeus. A mid-1961 staff report by ARPA suggested that a single large missile with multiple warheads would require four entire Zeus batteries, of 100 missiles each, to defeat it.
Nike-X
The Advanced Research Projects Agency (ARPA, today known as DARPA) was formed in 1958 by President Dwight Eisenhower's Secretary of Defense, Neil McElroy, in reaction to Soviet rocketry advances. US efforts had suffered from massive duplication of effort between the Army, Air Force, and Navy, and seemed to be accomplishing little in comparison to the Soviets. ARPA was initially handed the mission of overseeing all of these efforts. As the problems with Zeus became clear, McElroy also asked ARPA to consider the antimissile problem and come up with other solutions. The resulting Project Defender was extremely broad in scope, considering everything from minor Zeus system upgrades to far-out concepts like antigravity and the recently invented laser.
Meanwhile, one improvement to Zeus was already being studied: a new phased-array radar replacing Zeus' mechanical ones would greatly increase the number of targets and interceptors that a single site could handle. Much more powerful computers were needed to match this performance. Additionally, the antennas were mounted directly in concrete and would have increased blast resistance. Initial studies at Bell Labs started in 1960 on what was then known as the Zeus Multi-function Array Radar, or ZMAR. In June 1961, Western Electric and Sylvania were selected to build a prototype, with Sperry Rand Univac providing the control computer.
By late 1962 a decision on whether or not to deploy Zeus was looming. Bell began considering a replacement for the Zeus missile that would operate at much shorter ranges, and in October sent out study contracts to three contractors to be returned in February. Even before these were returned, in January 1963 McNamara announced that the construction funds allocated for Zeus would not be released, and the funding would instead be used for development of a new system using the latest technologies. The name Nike-X was apparently an ad hoc suggestion by Jack Ruina, the director of ARPA, who was tasked with presenting the options to the President's Science Advisory Committee (PSAC). With the ending of Zeus, the ZMAR radar effort was renamed MAR, and plans for an even more powerful version, MAR-II, became the central part of the Nike-X concept.
System concept
Decoys are lighter than the RV, and therefore suffer higher atmospheric drag as they begin to reenter the atmosphere. This will eventually cause the RV to move out in front of the decoys. The RV can often be picked out earlier by examining the threat tube and watching for objects that have lower deceleration. This process, known as atmospheric filtering, or more generally, decluttering, will not provide accurate information until the threat tube begins to reenter the denser portions of the atmosphere, at altitudes around . Nike-X intended to wait until the decluttering was complete, meaning the interceptions would take place only seconds before the warheads hit their targets, between away from the base.
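This separation can be illustrated with a simple drag model: deceleration is roughly ρv²/(2β), where β is the ballistic coefficient (mass divided by drag area) and is far lower for a lightweight decoy than for a reentry vehicle. The Python sketch below uses an exponential atmosphere and illustrative, assumed values for reentry speed and ballistic coefficients; it is only a rough sketch of the principle, not of the actual Nike-X discrimination logic.

```python
import math

def drag_deceleration(v, altitude_m, beta):
    """Drag deceleration a = rho * v^2 / (2 * beta), with beta in kg/m^2."""
    rho = 1.225 * math.exp(-altitude_m / 7000.0)   # simple exponential atmosphere
    return rho * v * v / (2.0 * beta)

v = 7000.0                                          # assumed reentry speed, m/s
for alt_km in (120, 90, 60, 30):
    rv = drag_deceleration(v, alt_km * 1000, beta=5000.0)    # heavy RV (assumed beta)
    decoy = drag_deceleration(v, alt_km * 1000, beta=50.0)   # light decoy (assumed beta)
    print(f"{alt_km:3d} km: RV {rv:10.3f} m/s^2   decoy {decoy:10.1f} m/s^2")
```

With these assumed numbers the drag on both objects is negligible above roughly 90 km, but by about 60 km the decoy is decelerating about a hundred times harder than the RV, which is why the RV pulls ahead and can be picked out.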
Low-altitude intercepts would also have the advantage of reducing the problem with nuclear blackout. The lower edge of the extended fireball used to induce this effect extended down to about 60 km, the same altitude at which decluttering became effective. Hence, low-altitude intercepts meant that deliberate attempts to create a blackout would not affect the tracking and guidance of the Sprint missile. Just as importantly, because the Sprint's own warheads would be going off far below this altitude, their fireballs would be much smaller and would only black out a small portion of the sky. The radar would have to survive the electrical effects of EMP, and significant effort was expended on this. It also meant that the threat tube trajectories would have to be calculated rapidly, before or between blackout periods, and the final tracking of the warheads in the 10 seconds or so between clearing the clutter and hitting their targets. This demanded a very high-performance computer, one that did not exist at that time.
The centerpiece of the Nike-X system was MAR, using the then-new active electronically scanned array (AESA) concept to allow it to generate multiple virtual radar beams, simulating any number of mechanical radars needed. While one beam scanned the sky for new targets, others were formed to examine the threat tubes and generate high-quality tracking information very early in the engagement. More beams were formed to track the RVs once they had been picked out, and still more to track the Sprints on their way to the interceptions. To make all of this work, MAR required data processing capabilities on an unprecedented level, so Bell proposed building the system using the newly invented resistor–transistor logic small-scale integrated circuits. Nike-X centralized the battle control systems at their Defense Centers, consisting of a MAR and its associated underground Defense Center Data Processing System (DCDPS).
Because the Sprint was designed to operate at short range, a single base could not provide protection to a typical US city, given urban sprawl. This required the Sprint launchers to be distributed around the defended area. Because a Sprint launched from a remote base might not be visible to the MAR during the initial stages of the launch, Bell proposed building a much simpler radar at most launch sites, the Missile Site Radar (MSR). MSR would have just enough power and logic to generate tracks for its outgoing Sprint missiles and would hand that information off to the DCDPS using conventional telephone lines and modems. Bell noted that the MSR could also provide a useful second-angle look at threat tubes, which might allow the decoys to be picked out earlier. Used as radio receivers, they could also triangulate any radio broadcasts coming from the threat tube, which the enemy might use as a radar jammer.
When the system was first being proposed it was not clear whether the phased-array systems could provide the accuracy needed to guide the missiles to a successful interception at very long ranges. Early concepts retained Zeus Missile Tracking Radars and Target Tracking Radars (MTRs and TTRs) for this purpose. In the end, the MAR proved more than capable of the required resolution, and the additional radars were dropped.
Problems and alternatives
Nike-X had been defined in the early 1960s as a system to defend US cities and industrial centers against a heavy Soviet attack during the 1970s. By 1965 the growing fleets of ICBMs in the inventories of both the US and USSR were making the cost of such a system very expensive. NIE 11-8-63, published 18 October 1963, estimated the Soviets would have 400–700 ICBMs deployed by 1969, and their deployment eventually reached 1,601 launchers, limited by the SALT agreements.
While Nike-X could be expected to attack these with a reasonable 1 to 1 exchange ratio, compared to Zeus' 20 to 1, it could only do so over a limited area. Most nationwide deployment scenarios contained thousands of Sprint missiles protecting only the largest US cities. Such a system would cost an estimated $40 billion to build ($ billion in , about half the annual military budget).
This led to further studies of the system to try to determine whether an ABM would be the proper way to save lives, or if there was some other plan that would do the same for less money. In the case of Zeus, for instance, it was clear that building more fallout shelters would be less expensive and save more lives. A major report on the topic by PSAC in October 1961 made this point, suggesting that Zeus without shelters was useless, and that having Zeus might lead the US to "introduce dangerously misleading assumptions concerning the ability of the US to protect its cities".
This led to a series of increasingly sophisticated models to better predict the effectiveness of an ABM system and what the opposition would do to improve their performance against it. A key development was the Prim-Read theory, which provided an entirely mathematical solution to generating the ideal defensive layout. Using a Prim-Read layout for Nike-X, Air Force Brigadier General Glenn Kent began considering Soviet responses. His 1964 report produced a cost-exchange ratio that required $2 of defense for every $1 of offense if one wanted to limit US casualties to 30 percent of the population. The cost increased to 6-to-1 if the US wished to limit casualties to 10 percent. ABMs would only be cheaper than ICBMs if the US was willing to allow over half its population to die in the exchange. When he realized he was using outdated exchange rates for the Soviet ruble, the exchange ratio for the 30 percent casualty rate jumped to 20-to-1.
As the cost of defeating Nike-X by building more ICBMs was less than the cost of building Nike-X to counter them, reviewers concluded that the construction of an ABM system would simply prompt the Soviets to build more ICBMs. This led to serious concerns about a new arms race, which it was believed would increase the chance of an accidental war. When the numbers were presented to McNamara, according to Kent:
In spite of its technical capabilities, Nike-X still shared one seemingly intractable problem that had first been noticed with Zeus. Facing an ABM system, the Soviets would change their targeting priorities to maximize damage, by attacking smaller, undefended cities for instance. Another solution was to drop their warheads just outside the range of the defensive missiles, upwind of the target. Ground bursts would throw enormous amounts of radioactive dust into the air, causing fallout that would be almost as deadly as a direct attack. This would make the ABM system essentially useless unless the cities were also extensively protected from fallout. Those same fallout shelters would save many lives on their own, to the point that the ABM seemed almost superfluous. While reporting to Congress on the issue in the spring of 1964, McNamara noted:
Under any reasonable set of assumptions, even an advanced system like Nike-X offered only marginal protection and did so for huge costs. Around 1965, the ABM became what one historian calls a "technology in search of a mission". In early 1965, the Army launched a series of studies to find a mission concept that would lead to deployment.
Hardpoint and Hardsite
One of the original deployment plans for Zeus had been a defensive system for SAC. The Air Force argued against such a system, in favor of building more ICBMs of their own. Their logic was that every Soviet missile launched in a counterforce strike could destroy a single US missile. If both forces had similar numbers of missiles, such an attack would leave both forces with few remaining missiles to launch a counterstrike. Adding Zeus would reduce the number of losses on the US side, helping ensure a counterstrike force would survive. The same would be true if the US built more ICBMs instead. The Air Force was far more interested in building its own missiles than the Army's, especially in the case of Zeus, which appeared to be easily outwitted.
Things changed in the early 1960s when McNamara placed limits on the Air Force fleet of 1,000 Minuteman missiles and 54 Titan IIs. This meant that the Air Force could not respond to new Soviet missiles by building more of their own. An even greater existential threat to Minuteman than Soviet missiles was the US Navy's Polaris missile fleet, whose invulnerability led to questions about the need for ground-based ICBMs. The Air Force responded by changing missions; the increasingly accurate Minuteman was now tasked with attacking Soviet missile silos, which the less accurate Navy missiles could not do. If the force was going to carry out this mission there had to be the expectation that enough missiles could survive a Soviet attack for a successful counterstrike. An ABM might provide that assurance.
A fresh look at this concept started at ARPA around 1963–64 under the name Hardpoint. This led to the construction of the Hardpoint Demonstration Array Radar, and an even faster missile concept known as HiBEX. This proved interesting enough for the Army and Air Force to collaborate on a follow-up study, Hardsite. The first Hardsite concept, HSD-I, considered the defending of bases within urban areas that would have Nike-X protection anyway. An example might be a SAC command and control center or an airfield on the outskirts of a city. The second study, HSD-II, considered the protection of isolated bases like missile fields. Most follow-up work focused on the HSD-II concept.
HSD-II proposed building small Sprint bases close to Minuteman fields. Incoming warheads would be tracked until the last possible moment, decluttering them completely and generating highly accurate tracks. Since the warheads had to land within a short distance of a missile silo to damage it, any warheads that could be seen to be falling outside that area were simply ignored – only those entering the "Site Protection Volume" needed to be attacked. At the time, Soviet inertial navigation systems (INS) were not particularly accurate. This acted as a force multiplier, allowing a few Sprints to defend against many ICBMs.
Although initially supportive of the Hardsite concept, by 1966 the Air Force came to oppose it largely for the same reasons it had opposed Zeus in the same role. If money was to be spent on protecting Minuteman, they felt that money would be better spent by the Air Force than the Army. As Morton Halperin noted:
Small City Defense, PAR
During the project's development phase, the siting and size of the Nike-X bases became a major complaint of smaller cities. Originally intended to protect only the largest urban areas, Nike-X was designed to be built at a very large size with many missiles controlled by an expensive computer and radar network. Smaller sites were to be left undefended in the original Nike-X concept since the system was simply too expensive to build with only a few interceptors. These cities complained that they were not only being left open to attack, but that their lack of defenses might make them primary targets. This led to a series of studies on the Small City Defense (SCD) concept. By 1964 SCD had become part of the baseline Nike-X deployment plans, with every major city being provided some level of defensive system.
SCD would consist primarily of a single autonomous battery centered on a cut-down MAR called TACMAR (TACtical MAR), along with a simplified data processing system known as the Local Data Processor (LDP). This was essentially the DCDP with fewer modules installed, reducing the number of tracks it could compile and the amount of decluttering it could handle. To further reduce costs, Bell later replaced the cut-down MAR with an upgraded MSR, the "Autonomous MSR". They studied a wide variety of potential deployments, starting with systems like the original Nike-X proposal with no SCDs, to deployments offering complete continental US protection with many SCD modules of various types and sizes. The deployments were arranged so that they could be built in phases, working up to complete coverage.
One issue that emerged from these studies was the problem of providing early warning to the SCD sites. The SCD's MSR radars provided detection at perhaps , which meant targets would appear on their radars only seconds before launches would have to be carried out. In a sneak attack scenario, there would not be enough time to receive command authority for the release of nuclear weapons. This meant the bases would require launch on warning authority, which was politically unacceptable.
This led to proposals for a new radar dedicated solely to the early warning role, determining only which MAR or SCD would ultimately have to deal with the threat. Used primarily in the first minutes of the attack, and not responsible for the engagements, the system could be considered disposable and did not need anything like the sophistication or hardening of the MAR. This led to the Perimeter Acquisition Radar (PAR), which would operate cheaper electronics at VHF frequencies.
X-ray attacks, Zeus EX
The high-altitude explosions that had caused so much concern for Nike Zeus due to blackout had been further studied in the early 1960s and led to a new possibility for missile defense. When a nuclear warhead explodes in a dense atmosphere, its initial high-energy X-rays ionize the air, blocking other X-rays. In the highest layers of the atmosphere, there is too little gas for this to occur, and the X-rays can travel long distances. Sufficient X-ray exposure to an RV can damage its heat shields.
In late 1964 Bell was considering the role of an X-ray-armed Zeus missile in the Nike-X system. A January 1965 report outlines this possibility, noting that it would have to have a much larger warhead dedicated to the production of X-rays, and would have to operate at higher altitudes to maximize the effect. A major advantage was that accuracy needs were much reduced, from a minimum of about for the original Zeus' neutron-based attack, to something on the order of a few miles. This meant that the range limits of the original Zeus, which were defined by the accuracy of the radars to about , were greatly eased. This, in turn, meant that a less sophisticated radar could be used, one with accuracy on the order of a mile rather than feet, which could be built much less expensively using VHF parts.
This Extended Range Nike Zeus, or Zeus EX for short, would be able to provide protection over a wider area, reducing the number of bases needed to provide full-country defense. Work on this concept continued throughout the 1960s, eventually becoming the primary weapon in the following Sentinel system, and in the modified Sentinel system that was later renamed Safeguard.
Nth Country, DEPEX, I-67
In February 1965 the Army asked Bell to consider different deployment concepts under the Nth Country study. This examined what sort of system would be needed to provide protection against an unsophisticated attack with a limited number of warheads. Using Zeus EX, a few bases could provide coverage for the entire US. The system would be unable to deal with large numbers of warheads, but that was not a concern for a system that would only be tasked with beating off small attacks.
With only small numbers of targets, the full MAR was not needed and Bell initially proposed TACMAR to fill this need. This would have a shorter detection range, so a long range radar like PAR would be needed for early detection. The missile sites would consist of a single TACMAR along with about 20 Zeus EX missiles. In October 1965 the TACMAR was replaced by the upgraded MSR from the SCD studies. Since this radar had an even shorter range than TACMAR, it could not be expected to generate tracking information in time for a Zeus EX launch. PAR would thus have to be upgraded to have higher accuracy and the processing power to generate tracks that would be handed off to the MSRs. During this same time, Bell had noted problems with long wavelength radars in the presence of radar blackout. Both of these issues argued for a change from VHF to UHF frequencies for the PAR.
Further work along these lines led to the Nike-X Deployment Study, or DEPEX. DEPEX outlined a deployment that started out very similar to Nth Country, with a few bases primarily using Nike EX to provide lightweight cover, but which also included design features that allowed more bases to be added as the nature of the threat changed. The study described a four-phase deployment sequence that added more and more terminal defenses as the sophistication of the Nth Country missiles increased over time.
In December 1966, the Army asked Bell to prepare a detailed deployment concept combining the light defense of Nth Country with the point defense of Hardsite. On 17 January 1967, this became the I-67 project, which delivered its results on 5 July. I-67 was essentially Nth Country but with more bases near Minuteman fields, armed primarily with Sprint. The wide-area Zeus and short-range Sprint bases would both be supported by the PAR network.
Continued pressure to deploy
The basic outlines of these various studies were becoming clear by 1966. The heavy defense from the original Nike-X proposals would cost about $40 billion ($ billion in ) and offer limited protection and damage prevention in an all-out attack, but would be expected to blunt or completely defeat any smaller attack. The thin defense of Nth Country would be much less expensive, around $5 billion ($ billion in ), but would only have any effect at all under certain limited scenarios. Finally, the Hardsite concepts would cost about the same as the thin defense, and provide some protection against a certain class of counterforce attacks.
None of these concepts appeared to be worth deploying, but there was considerable pressure from Congressional groups dominated by hawks who continued to force development of the ABM even when McNamara and President Johnson had not asked for it. The debate spilled over into the public and led to comments about an "ABM gap", especially by Republican Governor George W. Romney. The Air Force continued their opposition to the ABM concept, having previously criticized their earlier efforts in the press, but the construction of the A-35 ABM systems around Tallinn and Moscow overrode their opposition. The Joint Chiefs of Staff (JCS) used the Soviet ABM as an argument for deployment, having previously had no strong opinion on the matter.
McNamara attempted to short-circuit deployment in early 1966 by stating that the only program that had any reasonable cost-effectiveness was the thin defense against the Chinese, and then noted there was no rush to build such a system as it would be some time before they had an ICBM. Overruling him, Congress provided $167.9 million ($ billion in ) for immediate production of the original Nike-X concept. McNamara and Johnson met on the issue on 3 November 1966, and McNamara once again convinced Johnson that the system could not justify the cost of deployment. McNamara headed off the expected counterattack from Romney by calling a press conference on the topic of Soviet ABMs and stating that the new Minuteman III and Poseidon SLBM would ensure the Soviet system would be overwhelmed.
Another meeting on the issue was called on 6 December 1966, attended by Johnson, McNamara, Deputy Secretary of Defense Cyrus Vance, National Security Advisor Walt Rostow, and the Joint Chiefs. Rostow took the side of the JCS and it appeared that development would start. However, McNamara once again outlined the problems and stated that the simplest way to close the ABM gap was to simply build more ICBMs, rendering the Soviet system impotent and a great waste of money. He then proposed that the money sidelined by Congress for deployment be used for initial deployment studies while the US attempted to negotiate an arms limitation treaty. Johnson agreed with this compromise, and ordered Secretary of State Dean Rusk to open negotiations with the Soviets.
Nike-X becomes Sentinel
By 1967 the debate over ABM systems had become a major public policy issue, with almost continual debate on the topic in newspapers and magazines. It was in the midst of these debates, on 17 June 1967, that the Chinese tested their first H-bomb in Test No. 6. Suddenly the Nth Country concept was no longer simply theoretical. McNamara seized on this event as a way to deflect criticism over the lack of deployment while still keeping costs under control. On 18 September 1967, he announced that Nike-X would now be known as Sentinel, and outlined deployment plans broadly following the I-67 concept.
Testing
Although the original Nike-X concept was canceled, some of its components were built and tested both as part of Nike-X and the follow-on Sentinel. MAR, MSR, Sprint and Spartan were the main programs during the Nike-X period.
MAR
Work in ZMAR was already underway by the early 1960s, before McNamara canceled Zeus in 1963. Initial contracts were offered to Sylvania and General Electric (GE), who both built experimental systems consisting of a single row of elements, essentially a slice of a larger array. Sylvania's design used MOSAR phase-shifting using time delays, while GE's used a "novel modulation scanning system". Sylvania's system won a contract for a test system, which became MAR-I when Nike-X took over from Zeus.
To save money, the prototype MAR-I would only install antenna elements for the inner section of the original diameter antenna, populating the central . This had the side-effect of reducing the number of antenna elements from 6,405 to 2,245 but would not change the basic control logic. The number of elements on the transmitter face was similarly reduced. A full sized, four-sided MAR would require 25,620 parametric amplifiers to be individually wired by hand, so building the smaller MAR-I greatly reduced cost and construction time. Both antennas were built full sized and could be expanded out to full MAR performance at any time. In spite of these cost reduction methods, MAR-I cost an estimated $100 million to build ($ million in ).
A test site for MAR-I had already been selected at WSMR, about a mile off of US Route 70, and some north of the Army's main missile launch sites along WSMR Route 2 (Nike Avenue). A new road, WSMR Route 15, was built to connect the MAR-I to Launch Complex 38 (LC38), the Zeus launch site. MAR-I's northern location meant that the MAR would see the many rocket launches taking place at the Army sites to the south, as well as the target missiles that were launched towards them from the north from the Green River Launch Complex in Utah.
Since MAR was central to the entire Nike-X system, it had to survive attacks directed at the radar itself. At the time, the response of hardened buildings to nuclear shock was not well understood, and the MAR-I building was extremely strong. It consisted of a large central hemispherical dome of thick reinforced concrete with similar but smaller domes arranged on the corners of a square bounding the central dome. The central dome held the receiver arrays, and the smaller domes the transmitters. The concept was designed to allow a transmitter and receiver to be built into any of the faces to provide wide coverage around the radar site. As a test site, MAR-I only installed the equipment on the northwest facing side, although provisions were made for a second set on the northeast side that was never used. A tall clutter fence surrounded the building, preventing reflections from nearby mountains.
Groundbreaking on the MAR-I site started in March 1963 and construction proceeded rapidly. The radar was powered up for the first time in June 1964 and achieved its first successful tracking on 11 September 1964, repeatedly tracking and breaking lock on a balloon target over a 50-minute period. However, the system demonstrated very low reliability in the transmitter's travelling wave tube (TWT) amplifiers, which led to an extremely expensive re-design and re-installation. Once upgraded, MAR-I demonstrated the system would work as expected; it could generate multiple virtual radar beams, could simultaneously generate different types of beams for detection, tracking, and discrimination at the same time, and had the accuracy and speed needed to generate many tracks.
By this time work had already begun on MAR-II on Kwajalein; built by General Electric, it differed in form and in its beam steering system. The prototype MAR-II was built on reclaimed land just west of the original Zeus site. MAR-II was built into a pyramid with its back half removed. Like MAR-I, to save money MAR-II would be equipped with only one set of transmitter and receiver elements, but with all the wiring in place in case it had to be upgraded in the future. Nike-X was canceled before MAR-II was complete, and the semi-completed building was instead used as a climate-controlled storage facility.
Testing on MAR-I lasted until 30 September 1967. It continued to be used at a lower level as part of the Sentinel developments. This work ended in May 1969, when the facility was mothballed. In November, the building was re-purposed as the main fallout shelter for everyone at Holloman Air Force Base, about to the east. To hold the 5,800 staff and their dependents, starting in 1970 the radar and its underground equipment areas were completely emptied. In the early 1980s, the site was selected as the basis for the High Energy Laser Systems Test Facility, and extensively redeveloped.
In 1972, Stirling Colgate, a professor at New Mexico Tech, wrote a letter to Science proposing salvaging MAR. He felt that after minor re-tuning it would make an excellent radio astronomy instrument for observing the hydrogen line. Colgate's suggestion was never adopted, but over 2000 of the Western Electric parametric amplifiers driving the system ended up being salvaged by the university. About a dozen of these found their way into the astronomy field, including Colgate's supernova detector, SNORT.
About 2,000 remained in storage at New Mexico Tech until 1980. An assay at that time discovered that there was well over one ounce of gold in each one, and the remaining stocks were melted down to produce $941,966 for the university ($ million in ). The money was used to build a new wing on the university's Workman Center, known unofficially as the "Gold Building".
MSR
Bell ran studies to identify the sweet spot for the MSR that would allow it to have enough functionality to be useful at different stages of the attack, as well as being inexpensive enough to justify its existence in a system dominated by MAR. This led to an initial proposal for an S band system using passive scanning (PESA) that was sent out in October 1963. Of the seven proposals received, Raytheon won the development contract in December 1963, with Varian providing the high-power klystrons (twystrons) for the transmitter.
An initial prototype design was developed between January and May 1964. When used with MAR, the MSR needed only short range, enough to hand off the Sprint missiles. This led to a design with limited radiated power. For Small City Defense, this would not offer enough power to acquire the warheads at reasonable range. This led to an upgraded design with five times the transmitter power, which was sent to Raytheon in May 1965. A further upgrade in May 1966 included the battle control computers and other features for the SCD system.
The earlier Zeus system had taken up most of the available land on Kwajalein Island itself, so the missile launchers and MSR were to be built on Meck Island, about north. This site would host a complete MSR, allowing the Army to test both MAR-hosted (using MAR-II) and autonomous MSR deployments. A second launcher site was built on Illeginni Island, northwest of Meck, with two Sprint and two Spartan launchers. Three camera stations built to record the Illeginni launches were installed, and these continue to be used .
Construction of the launch site on Meck began in late 1967. In this installation, the majority of the system was built above ground in a single-floor rectangular building. The MSR was built in a boxy extension on the northwestern corner of the roof, with two sides angled back to form a half-pyramid shape where the antennas were mounted. Small clutter fences were built to the north and northwest, and the western side faced out over the water which was only a few tens of meters from the building. Illeginni did not have a radar site; it was operated remotely from Meck.
Sprint
On 1 October 1962, Bell's Nike office sent specifications for a high-speed missile to three contractors. The responses were received on 1 February 1963, and Martin Marietta was selected as the winning bidder on 18 March.
Sprint ultimately proved to be the most difficult technical challenge of the Nike-X system. Designed to intercept incoming warheads at an altitude of about , it had to have unmatched acceleration and speed. This caused enormous problems in materials, controls, and even receiving radio signals through the ionized air around the missile. The development program was referred to as "pure agony".
In the original Nike-X plans, Sprint was the primary weapon and thus was considered to be an extremely high-priority development. To speed development, a sub-scale version of Sprint known as Squirt was tested from Launch Complex 37 at White Sands, the former Nike Ajax/Hercules test area. A total of five Squirts were fired between 6 November 1964 and 1965. The first Sprint Propulsion Test Vehicle (PTV) was launched from another area at the same complex on 17 November 1965, only 25 months after the final design was signed off. Sprint testing pre-dated construction of an MSR, and the missiles were initially guided by Zeus TTR and MTR radars. Testing continued under Safeguard, with a total of 42 test flights at White Sands and another 34 at Kwajalein.
Spartan
Zeus B had been test fired at both White Sands and the Zeus base on Kwajalein. For Nike-X, the extended range EX model was planned, replacing Zeus' second stage with a larger model that provided more thrust through the midsection of the boost phase. Also known as the DM-15X2, the EX was renamed Spartan in January 1967. The Spartan never flew as part of the original Nike-X, and its first flight in March 1968 took place under Sentinel.
Reentry testing
One of the reasons for the move from Zeus to Nike-X was concern that the Zeus radars would not be able to tell the difference between the warhead and a decoy until it was too late to launch. One solution to this problem was the Sprint missile, which had the performance required to wait until decluttering was complete. Another potential solution was to look for some sort of signature of the reentry through the highest levels of the atmosphere that might differ between a warhead and decoy; in particular, it appeared that the ablation of the heat shield might produce a clear signature pointing out the warhead.
The reentry phenomenology was of interest both to the Army, as it might allow long-range decluttering to be carried out, and to the Air Force, whose own ICBMs might be at risk of long-range interception if the Soviets exploited a similar concept. A program to test these concepts was a major part of ARPA's Project Defender, especially Project PRESS, which started in 1960. This led to the construction of high-power radar systems on Roi-Namur, the northernmost point of the Kwajalein atoll. Although the results remain classified, several sources mention the failure to find a reliable signature of this sort.
In 1964, Bell Labs formulated their own set of requirements for radar work in relation to Nike-X. Working with the Army, Air Force, Lincoln Labs and ARPA, the Nike-X Reentry Measurements Program (RMP) ran a long series of reentry measurements with the Project PRESS radars, especially TRADEX. Additionally, a Lockheed EC-121 Warning Star aircraft was refit with optical and infrared telescopes for optical tracking tests. The first series of tests, RMP-A, focused on modern conical reentry vehicles. It concluded on 30 June 1966. These demonstrated that these vehicles were difficult to discriminate because of their low drag. RMP-B ran between 1967 and 1970, supported by 17 launches from Vandenberg, with a wide variety of vehicle shapes and penetration aids.
The program ran until the 1970s, but by the late 1960s, it was clear that discrimination of decoys was an unsolved problem, although some of the techniques developed might still be useful against less sophisticated decoys. This work appears to be one of the main reasons that the thin defense of I-67 was considered worthwhile. At that time, in 1967, ARPA passed the PRESS radars to the Army.
Description
A typical Nike-X deployment around a major city would have consisted of several missile batteries. One of these would be equipped with the MAR and its associated DCDP computers, while the others would optionally have an MSR. The sites were all networked together using communications equipment working at normal voice bandwidths. Some of the smaller bases would be built north of the MAR to provide protection to this central station.
Almost every aspect of the battle would be managed by the DCDPS at the MAR base. The reason for this centralization was two-fold; one was that the radar system was extremely complex and expensive and could not be built in large numbers, the second was that the transistor-based computers needed to process the data were likewise very expensive. Nike-X thus relied on a few very expensive sites, and many greatly simplified batteries.
MAR
MAR was an L band active electronically scanned phased-array radar. The original MAR-I had been built into a strongly reinforced dome, but the later designs consisted of two half-pyramid shapes, with the transmitters in a smaller pyramid in front of the receivers. The reduction in size and complexity was the result of studies on nuclear hardening, especially those carried out as part of Operation Prairie Flat and Operation Snowball in Alberta, where a sphere of TNT was detonated to simulate a nuclear explosion.
MAR used separate transmitter and receivers, a necessity at the time due to the size of the individual transmit and receive units and the switching systems that would be required. Each transmitter antenna was fed by its own power amplifier using travelling wave tubes with switching diodes and striplines performing the delays. The broadcast signal had three parts in sequence and the receivers had three channels, one tuned to each part of the pulse chain. This allowed the receiver to send each part of the signal to different processing equipment, allowing search, track, and discrimination in a single pulse.
MAR operated in two modes: surveillance and engagement. In surveillance mode, the range was maximized, and each face performed a scan in about 5 seconds. Returns were fed into systems that automatically extracted the range and velocity, and if the return was deemed interesting, the system automatically began a track for threat verification. During the threat verification phase, the radar spent more time examining the returns in an effort to accurately determine the trajectory and then ignored any objects that would fall outside its area.
Those targets that did pose a threat automatically triggered the switch to engagement mode. This created a new beam constantly aimed at the target, sweeping its focus point through the threat tube to pick out individual objects within it. Data from these beams extracted velocity data to a separate computer to attempt to pick out the warhead as the decoys slowed in the atmosphere. Only one Coherent Signal Processing System (CSPS) was ever built, and for testing it was connected to the Zeus Discrimination Radar on Kwajalein.
Nike-X also considered a cut down version of MAR known as TACMAR. This was essentially a MAR with half of the elements hooked up, reducing its price at the cost of shorter detection range. The processing equipment was likewise reduced in complexity, lacking some of the more sophisticated discrimination processing. TACMAR was designed from the start to be able to be upgraded to full MAR performance if needed, especially as the sophistication of the threat grew. MAR-II is sometimes described as the prototype TACMAR, but there is considerable confusion on this point in existing sources.
MSR
As initially conceived, MSR was a short-range system for tracking Sprint missiles before they appeared in the MAR's view, as well as offering a secondary target and jammer tracking role. In this initial concept, the MSR would have limited processing power, just enough to create tracks to feed back to the MAR. In the anti-jamming role, each MAR and MSR would measure the angle to the jammer.
The MSR was an S-band passive electronically scanned array (PESA), unlike the actively scanned MAR. A PESA system cannot (normally) generate multiple signals like AESA, but is much less expensive to build because a single transmitter and receiver is used for the entire system. The same antenna array can easily be used for both transmitting and receiving, as the area behind the array is much less cluttered and has ample room for switching in spite of the large radio frequency switches needed at this level of power.
Unlike the MAR, which would be tracking targets primarily from the north, the MSR would be tracking its interceptors in all directions. MSR was thus built into a four-faced truncated pyramid, with any or all of the faces carrying radar arrays. Isolated sites, like the one considered in Hawaii, would normally have arrays on all four faces. Those that were networked into denser systems could reduce the number of faces and get the same information by sending tracking data from site to site.
Sprint
Sprint was the primary weapon of Nike-X as originally conceived; it would have been placed in clusters around the targets being defended by the MAR system. Each missile was housed in an underground silo and was driven into the air before launch by a gas-powered piston. The missile was initially tracked by the local MSR, which would hand off tracking to the MAR as soon as it became visible. A transponder in the missile would respond to signals from either the MAR or MSR to provide a powerful return for accurate tracking.
Although a primary concern of the Sprint missile was high speed, the design was not optimized for maximum energy, but instead relied on the first stage (booster) to provide as much thrust as possible. This left the second stage (sustainer) lighter than optimal, to improve its maneuverability. Staging was under ground control, with the booster cut away from the missile body by explosives. The sustainer was not necessarily ignited immediately, depending on the flight profile. For control, the first stage used a system that injected Freon into the exhaust to provide thrust vectoring. The second stage used small air vanes for control.
The first stage accelerated the missile at over 100 g, reaching Mach 10 in a few seconds. At these speeds, aerodynamic heating caused the airframe's outer layer to become hotter than an oxy-acetylene welding torch. Achieving this acceleration required a new solid fuel mixture that burned ten times as fast as contemporary designs such as the Pershing or Minuteman. The burning fuel and aerodynamic heating together created so much heat that radio signals were strongly attenuated through the resulting ionized plasma around the missile body. It was expected that the average interception would take place at about at a range of after 10 seconds of flight time.
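These figures are roughly self-consistent: at a constant 100 g, reaching ten times the sea-level speed of sound takes only about three and a half seconds and a few kilometers of flight, as the following back-of-the-envelope Python check (using assumed round numbers, not official performance data) shows.

```python
g = 9.81                 # m/s^2
accel = 100 * g          # quoted acceleration, ~981 m/s^2
mach1 = 343.0            # sea-level speed of sound, m/s (assumed for the estimate)

v_target = 10 * mach1                 # ~3,430 m/s
t = v_target / accel                  # time to reach Mach 10 at constant acceleration
d = 0.5 * accel * t ** 2              # distance covered in that time
print(f"~{t:.1f} s and ~{d/1000:.1f} km to Mach 10")   # roughly 3.5 s and ~6 km
```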
Two warheads were designed for Sprint starting in 1963, the W65 at Livermore and the W66 at Los Alamos. The W65 was entering Phase 3 testing in October 1965 with a design yield of around 5 kT, but this was cancelled in January 1968 in favor of the W66. The W66's explosive yield was reported to have been in the "low kiloton" range, with various references claiming it was anywhere from 1 to 20 kT. The W66 was the first enhanced radiation bomb, or neutron bomb, to be fully developed; it was tested in the late 1960s and entered production in June 1974.
See also
Project Nike, the technical office that ran Nike-X.
The A-135 anti-ballistic missile system was the Soviet equivalent to Nike-X.
Notes
References
Citations
Bibliography
External links
"Army Air Defense Command", part of the US Army's "The Big Picture" series, this episode discusses the ARADCOM system in 1967. A section at the end, starting at the 22 minute mark, discusses Nike-X, MAR, MSR, Zeus and Sprint. Darren McGavin narrates.
Anti-ballistic missiles of the United States
Cold War surface-to-air missiles of the United States
Missile defense
Project Nike |
1163729 | https://en.wikipedia.org/wiki/ROM%20hacking | ROM hacking | ROM hacking is the process of modifying a ROM image or ROM file of a video game to alter the game's graphics, dialogue, levels, gameplay, and/or other elements. This is usually done by technically inclined video game fans to breathe new life into a cherished old game, as a creative outlet, or to make essentially new unofficial games using the old game's engine. ROM hacks either re-design a game for new, fun gameplay while keeping all items the same, or unlock features that exist in the game but are not utilised in-game.
ROM hacking is generally accomplished through use of a hex editor (a program for editing non-textual data) and various specialized tools such as tile editors, and game-specific tools which are generally used for editing levels, items, and the like, although more advanced tools such as assemblers and debuggers are occasionally used. Once ready, they are usually distributed on the Internet for others to play on an emulator or games console.
Fan translation (known as "translation hacking" within the ROM hacking community) is a type of ROM hacking. There are also anti-censorship hacks, which restore a game to its original state; these are often seen with older imported games, as publishers' content policies for video games (most notably Nintendo's) were much stricter in the United States than in Japan or Europe. There are also randomisers, which shuffle entity placements. Although much of the method applies to both types of hacking, this article focuses on "creative hacking" such as editing game levels.
Communities
Most hacking groups offer web space for hosting hacks and screenshots (sometimes hosting only hacks by the group's members, sometimes hosting almost any hack) and a message board, and often have an IRC channel.
Methods
Because games have been created by many different programmers and programming teams, ROM data can be very diverse.
Hex editing
A hex editor is one of the most fundamental tools in any ROM hacker's repertoire. Hex editors are usually used for editing text, and for editing other data for which the structure is known (for example, item properties), and Assembly hacking.
Editing text is one of the most basic forms of hacking. Many games do not store their text in ASCII form, and because of this, some specialized hex editors have been developed, which can be told what byte values correspond to what letter(s) of the alphabet, to facilitate text editing; a file that defines these byte=letter relationships is called a "table" file. Other games use simple text compression techniques (such as byte pair encoding, also called dual tile encoding or DTE, in which certain combinations of two or more letters are encoded as one byte) which a suitably equipped hex editor can facilitate editing.
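A minimal Python sketch of how such table-driven decoding works is shown below; the table entries, including the DTE pairs, are invented for the example rather than taken from any real game.

```python
# Hypothetical "table" file contents: byte value -> text.
# Real games use their own encodings; these values are invented.
TABLE = {0x80: "H", 0x81: "e", 0x82: "l", 0x83: "o", 0x84: " ",
         0xD0: "th", 0xD1: "er", 0xD2: "ll",   # DTE entries: one byte -> two letters
         0xFF: "<end>"}                        # string terminator

def decode(rom_bytes):
    out = []
    for b in rom_bytes:
        ch = TABLE.get(b, f"[{b:02X}]")        # unknown bytes shown as raw hex
        if ch == "<end>":
            break
        out.append(ch)
    return "".join(out)

print(decode(bytes([0x80, 0x81, 0xD2, 0x83, 0xFF])))   # -> "Hello"
```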
A hex editor is the tool of choice for editing things such as character/item properties, if the structure and location of this data is known and there is no game-specific editor for the game that can edit this information. Some intrepid hackers also perform level editing with a hex editor, but this is extremely difficult (except on games whose level storage format closely resembles how it is presented in a hex editor).
Graphics editing
Another basic hacking skill is graphics hacking, which is changing the appearance of the game's environments, characters, fonts, or other such things. The format of graphics data varies from console to console, but most of the early ones (NES, Super NES, Game Boy, etc.) store graphics in tiles: 8x8-pixel units of data that are arranged on-screen to produce the desired result. Editing these tiles is also possible with a hex editor, but is generally accomplished with a tile editor (such as Tile Layer or Tile Molester), which can display the ROM data in a graphical way, as well as find and edit tiles.
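For instance, the NES stores each tile as 16 bytes split into two bitplanes, so a pixel's 2-bit color index is formed from one bit of each plane. The following minimal Python sketch decodes and crudely prints one tile; the ROM file name and tile offset are hypothetical.

# Minimal sketch of decoding one NES-format tile (two bitplanes, 16 bytes per 8x8 tile).
def decode_nes_tile(rom_bytes, offset):
    plane0 = rom_bytes[offset:offset + 8]        # low bit of each pixel, one byte per row
    plane1 = rom_bytes[offset + 8:offset + 16]   # high bit of each pixel
    tile = []
    for y in range(8):
        row = []
        for x in range(8):
            lo = (plane0[y] >> (7 - x)) & 1
            hi = (plane1[y] >> (7 - x)) & 1
            row.append(lo | (hi << 1))           # 2-bit color index, 0-3
        tile.append(row)
    return tile

rom = open("game.nes", "rb").read()              # hypothetical ROM image
for row in decode_nes_tile(rom, 0x8010):         # hypothetical tile offset
    print("".join(" .x#"[p] for p in row))       # crude text rendering of the tile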
Graphics hacks can range from simple edits (such as giving Luigi a golf club, or making pixelated sprites for later generation Pokémon) to "porting" characters from one game to another, to full-blown thematic changes (usually with accompanying palette changes; see below).
More sophisticated graphics hacking involves changing not only tiles and colors, but also the way in which the tiles are arranged, or tile groups generated, giving more flexibility and control over the final appearance. This is accomplished through hex editing or a specialized tool (either for the specific game or a specific system). A good example of a graphics hack is the uncompleted Pokémon Torzach, a hack which attempted to add a whole new generation of Pokémon and tiles to the game. The hack has since been discontinued, but it still serves as a good example of what can be achieved with the tools available.
Palette editing
Another common form of hacking is palette hacking, where color values are modified to change the colors a player sees in the game (this often goes hand-in-hand with graphics hacking); palette values are commonly edited in hex. This is fairly easy for NES games, the graphics of which use a pre-defined set of colors among which a game selects; palette hacking in this case entails changing which of those colors are selected. The matter is slightly more complicated with Super NES or Mega Drive games, or games for other systems, which store absolute RGB color values. Palette editors are usually simple and are often bundled with level editors or game-specific graphics editors.
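As a sketch of what such an absolute color value looks like, the snippet below converts between 24-bit RGB and the 15-bit BGR words used by Super NES palettes (5 bits per channel, stored as a little-endian word); the palette offset in the final comment is hypothetical.

# Minimal sketch of Super NES palette color conversion.
def rgb_to_snes(r, g, b):
    return ((b >> 3) << 10) | ((g >> 3) << 5) | (r >> 3)

def snes_to_rgb(word):
    r = (word & 0x1F) << 3
    g = ((word >> 5) & 0x1F) << 3
    b = ((word >> 10) & 0x1F) << 3
    return r, g, b

word = rgb_to_snes(248, 120, 0)                  # an orange, for example
print(hex(word), snes_to_rgb(word))
# Writing it back into a ROM at a hypothetical palette offset:
# rom[0x30000:0x30002] = word.to_bytes(2, "little")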
Level editing
One of the most popular forms of ROM hacking, level editing entails modifying or redesigning a game's levels or maps. This is almost exclusively done with an editor specially tailored for a particular game (called a level editor). Level edits can be done to make the game more challenging, to alter the flow of the game's plot, or just to give something new to an old game. Combined with extensive graphics hacking, the game can take on a very different look and feel.
Data editing
A core component of many hacks (especially of role-playing video games) is editing data such as character, item, and enemy properties. This is usually done either "by hand" (with a hex editor) if the location and structure of the data is known, or with a game-specific editor that has this functionality. Through this, a hacker can alter how weapons work, how strong enemies are or how they act, etc. This can be done to make the game easier or harder, or to create new scenarios for the player to face.
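A minimal Python sketch of this kind of data editing follows, assuming a purely hypothetical item-record layout and table offset; every game defines its own structures, so both would have to be discovered first.

# Minimal sketch of editing an item table with a known (here, invented) layout:
# each record is 4 bytes - attack (byte), defense (byte), price (16-bit little-endian).
import struct

ITEM_FORMAT = "<BBH"
ITEM_SIZE = struct.calcsize(ITEM_FORMAT)
TABLE_OFFSET = 0x24000                           # hypothetical start of the item table

def read_item(rom, index):
    return struct.unpack_from(ITEM_FORMAT, rom, TABLE_OFFSET + index * ITEM_SIZE)

def write_item(rom, index, attack, defense, price):
    struct.pack_into(ITEM_FORMAT, rom, TABLE_OFFSET + index * ITEM_SIZE,
                     attack, defense, price)

rom = bytearray(open("game.sfc", "rb").read())   # hypothetical ROM image
print(read_item(rom, 3))                         # inspect item #3
write_item(rom, 3, 12, 5, 300)                   # strengthen and reprice it
open("game_edited.sfc", "wb").write(rom)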
Assembly hacking
The most powerful, and arguably the most difficult, hacking technique is editing the game's actual code, a process called ASM hacking ("ASM" means "assembly", referring to the low-level programming language executed by the CPU). There is no set pattern for ASM hacking, as the code varies widely from game to game, but most skilled ASM hackers either use an emulator equipped with a built-in debugger or tracer, or run the ROM through a disassembler, then analyze the code and modify it using a hex editor or assembler according to their needs. While quite challenging compared to the relatively simple methods listed above, almost anything is possible with ASM hacking, within the limits of the platform's hardware and software and of the hacker's ability to comprehend and modify the existing code, ranging from altering enemy AI to changing how graphics are generated.
If the developers used a compiled language, the hacker may be able to compile their own code for the game in the same language if they have access to a proper compiler. One such example would be using C to hack Nintendo 64 games, since MIPS-GCC can compile code for the Nintendo 64.
Music hacking
Music hacks are relatively rare, due to the wide variety of ways games store music data (hence the difficulty in locating and modifying this data) and the difficulties in composing new music (or porting music from another game). As music hacking is very uncommon, many hacks do not have any ported or newly composed music added. However, as many Game Boy Advance games use the M4A Engine (also called "Sappy Driver") for music, the program SapTapper can be used to hack Game Boy Advance music data. Various other utilities were created to work with the engine, such as Sappy 2006.
Another instance of the same engine being used between games is on the Nintendo 64, in which most games use the same format, although with different sound banks. A utility known as the N64 Midi Tool was created to edit the sequences that the majority of Nintendo 64 games use, though it does not cover the first-party N64 titles that use a slightly different engine, such as Super Mario 64.
Several Mega Drive games use a sound engine unofficially known as "SMPS", which has been researched for decades by many hackers. As of today, various tools exist to alter the music of games which use the SMPS engine (Sonic the Hedgehog games in particular), and many of them made their way to the Steam Workshop.
ROM expansion
Generally speaking, a ROM hacker cannot add content to a game, but merely change existing content. This limit can be overcome through ROM expansion, whereby the total size of the ROM image is increased, making room for more content and, in turn, a larger game. The difficulty of doing this varies depending on the system for which the game was made. For example, expanding an NES ROM may be difficult or even impossible due to the mapper used by the game: if a mapper allows 16 ROM banks and all of them are already used, expanding the ROM further is impossible without somehow converting the game to another mapper, which could be easy or extremely difficult. On the other hand, expanding a SNES game is relatively straightforward. To utilize the added space, parts of the game code have to be modified or rewritten (see Assembly hacking above) so the game knows where to look. Game Boy Advance ROMs are also fairly easy to expand, since the ROMs themselves are generally much smaller than the cartridge address space available to them.
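The mechanical part of an expansion is simple; a minimal Python sketch that pads a ROM image out to a larger size is shown below. Any header, bank-mapping, or checksum fix-ups the particular game and system need are not shown, and the file names and target size are hypothetical.

# Minimal sketch of ROM expansion by padding with 0xFF bytes.
def expand_rom(in_path, out_path, new_size, fill=0xFF):
    data = bytearray(open(in_path, "rb").read())
    if new_size < len(data):
        raise ValueError("new size is smaller than the current ROM")
    data.extend(bytes([fill]) * (new_size - len(data)))
    open(out_path, "wb").write(data)

expand_rom("game.sfc", "game_expanded.sfc", 4 * 1024 * 1024)  # grow to 4 MB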
Distribution
Once a hack is completed (or an incomplete version is deemed suitable for an interim release), it is released onto the Internet for others to play. The generally accepted way to do this is by making an unofficial patch (in IPS or another format) that can be applied to the unmodified ROM. This, and usually some form of documentation, is put in an archive file and uploaded somewhere. IPS is a format for recording the differences between two binary files (in this case, between the unmodified and hacked ROMs) and is suitable for ROM hacks. IPS is still used today for small patches; however, as ROMs grew larger, the format became inadequate, leading to the creation of several newer file formats such as NINJA and PPF ("PlayStation Patch Format"). PPF is still used today, particularly to patch large files such as ISO CD images and Nintendo 64 games. A newer patch format, UPS, has also been developed by the ROM hacking community, designed to be the successor to IPS and PPF.
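As an illustration of how simple the IPS format is, the following Python sketch applies an IPS patch to a ROM image. It assumes a well-formed patch and hypothetical file names; records are a 3-byte big-endian offset and a 2-byte length followed by the replacement bytes, with a length of zero marking a run-length-encoded record.

# Minimal sketch of applying an IPS patch.
def apply_ips(rom_path, ips_path, out_path):
    rom = bytearray(open(rom_path, "rb").read())
    ips = open(ips_path, "rb").read()
    assert ips[:5] == b"PATCH", "not an IPS file"
    i = 5
    while ips[i:i + 3] != b"EOF":
        offset = int.from_bytes(ips[i:i + 3], "big")
        size = int.from_bytes(ips[i + 3:i + 5], "big")
        i += 5
        if size == 0:                                  # RLE record: run length + fill byte
            run = int.from_bytes(ips[i:i + 2], "big")
            data = bytes([ips[i + 2]]) * run
            i += 3
        else:
            data = ips[i:i + size]
            i += size
        if offset + len(data) > len(rom):              # a patch may extend past the ROM's end
            rom.extend(b"\x00" * (offset + len(data) - len(rom)))
        rom[offset:offset + len(data)] = data
    open(out_path, "wb").write(rom)

apply_ips("original.smc", "hack.ips", "hacked.smc")    # hypothetical file names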
A more recent patching format, the APS patching system, has also been developed by a devoted Game Boy Advance ROM hacker. The APS system is more space efficient, is reversible, and is faster than its predecessor.
The purpose of distributing a hack in patch form is to avoid the legal aspects of distributing entire ROM images; the patch records only what has changed in the ROM, hence distributing it does not usually distribute parts of the original game. A patch is also normally drastically smaller than the full ROM image (an NES ROM can run anywhere from 8 KB to 2 MB; a Super NES ROM can run from 256 KB to 6 MB).
In a novel example of legal distribution, Sega released a Steam-based virtual hub for its previous collection of Mega Drive/Genesis games, entitled Sega Mega Drive Classics Hub. The Hub, besides allowing players to play emulated versions of these older games, takes advantage of Steam's support for user-created content through the Steam Workshop, officially allowing the distribution of ROM hacks of any of the offered games.
Usage
Patched ROMs are often played on emulators; however, it is also possible to play patched ROMs on the original hardware. The destination cartridge could be the original cartridge from which the initial unpatched ROM was pulled, or another compatible cartridge of the same type. This is particularly popular for fan translations, homebrew games, prototypes, or other games for which original cartridges were never produced, or for games which require exact timing or other elements of the original hardware which are not available in emulators.
Systems and games
The majority of ROM hacking is done on NES, SNES and Sega Genesis games, since such games are small and simple compared to games of more advanced consoles such as the Nintendo 64 or Nintendo DS. Games for the Game Boy, Game Boy Color and Game Boy Advance are also popular for hacking, as well as games for the PlayStation to a lesser extent. However, games intended for more recent consoles are not exempt from hacking; as computers have become faster and more programs and utilities have been written, more PlayStation, Nintendo 64 and Nintendo DS hacks have emerged.
In general, the games that are popular to play are also popular to hack; many hacks have been released of games of the Sonic the Hedgehog series, Super Mario series (including Mario Bros., Super Mario Bros., Super Mario Bros. 2, Super Mario Bros. 3, Super Mario Land, Super Mario Land 2: 6 Golden Coins, Super Mario 64 and Super Mario World), Mario Kart series (most notably Super Mario Kart, Mario Kart Wii, Mario Kart 7, and Mario Kart DS), Pokémon series, Chip's Challenge, Castlevania, Final Fantasy, The Legend of Zelda, games from the Mega Man series, Fire Emblem series, EarthBound, Super Metroid, and many others.
A notable hacked arcade game was Street Fighter II: Rainbow Edition, which featured increased game speed and new special moves. The success of this game prompted Capcom to release Street Fighter II: Hyper Fighting as an official response.
Your Sinclair magazine published a monthly column called "Program Pitstop". This focused mainly on cheat hacks for games, but also featured both a level map printer for the original Gauntlet, as well as a full level editor for the same game.
See also
Emergent gameplay
Fangame
Fan translation of video games
Forking (software development)
Game Genie
GameShark
Homebrew (video games)
Mod (video gaming)
Undubbing
References
Hacker culture |
31408 | https://en.wikipedia.org/wiki/Thomas%20J.%20Watson | Thomas J. Watson | Thomas John Watson Sr. (February 17, 1874 – June 19, 1956) was an American businessman who served as the chairman and CEO of IBM. He oversaw the company's growth into an international force from 1914 to 1956. Watson developed IBM's management style and corporate culture from John Henry Patterson's training at NCR. He turned the company into a highly effective selling organization, based largely on punched card tabulating machines. A leading self-made industrialist, he was one of the richest men of his time and was called the world's greatest salesman when he died in 1956.
Early life and career
Thomas J. Watson was born in Campbell, New York, the fifth child and only son of Thomas and Jane Fulton White Watson. His four older siblings were Jennie, Effie, Loua, and Emma. His father farmed and owned a modest lumber business located near Painted Post, a few miles west of Corning, in the Southern Tier region of New York. Thomas worked on the family farm in East Campbell, New York and attended the District School Number Five in the late 1870s. As Watson entered his teen years, he attended Addison Academy in Addison, New York.
Having given up his first job—teaching—after just one day, Watson took a year's course in accounting and business at the Miller School of Commerce in Elmira. He left the school in 1891, taking a job at $6 a week as bookkeeper for Clarence Risley's Market in Painted Post. One year later he joined a traveling salesman, George Cornwell, peddling organs and pianos around the farms for William Bronson's local hardware store, Watson's first sales job. When Cornwell left, Watson continued alone, earning $10 per week. After two years of this life, he realized he would be earning $70 per week if he were on a commission. His indignation on making this discovery was such that he quit and moved from his familiar surroundings to the relative metropolis of Buffalo.
Watson then spent a very brief period selling sewing machines for Wheeler and Wilson. According to his son Tom Watson Jr.'s autobiography: One day my dad went into a roadside saloon to celebrate a sale and had too much to drink. When the bar closed, he found that his entire rig—horse, buggy, and samples—had been stolen. Wheeler and Wilson fired him and dunned him for the lost property. Word got around, of course, and it took Dad more than a year to find another steady job. Watson would later enforce strict rules at IBM against alcohol consumption, even off the job. According to Tom Jr.: This anecdote never made it into IBM lore, which is too bad, because it would have helped explain Father to the tens of thousands of people who had to follow his rules.
Watson's next job was peddling shares of the Buffalo Building and Loan Company for a huckster named C. B. Barron, a showman renowned for his disreputable conduct, which Watson deplored. Barron absconded with the commission and the loan funds. Next Watson opened a butcher shop in Buffalo, which soon failed, leaving Watson with no money, no investment, and no job.
NCR
Watson had a newly acquired NCR cash register in his butcher shop, for which he had to arrange transfer of the installment payments to the new owner of the butcher shop. On visiting NCR, he met John J. Range and asked him for a job. Determined to join the company, he repeatedly called on Range until, after a number of abortive attempts, he finally was hired in November 1896, as sales apprentice to Range.
Led by John Patterson, NCR was then one of the leading selling organizations, and John J. Range, its Buffalo branch manager, became almost a father figure for Watson and was a model for his sales and management style. Certainly in later years, in a 1952 interview, he claimed he learned more from Range than anyone else. But at first, he was a poor salesman, until Range took him personally in hand. Then he became the most successful salesman in the East, earning $100 per week.
Four years later, NCR assigned Watson to run the struggling NCR agency in Rochester, New York. As an agent, he got 35% commission and reported directly to Hugh Chalmers, the second-in-command at NCR. In four years Watson made Rochester effectively an NCR monopoly by using the technique of knocking the main competitor, Hallwood, out of business, sometimes resorting to sabotage of the competitor's machines. As a reward he was called to the NCR head office in Dayton, Ohio.
In 1912, the company was found guilty of violating the Sherman Antitrust Act. Patterson, Watson, and 26 other NCR executives and managers were convicted for illegal anti-competitive sales practices and were sentenced to one year of imprisonment. Their convictions were unpopular with the public because of the efforts of Patterson and Watson to help those affected by the Dayton, Ohio floods of 1913, but efforts to have them pardoned by President Woodrow Wilson were unsuccessful. However, their convictions were overturned on appeal in 1915 on the grounds that important defense evidence should have been admitted.
Head of IBM
Charles Ranlett Flint, who had engineered the amalgamation (via stock acquisition) forming the Computing-Tabulating-Recording Company (CTR), found it difficult to manage the five companies. He hired Watson as general manager on May 1, 1914, when the five companies had about 1,300 employees. Eleven months later he was made President when court cases relating to his time at NCR were resolved. Within four years, revenues had doubled to $9 million. In 1924, he renamed CTR to International Business Machines. Watson built IBM into such a dominant company that the federal government filed a civil antitrust suit against it in 1952. IBM owned and leased to its customers more than 90 percent of all tabulating machines in the United States at the time. When Watson died in 1956, IBM's revenues were $897 million, and the company had 72,500 employees.
Throughout his life, Watson maintained a deep interest in international relations, from both a diplomatic and a business perspective. He was known as US President Franklin D. Roosevelt's unofficial ambassador in New York and often entertained foreign statesmen. In 1937, he was elected president of the International Chamber of Commerce (ICC) and at that year's biennial congress in Berlin stated that the conference keynote would be "World Peace Through World Trade." That phrase became the slogan of both the ICC and IBM.
Dealings with Nazi Germany
In 1937, as President of the International Chamber of Commerce, Watson met Adolf Hitler. During the 1930s, IBM's German subsidiary was its most profitable foreign operation, and a 2001 book by Edwin Black, IBM and the Holocaust, argues that Watson's pursuit of profit led him to personally approve and spearhead IBM's strategic technological relationship with Nazi Germany. It describes how IBM provided the tabulating equipment Hitler used to round up the Jews; IBM's Hollerith punch-card machines are in the Holocaust Museum today. The book describes IBM's punch cards as "a card with standardized holes", each representing a different trait of the individual. The card was fed into a "reader" and sorted. Punch cards identified Jews by name. Each one served as "a nineteenth-century bar code for human beings". In particular, critics point to the Order of the German Eagle medal that Watson received at the Berlin ICC meeting in 1937, as evidence that he was being honored for the help that IBM's German subsidiary Dehomag (Deutsche Hollerith-Maschinen Gesellschaft mbH) and its punch card machines provided the Nazi regime, particularly in the tabulation of census data (i.e. location of Jews). Another study argues that Watson believed, perhaps naively, that the medal was in recognition of his years of labor on behalf of global commerce and international peace. Within a year of the Berlin congress though, where Watson's hopes had run high, he found himself strongly protesting the German policy toward the Jews.
Because of his strong feelings about the issue, Watson wanted to return his German citation shortly after receiving it. When Secretary of State Hull advised him against that course of action, he gave up the idea until the spring of 1940. Then Hull declined to offer advice, and Watson sent the medal back in June 1940. Dehomag's management disapproved of Watson's action and considered separating from IBM. This occurred when Germany declared war on the United States in December 1941, and the German shareholders took custody of the Dehomag operation. However, during World War II, IBM subsidiaries in occupied Europe never stopped delivery of punch cards to Dehomag, and documents uncovered show that senior executives at IBM world headquarters in New York took great pains to maintain legal authority over Dehomag's operations and assets through the personal intervention of IBM managers in neutral Switzerland, directed via personal communications and private letters.
Dealings with the United States
During this same period, IBM became more deeply involved in the war effort for the U.S., focusing on producing large quantities of data processing equipment for the military and experimenting with analog computers. Watson, Sr. also developed the "1% doctrine" for war profits, which mandated that IBM receive no more than 1% profit from the sales of military equipment to the U.S. Government. Watson was one of the few CEOs to develop such a policy.
In 1941, Watson received the third highest salary and compensation package in the U.S., $517,221, on which he paid 69% in tax.
Watson had a personal interest in the progress of the war. His eldest son, Thomas J. Watson Jr., joined the United States Army Air Corps and became a bomber pilot. He was soon hand-picked to become the assistant and personal pilot for General Follet Bradley, who was in charge of all Lend-Lease equipment supplied to the Soviet Union from the United States. Watson, Sr.'s youngest son, Arthur K. Watson, also joined the military during the conflict.
Post-World War II
Watson worked with local leaders to create a college in the Binghamton area, where IBM was founded and had major plants. In 1946, IBM provided land and funding for Triple Cities College, an extension of Syracuse University. Later it became known as Harpur College, and eventually evolved into Binghamton University. Its school of engineering and applied science is named the Thomas J. Watson College of Engineering and Applied Science.
After World War II, Watson began work to further the extent of IBM's influence abroad and in 1949, he created the IBM World Trade Corporation in order to oversee IBM's foreign business.
Watson retired in 1956 and his oldest son, Thomas J. Watson Jr., became IBM's CEO. He died on June 19, 1956, in Manhattan, New York City and was buried in Sleepy Hollow Cemetery in Sleepy Hollow, New York.
Personal life
Watson married Jeanette Kittredge, from a prominent Dayton, Ohio railroad family, on April 17, 1913. They had two sons and two daughters.
Thomas Watson, Jr. succeeded his father as IBM chairman and later served as ambassador to the Soviet Union under Jimmy Carter
Jeanette Watson Irwin married businessman John N. Irwin II, later ambassador to France
Helen Watson Buckner became an important philanthropist in New York City
Arthur K. Watson served as president of IBM World Trade Corporation and later, as ambassador to France
As a Democrat (after his criminal indictment by the Taft Administration), Watson was an ardent supporter of Roosevelt. He was one of the most prominent businessmen in the Democratic Party. He was considered Roosevelt's strongest supporter in the business community.
Watson served as a powerful trustee of Columbia University from June 6, 1933, until his death. He engineered the selection of Dwight D. Eisenhower as its president and played the central role in convincing Eisenhower to become president of the university. Additionally, he served as a trustee of Lafayette College and is the namesake of Watson Hall, a campus residence hall.
In 1936 the U.S. Supreme Court upheld a lower court decision that IBM, together with Remington Rand, should cease its practice of requiring its customers to buy their punch cards from it alone. The ruling made little difference because IBM was the only effective supplier to the market, and profits continued undiminished.
In 1937, Watson was awarded the Order of the German Eagle by Adolf Hitler. Watson was also president of the International Chamber of Commerce in 1937; the medal was awarded while the ICC was meeting in Germany that year.
In 1939, he received an honorary Doctor of Commercial Science degree from Oglethorpe University.
In the 1940s, Watson was on the national executive board of the Boy Scouts of America and served for a time as an international Scout commissioner. E. Urner Goodman recounts that the elderly Watson attended an international Scout commissioners' meeting in Switzerland, where the IBM founder asked not to be put on a pedestal. Before the conference was over, Goodman relates, Watson "... sat by that campfire, in Scout uniform, 'chewing the fat' like the rest of the boys". He received the Silver Buffalo Award in 1944. His son, Thomas Jr., later served as national president of the Boy Scouts of America from 1964 to 1968. He was also inducted into the Steuben County (NY) Hall of Fame. Throughout his life Watson continued to own and enjoy the family farm on which he was born. In 1955 he and his wife gave it, along with one million dollars, to the Methodist Church for use as a retreat and conference center, to be named Watson Homestead in memory of his parents. Watson Homestead became independent of the church in 1995, and continues as a conference and retreat center. The one-room school that Watson attended as a child is still on the grounds.
Watson was chairman of the Elmira College centennial committee in 1955 and donated Watson Hall, primarily a music and mathematics academic building.
He was posthumously inducted into the Junior Achievement U.S. Business Hall of Fame in 1990.
Famous attribution
Although Watson is well known for his alleged 1943 statement, "I think there is a world market for maybe five computers," there is scant evidence he said it. Author Kevin Maney tried to find the origin of the quote, but has been unable to locate any speeches or documents of Watson's that contain this, nor are the words present in any contemporary articles about IBM.
One of the first attributions is in the German magazine Der Spiegel of May 22, 1965, stating that IBM boss Thomas Watson had not been interested in the new machines initially, and when the first commercial calculation behemoths appeared in the early 1950s, filling whole floors with thousands of heat generating vacuum tubes, he estimated the demand by the US economy at a maximum of five.
Later attributions may be found in The Experts Speak, a book written by Christopher Cerf and Victor S. Navasky in 1984; however, Cerf and Navasky just quote from a book written by Morgan and Langford, Facts and Fallacies. Another early article source (May 15, 1985) is a column by Neil Morgan, a San Diego Evening Tribune writer, who wrote: "Forrest Shumway, chairman of The Signal Cos., doesn't make predictions. His role model is Tom Watson, then IBM chairman, who said in 1958: 'I think there is a world market for about five computers.'" The earliest known citation on the Internet is from 1986, on Usenet, in the signature of a poster from Convex Computer Corporation: "'I think there is a world market for about five computers' —Remark attributed to Thomas J. Watson (Chairman of the Board of International Business Machines), 1943". All these early quotes are questioned by Eric Weiss, an editor of the Annals of the History of Computing, in ACS letters in 1985.
There are documented versions of similar quotes by other people in the early history of the computer. In 1946 Sir Charles Darwin (grandson of the famous naturalist), head of Britain's NPL (National Physical Laboratory), where research into computers was taking place, wrote: it is very possible that ... one machine would suffice to solve all the problems that are demanded of it from the whole country.
In 1985 the story was discussed on Usenet (in net.misc), without Watson's name being attached. The original discussion has not survived, but an explanation has; it attributes a very similar quote to the Cambridge mathematician Professor Douglas Hartree, around 1951:
I went to see Professor Douglas Hartree, who had built the first differential analyzers in England and had more experience in using these very specialized computers than anyone else. He told me that, in his opinion, all the calculations that would ever be needed in this country could be done on the three digital computers which were then being built—one in Cambridge, one in Teddington, and one in Manchester. No one else, he said, would ever need machines of their own, or would be able to afford to buy them. (The Language of Computers, a transcript of a talk given by Lord Bowden of Chesterfield at Brighton College of Technology; the first Richard Goodman Memorial Lecture.)
Howard H. Aiken made a similar statement in 1952:
Originally one thought that if there were a half dozen large computers in this country, hidden away in research laboratories, this would take care of all requirements we had throughout the country. (Cohen, I. Bernard (1998). IEEE Annals of the History of Computing 20.3, pp. 27–33.)
The story already had been described as a myth in 1973; the Economist quoted a Mr. Maney as "revealing that Watson never made his oft-quoted prediction that there was 'a world market for maybe five computers.'"
Since the attribution typically is used to demonstrate the fallacy of predictions, if Watson had made such a prediction in 1943, then, as Gordon Bell pointed out in his ACM 50 years celebration keynote, it would have held true for some ten years.
The IBM archives of Frequently Asked Questions notes an inquiry about whether he said in the 1950s that he foresaw a market potential for only five electronic computers. The document says no, but quotes his son and then IBM President Thomas J. Watson, Jr., at the annual IBM stockholders meeting, April 28, 1953, as speaking about the IBM 701 Electronic Data Processing Machine, which it identifies as "the company's first production computer designed for scientific calculations". He said that "IBM had developed a paper plan for such a machine and took this paper plan across the country to some 20 concerns that we thought could use such a machine. I would like to tell you that the machine rents for between $12,000 and $18,000 a month, so it was not the type of thing that could be sold from place to place. But, as a result of our trip, on which we expected to get orders for five machines, we came home with orders for 18." Watson, Jr., later gave a slightly different version of the story in his autobiography, where he said the initial market sampling indicated 11 firm takers and 10 more prospective orders.
Famous motto
"THINK" – Watson began using "THINK" to motivate, or inspire, staff while at NCR and continued to use it at CTR. International Business Machines's first U.S. trademark was for the name "THINK" filed as a U.S. trademark on June 6, 1935, with the description "periodical publications". This trademark was filed fourteen years before the company filed for a U.S. trademark on the name IBM. A biographical article in 1940 noted that "This word is on the most conspicuous wall of every room in every IBM building. Each employee carries a THINK notebook in which to record inspirations. The company stationery, matches, scratch pads all bear the inscription, THINK. A monthly magazine called 'Think' is distributed to the employees." THINK remains a part of IBM's corporate culture; it was the inspiration behind naming IBM's successful line of notebook computers, IBM ThinkPad. In 2007, IBM Mid America Employees Federal Credit Union changed its name to Think Mutual Bank.
See also
IBM and the Holocaust
IBM during World War II
Jeannette K. Watson Fellowship
Thomas J. Watson Fellowship
Thomas J. Watson Research Center
Thomas J. Watson School of Engineering and Applied Science
Watson (computer), named in honor of Thomas J. Watson
References
Further reading
Belden, Thomas Graham; Belden, Marva Robins (1962). The Lengthening Shadow: The Life of Thomas J. Watson. Boston: Little, Brown and Co. 332 pp.
Greulich, Peter E. (2011) The World's Greatest Salesman: An IBM Caretaker's Perspective: Looking Back. Austin, TX: MBI Concepts. . The bulk of the book consists of abridged texts from Watson's Men—Minutes—Money.
Greulich, Peter E. (2012) Tom Watson Sr. Essays on Leadership: Volume 1, Democracy in Business. Austin, TX: MBI Concepts. (electronic version only)
Greulich, Peter E. (2012) Tom Watson Sr. Essays on Leadership: Volume 2, We Are All Assistants. Austin, TX: MBI Concepts. (electronic version only)
Greulich, Peter E. (2012) Tom Watson Sr. Essays on Leadership: Volume 3, We Forgive Thoughtful Mistakes. Austin, TX: MBI Concepts. (electronic version only)
Maney, Kevin (2003). The Maverick and His Machine: Thomas Watson, Sr. and the Making of IBM. John Wiley & Sons.
Ridgeway, George L. (1938) Merchants of Peace: Twenty Years of Business Diplomacy Through the International Chamber of Commerce 1919–1938, Columbia University Press, 419pp. There is a 1959 revised edition.
Rodgers, William H. (1969) THINK: A Biography of the Watsons and IBM. New York: Stein and Day.
Sobel, Robert (2000). Thomas Watson, Sr.: IBM and the Computer Revolution. Washington: BeardBooks.
Tedlow, Richard S. (2003). The Watson Dynasty: The Fiery Reign and Troubled Legacy of IBM's Founding Father and Son. New York: HarperBusiness.
Wilson, John S. (1959). Scouting Round the World. Blandford Press. pp. 186–272.
External links
Oral history interview with Thomas J. Watson, Jr., April 25, 1985, Armonk, New York, Charles Babbage Institute, University of Minnesota.
Audio recordings of Thomas J. Watson speaking at The Metropolitan Museum of Art
"Thomas J. Watson Sr. Is Dead; I.B.M. Board Chairman Was 82". The New York Times. June 20, 1956.
The IBM Songbook.
First Usenet Posting of the misquote
IBM biography of Watson
1874 births
1956 deaths
American computer businesspeople
American technology chief executives
American Methodists
Burials at Sleepy Hollow Cemetery
Businesspeople from Buffalo, New York
Businesspeople from New York City
IBM employees
New York (state) Democrats
NCR Corporation people
People from Manhattan
Businesspeople from Rochester, New York
People from Steuben County, New York
Retailers
World Scout Committee members
Columbia University people
Lafayette College trustees |
9347800 | https://en.wikipedia.org/wiki/United%20States%20Africa%20Command | United States Africa Command | The United States Africa Command (USAFRICOM, U.S. AFRICOM, and AFRICOM) is one of the eleven unified combatant commands of the United States Department of Defense, headquartered at Kelley Barracks, Stuttgart, Germany. It is responsible for U.S. military operations, including fighting regional conflicts and maintaining military relations with 53 African nations. Its area of responsibility covers all of Africa except Eritrea, which has for years denied any interest in involvement with the command. It also does not cover Egypt, which is within the area of responsibility of the United States Central Command. The U.S. AFRICOM headquarters operating budget was $276 million in fiscal year 2012.
The Commander of U.S. AFRICOM reports to the Secretary of Defense. The current Commander of the U.S. Africa Command stated that the purpose of the command is to work alongside African military personnel to support their military operations. In individual countries, U.S. ambassadors continue to be the primary diplomatic representative for relations with host nations.
History
Origins
Prior to the creation of AFRICOM, responsibility for U.S. military operations in Africa was divided across three unified commands: United States European Command (EUCOM) for West Africa, United States Central Command (CENTCOM) for East Africa, and United States Pacific Command (PACOM) for Indian Ocean waters and islands off the east coast of Africa.
A U.S. military officer wrote the first public article calling for the formation of a separate African command in November 2000. Following a 2004 global posture review, the United States Department of Defense began establishing a number of Cooperative Security Locations (CSLs) and Forward Operating Sites (FOSs) across the African continent, through the auspices of EUCOM which had nominal command of West Africa at that time. These locations, along with Camp Lemonnier in Djibouti, would form the basis of AFRICOM facilities on the continent. Areas of military interest to the United States in Africa include the Sahara/Sahel region, over which Joint Task Force Aztec Silence is conducting anti-terrorist operations (Operation Enduring Freedom - Trans Sahara), Djibouti in the Horn of Africa, where Combined Joint Task Force – Horn of Africa is located (overseeing Operation Enduring Freedom - Horn of Africa), and the Gulf of Guinea.
The website Magharebia.com was launched by USEUCOM in 2004 to provide news about North Africa in English, French and Arabic. When AFRICOM was created, it took over operation of the website. These information operations of the United States Department of Defense were criticized by the Senate Armed Services Committee and defunded by Congress in 2011. The site was closed down in February 2015.
In 2007, the United States Congress approved $500 million for the Trans-Saharan Counterterrorism Initiative (TSCTI) over six years to support countries involved in counterterrorism against threats of Al Qaeda operating in African countries, primarily Algeria, Chad, Mali, Mauritania, Niger, Senegal, Nigeria, and Morocco. This program builds upon the former Pan Sahel Initiative (PSI), which concluded in December 2004 and focused on weapon and drug trafficking, as well as counterterrorism. Previous U.S. military activities in Sub-Saharan Africa have included Special Forces associated Joint Combined Exchange Training. Letitia Lawson, writing in 2007 for a Center for Contemporary Conflict journal at the Naval Postgraduate School, noted that U.S. policy towards Africa, at least in the medium-term, looks to be largely defined by international terrorism, the increasing importance of African oil to American energy needs, and the dramatic expansion and improvement of Sino-African relations since 2000.
Creation of the command (2006–2008)
In mid-2006, Defense Secretary Donald Rumsfeld formed a planning team to advise on requirements for establishing a new Unified Command for the African continent. In early December, he made his recommendations to President George W. Bush.
On 6 February 2007, Defense Secretary Robert Gates announced to the Senate Armed Services Committee that President George W. Bush had given authority to create the new African Command. U.S. Navy Rear Admiral Robert Moeller, the director of the AFRICOM transition team, arrived in Stuttgart, Germany to begin creating the logistical framework for the command. The creation of the command was introduced to African military leaders by General William E. "Kip" Ward, who traveled to various African countries. On 28 September, the U.S. Senate confirmed General Ward as AFRICOM's first commander and AFRICOM officially became operational as a sub-unified command of EUCOM with a separate headquarters. On 1 October 2008, AFRICOM became a fully operational command and incorporated pre-existing entities, including the Combined Joint Task Force - Horn of Africa that was created in 2002. At this time, the command also separated from USEUCOM and began operating on its own as a full-fledged combatant command.
Function
In 2007, the White House announced that Africa Command "will strengthen our security cooperation with Africa and create new opportunities to bolster the capabilities of our partners in Africa. Africa Command will enhance our efforts to bring peace and security to the people of Africa and promote our common goals of development, health, education, democracy, and economic growth in Africa."
General Carter F. Ham said in a 2012 address at Brown University that U.S. strategy for Sub-Saharan Africa is to strengthen democratic institutions and boost broad-based economic growth.
The U.S. Africa Command is currently operating along five lines of effort:
Neutralize al-Shabaab and transition the security responsibilities of the African Union Mission in Somalia (AMISOM) to the Federal Government of Somalia (FGS)
Degrade violent extremist organizations in the Sahel and Maghreb and contain instability in Libya
Contain and degrade Boko Haram
Interdict illicit activity in the Gulf of Guinea and Central Africa with willing and capable African partners
Build peacekeeping, humanitarian assistance and disaster response capacity of African partners
On 18 March 2019, AFRICOM conducted an airstrike over Mogadishu, Somalia aimed at "the terrorist network and its recruiting efforts in the region", specifically referencing al-Shabaab. AFRICOM reported that three terrorists were killed by this airstrike, but that figure, as well as the number of civilian casualties, is still under dispute.
Area of responsibility
The territory of the command consists of all of the African continent except for Egypt, which remains under the responsibility of Central Command, as it closely relates to the Middle East. USAFRICOM also covers island countries commonly associated with Africa:
Cape Verde
São Tomé and Príncipe
Comoros
Madagascar
Mauritius
Seychelles
The U.S. military areas of responsibility involved were transferred from three separate U.S. unified combatant commands. Most of Africa was transferred from the United States European Command with the Horn of Africa and Sudan transferred from the United States Central Command. Responsibility for U.S. military operations in the islands of Madagascar, the Comoros, the Seychelles and Mauritius was transferred from the United States Pacific Command.
Headquarters and facilities
The AFRICOM headquarters is located at Kelley Barracks, a small urban facility near Stuttgart, Germany, and is staffed by 1,500 personnel. In addition, the command has military and civilian personnel assigned at Camp Lemonnier, Djibouti; RAF Molesworth, United Kingdom; MacDill Air Force Base, Florida; and in Offices of Security Cooperation and Defense Attaché Offices in about 38 African countries.
Selection of the headquarters
It was reported in June 2007 that African countries were competing to host the headquarters because it would bring money for the recipient country. Liberia has publicly expressed a willingness to host AFRICOM's headquarters, and in 2021 Nigeria expressed a similar interest. The U.S. declared in February 2008 that AFRICOM would be headquartered in Stuttgart for the "foreseeable future". In August 2007, Dr. Wafula Okumu, a research fellow at the Institute for Security Studies in South Africa, testified before the United States Congress about the growing resistance and hostility on the African continent. Nigeria announced it will not allow its country to host a base and opposed the creation of a base on the continent. South Africa and Libya also expressed reservations of the establishment of a headquarters in Africa.
The Sudan Tribune considered it likely that Ethiopia, a strong U.S. ally in the region, would house USAFRICOM's headquarters due to the collocation of AFRICOM with the African Union's developing peace and security apparatus. Prime Minister Meles Zenawi stated in early November that Ethiopia would be willing to work together closely with USAFRICOM. This was further reinforced when a U.S. Air Force official said on 5 December 2007 that Addis Ababa was likely to be the headquarters.
On 18 February 2008, General Ward told an audience at the Royal United Services Institute in London that some portion of that staff headquarters being on the continent at some point in time would be "a positive factor in helping us better deliver programs." General Ward also told the BBC the same day in an interview that there are no definite plans to take the headquarters or a portion of it to any particular location on the continent.
President Bush denied that the United States was contemplating the construction of new bases on the African continent. U.S. plans include no large installations such as Camp Bondsteel in Kosovo, but rather a network of "cooperative security locations" at which temporary activities will be conducted. There is one U.S. base on the continent, Camp Lemonnier in Djibouti, with approximately 2,300 troops stationed there having been inherited from USCENTCOM upon standup of the command.
In general, U.S. Unified Combatant Commands have an HQ of their own in one location, subordinate service component HQs, sometimes one or two co-located with the main HQ or sometimes spread widely, and a wide range of operating locations, main bases, forward detachments, etc. USAFRICOM initially appears to be considering something slightly different: spreading the actual COCOM HQ over several locations, rather than having the COCOM HQ in one place and the putative "U.S. Army Forces, Africa", its air component, and "U.S. Naval Forces, Africa" in one to four separate locations. AFRICOM will not have the traditional J-type staff divisions, instead having outreach, plans and programs, knowledge development, operations and logistics, and resources branches. AFRICOM went back to a traditional J-Staff in early 2011 after General Carter Ham took command.
In the summer of 2020, U.S. Defense Secretary Mark Esper directed AFRICOM leadership to study a possible headquarters relocation outside of Germany after plans were announced that neighboring U.S. European Command would relocate to Belgium.
On 20 November 2020 a new Army service component command (ASCC), U.S. Army Europe and Africa (USAREUR-AF), consolidated USAREUR and USARAF. The U.S. Army Africa/Southern European Task Force is now the U.S. Army Southern European Task Force, Africa (SETAF-AF).
Personnel
U.S. Africa Command completed fiscal year 2010 with approximately 2,000 assigned personnel, which includes military, civilian, contractor, and host nation employees. About 1,500 work at the command's main headquarters in Stuttgart. Others are assigned to the command's units in England and Florida, along with security cooperation officers posted at U.S. embassies and diplomatic missions in Africa to coordinate Defense Department programs within the host nation.
As of December 2010, the command has five Senior Foreign Service officers in key positions as well as more than 30 personnel from 13 U.S. Government Departments and Agencies serving in leadership, management, and staff positions. Some of the agencies represented are the United States Departments of State, Treasury, and Commerce, United States Agency for International Development, and the United States Coast Guard.
U.S. Africa Command has limited assigned forces and relies on the Department of Defense for resources necessary to support its missions.
Components
On 1 October 2008, the Seventeenth Air Force was established at Ramstein Air Base, Germany as the United States Air Force component of the Africa Command. Brig. Gen. Tracey Garrett was named as commander of the new USMC component, U.S. Marine Corps Forces Africa (MARFORAF), in November 2008. MARFORAF is a dual-mission arrangement for United States Marine Corps Forces, Europe.
On 3 December 2008, the U.S. announced that Army and Navy headquarters units of AFRICOM would be hosted in Italy. The AFRICOM section of the Army's Southern European Task Force would be located in Vicenza and Naval Forces Europe in Naples would expand to include the Navy's AFRICOM component. Special Operations Command, Africa (SOCAFRICA) is also established, gaining control over Joint Special Operations Task Force-Trans Sahara (JSOTF-TS) and Special Operations Command and Control Element – Horn of Africa (SOCCE-HOA).
The U.S. Army has allocated a brigade to the Africa Command.
U.S. Army Europe and Africa (USAREUR-AF)
Headquartered on Lucius D. Clay Kaserne in Wiesbaden, Germany, U.S. Army Europe and Africa — Southern European Task Force - Africa (SETAF-AF), in concert with national and international partners, conducts sustained security engagement with African land forces to promote peace, stability, and security in Africa. As directed, it can deploy as a contingency headquarters in support of crisis response. The commander of SETAF-AF is DCG for Africa.
As of March 2013, the 2nd Brigade Combat Team, 1st Infantry Division, the "Dagger Brigade", is being aligned with AFRICOM.
U.S. Naval Forces, Africa (NAVAF)
U.S. Naval Forces Europe - Naval Forces Africa (NAVEUR-NAVAF) area of responsibility (AOR) covers approximately half of the Atlantic Ocean, from the North Pole to Antarctica; as well as the Adriatic, Baltic, Barents, Black, Caspian, Mediterranean and North Seas. NAVEUR-NAVAF covers all of Russia, Europe and nearly the entire continent of Africa. It encompasses 105 countries with a combined population of more than one billion people and includes a landmass extending more than 14 million square miles.
The area of responsibility covers more than 20 million square nautical miles of ocean, touches three continents and encompasses more than 67 percent of the Earth's coastline, 30 percent of its landmass, and nearly 40 percent of the world's population.
Commander, Task Force 60 will normally serve as the commander of Naval Task Force Europe and Africa. Any naval unit within the USEUCOM or USAFRICOM AOR may be assigned to Task Force 60 as required by the Commander of the Sixth Fleet.
U.S. Air Forces Africa (AFAFRICA)
Air Forces Africa (AFAFRICA) is located at Ramstein Air Base, Germany, and serves as the air and space component to U.S. Africa Command (AFRICOM) located at Stuttgart, Germany. Air Forces Africa shares a headquarters and units with United States Air Forces in Europe, and its component Air Force, 3AF (AFAFRICA) conducts sustained security engagement and operations as directed to promote air safety, security and development on the African continent. Through its Theater Security Cooperation (TSC) events, Air Forces Africa carries out AFRICOM's policy of seeking long-term partnership with the African Union and regional organizations as well as individual nations on the continent.
Air Forces Africa works with other U.S. Government agencies, to include the State Department and the U.S. Agency for International Development (USAID), to assist African partners in developing national and regional security institution capabilities that promote security and stability and facilitate development.
3AF succeeded the Seventeenth Air Force, assuming the AFAFRICA mission upon the 17AF's deactivation on 20 April 2012.
U.S. Marine Corps Forces, Africa (MARFORAF)
U.S. Marine Corps Forces, Africa conducts operations, exercises, training, and security cooperation activities throughout the AOR. In 2009, MARFORAF participated in 15 ACOTA missions aimed at improving partners' capabilities to provide logistical support, employ military police, and exercise command and control over deployed forces.
MARFORAF conducted military to military events in 2009 designed to familiarize African partners with nearly every facet of military operations and procedures, including use of unmanned aerial vehicles, tactics, and medical skills. MARFORAF, as the lead component, continues to conduct Exercise AFRICAN LION in Morocco—the largest annual Combined Joint Chiefs of Staff (CJCS) exercise on the African continent—as well as Exercise SHARED ACCORD 10, which was the first CJCS exercise conducted in Mozambique.
In 2013, the Special Purpose Marine Air-Ground Task Force - Crisis Response - Africa was formed to provide quick response to American interests in North Africa by flying marines in Bell Boeing V-22 Osprey aircraft from bases in Europe.
Subordinate Commands
U.S. Special Operations Command Africa
Special Operations Command Africa was activated on 1 October 2008 and became fully operationally capable on 1 October 2009. SOCAFRICA is a Subordinate-Unified Command of United States Special Operations Command, operationally controlled by U.S. Africa Command, collocated with USAFRICOM at Kelley Barracks, Stuttgart-Möhringen, Germany. Also on 1 October 2008, SOCAFRICA assumed responsibility for the Special Operations Command and Control Element – Horn of Africa, and on 15 May 2009, SOCAFRICA assumed responsibility for Joint Special Operations Task Force Trans – Sahara (JSOTF-TS) – the SOF component of Operation Enduring Freedom – Trans Sahara.
SOCAFRICA's objectives are to build operational capacity, strengthen regional security and capacity initiatives, implement effective communication strategies in support of strategic objectives, and eradicate violent extremist organizations and their supporting networks. SOCAFRICA forces work closely with both U.S. Embassy country teams and African partners, maintaining a small but sustained presence throughout Africa, predominantly in the OEF-TS and CJTF-HOA regions. SOCAFRICA's persistent SOF presence provides an invaluable resource that furthers USG efforts to combat violent extremist groups and builds partner nation CT capacity.
On 8 April 2011, Naval Special Warfare Unit 10, operationally assigned and specifically dedicated for SOCAFRICA missions, was commissioned at Panzer Kaserne, near Stuttgart, Germany. It is administratively assigned to Naval Special Warfare Group 2 on the U.S. East Coast.
Organizations included in SOCAFRICA include:
Special Operations Command Forward—East (Special Operations Command and Control Element—Horn of Africa)
Special Operations Command Forward—Central (AFRICOM Counter—Lord's Resistance Army Control Element)
Special Operations Command Forward—West (Joint Special Operations Task Force—Trans Sahara)
Naval Special Warfare Unit 10, Joint Special Operations Air Component Africa, and SOCAFRICA Signal Detachment
Commander SOCAFRICA serves as the special operations adviser to commander, USAFRICOM.
Combined Joint Task Force – Horn of Africa
Combined Joint Task Force – Horn of Africa (CJTF-HOA) conducts operations in the East Africa region to build partner nation capacity in order to promote regional security and stability, prevent conflict, and protect U.S. and coalition interests. CJTF-HOA's efforts, as part of a comprehensive whole-of-government approach, are aimed at increasing African partner nations' capacity to maintain a stable environment, with an effective government that provides a degree of economic and social advancement for its citizens.
Programs and operations
The programs conducted by AFRICOM, in conjunction with African military forces, focus on reconnaissance and direct action. However, AFRICOM's directives are to keep American military forces out of direct combat as best as possible. Despite this, the United States has admitted to American troops being involved in direct action during missions with African military partners, namely in classified 127e programs. As of 2019, there have been at least 139 confirmed drone strikes from AFRICOM operations in Somalia. Estimates place the total number of deaths at at least 965, with at least 10 civilians killed. Each AFRICOM operation has a specific mission. Some of the operations in North and West Africa target ISIS and Boko Haram. In East Africa, missions focus on targeting the terrorist group Al-Shabaab and piracy.
By country
Djibouti
The largest number of US troops in Africa are in Djibouti and perform a counterterrorism mission.
Niger
In January 2013, a senior Niger official told Reuters that Bisa Williams, the then-United States Ambassador to Niger, requested permission to establish a drone base in a meeting with Nigerien President Mahamadou Issoufou. On 5 February, officials from both Niger and the U.S. said that the two countries signed a status of forces agreement that allowed the deployment of unarmed surveillance drones. In that month, U.S. President Barack Obama sent 150 military personnel to Niger to set up a surveillance drone operation that would aid France in its counterterrorism efforts in the Northern Mali conflict. In October 2015, Niger and the U.S. signed a military agreement committing the two countries "to work together in the fight against terrorism". U.S. Army Special Forces personnel (commonly referred to as Green Berets) were sent to train the Niger Armed Forces (FAN) to assist in the fight against terrorists from neighboring countries. As of October 2017, there are about 800 U.S. military personnel in Niger, most of whom are working to build a second drone base for American and French aircraft in Agadez. Construction of the base is expected to be completed in 2018, which will allow the U.S. to conduct surveillance operations with the General Atomics MQ-9 Reaper to monitor ISIL insurgents flowing south and other extremists flowing north from the Sahel region.
Somalia
The United States has roughly 400 troops in Somalia. American military forces work closely with African Union troops. Troops conduct raids with Somali troops and provide transport. American forces have engaged in firefights in self-defense and drone airstrikes have been called in to provide additional support.
Programs
African Contingency Operations Training and Assistance
Africa Partnership Station is the U.S. Africa Command's primary maritime security engagement program, which strengthens maritime security through training with various nations.
Combating Terrorism Fellowship Program
Pandemic Response Program
State Partnership Program connects a U.S. state's National Guard to an African nation for military training and relationship-building.
Diplomatic Engagement
Conferences
Military-to-Military Engagement
National Guard State Partnership Program
African Partnership Flight
Africa Partnership Station
African Maritime Law Enforcement Partnership
Non-Commissioned Officer Development
Logistics Engagement
Military Intelligence
Chaplain Engagement
Women, Peace, and Security
Joint Exercise Programs
African Lion
Training exercises sponsored by the United States, through AFRICOM, and Morocco. Participants came from Europe and Africa to train in various military exercises and skills. Exercises conducted during African Lion included "command-and-control techniques, combat tactics, peacekeeping, and humanitarian assistance operations". AFRICOM reports that the exercise has improved the quality of operations conducted between the North African and United States militaries.
Western Accord
Training exercises sponsored by AFRICOM together with European and West African countries, held for the first time in 2014. The goal of this exercise was to improve African forces' skills in conducting peace support operations. Because of the Ebola epidemic of 2014–2015, the exercises were hosted by the Netherlands. During this exercise, the mission command for the United Nations Multidimensional Integrated Stabilization Mission in Mali was replicated. The exercise was later renamed United Accord.
Central Accord
Training exercises conducted with the goal of increasing both the military knowledge and the efficacy of collaborative interactions of the participating groups. Emphasis was placed on crisis response tactics and on fostering strong partnerships between participating groups. Forces came from Africa, the United States, and Europe. The Multi-National Joint Task Force's operations in the Lake Chad Basin are an example of a related regional mission.
Eastern Accord
Series of training exercises that originally began in 1998 under the title "Natural Fire". Justified Accord was a further continuation of the exercises conducted under the name Eastern Accord. Participating forces came from the United States and various African allies. The exercises were conducted with the goal of improving coordinated operations in East Africa. Notable aspects of the training included discussion-based modules focused on peacekeeping endeavors.
Southern Accord
Annual training exercise sponsored by AFRICOM in conjunction with allied African forces over several years. In 2014, partners also included the United Nations Integrated Training Service and the U.S. Army Peacekeeping and Stability Operations Institute. The exercises focused on the goal of peacekeeping. In 2017, Southern Accord was renamed United Accord.
Cutlass Express
Series of training exercises held at sea off the coast of East Africa. The Cutlass Express series was conducted by United States Naval Forces Africa, a group within AFRICOM. The exercises focused on maritime security, piracy countermeasures, and interception of prohibited cargo. The broader Express series also included Obangame Express, Saharan Express, and Phoenix Express.
Obangame Express
Saharan Express
Phoenix Express
Flintlock
Silent Warrior
Africa Endeavor
Operations
Armada Sweep - U.S. Navy electronic surveillance from ships off the coast of East Africa to support drone operations in the region
Echo Casemate - Support of French and African peacekeeping forces in the Central African Republic.
Operation Enduring Freedom - Horn of Africa
Operation Enduring Freedom - Trans Sahara
Exile Hunter - Training of Ethiopian forces for operations in Somalia
Jukebox Lotus - Operations in Libya after attack on Benghazi Consulate.
Junction Rain - Maritime security operations in the Gulf of Guinea.
Junction Serpent - Surveillance operations of ISIS forces near Sirte, Libya
Juniper Micron - Airlift of French forces to combat Islamic extremists in Mali
Juniper Nimbus - Support for Nigerian Forces against Boko Haram
Juniper Shield - Counterterrorism operations in northwest Africa
Jupiter Garrett – Joint Special Operations Command operation against high value targets in Somalia.
Justified Seamount - Counter piracy operation off east African coast
Kodiak Hunter - Training of Kenyan forces for operations in Somalia
Mongoose Hunter - Training of Somali forces for operations against Al Shabab
New Normal - Development of rapid response capability in Africa
Nimble Shield - Operation against Boko Haram and ISIS West Africa.
Oaken Sonnet I - 2013 rescue of United States personnel from South Sudan during its civil war
Oaken Sonnet II - 2014 operation in South Sudan
Oaken Sonnet III – 2016 operation in South Sudan
Oaken Steel - July 2016 to January 2017 deployment to Uganda and reinforcement of security forces at US embassy in South Sudan
Objective Voice - Information operations and psychological warfare in Africa
Oblique Pillar - Contracted helicopter support for Somali National Army forces.
Operation Observant Compass.
Obsidian Lotus - Training Libyan special operations units
Obsidian Mosaic - Operation in Mali
Obsidian Nomad I - Counterterrorism operation in Diffa, Niger
Obsidian Nomad II - Counterterrorism operation in Arlit, Niger
Octave Anchor - Psychological warfare operations focused on Somalia.
Octave Shield - Operation by Combined Joint Task Force-Horn of Africa.
Octave Soundstage - Psychological warfare operations focused on Somalia.
Octave Stingray - Psychological warfare operations focused on Somalia.
Octave Summit - Psychological warfare operations focused on Somalia.
Operation Odyssey Dawn - Libya, was the first major combat deployment directed by Africa Command.
Operation Odyssey Lightning - Libya
Odyssey Resolve - Intelligence, Surveillance and Reconnaissance operations in area of Sirte, Libya.
Operation Onward Liberty - Liberia
Paladin Hunter - Counterterrorism operation in Puntland.
RAINMAKER: A highly sensitive classified signals intelligence effort. Bases used: Chebelley, Djibouti; Baidoa, Baledogle, Kismayo and Mogadishu, Somalia
Ultimate Hunter - Counterterrorism operation by US-trained Kenyan force in Somalia
Operation Unified Protector - Libya
Contingency Operations
Operation Odyssey Dawn
Operation Juniper Micron
Protection of U.S. Personnel and Facilities
Operation United Assistance
Operation Odyssey Lightning
Security Cooperation Operations
Support to Peacekeeping Operations
African Union Mission in Somalia
Operation Observant Compass
Counter-Boko Haram
Africa Contingency Operations Training and Assistance
Africa Deployment Assistance Partnership Team
Counter-IED Training
Foreign Military Sales
International Military Education and Training
Counter Narcotics
Counter-Illicit Trafficking
Medical Engagement
Pandemic Response Program
African Partner Outbreak Alliance
West Africa Disaster Preparedness Initiative
Veterinary Civil Action Program
List of commanders
References
Further reading
"AFRICOM Arrives", Jane's Defence Weekly, 1 October 2008
External links
United States Army Africa official website
Africa Interactive Map from the United States Army Africa
APCN (Africa Partner Country Network)
Africa’s Security Challenges and Rising Strategic Significance, Strategic Insights, January 2007
"Blood Oil" by Sebastian Junger in Vanity Fair, February 2007. Retrieved 28 January 2007
"Africa Command: 'Follow the oil'" in World War 4 Report, 16 February 2007
The Americans Have Landed, Esquire, 27 June 2007. Retrieved 2007-08-10.
Does Africa need Africom?
ResistAFRICOM website
Secret US Military Documents Reveal a Constellation of American Military Bases Across Africa
Maps of Operation Enduring Freedom
Trans-Sahara Counterterrorism Initiative Details of the operation by Global Security.
2007 establishments in Germany
Military in Africa
Organisations based in Stuttgart
Organizations established in 2007
Africa Command
United States military in Stuttgart
United States–African relations |
1785665 | https://en.wikipedia.org/wiki/Red%20Badgro | Red Badgro | Morris Hiram "Red" Badgro (December 1, 1902 – July 13, 1998) was an American football player and football coach who also played professional baseball. He was inducted into the Pro Football Hall of Fame in 1981.
A native of Orillia, Washington, he attended the University of Southern California (USC) where he played baseball, basketball, and football. He then played nine seasons of professional football as an end for the New York Yankees (1927–1928), New York Giants (1930–1935), and Brooklyn Dodgers (1936). He was selected as a first-team All-Pro in 1931, 1933, and 1934. He scored the first touchdown in the first NFL Championship Game and was a member of the 1934 New York Giants team that won the second NFL Championship Game.
Badgro also played professional baseball as an outfielder for six years from 1928 to 1933, including two seasons in Major League Baseball for the St. Louis Browns (1929–1930). After his career as an athlete was over, Badgro served as a football coach for 14 years, including stints as the ends coach for Columbia (1939–1942) and Washington (1946–1953).
Early years
Badgro was born in 1902 in Orillia, Washington. His father, Walter Badgro (1865–1940), was a farmer in Orillia. He attended Kent High School where he was twice named captain of the basketball and baseball teams. Badgro later recalled that his focus was on baseball and basketball in high school, noting that he only played "maybe three games of football in four years" of high school.
University of Southern California
Badgro enrolled at the University of Southern California (USC) on a basketball scholarship. At USC, he was a multi-sport star in baseball, basketball, and football. Playing at the end position for the USC football team, he was selected by the United Press as a first-team player on the 1926 All-Pacific Coast football team. He was a forward for the USC basketball team and was named to the All-Pacific Coast Conference basketball team in 1927. During the 1927 baseball season, he led USC with a .352 batting average, scored 25 runs in 21 games, and was named to the All-California baseball team.
Professional athlete
Football
Badgro played 10 seasons of professional football. During the 1927 season, he appeared in 12 games for the New York Yankees. The Yankees folded after the 1928 season, and Badgro opted to focus on professional baseball. He did not play professional football in 1929.
After playing Major League Baseball in 1929 and 1930, Badgro qualified as a free agent in professional football and signed with the New York Giants for $150 a game. He gained his greatest acclaim as the starting left end for the Giants from 1930 to 1935. He was regarded as a sure-tackling defender and an effective blocker and talented receiver on offense. Giants coach Steve Owen said of Badgro: "He could block, tackle, and catch passes equally well. And he could do each with the best of them." Highlights from Badgro's prime years include the following:
In 1930, he appeared in 17 games at left end, 14 as a starter, and was selected by the Green Bay Press-Gazette as a second-team end on the 1930 All-Pro Team.
In 1931, he appeared in 13 games, 11 as a starter, and was selected by the NFL as a first-team end on the official 1931 All-Pro Team.
In 1932, he appeared in 12 games, 11 as a starter.
In 1933, he appeared in 12 games, 10 as a starter, and was selected by the Chicago Daily News as a second-team end on the 1933 All-Pro Team. He helped lead the Giants to the 1933 NFL Championship Game, where he scored the first touchdown in NFL Championship Game history on a 29-yard pass from Harry Newman.
In 1934, he appeared in 13 games, all as a starter, for the Giants team that won the 1934 NFL Championship Game. He was selected by the NFL and the Chicago Daily News as a first-team end on the 1934 All-Pro Team. He also led the NFL with 16 receptions.
Playing against the Boston Redskins in 1935, Badgro blocked a punt, and teammate Les Corzine returned it for a go-ahead touchdown.
Badgro concluded his playing career with the Brooklyn Dodgers in 1936.
Baseball
Badgro also played professional baseball. He played minor league ball in 1928 for the Tulsa Oilers in the Western League and the Muskogee Chiefs in the Western Association, compiling a .351 batting average in 513 at bats. He also played for the Milwaukee Brewers of the American Association in 1929.
In June 1929, Badgro made his major league debut with the St. Louis Browns. Over the 1929 and 1930 seasons, he appeared in 143 games, 80 of them as a right fielder and 13 as a center fielder. He compiled a .257 batting average in 382 major league at-bats and appeared in his final major league game on September 18, 1930.
Badgro continued to play in the minor leagues for several years, including stints with the Wichita Falls Spudders of the Texas League (1931–1932) and Seattle Indians of the Pacific Coast League (1933).
Coaching career
In 1937, Badgro returned to USC to finish the credits he needed to graduate. At the same time, he was a member of Howard Jones' football coaching staff at USC, responsible for working with USC's frosh players.
In June 1938, Badgro was hired as the football coach at Ventura High School in Ventura, California. He also coached football, baseball, and basketball for Ventura Junior College.
In June 1939, he was hired as an assistant coach (responsible for ends) under Lou Little at Columbia. He remained at Columbia through the 1942 season.
In 1944, Badgro was employed in a Seattle war plant.
In February 1946, Badgro was hired as an assistant football coach at the University of Washington. When Howard Odell took over as Washington's head coach, he retained Badgro as his ends coach. Badgro was again retained when John Cherberg took over as head coach in 1953. He resigned his coaching post at Washington in January 1954 in order to pursue private business in Kent, Washington.
Family, later years, and honors
Badgro was married to Dorothea Taylor. After retiring from football, Badgro worked for the Department of Agriculture in the State of Washington.
In 1967, Badgro was inducted into the Washington State Sports Hall of Fame. Badgro was inducted into the Pro Football Hall of Fame in 1981 at age 78. At that time, he was the oldest person to be inducted into the Hall of Fame.
Badgro died in July 1998 at age 95 in Kent, Washington. He had been hospitalized after a fall. He was buried at Hillcrest Burial Park in Kent.
References
External links
1902 births
1998 deaths
American football defensive ends
American football ends
Forwards (basketball)
Major League Baseball right fielders
Brooklyn Dodgers (NFL) players
Columbia Lions football coaches
New York Yankees (NFL) players
New York Giants players
St. Louis Browns players
USC Trojans baseball players
USC Trojans football coaches
USC Trojans football players
USC Trojans men's basketball players
Washington Huskies football coaches
Longview Cannibals players
Milwaukee Brewers (minor league) players
Muskogee Chiefs players
Seattle Indians players
Tulsa Oilers (baseball) players
Wichita Falls Spudders players
High school football coaches in California
Pro Football Hall of Fame inductees
Sportspeople from Kent, Washington
Players of American football from Washington (state)
Baseball players from Washington (state)
Basketball players from Washington (state)
Accidental deaths from falls
Accidental deaths in Washington (state)
American men's basketball players |
436768 | https://en.wikipedia.org/wiki/Andrew%20File%20System | Andrew File System | The Andrew File System (AFS) is a distributed file system which uses a set of trusted servers to present a homogeneous, location-transparent file name space to all the client workstations. It was developed by Carnegie Mellon University as part of the Andrew Project. Originally named "Vice", "Andrew" refers to Andrew Carnegie and Andrew Mellon. Its primary use is in distributed computing.
Features
AFS has several benefits over traditional networked file systems, particularly in the areas of security and scalability. One enterprise AFS deployment at Morgan Stanley exceeds 25,000 clients. AFS uses Kerberos for authentication, and implements access control lists on directories for users and groups. Each client caches files on the local filesystem for increased speed on subsequent requests for the same file. This also allows limited filesystem access in the event of a server crash or a network outage.
AFS uses the Weak Consistency model. Read and write operations on an open file are directed only to the locally cached copy. When a modified file is closed, the changed portions are copied back to the file server. Cache consistency is maintained by a callback mechanism. When a file is cached, the server makes a note of this and promises to inform the client if the file is updated by someone else. Callbacks are discarded and must be re-established after any client, server, or network failure, including a timeout. Re-establishing a callback involves a status check and does not require re-reading the file itself.
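The callback mechanism can be pictured as a registry of promises held by the server. The following is a conceptual sketch in Python, not the actual AFS implementation: the server records which clients cache a given file and "breaks" those callbacks when another client stores a modified copy.

```python
from collections import defaultdict

class CallbackRegistry:
    """Conceptual illustration of AFS-style callback promises (not the real protocol)."""

    def __init__(self):
        # file identifier -> set of clients holding a callback promise for it
        self._promises = defaultdict(set)

    def register(self, file_id: str, client: str) -> None:
        """Record a promise when a client fetches and caches a file."""
        self._promises[file_id].add(client)

    def break_callbacks(self, file_id: str, writer: str) -> list:
        """When a modified file is stored back, return the other caching
        clients whose copies are now stale and must be re-fetched."""
        stale = [c for c in self._promises[file_id] if c != writer]
        self._promises[file_id] = {writer}
        return stale

registry = CallbackRegistry()
registry.register("vol.project/report.txt", "client-a")
registry.register("vol.project/report.txt", "client-b")
print(registry.break_callbacks("vol.project/report.txt", "client-b"))  # ['client-a']
```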
A consequence of the file locking strategy is that AFS does not support large shared databases or record updating within files shared between client systems. This was a deliberate design decision based on the perceived needs of the university computing environment. For example, in the original email system for the Andrew Project, the Andrew Message System, a single file per message is used, like maildir, rather than a single file per mailbox, like mbox. See "AFS and buffered I/O Problems" for approaches to handling shared databases.
A significant feature of AFS is the volume, a tree of files, sub-directories and AFS mountpoints (links to other AFS volumes). Volumes are created by administrators and linked at a specific named path in an AFS cell. Once created, users of the filesystem may create directories and files as usual without concern for the physical location of the volume. A volume may have a quota assigned to it in order to limit the amount of space consumed. As needed, AFS administrators can move that volume to another server and disk location without the need to notify users; the operation can even occur while files in that volume are being used.
AFS volumes can be replicated to read-only cloned copies. When accessing files in a read-only volume, a client system will retrieve data from a particular read-only copy. If at some point, that copy becomes unavailable, clients will look for any of the remaining copies. Again, users of that data are unaware of the location of the read-only copy; administrators can create and relocate such copies as needed. The AFS command suite guarantees that all read-only volumes contain exact copies of the original read-write volume at the time the read-only copy was created.
The file name space on an Andrew workstation is partitioned into a shared and local name space. The shared name space (usually mounted as /afs on the Unix filesystem) is identical on all workstations. The local name space is unique to each workstation. It only contains temporary files needed for workstation initialization and symbolic links to files in the shared name space.
The Andrew File System heavily influenced Version 4 of Sun Microsystems' popular Network File System (NFS). Additionally, a variant of AFS, the DCE Distributed File System (DFS) was adopted by the Open Software Foundation in 1989 as part of their Distributed Computing Environment. Finally AFS (version two) was the predecessor of the Coda file system.
Implementations
Besides the original, a few other implementations were developed. OpenAFS was built from the source code released by Transarc (IBM) in 2000; the Transarc software itself was later deprecated and lost support.
Arla was an independent implementation of AFS developed at the Royal Institute of Technology in Stockholm in the late 1990s and early 2000s.
A fourth implementation of an AFS client has existed in the Linux kernel source code since at least version 2.6.10. Committed by Red Hat, it is a fairly simple implementation that is still incomplete.
Available permissions
The following Access Control List (ACL) permissions can be granted:
Lookup (l)
allows a user to list the contents of the AFS directory, examine the ACL associated with the directory and access subdirectories.
Insert (i)
allows a user to add new files or subdirectories to the directory.
Delete (d)
allows a user to remove files and subdirectories from the directory.
Administer (a)
allows a user to change the ACL for the directory. Users always have this right on their home directory, even if they accidentally remove themselves from the ACL.
Permissions that affect files and subdirectories include:
Read (r)
allows a user to look at the contents of files in a directory and list files in subdirectories. Files that are to be granted read access to any user, including the owner, need to have the standard UNIX "owner read" permission set.
Write (w)
allows a user to modify files in a directory. Files that are to be granted write access to any user, including the owner, need to have the standard UNIX "owner write" permission set.
Lock (k)
allows a user to run programs that need to "flock" files in the directory.
Additionally, AFS includes Application ACLs (A)-(H) which have no effect on access to files.
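In practice, these rights are usually granted from the command line. The snippet below is a minimal sketch that wraps the OpenAFS fs utility from Python; it assumes the OpenAFS client tools are installed and that the caller holds the administer (a) right on the directory, and the example path and user name are hypothetical.

```python
import subprocess

def grant_afs_rights(directory: str, user: str, rights: str) -> None:
    """Grant an AFS user the given rights string (e.g. 'rl' or 'rlidwka') on a directory.

    Thin wrapper around the OpenAFS command-line client; requires the
    caller to hold the administer (a) right on the directory.
    """
    subprocess.run(
        ["fs", "setacl", "-dir", directory, "-acl", user, rights],
        check=True,
    )

def show_afs_acl(directory: str) -> str:
    """Return the ACL of an AFS directory as reported by 'fs listacl'."""
    result = subprocess.run(
        ["fs", "listacl", directory],
        check=True, capture_output=True, text=True,
    )
    return result.stdout

# Hypothetical usage: give user 'pat' read and lookup rights on a project directory.
# grant_afs_rights("/afs/example.org/project/docs", "pat", "rl")
# print(show_afs_acl("/afs/example.org/project/docs"))
```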
See also
Global filesystem
References
External links
OpenAFS
Arla
Further reading
The Andrew File System (2014), Arpaci-Dusseau, Remzi H.; Arpaci-Dusseau, Andrea C.; Arpaci-Dusseau Books
Network file systems
Carnegie Mellon University software
IBM file systems
Distributed file systems supported by the Linux kernel |
350705 | https://en.wikipedia.org/wiki/Layer%202%20Tunneling%20Protocol | Layer 2 Tunneling Protocol | In computer networking, Layer 2 Tunneling Protocol (L2TP) is a tunneling protocol used to support virtual private networks (VPNs) or as part of the delivery of services by ISPs. It uses encryption ('hiding') only for its own control messages (using an optional pre-shared secret), and does not provide any encryption or confidentiality of content by itself. Rather, it provides a tunnel for Layer 2 (which may be encrypted), and the tunnel itself may be passed over a Layer 3 encryption protocol such as IPsec.
History
Published in 2000 as proposed standard RFC 2661, L2TP has its origins primarily in two older tunneling protocols for point-to-point communication: Cisco's Layer 2 Forwarding Protocol (L2F) and Microsoft's Point-to-Point Tunneling Protocol (PPTP). A new version of this protocol, L2TPv3, appeared as proposed standard RFC 3931 in 2005. L2TPv3 provides additional security features, improved encapsulation, and the ability to carry data links other than simply Point-to-Point Protocol (PPP) over an IP network (for example: Frame Relay, Ethernet, ATM, etc.).
Description
The entire L2TP packet, including payload and L2TP header, is sent within a User Datagram Protocol (UDP) datagram. A virtue of transmission over UDP (rather than TCP) is that it avoids the "TCP meltdown problem". It is common to carry PPP sessions within an L2TP tunnel. L2TP does not provide confidentiality or strong authentication by itself. IPsec is often used to secure L2TP packets by providing confidentiality, authentication and integrity. The combination of these two protocols is generally known as L2TP/IPsec (discussed below).
The two endpoints of an L2TP tunnel are called the L2TP access concentrator (LAC) and the L2TP network server (LNS). The LNS waits for new tunnels. Once a tunnel is established, the network traffic between the peers is bidirectional. To be useful for networking, higher-level protocols are then run through the L2TP tunnel. To facilitate this, an L2TP session is established within the tunnel for each higher-level protocol such as PPP. Either the LAC or LNS may initiate sessions. The traffic for each session is isolated by L2TP, so it is possible to set up multiple virtual networks across a single tunnel.
The packets exchanged within an L2TP tunnel are categorized as either control packets or data packets. L2TP provides reliability features for the control packets, but no reliability for data packets. Reliability, if desired, must be provided by the nested protocols running within each session of the L2TP tunnel.
L2TP allows the creation of a virtual private dialup network (VPDN) to connect a remote client to its corporate network by using a shared infrastructure, which could be the Internet or a service provider's network.
Tunneling models
An L2TP tunnel can extend across an entire PPP session or only across one segment of a two-segment session. This can be represented by four different tunneling models, namely:
voluntary tunnel
compulsory tunnel — incoming call
compulsory tunnel — remote dial
L2TP multihop connection
L2TP packet structure
An L2TP packet consists of the fields described below (a short parsing sketch follows the list).
Field meanings:
Flags and version: control flags indicating a data or control packet and the presence of the length, sequence, and offset fields.
Length (optional): total length of the message in bytes, present only when the length flag is set.
Tunnel ID: indicates the identifier for the control connection.
Session ID: indicates the identifier for a session within a tunnel.
Ns (optional): sequence number for this data or control message, beginning at zero and incrementing by one (modulo 2^16) for each message sent. Present only when the sequence flag is set.
Nr (optional): sequence number for the expected message to be received. Nr is set to the Ns of the last in-order message received plus one (modulo 2^16). In data messages, Nr is reserved and, if present (as indicated by the S bit), must be ignored upon receipt.
Offset Size (optional): specifies where payload data is located past the L2TP header. If the offset field is present, the L2TP header ends after the last byte of the offset padding. This field exists if the offset flag is set.
Offset Pad (optional): variable length, as specified by the offset size. Contents of this field are undefined.
Payload data: variable length (maximum payload size = maximum size of UDP packet − size of L2TP header).
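As a rough illustration of this layout, the following Python sketch decodes the fixed header fields and whichever optional fields the flag bits indicate, following the RFC 2661 bit assignments; it is not a complete implementation and does not parse control-message AVPs.

```python
import struct

def parse_l2tp_header(packet: bytes) -> dict:
    """Minimal sketch of an L2TPv2 (RFC 2661) header parser."""
    (flags_ver,) = struct.unpack_from("!H", packet, 0)
    hdr = {
        "is_control": bool(flags_ver & 0x8000),   # T bit
        "has_length": bool(flags_ver & 0x4000),   # L bit
        "has_seq":    bool(flags_ver & 0x0800),   # S bit
        "has_offset": bool(flags_ver & 0x0200),   # O bit
        "priority":   bool(flags_ver & 0x0100),   # P bit
        "version":    flags_ver & 0x000F,         # 2 for L2TPv2
    }
    pos = 2
    if hdr["has_length"]:
        (hdr["length"],) = struct.unpack_from("!H", packet, pos); pos += 2
    hdr["tunnel_id"], hdr["session_id"] = struct.unpack_from("!HH", packet, pos)
    pos += 4
    if hdr["has_seq"]:
        hdr["ns"], hdr["nr"] = struct.unpack_from("!HH", packet, pos); pos += 4
    if hdr["has_offset"]:
        (offset_size,) = struct.unpack_from("!H", packet, pos); pos += 2
        pos += offset_size                         # skip the offset padding
    hdr["payload"] = packet[pos:]
    return hdr

# Synthetic example: a control packet (T, L, S set) for tunnel 5, session 0, Ns=0, Nr=0.
sample = struct.pack("!HHHHHH", 0xC802, 12, 5, 0, 0, 0)
print(parse_l2tp_header(sample))
```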
L2TP packet exchange
When an L2TP connection is set up, many control packets are exchanged between server and client to establish a tunnel and a session for each direction. One peer requests the other peer to assign a specific tunnel and session ID through these control packets. Then, using this tunnel and session ID, data packets are exchanged with the compressed PPP frames as payload.
A sequence of L2TP control messages is exchanged between the LAC and the LNS for handshaking before a tunnel and session are established in the voluntary tunneling method.
L2TP/IPsec
Because of the lack of confidentiality inherent in the L2TP protocol, it is often implemented along with IPsec. This is referred to as L2TP/IPsec, and is standardized in IETF RFC 3193. The process of setting up an L2TP/IPsec VPN is as follows:
Negotiation of IPsec security association (SA), typically through Internet key exchange (IKE). This is carried out over UDP port 500, and commonly uses either a shared password (so-called "pre-shared keys"), public keys, or X.509 certificates on both ends, although other keying methods exist.
Establishment of Encapsulating Security Payload (ESP) communication in transport mode. The IP protocol number for ESP is 50 (compare TCP's 6 and UDP's 17). At this point, a secure channel has been established, but no tunneling is taking place.
Negotiation and establishment of L2TP tunnel between the SA endpoints. The actual negotiation of parameters takes place over the SA's secure channel, within the IPsec encryption. L2TP uses UDP port 1701.
When the process is complete, L2TP packets between the endpoints are encapsulated by IPsec. Since the L2TP packet itself is wrapped and hidden within the IPsec packet, the original source and destination IP address is encrypted within the packet. Also, it is not necessary to open UDP port 1701 on firewalls between the endpoints, since the inner packets are not acted upon until after IPsec data has been decrypted and stripped, which only takes place at the endpoints.
A potential point of confusion in L2TP/IPsec is the use of the terms tunnel and secure channel. The term tunnel-mode refers to a channel which allows untouched packets of one network to be transported over another network. In the case of L2TP/PPP, it allows L2TP/PPP packets to be transported over IP. A secure channel refers to a connection within which the confidentiality of all data is guaranteed. In L2TP/IPsec, first IPsec provides a secure channel, then L2TP provides a tunnel. IPsec also specifies a tunnel protocol: this is not used when an L2TP tunnel is used.
Windows implementation
Windows has had native support (configurable in the control panel) for L2TP since Windows 2000. Windows Vista added two alternative tools, an MMC snap-in called "Windows Firewall with Advanced Security" (WFwAS) and the "netsh advfirewall" command-line tool. One limitation with both the WFwAS and netsh commands is that servers must be specified by IP address. Windows 10 added the "Add-VpnConnection" and "Set-VpnConnectionIPsecConfiguration" PowerShell commands. A registry key must be created on the client and server if the server is behind a NAT-T device.
L2TP in ISPs' networks
L2TP is often used by ISPs when internet service delivered over, for example, ADSL or cable is being resold. From the end user, packets travel over a wholesale network service provider's network to a server called a Broadband Remote Access Server (BRAS), a protocol converter and router combined. On legacy networks, the path from the end user's customer premises equipment to the BRAS may be over an ATM network.
From there on, over an IP network, an L2TP tunnel runs from the BRAS (acting as LAC) to an LNS which is an edge router at the boundary of the ultimate destination ISP's IP network. See example of reseller ISPs using L2TP.
RFC references
Cisco Layer Two Forwarding (Protocol) "L2F" (a predecessor to L2TP)
Point-to-Point Tunneling Protocol (PPTP)
Layer Two Tunneling Protocol "L2TP"
Implementation of L2TP Compulsory Tunneling via RADIUS
Secure Remote Access with L2TP
Layer Two Tunneling Protocol (L2TP) over Frame Relay
L2TP Disconnect Cause Information
Securing L2TP using IPsec
Layer Two Tunneling Protocol (L2TP): ATM access network
Layer Two Tunneling Protocol (L2TP) Differentiated Services
Layer Two Tunneling Protocol (L2TP) Over ATM Adaptation Layer 5 (AAL5)
Layer Two Tunneling Protocol "L2TP" Management Information Base
Layer Two Tunneling Protocol Extensions for PPP Link Control Protocol Negotiation
Layer Two Tunneling Protocol (L2TP) Internet Assigned Numbers: Internet Assigned Numbers Authority (IANA) Considerations Update
Signaling of Modem-On-Hold status in Layer 2 Tunneling Protocol (L2TP)
Layer 2 Tunneling Protocol (L2TP) Active Discovery Relay for PPP over Ethernet (PPPoE)
Layer Two Tunneling Protocol - Version 3 (L2TPv3)
Extensions to Support Efficient Carrying of Multicast Traffic in Layer-2 Tunneling Protocol (L2TP)
Fail Over Extensions for Layer 2 Tunneling Protocol (L2TP) "failover"
See also
IPsec
Layer 2 Forwarding Protocol
Point-to-Point Tunneling Protocol
Point-to-Point Protocol
Virtual Extensible LAN
References
External links
Implementations
Cisco: Cisco L2TP documentation, also read Technology brief from Cisco
Open source and Linux: xl2tpd, Linux RP-L2TP, OpenL2TP, l2tpns, l2tpd (inactive), Linux L2TP/IPsec server, FreeBSD multi-link PPP daemon, OpenBSD npppd(8), ACCEL-PPP - PPTP/L2TP/PPPoE server for Linux
Microsoft: built-in client included with Windows 2000 and higher; Microsoft L2TP/IPsec VPN Client for Windows 98/Windows Me/Windows NT 4.0
Apple: built-in client included with Mac OS X 10.3 and higher.
VPDN on Cisco.com
Other
IANA assigned numbers for L2TP
L2TP Extensions Working Group (l2tpext) - (where future standardization work is being coordinated)
Using Linux as an L2TP/IPsec VPN client
L2TP/IPSec with OpenBSD and npppd
Comparison of L2TP, PPTP and OpenVPN
Internet protocols
Internet Standards
Tunneling protocols
Virtual private networks |
23588 | https://en.wikipedia.org/wiki/Pinyin | Pinyin | Hanyu Pinyin (), often abbreviated to pinyin, is the official romanization system for Standard Mandarin Chinese in mainland China and to some extent in Taiwan and Singapore. It is often used to teach Standard Mandarin, which is normally written using Chinese characters. The system includes four diacritics denoting tones. Pinyin without tone marks is used to spell Chinese names and words in languages written with the Latin alphabet and also in certain computer input methods to enter Chinese characters.
The pinyin system was developed in the 1950s by a group of Chinese linguists including Zhou Youguang and was based on earlier forms of romanizations of Chinese. It was published by the Chinese government in 1958 and revised several times. The International Organization for Standardization (ISO) adopted pinyin as an international standard in 1982 and was followed by the United Nations in 1986. Attempts to make pinyin standard in Taiwan occurred in 2002 and 2009, but "Today Taiwan has no standardized spelling system" so that in 2019 "alphabetic spellings in Taiwan are marked more by a lack of system than the presence of one." Moreover, "some cities, businesses, and organizations, notably in the south of Taiwan, did not accept [efforts to introduce pinyin], as it suggested that Taiwan is more closely tied to the PRC", so it remains one of several rival romanization systems in use.
The word () means 'the spoken language of the Han people', while () literally means 'spelled sounds'.
When a foreign writing system with one set of coding/decoding system is taken to write a language, certain compromises may have to be made. The result is that the decoding systems used in some foreign languages will enable non-native speakers to produce sounds more closely resembling the target language than will the coding/decoding system used by other foreign languages. Native speakers of English will decode pinyin spellings to fairly close approximations of Mandarin except in the case of certain speech sounds that are not ordinarily produced by most native speakers of English: j /tɕ/, q /tɕʰ/, x /ɕ/, z /ts/, c /tsʰ/, zh /ʈʂ/, ch /ʈʂʰ/, h /x/ and r /ɻ/ exhibiting the greatest discrepancies.
In this system, the correspondence between the Roman letter and the sound is sometimes idiosyncratic, though not necessarily more so than the way the Latin script is employed in other languages. For example, the aspiration distinction between b, d, g and p, t, k is similar to that of these syllable-initial consonants in English (in which the two sets are however also differentiated by voicing), but not to that of French. Letters z and c also have that distinction, pronounced as and (which is reminiscent of these letters being used to represent the phoneme in the German language and Latin-script-using Slavic languages, respectively). From s, z, c come the digraphs sh, zh, ch by analogy with English sh, ch. Although this introduces the novel combination zh, it is internally consistent in how the two series are related. In the x, j, q series, the pinyin use of x is similar to its use in Portuguese, Galician, Catalan, Basque and Maltese and the pinyin q is akin to its value in Albanian; both pinyin and Albanian pronunciations may sound similar to the ch to the untrained ear. Pinyin vowels are pronounced in a similar way to vowels in Romance languages.
The pronunciation and spelling of Chinese words are generally given in terms of initials and finals, which represent the segmental phonemic portion of the language, rather than letter by letter. Initials are initial consonants, while finals are all possible combinations of medials (semivowels coming before the vowel), a nucleus vowel and coda (final vowel or consonant).
History
Background: romanization of Chinese before 1949
In 1605, the Jesuit missionary Matteo Ricci published Xizi Qiji () in Beijing. This was the first book to use the Roman alphabet to write the Chinese language. Twenty years later, another Jesuit in China, Nicolas Trigault, issued his () at Hangzhou. Neither book had much immediate impact on the way in which Chinese thought about their writing system, and the romanizations they described were intended more for Westerners than for the Chinese.
One of the earliest Chinese thinkers to relate Western alphabets to Chinese was late Ming to early Qing dynasty scholar-official, Fang Yizhi (; 1611–1671).
The first late Qing reformer to propose that China adopt a system of spelling was Song Shu (1862–1910). A student of the great scholars Yu Yue and Zhang Taiyan, Song had been to Japan and observed the stunning effect of the kana syllabaries and Western learning there. This galvanized him into activity on a number of fronts, one of the most important being reform of the script. While Song did not himself actually create a system for spelling Sinitic languages, his discussion proved fertile and led to a proliferation of schemes for phonetic scripts.
Wade–Giles
The Wade–Giles system was produced by Thomas Wade in 1859, and further improved by Herbert Giles in the Chinese–English Dictionary of 1892. It was popular and used in English-language publications outside China until 1979.
Sin Wenz
In the early 1930s, Communist Party of China leaders trained in Moscow introduced a phonetic alphabet using Roman letters which had been developed in the Soviet Oriental Institute of Leningrad and was originally intended to improve literacy in the Russian Far East. This Sin Wenz or "New Writing" was much more linguistically sophisticated than earlier alphabets, but with the major exception that it did not indicate tones of Chinese.
In 1940, several thousand members attended a Border Region Sin Wenz Society convention. Mao Zedong and Zhu De, head of the army, both contributed their calligraphy (in characters) for the masthead of the Sin Wenz Society's new journal. Outside the CCP, other prominent supporters included Sun Yat-sen's son, Sun Fo; Cai Yuanpei, the country's most prestigious educator; Tao Xingzhi, a leading educational reformer; and Lu Xun. Over thirty journals soon appeared written in Sin Wenz, plus large numbers of translations, biographies (including Lincoln, Franklin, Edison, Ford, and Charlie Chaplin), some contemporary Chinese literature, and a spectrum of textbooks. In 1940, the movement reached an apex when Mao's Border Region Government declared that the Sin Wenz had the same legal status as traditional characters in government and public documents. Many educators and political leaders looked forward to the day when they would be universally accepted and completely replace Chinese characters. Opposition arose, however, because the system was less well adapted to writing regional languages, and therefore would require learning Mandarin. Sin Wenz fell into relative disuse during the following years.
Yale romanization
In 1943, the U.S. military engaged Yale University to develop a romanization of Mandarin Chinese for its pilots flying over China. The resulting system is very close to pinyin, but does not use English letters in unfamiliar ways; for example, pinyin x for is written as sy in the Yale system. Medial semivowels are written with y and w (instead of pinyin i and u), and apical vowels (syllabic consonants) with r or z. Accent marks are used to indicate tone.
Emergence and history of Hanyu Pinyin
Pinyin was created by a group of Chinese linguists, including Zhou Youguang who was an economist, as part of a Chinese government project in the 1950s. Zhou, often called "the father of pinyin," worked as a banker in New York when he decided to return to China to help rebuild the country after the establishment of the People's Republic of China in 1949. He became an economics professor in Shanghai, and in 1955, when China's Ministry of Education created a Committee for the Reform of the Chinese Written Language, Premier Zhou Enlai assigned Zhou Youguang the task of developing a new romanization system, despite the fact that he was not a professional linguist.
Hanyu Pinyin was based on several existing systems: Gwoyeu Romatzyh of 1928, Latinxua Sin Wenz of 1931, and the diacritic markings from zhuyin (bopomofo). "I'm not the father of pinyin," Zhou said years later; "I'm the son of pinyin. It's [the result of] a long tradition from the later years of the Qing dynasty down to today. But we restudied the problem and revisited it and made it more perfect."
A draft was published on February 12, 1956. The first edition of Hanyu Pinyin was approved and adopted at the Fifth Session of the 1st National People's Congress on February 11, 1958. It was then introduced to primary schools as a way to teach Standard Chinese pronunciation and used to improve the literacy rate among adults.
During the height of the Cold War, the use of pinyin system over the Yale romanization outside of China was regarded as a political statement or identification with the communist Chinese regime. Beginning in the early 1980s, Western publications addressing Mainland China began using the Hanyu Pinyin romanization system instead of earlier romanization systems; this change followed the normalization of diplomatic relations between the United States and the PRC in 1979. In 2001, the PRC Government issued the National Common Language Law, providing a legal basis for applying pinyin. The current specification of the orthographic rules is laid down in the National Standard GB/T 16159–2012.
Initials and finals
Unlike European languages, clusters of letters — initials () and finals () — and not consonant and vowel letters, form the fundamental elements in pinyin (and most other phonetic systems used to describe the Han language). Every Mandarin syllable can be spelled with exactly one initial followed by one final, except for the special syllable er or when a trailing -r is considered part of a syllable (see below, and see erhua). The latter case, though a common practice in some sub-dialects, is rarely used in official publications.
Even though most initials contain a consonant, finals are not always simple vowels, especially in compound finals (), i.e. when a "medial" is placed in front of the final. For example, the medials and are pronounced with such tight openings at the beginning of a final that some native Chinese speakers (especially when singing) pronounce yī (, clothes, officially pronounced ) as and wéi (, to enclose, officially pronounced ) as or . Often these medials are treated as separate from the finals rather than as part of them; this convention is followed in the chart of finals below.
Initials
In each cell below, the bold letters indicate pinyin and the brackets enclose the symbol in the International Phonetic Alphabet.
1 y is pronounced (a labial-palatal approximant) before u. 2 The letters w and y are not included in the table of initials in the official pinyin system. They are an orthographic convention for the medials i, u and ü when no initial is present. When i, u, or ü are finals and no initial is present, they are spelled yi, wu, and yu, respectively.
The conventional lexicographical order (excluding w and y), derived from the zhuyin system ("bopomofo"), is:
b p m f; d t n l; g k h; j q x; zh ch sh r; z c s
According to Scheme for the Chinese Phonetic Alphabet, zh, ch, and sh can be abbreviated as ẑ, ĉ, and ŝ (z, c, s with a circumflex). However, the shorthands are rarely used due to difficulty of entering them on computers and are confined mainly to Esperanto keyboard layouts.
Finals
In each cell below, the first line indicates IPA, the second indicates pinyin for a standalone (no-initial) form, and the third indicates pinyin for a combination with an initial. Other than finals modified by an -r, which are omitted, the following is an exhaustive table of all possible finals.1
The only syllable-final consonants in Standard Chinese are -n and -ng, and -r, the last of which is attached as a grammatical suffix. A Chinese syllable ending with any other consonant either is from a non-Mandarin language (a southern Chinese language such as Cantonese, or a minority language of China; possibly reflecting final consonants in Old Chinese), or indicates the use of a non-pinyin romanization system (where final consonants may be used to indicate tones).
1 For other finals formed by the suffix -r, pinyin does not use special orthography; one simply appends r to the final that it is added to, without regard for any sound changes that may take place along the way. For information on sound changes related to final r, please see Erhua#Rules.
2 ü is written as u after y, j, q, or x.
3 uo is written as o after b, p, m, f, or w.
Technically, i, u, ü without a following vowel are finals, not medials, and therefore take the tone marks, but they are more concisely displayed as above. In addition, ê () and syllabic nasals m (, ), n (, ), ng (, ) are used as interjections.
According to Scheme for the Chinese Phonetic Alphabet, ng can be abbreviated with a shorthand of ŋ. However, this shorthand is rarely used due to difficulty of entering them on computers.
The ü sound
An umlaut is placed over the letter u when it occurs after the initials l and n when necessary in order to represent the sound [y]. This is necessary in order to distinguish the front high rounded vowel in lü (e.g. ) from the back high rounded vowel in lu (e.g. ). Tonal markers are added on top of the umlaut, as in lǘ.
However, the ü is not used in the other contexts where it could represent a front high rounded vowel, namely after the letters j, q, x, and y. For example, the sound of the word / (fish) is transcribed in pinyin simply as yú, not as yǘ. This practice is opposed to Wade–Giles, which always uses ü, and Tongyong Pinyin, which always uses yu. Whereas Wade–Giles needs the umlaut to distinguish between chü (pinyin ju) and chu (pinyin zhu), this ambiguity does not arise with pinyin, so the more convenient form ju is used instead of jü. Genuine ambiguities only happen with nu/nü and lu/lü, which are then distinguished by an umlaut.
Many fonts or output methods do not support an umlaut for ü or cannot place tone marks on top of ü. Likewise, using ü in input methods is difficult because it is not present as a simple key on many keyboard layouts. For these reasons v is sometimes used instead by convention. For example, it is common for cellphones to use v instead of ü. Additionally, some stores in China use v instead of ü in the transliteration of their names. The drawback is that there are no tone marks for the letter v.
This also presents a problem in transcribing names for use on passports, affecting people with names that consist of the sound lü or nü, particularly people with the surname (Lǚ), a fairly common surname, particularly compared to the surnames (Lù), (Lǔ), (Lú) and (Lù). Previously, the practice varied among different passport issuing offices, with some transcribing as "LV" and "NV" while others used "LU" and "NU". On 10 July 2012, the Ministry of Public Security standardized the practice to use "LYU" and "NYU" in passports.
Although nüe written as nue, and lüe written as lue are not ambiguous, nue or lue are not correct according to the rules; nüe and lüe should be used instead. However, some Chinese input methods (e.g. Microsoft Pinyin IME) support both nve/lve (typing v for ü) and nue/lue.
Approximation from English pronunciation
Most rules given here in terms of English pronunciation are approximations, as several of these sounds do not correspond directly to sounds in English.
Pronunciation of initials
* Note on y and w
Y and w are equivalent to the semivowel medials i, u, and ü (see below). They are spelled differently when there is no initial consonant in order to mark a new syllable: fanguan is fan-guan, while fangwan is fang-wan (and equivalent to *fang-uan). With this convention, an apostrophe only needs to be used to mark an initial a, e, or o: Xi'an (two syllables: ) vs. xian (one syllable: ). In addition, y and w are added to fully vocalic i, u, and ü when these occur without an initial consonant, so that they are written yi, wu, and yu. Some Mandarin speakers do pronounce a or sound at the beginning of such words—that is, yi or , wu or , yu or ,—so this is an intuitive convention. See below for a few finals which are abbreviated after a consonant plus w/u or y/i medial: wen → C+un, wei → C+ui, weng → C+ong, and you → C+iu.
** Note on the apostrophe
The apostrophe (') () is used before a syllable starting with a vowel (, , or ) in a multiple-syllable word when the syllable does not start the word, unless the syllable immediately follows a hyphen or other dash. For example, is written as Xi'an or Xī'ān, and is written as Tian'e or Tiān'é, but is written "dì-èr", without an apostrophe. This apostrophe is not used in the Taipei Metro names.
Apostrophes (as well as hyphens and tone marks) are omitted on Chinese passports.
Pronunciation of finals
The following is a list of finals in Standard Chinese, excepting most of those ending with r.
To find a given final, proceed as follows (a short code sketch follows this list):
Remove the initial consonant. Zh, ch, and sh count as initial consonants.
Change initial w to u and initial y to i. For weng, wen, wei, you, look under ong, un, ui, iu.
For u after j, q, x, or y, look under ü.
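A minimal sketch of this lookup procedure in Python might look as follows; it assumes a well-formed, toneless pinyin syllable and covers only the conventions listed above.

```python
INITIALS = [  # longest first so that zh/ch/sh are matched before z/c/s
    "zh", "ch", "sh",
    "b", "p", "m", "f", "d", "t", "n", "l",
    "g", "k", "h", "j", "q", "x", "r", "z", "c", "s",
]

def split_syllable(syllable: str) -> tuple:
    """Split a toneless pinyin syllable into (initial, final) for table lookup."""
    for ini in INITIALS:
        if syllable.startswith(ini):
            final = syllable[len(ini):]
            # u after j, q, x stands for ü
            if ini in ("j", "q", "x") and final.startswith("u"):
                final = "ü" + final[1:]
            return ini, final
    # no initial: undo the w/y spelling conventions
    if syllable.startswith("yu"):
        return "", "ü" + syllable[2:]
    if syllable.startswith("y"):
        rest = syllable[1:]
        return "", rest if rest.startswith("i") else "i" + rest
    if syllable.startswith("w"):
        rest = syllable[1:]
        return "", rest if rest.startswith("u") else "u" + rest
    return "", syllable

print(split_syllable("zhang"))  # ('zh', 'ang')
print(split_syllable("xu"))     # ('x', 'ü')
print(split_syllable("wen"))    # ('', 'uen') -> written 'un' after an initial
print(split_syllable("yan"))    # ('', 'ian')
```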
Tones
The pinyin system also uses diacritics to mark the four tones of Mandarin. The diacritic is placed over the letter that represents the syllable nucleus, unless that letter is missing (see below).
Many books printed in China use a mix of fonts, with vowels and tone marks rendered in a different font from the surrounding text, tending to give such pinyin texts a typographically ungainly appearance. This style, most likely rooted in early technical limitations, has led many to believe that pinyin's rules call for this practice, e.g. the use of a Latin alpha (ɑ) rather than the standard style (a) found in most fonts, or g often written with a single-storey ɡ. The rules of Hanyu Pinyin, however, specify no such practice.
The first tone (flat or high-level tone) is represented by a macron (ˉ) added to the pinyin vowel:
ā ē ī ō ū ǖ Ā Ē Ī Ō Ū Ǖ
The second tone (rising or high-rising tone) is denoted by an acute accent (ˊ):
á é í ó ú ǘ Á É Í Ó Ú Ǘ
The third tone (falling-rising or low tone) is marked by a caron/háček (ˇ). It is not the rounded breve (˘), though a breve is sometimes substituted due to ignorance or font limitations.
ǎ ě ǐ ǒ ǔ ǚ Ǎ Ě Ǐ Ǒ Ǔ Ǚ
The fourth tone (falling or high-falling tone) is represented by a grave accent (ˋ):
à è ì ò ù ǜ À È Ì Ò Ù Ǜ
The fifth tone (neutral tone) is represented by a normal vowel without any accent mark:
a e i o u ü A E I O U Ü
In dictionaries, neutral tone may be indicated by a dot preceding the syllable; for example, ·ma. When a neutral tone syllable has an alternative pronunciation in another tone, a combination of tone marks may be used: zhī·dào ().
These tone marks normally are only used in Mandarin textbooks or in foreign learning texts, but they are essential for correct pronunciation of Mandarin syllables, as exemplified by the following classic example of five characters whose pronunciations differ only in their tones:
The words are "mother", "hemp", "horse", "scold", and a question particle, respectively.
Numerals in place of tone marks
Before the advent of computers, many typewriter fonts did not contain vowels with macron or caron diacritics. Tones were thus represented by placing a tone number at the end of individual syllables. For example, tóng is written tong².
The number used for each tone is as the order listed above, except the neutral tone, which is either not numbered, or given the number 0 or 5, e.g. ma⁵ for /, an interrogative marker.
Rules for placing the tone mark
Briefly, the tone mark should always be placed according to the order a, o, e, i, u, ü, with the only exception being iu, where the tone mark is placed on the u instead. Pinyin tone marks appear primarily above the nucleus of the syllable, for example as in kuài, where k is the initial, u the medial, a the nucleus, and i the coda. The exception is syllabic nasals like /m/, where the nucleus of the syllable is a consonant; in that case the diacritic is carried by a written dummy vowel.
When the nucleus is /ə/ (written e or o), and there is both a medial and a coda, the nucleus may be dropped from writing. In this case, when the coda is a consonant n or ng, the only vowel left is the medial i, u, or ü, and so this takes the diacritic. However, when the coda is a vowel, it is the coda rather than the medial which takes the diacritic in the absence of a written nucleus. This occurs with syllables ending in -ui (from wei: wèi → -uì) and in -iu (from you: yòu → -iù). That is, in the absence of a written nucleus the finals have priority for receiving the tone marker, as long as they are vowels: if not, the medial takes the diacritic.
An algorithm to find the correct vowel letter (when there is more than one) is as follows:
If there is an a or an e, it will take the tone mark
If there is an ou, then the o takes the tone mark
Otherwise, the second vowel takes the tone mark
Worded differently,
If there is an a, e, or o, it will take the tone mark; in the case of ao, the mark goes on the a
Otherwise, the vowels are -iu or -ui, in which case the second vowel takes the tone mark
If the tone is written over an i, the tittle above the i is omitted, as in yī.
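These placement rules, together with the numeral notation described earlier, can be captured in a short function. The following Python sketch converts a toneless syllable plus a tone number into the marked form; it is illustrative only and assumes well-formed pinyin input.

```python
import unicodedata

# Combining diacritics for tones 1-4; tone 5 (neutral) adds no mark.
TONE_MARKS = {1: "\u0304", 2: "\u0301", 3: "\u030C", 4: "\u0300"}
VOWELS = "aeiouü"

def add_tone_mark(syllable: str, tone: int) -> str:
    """Convert a numbered syllable such as ('tong', 2) to 'tóng'.

    Implements the placement rules above: a or e always takes the mark,
    the o of 'ou' takes it, otherwise the last (second) vowel does.
    """
    syllable = syllable.replace("v", "ü")      # common keyboard substitute
    if tone not in TONE_MARKS:                 # neutral tone: no diacritic
        return syllable
    if "a" in syllable:
        pos = syllable.index("a")
    elif "e" in syllable:
        pos = syllable.index("e")
    elif "ou" in syllable:
        pos = syllable.index("o")
    else:
        # e.g. -iu, -ui, or a lone i/u/ü: mark the last vowel present
        pos = max(syllable.rfind(v) for v in VOWELS)
    marked = syllable[:pos + 1] + TONE_MARKS[tone] + syllable[pos + 1:]
    return unicodedata.normalize("NFC", marked)

print(add_tone_mark("tong", 2))   # tóng
print(add_tone_mark("kuai", 4))   # kuài
print(add_tone_mark("nü", 3))     # nǚ
print(add_tone_mark("xian", 1))   # xiān
```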
Phonological intuition
The placement of the tone marker, when more than one of the written letters a, e, i, o, and u appears, can also be inferred from the nature of the vowel sound in the medial and final. The rule is that the tone marker goes on the spelled vowel that is not a (near-)semi-vowel. The exception is that, for triphthongs that are spelled with only two vowel letters, both of which are the semi-vowels, the tone marker goes on the second spelled vowel.
Specifically, if the spelling of a diphthong begins with i (as in ia) or u (as in ua), which serves as a near-semi-vowel, this letter does not take the tone marker. Likewise, if the spelling of a diphthong ends with o or u representing a near-semi-vowel (as in ao or ou), this letter does not receive a tone marker. In a triphthong spelled with three of a, e, i, o, and u (with i or u replaced by y or w at the start of a syllable), the first and third letters coincide with near-semi-vowels and hence do not receive the tone marker (as in iao or uai or iou). But if no letter is written to represent a triphthong's middle (non-semi-vowel) sound (as in ui or iu), then the tone marker goes on the final (second) vowel letter.
Using tone colors
In addition to tone number and mark, tone color has been suggested as a visual aid for learning. Although there are no formal standards, there are a number of different color schemes in use, Dummitt's being one of the first.
Third tone exceptions
In spoken Chinese, the third tone is often pronounced as a "half third tone", in which the pitch does not rise. Additionally, when two third tones appear consecutively, such as in (nǐhǎo, hello), the first syllable is pronounced with the second tone — this is called tone sandhi. In pinyin, words like "hello" are still written with two third tones (nǐhǎo).
Orthographic rules
Letters
The Scheme for the Chinese Phonetic Alphabet lists the letters of pinyin, along with their pronunciations, as:
Pinyin differs from other romanizations in several aspects, such as the following:
Syllables starting with u are written as w in place of u (e.g., *uan is written as wan). Standalone u is written as wu.
Syllables starting with i are written as y in place of i (e.g., *ian is written as yan). Standalone i is written as yi.
Syllables starting with ü are written as yu in place of ü (e.g., *üe is written as yue). Standalone ü is written as yu.
ü is written as u when there is no ambiguity (such as ju, qu, and xu) but as ü when there are corresponding u syllables (such as lü and nü). If there are corresponding u syllables, it is often replaced with v on a computer to make it easier to type on a standard keyboard.
When preceded by a consonant, iou, uei, and uen are simplified to iu, ui, and un, which do not represent the actual pronunciation.
As in zhuyin, syllables that are actually pronounced as buo, puo, muo, and fuo are given a separate representation: bo, po, mo, and fo.
The apostrophe (') is used before a syllable starting with a vowel (a, o, or e) in a syllable other than the first of a word, the syllable being most commonly realized as unless it immediately follows a hyphen or other dash. That is done to remove ambiguity that could arise, as in Xi'an, which consists of the two syllables xi () and an (), compared to such words as xian (). (The ambiguity does not occur when tone marks are used since both tone marks in "Xīān" unambiguously show that the word has two syllables. However, even with tone marks, the city is usually spelled with an apostrophe as "Xī'ān".)
Eh alone is written as ê; elsewhere as e. Schwa is always written as e.
Zh, ch, and sh can be abbreviated as ẑ, ĉ, and ŝ (z, c, s with a circumflex). However, the shorthands are rarely used because of the difficulty of entering them on computers and are confined mainly to Esperanto keyboard layouts. Early drafts and some published material used diacritic hooks below instead.
Ng has the uncommon shorthand of ŋ, which was also used in early drafts.
Early drafts also contained the symbol ɥ or the letter ч borrowed from the Cyrillic script, in place of later j for the voiceless alveolo-palatal sibilant affricate.
The letter v is unused, except in spelling foreign languages, languages of minority nationalities, and some dialects, despite a conscious effort to distribute letters more evenly than in Western languages. However, because it is easier to type, v is sometimes used to replace ü on computers. (The Scheme table above maps the letter to bopomofo ㄪ, which typically maps to .)
Most of the above are used to avoid ambiguity when words of more than one syllable are written in pinyin. For example, uenian is written as wenyan because it is not clear which syllables make up uenian; uen-ian, uen-i-an, u-en-i-an, u-e-nian, and u-e-ni-an are all possible combinations, but wenyan is unambiguous since we, nya, etc. do not exist in pinyin. See the pinyin table article for a summary of possible pinyin syllables (not including tones).
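As a rough illustration of the initial-rewriting and contraction rules above, the following Python sketch spells a syllable from an initial and an underlying final. The function is hypothetical and deliberately simplified, so it does not cover every Mandarin syllable:

def spell_syllable(initial, final):
    """Spell one pinyin syllable from an initial (possibly empty) and an
    underlying final, following the rules listed above."""
    if initial == "":
        if final.startswith("i"):
            if len(final) == 1 or final[1] not in "aeou":
                return "y" + final        # yi, yin, ying
            return "y" + final[1:]        # ya, ye, you, yan
        if final.startswith("u"):
            return "wu" if len(final) == 1 else "w" + final[1:]   # wu, wa, wei, wen
        if final.startswith("ü"):
            return "yu" + final[1:]       # yu, yue, yuan, yun
        return final
    if final.startswith("ü") and initial in ("j", "q", "x"):
        final = "u" + final[1:]           # ju, qu, xu: no ambiguity, so no dots
    # After an initial, iou, uei and uen are written iu, ui and un.
    final = {"iou": "iu", "uei": "ui", "uen": "un"}.get(final, final)
    if final == "uo" and initial in ("b", "p", "m", "f"):
        final = "o"                       # buo, puo, muo, fuo -> bo, po, mo, fo
    return initial + final

# spell_syllable("", "uan") -> "wan";  spell_syllable("", "i")   -> "yi"
# spell_syllable("l", "iou") -> "liu"; spell_syllable("j", "ü")  -> "ju"
# spell_syllable("n", "ü")   -> "nü";  spell_syllable("b", "uo") -> "bo"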
Words, capitalization, initialisms and punctuation
Although Chinese characters represent single syllables, Mandarin Chinese is a polysyllabic language. Spacing in pinyin is usually based on words, and not on single syllables. However, there are often ambiguities in partitioning a word.
The Basic Rules of the Chinese Phonetic Alphabet Orthography () were put into effect in 1988 by the National Educational Commission () and the National Language Commission (). These rules became a Guóbiāo recommendation in 1996 and were updated in 2012.
General
Single meaning: Words with a single meaning, which are usually made up of two characters (sometimes one, seldom three), are written together and not capitalized: rén (, person); péngyou (, friend); qiǎokèlì (, chocolate)
Combined meaning (2 or 3 characters): The same applies to words whose components combine into a single meaning: hǎifēng (, sea breeze); wèndá (, question and answer); quánguó (, nationwide); chángyòngcí (, common words)
Combined meaning (4 or more characters): Words of four or more characters that have one meaning are split into their constituent words where possible: wúfèng gāngguǎn (, seamless steel-tube); huánjìng bǎohù guīhuà (, environmental protection planning); gāoměngsuānjiǎ (, potassium permanganate)
Duplicated words
AA: Duplicated characters (AA) are written together: rénrén (, everybody), kànkan (, to have a look), niánnián (, every year)
ABAB: Two characters duplicated (ABAB) are written separately: yánjiū yánjiū (, to study, to research), xuěbái xuěbái (, white as snow)
AABB: Characters in the AABB schema are written together: láiláiwǎngwǎng (, come and go), qiānqiānwànwàn (, numerous)
Prefixes () and Suffixes (): Words accompanied by prefixes such as fù (, vice), zǒng (, chief), fēi (, non-), fǎn (, anti-), chāo (, ultra-), lǎo (, old), ā (, used before names to indicate familiarity), kě (, -able), wú (, -less) and bàn (, semi-) and suffixes such as zi (, noun suffix), r (, diminutive suffix), tou (, noun suffix), xìng (, -ness, -ity), zhě (, -er, -ist), yuán (, person), jiā (, -er, -ist), shǒu (, person skilled in a field), huà (, -ize) and men (, plural marker) are written together: fùbùzhǎng (, vice minister), chéngwùyuán (, conductor), háizimen (, children)
Nouns and names ()
Words of position are separated: mén wài (, outdoor), hé li (, under the river), huǒchē shàngmian (, on the train), Huáng Hé yǐnán (, south of the Yellow River)
Exceptions are words traditionally connected: tiānshang (, in the sky or outer space), dìxia (, on the ground), kōngzhōng (, in the air), hǎiwài (, overseas)
Surnames are separated from the given names, each capitalized: Lǐ Huá (), Zhāng Sān (). If the surname and/or given name consists of two syllables, it should be written as one: Zhūgě Kǒngmíng ().
Titles following the name are separated and are not capitalized: Wáng bùzhǎng (, Minister Wang), Lǐ xiānsheng (, Mr. Li), Tián zhǔrèn (, Director Tian), Zhào tóngzhì (, Comrade Zhao).
The forms of addressing people with prefixes such as Lǎo (), Xiǎo (), Dà () and Ā () are capitalized: Xiǎo Liú (, [young] Ms./Mr. Liu), Dà Lǐ (, [great; elder] Mr. Li), Ā Sān (, Ah San), Lǎo Qián (, [senior] Mr. Qian), Lǎo Wú (, [senior] Mr. Wu)
Exceptions include Kǒngzǐ (, Confucius), Bāogōng (, Judge Bao), Xīshī (, Xishi), Mèngchángjūn (, Lord Mengchang)
Geographical names of China: Běijīng Shì (, city of Beijing), Héběi Shěng (, province of Hebei), Yālù Jiāng (, Yalu River), Tài Shān (, Mount Tai), Dòngtíng Hú (, Dongting Lake), Qióngzhōu Hǎixiá (, Qiongzhou Strait)
Monosyllabic prefixes and suffixes are written together with their related part: Dōngsì Shítiáo (, Dongsi 10th Alley)
Common geographical nouns that have become part of proper nouns are written together: Hēilóngjiāng (, Heilongjiang)
Non-Chinese names are written in Hanyu Pinyin: Āpèi Āwàngjìnměi (, Ngapoi Ngawang Jigme); Dōngjīng (, Tokyo)
Verbs (): Verbs and their suffixes -zhe (), -le () or -guo () are written as one: kànzhe (, seeing), jìnxíngguo (, have been implemented). Le appearing at the end of a sentence is separated, though: Huǒchē dào le. (, The train [has] arrived).
Verbs and their objects are separated: kàn xìn (, read a letter), chī yú (, eat fish), kāi wánxiào (, to be kidding).
If verbs and their complements are each monosyllabic, they are written together; if not, they are separated: gǎohuài (, to make broken), dǎsǐ (, hit to death), huàwéi (, to become), zhěnglǐ hǎo (, to sort out), gǎixiě wéi (, to rewrite as)
Adjectives (): A monosyllabic adjective and its reduplication are written as one: mēngmēngliàng (, dim), liàngtángtáng (, shining bright)
Complements of size or degree such as xiē (), yīxiē (), diǎnr () and yīdiǎnr () are written separately: dà xiē (, a little bigger), kuài yīdiǎnr (, a bit faster)
Pronouns ()
Personal pronouns and interrogative pronouns are separated from other words: Wǒ ài Zhōngguó. (, I love China); Shéi shuō de? (, Who said it?)
The demonstrative pronoun zhè (, this), nà (, that) and the question pronoun nǎ (, which) are separated: zhè rén (, this person), nà cì huìyì (, that meeting), nǎ zhāng bàozhǐ (, which newspaper)
Exception—If zhè, nà or nǎ are followed by diǎnr (), bān (), biān (), shí (), huìr (), lǐ (), me () or the general classifier ge (), they are written together: nàlǐ (, there), zhèbiān (, over here), zhège (, this)
Numerals () and measure words ()
Numbers and words like gè (, each), měi (, each), mǒu (, any), běn (, this), gāi (, that), wǒ (, my, our) and nǐ (, your) are separated from the measure words following them: liǎng gè rén (, two people), gè guó (, every nation), měi nián (, every year), mǒu gōngchǎng (, a certain factory), wǒ xiào (, our school)
Numbers up to 100 are written as single words: sānshísān (, thirty-three). Above that, the hundreds, thousands, etc. are written as separate words: jiǔyì qīwàn èrqiān sānbǎi wǔshíliù (, nine hundred million, seventy-two thousand, three hundred fifty-six). Arabic numerals are kept as Arabic numerals: 635 fēnjī (, extension 635)
According to 6.1.5.4, the dì () used in ordinal numerals is followed by a hyphen: dì-yī (, first), dì-356 (, 356th). The hyphen should not be used if the word in which dì () and the numeral appear does not refer to an ordinal number in the context. For example: Dìwǔ (, a Chinese compound surname). The chū () in front of numbers one to ten is written together with the number: chūshí (, tenth day)
Numbers representing month and day are hyphenated: wǔ-sì (, May fourth), yīèr-jiǔ (, December ninth)
Words of approximations such as duō (), lái () and jǐ () are separated from numerals and measure words: yībǎi duō gè (, around a hundred); shí lái wàn gè (, around a hundred thousand); jǐ jiā rén (, a few families)
Shíjǐ (, more than ten) and jǐshí (, tens) are written together: shíjǐ gè rén (, more than ten people); jǐshí gēn gāngguǎn (, tens of steel pipes)
Approximations with numbers or units that are close together are hyphenated: sān-wǔ tiān (, three to five days), qiān-bǎi cì (, thousands of times)
Other function words () are separated from other words
Adverbs (): hěn hǎo (, very good), zuì kuài (, fastest), fēicháng dà (, extremely big)
Prepositions (): zài qiánmiàn (, in front)
Conjunctions (): nǐ hé wǒ (, you and I/me), Nǐ lái háishi bù lái? (, Are you coming or not?)
"Constructive auxiliaries" () such as de (), zhī () and suǒ (): mànmàn de zou (), go slowly)
A monosyllabic word can also be written together with de (): wǒ de shū / wǒde shū (, my book)
Modal auxiliaries at the end of a sentence: Nǐ zhīdào ma? (, Do you know?), Kuài qù ba! (, Go quickly!)
Exclamations and interjections: À! Zhēn měi! (, Oh, it's so beautiful!)
Onomatopoeia: mó dāo huòhuò (, honing a knife), hōnglōng yī shēng (, rumbling)
Capitalization
The first letter of the first word in a sentence is capitalized: Chūntiān lái le. (, Spring has arrived.)
The first letter of each line in a poem is capitalized.
The first letter of a proper noun is capitalized: Běijīng (, Beijing), Guójì Shūdiàn (, International Bookstore), Guójiā Yǔyán Wénzì Gōngzuò Wěiyuánhuì (, National Language Commission)
On some occasions, proper nouns can be written in all caps: BĚIJĪNG, GUÓJÌ SHŪDIÀN, GUÓJIĀ YǓYÁN WÉNZÌ GŌNGZUÒ WĚIYUÁNHUÌ
If a proper noun is written together with a common noun to make a proper noun, it is capitalized. If not, it is not capitalized: Fójiào (, Buddhism), Tángcháo (, Tang dynasty), jīngjù (, Beijing opera), chuānxiōng (, Szechuan lovage)
Initialisms
Single words are abbreviated by taking the first letter of each character of the word: Běijīng (, Beijing) → BJ
A group of words are abbreviated by taking the first letter of each word in the group: guójiā biāozhǔn (, Guóbiāo standard) → GB
Initials can also be indicated using full stops: Běijīng → B.J., guójiā biāozhǔn → G.B.
When abbreviating names, the surname is written fully (first letter capitalized or in all caps), but only the first letter of each character in the given name is taken, with full stops after each initial: Lǐ Huá () → Lǐ H. or LǏ H., Zhūgě Kǒngmíng () → Zhūgě K. M. or ZHŪGĚ K. M.
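The abbreviation rules can be illustrated with a short Python sketch. The function is hypothetical and assumes the syllable division is already known:

import unicodedata

def first_letter(syllable):
    # Decompose so any tone diacritic is separated, then take the base letter.
    return unicodedata.normalize("NFD", syllable)[0].upper()

def initialism(words):
    """Abbreviate pinyin. `words` is a list of words, each a list of syllables.
    A single word is abbreviated syllable by syllable; a group of words is
    abbreviated word by word."""
    if len(words) == 1:
        return "".join(first_letter(s) for s in words[0])
    return "".join(first_letter(w[0]) for w in words)

# initialism([["Běi", "jīng"]])                  -> "BJ"
# initialism([["guó", "jiā"], ["biāo", "zhǔn"]]) -> "GB"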
Line wrapping
Words can only be split at character (syllable) boundaries: guāngmíng (, bright) → guāng-míng, not gu-āngmíng
Initials cannot be split: Wáng J. G. () → Wáng J. G., not Wáng J.-G.
Apostrophes are removed in line wrapping: Xī'ān (, Xi'an) → Xī-ān, not Xī-'ān
When the original word has a hyphen, the hyphen is added at the beginning of the new line: chēshuǐ-mǎlóng (, heavy traffic: "carriage, water, horse, dragon") → chēshuǐ--mǎlóng
Hyphenation: In addition to the situations mentioned above, there are four situations where hyphens are used.
Coordinate and disjunctive compound words, where the two elements are conjoined or opposed, but retain their individual meaning: gōng-jiàn (, bow and arrow), kuài-màn (, speed: "fast-slow"), shíqī-bā suì (, 17–18 years old), dǎ-mà (, beat and scold), Yīng-Hàn (, English-Chinese [dictionary]), Jīng-Jīn (, Beijing-Tianjin), lù-hǎi-kōngjūn (, army-navy-air force).
Abbreviated compounds (): gōnggòng guānxì (, public relations) → gōng-guān (, PR), chángtú diànhuà (, long-distance calling) → cháng-huà (, LDC). Exceptions are made when the abbreviated term has become established as a word in its own right, as in chūzhōng () for chūjí zhōngxué (, junior high school). Abbreviations of proper-name compounds, however, should always be hyphenated: Běijīng Dàxué (, Peking University) → Běi-Dà (, PKU).
Four-syllable idioms: fēngpíng-làngjìng (, calm and tranquil: "wind calm, waves down"), huījīn-rútǔ (, spend money like water: "throw gold like dirt"), zhǐ-bǐ-mò-yàn (, paper-brush-ink-inkstone [four coordinate words]).
Other idioms are separated according to the words that make up the idiom: bēi hēiguō (, to be made a scapegoat: "to carry a black pot"), zhǐ xǔ zhōuguān fànghuǒ, bù xǔ bǎixìng diǎndēng (, Gods may do what cattle may not: "only the official is allowed to light the fire; the commoners are not allowed to light a lamp")
Punctuation
The Chinese full stop (。) is changed to a western full stop (.)
The hyphen is a half-width hyphen (-)
Ellipsis can be changed from 6 dots (......) to 3 dots (...)
The enumeration comma (、) is changed to a normal comma (,)
All other punctuation marks are the same as the ones used in normal texts
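A minimal Python sketch of these substitutions follows; it is illustrative only, handles just the marks listed above, and leaves everything else unchanged:

def convert_punctuation(text):
    """Replace Chinese punctuation with the Western equivalents listed above."""
    text = text.replace("。", ".")     # Chinese full stop -> Western full stop
    text = text.replace("、", ",")     # enumeration comma -> ordinary comma
    text = text.replace("……", "...")   # six-dot ellipsis may become three dots
    return text

# convert_punctuation("Nǐ hǎo。") -> "Nǐ hǎo."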
Comparison with other orthographies
Pinyin is now used by foreign students learning Chinese as a second language, alongside Bopomofo.
Pinyin assigns some Latin letters sound values which are quite different from those of most languages. This has drawn some criticism as it may lead to confusion when uninformed speakers apply either native or English assumed pronunciations to words. However, this problem is not limited only to pinyin, since many languages that use the Latin alphabet natively also assign different values to the same letters. A recent study on Chinese writing and literacy concluded, "By and large, pinyin represents the Chinese sounds better than the Wade–Giles system, and does so with fewer extra marks."
Because Pinyin is purely a representation of the sounds of Mandarin, it completely lacks the semantic cues and contexts inherent in Chinese characters. Pinyin is also unsuitable for transcribing some Chinese spoken languages other than Mandarin. Those languages have, by contrast, traditionally been written with Han characters, whose unified semanto-phonetic orthography allows written communication that could theoretically be read in any of the various vernaculars of Chinese, whereas a phonetic script would have only localized utility.
Comparison charts
Unicode code points
Based on ISO 7098:2015, Information and Documentation: Chinese Romanization (), tonal marks for pinyin should use the symbols from Combining Diacritical Marks, as opposed to the use of Spacing Modifier Letters in Bopomofo. Lowercase letters with tone marks are included in GB/T 2312 and their uppercase counterparts are included in JIS X 0212; thus Unicode includes all the common accented characters from pinyin.
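For illustration, a short Python check (using only the standard unicodedata module) shows a tone-marked pinyin letter being built from a base letter plus combining diacritical marks and composed into its usual precomposed code point:

import unicodedata

marked = "u" + "\u0308" + "\u0304"        # u + COMBINING DIAERESIS + COMBINING MACRON
composed = unicodedata.normalize("NFC", marked)

print(composed)                           # ǖ (U+01D6), first-tone ü
print(unicodedata.name(composed))         # LATIN SMALL LETTER U WITH DIAERESIS AND MACRON
print([hex(ord(c)) for c in unicodedata.normalize("NFD", composed)])
# ['0x75', '0x308', '0x304'] -- decomposes back into the base letter
# and the two combining diacritical marks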
Due to The Basic Rules of the Chinese Phonetic Alphabet Orthography, all accented letters are required to have both uppercase and lowercase characters as per their normal counterparts.
GBK maps the two characters ‘ḿ’ and ‘ǹ’ to the Private Use Area in Unicode as U+E7C7 () and U+E7C8 () respectively, so some Simplified Chinese fonts (e.g. SimSun) that adhere to GBK include both characters in the Private Use Area, and some input methods (e.g. Sogou Pinyin) also output the Private Use Area code points instead of the original characters. Because the superset GB 18030 changed the mappings of ‘ḿ’ and ‘ǹ’, issues arise when input methods and font files follow different encoding standards, so the input and output of both characters become mixed up.
Other symbols used in pinyin are as follows:
Other punctuation marks and symbols in Chinese use the equivalent English symbols, as noted in GB/T 15834.
In educational usage, to match the handwritten style, some fonts use a different style for the letters a and g, giving them a single-storey appearance. Fonts that follow GB/T 2312 usually draw the accented pinyin characters with a single-storey a but leave the unaccented a double-storey, causing a discrepancy within the font itself. Unicode does not provide an official way to encode single-storey a and g, but since IPA requires the distinction between single-storey and double-storey a and g, the single-storey IPA characters ɑ/ɡ can be used if the two forms need to be separated. For everyday use there is no need to differentiate single-storey and double-storey a/g.
Usage
Pinyin superseded older romanization systems such as Wade–Giles (1859; modified 1892) and postal romanization, and replaced zhuyin as the method of Chinese phonetic instruction in mainland China. The ISO adopted pinyin as the standard romanization for modern Chinese in 1982 (ISO 7098:1982, superseded by ISO 7098:2015). The United Nations followed suit in 1986. It has also been accepted by the government of Singapore, the United States's Library of Congress, the American Library Association, and many other international institutions.
The spelling of Chinese geographical or personal names in pinyin has become the most common way to transcribe them in English. Pinyin has also become the dominant method for entering Chinese text into computers in Mainland China, in contrast to Taiwan, where Bopomofo is most commonly used.
Families outside of Taiwan who speak Mandarin as a mother tongue use pinyin to help children associate characters with spoken words which they already know. Chinese families outside of Taiwan who speak some other language as their mother tongue use the system to teach children Mandarin pronunciation when they learn vocabulary in elementary school.
Since 1958, pinyin has been actively used in adult education as well, making it easier for formerly illiterate people to continue with self-study after a short period of pinyin literacy instruction.
Pinyin has become a tool for many foreigners to learn Mandarin pronunciation, and is used to explain both the grammar and spoken Mandarin coupled with Chinese characters (). Books containing both Chinese characters and pinyin are often used by foreign learners of Chinese. Pinyin's role in teaching pronunciation to foreigners and children is similar in some respects to furigana-based books (with hiragana letters written above or next to kanji, directly analogous to zhuyin) in Japanese or fully vocalised texts in Arabic ("vocalised Arabic").
The tone-marking diacritics are commonly omitted in popular news stories and even in scholarly works. This results in some degree of ambiguity as to which words are being represented.
Computer input systems
Simple computer systems, able to display only 7-bit ASCII text (essentially the 26 Latin letters, 10 digits, and punctuation marks), long provided a convincing argument for using unaccented pinyin instead of Chinese characters. Today, however, most computer systems are able to display characters from Chinese and many other writing systems as well, and have them entered with a Latin keyboard using an input method editor. Alternatively, some PDAs, tablet computers, and digitizing tablets allow users to input characters graphically by writing with a stylus, with concurrent online handwriting recognition.
Pinyin with accents can be entered with the use of special keyboard layouts or various character map utilities. X keyboard extension includes a "Hanyu Pinyin (altgr)" layout for AltGr-triggered dead key input of accented characters.
In Taiwan
Taiwan (Republic of China) adopted Tongyong Pinyin, a modification of Hanyu Pinyin, as the official romanization system on the national level between October 2002 and January 2009, when it decided to promote Hanyu Pinyin. Tongyong Pinyin ("common phonetic"), a romanization system developed in Taiwan, was designed to romanize languages and dialects spoken on the island in addition to Mandarin Chinese. The Kuomintang (KMT) party resisted its adoption, preferring the Hanyu Pinyin system used in mainland China and in general use internationally. Romanization preferences quickly became associated with issues of national identity. Preferences split along party lines: the KMT and its affiliated parties in the pan-blue coalition supported the use of Hanyu Pinyin while the Democratic Progressive Party and its affiliated parties in the pan-green coalition favored the use of Tongyong Pinyin.
Tongyong Pinyin was made the official system in an administrative order that allowed its adoption by local governments to be voluntary. Locales in Kaohsiung, Tainan and other areas use romanizations derived from Tongyong Pinyin for some district and street names. A few localities with governments controlled by the KMT, most notably Taipei, Hsinchu, and Kinmen County, overrode the order and converted to Hanyu Pinyin before the January 1, 2009 national-level decision, though with a slightly different capitalization convention than mainland China. Most areas of Taiwan adopted Tongyong Pinyin, consistent with the national policy. Today, many street signs in Taiwan are using Tongyong Pinyin-derived romanizations, but some, especially in northern Taiwan, display Hanyu Pinyin-derived romanizations. It is not unusual to see spellings on street signs and buildings derived from the older Wade–Giles, MPS2 and other systems.
Attempts to make pinyin standard in Taiwan have had uneven success, with most place and proper names remaining unaffected, including all major cities. Personal names on Taiwanese passports honor the choices of Taiwanese citizens, who can choose Wade-Giles, Hakka, Hoklo, Tongyong, aboriginal, or pinyin. Official pinyin use is controversial, as when pinyin use for a metro line in 2017 provoked protests, despite government responses that “The romanization used on road signs and at transportation stations is intended for foreigners... Every foreigner learning Mandarin learns Hanyu pinyin, because it is the international standard...The decision has nothing to do with the nation’s self-determination or any ideologies, because the key point is to ensure that foreigners can read signs.”
In Singapore
Singapore implemented Hanyu Pinyin as the official romanization system for Mandarin in the public sector starting in the 1980s, in conjunction with the Speak Mandarin Campaign. Hanyu Pinyin is also used as the romanization system to teach Mandarin Chinese at schools. While the process of Pinyinisation has been mostly successful in government communication, placenames, and businesses established in the 1980s and onward, it continues to be unpopular in some areas, most notably for personal names and vocabulary borrowed from other varieties of Chinese already established in the local vernacular. In these situations, romanization continues to be based on the Chinese language variety it originated from, especially the three largest Chinese varieties traditionally spoken in Singapore (Hokkien, Teochew, and Cantonese).
For other languages
Pinyin-like systems have been devised for other variants of Chinese. Guangdong Romanization is a set of romanizations devised by the government of Guangdong province for Cantonese, Teochew, Hakka (Moiyen dialect), and Hainanese. All of these are designed to use Latin letters in a similar way to pinyin.
In addition, in accordance with the Regulation of Phonetic Transcription in Hanyu Pinyin Letters of Place Names in Minority Nationality Languages () promulgated in 1976, place names in non-Han languages like Mongolian, Uyghur, and Tibetan are also officially transcribed using pinyin in a system adopted by the State Administration of Surveying and Mapping and Geographical Names Committee known as SASM/GNC romanization. The pinyin letters (26 Roman letters, plus ü and ê) are used to approximate the non-Han language in question as closely as possible. This results in spellings that are different from both the customary spelling of the place name and the pinyin spelling of the name in Chinese.
Tongyong Pinyin was developed in Taiwan for use in rendering not only Mandarin Chinese, but other languages and dialects spoken on the island such as Taiwanese, Hakka, and aboriginal languages.
See also
Combining character
Cyrillization of Chinese
Pinyin input method
Romanization of Japanese
Tibetan pinyin
Transcription into Chinese characters
Comparison of Chinese transcription systems
Notes
References
Further reading
External links
Scheme for the Chinese Phonetic Alphabet—The original 1958 Scheme, apparently scanned from a reprinted copy in Xinhua Zidian. PDF version from the Chinese Ministry of Education.
Basic rules of the Chinese phonetic alphabet orthography—The official standard GB/T 16159–2012 in Chinese. PDF version from the Chinese Ministry of Education.
HTML version
Chinese phonetic alphabet spelling rules for Chinese names—The official standard GB/T 28039–2011 in Chinese. PDF version from the Chinese Ministry of Education
HTML version
Pinyin-Guide.com Pronunciation and FAQs related to Pinyin
Pinyin Tone Tool (archive) Online editor to create Pinyin with tones
Writing systems introduced in 1958
Chinese language
Chinese words and phrases
ISO standards
Mandarin words and phrases
Phonetic alphabets
Phonetic guides
Romanization of Chinese
Ruby characters
Media cross-ownership in the United States
Media cross-ownership is the common ownership of multiple media sources by a single person or corporate entity. Media sources include radio, broadcast television, specialty and pay television, cable, satellite, Internet Protocol television (IPTV), newspapers, magazines and periodicals, music, film, book publishing, video games, search engines, social media, internet service providers, and wired and wireless telecommunications.
Much of the debate over concentration of media ownership in the United States has for many years focused specifically on the ownership of broadcast stations, cable stations, newspapers, and websites. Some have pointed to an increase in media merging and concentration of ownership, which may correlate with decreased trust in 'mass' media.
Ownership of American media
Over time, both the number of media outlets and concentration of ownership have increased, translating to fewer companies owning more media outlets.
Digital
Also known as "Big Tech," a collection of five major digital media companies are also noted for their strong influence over their respective industries:
Alphabet Owns search engine Google, video sharing site YouTube, proprietary rights to the open-source Android operating system, blog hosting site Blogger, Gmail e-mail service, and numerous other online media and software outlets.
Amazon Owns the Amazon.com e-commerce marketplace, cloud computing platform AWS, video streaming service Amazon Prime Video, music streaming service Amazon Music, and video live streaming service Twitch.
Apple Produces iPhone, iPad, Mac, Apple Watch and Apple TV products, the iOS, iPadOS, macOS, watchOS, and tvOS operating systems, music streaming service Apple Music, video streaming service Apple TV+, news aggregator Apple News, and gaming platform Apple Arcade.
Meta Owns social networks Facebook and Instagram, messaging services Facebook Messenger and WhatsApp, and virtual reality platform Oculus VR.
Microsoft Owns business-oriented social network LinkedIn, web portal MSN, search engine Bing, cloud computing platform Microsoft Azure, Xbox gaming consoles and related services, Office productivity suite, Outlook.com e-mail service, Skype video chat service, and Windows operating system. See: List of mergers and acquisitions by Microsoft.
Video
The Walt Disney Company Owns the ABC television network, cable networks ESPN, Disney Channel, Disney XD, Freeform, FX, FXX, FX Movie Channel, National Geographic, Nat Geo Wild, History, A&E and Lifetime, Disney Mobile, Disney Music Group, Disney Publishing Worldwide, production companies Walt Disney Pictures, Pixar Animation Studios, Lucasfilm, Marvel Entertainment, 20th Century Studios, Searchlight Pictures, ABC Audio (including three AM radio stations), Disney Consumer Products, and Disney Parks theme parks in several countries. See: List of assets owned by The Walt Disney Company.
Netflix Owns the largest subscription over-the-top video service in the United States; it also owns many of the films and television series released on the service. Netflix also owns DVD.com, a mail-order video rental service. Netflix has close ties to Roku, Inc., which it spun off in 2008 to avoid self-dealing accusations while maintaining a substantial investment; Roku's operating system is used on a large proportion of smart televisions and set-top boxes.
WarnerMedia Owns HBO, CNN, The CW (a joint venture with CBS), Cinemax, Cartoon Network, Adult Swim, HLN, NBA TV, TBS, TNT, TruTV, Turner Classic Movies, Otter Media (owns Fullscreen and Rooster Teeth), Warner Bros. Pictures, Castle Rock, DC Comics, Warner Bros. Interactive Entertainment, and New Line Cinema. WarnerMedia is a subsidiary of AT&T. See: List of WarnerMedia subsidiaries.
NBCUniversal Owns NBC, Telemundo, Universal Pictures, Illumination, Focus Features, DreamWorks Animation, 26 television stations in the United States and cable networks USA Network, Bravo, CNBC, MSNBC, Syfy, NBCSN, Golf Channel, E!, and NBC Sports Regional Networks. NBCUniversal is a subsidiary of Comcast, in turn controlled by the family of Ralph J. Roberts (with Ralph's son Brian L. Roberts being the largest shareholder). See: List of assets owned by NBCUniversal.
Paramount Global Owns the CBS television network and the CW (a joint venture with WarnerMedia), cable networks CBS Sports Network, Showtime, Pop; 30 television stations; CBS Studios; MTV, Nickelodeon/Nick at Nite, TV Land, VH1, BET, CMT, Comedy Central, Logo TV, Paramount Network, Paramount Pictures, and Paramount Home Entertainment. The Redstone family, through National Amusements, holds a controlling stake in Paramount Global. See: List of assets owned by Paramount Global.
Fox Corporation Owns Fox Broadcasting Company, Fox News Group (Fox News Channel, Fox Business Network, Fox Weather, Fox News Radio, Fox News Talk, Fox Nation), Fox Sports (FS1, FS2, Fox Deportes, Big Ten Network (51%), Fox Sports Radio), Fox Television Stations, Bento Box Entertainment, and Tubi. Australian-American media magnate Rupert Murdoch and his family are the major stakeholders in Fox.
Sony Pictures Entertainment Owns Sony Pictures Entertainment Motion Picture Group (including Columbia Pictures, TriStar Pictures, and Sony Pictures Animation) and Sony Pictures Television. Sony Pictures Entertainment is a subsidiary of Sony, a Japanese conglomerate.
Discovery, Inc. Owns a number of major U.S. cable networks dedicated primarily to factual, non-fiction programming, including Discovery Channel, TLC, Animal Planet, HGTV, Food Network, DIY Network, Cooking Channel, Travel Channel, and ID. The company also owns a variety of spin-off networks, including Science and Velocity, and has a major presence in Europe with localized versions of its U.S. brands, as well as the pan-European sports service Eurosport. Advance Publications, a company controlled by the heirs of S. I. Newhouse, owns a substantial minority stake in the company.
MGM Holdings Owns Metro-Goldwyn-Mayer, Orion Pictures, MGM Television, cable channel MGM HD, premium cable channel and direct-to-consumer streaming service Epix, and extensive film and television content libraries. Privately owned by a group of creditors following MGM's emergence from bankruptcy in 2010. On May 26, 2021, Amazon announced their intent to acquire MGM Holdings for $8.45 billion, with the studio and its units/assets continuing operations under the new parent company.
Lionsgate Owns Lionsgate Films, Lionsgate Television, Lionsgate Interactive, and a variety of subsidiaries such as Summit Entertainment, Debmar-Mercury, and Starz Inc.
AMC Networks Owns cable networks AMC, IFC, SundanceTV, WeTV, and 49.9% of BBC America. Owns film studios IFC Films and RLJE Films, and streaming services AMC+, Shudder, Sundance Now, Allblk, and Acorn TV, and a minority stake in BritBox. James Dolan and his family have 67% voting power over the company.
Print
Due to cross-ownership restrictions in place for much of the 20th century limiting broadcasting and print assets, as well as difficulties in establishing synergy between the two media, print companies largely stay within the print medium.
The New York Times Company (Carlos Slim) In addition to The New York Times, the company also owns The New York Times Magazine, T: The New York Times Style Magazine, The New York Times Book Review, The New York Times International Edition, Wirecutter, Audm, and Serial Productions
Nash Holdings (Jeff Bezos) Owns The Washington Post, whose subsidiaries include content management system provider Arc Publishing and media monetization platform Zeus Technology.
News Corp Owns Dow Jones & Company (Wall Street Journal, Barron's, Investor's Business Daily, and MarketWatch), the New York Post, and book publisher HarperCollins. See: List of assets owned by News Corp. Both News Corp and Fox Corporation are controlled by the family of Rupert Murdoch.
Bloomberg L.P. Owns Bloomberg News (Bloomberg Businessweek, Bloomberg Markets, Bloomberg Television, and Bloomberg Radio) and produces the Bloomberg Terminal which is used by financial professionals to access market data and news. Bloomberg is owned and named after Michael Bloomberg.
Advance Publications Owns magazine publisher Condé Nast (The New Yorker, Vogue, Bon Appétit, Architectural Digest, Condé Nast Traveler, Vanity Fair, Wired, GQ, and Allure), American City Business Journals, and a chain of local newspapers and regional news websites. The company also holds stakes in cable television provider Charter (which operates the Spectrum News and Spectrum Sports regional cable channels), and Discovery, Inc. (see above).
Hearst Communications Owns a wide variety of newspapers and magazines including the San Francisco Chronicle, the Houston Chronicle, Cosmopolitan, Esquire, and King Features Syndicate (print syndicator). See: List of assets owned by Hearst Communications.
Gannett Owns the national newspaper USA Today. Its largest non-national newspaper is The Arizona Republic in Phoenix, Arizona. Other significant newspapers include The Indianapolis Star, The Cincinnati Enquirer, The Tennessean in Nashville, Tennessee, The Courier-Journal in Louisville, Kentucky, the Democrat and Chronicle in Rochester, New York, The Des Moines Register, the Detroit Free Press and The News-Press in Fort Myers. The company also previously held several television stations, which are now the autonomous company Tegna Inc., and syndication company Multimedia Entertainment (the assets of which are now owned by Comcast). In November 2019, GateHouse Media merged with Gannett, creating the largest newspaper publisher in the United States, which adopted the Gannett name. See: List of assets owned by Gannett.
Tribune Publishing Second-largest owner of newspapers in the United States by total number of subscribers, which owns the Chicago Tribune, the New York Daily News, the Denver Post, The Mercury News, among other daily and weekly newspapers. Tribune Publishing is controlled by Alden Global Capital.
Record labels
Universal Music Group Largest of the "Big Three" record labels. The company is majority-owned by public shareholders, with Tencent and Vivendi holding minority stakes.
Sony Music Group Second-largest of the "Big Three" record labels. The company is owned by Sony.
Warner Music Group Third-largest of the "Big Three" record labels. The company is majority-owned by Len Blavatnik's Access Industries, with Tencent owning a minority stake.
Radio
Sirius XM Radio Owns a monopoly on American satellite radio, as well as Pandora Radio, a prominent advertising-supported Internet radio platform.
iHeartMedia Owns 858 radio stations, the radio streaming platform iHeartRadio, Premiere Networks (which in turn owns The Rush Limbaugh Show, The Sean Hannity Show, The Glenn Beck Program, Coast to Coast AM, American Top 40, Delilah, and Fox Sports Radio, all being among the top national radio programs in their category), and previously held a stake in Live Nation and Sirius XM Radio as well as several television stations (later under the management of Newport Television, and now owned by separate companies). Also owns record chart company Mediabase.
Audacy Owns 235 radio stations across 48 media markets and internet radio platform Audacy.
Cumulus Media Owns 429 radio stations, including the former assets of Westwood One (which includes Transtar Radio Networks and Mutual Broadcasting System), Jones Radio Networks, Waitt Radio Networks, Satellite Music Network (all of the major satellite music radio services intended for relay through terrestrial stations), most of ABC's radio network offerings and stations, most of Watermark Inc. (except the American Top 40 franchise), a significant number of radio stations ranging from small to large markets, and distribution rights to CBS Radio News and National Football League radio broadcasts.
Townsquare Media Owns 321 radio stations in 67 markets, including the assets of Regent Communications, Gap Broadcasting, and Double O Radio.
Local television
E. W. Scripps Company Owns hundreds of television stations and networks Ion Television, Laff, Court TV, Court TV Mystery, Grit, Bounce TV, and Newsy TV. Digital assets include United Media, Cracked.com, and Stitcher. Scripps previously held assets in radio, newspapers and cable television channels but has since divested those assets.
Gray Television Owns television stations in 113 markets, including the assets of Hoak Media, Meredith Corporation, Quincy Media, Raycom Media, and Schurz Communications. Also co-manages the digital network Circle with the Grand Ole Opry. See: List of stations owned or operated by Gray Television.
Hearst Television Owns 29 local television stations. It is the third-largest group owner of ABC-affiliated stations and the second-largest group owner of NBC affiliates. Parent company Hearst Communications owns 50% of broadcasting firm A&E Networks, and 20% of the sports broadcaster ESPN—the last two both co-owned with The Walt Disney Company. See: List of assets owned by Hearst Communications.
Nexstar Media Group The largest television station owner in the United States, owning 197 television stations across the U.S., most of which are affiliated with the four "major" U.S. television networks. It also owns NewsNation (formerly WGN America) and Antenna TV. See: List of stations owned or operated by Nexstar Media Group.
Sinclair Broadcast Group It owns or operates a large number of television stations across the country that are affiliated with all six major television networks, including stations formerly owned by Allbritton Communications, Barrington Broadcasting, Fisher Communications, Newport Television (and predecessor Clear Channel) and Bally Sports. Other assets include wrestling promotion Ring of Honor, Tennis Channel, sports network Stadium, digital networks Comet, Charge! and TBD, and over-the-top video service Stirr. See: List of stations owned or operated by Sinclair Broadcast Group.
Tegna Inc. Owns or operates 66 television stations in 54 markets, and holds properties in digital media. Comprises the broadcast television and digital media divisions of the old Gannett Company.
History of FCC regulations
The First Amendment to the United States Constitution included a provision that protected "freedom of the press" from Congressional action. For newspapers and other print items, in which the medium itself was practically infinite and publishers could produce as many publications as they wanted without interfering with any other publisher's ability to do the same, this was not a problem.
The debut of radio broadcasting in the first part of the 20th century complicated matters; the radio spectrum is finite, and only a limited number of broadcasters could use the medium at the same time. The United States government opted to declare the entire broadcast spectrum to be government property and license the rights to use the spectrum to broadcasters. After several years of experimental broadcast licensing, the United States licensed its first commercial radio station, KDKA, in 1920.
Prior to 1927, public airwaves in the United States were regulated by the United States Department of Commerce and largely litigated in the courts as the growing number of stations fought for space in the burgeoning industry. In the earliest days, radio stations were typically required to share the same standard frequency (833 kHz) and were not allowed to broadcast an entire day, instead having to sign on and off at designated times to allow competing stations to use the frequency.
The Federal Radio Act of 1927 (signed into law February 23, 1927) nationalized the airwaves and formed the Federal Radio Commission, the forerunner of the modern Federal Communications Commission (FCC) to assume control of the airwaves. One of the first moves of the FRC was General Order 40, the first U.S. bandplan, which allocated permanent frequencies for most U.S. stations and eliminated most of the part-time broadcasters.
Communications Act of 1934
The Communications Act of 1934 was the stepping stone for all of the communications rules that are in place today. When first enacted, it created the FCC (Federal Communications Commission). The FCC was created to regulate the telephone monopolies, but also to regulate licensing of the spectrum used for broadcasting. The FCC was given authority by Congress to give out licenses to companies to use the broadcasting spectrum. However, it had to determine whether the license would serve "the public interest, convenience, and necessity". The primary goal for the FCC, from the start, has been to serve the "public interest". A debated concept, the term "public interest" was provided with a general definition by the Federal Radio Commission. The Commission determined, in its 1928 annual report, that "the emphasis must be first and foremost on the interest, the convenience, and the necessity of the listening public, and not on the interest, convenience, or necessity of the individual broadcaster or the advertiser." Following this reasoning, early FCC regulations reflected the presumption that "it would not be in the public's interest for a single entity to hold more than one broadcast license in the same community. The view was that the public would benefit from a diverse array of owners because it would lead to a diverse array of program and service viewpoints."
The Communications Act of 1934 refined and expanded on the authority of the FCC to regulate public airwaves in the United States, combining and reorganizing provisions from the Federal Radio Act of 1927 and the Mann-Elkins Act of 1910. It empowered the FCC, among other things, to administer broadcasting licenses, impose penalties and regulate standards and equipment used on the airwaves. The Act also mandated that the FCC would act in the interest of the "public convenience, interest, or necessity." The Act established a system whereby the FCC grants licenses to the spectrum to broadcasters for commercial use, so long as the broadcasters act in the public interest by providing news programming.
Lobbyists from the largest radio broadcasters, ABC and NBC, wanted to establish high fees for broadcasting licenses, but Congress saw this as a limitation upon free speech. Consequently, "the franchise to operate a broadcasting station, often worth millions, is awarded free of charge to enterprises selected under the standard of 'public interest, convenience, or necessity.'"
Nevertheless, radio and television were dominated by the Big Three television networks until the mid-1990s, when the Fox network, UPN, and The WB started to challenge that hegemony.
Cross ownership rules of 1975
In 1975, the FCC passed the newspaper and broadcast cross-ownership rule. This ban prohibited the ownership of a daily newspaper and any "full-power broadcast station that serviced the same community". This rule emphasized the need to ensure that a broad number of voices were given the opportunity to communicate via different outlets in each market. Newspapers, explicitly prohibited from federal regulation because of the guarantee of freedom of the press in the First Amendment to the United States Constitution, were out of the FCC's jurisdiction, but the FCC could use the ownership of a newspaper as a preclusion against owning radio or television licenses, which the FCC could and did regulate.
The FCC designed rules to make sure that there is a diversity of voices and opinions on the airwaves. "Beginning in 1975, FCC rules banned cross-ownership by a single entity of a daily newspaper and television or radio broadcast station operating in the same local market." The ruling was put in place to limit media concentration in TV and radio markets, because they use public airwaves, which is a valuable, and now limited, resource.
Telecommunications Act 1996
The Telecommunications Act of 1996 was an influential act for media cross-ownership. One of the requirements of the act was that the FCC must conduct a biennial review of its media ownership rules "and shall determine whether any of such rules are necessary in the public interest as the result of competition." The Commission was ordered to "repeal or modify any regulation it determines to be no longer in the public interest."
The legislation, touted as a step that would foster competition, actually resulted in the subsequent mergers of several large companies, a trend which still continues. Over 4,000 radio stations were bought out, and minority ownership of TV stations dropped to its lowest point since the federal government began tracking such data in 1990.
Since the Telecommunications Act of 1996, restrictions on media merging have decreased. Although merging media companies seems to provide many positive outcomes for the companies involved in the merger, it can lead to negative outcomes for other companies, viewers, and future businesses. The FCC itself found that there were indeed negative effects of recent mergers in a study it issued.
Since 21st century
In September 2002, the FCC issued a Notice of Proposed Rulemaking stating that the Commission would re-evaluate its media ownership rules pursuant to the obligation specified in the Telecommunications Act of 1996. In June 2003, after its deliberations, which included a single public hearing and the review of nearly two million pieces of correspondence from the public opposing further relaxation of the ownership rules, the FCC voted 3-2 to repeal the newspaper/broadcast cross-ownership ban and to make changes to or repeal several of its other ownership rules as well. In the order, the FCC noted that the newspaper/broadcast cross-ownership rule was no longer necessary in the public interest to maintain competition, diversity or localism. However, in 2007 the FCC revised its rules and ruled that it would take it "case-by-case and determine if the cross-ownership would affect the public interest. The rule changes permitted a company to own a newspaper and broadcast station in any of the nation's top 20 media markets as long as there are at least eight media outlets in the market. If the combination included a television station, that station couldn't be in the market's top four. As it has since 2003, Prometheus Radio Project argued that the relaxed rule would pave the way for more media consolidation. Broadcasters, pointing to the increasing competition from new platforms, argued that the FCC's rules—including other ownership regulations that govern TV duopolies and radio ownership—should be relaxed even further. The FCC, meanwhile, defended its right to change the rules either way." That public interest is what the FCC bases its judgments on: whether a media cross-ownership would be a positive and contributive force, locally and nationally.
The FCC held one official forum, February 27, 2003, in Richmond, Virginia, in response to public pressure to allow for more input on the issue of the elimination of media ownership limits. Some complained that more than one forum was needed.
In 2003 the FCC set out to re-evaluate its media ownership rules specified in the Telecommunications Act of 1996.
On June 2, 2003, FCC, in a 3-2 vote under Chairman Michael Powell, approved new media ownership laws that removed many of the restrictions previously imposed to limit ownership of media within a local area. The changes were not, as is customarily done, made available to the public for a comment period.
Single-company ownership of media in a given market is now permitted up to 45% (formerly 35%, up from 25% in 1985) of that market.
Restrictions on newspaper and TV station ownership in the same market were removed.
All TV channels, magazines, newspapers, cable, and Internet services are now counted, weighted based on people's average tendency to find news on that medium. At the same time, whether a channel actually contains news is no longer considered in counting the percentage of a medium owned by one owner.
Previous requirements for periodic review of license have been changed. Licenses are no longer reviewed for "public-interest" considerations.
The decision by the FCC was overturned by the United States Court of Appeals for the Third Circuit in Prometheus Radio Project v. FCC in June 2004. The majority ruled 2-1 against the FCC and ordered the Commission to reconfigure how it justified raising ownership limits. The Supreme Court later turned down an appeal, so the ruling stands.
In June 2006, the FCC adopted a Further Notice of Proposed Rulemaking (FNPR) to address the issues raised by the United States Court of Appeals for the Third Circuit and also to perform the recurring evaluation of the media ownership rules required by the Telecommunications Act. The deliberations would draw upon three formal sources of input: (1) the submission of comments, (2) ten commissioned studies, and (3) six public hearings.
The FCC in 2007 voted to modestly relax its existing ban on newspaper/broadcast cross-ownership.
The FCC voted December 18, 2007 to eliminate some media ownership rules, including a statute that forbids a single company to own both a newspaper and a television or radio station in the same city. FCC Chairman Kevin Martin circulated the plan in October 2007. Martin's justification for the rule change is to ensure the viability of America's newspapers and to address issues raised in the 2003 FCC decision that was later struck down by the courts. The FCC held six hearings around the country to receive public input from individuals, broadcasters and corporations. Because of the lack of discussion during the 2003 proceedings, increased attention has been paid to ensuring that the FCC engages in proper dialogue with the public regarding its current rules change. FCC Commissioners Deborah Taylor-Tate and Robert McDowell joined Chairman Martin in voting in favor of the rule change. Commissioners Michael Copps and Jonathan Adelstein, both Democrats, opposed the change.
UHF discount
Beginning in 1985, the FCC implemented a rule stating that television stations broadcasting on UHF channels would be "discounted" by half when calculating a broadcaster's total reach, under the market share cap of 39% of U.S. TV households. This rule was implemented because the UHF band was generally considered inferior to VHF for broadcasting analog television. The notion became obsolete since the completion of the transition from analog to digital television in 2009; the majority of television stations now broadcast on the UHF band because, by contrast, it is generally considered superior for digital transmission.
The FCC voted to deprecate the rule in September 2016; the Commission argued that the UHF discount had become technologically obsolete, and that it was now being used as a loophole by broadcasters to contravene its market share rules and increase their market share through consolidation. The existing portfolios of broadcasters who now exceeded the cap due to the change were grandfathered, including the holdings of Ion Media Networks, Tribune Media, and Univision.
However, on April 21, 2017, under new Trump administration FCC chairman Ajit Pai, the discount was reinstated in a 2-1 vote, led by Pai and commissioner Michael O'Rielly. The move, along with a plan to evaluate increasing the national ownership cap, is expected to trigger a wider wave of consolidation in broadcast television. A challenge to the rule's restoration was filed on May 15 by The Institute for Public Representation (a coalition of public interest groups comprising Free Press, the United Church of Christ, Media Mobilizing Project, the Prometheus Radio Project, the National Hispanic Media Coalition and Common Cause), which requested an emergency motion to stay the UHF discount order – delaying its June 5 re-implementation – pending a court challenge to the rule. The groups re-affirmed that the rule was technologically obsolete, and was restored for the purpose of allowing media consolidation. The FCC rejected the claims, stating that the discount would only allow for a regulatory review of any station group acquisitions, and that the Institute for Public Representation's criteria for the stay fell short of meeting adequate determination in favor of it by the court; it also claimed that the discount was "inextricably linked" to the agency's media ownership rules, a review of which it initiated in May of that year.
The challenge and subsequent stay motion was partly filed as a reaction to Sinclair Broadcast Group's proposed acquisition of Tribune Media (announced on May 8), which – with the more than 230 stations that the combined company would have, depending on any divestitures in certain markets where both groups own stations – would expand the group's national reach to 78% of all U.S. households with at least one television set with the discount. On June 1, 2017, the District of Columbia Court of Appeals issued a seven-day administrative stay to the UHF discount rulemaking to review the emergency stay motion. The D.C. Court of Appeals denied the emergency stay motion in a one-page memorandum on June 15, 2017, however, the merits of restoring the discount is still subject to a court appeal proceeding scheduled to occur at a later date.
Following this, in November 2017, the FCC voted 3-2 along partisan lines to eliminate the cross-ownership ban against owning multiple media outlets in the same local market, as well as increasing the number of television stations that one entity may own in a local market. Pai argued the removal of the ban was necessary for local media to compete with online information sources like Google and Facebook. The decision was appealed by advocacy groups, and in September 2019, the Third Circuit struck down the rule change in a 2-1 decision, with the majority opinion stating the FCC "did not adequately consider the effect its sweeping rule changes will have on ownership of broadcast media by women and racial minorities." Pai stated plans to appeal this ruling. The FCC petitioned to the Supreme Court under FCC v. Prometheus Radio Project. The Supreme Court ruled unanimously in April 2021 to reverse the Third Circuit's ruling, stating that the FCC's rule changes did not violate the Administrative Procedure Act, and that there was no Congressional mandate for the FCC to consider the impact on minority ownership of its rulemaking, thus allowing the FCC to proceed with relaxation of media cross-ownership rules.
Local content
A 2008 study found that news stations operated by small media companies produced more local news and more locally produced video than those operated by large chain-based broadcasting groups. The FCC had claimed, in 2003, that larger media groups produced better-quality local content. Research by Philip Napoli and Michael Yan, however, showed that larger media groups actually produced less local content. In a different study, they also showed that "ownership by one of the big four broadcast networks has been linked to a considerable decrease in the amount of televised local public affairs programming".
The FCC's main rationale for deregulation was that, with more capital, broadcasting organizations could produce more and better local content. However, the research by Napoli and Yan showed that once consolidated, groups produced less local content. Cross-ownership between broadcasting and newspapers is a complicated issue: the FCC believes that more deregulation is necessary, yet research shows that consolidated owners produce less local content, which means fewer voices from within the communities are heard. As fewer local voices are heard, more nationally based voices appear in their place. Chain-based companies use convergence, producing the same content across multiple mediums, to deliver this mass-produced content; it is cheaper and more efficient than running separate local and national news operations. However, with convergence and chain-based ownership, the owner can choose which stories to run and how those stories are presented, both in local communities and on the national stage.
Media consolidation debate
Robert W. McChesney
Robert McChesney is an advocate for media reform and the co-founder of Free Press, which was established in 2003. His work is based on theoretical, normative, and empirical evidence suggesting that media regulation efforts should be more strongly oriented towards maintaining a healthy balance of diverse viewpoints in the media environment. Of current regulation, however, he writes that "there is every bit as much regulation by government as before, only now it is more explicitly directed to serve large corporate interests."
McChesney believes that the Free Press' objective is a more diverse and competitive commercial system with a significant nonprofit and noncommercial sector. It would be a system built for citizens and, most importantly, accessible to anyone who wants to broadcast, not only the big corporations that can afford to broadcast nationally. McChesney suggests that to improve the current system we need to "establish a bona fide noncommercial public radio and television system, with local and national stations and networks. The expense should come out of the general budget".
Benjamin Compaine
Benjamin Compaine believes that the current media system is "one of the most competitive major industries in U.S. commerce." He believes that much of the media in the United States operates in the same market, and that content is interchanged between the different media.
Compaine believes that convergence, the coming together of two or more media, has saved the media. Because the same message can easily be sent across multiple different mediums, it is more likely to be heard. He also believes that, with greater capital and funding, media outlets are able to stay competitive by using newer media to reach more listeners and readers.
Compaine's main argument is that the consolidation of media outlets across multiple ownerships has allowed for better-quality content. He also argues that news is interchangeable, which makes the media market less concentrated than previously thought: since the same story is pushed across multiple platforms, it can only be counted as one news story from multiple sources. Compaine also believes the news is more readily available, making it far easier for individuals to access than through traditional methods.
American public distrust in the media
A 2012 Gallup poll found that Americans' distrust in the mass media had hit a new high, with 60% saying they had little or no trust in the mass media to report the news fully, accurately, and fairly. Distrust had increased since the previous few years, when Americans were already more negative about the media than they had been in the years before 2004.
Music industry
Critics of media consolidation in broadcast radio say it has made the music played more homogeneous, and makes it more difficult for acts to gain local popularity. They also believe it has reduced the demographic diversity of popular music, pointing to a study which found representation of women in country music charts at 11.3% from 2000 to 2018.
Critics cite centralized control as having increased artist self-censorship, and several incidents of artists being banned from a large number of broadcast stations all at once. After the controversy caused by criticism of President George W. Bush and the Iraq War by a member of the Dixie Chicks, the band was banned by Cumulus Media and Clear Channel Communications, which also organized pro-war demonstrations. After the Super Bowl XXXVIII wardrobe malfunction, CBS CEO Les Moonves reportedly banned Janet Jackson from all CBS and Viacom properties, including MTV, VH1, the 46th Annual Grammy Awards, and Infinity Broadcasting Corporation radio stations, impacting sales of her album Damita Jo.
News
Critics point out that media consolidation has allowed Sinclair Broadcast Group to require hundreds of local stations to run editorials by Boris Epshteyn (an advisor to Donald Trump), terrorism alerts, and anti-John Kerry documentary Stolen Honor, and even to force local news anchors to read an editorial mirroring Trump's denunciation of the news media for bias and fake news.
See also
Alternative media
Big Three television networks
Concentration of media ownership
Fourth television network
Mainstream media
Media bias
Media conglomerate
Media democracy
Media imperialism
Media manipulation
Media proprietor
Media transparency
Monopolies of knowledge
Old media
Politico-media complex
Propaganda model
State controlled media
Telecommunications Act of 1996
Western media
Animation
Anime
Television
Television in the United States
References
United States communications regulation |
666924 | https://en.wikipedia.org/wiki/Service-oriented%20architecture | Service-oriented architecture | In software engineering, service-oriented architecture (SOA) is an architectural style that supports service orientation. Consequently, it is also applied in the field of software design, where application components provide services to other components through a communication protocol over a network. A service is a discrete unit of functionality that can be accessed remotely and acted upon and updated independently, such as retrieving a credit card statement online. SOA is also intended to be independent of vendors, products and technologies.
Service orientation is a way of thinking in terms of services and service-based development and the outcomes of services.
A service has four properties according to one of many definitions of SOA:
It logically represents a repeatable business activity with a specified outcome.
It is self-contained.
It is a black box for its consumers, meaning the consumer does not have to be aware of the service's inner workings.
It may be composed of other services.
Different services can be used in conjunction as a service mesh to provide the functionality of a large software application, a principle SOA shares with modular programming. Service-oriented architecture integrates distributed, separately maintained and deployed software components. It is enabled by technologies and standards that facilitate components' communication and cooperation over a network, especially over an IP network.
SOA is related to the idea of an application programming interface (API), an interface or communication protocol between different parts of a computer program intended to simplify the implementation and maintenance of software. An API can be thought of as the service, and the SOA the architecture that allows the service to operate.
Overview
In SOA, services use protocols that describe how they pass and parse messages using description metadata. This metadata describes both the functional characteristics of the service and its quality-of-service characteristics. Service-oriented architecture aims to allow users to combine large chunks of functionality to form applications built purely from existing services, combined in an ad hoc manner. A service presents a simple interface to the requester that abstracts away the underlying complexity, acting as a black box. Users can thus access these independent services without any knowledge of their internal implementation.
Defining concepts
What the related buzzword service-orientation promotes is loose coupling between services. SOA separates functions into distinct units, or services, which developers make accessible over a network in order to allow users to combine and reuse them in the production of applications. These services and their corresponding consumers communicate with each other by passing data in a well-defined, shared format, or by coordinating an activity between two or more services.
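As an illustration of the loose coupling described above, the following minimal sketch (in Python, with entirely hypothetical names and values) shows a provider and a consumer that share nothing except an agreed message format, so either side can be replaced independently as long as the format is honoured:

import json

# The shared, well-defined message format is the only coupling point.
def build_request(customer_id):
    # Agreed contract: a JSON object with a single "customerId" field.
    return json.dumps({"customerId": customer_id})

def handle_request(message):
    # Provider: parses the agreed format and returns an agreed response shape.
    request = json.loads(message)
    statement = {"customerId": request["customerId"], "balance": 125.40}
    return json.dumps(statement)

def consume(customer_id):
    # Consumer: knows only the message contract, not the provider's internals.
    response = handle_request(build_request(customer_id))
    return json.loads(response)["balance"]

print(consume("C-1001"))

In a real SOA deployment the call to handle_request would travel over a network protocol such as SOAP or HTTP rather than being a direct function call; the point of the sketch is only that the two sides depend on the shared format, not on each other's implementation.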
A manifesto for service-oriented architecture was published in October 2009. It set out six core values, which are listed as follows:
Business value is given more importance than technical strategy.
Strategic goals are given more importance than project-specific benefits.
Intrinsic interoperability is given more importance than custom integration.
Shared services are given more importance than specific-purpose implementations.
Flexibility is given more importance than optimization.
Evolutionary refinement is given more importance than pursuit of initial perfection.
SOA can be seen as part of the continuum which ranges from the older concept of distributed computing and modular programming, through SOA, and on to practices of mashups, SaaS, and cloud computing (which some see as the offspring of SOA).
Principles
There are no industry standards relating to the exact composition of a service-oriented architecture, although many industry sources have published their own principles. Some of these include the following:
Standardized service contract
Services adhere to a standard communications agreement, as defined collectively by one or more service-description documents within a given set of services.
Service reference autonomy (an aspect of loose coupling)
The relationship between services is minimized to the level that they are only aware of each other's existence.
Service location transparency (an aspect of loose coupling)
Services can be called from anywhere within the network, regardless of where they are located.
Service longevity
Services should be designed to be long-lived. Where possible, services should avoid forcing consumers to change if they do not require new features; if you call a service today, you should be able to call the same service tomorrow.
Service abstraction
The services act as black boxes, that is, their inner logic is hidden from the consumers.
Service autonomy
Services are independent and control the functionality they encapsulate, from both a design-time and a run-time perspective.
Service statelessness
Services are stateless, that is, they either return the requested value or give an exception, hence minimizing resource use.
Service granularity
A principle to ensure services have an adequate size and scope. The functionality provided by the service to the user must be relevant.
Service normalization
Services are decomposed or consolidated (normalized) to minimize redundancy. In some cases this may not be done, such as where performance optimization, access, and aggregation are required.
Service composability
Services can be used to compose other services.
Service discovery
Services are supplemented with communicative meta data by which they can be effectively discovered and interpreted.
Service reusability
Logic is divided into various services, to promote reuse of code.
Service encapsulation
Many services which were not initially planned under SOA, may get encapsulated or become a part of SOA.
Patterns
Each SOA building block can play any of the three roles:
Service provider
It creates a web service and provides its information to the service registry. Each provider must weigh trade-offs such as which services to expose, whether to give more importance to security or to easy availability, and what price to offer the service for. The provider also has to decide what category the service should be listed in for a given broker service and what sort of trading partner agreements are required to use the service.
Service broker, service registry or service repository
Its main functionality is to make the information regarding the web service available to any potential requester. Whoever implements the broker decides its scope. Public brokers are available anywhere and everywhere, but private brokers are only available to a limited audience. UDDI was an early, no longer actively supported attempt to provide Web services discovery.
Service requester/consumer
It locates entries in the broker registry using various find operations and then binds to the service provider in order to invoke one of its web services. Whatever services the consumers need, they have to find them in the broker registry, bind to the respective service, and then use them. Consumers can access multiple services if the provider offers them.
The service consumer–provider relationship is governed by a standardized service contract, which has a business part, a functional part and a technical part.
Service composition patterns have two broad, high-level architectural styles: choreography and orchestration. Lower level enterprise integration patterns that are not bound to a particular architectural style continue to be relevant and eligible in SOA design.
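A minimal sketch of the orchestration style, in Python with hypothetical service stubs (no particular orchestration engine such as BPEL is implied): a central orchestrator calls subordinate services in a defined order and combines their results into one composite operation.

# Hypothetical subordinate services; in practice these would be remote calls.
def check_inventory(item):
    return {"item": item, "in_stock": True}

def charge_payment(customer, amount):
    return {"customer": customer, "amount": amount, "status": "charged"}

def place_order(customer, item, amount):
    # Orchestrator: coordinates the composed services from one central point.
    stock = check_inventory(item)
    if not stock["in_stock"]:
        return {"status": "rejected", "reason": "out of stock"}
    payment = charge_payment(customer, amount)
    return {"status": "accepted", "payment": payment}

print(place_order("C-1001", "widget", 19.99))

Under choreography, by contrast, there would be no central place_order step; each service would react to events published by the others.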
Implementation approaches
Service-oriented architecture can be implemented with web services or microservices. This is done to make the functional building blocks accessible over standard Internet protocols that are independent of platforms and programming languages. These services can represent either new applications or just wrappers around existing legacy systems to make them network-enabled.
Implementers commonly build SOAs using web services standards. One example is SOAP, which has gained broad industry acceptance after recommendation of Version 1.2 from the W3C (World Wide Web Consortium) in 2003. These standards (also referred to as web service specifications) also provide greater interoperability and some protection from lock-in to proprietary vendor software. One can, however, also implement SOA using any other service-based technology, such as Jini, CORBA, Internet Communications Engine, REST, or gRPC.
Architectures can operate independently of specific technologies and can therefore be implemented using a wide range of technologies, including:
Web services based on WSDL and SOAP
Messaging, e.g., with ActiveMQ, JMS, RabbitMQ
RESTful HTTP, with Representational state transfer (REST) constituting its own constraints-based architectural style
OPC-UA
Internet Communications Engine
WCF (Windows Communication Foundation, Microsoft's implementation of Web services, forming a part of .NET)
Apache Thrift
gRPC
SORCER
Implementations can use one or more of these protocols and, for example, might use a file-system mechanism to communicate data following a defined interface specification between processes conforming to the SOA concept. The key is independent services with defined interfaces that can be called to perform their tasks in a standard way, without a service having foreknowledge of the calling application, and without the application having or needing knowledge of how the service actually performs its tasks. SOA enables the development of applications that are built by combining loosely coupled and interoperable services.
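As a hedged example of one of the technologies listed above, the sketch below exposes a single service operation over RESTful HTTP using only the Python standard library; the /statement path and the response fields are invented for illustration and are not taken from any real system.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class StatementService(BaseHTTPRequestHandler):
    # One well-defined interface: GET /statement returns a JSON document.
    def do_GET(self):
        if self.path == "/statement":
            body = json.dumps({"customerId": "C-1001", "balance": 125.40}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Any consumer, in any language, can call GET http://localhost:8000/statement
    HTTPServer(("localhost", 8000), StatementService).serve_forever()

Because the interface is just HTTP and JSON, a consumer written in Java, C#, or any other language can invoke it without knowing how the service is implemented, which is the platform independence described in the surrounding text.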
These services inter-operate based on a formal definition (or contract, e.g., WSDL) that is independent of the underlying platform and programming language. The interface definition hides the implementation of the language-specific service. SOA-based systems can therefore function independently of development technologies and platforms (such as Java, .NET, etc.). Services written in C# running on .NET platforms and services written in Java running on Java EE platforms, for example, can both be consumed by a common composite application (or client). Applications running on either platform can also consume services running on the other as web services that facilitate reuse. Managed environments can also wrap COBOL legacy systems and present them as software services.
High-level programming languages such as BPEL and specifications such as WS-CDL and WS-Coordination extend the service concept by providing a method of defining and supporting orchestration of fine-grained services into more coarse-grained business services, which architects can in turn incorporate into workflows and business processes implemented in composite applications or portals.
Service-oriented modeling is an SOA framework that identifies the various disciplines that guide SOA practitioners to conceptualize, analyze, design, and architect their service-oriented assets. The Service-oriented modeling framework (SOMF) offers a modeling language and a work structure or "map" depicting the various components that contribute to a successful service-oriented modeling approach. It illustrates the major elements that identify the "what to do" aspects of a service development scheme. The model enables practitioners to craft a project plan and to identify the milestones of a service-oriented initiative. SOMF also provides a common modeling notation to address alignment between business and IT organizations.
Organizational benefits
Some enterprise architects believe that SOA can help businesses respond more quickly and more cost-effectively to changing market conditions. This style of architecture promotes reuse at the macro (service) level rather than micro (classes) level. It can also simplify interconnection to—and usage of—existing IT (legacy) assets.
With SOA, the idea is that an organization can look at a problem holistically. A business has more overall control. Theoretically there would not be a mass of developers using whatever tool sets might please them; rather, they would be coding to a standard set within the business. They can also develop enterprise-wide SOA that encapsulates a business-oriented infrastructure. SOA has also been illustrated as a highway system providing efficiency for car drivers: if everyone had a car but there were no highways anywhere, any attempt to get anywhere quickly or efficiently would be limited and disorganized. IBM Vice President of Web Services Michael Liebow says that SOA "builds highways".
In some respects, SOA could be regarded as an architectural evolution rather than as a revolution. It captures many of the best practices of previous software architectures. In communications systems, for example, little development of solutions that use truly static bindings to talk to other equipment in the network has taken place. By embracing a SOA approach, such systems can position themselves to stress the importance of well-defined, highly inter-operable interfaces. Other predecessors of SOA include Component-based software engineering and Object-Oriented Analysis and Design (OOAD) of remote objects, for instance, in CORBA.
A service comprises a stand-alone unit of functionality available only via a formally defined interface. Services can be some kind of "nano-enterprises" that are easy to produce and improve. Also services can be "mega-corporations" constructed as the coordinated work of subordinate services.
Reasons for treating the implementation of services as separate projects from larger projects include:
Separation promotes the concept to the business that services can be delivered quickly and independently from the larger and slower-moving projects common in the organization. The business starts understanding systems and simplified user interfaces calling on services. This promotes agility; that is, it fosters business innovation and speeds up time-to-market.
Separation promotes the decoupling of services from consuming projects. This encourages good design insofar as the service is designed without knowing who its consumers are.
Documentation and test artifacts of the service are not embedded within the detail of the larger project. This is important when the service needs to be reused later.
SOA promises to simplify testing indirectly. Services are autonomous, stateless, with fully documented interfaces, and separate from the cross-cutting concerns of the implementation. If an organization possesses appropriately defined test data, then a corresponding stub is built that reacts to the test data when a service is being built. A full set of regression tests, scripts, data, and responses is also captured for the service. The service can be tested as a 'black box' using existing stubs corresponding to the services it calls. Test environments can be constructed where the primitive and out-of-scope services are stubs, while the remainder of the mesh is test deployments of full services. As each interface is fully documented with its own full set of regression test documentation, it becomes simple to identify problems in test services. Testing evolves to merely validate that the test service operates according to its documentation, and finds gaps in documentation and test cases of all services within the environment. Managing the data state of idempotent services is the only complexity.
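The stub-based testing idea described above can be sketched as follows in Python (the service and its dependency are hypothetical): the service under test is exercised as a black box, while the out-of-scope service it calls is replaced by a stub that reacts to predefined test data.

import unittest

def exchange_rate_stub(currency):
    # Stub for an out-of-scope service, driven by predefined test data.
    rates = {"EUR": 0.9, "GBP": 0.8}
    return rates[currency]

def convert_amount(amount, currency, rate_service):
    # Service under test: depends only on the documented interface of rate_service.
    return round(amount * rate_service(currency), 2)

class ConvertAmountRegressionTest(unittest.TestCase):
    def test_euro_conversion(self):
        self.assertEqual(convert_amount(100, "EUR", exchange_rate_stub), 90.0)

    def test_pound_conversion(self):
        self.assertEqual(convert_amount(100, "GBP", exchange_rate_stub), 80.0)

if __name__ == "__main__":
    unittest.main()

Because the stub honours the same documented interface as the real service, the same regression tests can later be run against a full test deployment of the mesh.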
Examples may prove useful to aid in documenting a service to the level where it becomes useful. The documentation of some APIs within the Java Community Process provides good examples. As these are exhaustive, staff would typically use only important subsets. The 'ossjsa.pdf' file within JSR-89 exemplifies such a file.
Criticism
SOA has been conflated with Web services; however, Web services are only one option to implement the patterns that comprise the SOA style. In the absence of native or binary forms of remote procedure call (RPC), applications could run more slowly and require more processing power, increasing costs. Most implementations do incur these overheads, but SOA can be implemented using technologies (for example, Java Business Integration (JBI), Windows Communication Foundation (WCF) and data distribution service (DDS)) that do not depend on remote procedure calls or translation through XML or JSON. At the same time, emerging open-source XML parsing technologies (such as VTD-XML) and various XML-compatible binary formats promise to significantly improve SOA performance.
Stateful services require both the consumer and the provider to share the same consumer-specific context, which is either included in or referenced by messages exchanged between the provider and the consumer. This constraint has the drawback that it could reduce the overall scalability of the service provider if the service-provider needs to retain the shared context for each consumer. It also increases the coupling between a service provider and a consumer and makes switching service providers more difficult. Ultimately, some critics feel that SOA services are still too constrained by applications they represent.
A primary challenge faced by service-oriented architecture is the management of metadata. Environments based on SOA include many services which communicate among each other to perform tasks. Because the design may involve multiple services working in conjunction, an application may generate millions of messages. Furthermore, services may belong to different organizations or even competing firms, creating a significant trust issue. Thus SOA governance comes into play.
Another major problem faced by SOA is the lack of a uniform testing framework. There are no tools that provide the required features for testing these services in a service-oriented architecture. The major causes of difficulty are:
Heterogeneity and complexity of solution.
Huge set of testing combinations due to integration of autonomous services.
Inclusion of services from different and competing vendors.
Platform is continuously changing due to availability of new features and services.
Extensions and variants
Event-driven architectures
Application programming interfaces
Application programming interfaces (APIs) are the frameworks through which developers can interact with a web application.
Web 2.0
Tim O'Reilly coined the term "Web 2.0" to describe a perceived, quickly growing set of web-based applications. A topic that has experienced extensive coverage involves the relationship between Web 2.0 and service-oriented architectures.
SOA is the philosophy of encapsulating application logic in services with a uniformly defined interface and making these publicly available via discovery mechanisms. The notion of complexity-hiding and reuse, but also the concept of loosely coupling services has inspired researchers to elaborate on similarities between the two philosophies, SOA and Web 2.0, and their respective applications. Some argue Web 2.0 and SOA have significantly different elements and thus can not be regarded "parallel philosophies", whereas others consider the two concepts as complementary and regard Web 2.0 as the global SOA.
The philosophies of Web 2.0 and SOA serve different user needs and thus expose differences with respect to the design and also the technologies used in real-world applications. However, use cases have demonstrated the potential of combining technologies and principles of both Web 2.0 and SOA.
Microservices
Microservices are a modern interpretation of service-oriented architectures used to build distributed software systems. Services in a microservice architecture are processes that communicate with each other over the network in order to fulfill a goal. These services use technology agnostic protocols, which aid in encapsulating choice of language and frameworks, making their choice a concern internal to the service. Microservices are a new realisation and implementation approach to SOA, which have become popular since 2014 (and after the introduction of DevOps), and which also emphasize continuous deployment and other agile practices.
There is no single commonly agreed definition of microservices. The following characteristics and principles can be found in the literature:
fine-grained interfaces (to independently deployable services),
business-driven development (e.g. domain-driven design),
IDEAL cloud application architectures,
polyglot programming and persistence,
lightweight container deployment,
decentralized continuous delivery, and
DevOps with holistic service monitoring.
Service-oriented architectures for interactive applications
Interactive applications requiring real-time response times, for example low-latency interactive 3D applications, use specific service-oriented architectures that address the particular needs of such applications. These include, for example, low-latency optimized distributed computation and communication, as well as resource and instance management.
See also
Application programming interface
Loose coupling
OASIS SOA Reference Model
Service granularity principle
SOA governance
Software architecture
Service-oriented communications (SOC)
Service-oriented development of applications
Service-oriented distributed applications
Web Application Description Language
References
Software design patterns
Architectural pattern (computer science)
Enterprise application integration
Service-oriented (business computing)
Web services |
1802686 | https://en.wikipedia.org/wiki/Robert%20Swirsky | Robert Swirsky | Robert Swirsky (born December 1962, Brooklyn, NY) is a computer scientist, author and pianist. In the early 1980s, he was one of the first regular contributors to the nascent computer magazine industry, writing for magazines ranging from Popular Computing, Kilobaud Microcomputing, and Interface Age to Creative Computing.
Swirsky holds bachelor's and master's degrees in computer science from Hofstra University, and is one of Hofstra's Alumni of Distinction. While there, he met VOIP pioneer Jeff Pulver who attended Hofstra as an undergraduate student. After graduating, Swirsky worked on projects ranging from aircraft avionics to one of the first all-software digital radio receivers for a VLF submarine application.
In 1989, Swirsky moved to California and joined Olivetti Advanced Technology's Unix group. He was a frequent speaker at Uniforum, Usenix, and other Unix shows, and hosted parties where he entertained people with song parodies about the Unix computer operating system, some of which were featured in a special Evatone Soundsheet issue of Interface Age magazine. He studied music and piano at Hofstra University with professor Morton Estrin.
After Olivetti, Swirsky went to Adobe Systems, where he was a member of the core PostScript team, and the team that developed the first versions of Photoshop for Microsoft Windows, including Win32s on Microsoft Windows for Workgroups 3.11. His work made him a participant in many industry standards committees, such as TWAIN, and he was a frequent speaker and contributor at ACM SIGGRAPH events. Before leaving Adobe in 1998, he worked with Will Harvey on HTML rendering technology.
The Disney years
In 1998, Swirsky began working for Walt Disney Imagineering R&D as Director, Creative Technology, under Bran Ferren, developing electronic games and digital imaging systems. He developed technology to play interactive games synchronized with live television shows, and electronic toys including Disney's Magical Moments Pin. His digital photography projects included systems to synchronize picture-taking with ride vehicles, and active infrared badges to identify picture-takers.
Swirsky was a major technical contributor to ABC's Enhanced TV, an Emmy Award-winning technology that allowed television viewers to play along with game shows and sporting events, and to answer live polls during talk shows. His interactive media research also involved working with nerdcore rapper Monzy, then an intern at Walt Disney Imagineering, on a variety of cutting-edge display technologies, including the display of digital data on a spherical surface.
Swirsky continues to work as a consultant for the themed entertainment industry, including Disney.
3D photography
Swirsky is known for his work in 3D digital photography. He has developed algorithms for generating full-color anaglyph images from stereo pairs that can be viewed through red/cyan glasses. A popular freeware program, Callipygian 3D, is widely used and has been featured on TechTV's The Screen Savers show several times, with Swirsky demonstrating it. The popularity of anaglyph images from Mars, and of anaglyph movies like Spy Kids 3D, introduced new audiences to anaglyph technology. Swirsky's software played a major role in enabling people to create their own anaglyph images.
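The exact algorithms used in Callipygian 3D are not described here, but a common, simplified way to build a red/cyan anaglyph from a stereo pair is to take the red channel from the left-eye image and the green and blue channels from the right-eye image. A rough sketch using the Pillow imaging library (file names are placeholders):

from PIL import Image

def simple_anaglyph(left_path, right_path, out_path):
    # Load the stereo pair; both images are assumed to be the same size.
    left = Image.open(left_path).convert("RGB")
    right = Image.open(right_path).convert("RGB")
    r, _, _ = left.split()    # red channel from the left-eye view
    _, g, b = right.split()   # green and blue channels from the right-eye view
    Image.merge("RGB", (r, g, b)).save(out_path)

simple_anaglyph("left.jpg", "right.jpg", "anaglyph.jpg")

Full-color anaglyph methods of the kind Swirsky worked on are more sophisticated, typically mixing the channels with carefully chosen color matrices to preserve color while reducing ghosting, but the basic channel-combination idea is the same.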
Production company
In 2003, Swirsky started a production company, Thrill Science, Inc., to produce and distribute short films and related media for the portable media player market. The company has a lot adjacent to Walt Disney World in Florida. The property, known as Swampworth, is used as a filming location for productions, and as a studio for Swirsky's other projects.
Code used in The Terminator
Some of Swirsky's computer code, from the May 1984 issue of 73 Magazine, was used in the movie The Terminator in a scene where COBOL code was briefly displayed.
References
External links
US Patents for Robert Swirsky
System and method for automating the creation of customized multimedia content
Method and system for managing the lifecycles of media assets
Method and system for managing media assets
1962 births
Living people
American computer scientists
Hofstra University alumni
20th-century American pianists
American male pianists
21st-century American pianists
20th-century American male musicians
21st-century American male musicians |
39199119 | https://en.wikipedia.org/wiki/Schlumberger%20Canada%20Ltd%20v%20Canada%20%28Commissioner%20of%20Patents%29 | Schlumberger Canada Ltd v Canada (Commissioner of Patents) | Schlumberger Canada Ltd v Canada (Commissioner of Patents) is a decision of the Federal Court of Appeal concerning the patentability of software inventions within the context of the Patent Act (Canada). At issue was the patentability of a method of combining and analyzing borehole measurements for oil and gas exploration using a computer programmed according to mathematical formulas. The Federal Court of Appeal held that the use of a computer "does not change the nature" of the discovered invention and that the process at issue was a "mere scientific principle or abstract theorem" and therefore not an "invention" within the meaning of the Patent Act.
More broadly, the case stands for the proposition that the use of a computer neither adds to, nor subtracts from, the patentability of an alleged invention.
Background
In oil and gas exploration, data is collected by taking measurements using instruments lowered into boreholes in geological formations. However, these measurements are not always useful to geologists. Schlumberger researchers (the appellants) developed a method to combine and analyze measurements to yield more meaningful information. The application described a process where the borehole measurements were recorded to magnetic tape and processed by a computer for mathematical processing and display.
Arguments by the Parties
Respondent's Arguments
The Commissioner of Patents argued that a computer program, even if it satisfied the novelty and utility requirements for patentability, was not an "invention" as defined in section 2 of the Patent Act.
Appellant's Arguments
Schlumberger argued that the invention was not the computer program, but rather the process of "transforming measurements into useful information." Schlumberger argued that the definition of "invention" in the Patent Act did not exclude inventions involving computers and so there was no reason the process was not a patentable invention.
Federal Court of Appeal Ruling
The court found in favour of the government, ruling that the application did not disclose a patentable invention.
The court started by observing that a mathematical formula would fall within the phrase "mere scientific principle or abstract theorem", then found in section 28(3) of the Patent Act, for which "no patent shall issue". The court noted that if the calculations in the invention were performed by men rather than computers, then they would not be patentable.
The court reasoned that there was nothing new in using computers to make mathematical calculations. The court then rejected the appellant's argument that the operations were steps in a process, finding that, if the contention were true, it would have the effect that the "mere fact" of the use of a computer to perform the calculations would transform an unpatentable discovery (a mathematical formula) into patentable subject matter. The court found this unacceptable, holding that "the fact that a computer is or should be used to implement discovery does not change the nature of that discovery."
In view of the above, the Court dismissed the appellant's appeal.
See also
Subject matter in Canadian patent law
Software patents under Canadian patent law
References
Canadian patent case law
1981 in Canadian case law
Federal Court of Appeal (Canada) case law |
34493264 | https://en.wikipedia.org/wiki/Competitive%20programming | Competitive programming | Competitive programming is a mind sport usually held over the Internet or a local network, involving participants trying to program according to provided specifications. Contestants are referred to as sport programmers. Competitive programming is recognized and supported by several multinational software and Internet companies, such as Google and Facebook.
A programming competition generally involves the host presenting a set of logical or mathematical problems, also known as puzzles, to the contestants (who can vary in number from tens to several thousands), and contestants are required to write computer programs capable of solving each problem. Judging is based mostly upon number of problems solved and time spent for writing successful solutions, but may also include other factors (quality of output produced, execution time, program size, etc.)
History
One of the oldest contests known is the ICPC, which originated in the 1970s and has grown to include 88 countries in its 2011 edition.
From 1990 to 1994, Owen Astrachan, Vivek Khera and David Kotz ran one of the first distributed, internet-based programming contests inspired by ICPC.
Interest in competitive programming has grown extensively since 2000, and is strongly connected to the growth of the Internet, which facilitates holding international contests online, eliminating geographical problems.
Overview
The aim of competitive programming is to write source code of computer programs which are able to solve given problems. A vast majority of problems appearing in programming contests are mathematical or logical in nature. Typical such tasks belong to one of the following categories: combinatorics, number theory, graph theory, algorithmic game theory, computational geometry, string analysis and data structures. Problems related to constraint programming and artificial intelligence are also popular in certain competitions.
Irrespective of the problem category, the process of solving a problem can be divided into two broad steps: constructing an efficient algorithm, and implementing the algorithm in a suitable programming language (the set of programming languages allowed varies from contest to contest). These are the two most commonly tested skills in programming competitions.
In most contests, the judging is done automatically by host machines, commonly known as judges. Every solution submitted by a contestant is run on the judge against a set of (usually secret) test cases. Normally, contest problems have an all-or-none marking system, meaning that a solution is "Accepted" only if it produces satisfactory results on all test cases run by the judge, and rejected otherwise. However, some contest problems may allow for partial scoring, depending on the number of test cases passed, the quality of the results, or some other specified criteria. Some other contests only require that the contestant submit the output corresponding to given input data, in which case the judge only has to analyze the submitted output data.
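As an illustration of the steps described above (designing an efficient algorithm and then implementing it, after which the judge runs the program against hidden test cases), a typical short solution to a hypothetical problem such as "given N integers, print the largest pairwise difference" might look like this in Python:

import sys

def main():
    data = sys.stdin.read().split()
    n = int(data[0])
    values = list(map(int, data[1:1 + n]))
    # O(n) observation: the largest pairwise difference is max minus min.
    print(max(values) - min(values))

if __name__ == "__main__":
    main()

Reading the input format exactly, choosing an algorithm fast enough for the stated limits, and printing the output in precisely the expected form are all part of what the automated judge evaluates.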
Online judges are online environments in which testing takes place. Online judges have ranklists showing users with the biggest number of accepted solutions and/or shortest execution time for a particular problem.
Notable competitions
There are two types of competition formats: short-term and long-term. Each round of short-term competition lasts from 1 to 5 hours. Long-term competitions can last from a few days to a few months.
Short-term
International Collegiate Programming Contest (ICPC) – one of the oldest competitions, for students of universities in groups of 3 persons each
International Olympiad in Informatics (IOI) – one of the oldest competitions, for secondary school students
American Computer Science League (ACSL) – computer science competition with written and programming portions, for middle/high school students
CodeChef – competition held from 2009, there are three contests held every month and an annual competition called CodeChef SnackDown
Codeforces Round – typically two hour contest, held every week
Facebook Hacker Cup – competition held from 2011, provided and sponsored by Facebook
HackerRank – multiple competitions
Gridwars – four competitions held between 2003 and 2004.
Google Code Jam – competition held from 2003, provided and sponsored by Google
IEEEXtreme Programming Competition – annual competition for IEEE Student Members held since 2006 by IEEE.
Topcoder Open (TCO) – Algorithm competition held since 2001 by Topcoder
In most of the above competitions, since the number of contestants is quite large, competitions are usually organized in several rounds. They usually require online participation in all rounds except the last, which requires onsite participation. A special exception to this is IEEEXtreme, which is a yearly 24-hour virtual programming competition. The top performers at IOI and ICPC receive gold, silver and bronze medals while in the other contests, cash prizes are awarded to the top finishers. Also hitting the top places in the score tables of such competitions may attract interest of recruiters from software and Internet companies.
Long-term
HackerRank Week of Code
ICFP Programming Contest – annual 3-day competition held since 1998 by the International Conference on Functional Programming
Topcoder Marathon Matches
Codechef Long Challenges - held every month - lasts up to 10 days
Artificial intelligence and machine learning
Kaggle – machine learning competitions.
CodeCup – board game AI competition held annually since 2003. Game rules get published in September and the final tournament is held in January.
Google AI Challenge – bi-annual competitions for students that ran 2009 to 2011
Halite – An AI programming challenge sponsored by Two Sigma, Cornell Tech, and Google
Russian AI Cup open artificial intelligence programming contest
Contests focusing on open source technologies
List may be incomplete
Online contest and training resources
The programming community around the world has created and maintained several internet resources dedicated to competitive programming. They offer standalone contests with or without minor prizes. The past problem archives are also a popular resource for training in competitive programming. Several organizations host programming competitions on a regular basis.
Notable participants
The following list consists of contestants who achieved significant results in programming contests and, to some extent, carried that success into their respective careers. Note this list excludes people (for example Mark Zuckerberg) who might be successful in their career, but did not have significant results in competitive programming:
Gennady Korotkevich (tourist)
Petr Mitrichev (Petr)
Makoto Soejima (rng_58), a former admin and problem writer of Topcoder and a founding member of AtCoder.
Tiancheng Lou (ACRush), 2 time Google Code Jam winner (2008 and 2009), co-founder of Pony.ai
Adam D'Angelo, former CTO of Facebook and founder of Quora.
Scott Wu (scott_wu), CTO of Lunchclub.
Matei Zaharia (Matei), assistant professor at Stanford University and founder of Databricks.
Benefits and criticism
Participation in programming contests may increase student enthusiasm for computer science studies. The skills acquired in ICPC-like programming contests also improve career prospects, as they help to pass the "technical interviews", which often require candidates to solve complex programming and algorithmic problems on the spot.
There has also been criticism of competitive programming, particularly from professional software developers. One critical point is that many fast-paced programming contests teach competitors bad programming habits and code style (like unnecessary use of macros, lack of OOP abstraction and comments, use of short variable names, etc.). Also, by offering only small algorithmic puzzles with relatively short solutions, programming contests like ICPC and IOI don't necessarily teach good software engineering skills and practices, as real software projects typically have many thousands of lines of code and are developed by large teams over long periods of time. Peter Norvig stated that based on the available data, being a winner of programming contests correlated negatively with a programmer's performance at their job at Google (even though contest winners had higher chances of getting hired). Norvig later stated that this correlation was observed on a small data set, but that it could not be confirmed after examining a larger data set.
Yet another sentiment is that rather than "wasting" their time on excessive competing by solving problems with known solutions, high-profile programmers should rather invest their time in solving real-world problems.
Literature
Halim, S., Halim, F. (2013). Competitive Programming 3: The New Lower Bound of Programming Contests. Lulu.
Laaksonen, A. (2017). Guide to Competitive Programming (Undergraduate Topics in Computer Science). Cham: Springer International Publishing.
See also
Category:Computer science competitions
Code golf
Hackathon
References
External links
Open-source project for running contests
Contest Management System – an open-source tool in Python to run and manage a programming contest on a server, used at IOI 2012 and IOI 2013.
Computer science competitions |
32021114 | https://en.wikipedia.org/wiki/SS%20Mona%27s%20Queen%20%281934%29 | SS Mona's Queen (1934) | TSS (RMS) Mona's Queen (III) No. 145308 was a ship built for the Isle of Man Steam Packet Company in 1934. The steamer, which was the third vessel in the company's history to bear the name, was one of five ships to be specially commissioned by the company between 1927 and 1937. They were replacements for the various second-hand steamers that had been purchased to replace the company's losses during the First World War. However, the life of the Mona's Queen proved to be short: six years after being launched she was sunk by a sea mine during the Dunkirk evacuation on 29 May 1940.
Construction
Ordered in August 1933, Mona's Queen was built by Cammell Laird at Birkenhead at a cost of £30,000 (approx. £12.3 million in 2017). Mona's Queen was the sixth vessel to be built in the Birkenhead yards for the Isle of Man Steam Packet Company, and was completed in June 1934.
Constructed under special survey in accordance with the requirements of Lloyd's Register of Shipping and Classification, Mona's Queen was classed as A.1 "with freeboard for Irish Channel Service." The clerk of the works on behalf of the Company for the building of Mona's Queen was Charles Cannell.
Features
Design and layout
The vessel had a registered tonnage of 2,756; a depth of ; a length of ( between the perpendiculars); beam of and a speed of 22 knots. She was certified for 2,486 passengers and a crew of 83.
There were 5 decks: the Boat Deck, Promenade Deck, Shelter Deck, Main Deck and Lower Deck. The Boat Deck was long and the Promenade Deck . The Promenade Deck on the Mona's Queen extended forward towards the bow giving the impression it was larger than even the . The Shelter, Main and Lower decks extended the full length of the ship. She was considered to be an elegant ship because of her straight lines and elliptical stern.
Part of the space on the starboard side amidships on the main deck was occupied by provision rooms which included a refrigerated store, the ship was fitted with a Hallmark automatic refrigerator.
Power
Mona's Queen was propelled by twin screws driven through single reduction gearing by two sets of Parsons steam turbines. She was the first of the Company's ships to have water tube boilers, taking up less room than the scotch boilers previously used.
Each set of turbines comprised a high pressure and a low pressure turbine. The high pressure turbines were of an impulse reduction type, two rows of impulse blading being followed by end tightened reaction blading, while the low pressure turbine ahead blading was of the all reaction type. The astern turbines were incorporated in the after ends of both high pressure and low pressure turbine casings and were capable of developing up to 70% of the full ahead power. The turbines were fitted with governors for overspeed control.
The turbines exhausted into a large condenser capable of maintaining a vacuum of 29 inches at full power and fitted with tubes of Alumbro composition made by Imperial Chemical Industries. The condensers were placed outboard of the turbines and their exhaust openings were connected directly to the lower portions of the low pressure turbine casings. This arrangement eliminated the requirement for large overhead trunking and greatly simplified the work of overhauling the low pressure turbines. In order to ensure a suitable feed of water, a water softening plant supplied by Paterson Engineering was fitted, and an electric salinometer was installed to test the salinity of the condensate from both port and starboard condensers and also of the reserve feed water.
Steam was supplied at 230 lbf/in² by three water tube boilers. The boilers were oil fired and operated under the closed stokehold system of forced draught. The air for combustion was supplied by two large fans driven by enclosed forced-lubrication engines, manufactured by Matthew Paul & Co. The oil firing equipment was supplied by Babcock & Wilcox, a special feature being the electrically driven lighting up set. The fuel oil was carried in two deep tanks arranged on either side of the after boiler with the oil settling tanks placed behind the boiler at the centre of the ship. Two large pumps were provided for oil transfer purposes and an additional pump was provided for emergency bilge duties. A recorder was fitted in the boiler room to assist combustion control. For fire fighting purposes a Foamite Firefoam system was installed.
The air pumps were of the Weir Paragon type and circulating water was supplied by centrifugal pumps driven by compound enclosed forced lubrication engines. The air pumps discharged through a gravitation type filter to a large feed tank. A turbo pump would draw from the feed tank and discharge through the feed heater to the boilers, with a further direct acting pump provided as standby. The feed heater provided automatic drain control. Lubrication was provided by three pumps and the oil cooler was fitted with tubes of cupronickel.
Propellers
The propellers were three bladed, cast in bronze and designed by Cammell Lairds in collaboration with the National Physical Laboratory. The propeller revolutions at full power were approximately 275 revolutions per minute.
Technology
Watertight compartments
The hull was subdivided into 10 watertight compartments and 5 of her bulkheads were fitted with sliding watertight doors operated on the Brunton hydraulic system and controlled from the Navigating Bridge.
Rudder and steering
Mona's Queen had two rudders, one forward as well as an Oertz streamline type astern.
Radio communication
The ship was equipped with a Marconi C.W./I.C.W. wireless installation together with a Marconi Echometer sounding device in order to derive the depth of water beneath the ship. Submarine signal receiving apparatus, with a distance finding capability was also installed, supplied by the Submarine Signal Co. (London) Ltd.
Electric power
Electric power was provided by two 90 kW turbo generators in addition to which a 35 kW diesel driven emergency generating set was fitted at the main deck level. As well as its emergency duties the 35 kW generator supplied current for essential services under harbour conditions when steam was not available.
Passenger facilities
On board passenger accommodation was considered advanced for its day. It had 20 cabins, consisting of eight private cabins and 12 convertible cabins, including one that was specially decorated. Each cabin was fitted with sofa berths and a wash basin.
The public rooms for the First Class passengers comprised a ladies' lounge on the Promenade Deck, a smoking room with a bar and a first class buffet on the Shelter Deck. A dining saloon with accommodation for 90 was situated forward on the Main Deck and a further 3 saloons on the Lower Deck. The Third Class rooms comprised an entrance hall on the shelter deck aft, with stairs leading down to a dining saloon and lounge and ladies' lounge on the Main Deck. A further two saloons were also situated on the Lower Deck.
Large promenading spaces were provided on the Shelter and Promenade Decks with screens on both sides of the ship fitted with vertically sliding windows. Screw operated, the windows were of a large area and were similar to those fitted to other ships in the Company.
The decoration of the First Class public rooms was specially designed for the vessel. The Ladies Lounge was panelled in light sycamore with jade green mouldings and furniture of mahogany. The Smoking Room was panelled in Olive teak; the First Class entrance and stairway from the Promenade Deck were in French walnut; the First Class Lounge in walnut and birch; the Dining Saloon in Burma mahogany and the special private cabin also in Burma mahogany.
Sleeping accommodation
A special feature of the First Class Lounge was an arrangement whereby the sofas at the sides of the vessel could be quickly transformed into 12 private cabins and so provide sleeping accommodation for 48 passengers.
The three saloons on the Lower Deck, together with the two aft for the Third Class passengers were also fitted with sofas which could provide sleeping accommodation. Berth curtains were provided for privacy when the spaces were being used.
Lifeboats
The ship was issued with a Board of Trade Lifesaving Appliance and Safety Certificate, the appliances including 10 Class 1A lifeboats carried in Columbus davits, and teak buoyant seats and rafts for over 2,440 persons – sufficient for all passengers and crew on board. Electric winches were installed for handling the lifeboats.
Launch and sea trials
Mona's Queen was launched by Mrs J. Waddington at 9:30am on 12 April 1934 in the presence of G. Clucas (Chairman of the Isle of Man Steam Packet Company), W. Cowley (director), J. Waddington (director), A. Robertson (director) and numerous other representatives of the Company. Amongst those representing the builders were: W. Hichens (Chairman), R. Johnson (managing director) and J. Caird (assistant managing director). Also in attendance were the Lord Mayor and Lady Mayoress of Liverpool and the Mayors of Wallasey, Birkenhead and Bootle. This rather unusual time for the ship's launch was a consequence of tide conditions in the River Mersey.
Following her fitting-out, Mona's Queen underwent her sea trials on Wednesday 13 June.
Sailing from Cammell Laird's, she made passage to the Clyde for her speed test over the measured mile, during which a speed of was attained. After the completion of this test a further run was made over the measured mile, with the vessel using the bow rudder.
A six-hour consumption trial was carried out on the way back to Birkenhead. On her return she crossed Douglas Bay (but did not berth at her home port) as she continued back to Birkenhead, where she entered the wet basin in order to have her turbines examined.
Service
Domestic
Mona's Queen was the lead ship of the last three vessels – all twin-screw and geared turbines – to be built for the Steam Packet Company before the Second World War. She was painted with a white hull over green like the and . This was a summer colour scheme adopted by the company in the 1930s.
During the busy summer season, the Mona's Queen was employed on the main route between Douglas and Liverpool. It also inaugurated evening cruises from Douglas to the Calf of Man.
In the 1935 film No Limit, the Mona's Queen can be seen berthed alongside the Prince's Landing Stage in Liverpool just before it is boarded by the film's star, George Formby.
and followed the Mona's Queen into service in 1937 (however, all three ships would be lost during the war).
Mail and cargo
Mona's Queen's designation as a Royal Mail Ship (RMS) indicated that she carried mail under contract with the Royal Mail.
A specified area was allocated for the storage of letters, parcels and specie (bullion, coins and other valuables).
In addition, there was a considerable quantity of regular cargo, ranging from furniture to foodstuffs.
War service
Troop ship
Mona's Queen was requisitioned as a troop ship by the British government on 3 September 1939, the day war was declared. Although she served a military purpose, the ship remained a merchantman with a Steam Packet captain and crew. Most of May 1940 was spent evacuating refugees from Dutch and French ports as the massive German advance swept forward to the Channel. On 22 May she carried 2,000 British troops from Boulogne to Dover.
Dunkirk
Mona's Queen was one of the first vessels to make a successful round trip during the Dunkirk evacuation. Under the command of Captain Radcliffe Duggan, she arrived back in Dover during the night of 27 May with 1,200 troops. The next day the ship returned to sea and was shelled off the French coast by shore guns but escaped damage.
Captain Duggan was temporarily replaced by Capt. Holkham, and in the early hours of 29 May the Mona's Queen set sail for Dunkirk from Dover loaded with water canisters, because troops on the Dunkirk beaches were short of drinking water. However, the ship struck a magnetic sea mine outside Dunkirk harbour at 5:30am. The Mona's Queen sank in two minutes.
Captain Archibald Holkham, who had taken over as Master, and 31 members of the crew were picked up by destroyers. Twenty-four of the crew were lost. Of the crew who died, 14 worked in the engine room. They included the Chief and Second Engineer. Seventeen of the dead were from the Isle of Man. The wreck is designated as a war grave.
Memorial
To mark the seventieth anniversary of her sinking, Mona's Queen's starboard anchor was raised on 29 May 2010 and subsequently returned to the Isle of Man to form the centrepiece of a permanent memorial. The anchor had become detached during the sinking, and therefore did not form part of the war grave. It was raised by a French salvage vessel, and the operation was shown live on BBC television. A 12-gun salute was fired as a crane lifted the anchor of Mona's Queen from the seabed.
On 29 May 2012, a memorial featuring the restored anchor from Mona's Queen, commemorating the losses 72 years earlier of Mona's Queen, King Orry and Fenella, was unveiled in a ceremony at Kallow Point in Port St Mary attended by representatives of local and national government, the Lieutenant Governor, the Isle of Man Steam Packet Company and the French Navy.
Ships of the Isle of Man Steam Packet Company
1934 ships
Ferries of the Isle of Man
Ships sunk by mines
Steamships
Steamships of the United Kingdom
Merchant ships of the United Kingdom
World War II merchant ships of the United Kingdom
Ships built on the River Mersey
Maritime incidents in May 1940
Ships sunk by German aircraft
World War II shipwrecks in the English Channel |
47596422 | https://en.wikipedia.org/wiki/ColorOS | ColorOS | ColorOS is a mobile operating system created by Oppo Electronics based on the Android Open Source Project. Initially, Realme phones also shipped with ColorOS, until it was replaced by Realme UI in 2020. Starting with the OnePlus 9 series, OnePlus preinstalls ColorOS instead of HydrogenOS (the Chinese version of OxygenOS) on all of its smartphones sold in mainland China.
Further reading
How OPPO's ColorOS 11 Pushes the Trend of Android Customization
Android (operating system)
Android forks
ARM operating systems
Mobile Linux
Software forks
Linux distributions |
18793578 | https://en.wikipedia.org/wiki/Death%20of%20Caylee%20Anthony | Death of Caylee Anthony | Caylee Marie Anthony (August 9, 2005 – June-December 2008) was an American girl who lived in Orlando, Florida, with her mother, Casey Marie Anthony (born March 19, 1986), and her maternal grandparents, George and Cindy Anthony. On July 15, 2008, she was reported missing in a call made by Cindy, who said she had not seen Caylee for 31 days and that Casey's car smelled like a dead body had been inside it. Cindy said Casey had given varied explanations as to Caylee's whereabouts before finally telling her that she had not seen Caylee for weeks. Casey lied to detectives, telling them Caylee had been kidnapped by a nanny on June 9, and that she had been trying to find her, too frightened to alert the authorities. She was charged with first-degree murder in October 2008 and pleaded not guilty.
On December 11, 2008, two-year-old Caylee's skeletal remains were found with a blanket inside a laundry bag in a wooded area near the Anthony family's house. Investigative reports and trial testimony varied between duct tape being found near the front of the skull and on the mouth of the skull. The medical examiner mentioned duct tape as one reason she determined the death was a homicide, while the cause of death was listed as "death by undetermined means".
The trial lasted six weeks, from May to July 2011. The prosecution sought the death penalty and alleged Casey wished to free herself from parental responsibilities and murdered her daughter by administering chloroform and applying duct tape. The defense team, led by Jose Baez, countered that the child had drowned accidentally in the family's swimming pool on June 16, 2008, and that George Anthony disposed of the body. The defense contended that Casey lied about this and other issues because of a dysfunctional upbringing, which they said included sexual abuse by her father. The defense did not present evidence as to how Caylee died, nor evidence that Casey was sexually abused as a child, but challenged every piece of the prosecution's evidence, calling much of it "fantasy forensics". Casey did not testify. On July 5, 2011, the jury found Casey not guilty of first-degree murder, aggravated child abuse, and aggravated manslaughter of a child, but guilty of four misdemeanor counts of providing false information to a law enforcement officer. With credit for time served, she was released on July 17, 2011. A Florida appeals court overturned two of the misdemeanor convictions on January 25, 2013.
The not-guilty murder verdict was met with public outrage and was both attacked and defended by media and legal commentators. Some complained that the jury misunderstood the meaning of reasonable doubt, while others said the prosecution relied too heavily on the defendant's allegedly poor moral character because they had been unable to show conclusively how the victim had died. Time magazine described the case as "the social media trial of the century".
Disappearance
According to Casey Anthony's father, George Anthony, Casey left the family's home on June 16, 2008, taking her daughter Caylee (who was almost three years old) with her, and did not return for 31 days. Casey's mother Cindy asked repeatedly during the month to see Caylee, but Casey claimed that she was too busy with a work assignment in Tampa, Florida. At other times, she said Caylee was with a nanny, whom Casey identified as Zenaida "Zanny" Fernandez-Gonzalez, or at theme parks or the beach. It was eventually determined that a woman named Zenaida Fernandez-Gonzalez did in fact exist, but that she had never met Casey, Caylee, any other member of the Anthony family, or any of Casey's friends.
Learning that Casey's car was in a tow yard, George Anthony went to recover it; he and the yard attendant noted a strong smell coming from the trunk. Both later stated that they believed the odor to be that of a decomposing body. When the trunk was opened, it contained only a bag of trash.
Cindy reported Caylee missing that day, July 15, to the Orange County Sheriff's Office. During the same telephone call, Casey confirmed to the 9-1-1 operator that Caylee had been missing for 31 days. Sounding distraught, Cindy said: "There is something wrong. I found my daughter's car today and it smells like there's been a dead body in the damn car."
Case
Investigation
When Detective Yuri Melich of the Orange County Sheriff's Department began investigating Caylee's disappearance, he found discrepancies in Casey's signed statement. When questioned, Casey said Caylee had been kidnapped by Zenaida Fernandez-Gonzalez, whom she also identified as "Zanny", Caylee's nanny. Although Casey had talked about her, Zanny had never been seen by Casey's family or friends, and in fact there was no nanny. Casey also told police that she was working at Universal Studios, a lie she had been telling her parents for years. Investigators took Casey to Universal Studios on July 16, 2008, the day after Caylee was reported missing, and asked her to show them her office. Casey led detectives around the building for around 25 minutes before she stopped, started smiling and jokingly admitted that she had no office there and that she had been fired years before.
Casey was first arrested on July 16, 2008, and was charged the following day with giving false statements to law enforcement, child neglect, and obstruction of a criminal investigation. The judge denied bail, saying Casey had shown "woeful disregard for the welfare of her child". On July 22, 2008, after a bond hearing, the judge set bail at $500,000. On August 21, 2008, after one month of incarceration, she was released from the Orange County jail after her $500,000 bond was posted by the nephew of California bail bondsman Leonard Padilla in hopes that she would cooperate and Caylee would be found.
On August 11, 12 and 13, 2008, meter reader Roy Kronk called police about a suspicious object found in a forested area near the Anthony residence. On the first occasion, he was directed by the sheriff's office to call the tip line, which he did, receiving no return call. On the second occasion, he again called the sheriff's office and was eventually met by two police officers. He reported to them that he had seen what appeared to be a skull near a gray bag. On that occasion, the officer conducted a short search and stated he did not see anything. On December 11, 2008, Kronk again called the police. They searched and found the remains of a child in a trash bag. Investigative teams recovered duct tape which was hanging from hair attached to the skull and some tissue left on the skull. Over the next four days, more bones were found in the wooded area near the spot where the remains initially had been discovered. On December 19, 2008, medical examiner Jan Garavaglia confirmed that the remains found were those of Caylee Anthony. The death was ruled a homicide and the cause of death listed as undetermined.
Arrests and charges
Casey was offered a limited immunity deal on July 29, 2008, by prosecutors related to "the false statements given to law enforcement about locating her child", which was renewed on August 25, to expire August 28. She did not take it. On September 5, 2008, she was released again on bail for all pending charges after being fitted with an electronic tracking device. Her $500,000 bond was posted by her parents, Cindy and George Anthony, who signed a promissory note for the bond.
On October 14, 2008, Casey Anthony was indicted by a grand jury on charges of first degree murder, aggravated child abuse, aggravated manslaughter of a child, and four counts of providing false information to police. She was later arrested, and Judge John Jordan ordered that she be held without bond. On October 21, 2008, the charges of child neglect were dropped against Casey, according to the State Attorney's Office because "[as] the evidence proved that the child was deceased, the State sought an indictment on the legally appropriate charges." On October 28, Anthony was arraigned and pleaded not guilty to all charges.
On April 13, 2009, prosecutors announced that they planned to seek the death penalty in the case.
Trial
Evidence
Four hundred pieces of evidence were presented. A strand of hair was recovered from the trunk of Casey's car which was microscopically similar to hair taken from Caylee's hairbrush. The strand showed "root-banding," in which hair roots form a dark band after death, which was consistent with hair from a dead body.
Kronk, who discovered the remains, repeated the same basic story that he had told police. On Friday, October 24, 2008, a forensic report by Arpad Vass of the Oak Ridge National Laboratory judged that results from an air sampling procedure (called LIBS) performed in the trunk of Casey Anthony's car showed chemical compounds "consistent with a decompositional event", based on the presence of five key chemical compounds out of over 400 possible chemical compounds that Vass' research group considers typical of decomposition. Investigators stated that the trunk smelled strongly of human decomposition, although the laboratory analysis could not establish that the decomposition was specifically human. The procedure had not been affirmed by a Daubert test in the courts. Vass' group also stated there was chloroform in the car trunk.
In October 2009, officials released 700 pages of documents related to the Anthony investigation, including records of Google searches of the terms "neck breaking" and "how to make chloroform" on a computer accessible to Casey, presented by the prosecutors as evidence of a crime.
According to detectives, crime-scene evidence included residue of a heart-shaped sticker found on duct tape over the mouth of Caylee's skull. However, the laboratory was not able to capture a heart-shape photographically after some duct tape was subjected to dye testing. A blanket found at the crime scene matched Caylee's bedding at her grandparents' home.
Among photos entered into evidence was one from the computer of Ricardo Morales, an ex-boyfriend of Casey Anthony, depicting a man leaning over a woman with a rag, captioned "Win her over with Chloroform".
Witness John Dennis Bradley's software, developed for computer investigations, was used by the prosecution to indicate that Casey had conducted extensive computer searches on the word "chloroform" 84 times, and to suggest that Anthony had planned to commit murder. He later discovered that a flaw in the software had misread the forensic data: the word "chloroform" had been searched for only once, and the website in question offered information on the use of chloroform in the 19th century (see below).
Attorneys and jury
The lead prosecutor in the case was Assistant State Attorney Linda Drane Burdick. Assistant State Attorneys Frank George and Jeff Ashton completed the prosecution team. Lead counsel for the defense was Jose Baez, a Florida criminal defense attorney. Attorneys J. Cheney Mason, Dorothy Clay Sims, and Ann Finnell served as co-counsel. During the trial, attorney Mark Lippman represented George and Cindy Anthony.
Jury selection began on May 9, 2011, at the Pinellas County Criminal Justice Center in Clearwater, Florida, because the case had been so widely reported in the Orlando area. Jurors were brought from Pinellas County to Orlando. Jury selection took longer than expected and ended on May 20, 2011, with twelve jurors and five alternates being sworn in. The panel consisted of nine women and eight men. The trial took six weeks, during which time the jury was sequestered to avoid influence from information available outside the courtroom.
Opening statements
The trial began on May 24, 2011, at the Orange County Courthouse, with Judge Belvin Perry presiding. In the opening statements, lead prosecutor Linda Drane Burdick described the story of the disappearance of Caylee Anthony day-by-day. The prosecution alleged an intentional murder and sought the death penalty against Casey Anthony. Prosecutors stated that Anthony used chloroform to render her daughter unconscious before putting duct tape over her nose and mouth to suffocate her, and left Caylee's body in the trunk of her car for a few days before disposing of it. They characterized Anthony as a party girl who killed her daughter to free herself from parental responsibility and enjoy her personal life.
The defense, led by Jose Baez, claimed in opening statements that Caylee drowned accidentally in the family's pool on June 16, 2008, and was found by George Anthony, who told Casey she would spend the rest of her life in jail for child neglect and then proceeded to cover up Caylee's death. Baez argued this is why Casey Anthony went on with her life and failed to report the incident for 31 days. He alleged that it was the habit of a lifetime for Casey to hide her pain and pretend nothing was wrong because she had been sexually abused by George Anthony since she was eight years old and her brother Lee also had made advances toward her. Baez also questioned whether Roy Kronk, the meter reader who found the bones, had actually removed them from another location, and further alleged that the police department's investigation was compromised by their desire to feed a media frenzy about a child's murder, rather than a more mundane drowning. He admitted that Casey had lied about there being a nanny named Zenaida Fernandez-Gonzales.
Witness testimony
Prosecutors called George Anthony as their first witness and, in response to their question, he denied having sexually abused his daughter Casey. Anthony testified that he did not smell anything resembling human decomposition in Casey's car when she visited him on June 24, but that he did smell something similar to human decomposition when he picked the car up on July 15. Cindy Anthony testified that her comment to the 9-1-1 operator that Casey's car smelled "like someone died" was just a "figure of speech".
Baez asked an FBI analyst about the paternity test the FBI conducted to see if Lee Anthony, Casey's brother, was Caylee's father. She told the jury the test had come back negative. Regarding a photo on the computer of Ricardo Morales, an ex-boyfriend of Casey Anthony, depicting a poster with the caption "Win her over with Chloroform," Morales said that the photo was on his Myspace page and that he had never discussed chloroform with Anthony or searched for chloroform on her computer.
The prosecution called John Dennis Bradley, a former Canadian law enforcement officer who develops software for computer investigations, to analyze a data file from a desktop taken from the Anthony home. Bradley said he was able to use a program to recover deleted searches from March 17 and March 21, 2008, and that someone searched the website Sci-spot.com for "chloroform" 84 times. Bradley expressed his belief that "some of these items might have been bookmarked". Under cross-examination by the defense, Bradley agreed there were two individual accounts on the desktop and that there was no way to know who actually performed the searches.
Police dog handler Jason Forgey testified that Gerus, a German Shepherd cadaver dog certified in 2005, indicated a high alert of human decomposition in the trunk of Casey's car, saying the police dog has had real-world searches numbering "over three thousand by now". During cross-examination, Baez argued that the dog's search records were "hearsay". Sgt. Kristin Brewer also testified that her police dog, Bones, signaled decomposition in the backyard during a search in July 2008. However, neither police dog was able to detect decomposition during a second visit to the Anthony home. Brewer explained that this was because whatever had been in the yard was either moved or the odor dissipated.
The prosecution called chief medical examiner Jan Garavaglia, who testified that she determined Caylee's manner of death to be homicide, but listed it as "death by undetermined means". Garavaglia took into account the physical evidence present on the remains she examined, as well as all the available information on the way they were found and what she had been told by the authorities, before arriving at her determination. "We know by our observations that it's a red flag when a child has not been reported to authorities with injury, there's foul play," Garavaglia said, and continued "There is no child that should have duct tape on its face when it dies." Additionally, Garavaglia addressed the chloroform evidence found by investigators inside the trunk of Casey's car, testifying that even a small amount of chloroform would be sufficient to cause the death of a child.
University of Florida professor and human identification laboratory director Michael Warren was brought on by the prosecution to present a computer animation of the way duct tape could have been used in the death of the child, which the defense objected to hearing. Judge Perry, after a short recess to review, ruled that the video could be shown to the jury. The animation featured a picture of Caylee taken alongside Casey, superimposed with an image of Caylee's decomposed skull, and another with a strip of duct tape that was recovered with her remains. The images were slowly brought together showing that the duct tape could have covered her nose and mouth. Baez stated, "This disgusting superimposition is nothing more than a fantasy ...They're throwing things against the wall and seeing if it sticks." Jurors were seen taking notes of the imagery, and Warren testified that it was his opinion that the duct tape found with Caylee's skull was placed there before her body began decomposing.
FBI latent-print examiner Elizabeth Fontaine testified that adhesive in the shape of a heart was found on a corner of a piece of duct tape that was covering the mouth portion of Caylee's remains during ultraviolet testing. Fontaine examined three pieces of duct tape found on Caylee's remains for fingerprints, and said she did not find fingerprints but did not expect to, given the months the tape and the remains had been outdoors and exposed to the elements, stressing that any oil or sweat from a person's fingertips would have long since deteriorated. Although Fontaine showed the findings to her supervisor, she did not initially try to photograph the heart-shaped adhesive, explaining, "When I observe something is unexpected, I note it and continue with my examination." During the defense's cross-examination, Fontaine explained that when she examined the sticker evidence a second time, after subjecting the tape to dye testing, "It was no longer visible." She said that other FBI agents had tested the duct tape in the interim.
The defense called two government witnesses who countered prosecution witness testimony about the duct tape. The chief investigator for the medical examiner stated that the original placement of the duct tape was unclear and it could have shifted positions as he collected the remains. Cindy Anthony testified that their family buried their pets in blankets and plastic bags, using duct tape to seal the opening. Additionally, an FBI forensic document examiner found no evidence of a sticker or sticker residue on the duct tape found near the child's remains.
The defense called forensic pathologist Dr. Werner Spitz, who performed a second autopsy on Caylee after Garavaglia and challenged Garavaglia's autopsy report. He called her autopsy "shoddy," saying it was a failure that Caylee's skull was not opened during her examination. "You need to examine the whole body in an autopsy," he said. Spitz stated that he was not allowed to attend Garavaglia's initial autopsy on Caylee's remains, and that, from his own follow-up autopsy, he was not comfortable ruling the child's death a homicide. He said he could not determine what Caylee's manner of death was, but said that there was no indication to him that she was murdered. Additionally, Spitz testified that he believed the duct tape found on Caylee's skull was placed there after the body decomposed, opining that if tape was placed on the skin, there should have been DNA left on it, and suggested that someone may have staged some of the crime scene photos. "The person who took this picture, the person who prepared this, put the hair there," stated Spitz. When asked by Ashton during cross-examination, "So your testimony is the medical examiner's personnel took the hair that wasn't on the skull, placed it there?", Spitz answered, "It wouldn't be the first time, sir. I can tell you some horror stories about that."
On June 21, Bradley discovered that a flaw in his software misread the forensic data and that the word "chloroform" had been searched for only one time and the website in question offered information on the use of chloroform in the 19th century. On June 23, Baez called Cindy Anthony to the stand, who told jurors she had been the one who performed the "chloroform" search on the family computer in March 2008. The prosecution alleged that only Casey could have conducted this search and the others because she was the only one home at the time. When asked by prosecutors how she could have made the Internet searches when employment records show she was at work, Cindy Anthony said despite what her work time sheet indicates, she was at home during these time periods because she left from work early during the days in question. Bradley alerted prosecutor Linda Burdick and Sgt. Kevin Stenger of the Sheriff's Office the weekend of June 25 about the discrepancy in his software, and volunteered to fly to Orlando at his own expense to show them. On the same day, the judge temporarily halted proceedings when the defense filed a motion to determine if Anthony was competent to proceed with trial. The motion states the defense received a privileged communication from their client which caused them to believe that "Ms. Anthony is not competent to aid and assist in her own defense". The trial resumed on June 27 when the judge announced that the results of the psychological evaluations showed Anthony was competent to proceed. In later testimony about air samples, Dr. Ken Furton, a professor of chemistry at Florida International University, stated that there is no consensus in the field on what chemicals are typical of human decomposition. Judge Perry ruled that the jury would not get to smell air samples taken from the trunk.
The prosecution stated they discussed Bradley's software discrepancy with Baez on June 27, and he raised the issue in court testimony. Baez also asked Judge Perry to instruct the jury about this search information, but prosecutors disputed this and it was not done. Also on June 27, the defense called two private investigators who had searched the area in November 2008 where the body was later found. The search was videotaped, but nothing was found. On June 28, the defense called a Texas EquuSearch team leader who did two searches of the area and found no body. The defense then called Roy Kronk, who recounted the same basic story he told police about his discovery of Caylee Anthony's remains in December 2008. He acknowledged receiving $5,000 after the remains were identified, but denied that he told his son that finding the body would make him rich and famous. The next day, his son testified he had made such statements.
On June 30 the defense called Krystal Holloway, a volunteer in the search for Caylee, who stated that she had had an affair with George Anthony, that he had been to her home, and that he had texted her, "Just thinking about you. I need you in my life." She told the defense that George Anthony had told her that Caylee's death was "an accident that snowballed out of control." Under cross-examination by prosecutors, they pointed to her sworn police statement in which she had said that George Anthony believed it was an accident, rather than knowing that it was. In her initial report, Holloway reported George Anthony saying, "I really believe that it was an accident that just went wrong and (Casey Anthony) tried to cover it up." She said he had not told her he was present when the alleged accident occurred. During redirect examination, Baez asked Holloway if George Anthony had told her that Caylee was dead while stating publicly that she was missing, to which she replied yes.
In his earlier testimony George Anthony had denied the affair with Holloway, and said he visited her only because she was ill. He said he sent the text message because he needed everyone who had helped in his life. After Holloway's testimony, Judge Perry told jurors that it could be used to impeach George Anthony's credibility, but that it was not proof of how Caylee died, nor evidence of Casey Anthony's guilt or innocence.
The prosecution rested its case on June 15, after calling 59 witnesses for 70 different testimonies. The defense rested its case on June 30, after calling 47 witnesses for 63 different testimonies. Casey Anthony did not testify.
On June 30 and July 1, the prosecution presented rebuttal arguments, beginning by showing the jury photographs of Caylee's clothes and George's suicide note. It called two representatives of Cindy Anthony's former employer who explained why their computer login system shows Cindy was at work the afternoon she said she went home early and searched her computer for information about chloroform. A police computer analyst testified someone had purposely searched online for "neck + breaking." Another analyst testified she did not find evidence that Cindy Anthony had searched certain terms she claimed to have searched. Anthropology professor Dr. Michael Warren from the University of Florida was recalled to rebut a defense witness on the need to open a skull during an autopsy. The lead detective stated that there were no phone calls between Cindy and George Anthony during the week of June 16, 2008; however, he told the defense he did not know that George had a second cell phone.
Closing arguments
Closing arguments were heard July 3 and July 4. Jeff Ashton, for the prosecution, told the jury, "When you have a child, that child becomes your life. This case is about the clash between that responsibility, and the expectations that go with it, and the life that Casey Anthony wanted to have." He outlined the state's case against Casey, touching on her many lies to her parents and others, the smell in her car's trunk—identified by several witnesses, including her own father, as the odor from human decomposition—and the items found with Caylee's skeletal remains in December 2008. He emphasized how Casey "maintains her lies until they absolutely cannot be maintained any more" and then replaces [them] with another lie, using "Zanny the Nanny" as an example. Anthony repeatedly told police that Caylee was with the nanny that she specifically identified as Zenaida Fernandez-Gonzalez. Police, however, were never able to find the nanny. Authorities did find a woman named Zenaida Fernandez-Gonzalez, but she denied ever meeting the Anthonys.
Ashton reintroduced the items found with Caylee's remains, including a Winnie the Pooh blanket that matched the bedding at her grandparents' home, one of a set of laundry bags with the twin bag found at the Anthony home, and duct tape he said was a relatively rare brand. "That bag is Caylee's coffin," Ashton said, holding up a photograph of the laundry bag, as Casey reacted with emotion. He further criticized the defense's theory that Caylee drowned in the Anthony pool and that Casey and George panicked upon finding the child's body and covered up her death. He advised jurors to use their common sense when deciding on a verdict. "No one makes an accident look like murder," he said.
Before closing arguments, Judge Perry ruled that the defense could argue that a drowning occurred due to reasonable conclusions aided by witness testimony, but that arguing sexual abuse was not allowed since there was nothing to support the claim that George sexually abused Casey. Baez began by contending that there were holes in the prosecution's forensic evidence, saying it was based on a "fantasy". He told the jury that the prosecution wanted them to see stains and insects that did not really exist, that they had not proven that the stains in Anthony's car trunk were caused by Caylee's decomposing body, rather than from a trash bag found there. He added that the prosecutors tried to make his client look like a promiscuous liar because their evidence was weak. He said the drowning is "the only explanation that makes sense" and showed jurors a photograph of Caylee opening the home's sliding glass door by herself. He stressed that there were no child safety locks in the home and that both of Casey's parents, George and Cindy, testified that Caylee could get out of the house easily. Although Cindy testified that Caylee could not put the ladder on the side of the pool and climb up, Baez alleged that Cindy may have left the ladder up the night before. "She didn't admit to doing so in testimony," he said, "but how much guilt would she have knowing it was her that left the ladder up that day?"
Defense attorney Jose Baez told jurors his biggest fear was that they would base their verdict on emotions, not evidence. "The strategy behind that is, if you hate her, if you think she's a lying, no-good slut, then you'll start to look at this evidence in a different light," he said. "I told you at the very beginning of this case that this was an accident that snowballed out of control... What made it unique is not what happened, but who it happened to." He explained Casey Anthony's behavior as being the result of her dysfunctional family situation. At one point as Baez spoke, Ashton could be seen smiling or chuckling behind his hand. This prompted Baez to refer to him as "this laughing guy right here". The judge called a sidebar conference, then a recess. When court resumed, he chastised both sides, saying both Ashton and Baez had violated his order that neither side should make disparaging remarks about opposing counsel. After both attorneys apologized, the judge accepted the apologies but warned that a recurrence would have the offending attorney excluded from the courtroom.
Defense attorney Cheney Mason then followed with an additional closing argument, addressing the jury to discuss the charges against Casey Anthony. "The burden rests on the shoulders of my colleagues at the state attorney's office," Mason said, referring to proving that Casey Anthony committed a crime. Mason said that the jurors are required, whether they like it or not, to find the defendant not guilty if the state did not adequately prove its case against Casey Anthony. Mason emphasized that the burden of proof is on the state, and that Casey Anthony's decision not to testify is not an implication of guilt.
Lead prosecutor Linda Drane Burdick in the prosecution rebuttal told the jurors that she and her colleagues backed up every claim they made in their opening statement six weeks ago, and implied that the defense never directly backed up their own opening-statement claims. "My biggest fear is that common sense will be lost in all the rhetoric of the case," she said, insisting that she would never ask the jury to make their decision based on emotion but rather the evidence. "Responses to guilt are oh, so predictable," she stated. "What do guilty people do? They lie, they avoid, they run, they mislead... they divert attention away from themselves and they act like nothing is wrong." She suggested that the garbage bag in the trunk of the car was a "decoy" put there to keep people from getting suspicious about the smell of the car when she left it abandoned in a parking stall directly beside a dumpster in an Amscot parking lot. "Whose life was better without Caylee?" she asked, stressing how George and Cindy Anthony were wondering where their daughter and granddaughter were in June and July 2008, the same time Casey was staying at her boyfriend's apartment while Caylee's body was decomposing in the woods. "That's the only question you need to answer in considering why Caylee Marie Anthony was left on the side of the road dead." Burdick then showed the jury a split-screen with a photo of Casey partying at a night club on one side and a close-up of the "Bella Vita" (meaning "Beautiful Life") tattoo that she got weeks after Caylee died on the other.
The jury began deliberations on July 4. On July 5, prosecutors stated that, during deliberations, they were about to give the jury the corrected information with regard to Bradley's software discrepancy; however, the jury reached a verdict before they could do so. One legal analyst stated that if the jury had found Casey guilty before receiving the exculpatory evidence, the prosecution's failure to fully disclose it could have been grounds for a mistrial.
Verdict and sentence
On July 5, 2011, the jury found Casey not guilty of counts one through three regarding first-degree murder, aggravated manslaughter of a child, and aggravated child abuse, while finding her guilty on counts four through seven for providing false information to law enforcement:
Count Four: Anthony said she was employed at Universal Studios during 2008, pursuant to the investigation of a missing persons report.
Count Five: Anthony said she had left Caylee at an apartment complex with a babysitter causing law enforcement to pursue the missing babysitter.
Count Six: Anthony said she had informed two "employees" of Universal Studios, Jeff Hopkins and Juliet Lewis, of the disappearance of Caylee.
Count Seven: Anthony said she had received a phone call and spoke to Caylee on July 15, 2008, causing law enforcement to expend further resources.
On July 7, 2011, sentencing arguments were heard. The defense asked for the sentencing to be based on a single count of lying, on the grounds that the offenses occurred as part of one police interview dealing with the same matter, the disappearance of her daughter, and so constituted one continuous lie. The defense also argued for concurrent sentences, that is, for the sentences on all four counts to run at the same time. The judge disagreed with the defense arguments, finding that Anthony's statements consisted of "four distinct, separate lies", and ordered the sentences to be served consecutively, noting that "Law enforcement expended a great deal of time, energy and manpower looking for Caylee Marie Anthony. This search went on from July through December, over several months, trying to find Caylee Marie Anthony." Judge Perry sentenced Casey to one year in the county jail and $1,000 in fines for each of the four counts of providing false information to a law enforcement officer, the maximum penalty prescribed by law. She received 1,043 days credit for time served plus additional credit for good behavior, resulting in her release on July 17, 2011.
In September 2011, Perry, complying with a Florida statute requiring judges to assess investigative and prosecution costs if requested by a state agency, ruled that Casey Anthony must pay $217,000 to the state of Florida. He ruled she had to pay those costs directly related to lying to law enforcement about the death of Caylee, including search costs only up to September 30, 2008, when the Sheriff's Office stopped investigating a missing-child case. In earlier arguments, Mason had called the prosecutors' attempts to exact the larger sum "sour grapes" because the prosecution lost its case. He told reporters that Anthony is indigent.
In January 2013, a Florida appeals court reduced her convictions from four to two counts. Her attorney had argued that her false statements constituted a single offense; however, the appeals court noted she gave false information during two separate police interviews several hours apart.
Media coverage
Initial coverage
The case attracted a significant amount of national media attention and was regularly the main topic of many TV talk shows, including those hosted by Greta Van Susteren, Nancy Grace and Geraldo Rivera. It was also featured on Fox's America's Most Wanted, NBC's Dateline, and ABC's 20/20. Nancy Grace referred to Casey Anthony as the "tot mom" and urged the public to let "the professionals, the psychics and police" do their jobs.
Casey Anthony's parents, Cindy and George, appeared on The Today Show on October 22, 2008. They maintained their belief that Caylee was alive and would be found. Larry Garrison, president of SilverCreek Entertainment, served as their spokesman until November 2008, when he resigned, citing "the Anthony family's erratic behavior".
More than 6,000 pages of evidence released by the Orange County Sheriff's Department, including hundreds of instant messages between Casey and her ex-boyfriend Tony Rusciano, were the subject of increased scrutiny by the media for clues and possible motives in the homicide. Outside the Anthony home, WESH TV 2 reported that protesters repeatedly shouted "baby killer" and that George Anthony was physically attacked. George Anthony was reported missing on January 22, 2009, after he failed to show up for a meeting with his lawyer, Brad Conway. George was found in a Daytona Beach hotel the next day after sending messages to family members threatening suicide. He was taken to Halifax Hospital for psychiatric evaluation and later released.
Trial coverage
The trial was commonly compared to the O. J. Simpson murder case, both for its widespread media attention and initial shock at the not-guilty verdict. At the start of the trial, dozens of people raced to the Orange County Courthouse, hoping to secure one of 50 seats open to the public at the murder trial. Because the case received such thorough media attention in Orlando, jurors were brought in from Pinellas County, Florida, and sequestered for the entire trial. The trial became a "macabre tourist attraction", as people camped outside for seats in the courtroom, where scuffles also broke out among those wanting seats inside. The New York Post described the trial as going "from being a newsworthy case to one of the biggest ratings draws in recent memory", and Time magazine dubbed it "the social media trial of the century". Cable news channels and network news programs became intent upon covering the case as extensively as they could. Scot Safon, executive vice president of HLN, said it was "not about policy" but rather the "very, very strong human dimension" of the case that drove the network to cover it. The audience for HLN's Nancy Grace rose more than 150 percent, and other news channels deciding to focus on the trial saw their ratings double and triple. HLN achieved its most watched hour in network history (4.575 million) and peaked at 5.205 million when the verdict was read. According to The Christian Post, the O. J. Simpson case had a 91 percent television viewing audience, with 142 million people listening by radio and watching television as the verdict was delivered. "The Simpson case was the longest trial ever held in California, costing more than $20 million to fight and defend, running up 50,000 pages of trial transcript in the process." The Casey Anthony trial was expected to "far exceed" these numbers.
Opinions varied on what made the public thoroughly invested in the trial. Safon argued the Anthonys having been a regular and "unremarkable" family with complex relationships made them intriguing to watch. In a special piece for CNN, psychologist Frank Farley described the circumstantial evidence as "all over the map" and that combined with "the apparent lying, significant contradictions and flip-flops of testimony, and questionable or bizarre theories of human behavior, it is little wonder that this nation [was] glued to the tube". He said it was a trial that was both a psychologist's dream and nightmare, and believes that much of the public's fascination had to do with the uncertainty of a motive for the crime. Psychologist Karyl McBride discussed how some mothers stray away from "the saintly archetype" expected of mothers. "We want so badly to hang onto the belief system that mothers don't harm children," she stated. "It's fascinating that the defense in the Anthony case found a way to blame the father. While we don't know what is true and maybe never will, it is worth taking a look at the narcissistic family when maternal narcissism rules the roost. Casey Anthony is a beautiful white woman and the fact that the case includes such things as sex, lies, and videotapes makes it irresistible."
When the not-guilty verdict was rendered, there was significant outcry among the general public and media that the jury made the wrong decision. Outside the courthouse, many in the crowd of 500 reacted with anger, chanting their disapproval and waving protest signs. People took to Facebook and Twitter, as well as other social media outlets, to express their outrage. Traffic to news sites surged from about two million page views a minute to 3.3 million, with most of the visits coming from the United States. Mashable reported that between 2 pm and 3 pm, one million viewers were watching CNN.com/live, 30 times higher than the previous month's average. Twitter's trending topics in the United States were mostly about the subjects related to the case, and Newser reported that posts on Facebook were coming in "too fast for all Facebook to even count them, meaning at least 10 per second". Some people referred to the verdict as "O.J. Number 2", and various media personalities and celebrities expressed outrage via Twitter. News anchor Julie Chen became visibly upset while reading the not-guilty verdict on The Talk and had to be assisted by her fellow co-hosts, who also expressed their dismay.
Others, such as Sean Hannity of the Fox News Channel, felt the verdict was fair because the prosecution did not have enough evidence to establish guilt or meet its burden of proof beyond a reasonable doubt. Hannity said that the verdict was legally correct, and that all of the evidence that was presented by the prosecution was either impeached or contradicted by the defense. John Cloud of Time magazine echoed these sentiments, saying the jury made the right call: "Anthony got off because the prosecution couldn't answer [the questions]," Cloud stated. "Because the prosecutors had so little physical evidence, they built their case on Anthony's (nearly imperceptible) moral character. The prosecutors seemed to think that if jurors saw what a fantastic liar Anthony was, they would understand that she could also be a murderer."
Disagreement with the verdict was heavily debated by the media, lawyers and psychologists, who put forth several theories for public dissatisfaction with the decision, ranging from wanting justice for Caylee, to the circumstantial evidence having been strong enough, to some blaming the media. UCLA forensic psychiatrist Dr. Carole Lieberman, said, "The main reason that people are reacting so strongly is that the media convicted Casey before the jury decided on the verdict. The public has been whipped up into this frenzy wanting revenge for this poor little adorable child. And because of the desire for revenge, they've been whipped up into a lynch mob." She added, "Nobody likes a liar, and Anthony was a habitual liar. And nobody liked the fact that she was partying after Caylee's death. Casey obviously has a lot of psychological problems. Whether she murdered her daughter or not is another thing."
There was a gender gap in perceptions to the case. According to a USA Today/Gallup Poll of 1,010 respondents, about two-thirds of Americans (64 percent) believed Casey Anthony "definitely" or "probably" murdered her daughter; however, women were much more likely than men to believe the murder charges against Anthony and to be upset by the not-guilty verdict. The poll reported that women were more than twice as likely as men, 28 percent versus 11 percent, to think Anthony "definitely" committed murder. Twenty-seven percent of women said they were angry about the verdict, compared with nine percent of men. On the day Casey Anthony was sentenced for lying to investigators in the death of her daughter, supporters and protesters gathered outside the Orange County Courthouse, with one man who displayed a sign asking Anthony to marry him. Two men who drove overnight from West Virginia held signs that said, "We love and support you Casey Anthony," and "Nancy Grace, stop trying to ruin innocent lives. The jury has spoken. P.S. Our legal system still works!" The gender gap has partly been explained by "the maternal instinct"—the idea of a mother murdering her own child is a threat to the ideal of motherhood. For example, the trial was compared to the 1960s trial of Alice Crimmins, who was accused of murdering her two small children.
Explanations other than, or emphasizing, the prosecution's lack of forensic evidence were given for the jury's decision. A number of media commentators reasoned that the prosecution overcharged the case by tagging on the death penalty, concluding that people in good conscience could not sentence Anthony to death based on the circumstantial evidence presented. The CSI effect was also extensively argued—that society now lives "in a 'CSI age' where everyone expects fingerprints and DNA, and we are sending a message that old-fashioned circumstantial evidence is not sufficient". Likewise, commentators such as O. J. Simpson case prosecutor Marcia Clark believe that the jury interpreted "reasonable doubt" too narrowly. Clark said instruction on reasonable doubt is "the hardest, most elusive" instruction of all. "And I think it's where even the most fair-minded jurors can get derailed," she said, opining the confusion between reasonable doubt and a reason to doubt. "In Scotland, they have three verdicts: guilty, not guilty, and not proven. It's one way of showing that even if the jury didn't believe the evidence amounted to proof beyond a reasonable doubt, it didn't find the defendant innocent either. There's a difference."
Aftermath
Defense, prosecution, and jury
Following the criminal trial, Mason blamed the media for the passionate hatred directed toward Casey Anthony. He described it as a "media assassination" of her before and during the trial, saying, "I hope that this is a lesson to those of you who have indulged in media assassination for three years, bias, and prejudice, and incompetent talking heads saying what would be and how to be." Mason added: "I can tell you that my colleagues from coast to coast and border to border have condemned this whole process of lawyers getting on television and talking about cases that they don't know a damn thing about, and don't have the experience to back up their words or the law to do it. Now you have learned a lesson."
Mason's response was viewed as especially critical of Nancy Grace, whose news program is cited as having "almost single-handedly inflated the Anthony case from a routine local murder into a national obsession". Grace said that she did not understand why Mason would care what pundits are saying, and that she imagines she has tried and covered as many cases as Mason. She criticized the defense attorneys for delivering media criticism before mentioning Caylee's name in their post-verdict news conference, and said she disagrees with the verdict. At a meeting of local professionals, named the Tiger Bay Club of Tampa, Mason told the media and those in attendance that he was surprised by the not-guilty verdict.
State's Attorney Lawson Lamar said, "We're disappointed in the verdict today because we know the facts and we've put in absolutely every piece of evidence that existed. This is a dry-bones case. Very, very difficult to prove. The delay in recovering little Caylee's remains worked to our considerable disadvantage." Jose Baez said, "While we're happy for Casey, there are no winners in this case. Caylee has passed on far, far too soon, and what my driving force has been for the last three years has been always to make sure that there has been justice for Caylee and Casey because Casey did not murder Caylee. It's that simple." He added, "And today our system of justice has not dishonored her memory by a false conviction." Former Casey Anthony defense attorney Linda Kenney Baden shared Baez's sentiments. She believes the jury reached the right verdict. "We should embrace their verdict," she stated.
On July 6, 2011, Jeff Ashton gave his first interview about the case on The View. Ashton said of the verdict, "Obviously, it's not the outcome we wanted. But from the perspective of what we do, this was a fantastic case." He disagrees with those who state the prosecution overcharged the case, saying, "The facts that we had... this was first-degree murder. I think it all came down to the evidence. I think ultimately it came down to the cause of death." Ashton additionally explained that if the jury did not perceive first-degree murder when they saw the photograph of Caylee's skull with the duct tape, "then so be it". He said he accepts the jury's decision and that it has not taken away his faith in the justice system. "You can't believe in the rule of law and not accept that sometimes it doesn't go the way you think it should," stated Ashton, and explained that he understands why the case "struck such a nerve" with the public. He added that "I think when people see someone that they believe has so gone away from [a mother's love for her child], it just outrages them." Ashton also made appearances on several other talk shows in the days following, and complimented Jose Baez on his cross-examinations and as having "the potential to be a great attorney".
After the trial ended, the twelve jurors did not initially want to discuss the verdict with the media. 51-year-old Russell Huekler, an alternate juror who stepped forward the day of the verdict, said, "The prosecution didn't provide the evidence that was there for any of the charges from first-degree murder down to second-degree murder to the child abuse to even the manslaughter [charge]. It just wasn't there."
The next day, juror number three—Jennifer Ford, a 32-year-old nursing student—told ABC News, "I did not say she was innocent" and "I just said there was not enough evidence. If you cannot prove what the crime was, you cannot determine what the punishment should be." She added, "I'm not saying that I believe the defense," but that "it's easier for me logically to get from point A to point B" via the defense argument, as opposed to the prosecution argument. Ford believed George Anthony was "dishonest." She said the jury "was sick to [their] stomachs to get [the not-guilty] verdict" and that the decision process overwhelmed them to the point where they did not want to talk to reporters afterwards. Juror number two, a 46-year-old male who requested to stay unidentified, told the St. Petersburg Times that "everybody agreed if we were going fully on feelings and emotions, [Anthony] was done". He stated that a lack of evidence was the reason for the not-guilty verdict: "I just swear to God ... I wish we had more evidence to put her away. I truly do ... But it wasn't there." He also said that Anthony was "not a good person in my opinion".
In an anonymous interview, the jury foreman stated, "When I had to sign off on the verdict, the sheet that was given to me—there was just a feeling of disgust that came over me knowing that my signature and [Casey Anthony's] signature were going to be on the same sheet," but that "there was a suspicion of [George Anthony]" that played a part in the jury's deliberations. The foreman stated his work experience enabled him to read people and that George Anthony "had a very selective memory" which stayed with the jurors, emphasizing that the jury was frustrated by the motive, cause of death, and George Anthony. "That a mother would want to do something like that to her child just because she wanted to go out and party," he said. "We felt that the motive that the state provided was, in our eyes, was just kind of weak." Although the foreman objected to Casey Anthony's behavior in the wake of her daughter's death, he and the jury did not factor that behavior into their verdict because it was not illegal. They initially took a vote on the murder count, which was 10–2 (two voting guilty), but after more than ten hours of deliberation, they decided the only charges they felt were proven were the four counts of lying to law enforcement.
Perry announced at sentencing on July 7 that he would withhold the jurors' names for several months because of concern that "Some people would like to take something out on them." He released the jurors' names on October 25, 2011. On May 6, 2013, he stated that he believed there was sufficient evidence to convict Casey Anthony, even though most of the evidence was circumstantial, and that he was shocked by the not-guilty verdict.
Anthony family
Mark Lippman, the attorney for George and Cindy Anthony, told ABC News that the family received death threats after the not-guilty verdict was rendered. In response to the verdict, a statement was released by Lippman on behalf of the Anthony family (George, Cindy and Lee Anthony):
While the family may never know what has happened to Caylee Marie Anthony, they now have closure for this chapter of their life. They will now begin the long process of rebuilding their lives. Despite the baseless defense chosen by Casey Anthony, the family believes that the Jury made a fair decision based on the evidence presented, the testimony presented, the scientific information presented and the rules that were given to them by the Honorable Judge Perry to guide them. The family hopes that they will be given the time by the media to reflect on this verdict and decide the best way to move forward privately.
It was alleged in press reports that Cindy Anthony had perjured herself when telling jurors she—not Casey Anthony—was the one who used her family computer to search the Internet for "chloroform". The state attorney's office said she would not be charged.
On July 6, 2011, Anthony's jailhouse letters were released to the general public. They were originally released (though not to the public) in April 2010 by prosecutors preparing for the Anthony trial. In more than 250 handwritten pages, Anthony discusses her life in jail, what she misses, and her plans for the future if freed. On July 8, 2011, Cindy Anthony had scheduled a visit to meet with Casey at 7:00 p.m., but Casey declined to meet with her mother. Mark Lippman told Reuters during the trial that Casey had cut off communication with her parents. It was later announced that George and Cindy Anthony would be appearing on Dr. Phil in September 2011 to tell their story.
Casey left for an undisclosed location not long after the verdict. However, on August 12, she was ordered to return to Florida to serve a year's supervised probation for an unrelated check-fraud conviction. When she pleaded guilty to that charge in January 2010, the judge in that case intended for Casey to serve her probation after proceedings in the murder case concluded, but an error in the sentencing documents allowed her to serve her probation while awaiting trial. Casey returned to Florida on August 25 and served out her probation in an undisclosed location. Due to numerous threats against her life, the Department of Corrections did not enter her information into the state parolee database. In August 2011, George and Cindy issued a statement that Casey would not be living at their home when she returned to Florida to serve her probation. According to Huffington Post, she was reportedly working with her probation officer to take online college classes in an unspecified field, while protected by her security, at an undisclosed educational institution.
In August 2011, the Florida Department of Children and Families released a report based on a three-year investigation into the disappearance and death of Caylee. An agency spokesperson stated, "It is the conclusion of the [DCF] that [Casey Anthony] failed to protect her child from harm either through her actions or lack of actions, which tragically resulted in the child's untimely death."
Casey filed for bankruptcy with the Middle District of Florida Bankruptcy Court on January 27, 2013. Her estimated liabilities were between $500,000 and $1 million.
Civil suits
In September 2008, a woman named Zenaida Gonzalez sued Casey for defamation. During the investigation, Anthony told investigators that she left 2-year-old Caylee with a babysitter named Zenaida Fernandez-Gonzalez—also known as "Zanny"—on June 16 at the stairs of a specific apartment in the Sawgrass apartment complex located in Orlando. Zenaida Gonzalez, who was listed on apartment records as having visited apartments on that date, was questioned by police, but stated she did not know Casey or Caylee. Her defamation suit sought compensatory and punitive damages, alleging that Casey willfully damaged her reputation. Gonzalez told reporters that she lost her job, was evicted from her house, and received death threats against herself and her children as a result of Anthony's lies. Gonzalez' lawyer, John Morgan, said he wanted to interrogate Anthony about Caylee's death because it was "the essence" of the defamation suit.
On October 8, 2011, Morgan deposed Casey via a video conference. She exercised her Fifth Amendment right against self-incrimination and answered only a couple of factual questions. Morgan felt that was improper, but legal experts thought that Anthony was well within her rights to plead the Fifth until her appeals of the convictions for lying to officers had been exhausted. Gonzalez' attorneys sought and received permission to obtain Anthony's address (though it was kept sealed from the public) so they could subpoena her to testify, even if she only took the stand long enough to plead the Fifth. However, Gonzalez had been willing to drop the suit if Anthony were to apologize to her and compensate her for pain and suffering. In September 2015, a judge ruled in favor of Anthony, stating: "There is nothing in the statement…to support (Gonzalez') allegations that (Anthony) intended to portray (the nanny) as a child kidnapper and potentially a child killer."
In July 2011, Texas EquuSearch (TES), a non-profit group which assisted in the search for Caylee from July to December 2008 when she was believed to be missing, sued Anthony for fraud and unjust enrichment. TES estimates that it spent more than $100,000 searching for Caylee even though she was already dead. TES founder and director Tim Miller estimates that the abortive search for Caylee expended 40% of the group's yearly resources which could have been spent looking for other missing children. It only learned that Anthony knew all along that Caylee was dead when the trial began. TES and Anthony eventually settled out of court on October 18, 2013. TES was listed as a creditor to Anthony and was entitled to $75,000.
"Caylee's Law"
Since the end of the trial, various movements have arisen for the creation of a new law, called "Caylee's Law", that would impose stricter requirements on parents to notify law enforcement of the death or disappearance of a child. One such petition, circulated via Change.org, has gained nearly 1.3 million electronic signatures. In response to this and other petitions, lawmakers in four states—Florida, Oklahoma, New York, and West Virginia—have begun drafting versions of "Caylee's Law". The law in Oklahoma would require a child's parent or guardian to notify police of a missing child within 24 hours, and would also stipulate a time frame for notification of the disappearance of a young child under the age of 12. The Florida law would make it a felony if a parent or legal guardian fails to report a missing child in a timely manner if they could have known the child would be in danger. The call for mandatory reporting laws has been criticized as being "reactive, overly indiscriminating and even counterproductive". One critic noted the law could lead to overcompliance and false reports by parents wary of becoming suspects, wasting police resources and leading to legitimate abductions going uninvestigated during the critical first few hours. Additionally, innocent people could get snared in the law for searching for a child instead of immediately calling police.
Memorials and tribute songs
Different artists have written songs in Caylee's memory. Jon Whynock performed his own version at her memorial service in February 2009, and Rascal Flatts' Gary LeVox collaborated with country comedian and radio host Cledus T. Judd and songwriter Jimmy Yeary to write a song titled "She's Going Places" in Caylee's memory.
Later information
In November 2012, WKMG-TV television in Orlando reported that police never investigated Firefox browser evidence on Casey's computer the day of Caylee's death; they only looked at Internet Explorer evidence. The browser history showed that someone at the Anthony household, using a password-protected account Casey used, used Firefox to do a Google search for "foolproof suffocation" at 2:51 p.m., and then clicked on an article criticizing pro-suicide websites promoting "foolproof" ways to die, including the idea of committing suicide by taking poison and putting a plastic bag over one's head. The browser then recorded activity on MySpace, a site used by Casey but not George. The station learned about this information from Casey's attorney Jose Baez who mentioned it in his book on the case, speculating that George had contemplated suicide after Caylee's death. He conceded to reporters that the records are open to interpretation; however, he speculated that the state may have chosen not to introduce the search at trial because, according to Baez, the computer records tend to refute the timeline stated by George, which was that Casey left at 12:50 p.m. An analysis by John Goetz, a retired engineer and computer expert in Connecticut, revealed that her password-protected computer account shows activity on the home computer at 1:39 p.m., with activity on her AIM account, as well as MySpace and Facebook.
In April 2016, transcripts of two 2015 affidavits of private investigator Dominic Casey were filed on the court docket in the matter of Kronk v. Anthony and picked up by news services in May 2016. In one affidavit, Dominic Casey stated that on July 26, 2008, Baez admitted to him that Casey Anthony murdered Caylee Anthony "and dumped the body somewhere and, he needed all the help he could get to find the body before anyone else did". He also claimed that Baez had a sexual relationship with Anthony, and that "Casey told me she had to do what Jose said because she had no money for her defense." Baez "vehemently" denied a sexual relationship.
In March 2017, Casey Anthony gave an interview with the Associated Press. In the interview, she admits to lying to police. When asked about the drowning defense, Anthony stated, "Everyone has their theories, I don't know. As I stand here today, I can't tell you one way or another. The last time I saw my daughter, I believed she was alive and was going to be OK, and that's what was told to me."
In January 2020, Kronk lost an appeal of his defamation case against Anthony and her attorneys. A US District Court judge upheld a lower court finding that there was not enough evidence to prove she willfully defamed Kronk.
See also
List of solved missing person cases
List of unsolved deaths
Media circus
Murder of Lorenzo González Cacho, unsolved murder of an 8-year-old Puerto Rican child, in which his mother figured as a suspect
Murder of Travis Alexander, a case compared to that of Anthony, with apparent similarities in coverage and the alleged perpetrators.
Trial by media
Unreported missing
References
External links
Central Florida News 13 resources:
Timeline of complete case
Link to daily news stories about the trial
List of all witnesses called
List of various legal documents, including documentary evidence released by the state from 2009
Miscellaneous documentary evidence released by the state from 2008, Discovery Channel
Casey Anthony news coverage, from WFTV-TV (ABC 9) in Orlando, FL
Casey Anthony news coverage, from WKMG-TV (CBS 6) in Orlando, FL
Casey Anthony news coverage, from WESH-TV (NBC 2) in Orlando, FL
Casey Anthony news coverage, from WOFL-TV (FOX 35) in Orlando, FL
JCS – Criminal Psychology – There's Something About Casey...
2000s missing person cases
2008 in Florida
2011 in Florida
21st century in Orlando, Florida
Child deaths
Criminal trials that ended in acquittal
Deaths by person in the United States
Formerly missing people
June 2008 events in the United States
Missing person cases in Florida
Trials in the United States
Unsolved deaths
Women in Florida |
175973 | https://en.wikipedia.org/wiki/Overclocking | Overclocking | In computing, overclocking is the practice of increasing the clock rate of a computer to exceed that certified by the manufacturer. Commonly, operating voltage is also increased to maintain a component's operational stability at accelerated speeds. Semiconductor devices operated at higher frequencies and voltages increase power consumption and heat. An overclocked device may be unreliable or fail completely if the additional heat load is not removed or power delivery components cannot meet increased power demands. Many device warranties state that overclocking or over-specification voids any warranty; however, an increasing number of manufacturers allow overclocking as long as it is performed (relatively) safely.
Overview
The purpose of overclocking is to increase the operating speed of a given component. Normally, on modern systems, the target of overclocking is increasing the performance of a major chip or subsystem, such as the main processor or graphics controller, but other components, such as system memory (RAM) or system buses (generally on the motherboard), are commonly involved. The trade-offs are an increase in power consumption (heat), fan noise (cooling), and shortened lifespan for the targeted components. Most components are designed with a margin of safety to deal with operating conditions outside of a manufacturer's control; examples are ambient temperature and fluctuations in operating voltage. Overclocking techniques in general aim to trade away this safety margin by setting the device to run in the higher end of the margin, with the understanding that temperature and voltage must be more strictly monitored and controlled by the user. For example, operating temperature would need to be more strictly controlled with increased cooling, as the part will be less tolerant of increased temperatures at the higher speeds. The base operating voltage may also be increased to compensate for unexpected voltage drops and to strengthen signalling and timing signals, as low-voltage excursions are more likely to cause malfunctions at higher operating speeds.
While most modern devices are fairly tolerant of overclocking, all devices have finite limits. Generally for any given voltage most parts will have a maximum "stable" speed where they still operate correctly. Past this speed, the device starts giving incorrect results, which can cause malfunctions and sporadic behavior in any system depending on it. While in a PC context the usual result is a system crash, more subtle errors can go undetected, which over a long enough time can give unpleasant surprises such as data corruption (incorrectly calculated results, or worse writing to storage incorrectly) or the system failing only during certain specific tasks (general usage such as internet browsing and word processing appear fine, but any application wanting advanced graphics crashes the system).
At this point, an increase in operating voltage of a part may allow more headroom for further increases in clock speed, but the increased voltage can also significantly increase heat output, as well as shorten the lifespan further. At some point, there will be a limit imposed by the ability to supply the device with sufficient power, the user's ability to cool the part, and the device's own maximum voltage tolerance before it achieves destructive failure. Overzealous use of voltage or inadequate cooling can rapidly degrade a device's performance to the point of failure, or in extreme cases outright destroy it.
The speed gained by overclocking depends largely upon the applications and workloads being run on the system, and what components are being overclocked by the user; benchmarks for different purposes are published.
Underclocking
Conversely, the primary goal of underclocking is to reduce power consumption and the resultant heat generation of a device, with the trade-offs being lower clock speeds and reductions in performance. Reducing the cooling requirements needed to keep hardware at a given operational temperature has knock-on benefits such as lowering the number and speed of fans to allow quieter operation, and in mobile devices increase the length of battery life per charge. Some manufacturers underclock components of battery-powered equipment to improve battery life, or implement systems that detect when a device is operating under battery power and reduce clock frequency.
Underclocking and undervolting would be attempted on a desktop system to have it operate silently (such as for a home entertainment center) while potentially offering higher performance than currently offered by low-voltage processor offerings. This would use a "standard-voltage" part and attempt to run with lower voltages (while attempting to keep the desktop speeds) to meet an acceptable performance/noise target for the build. This was also attractive as using a "standard voltage" processor in a "low voltage" application avoided paying the traditional price premium for an officially certified low voltage version. However again like overclocking there is no guarantee of success, and the builder's time researching given system/processor combinations and especially the time and tedium of performing many iterations of stability testing need to be considered. The usefulness of underclocking (again like overclocking) is determined by what processor offerings, prices, and availability are at the specific time of the build. Underclocking is also sometimes used when troubleshooting.
Enthusiast culture
Overclocking has become more accessible with motherboard makers offering overclocking as a marketing feature on their mainstream product lines. However, the practice is embraced more by enthusiasts than professional users, as overclocking carries a risk of reduced reliability, accuracy and damage to data and equipment. Additionally, most manufacturer warranties and service agreements do not cover overclocked components nor any incidental damages caused by their use. While overclocking can still be an option for increasing personal computing capacity, and thus workflow productivity for professional users, the importance of stability testing components thoroughly before employing them into a production environment cannot be overstated.
Overclocking offers several draws for overclocking enthusiasts. Overclocking allows testing of components at speeds not currently offered by the manufacturer, or at speeds only officially offered on specialized, higher-priced versions of the product. A general trend in the computing industry is that new technologies tend to debut in the high-end market first, then later trickle down to the performance and mainstream market. If the high-end part only differs by an increased clock speed, an enthusiast can attempt to overclock a mainstream part to simulate the high-end offering. This can give insight on how over-the-horizon technologies will perform before they are officially available on the mainstream market, which can be especially helpful for other users considering if they should plan ahead to purchase or upgrade to the new feature when it is officially released.
Some hobbyists enjoy building, tuning, and "Hot-Rodding" their systems in competitive benchmarking competitions, competing with other like-minded users for high scores in standardized computer benchmark suites. Others will purchase a low-cost model of a component in a given product line, and attempt to overclock that part to match a more expensive model's stock performance. Another approach is overclocking older components to attempt to keep pace with increasing system requirements and extend the useful service life of the older part or at least delay a purchase of new hardware solely for performance reasons. Another rationale for overclocking older equipment is even if overclocking stresses equipment to the point of failure earlier, little is lost as it is already depreciated, and would have needed to be replaced in any case.
Components
Technically any component that uses a timer (or clock) to synchronize its internal operations can be overclocked. Most efforts for computer components, however, focus on specific components such as processors (CPUs), video cards, motherboard chipsets, and RAM. Most modern processors derive their effective operating speeds by multiplying a base clock (processor bus speed) by an internal multiplier within the processor (the CPU multiplier) to attain their final speed.
Computer processors generally are overclocked by manipulating the CPU multiplier if that option is available, but the processor and other components can also be overclocked by increasing the base speed of the bus clock. Some systems allow additional tuning of other clocks (such as a system clock) that influence the bus clock speed that, again is multiplied by the processor to allow for finer adjustments of the final processor speed.
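As a rough illustration of this relationship, the following Python sketch derives an effective core clock from a base clock and a multiplier; the figures used are hypothetical examples rather than the specifications of any particular processor.

# Minimal sketch: the effective CPU clock as base clock times multiplier.
# All figures below are hypothetical examples, not real product specifications.

def effective_clock_mhz(base_clock_mhz, multiplier):
    """Core clock derived from a bus/base clock and a CPU multiplier."""
    return base_clock_mhz * multiplier

stock = effective_clock_mhz(100.0, 36)              # 100 MHz x 36 = 3600 MHz
raised_multiplier = effective_clock_mhz(100.0, 40)  # raising only the multiplier
raised_base_clock = effective_clock_mhz(103.0, 36)  # raising only the base clock
print(stock, raised_multiplier, raised_base_clock)  # 3600.0 4000.0 3708.0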
Most OEM systems do not expose to the user the adjustments needed to change processor clock speed or voltage in the BIOS of the OEM's motherboard, which precludes overclocking (for warranty and support reasons). The same processor installed on a different motherboard offering adjustments will allow the user to change them.
Any given component will ultimately stop operating reliably past a certain clock speed. Components will generally show some sort of malfunctioning behavior or other indication of compromised stability that alerts the user that a given speed is not stable, but there is always a possibility that a component will permanently fail without warning, even if voltages are kept within some pre-determined safe values. The maximum speed is determined by overclocking to the point of first instability, then accepting the last stable slower setting. Components are only guaranteed to operate correctly up to their rated values; beyond that different samples may have different overclocking potential. The end-point of a given overclock is determined by parameters such as available CPU multipliers, bus dividers, voltages; the user's ability to manage thermal loads, cooling techniques; and several other factors of the individual devices themselves such as semiconductor clock and thermal tolerances, interaction with other components and the rest of the system.
Considerations
There are several things to be considered when overclocking. First is to ensure that the component is supplied with adequate power at a voltage sufficient to operate at the new clock rate. Supplying the power with improper settings or applying excessive voltage can permanently damage a component.
In a professional production environment, overclocking is only likely to be used where the increase in speed justifies the cost of the expert support required, the possibly reduced reliability, the consequent effect on maintenance contracts and warranties, and the higher power consumption. If faster speed is required it is often cheaper when all costs are considered to buy faster hardware.
Cooling
All electronic circuits produce heat generated by the movement of electric current. As clock frequencies in digital circuits and voltage applied increase, the heat generated by components running at the higher performance levels also increases. The relationship between clock frequencies and thermal design power (TDP) are linear. However, there is a limit to the maximum frequency which is called a "wall". To overcome this issue, overclockers raise the chip voltage to increase the overclocking potential. Voltage increases power consumption and consequently heat generation significantly (proportionally to the square of the voltage in a linear circuit, for example); this requires more cooling to avoid damaging the hardware by overheating. In addition, some digital circuits slow down at high temperatures due to changes in MOSFET device characteristics. Conversely, the overclocker may decide to decrease the chip voltage while overclocking (a process known as undervolting), to reduce heat emissions while performance remains optimal.
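The scaling described above can be illustrated with a small calculation. Assuming the simplified dynamic-power relationship in which power grows linearly with frequency and with the square of voltage (ignoring static leakage and other effects), a rough estimate in Python would be:

# Sketch of the simplified dynamic-power scaling P ~ f * V^2.
# The baseline and overclocked figures are hypothetical examples.

def scaled_power(p_old_watts, f_old_ghz, f_new_ghz, v_old, v_new):
    """Estimate the new dynamic power when frequency and voltage change."""
    return p_old_watts * (f_new_ghz / f_old_ghz) * (v_new / v_old) ** 2

baseline_watts = 95.0                                                # at 3.6 GHz and 1.20 V
print(round(scaled_power(baseline_watts, 3.6, 4.2, 1.20, 1.35), 1))  # about 140.3 W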
Stock cooling systems are designed for the amount of power produced during non-overclocked use; overclocked circuits can require more cooling, such as by powerful fans, larger heat sinks, heat pipes and water cooling. Mass, shape, and material all influence the ability of a heatsink to dissipate heat. Efficient heatsinks are often made entirely of copper, which has high thermal conductivity, but is expensive. Aluminium is more widely used; it has good thermal characteristics, though not as good as copper, and is significantly cheaper. Cheaper materials such as steel do not have good thermal characteristics. Heat pipes can be used to improve conductivity. Many heatsinks combine two or more materials to achieve a balance between performance and cost.
Water cooling carries waste heat to a radiator. Thermoelectric cooling devices which actually refrigerate using the Peltier effect can help with high thermal design power (TDP) processors made by Intel and AMD in the early twenty-first century. Thermoelectric cooling devices create temperature differences between two plates by running an electric current through the plates. This method of cooling is highly effective, but itself generates significant heat elsewhere which must be carried away, often by a convection-based heatsink or a water cooling system.
Other cooling methods are forced convection and phase transition cooling which is used in refrigerators and can be adapted for computer use. Liquid nitrogen, liquid helium, and dry ice are used as coolants in extreme cases, such as record-setting attempts or one-off experiments rather than cooling an everyday system. In June 2006, IBM and Georgia Institute of Technology jointly announced a new record in silicon-based chip clock rate (the rate a transistor can be switched at, not the CPU clock rate) above 500 GHz, which was done by cooling the chip with liquid helium. Set in November 2012, the CPU Frequency World Record is 8.794 GHz as of January 2022. These extreme methods are generally impractical in the long term, as they require refilling reservoirs of vaporizing coolant, and condensation can form on chilled components. Moreover, silicon-based junction gate field-effect transistors (JFETs) degrade at sufficiently low temperatures and eventually cease to function or "freeze out" at still lower temperatures, since the silicon ceases to be semiconducting, so using extremely cold coolants may cause devices to fail.
Submersion cooling, used by the Cray-2 supercomputer, involves sinking a part of computer system directly into a chilled liquid that is thermally conductive but has low electrical conductivity. The advantage of this technique is that no condensation can form on components. A good submersion liquid is Fluorinert made by 3M, which is expensive. Another option is mineral oil, but impurities such as those in water might cause it to conduct electricity.
Amateur overclocking enthusiasts have used a mixture of dry ice and a solvent with a low freezing point, such as acetone or isopropyl alcohol. This cooling bath, often used in laboratories, achieves a temperature of −78 °C. However, this practice is discouraged due to its safety risks; the solvents are flammable and volatile, and dry ice can cause frostbite (through contact with exposed skin) and suffocation (due to the large volume of carbon dioxide generated when it sublimes).
Stability and functional correctness
As an overclocked component operates outside of the manufacturer's recommended operating conditions, it may function incorrectly, leading to system instability. Another risk is silent data corruption by undetected errors. Such failures might never be correctly diagnosed and may instead be incorrectly attributed to software bugs in applications, device drivers, or the operating system. Overclocked use may permanently damage components enough to cause them to misbehave (even under normal operating conditions) without becoming totally unusable.
A large-scale 2011 field study of hardware faults causing a system crash for consumer PCs and laptops showed a four to 20 times increase (depending on CPU manufacturer) in system crashes due to CPU failure for overclocked computers over an eight-month period.
In general, overclockers claim that testing can ensure that an overclocked system is stable and functioning correctly. Although software tools are available for testing hardware stability, it is generally impossible for any private individual to thoroughly test the functionality of a processor. Achieving good fault coverage requires immense engineering effort; even with all of the resources dedicated to validation by manufacturers, faulty components and even design faults are not always detected.
A particular "stress test" can verify only the functionality of the specific instruction sequence used in combination with the data and may not detect faults in other operations. For example, an arithmetic operation may produce the correct result but incorrect flags; if the flags are not checked, the error will go undetected.
To further complicate matters, in process technologies such as silicon on insulator (SOI), devices display hysteresis—a circuit's performance is affected by the events of the past, so without carefully targeted tests it is possible for a particular sequence of state changes to work at overclocked rates in one situation but not another even if the voltage and temperature are the same. Often, an overclocked system which passes stress tests experiences instabilities in other programs.
In overclocking circles, "stress tests" or "torture tests" are used to check for correct operation of a component. These workloads are selected as they put a very high load on the component of interest (e.g. a graphically intensive application for testing video cards, or different math-intensive applications for testing general CPUs). Popular stress tests include Prime95, Everest, Superpi, OCCT, AIDA64, Linpack (via the LinX and IntelBurnTest GUIs), SiSoftware Sandra, BOINC, Intel Thermal Analysis Tool and Memtest86. The hope is that any functional-correctness issues with the overclocked component will manifest themselves during these tests, and if no errors are detected during the test, then the component is deemed "stable". Since fault coverage is important in stability testing, the tests are often run for long periods of time, hours or even days. An overclocked computer is sometimes described using the number of hours and the stability program used, such as "prime 12 hours stable".
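The idea behind such tests can be illustrated with a trivial sketch: repeat a deterministic calculation and flag any run whose result differs from a reference value. This is only an illustration of the principle; real tools such as Prime95 use far more demanding workloads and far better fault coverage.

# Minimal correctness-style stress loop: re-run a deterministic workload and
# compare each result against a reference computed once. Illustrative only.
import time

def workload():
    # An arbitrary deterministic, CPU-heavy calculation.
    return sum(i * i for i in range(200_000)) % 1_000_003

def stress(seconds=10.0):
    reference = workload()
    deadline = time.time() + seconds
    while time.time() < deadline:
        if workload() != reference:
            return False              # a mismatch suggests instability or corruption
    return True                       # no mismatch observed (not proof of stability)

print("no errors detected" if stress() else "error detected")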
Factors allowing overclocking
Overclockability arises in part due to the economics of the manufacturing processes of CPUs and other components. In many cases components are manufactured by the same process, and tested after manufacture to determine their actual maximum ratings. Components are then marked with a rating chosen by the market needs of the semiconductor manufacturer. If manufacturing yield is high, more higher-rated components than required may be produced, and the manufacturer may mark and sell higher-performing components as lower-rated for marketing reasons. In some cases, the true maximum rating of the component may exceed even the highest rated component sold. Many devices sold with a lower rating may behave in all ways as higher-rated ones, while in the worst case operation at the higher rating may be more problematical.
Notably, higher clocks always mean greater waste heat generation, as the semiconductors must charge and discharge to ground more often each second. In some cases, this means that the chief drawback of the overclocked part is that far more heat is dissipated than the maximums published by the manufacturer. Pentium architect Bob Colwell calls overclocking an "uncontrolled experiment in better-than-worst-case system operation".
Measuring effects of overclocking
Benchmarks are used to evaluate performance, and they can become a kind of "sport" in which users compete for the highest scores. As discussed above, stability and functional correctness may be compromised when overclocking, and meaningful benchmark results depend on the correct execution of the benchmark. Because of this, benchmark scores may be qualified with stability and correctness notes (e.g. an overclocker may report a score, noting that the benchmark only runs to completion 1 in 5 times, or that signs of incorrect execution such as display corruption are visible while running the benchmark). A widely used test of stability is Prime95, which has built-in error checking that fails if the computer is unstable.
Using only the benchmark scores, it may be difficult to judge the difference overclocking makes to the overall performance of a computer. For example, some benchmarks test only one aspect of the system, such as memory bandwidth, without taking into consideration how higher clock rates in this aspect will improve the system performance as a whole. Apart from demanding applications such as video encoding, high-demand databases and scientific computing, memory bandwidth is typically not a bottleneck, so a great increase in memory bandwidth may be unnoticeable to a user depending on the applications used. Other benchmarks, such as 3DMark, attempt to replicate game conditions.
Manufacturer and vendor overclocking
Commercial system builders or component resellers sometimes overclock to sell items at higher profit margins. The seller makes more money by overclocking lower-priced components which are found to operate correctly and selling equipment at prices appropriate for higher-rated components. While the equipment will normally operate correctly, this practice may be considered fraudulent if the buyer is unaware of it.
Overclocking is sometimes offered as a legitimate service or feature for consumers, in which a manufacturer or retailer tests the overclocking capability of processors, memory, video cards, and other hardware products. Several video card manufacturers now offer factory-overclocked versions of their graphics accelerators, complete with a warranty, usually at a price intermediate between that of the standard product and a non-overclocked product of higher performance.
It is speculated that manufacturers implement overclocking prevention mechanisms such as CPU multiplier locking to prevent users from buying lower-priced items and overclocking them. These measures are sometimes marketed as a consumer protection benefit, but are often criticized by buyers.
Many motherboards are sold, and advertised, with extensive facilities for overclocking implemented in hardware and controlled by BIOS settings.
CPU multiplier locking
CPU multiplier locking is the process of permanently setting a CPU's clock multiplier. AMD CPUs are unlocked in early editions of a model and locked in later editions, but nearly all Intel CPUs are locked and recent models are very resistant to unlocking to prevent overclocking by users. AMD ships unlocked CPUs with their Opteron, FX, Ryzen and Black Series line-up, while Intel uses the monikers of "Extreme Edition" and "K-Series." Intel usually has one or two Extreme Edition CPUs on the market as well as X series and K series CPUs analogous to AMD's Black Edition. AMD has the majority of their desktop range in a Black Edition.
Users usually unlock CPUs to allow overclocking, but sometimes also to allow underclocking in order to maintain front-side bus speed compatibility (on older CPUs) with certain motherboards. Unlocking generally invalidates the manufacturer's warranty, and mistakes can cripple or destroy a CPU. Locking a chip's clock multiplier does not necessarily prevent users from overclocking, as the speed of the front-side bus or PCI multiplier (on newer CPUs) may still be changed to provide a performance increase. AMD Athlon and Athlon XP CPUs are generally unlocked by connecting bridges (jumper-like points) on the top of the CPU with conductive paint or pencil lead. Other CPU models may require different procedures.
Increasing front-side bus or northbridge/PCI clocks can overclock locked CPUs, but this throws many system frequencies out of sync, since the RAM and PCI frequencies are modified as well.
One of the easiest ways to unlock older AMD Athlon XP CPUs was called the pin mod method, because it was possible to unlock the CPU without permanently modifying bridges. A user could simply put one wire (or some more for a new multiplier/Vcore) into the socket to unlock the CPU. More recently, Intel's Skylake (6th generation Core) processors had a bug that allowed the base clock of locked chips to be increased past 102.7 MHz, although certain features would then stop working. Intel had intended to block base clock (BCLK) overclocking of locked processors when designing the Skylake architecture, to prevent consumers from purchasing cheaper components and overclocking them to previously unseen heights (since the CPU's BCLK was no longer tied to the PCI buses); the 102.7 MHz limit Intel intended for LGA1151 "Skylake" processors was later enforced through BIOS updates. All other unlocked processors for LGA1151 and LGA1151 v2 (including the 7th, 8th, and 9th generations) and BGA1440 allow BCLK overclocking (as long as the OEM allows it), while locked processors from the 7th, 8th, and 9th generations cannot go past 102.7 MHz. 10th generation processors, however, can reach 103 MHz on the BCLK.
Advantages
Higher performance in games, en-/decoding, video editing and system tasks at no additional direct monetary expense, but with increased electrical consumption and thermal output.
System optimization: Some systems have "bottlenecks", where small overclocking of one component can help realize the full potential of another component to a greater percentage than when just the limiting hardware itself is overclocked. For instance, many motherboards with AMD Athlon 64 processors limit the clock rate of four units of RAM to 333 MHz. However, the memory performance is computed by dividing the processor clock rate (which is a base number times a CPU multiplier, for instance 1.8 GHz is most likely 9×200 MHz) by a fixed integer such that, at a stock clock rate, the RAM would run at a clock rate near 333 MHz. By manipulating how the processor clock rate is set (usually by adjusting the multiplier), it is often possible to overclock the processor a small amount, around 5-10%, and gain a small increase in RAM clock rate and/or a reduction in RAM latency timings, as illustrated in the sketch after this list.
It can be cheaper to purchase a lower performance component and overclock it to the clock rate of a more expensive component.
Extending life on older equipment (through underclocking/undervolting).
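A minimal sketch of the derived-clock relationship mentioned in the system-optimization item above; the divider and clock values are illustrative, in the spirit of the Athlon 64 example, rather than figures from any real datasheet.

# Sketch: memory clock derived by dividing the CPU clock by a fixed integer.
# Multiplier, base clock, and divider below are hypothetical values.

def derived_clocks(base_mhz, multiplier, ram_divider):
    cpu_mhz = base_mhz * multiplier
    ram_mhz = cpu_mhz / ram_divider
    return cpu_mhz, ram_mhz

print(derived_clocks(200.0, 9, 6))   # stock: (1800.0, 300.0) MHz
print(derived_clocks(210.0, 9, 6))   # ~5% higher base clock: (1890.0, 315.0) MHz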
Disadvantages
General
Higher clock rates and voltages increase power consumption, also increasing electricity cost and heat production. The additional heat increases the ambient air temperature within the system case, which may affect other components. The hot air blown out of the case heats the room it's in.
Fan noise: High-performance fans running at maximum speed used for the required degree of cooling of an overclocked machine can be noisy, some producing 50 dB or more of noise. When maximum cooling is not required, in any equipment, fan speeds can be reduced below the maximum: fan noise has been found to be roughly proportional to the fifth power of fan speed; halving speed reduces noise by about 15 dB (see the sketch after this list). Fan noise can be reduced by design improvements, e.g. with aerodynamically optimized blades for smoother airflow, reducing noise to around 20 dB at approximately 1 metre, or by larger fans rotating more slowly, which produce less noise than smaller, faster fans with the same airflow. Acoustical insulation inside the case, e.g. acoustic foam, can reduce noise. Additional cooling methods which do not use fans can be used, such as liquid and phase-change cooling.
An overclocked computer may become unreliable. For example, Microsoft Windows may appear to work with no problems, but when it is re-installed or upgraded, error messages may be received, such as a "file copy error" during Windows Setup. Because installing Windows is very memory-intensive, decoding errors may occur when files are extracted from the Windows XP CD-ROM.
The lifespan of semiconductor components may be reduced by increased voltages and heat.
Warranties may be voided by overclocking.
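The fifth-power rule of thumb quoted in the fan-noise item above can be checked with a one-line calculation; this is only an illustration of that approximation, not an acoustic model.

# If radiated noise scales roughly with the fifth power of fan speed, the change
# in sound level for a speed ratio r is about 10*log10(r**5) decibels.
import math

def noise_delta_db(speed_ratio):
    return 10 * math.log10(speed_ratio ** 5)

print(round(noise_delta_db(0.5), 1))   # halving fan speed: about -15.1 dB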
Risks of overclocking
Increasing the operation frequency of a component will usually increase its thermal output in a linear fashion, while an increase in voltage usually causes thermal power to increase quadratically. Excessive voltages or improper cooling may cause chip temperatures to rise to dangerous levels, causing the chip to be damaged or destroyed.
Exotic cooling methods used to facilitate overclocking such as water cooling are more likely to cause damage if they malfunction. Sub-ambient cooling methods such as phase-change cooling or liquid nitrogen will cause water condensation, which will cause electrical damage unless controlled; some methods include using kneaded erasers or shop towels to catch the condensation.
Limitations
Overclocking components can only be of noticeable benefit if the component is on the critical path for a process, i.e. if it is a bottleneck. If disk access or the speed of an Internet connection limits the speed of a process, a 20% increase in processor speed is unlikely to be noticed; however, there are some scenarios where increasing the clock speed of a processor actually allows an SSD to be read and written to faster. Overclocking a CPU will not noticeably benefit a game when a graphics card's performance is the "bottleneck" of the game.
Graphics cards
Graphics cards can also be overclocked. There are utilities to achieve this, such as EVGA's Precision, RivaTuner, AMD Overdrive (on AMD cards only), MSI Afterburner, Zotac Firestorm, and the PEG Link Mode on Asus motherboards. Overclocking a GPU will often yield a marked increase in performance in synthetic benchmarks, usually reflected in game performance. It is sometimes possible to see that a graphics card is being pushed beyond its limits before any permanent damage is done by observing on-screen artifacts or unexpected system crashes. It is common to run into one of those problems when overclocking graphics cards; both symptoms at the same time usually mean that the card is severely pushed beyond its heat, clock rate, and/or voltage limits; however, if they are seen when the card is not overclocked, they indicate a faulty card. After a reboot, video settings are reset to standard values stored in the graphics card firmware, and the maximum clock rate of that specific card has now been deduced.
Some overclockers apply a potentiometer to the graphics card to manually adjust the voltage (which usually invalidates the warranty). This allows for finer adjustments, as overclocking software for graphics cards can only go so far. Excessive voltage increases may damage or destroy components on the graphics card or the entire graphics card itself (practically speaking).
RAM
Alternatives
Flashing and unlocking can be used to improve the performance of a video card without technically overclocking, though doing so is much riskier than overclocking just through software.
Flashing refers to using the firmware of a different card with the same (or sometimes similar) core and compatible firmware, effectively making it a higher model card; it can be difficult, and may be irreversible. Sometimes standalone software to modify the firmware files can be found, e.g. NiBiTor (GeForce 6/7 series are well regarded in this aspect), without using firmware for a better model video card. For example, video cards with 3D accelerators (most of them) have two voltage and clock rate settings, one for 2D and one for 3D, but were designed to operate with three voltage stages, the third being somewhere between the aforementioned two, serving as a fallback when the card overheats or as a middle-stage when going from 2D to 3D operation mode. Therefore, it could be wise to set this middle-stage prior to "serious" overclocking, specifically because of this fallback ability; the card can drop down to this clock rate, reducing by a few (or sometimes a few dozen, depending on the setting) percent of its efficiency and cool down, without dropping out of 3D mode (and afterwards return to the desired high performance clock and voltage settings).
Some cards have abilities not directly connected with overclocking. For example, Nvidia's GeForce 6600GT (AGP flavor) has a temperature monitor used internally by the card, invisible to the user if standard firmware is used. Modifying the firmware can display a 'Temperature' tab.
Unlocking refers to enabling extra pipelines or pixel shaders. The 6800LE, the 6800GS and 6800 (AGP models only) were some of the first cards to benefit from unlocking. While these models have either 8 or 12 pipes enabled, they share the same 16x6 GPU core as a 6800GT or Ultra, but pipelines and shaders beyond those specified are disabled; the GPU may be fully functional, or may have been found to have faults which do not affect operation at the lower specification. GPUs found to be fully functional can be unlocked successfully, although it is not possible to be sure that there are undiscovered faults; in the worst case the card may become permanently unusable.
History
Overclocked processors first became commercially available in 1983, when AMD sold an overclocked version of the Intel 8088 CPU. In 1984, some consumers were overclocking IBM's version of the Intel 80286 CPU by replacing the clock crystal. The Xeon W-3175X is the only Xeon with a multiplier unlocked for overclocking.
See also
Clock rate
CPU-Z
Double boot
Dynamic voltage scaling
POWER8 on-chip controller (OCC)
Serial presence detect (SPD)
Super PI
Underclocking
UNIVAC I Overdrive, 1952 unofficial modification
References
Notes
External links
OverClocked inside
How to Overclock a PC, WikiHow
Overclocking guide for the Apple iMac G4 main logic board
Overclocking and benchmark databases
OC Database of all PC hardware for the past decade (applications, modifications and more)
HWBOT: Worldwide Overclocking League – Overclocking competition and data
Comprehensive CPU OC Database
Segunda Convencion Nacional de OC: Overclocking Extremo by Imperio Gamer
Tool for overclock
Computer hardware tuning
Clock signal
Hobbies
IBM PC compatibles |
86894 | https://en.wikipedia.org/wiki/Tiny%20BASIC | Tiny BASIC | Tiny BASIC is a family of dialects of the BASIC programming language that can fit into 4 or fewer KBs of memory. Tiny BASIC was designed in response to the open letter published by Bill Gates complaining about users pirating Altair BASIC, which sold for $150. Tiny BASIC was intended to be a completely free version of BASIC that would run on the same early microcomputers.
Tiny BASIC was released as a specification, not an implementation, published in the September 1975 issue of the People's Computer Company (PCC) newsletter. The article invited programmers to implement it on their machines and send the resulting assembler language implementation back for inclusion in a series of three planned newsletters. Dr. Li-Chen Wang, author of Palo Alto Tiny BASIC, coined the term "copyleft" to describe this concept. The community response was so overwhelming that the newsletter was relaunched as Dr. Dobb's Journal, the first regular periodical to focus on microcomputer software. Dr. Dobb's lasted in print form for 34 years and then online until 2014, when its website became a static archive.
The small size and free source code made these implementations invaluable in the early days of microcomputers in the mid-1970s, when RAM was expensive and typical memory size was only 4 to 8 KB. While the minimal version of Microsoft's Altair BASIC would also run in 4 KB machines, it left only 790 bytes free for BASIC programs. More free space was a significant advantage of Tiny BASIC. To meet these strict size limits, Tiny BASIC dialects generally lacked a variety of features commonly found in other dialects, for instance, most versions lacked string variables, lacked floating point math, and allowed only single-letter variable names.
Tiny BASIC implementations are still used today, for programming microcontrollers such as the Arduino.
History
Altair BASIC
The earliest microcomputers, like the MITS Altair 8800, generally had no built-in input/output (I/O) beyond front-panel switches and LED lamps. Useful work generally required the addition of an I/O expansion card and the use of some form of terminal. At the time, video-based terminals were very expensive, much more than the computer itself, so many users turned to mechanical devices like the Teletype Model 33. The Model 33, like most teleprinters of the era, included a punch tape system intended to allow operators to pre-record their messages and then play them at "high speed", faster than typing the message live. For the early microcomputers, this provided a convenient computer storage format, allowing the users to write programs to paper tape and distribute them to other users.
The Homebrew Computer Club met for the first time in March 1975, and its members soon used the meetings to swap software on punch tape. At the June meeting, a tape containing a pre-release version of Altair BASIC disappeared. The tape was given to Steve Dompier, who passed it on to Dan Sokol, who had access to a high speed tape punch. At the next meeting, 50 copies of Altair BASIC on paper tape appeared in a cardboard box. When Ed Roberts, founder of MITS, learned of this, he stated "Anyone who is using a stolen copy of MITS BASIC should identify himself for what he is, a thief." Bill Gates made this more formal, writing his open letter to hobbyists, complaining that "As the majority of hobbyists must be aware, most of you steal your software."
Tiny BASIC
The complaint was not well received. Among the many responses, Bob Albrecht, another Homebrew member and founder of the People's Computer Company (PCC), felt the best response would be to produce their own BASIC that was completely free to use by anyone. He approached Dennis Allison, a member of the Computer Science faculty at Stanford University, to write a specification for a version of BASIC that would fit in 2 to 3 kilobytes of memory. To aid porting, the design was based on an intermediate language (IL), an interpreter for the interpreter, which meant only a small portion of the total code had to be ported.
Allison's initial design was published in the September 1975 edition of the PCC newsletter, along with an Intel 8080 version of the IL interpreter. The article called on programmers to implement the design on their computer and send the resulting assembler language version back to the PCC. They stated their plans to publish three special newsletters containing these user-submitted versions, along with bug fixes, programs written in the new BASIC, and suggestions and enhancements. The concept gained further notice when it was republished in the January 1976 edition of the ACM Special Interest Group on Programming Languages. Submissions poured in. Among the notable early versions was Tiny BASIC Extended by Dick Whipple and John Arnold which ran in 3K of RAM, added FOR...NXT loops, and allowed a single numeric array. They avoided the use of the IL and wrote it directly in machine code, using octal.
The first of the three planned newsletters, with the title "Dr. Dobb's Journal of Computer Calisthenics & Orthodontia, Running Light Without Overbyte", was published in January 1976. It starts with a note from Albrecht, under the penname "the dragon", suggesting that three editions would not be enough, and asked the readers if they would like to see it continue. It also reprinted the original article on Tiny BASIC from PCC, included the complete listing of Extended TB, and included a number of small BASIC programs including tips-and-tricks from Allison. Response to the first issue was so impressive that the introduction to the second issue stated they had already decided to continue publishing the new newsletter under the simplified name Dr. Dobb's Journal. Over the next several issues, additional versions of the language were published, and similar articles began appearing in other magazines like Interface Age.
Spread
By the middle of 1976, Tiny BASIC interpreters were available for the Intel 8080, the Motorola 6800 and MOS Technology 6502 processors. This was a forerunner of the free software community's collaborative development before the internet allowed easy transfer of files, and was an example of a free software project before the free software movement. Computer hobbyists would exchange paper tapes, cassettes or even retype the files from the printed listings.
Jim Warren, editor of Dr. Dobb's, wrote in the July 1976 ACM Programming Language newsletter about the motivations and methods of this successful project. He started with this: "There is a viable alternative to the problems raised by Bill Gates in his irate letter to computer hobbyists concerning 'ripping off' software. When software is free, or so inexpensive that it's easier to pay for it than to duplicate it, then it won't be 'stolen'." The Bill Gates letter was written to make software into products. The alternative method was to have an experienced professional do the overall design and then outline an implementation strategy. Knowledgeable amateurs would implement the design for a variety of computer systems. Warren predicted this strategy would be continued and expanded.
The May 1976 issue of Dr. Dobbs had Li-Chen Wang's Palo Alto Tiny BASIC for the 8080. The listing began with the usual title, author's name and date but it also had "@COPYLEFT ALL WRONGS RESERVED". A fellow Homebrew Computer Club member, Roger Rauskolb, modified and improved Li-Chen Wang's program and this was published in the December 1976 issue of Interface Age magazine. Roger added his name and preserved the COPYLEFT Notice.
Description
Basic concepts
See BASIC interpreters
Tiny BASIC was designed to use as little memory as possible, and this is reflected in the paucity of features as well as details of its interpreter system. Early microcomputers lacked the RAM and secondary storage for a BASIC compiler, which was more typical of timesharing systems.
Like most BASICs of the era, Tiny BASIC was interactive, with the user typing statements into a command line. As microcomputers of the era were often used with teletype machines or "dumb" terminals, direct editing of existing text was not possible and the editor instead used takeout characters, often the backslash, to indicate where the user backed up to edit existing text.
If the user typed a statement into the command line, the system examined it to see if it started with a number. If it did not, the line was immediately parsed and operated on, potentially generating output via PRINT. This was known as "direct mode".
If the line was entered with a leading number, the number was converted from decimal format, like "50", into an 8-bit value, in this case $32 hexadecimal. This number was used as an index into an array-like storage area where the rest of the line was stored in exactly the format it was typed. When the user typed LIST into the command line, the system would loop over the array, convert each line number back to decimal format, and then print out the rest of the text in the line.
When a program was present in memory and the user typed the RUN command, the system entered "indirect mode". In this mode, a pointer was set to point to the first line of the program, for instance 10 ($0A hex). The original text for that line was then retrieved from the store and run as if the user had just typed it in direct mode. The pointer then advanced to the next line and the process continued.
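The storage scheme and the two modes described above can be sketched in a few lines of modern Python; the names and structure here are illustrative only, since the historical interpreters did all of this on raw bytes in assembly language, and GOTO handling within RUN is omitted.

# Illustrative sketch of line storage plus direct and indirect modes.
program = {}                          # line number -> stored text of the line

def handle(line):
    line = line.strip()
    head = line.split(" ", 1)[0]
    if head.isdigit():                # leading line number: store or delete the line
        number = int(head)
        rest = line[len(head):].strip()
        if rest:
            program[number] = rest
        else:
            program.pop(number, None)
    elif line.upper() == "LIST":      # direct-mode command: print the stored program
        for number in sorted(program):
            print(number, program[number])
    elif line.upper() == "RUN":       # indirect mode: execute stored lines in order
        for number in sorted(program):
            execute(program[number])
    else:
        execute(line)                 # direct mode: run the statement immediately

def execute(statement):
    print("(would execute)", statement)   # stands in for the statement interpreter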
Formal grammar
The grammar is listed below in Backus-Naur form, almost exactly as it was specified in the Design Note. In the listing, an asterisk ("*") denotes zero or more of the object to its left — except for the first asterisk in the definition of "term", which is the multiplication operator; parentheses group objects; and an epsilon ("ε") signifies the empty set. As is common in computer language grammar notation, the vertical bar ("|") distinguishes alternatives, as does their being listed on separate lines. The symbol "CR" denotes a carriage return (usually generated by a keyboard's "Enter" key). A BREAK from the console will interrupt execution of the program.
line ::= number statement CR | statement CR
statement ::= PRINT expr-list
IF expression relop expression THEN statement
GOTO expression
INPUT var-list
LET var = expression
GOSUB expression
RETURN
CLEAR
LIST
RUN
END
expr-list ::= (string|expression) (, (string|expression) )*
var-list ::= var (, var)*
expression ::= (+|-|ε) term ((+|-) term)*
term ::= factor ((*|/) factor)*
factor ::= var | number | (expression)
var ::= A | B | C ... | Y | Z
number ::= digit digit*
digit ::= 0 | 1 | 2 | 3 | ... | 8 | 9
relop ::= < (>|=|ε) | > (<|=|ε) | =
string ::= " ( |!|#|$ ... -|.|/|digit|: ... @|A|B|C ... |X|Y|Z)* "
Note that string wasn't defined in the Design Note.
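To make the expression part of this grammar concrete, the following Python sketch is a recursive-descent rendering of the expression, term and factor productions above (whitespace handling, error reporting, and the rest of the statement grammar are omitted; it is not taken from any historical implementation):

# expression ::= (+|-|e) term ((+|-) term)*
# term       ::= factor ((*|/) factor)*
# factor     ::= var | number | (expression)
# 'variables' maps the single letters A-Z to their current values.

def parse_expression(s, i, variables):
    sign = 1
    if i < len(s) and s[i] in "+-":
        sign = -1 if s[i] == "-" else 1
        i += 1
    value, i = parse_term(s, i, variables)
    value *= sign
    while i < len(s) and s[i] in "+-":
        op = s[i]
        rhs, i = parse_term(s, i + 1, variables)
        value = value + rhs if op == "+" else value - rhs
    return value, i

def parse_term(s, i, variables):
    value, i = parse_factor(s, i, variables)
    while i < len(s) and s[i] in "*/":
        op = s[i]
        rhs, i = parse_factor(s, i + 1, variables)
        value = value * rhs if op == "*" else value // rhs
    return value, i

def parse_factor(s, i, variables):
    if s[i] == "(":
        value, i = parse_expression(s, i + 1, variables)
        return value, i + 1                           # skip the closing parenthesis
    if s[i].isdigit():
        j = i
        while j < len(s) and s[j].isdigit():
            j += 1
        return int(s[i:j]), j
    return variables.get(s[i].upper(), 0), i + 1      # single-letter variable

print(parse_expression("2*(A+3)", 0, {"A": 4})[0])    # prints 14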
This syntax, as simple as it was, added one innovation: GOTO and GOSUB could take an expression rather than just a line number, providing an assigned GOTO rather than the switch statement of the GOTO/GOSUB...OF, a structure then supported in HP Time-Shared BASIC and predating ON...GOTO/GOSUB. The syntax allowing IF...THEN statement (as opposed to just a line number to branch to) was not yet supported in Dartmouth BASIC at this time but had been introduced by Digital and copied by Microsoft.
Implementation in a virtual machine
The Design Note specified a virtual machine, in which the Tiny BASIC interpreter is itself run on a virtual machine interpreter. The designer's idea to use an application virtual machine goes back to Val Schorre (with META II, 1964) and Glennie (Syntax Machine). The choice of a virtual machine approach economized on memory space and implementation effort, although the BASIC programs run thereon were executed somewhat slowly.
Dialects that used the virtual machine included Tiny BASIC Extended, Tom Pittman's Tiny BASIC and NIBL. Other dialects such as Denver Tiny BASIC (DTB) and Palo Alto Tiny BASIC were direct interpreters. Some programmers, such as Fred Greeb with DTB, treated the IL (Interpretive Language) program as pseudocode for the algorithm to implement in assembly language; Denver Tiny BASIC did not use a virtual machine, but it did closely follow the IL program.
This is a representative excerpt from the 120-line IL program:
S1: TST S3,'GO' ;GOTO OR GOSUB?
TST S2,'TO' ;YES...TO, OR...SUB
CALL EXPR ;GET LABEL
DONE ;ERROR IF CR NOT NEXT
XFER ;SET UP AND JUMP
S3: TST S8,'PRINT' ;PRINT.
A common pattern in the program is to test for a keyword or part of a keyword, then act on that information. Each test is an assertion as to what is next in the line buffer. If the assertion fails, control jumps to a subsequent label (usually looking for a new keyword or token). Here the system advances its buffer cursor over any spaces and tests for GO, and if it fails to find it then jumps to line S3. If it finds it, execution continues with the next IL command. In this case, the system next tests for TO, skipping to line S2 if it fails (a test for SUB, to see if this is instead a GOSUB command). If it passes, control continues; in this case, calling an IL subroutine that starts at label EXPR, which parses an expression. In Tiny BASIC, GOTO followed by an expression (a computed GO TO) is as legal as GOTO followed by a plain line number and is the alternative to the ON-GOTO of larger BASIC implementations. The EXPR subroutine pushes the result of the expression onto the arithmetic stack (in this case, the line number). DONE verifies no other text follows the expression and gives an error if it does. XFER pops the number from the stack and transfers execution (GOes TO) the corresponding line number, if it exists.
The following table gives a partial list of the 32 commands of the virtual machine in which the first Tiny BASIC interpreter was written.
TST lbl, 'string' – If string matches the BASIC line, advance the cursor over it and execute the next IL instruction; if the test fails, execute the IL instruction at the label lbl
CALL lbl – Execute the IL subroutine starting at lbl; save the IL address following the CALL on the control stack
DONE – Report a syntax error if, after deleting leading blanks, the cursor is not positioned to reach a carriage return
XFER – Test the value at the top of the AE stack to be within range. If not, report an error. If so, attempt to position the cursor at that line. If it exists, begin interpretation there; if not, report an error.
JMP lbl – Continue execution of the IL at the label specified
RTN – Return to the IL location specified at the top of the control stack
PRS – Print characters from the BASIC text up to but not including the closing quotation mark
PRN – Print the number obtained by popping the top of the expression stack
SPC – Insert spaces to move the print head to the next zone
NLINE – Output a CRLF to the printer
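To make the dispatch pattern concrete, the sketch below hosts a few of these operations in Python. It is illustrative only: the toy IL fragment, the labels and the PRINT operation (a stand-in for output operations such as PRS) are invented here, and the real 120-line IL and its hosts worked at the machine-code level:

def run_il(il, line):
    # Interpret a toy IL fragment over one line of BASIC text (illustration only).
    labels = {lbl: i for i, (lbl, *_) in enumerate(il) if lbl}
    cursor, pc, calls = 0, 0, []
    while pc < len(il):
        _, op, *args = il[pc]
        pc += 1
        if op == "TST":                    # assert a keyword is next, else branch to a label
            fail_label, text = args
            while line[cursor:cursor + 1] == " ":
                cursor += 1                # advance over any spaces
            if line.startswith(text, cursor):
                cursor += len(text)        # match: advance the cursor over it
            else:
                pc = labels[fail_label]    # fail: continue at the given label
        elif op == "CALL":                 # push the return address, enter an IL subroutine
            calls.append(pc)
            pc = labels[args[0]]
        elif op == "RTN":                  # return to the caller, or stop at the top level
            if not calls:
                break
            pc = calls.pop()
        elif op == "JMP":
            pc = labels[args[0]]
        elif op == "PRINT":                # stand-in for the IL's real output operations
            print(args[0])

demo = [
    ("S1", "TST", "S3", "GO"),             # GOTO or GOSUB?
    (None, "TST", "S2", "TO"),             # yes ... TO, or ... SUB
    (None, "PRINT", "GOTO recognised"),
    (None, "RTN"),
    ("S2", "PRINT", "GOSUB or error"),
    (None, "RTN"),
    ("S3", "PRINT", "not a GO statement"),
    (None, "RTN"),
]
run_il(demo, "GOTO 100")                   # prints: GOTO recognised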
Tom Pittman, discussing the IL, says: "The TINY BASIC interpreter was designed by Dennis Allison as a recursive descent parser. Some of the elegant simplicity of this design was lost in the addition of syntactical sugar to the language but the basic form remains. The IL is especially suited to Recursive Descent parsing of TINY BASIC because of the general recursive nature of its procedures and the simplicity of the TINY BASIC tokens. The IL language is effectively optimized for the interpretation of TINY. Experience has shown that the difficulty of adding new features to the language is all out of proportion with the nature of the features. Usually it is necessary to add additional machine language subroutines to support the new features. Often the difficulty outweighs the advantages."
Deviations from the design
Defining Tiny BASIC for the Homebrew Computer Club, Pittman wrote, "Tiny BASIC is a proper subset of Dartmouth BASIC, consisting of the following statement types only: LET, PRINT, INPUT, IF, GOTO, GOSUB, RETURN, END, CLEAR, LIST, RUN. Arithmetic is in 16-bit integers only with the operators + - * / and nested parentheses. There are only the 26 single letter variable names A, B, ...Z, and no functions. There are no strings or arrays... Tiny BASIC specifies line numbers less than 256." He then went on to describe his implementation: "This language has been augmented to include the functions RND, USR, and PEEK and POKE, giving the user access to all his system components in the 6800 from the BASIC program."
Many implementers brought their own experiences with HP Time-Shared BASIC or DEC BASIC-PLUS to their designs and relaxed the formal Tiny BASIC language specification. Of the seven prominent implementations published by 1977:
All added some sort of random number function, typically RND. Though not included in the specification, a newsletter article prior to the Design Note for Tiny BASIC requested only this function.
All enabled LET to be optional, and most let expressions in assignment statements contain relational operators.
All but 6800TB supported statement delimiters in lines, typically the colon (:), although TBX and PATB used other characters (PATB used the semicolon).
In IF statements, all but MINOL removed the need for expressions to contain relational operators (a bare expression was a valid condition). Implementations removed THEN altogether, made it optional, or supported it only for an implied GOTO.
Many modified PRINT to support print zones, using a comma (,) to go to the next zone and a semicolon (;) to not advance the cursor.
All but 6800TB and DTB added NEW.
All but 6800TB and MINOL added a function to return the amount of free memory, under various names (for example, SIZE in PATB and MEM in L1B).
Four implementations added arrays, whether a single, undimensioned array in PATB and L1B or DIMensionable arrays in TBX and DTB.
Four implementations added the REMark statement.
Four implementations added the FOR loop: PATB, NIBL, and L1B offered STEP, while TBX did not support STEP and used a different keyword to end a loop.
Only NIBL had any nod towards structured programming, with a DO/UNTIL construct, despite Allison's lament in Issue 2 about problems with BASIC.
As an alternative to tokenization, to save RAM, TBX, DTB, and MINOL truncated keywords: PR for PRINT, IN for INPUT, RET for RETURN. The full, traditional keywords were not accepted. In contrast, PATB accepted traditional keywords but also allowed any keyword to be abbreviated to its minimal unique string, with a trailing period. For instance, PRINT could be typed P., although PR. and other variations also worked. This system was retained in Level I BASIC for the TRS-80, which used PATB, and was also later found in Atari BASIC and the BASIC of various Sharp pocket computers.
Dialects
The most prominent dialects of Tiny BASIC were the original Design Note, Tiny BASIC Extended, Palo Alto Tiny BASIC, and 6800 Tiny BASIC. However, many other versions of Tiny BASIC existed.
List of prominent dialects
Tiny BASIC was first published in a newsletter offshoot of the People's Computer Company; that newsletter became Dr. Dobb's Journal, a long-lived computing magazine. About ten versions were published in the magazine.
TBX was also known as Texas Tiny BASIC.
Both SCELBAL and 6800 Tiny BASIC were announced in the magazine, but their source code was not published there.
Palo Alto Tiny BASIC
One of the most popular of the many versions of Tiny BASIC was Palo Alto Tiny BASIC, or PATB for short, by Li-Chen Wang. PATB first appeared in the May 1976 edition of Dr. Dobb's Journal, written in a custom assembler language with non-standard mnemonics. This led to further ports that worked with conventional assemblers on the 8080. The first version of the interpreter occupied 1.77 kilobytes of memory and assumed the use of a Teletype machine (TTY) for user input/output. An erratum to the original article appeared in the June/July issue of Dr. Dobb's (Vol. 1, No. 6). This article also included information on adding additional I/O devices, using code for the VDM video display by Processor Technology as an example.
Wang was one of the first to use the word copyleft. In Palo Alto Tiny BASIC's distribution notice, he had written "@COPYLEFT ALL WRONGS RESERVED". Tiny BASIC was not distributed under any formal copyleft terms, but it was presented in a context where source code was being shared and modified. In fact, Wang had earlier contributed edits to Tiny BASIC Extended before writing his own interpreter. He encouraged others to adapt his source code and publish their adaptations, as with Roger Rauskolb's version of PATB published in Interface Age. He himself published a third version in PCC's Reference Book of Personal and Home Computing.
One of the most notable changes in PATB is the addition of the FOR...NEXT loop. In the original Tiny BASIC, loops could only be implemented using IF and GOTO. As in Microsoft BASIC, the upper and lower bounds of the loop were set on loop entry and did not change during the loop, so if one of the bounds was based on a variable expression, changing the variable did not change the bound. The STEP modifier was optional, as in Microsoft BASIC.
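A sketch of that rule in Python (the helper names and data structures are hypothetical, not taken from the PATB source) records the limit once when the loop is entered, so later changes to a variable used in the bound expression have no effect:

def enter_for(loop_stack, variables, var, start, limit_expr):
    # On FOR: set the control variable and evaluate the limit once, at loop entry.
    variables[var] = start
    loop_stack.append((var, limit_expr(variables)))

def do_next(loop_stack, variables):
    # On NEXT: step the control variable and compare it with the frozen limit.
    var, limit = loop_stack[-1]
    variables[var] += 1
    if variables[var] > limit:
        loop_stack.pop()
        return False              # fall through past NEXT
    return True                   # loop back to the statement after FOR

variables = {"N": 3}
loops = []
enter_for(loops, variables, "I", 1, lambda v: v["N"])   # FOR I=1 TO N
variables["N"] = 100              # changing N afterwards does not extend the loop
iterations = 0
while True:
    iterations += 1               # loop body
    if not do_next(loops, variables):
        break
print(iterations)                 # prints 3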
Another significant change was the ability to place several statements on a single line. For reasons not explained, PATB used the semicolon (;) to separate statements, rather than the already common colon (:).
Other changes include the addition of a single numeric array, with the variable name @, in addition to the variables A to Z, and the use of # for not-equals in comparisons, as opposed to <>.
PATB used words for error messages instead of numbers. To reduce the amount of memory required, there were only three messages and they consisted of single words. The system would respond with WHAT? for syntax errors, HOW? for run-time errors like GOTOs to a line that didn't exist or numeric overflows, and SORRY for out-of-memory problems.
Wang also wrote a STARTREK program in his Tiny BASIC that appeared in the July 1976 issue of the People's Computer Company Newsletter.
He later adapted the language into 3K Control Basic for Cromemco, adding variable names of the form letter-digit (e.g., A0 to Z9), logic functions (AND(), OR(), XOR()), a CALL command to execute machine language routines, more PRINT-formatting options, and others (GET() and PUT() instead of PEEK and POKE; I/O port functions).
Palo Alto Tiny BASIC was adapted for many other implementations, including Level I BASIC (1977), BASIC for the Sharp PC-1211 pocket computer (1980), and Astro BASIC (1982, by Jamie Fenton).
MINOL
Written by a junior in high school, MINOL was the only implementation that didn't support the full Design Note: it lacked operator precedence, had only three relops (<, =, #), and omitted GOSUB and RETURN. It only supported unsigned 8-bit precision (in contrast to signed 16-bit precision for every other implementation) and line numbers from 0 to 254.
No spaces were permitted except in strings. A special character returned a random number, another character placed before an expression loaded a string at that address, and a dedicated statement returned control to the operating system. Memory was addressable as if it were a two-dimensional array of high and low bytes (e.g., "(0,0)" to "(255,255)"), and a call statement executed a machine language subroutine.
Miscellaneous dialects
Many dialects appeared in various other publications.
Inspired by PCC's call for Tiny BASICs, Robert Uiterwyk wrote MICRO BASIC 1.3 for the SWTPC (a 6800 system), which SWTPC published in the June 1976 issue of the SWTPC newsletter. Uiterwyk had handwritten the language on a legal tablet. He later expanded the language to 4K, adding support for floating point; this implementation was unique among BASIC interpreters by using binary-coded decimal to 9 digits of precision, with a range up to 10^99, and by being published for free as a "Floppy ROM" magazine insert. An 8K version added string variables and trigonometry functions. Both the 4K and 8K versions were sold by SWTPC. In January 1978, Uiterwyk sold the rights of the source code to Motorola.
Thomas F. Waitman wrote a Tiny BASIC in 1976 for the Hewlett-Packard HP-2640 and HP-2645 terminals (which used the Intel 8008 and 8080 processors), which was published in the Hewlett-Packard Journal.
Published in the December 1976 issue of Interface Age was LLL (Lawrence Livermore Laboratory) BASIC, the first draft of which was developed by Steve Leininger from Allison's specification before Leininger left National Semiconductor for Tandy Corporation. The final interpreter was developed by John Dickenson, Jerry Barber, and John Teeter at the University of Idaho on a contract with LLL. Taking 5K, it included a floating point package developed by David Mead, Hal Brand, and Frank Olken. The program was placed into the public domain by LLL, which developed the system under the auspices of the U.S. Energy Research and Development Administration.
4K BASICs
The 4K edition of Altair BASIC could run within a 4 kB RAM machine, leaving only about 790 bytes free for program code. The Tiny BASIC initiative started in response to the $150 charge for Altair 4K BASIC.
In 1975, Steve Wozniak joined the newly formed Homebrew Computer Club, whose members included Li-Chen Wang (Palo Alto Tiny BASIC) and Tom Pittman (6800 Tiny BASIC). Wozniak concluded that his machine would have to have a BASIC of its own, which he hoped would be the first for the MOS Technology 6502 processor. As the language needed 4 kB of RAM, he made that the minimum memory for the design. Integer BASIC was originally published on Compact Cassette in 1976.
In 1977, Radio Shack (as it was known then) released their first computer, the TRS-80, a Z80 system with Level I BASIC in a 4 kB ROM. Tandy employee Steve Leininger had written the first draft of the NIBL (National Industrial Basic Language) interpreter for the SC/MP while employed at National Semiconductor.
Unable to take that source code with him, he adapted Li-Chen Wang's Palo Alto Tiny BASIC for the original prototype of the TRS-80 Model I. He extensively revised the interpreter, adding floating-point support, simple black-and-white graphics, and several additional statements.
Originally developed in 1979, Sinclair 4K BASIC, written by John Grant, used as its language definition the 1978 American National Standards Institute (ANSI) Minimal BASIC standard, but was itself an incomplete 4 kB implementation with integer arithmetic only.
Microcontroller dialects
Tiny BASIC implementations have been adapted for processor control and for microcontrollers such as the Arduino:
Stephen A. Ness wrote XYBASIC, a 4K integer implementation, for the Mark Williams Company in 1977. The language was often used for process control applications.
Arduino BASIC - Adapted from Gordon Brandly's 68000 Tiny BASIC, ported to C by Mike Field.
Tiny Basic Plus - Adapted from Arduino BASIC by Scott Lawrence.
Half-Byte Tiny Basic - Adapted from Arduino BASIC.
Tiny Basic on the BBC micro:bit - Adapted from Palo Alto Tiny BASIC.
Later implementations
In 2002, Emmanuel Chailloux, Pascal Manoury and Bruno Pagano published a Tiny BASIC in Developing Applications with Objective Caml as an example Objective Caml application.
In 2013, Alex Yang published an implementation in Python.
In 2019, Sergey Kuznetsov published a version in Ruby.
Dialects compared
The following table compares the language features of Tiny BASIC implementations against other prominent BASICs that preceded them.
See also
BASIC interpreter
Copyleft
Dartmouth BASIC
Notes
References
Citations
Bibliography
External links
Tiny Basic User Manual and Experimenter's Kit – by Tom Pittman
Robert Uiterwyk's BASIC and Robert Uiterwyk's Micro Basic – An MC6800 Tiny BASIC later sold with the SWTPC 6800 computer
MINOL – Erik Mueller's MINOL – Tiny BASIC with strings for Intel 8080
Tiny BASIC – A version for the curses character screen handling library
tinyBasic – An implementation written in iziBasic
Tiny BASIC – A live web version, ported to Run BASIC from iziBasic
Palo Alto BASIC in less than 500 lines – Example BASIC interpreter written in Ruby.
TinyBasic – A port of Tom Pittman's TinyBasic C interpreter to Java, C# and Adobe Flex. Includes live web versions.
TinyBASIC Windows – A Windows version of TinyBASIC
Microcomputer software
BASIC interpreters
Free software
Copyleft
BASIC programming language family
Programming languages created in 1975 |
5564371 | https://en.wikipedia.org/wiki/Eclipse%20Process%20Framework | Eclipse Process Framework | The Eclipse Process Framework (EPF) is an open source project that is managed by the Eclipse Foundation. It lies under the top-level Eclipse Technology Project. It has two goals:
To provide an extensible framework and exemplary tools for software process engineering - method and process authoring, library management, configuring and publishing a process.
To provide exemplary and extensible process content for a range of software development and management processes supporting iterative, agile, and incremental development, and applicable to a broad set of development platforms and applications. For instance, EPF provides the OpenUP, an agile software development process optimized for small projects.
By using EPF Composer, engineers can create their own software development process by structuring it using a predefined schema. This schema is an evolution of the SPEM 1.1 OMG specification, referred to as the Unified Method Architecture (UMA). Major parts of UMA went into the adopted revision of SPEM, SPEM 2.0. EPF aims to fully support SPEM 2.0 in the near future. The UMA and SPEM schemata support the organization of large amounts of descriptions for development methods and processes. Such method content and processes do not have to be limited to software engineering, but can also cover other design and engineering disciplines, such as mechanical engineering, business transformation, and sales cycles.
IBM supplies a commercial version, IBM Rational Method Composer.
Limitations
The "Content Variability" capability severely limits users to one-to-one mappings. Processes trying to integrate various aspects may require block-copy-paste style clones to get around this limitation. This may be a limitation of the SPEM model and might be based on presumption that agile methods are being described as these methods tend not to have deep dependencies.
See also
Meta-Process Modeling
References
External links
Eclipse Process Framework site
Open content
Eclipse (software)
Software development process |
1973337 | https://en.wikipedia.org/wiki/Anti-replay | Anti-replay | Anti-replay is a sub-protocol of IPsec that is part of Internet Engineering Task Force (IETF). The main goal of anti-replay is to avoid hackers injecting or making changes in packets that travel from a source to a destination. Anti-replay protocol uses a unidirectional security association in order to establish a secure connection between two nodes in the network. Once a secure connection is established, the anti-replay protocol uses packet sequence numbers to defeat replay attacks as follows: When the source sends a message, it adds a sequence number to its packet; the sequence number starts at 0 and is incremented by 1 for each subsequent packet. The destination maintains a 'sliding window' record of the sequence numbers of validated received packets; it rejects all packets which have a sequence number which is lower than the lowest in the sliding window (i.e. too old) or already appears in the sliding window (i.e. duplicates/replays). Accepted packets, once validated, update the sliding window (displacing the lowest sequence number out of the window if it was already full).
See also
Cryptanalysis
Person in the middle attack
Replay attack
Session ID
Transport Layer Security
References
Internet layer protocols
Cryptographic protocols
Tunneling protocols
Network layer protocols |
12201456 | https://en.wikipedia.org/wiki/Gobuntu | Gobuntu | Gobuntu was a short-lived official derivative of the Ubuntu operating system that was conceived to provide a distribution consisting entirely of free software. It was first released in October 2007.
Because Ubuntu now incorporates a "free software only" installer option, the Gobuntu project was rendered redundant in early 2008. As a result, Canonical made the decision officially to end the Gobuntu project with version 8.04.
In March 2009, it was announced that "Gobuntu 8.04.1 is the final release of Gobuntu. The project has merged back to mainline Ubuntu, so there is no need for a separate distribution".
History and development
Mark Shuttleworth first mentioned the idea of creating an Ubuntu derivative named Gnubuntu consisting entirely of free software, on 24 November 2005. Due to Richard Stallman's disapproval of the name, the project was later renamed Ubuntu-libre. Stallman had previously endorsed a distribution based on Ubuntu called gNewSense, and has criticized Ubuntu for using proprietary and non-free software in successive distributions, most notably, Ubuntu 7.04.
While introducing Ubuntu 7.10, Mark Shuttleworth said that it would
Gobuntu was officially announced by Mark Shuttleworth on 10 July 2007, and daily builds of Gobuntu 7.10 began to be publicly released. The initial version, Gobuntu 7.10, was released on 18 October 2007 with a text-only installer. The next release was the long-term support release codenamed "Hardy Heron", which was also only made available as an alternate installation image.
Release 7.10 initially met with criticism from some free software advocates because it included Mozilla Firefox. Firefox is not considered to be 100% free software because it includes Mozilla Foundation copyrighted icons. The Mozilla licence for the icons states that they "...may not be reproduced without permission". After some debate on the developer list, this problem was quickly addressed by Canonical, and the applications with non-free logos were replaced in the follow-up Gobuntu release, Hardy Heron. Firefox was replaced by Epiphany, which has free logos.
Because some drivers, firmware, and "binary blobs" were removed from Gobuntu, it would run on fewer computers than Ubuntu. Canonical stated at the time of release of 7.10:
On 13 June 2008 Ubuntu Community Manager Jono Bacon announced that the Gobuntu project would end with the release of Gobuntu 8.04:
Shuttleworth explained:
The project ended with the release of version 8.04.1.
Releases
Gobuntu versions were intended to be released twice a year, coinciding with Ubuntu releases. Gobuntu uses the same version numbers and code names as Ubuntu, using the year and month of the release as the version number. The first Gobuntu release, for example, was 7.10, indicating October 2007.
Gobuntu releases are also given code names, using an adjective and an animal with the same first letter e.g.: "Gutsy Gibbon". These are the same as the respective Ubuntu code names. Commonly, Gobuntu releases are referred to by developers and users by only the adjective portion of the code name, for example Gutsy Gibbon is often called just Gutsy.
References
External links
Gobuntu page in Ubuntu Wiki
Discontinued Linux distributions
Ubuntu derivatives
History of free and open-source software
Free software only Linux distributions
Linux distributions |
15381920 | https://en.wikipedia.org/wiki/Information%20Security%20Group | Information Security Group | Founded in 1990, the Information Security Group (ISG) at Royal Holloway, University of London is an academic security group with 18 established academic posts, 10 visiting Professors/Fellows and over 90 research students. The Founder Director of the ISG was Professor Fred Piper, and the current director is Professor Keith Mayes.
In 1998 the ISG was awarded a Queen's Anniversary Prize in recognition of its work in the field of information security. It has also been awarded the status of Academic Centre of Excellence in Cyber Security Research (ACE-CSR) and hosts one of only two UK Centres for Doctoral Training in cyber security (led by Professor Keith Martin).
In 1992, the ISG introduced an MSc in information security, being the first university in the world to offer a postgraduate course in the subject. In 2014 this course received full certification from GCHQ. In 2017 it won the award for the Best Cyber Security Education Programme at SC Awards Europe 2017.
Research topics addressed by the ISG include: the design and evaluation of cryptographic algorithms, protocols and key management; provable security; smart cards; RFID; electronic commerce; security management; mobile telecommunications security; authentication and identity management; cyber-physical systems; embedded security; Internet of Things (IoT); and human related aspects of cyber security. The current director of Research is Professor Jason Crampton.
The ISG includes the Smart Card and IoT Security Centre (previously named Smart Card Centre, SCC) that was founded in October 2002 by Royal Holloway, Vodafone and Giesecke & Devrient, for training and research in the field of Smart cards, applications and related technologies: its research topics include RFID, Near Field Communication (NFC), mobile devices, IoT, and general embedded/implementation system security. In 2008, the SCC was commissioned to perform a counter expertise review of the OV-chipkaart by the Dutch Ministry of Transport, Public Works and Water Management. The SCC has received support from a number of industrial partners, such as Orange Labs (UK), the UK Cards Association, Transport for London and ITSO. The current director of the Smart Card and IoT Security Centre is Dr. Konstantinos Markantonakis.
The ISG also includes a Systems Security Research Lab (S2Lab), which was created in 2014, to investigate how to protect systems from software related threats, such as malware and botnets. The research in the lab covers many different Computer Science-related topics, such as operating systems, computer architecture, program analysis, and machine learning. The current Lab Leader is Dr. Lorenzo Cavallaro.
Current and former associated academics include Whitfield Diffie, Kenny Paterson, David Naccache, Michael Walker, Sean Murphy and Igor Muttik.
Royal Holloway's Information Security Group has been mentioned in popular media, most notably in the New York Times bestseller The Da Vinci Code by Dan Brown.
References
External links
Information Security Group website
Smart Card Centre website
Systems Security Research Lab website
Royal Holloway, University of London website
Royal Holloway, University of London |
21113 | https://en.wikipedia.org/wiki/Napster | Napster | Napster is an audio streaming service provider owned by MelodyVR. It originally launched on June 1, 1999, as a pioneering peer-to-peer (P2P) file sharing software service with an emphasis on digital audio file distribution. Audio songs shared on the service were typically encoded in the MP3 format. It was founded by Shawn Fanning and Sean Parker. As the software became popular, the company ran into legal difficulties over copyright infringement. It ceased operations in 2001 after losing a wave of lawsuits and filed for bankruptcy in June 2002. Its assets were eventually acquired by Roxio, and it re-emerged as an online music store. Best Buy later purchased the service and merged it with its Rhapsody branding on December 1, 2011.
Later, more decentralized projects followed Napster's P2P file-sharing example, such as Gnutella, Freenet, FastTrack, and Soulseek. Some services and software, like AudioGalaxy, LimeWire, Scour, Kazaa / Grokster, Madster, and eDonkey2000, were also brought down or changed due to copyright issues.
Origin
Napster was founded by Shawn Fanning and Sean Parker. Initially, Napster was envisioned by Fanning as an independent peer-to-peer file sharing service. The service operated between June 1999 and July 2001. Its technology allowed people to easily share their MP3 files with other participants. Although the original service was shut down by court order, the Napster brand survived after the company's assets were liquidated and purchased by other companies through bankruptcy proceedings.
History
Although there were already networks that facilitated the distribution of files across the Internet, such as IRC, Hotline, and Usenet, Napster specialized in MP3 files of music and a user-friendly interface. At its peak the Napster service had about 80 million registered users. Napster made it relatively easy for music enthusiasts to download copies of songs that were otherwise difficult to obtain, such as older songs, unreleased recordings, studio recordings, and songs from concert bootleg recordings. Napster paved the way for streaming media services and transformed music into a public good for a brief period of time.
High-speed networks in college dormitories became overloaded, with as much as 61% of external network traffic consisting of MP3 file transfers. Many colleges blocked its use for this reason, even before concerns about liability for facilitating copyright violations on campus.
Macintosh version
The service and software program began as Windows-only. However, in 2000, Black Hole Media wrote a Macintosh client called Macster. Macster was later bought by Napster and designated the official Mac Napster client ("Napster for the Mac"), at which point the Macster name was discontinued. Even before the acquisition of Macster, the Macintosh community had a variety of independently developed Napster clients. The most notable were the open source client MacStar, released by Squirrel Software in early 2000, and Rapster, released by Overcaster Family in Brazil. The release of MacStar's source code paved the way for third-party Napster clients across all computing platforms, giving users advertisement-free music distribution options.
Legal challenges
Heavy metal band Metallica discovered a demo of their song "I Disappear" had been circulating across the network before it was released. This led to it being played on several radio stations across the United States, which alerted Metallica to the fact that their entire back catalogue of studio material was also available. On March 13, 2000, they filed a lawsuit against Napster. A month later, rapper and producer Dr. Dre, who shared a litigator and legal firm with Metallica, filed a similar lawsuit after Napster refused his written request to remove his works from its service. Separately, Metallica and Dr. Dre later delivered to Napster thousands of usernames of people who they believed were pirating their songs. In March 2001, Napster settled both suits, after being shut down by the Ninth Circuit Court of Appeals in a separate lawsuit from several major record labels (see below). In 2000, Madonna's single "Music" was leaked out onto the web and Napster prior to its commercial release, causing widespread media coverage. Verified Napster use peaked with 26.4 million users worldwide in February 2001.
In 2000, the American musical recording company A&M Records along with several other recording companies, through the Recording Industry Association of America (RIAA), sued Napster (A&M Records, Inc. v. Napster, Inc.) on grounds of contributory and vicarious copyright infringement under the US Digital Millennium Copyright Act (DMCA). Napster was faced with the following allegations from the music industry:
That its users were directly violating the plaintiffs' copyrights.
That Napster was responsible for contributory infringement of the plaintiffs' copyrights.
That Napster was responsible for vicarious infringement of the plaintiffs' copyrights.
Napster lost the case in the District Court but then appealed to the U.S. Court of Appeals for the Ninth Circuit. Although it was clear that Napster could have commercially significant non-infringing uses, the Ninth Circuit upheld the District Court's decision. Immediately after, the District Court commanded Napster to keep track of the activities of its network and to restrict access to infringing material when informed of that material's location. Napster wasn't able to comply and thus had to close down its service in July 2001. In 2002, Napster announced that it had filed for bankruptcy and sold its assets to a third party. In a 2018 Rolling Stone article, Kirk Hammett of Metallica upheld the band's opinion that suing Napster was the "right" thing to do.
Promotional power
Along with the accusations that Napster was hurting the sales of the record industry, there were those who felt just the opposite, that file trading on Napster stimulated, rather than hurt, sales. Some evidence may have come in July 2000 when tracks from English rock band Radiohead's album Kid A found their way to Napster three months before the album's release. Unlike Madonna, Dr. Dre or Metallica, Radiohead had never hit the top 20 in the US. Furthermore, Kid A was an album without any singles released, and received relatively little radio airplay. By the time of the album's release, the album was estimated to have been downloaded for free by millions of people worldwide, and in October 2000 Kid A captured the number one spot on the Billboard 200 sales chart in its debut week. According to Richard Menta of MP3 Newswire, the effect of Napster in this instance was isolated from other elements that could be credited for driving sales, and the album's unexpected success suggested that Napster was a good promotional tool for music.
Since 2000, many musical artists, particularly those not signed to major labels and without access to traditional mass media outlets such as radio and television, have said that Napster and successive Internet file-sharing networks have helped get their music heard, spread word of mouth, and may have improved their sales in the long term. One such musician to publicly defend Napster as a promotional tool for independent artists was Dj Xealot, who became directly involved in the 2000 A&M Records Lawsuit. Chuck D from Public Enemy also came out and publicly supported Napster.
Lawsuit
Napster's facilitation of transfer of copyrighted material raised the ire of the Recording Industry Association of America (RIAA), which almost immediately—on December 6, 1999—filed a lawsuit against the popular service. The service would only get bigger as the trial, meant to shut down Napster, also gave it a great deal of publicity. Soon millions of users, many of whom were college students, flocked to it.
After a failed appeal to the Ninth Circuit Court, an injunction was issued on March 5, 2001 ordering Napster to prevent the trading of copyrighted music on its network.
Lawrence Lessig claimed, however, that this decision made little sense from the perspective of copyright protection: "When Napster told the district court that it had developed a technology to block the transfer of 99.4 percent of identified infringing material, the district court told counsel for Napster 99.4 percent was not good enough. Napster had to push the infringements 'down to zero.' If 99.4 percent is not good enough," Lessig concluded, "then this is a war on file-sharing technologies, not a war on copyright infringement."
Shutdown
On July 11, 2001, Napster shut down its entire network in order to comply with the injunction. On September 24, 2001, the case was partially settled. Napster agreed to pay music creators and copyright owners a $26 million settlement for past, unauthorized uses of music, and as an advance against future licensing royalties of $10 million. In order to pay those fees Napster attempted to convert its free service into a subscription system, and thus traffic to Napster was reduced. A prototype solution was tested in 2002: the Napster 3.0 Alpha, using the ".nap" secure file format from PlayMedia Systems and audio fingerprinting technology licensed from Relatable. Napster 3.0 was, according to many former Napster employees, ready to deploy, but it had significant trouble obtaining licenses to distribute major-label music. On May 17, 2002, Napster announced that its assets would be acquired by German media firm Bertelsmann for $85 million with the goal of transforming Napster into an online music subscription service. The two companies had been collaborating since the middle of 2000 where Bertelsmann became the first major label to drop its copyright lawsuit against Napster. Pursuant to the terms of the acquisition agreement, on June 3 Napster filed for Chapter 11 protection under United States bankruptcy laws. On September 3, 2002, an American bankruptcy judge blocked the sale to Bertelsmann and forced Napster to liquidate its assets.
Third-party clients
After official Napster client takedown, multiple third-party client and server implementations continued working and supporting Napster network. These include OpenNap and TekNap.
Reuse of name
Napster's brand and logos were acquired at bankruptcy auction by Roxio, which used them to re-brand the Pressplay music service as Napster 2.0. In September 2008, Napster was purchased by US electronics retailer Best Buy for US$121 million. On December 1, 2011, pursuant to a deal with Best Buy, Napster merged with Rhapsody, with Best Buy receiving a minority stake in Rhapsody. On July 14, 2016, Rhapsody phased out the Rhapsody brand in favor of Napster and has since branded its service internationally as Napster, expanding toward other markets by providing music on-demand as a service to other brands, such as the iHeartRadio app and its All Access music subscription service, which provides subscribers with an on-demand music experience as well as premium radio.
On August 25, 2020, Napster was sold to virtual reality concerts company MelodyVR.
Media
There have been several books that document the experiences of people working at Napster, including:
Joseph Menn's Napster biography
All the Rave: The Rise and Fall of Shawn Fanning's Napster
John Alderman's "Sonic Boom: Napster, MP3, and the New Pioneers of Music"
Steve Knopper's "Appetite for Self Destruction: The Spectacular Crash of the Record Industry in the Digital Age."
The 2003 film The Italian Job features Napster co-founder Shawn Fanning in a cameo as himself. This gave credence to one character's fictional back-story as the original "Napster".
The 2010 film The Social Network features Napster co-founder Sean Parker (played by Justin Timberlake) in the rise of the popular website Facebook.
The 2013 film Downloaded is a documentary about sharing media on the Internet and includes the history of Napster.
See also
Album era
Bittorrent
Napster (streaming music service)
Snocap
TekNap
Further reading
InsightExpress. 2000. Napster and its Users Not violating Copyright Infringement Laws, According to a Survey of the Online Community.
Judge criticises both parties in Napster case
"The File Sharing Movement" in Jack Goldsmith and Tim Wu, Who Controls the Internet: Illusions of a Borderless World Oxford University Press, 2006, pp. 105–125.
References
External links
Official website in 2011 on archive.org
Defunct digital music services or companies
Defunct online companies of the United States
1999 software
File sharing networks
File sharing software
Internet properties established in 1999
Internet services shut down by a legal challenge
American companies established in 1999
Entertainment companies established in 1999
Software companies established in 1999
Internet properties disestablished in 2001
Technology companies disestablished in 2001
Companies that have filed for Chapter 7 bankruptcy
Companies that filed for Chapter 11 bankruptcy in 2002
Classic Mac OS software
Windows file sharing software
Web 2.0 |
4676968 | https://en.wikipedia.org/wiki/Michael%20J.%20Saylor | Michael J. Saylor | Michael J. Saylor (born February 4, 1965) is an American entrepreneur and business executive, who co-founded and leads MicroStrategy, a company that provides business intelligence, mobile software, and cloud-based services. Saylor authored the 2012 book The Mobile Wave: How Mobile Intelligence Will Change Everything. He is also the sole trustee of Saylor Academy, a provider of free online education. As of 2016, Saylor has been granted 31 patents and has 9 additional applications under review.
Life
Saylor was born in Lincoln, Nebraska on February 4, 1965 and spent his early years on various Air Force bases around the world, as his father was an Air Force chief master sergeant. When Saylor was 11, the family settled in Fairborn, Ohio, near the Wright-Patterson Air Force Base.
In 1983, Saylor enrolled at the Massachusetts Institute of Technology (MIT) on an Air Force ROTC scholarship. He joined the Theta Delta Chi fraternity, through which he met the future co-founder of MicroStrategy, Sanju K. Bansal. He graduated from MIT in 1987, with a double major in aeronautics and astronautics; and science, technology, and society.
A medical condition prevented him from becoming a pilot, and instead, he got a job with a consulting firm, The Federal Group, Inc. in 1987, where he focused on computer simulation modeling for a software integration company. In 1988, Saylor became an internal consultant at DuPont, where he developed computer models to help the company anticipate change in its key markets. The simulations predicted that there would be a recession in many of DuPont's major markets in 1990.
MicroStrategy
Using the funds from DuPont, Saylor founded MicroStrategy with Sanju Bansal, his MIT fraternity brother. The company began developing software for data mining, then focused on software for business intelligence. In 1992, MicroStrategy won a $10 million contract with McDonald's to develop applications to analyze the efficiency of its promotions. The contract with McDonald's led Saylor to realize that his company could create business intelligence software that would allow companies to use their own data for insights into their businesses.
Saylor took the company public in June 1998, with an initial stock offering of 4 million shares priced at $12 each. The stock price doubled on the first day of trading. He owns over 39,521 units of the company worth over $4,804,963. By early 2000, Saylor's net worth reached $7 billion, and the Washingtonian reported that he was the wealthiest man in the Washington D.C. area.
In 1996, Saylor was named KPMG Washington High-Tech Entrepreneur of the Year. In 1997, Ernst & Young named Saylor its Software Entrepreneur of the Year, and the following year, Red Herring Magazine recognized him as one of its Top 10 Entrepreneurs for 1998. Saylor was also featured by the MIT Technology Review as an "Innovator Under 35" in 1999.
SEC investigation
In March 2000, the U.S. Securities and Exchange Commission (SEC) brought charges against Saylor and two other MicroStrategy executives for the company's inaccurate reporting of financial results for the preceding two years. In December 2000, Saylor settled with the SEC without admitting wrongdoing by paying $350,000 in penalties and a personal disgorgement of $8.3 million. As a result of the restatement of results, the company's stock declined in value and Saylor's net worth fell by $6 billion.
COVID-19 response criticism
In a 3,000-word memo to all MicroStrategy employees on March 16, 2020, entitled "My Thoughts on COVID-19," Saylor criticized countermeasures then being recommended against the disease, saying that it is "soul-stealing and debillitating [sic] to embrace the notion of social distancing & economic hibernation" and predicting that in the worst-case scenario, global life expectancy would only "click down by a few weeks." Saylor also refused to close MicroStrategy's offices unless he was legally required to do so. The full content of the memo appeared on Reddit for only a few minutes and was reposted in the Washington Business Journal.
Bitcoin investment
On MicroStrategy's quarterly earnings conference call in July 2020, Saylor announced his intention for MicroStrategy to explore purchasing Bitcoin, gold, or other alternative assets instead of holding cash. The following month, MicroStrategy used $250 million from its cash stockpile to purchase 21,454 Bitcoin.
MicroStrategy later added $175 million of Bitcoin to its holdings in September 2020 and another $50 million in early December 2020. On December 11, 2020, MicroStrategy announced that it had sold $650 million in convertible senior notes, taking on debt to increase its Bitcoin holdings to over $1 billion worth. On December 21, 2020 MicroStrategy announced their total holdings include 70,470 bitcoins purchased for $1.125 billion at an average price of $15,964 per bitcoin. As of February 24, 2021 holdings include 90,531 bitcoins acquired for $2.171 billion at an average price of $23,985 per bitcoin. Saylor, who controls 70% of MicroStrategy's shares, dismissed concerns by observers that the move is turning MicroStrategy into a Bitcoin investment firm or exchange-traded fund (ETF).
In October 2020, Saylor disclosed he personally held 17,732 bitcoin at an average purchase price of $9,882. As of October 2021, he had not sold any bitcoin.
Between October 1 and November 29, 2021, MicroStrategy bought 7,002 bitcoins for about $414.4 million in cash at an average purchase price of $59,187 bringing its total holdings to 121,044 bitcoins.
The Mobile Wave
In June 2012, Saylor released The Mobile Wave: How Mobile Intelligence Will Change Everything, published by Perseus Books, which discusses trends in mobile technology and their future impact on commerce, healthcare, education, and the developing world. The book appeared on the New York Times Best Seller list, where it was ranked number seven in hardcover non-fiction books in August 2012, and was ranked number five in hardcover business books on the Wall Street Journal's Best-Sellers list in July 2012.
Saylor Academy
In 1999, Saylor established The Saylor Foundation (later named Saylor Academy), of which he is the sole trustee. Saylor.org was launched in 2008 as the free education initiative of The Saylor Foundation.
References
External links
Michael Saylor official website
1965 births
American technology chief executives
Living people
MIT School of Engineering alumni
People from Fairborn, Ohio
People from Lincoln, Nebraska
Businesspeople from Nebraska
Businesspeople from Ohio
20th-century American businesspeople
21st-century American businesspeople
American technology company founders
American technology writers
American philanthropists
People associated with cryptocurrency |
44593154 | https://en.wikipedia.org/wiki/Making%20Waves%20%28software%29 | Making Waves (software) | Making Waves (MW) is computer software designed to produce professional quality audio from basic Windows multimedia PCs. This application was among the first of the 16-bit digital sequencers that evolved from the MS-DOS WAV trackers of the Eighties to become the digital audio workstation software available today including Steinberg Cubase, Pro Tools and ACID Pro. Making Waves enabled a small community of independent artists (originally including Daniel Bedingfield) to use existing hardware to record, sample, mix and render their own original work creating professional-quality audio with a modest investment of less than $100. This same dynamic user community played a significant role in the application's development, suggesting program revisions and performing extensive beta testing. These users were all organized and mentored by Stephen John Steele, the original programmer and developer of Making Waves as well as a founding director of Perceptive Solutions, Spacehead Systems and Making Waves Software Limited.
Overview
The application's interface integrates a sequencer, mixer, sampler and wave editor compatible with Musical Instrument Digital Interface (MIDI), Virtual Studio Technology (VST) and some DirectX plugins (only effects, not instruments), and the program's key feature is live mixing of MIDI input with VST (instruments and effects) and digital samples including Yamaha XG (EXtended General MIDI) sound libraries. Making Waves Studio renders mp3, wav and MIDI files or can be purchased as an inexpensive audio-only wav sequencer.
The early commercial success of this embryonic digital audio workstation (DAW) was relatively brief and seemed to build on two significant events: the release of a stable graphical user interface (GUI) version and the production of a "hit" record and album by an independent artist. First, with the release of the 32-bit Making Waves Studio version in April 1998, Perceptive Solutions had a product compatible with the Windows 95 GUI. This version provided a number of audio features never before or since consolidated at that price point. Next, Daniel Bedingfield's number one UK single was created with Making Waves and released in November 2001. His number two first album of the same name, Gotta Get Through This, soon followed. Making Waves began to gain sales and acceptance within the digital audio community as an affordable professional audio platform and VST host, a complete "recording studio-in-a-box".
While still available for sale, Making Waves lacks a 64-bit version, is not approved for use on Windows 8 and is no longer being maintained following the death of the original developer in 2011.
References
External links
Sound on Sound, August 2002
Sound on Sound, July 2003
MIDI Connections 2003
Katharsis! REMASTERED!
Katharsis! on Soundcloud
Windows audio
Software synthesizers
Digital audio workstation software
Windows multimedia software |
4390818 | https://en.wikipedia.org/wiki/Computer-aided | Computer-aided | Computer-aided or computer-assisted is an adjectival phrase that hints of the use of a computer as an indispensable tool in a certain field, usually derived from more traditional fields of science and engineering. Instead of the phrase computer-aided or computer-assisted, in some cases the suffix management system is used.
Engineering and production
Computer-aided design
Computer-aided architectural design
Computer-aided industrial design
Electronic and electrical computer-aided design
Computer-aided garden design
Computer-aided drafting
Computer-aided engineering
Computer-aided production engineering
Computer-aided manufacturing
Computer-aided quality
Computer-aided maintenance
Music and arts
Computer-aided algorithmic composition
Computer-assisted painting
Human languages
Computer-aided translation
Medicine
Computer-assisted detection
Computer-aided diagnosis
Computer-assisted orthopedic surgery
Computer-aided patient registration
Computer-assisted sperm analysis
Computer-assisted surgery
Computer-assisted surgical planning
Computer-aided tomography
Software engineering
Computer-aided software engineering
Traffic control
Computer-assisted dispatch
Teaching
Computer-assisted instruction
Computer-assisted learning, better known as computer-based learning
Computer-assisted language learning
Computer-assisted assessment
Mathematics
Computer-assisted proof
Computer-aided learning
Economy
Computer-assisted auditing techniques
Computer-assisted mass appraisal
Communications
Computer-assisted personal interviewing
Computer-assisted telephone interviewing
Computer-assisted reporting
Security
Computer-Assisted Passenger Prescreening System
Law
Computer-assisted legal research
Entertainment
Computer-assisted gaming
Computer-assisted role-playing game
Prefixes |
436807 | https://en.wikipedia.org/wiki/SPNEGO | SPNEGO | Simple and Protected GSSAPI Negotiation Mechanism (SPNEGO), often pronounced "spenay-go", is a GSSAPI "pseudo mechanism" used by client-server software to negotiate the choice of security technology. SPNEGO is used when a client application wants to authenticate to a remote server, but neither end is sure what authentication protocols the other supports. The pseudo-mechanism uses a protocol to determine what common GSSAPI mechanisms are available, selects one and then dispatches all further security operations to it. This can help organizations deploy new security mechanisms in a phased manner.
SPNEGO's most visible use is in Microsoft's "HTTP Negotiate" authentication extension. It was first implemented in Internet Explorer 5.01 and IIS 5.0 and provided single sign-on capability later marketed as Integrated Windows Authentication. The negotiable sub-mechanisms included NTLM and Kerberos, both used in Active Directory. The HTTP Negotiate extension was later implemented with similar support in the following browsers (a sketch of the exchange follows the list):
Mozilla 1.7 beta
Mozilla Firefox 0.9
Konqueror 3.3.1
Google Chrome 6.0.472
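The shape of an HTTP Negotiate exchange is roughly as follows; the Base64 fragments shown here are placeholders standing in for SPNEGO tokens, not real values:

C: GET /secure/ HTTP/1.1
S: HTTP/1.1 401 Unauthorized
   WWW-Authenticate: Negotiate
C: GET /secure/ HTTP/1.1
   Authorization: Negotiate YIIB...        (Base64 SPNEGO initial token, for example carrying a Kerberos ticket)
S: HTTP/1.1 200 OK
   WWW-Authenticate: Negotiate oYGh...     (Base64 SPNEGO response token, for example completing mutual authentication)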
History
19 February 1996 – Eric Baize and Denis Pinkas publish the Internet Draft Simple GSS-API Negotiation Mechanism (draft-ietf-cat-snego-01.txt).
17 October 1996 – The mechanism is assigned the object identifier 1.3.6.1.5.5.2 and is abbreviated snego.
25 March 1997 – Optimistic piggybacking of one mechanism's initial token is added. This saves a round trip.
22 April 1997 – The "preferred" mechanism concept is introduced. The draft standard's name is changed from just "Simple" to "Simple and Protected" (spnego).
16 May 1997 – Context flags are added (delegation, mutual auth, etc.). Defenses are provided against attacks on the new "preferred" mechanism.
22 July 1997 – More context flags are added (integrity and confidentiality).
18 November 1998 – The rules of selecting the common mechanism are relaxed. Mechanism preference is integrated into the mechanism list.
4 March 1998 – An optimisation is made for an odd number of exchanges. The mechanism list itself is made optional.
Final December 1998 – DER encoding is chosen to disambiguate how the MIC is calculated. The draft is submitted for standardisation as RFC 2478.
October 2005 – Interoperability with Microsoft implementations is addressed. Some constraints are improved and clarified and defects corrected. Published as RFC 4178, although it is now non-interoperable with strict implementations of now-obsoleted RFC 2478.
Notes
References
External links
The Simple and Protected GSS-API Negotiation Mechanism (obsoletes RFC 2478).
SPNEGO-based Kerberos and NTLM HTTP Authentication in Microsoft Windows
Cryptographic protocols
Computer access control protocols |
34531444 | https://en.wikipedia.org/wiki/Linux%20DM%20Multipath | Linux DM Multipath | Device Mapper Multipath Input Output often shortened to DM-Multipathing and abbreviated as DM-MPIO provides input-output (I/O) fail-over and load-balancing by using multipath I/O within Linux for block devices. By utilizing device-mapper, the multipathd daemon provides the host-side logic to use multiple paths of a redundant network to provide continuous availability and higher-bandwidth connectivity between the host server and the block-level device. DM-MPIO handles the rerouting of block I/O to an alternate path in the event of a path failure. DM-MPIO can also balance the I/O load across all of the available paths that are typically utilized in Fibre Channel (FC) and iSCSI SAN environments.
DM-MPIO is based on the device mapper, which provides the basic framework that maps one block device onto another.
Considerations
When utilizing Linux DM-MPIO in a datacenter that has other operating systems and multipath solutions, key components of path management must be considered.
Load balancing — The workload is distributed across the available hardware components. Goal: Reduce I/O completion time, maximize throughput, and optimize resources
Path failover and recovery — Utilizes redundant I/O channels to redirect application reads and writes when one or more paths are no longer available.
History
DM-MPIO started as a patch set created by Joe Thornber, and was later maintained by Alasdair G Kergon at Red Hat. It was included in mainline Linux with kernel version 2.6.12, which was released on June 17, 2005.
Components
DM-MPIO in Linux consists of kernel components and user-space components.
Kernel – device-mapper – block subsystem that provides a layering mechanism for block devices.
dm-multipath – kernel module implementing the multipath device-mapper target.
User-space – multipath-tools – provides the tools to manage multipathed devices by instructing the device-mapper multipath module what to do. The tools consist of:
Multipath: scans the system for multipathed devices, assembles them, updates the device-mapper's map.
Multipathd: daemon that waits for maps events, and then executes multipath and monitors the paths. Marks a path as failed when the path becomes faulty. Depending on the failback policy, it can reactivate the path.
Devmap-name: provides a meaningful device-name to udev for devmaps.
Kpartx: maps linear devmaps to device partitions to make multipath maps partitionable.
Multipath.conf: configuration file for the multipath daemon. Used to overwrite the built-in configuration table of multipathd.
Configuration file
The configuration file /etc/multipath.conf makes many of the DM-MPIO features user-configurable. The multipath command and the kernel daemon multipathd use information found in this file. The file is only consulted during the configuration of the multipath devices. Changes must be made prior to running the multipath command. Changes to the file afterwards will require multipath to be executed again.
The multipath.conf file has five sections (an illustrative fragment follows the list below):
System level defaults (defaults): User can override system level defaults.
Blacklisted devices (blacklist): User specifies the list of devices that is not to be under the control of DM-MPIO.
Blacklist exceptions (blacklist_exceptions): Specific devices to be treated as multipath devices even if listed in the blacklist.
Storage controller specific settings (devices): User specified configuration settings will be applied to devices with specified "Vendor" and "Product" information.
Device specific settings (multipaths): Fine tune the configuration settings for individual LUNs.
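A minimal fragment showing all five sections might look like the following; the vendor, product, WWID and option values here are placeholders chosen for illustration rather than recommended settings:

defaults {
    user_friendly_names yes
    path_selector       "round-robin 0"
}
blacklist {
    devnode "^sda$"                  # example: exclude the local system disk
}
blacklist_exceptions {
    wwid "3600508b4000156d700012000000b0000"
}
devices {
    device {
        vendor               "EXAMPLE"
        product              "DEMO-ARRAY"
        path_grouping_policy multibus
        failback             immediate
    }
}
multipaths {
    multipath {
        wwid  "3600508b4000156d700012000000b0000"
        alias mpath_data1
    }
}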
Terminology
HBA: Host bus adapters provide the physical interface between the input/output (I/O) host bus of Fibre Channel devices and the underlying Fibre Channel network.
Path: Connection from the server through the HBA to a specific LUN.
DM Path States: The device mapper's view of the path condition. Only two conditions are possible:
Active: The last I/O operation sent through this path successfully completed. Analogous to ready path state.
Failed: The last I/O operation sent through this path did not successfully complete. Analogous to faulty path state.
Failover: When a path is determined to be in a failed state, a path that is in ready state will be made active.
Failback: When a failed path becomes active again, multipathd may choose to fail back to that path, as determined by the failback policy.
Failback Policy: Four options as set in the multipath.conf configuration file.
Immediate: Immediately failback to the highest priority path.
Manual: The failed path is not monitored, requires user intervention to failback.
Followover (for clusters): Only perform automatic failback when the first path of a pathgroup becomes active. This keeps a node from automatically failing back when another node requested the failover.
Number of seconds: Wait for a specified number of seconds to allow the I/O to stabilize, then failback to the highest priority path.
Active/Active: In a system that has two storage controllers, each controller can process I/O.
Active/Passive: In a system that has two storage controllers, only one controller at a time is able to process I/O, the other (passive) is in a standby mode.
LUN: SCSI Logical Unit Number
WWID: Worldwide Identifier is an identifier for the multipath device that is guaranteed to be globally unique and unchanging.
Further reading
Michael, T., Kabir, R., Giles, J. & Hull, J. (2006). Configuring Linux to Enable Multipath I/O. Retrieved from http://www.dell.com/downloads/global/power/ps3q06-20060189-Michael.pdf
Goggin, E., Kergon, A., Varoqui, C., & Olien, D. (2005). Proceedings of the Linux Symposium – Linux Multipathing. Retrieved from https://web.archive.org/web/20101227213252/http://www.linuxinsight.com/files/ols2005/goggin-reprint.pdf
Red Hat Documentation. (n.d.). Red Hat Enterprise Linux 6, DM Multipath. Retrieved from https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html-single/DM_Multipath/
Varoqui, C. (2010). The Linux multipath implementation. Retrieved from http://christophe.varoqui.free.fr/refbook.html
References
External links
multipath-tools, homepage of the upstream project used to drive the Device Mapper multipathing driver.
Linux kernel features
Linux
Red Hat software
Device mapper |
427861 | https://en.wikipedia.org/wiki/MEPIS | MEPIS | MEPIS was a set of Linux distributions, distributed as Live CDs or DVDs that could be installed onto a hard disk drive. MEPIS was started by Warren Woodford and the eponymous company MEPIS LLC.
The most popular MEPIS distribution was SimplyMEPIS, which was based primarily on Debian stable, with the last version of SimplyMEPIS being based on Debian 6. It could either be installed onto a hard drive or used as a Live DVD, which made it externally bootable for troubleshooting and repairing many operating systems. It included the KDE desktop environment.
History
MEPIS was designed as an alternative to SUSE Linux, Red Hat Linux, and Mandriva Linux (formerly Mandrake) which, in the creator Warren Woodford's opinion, were too difficult for the average user. MEPIS's first official release was on May 10, 2003.
In 2006, MEPIS made a transition from using Debian packages to using Ubuntu packages. SimplyMEPIS 6.0, released in July 2006, was the first version of MEPIS to incorporate the Ubuntu packages and repositories.
SimplyMEPIS 7.0 discontinued the use of Ubuntu binary packages in favor of a combination of MEPIS packaged binaries based on Debian and Ubuntu source code, combined with a Debian stable OS core and extra packages from Debian package pools.
Major releases occurred about six months to one year apart until 2013, based mostly on Warren's availability to produce the next version.
Variants
SimplyMEPIS, designed for everyday desktop and laptop computing. The default desktop environment was KDE-based, although GNOME and other GUI environments could be installed. SimplyMEPIS 11.0 was based on Debian 6 and included Linux 2.6.36.4, KDE 4.5.1 and LibreOffice 3.3.2, with other applications available from Debian and the MEPIS Community. It was released on May 5, 2011. Development halted during beta testing of MEPIS 12.
antiX, a fast and lightweight distribution, was originally based on MEPIS for x86 systems in an environment suitable for old computers. It is now based on Debian Stable.
MX Linux, a midweight distribution developed in collaboration between antiX and former MEPIS communities which is based on Debian Stable.
Name
According to Warren Woodford, the name MEPIS is pronounced like "Memphis", with the extra letters removed. Originally, the word "MEPIS" didn't mean anything in particular; it came about by mistake. When Woodford misunderstood a friend over the telephone, he decided to use the name because it was a simple five-letter word and there were no other companies or products with that name.
References
External links
Community website
Reviews
April 2009 Review of SimplyMEPIS 8.0
Review of SimplyMEPIS 8.0 Beta 5
antiX M-7, The Fat-free Mepis
MEPIS AntiX on 450Mhz K6-2, 256Mb
Debian-based distributions
KDE
Operating system distributions bootable from read-only media
X86-64 Linux distributions
Linux distributions |
46743384 | https://en.wikipedia.org/wiki/Peerio | Peerio | Peerio was a cross-platform end-to-end encrypted application that provided secure messaging, file sharing, and cloud file storage. Peerio was available as an application for iOS, Android, macOS, Windows, and Linux. Peerio (Legacy) was originally released on 14 January 2015, and was replaced by Peerio 2 on 15 June 2017. The app is discontinued.
Messages and user files stored on the Peerio cloud were protected by end-to-end encryption, meaning the data was encrypted in a way that could not be read by third parties, such as Peerio itself or its service providers. Security was provided by a single permanent key-password, which in Peerio was called an "Account Key".
The company, Peerio Technologies Inc., was founded in 2014 by Vincent Drouin. The intent behind Peerio was to provide a security program that is easier to use than the PGP standard.
Peerio was acquired by WorkJam, a digital workplace solutions provider, on January 13, 2019.
Features
Peerio allowed users to share encrypted messages and files in direct messages or groups that Peerio called "rooms".
Peerio "rooms" were offered as a team-oriented group chat, allowing administrative functionality to add and remove other users from the group chat.
Peerio allowed users to store encrypted files online, offering limited cloud storage for free with optional paid upgrades.
Peerio messages and files persisted between logins and devices, unlike ephemeral encrypted messaging apps, which do not retain message or file history between logins or different devices.
Peerio supported application based multi-factor authentication.
Peerio allowed users to share animated GIFs.
Security
End-to-End Encryption
Peerio utilized end-to-end encryption and it was applied by default to all message and file data. End-to-end encryption is intended to encrypt data in a way that only the sender and intended recipients are able to decrypt, and thus read, the data.
Taken from Peerio's privacy policy:
"Peerio utilizes the NaCl (pronounced "salt") cryptographic framework, which itself uses the following cryptographic primitives:
X25519 for public key agreement over elliptic curves.
ed25519 for public key signatures.
XSalsa20 for encryption and confidentiality.
Poly1305 for ensuring the integrity of encrypted data.
Additionally, Peerio uses scrypt for memory-hard key derivation and BLAKE2s is used for various hashing operations.
For in-transit encryption, Peerio Services used Transport Layer Security (TLS) with best-practice cipher suite configuration, including support for perfect forward secrecy (PFS). You can view a detailed and up-to-date independent review of Peerio's TLS configuration on SSL Labs."
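Peerio's own client code is not reproduced here, but the "box" construction the policy describes (X25519 key agreement combined with XSalsa20 encryption and Poly1305 authentication) can be sketched with the PyNaCl binding to NaCl/libsodium; the key names and message below are purely hypothetical:

from nacl.public import PrivateKey, Box

# Each party generates a long-term X25519 key pair (hypothetical users).
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# A Box performs X25519 key agreement, then encrypts with XSalsa20 and
# authenticates with Poly1305, as in NaCl's crypto_box construction.
# encrypt() picks a random nonce and prepends it to the ciphertext.
alice_box = Box(alice_key, bob_key.public_key)
ciphertext = alice_box.encrypt(b"example message")

# The recipient derives the same shared key from his secret key and the
# sender's public key, and the authenticator is verified before decryption.
bob_box = Box(bob_key, alice_key.public_key)
plaintext = bob_box.decrypt(ciphertext)
assert plaintext == b"example message"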
Code Audits
Prior to Peerio's initial release, the software was audited by the German security firm Cure53, which found only non-security-related bugs, all of which were fixed prior to the application's release.
According to Peerio's website, the application was also audited in March 2017 by Cure53.
Open Source
Peerio was partly open source and published its code publicly on GitHub.
Bug Bounty
Peerio offered a bug bounty program, with cash rewards for anyone who reported security vulnerabilities.
Peerio (Legacy)
The first iteration of Peerio, Peerio (Legacy), was developed by Nadim Kobeissi and Florencia Herra-Vega; it was released on 14 January 2015 and was shut down on 8 January 2018.
Peerio (Legacy) was a free application, available for Android, iOS, Windows, macOS, Linux, and as a Google Chrome extension. It offered end-to-end encryption, which was enabled by default. The encryption used the miniLock open-source security standard, which was also developed by Kobeissi.
On 15 June 2017, Peerio 2 was launched as the successor to Peerio (Legacy). According to the company's blog, Peerio 2 is purported to be a "radical overhaul" of the original application's core technology. Claimed benefits in comparison to Peerio (Legacy) include increased speed, support for larger file transfers (up to 7000GB), and a re-designed user interface. Peerio also stated an added focus towards businesses looking for encrypted team collaboration software.
References
Cryptographic software
Internet privacy software
Privacy software
Open standards |
30505692 | https://en.wikipedia.org/wiki/Social%20project%20management | Social project management | Social project management is a non-traditional way of organizing projects and performing project management. In its simplest form, it is the outcome of applying the social networking paradigm (e.g. Facebook) to the context of project ecosystems, as a continued response to the movement toward distributed, virtual teams. Distributed virtual teams lose significant communication value that is normally present when groups are collocated. Because of this, social project management is motivated by a philosophy of maximizing open and continuous communication, both inside and outside the team. Because it is a response to new organizing structures that require technologically mediated communication, social project management is most often enabled by the use of Collaborative software inspired by social media (e.g. Ongozah). This paradigm enables project work to be published as an activity stream and publicized via integration with the social network of an organization. Social project management embraces both the historical best practices of Project management and the open collaboration of Web 2.0.
While Project management 2.0 embraced a philosophical shift away from centralized command and control and focused strongly on the egalitarian collaboration of a team, social project management recognizes the important role of the project manager, especially on large projects. Additionally, while Project management 2.0 minimized the importance of computer-supported scheduling, social project management recognizes that while many projects can be performed using emergent planning and control, large, enterprise projects require centralized control accompanied by seamless collaboration.
History
The concept of social project management emerged during 2008 when some developers of project management tools started to use the term to differentiate between traditional project management tools and tools for Agile software development.
While some have used the terminology Project Management 2.0 and social project management interchangeably, they exhibit significant differences in practice.
Communigram-NET, a network of excellence on social project management, has been in place since November 2011.
Key concepts (how social project management differs from Project Management 2.0)
Social business software, of which social project management is a subset, powers business performance based upon its ability to assist teams in managing exceptions. Because it is based on the concepts of Social Business Software in general, Social Project Management software is differentiated from other collaborative project software by three key areas of functionality:
First, social project management software is embedded into the social network of the larger organization.
One goal that Project management 2.0 systems realized was the need to create project-based collaboration systems. However, PM2.0 tools were often adopted at the project level, and not the enterprise level. This led to the situation where team members on several projects might have to use multiple tools for collaboration, depending on what project they were working on at that moment. Additionally, because of the fragmented nature of the tools used, little visibility existed to any person outside of the project team.
Social project management is based upon the philosophy that the project team is one part of an integrated whole, and that valuable, relevant and unique abilities and knowledge exist within the larger organization. For this reason, Social Project Management systems are integrated into the collaborative platform(s) of an organization, so that communication can proceed outside the project boundaries.
Second, social project management software is organized around a formal project schedule, and all activities and collaborative functionality are linked to this schedule.
While PM 2.0 tools stressed collaboration, many tools provided little to no actual project management capabilities. While this often worked very well for smaller projects, especially ones with distributed teams, it could not scale to enterprise-level projects.
Social project management embraces the vision of seamless online collaboration within a project team, but also provides for the use of rigorous project management techniques.
Third, social project management software provides an activity stream that allows the team, and its stakeholders to build ambient awareness of the project activity and status.
This is what makes social project management "social". The concept of Ambient awareness enables distributed teams to build awareness in ways that previously was restricted to teams that were collocated. Using the Activity Stream paradigm, large distributed teams are provided with a constant stream of information regarding the project. While in the past, this kind of continuous communication might have been posited to create Information Overload, this stream of small bits of information has been shown to create significant alignment between people working together, without overload.
See also
Activity stream
Ambient awareness
Project management 2.0
Social intelligence
References
Further reading
Leadership and Project Management: Time for a shift from Fayol to Flores
Social Structures (2009)
Breaking the Code of Project Management (2009).
Making The Social World (2010)
Social Intelligence (2007)
Project management by type
Project management software |
357616 | https://en.wikipedia.org/wiki/Outline%20of%20software%20engineering | Outline of software engineering | The following outline is provided as an overview of and topical guide to software engineering:
Software engineering – application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software; that is the application of engineering to software.
The ACM Computing Classification system is a poly-hierarchical ontology that organizes the topics of the field and can be used in semantic web applications and as a defacto standard classification system for the field. The major section "Software and its Engineering" provides an outline and ontology for software engineering.
Technologies and practices
Skilled software engineers use technologies and practices from a variety of fields to improve their productivity in creating software and to improve the quality of the delivered product.
Software applications
Software engineers build software (applications, operating systems, system software) that people use.
Applications influence software engineering by pressuring developers to solve problems in new ways. For example, consumer software emphasizes low cost, medical software emphasizes high quality, and Internet commerce software emphasizes rapid development.
Business software
Accounting software
Analytics
Data mining, closely related to databases
Decision support systems
Airline reservations
Banking
Automated teller machines
Cheque processing
Credit cards
Commerce
Trade
Auctions (e.g. eBay)
Reverse auctions (procurement)
Bar code scanners
Compilers
Parsers
Compiler optimization
Interpreters
Linkers
Loaders
Communication
E-mail
Instant messengers
VOIP
Calendars — scheduling and coordinating
Contact managers
Computer graphics
Animation
Special effects for video and film
Editing
Post-processing
Cryptography
Databases, support almost every field
Embedded systems: Both software engineers and traditional engineers write software control systems for embedded products.
Automotive software
Avionics software
Heating ventilating and air conditioning (HVAC) software
Medical device software
Telephony
Telemetry
Engineering: All traditional engineering branches use software extensively. Engineers use spreadsheets more than they ever used calculators. Engineers use custom software tools to design, analyze, and simulate their own projects, like bridges and power lines. These projects resemble software in many respects, because the work exists as electronic documents and goes through analysis, design, implementation, and testing phases. Software tools for engineers use the tenets of computer science, as well as the tenets of calculus, physics, and chemistry.
Computer Aided Design (CAD)
Electronic Design Automation (EDA)
Numerical Analysis
Simulation
File
FTP
File sharing
File synchronization
Finance
Bond market
Futures market
Stock market
Games
Poker
Multiuser Dungeons
Video games
Information systems, support almost every field
LIS Management of laboratory data
MIS Management of financial and personnel data
Logistics
Supply chain management
Manufacturing
Computer Aided Manufacturing (CAM)
Distributed Control Systems (DCS)
Music
Music sequencers
Sound effects
Music synthesis
Network Management
Network management system
Element Management System
Operations Support System
Business Support Systems
Networks and Internet
Domain Name System
Protocols
Routers
Office suites
Word processors
Spreadsheets
Presentations
Operating systems
Embedded
Graphical
Multitasking
Real-time
Robotics
Signal processing, encoding and interpreting signals
Image processing, encoding and interpreting visual information
Speech processing
Text recognition
Handwriting recognition
Simulation, supports almost every field.
Engineering, A software simulation can be cheaper to build and more flexible to change than a physical engineering model.
Sciences
Sciences
Genomics
Traffic Control
Air traffic control
Ship traffic control
Road traffic control
Training
Drill
Simulation
Testing
Visualization, supports almost every field
Architecture
Engineering
Sciences
Voting
World wide web
Browsers
Servers
Software engineering topics
Many technologies and practices are (mostly) confined to software engineering, though many of these are shared with computer science.
Programming paradigm, based on a programming language technology
Object-oriented programming
Aspect-oriented programming
Functional decomposition
Structured programming
Rule-based programming
Databases
Hierarchical
Object
Relational
SQL/XML
SQL
MYSQL
NoSQL
Graphical user interfaces
GTK+ GIMP Toolkit
wxWidgets
Ultimate++
Qt toolkit
FLTK
Programming tools
Configuration management and source code management
CVS
Subversion
Git
Mercurial
RCS
GNU Arch
LibreSource Synchronizer
Team Foundation Server
Visual Studio Team Services
Build tools
Make
Rake
Cabal
Ant
CADES
Nant
Maven
Final Builder
Gradle
Team Foundation Server
Visual Studio Team Services
Visual Build Pro
Editors
Integrated development environments (IDEs)
Text editors
Word processors
Parser creation tools
Yacc/Bison
Static code analysis tools
Libraries
Component-based software engineering
Design languages
Unified Modeling Language (UML)
Patterns, document many common programming and project management techniques
Anti-patterns
Patterns
Processes and methodologies
Agile
Agile software development
Extreme programming
Lean software development
Rapid application development (RAD)
Rational Unified Process
Scrum (in management)
Heavyweight
Cleanroom
ISO/IEC 12207 — software life cycle processes
ISO 9000 and ISO 9001
Process Models
CMM and CMMI/SCAMPI
ISO 15504 (SPICE)
Metamodels
ISO/IEC 24744
SPEM
Platforms
A platform combines computer hardware and an operating system. As platforms grow more powerful and less costly, applications and tools grow more widely available.
BREW
Cray supercomputers
DEC minicomputers
IBM mainframes
Linux PCs
Classic Mac OS and macOS PCs
Microsoft .NET
Palm PDAs
Sun Microsystems Solaris
Windows PCs (Wintel)
Symbian OS
Other Practices
Communication
Method engineering
Pair programming
Performance Engineering
Programming productivity
Refactoring
Software inspections/Code reviews
Software reuse
Systems integration
Teamwork
Other tools
Decision tables
Feature
User stories
Use cases
Computer science topics
Skilled software engineers know a lot of computer science including what is possible and impossible, and what is easy and hard for software.
Algorithms, well-defined methods for solving specific problems.
Searching
Sorting
Parsing
Numerical analysis
Compiler theory
Yacc/Bison
Data structures, well-defined methods for storing and retrieving data.
Lists
Trees
Hash tables
Computability, some problems cannot be solved at all
List of unsolved problems in computer science
Halting problem
Complexity, some problems are solvable in principle, yet unsolvable in practice
NP completeness
Computational complexity theory
Formal methods
Proof of correctness
Program synthesis
Adaptive Systems
Neural Networks
Evolutionary Algorithms
Mathematics topics
Discrete mathematics is a key foundation of software engineering.
Number representation
Set (computer science)
Bags
Graphs
Sequences
Trees
Graph (data structure)
Logic
Deduction
First-order logic
Higher-order logic
Combinatory logic
Induction
Combinatorics
Other
Domain knowledge
Statistics
Decision theory
Type theory
Life cycle phases
Development life cycle phase
Requirements gathering / analysis
Software architecture
Computer programming
Testing, detects bugs
Black box testing
White box testing
Quality assurance, ensures compliance with process.
Product Life cycle phase and Project lifecycle
Inception
First development
Major release
Minor release
Bug fix release
Maintenance
Obsolescence
Release development stage, near the end of a release cycle
Alpha
Beta
Gold master
1.0; 2.0
Software development lifecycle
Waterfall model — Structured programming and Stepwise refinement
SSADM
Spiral model — Iterative development
V-model
Agile software development
DSDM
Chaos model — Chaos strategy
Deliverables
Deliverables must be developed for many SE projects. Software engineers rarely make all of these deliverables themselves. They usually cooperate with the writers, trainers, installers, marketers, technical support people, and others who make many of these deliverables.
Application software — the software
Database — schemas and data.
Documentation, online and/or print, FAQ, Readme, release notes, Help, for each role
User
Administrator
Manager
Buyer
Administration and Maintenance policy, what should be backed-up, checked, configured, ...
Installers
Migration
Upgrade from previous installations
Upgrade from competitor's installations
Training materials, for each role
User
Administrator
Manager
Buyer
Support info for computer support groups.
Marketing and sales materials
White papers, explain the technologies used in the applications
Business roles
Operations
Users
Administrators
Managers
Buyers
Development
Analysts
Programmers
Testers
Managers
Business
Consulting — customization and installation of applications
Sales
Marketing
Legal — contracts, intellectual property rights
Privacy and Privacy engineering
Support — helping customers use applications
Personnel — hiring and training qualified personnel
Finance — funding new development
Academe
Educators
Researchers
Management topics
Leadership
Coaching
Communication
Listening
Motivation
Vision, SEs are good at this
Example, everyone follows a good example best
Human resource management
Hiring, getting people into an organization
Training
Evaluation
Project management
Goal setting
Customer interaction (Rethink)
Estimation
Risk management
Change management
Process management
Software development processes
Metrics
Business topics
Quality programs
Malcolm Baldrige National Quality Award
Six Sigma
Total Quality Management (TQM)
Software engineering profession
Software engineering demographics
Software engineering economics
CCSE
History of software engineering
Software engineering professionalism
Ethics
Licensing
Legal
Intellectual property
Consumer protection
History of software engineering
History of software engineering
Pioneers
Many people made important contributions to SE technologies, practices, or applications.
John Backus: Fortran, first optimizing compiler, BNF
Victor Basili: Experience factory.
F.L. Bauer: Stack principle, popularized the term Software Engineering
Kent Beck: Refactoring, extreme programming, pair programming, test-driven development.
Tim Berners-Lee: World wide web
Barry Boehm: SE economics, COCOMO, Spiral model.
Grady Booch: Object-oriented design, UML.
Fred Brooks: Managed System 360 and OS 360. Wrote The Mythical Man-Month and No Silver Bullet.
Larry Constantine: Structured design, coupling, cohesion
Edsger Dijkstra: Wrote Notes on Structured Programming, A Discipline of Programming and Go To Statement Considered Harmful, algorithms, formal methods, pedagogy.
Michael Fagan: Software inspection.
Tom Gilb: Software metrics, Software inspection, Evolutionary Delivery ("Evo").
Adele Goldstine: Wrote the Operators Manual for the ENIAC, the first electronic digital computer, and trained some of the first human computers
Lois Haibt: FORTRAN, wrote the first parser
Margaret Hamilton: Coined the term "software engineering", developed Universal Systems Language
Mary Jean Harrold: Regression testing, fault localization
Grace Hopper: The first compiler (Mark 1), COBOL, Nanoseconds.
Watts Humphrey: Capability Maturity Model, Personal Software Process, fellow of the Software Engineering Institute.
Jean Ichbiah: Ada
Michael A. Jackson: Jackson Structured Programming, Jackson System Development
Bill Joy: Berkeley Unix, vi, Java.
Alan Kay: Smalltalk
Brian Kernighan: C and Unix.
Donald Knuth: Wrote The Art of Computer Programming, TeX, algorithms, literate programming
Nancy Leveson: System safety
Bertrand Meyer: Design by Contract, Eiffel programming language.
Peter G. Neumann: RISKS Digest, ACM Sigsoft.
David Parnas: Module design, social responsibility, professionalism.
David Pearson, Computer Scientist: Developed the ICL CADES software engineering system.
Jef Raskin: Developed the original Macintosh GUI, authored The Humane Interface
Dennis Ritchie: C and Unix.
Winston W. Royce: Waterfall model.
Mary Shaw: Software architecture.
Richard Stallman: Founder of the Free Software Foundation
Linus Torvalds: Linux kernel, free software / open source development.
Will Tracz: Reuse, ACM Software Engineering Notes.
Gerald Weinberg: Wrote The Psychology of Computer Programming.
Elaine Weyuker: Software testing
Jeannette Wing: Formal specifications.
Ed Yourdon: Structured programming, wrote The Decline and Fall of the American Programmer.
See also
List of programmers
List of computer scientists
Notable publications
About Face: The Essentials of User Interface Design by Alan Cooper, about user interface design.
The Capability Maturity Model by Watts Humphrey. Written for the Software Engineering Institute, emphasizing management and process. (See Managing the Software Process )
The Cathedral and the Bazaar by Eric Raymond about open source development.
The Decline and Fall of the American Programmer by Ed Yourdon predicts the end of software development in the U.S.
Design Patterns by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides.
Extreme Programming Explained by Kent Beck
"Go To Statement Considered Harmful" by Edsger Dijkstra.
"Internet, Innovation and Open Source:Actors in the Network" — First Monday article by Ilkka Tuomi (2000) source
The Mythical Man-Month by Fred Brooks, about project management.
Object-oriented Analysis and Design by Grady Booch.
Peopleware by Tom DeMarco and Tim Lister.
The pragmatic engineer versus the scientific designer by E. W. Dijkstra
Principles of Software Engineering Management by Tom Gilb about evolutionary processes.
The Psychology of Computer Programming by Gerald Weinberg. Written as an independent consultant, partly about his years at IBM.
Refactoring: Improving the Design of Existing Code by Martin Fowler, Kent Beck, John Brant, William Opdyke, and Don Roberts.
The Pragmatic Programmer: from journeyman to master by Andrew Hunt, and David Thomas.
Software Engineering Body of Knowledge (SWEBOK) ISO/IEC TR 19759
See also:
Important publications in software engineering in CS.
Related fields
Computer Science
Information engineering
Information technology
Traditional engineering
Computer engineering
Electrical engineering
Software engineering
Domain engineering
Information technology engineering
Knowledge engineering
User interface engineering
Web engineering
Arts and Sciences
Mathematics
Computer science
Information science
Application software
Information systems
Programming
Systems Engineering
See also
Index of software engineering articles
Search-based software engineering
SWEBOK Software engineering body of knowledge
CCSE Computing curriculum for software engineering
Computer terms etymology, the origins of computer terms
Complexity or scaling
Second system syndrome
optimization
Source code escrow
Feature interaction problem
Certification (software engineering)
Engineering disasters#Failure due to software
Outline of software development
References
External links
ACM Computing Classification System
Guide to the Software Engineering Body of Knowledge (SWEBOK)
Professional organizations
British Computer Society
Association for Computing Machinery
IEEE Computer Society
Professionalism
SE Code of Ethics
Professional licensing in Texas
Education
CCSE Undergraduate curriculum
Standards
IEEE Software Engineering Standards
Internet Engineering Task Force
ISO
Government organizations
European Software Institute
Software Engineering Institute
Agile
Organization to promote Agile software development
Test driven development
Extreme programming
Other organizations
Online community for software engineers
Software Engineering Society
Demographics
U.S. Bureau of Labor Statistics on SE
Surveys
David Redmiles page from the University of California site
Other
Full text in PDF from the NATO conference in Garmisch
Computer Risks Peter G. Neumann's risks column.
Outlines of applied sciences
Wikipedia outlines
Software engineering, Outline of |
65821147 | https://en.wikipedia.org/wiki/Beau%20Parry | Beau Parry | Beau Parry is an American inventor, known for his contributions in the field of biometric encryption and liveness detection. He is also the founder of BRIVAS, a biometric technology company that offers biometric encryption services for consumers, enterprises, and government.
In 2012, Parry founded BRIVAS in downtown Cincinnati, Ohio. In 2015 he received a U.S. patent for biometric encryption to stop digital identity fraud. Parry was granted a patent that claims cloud-based biometric liveness detection in which the verification enables access to data stored in a blockchain. Parry also holds a patent that protects deterministic bio-signature key binding utilizing one or more biometrics along with contextual data from GPS or other sensors. Parry delivered a talk, "Seeing is Believing", at a TEDx event in October 2018. Parry was educated at Cincinnati Country Day School and later attended The University of North Carolina at Chapel Hill, where he studied Economics and played linebacker under Coach Mack Brown.
References
American inventors
Living people
Year of birth missing (living people) |
18434237 | https://en.wikipedia.org/wiki/ZAP%20File | ZAP File | A .ZAP file (Zero Administration Package) is a text file that allows an application to be published to a user on a Microsoft Windows system (Windows 2000, XP Professional, Windows Vista, or Windows 7 Professional) when no .MSI file exists for that application. It is used in Active Directory domains and is installed using a Group Policy.
A basic .ZAP file
A .ZAP file can be as simple or as complicated as the system administrator wishes to make it. There are only two required fields in a .ZAP file: an application name (called a Friendly Name) and a setup command line. Other information is optional.
The .ZAP file begins with a title line consisting of the word Application inside square brackets ([ ]). Underneath this come the entry fields, the two required fields being FriendlyName = "Name" and SetupCommand = "\\Server\share\setupfile". Optional entries, such as DisplayVersion = and Publisher =, can also be added. Note that DisplayVersion and Publisher do not require quotation marks around their values.
Below is a very simple example of a .ZAP file.
[Application]
FriendlyName = "Program"
SetupCommand = "\\FileServer\Share\setup.exe" /q
Restrictions to a .ZAP file
The .ZAP file is more restricted than a .MSI file in that it cannot be rolled back if the application fails to install correctly, cannot use elevated privileges to install itself (the user needs to have the rights to install the software, usually given by Group Policy), and cannot install on first use or install a separate feature on first use.
Many .ZAP files require user intervention. This can be overcome if the systems administrator creates a batch file and runs a quiet or silent install from a batch file command. However, running an executable file (such as setup.exe) often bypasses quiet, passive or silent installation switches, even if they are specified in the SetupCommand.
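As a hypothetical sketch of that approach (the share path, file names, and the /quiet switch are placeholders; the actual silent-install switch depends on the installer vendor), the SetupCommand can point at a wrapper batch file instead of setup.exe:

[Application]
FriendlyName = "Program (silent install)"
SetupCommand = "\\FileServer\Share\silent-setup.bat"

where silent-setup.bat might contain:

@echo off
rem Hypothetical wrapper: run the vendor's installer with its documented silent switch
"\\FileServer\Share\setup.exe" /quiet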
In addition, .ZAP files are not run automatically prior to or during a user logon. Instead, the user must open Add/Remove Programs from within the Windows Control Panel, select Add New Programs, and select the installation from there. The user must have access to the location of the .ZAP file and to the location of the setup files (if these locations are different), otherwise they will not be able to install the application.
.ZAP files cannot be assigned to computers and must be published to users. Therefore, when a user moves to another computer (even only temporarily), they can install the application on that machine whether the program should be there or not.
Finally, .ZAP files do not automatically uninstall when a user no longer requires the software. Instead, the software remains installed on the machine permanently, unlike a .MSI installation, which can be set to uninstall when the computer is removed from the relevant OU.
Publishing a .ZAP file
After creating a .ZAP file and placing it in an accessible share (usually creating an Active Directory group with access to this location), the systems administrator needs to create a Group Policy Object, open the editing screen, select User Configuration, Software Settings and Software Installation, and create a new package pointing to the location of the .ZAP file. Since GPOs default to .MSI, the administrator needs to ensure that the search is for .ZAP files instead of .MSI files.
Accepting the new package and assigning the GPO to the relevant Organizational Unit (OU) will publish the application. The user(s) will then need to reload Group Policy from the server that manages GPOs. This can be done either by logging off and then on again, or by running "gpupdate" from a command line.
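For example, running the following from a command prompt on the client refreshes Group Policy without logging off; the /force switch reapplies all policy settings rather than only the changed ones:

gpupdate /force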
References
Installation software
Microsoft application programming interfaces
Windows administration
Windows components |
42091559 | https://en.wikipedia.org/wiki/Open%20source%20in%20Kosovo | Open source in Kosovo | The first open-source software project in Kosovo was the adaptation of the OpenOffice package in December 2003.
On 28 July 2004, GGSL, an Albanian team of Linux users and one of the first public organizations providing information about open source, held a conference called "Software Freedom Day", which is known as the first FOSS initiative in Kosovo. The conference was held to promote the free and open-source software (FOSS) movement. Among the issues discussed were the Linux operating system and the definitions of free software and open source in general. The KDE and GNOME desktop environments (DEs) were also discussed at the conference.
In May 2005, Ati-Kos conducted surveys in five municipal assemblies of Kosovo. According to one survey, about 98.6% of respondents wanted software in their mother tongue, Albanian. In another survey, most of the participants believed that an office suite like OpenOffice would help increase productivity because of its Albanian interface.
Free Libre Open Source Software Kosovo
FLOSS Kosovo (FLOSSK) is a non-governmental organization that was established in March 2009 to promote FOSS software. The initiative was undertaken by James Michael DuPont in association with volunteers from Kosovo. In August of that year, the organization's first conference, called "Software Freedom Kosovo 2009", was held at the Faculty of Electrical and Computer Engineering; it became an annual event and grew into the biggest such conference in the region.
Open source projects and training
The first course of Linux
In February 2009, James Michael DuPont became the first person in Kosovo to teach the Linux operating system. In June of that year, some of his students continued to teach about the OS.
OpenStreetMap
OpenStreetMap accounts for much of the open-source contribution in Kosovo.
Together with the OpenStreetMap community, companies like LogisticPlus have contributed to the movement. Its beginnings can be traced to a map made for the town of Brod, which was endangered by environmental damage. The project was introduced by Joachim Bergerhoff, who told FLOSSK that UN-HABITAT had a project for them; FLOSSK helped develop it. The map, created by the community, covered Brod, Gjilan, Gjakovë, Ferizaj, Prizren, Pejë, and Prishtinë. Another project covered Shkodra in 2010, when the community helped the survivors of flooding to find the streets.
Kumevotu.info
"Kumevotu.info" is a project for youths of Flossk to help people find the places where they can vote in the 2010 election. This project was held until 12 December 2010 and was based on Open Street Map, where the user gives his personal data and he can find the place where he should vote. The project was very useful, even for the youngsters who participated in creating it.
OLPC
OLPC was one of the most attractive FOSS projects in Kosovo because it was dedicated to children in poor areas, who used the laptop to learn about Linux. The laptop ran a version of Fedora with some basic applications and was equipped with WiFi, so users could browse the internet and learn. The project was well received in Kosovo.
Wikipedia project
The Albanian open-source community in Kosovo has been active in Wikipedia and Wikimedia, mainly promoting open knowledge by translating Wikipedia articles into Albanian. This initiative was started by FLOSSK and resulted in 31,458 articles in Albanian. In 2013, a conference called 'WikiacademyKosovo' was held as a direct way of adding articles to Wikipedia.
Fedora project
The Fedora project is also active in Kosovo. It is promoted and distributed by its ambassadors in Kosovo, Ardian Haxha and Gent Thaqi. FLOSSK, in association with the ambassadors, organizes release parties to announce new versions.
Mozilla project
The Mozilla Firefox community is active in Kosovo, and FLOSSK has also promoted the Mozilla project. The ambassadors in Kosovo, among them Heroid Shehu, are very active in promoting the project.
Drupal training
There have also been projects in Kosovo involving the Drupal platform. FLOSSK, in association with the UNICEF Lab, organized a training project on 6 May with Dave Hall, a free software community member, consultant, and systems administrator from Australia. The participants were trained in using and developing the Drupal content management system.
Conferences
Software Freedom Kosovo 2009
On 29 and 30 August 2009, the first annual conference, "Software Freedom Kosovo 2009", was held on the premises of the Faculty of Engineering, organized by FLOSSK and the University of Pristina. Public figures from the FOSS world presented at the conference:
Giuseppe Maxia of Sun Microsystems, also the MySQL community manager
Dan Carchidi from the Massachusetts Institute of Technology (MIT) who spoke of Open Courseware (OCW)
Flavia Marzano, Italy's representative at the European Commission on issues of Free and Open Source Software and,
Brian King from Mozilla who spoke for Firefox browser and its extensions.
Also presenting from Kosovo were:
Prof. Dr. Blerim Rexha – Deputy Minister of Energy and Mining
Prof. Asc. Myzafere Limani – Dean of Faculty of Electrical Engineering and Computer
Lule Ahmedi – FIEK professor and
James Michael DuPont – Co-Founder of FLOSSK
More than 40 topics were discussed in lectures on various fields, including Wikipedia, the free encyclopedia; Linux; intellectual property licenses; building communities; and the programming languages PHP and Python. This conference was called one of the most extensive in Southeast Europe.
Software Freedom Kosovo 2010
"Software Freedom Kosovo 2010” was held on 25–26 September in Prishtina]]. SFK10 again was organized by FLOSS Kosovo and the Faculty of Electrical Engineering and Computer (FIEK) of the University of Pristina.
There were 24 lectures by speakers from Kosovo and abroad.
The main lecturers and also guests of honor of this conference were:
Leon Shiman, board member of X.Org Foundation, and owner of Shiman Associates consulting firm
Rob Savoye, the primary developer of Gnash, who has previously developed for Debian, Red Hat and Yahoo, and has been coding since 1977
Mikel Maron, OpenStreetMap Foundation board member
Peter Salus, linguist, computer scientist and historian of technology.
And also other topics were also offered by:
Milot Shala
Martin Bekkelund
Baki Goxhaj
Marco Fioretti
The conference was held at the premises of Faculty of Electrical and Computer Engineering.
Software Freedom Kosovo 2011
The 2011 iteration of the conference was held on 12 November 2011. With over 300 participants, it was one of the most successful conferences held to date. The day-long event was themed "Doing Business with Open Source".
The introductory remarks were made by:
Muzafere Limani (FIEK dean),
Lule Ahmedi (professor at FIEK and Conference Co-leader),
Vjollca Cavolli (STIKK Director) and
Arianit Dobroshi (President of FLOSSK's board)
Speakers for the first half of the day were: Gëzim Pula, CEO of 3CIS; Amir Neziri; James Michael DuPont; Ervis Tusha; and Marian Marionv. In the second part of the day, there were presentations by Arian Xheaziri of Chyrp CMS, P. Chriesteas of OpenERP, Jonian Dervishi of Ditari.im, Edlira Kalemi, Damjan Georgievski, Flakerim Ismani of Ruby on Rails, and Erdet Nasufi.
The final lecture was by Omer Keser, a Google executive, who spoke on the Google mobile applications and the development of mobile-phone use across different states.
Software Freedom Kosovo 2012
This conference focused on web technologies that are standards-based and vendor-neutral, such as HTML5 and JavaScript. It was held on 8–9 September at the Faculty of Electrical and Computer Engineering in Prishtina.
Software Freedom Kosovo 2013
The 2013 conference was held in the Faculty of Education in the University of Pristina.
Some of the topics discussed included the mobile open web, hacker spaces, data protection, freelancing, code sharing, and scaling in the cloud.
There were more than 170 participants and many speakers, including Alex Lakatos, a JavaScript developer and Mozilla representative; Redon Skikuli, another Mozilla representative and co-founder of Design Everview in Tirana; Arianit Dobroshi, a member of FLOSSK; Arbnor Hasani, currently involved with the Innovations Lab in Kosovo at the Design Center; Ana Risteska, a contributor to the GNOME project; and Burim Shala, a WordPress theme developer.
Two days of the conference were held elsewhere and focused on practical work by the participants, such as Tuning PostgreSQL with Bert Desmet, Awesome HTML5/CSS3 with Vleran Dushi, WordPress and Template Development with Burim Shala, and a WMKIT Arduino workshop with Redon Sikuli.
Open Source in Government
In November 2004, the Klina municipality started a project to migrate its computer network to Linux and OpenOffice.org. The first part of the project covered Firefox and OpenOffice.org, and the second part covered Linux. From 3 to 6 November of that year, 70% of the municipality's network was running with the system in English and with OpenOffice.org and Firefox in Albanian. The proliferation of open-source software products benefits many companies and the government of Kosovo because the costs are significantly lower and security is higher. The UNDP FOSS Club has also trained municipal employees; in a survey conducted at the end of the training, 100% of employees stated that they preferred the software to be in Albanian. The project for the translation of OpenOffice.org into Albanian started in that year, and the UNDP FOSS team consisted of members from Bulgaria and Kosovo.
Richard Stallman's visit in Kosovo
On 4 June 2010, Richard Stallman, a free software activist and programmer, visited Kosovo. He lectured on the topic "A Free Digital Society" at the National Library of Kosovo. The lecture was about freedom in a digital society and the threats to it. Stallman mentioned several countries where digital freedom is violated, such as Denmark and Australia, where many web pages have been closed for unclear reasons. He said that the presence of free software and freedom in educational institutions is necessary for countries that want to advance their societies and do not want to be dependent on software that they have to pay for.
He also said:
At the end, Stallman answered questions put forth by the participants in the room.
Wiki Academy Kosovo
From 22 to 24 February 2013, the first academy, called "WikiacademyKosovo", was held at the Faculty of Education of the University of Pristina. The conference was held to promote Kosovo in the digital world and to highlight its strengths through new, high-quality articles and pictures on Wikimedia. The academy produced articles about cultural heritage, social issues, geography, institutions, the economy, and tourism, and was also a starting point for improving the image of Kosovo. A number of Wikipedia mentors were present at the event.
The winning entries of the academy included:
Archaeology of Kosovo – Atdhe Prelvukaj,
Classical Music in Kosovo- Liburn Jupolli, Mic Sokoli, Edona Vatoci,
Information and Communications Technology in Kosovo- Dardan Ahmeti.
This academy was supported by the Ministry of Internal Affairs of the Republic of Kosovo, the Great Britain Embassy, the Royal Embassy of Norway, British Council, IPKO foundation and FLOSSK.
References
Free and open-source software licenses
Linux software projects
Free software projects
Economy of Pristina
Science and technology in Kosovo
Articles containing video clips |
23278541 | https://en.wikipedia.org/wiki/Rollbase | Rollbase | Infinite Blue Platform (previously Rollbase) is a platform as a service (PaaS) software solution. It was created by the eponymous software vendor based in Saratoga, California, and was previously owned by Progress Software (Nasdaq: PRGS), which acquired it in June 2013. In May 2019, Rollbase was acquired by BC in the Cloud, a business continuity and disaster recovery application company, which then formed the new company Infinite Blue as it expanded its offerings.
Founded in 2007, the Rollbase platform allows users to create Software as a Service (SaaS) business applications using point and click, drag and drop tools in a standard web browser with minimal programming.
Product
Rollbase provides software vendors, ISVs, and organizations with a multitenant software as a service platform to use as the foundation for SaaS application development and delivery. It serves business users, IT professionals, and Web developers.
References
Cloud platforms
Web applications |
67287843 | https://en.wikipedia.org/wiki/Latifa%20Al-Abdulkarim | Latifa Al-Abdulkarim | Latifa Mohammed Al-Abdulkarim is a Saudi Arabian computer scientist and professor working on AI ethics, legal technology, and explainable AI. She is currently an assistant professor of computer science at King Saud University and visiting researcher in artificial intelligence and law at the University of Liverpool. Al-Abdulkarim has been recognized by Forbes as one of the “women defining the 21st century AI movement” and was selected as one of the 100 Brilliant Women in AI Ethics in 2020.
Education
Al-Abdulkarim earned a PGD in Computer Software Engineering in 2009, a Master of Science in Computer Science in 2011, and a Ph.D. in Computer Science in 2017 all from the University of Liverpool.
Career and research
Al-Abdulkarim is currently an assistant professor of computer science at King Saud University while also a visiting researcher in AI and law at the University of Liverpool. She researches and studies the application of AI to legal domains, explainable and trustworthy AI, and ethical dimensions of AI.
In 2016 she, Katie Atkinson, and Trevor Bench-Capon published a methodology for analyzing legal cases to predict case opinions in the US Supreme Court, known as ANGELIC. Short for "ADF for kNowledGe Encapsulation of Legal Information from Cases", ANGELIC was able to produce programs that decided cases with a high degree of accuracy in multiple domains. She soon worked in collaboration with Thomson Reuters and Weightmans to apply ANGELIC to different legal case domains in the UK. For her research she was awarded the Best Doctoral Consortium award at the 26th International Conference on Legal Knowledge and Information Systems.
In addition to her research and teaching, Al-Abdulkarim is a member of the Shura Council and has advised and led the national strategic direction for AI and AI governance for Saudi Arabia's government. She has contributed to G20 AI policy and advised different international organizations, including the OECD and ITU. Al-Abdulkarim is also a member of the UNESCO expert group on AI ethics. She serves on the Global Future Council on Artificial Intelligence for Humanity at the World Economic Forum, focusing on technically oriented solutions for issues of AI fairness.
References
Living people
Saudi Arabian women
Academics of the University of Liverpool
King Saud University faculty
Saudi Arabian scientists
Saudi Arabian women scientists
Saudi Arabian educators
Women computer scientists
Artificial intelligence ethicists
Alumni of the University of Liverpool
Year of birth missing (living people) |
195520 | https://en.wikipedia.org/wiki/Civil%20engineer | Civil engineer | A civil engineer is a person who practices civil engineering – the application of planning, designing, constructing, maintaining, and operating infrastructure while protecting the public and environmental health, as well as improving existing infrastructure that may have been neglected.
Civil engineering is one of the oldest engineering disciplines because it deals with constructed environment including planning, designing, and overseeing construction and maintenance of building structures, and facilities, such as roads, railroads, airports, bridges, harbors, channels, dams, irrigation projects, pipelines, power plants, and water and sewage systems.
The term "civil engineer" was established by John Smeaton in 1750 to contrast engineers working on civil projects with the military engineers, who worked on armaments and defenses. Over time, various sub-disciplines of civil engineering have become recognized and much of military engineering has been absorbed by civil engineering. Other engineering practices became recognized as independent engineering disciplines, including chemical engineering, mechanical engineering, and electrical engineering.
In some places, a civil engineer may perform land surveying; in others, surveying is limited to construction surveying, unless an additional qualification is obtained.
Specialization
Civil engineers usually practice in a particular specialty, such as construction engineering, geotechnical engineering, structural engineering, land development, transportation engineering, hydraulic engineering, and environmental engineering. A civil engineer is concerned with determining the right design for these structures and looking after the construction process so that the longevity of these structures is guaranteed after completion. These structures should also be satisfactory for the public in terms of comfort. Some civil engineers, particularly those working for government agencies, may practice across multiple specializations, particularly when involved in critical infrastructure development or maintenance.
Work environment
Civil engineers generally work in a variety of locations and conditions. Much of a civil engineer's work involves dealing with non-engineers or people from other technical disciplines, so training should prepare future civil engineers to manage organizational relationships between the parties to a project, as well as cost and time. Many spend time outdoors at construction sites so that they can monitor operations or solve problems onsite. The job is typically a blend of in-office and on-location work. Most work full-time.
Education and licensing
In most countries, a civil engineer will have graduated from a post-secondary school with a degree in civil engineering, which requires a strong background in mathematics and the physical sciences; this degree is typically a bachelor's degree, though many civil engineers study further to obtain master's, engineer, doctoral and post doctoral degrees. In many countries, civil engineers are subject to licensure. In some jurisdictions with mandatory licensing, people who do not obtain a license may not call themselves "civil engineers".
Belgium
In Belgium, Civil Engineer (abbreviated Ir.) (, ) is a legally protected title applicable to graduates of the five-year engineering course of one of the six universities and the Royal Military Academy. Their speciality can be all fields of engineering: civil, structural, electrical, mechanical, chemical, physics and even computer science. This use of the title may cause confusion to the English speaker as the Belgian "civil" engineer can have a speciality other than civil engineering. In fact, Belgians use the adjective "civil" in the sense of "civilian", as opposed to military engineers.
The education of the civil engineer has a strong mathematical and scientific base and is more theoretical in approach than that of the practically oriented industrial engineer (Ing.), who is educated in a five-year program at a polytechnic. Traditionally, students were required to pass an entrance exam in mathematics to start civil engineering studies. This exam was abolished in 2004 for the Flemish Community, but is still organised in the French Community.
Scandinavia
In Scandinavian countries, civil engineer (civilingenjör (Swedish), sivilingeniør (Norwegian), civilingeniør (Danish)) is a first professional degree, approximately equivalent to Master of Science in Engineering, and a protected title granted to students by selected institutes of technology. As in English, the word has its origin in the distinction between civilian and military engineers: before the start of the 19th century only military engineers existed, and the prefix "civil" was a way to separate those who had studied engineering at a regular university from their military counterparts. Today the degree spans all fields within engineering, such as civil engineering, mechanical engineering, computer science, and electronics engineering.
There is generally a slight difference between a Master of Science in Engineering degree and the Scandinavian civil engineer degree, the latter's programme having closer ties with the industry's demands. The civil engineer degree is the better known of the two, although its area of expertise remains unclear to much of the public. A noteworthy difference is the mandatory courses in mathematics and physics, regardless of which field the equivalent master's degree is in, e.g. computer science.
Although a 'college engineer' (högskoleingenjör, diplomingenjör/mellaningenjör (Swedish), høgskoleingeniør (Norwegian), diplomingeniør (Danish)) is roughly equivalent to a Bachelor of Science in Scandinavia, to become a 'civil engineer' one has often had to complete up to one extra year of overlapping studies compared to attaining a B.Sc./M.Sc. combination. This is because the higher education system is not fully adapted to the international standard graduation system, since the title is treated as a professional degree. Today (2009) this is starting to change due to the Bologna process.
A Scandinavian "civilingenjör" will in international contexts commonly call himself "Master of Science in Engineering" and will occasionally wear an engineering class ring. At the Norwegian Institute of Technology (now the Norwegian University of Science and Technology), the tradition with an NTH Ring goes back to 1914, before the Canadian iron ring.
In Norway, the title "Sivilingeniør" has not been issued since 2007, and has been replaced with "Master i teknologi". In the English translation of the diploma, the title will be "Master of Science", since "Master of Technology" is not an established title in the English-speaking world. The extra overlapping year of studies has also been abolished with this change to make Norwegian degrees more similar to their international counterparts.
Spain
In Spain, a civil engineering degree can be obtained after four years of study in the various branches of mathematics, physics, mechanics, etc. The earned degree is called Grado en Ingeniería Civil. Further studies at a graduate school include master's and doctoral degrees.
Before the current situation, that is, before the implementation of Bologna Process in 2010, a degree in civil engineering in Spain could be obtained after three to six years of study and was divided into two main degrees.
In the first case, the earned degree was called Ingeniero Técnico de Obras Públicas (ITOP), literally translated as "Public Works Engineer" obtained after three years of study and equivalent to a Bachelor of Civil Engineering.
In the second case, the academic degree was called Ingeniero de Caminos, Canales y Puertos (often shortened to Ingeniero de Caminos or ICCP), that literally means "Highways, Canals and Harbors Engineer", though civil engineers in Spain practice in the same fields as civil engineers do elsewhere. This degree is equivalent to a Master of Civil Engineering and is obtained after five or six years of study depending on the school granting the title.
The first Spanish Civil Engineering School was the Escuela Especial de Ingenieros de Caminos y Canales (now called Escuela Técnica Superior de Ingenieros de Caminos, Canales y Puertos), established in 1802 in Madrid, followed by the Escuela Especial de Ayudantes de Obras Públicas (now called Escuela Universitaria de Ingeniería Técnica de Obras Públicas de la Universidad Politécnica de Madrid), founded in 1854 in Madrid. Both schools now belong to the Technical University of Madrid.
In Spain, a civil engineer has the technical and legal ability to design projects of any branch, so any Spanish civil engineer can oversee projects about structures, buildings (except residential structures which are reserved for architects), foundations, hydraulics, the environment, transportation, urbanism, etc.
In Spain, Mechanical and Electrical engineering tasks are included under the Industrial engineering degree.
United Kingdom
A chartered civil engineer (known as certified or professional engineer in other countries) is a member of the Institution of Civil Engineers, and has also passed membership exams. However, a non-chartered civil engineer may be a member of the Institution of Civil Engineers or the Institution of Civil Engineering Surveyors. The description "Civil Engineer" is not restricted to members of any particular professional organisation although "Chartered Civil Engineer" is.
Eastern Europe
In many Eastern European countries, civil engineering does not exist as a distinct degree or profession but its various sub-professions are often studied in separate university faculties and performed as separate professions, whether they are taught in civilian universities or military engineering academies. Even many polytechnic tertiary schools give out separate degrees for each field of study. Typically study in geology, geodesy, structural engineering and urban engineering allows a person to obtain a degree in construction engineering. Mechanical engineering, automotive engineering, hydraulics and even sometimes metallurgy are fields in a degree in "Machinery Engineering". Computer sciences, control engineering and electrical engineering are fields in a degree in electrical engineering, while security, safety, environmental engineering, transportation, hydrology and meteorology are in a category of their own, typically each with their own degrees, either in separate university faculties or at polytechnic schools.
United States
In the United States, civil engineers are typically employed by municipalities, construction firms, consulting engineering firms, architect/engineer firms, the military, state governments, and the federal government. Each state requires engineers who offer their services to the public to be licensed by the state. Licensure is obtained by meeting specified education, examination, and work experience requirements. Specific requirements vary by state.
Typically licensed engineers must graduate from an ABET-accredited university or college engineering program with a minimum of bachelor's degree, pass the Fundamentals of Engineering exam, obtain several years of engineering experience under the supervision of a licensed engineer, then pass the Principles and Practice of Engineering Exam. After completing these steps and the granting of licensure by a state board, engineers may use the title "Professional Engineer" or PE in advertising and documents. Most states have implemented mandatory continuing education requirements to maintain a license.
Professional associations
ASCE
The ASCE (American Society of Civil Engineers) represents more than 140,000 members of the civil engineering profession worldwide. Official members of the ASCE must hold a bachelor's degree from an accredited civil engineering program and be a licensed professional engineer or have five years of responsible charge of engineering experience.
Most civil engineers join this organization to stay updated on current news, projects, and methods (such as sustainability) related to civil engineering, as well as to contribute their expertise and knowledge to other civil engineers and to students obtaining their civil engineering degrees.
ICE
The ICE (Institution of Civil Engineers), founded in 1818, represents, as of 2008, more than 80,000 members of the civil engineering profession worldwide. Its commercial arm, Thomas Telford Ltd, provides training, recruitment, publishing and contract services.
CSCE
Founded in 1887, the CSCE (Canadian Society for Civil Engineering) represents members of the Canadian civil engineering profession. Official members of the CSCE must hold a bachelor's degree from an accredited civil engineering program. Most civil engineers join this organization to stay updated on current news, projects, and methods (such as sustainability) related to civil engineering, as well as to contribute their expertise and knowledge to other civil engineers and to students obtaining their civil engineering degrees. Local sections frequently host events such as seminars, tours, and courses.
See also
Canal engineer
Construction engineering
Critical infrastructure
Environmental engineering
Geotechnical engineering
Glossary of civil engineering
Hydraulic engineering
List of civil engineers
National Council of Examiners for Engineering and Surveying
Professional engineer
Structural engineer
Structural engineering
Transport engineering
Urban planning
References
External links
Engineering occupations |
47714541 | https://en.wikipedia.org/wiki/Hilary%20Kahn | Hilary Kahn | Hilary J. Kahn (1943–2007) was a British computer scientist who spent most of her career as a professor at the University of Manchester, where she worked on computer-aided design and information modelling. Kahn participated in the development of the Manchester MU5 computer. Later she became involved in standards development and was both the chair of the Technical Experts Group and a member of the Steering Committee for the development of the EDIF (Electronic Design Interchange Format) standard. Kahn retired from Manchester in 2006 and died in 2007.
Early life and education
Kahn was born in 1943 in Cape Town, South Africa and moved in 1960 to England; she said later that she did so to pursue her education and escape the politics of her native country.
She attended the University of London and studied classics, after which she took a post-graduate diploma course in computing at Newcastle University, where she was first exposed to working with the English Electric KDF9 computer and programming in ALGOL. She subsequently worked as a programmer at English Electric.
Career and research
Kahn joined the Computer Science Department at the University of Manchester in 1967, appointed as an assistant lecturer based on her ability to teach COBOL. She has been cited as an example of how women with non-traditional backgrounds could enter early academic computer science by offering unusual specialised skills.
Although Kahn never pursued a PhD, she was a faculty member who supervised a number of PhD students; during her tenure she started the computer-aided design (CAD) group at Manchester, worked on the Manchester MU5 computer, and was extensively involved in standards development, most notably for the EDIF project. She collaborated with Tom Kilburn and published several obituaries of him.
Kahn was also active in preserving the history of early computing at Manchester and in 1998 organised a large-scale celebration, Computer 50, for the 50th anniversary of the Manchester Baby, the first stored-program computer, which was completed in 1948.
Kahn retired from her faculty position in 2006.
Personal life
Kahn's husband Brian Napper was also a Manchester faculty member. The couple had one child, a daughter, born in 1977. Kahn died in November 2007.
References
British computer scientists
British women computer scientists
1943 births
2007 deaths
South African emigrants to the United Kingdom
People associated with the University of Manchester
People from Cape Town
20th-century British women scientists |
1717652 | https://en.wikipedia.org/wiki/List%20of%20BeOS%20applications | List of BeOS applications | This is a list of computer programs for BeOS.
Adam - e-mail client
AudioElements - audio editor
Army Knife - audio attribute editor
Becasso - photo editor/paint program
Beezer - file archival/compression application
BeKaffe - Java virtual machine
BePDF - PDF reader
BeServed - Network file system
BeShare - file-sharing application
CL-Amp - audio player
Eddie - text editor
Gobe Productive - office suite
ImageElements - image editor/manipulator
ObjektSynth - modular software synth
personalStudio - video editor
Pe - text editor
Rack747 - synthesizer/sequencer/drum-machine
SoundPlay - audio player
TV-O-Rama - DVB application
TimeTracker - scheduled recording software
TuneBridge - music database builder
TunePrepper - music ripping and prepping software
TuneStacker - professional program log generation software
TuneTracker Basic - commercial radio automation software
TuneTracker Command Center - advanced commercial radio automation software
Vision - IRC client
BeOS is bundled with these programs:
3dmiX - sound mix
BeMail - e-mail client
Camera - digital camera picture manager
CDBurner
CDPlayer
Clock
CodyCam - interface for video cameras
DiskProbe
Expander - compressed file expander
Magnify
MediaPlayer
MidiPlayer
NetPositive - web browser
People - contact information manager
PoorMan - web server
Pulse - CPU monitor
SCSIProbe
SerialConnect - serial debugger
ShowImage - image viewer
SoftwareValet - software package manager
SoundRecorder
StyledEdit - text editor
Terminal
TV - TV card interface
Chart
FontDemo
GLTeapot - OpenGL Demo
Minesweeper
In addition, many cross platform programs have or had BeOS ports:
AbiWord - word processor
Basilisk II - Macintosh emulator
Blender 3D
Civilization: Call to Power
CodeWarrior (as BeIDE)
Doom - classic first-person shooter
Free Pascal - A modern open source Object Pascal compiler
Harbour - A modern, multi-platform, open source Clipper-compatible compiler
Macromedia Flash Player
Mozilla Firefox, Mozilla Thunderbird, Nvu
NetSurf - Web browser
Netwide Assembler (NASM)
Opera
Otter Browser
p7zip - compression utility
PearPC - Macintosh emulator
Pforth - Forth language compiler
Quake, Quake II, and Quake III Arena
QupZilla
RealPlayer G2 - media player
SeaMonkey - Internet application suite
SheepShaver - Mac OS runtime environment
SkyORB - space simulation utility
Spellswell - spelling checker
Transmission
VLC media player
VNC - Virtual Network Computing
Xitami - web server
As well as many command line tools, SDL games, and some X11 applications.
BeOS
BeOS programs |
24367133 | https://en.wikipedia.org/wiki/PM%20WIN-T | PM WIN-T | PM WIN-T (Project Manager Warfighter Information Network-Tactical) is a component of Program Executive Office Command, Control and Communications-Tactical in the United States Army. PM WIN-T has been absorbed into PM Tactical Networks as Product Manager for Mission Networks.
PM WIN-T designs, acquires, fields and supports tactical networks and services for US Army Soldiers, most notably the WIN-T suite of communication technologies.
About
PM WIN-T provides the communications network (satellite and terrestrial) and services that allow the Warfighter to send and receive information in tactical situations. WIN-T is the transformational Command and Control system that manages tactical information transport from theater down through company echelons in support of full-spectrum Army operations.
Besides WIN-T Increments 1, 2, and 3 (WIN-T), PM WIN-T is also responsible for the following systems, among others: the Area Common User System Modernization (ACUS MOD); Regional Hub Nodes (RHN); SIPR/NIPR Access Points (SNAP); Deployable Ku Band Earth Terminals (DKET); Secure, Mobile, Anti-Jam, Reliable, Tactical - Terminal (SMART-T); Phoenix/Super High Frequency (SHF); Global Broadcast Service (GBS), Standardized Integrated Command Post System (SICPS); and Harbormaster Command and Control Center (HCCC).
WIN-T
History
In 1982 the Army embarked on the acquisition of the Mobile Subscriber Equipment (MSE) system, at an overall cost of more than $4 billion, to fill communications requirements from division down to the battalion level. MSE filled tactical telephone and switchboard requirements with a smaller, more mobile switching capability than had previously been used.
However, military operations in Desert Storm in 1991, as well as Operation Enduring Freedom in Afghanistan in 2001 and Operation Iraqi Freedom in 2003 revealed inadequacies in MSE to support highly mobile and dispersed forces in a digital environment. Before the widespread availability of satellite communications technology, battlefield communications required the installation and maintenance of relay towers and cables, limiting range and flexibility of missions. The outdated MSE could no longer keep up with the pace of battle. WIN-T was conceived to solve this problem and to enable mobile mission command on the battlefield. The systems development and integration for Project Manager WIN-T began in 2002.
Consequently, the Joint Network Node (JNN) network, as an outgrowth of the 3rd Infantry Division Operational Needs Statement, was developed to bridge the gap between MSE and the "full" on-the-move WIN-T network capability. The JNN network provided battalion-level and above with the ability to connect to the Army's digitized systems, voice, data and video via satellite Internet connection at-the-quick-halt. It was an immediate success on the battlefield.
As a result, the Army, along with Congressional assistance in the form of supplemental funding, shifted its priority from WIN-T to JNN. The fielding of JNN started in 2004 to support operations in Iraq and Afghanistan. As a result of a Nunn-McCurdy restructuring on June 5, 2007, the WIN-T program was restructured into four separate Increments. The JNN program was integrated into WIN-T as Increment 1. Further development led to Increment 2, which was first fielded in 2012. Funding re-allocation is currently being debated by Congress.
WIN-T Increment 1
WIN-T Increment 1 provides networking at-the-halt capability down to battalion level (1a) with a follow-on enhanced networking at-the-halt (1b) to improve efficiency and encryption. WIN-T Increment 1 components reside at the theater, corps, division, brigade and battalion levels.
WIN-T Increment 1 provides a full range of data, voice and video communications at-the-quick-halt, which allows Soldiers to simply pull over on the side of the road to communicate without wasting valuable time setting up complicated infrastructure. WIN-T Increment 1 is a Joint compatible communications package that allows the Soldier to use advanced networking capabilities, and is also interoperable with current force systems and future increments of WIN-T. WIN-T Increment 1a upgrades the former Joint Network Node (JNN) satellite capability to access the Ka-band defense Wideband Global Satellite (WGS), reducing reliance on expensive commercial Ku-band satellite. WIN-T Increment 1b introduces the Net Centric Waveform (NCW), a dynamic waveform that optimizes bandwidth and satellite utilization and Colorless Core technology, which further enhances security.
Capabilities:
Communications at-the-quick-halt
Interoperable with all current and future WIN-T Increments
Provides interface to legacy systems
Encrypts classified traffic over Department of Defense (DoD) unclassified network
Supports modularity by allowing a brigade combat team to have self-sustaining reach back communications
Provides Internet infrastructure connectivity directly to the battalion level and above
Allows independent deployment of command posts and centers constrained by line-of-sight radio ranges
Connects the Soldier to the Global Information Grid /Defense Information Systems Network
Transitions Army networks from proprietary protocols to Everything Over Internet Protocol
Incorporates WIN-T Increment 2 technical insertions for improved capability
WIN-T Increment 2
WIN-T Increment 2 provides networking on-the-move (OTM) capability through the addition of a secure networking package on existing Tactical Vehicles. This package employs military and commercial satellite connectivity and line-of-sight (terrestrial) radios and antennas to achieve end-to-end connectivity and dynamic ad hoc mobile networking operations. WIN-T Increment 2 extends the network to company level for maneuver brigades for the first time.
WIN-T Increment 2 increases mobility and provides a communication network down to the company level. Tactical Communication Nodes in Increment 2 are the first step to providing a mobile infrastructure on the battlefield. Combined with the Points of Presence, Vehicle Wireless Packages, and Soldier Network Extensions, Increment 2 enables mobile mission command from division to company in a completely ad hoc, self-forming, self-healing network. The WIN-T Increment 2 addition of embedding communications gear in select vehicles brings Secure Internet Protocol Router (SIPR) and CENTRIX (CXI) into the warfighting platform. Select staff have the ability to maneuver anywhere on the battlefield and maintain connectivity to the network.
WIN-T Increment 2 began fielding in October 2012 to the 4th BCT, 10th Mountain Division at Fort Polk, LA. The system made its combat debut in Afghanistan in July 2013 with the 2nd Battalion, 4th Infantry Regiment (4BCT/10MTN). In particular, the Point of Presence greatly enhanced the ability of 2-4IN to maintain network access, increasing situational awareness and threat warning while on the move and at the halt during multiple week-long, long range expeditionary advising operations with the Afghan National Army.
Capabilities:
Increment 2 supports initial collaboration, mission planning and rehearsal, and for the first time introduces mobility to the network.
Increment 2 brings a mobile network infrastructure, which means the network stays connected while moving.
Increment 2 extends the network down to Company level.
WIN-T Increment 3
WIN-T Increment 3 will provide the fully mobile, flexible, dynamic tactical networking capability needed to support a highly dispersed force over isolated areas. Building on previous increments, it will support full network planning and execution while on-the-move for maneuver, fires and aviation brigades. WIN-T Increment 3 also introduces the aerial tier to enhance reliability.
WIN-T Increment 3 provides full network mobility and introduces the air tier creating a three-tiered architecture: traditional line-of-sight (terrestrial), airborne through the use of Unmanned Aerial Systems and other airborne platforms; and beyond-line-of-sight (satellite). Additionally WIN-T Increment 3 introduces embedded Joint Command, Control, Communications, Computers, Intelligence, Surveillance (JC4ISR) radios into the platforms.
Capabilities:
Enables the full objective WIN-T distribution of intelligence, surveillance and reconnaissance information via voice, data, and real time video
Manages, prioritizes, and protects information through network operations (Network Management and Information Assurance)
Ensures interoperability with joint, allied, coalition, current force, and commercial voice and data networks
Uninterrupted flow of timely, relevant, and actionable information; the right information to the right Soldier, at the right time
Other systems managed
Area Common User System Modernization (ACUS MOD)
ACUS MOD supports network upgrades for the Army's transformation to the current force, including secure wireless communications between Soldier's vehicles. It provides Internet network management capabilities, as well as integrated voice video and data services. It also allows for beyond-line-of-sight transmission capability, which enables Soldiers to communicate with one another from separate physical locations.
Capabilities
Increased situational awareness to unit commanders
Improved throughput and Joint interoperability
Implements commercial-based technology insertions into the Current Force
High Capacity Line of Sight (HCLOS) radio upgrades to Warfighter Information Network-Tactical (WIN-T) Increment 1 units
Extends selected network capabilities to the battalion level
Deployment orders to fire support radars
Secure wireless connections both between and within tactical operations centers and command posts
S6 functionality into a single vehicle shelter
Regional hub nodes
Regional hub nodes (RHN) serve as transport nodes for Warfighter Information Network-Tactical (WIN-T), the Army's tactical communications network backbone, as well as the transport medium for theater-based Network Service Centers, which are the basic building blocks for the Army's global network infrastructure. RHNs provide satellite, voice and data services to support forces as they flow into a theater of operations, including domestic disaster relief, and enable deployed units to connect to Department of Defense (DoD) networks.
RHNs innovatively use baseband and satellite communications capabilities that enable regionalized reach-back to the Army's global network. The RHNs operate "in sanctuary," or out of the fight zone, and were designed to provide division, brigade combat teams and below early access to the Global Information Grid, the infrastructure and services that move information through the global network. The RHN gives the Soldier in the field immediate access to secure and non-secure internet and voice communications, and it allows them to do their job anywhere on the globe. To provide tactical users with secure, reliable connectivity worldwide, the Army has positioned RHNs in five separate strategic regions: Continental United States (CONUS) East and CONUS West, Central Command, European Command, and Pacific Command.
Capabilities
Currently used by both deployed Marine Corps and Army units
By enabling forces to mobilize without having to develop their own transport and network access solutions, the cycle time for deployment decreases and Soldiers can focus on the assignment at hand.
Reduces the amount of in-theater support required
Promotes interoperability and a true global network infrastructure
Serve as a gateway to quickly connect expeditionary forces and their tactical Information Technology systems into the enterprise network, giving them access to the network as soon as boots hit the ground
SIPR/NIPR access points
Certain locations in theater create unique satellite communication requirements that cause the need for SIPR/NIPR access points (SNAP) to be fielded to augment current program of record solutions. Project Manager Warfighter Information Network-Tactical (PM WIN-T) is bridging gaps in C4ISR created by rugged terrain and sparse infrastructure by deploying these transportable commercial-off-the-shelf Very Small Aperture Terminal (VSAT) satellite terminals that can deploy much more quickly than their traditional counterparts.
SNAP terminals provide reliable satellite communication access and take advantage of commercial equipment to expedite the fielding process. They provide access to the tactical and strategic networks for mission command, call for fire, Medevac and information exchange. SNAPs are a key communications component for units, providing secure beyond-line-of-sight communications at the company level and below. SNAPs are designed to provide satellite communications to small units at remote forward operating bases where they are unable to use terrestrial radios due to issues with terrain or distance.
Capabilities
Work in concert with WIN-T Increments 1 and 2
Weigh 1,200 - 1,300 pounds and fit into eight transit cases, which can be transported in the back of High Mobility Multipurpose Wheeled Vehicles or helicopters
Modular design allows for varying dish and antenna sizes to appropriately satisfy mission requirements
Easy to move around the battlefield, providing an expeditionary element to the force
Certified Ka and X-band capability to take advantage of the Department of Defense's Wideband Global SATCOM satellites
Deployable Ku Band Earth Terminals
Deployable Ku Band Earth Terminals (DKET) are used at higher headquarters levels, and their role is to transmit tactical communications information out of theater. Some DKETs take on a dual role, serving as hubs for smaller earth terminals while also passing along other communications traffic. DKETs also provide hub services for disadvantaged forward operating bases.
DKETs are satellite terminals designed for use at larger hub locations. They support commercial Ku-Band frequencies, and have recently been certified for Ka and X band capability to take advantage of U.S. military satellites. They are highly transportable, self-contained and can establish headquarters-level, network-hub connectivity anywhere a mission demands.
Capabilities
DKETs are currently deployed in three configurations: Light (3.7 – 3.9M), Mobile (4.5M) and standard (4.6M – 7M), with the majority being the light design. This lighter design has a tri-fold antenna and a smaller shelter to make redeployment and setup faster and easier.
The robust DKET network makes for a seamless transition to backup equipment or terminals, eliminates long outages and minimizes impact to the Soldier.
DKETs operate on Ku, Ka and X-band frequencies.
Electronics are housed in separate shelters
Secure, Mobile, Anti-Jam, Reliable, Tactical - Terminal (SMART-T)
Secure, Mobile, Anti-Jam, Reliable, Tactical - Terminal (SMART-T) provides tactical users with secure, survivable, anti-jam, satellite communications in a High Mobility Multi-Purpose Wheeled Vehicle (HMMWV) configuration using equipment to communicate at Extremely High Frequency (EHF) and processes data and voice communications at both low and medium EHF data rates. SMART-Ts are being modified to communicate over Advanced EHF (AEHF) satellite, which significantly increases data rates for future tactical communications networks.
The SMART-T makes it possible for Soldiers to extend the range of their network in such a manner that communications cannot be jammed, detected or intercepted. Soldiers at the brigade echelon and above can send text, data, voice and video communications beyond their area of operations without worrying that the information will fall into the hands of enemy forces.
Capabilities
Interoperable with future AEHF satellite constellation
Enhanced system interfaces
Provides Low and Medium Data Rate (LDR/MDR) capability for voice and data transmission
Interoperable with Milstar, UHF Follow-On, EHF MIL-STD 1582D and MIL-STD 188-136 compatible payloads
Provides Anti-Jam and anti-scintillation (nuclear environment) communications
Part of the WIN-T architecture and is compatible with both WIN-T Increments 1 and 2 and corresponding equipment
Phoenix/Super High Frequency (SHF)
Phoenix/Super High Frequency (SHF) provides multi-band capability in the SHF range that operates over commercial and military SHF satellites for Army expeditionary signal battalions and is the Soldier's primary means of reach-back communications.
Phoenix/SHF is a tactical satellite terminal that operates using various military and commercial frequencies and allows Soldiers to transmit and receive high bandwidth voice, video and data similar to shipboard communications. It is designed to operate 24 hours per day, seven days per week and provides assured and reliable communications throughout the world.
Capabilities
Operates in military X and Ka Band and commercial C and Ku Bands with data rate up to 20 Mbit/s (50Mbit/s with "D" terminal)
Qualified for the military environment: temperature, shock, vibration
High-capacity, inter- and intra-theater data range extension over commercial and military satellites
Can interface with other strategic networks via Standardized Tactical Entry Points or strategic assets
Provides highly mobile, strategically transportable, wideband communications capability and displaces selected AN/TSC-85/93 terminals at expeditionary signal battalions and complements the AN/TSC-85/93 Service Life Extension Program
Global Broadcast Service (GBS)
Global Broadcast Service (GBS) provides high-speed broadcast of large-volume information products such as video, imagery, maps and weather data to deployed Tactical Operations Centers and garrisoned forces worldwide. This wealth of critical information informs and educates the Soldier.
GBS provides high-speed, one way flow of multimegabit video and data products including National Television Standards Committee (NTSC) video, large data files, map files and web products. GBS operates as a system of broadcast sites with multiple receive suite types.
Capabilities
Operates under the UHF Follow-On (UFO) Ka band satellites and the Wideband Global Satellite system, augmented as required by commercial Ku band satellites
Transportable Ground Receive Suites allow deployed forces to directly receive national level data and full motion video and distribute to TOC local area network users
Theater Injection Point provides the Combatant Command/ Combined Joint Force Command an in-theater uplink capability that broadcasts live UAV and other video feeds as well as data products generated in theater
Transitioning to Joint IP Modem and moving Satellite Broadcast Manager to the Defense Information Systems Agency's enterprise computing center sites
Standardized Integrated Command Post System (SICPS)
The Standardized Integrated Command Post System (SICPS) provides commanders with integrated Command Post capability including all supporting equipment and tools to enhance the mission command decision making process across all phases of the operation. SICPS provides fully integrated, digitized, and interoperable Tactical Operations Centers for use by joint, interagency and multinational Soldiers and civilian crisis management teams. It includes legacy Command Posts (CP), Command Post Platforms, shelters, common shelters, and fixed CP facilities.
SICPS consists of the integration of approved and fielded mission command and other C4ISR systems technology into platforms supporting the operational needs of the current heavy, light, and Stryker Brigade Combat Team forces as well as requirements of the future force. SICPS consists of various systems, specifically the Command Post Platform, which includes the Command Post Local Area Network and Command Post Communications System; the Command Center System; and the Trailer Mounted Support System (TMSS).
Capabilities
Standard, mobile, interoperable, and network centric
Fully integrated mission command systems, communications equipment, local area networks (LAN), and intercom systems into a standard Army platform
TMSS includes Army standard family of shelters, Environmental Control Unit and power generation
Connectivity to tactical Internet
Displays the Common Operational Picture (COP) to combined and Joint/coalition command and control nodes
Integrates satellite communications and secure wireless LAN capabilities
Harbormaster Command and Control Center (HCCC)
Harbormaster Command and Control Center (HCCC) provides synchronization and control of Army watercraft distribution assets to ensure that water delivery of assets is precise, flexible and responsive to sustaining tailored forces operating in a dynamic environment. The HCCC program provides the US Army Harbormaster Detachments with a deployable mission command system that enables situational awareness and maintains real-time tracking of Army watercraft distribution assets and their cargos.
HCCC is a new deployable and tactically mobile system used to manage harbors, ports and beaches—the littoral environment—in Overseas Contingency Operations. It provides the Army logistician the sensors and knowledge management tools to establish and maintain situational awareness and mission command even in a chaotic shipping environment. HCCC allows logistics commanders to command and control within harbors, ports and shipping channels ensuring route security as Army logistics transitions from sea to shore.
Capabilities
Battle Command - HCCC enables Commanders to maintain visibility, exercise authority and direction over Army Watercraft operations
Situational Awareness - HCCC enables collaboration between logistics and maneuver forces and provides the ability to collect information on the local operational environment
Stability Operations - HCCC enables Army Watercraft to collaborate with and support Joint, Coalition, and non-DoD mission partners
Agile Sustainment - HCCC enables mobile, deployable, networked, multi-site mission command throughout the littoral operational environment
Mission
Project Manager (PM) Warfighter Information Network-Tactical (WIN-T) designs, acquires, fields and supports fully integrated and cost effective tactical networks and services that meet Soldier capability needs while sustaining a world class workforce. PM WIN-T will incrementally develop and deliver products that simplify network initialization and management and significantly increase capabilities.
References
External links
PEO C3T Public Site
2012 Army Posture Statement
United States Army organization |
18345539 | https://en.wikipedia.org/wiki/Gutsy%20Geeks | Gutsy Geeks | Gutsy Geeks is a weekly radio show dedicated to explaining the benefits that the "Average Computer User" would experience by switching to Free and Open Source software. The show is designed to help introduce people to Linux and FOSS. These open technologies help listeners save money while providing a more reliable and secure computing experience. The hosts attempt to keep the show at a constant novice level accessible by anyone, while at the same time providing new information to keep the show interesting for long time listeners. The primary audience for this program are computer novices and/or former Microsoft Windows users being introduced to the Linux and Open Source world for the first time. The hosts of the show are Michael Cady, Nick Coons, and Richard "Mr. Modem" Sherman.
History
Gutsy Geeks was the first broadcast radio show focusing on Linux and open-source computing. The show, including its previous incarnation as PC Chat, has been on the air since August 2001. The show's home base is the AM station KFNX 1100 AM, serving the greater Phoenix, Arizona metropolitan area.
Media coverage
LXer from Linux News praised the show for its "distinctive personalities and technical expertise". DesktopLinux from eWeek described the show as one that "aims to promote and teach Linux to newcomers while also providing tips to intermediate-level Linux users".
External links
The Gutsy Geeks Website
References
American radio programs
Audio podcasts |
44283951 | https://en.wikipedia.org/wiki/Amit%20Sahai | Amit Sahai | Amit Sahai (; born 1974) is an American computer scientist. He is a professor of computer science at UCLA and the director of the Center for Encrypted Functionalities.
Biography
Amit Sahai was born in 1974 in Thousand Oaks, California, to parents who had immigrated from India. He received a B.A. in mathematics with a computer science minor from the University of California, Berkeley, summa cum laude, in 1996.
At Berkeley, Sahai was named Computing Research Association Outstanding Undergraduate of the Year, North America, and was a member of the three-person team that won first place in the 1996 ACM International Collegiate Programming Contest.
Sahai received his Ph.D. in Computer Science from MIT in 2000, and joined the computer science faculty at Princeton University. In 2004 he moved to UCLA, where he currently holds the position of Professor of Computer Science.
Research and recognition
Amit Sahai's research interests are in security and cryptography, and theoretical computer science more broadly. He has published more than 100 original technical research papers.
Notable contributions by Sahai include:
Obfuscation. Sahai is a co-inventor of the first candidate general-purpose indistinguishability obfuscation schemes, with security based on a mathematical conjecture. This development generated much interest in the cryptography community and was called "a watershed moment for cryptography." Earlier, Sahai co-authored a seminal paper formalizing the notion of cryptographic obfuscation and showing that strong forms of this notion are impossible to realize.
Functional Encryption. Sahai co-authored papers which introduced attribute-based encryption and functional encryption.
Results on Zero-Knowledge Proofs. Sahai co-authored several important results on zero-knowledge proofs, in particular introducing the concept of concurrent zero-knowledge proofs. Sahai also co-authored the paper that introduced the MPC-in-the-head technique for using secure multi-party computation (MPC) protocols for efficient zero-knowledge proofs.
Results on Secure Multi-Party Computation. Sahai is a co-author on many important results on MPC, including the first universally composably secure MPC protocol, the first such protocol that avoided the need for trusted set-ups (using "Angel-aided simulation") and the IPS compiler for building efficient MPC protocols. He is also a co-editor of a book on the topic.
Sahai has given a number of invited talks including the 2004 Distinguished Cryptographer Lecture Series at NTT Labs, Japan. He was named an Alfred P. Sloan Foundation Research Fellow in 2002, received an Okawa Research Grant Award in 2007, a Xerox Foundation Faculty Award in 2010, and a Google Faculty Research Award in 2010. His research has been covered by several news agencies including the BBC World Service.
Sahai was elected as an ACM Fellow in 2018 for "contributions to cryptography and to the development of indistinguishability obfuscation".
In 2019, he was named a Fellow of the International Association for Cryptologic Research for "fundamental contributions, including to secure computation, zero knowledge, and functional encryption, and for service to the IACR."
Sahai was named a Simons Investigator by the Simons Foundation in 2021. He was also named a Fellow of the Royal Society of Arts.
In 2022, he received the Michael and Sheila Held Prize from the National Academy of Sciences for "outstanding, innovative, creative, and influential research in the areas of combinatorial and discrete optimization, or related parts of computer science, such as the design and analysis of algorithms and complexity theory."
References
Modern cryptographers
University of California, Berkeley alumni
MIT School of Engineering alumni
Theoretical computer scientists
Living people
1974 births
People from Thousand Oaks, California
Princeton University faculty
UCLA Henry Samueli School of Engineering and Applied Science faculty
Fellows of the Association for Computing Machinery
Fellows of the Royal Society of Arts
Competitive programmers |
25572284 | https://en.wikipedia.org/wiki/Rexx | Rexx | Rexx (Restructured Extended Executor) is a programming language that can be interpreted or compiled. It was developed at IBM by Mike Cowlishaw. It is a structured, high-level programming language designed for ease of learning and reading. Proprietary and open source Rexx interpreters exist for a wide range of computing platforms; compilers exist for IBM mainframe computers.
Rexx is a full language that can be used as a scripting, macro language, and application development language. It is often used for processing data and text and generating reports; these similarities with Perl mean that Rexx works well in Common Gateway Interface (CGI) programming and it is indeed used for this purpose. Rexx is the primary scripting language in some operating systems, e.g. OS/2, MVS, VM, AmigaOS, and is also used as an internal macro language in some other software, such as SPFPC, KEDIT, THE and the ZOC terminal emulator. Additionally, the Rexx language can be used for scripting and macros in any program that uses Windows Scripting Host ActiveX scripting engines languages (e.g. VBScript and JScript) if one of the Rexx engines is installed.
Rexx is supplied with VM/SP Release 3 on up, TSO/E Version 2 on up, OS/2 (1.3 and later, where it is officially named Procedures Language/2), AmigaOS Version 2 on up, PC DOS (7.0 or 2000), ArcaOS, and Windows NT 4.0 (Resource Kit: Regina). REXX scripts for OS/2 share the filename extension .cmd with other scripting languages, and the first line of the script specifies the interpreter to be used. REXX macros for REXX-aware applications use extensions determined by the application. In the late 1980s, Rexx became the common scripting language for IBM Systems Application Architecture, where it was renamed "SAA Procedure Language REXX".
A Rexx script or command is sometimes referred to as an EXEC in a nod to the CMS file type used for EXEC, EXEC 2 and REXX scripts on CP/CMS and VM/370 through z/VM.
Features
Rexx has the following characteristics and features:
Simple syntax
The ability to route commands to multiple environments
The ability to support functions, procedures and commands associated with a specific invoking environment.
A built-in stack, with the ability to interoperate with the host stack if there is one.
Small instruction set containing just two dozen instructions
Freeform syntax
Case-insensitive tokens, including variable names
Character string basis
Dynamic data typing, no declarations
No reserved keywords, except in local context
No include file facilities
Arbitrary numerical precision
Decimal arithmetic, floating-point
A rich selection of built-in functions, especially string and word processing
Automatic storage management
Crash protection
Content addressable data structures
Associative arrays
Straightforward access to system commands and facilities
Simple error-handling, and built-in tracing and debugger
Few artificial limitations
Simplified I/O facilities
Unconventional operators
Only partly supports Unix style command line parameters, except in specific implementations
Provides no basic terminal control as part of the language, except in specific implementations
Provides no generic way to include functions and subroutines from external libraries, except in specific implementations
Rexx has just twenty-three, largely self-evident, instructions (such as call, parse, and select) with minimal punctuation and formatting requirements. It is essentially an almost free-form language with only one data-type, the character string; this philosophy means that all data are visible (symbolic) and debugging and tracing are simplified.
Rexx's syntax looks similar to PL/I, but has fewer notations; this makes it harder to parse (by program) but easier to use, except for cases where PL/I habits may lead to surprises. One of the REXX design goals was the principle of least astonishment.
History
pre-1990
Rexx was designed and first implemented, in assembly language, as an 'own-time' project between 20 March 1979 and mid-1982 by Mike Cowlishaw of IBM, originally as a scripting programming language to replace the languages EXEC and EXEC 2. It was designed to be a macro or scripting language for any system. As such, Rexx is considered a precursor to Tcl and Python. Rexx was also intended by its creator to be a simplified and easier to learn version of the PL/I programming language. However, some differences from PL/I may trip up the unwary.
It was first described in public at the SHARE 56 conference in Houston, Texas, in 1981, where customer reaction, championed by Ted Johnston of SLAC, led to it being shipped as an IBM product in 1982.
Over the years IBM included Rexx in almost all of its operating systems (VM/CMS, MVS TSO/E, IBM i, VSE/ESA, AIX, PC DOS, and OS/2), and has made versions available for Novell NetWare, Windows, Java, and Linux.
The first non-IBM version was written for PC DOS by Charles Daney in 1984/5 and marketed by the Mansfield Software Group (founded by Kevin J. Kearney in 1986). The first compiler version appeared in 1987, written for CMS by Lundin and Woodruff. Other versions have also been developed for Atari, AmigaOS, Unix (many variants), Solaris, DEC, Windows, Windows CE, Pocket PC, DOS, Palm OS, QNX, OS/2, Linux, BeOS, EPOC32/Symbian, AtheOS, OpenVMS, Apple Macintosh, and Mac OS X.
The Amiga version of Rexx, called ARexx, was included with AmigaOS 2 onwards and was popular for scripting as well as application control. Many Amiga applications have an "ARexx port" built into them which allows control of the application from Rexx. One single Rexx script could even switch between different Rexx ports in order to control several running applications.
1990 to present
In 1990, Cathie Dager of SLAC organized the first independent Rexx symposium, which led to the forming of the REXX Language Association. Symposia are held annually.
Several freeware versions of Rexx are available. In 1992, the two most widely used open-source ports appeared: Ian Collier's REXX/imc for Unix and Anders Christensen's Regina (later adopted by Mark Hessling) for Windows and Unix. BREXX is well known for WinCE and Pocket PC platforms, and has been "back-ported" to VM/370 and MVS.
OS/2 has a visual development system from Watcom VX-REXX. Another dialect was VisPro REXX from Hockware.
Portable Rexx by Kilowatt and Personal Rexx by Quercus are two Rexx interpreters designed for DOS and can be run under Windows as well using a command prompt. Since the mid-1990s, two newer variants of Rexx have appeared:
NetRexx: compiles to Java byte-code via Java source code; this has no reserved keywords at all, and uses the Java object model, and is therefore not generally upwards-compatible with 'classic' Rexx.
Object REXX: an object-oriented generally upwards-compatible version of Rexx.
In 1996 American National Standards Institute (ANSI) published a standard for Rexx: ANSI X3.274–1996 "Information Technology – Programming Language REXX". More than two dozen books on Rexx have been published since 1985.
Rexx marked its 25th anniversary on 20 March 2004, which was celebrated at the REXX Language Association's 15th International REXX Symposium in Böblingen, Germany, in May 2004.
On October 12, 2004, IBM announced their plan to release their Object REXX implementation's sources under the Common Public License. Recent releases of Object REXX contain an ActiveX Windows Scripting Host (WSH) scripting engine implementing this version of the Rexx language.
On February 22, 2005, the first public release of Open Object Rexx (ooRexx) was announced. This product contains a WSH scripting engine which allows for programming of the Windows operating system and applications with Rexx in the same fashion in which Visual Basic and JScript are implemented by the default WSH installation and Perl, Tcl, Python third-party scripting engines.
REXX was listed in the TIOBE index as one of the fifty languages in its top 100 not belonging to the top 50.
In 2019, the 30th Rexx Language Association Symposium marked the 40th anniversary of Rexx. The symposium was held in Hursley, England, where Rexx was first designed and implemented.
Toolkits
Rexx/Tk, a toolkit for graphics to be used in Rexx programmes in the same fashion as Tcl/Tk is widely available.
A Rexx IDE, RxxxEd, has been developed for Windows. RxSock for network communication as well as other add-ons to and implementations of Regina Rexx have been developed, and a Rexx interpreter for the Windows command line is supplied in most Resource Kits for various versions of Windows and works under all of them as well as DOS.
Spelling and capitalization
Originally the language was called Rex (Reformed Executor); the extra "X" was added to avoid collisions with other products' names. REX was originally all uppercase because the mainframe code was uppercase oriented. The style in those days was to have all-caps names, partly because almost all code was still all-caps then. For the product it became REXX, and both editions of Mike Cowlishaw's book use all-caps. The expansion to REstructured eXtended eXecutor was used for the system product in 1984.
Syntax
Looping
The loop control structure in Rexx begins with a DO and ends with an END but comes in several varieties. NetRexx uses the keyword LOOP instead of DO for looping, while ooRexx treats LOOP and DO as equivalent when looping.
Conditional loops
Rexx supports a variety of traditional structured-programming loops while testing a condition either before (do while) or after (do until) the list of instructions is executed:
do while [condition]
[instructions]
end
do until [condition]
[instructions]
end
Repetitive loops
Like most languages, Rexx can loop while incrementing an index variable and stop when a limit is reached:
do index = start [to limit] [by increment] [for count]
[instructions]
end
The increment may be omitted and defaults to 1. The limit can also be omitted, which makes the loop continue forever.
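For illustration, a counted loop that steps by two (the variable name used here is arbitrary); omitting the by clause would make the increment default to 1:
do i = 1 to 10 by 2
   say i   /* displays 1, 3, 5, 7 and 9 on separate lines */
end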
Rexx permits counted loops, where an expression is computed at the start of the loop and the instructions within the loop are executed that many times:
do expression
[instructions]
end
Rexx can even loop until the program is terminated:
do forever
[instructions]
end
A program can break out of the current loop with the leave instruction, which is the normal way to exit a do forever loop, or can short-circuit it with the iterate instruction.
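A brief sketch of both instructions (the variable name and the limit used here are illustrative):
i = 0
do forever
   i = i + 1
   if i // 2 = 0 then iterate   /* skip the say for even numbers */
   if i > 9 then leave          /* exit the loop once i exceeds 9 */
   say i                        /* displays 1, 3, 5, 7 and 9 */
end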
Combined loops
Like PL/I, Rexx allows both conditional and repetitive elements to be combined in the same loop:
do index = start [to limit] [by increment] [for count] [while condition]
[instructions]
end
do expression [until condition]
[instructions]
end
Conditionals
Testing conditions with IF:
if [condition] then
do
[instructions]
end
else
do
[instructions]
end
The ELSE clause is optional.
For single instructions, DO and END can also be omitted:
if [condition] then
[instruction]
else
[instruction]
Indentation is optional, but it helps improve the readability.
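A short illustrative example (the variable name and threshold are arbitrary):
score = 85
if score >= 60 then
   say 'Pass'   /* displayed, because 85 >= 60 */
else
   say 'Fail'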
Testing for multiple conditions
SELECT is Rexx's CASE structure, like many other constructs derived from PL/I. Like some implementations of CASE constructs in other dynamic languages, Rexx's WHEN clauses specify full conditions, which need not be related to each other. In that, they are more like cascaded sets of IF-THEN-ELSEIF-THEN-...-ELSE code than they are like the C or Java switch statement.
select
when [condition] then
[instruction] or NOP
when [condition] then
do
[instructions] or NOP
end
otherwise
[instructions] or NOP
end
The NOP instruction performs "no operation", and is used when the programmer wishes to do nothing in a place where one or more instructions would be required.
The OTHERWISE clause is optional. If omitted and no WHEN conditions are met, then the SYNTAX condition is raised.
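A small illustrative example (the variable and its values are arbitrary):
day = 3
select
   when day = 1 then say 'Monday'
   when day = 2 then say 'Tuesday'
   otherwise say 'Later in the week'   /* displayed, since neither WHEN condition is met */
end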
Simple variables
Variables in Rexx are typeless, and initially are evaluated as their names, in upper case. Thus a variable's type can vary with its use in the program:
say hello /* => HELLO */
hello = 25
say hello /* => 25 */
hello = "say 5 + 3"
say hello /* => say 5 + 3 */
interpret hello /* => 8 */
drop hello
say hello /* => HELLO */
Compound variables
Unlike many other programming languages, classic Rexx has no direct support for arrays of variables addressed by a numerical index. Instead it provides compound variables. A compound variable consists of a stem followed by a tail. A . (dot) is used to join the stem to the tail. If the tails used are numeric, it is easy to produce the same effect as an array.
do i = 1 to 10
stem.i = 10 - i
end
Afterwards the following variables with the following values exist: stem.1 = 9, stem.2 = 8, stem.3 = 7...
Unlike arrays, the index for a stem variable is not required to have an integer value. For example, the following code is valid:
i = 'Monday'
stem.i = 2
In Rexx it is also possible to set a default value for a stem.
stem. = 'Unknown'
stem.1 = 'USA'
stem.44 = 'UK'
stem.33 = 'France'
After these assignments the term stem.3 would produce 'Unknown'.
The whole stem can also be erased with the DROP statement.
drop stem.
This also has the effect of removing any default value set previously.
By convention (and not as part of the language) the compound stem.0 is often used to keep track of how many items are in a stem, for example a procedure to add a word to a list might be coded like this:
add_word: procedure expose dictionary.
parse arg w
n = dictionary.0 + 1
dictionary.n = w
dictionary.0 = n
return
It is also possible to have multiple elements in the tail of a compound variable. For example:
m = 'July'
d = 15
y = 2005
day.y.m.d = 'Friday'
Multiple numerical tail elements can be used to provide the effect of a multi-dimensional array.
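For example, a sketch of a small multiplication table (the stem name matrix is illustrative):
do row = 1 to 3
   do col = 1 to 3
      matrix.row.col = row * col
   end
end
say matrix.2.3   /* => 6 */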
Features similar to Rexx compound variables are found in many other languages (including associative arrays in AWK, hashes in Perl and Hashtables in Java). Most of these languages provide an instruction to iterate over all the keys (or tails in Rexx terms) of such a construct, but this is lacking in classic Rexx. Instead it is necessary to keep auxiliary lists of tail values as appropriate. For example, in a program to count words the following procedure might be used to record each occurrence of a word.
add_word: procedure expose count. word_list
parse arg w .
count.w = count.w + 1 /* assume count. has been set to 0 */
if count.w = 1 then word_list = word_list w
return
and then later:
do i = 1 to words(word_list)
w = word(word_list,i)
say w count.w
end
At the cost of some clarity it is possible to combine these techniques into a single stem:
add_word: procedure expose dictionary.
parse arg w .
dictionary.w = dictionary.w + 1
if dictionary.w = 1 /* assume dictionary. = 0 */
then do
n = dictionary.0+1
dictionary.n = w
dictionary.0 = n
end
return
and later:
do i = 1 to dictionary.0
w = dictionary.i
say i w dictionary.w
end
Rexx provides no safety net here, so if one of the words happens to be a whole number less than dictionary.0 this technique will fail mysteriously.
Recent implementations of Rexx, including IBM's Object REXX and open-source implementations such as ooRexx, include a new language construct to simplify iteration over the values of a stem, or over another collection object such as an array, table or list.
do i over stem.
say i '-->' stem.i
end
Keyword instructions
PARSE
The PARSE instruction is particularly powerful; it combines some useful string-handling functions. Its syntax is:
parse [upper] origin [template]
where origin specifies the source:
arg (arguments, at top level tail of command line)
linein (standard input, e.g. keyboard)
pull (Rexx data queue or standard input)
source (info on how program was executed)
value (an expression) with: the keyword with is required to indicate where the expression ends
var (a variable)
version (version/release number)
and template can be:
list of variables
column number delimiters
literal delimiters
upper is optional; if specified, data will be converted to upper case before parsing.
Examples:
Using a list of variables as template:
myVar = "John Smith"
parse var myVar firstName lastName
say "First name is:" firstName
say "Last name is:" lastName
displays the following:
First name is: John
Last name is: Smith
Using a delimiter as template:
myVar = "Smith, John"
parse var myVar LastName "," FirstName
say "First name is:" firstName
say "Last name is:" lastName
also displays the following:
First name is: John
Last name is: Smith
Using column number delimiters:
myVar = "(202) 123-1234"
parse var MyVar 2 AreaCode 5 7 SubNumber
say "Area code is:" AreaCode
say "Subscriber number is:" SubNumber
displays the following:
Area code is: 202
Subscriber number is: 123-1234
A template can use a combination of variables, literal delimiters, and column number delimiters.
INTERPRET
The INTERPRET instruction evaluates its argument and treats its value as a Rexx statement. Sometimes INTERPRET is the clearest way to perform a task, but it is often used where clearer code is possible using, e.g., value().
Other uses of INTERPRET are Rexx's (decimal) arbitrary precision arithmetic (including fuzzy comparisons), use of the PARSE statement with programmatic templates, stemmed arrays, and sparse arrays.
/* demonstrate INTERPRET with square(4) => 16 */
X = 'square'
interpret 'say' X || '(4) ; exit'
SQUARE: return arg(1)**2
This displays 16 and exits. Because variable contents in Rexx are strings, including rational numbers with exponents and even entire programs, Rexx can interpret strings as evaluated expressions.
This feature could be used to pass functions as function parameters, such as passing SIN or COS to a procedure to calculate integrals.
Rexx offers only basic math functions like ABS, DIGITS, MAX, MIN, SIGN, RANDOM, and a complete set of hex plus binary conversions with bit operations. More complex functions like SIN were implemented from scratch or obtained from third party external libraries. Some external libraries, typically those implemented in traditional languages, did not support extended precision.
Later versions (non-classic) support CALL variable constructs. Together with the built-in function VALUE, CALL can be used in place of many cases of INTERPRET. This is a classic program:
/* terminated by input "exit" or similar */
do forever ; interpret linein() ; end
A slightly more sophisticated "Rexx calculator":
X = 'input BYE to quit'
do until X = 'BYE' ; interpret 'say' X ; pull X ; end
PULL is shorthand for parse upper pull. Likewise, ARG is shorthand for parse upper arg.
The power of the INTERPRET instruction had other uses. The Valour software package relied upon Rexx's interpretive ability to implement an OOP environment. Another use was found in an unreleased Westinghouse product called Time Machine that was able to fully recover following a fatal error.
NUMERIC
The NUMERIC instruction controls the precision (DIGITS), the comparison tolerance (FUZZ) and the exponent format (FORM) used in arithmetic, as the following examples show:
say digits() fuzz() form() /* => 9 0 SCIENTIFIC */
say 999999999+1 /* => 1.000000000E+9 */
numeric digits 10 /* only limited by available memory */
say 999999999+1 /* => 1000000000 */
say 0.9999999999=1 /* => 0 (false) */
numeric fuzz 3
say 0.99999999=1 /* => 1 (true) */
say 0.99999999==1 /* => 0 (false) */
say 100*123456789 /* => 1.23456789E+10 */
numeric form engineering
say 100*123456789 /* => 12.34567890E+9 */
say 53 // 7 /* => 4 (remainder of division) */
SIGNAL
The SIGNAL instruction is intended for abnormal changes in the flow of control (see the next section). However, it can be misused and treated like the GOTO statement found in other languages (although it is not strictly equivalent, because it terminates loops and other constructs). This can produce difficult-to-read code.
Error handling and exceptions
It is possible in Rexx to intercept and deal with errors and other exceptions, using the SIGNAL instruction. There are seven system conditions: ERROR, FAILURE, HALT, NOVALUE, NOTREADY, LOSTDIGITS and SYNTAX. Handling of each can be switched on and off in the source code as desired.
The following program will run until terminated by the user:
signal on halt;
do a = 1
say a
do 100000 /* a delay */
end
end
halt:
say "The program was stopped by the user"
exit
A signal on novalue statement intercepts uses of undefined variables, which would otherwise get their own (upper case) name as their value. Regardless of the state of the NOVALUE condition, the status of a variable can always be checked with the built-in function SYMBOL returning VAR for defined variables.
The VALUE function can be used to get the value of variables without triggering a NOVALUE condition, but its main purpose is to read and set environment variables, similar to POSIX getenv and putenv.
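A brief sketch (the variable name total is illustrative) of checking a variable's status with SYMBOL and reading it with VALUE:
say symbol('total') /* => LIT, total has not been assigned */
total = 3
say symbol('total') /* => VAR */
say value('total')  /* => 3 */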
Conditions
ERROR: Positive RC from a system command
FAILURE: Negative RC for a system command (e.g. command doesn't exist)
HALT: Abnormal termination
NOVALUE: An unset variable was referenced
NOTREADY: Input or output error (e.g. read attempts beyond end of file)
SYNTAX: Invalid program syntax, or some other error condition
LOSTDIGITS: Significant digits are lost (ANSI Rexx, not in TRL second edition)
When a condition is handled by SIGNAL ON, the SIGL and RC system variables can be analyzed to understand the situation. RC contains the Rexx error code and SIGL contains the line number where the error arose.
Beginning with Rexx version 4, conditions can be given names, and there is also a CALL ON construct. That is handy if external functions do not necessarily exist:
ChangeCodePage: procedure /* protect SIGNAL settings */
signal on syntax name ChangeCodePage.Trap
return SysQueryProcessCodePage()
ChangeCodePage.Trap: return 1004 /* windows-1252 on OS/2 */
See also
ISPF
XEDIT
Comparison of computer shells
Comparison of programming languages
References
Further reading
Callaway, Merrill. The ARexx Cookbook: A Tutorial Guide to the ARexx Language on the Commodore Amiga Personal Computer. Whitestone, 1992.
Callaway, Merrill. The Rexx Cookbook: A Tutorial Guide to the Rexx Language in OS/2 & Warp on the IBM Personal Computer. Whitestone, 1995.
Cowlishaw, Michael. The Rexx Language: A Practical Approach to Programming. Prentice Hall, 1990.
Cowlishaw, Michael. The NetRexx Language. Prentice Hall, 1997.
Daney, Charles. Programming in REXX. McGraw-Hill, TX, 1990.
Ender, Tom. Object-Oriented Programming With Rexx. John Wiley & Sons, 1997.
Fosdick, Howard. Rexx Programmer's Reference. Wiley/Wrox, 2005.
Gargiulo, Gabriel. REXX with OS/2, TSO, & CMS Features. MVS Training, 1999 (third edition 2004).
Goldberg, Gabriel and Smith, Philip H. The Rexx Handbook. McGraw-Hill, TX, 1992.
Goran, Richard K. REXX Reference Summary Handbook. CFS Nevada, Inc., 1997.
IBM Redbooks. Implementing Rexx Support in Sdsf. Vervante, 2007.
Kiesel, Peter C. Rexx: Advanced Techniques for Programmers. McGraw-Hill, TX, 1992.
Marco, Lou. ISPF/REXX Development for Experienced Programmers. CBM Books, 1995.
O'Hara, Robert P. and Gomberg, David Roos. Modern Programming Using Rexx. Prentice Hall, 1988.
Rudd, Anthony S. Practical Usage of TSO REXX. CreateSpace, 2012.
Schindler, William. Down to Earth Rexx. Perfect Niche Software, 2000.
External links
Mike Cowlishaw's home page
REXX language page at IBM
REXX Language Association
Rexx programming language at Open Hub
IBM software
Scripting languages
Text-oriented programming languages
Command shells
IBM mainframe operating systems
Cross-platform software
Programming languages created in 1979
Rexx |
40591576 | https://en.wikipedia.org/wiki/2X%20Software | 2X Software | 2X Software was a Maltese software company specializing in virtual desktop, application virtualization, application delivery, Remote Desktop Services, remote access and Mobile Device Management. On 25 February 2015, 2X Software was acquired by Parallels, Inc. The 2X products, Remote Application Server and Mobile Device Management, are now included in Parallels' offering.
Profile
The company has offices in the United States, Germany, UK, Australia and Malta. It develops software for the server-based computing market, in the application virtualization, remote desktop services and virtual desktop infrastructure space. With the acquisition of MDM from 3CX, the company extended its portfolio to include mobile device management.
In 2014 it was a finalist for Best of TechEd 2014, Govies Government Security Award 2014 and Datacenter ICT Application Product of the Year.
The company was acquired by Parallels, Inc. on 25 February 2015.
Products
2X Remote Application Server
2X RAS delivers virtual desktops and Windows applications hosted on hypervisors such as Microsoft Hyper-V, Citrix XenServer, VMware vSphere and others, to remote mobile or desktop devices.
2X RAS won Cloud Computing Magazine's 2012 Cloud Computing Excellence Award. 2X RAS has also won the Government Security Award 2014 and has been named to CRN's 2014 Virtualization 50 list.
The latest version of 2X RAS was released at the beginning of August 2014. The main feature updates are the management of Windows PCs as pseudo thin clients, and remote assistance.
After the acquisition by Parallels Inc., it was rebranded to Parallels RAS.
2X RDP Client
2X RDP Client provides remote desktop and application access for any web-enabled device, including Android, Chrome OS, Microsoft Windows, Linux, Windows Phone, Mac OS, HTML5 and iOS devices. 2X RAS delivers Microsoft Office applications to any remote or local users on the major OSs.
2X RDP Client connects to 2X RAS for Windows XP, Windows 7, Windows 8 and Windows 8.1.
2X Mobile Device Management
2X MDM was a mobile device management platform targeted at the BYOD market. It was available as a hosted (SaaS) solution or as an on-premises solution.
References
External links
Software companies of Malta
Remote administration software
Remote desktop
Virtualization software |
1649377 | https://en.wikipedia.org/wiki/Softcatal%C3%A0 | Softcatalà | Softcatalà is a non-profit association that promotes the use of the Catalan language on computing, Internet and new technologies. This association consists of computer specialists, philologists, translators, students and all kind of volunteers that work in the field of translating software into Catalan, in order to preserve this language in the English-controlled software environment. They also offer several linguistic tools to help users improve their language knowledge.
History
Softcatalà was born in 1997 as a group of volunteers with the aim of improving the presence of Catalan in new technologies. The first step was to translate the most important free and/or open-source programs (OpenOffice.org, Firefox, etc.) into Catalan. After that, they delivered several other projects, including the following:
1,500 English-Catalan words glossary for software translation.
Software translation style guide
Translation memory with more than 40,000 entries (including translations made by Softcatalà)
Spell-checker
Collaborations
In recent years, Softcatalà has collaborated with the terminology centre TERMCAT on standardizing new Catalan terms related to new technologies.
In 2001, they started collaborating with Google, which permitted the translation of the interface and, later, participation in the adaptation of the search engine for Catalan-language pages.
They have also worked on the popularization of Linux, translating GNOME and some installation and configuration tools of Mandriva and Fedora.
Web-page
The main Internet site for Softcatalà is only available in Catalan. It offers all the information about the group and explains its reasons and objectives.
The web page consists of six different sections:
Pantry: Links to software resources organized in different sections (Internet, multimedia, image, language…) for Windows, Linux and Mac.
Forums: Forums focused on answering questions related to the language used by programmers.
Spell-checker: On-line spell-checker available in general Catalan and also in Valencian. It only corrects orthographic mistakes.
Translator: Rule-based machine translator based on the technology of Apertium and Scale MT. It offers the option of translating from Catalan to Spanish and vice versa. There are also new versions being tested (French, English, Portuguese, Aranese Occitan and Aragonese to Catalan and vice versa).
Lists: some mail lists belonging to llistes.softcatala.org.
Projects: Projects in which Softcatalà is involved, including OpenOffice.org, Mozilla, GNOME, Ubuntu, Open Thesaurus-ca…
References
External links
Softcatalà web site (Catalan)
Official Twitter account
Official Telegram channel
Official GitHub repo
Free software
Non-profit organisations based in Spain
Catalan advocacy organizations |
46887711 | https://en.wikipedia.org/wiki/Custom%20firmware | Custom firmware | Custom firmware, also known as aftermarket firmware, is an unofficial new or modified version of firmware created by third parties on devices such as video game consoles and various embedded device types to provide new features or to unlock hidden functionality. In the video game console community, the term is often written as custom firmware or simply CFW, referring to an altered version of the original system software (also known as the official firmware or simply OFW) inside a video game console such as the PlayStation Portable, PlayStation 3, PlayStation Vita and Nintendo 3DS. Installing custom firmware typically requires bootloader unlocking.
Video game consoles
Custom firmware often allows homebrew applications or ROM image backups to run directly within the game console, unlike official firmware, which usually allows only signed or retail copies of software to run. Because custom firmware is often associated with software piracy, console manufacturers such as Nintendo and Sony have put significant effort into blocking custom firmware and other third-party devices and content from their game consoles.
PlayStation Portable, PlayStation 3 and PlayStation Vita
Custom firmware is commonly seen in the PlayStation Portable handhelds released by Sony. Notable custom firmware include M33 by Dark_AleX as well as those made by others such as the 5.50GEN series, Minimum Edition (ME/LME), and PRO.
Custom firmware is also seen in the PlayStation 3 console. Only early "Fat" and Slim (CECH-20xx to CECH-25xx) models are able to run custom firmware. Slim (CECH-30xx) and Super Slim models can only run HEN (Homebrew Enabler), which has functionality similar to a custom firmware.
The PlayStation Vita has eCFW, meaning custom firmware for the PSP running in the PS Vita's PSP emulator. These eCFWs include ARK, TN-V and, more recently, Adrenaline, which includes more features since it was hacked from the native side. In 2016 a team called Molecule released HENkaku for the PlayStation Vita, which alters the firmware of a PS Vita on version 3.60 and allows the creation of a custom firmware on the console. The team behind the original HENkaku has also released taiHEN, a framework on which the newest version of HENkaku runs. It is a way to load plugins at the system level, much as users were used to on the PSP, allowing them to change or add functions to their console. Enso is a bootloader vulnerability of the Vita that makes HENkaku permanent by running it at boot. So the Vita has a full CFW with HENkaku, taiHEN and Enso. Users on 3.60 can also update to 3.65 without losing HENkaku Enso.
Nintendo 3DS
The modding scene of the Nintendo 3DS primarily involves custom firmware (software which patches the official firmware "on the fly"), which requires an exploit to obtain control of the ARM9, the 3DS' security coprocessor, and, secondarily, flash cartridges, which emulate an original game cart (and which can only be used to play untouched game cart ROM backups). The most widely used CFW is currently Luma3DS, developed by Aurora Wright and TuxSH, which allows unsigned CIA (CTR Importable Archives) installation, and includes open-source rewritten system firmware modules and exception handling for homebrew software developers.
Other past and abandoned CFWs included Gateway (a proprietary CFW locked to a flash cartridge via DRM and the first publicly available one), Pasta, RxTools (the first free and widely used one), Cakes CFW (the first open-source CFW, which used a modularized approach for patches and was the inspiration for the following ones), ReiNAND, which Luma3DS was originally based on, and Corbenik; as of now, the only custom firmware still being developed is Luma3DS (previously known as AuReiNAND). 3DS CFWs used to rely on "EmuNAND"/"RedNAND", a feature that boots the system from an unpartitioned space of the SD card containing a copy of the 3DS' NAND memory. These EmuNANDs could protect the 3DS system from bricking, as the usual system NAND was unaffected if the EmuNAND no longer functioned properly or was otherwise unusable. EmuNANDs could also be updated separately from the usual system NAND, allowing users to have the latest system version on the EmuNAND while retaining the vulnerable version on the system NAND, thus making online play and Nintendo eShop access possible on outdated 3DS system versions.
EmuNANDs were obsoleted by the release of arm9loaderhax, a boot-time ARM9 exploit that allowed people to safely use SysNAND and update it, as CFWs started patching the OS' update code so that official updates wouldn't remove the exploit. However, this exploit required a downgrade to a very early system version to get the console's unique OTP, necessary for the installation.
On May 19, 2017 a new exploit basis called sighax was released, replacing arm9loaderhax and allowing users to get even earlier control of the system, granting code execution in the context of the bootROM and thus a cleaner environment, with no downgrades or OTP required. Boot9Strap, a user-friendly version of sighax, was released.
At the same time, another bootROM exploit called ntrboot was announced, which allows people to use a backdoor present in the bootROM to get full system control on any 3DS console regardless of the firmware version (as the bootROM can't be updated), only requiring a modified DS flash cartridge and a magnet. The initial release was on August 12, supporting the AceKard 2i and R4i Gold 3DS RTS cartridges.
Nintendo Switch
Currently, several custom firmwares for the Nintendo Switch console exist: Atmosphère, ReiNX and SX OS. The differences between them are largely inconsequential; Atmosphère remains in active development and is free and open-source software. ReiNX bases much of its code on Atmosphère but with some modifications to runtime components and a different bootloader, while SX OS is closed source and paid, but largely based on Atmosphère code despite assertions to the contrary.
Nintendo has made the Switch environment much more secure than previous consoles. Despite this, there exist notable bugs which lead to user exploits. Of these, the Nvidia Tegra stack bug (CVE-2018-6242) is the most well-exploited. It leverages the Recovery Mode (RCM) of the Switch unit in order to push unsigned/unverified payloads, in turn granting the user access to arbitrary code execution. This vulnerability has been further leveraged by users within the Switch hacking scene to reverse-engineer the firmware, leading to two other notable exploits: Nereba and Caffeine. While RCM is a hardware exploit, Nereba and Caffeine are software exploits and rely on the console being at or below specific firmware versions in order to make use of the exploits. RCM, being hardware related, merely relies on the console being vulnerable to that particular exploit and does not have a firmware requirement or range.
Due to Nvidia's disclosure of CVE-2018-6242, Nintendo was forced to address the vulnerability, and during late 2018 began manufacturing and distributing units which have been hardware patched and are unable to access the RCM vulnerability. Any unit manufactured during or after this time is likely to be hardware patched, including the Switch Lite and the newer "red box" Switches, and any unit which is hardware patched and running a relatively recent firmware is unlikely to be able to access custom firmware at this time or in the future due to the unusually secure software environment of the Switch.
Android
The practice of replacing the system partition of the Android operating system, usually mounted as read-only, with a modified version of Android is called "flashing a ROM". The procedure requires unlocking the bootloader (typically by exploiting vulnerabilities in the operating system), is generally not supported by device manufacturers, and requires some expertise. However, since about 2015 several manufacturers, including LG, Motorola, OnePlus, Google, Xiaomi, and Sony, support unlocking the bootloader (except on devices that are locked by some carriers). This bypasses secure boot, without the need for exploits. The "custom ROMs" installed may include different features, require less power, or offer other benefits to the user; devices no longer receiving official Android version updates can continue to be updated.
Other devices
Various other devices, such as digital cameras, wireless routers and smart TVs, may also run custom firmware. Examples of such custom firmware include:
Rockbox for portable media players
iPodLinux for iPod portable media players
CHDK and Magic Lantern for Canon digital cameras
Nikon Hacker project for Nikon EXPEED DSLRs
Coreboot and Libreboot for computers
Many third-party firmware projects for wireless routers, including:
LibreWRT project for Ben Nanonote, Buffalo WZR-HP-G300NH and other computers with minimal resources
OpenWrt, and its derivatives such as DD-WRT
RouterTech, for ADSL gateway routers based on the Texas Instruments AR7 chipset (with the Pspboot or Adam2 bootloader)
Cable Hack and Sigma for uncapping cable modems, but with dubious legality
Firmware that allows DVD drives to be region-free
SamyGO, modified firmware for Samsung smart TVs
See also
List of custom Android firmware
List of router firmware projects
Nintendo DS homebrew
PlayStation Portable homebrew
iOS Jailbreaking
References
Homebrew software
Video game development |
38521070 | https://en.wikipedia.org/wiki/Heterogeneous%20System%20Architecture | Heterogeneous System Architecture | Heterogeneous System Architecture (HSA) is a cross-vendor set of specifications that allow for the integration of central processing units and graphics processors on the same bus, with shared memory and tasks. The HSA is being developed by the HSA Foundation, which includes (among many others) AMD and ARM. The platform's stated aim is to reduce communication latency between CPUs, GPUs and other compute devices, and make these various devices more compatible from a programmer's perspective, relieving the programmer of the task of planning the moving of data between devices' disjoint memories (as must currently be done with OpenCL or CUDA).
CUDA and OpenCL as well as most other fairly advanced programming languages can use HSA to increase their execution performance. Heterogeneous computing is widely used in system-on-chip devices such as tablets, smartphones, other mobile devices, and video game consoles. HSA allows programs to use the graphics processor for floating point calculations without separate memory or scheduling.
Rationale
The rationale behind HSA is to ease the burden on programmers when offloading calculations to the GPU. Originally driven solely by AMD and called the FSA, the idea was extended to encompass processing units other than GPUs, such as other manufacturers' DSPs, as well.
Modern GPUs are very well suited to perform single instruction, multiple data (SIMD) and single instruction, multiple threads (SIMT), while modern CPUs are still being optimized for branching, etc.
Overview
Originally introduced by embedded systems such as the Cell Broadband Engine, sharing system memory directly between multiple system actors makes heterogeneous computing more mainstream. Heterogeneous computing itself refers to systems that contain multiple processing units: central processing units (CPUs), graphics processing units (GPUs), digital signal processors (DSPs), or any type of application-specific integrated circuit (ASIC). The system architecture allows any accelerator, for instance a graphics processor, to operate at the same processing level as the system's CPU.
Among its main features, HSA defines a unified virtual address space for compute devices: where GPUs traditionally have their own memory, separate from the main (CPU) memory, HSA requires these devices to share page tables so that devices can exchange data by sharing pointers. This is to be supported by custom memory management units. To render interoperability possible and also to ease various aspects of programming, HSA is intended to be ISA-agnostic for both CPUs and accelerators, and to support high-level programming languages.
So far, the HSA specifications cover:
HSA Intermediate Layer
HSAIL (Heterogeneous System Architecture Intermediate Language), a virtual instruction set for parallel programs
similar to LLVM Intermediate Representation and SPIR (used by OpenCL and Vulkan)
finalized to a specific instruction set by a JIT compiler
make late decisions on which core(s) should run a task
explicitly parallel
supports exceptions, virtual functions and other high-level features
debugging support
HSA memory model
compatible with C++11, OpenCL, Java and .NET memory models
relaxed consistency
designed to support both managed languages (e.g. Java) and unmanaged languages (e.g. C)
will make it much easier to develop 3rd-party compilers for a wide range of heterogeneous products programmed in Fortran, C++, C++ AMP, Java, et al.
HSA dispatcher and run-time
designed to enable heterogeneous task queueing: a work queue per core, distribution of work into queues, load balancing by work stealing
any core can schedule work for any other, including itself
significant reduction of overhead of scheduling work for a core
Mobile devices are one of the HSA's application areas, in which it yields improved power efficiency.
Block diagrams
The illustrations below compare CPU-GPU coordination under HSA versus under traditional architectures.
Software support
Some of the HSA-specific features implemented in the hardware need to be supported by the operating system kernel and specific device drivers. For example, support for AMD Radeon and AMD FirePro graphics cards, and APUs based on Graphics Core Next (GCN), was merged into version 3.19 of the Linux kernel mainline, released on 8 February 2015. Programs do not interact with the kernel driver directly, but queue their jobs utilizing the HSA runtime. This very first implementation focuses on "Kaveri" or "Berlin" APUs and works alongside the existing Radeon kernel graphics driver.
Additionally, the kernel driver supports heterogeneous queuing (HQ), which aims to simplify the distribution of computational jobs among multiple CPUs and GPUs from the programmer's perspective. Support for heterogeneous memory management (HMM), suited only for graphics hardware featuring version 2 of AMD's IOMMU, was accepted into the Linux kernel mainline version 4.14.
Integrated support for HSA platforms has been announced for the "Sumatra" release of OpenJDK, due in 2015.
AMD APP SDK is AMD's proprietary software development kit targeting parallel computing, available for Microsoft Windows and Linux. Bolt is a C++ template library optimized for heterogeneous computing.
GPUOpen comprises a couple of other software tools related to HSA. CodeXL version 2.0 includes an HSA profiler.
Hardware support
AMD
Initially, only AMD's "Kaveri" A-series APUs (cf. "Kaveri" desktop processors and "Kaveri" mobile processors) and Sony's PlayStation 4 allowed the integrated GPU to access memory via version 2 of AMD's IOMMU. Earlier APUs (Trinity and Richland) included the version 2 IOMMU functionality, but only for use by an external GPU connected via PCI Express.
Post-2015 Carrizo and Bristol Ridge APUs also include the version 2 IOMMU functionality for the integrated GPU.
ARM
ARM's Bifrost microarchitecture, as implemented in the Mali-G71, is fully compliant with the HSA 1.1 hardware specifications. However, ARM has not announced software support that would use this hardware feature.
See also
General-purpose computing on graphics processing units (GPGPU)
Non-Uniform Memory Access (NUMA)
OpenMP
Shared memory
Zero-copy
References
External links
by Vinod Tipparaju at SC13 in November 2013
HSA and the software ecosystem
2012 – HSA by Michael Houston
Heterogeneous computing |
630552 | https://en.wikipedia.org/wiki/Sasser%20%28computer%20worm%29 | Sasser (computer worm) | Sasser is a computer worm that affects computers running vulnerable versions of the Microsoft operating systems Windows XP and Windows 2000. Sasser spreads by exploiting the system through a vulnerable port. Thus it is particularly virulent in that it can spread without user intervention, but it is also easily stopped by a properly configured firewall or by downloading system updates from Windows Update. The specific hole Sasser exploits is documented by Microsoft in its MS04-011 bulletin, for which a patch had been released seventeen days earlier. The most characteristic experience of the worm is the shutdown timer that appears due to the worm crashing LSASS.
History and effects
Sasser was created on April 30, 2004. This worm was named Sasser because it spreads by exploiting a buffer overflow in the component known as LSASS (Local Security Authority Subsystem Service) on the affected operating systems. The worm scans different ranges of IP addresses and connects to victims' computers primarily through TCP port 445. Microsoft's analysis of the worm indicates that it may also spread through port 139. Several variants called Sasser.B, Sasser.C, and Sasser.D appeared within days (with the original named Sasser.A). The LSASS vulnerability was patched by Microsoft in the April 2004 installment of its monthly security packages, prior to the release of the worm. Some technology specialists have speculated that the worm writer reverse-engineered the patch to discover the vulnerability, which would open millions of computers whose operating system had not been upgraded with the security update.
The effects of Sasser include the news agency Agence France-Presse (AFP) having all its satellite communications blocked for hours and the U.S. airline Delta Air Lines having to cancel several transatlantic flights because its computer systems had been swamped by the worm. The Nordic insurance company If and their Finnish owners Sampo Bank came to a complete halt and had to close their 130 offices in Finland. The British Coastguard had its electronic mapping service disabled for a few hours, and Goldman Sachs, Deutsche Post, and the European Commission also all had issues with the worm. The X-ray department at Lund University Hospital had all their four layer X-ray machines disabled for several hours and had to redirect emergency X-ray patients to a nearby hospital. The University of Missouri was forced to "unplug" its network from the wider Internet in response to the worm.
Author
On 7 May 2004, 18-year-old German Sven Jaschan from Rotenburg, Lower Saxony, then a student at a technical college, was arrested for writing the worm. German authorities were led to Jaschan partly because of information obtained in response to a bounty offer by Microsoft of US$250,000.
One of Jaschan's friends had informed Microsoft that his friend had created the worm. He further revealed that not only Sasser, but also Netsky.AC, a variant of the Netsky worm, was his creation. Another variation of Sasser, Sasser.E, was found to be circulating shortly after the arrest. It was the only variation that attempted to remove other worms from the infected computer, much in the way Netsky does.
Jaschan was tried as a minor because the German courts determined that he created the worm before he was 18. The worm itself had been released on his 18th birthday (29 April 2004). Sven Jaschan was found guilty of computer sabotage and illegally altering data. On Friday, 8 July 2005, he received a 21-month suspended sentence.
Side effects
An indication of the worm's infection of a given PC is the existence of the files C:\win.log, C:\win2.log or C:\WINDOWS\avserve2.exe on the PC's hard disk, ftp.exe running randomly and 100% CPU usage, as well as seemingly random crashes with LSA Shell (Export Version) caused by faulty code used in the worm. The most characteristic symptom of the worm is the shutdown timer that appears due to the worm crashing LSASS.exe.
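As a minimal illustration (not from the original article), the presence of these indicator files could be checked with a short Python script run on the suspect machine; the paths are the ones named above:
import os

INDICATOR_FILES = [
    r"C:\win.log",
    r"C:\win2.log",
    r"C:\WINDOWS\avserve2.exe",
]

def sasser_indicators_present():
    # Return the subset of known Sasser indicator files that exist on disk.
    return [path for path in INDICATOR_FILES if os.path.exists(path)]

if __name__ == "__main__":
    found = sasser_indicators_present()
    if found:
        print("Possible Sasser infection; indicator files found:", found)
    else:
        print("No Sasser indicator files found.")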
Workarounds
The shutdown sequence can be aborted by pressing Start and using the Run command to enter shutdown -a. This aborts the system shutdown so the user may continue what they were doing. The shutdown.exe file is not available by default within Windows 2000, but can be installed from the Windows 2000 resource kit. It is available in Windows XP.
A second option to stop the worm from shutting down a computer is to change the time and/or date on its clock to earlier; the shutdown time will move as far into the future as the clock was set back.
See also
Blaster (computer worm)
Nachia (computer worm)
BlueKeep (security vulnerability)
Timeline of notable computer viruses and worms
External links
Microsoft Security Bulletin: MS04-011
Bugtraq ID 10108
Read here how you can protect your PC (Microsoft Security page) - Includes links to the info pages of major anti-virus companies.
New Windows Worm on the Loose (Slashdot article)
Report on the effects of the worm from the BBC
German admits creating Sasser (BBC News)
Sasser creator avoids jail term (BBC News)
Exploit-based worms
Hacking in the 2000s |
12142638 | https://en.wikipedia.org/wiki/6in4 | 6in4 | 6in4 is an IPv6 transition mechanism for migrating from Internet Protocol version 4 (IPv4) to IPv6. It is a tunneling protocol that encapsulates IPv6 packets on specially configured IPv4 links according to the specifications of . The IP protocol number for 6in4 is 41, per IANA reservation.
The 6in4 packet format consists of the IPv6 packet preceded by an IPv4 packet header. Thus, the encapsulation overhead is the size of the IPv4 header of 20 bytes. On Ethernet with a maximum transmission unit (MTU) of 1500 bytes, IPv6 packets of 1480 bytes may therefore be transmitted without fragmentation.
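A small illustrative calculation (a sketch restating the figures above, not part of the original text) of the encapsulation overhead in Python:
# 6in4 prepends a plain IPv4 header (no options) to every IPv6 packet.
IPV4_HEADER_BYTES = 20   # fixed encapsulation overhead per packet
ETHERNET_MTU = 1500      # typical Ethernet MTU in bytes

# Largest IPv6 packet that still fits in a single IPv4 packet on this link:
max_ipv6_packet = ETHERNET_MTU - IPV4_HEADER_BYTES
print(max_ipv6_packet)   # => 1480, matching the figure given above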
6in4 tunneling is also referred to as proto-41 static because the endpoints are configured statically. Although 6in4 tunnels are generally manually configured, the utility AICCU can configure tunnel parameters automatically after retrieving information from a Tunnel Information and Control Protocol (TIC) server.
The similarly named methods 6to4 or 6over4 describe a different mechanism. The 6to4 method also makes use of proto-41, but the endpoint IPv4 address information is derived from the IPv6 addresses within the IPv6 packet header, instead of from static configuration of the endpoints.
Network address translators
When an endpoint of a 6in4 tunnel is inside a network that uses network address translation (NAT) to external networks, the DMZ feature of a NAT router may be used to enable the service. Some NAT devices automatically permit transparent operation of 6in4.
Dynamic 6in4 tunnels and heartbeat
Even though 6in4 tunnels are static in nature, one can still have dynamic tunnel endpoints with the help of, for example, the heartbeat protocol. The heartbeat protocol signals the other side of the tunnel with its current endpoint location. A tool such as AICCU can then update the endpoints, in effect making the endpoint dynamic while still using the 6in4 protocol. Tunnels of this kind are generally called 'proto-41 heartbeat' tunnels.
Security issues
The 6in4 protocol has no security features; thus, one can inject IPv6 packets by spoofing the source IPv4 address of a tunnel endpoint and sending them to the other endpoint. This problem can partially be solved by implementing network ingress filtering (not near the exit point but close to the true source) or with IPsec.
The packet injection loophole of 6in4 mentioned above was exploited for research purposes in a method called IPv6 Tunnel Discovery, which allowed the researchers to discover operating IPv6 tunnels around the world.
Specifications
Transition Mechanisms for IPv6 Hosts and Routers, R. Gilligan and E. Nordmark, 1996
Transition Mechanisms for IPv6 Hosts and Routers, R. Gilligan and E. Nordmark, 2000
Basic Transition Mechanisms for IPv6 Hosts and Routers, R. Gilligan and E. Nordmark, 2005
See also
List of IPv6 tunnelbrokers
IP in IP: the equivalent protocol encapsulating IPv4 in IPv4
References
External links
How do I configure my machine to set up an IPv6 in IPv4 tunnel
6in4 and other tunnel setups on Debian
6in4 setup on Plan9 OS
Tunneling protocols
IPv6 transition technologies
Network protocols |
58109396 | https://en.wikipedia.org/wiki/Daniel%20Zingaro | Daniel Zingaro | Daniel Zingaro is an Associate Professor at the University of Toronto Mississauga. His main areas of research are in evaluating Computer science education and online learning. He has co-authored over 80 articles in peer-reviewed journals and conferences; and also authored a textbook, "Invariants: a Generative Approach to Programming.
Born visually impaired, Zingaro completed a B.Sc. and an M.Sc. in Computer Science at McMaster University. He then received a Ph.D. in Computer Science Education from the Ontario Institute for Studies in Education (OISE) at the University of Toronto. His master's thesis was about formalizing and proving properties of parsers. His doctoral thesis was titled "Evaluating Peer Instruction in First-year University Computer Science Courses". Daniel Zingaro has designed accessible computer games and published work in Computers & Education, the International Computing Education Research (ICER) conference, Computer Science Education, the British Journal of Educational Technology, and Transactions on Computing Education.
Selected publications
Awards
ICER Best Paper Award, 2014
SIGCSE 2016 best paper award
JOLT 2012 best paper award
References
Year of birth missing (living people)
Living people
University of Toronto alumni
Canadian computer scientists |
236575 | https://en.wikipedia.org/wiki/SIT | SIT | Sit commonly refers to sitting.
Sit, SIT or Sitting may also refer to:
Places
Sit (island), Croatia
Sit, Bashagard, a village in Hormozgan Province, Iran
Sit, Gafr and Parmon, a village in Hormozgan Province, Iran
Sit, Minab, a village in Hormozgan Province
Sit-e Bandkharas, a village in Hormozgan Province, Iran
Sit (river), a river in Russia
Organizations
Singapore Improvement Trust, a government public housing organization
Special Investigation Team (India), a team of Indian investigators for serious crimes.
Special Investigation Team, a specialized team of officers in Japanese law enforcement consisting of officers trained to investigate serious crimes with SWAT elements attached.
Strategic Information Technology, a Canadian banking software company
Educational organizations and certification
Salazar Institute of Technology, Cebu, Philippines
Schaffhausen Institute of Technology, Schaffhausen, Switzerland
School of Information Technology, King Mongkut's University of Technology Thonburi, Bangkok, Thailand
School of Information Technology, Kolkata, India
School for International Training, Brattleboro, Vermont
Shibaura Institute of Technology, Tokyo, Japan
Siddaganga Institute of Technology, Tumkur, Karnataka, India
Siliguri Institute of Technology, West Bengal, India
Singapore Institute of Technology, an autonomous university in Singapore
Southeastern Institute of Technology, Huntsville, Alabama, United States
Southern Institute of Technology, Invercargill, New Zealand
Stevens Institute of Technology, Hoboken, New Jersey, United States
Psychology
Sexual Identity Therapy
SIT-lite, pejorative term for a form of social identity theory
Structural Information Theory, a theory of human perception
Science and technology
Sea ice thickness
SIT or SITh, static induction thyristor
SIT, static induction transistor
SIT, Simple Internet Transition, an IPv6 over IPv4 tunneling protocol
.sit, or .sitx, file extensions used for compressed files created with StuffIt
System integration testing, a process in software engineering
Special information tones (telephony), a three beep signal indicating a call did not go through
Specific ion Interaction Theory, a theory for estimation of single-ion activity coefficients
Sprint interval training, a form of high-intensity interval training in which sprinting is interspersed with walking
Sterile insect technique, a technique for managing insect populations
Systematic Inventive Thinking, a practical methodology for innovation and creative problem solving
Systematic Inventive Thinking (company), a company based in Israel implementing the Systematic Inventive Thinking method in organizations
Other
Sino-Tibetan languages, the ISO 639-2 code
SIT, IATA code for Sitka Rocky Gutierrez Airport
Slovenian tolar, the ISO 4217 code for the former currency of Slovenia
SİT areas in Turkey, archaeological sites in Turkey
Sit (surname), Chinese surname
"Sitting", a song from Cat Stevens' album Catch Bull at Four |
3841734 | https://en.wikipedia.org/wiki/Defaults%20%28software%29 | Defaults (software) | defaults is a command line utility that manipulates plist files. Introduced in 1998 OPENSTEP, defaults is found in the system's descendants macOS and GNUstep.
The name "defaults" derives from OpenStep's name for user preferences, Defaults, or NSUserDefaults in Foundation Kit. Each application had its own defaults plist ("domain"), under for the user configuration and for the system configuration. The lookup system also supports a , where defaults written there will be seen by all applications. In macOS, the part of the path is replaced by the more intuitive . defaults accesses the plists based on the domain given.
defaults is also able to read and write any plist specified with a path, although Apple plans to phase out this ability in a future version.
Usage
Common uses of defaults:
$ defaults read DOMAIN # gets all
$ defaults read DOMAIN PROPERTY_NAME # gets
$ defaults write DOMAIN PROPERTY_NAME VALUE # sets
$ defaults delete DOMAIN PROPERTY_NAME # resets a property
$ defaults delete DOMAIN # resets preferences
DOMAIN should be replaced by the plist file name sans extension ('.plist'). plist files are named with reverse domain name notation. For example:
$ defaults read com.apple.iTunes # prints all iTunes preference values
plist files store keys and values. The PROPERTY_NAME key is the name of the property to modify. For example, to remove the search field from Safari's address bar:
$ defaults write com.apple.Safari AddressBarIncludesGoogle 0
$ # or
$ defaults write com.apple.Safari AddressBarIncludesGoogle -bool NO # case-sensitive!
Using "1", "YES", or "TRUE" instead restores this to the default of including search.
Preferences can at times corrupt applications. To reset Address Book's preferences, either the file ~/Library/Preferences/com.apple.AddressBook.plist must be removed or the following command issued:
$ defaults delete com.apple.AddressBook
Compound values
defaults prints values in the OpenStep format. It allows the VALUE to be arrays and dicts, as long as they conform to old-style plist syntax.
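For example (the domain and key names below are hypothetical, and this is typical usage rather than a complete syntax reference), an array can be written either as an old-style plist literal or with a type flag:
$ defaults write com.example.MyApp RecentFiles '("a.txt", "b.txt")' # old-style plist array literal
$ defaults write com.example.MyApp RecentFiles -array a.txt b.txt # equivalent, using the -array type flag
$ defaults read com.example.MyApp RecentFiles # prints the array in OpenStep format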
Settings
Some example settings configurable with defaults under macOS:
SS64 documents a set of other keys that can be changed for individual applications (not the global domain) in macOS. Other sites also document settings that can be changed using defaults. Apple does not publish a complete list of these "secret knobs", but its support site does occasionally provide defaults commands for users to change a certain setting, such as the creation of .DS_Store files.
GNUstep documents its defaults more clearly, so that there is no such thing as a "hidden settings" community like there is for macOS.
References
NSUserDefaults documentation Apple Inc
MacOS software
GNUstep
Command-line software |
29078194 | https://en.wikipedia.org/wiki/MikuMikuDance | MikuMikuDance | MikuMikuDance (commonly abbreviated to MMD) is a freeware animation program that lets users animate and create 3D animated movies, originally produced for the Vocaloid character Hatsune Miku. The MikuMikuDance program itself was programmed by Yu Higuchi (HiguchiM) and has gone through significant upgrades since its creation. Its production was made as part of the VOCALOID Promotion Video Project (VPVP).
Overview
The software allows users to import 3D models into a virtual space that can be moved and animated accordingly. The positioning of the 3D figures can be easily altered, the facial expressions can be altered (as long as the model has morphs to use), and motion data can be applied to the model to make it move. Along with these functions for models, accessories, stages, and backgrounds can be added to create an environment, and effects such as lens flares and AutoLuminous (an effect that makes things glow and light up) can be applied as long as the MikuMikuEffect (MME) plugin is installed into the interface. Sound and music can also be added to create music videos, short films, and fan-made stories. The motion data used to animate the characters and the pose data mainly used for making screenshots can be exported as .vmd (Vocaloid Motion Data) files and .vpd (Vocaloid Pose Data) files, respectively. The exported files can then be imported into other projects made with software that can use the file types. This allows users to share the data with other users. The software also uses the Bullet physics engine. Users can also use Microsoft's Kinect for motion capturing. Map shadowing, screenshot rendering in several picture file formats and full movie rendering in the .avi file format are also possible.
With the exception of a few models, stages, motion data and accessories that come with the software upon download, all content, including the 3D models, is distributed by the users, meaning all rules and restrictions (or lack thereof) vary greatly from case to case. Most models' rules may be found in their Readme files, which may be .txt, PDF or webpage files. The creator, HiguchiM, has stated he can make no promises regarding how other users' fan models can or cannot be used, and is exempt from all responsibility relating to this subject. Models created by other users are often available for public download. As MikuMikuDance is exclusively a posing and animation software, modelers use 3D modeling software, such as Blender or Metasequoia, to create the model and UV map, while the majority of conversion to the MMD platform (such as facial morphs, bones and physical bodies) is done with a program made exclusively for MMD model conversion, PMD Editor or its successor PMX editor.
The software itself comes with a small number of models of well-known Vocaloids, an invisible grid to which particle effects can be attached in MME, a stage, some accessories, and two samples of what MMD can do, in the form of .pmm files, the file type that MMD projects are saved as. The software was originally only released in Japanese; however, an English version was released at a later date. Videos using the software are regularly seen on sites such as Nico Nico Douga and YouTube and are popular among Vocaloid fans and users alike. A magazine which hands out exclusive models with every issue was also produced owing to this popularity. Some models for Vocaloid may also be used for Vocaloid music, going on to be used by studios working with the Vocaloid software.
Many people also buy Windows100% magazines, which give out exclusive models to the public. These come out once every month, and due to their popularity, model creators also give out secret models, as well as the models people have paid for. Most of these tend to be Vocaloid models or models that do not have a particular copyright holder.
On May 26, 2011, continual updating of the software came to an end and the last version was released. In a closing statement, the creator left the software in the hands of the fans to continue building upon. Despite this, the source code has not been released, and the developer has no intentions of doing so, making it impossible for people to continue building upon the original software. However, there are alternative programs that provide similar functionality, such as MikuMikuMoving (MMD's "replacement" that is updated frequently and has many of the features of MMD, as well as new file formats unique to the program, support for the Oculus Rift head-mounted display and a new UI, among other features), and the free software, Blender.
Between then and now, there have been several additions to MMD version 7.39, mainly the addition of the x64 version, which runs better than the normal version and is designed to use the power of 64-bit computers that 32-bit computers lack. This results in better performance, faster render times, and higher quality, among other improvements.
However, on June 1, 2013, MikuMikuDance's creator began to release updates for the program very suddenly. After he began releasing updates again, there have been 20 new versions and the 64-bit versions of them. Before June 1, the latest version was 7.39, which was released on May 26, 2011. MMD ver. 7.39 received several program updates between its initial release and the time of ver. 7.39m's release. Most of these updates were only made to increase compatibility with newer, more advanced .pmx models. It is unknown why the creator began editing the software again. On December 10, 2019, version 9.32 was released which is the most current version.
In December 2014, Sekai Project announced that they had acquired permission to release MikuMikuDance on Steam. However, it remains unreleased.
The first anime television series to be fully produced with the software, Straight Title Robot Anime, premiered on February 5, 2013.
Copyright
The software was released as freeware. The models of the Vocaloid mascot series provided with the software are subject to the Piapro Character License, and are not allowed to be used without permission for commercial reasons. Although the software is distributed freely, models released independently of the software may not be — original produced models, motion data, and landscapes may be subject to their creator's own rules. The program does not include all of the Vocaloid characters by default, but it includes Crypton Future Media's Vocaloids which are Hatsune Miku, Kagamine Rin, Kagamine Len, KAITO, MEIKO, and Megurine Luka; and although Yowane Haku, Akita Neru, Sakine Meiko, and Kasane Teto are not official Vocaloids (Teto being an UTAU), they became so popular that Crypton officially licensed and added them to Project Diva.
References
External links
3D animation software
Creative works using vocaloids
Windows graphics-related software
Freeware game engines |
47674850 | https://en.wikipedia.org/wiki/Algebraic%20Eraser | Algebraic Eraser | Algebraic Eraser (AE) is an anonymous key agreement protocol that allows two parties, each having an AE public–private key pair, to establish a shared secret over an insecure channel. This shared secret may be directly used as a key, or to derive another key that can then be used to encrypt subsequent communications using a symmetric key cipher. Algebraic Eraser was developed by Iris Anshel, Michael Anshel, Dorian Goldfeld and Stephane Lemieux. SecureRF owns patents covering the protocol and unsuccessfully attempted (as of July 2019) to standardize the protocol as part of ISO/IEC 29167-20, a standard for securing radio-frequency identification devices and wireless sensor networks.
Keyset parameters
Before two parties can establish a key they must first agree on a set of parameters, called the keyset parameters. These parameters comprise:
the number of strands in the braid,
the size of the finite field,
the initial NxN seed matrix over the finite field,
a set of elements in the finite field (also called the T-values), and
a set of conjugates in the braid group designed to commute with each other.
E-multiplication
The fundamental operation of the Algebraic Eraser is a one-way function called E-multiplication. Given a matrix, a permutation, an Artin generator in the braid group, and the T-values, one applies E-multiplication by converting the generator to a colored Burau matrix and braid permutation, applying the permutation and T-values, and then multiplying the matrices and permutations. The output of E-multiplication is itself a matrix and permutation pair.
Key establishment protocol
The following example illustrates how to make a key establishment. Suppose Alice wants to establish a shared key with Bob, but the only channel available may be eavesdropped by a third party. Initially, Alice and Bob must agree on the keyset parameters they will use.
Each party must have a key pair derived from the keyset, consisting of a private matrix, which is a randomly selected polynomial of the seed matrix, and a private braid, which is a randomly selected set of conjugates and inverses chosen from the keyset parameters (one set for Alice and one for Bob).
From their private key material, Alice and Bob each compute their public key, which is the result of E-multiplication of the private matrix and the identity permutation with the private braid.
Each party must know the other party's public key prior to execution of the protocol.
To compute the shared secret, Alice applies E-multiplication to her private key material and Bob's public key, while Bob applies E-multiplication to his private key material and Alice's public key. The shared secret is the resulting matrix/permutation pair, which both parties obtain. The shared secrets are equal because the conjugate sets are chosen to commute and both Alice and Bob use the same seed matrix and T-values.
The only information about her private key that Alice initially exposes is her public key. So, no party other than Alice can determine Alice's private key, unless that party can solve the Braid Group Simultaneous Conjugacy Separation Search problem. Bob's private key is similarly secure. No party other than Alice or Bob can compute the shared secret, unless that party can solve the Diffie–Hellman problem.
The public keys are either static (and trusted, say via a certificate) or ephemeral. Ephemeral keys are temporary and not necessarily authenticated, so if authentication is desired, authenticity assurances must be obtained by other means. Authentication is necessary to avoid man-in-the-middle attacks. If one of Alice's or Bob's public keys is static, then man-in-the-middle attacks are thwarted. Static public keys provide neither forward secrecy nor key-compromise impersonation resilience, among other advanced security properties. Holders of static private keys should validate the other public key, and should apply a secure key derivation function to the raw Diffie–Hellman shared secret to avoid leaking information about the static private key.
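The message flow can be sketched abstractly in Python as follows. This is not an implementation of E-multiplication; e_multiply is a placeholder and all names are illustrative. The sketch only records which values each party combines, under the assumption stated above that both combinations yield the same matrix/permutation pair:
def e_multiply(private_key, public_key):
    # Placeholder: a real implementation operates on matrix/permutation
    # pairs and colored Burau matrices over the chosen finite field.
    raise NotImplementedError

def shared_secret(own_private, other_public):
    # Each side combines its own private key material with the other
    # side's public key; the keyset parameters are chosen so that both
    # sides arrive at the same matrix/permutation pair.
    return e_multiply(own_private, other_public)

# Hypothetical usage, once key pairs have been generated from the keyset:
# alice_secret = shared_secret(alice_private, bob_public)
# bob_secret = shared_secret(bob_private, alice_public)
# assert alice_secret == bob_secret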
Security
The security of AE is based on the Generalized Simultaneous Conjugacy Search Problem (GSCSP) within the braid group. This is a distinct and different hard problem than the Conjugacy Search Problem (CSP), which has been the central hard problem in what is called braid group cryptography. Even if CSP is uniformly broken (which has not been done to date), it is not known how this would facilitate a break of GSCSP.
Known attacks
The first attack by Kalka, Teicher and Tsaban shows a class of weak keys that arises when certain keyset elements are chosen randomly. The authors of Algebraic Eraser followed up with a preprint on how to choose parameters that aren't prone to the attack. Ben-Zvi, Blackburn, and Tsaban improved the first attack into one the authors claim can break the publicized security parameters (claimed to provide 128-bit security) using less than 8 CPU hours and less than 64 MB of memory. Anshel, Atkins and Goldfeld responded to this attack in January 2016.
A second attack by Myasnikov and Ushakov, published as a preprint, shows that conjugates chosen with a too-short conjugator braid can be separated, breaking the system. This attack was refuted by Gunnells, by showing that properly sized conjugator braids cannot be separated.
In 2016, Simon R. Blackburn and Matthew J. B. Robshaw published a range of practical attacks against the January 2016 draft of the ISO/IEC 29167-20 over-the-air protocol, including impersonation of a target tag with a negligible amount of time and memory, and full private key recovery requiring 2^49 time and 2^48 memory. Atkins and Goldfeld responded that adding a hash or message authentication code to the draft protocol defeats these attacks.
See also
Anshel–Anshel–Goldfeld key exchange
Group-based cryptography
Non-commutative cryptography
Notes
References
External links
SecureRF home page
Key-agreement protocols |
179124 | https://en.wikipedia.org/wiki/Threshold%20pledge%20system | Threshold pledge system | The threshold pledge or fund and release system is a way of making a fundraising pledge as a group of individuals, often involving charitable goals or financing the provision of a public good. An amount of money is set as the goal or threshold to reach for the specified purpose and interested individuals will pitch in, but the money at first either remains with the pledgers or is held in escrow.
When the threshold is reached, the pledges are called in (or transferred from the escrow fund) and a contract is formed so that the collective good is supplied; a variant is that the money is collected when the good is actually delivered. If the threshold is not reached by a certain date (or perhaps if no contract is ever signed, etc.), the pledges are either never collected or, if held in escrow, are simply returned to the pledgers. In economics, this type of model is known as an assurance contract.
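The fund-and-release logic can be sketched directly. The class and method names below are invented for illustration, and a real escrow service would add payment handling, fees and auditing; the sketch only captures the threshold/deadline decision described above.

class ThresholdEscrow:
    def __init__(self, threshold, deadline):
        self.threshold = threshold   # amount needed before pledges are called in
        self.deadline = deadline     # ISO date after which uncollected pledges are returned
        self.pledges = {}            # pledger -> amount held in escrow

    def pledge(self, pledger, amount):
        self.pledges[pledger] = self.pledges.get(pledger, 0) + amount

    def total(self):
        return sum(self.pledges.values())

    def settle(self, today):
        # Release the funds if the threshold was met, refund after the deadline.
        if self.total() >= self.threshold:
            return ("release", self.total())       # contract formed, good is supplied
        if today > self.deadline:                  # ISO dates compare correctly as strings
            return ("refund", dict(self.pledges))  # pledges go back to the pledgers
        return ("pending", self.total())

escrow = ThresholdEscrow(threshold=100_000, deadline="2024-06-01")
escrow.pledge("alice", 50)
escrow.pledge("bob", 75)
print(escrow.settle(today="2024-07-01"))   # threshold unmet and deadline passed -> refund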
This system is most often applied to creative works, both for financing new productions and for buying out existing works; in the latter cases, it is sometimes known as ransom publishing model or Street Performer Protocol (SPP).
Street Performer Protocol
Street Performer Protocol is an early description of a type of threshold pledge system. The SPP is a threshold pledge system for encouraging the creation of creative works in the public domain or under copyleft licenses, described by Steven Schear and separately by cryptographers John Kelsey and Bruce Schneier. It assumes that current forms of copyright and the business models of the creative industries will become increasingly inefficient or unworkable in the future, because of the ease of copying and distribution of digital information.
Under the Street Performer Protocol, the artist announces that when a certain amount of money is received in escrow, the artist will release a work (book, music, software, etc.) into the public domain or under a free content license. Interested donors make their donations to a publisher, who contracts with the artist for the work's creation and keeps the donations in escrow, identified by their donors, until the work is released.
If the artist releases the work on time, the artist receives payment from the escrow fund. If not, the publisher repays the donors, possibly with interest. As detailed above, contributions may also be refunded if the threshold is not reached within a reasonable expiring date. The assessed threshold also includes a fee which compensates the publisher for costs and assumption of risks.
The publisher may act like a traditional publisher, by soliciting sample works and deciding which ones to support, or it may serve only as an escrow agent and not care about the quality of the works (like a vanity press).
Ransom model in software
In software, source code escrow is a publishing model that applies the SPP to source code (often involving existing proprietary software) which is eventually released under an open source or free software license.
History
The Street Performer Protocol is a natural extension of the much older idea of funding the production of written or creative works through agreements between groups of potential readers or users.
The first illustrated edition of John Milton's Paradise Lost was published under a subscription system, and Mozart and Beethoven, among other composers, used subscriptions to premiere concerts and first print editions of their works. Unlike today's meaning of subscription, this meant that a fixed number of people had to sign up and pay some amount before the concert could take place or the printing could begin.
These three (piano) concertos K413-415 ... formed an important milestone in his career, being the first in the series of great concertos that he wrote for Vienna, and the first to be published in a printed edition. Initially, however, he followed the usual practice of making them available in manuscript copies. Mozart advertised for subscribers in January 1783: "These three concertos, which can be performed with full orchestra including wind instruments, or only a quattro, that is with 2 violins, 1 viola and violoncello, will be available at the beginning of April to those who have subscribed for them (beautifully copied, and supervised by the composer himself)." Six months later, Mozart complained that it was taking a long time to secure enough subscribers. This was despite the fact that he had meanwhile scored a great success on two fronts:…
However, there are a number of differences between this traditional model and the SPP. The most important difference is that traditionally, the subscribers would be among the first to get access and would do so with the understanding that the work would likely always be a "rare" good; thus, there was some status in owning a copy, as well as the prestige of being among the patrons. Additionally, subscriptions were generally sold at a set price, but some wealthy subscribers may have given more in order to be a patron. In the modern Street Performer Protocol, each funder chooses the amount they want to pay, and the work is released to the public and freely reproduced.
In 1970, Stephen Breyer argued for the importance of this model in "The Uneasy Case for Copyright".
The Street Performer Protocol was successfully used to release the source code and brand name of the Blender 3D animation program. After NaN Technologies BV went bankrupt in 2002, the copyright and trademark rights to Blender went to the newly created NaN Holding BV. The newly created Blender Foundation campaigned for donations to obtain the right to release the software as free and open source under the GNU General Public License. NaN Holding BV set the price tag at 100,000 euros. More than 1,300 users became members and donated more than 50 euros each, in addition to anonymous users, non-membership individual donations and companies. On October 13, 2002, Blender was released on the Internet as free/open source software.
Variations of the SPP include the Rational Street Performer Protocol and the Wall Street Performer Protocol.
List of threshold-pledge websites
Community Funded - A crowdfunding platform-oriented threshold-pledge website (threshold-pledge currently disabled)
GlobalGiving - A project-oriented threshold-pledge website
IndieGoGo - A project-oriented threshold-pledge website
Kickstarter - A project-oriented threshold-pledge website
PledgeBank - An honor system fund and release website
PledgeMusic - A direct-to-fan music funding website
RocketHub - An international project-oriented pledge website
Sellaband - A crowdfunding music funding website
Tides Center - A project-oriented threshold-pledge website
Patreon - A crowdfunding platform providing "recurring funding for artists and creators"
See also
Assurance contract
Contingency market
Copyright social conflict
Crowdfunding
References
Further reading
John Kelsey and Bruce Schneier, The Street Performer Protocol and Digital Copyrights, First Monday 4(6), 1999.
Crosbie Fitch, The Digital Art Auction - March 2001.
Chris Rasch, The Wall Street Performer Protocol, First Monday 6(6), 2001.
Karl Fogel: The Promise of a Post-Copyright World (Threshold Pledge System section) - QuestionCopyright.org, October 2005
Payment systems
Copyright law
Fundraising |
19535600 | https://en.wikipedia.org/wiki/HMMER | HMMER | HMMER is a free and commonly used software package for sequence analysis written by Sean Eddy. Its general usage is to identify homologous protein or nucleotide sequences, and to perform sequence alignments. It detects homology by comparing a profile-HMM to either a single sequence or a database of sequences. Sequences that score significantly better to the profile-HMM compared to a null model are considered to be homologous to the sequences that were used to construct the profile-HMM. Profile-HMMs are constructed from a multiple sequence alignment in the HMMER package using the hmmbuild program. The profile-HMM implementation used in the HMMER software was based on the work of Krogh and colleagues. HMMER is a console utility ported to every major operating system, including different versions of Linux, Windows, and Mac OS.
HMMER is the core utility that protein family databases such as Pfam and InterPro are based upon. Some other bioinformatics tools such as UGENE also use HMMER.
HMMER3 also makes extensive use of vector instructions for increasing computational speed. This work is based upon an earlier publication showing a significant acceleration of the Smith-Waterman algorithm for aligning two sequences.
Profile HMMs
A profile HMM is a variant of an HMM relating specifically to biological sequences. Profile HMMs turn a multiple sequence alignment into a position-specific scoring system, which can be used to align sequences and search databases for remotely homologous sequences. They capitalise on the fact that certain positions in a sequence alignment tend to have biases in which residues are most likely to occur, and are likely to differ in their probability of containing an insertion or a deletion. Capturing this information gives them a better ability to detect true homologs than traditional BLAST-based approaches, which penalise substitutions, insertions and deletions equally, regardless of where in an alignment they occur.
Profile HMMs center around a linear set of match (M) states, with one state corresponding to each consensus column in a sequence alignment. Each M state emits a single residue (amino acid or nucleotide). The probability of emitting a particular residue is determined largely by the frequency at which that residue has been observed in that column of the alignment, but also incorporates prior information on patterns of residues that tend to co-occur in the same columns of sequence alignments. This string of match states emitting amino acids at particular frequencies are analogous to position specific score matrices or weight matrices.
A profile HMM takes this modelling of sequence alignments further by modelling insertions and deletions, using I and D states, respectively. D states do not emit a residue, while I states do emit a residue. Multiple I states can occur consecutively, corresponding to multiple residues between consensus columns in an alignment. M, I and D states are connected by state transition probabilities, which also vary by position in the sequence alignment, to reflect the different frequencies of insertions and deletions across sequence alignments.
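A toy example helps make the match-state idea concrete. The sketch below builds position-specific emission probabilities from the columns of a tiny invented alignment and scores queries as log-odds against a flat background. It deliberately omits insert and delete states and transition probabilities, so it is not what hmmbuild actually computes; it only illustrates how per-column residue frequencies become a position-specific scoring system.

import math
from collections import Counter

alignment = ["ACDE",
             "ACDE",
             "SCDE",
             "ACNE"]   # toy multiple sequence alignment, one column per match state
alphabet = "ACDEFGHIKLMNPQRSTVWY"
background = {aa: 1.0 / len(alphabet) for aa in alphabet}   # flat background model

def column_probs(column, pseudocount=1.0):
    # Emission probabilities for one column, smoothed with pseudocounts.
    counts = Counter(column)
    total = len(column) + pseudocount * len(alphabet)
    return {aa: (counts[aa] + pseudocount) / total for aa in alphabet}

profile = [column_probs([seq[i] for seq in alignment])
           for i in range(len(alignment[0]))]

def log_odds_score(query):
    # Sum of per-position log-odds: match emission versus background.
    return sum(math.log2(profile[i][aa] / background[aa])
               for i, aa in enumerate(query))

print(log_odds_score("ACDE"))   # consensus-like query scores well above zero
print(log_odds_score("WWWW"))   # unrelated query scores well below zero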
The HMMER2 and HMMER3 releases used an architecture for building profile HMMs called the Plan 7 architecture, named after the seven states captured by the model. In addition to the three major states (M, I and D), six additional states capture non-homologous flanking sequence in the alignment. These 6 states collectively are important for controlling how sequences are aligned to the model e.g. whether a sequence can have multiple consecutive hits to the same model (in the case of sequences with multiple instances of the same domain).
Programs in the HMMER package
The HMMER package consists of a collection of programs for performing functions using profile hidden Markov models. The programs include:
Profile HMM building
hmmbuild - construct profile HMMs from multiple sequence alignments
Homology searching
hmmscan - search protein sequences against a profile HMM database
hmmsearch - search profile HMMs against a sequence database
jackhmmer - iteratively search sequences against a protein database
nhmmer - search DNA/RNA queries against a DNA/RNA sequence database
nhmmscan - search nucleotide sequences against a nucleotide profile
phmmer - search protein sequences against a protein database
Other functions
hmmalign - align sequences to a profile HMM
hmmemit - produce sample sequences from a profile HMM
hmmlogo - produce data for an HMM logo from an HMM file
The package contains numerous other specialised functions.
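A typical build-then-search workflow with these programs can be driven from a short script. The sketch below assumes HMMER3 is installed and on the PATH; the file names are placeholders, and only the most basic options are shown.

import subprocess

# 1. Build a profile HMM from a multiple sequence alignment.
subprocess.run(["hmmbuild", "globins.hmm", "globins.sto"], check=True)

# 2. Search the profile against a protein sequence database and write a
#    parseable per-target table alongside the full report.
subprocess.run(["hmmsearch", "--tblout", "hits.tbl",
                "globins.hmm", "target_proteins.fasta"], check=True)

# 3. Read back the per-target table, skipping comment lines.
with open("hits.tbl") as fh:
    hits = [line.split() for line in fh if not line.startswith("#")]
print(f"{len(hits)} target sequences reported")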
The HMMER web server
In addition to the software package, the HMMER search function is available in the form of a web server. The service facilitates searches across a range of databases, including sequence databases such as UniProt, SwissProt, and the Protein Data Bank, and HMM databases such as Pfam, TIGRFAMs and SUPERFAMILY. The four search types phmmer, hmmsearch, hmmscan and jackhmmer are supported (see Programs). The search function accepts single sequences as well as sequence alignments or profile HMMs.
The search results are accompanied by a report on the taxonomic breakdown, and the domain organisation of the hits. Search results can then be filtered according to either parameter.
The web service is currently run out of the European Bioinformatics Institute (EBI) in the United Kingdom, while development of the algorithm is still performed by Sean Eddy's team in the United States. Major reasons for relocating the web service were to leverage the computing infrastructure at the EBI, and to cross-link HMMER searches with relevant databases that are also maintained by the EBI.
The HMMER3 release
The latest stable release of HMMER is version 3.0. HMMER3 is a complete rewrite of the earlier HMMER2 package, with the aim of improving the speed of profile-HMM searches. Major changes are outlined below:
Improvements in speed
A major aim of the HMMER3 project, started in 2004, was to improve the speed of HMMER searches. While profile HMM-based homology searches were more accurate than BLAST-based approaches, their slower speed limited their applicability. The main performance gain is due to a heuristic filter that finds high-scoring un-gapped matches within database sequences to a query profile. This heuristic results in a computation time comparable to BLAST with little impact on accuracy. Further gains in performance are due to a log-likelihood model that requires no calibration for estimating E-values, and allows the more accurate forward scores to be used for computing the significance of a homologous sequence.
HMMER still lags behind BLAST in speed of DNA-based searches; however, DNA-based searches can be tuned such that an improvement in speed comes at the expense of accuracy.
Improvements in remote homology searching
The major advance in speed was made possible by the development of an approach for calculating the significance of results integrated over a range of possible alignments. In discovering remote homologs, alignments between query and hit proteins are often very uncertain. While most sequence alignment tools calculate match scores using only the best scoring alignment, HMMER3 calculates match scores by integrating across all possible alignments, to account for uncertainty in which alignment is best. HMMER sequence alignments are accompanied by posterior probability annotations, indicating which portions of the alignment have been assigned high confidence and which are more uncertain.
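The difference between scoring with the single best alignment and integrating over all alignments can be seen in miniature with an ordinary two-state HMM. Everything in the sketch below (states, probabilities, observations) is invented and has nothing to do with a real profile HMM; the same dynamic programme returns the Viterbi score when paths are combined with max and the Forward score when they are combined with sum.

states = ["match", "insert"]
start = {"match": 0.8, "insert": 0.2}
trans = {"match": {"match": 0.9, "insert": 0.1},
         "insert": {"match": 0.4, "insert": 0.6}}
emit = {"match": {"A": 0.7, "C": 0.1, "G": 0.1, "T": 0.1},
        "insert": {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25}}
obs = "ACAA"

def score(combine):
    # combine=max gives the Viterbi (best path) score,
    # combine=sum gives the Forward (all paths) score.
    dp = {s: start[s] * emit[s][obs[0]] for s in states}
    for symbol in obs[1:]:
        dp = {s: combine(dp[p] * trans[p][s] for p in states) * emit[s][symbol]
              for s in states}
    return combine(dp.values())

print("Viterbi (single best path):", score(max))
print("Forward (all paths summed):", score(sum))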
DNA sequence comparison
A major improvement in HMMER3 was the inclusion of DNA/DNA comparison tools. HMMER2 only had functionality to compare protein sequences.
Restriction to local alignments
While HMMER2 could perform local alignment (align a complete model to a subsequence of the target) and global alignment (align a complete model to a complete target sequence), HMMER3 only performs local alignment. This restriction is due to the difficulty in calculating the significance of hits when performing local/global alignments using the new algorithm.
See also
Hidden Markov model
Sequence alignment software
Pfam
UGENE
Several implementations of profile HMM methods and related position-specific scoring matrix methods are available. Some are listed below:
HH-suite
SAM
PSI-BLAST
MMseqs2
PFTOOLS
GENEWISE
PROBE
META-MEME
BLOCKS
GPU-HMMER
DeCypherHMM
References
External links
HMMER3 announcement
A blog posting on HMMER policy on trademark, copyright, patents, and licensing
Bioinformatics software
Free science software
Free software programmed in C
Computational science |
11053456 | https://en.wikipedia.org/wiki/Dictation%20machine | Dictation machine | A dictation machine is a sound recording device most commonly used to record speech for playback or to be typed into print. It includes digital voice recorders and tape recorder.
The name "Dictaphone" is a trademark of the company of the same name, but it has also become a common term for all dictation machines, as a genericized trademark.
History
Alexander Graham Bell and his two associates took Edison's tinfoil phonograph and modified it considerably to make it reproduce sound from wax instead of tinfoil. They began their work at Bell's Volta Laboratory in Washington, D.C., in 1879, and continued until they were granted basic patents in 1886 for recording in wax.
Thomas A. Edison had invented the phonograph in 1877, but the fame bestowed on him for this invention — sometimes called his most original — was not due to its efficiency. Recording with his tinfoil phonograph was too difficult to be practical, as the tinfoil tore easily even when the stylus was properly adjusted. Although Edison had hit upon the secret of sound recording, immediately after his discovery he did not improve it, allegedly because of an agreement to spend the next five years developing the New York City electric light and power systems.
By 1881 the Volta associates had succeeded in improving an Edison tinfoil machine to some extent. Wax was put in the grooves of the heavy iron cylinder, and no tinfoil was used. The basic distinction between Edison's first phonograph patent and the Bell and [Charles Sumner] Tainter patent of 1886 was the method of recording. Edison's method was to indent the sound waves on a piece of tinfoil, while Bell and Tainter's invention called for cutting, or 'engraving', the sound waves into a wax record with a sharp recording stylus.
Among the later improvements by the Volta Associates, the Graphophone used a cutting stylus to create lateral zig-zag grooves of uniform depth into the wax-coated cardboard cylinders, rather than the up-down vertically-cut grooves of Edison's contemporary phonograph machine designs.
Notably, Bell and Tainter developed wax-coated cardboard cylinders for their record cylinders, instead of Edison's cast iron cylinder covered with a removable film of tinfoil (the actual recording medium), which was prone to damage during installation or removal. Tainter received a separate patent for a tube assembly machine to automatically produce the coiled cardboard tubes, which served as the foundation for the wax cylinder records.
Besides being far easier to handle, the wax recording medium also allowed for lengthier recordings and created superior playback quality. Additionally the Graphophones initially deployed foot treadles to rotate the recordings, then wind-up clockwork drive mechanisms, and finally migrated to electric motors, instead of the manual crank that was used on Edison's phonograph. The numerous improvements allowed for a sound quality that was significantly better than Edison's machine.
Shortly after Thomas Edison invented the phonograph, the first device for recording sound, in 1877, he thought that the main use for the new device would be for recording speech in business settings. (Given the low audio fidelity of the earliest versions of the phonograph, recording music may not have seemed to be a major application.) Some early phonographs were indeed used this way, but this did not become common until the production of reusable wax cylinders in the late 1880s. The differentiation of office dictation devices from other early phonographs, which commonly had attachments for making one's own recordings, was gradual. The machine marketed by the Edison Records company was trademarked as the "Ediphone".
Following the invention of the audion tube in 1906, electric microphones gradually replaced the purely acoustical recording methods of earlier dictaphones by the late 1930s. In 1945, the SoundScriber, Gray Audograph and Edison Voicewriter, which cut grooves into a plastic disc, were introduced, and two years later Dictaphone replaced wax cylinders with their Dictabelt technology, which cut a mechanical groove into a plastic belt instead of into a wax cylinder. This was later replaced by magnetic tape recording. While reel-to-reel tape was used for dictation, the inconvenience of threading tape spools led to the development of more convenient formats, notably the Compact Cassette, Mini-Cassette, and Microcassette.
Digital dictation
Digital dictation became possible in the 1990s, as falling computer memory prices made possible pocket-sized digital voice recorders that stored sound on computer memory chips without moving parts. Many early 21st-century digital cameras and smartphones have this capability built in. In the 1990s, improvements in voice recognition technology began to allow computers to transcribe recorded audio dictation into text form, a task that previously required human secretaries or transcribers. However, the technology is not yet robust enough to replace human transcription in most cases.
The files generated with digital recorders vary in size, depending on the manufacturer and the format the user chooses. The most common file formats that digital recorders generate have one of the extensions WAV, WMA and MP3. Many dictation machines record in the DSS and DS2 format. Dictation audio can be recorded in various audio file formats. Most digital dictation systems use a lossy form of audio compression based on modelling of the vocal tract to minimize hard disk space and optimize network utilization as files are transferred between users. (Note that WAV is not an audio encoding format, but a file format and has little or no bearing on the encoding rate (kbit/s), size or audio quality of the resulting file.)
Digital dictation offers several advantages over traditional cassette tape based dictation:
The user can instantly rewind or fast forward to any point within the dictation file to review or edit.
The random access ability of digital audio allows inserting audio at any point without overwriting the following text.
Dictation produces a file which can be transferred electronically, e.g. via WAN, LAN, USB, e-mail, telephony, FTP, etc.
Large dictation files can be shared with multiple typists.
Sound may be CD quality and can improve transcription accuracy and speed.
Digital dictation provides the ability to report on the volume or type of dictation and transcription outstanding or completed within an organization.
Despite the advances in technology, analog media are still widely used in dictation recording for their flexibility, permanence, and robustness. In some cases, speech is recorded where sound quality is paramount and transcription unnecessary, e.g. for broadcasting a theatre play; there, recording techniques closer to high-fidelity music recording are more appropriate.
Methods
Portable recorder
Portable, hand-held digital recorders are the modern replacement for cassette-based portable recorders. Digital portables allow transfer of recordings by docking or plugging into a computer. Digital recorders eliminate the need for cassette tapes. Professional digital hand-held recorders are available with slide switch, push button, fingerprint locking, and barcode scanning options.
Computer
Another common way to record digital dictation is with a computer dictation microphone. There are several different types of computer dictation microphones available, but each has similar features and operation. Olympus Direct Rec, Philips SpeechMike, and Dictaphone Powermic are all digital computer dictation microphones that also feature push-button control for operating dictation or speech recognition software. The dictation microphone connects to the computer it is used with via a USB port.
Call-in dictation system
Call-in dictation systems allow users to record dictations over the phone. The author dials a phone number, enters a PIN and starts dictating. Touch-tone controls allow starting, pausing, playing back, and sending the dictation audio file. Call-in dictation systems usually feature a pod that can be plugged into a phone line; the pod can then be connected to a computer to store dictation audio recordings in compatible transcription or management software.
Mobile phone
Currently there are several digital dictation applications available for mobile phones. With mobile dictation apps, one can record, edit, and send dictation files over networks. Wireless transfer of dictation files decreases turnaround time. Mobile dictation applications allow users to stay connected to dictation workflows through a network, such as the Internet.
Software
There are two types of digital dictation software:
Standalone digital sound recording software: Basic software whereby the audio is recorded as a simple file. Most digital sound recording applications are designed for individuals or a very small number of users, as they do not offer a network-efficient way of transferring the audio files other than email; they also do not encrypt or password-protect the audio files.
Digital dictation workflow software: Advanced software for commercial organizations where audio is still played by a typist but the audio file can be securely and efficiently transferred. The workflow element of these advanced systems also allows users to share audio files instantly, create virtual teams, outsource transcription securely, and set up confidential send options or 'ethical walls'. Digital Dictation workflow software is normally Active Directory integrated and can be used in conjunction with document, practice or case management systems. Typical businesses using workflow software are law firms, healthcare organizations, accountancies or surveying firms.
Recordings can be made over the telephone, on a computer or via a hand held dictation device that is "docked" to a computer.
Transcription
Digital dictation is different from speech recognition, where audio is analyzed by a computer using speech algorithms in an attempt to transcribe the document. With digital dictation, the process of converting digital audio to text may be done using digital transcription software, typically controlled by a foot switch which allows the transcriber to play, stop, rewind, and backspace. Nevertheless, there are digital transcription kits that allow integration with speech recognition software. This gives the typist the option to either type a document manually or send a document to be converted to text by software such as Dragon NaturallySpeaking.
Common dictation formats
Phonograph cylinder (1890s)
Gray Audograph (1945)
SoundScriber (1945)
Edison Voicewriter (?)
Dictabelt (1947)
Compact Cassette (1963)
Mini-Cassette (1967)
Microcassette (1969)
Digital dictation (1990s)
See also
Digital pen
IBM dictation machines
Speech recognition
Volta Laboratory and Bureau
References
Audiovisual introductions in 1886
Sound recording technology
Audio storage
Office equipment
Alexander Graham Bell
Transcription (linguistics) |
23972680 | https://en.wikipedia.org/wiki/Invenio | Invenio | Invenio is an open source software framework for large-scale digital repositories that provides the tools for management of digital assets in an institutional repository and research data management systems. The software is typically used for open access repositories for scholarly and/or published digital content and as a digital library.
Invenio was initially developed by CERN with both individual and organisational external contributors, and is freely available for download.
History
Prior to July 1, 2006, the package was named CDSware, then renamed CDS Invenio, and now known simply as Invenio.
Standards
Invenio complies with standards such as the Open Archives Initiative metadata harvesting protocol (OAI-PMH) and uses JSON/JSONSchema as its underlying bibliographic format.
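In practice, OAI-PMH compliance means any harvester can pull records over plain HTTP. The sketch below uses a hypothetical repository base URL; the verb and parameters (ListRecords, metadataPrefix=oai_dc) and the XML namespaces are standard OAI-PMH rather than anything Invenio-specific, and resumption tokens and error handling are omitted.

import urllib.request
import xml.etree.ElementTree as ET

BASE_URL = "https://repository.example.org/oai2d"   # placeholder OAI-PMH endpoint
OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

url = BASE_URL + "?verb=ListRecords&metadataPrefix=oai_dc"
with urllib.request.urlopen(url) as resp:
    tree = ET.parse(resp)

for record in tree.iter(OAI + "record"):
    identifier = record.findtext(OAI + "header/" + OAI + "identifier")
    title = record.findtext(".//" + DC + "title")
    print(identifier, "-", title)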
Support
The service provider TIND Technologies, an official CERN spin-off based in Norway, offers Invenio via a software-as-a-service model.
Variants of Invenio are offered by TIND for all library services as TIND ILS, DA, IR and RDM under a fully hosted and open-core model.
Users
Invenio is widely used outside of its original home within CERN, including SLAC National Accelerator Laboratory, Fermilab, and the École Polytechnique Fédérale de Lausanne. SPIRES migrated to INVENIO in October 2011 with the INSPIRE-HEP site, a joint effort of CERN, DESY, SLAC and FNAL.
In 2014, the package was chosen to be the digital library software of all national universities in the western Africa regional economic community UEMOA which includes eight countries: Benin, Burkina Faso, Côte d'Ivoire, Guinea-Bissau, Mali, Niger, Senegal, Togo.
The research data repository Zenodo at CERN runs on Invenio v3, wrapped by a small extra layer of code that is also called Zenodo.
To simplify reuse of the Zenodo codebase, several institutions joined in 2019 to distribute an institution-agnostic package under the name of InvenioRDM.
See also
Digital library
Institutional repository
References
External links
Official website
List of sites running Invenio
Short description about some of the features of Invenio
Service provider for Invenio support, installation, training, etc
Digital library software
Free institutional repository software
Free software programmed in Python
CERN software |
26156407 | https://en.wikipedia.org/wiki/Air%20tasking%20order | Air tasking order | An air tasking order (ATO) is a means by which the Joint Forces Air Component Commander (JFACC) controls air forces within a joint operations environment. The ATO is a large document written in United States Message Text Format (USMTF) that lists air sorties for a fixed 24-hour period, with individual call signs, aircraft types, and mission types (e.g. close air support or air refueling). NATO uses a different text format, “.ato”. The ATO is created by an air operations center which has command and control for a particular theater (e.g. Combined Air Operations Center for Southwest Asia). More specifically, the Combat Plans Division of the AOC is responsible for creating the ATO, as well as the associated Airspace Control Order (ACO) and linked detailed information in the Special Instructions (SPINS).
Use of the standardized USMTF allows ATO processing by a variety of legacy computer models, newer software, and even word processors. Since 2004, the ATO has been standardized as an XML schema by NATO Allied Data Publication-3 and US MIL-STD-6040.
The ATO was historically known as the "fragmentary order" or "frag order". Pilots continue to informally refer to it as the "frag"; to be "fragged" to a mission is to be assigned to it, and "as fragged" indicates that an operation will/did occur in accordance with the original ATO, without modifications.
Joint Publication Definition
As defined by Joint Publication 1-02, an air tasking order is:
"A method used to task and disseminate to components, subordinate units, and command and control agencies projected sorties, capabilities and/or forces to targets and specific missions. Normally provides specific instructions to include call signs, targets, controlling agencies, etc., as well as general instructions."
See also
Air Operations Center (also known as an AOC, CAOC, or JAOC)
Joint Forces Air Component Commander (JFACC)
References
External links
Defense Technical Information Center: Joint Publication 3-30: Command and Control for Joint Air Operations
United States Air Force |
54102832 | https://en.wikipedia.org/wiki/Eleks | Eleks | Eleks, also known as ELEKS Software, is an international company that provides custom software engineering and consulting services, headquartered in Tallin, Estonia. The company has about 2000+ employees and operates offices in the United States, Canada, Germany, Ukraine, Poland, Switzerland, Croatia,Japan, Croatia, UAE, KSA and the United Kingdom.
History
Eleks was established as a product company in 1991 by Oleksiy Skrypnyk and his son Oleksiy Skrypnyk, Jr. The company started out with the launch of Dakar, a science-intensive software for power distribution systems for Eastern European markets.
By 2016, Dakar was used in more than 20 Eastern European power systems, and by 2019, the company had 1,400 employees. As of 2019, more than 200 companies were using the company's services.
Industries and technologies
Eleks provides its services to enterprises in Finance, Media & Entertainment, Healthcare, Retail, Agriculture and Logistics industries.
Activities include:
Custom Software Development
Advanced Analytics
Virtual Reality
Drones
Mobile and Wearables Development
Solutions for Retail
Data Science
Research and innovations
The company supports the Ukrainian armed forces by developing military drone software and hardware.
Awards
2021 — CEE Business Services Firm of the Year 2021
2021 — Cybersecurity Excellence Awards 2021
2020 — Top IoT App Development Companies and Developers 2020
2019 — Top IT Outsourcing Companies In USA & Europe
2018 — European Software Testing Awards Finalist
2018 — Bronze Stevie Award in 2018 International Business Awards
2018 — Global Sourcing Awards 2018 Finalist
2018 — European IT & Software Excellence Awards 2018 Finalist
References
Companies based in Lviv
Software companies established in 1991
Consulting firms established in 1991
Development software companies
Engineering software companies
Information technology consulting firms
Outsourcing companies
Software companies of Ukraine
Ukrainian brands |