593737 | https://en.wikipedia.org/wiki/S/MIME | S/MIME
S/MIME (Secure/Multipurpose Internet Mail Extensions) is a standard for public key encryption and signing of MIME data. S/MIME is on an IETF standards track and is defined in a number of documents. It was originally developed by RSA Data Security, and the original specification used the IETF MIME specification with the de facto industry standard PKCS#7 secure message format. Change control over S/MIME has since been vested in the IETF, and the specification is now layered on Cryptographic Message Syntax (CMS), an IETF specification that is identical in most respects to PKCS #7. S/MIME functionality is built into the majority of modern email software, and implementations interoperate with each other. Since it is built on CMS, S/MIME can also hold an advanced digital signature.
Function
S/MIME provides the following cryptographic security services for electronic messaging applications:
Authentication
Message integrity
Non-repudiation of origin (using digital signatures)
Privacy
Data security (using encryption)
S/MIME specifies the MIME type application/pkcs7-mime (smime-type "enveloped-data") for data enveloping (encrypting) where the whole (prepared) MIME entity to be enveloped is encrypted and packed into an object which subsequently is inserted into an application/pkcs7-mime MIME entity.
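As a rough illustration (not taken from the specification itself), the sketch below uses Python's standard email package to build the outer wrapper of such an enveloped entity; the CMS EnvelopedData bytes and the addresses are placeholders, since producing real EnvelopedData requires a separate CMS/PKCS#7 library:

    # Illustrative sketch only: builds the outer application/pkcs7-mime wrapper.
    # The CMS EnvelopedData payload and the addresses are placeholders; a real
    # S/MIME client would generate the payload with a CMS/PKCS#7 library.
    from email.message import EmailMessage

    cms_enveloped_data = b"...DER-encoded CMS EnvelopedData goes here..."

    msg = EmailMessage()
    msg["From"] = "alice@example.org"
    msg["To"] = "bob@example.org"
    msg["Subject"] = "Encrypted message"
    msg.set_content(cms_enveloped_data,
                    maintype="application", subtype="pkcs7-mime",
                    cte="base64", filename="smime.p7m",
                    params={"smime-type": "enveloped-data", "name": "smime.p7m"})

    print(msg.as_string())  # Content-Type: application/pkcs7-mime; smime-type=enveloped-data; ...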
S/MIME certificates
Before S/MIME can be used in any of the above applications, one must obtain and install an individual key/certificate either from one's in-house certificate authority (CA) or from a public CA. The accepted best practice is to use separate private keys (and associated certificates) for signature and for encryption, as this permits escrow of the encryption key without compromising the non-repudiation property of the signature key. Encryption requires having the destination party's certificate available (which is typically automatic upon receiving a message from that party with a valid signing certificate). While it is technically possible to send a message encrypted (using the destination party's certificate) without having one's own certificate to digitally sign, in practice S/MIME clients require the user to install their own certificate before they allow encrypting to others. This is necessary so the message can be encrypted for both recipient and sender, and a copy of the message can be kept (in the sent folder) and remain readable for the sender.
A typical basic ("class 1") personal certificate verifies the owner's "identity" only insofar as it declares that the sender is the owner of the "From:" email address in the sense that the sender can receive email sent to that address, and so merely proves that an email received really did come from the "From:" address given. It does not verify the person's name or business name. If a sender wishes to enable email recipients to verify the sender's identity in the sense that a received certificate name carries the sender's name or an organization's name, the sender needs to obtain a certificate ("class 2") from a CA who carries out a more in-depth identity verification process, and this involves making inquiries about the would-be certificate holder. For more detail on authentication, see digital signature.
Depending on the policy of the CA, the certificate and all its contents may be posted publicly for reference and verification. This makes the name and email address available for all to see and possibly search for. Other CAs only post serial numbers and revocation status, which does not include any of the personal information. The latter, at a minimum, is mandatory to uphold the integrity of the public key infrastructure.
S/MIME Working Group of CA/Browser Forum
In 2020, the S/MIME Certificate Working Group of the CA/Browser Forum was chartered to create a baseline requirement applicable to CAs that issue S/MIME certificates used to sign, verify, encrypt, and decrypt email. That effort is intended to create standards including:
Certificate profiles for S/MIME certificates and CAs that issue them
Verification of control over email addresses
Identity validation
Key management, certificate lifecycle, CA operational practices, etc.
Obstacles to deploying S/MIME in practice
S/MIME is sometimes considered not properly suited for use via webmail clients. Though support can be added to a browser, some security practices require the private key to be kept accessible to the user but inaccessible from the webmail server, complicating the key advantage of webmail: providing ubiquitous accessibility. This issue is not fully specific to S/MIME: other secure methods of signing webmail may also require a browser to execute code to produce the signature; exceptions are PGP Desktop and versions of GnuPG, which grab the data out of the webmail, sign it by means of the clipboard, and put the signed data back into the webmail page. Seen from the view of security, this is a more secure solution.
S/MIME is tailored for end-to-end security. Logically it is not possible to have a third party inspecting email for malware and also have secure end-to-end communications. Encryption will not only encrypt the messages, but also the malware. Thus if mail is scanned for malware anywhere other than at the end points, such as at a company's gateway, encryption will defeat the detector and successfully deliver the malware. The only solution to this is to perform malware scanning on end user stations after decryption. Other solutions do not provide end-to-end trust, as they require keys to be shared by a third party for the purpose of detecting malware. Examples of this type of compromise are:
Solutions which store private keys on the gateway server so decryption can occur prior to the gateway malware scan. These unencrypted messages are then delivered to end users.
Solutions which store private keys on the malware scanner so that it can inspect the message content; the encrypted message is then relayed to its destination.
Due to the requirement of a certificate for implementation, not all users can take advantage of S/MIME, as some may wish to encrypt a message without the involvement or administrative overhead of certificates, for example by encrypting the message with a public/private key pair instead.
Any message that an S/MIME email client stores encrypted cannot be decrypted if the applicable key pair's private key is unavailable or otherwise unusable (e.g., the certificate has been deleted or lost or the private key's password has been forgotten). However, an expired, revoked, or untrusted certificate will remain usable for cryptographic purposes. Indexing of encrypted messages' clear text may not be possible with all email clients. Neither of these potential dilemmas is specific to S/MIME; they apply to ciphertext in general and do not apply to S/MIME messages that are only signed and not encrypted.
S/MIME signatures are usually "detached signatures": the signature information is separate from the text being signed. The MIME type for this is multipart/signed with the second part having a MIME subtype of application/(x-)pkcs7-signature. Mailing list software is notorious for changing the textual part of a message and thereby invalidating the signature; however, this problem is not specific to S/MIME, and a digital signature only reveals that the signed content has been changed.
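For comparison with the enveloped form shown earlier, the following sketch (again using Python's standard email classes, with a placeholder signature and hypothetical addresses) shows the two-part layout of a multipart/signed entity; any change a mailing list makes to the first part invalidates the signature carried in the second:

    # Illustrative sketch only: the two-part layout of a detached S/MIME signature.
    # The signature bytes are placeholders; a real client computes them over part 1
    # with a CMS/PKCS#7 library.
    from email.mime.multipart import MIMEMultipart
    from email.mime.text import MIMEText
    from email.mime.application import MIMEApplication

    body = MIMEText("Body text that the signature covers.\r\n")

    detached_sig = b"...DER-encoded CMS SignedData (no encapsulated content) goes here..."
    sig_part = MIMEApplication(detached_sig, "pkcs7-signature", name="smime.p7s")
    sig_part.add_header("Content-Disposition", "attachment", filename="smime.p7s")

    msg = MIMEMultipart("signed", protocol="application/pkcs7-signature", micalg="sha-256")
    msg.attach(body)      # part 1: the signed text; any change here breaks verification
    msg.attach(sig_part)  # part 2: application/pkcs7-signature over part 1

    print(msg.as_string())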
Security issues
On May 13, 2018, the Electronic Frontier Foundation (EFF) announced critical vulnerabilities in S/MIME, together with an obsolete form of PGP that is still used, in many email clients. Dubbed EFAIL, the bug required significant coordinated effort by many email client vendors to fix.
See also
CryptoGraf
DomainKeys Identified Mail for server-handled email message signing.
Email encryption
EFAIL, a security issue in S/MIME
GNU Privacy Guard (GPG)
Pretty Good Privacy (PGP), especially "MIME Security with OpenPGP" ().
References
External links
: Cryptographic Message Syntax (CMS)
: Cryptographic Message Syntax (CMS) Algorithms
: Secure/Multipurpose Internet Mail Extensions (S/MIME) Version 3.2 Message Specification
: Secure/Multipurpose Internet Mail Extensions (S/MIME) Version 4.0 Message Specification
Microsoft Exchange Server: Understanding S/MIME (high-level overview).
Cryptography
Computer security standards
Internet mail protocols
Email authentication
MIME
33454038 | https://en.wikipedia.org/wiki/OpenCards | OpenCards
OpenCards is a free spaced repetition flashcard program. The software is similar to SuperMemo, Anki or Mnemosyne.
The flashcards are saved as PowerPoint presentation files and may include text, images, sounds and LaTeX equations. The learning states are saved in hidden metadata files in the same directory as the flashcard files. OpenCards implements learning schemes for short-term and long-term memorization.
Flashcard Format
OpenCards uses PowerPoint ppt-files as flashcard sets: slide titles are treated as questions and the slide contents as their answers. OpenCards also supports a reversed mode in which slide contents are treated as questions and the slide titles as their answers, which allows image, formula or sound questions.
By allowing users to create flashcard files in ppt format with PowerPoint or LibreOffice, it avoids the major limitation of other flashcard software, which usually relies on custom formats and flashcard editors. Internally, OpenCards relies on Apache POI to render slides from ppt-files.
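As a rough sketch of this convention (not OpenCards' own Java/Apache POI code), the following uses the third-party python-pptx package, which reads the newer .pptx format rather than the legacy .ppt files OpenCards handles:

    # Conceptual sketch of the slide convention: title = question, body = answer.
    # OpenCards itself is Java and renders legacy .ppt files via Apache POI; this
    # uses the third-party python-pptx package (.pptx only) purely for illustration.
    from pptx import Presentation

    def load_flashcards(path, reversed_mode=False):
        cards = []
        for slide in Presentation(path).slides:
            title_shape = slide.shapes.title
            title = title_shape.text_frame.text if title_shape else ""
            # Everything except the title placeholder is treated as the answer body.
            body = "\n".join(
                shape.text_frame.text
                for shape in slide.shapes
                if shape.has_text_frame
                and (title_shape is None or shape.shape_id != title_shape.shape_id)
            )
            question, answer = (body, title) if reversed_mode else (title, body)
            cards.append((question, answer))
        return cards

    # Hypothetical usage:
    # for question, answer in load_flashcards("vocabulary.pptx"):
    #     print(question, "->", answer)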
Learning Modes
OpenCards implements two different learning models: a box-based short-term learning procedure, called last-minute learning, and a more sophisticated long-term memorization model based on the principles of active recall and the forgetting curve. The latter is implemented as an improved version of the SuperMemo2 (SM2) algorithm. The SM2 algorithm was created for SuperMemo in the late 1980s but still forms the basis of many spaced repetition applications. OpenCards' implementation of the algorithm has been modified to allow priorities on cards and to show cards in order of their urgency.
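For reference, a minimal sketch of the unmodified SM2 update rule (the published SuperMemo-2 formula, not OpenCards' priority-aware variant) looks like this:

    # Minimal sketch of the unmodified SM2 scheduling rule that the long-term mode
    # builds on; OpenCards adds card priorities and urgency ordering on top of this.
    def sm2_review(quality, repetitions, interval_days, easiness):
        """quality: 0-5 self-rated recall; returns updated (repetitions, interval, easiness)."""
        if quality < 3:
            return 0, 1, easiness          # failed recall: relearn from the start
        easiness = max(1.3, easiness + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
        repetitions += 1
        if repetitions == 1:
            interval_days = 1
        elif repetitions == 2:
            interval_days = 6
        else:
            interval_days = round(interval_days * easiness)
        return repetitions, interval_days, easiness

    # Example: a card answered with quality 4 on three consecutive reviews.
    state = (0, 0, 2.5)
    for q in (4, 4, 4):
        state = sm2_review(q, *state)
        print(state)   # intervals grow: 1 day, 6 days, then 6 x easiness days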
History
OpenCards started as a flashcard learning extension for OpenOffice Impress in spring 2008, from which it inherited the first part of its name. In 2008 it won a Bronze award in the OpenOffice.org Community Innovation Program.
In 2011, OpenCards was redesigned to work as standalone software and to support PowerPoint PPT files as the main flashcard set file format.
Syncing
OpenCards implements no synchronization mechanism, but flashcard sets, including their learning states, can be synced using services like Dropbox. This allows the user to keep their flashcard sets synchronized across multiple computers.
See also
Mnemosyne (software)
Anki
List of flashcard software
References
External links
OpenCards website
OpenCards developer resources
SM2 Algorithm
Reviews
Spaced repetition software
Free software programmed in Java (programming language)
Free educational software
33746018 | https://en.wikipedia.org/wiki/Steve%20Johnson%20%28tennis%29 | Steve Johnson (tennis)
Steve "Stevie" Johnson Jr. (born December 24, 1989) is an American professional tennis player. For one week in August 2016 he was the top-ranked American in men's singles. He has a career-high singles ranking of world No. 21, achieved on July 25, 2016, and a doubles ranking of world No. 39, achieved on May 23, 2016. He has won four ATP Challenger Tour titles and four ATP Tour 250 titles: one at Nottingham on grass, two at Houston on clay, and most recently one at Newport on grass. He won a bronze medal in men's doubles at the 2016 Olympics with fellow American Jack Sock.
Johnson played college tennis for the USC Trojans. He won the NCAA Men's Singles Championship in his junior and senior seasons (2011–2012), and he was a part of a Trojan team that won four consecutive NCAA Championships.
Personal life
His father, Steve Johnson Sr. (died May 11, 2017, aged 58), was a tennis coach at the Rancho San Clemente Tennis and Fitness Club, and his mother, Michelle, is a mathematics professor. His older sister, Alison, is a graduate of Sonoma State University. Johnson has credited his father with his success in tennis: "He taught me pretty much everything I know. Since I can remember, it's always been me and him out there hitting balls, having a blast. It's really been amazing. I wouldn't change anything." Growing up, he idolized Pete Sampras and Andre Agassi. At USC, Johnson was coached by Peter Smith and majored in Human Performance, but left when he was three classes short of attaining his degree. Johnson hopes to complete his degree after his tennis career.
In July 2012, Johnson signed a clothing deal with Asics America and is represented by Sam Duvall at Lagardere Unlimited. He currently trains at the USTA Player Development Center West in Carson, California. Johnson is currently working with the USTA and travels with other Americans. His personal coach is Craig Boynton, who is a USTA national coach for men's tennis. The team of Dustin Taylor and Rodney Marshall also help Steve hone his skills at the development center.
Johnson is a fan of the Anaheim Ducks.
Steve Johnson married Kendall Bateman at Maravilla Gardens in Camarillo, Ventura County in Southern California on April 21, 2018. Kendall is a former Trojan volleyball player.
Junior tennis
Johnson's dad served as his coach in his early career. Johnson won four consecutive 18-under national team titles, becoming the first player in tournament history to be a member of four championship teams. Johnson contributed to a 6–1 victory over Texas in the 2005 final, a 6–1 triumph over Southern in 2006 and clinched Southern California's 4–3 win the next year over Southern. Some of Johnson's junior accomplishments include being the 2008 Kalamazoo Doubles finalist and winning the 2008 Southern California Sectional Boys 18 championship in straight sets over JT Sundling. This marked his fourth Sectionals title and near a clean sweep of all age divisions having won the 12s, 14s and 16s. He also has the distinction of being the only player to win the Triple Crown, singles and two doubles—twice. He also won nine Gold Balls. He was ranked the third-rated California senior tennis recruit (7th overall) in the country according to TennisRecruiting.net. Johnson clinched the title for Southern California at the 2007 Junior Davis Cup.
High school tennis
Johnson is a 2008 graduate of Orange High School in Orange, California and was coached by Pete Tavoularis. He won CIF singles championships in 2006 and 2007 and was named the Orange County and Los Angeles Player of the Year in both seasons. He is the only Southern Section tennis champion in the school's history. Johnson also made it a priority to play in as many team matches as possible. He missed just two because of junior tennis events and did not lose a set in team competition on his way to winning his second consecutive Golden West League title. Johnson beat future Stanford Cardinal Ryan Thacher of Harvard-Westlake High to win the Southern Section Individual Tournament when both were high school sophomores in 2006. His only high school loss of 2006 was a three-set defeat in the semifinals of the Ojai Tennis Tournament to eventual champion Jason Jung of West Torrance. He then defended his title by beating future UCLA Bruin Alex Brigham of Pacifica Christian High. The victories made Johnson the first back-to-back singles winner since Tom Leonard of Arcadia in 1965 and 1966. He also became the eighth player to repeat as champion and the fourth to win the title after losing the first set at love (Leonard in 1965, Barry Buss in 1982, and Phil Sheng in 1999 each won titles after losing the first set at love). Johnson was the Orange County boys tennis player of the year as a sophomore and junior at Orange, but opted to not play high school tennis his senior year.
College tennis
Johnson chose to play college tennis for the University of Southern California. Johnson said of his decision, "I chose USC because I felt like I had a great relationship with Peter Smith, USC Tennis Coach, and I got along with the team really well." As a sophomore, he was selected to represent the United States in the fourth annual Master'U BNP Paribas, an intercollegiate competition in which eight countries from around the world play for the title. As a junior, he captured the 2010–11 NCAA Singles Championship, defeating Rhyne Williams in the final. In his senior season, he captured the 2011–12 NCAA Singles Championship, defeating Kentucky's Eric Quigley in the final, overcoming a strained abdomen, shin splints and a bout of food poisoning to retain his title.
As a freshman, Johnson was selected to the All-Pac-10 First Team, as well as being named the Pac-10 Doubles Team of the Year with Robert Farah. He also reached the final of the Pac-10 singles championship and won the ITA Regionals Doubles Championship with Farah. As a sophomore, he was again selected to the All-Pac-10 First Team and named the Pac-10 Doubles Team of the Year with Farah, as well as winning the ITA Southwest Regionals doubles championship with Farah. As a sophomore, he won the ITA National Indoor championship. As a junior, he also won the 2011 Pac-10 Singles and Doubles Title with Raymond Sarmiento. In addition, he was selected as the NCAA Tournament Most Outstanding Player and to the NCAA All-Tournament Team for singles.
Johnson was named the Intercollegiate Tennis Association Player of the Year for the 2010–11 and 2011–12 seasons, as well as the 2010–11 and 2011–12 Pac-12 Men's Player of the Year. In his college career, he became a seven-time Intercollegiate Tennis Association (ITA) All-American and a two-time NCAA Singles Champion, and he captured the team title for the Trojans in all four of his years there. Furthermore, he ended his college career with an unprecedented 72-match win streak. He has said that "the biggest thing that I have learned from college tennis is to play aggressive while playing within myself and to never give up, because every dual match could end up being decided on your court." These exploits led to Johnson's becoming the most decorated college player of all time.
ITF Futures Circuit
Johnson has competed in 12 Futures tournaments in his career for singles, all of them being in the United States. He has been in 3 finals, winning two of them. He lost the 2011 Sacramento Futures tournament to Daniel Kosakowski in 3 sets. Later that same year, Johnson won consecutive tournaments in the Claremont and Costa Mesa futures tournaments respectively beating Darian King and Artem Sitak in straight sets. Johnson has competed in various Futures tournaments since 2006 as a high schooler, and he won his first match and earned his first point the following year. He has compiled an overall record of 23 wins and 10 losses on the Futures Tour.
ATP Challenger Tour
Johnson has competed in 27 Challenger tournaments in singles, in the United States, Turkey, Canada, and France. He won his first Challenger title at the Comerica Bank Challenger in Aptos, California, in the summer of 2012, before the 2012 US Open; in the final he defeated Robert Farah in straight sets, 6–3, 6–3, gaining 100 points. A month after his win in Aptos, Stevie competed in two Challenger tournaments in Turkey and France. He reached the semifinals in Izmir, Turkey, winning three matches along the way. In Orléans, France, Stevie reached the second round and lost to the No. 2 seed David Goffin of Belgium in a tightly contested match, 7–5, 6–4. A couple of weeks later, in the 2012 Tiburon Challenger, Stevie was ousted in the semifinals by Jack Sock, 4–6, 6–7(4). Johnson competed in the 2012 Charlottesville Challenger but fell to Rhyne Williams in the round of 16. Johnson planned on playing in the Knoxville Challenger as well as the JSM Challenger of Champaign–Urbana to finish the year, but a shoulder injury forced him to pull out. The shoulder injury also forced him to miss the Australian Open Wild Card Playoffs. Stevie has compiled an overall record of 32 wins and 18 losses on the Challenger Tour.
In doubles, Stevie has had equal success on the Challenger Tour, compiling an overall record of 13 wins and 8 losses, including a title in Knoxville, Tennessee, with Austin Krajicek in 2011. He also made it to the final of the 2011 Tiburon Challenger with Sam Querrey, but they lost 6–10 in the third-set super tie-breaker. In 2012, Stevie reached the semifinals in Tiburon partnered with Robert Farah as the No. 1 seeds. Johnson played doubles in the 2013 Maui Challenger as the No. 2 seed and reached the semifinals with his partner Alex Bogomolov, Jr.
Johnson played singles and doubles in the 2013 Sarasota Open. In singles, he lost in the quarterfinals to the eventual champion, Alex Kuznetsov, 2–6, 6–3, 1–6. In doubles, he partnered with Bradley Klahn, and they won three matches to reach the final but lost 7–6(5), 6–7(3), 9–11. Johnson played three more clay Challengers before the French Open and lost in the first round in each. After a successful French Open, Johnson won his second career Challenger at the Aegon Nottingham Challenge, defeating Ruben Bemelmans in the final. Winning this tournament helped grant him a wild card into Wimbledon. Johnson finished the 2013 Challenger Tour season 1-5.
In his second Challenger event of the 2014 season, Johnson won the 2014 Challenger of Dallas, dropping only one set throughout the tournament. He defeated fellow American Ryan Harrison along the way and Tunisian Malek Jaziri in the final. After the match, he stated, "I was struggling with confidence a little before the start to this year, and to come out and win the tournament here makes it more special." One month later, in the 2014 Irving Tennis Classic, Johnson beat three top-100 players to reach the final, where he lost to Lukáš Rosol. A win at the 2014 Open Guadeloupe Challenger Tour tournament boosted Johnson's singles ranking to a career-high No. 69 and gave him his fourth career Challenger title. After taking a month off from competing, Johnson's next Challenger tournament was the BNP Paribas Primrose Bordeaux, where he was the number two seed. He lost in the final to number one seed Julien Benneteau.
Johnson kicked off his grass court season as the number two seed in the 2014 Aegon Trophy, where he fell in the quarterfinals to Gilles Müller.
ATP World Tour
2011
Johnson started the year at Indian Wells, where he lost in the first round of qualifying in three tight sets to Frank Dancevic. Shortly after his college season, Johnson received a wild card into the 2011 Farmers Classic, where he lost in the first round to Gilles Müller in three sets. Johnson then competed in qualifying for the 2011 Western & Southern Open. After scoring his first win over a top-100 player, Jérémy Chardy, in the first round, he lost in the following round to Édouard Roger-Vasselin. Having won the 2011 individual NCAA championship, Johnson received a wild card to the main draw of the 2011 US Open. He played his first career Grand Slam match against Alex Bogomolov, Jr. and lost in five tight sets despite holding a two-sets-to-love lead.
In the 2011 Western & Southern Open, Johnson reached the quarterfinals in doubles partnered with Alex Bogomolov, Jr., defeating the No. 2 doubles team of Mirnyi/Nestor along the way and gaining 180 points. At the 2011 US Open, Johnson partnered with Denis Kudla, but they lost in straight sets to Marcelo Melo and Bruno Soares.
2012
Johnson received a wild card into the 2012 SAP Open but lost in two tie-breakers to Steve Darcis. Johnson registered his first ATP win in a main draw against Donald Young in the 2012 BB&T Atlanta Open before losing to Sock in the second round. He received a wild card into the 2012 Farmers Classic, but lost to Igor Sijsling in straight sets. Johnson received another wild card into the 2012 Citi Open, but lost to Benjamin Becker in straight sets. Johnson reached the third round of the 2012 US Open, where he had received a wild card for winning the individual NCAA championships once again. In the first round, Johnson beat Rajeev Ram and in the second round, Johnson advanced by defeating Ernests Gulbis. In the third round, Johnson lost to 13th seeded Richard Gasquet.
In the 2012 Campbell's Hall of Fame Tennis Championships, Johnson partnered with Denis Kudla, but they lost in the first round. Competing in the 2012 BB&T Atlanta Open, Johnson partnered with Sock, but they lost in a super tie-breaker in the first round. In the 2012 Farmers Classic, Johnson partnered up with Querrey, and they reached the semifinals. Next, in the 2012 Citi Open, Johnson reached the semifinals once again, partnered with Drew Courtney. In the 2012 US Open, Johnson received a wild card to the main draw and partnered with Sock. In the first round they defeated the No. 1 doubles team of Mirnyi/Nestor. However they lost in the second round to František Čermák and Michal Mertiňák.
2013
In the 2013 Australian Open, Johnson won three qualifying matches to reach the main draw. In the first round of the main draw, he took tenth seed Nicolás Almagro the distance, but lost. Next, in the 2013 SAP Open, Johnson received a wild card to the main draw. In the first round, he defeated former top-20 player Ivo Karlović. In the second round, Johnson defeated Tim Smyczek, reaching his first quarterfinal. However, in the quarterfinals, Johnson lost to eventual finalist Tommy Haas. Overall, Johnson compiled a 5–13 record in singles. In doubles, Johnson attained a career-high ranking of No. 126.
Johnson partnered with Sock in the 2013 SAP Open, but they lost to the No. 1 doubles team of Mike and Bob Bryan. Johnson once again partnered with Sock in the U.S. Men's Clay Court Championships, and they reached the round of 16.
Johnson went to the 2013 French Open, qualifying for the first time and made it through to the main draw before losing in the first round to Albert Montañés, who had just won Nice the previous week. Receiving a wildcard into the main draw of Wimbledon, Johnson lost a tight first round match to fellow American Bobby Reynolds. In the 2013 Citi Open, Johnson lost in the first round to Radek Štěpánek in straight sets. In the 2013 Winston-Salem Open, Johnson won three qualifying matches to reach the main draw and have a rematch with Bobby Reynolds. In the 2013 US Open, Johnson lost in the first round to German Tobias Kamke, failing to reach the third round as he had the previous year. Johnson and fellow American Michael Russell received a wild card in doubles, but fell in the first round.
2014
Steve kicked off the 2014 season by reaching the main draw of the 2014 Heineken Open as a lucky loser and beat former Australian Open runner-up Marcos Baghdatis, and also defeated #4 seed Kevin Anderson to reach his second quarterfinal on tour. By winning the Australian Open Wildcard Playoffs a few weeks earlier, Johnson received a wildcard into the main draw of the 2014 Australian Open. However, he lost in the first round to Frenchman Adrian Mannarino in five sets. In the 2014 Delray Beach International Tennis Championships Johnson qualified for the main draw and beat Mikhail Kukushkin, #1 seed Tommy Haas in a third-set tie-breaker, and #6 seed Feliciano López to reach his first ATP semifinal. After he beat Haas, Johnson said that "Tommy is an unbelievable player and this is a win I won't forget." Haas later said, "I hate to lose, but I'm happy for him. He served well and competed hard." South African Kevin Anderson got revenge on Johnson in the semifinals as he beat him in straight sets to reach the final. Johnson received a wildcard into the main draw of the 2014 BNP Paribas Open, but fell to the red-hot Spaniard Roberto Bautista Agut in straight sets, who went on to knock out #4 seed Tomáš Berdych in the next round. Johnson got a rematch with Bautista Agut just a couple of weeks later at the 2014 Sony Open Tennis, this time falling in three sets.
In his first clay court tournament of the year, Johnson received a wild card into the U.S. Men's Clay Court Championships where he reached the second round and lost to eventual champion Fernando Verdasco. Johnson then competed in the Open de Nice Côte d'Azur where he fell in the first round to youngster Dominic Thiem. Johnson lost in three sets, while failing to convert a match point as he was trying to serve out the match at 6–5 in the second set. Johnson next competed in the 2014 French Open where he won his first ever ATP match on clay, and advanced to the second round. In his first round match against Frenchman Laurent Lokoli, Johnson came back from a two sets to love deficit, and saved two match points along the way for a dramatic five set victory. In his second round match, Johnson lost in straight sets to fellow American Jack Sock. Johnson registered his first grass court ATP win at the 2014 Gerry Weber Open when he defeated Frenchman Albano Olivetti. Johnson's second round opponent withdrew giving Johnson a walkover to the quarterfinals where he lost to the number four seed Kei Nishikori. The following week, Johnson competed in the 2014 Topshelf Open and reached the second round before falling to the number seven seed Nicolas Mahut. Johnson then competed in the 2014 Wimbledon Championships. Unfortunately, Johnson fell in the first round in four sets to twenty-seventh seed Bautista Agut. Returning to the U.S., Johnson competed in the 2014 Hall of Fame Tennis Championships, where he lost in the quarterfinals to eventual champion Lleyton Hewitt. As the sixth seed in this tournament, this was Johnson's first ATP tournament where he was seeded. Johnson then kicked off his US Open Series in Atlanta, where he lost in the first round to his good friend and countryman Sam Querrey.
2015
At the 2015 US Open, Johnson reached the semifinals in doubles at a Major for the first time, partnering Sam Querrey, after defeating the top-seeded pair of the Bryan Brothers in the first round. They were defeated in the semifinal by the eighth-seeded pair of Jamie Murray and John Peers.
Johnson finished the year ranked World No. 32 in singles and World No. 52 in doubles, the highest year-end rankings in his career.
2016: Olympic medal, Wimbledon fourth round, Career-high in singles & doubles, first Masters 1000 quarterfinal
Johnson reached the third round of the 2016 Australian Open as the 31st seed but lost to David Ferrer in three sets. Johnson lost in the 1st round of the 2016 French Open as the 33rd seed. He won his first ATP Tour level title at the 2016 Aegon Open in Nottingham, UK, defeating Pablo Cuevas in the final.
Johnson reached the fourth round of 2016 Wimbledon Championships defeating Grigor Dimitrov before being defeated by Roger Federer in straight sets. He reached a singles career-high of World No. 21 on July 15, 2016.
In the 2016 Western & Southern Open he defeated Federico Delbonis in the first round, beat Julien Benneteau in the second round and 7th seed Jo-Wilfried Tsonga in the third, setting up a quarterfinal against Dimitrov, which he lost in straight sets. Johnson defeated Evgeny Donskoy in the first round of the US Open after losing the first two sets. However, his run was ended in the second round by Juan Martín del Potro in straight sets. Johnson would lose in the first round of the 2016 China Open to Dimitrov and in the 2016 Shanghai Masters to Andy Murray. After two more consecutive losses, he would end his season with a second-round loss in the 2016 Paris Masters to Richard Gasquet.
2017: Second career title
Johnson began the 2017 season with a first-round loss to Dimitrov at the Brisbane International. He would dispatch John Isner to reach the semis of the Auckland Open, but lost to Jack Sock. He then lost to Stan Wawrinka in the Australian Open second round. After reaching three straight quarterfinals in his next three tournaments, Johnson lost to Roger Federer at Indian Wells and had a disappointing second-round loss at Miami to Nicolas Mahut. However, he would rebound by capturing his second career singles title (and first on clay) at the U.S. Men's Clay Court Championships in Houston, Texas, beating Sock in the semis and Thomaz Bellucci in a thrilling final, where Johnson overcame severe cramps and being down a break to win in a final-set tiebreak.
2018: Third and fourth titles
Johnson made the semifinals in Delray Beach, where he lost to young German Peter Gojowczyk.
In Miami, Johnson made it to the third round, where he was defeated by Spaniard Pablo Carreño Busta.
Johnson won his third title in Houston on clay courts after defeating five Americans: Ernesto Escobedo, Frances Tiafoe, John Isner, Taylor Fritz, and Tennys Sandgren.
In May, he reached the final in Geneva, only to fall to Hungarian Marton Fucsovics in three sets.
In July, Johnson won the grass-court Hall of Fame Championship in Newport, Rhode Island, defeating Indian Ramkumar Ramanathan over three sets in the final.
Johnson reached the final again in Winston-Salem in August, where he was defeated by young Russian Daniil Medvedev in straight sets.
2019: Wimbledon third round
Johnson reached the quarterfinals at Delray Beach in February, where he lost to Radu Albot.
In March, Johnson beat Taylor Fritz in the first round of Indian Wells, but fell to young Canadian Denis Shapovalov in the second.
At Wimbledon, Johnson beat Albert Ramos-Viñolas in the first round and young Aussie Alex de Minaur in the second, only to be defeated by Kei Nishikori in the third round in straight sets.
At the US Open, Johnson lost in the first round to Nick Kyrgios.
2020: Two Challenger titles, Masters 1000 semifinal in doubles
Johnson captured his seventh career Challenger title with a win over Stefano Travaglia at the Bendigo Challenger. After a loss to Roger Federer in the first round of the Australian Open, he rebounded with his eighth Challenger title at the Indian Wells Challenger with a win over fellow American Jack Sock.
The season was then interrupted by the COVID-19 pandemic. When tennis returned in August, Johnson, partnering Austin Krajicek as a wildcard pair, reached the semifinals at the Western & Southern Open, held in New York. In singles, he upset John Isner at the US Open before losing to Ričardas Berankis in the second round.
2021: French Open singles third round, Grand Slam semifinal & maiden Masters 1000 final & return to top 100 in doubles
At the French Open, Johnson reached the third round for the fourth time in his career defeating fellow American Frances Tiafoe in a five-setter and Thiago Monteiro in the second round, before losing to Pablo Carreño Busta.
Partnering with Austin Krajicek as wildcard pair, he reached his maiden Masters 1000 final at the Western & Southern Open in Cincinnati defeating No. 3 seeded Colombian pair Juan Sebastián Cabal and Robert Farah in a tight three-set match. As a result he reentered the top 100 in doubles at World No. 95 on August 23, 2021. The pair last competed at the 2020 edition of the Cincinnati Masters where they reached the semifinals.
At the 2021 US Open he reached the semifinals in doubles partnering Sam Querrey, also as a wildcard pair. They were defeated in the semifinal by the eventual champions Rajeev Ram and Joe Salisbury. As a result, he reached No. 62 in doubles on September 13, 2021.
World Team Tennis
Steve Johnson was selected fifth overall in the Mylan World TeamTennis Roster Draft by the Orange County Breakers. Johnson was joined by his father, who was an assistant coach on the team. Despite ultimately placing third in the Western Conference, Johnson was the No. 2 men's singles player in the league, amassing a 62–47 record for the season. He was equally successful in men's doubles, pairing with doubles specialist Treat Huey to go 64–53. In the middle of the season, Johnson helped lead the Breakers to four consecutive victories. During the season, Johnson had victories over Andy Roddick in singles and doubles, as well as doubles victories over the Bryan Brothers, and tennis legend John McEnroe. On July 20, Johnson landed himself in the No. 4 spot on the top plays of SportsCenter that evening against Alex Bogomolov, Jr. of the Texas Wild with an amazing rally that ended with Johnson slipping and sliding for a volley winner. Steve made a successful debut with the Breakers in his first season, leading him to be named the Mylan WTT Male Rookie of the Year.
Performance timelines
Singles
Current through the 2022 Delray Beach Open.
Doubles
Significant finals
Masters 1000 finals
Doubles: 1 (1 runner-up)
Olympic medal matches
Doubles: 1 (Bronze)
ATP career finals
Singles: 6 (4 titles, 2 runners-up)
Doubles: 7 (1 title, 6 runners-up)
Challenger and ITF finals
Singles: 13 (9–4)
Doubles: 5 (2–3)
Wins over top-10 players
World TeamTennis
Johnson has played five seasons with World TeamTennis, starting in 2013, when he made his debut with the Orange County Breakers and earned the Male Rookie of the Year award. He played two more seasons with the Breakers in 2016 and 2017, before playing for the New York Empire in 2018 and returning to the Breakers in 2019. It was announced that he would be joining the Orange County Breakers for the 2020 WTT season, set to begin July 12.
References
External links
Official website
1989 births
Living people
American male tennis players
Sportspeople from Orange County, California
Tennis people from California
USC Trojans men's tennis players
Tennis players at the 2016 Summer Olympics
Olympic bronze medalists for the United States in tennis
Medalists at the 2016 Summer Olympics
Olympic medalists in tennis
14387737 | https://en.wikipedia.org/wiki/OS/360%20and%20successors | OS/360 and successors
OS/360, officially known as IBM System/360 Operating System, is a discontinued batch processing operating system developed by IBM for their then-new System/360 mainframe computer, announced in 1964; it was influenced by the earlier IBSYS/IBJOB and Input/Output Control System (IOCS) packages for the IBM 7090/7094 and even more so by the PR155 Operating System for the IBM 1410/7010 processors. It was one of the earliest operating systems to require the computer hardware to include at least one direct access storage device.
Although OS/360 itself was discontinued, successor operating systems, including the virtual storage MVS and the 64-bit z/OS, are still run and maintain application-level compatibility.
Overview
IBM announced three different levels of OS/360, generated from the same tapes and sharing most of their code. IBM eventually renamed these options and made some significant design changes:
Single Sequential Scheduler (SSS), also known as Option 1, later renamed Primary Control Program (PCP)
Multiple Sequential Schedulers (MSS), also known as Option 2, later renamed Multiprogramming with a Fixed number of Tasks (MFT), then MFT II
Multiple Priority Schedulers (MPS), also known as Option 4 and VMS, later renamed Multiprogramming with a Variable number of Tasks (MVT), with Model 65 Multiprocessing (M65MP) as a variant
Users often coined nicknames, e.g., "Big OS", "OS/MFT", but none of these names had any official recognition by IBM.
The other major operating system for System/360 hardware was DOS/360.
OS/360 is in the public domain and can be downloaded freely. As well as being run on actual System/360 hardware, it can be executed on the free Hercules emulator, which runs under most UNIX and Unix-like systems including Linux, Solaris, and macOS, as well as Windows. There are OS/360 turnkey CDs that provide pregenerated OS/360 21.8 systems ready to run under Hercules.
Origin
IBM originally intended that System/360 should have only one batch-oriented operating system, OS/360, capable of running on machines as small as 32 KiB. It also intended to supply a separate timesharing operating system, TSS/360, for the System/360 Model 67. There are at least two accounts of why IBM eventually decided to produce other, simpler batch-oriented operating systems:
because it found that the "approximately 1.5 million instructions that enable the system to operate with virtually no manual intervention" comprising OS/360 would not fit into the limited memory available on the smaller System/360 models; or
because it realized that the development of OS/360 would take much longer than expected.
IBM introduced a series of stop-gaps to prevent System/360 hardware sales from collapsing—first Basic Programming Support (BPS) and BOS/360 (Basic Operating System, for the smallest machines with 8K byte memories), then TOS/360 (Tape Operating System, for machines with at least 16K byte memories and only tape drives), and finally DOS/360 (Disk Operating System), which became a mainstream operating system and is the ancestor of today's widely used z/VSE.
IBM released three variants of OS/360: PCP (Primary Control Program), a stop-gap which could run only one job at a time, in 1966; MFT (Multiprogramming with Fixed number of Tasks) for the mid-range machines, and MVT (Multiprogramming with Variable number of Tasks) for the top end. MFT and MVT were used until at least 1981, a decade after their successors had been launched. The division between MFT and MVT arose because of storage limitations and scheduling constraints. Initially IBM maintained that MFT and MVT were simply "two configurations of the OS/360 control program", although later IBM described them as "separate versions of OS/360".
IBM originally wrote OS/360 in assembly language. Later on, IBM wrote some OS/360 code in a new language, Basic Systems Language (BSL), derived from PL/I. A large amount of the TSO code in Release 20 was written in BSL.
TSS/360 was so late and unreliable that IBM canceled it, although IBM later supplied three releases of the TSS/370 PRPQ. By this time CP-67 was running well enough for IBM to offer it without warranty as a timesharing facility for a few large customers.
OS/360 variants
These three options offered such similar facilities that porting applications between them usually required minimal effort; the same versions of IBM Program Products, application and utility software ran on all three. The text below mostly treats PCP, MFT and MVT as simply new names for the original SSS, MSS and MPS, although there were some design changes. Also, the text does not distinguish between M65MP and MVT.
Officially, PCP, MFT and MVT are not separate operating systems from OS/360, they are only install-time configuration options—in today's words, three different variants of the OS Nucleus and Scheduler. However, because of quite different behavior and memory requirements, users commonly consider them de facto separate operating systems and refer to them as "early OS/360", "OS/MFT", "OS/MVT", respectively. MFT differs from MVT mainly in the way in which it manages memory: when installing MFT, customers specify in the system generation (SysGen) a fixed number of partitions, areas of memory with fixed boundaries, in which application programs can be run simultaneously.
PCP
Primary Control Program (PCP) was intended for machines with small memories. It is similar to MFT with one partition. Experience indicated that it was not advisable to install OS/360 on systems with less than 128 KiB of memory, although limited production use was possible on much smaller machines, with as little as 48 KiB of memory. IBM dropped the PCP option in the final releases of OS/360, leaving only MFT II and MVT, both of which required more memory.
Also referred to as SYS=MIN in macro expansions that were system-dependent.
MFT
Multiprogramming with a Fixed number of Tasks (MFT) was intended to serve as a stop-gap until Multiprogramming with a Variable number of Tasks (MVT), the intended target configuration of OS/360, became available in 1967. Early versions of MVT had many problems, so the simpler MFT continued to be used for many years. After introducing new System/370 machines with virtual memory in 1972, IBM developed MFT II into OS/VS1, the last system of this particular line.
The first version of MFT shared much of the code and architecture with PCP, and was limited to four partitions. It was very cumbersome to run multiple partitions. Many installations used Houston Automatic Spooling Priority (HASP) to mitigate the complexity.
MFT Version II (MFT-II) shared much more of the Control Program and Scheduler code with MVT, and was much more flexible to run. The maximum number of partitions increased to 52.
Later modifications of MFT-II added sub-tasking, so that the fixed number of tasks was no longer fixed, although the number of partitions did remain a limitation.
Experience indicated that it was not advisable to install MFT on systems with less than 256 KiB of memory, which in the 1960s was quite a large amount.
Also referred to as SYS=INT in macro expansions that were system-dependent.
MVT
Multiprogramming with a Variable number of Tasks (MVT) was the most sophisticated of the three available configurations of OS/360's control program, and one of two available configurations in the final releases. MVT was intended for the largest machines in the System/360 family. Introduced in 1964, it did not become available until 1967. Early versions had many problems and the simpler MFT continued to be used for many years. Experience indicated that it was not advisable to install MVT on systems with less than 512 KiB of memory.
MVT treated all memory not used by the operating system as a single pool from which contiguous regions could be allocated as required by an unlimited number of simultaneous application and systems programs. This scheme was more flexible than MFT's and in principle used memory more efficiently, but was liable to fragmentation - after a while one could find that, although there was enough spare memory in total to run a program, it was divided into separate chunks none of which was large enough. System/360 lacked memory relocation hardware so memory compaction could not be used to reduce fragmentation. A facility called Rollout/Rollin could swap a running job out to secondary storage to make its memory available to another job. The rolled-out job would, however, have to be rolled-in to the original memory locations when they again became available.
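The fragmentation problem can be illustrated with a toy model of first-fit allocation over a list of free holes (purely illustrative; MVT itself was written in System/360 assembly, not in a high-level language like the one below):

    # Toy model of contiguous region allocation (first fit), showing how external
    # fragmentation arises. Purely illustrative; not MVT code.
    def first_fit(free_list, size):
        """free_list: list of (start, length) holes in KiB; returns start or None."""
        for i, (start, length) in enumerate(free_list):
            if length >= size:
                if length == size:
                    del free_list[i]
                else:
                    free_list[i] = (start + size, length - size)
                return start
        return None

    free = [(0, 200), (300, 100), (500, 150)]   # three separate holes, 450 KiB free in total
    print(first_fit(free, 180))   # fits in the first hole
    print(first_fit(free, 250))   # None: 270 KiB remain free, but no single hole is big enough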
In 1971 the Time Sharing Option (TSO) for use with MVT was added as part of release 20.1. TSO became widely used for program development because it provided an editor, the ability to submit batch jobs, be notified of their completion, and view the results without waiting for printed reports, and debuggers for some of the programming languages used on System/360. TSO in OS/360 communicated with terminals by using Telecommunications Access Method (TCAM). TCAM's name suggests that IBM hoped it would become the standard access method for data communications, but in fact TCAM in OS/VS2 was used almost entirely for TSO and was largely superseded by Virtual Telecommunications Access Method (VTAM) in the mid-to-late 1970s.
Also referred to as SYS=VMS in invocations of some macros that were system-dependent.
M65MP
Model 65 Multiprocessing (M65MP) is a variant of MVT. It runs on a 360/65 in Multisystem mode. M65MP traps use of the Set System Mask (SSM) instruction to serialize disabled code between the two CPUs. For the most part an M65MP system has the same behavior and interfaces as any other MVT system.
The keyword parameter SYS=VMS included M65MP as well as uniprocessor MVT.
Shared features
PCP, MFT and MVT provide similar facilities from the point of view of application programs:
The same application programming interface (API) and application binary interface (ABI), so application programs can be transferred between MFT and MVT without even needing to be modified or re-assembled or re-compiled.
The same JCL (Job Control Language, for initiating batch jobs), which was more flexible and easier to use, though more complex, than that of DOS/360.
The same facilities (access methods) for reading and writing files and for data communications:
Sequential data sets are normally read or written one record at a time from beginning to end, using BSAM or QSAM. This was the only technique that could be used for tape drives, card readers / punches and printers.
In indexed (ISAM) files a specified section of each record is defined as a key which can be used to look up specific records.
In direct access (BDAM) files, the application program has to specify the relative block number, the relative track and record (TTR) or the actual physical location (MBBCCHHR) in a Direct-access storage device (DASD) of the data it wanted to access, or the starting point for a search by key. BDAM programming was not easy and most organizations never used it themselves; but it was the fastest way to access data on disks and many software companies used it in their products, especially database management systems such as ADABAS, IDMS and IBM's DL/I. It is also available from OS/360 Fortran. BDAM datasets are unblocked, with one logical record per physical record.
An additional file structure, partitioned, and access method (BPAM), is mainly used for managing program libraries. Although partitioned files need to be compressed to reclaim free space, this has less impact than did a similar requirement for DOS/360's Core Image Library, because MFT and MVT allow multiple partitioned datasets and each project generally has at least one.
Generation Data Groups (GDGs) were originally designed to support grandfather-father-son backup procedures (sketched after this list of facilities): if a file was modified, the changed version became the new son, the previous son became the father, the previous father became the grandfather and the previous grandfather was deleted. But one could set up GDGs with more than 3 generations, and some applications used GDGs to collect data from large and variable numbers of sources and feed the information to one program: each collecting program created a new generation of the file and the final program read the whole group as a single sequential file (by not specifying a generation in the JCL).
BTAM, a data communications facility, was primitive and hard to use by today's standards. However, it could communicate with almost any type of terminal, which was a big advantage at a time when there was hardly any standardization of communications protocols.
The file naming system allows files to be managed as hierarchies with at most 8 character names at each level, e.g. PROJECT.USER.FILENAME. This is tied to the implementation of the system catalog (SYSCTLG) and Control Volumes (CVOLs), which used records with 8 byte keys.
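The grandfather-father-son rotation behind GDGs, mentioned above, can be sketched as follows (a toy illustration with hypothetical file names, not actual OS/360 code):

    # Toy illustration of grandfather-father-son rotation (keep the newest three
    # generations); not actual OS/360 code, and file names are hypothetical.
    def add_generation(generations, new_version, keep=3):
        """generations: list ordered oldest-first; returns the pruned list."""
        generations.append(new_version)      # the new son
        return generations[-keep:]           # the oldest generation falls off

    backups = []
    for day in ("mon.dat", "tue.dat", "wed.dat", "thu.dat"):
        backups = add_generation(backups, day)
        print(backups)
    # final state: ['tue.dat', 'wed.dat', 'thu.dat'] = grandfather, father, son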
Shared features excluding PCP
Some features were available only for MFT and MVT:
A SPOOLing facility for MFT II and MVT (which DOS/360 initially lacked but later provided via the POWER application).
Applications in MFT (Release 19 and later) and MVT could create sub-tasks, which allowed multitasking (multithreading) within the one job.
Graphic Job Processing
Satellite Graphic Job Processing
Remote Job Entry
Queued Telecommunications Access Method (QTAM)
Telecommunications Access Method (TCAM)
System/370 and virtual memory operating systems
When System/370 was announced in 1970 it offered essentially the same facilities as System/360 but with about 4 times the processor speeds of similarly priced System/360 CPUs. Then in 1972 IBM announced System/370 Advanced Functions, of which the main item was that future sales of System/370 would include virtual memory capability and this could also be retro-fitted to existing System/370 CPUs. Hence IBM also committed to delivering enhanced operating systems which could support the use of virtual memory.
OS/360
IBM provided an OS/360 SYSGEN option for S/370 support, which did not support DAT but did:
Support control registers
Support enhanced I/O
Provide a S/370 Machine Check Handler
Provide limited support for the new timer facilities
OS/VS1
OS/VS1 is the successor to MFT, and offers similar facilities with several additions, e.g., RES, virtual memory. VSAM (see below) was initially available as an independent component release (ICR) and later integrated into the OS/VS1 base. IBM released fairly minor enhancements of OS/VS1 until 1983, and in 1984 announced that there would be no more. AIX/370, AIX/ESA, DPPX, IX/370, OS/VS1 and TSS/370 are the only System/370 operating systems that do not have modern descendants.
OS/VS2 SVS and MVS
OS/VS2 release 1 was just MVT plus virtual memory and VSAM (see below). This version was eventually renamed OS/VS2 SVS, for Single Virtual Storage, when OS/VS2 Release 2, also known as MVS, for Multiple Virtual Storage, was introduced. SVS was intended as a stepping stone from MVT to MVS, and is only of historical interest today.
In 1974 IBM released what it described as OS/VS2 Release 2 but which was really a new operating system that was upwards-compatible with OS/VS2 Release 1. The Supervisor of the new system had been largely rewritten in a new dialect of BSL, PL/S; BSL and PL/S were dialects of PL/I with extensions designed to transcribe Assembly language code, including privileged instructions needed to control the computer as a whole. Time-sensitive OS components, such as the OS Dispatcher and the IOS, notably, among many others, remained coded in Assembly Language, which had been enhanced for OS/VS in the IFOX00 Assembler (from the older, OS/360 IEUASM Assembler).
The new version's most noticeable feature was that it supported multiple virtual address spaces - different applications thought they were using the same range of virtual addresses, but the new system's virtual memory facilities mapped these to different ranges of real memory addresses. Each application's address space consists of 3 areas: operating system (one instance shared by all jobs); an application area which was unique for each application; shared virtual area used for various purposes including inter-job communication. IBM promised that the application areas would always be at least 8MB. This approach eliminated the risk of memory fragmentation that was present in MVT and SVS, and improved the system's internal security. The new system rapidly became known as "MVS" (Multiple Virtual Storages), the original OS/VS2 became known as "SVS" (Single Virtual Storage) and IBM itself accepted this terminology and labelled MVS's successors "MVS/xxx".
MVS introduced a new approach to workload management, allowing users to define performance targets for high-priority batch jobs. This enabled users to give their systems more work than before without affecting the performance of the highest-priority jobs.
MVS was IBM's first mainstream operating system on the System/370 to support what IBM called tightly coupled multiprocessing, in which 2 (later up to 12 for IBM mainframes, and up to 16 for Amdahl mainframes) CPUs shared concurrent access to the same memory (and a single copy of the operating system and peripheral devices), providing greater processing power and a degree of graceful degradation if one CPU failed (which became an increasingly rare event as system uptime rose from hours to days and then to years).
Initially MVS was supplied with a job queue manager called JES2 (Job Entry Subsystem 2), which was descended from HASP (Houston Automatic Spooling Priority) and also supported Remote Job Entry from workstations located elsewhere. JES2 can only manage jobs for one CPU (which might be a tightly coupled multiprocessor system). In 1976 IBM provided another option, JES3 (Job Entry Subsystem 3), a descendant of ASP (Attached Support Processor), which allows one CPU to manage a single job queue feeding work to several physically distinct CPUs, and therefore allows one operator's console to manage the work of all those CPUs. Note: JES1 was the job queue manager for OS/VS1 (see above).
VSAM
IBM hoped that Virtual storage access method (VSAM) would replace its earlier sequential, indexed and direct access methods as it provided improved versions of these:
Entry-Sequenced Datasets (ESDS) provide facilities similar to those of both sequential and BDAM (direct) datasets, since they can be read either from start to finish or directly by specifying an offset from the start.
Key-Sequenced Datasets (KSDS) are a major upgrade from IBM's ISAM: they allow secondary keys with non-unique values and keys formed by concatenating non-contiguous fields in any order; they greatly reduce the performance problems caused by overflow records used to handle insertions and updates in ISAM; and they greatly reduce the risk that a software or hardware failure in the middle of an index update might corrupt the index. VSAM provides an ISAM / VSAM Interface which allows ISAM-based applications to use VSAM KSDS without reprogramming.
Relative Record Datasets (RRDS) are a replacement for direct access (BDAM) datasets, allowing applications to access a record by specifying a relative record number. Unlike ESDS and KSDS, RRDS does not support variable-length records.
These VSAM formats became the basis of IBM's database management systems, IMS/VS and DB2 - usually ESDS for the actual data storage and KSDS for indexes.
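The three record organizations described above can be illustrated with a short, purely conceptual sketch. The Python class and method names below are invented for this illustration and merely mimic the access patterns; they do not describe VSAM's actual on-disk structures.

```python
# Conceptual sketch only - not IBM code. Each class mimics the access pattern
# of one VSAM record organization using plain in-memory Python structures.

class Esds:
    """Entry-sequenced data set: records are appended and later read either
    sequentially or directly by their offset from the start."""
    def __init__(self):
        self.records = []            # the list index plays the role of an offset
    def append(self, record):
        self.records.append(record)
        return len(self.records) - 1
    def read_at(self, offset):
        return self.records[offset]

class Ksds:
    """Key-sequenced data set: records are located through a key index,
    with no ISAM-style overflow chains."""
    def __init__(self):
        self.index = {}              # stands in for the VSAM index component
    def insert(self, key, record):
        self.index[key] = record
    def read(self, key):
        return self.index[key]

class Rrds:
    """Relative record data set: fixed-length slots addressed by relative
    record number; variable-length records are not supported."""
    def __init__(self, slots, record_length):
        self.record_length = record_length
        self.slots = [None] * slots
    def write(self, rrn, record):
        assert len(record) == self.record_length
        self.slots[rrn] = record
    def read(self, rrn):
        return self.slots[rrn]

payroll = Ksds()
payroll.insert("00123", b"JONES  0450")
print(payroll.read("00123"))
```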
VSAM also provides a new implementation of the catalog facility which enables applications to access files by name, without needing to know which disk drive(s) they are on. VSAM datasets must be defined in a VSAM catalog before they are used, and non-VSAM datasets can also be listed in a VSAM catalog. The MVS Master Catalog must be a VSAM catalog. Catalogs were originally provided in OS/360 in the form of CVOLs; MVS added a separate catalog structure for VSAM; later IBM added a third type of catalog known as an ICF catalog. (IBM removed support for CVOL and VSAM catalogs as of 2000, since they were not Y2K-compliant; hence in z/OS, only ICF catalogs are supported.)
SNA
In 1974 IBM announced Systems Network Architecture, which was meant to reduce the cost of running large networks of terminals, mainly by using communications lines much more efficiently. SNA is only available for IBM's virtual memory operating systems, since its mainframe software component, VTAM, is only available with those operating systems.
Later MVS versions and enhancements
In 1977 IBM announced MVS/System Extensions, a program product (i.e., it cost extra money) which improved MVS performance and added functionality.
Descendants of MVS are still used on the latest descendants of System/360, the System/390 and zSeries; MVS was renamed OS/390 for System/390, and the 64-bit version for the zSeries was named z/OS.
Timeline
These data are taken from IBM 360 Operating Systems Release History, System/370 Market Chronology of Products & Services,
and IBM announcement letters.
Notes
References
Further reading
Manuals
IBM, "MVT Guide" - GC28-6720-4, R21, March 1972
IBM, "MVT Supervisor PLM" - GY28-6659-7, Program Logic Manual, March 1972
IBM, "OS I/O Supervisor PLM" - GY28-6616-1, Program Logic Manual, April 1967
IBM, "OS I/O Supervisor PLM" - GY28-6616-9, Program Logic Manual, R21.7, April 1973
Books
Brooks, Jr., Frederick P. (1975). "The Mythical Man-Month: Essays on Software Engineering", Addison-Wesley. (Reprinted with corrections, January 1982)
Binder, Robert V. (1985). "Application Debugging: An MVS Abend Handbook for Cobol, Assembly, PL/I, and Fortran Programmers", Prentice-Hall.
Pugh, Emerson W.; Johnson, Lyle R.; Palmer, John H. (1991). IBM's 360 and Early 370 Systems, Cambridge: MIT Press. (pp. 291–345)
References in popular culture
ABEND
External links
Operating System/360 1965–1972
MVS... A Long History on archive.org
IBM mainframe operating systems
Computer-related introductions in 1964
1960s software
|
235433
|
https://en.wikipedia.org/wiki/Xiph.Org%20Foundation
|
Xiph.Org Foundation
|
Xiph.Org Foundation is a nonprofit organization that produces free multimedia formats and software tools. It focuses on the Ogg family of formats, the most successful of which has been Vorbis, an open and freely licensed audio format and codec designed to compete with the patented WMA, MP3 and AAC. As of 2013, development work was focused on Daala, an open and patent-free video format and codec designed to compete with VP9 and the patented High Efficiency Video Coding.
In addition to its in-house development work, the foundation has also brought several already-existing but complementary free software projects under its aegis, most of which have a separate, active group of developers. These include Speex, an audio codec designed for speech, and FLAC, a lossless audio codec.
The Xiph.Org Foundation has criticized Microsoft and the RIAA for their lack of openness. They state that if companies like Microsoft had owned patents on the Internet, then other companies would have tried to compete, and "The Net, as designed by warring corporate entities, would be a battleground of incompatible and expensive 'standards' had it actually survived at all." They also criticize the RIAA for their support of projects such as the Secure Digital Music Initiative.
In 2008, the Free Software Foundation listed the Xiph.Org projects as High Priority Free Software Projects.
History
Chris Montgomery, creator of the Ogg container format, founded the Xiphophorus company and later the Xiph.Org Foundation. The first work that became the Ogg media projects started in 1994. The name "Xiph" abbreviates the original organizational name, "Xiphophorus", named after the common swordtail fish, Xiphophorus hellerii. It was officially incorporated on 15 May 1996 as Xiphophorus, Inc. The name "Xiphophorus company" was used until 2002, when it was renamed to Xiph.Org Foundation.
In 1999, the Xiphophorus company defined itself on its website as "a distributed group of Free and Open Source programmers working to protect the foundations of Internet multimedia from domination by self-serving corporate interests."
In 2002, the Xiph.Org Foundation defined itself on its website as "a non-profit corporation dedicated to protecting the foundations of Internet multimedia from control by private interests."
In March 2003, the Xiph.Org Foundation was recognized by the IRS as a 501(c)(3) Non-Profit Organization, which means that U.S. citizens can deduct donations made to Xiph.Org from their taxes.
Xiph.Org Foundation projects
Ogg – a multimedia container format, a reference implementation, and the native file and stream format for the Xiph.org multimedia codecs
Vorbis – a lossy audio compression format and codec
Theora – a lossy video coding format and codec
FLAC – a lossless audio compression format and software
Speex – a lossy speech encoding format and software (deprecated)
CELT – an ultra-low delay lossy audio compression format that has been merged into Opus, and is now obsolete
Opus – a low delay lossy audio compression format originally intended for VoIP
Tremor – an integer-only implementation of the Vorbis audio decoder for embedded devices (software)
OggPCM – an encapsulation of PCM audio data inside the Ogg container format
Skeleton – structuring information for multi-track Ogg files (a logical bitstream within an Ogg stream)
RTP payloads – containers for Vorbis, Theora, Speex and Opus.
CMML – an XML-based markup language for time-continuous data (a timed text codec; deprecated)
Ogg Squish – a lossless audio compression format and software (discontinued)
Tarkin – an experimental lossy video coding format; no stable release (discontinued)
Daala – a video coding format and codec
Kate – an overlay codec that can carry animated text and images.
libao – an audio-output library that operates on different platforms
Annodex – an encapsulation format, which interleaves time-continuous data with CMML markup in a streamable manner
Icecast – an open source multi-platform streaming server (software)
Ices – a source client for broadcasting in Ogg Vorbis or MP3 format to an icecast2 server (software)
IceShare – an unfinished peercasting system for Ogg multimedia (no longer maintained)
cdparanoia – an open source CD Audio extraction tool that aims to be bit-perfect (currently unmaintained)
XSPF – an XML Shareable Playlist Format
OpenCodecs
OpenCodecs is a software package for Windows adding DirectShow filters for the Theora and WebM codecs. It adds Theora and WebM support to Windows Media Player and enables HTML5 video in Internet Explorer. It consists of:
dshow, Xiph's DirectShow filters for their suite of Ogg formats, including Theora and Vorbis
webmdshow, the DirectShow filter for WebM maintained by the WebM project
An ActiveX plugin adding HTML5 video capability to Internet Explorer older than version 9
QuickTime Components
Xiph QuickTime Components are implementations of the Ogg container along with the Speex, Theora, FLAC and Vorbis codecs for QuickTime. They allow users to use Ogg files in any application that uses QuickTime for audio and video file support, such as iTunes and QuickTime Player.
Since the QuickTime Components do not function in macOS Sierra and above, the project was discontinued in 2016.
References
External links
Free software project foundations in the United States
Non-profit organizations based in Massachusetts
|
45474416
|
https://en.wikipedia.org/wiki/USC%20Trojans%20football%20statistical%20leaders
|
USC Trojans football statistical leaders
|
The USC Trojans football statistical leaders are individual statistical leaders of the USC Trojans football program in various categories, including passing, rushing, receiving, total offense, defensive stats, and kicking/special teams. Within those areas, the lists identify single-game, single-season, and career leaders. The Trojans represent the University of Southern California in the NCAA's Pac-12 Conference.
Although USC began competing in intercollegiate football in 1888, the school's official record book considers the "modern era" to have begun in the 1920s. Records from before this decade are often incomplete and inconsistent, and they are generally not included in these lists.
These lists are dominated by more recent players for several reasons:
Since the 1920s, seasons have increased in length to 11 and then 12 games.
The NCAA did not allow freshmen to play varsity football until 1972 (except during the World War II years); since then, players have been able to have four-year careers.
The Trojans have played in 55 bowl games in school history, 35 of which have come since the 1970 season. Although the official NCAA record book does not include bowl games in statistical records until 2002, and most colleges also structure their record books this way, USC counts all bowl games in its records.
These lists are updated through the end of the 2017 season. Recent USC Football Media Guides do not include full top 10 lists for single-game records. However, the 2003 version of the media guide included long lists of top individual single-game performances, and box scores from more recent games are readily available, so the lists are easily derived.
Passing
Passing yards
Passing touchdowns
Rushing
Rushing yards
Rushing touchdowns
Receiving
Receptions
Receiving yards
Receiving touchdowns
Total offense
Total offense is the sum of passing and rushing statistics. It does not include receiving or returns.
Total offense yards
Total touchdowns
Defense
Note: The USC Football Media Guide does not generally give a full top 10 in defensive statistics.
Interceptions
Tackles
Sacks
Special teams
Field goals made
Field goal percentage
References
USC
|
851269
|
https://en.wikipedia.org/wiki/Unit%20record%20equipment
|
Unit record equipment
|
Starting at the end of the nineteenth century, well before the advent of electronic computers, data processing was performed using electromechanical machines collectively referred to as unit record equipment, electric accounting machines (EAM) or tabulating machines.
Unit record machines came to be as ubiquitous in industry and government in the first two-thirds of the twentieth century as computers became in the last third. They allowed large volume, sophisticated data-processing tasks to be accomplished before electronic computers were invented and while they were still in their infancy. This data processing was accomplished by processing punched cards through various unit record machines in a carefully choreographed progression. This progression, or flow, from machine to machine was often planned and documented with detailed flowcharts that used standardized symbols for documents and the various machine functions. All but the earliest machines had high-speed mechanical feeders to process cards at rates from around 100 to 2,000 per minute, sensing punched holes with mechanical, electrical, or, later, optical sensors. The operation of many machines was directed by the use of a removable plugboard, control panel, or connection box. Initially all machines were manual or electromechanical. The first use of an electronic component was in 1937 when a photocell was used in a Social Security bill-feed machine. Electronic components were used on other machines beginning in the late 1940s.
The term unit record equipment also refers to peripheral equipment attached to computers that reads or writes unit records, e.g., card readers, card punches, printers, MICR readers.
IBM was the largest supplier of unit record equipment and this article largely reflects IBM practice and terminology.
History
Beginnings
In the 1880s Herman Hollerith invented the recording of data on a medium that could then be read by a machine. Prior uses of machine readable media had been for lists of instructions (not data) to drive programmed machines such as Jacquard looms and mechanized musical instruments. "After some initial trials with paper tape, he settled on punched cards [...]". To process these punched cards, sometimes referred to as "Hollerith cards", he invented the keypunch, sorter, and tabulator unit record machines. These inventions were the foundation of the data processing industry. The tabulator used electromechanical relays to increment mechanical counters. Hollerith's method was used in the 1890 census. The company he founded in 1896, the Tabulating Machine Company (TMC), was one of four companies that in 1911 were amalgamated in the forming of a fifth company, the Computing-Tabulating-Recording Company, later renamed IBM.
Following the 1900 census a permanent Census bureau was formed. The bureau's contract disputes with Hollerith led to the formation of the Census Machine Shop where James Powers and others developed new machines for part of the 1910 census processing. Powers left the Census Bureau in 1911, with rights to patents for the machines he developed, and formed the Powers Accounting Machine Company. In 1927 Powers' company was acquired by Remington Rand. In 1919 Fredrik Rosing Bull, after examining Hollerith's machines, began developing unit record machines for his employer. Bull's patents were sold in 1931, constituting the basis for Groupe Bull.
These companies, and others, manufactured and marketed a variety of general-purpose unit record machines for creating, sorting, and tabulating punched cards, even after the development of computers in the 1950s. Punched card technology had quickly developed into a powerful tool for business data-processing.
Timeline
1884: Herman Hollerith files a patent application titled "Art of Compiling Statistics"; granted on January 8, 1889.
1886: First use of tabulating machine in Baltimore's Department of Health.
1887: Hollerith files a patent application for an integrating tabulator (granted in 1890).
1889: First recorded use of integrating tabulator in the Office of the Surgeon General of the Army.
1890-1895: U.S. Census (Superintendents Robert P. Porter, 1889-1893, and Carroll D. Wright, 1893-1897); tabulations are done using equipment supplied by Hollerith.
1896: The Tabulating Machine Company founded by Hollerith, trade name for products is Hollerith
1901: Hollerith Automatic Horizontal Sorter
1904: Porter, having returned to England, forms The Tabulator Limited (UK) to market Hollerith's machines.
1905: Hollerith reincorporates the Tabulating Machine Company as The Tabulating Machine Company
1906: Hollerith Type 1 Tabulator, the first tabulator with an automatic card feed and control panel.
1909: The Tabulator Limited renamed as British Tabulating Machine Company (BTM).
1910: Tabulators built by the Census Machine Shop print results.
1910: Willy Heidinger, an acquaintance of Hollerith, licenses Hollerith’s The Tabulating Machine Company patents, creating Dehomag in Germany.
1911: Computing-Tabulating-Recording Company (CTR), a holding company, formed by the amalgamation of The Tabulating Machine Company and three other companies.
1911: James Powers forms Powers Tabulating Machine Company, later renamed Powers Accounting Machine Company. Powers had been employed by the Census Bureau to work on tabulating machine development and was given the right to patent his inventions there. The machines he developed sensed card punches mechanically, as opposed to Hollerith's electric sensing.
1912: The first Powers horizontal sorting machine.
1914: Thomas J. Watson hired by CTR.
1914: The Tabulating Machine Company produces 2 million punched cards per day.
1914: The first Powers printing tabulator.
1915: Powers Tabulating Machine Company establishes European operations through the Accounting and Tabulating Machine Company of Great Britain Limited.
1919: Fredrik Rosing Bull, after studying Hollerith's machines, constructs a prototype 'ordering, recording and adding machine' (tabulator) of his own design. About a dozen machines were produced during the following several years for his employer.
1920s: Early in this decade punched cards began to be used as bank checks.
1920: BTM begins manufacturing its own machines, rather than simply marketing Hollerith equipment.
1920: The Tabulating Machine Company's first printing tabulator, the Hollerith Type 3.
1921: Powers-Samas develops the first commercial alphabetic punched card representation.
1922: Powers develops an alphabetic printer.
1923: Powers develops a tabulator that accumulates and prints both sub and grand totals (rolling totals).
1923: CTR acquires 90% ownership of Dehomag, thus acquiring patents developed by them.
1924: Computing-Tabulating-Recording Company (CTR) renamed International Business Machines (IBM). There would be no IBM-labeled products until 1933.
1925: The Tabulating Machine Company's first horizontal card sorter, the Hollerith Type 80, processes 400 cards/min.
1927: Remington Typewriter Company and Rand Kardex combine to form Remington Rand. Within a year, Remington Rand acquires the Powers Accounting Machine Company.
1928: The Tabulating Machine Company's first tabulator that could subtract, the Hollerith Type IV tabulator. The Tabulating Machine Company begins its collaboration with Benjamin Wood, Wallace John Eckert and the Statistical Bureau at Columbia University. The Tabulating Machine Company's 80-column card introduced. Comrie uses punched card machines to calculate the motions of the moon. This project, in which 20,000,000 holes are punched into 500,000 cards continues into 1929. It is the first use of punched cards in a purely scientific application.
1929: The Accounting and Tabulating Machine Company of Great Britain Limited renamed Powers-Samas Accounting Machine Limited (Samas, full name Societe Anonyme des Machines a Statistiques, had been Powers' sales agency in France, formed in 1922). The informal reference "Acc and Tab" would persist.
1930: The Remington Rand 90 column card, offering "more storage capacity [and] alphabetic capability"
1931: H.W.Egli - BULL founded to capitalize on the punched card technology patents of Fredrik Rosing Bull. The Tabulator model T30 is introduced.
1931: The Tabulating Machine Company's first punched card machine that could multiply, the 600 Multiplying Punch. Their first alphabetic accounting machine, the Alphabetic Tabulator Model B - which supported only a partial alphabet - was quickly followed by the full-alphabet ATC.
1931: The term "Super Computing Machine" is used by the New York World newspaper to describe the Columbia Difference Tabulator, a one-of-a-kind special purpose tabulator-based machine made for the Columbia Statistical Bureau, a machine so massive it was nicknamed "Packard". The Packard attracted users from across the country: "the Carnegie Foundation, Yale, Pittsburgh, Chicago, Ohio State, Harvard, California and Princeton."
1933: Compagnie des Machines Bull is the new name of the reorganized H.W. Egli - Bull.
1933: The Tabulating Machine Company name disappears as subsidiary companies are merged into IBM. The Hollerith trade name is replaced by IBM. IBM introduces removable control panels.
1933: Dehomag's BK tabulator (developed independently of IBM) announced.
1934: IBM renames its Tabulators as Electric Accounting Machines.
1935: BTM Rolling Total Tabulator introduced.
1937: Leslie Comrie establishes the Scientific Computing Service Limited - the first for-profit calculating agency.
1937: The first collator, the IBM 077 Collator. The first use of an electronic component in an IBM product was a photocell in a Social Security bill-feed machine. By 1937 IBM had 32 presses at work in Endicott, N.Y., printing, cutting and stacking five to 10 million punched cards every day.
1938: Powers-Samas multiplying punch introduced.
1941: Introduction of Bull Type A unit record machines based on the 80-column card.
1943: "IBM had about 10,000 tabulators on rental [...] 601 multipliers numbered about 2000 [...] keypunch[es] 24,500".
1946: The first IBM punched card machine that could divide, the IBM 602, was introduced. Unreliable, it "was upgraded to the 602-A (a '602 that worked') [...] by 1948". The IBM 603 Electronic Multiplier was introduced, "the first electronic calculator ever placed into production".
1948: The IBM 604 Electronic Punch. "No other calculator of comparable size or cost could match its capability".
1949: The IBM 024 Card Punch, 026 Printing Card Punch, 082 Sorter, 403 Accounting machine, 407 Accounting machine, and Card Programmed Calculator (CPC) introduced.
1952: Bull Gamma 3 introduced. An electronic calculator with delay-line memory, programmed by a connection panel, that was connected to a tabulator or card reader-punch. The Gamma 3 had greater capacity, greater speed, and lower rentals than competitive products.
1952: Remington Rand 409 Calculator (aka UNIVAC 60, UNIVAC 120) introduced.
1952: Underwood Corp acquires the American assets of Powers-Samas.
By the 1950s punched cards and unit record machines had become ubiquitous in academia, industry and government. The warning often printed on cards that were to be individually handled, "Do not fold, spindle or mutilate", coined by Charles A. Philips, became a motto for the post-World War II era (even though many people had no idea what spindle meant).
With the development of computers punched cards found new uses as their principal input media. Punched cards were used not only for data, but for a new application - computer programs, see: Computer programming in the punched card era. Unit record machines therefore remained in computer installations in a supporting role for keypunching, reproducing card decks, and printing.
1955: IBM produces 72.5 million punched cards per day.
1957: The IBM 608, a transistor version of the 1948 IBM 604. First commercial all-transistor calculator.
1958: The "Series 50", basic accounting machines, was announced. These were modified machines, with reduced speed and/or function, offered for rental at reduced rates. The name "Series 50" relates to a similar marketing effort, the "Model 50", seen in the IBM 1940 product booklet. An alternate report identifies the modified machines as "Type 5050" introduced in 1959 and notes that Remington-Rand introduced similar products.
1959: BTM is merged with rival Powers-Samas to form International Computers and Tabulators (ICT).
1959: The IBM 1401, internally known in IBM for a time as "SPACE" for "Stored Program Accounting and Calculating Equipment" and developed in part as a response to the Bull Gamma 3, outperforms three IBM 407s and a 604, while having a much lower rental. That functionality combined with the availability of tape drives, accelerated the decline in unit record equipment usage.
1960: The IBM 609 Calculator, an improved 608 with core memory. This was IBM's last punched card calculator.
Many organizations were loath to alter systems that were working, so production unit record installations remained in operation long after computers offered faster and more cost effective solutions. Specialized uses of punched cards, including toll collection, microform aperture cards, and punched card voting, kept unit record equipment in use into the twenty-first century. Another reason was cost or availability of equipment: for example in 1965 an IBM 1620 computer did not have a printer as standard equipment, so it was normal in such installations to punch printed output onto cards, using two cards per line if required and print these cards on an IBM 407 accounting machine and then throw the cards away.
1968: International Computers and Tabulators (ICT) is merged with English Electric Computers, forming International Computers Limited (ICL).
1969: The IBM System/3, renting for less than $1,000 a month, the ancestor of IBM's midrange computer product line, aka. minicomputers, was aimed at new customers and organizations that still used IBM 1400 series computers or unit record equipment. It featured a new, smaller, punched card with a 96 column format. Instead of the rectangular punches in the classic IBM card, the new cards had tiny (1 mm), circular holes much like paper tape. By July 1974 more than 25,000 System/3s had been installed.
1971: The IBM 129 Card Data Recorder (keypunch and auxiliary on-line card reader/punch) is the last, or among the last, 80-column card unit record product announcements (other than card readers and card punches attached to computers).
1975: Cardamation founded, a U.S. company that supplied punched card equipment and supplies until 2011.
Endings
1976: The IBM 407 Accounting Machine was withdrawn from marketing.
1978: IBM's Rochester plant made its last shipment of the IBM 082, 084, 085, 087, 514, and 548 machines. The System/3 was succeeded by the System/38.
1980: The last reconditioning of an IBM 519 Document Originating Punch.
1984: The IBM 029 Card Punch, announced in 1964, was withdrawn from marketing. IBM closed its last punch card manufacturing plant.
2010: A group from the Computer History Museum reported that an IBM 402 Accounting Machine and related punched card equipment was still in operation at a filter manufacturing company in Conroe, Texas. The punched card system was still in use as of 2013.
2011: The owner of Cardamation, Robert G. Swartz, dies, and the company, perhaps the last supplier of punch card equipment, ceases operation.
2015: Punched cards for time clocks and some other applications were still available; one supplier was the California Tab Card Company. As of 2018, the web site was no longer in service.
Punched cards
The basic unit of data storage was the punched card. The IBM 80-column card was introduced in 1928; the Remington Rand card, with 45 columns in each of two tiers (thus 90 columns), in 1930. Powers-Samas punched cards included one with 130 columns. Columns on different punched cards vary from 5 to 12 punch positions.
The method used to store data on punched cards is vendor specific. In general each column represents a single digit, letter or special character. Sequential card columns allocated for a specific use, such as names, addresses, multi-digit numbers, etc., are known as a field. An employee number might occupy 5 columns; hourly pay rate, 3 columns; hours worked in a given week, 2 columns; department number, 3 columns; project charge code, 6 columns; and so on.
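As a rough illustration of such a fixed-column layout, the sketch below decodes one 80-column card image in Python. The field names and column assignments are invented for the example and do not come from any particular installation or vendor.

```python
# Illustrative sketch only: the field names and column positions below are
# invented to match the example layout described in the text.
FIELDS = {                         # field name: (first column, width); columns are 1-based
    "employee_number": (1, 5),
    "hourly_pay_rate": (6, 3),
    "hours_worked":    (9, 2),
    "department":      (11, 3),
    "project_charge":  (14, 6),
}

def decode_card(card_image):
    card_image = card_image.ljust(80)          # a card image always has 80 columns
    record = {}
    for name, (first, width) in FIELDS.items():
        record[name] = card_image[first - 1:first - 1 + width]
    return record

print(decode_card("00123" + "045" + "40" + "123" + "ABC456"))
```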
Keypunching
Original data was usually punched into cards by workers, often women, known as keypunch operators. Their work was often checked by a second operator using a verifier machine.
Sorting
An activity in many unit record shops was sorting card decks into the order necessary for the next processing step. Sorters, like the IBM 80 series Card Sorters, sorted input cards into one of 13 pockets depending on the holes punched in a selected column and the sorter's settings. The 13th pocket was for blanks and rejects. Sorting could be done on one card column at a time; sorting on, for example, a five digit zip code required that the card deck be processed five times. Sorting an input card deck into ascending sequence on a multiple column field, such as an employee number, was done by a radix sort, bucket sort, or a combination of the two methods.
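The column-at-a-time procedure amounts to a least-significant-digit radix sort, which the following Python sketch imitates. It is an idealized model of a sorter's digit pockets, added here for illustration, and does not describe any specific machine.

```python
# Idealized model of a card sorter performing a radix sort on a numeric field,
# one column per pass, least significant column first.
def sorter_pass(deck, column):
    """Distribute cards into ten digit pockets on one column, then collect
    the pockets in order (blank/reject handling is omitted)."""
    pockets = [[] for _ in range(10)]
    for card in deck:
        pockets[int(card[column])].append(card)
    return [card for pocket in pockets for card in pocket]

def radix_sort(deck, first_column, width):
    """Sort on a multi-column field by running one sorter pass per column."""
    for column in reversed(range(first_column, first_column + width)):
        deck = sorter_pass(deck, column)
    return deck

zip_codes = ["90210", "10001", "60601", "30301", "90001"]
print(radix_sort(zip_codes, 0, 5))   # five passes, as for a five-digit zip code
```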
Sorters were also used to separate decks of interspersed master and detail cards, either by a significant hole punch or by the cards' corner cut.
More advanced functionality was available in the IBM 101 Electronic Statistical Machine, which could
Sort
Count
Accumulate totals
Print summaries
Send calculated results (counts and totals) to an attached IBM 524 Duplicating Summary Punch.
Tabulating
Reports and summary data were generated by accounting or tabulating machines. The original tabulators only counted the presence of a hole at a location on a card. Simple logic, such as ANDs and ORs, could be done using relays.
Later tabulators, such as those in IBM's 300 series, directed by a control panel, could do both addition and subtraction of selected fields to one or more counters and print each card on its own line. At some signal, say a following card with a different customer number, totals could be printed for the just completed customer number. Tabulators became complex: the IBM 405 contained 55,000 parts (2,400 different) and 75 miles of wire; a Remington Rand machine circa 1941 contained 40,000 parts.
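The control-break behaviour described above - listing each card and emitting a total when the control field (for example a customer number) changes - can be sketched in Python as follows. This is a simplified, invented model, not a description of an actual control-panel setup.

```python
# Simplified model of a tabulator run: list each detail card, add an amount
# field into a counter, and print a total whenever the control field changes.
def tabulate(deck, control_field, amount_field):
    current, counter = None, 0
    for card in deck:
        if current is not None and card[control_field] != current:
            print(f"*** total for {current}: {counter}")
            counter = 0
        current = card[control_field]
        counter += int(card[amount_field])
        print(f"detail: {card}")
    if current is not None:
        print(f"*** total for {current}: {counter}")

deck = [
    {"customer": "A01", "amount": "100"},
    {"customer": "A01", "amount": "250"},
    {"customer": "B07", "amount": "040"},
]
tabulate(deck, "customer", "amount")
```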
Calculating
In 1931, IBM introduced the model 600 multiplying punch. The ability to divide became commercially available after World War II. The earliest of these calculating punches were electromechanical. Later models employed vacuum tube logic. Electronic modules developed for these units were used in early computers, such as the IBM 650. The Bull Gamma 3 calculator could be attached to tabulating machines, unlike the stand-alone IBM calculators.
Card punching
Card punching operations included:
Gang punching - producing a large number of identically punched cards—for example, for inventory tickets.
Reproducing - reproducing a card deck in its entirety or just selected fields. A payroll master card deck might be reproduced at the end of a pay period with the hours worked and net pay fields blank and ready for the next pay period's data. Programs in the form of card decks were reproduced for backup.
Summary punching - punching new cards with details and totals from an attached tabulating machine.
Mark sense reading - detecting electrographic lead pencil marks on ovals printed on the card and punching the corresponding data values into the card.
Singularly or in combination, these operations were provided in a variety of machines. The IBM 519 Document-Originating Machine could perform all of the above operations.
The IBM 549 Ticket Converter read data from Kimball tags, copying that data to punched cards.
With the development of computers, punched cards were also produced by computer output devices.
Collating
IBM collators had two input hoppers and four output pockets. These machines could merge or match card decks based on the control panel's wiring.
The Remington Rand Interfiling Reproducing Punch Type 310-1 was designed to merge two separate files into a single file. It could also punch additional information into those cards and select desired cards.
Collators performed operations comparable to a database join.
Interpreting
An interpreter prints characters equivalent to the values of columns on the card. The columns to be printed can be selected and even reordered, based on the machine's control panel wiring. Later models could print on one of several rows on the card. Unlike keypunches, which print values directly above each column, interpreters generally use a font that was a little wider than a column and can only print up to 60 characters per row. Typical models include the IBM 550 Numeric Interpreter, the IBM 557 Alphabetic Interpreter, and the Remington Rand Type 312 Alphabetic Interpreter.
Filing
Batches of punched cards were often stored in tub files, where individual cards could be pulled to meet the requirements of a particular application.
Transmission of punched card data
Electrical transmission of punched card data was invented in the early 1930s. The device was called an Electrical Remote Control of Office Machines and was assigned to IBM.
The inventors were Joseph C. Bolt of Boston and Curt I. Johnson of Worcester, Mass., assignors to the Tabulating Machine Co., Endicott, NY. The Distance Control Device received a US patent on August 9, 1932. Letters from IBM mention a filing in Canada on September 15, 1931.
Processing punched tape
The IBM 046 Tape-to-Card Punch and the IBM 047 Tape-to-Card Printing Punch (which was almost identical, but with the addition of a printing mechanism) read data from punched paper tape and punched that data into cards. The IBM 063 Card-Controlled Tape Punch read punched cards, punching that data into paper tape.
Control panel wiring and Connection boxes
The operation of Hollerith/BTM/IBM/Bull tabulators and many other types of unit record equipment was directed by a control panel. Operation of Powers-Samas/Remington Rand unit record equipment was directed by a connection box.
Control panels had a rectangular array of holes called hubs which were organized into groups. Wires with metal ferrules at each end were placed in the hubs to make connections. The output from some card column positions might be connected to a tabulating machine's counter, for example. A shop would typically have separate control panels for each task a machine was used for.
Paper handling equipment
For many applications, the volume of fan-fold paper produced by tabulators required other machines, not considered to be unit record machines, to ease paper handling.
A decollator separated multi-part fan-fold paper into individual stacks of one-part fan-fold and removed the carbon paper.
A burster separated one-part fan-fold paper into individual sheets. For some uses it was desirable to remove the tractor-feed holes on either side of the fan-fold paper. In these cases the form's edge strips were perforated and the burster removed them as well.
See also
British Tabulating Machine Company
Fredrik Rosing Bull
Gustav Tauschek
IBM Electromatic Table Printing Machine
IBM 632 Accounting Machine
IBM 6400 Series
Leslie Comrie
List of IBM products
Powers Accounting Machine Company
Powers-Samas
Remington Rand
List of UNIVAC products
Wallace John Eckert
Notes and references
Further reading
Note: Most IBM form numbers end with an edition number, a hyphen followed by one or two digits.
For Hollerith and Hollerith's early machines see: Herman Hollerith#Further reading
Histories
Reprinted by Arno Press, 1976, from the best available copy. Some text is illegible.
includes Hollerith (1889) reprint
Punched card applications
– With 42 contributors and articles ranging from Analysis of College Test Results to Uses of the Automatic Multiplying Punch, this book provides an extensive view of unit record equipment use over a wide range of applications. For details of this book see The Baehne Book.
The appendix has IBM and Powers provided product detail sheets, with photo and text, for many machines.
There is a 1954 edition, Ann F. Beach, et al., with a similar title, and a 1956 edition, Joyce Alsop.
Describes several punched card applications.
Note: ISBN is for a reprint ed.
The machines
Unabridged edition of "Data Processing Tech 3 & 2", aka "Rate Training Manual NAVPERS 10264-B", 3rd revised ed., 1970
Chapter 3 Punched Card Equipment describes American machines with some details of their logical organization and examples of control panel wiring.
The four main systems in current use - Powers-Samas, Hollerith, Findex, and Paramount - are examined and the fundamental principles of each are fully explained.
An accessible book of recollections (sometimes with errors), with photographs and descriptions of many unit record machines. The ISBN is for an earlier (2006), printed, edition.
This elementary introduction to punched card systems is unusual because unlike most others, it not only deals with the IBM systems but also illustrates the card formats and equipment offered by Remington Rand and Underwood Samas. Erwin Tomash Library
IBM (1936) Machine Methods of Accounting, 360 p. Includes a 12-page 1936 IBM-written history of IBM and descriptions of many machines.
A simplified description of common IBM machines and their uses.
With descriptions, photos and rental prices.
The IBM Operators Guide, 22-8485 was an earlier edition of this book
Has extensive descriptions of unit record machine construction.
Ken Shirriff's blog Inside card sorters: 1920s data processing with punched cards and relays.
External links
Columbia University Computing History IBM Tabulators and Accounting Machines IBM Calculators IBM Card Interpreters IBM Reproducing and Summary Punches IBM Collators
Columbia University Computing History: L.J. Comrie. From that site: Comrie was the first to turn punched-card equipment to scientific use.
History of Bull Extracted and translated from Science et Vie Micro magazine, No. 74, July–August, 1990: The very international history of a French giant
Musée virtuel de Bull et de l'informatique Française: Information Technology Industry TimeLine. From that site: The present TimeLine page differs from similar pages available on the Internet because it is focused more on the industry than on "inventions". It was originally designed to show the place of the European and more specifically the French computer industry facing its world-wide competition. Most of published time-line charts either consider that everything had an American origin or they show their country patriotism (French, Italian, Russian or British) or their company patriotism.
Musée virtuel de Bull et de l'informatique Française (Virtual Museum of French computers) Systems Catalog
Early office museum
IBM Archives
IBM Punch Card Systems in the U.S. Army
IBM early Card Reader and 1949 electronic Calculator video of unit record equipment in museum
Working Tabulating machines and punched card equipment in technikum29 Computermuseum (nr. Frankfurt/Main, Germany)
Punched card
ja:タビュレーティングマシン
|
48000953
|
https://en.wikipedia.org/wiki/Sue%20McKemmish
|
Sue McKemmish
|
Professor Sue McKemmish is an Australian archivist and scholar in the field of archival science. She is currently the Associate Dean Graduate Research for the Faculty of Information Technology at Monash University, Melbourne.
Career
McKemmish worked for 15 years for the National Archives of Australia and Public Record Office Victoria. In 1990 she joined Frank Upward at Monash University to develop a curriculum for recordkeeping professionals at undergraduate and post-graduate levels. She is best known in her discipline for her seminal paper "Evidence of me" (1996), about personal recordkeeping and societal memory. She also played a significant role in the development of records continuum thinking which led to Frank Upward's Records Continuum Model. In the 1990s she was a founding member of the Records Continuum Research Group at Monash.
She is a leader of continuum thinking, particularly related to societal memory linked to accountability, and is closely associated with the Australian records continuum movement. She has published extensively on recordkeeping in society, records continuum theory, recordkeeping metadata, and archival systems, and is a Laureate of the Australian Society of Archivists.
She has been at the forefront of a research and education agenda based in continuum thinking, which includes the development and leadership of international, multidisciplinary and collaborative research projects, as well as supervising multiple PhD students. She is engaged in major research and standards initiatives relating to the use of metadata in records and archival systems, information resource discovery and smart information portals, Australian Indigenous archives, and the development of more inclusive archival educational programs that meet the needs of diverse communities.
McKemmish has taken on senior leadership roles in the Faculty of Information Technology at Monash. She is the Chair of Archival Systems, Founder and Director of the Centre for Organisational and Social Informatics (COSI), and Associate Dean Graduate Research of the Faculty of Information Technology.
References
External links
Profile at Monash University
Profile on the website of the Records Continuum Research Group
Living people
Monash University faculty
Australian archivists
Female archivists
Year of birth missing (living people)
|
41155229
|
https://en.wikipedia.org/wiki/BLAST%20%28protocol%29
|
BLAST (protocol)
|
BLAST (BLocked ASynchronous Transmission), like XMODEM and Kermit, is a communications protocol designed for file transfer over asynchronous communication ports and dial-up modems that achieved a significant degree of popularity during the 1980s. Reflecting its status as a de facto standard for such transfers, BLAST, along with XMODEM, was briefly under official consideration by ANSI in the mid-80s as part of that organization's ultimately futile attempt to establish a single de jure standard.
Overview
BLAST grew out of the mission-critical experience of providing air pollution telemetry within the dial-up communications environment of the petroleum belt of southern Louisiana and Texas, with not only noisy telephone lines but also unexpected satellite hops to remote locations. As such, BLAST was the only asynchronous protocol to have entered the 1980s computing arena with all of the following features:
bit-oriented data encoding
CRC (cyclic redundancy check) error detection
a sliding window block transmission scheme
selective retransmission of corrupted blocks
simultaneous bi-directional data transfer
BLAST thus gained a reputation as the protocol having the best combination of speed and reliability in its class.
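A toy sketch can make two of the listed ideas concrete: CRC error detection and selective retransmission of corrupted blocks. The Python below is an invented illustration under those assumptions and bears no relation to CRG's actual wire format or implementation.

```python
import zlib

# Toy illustration only: frame data into numbered blocks with a CRC so the
# receiving side can ask for retransmission of just the blocks that arrived
# corrupted, rather than restarting the whole transfer.
BLOCK_SIZE = 512

def make_blocks(data):
    return [(seq, chunk, zlib.crc32(chunk))
            for seq, chunk in enumerate(data[i:i + BLOCK_SIZE]
                                        for i in range(0, len(data), BLOCK_SIZE))]

def bad_blocks(received):
    """Sequence numbers whose CRC no longer matches; only these are resent."""
    return [seq for seq, chunk, crc in received if zlib.crc32(chunk) != crc]

blocks = make_blocks(b"example payload " * 200)
blocks[1] = (1, b"line noise", blocks[1][2])      # simulate a corrupted block
print(bad_blocks(blocks))                         # -> [1]
```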
History
The idea for the BLAST product belongs to Paul Charbonnet, Jr., a former Data General salesman. Its original version was designed and implemented for the Data General line of Nova minicomputers by G. W. Smith, a former BorgWarner Research Center systems engineer who, having developed a basic "ack-nak" protocol for the aforesaid telemetry application, now created an entirely new protocol with all of the above-mentioned features, and for which he devised the "BLAST" acronym.
This work was performed under contract to AMP Incorporated, of Baton Rouge, LA. However, it was another Baton Rouge company, Communications Research Group (CRG), which was to successfully commercialize the BLAST protocol, and which was also to employ Charbonnet and Smith as, respectively, Sales Director and Vice-president of Research and Development.
On the downside, BLAST was criticized by ZMODEM developer Chuck Forsberg because of its proprietary nature, making it "tightly bound to the fortunes of [its supplier]".
Communications Research Group
Communications Research Group (CRG) was a Baton Rouge, Louisiana based company which became a major international vendor of data communications software during the 1980s, and which software had the BLAST protocol at its core.
As representative of one of CRG's mature products, the BLAST-II file transfer software was distinguished by its wide range of features. Beyond supporting the BLAST protocol, it enabled use of the competing XMODEM, encrypted and transmitted data using Secure Sockets Layer (SSL), and had "versions for about a hundred different micros, minis, and mainframes". Like Columbia University's Kermit software, CRG's BLAST-II also provided a scripting language.
CRG was recognized as one of the 100 largest microcomputer software companies in the United States, and it was ultimately acquired by modem manufacturer U.S. Robotics in 1990, and which company continued to develop and sell BLAST products.
See also
Kermit (protocol)
XMODEM
ZMODEM
References
File transfer protocols
Communication software
Communications protocols
Software companies based in California
History of software
Software companies of the United States
BBS file transfer protocols
|
17742923
|
https://en.wikipedia.org/wiki/PV-Wave
|
PV-Wave
|
PV-WAVE (Precision Visuals - Workstation Analysis and Visualization Environment) is an array oriented fourth-generation programming language used by engineers, scientists, researchers, business analysts and software developers to build and deploy visual data analysis applications. In January 2019, PV-Wave parent Rogue Wave Software was acquired by Minneapolis, Minnesota-based application software developer Perforce.
History
PV-WAVE was originally developed by a company called Precision Visuals, based in Boulder, CO. In 1992, the IMSL Numerical Libraries and Precision Visuals merged and the new company was renamed Visual Numerics.
PV-WAVE is closely related to the IDL programming language, from whose code base PV-WAVE originated. The shared history of PV-WAVE and IDL began in 1988, when Precision Visuals entered into an agreement with Research Systems, Incorporated (RSI, the original developer of IDL) under which Precision Visuals resold IDL under the name PV-WAVE. In September 1990, Precision Visuals exercised an option in its agreement with RSI to purchase a copy of the IDL source code. Since that time, IDL and PV-WAVE have been on separate development tracks: each product has been enhanced, supported, and maintained separately by its respective company.
In May 2009, Visual Numerics was acquired by Rogue Wave Software.
In January 2019, Rogue Wave Software was acquired by Minneapolis, Minnesota-based application software developer Perforce.
About
Due to their common history, PV-WAVE and IDL share a similar FORTRAN-like syntax, as well as many common commands, functions, and subroutines.
References
External links
PV-WAVE site
Programming languages
|
40525882
|
https://en.wikipedia.org/wiki/M.2
|
M.2
|
M.2, pronounced m dot two and formerly known as the Next Generation Form Factor (NGFF), is a specification for internally mounted computer expansion cards and associated connectors. M.2 replaces the mSATA standard, which uses the PCI Express Mini Card physical card layout and connectors. Employing a more flexible physical specification, the M.2 allows different module widths and lengths, and, paired with the availability of more advanced interfacing features, makes the M.2 more suitable than mSATA in general for solid-state storage applications, particularly in smaller devices such as ultrabooks and tablets.
Computer bus interfaces provided through the M.2 connector are PCI Express 4.0 (up to four lanes), Serial ATA 3.0, and USB 3.0 (a single logical port for each of the latter two). It is up to the manufacturer of the M.2 host or module to select which interfaces are to be supported, depending on the desired level of host support and the module type. Different M.2 connector keying notches denote various purposes and capabilities of both the M.2 hosts and modules, and also prevent the M.2 modules from being inserted into incompatible host connectors.
The M.2 specification supports NVM Express (NVMe) as the logical device interface for M.2 PCI Express SSDs, in addition to supporting legacy Advanced Host Controller Interface (AHCI) at the logical interface level. While the support for AHCI ensures software-level backward compatibility with legacy SATA devices and legacy operating systems, NVM Express is designed to fully utilize the capability of high-speed PCI Express storage devices to perform many I/O operations in parallel.
Features
M.2 modules can integrate multiple functions, including the following device classes: Wi-Fi, Bluetooth, satellite navigation, near field communication (NFC), digital radio, WiGig, wireless WAN (WWAN), and solid-state drives (SSDs). The SATA revision 3.2 specification, in its gold revision, standardizes the M.2 as a new format for storage devices and specifies its hardware layout. Buses exposed through the M.2 connector include PCI Express 3.0 and newer, Serial ATA (SATA) 3.0 and USB 3.0; all these standards are backward compatible.
The M.2 specification provides up to four PCI Express lanes and one logical SATA 3.0 (6 Gbit/s) port, and exposes them through the same connector so both PCI Express and SATA storage devices may exist in the form of M.2 modules. Exposed PCI Express lanes provide a pure PCI Express connection between the host and storage device, with no additional layers of bus abstraction. The PCI-SIG M.2 specification, in its revision 1.0, provides detailed M.2 specifications.
Storage interfaces
Three options are available for the logical device interfaces and command sets used for interfacing with M.2 storage devices, which may be used depending on the type of M.2 storage device and available operating system support:
Legacy SATA: Used for SATA SSDs, and interfaced through the AHCI driver and legacy SATA 3.0 (6 Gbit/s) port exposed through the M.2 connector.
PCI Express using AHCI: Used for PCI Express SSDs and interfaced through the AHCI driver and provided PCI Express lanes, providing backward compatibility with widespread SATA support in operating systems at the cost of lower performance. AHCI was developed when the purpose of a host bus adapter (HBA) in a system was to connect the CPU/memory subsystem with a much slower storage subsystem based on rotating magnetic media; as a result, AHCI has some inherent inefficiencies when applied to SSD devices, which behave much more like RAM than like spinning media.
PCI Express using NVMe: Used for PCI Express SSDs and interfaced through the NVMe driver and provided PCI Express lanes, as a high-performance and scalable host controller interface designed and optimized especially for interfacing with PCI Express SSDs. NVMe has been designed from the ground up, capitalizing on the low latency and enhanced parallelism of PCI Express SSDs, and complementing the parallelism of contemporary CPUs, platforms and applications. At a high level, primary advantages of NVMe over AHCI relate to NVMe's ability to exploit parallelism in host hardware and software, based on its design advantages that include data transfers with fewer stages, greater depth of command queues, and more efficient interrupt processing.
Form factors and keying
The M.2 standard has been designed as a revision and improvement to the mSATA standard, with the possibility of larger printed circuit boards (PCBs) as one of its primary incentives. While the mSATA takes advantage of the existing PCI Express Mini Card (Mini PCIe) form factor and connector, M.2 has been designed from the ground up to maximize usage of the PCB space while minimizing the module footprint. As the result of the M.2 standard allowing longer modules and double-sided component population, M.2 SSD modules can provide larger storage capacities and can also double the storage capacity within the footprints of mSATA devices.
M.2 modules are rectangular, with an edge connector on one side and a semicircular mounting hole at the center of the opposite edge. The edge connector has 75 positions with up to 67 pins, employing a 0.5 mm pitch and offsetting the pins on opposing sides of the PCB from each other. Each pin on the connector is rated for up to 50 V and 0.5 A, while the connector itself is specified to endure 60 mating cycles. The M.2 standard allows module widths of 12, 16, 22 and 30 mm, and lengths of 16, 26, 30, 38, 42, 60, 80 and 110 mm. Initial line-up of the commercially available M.2 expansion cards is 22 mm wide, with varying lengths of 30, 42, 60, 80 and 110 mm. The codes for the M.2 module sizes contain both the width and length of a particular module; for example, "2242" as a module code means that the module is 22 mm wide and 42 mm long, while "2280" denotes a module 22 mm wide and 80 mm long.
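Decoding the width/length portion of these codes is straightforward, as the sketch below shows. It handles only the "WWLL" digits; the thickness and keying codes described later depend on tables not reproduced here, so they are deliberately left out.

```python
# Minimal sketch: split an M.2 size code such as "2242" or "22110" into its
# width and length in millimeters. Only the WWLL part of the naming scheme
# is handled here.
def decode_m2_size(code):
    width_mm = int(code[:2])       # first two digits: module width
    length_mm = int(code[2:])      # remaining digits: module length
    return width_mm, length_mm

print(decode_m2_size("2242"))      # -> (22, 42)
print(decode_m2_size("2280"))      # -> (22, 80)
print(decode_m2_size("22110"))     # -> (22, 110)
```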
An M.2 module is installed into a mating connector provided by the host's circuit board, and a single mounting screw secures the module into place. Components may be mounted on either side of the module, with the actual module type limiting how thick the components can be; the maximum allowable thickness of components is 1.5 mm per side, and the thickness of the PCB is . Different host-side connectors are used for single- and double-sided M.2 modules, providing different amounts of space between the M.2 expansion card and the host's PCB. Circuit boards on the hosts are usually designed to accept multiple lengths of M.2 modules, which means that the sockets capable of accepting longer M.2 modules usually also accept shorter ones by providing different positions for the mounting screw.
The PCB of an M.2 module provides a 75-position edge connector; depending on the type of module, certain pin positions are removed to present one or more keying notches. Host-side M.2 connectors (sockets) may populate one or more mating key positions, determining the type of modules accepted by the host; host-side connectors are available with only one mating key position populated (either B or M). Furthermore, M.2 sockets keyed for SATA or two PCI Express lanes (PCIe ×2) are referred to as "socket 2 configuration" or "socket 2", while the sockets keyed for four PCI Express lanes (PCIe ×4) are referred to as "socket 3 configuration" or "socket 3".
For example, M.2 modules with two notches in B and M positions use up to two PCI Express lanes and provide broader compatibility at the same time, while the M.2 modules with only one notch in the M position use up to four PCI Express lanes; both examples may also provide SATA storage devices. Similar keying applies to M.2 modules that utilize provided USB 3.0 connectivity.
Various types of M.2 modules are denoted using the "WWLL-HH-K-K" or "WWLL-HH-K" naming schemes, in which "WW" and "LL" specify the module width and length in millimeters, respectively. The "HH" part specifies, in an encoded form, whether a module is single- or double-sided, and the maximum allowed thickness of mounted components; possible values are listed in the right table above. Module keying is specified by the "K-K" part, in an encoded form using the key IDs from the left table above; it can also be specified as "K" only, if a module has only one keying notch.
Besides socketed modules, the M.2 standard also includes the option for permanently soldered single-sided modules.
See also
U.2
List of interface bit rates
NVM Express
References
External links
(SATA-IO) official website
(PCI-SIG) official website
Understanding M.2, the interface that will speed up your next SSD, Ars Technica, February 9, 2015, by Andrew Cunningham
LFCS: Preparing Linux for nonvolatile memory devices, LWN.net, April 19, 2013, by Jonathan Corbet
PCIe SSD 101: An Overview of Standards, Markets and Performance, SNIA, August 2013, archived from the original on February 2, 2014
US patent 20130294023, November 7, 2013, assigned to Raphael Gay
Computer-related introductions in 2013
Computer connectors
Motherboard expansion slot
SATA Express
Serial ATA
Peripheral Component Interconnect
USB
|
3062956
|
https://en.wikipedia.org/wiki/Neptune%20trojan
|
Neptune trojan
|
Neptune trojans are bodies that orbit the Sun near one of the stable Lagrangian points of Neptune, similar to the trojans of other planets. They therefore have approximately the same orbital period as Neptune and follow roughly the same orbital path. Twenty-eight Neptune trojans are currently known, of which 24 orbit near the Sun–Neptune L4 Lagrangian point 60° ahead of Neptune and four orbit near the L5 region 60° behind Neptune. The Neptune trojans are termed 'trojans' by analogy with the Jupiter trojans.
The discovery of a Neptune trojan in a high-inclination (>25°) orbit was significant, because it suggested a "thick" cloud of trojans (Jupiter trojans have inclinations up to 40°), which is indicative of freeze-in capture instead of in situ or collisional formation. It is suspected that large (radius ≈ 100 km) Neptune trojans could outnumber Jupiter trojans by an order of magnitude.
In 2010, the discovery of the first known trailing (L5) Neptune trojan was announced. Neptune's trailing (L5) region is currently very difficult to observe because it is along the line of sight to the center of the Milky Way, an area of the sky crowded with stars.
Discovery and exploration
In 2001, the first Neptune trojan was discovered near Neptune's leading (L4) region, and with it the fifth known populated stable reservoir of small bodies in the Solar System. In 2005, the discovery of a high-inclination trojan indicated that the Neptune trojans populate thick clouds, which has constrained their possible origins (see below).
On August 12, 2010, the first trailing (L5) trojan was announced. It was discovered by a dedicated survey that scanned regions where the light from the stars near the Galactic Center is obscured by dust clouds. This suggests that large L5 trojans are as common as large L4 trojans, to within uncertainty, further constraining models about their origins (see below).
It would have been possible for the New Horizons spacecraft to investigate Neptune trojans discovered by 2014, when it passed through this region of space en route to Pluto. Some of the patches where the light from the Galactic Center is obscured by dust clouds are along New Horizons's flight path, allowing detection of objects that the spacecraft could image. The highest-inclination Neptune trojan then known was just bright enough for New Horizons to observe it in late 2013 at a distance of 1.2 AU. However, New Horizons may not have had sufficient downlink bandwidth, so it was eventually decided to give precedence to the preparations for the Pluto flyby.
Dynamics and origin
The orbits of Neptune trojans are highly stable; Neptune may have retained up to 50% of the original post-migration trojan population over the age of the Solar System. Neptune's L5 region can host stable trojans equally well as its L4 region. Neptune trojans can librate up to 30° from their associated Lagrangian points with a 10,000-year period. Neptune trojans that escape enter orbits similar to centaurs. Although Neptune cannot currently capture stable trojans, roughly 2.8% of the centaurs within 34 AU are predicted to be Neptune co-orbitals. Of these, 54% would be in horseshoe orbits, 10% would be quasi-satellites, and 36% would be trojans (evenly split between the L4 and L5 groups).
The unexpected high-inclination trojans are the key to understanding the origin and evolution of the population as a whole. The existence of high-inclination Neptune trojans points to capture during planetary migration instead of in situ or collisional formation. The estimated equal number of large L4 and L5 trojans indicates that there was no gas drag during capture and points to a common capture mechanism for both L4 and L5 trojans. The capture of Neptune trojans during a migration of the planets occurs via a process similar to the chaotic capture of Jupiter trojans in the Nice model. When Uranus and Neptune are near but not in a mean-motion resonance, the locations where Uranus passes Neptune can circulate with a period that is in resonance with the libration periods of Neptune trojans. This results in repeated perturbations that increase the libration of existing trojans, causing their orbits to become unstable. This process is reversible, allowing new trojans to be captured when the planetary migration continues. For high-inclination trojans to be captured, the migration must have been slow, or their inclinations must have been acquired previously.
Colors
The first four discovered Neptune trojans have similar colors. They are modestly red, slightly redder than the gray Kuiper belt objects, but not as extremely red as the high-perihelion cold classical Kuiper belt objects. This is similar to the colors of the blue lobe of the centaur color distribution, the Jupiter trojans, the irregular satellites of the gas giants, and possibly the comets, which is consistent with a similar origin of these populations of small Solar System bodies.
The Neptune trojans are too faint to efficiently observe spectroscopically with current technology, which means that a large variety of surface compositions are compatible with the observed colors.
Naming
In 2015, the IAU adopted a new naming scheme for Neptune trojans, which are to be named after Amazons, with no differentiation between objects in L4 and L5. The Amazons were an all-female warrior tribe that fought in the Trojan War on the side of the Trojans against the Greeks. As of 2019, the named Neptune trojans are 385571 Otrera (after Otrera, the first Amazonian queen in Greek mythology) and Clete (an Amazon and the attendant to the Amazons' queen Penthesilea, who led the Amazons in the Trojan War).
Members
The number of high-inclination objects in such a small sample, in which relatively fewer high-inclination Neptune trojans are known due to observational biases, implies that high-inclination trojans may significantly outnumber low-inclination trojans. The ratio of high- to low-inclination Neptune trojans is estimated to be about 4:1. Assuming albedos of 0.05, there is an expected population of Neptune trojans with radii above 40 km in Neptune's L4 region. This would indicate that large Neptune trojans are 5 to 20 times more abundant than Jupiter trojans, depending on their albedos. There may be relatively fewer smaller Neptune trojans, which could be because these fragment more readily. Large L5 trojans are estimated to be as common as large L4 trojans.
and display significant dynamical instability. This means they could have been captured after planetary migration, but may as well be long-term members that happen not to be perfectly dynamically stable.
As of February 2020, 29 Neptune trojans are known, of which 24 orbit near the Sun–Neptune L4 Lagrangian point 60° ahead of Neptune, four orbit near Neptune's L5 region 60° behind Neptune, and one orbits on the opposite side of Neptune but frequently changes location relative to Neptune between L4 and L5. These are listed in the following table. It is constructed from the list of Neptune trojans maintained by the IAU Minor Planet Center, with diameters from Sheppard and Trujillo's paper unless otherwise noted.
and were thought to be Neptune trojans at the time of their discovery, but further observations have disconfirmed their membership. is currently thought to be in a 3:5 resonance with Neptune. is currently following a quasi-satellite loop around Neptune.
See also
Nice model
Nice 2 model
, a temporary quasi-satellite of Neptune.
Notes
References
External links
Distant minor planets
Lists of minor planets
8
|
17747734
|
https://en.wikipedia.org/wiki/University%20of%20the%20Philippines%20Los%20Ba%C3%B1os%20College%20of%20Arts%20and%20Sciences
|
University of the Philippines Los Baños College of Arts and Sciences
|
The College of Arts and Sciences (CAS) is one of the eleven degree-granting units of the University of the Philippines Los Baños. It is the largest college in the University of the Philippines System, offering most of the general education subjects required of UPLB students as well as the highest number of degree programs in the University. The Philippines' Commission on Higher Education has recognized CAS as a Center of Excellence in Biology, Chemistry, Information Technology and Mathematics, as well as a Center of Development in Physics and Statistics.
History
The Board of Regents of the University of the Philippines, at its 828th meeting on 21 December 1972, adopted Presidential Decree No. 58, issued on November 20, 1972, establishing the College of Science and Humanities. The Board appointed Edelwina C. Legaspi as the first dean of the new college and, a month later, Dolores A. Ramirez as secretary.
The College's first seven departments were Humanities, Chemistry, Mathematics, Statistics and Physics, Botany, Zoology, Life Sciences and Social Sciences.
The college was renamed College of Arts and Sciences on October 28, 1977.
On 23 March 1983, CAS reorganized its science and mathematics departments into three institutes: the Institute of Mathematical Sciences and Physics, from the former Department of Mathematics and Physics, the Department of Statistics and Statistical Laboratory, and the Computer Science unit; the Institute of Chemistry, from the former Department of Chemistry; and the Institute of Biological Sciences, from the former Botany, Life Sciences and Zoology departments. Together with three similar institutes in UP Diliman, they were designed to form part of a system of national centers of excellence in the basic sciences.
The Division of Computer Science, because of the fast development in the field and rapid growth of student population, became an Institute in January 1996.
The Physical Education Department was placed under the CAS and started offering a Diploma in Physical Education. It was eventually renamed Department of Human Kinetics.
The Division of Statistics was transformed into an Institute in January 1998.
Institutes/Departments
Institute of Biological Sciences (IBS) - Established on March 23, 1983, it offers a Bachelor of Science degree in Biology with specializations in cell and molecular biology, ecology, genetics, microbiology, plant biology, systematics, wildlife biology, and zoology. It also offers 12 graduate degrees: six master's degrees, three doctor of philosophy, and three doctor of philosophy by research. Students are admitted to the institute through the University of the Philippines College Admission Test. The institute is composed of five divisions, namely animal biology, environmental biology, genetics and molecular biology, microbiology, and plant biology. In 2014, the institute's undergraduate degree program in Biology received its certification and accreditation under the ASEAN University Network-Quality Assurance System. Researchers from the institute have discovered new species of spiders from Mount Makiling. Other facilities that support the institute are the Limnological Station, the UPLB Museum of Natural History, the University of the Philippines Open University, the National Institute of Molecular Biology and Biotechnology, the National Crop Protection Center, the Institute of Plant Breeding, and the International Rice Research Institute.
Institute of Chemistry (IC)
Institute of Computer Science (ICS)
Institute of Mathematical Sciences and Physics (IMSP)
Institute of Statistics (IS)
Department of Humanities (DH)
Department of Social Sciences (DSS)
Department of Human Kinetics (DHK)
UP Rural High School (UPRHS)
Undergraduate Degree Programs
BA Communication Arts (Majors: Speech, Theater, Writing)
BA Philosophy
BA Sociology
BS Applied Mathematics (Majors: Actuarial Science, Operations Research, Biomathematics)
BS Applied Physics (Specializations: Computational Physics, Experimental Physics, Instrumentation Physics)
BS Biology (Majors: Cell and Molecular Biology, Ecology, Genetics, Microbiology, Plant Biology, Systematics, Wildlife Biology and Zoology)
BS Chemistry
BS Computer Science
BS Mathematics
BS Mathematics and Science Teaching (Majors: Physics, Chemistry, Biology, Mathematics)
BS Statistics
References
External links
College of Arts and Sciences website
University of the Philippines Los Baños
Arts and Sciences
Educational institutions established in 1972
|
6099082
|
https://en.wikipedia.org/wiki/Scala%20%28company%29
|
Scala (company)
|
Scala is a producer of multimedia software. It was founded in 1987 as a Norwegian company called Digital Visjon. It is headquartered near Philadelphia, Pennsylvania, USA, and has subsidiaries in Europe and Asia.
History
In 1987, a young Norwegian entrepreneur, Jon Bøhmer, founded the company "Digital Visjon" in Brumunddal, Norway, to create multimedia software for the Commodore Amiga computer platform. In 1988 the company released its first product, InfoChannel 0.97L, with hotels and cable-TV companies as its first customers.
In 1990, they redesigned the program with a new graphical user interface. They renamed the company and the software "Scala" and released a number of multimedia applications. The company attracted investors, mainly from Norway, incorporated in the US in 1994, and is now based in the United States, with its European headquarters in the Netherlands.
The name "Scala" was given by Bøhmer and designer Bjørn Rybakken and represents the scales in colors, tones and the opera in Milano. The name inspired a live actor animation made by Bøhmer and Rybakken using an Amiga, a video camera and a frame-by-frame video digitizer. The animation, named "Lo scalatore" (Italian for 'The Climber'), featured a magic trick of Indian fakirs of a man climbing a ladder and disappearing in the air. This animation was then included into one of the Demo Disks of Scala Multimedia in order to show the capabilities of that presentation software in loading and playing animations whilst also manipulating it with other features of the software.
In 1994 Scala released Multimedia MM400 and InfoChannel 500.
In 1996, due to the bankruptcy of Commodore, Scala left the Amiga platform and started delivering the same applications under MS-DOS. Scala Multimedia MM100, Scala Multimedia Publisher and Scala InfoChannel 100 were released for the x86 platform. Scala MM100 won Byte Magazine's "Best of Comdex" in 1996.
Corporate governance
As of December 2013, the CEO of Scala is Tom Nix, who was formerly a regional vice president. Nix succeeds Gerard Bucas, who retired after nine years.
Scala Multimedia
The first versions for the Amiga computer were a video titler and slide show authoring system. Scala was bundled with typefaces, background images, and a selection of transition effects to be applied to them. The artwork was designed by Bjørn Rybakken. Scala was also capable of working with genlock equipment to superimpose titles over footage played through the device's video input.
Succeeding versions of the program on the same platform added features such as animation playback, more effects ("Wipes") and the ability to interact with multimedia devices through a programming language called "Lingua" (Latin for "language").
With its move to Windows, Scala became more complex and gained the ability to support languages such as Python and Visual Basic.
Scala5
In late 2008, Scala stopped calling their product line InfoChannel and went through a period of referring only to their "solutions". At the start of 2009, the product line was being called 'Scala5' and being referred to as such in all their press releases.
Scala5 has three main components: Scala Designer, an authoring program which is used to create dynamic content, Scala Content Manager, which is used to manage and distribute content, and Scala Player, which plays back the distributed content.
Scala Enterprise
Scala's latest suite of digital signage software is referred to as Scala Enterprise. The solution, a software suite consisting of Scala Designer, Scala Player, and Scala Content Manager, officially launched in mid-2013.
At launch, release version 10.0 featured HTML5 and Android player support, the usage of interactive features on mobile devices to engage with retail and corporate communications audiences, and social media integrations.
As of April 2018, the latest version of Scala Enterprise is version 11.05.
References
External links
Original animation of "LoScalatore" and other Scala demos preserved for historical purposes on randelshofer.ch site
Scala's Trade Mark 'InfoChannel' (CTM 301275) has been cancelled by the Office for Harmonization in the Internal Market
Presentation software
Amiga software
Software companies of the United States
Signage
|
14200152
|
https://en.wikipedia.org/wiki/DCC%20Alliance
|
DCC Alliance
|
The DCC Alliance (DCCA) was an industry association designed to promote a common subset of the Debian Linux operating system that multiple companies within the consortium could distribute. It was founded by Ian Murdock in 2005 and was wound up in 2007.
History
The main force behind the DCC Alliance was Ian Murdock, the original founder of the Debian project. The DCC Alliance was formed while Murdock was CEO of Progeny Linux Systems, and he remained the key spokesperson for the consortium during its visible existence. The founding of the DCC Alliance was announced at LinuxWorld San Francisco on 9 August 2005, following a number of pre-announcements.
The stated intention was to assemble a standards-based core of Debian, provide a predictable release cycle, and ensure Linux Standard Base compliance.
The DCC Alliance shipped its first code six months after the original pre-announcements, providing a Linux Standard Base (LSB) 3.0 compliant set of program packages based on those available from Debian.
The Alliance's primary goals were to:
Assemble a 100 percent Debian common core that addresses the needs of enterprise business users
Maintain certification of the common core with the Free Standards Group open specification, the Linux Standard Base
Use the Alliance's combined strength to accelerate the commercial adoption of Debian
Work with the Debian project to ensure predictable release cycles and features important to commercial adoption
Membership
There were two classes of membership in the DCC Alliance:
Members: those organisations creating products based on the DCC-provided core subset of packages.
Knoppix, LinEx, Linspire, MEPIS, Progeny, Sun Wah Linux, Xandros
Associate Members: independent software vendors, hardware vendors, OEMs and community partners providing related support or business.
credativ, Skolelinux, UserLinux
Membership remained open to additional organizations with an interest in Debian-based solutions. The most visible absence was the Ubuntu distribution, which declined to join the Alliance. The Ubuntu founder, Mark Shuttleworth, stated in 2006 that he did not believe that the DCC Alliance had any future.
One of the founding members, MEPIS, later left the DCCA, citing "creative differences". MEPIS transitioned their SimplyMEPIS Linux distribution from a Debian Unstable/DCCA-provided core to an Ubuntu-based one.
In 2006, Ian Murdock left the DCC Alliance to chair the Linux Standard Base workgroup and later moved to Sun Microsystems. In 2007, Progeny, the original driver behind the consortium, was wound up. In 2006, Xandros was still claiming that it "leads the engineering team at the DCCA".
Name
When originally formed, the names given to the media were "Debian Core Consortium" and then "Debian Common Core". Following trademark notification from the Debian project, the name was withdrawn and replaced, without a formal announcement, by "DCC Alliance". Ian Murdock explained that the D should no longer be treated as an abbreviation of Debian but of DCC, making DCC a recursive acronym for "DCC Common Core".
Notably, the "Debian" trademark that was being denied to Mr. Murdock and the DCC Alliance originates from a combination of the -ian part of Mr. Murdock's own given name, concatenated to that of his wife's name; Debra Murdock, and the decision over the infringement of the trademark fell to Branden Robinson, then Debian Project Leader (DPL), who was an employee of Progeny Linux Systems (and of Mr. Murdock) during the time at which the decision was made. Mr. Robinson stated that this would not represent a conflict of interest.
References
Debian
Linux organizations
|
55751287
|
https://en.wikipedia.org/wiki/Rainway
|
Rainway
|
Rainway is a video game streaming service. Rainway allows users to run games on their Windows 10 PC and play them on other devices over an internet connection. The initial beta version launched on January 20, 2018. Version 1.0 of the software launched on January 31, 2019.
Compatibility
Rainway is compatible with games purchased from Steam, Origin, Battle.net, itch.io, GOG.com and Uplay. The service can run in web browsers and will also be compatible with iOS and Android mobile phones, as well as Xbox One consoles.
History
Rainway was first announced in March 2017, by Andrew Sampson, with a beta planned for May 5. The announcement was made on the official website for Ulterius, another streaming service worked on by Sampson which used similar technologies, but focused on desktop remote access rather than game streaming. However, Rainway did not gain significant attention until April, when it announced its plan to support the then-newly released Nintendo Switch console. During E3 2017, Rainway announced that the Rainway beta would launch on November 25. The release of the beta was later delayed again, to January 20, 2018.
In August 2018, Rainway closed its seed round, having raised $1.5 million in seed funding from GoAhead Ventures. The software left beta on January 31, 2019, with the release of version 1.0.
Later in 2019, David Perry (former CEO of Gaikai) and Jon Kimmich joined the company's advisory board, as it closed another $3.5 million round of funding. Investors included Bullpen Capital, Madrona Venture Group, GoAhead Ventures, and Bill Mooney. An iOS public beta version was released on September 9, 2019.
See also
Cloud gaming
References
External links
Video gaming
Streaming media systems
Streaming software
2018 software
IOS software
Windows software
|
47083813
|
https://en.wikipedia.org/wiki/ERP%20security
|
ERP security
|
ERP security is a wide range of measures aimed at protecting enterprise resource planning (ERP) systems from illicit access and ensuring the accessibility and integrity of system data. An ERP system is computer software that serves to unify the information used to manage an organization, including Production, Supply Chain Management, Financial Management, Human Resource Management, Customer Relationship Management, and Enterprise Performance Management. Common ERP systems include SAP, Oracle E-Business Suite, and Microsoft Dynamics.
Review
An ERP system integrates business processes, enabling procurement, payment, transport, human resources management, product management, and financial planning.
As ERP systems store confidential information, the Information Systems Audit and Control Association (ISACA) recommends regularly conducting a comprehensive assessment of ERP system security, checking ERP servers for software vulnerabilities, configuration errors, segregation of duties conflicts, and compliance with relevant standards, guidelines, and vendor recommendations.
Causes for vulnerabilities in ERP systems
Complexity
ERP systems process transactions and implement procedures to ensure that users have different access privileges. There are hundreds of authorization objects in SAP permitting users to perform actions in the system. For a company with 200 users, there are approximately 800,000 (100*2*20*200) ways to customize the security settings of the ERP system. With the growth of complexity, the possibility of errors and segregation of duties conflicts increases.
Specificity
Vendors fix vulnerabilities on a regular basis since hackers monitor business applications to find and exploit security issues. SAP releases patches monthly on Patch Tuesday, and Oracle issues security fixes every quarter in the Oracle Critical Patch Update. Business applications are becoming increasingly exposed to the Internet or are migrating to the cloud.
Lack of competent specialists
ERP Cybersecurity survey revealed that organizations running ERP systems "lack both awareness and actions taken towards ERP security".
ISACA states that "there is a shortage of staff members trained in ERP security" and security services have only a superficial understanding of the risks and threats associated with ERP systems. Consequently, security vulnerabilities are more difficult to detect and subsequently fix.
Lack of security auditing tools
ERP security audits are performed manually, as the various tools shipped with ERP packages do not provide means for system security auditing. Manual auditing is a complex and time-consuming process that increases the possibility of making a mistake.
Large number of customized settings
An ERP system includes thousands of parameters and fine-grained settings, including segregation of duties for transactions and tables, and these security parameters are set for every single system. ERP system settings are customized according to customers' requirements.
Security issues in ERP systems
Security issues occur in ERP systems at different levels.
Network layer
Traffic interception and modification
Absence of data encryption
In 2011, Sensepost specialists analyzed the DIAG protocol used in the SAP ERP system for transferring data from the client to the SAP server. Two utilities were published that allowed intercepting, decrypting, and modifying client-server requests containing critical information, making attacks such as man-in-the-middle possible. The second utility operated like a proxy and was created to identify new vulnerabilities; it allowed modifying requests coming to the client and server.
Sending password in cleartext (SAP J2EE Telnet / Oracle listener old versions)
In the SAP ERP system, it is possible to perform administrative functions via the Telnet protocol, which does not encrypt passwords.
Vulnerabilities in encryption or authentication protocols
Authentication by hash
XOR password encryption (SAP DIAG)
Imposing the use of outdated authentication protocols
Incorrect authentication protocols
Vulnerabilities in protocols (e.g. RFC in SAP ERP and Oracle Net in Oracle E-Business Suite)
The RFC (Remote Function Call) protocol is used to connect two systems over TCP/IP in SAP ERP. An RFC call is a function that enables calling and running a function module located in a remote system. The ABAP language that is used for writing business applications for SAP has functions to make RFC calls. Several critical vulnerabilities were found in SAP RFC Library versions 6.x and 7.x:
RFC function "RFC_SET_REG_SERVER_PROPERTY" allows determining an exclusive use of RFC server. Vulnerability exploits lead to a denial of access for the legitimate users. denial of service becomes possible.
Error in RFC function "SYSTEM_CREATE_INSTANCE". Exploiting vulnerability allows executing arbitrary code.
Error in RFC function "RFC_START_GUI". Exploiting vulnerability also allows executing arbitrary code.
Error in RFC function "RFC_START_PROGRAM". Exploiting vulnerability allows executing arbitrary code or gain information about RFC server configuration.
Error in RFC function "TRUSTED_SYSTEM_SECURITY". Exploiting vulnerability allows obtaining information about existing users and groups in RFC server.
Operating system level
OS software vulnerabilities
Any remote vulnerability in the OS can be used to gain access to applications
Weak OS passwords
Remote password brute-forcing
Empty passwords for remote management tools like Radmin and VNC
Insecure OS settings
NFS and SMB. SAP data becomes accessible to remote users via NFS and SMB
File access rights. Critical SAP and Oracle DBMS data files have insecure access rights such as 755 and 777
Insecure host settings. Servers can be listed as trusted hosts, allowing an attacker to access them easily
Application vulnerabilities
ERP systems transfer more and more functionality to the web application level, which carries many vulnerabilities:
Web application vulnerabilities (XSS, XSRF, SQL Injection, Response Splitting, Code Execution)
Buffer overflow and format string in web-servers and application-servers (SAP IGS, SAP Netweaver, Oracle BEA Weblogic)
Insecure privileges for access (SAP Netweaver, SAP CRM, Oracle E-Business Suite)
Role-based access control
In ERP systems, the RBAC (Role-Based Access Control) model is applied for users to perform transactions and gain access to business objects.
In this model, the decision to grant access to a user is made based on the functions of users, or roles. A role is a set of transactions that a user or a group of users performs in the company; a transaction is a procedure that transforms system data. For any role there is a number of corresponding users, each with one or multiple roles. Roles can be hierarchical. After the roles are implemented in the system, the transactions corresponding to each role rarely change; the administrator only needs to add or delete users from roles. The administrator provides a new user with membership in one or more roles and, when employees leave the organization, removes them from all roles.
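A minimal sketch of this role-based model, using hypothetical user, role, and transaction names rather than real ERP authorization objects:

```python
# Minimal RBAC sketch: users are assigned roles, roles bundle transactions,
# and an access decision only checks whether one of the user's roles grants the transaction.
# All role, user, and transaction names are hypothetical.

ROLE_TRANSACTIONS = {
    "accounts_payable_clerk": {"create_invoice", "display_vendor"},
    "payment_approver": {"approve_payment", "display_vendor"},
}

USER_ROLES = {
    "alice": {"accounts_payable_clerk"},
    "bob": {"payment_approver"},
}

def can_execute(user: str, transaction: str) -> bool:
    # access is granted if any of the user's roles includes the transaction
    return any(transaction in ROLE_TRANSACTIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(can_execute("alice", "create_invoice"))   # True
print(can_execute("alice", "approve_payment"))  # False
```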
Segregation of Duties
Segregation or separation of duties, also known as SoD, is the concept according to which no single user can complete a sensitive transaction alone (e.g. a user cannot by themselves add a new supplier, write out a cheque, and pay that supplier), so the risk of fraud is much lower. SoD can be implemented by RBAC mechanisms through the notion of mutually exclusive roles. For instance, to pay a supplier, one user initiates the payment procedure and another accepts it; in this case, initiating payment and accepting it are mutually exclusive roles. Segregation of duties can be either static or dynamic. With static SoD (SSoD), a user cannot belong to two mutually exclusive roles. With dynamic SoD (DSoD), a user can, but cannot exercise them within one transaction. Both have their own advantages: SSoD is simple, while DSoD is flexible. Segregation of duties is described in an SoD matrix, whose rows and columns correspond to system roles; if two roles are mutually exclusive, a flag is placed at the intersection of the corresponding row and column.
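A minimal sketch of a static SoD check against such a matrix, with the matrix represented as a set of mutually exclusive role pairs and reusing the hypothetical role names from the previous sketch:

```python
# Static SoD sketch: the "matrix" is represented as a set of mutually exclusive role pairs.
# A user holding both roles of any flagged pair violates static segregation of duties.
# Role names and assignments are hypothetical.

SOD_MATRIX = {
    frozenset({"accounts_payable_clerk", "payment_approver"}),  # initiate vs. approve payment
}

USER_ROLES = {
    "alice": {"accounts_payable_clerk"},
    "carol": {"accounts_payable_clerk", "payment_approver"},    # conflicting assignment
}

def sod_violations(user_roles: dict) -> list:
    violations = []
    for user, roles in user_roles.items():
        for pair in SOD_MATRIX:
            if pair <= roles:              # user holds both mutually exclusive roles
                violations.append((user, tuple(sorted(pair))))
    return violations

print(sod_violations(USER_ROLES))  # [('carol', ('accounts_payable_clerk', 'payment_approver'))]
```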
Examples of Segregation of Duties software:
Appsian Security Platform for Oracle E-Business Suite and SAP ECC/S4HANA
ERP Security scanners
An ERP security scanner is software intended to search for vulnerabilities in ERP systems. The scanner analyzes configurations of the ERP system, searches for misconfigurations, access control and encryption conflicts, and insecure components, and checks for missing updates. The scanner also checks system parameters for compliance with the manufacturer's recommendations and ISACA auditing procedures. ERP security scanners produce reports with the vulnerabilities listed according to their criticality.
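A minimal sketch of the kind of baseline comparison such a scanner performs; the parameter names, recommended values, and severities below are illustrative assumptions, not an actual vendor baseline:

```python
# Minimal sketch of a configuration-compliance check as performed by an ERP security scanner:
# compare actual system parameters against a recommended baseline and report deviations
# ordered by criticality. Parameter names, values, and severities are illustrative only.

BASELINE = {
    # parameter: (recommended value, severity)
    "login/min_password_lng": ("12", "high"),
    "login/fails_to_user_lock": ("5", "medium"),
    "rdisp/gui_auto_logout": ("900", "low"),
}

ACTUAL = {
    "login/min_password_lng": "6",
    "login/fails_to_user_lock": "5",
    # "rdisp/gui_auto_logout" is not set at all
}

def scan(actual: dict, baseline: dict) -> list:
    findings = []
    for param, (recommended, severity) in baseline.items():
        value = actual.get(param, "<not set>")
        if value != recommended:
            findings.append({"parameter": param, "actual": value,
                             "recommended": recommended, "severity": severity})
    order = {"high": 0, "medium": 1, "low": 2}   # most critical deviations first
    return sorted(findings, key=lambda f: order[f["severity"]])

for finding in scan(ACTUAL, BASELINE):
    print(finding)
```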
Examples of scanners:
SecurityBridge Holistic Cybersecurity Platform for SAP ERP
ERPScan for SAP ERP
Onapsis for SAP ERP
Safe O'Clock for SAP ERP
AppSentry for Oracle E-Business Suite
Appsian Security Platform for Oracle E-Business Suite and Oracle PeopleSoft
MaxPatrol for SAP ERP
ERP Data Security
ERP data security software is intended to provide fine-grained access controls and visibility for specific transactions and data fields within an ERP application. The intention is to ensure that access to data is dynamically enforced based on the context of a user's access rather than on pre-defined roles and privileges alone, both of which can be corrupted or exploited. ERP data security software is intended to work in conjunction with an organization's existing ERP security and identity and access management controls, while providing granular, fine-grained protection for particularly sensitive financial and PII data fields.
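A minimal sketch of this context-aware enforcement layered on top of a role check; the contextual attributes and the policy itself are illustrative assumptions, not any vendor's actual rule set:

```python
# Minimal sketch of dynamic, context-aware enforcement on top of a static role check:
# even if a role grants the transaction, the request is blocked when contextual attributes
# (network location, device, data classification) look risky. The attributes and policy
# below are illustrative assumptions.

def allowed(role_grants_access: bool, context: dict) -> bool:
    if not role_grants_access:
        return False
    if context.get("network") == "external" and context.get("mfa") is not True:
        return False                       # remote access requires multi-factor authentication
    if context.get("data_class") == "pii" and context.get("device") != "managed":
        return False                       # PII fields only readable from managed devices
    return True

print(allowed(True, {"network": "internal", "device": "managed"}))                       # True
print(allowed(True, {"network": "external", "mfa": False, "device": "byod"}))            # False
print(allowed(True, {"network": "internal", "data_class": "pii", "device": "byod"}))     # False
```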
ERP Data Security Use Cases:
Securing remote users
Enforcing zero trust and least privilege
Preventing data exfiltration
Privileged access management
Segregation of Duties
Limiting risk exposure in financial transactions
Threat detection, response & forensics
Custom code vulnerability detection
Examples of ERP data security software:
Appsian Security Platform for Oracle E-Business Suite, Oracle PeopleSoft, and SAP ECC/S4HANA
References
ERP Security
Computer security
|
28259824
|
https://en.wikipedia.org/wiki/List%20of%20UPnP%20AV%20media%20servers%20and%20clients
|
List of UPnP AV media servers and clients
|
This is a list of UPnP AV media servers and client application or hard appliances.
UPnP AV media servers
Software
Cross-platform
Allonis myServer, a multi-faceted media player/organizer with a DLNA/UPnP server, controller, and renderer, including conversion. Runs on Microsoft Windows. Supports most HTML5 devices as remote controls.
Asset UPnP (DLNA compatible) from Illustrate. An audio specific UPnP/DLNA server for Windows, QNAP, macOS and Linux. Features audio WAVE/LPCM transcoding from a range of audio codecs, ReplayGain and playlists.
FreeMi UPnP Media Server, a very simple server, historically used to stream to the Freebox set-top box; based on .NET/Mono.
Home Media Server, a free media server for Windows, Linux, macOS, individual device settings, transcoding, external and internal subtitles, restricted device access to folders, uploading files, Internet-Radio, Internet-Television, Digital Video Broadcasting (DVB), DMR-control and "Play To", Music (Visualization), Photo (Slideshow), support for 3D-subtitles, support for BitTorrent files, Web-navigation with HTML5 player, Digital Media Renderer (DMR) emulation for AirPlay and Google Cast devices.
Jellyfin, a free and open-source suite of multimedia applications designed to organize, manage, and share digital media files to networked devices.
JRiver Media Center, a multi-faceted media player/organizer with a DLNA/UPnP server, controller, and renderer, including conversion. Supports Microsoft Windows, macOS and Linux.
Kodi (previously XBMC), a cross platform open source software media-player/media center for Android, Apple TV, Linux, macOS and Windows.
LimboMedia, a free cross platform home- and UPnP/DLNA mediaserver with android app and WebM transcoding for browser playback (build with java and FFmpeg).
MinimServer, a Java-based, highly configurable UPnP/DLNA music server with additional consideration given to classical music; supports transcoding with MinimStreamer; supports Microsoft Windows, macOS, Linux, and various NAS devices.
Neuron Music Player, acts as a cross platform UPnP/DLNA Media Renderer server available for Android, iOS, BlackBerry 10 & PlayBook platforms. Supports gapless playback and has possibility to output rendered audio further to the high-resolution internal DAC or external USB DAC or another UPnP/DLNA Media Renderer with all supported DSP effects applied.
Plex, a cross-platform and closed source software media player and entertainment hub for digital media, available for macOS, Microsoft Windows, Linux, as well as mobile clients for iOS (including Apple TV (2nd generation) onwards), Android, Windows Phone, and many devices such as Xbox. Supports on-the-fly transcoding of video and music.
PonoMusic World. Based on the JRiver Media Center software, includes similar features along with a store for purchasing HD audio tracks.
PS3 Media Server, a free cross platform Java based UPnP DLNA server especially good for AVC and other current HD media codecs with on-the-fly transcoding.
Serviio, is available with a free and a pro license. It can stream media files (music, video or images) to renderer devices (e.g. a TV set, Blu-ray player, games console or mobile phone) on a local area network.
TVMOBiLi, a cross platform, high performance UPnP/DLNA Media Server for Windows, macOS and Linux.
TwonkyMedia server, a cross-platform multimedia server and entertainment hub for digital media, available for Android, Apple TV, iOS, Linux, macOS, Microsoft Windows, Windows Phone, and Xbox 360.
Universal Media Server, a free (open source) DLNA-compliant UPnP Media Server for Windows, macOS and Linux (originally based on the PS3 Media Server). It is able to stream videos, audio and images to any DLNA-capable device. It contains more features than most paid UPnP/DLNA Media Servers. It streams to many devices including TVs (Samsung, Sony, Panasonic, LG, Philips and more.), PS3, Xbox(One/360), smartphones, Blu-ray players and more.
vGet Cast, a simple, cross platform (Chrome App) DLNA server and controller for single, local video files.
Vuze, an open-source Java-based BitTorrent client which contains MediaServer plugin.
Wild Media Server, a media server for Windows, Linux, macOS, individual device settings, transcoding, external and internal subtitles, restricted device access to folders, uploading files, Internet-Radio, Internet-Television, Digital Video Broadcasting (DVB), DMR-control and "Play To", Music (Visualization), Photo (Slideshow), support for 3D-subtitles, support for BitTorrent files, Web-navigation with HTML5 player, Digital Media Renderer (DMR) emulation for AirPlay and Google Cast devices.
Android
BubbleUPnP Android UPNP/DLNA Server, Player, Controller and Renderer
Pixel Media Server, Android UPNP/DLNA Media Server. Supports all popular Video and Audio files. It also support external subtitle file (SRT)
Plato is an Android UPNP Client App that can play videos and audio.
Toaster Cast Android UPNP/DLNA Server, Controller and Renderer
vGet, Android App that can play videos embedded in websites on DLNA Renderers.
Media Cast UPnP, Android UPNP Client App that can play videos/Audio.
Media Server Pro is a DLNA Server that allows individual file selections for sharing.
Slick UPnP, a minimal and intuitive open-source Android UPnP client app that can play video/audio (it is not a DMS)
Linux
Microsoft Windows
Sundtek Streamingserver a native Windows TV Server providing DVB, ATSC and ISDB-T via UPnP/DLNA, it also supports streaming media files (it only supports TV devices from Sundtek).
Stream What You Hear, a Windows application that streams the sound of your computer (i.e.: “what you hear”) to UPnP/DLNA device such as TVs, amps, network receivers, game consoles, etc...
TVersity Media Server, a Windows application that streams multimedia content from a personal computer to UPnP, DLNA and mobile devices (Chromecast is also supported). It was the first media server to offer real-time transcoding (back in 2005).
TVersity Screen Server, a Windows application that mirrors the screen of a personal computer to UPnP, DLNA and mobile devices.
DVBViewer, a Windows application, mainly for TV/Radio recording/playback, but with the ability to stream live TV/radio as well as multimedia files via UPNP/DLNA.
DivX, a Windows application, mainly for video encoding into DivX format, but has the ability to stream multimedia files via DLNA.
foobar2000, a freeware audio player for Windows. Highly customizable, audio only. Download of dlna-extension from the developers' webpage necessary.
Home Media Center, a free and open source media server compatible with DLNA. Includes web interface for streaming content to web browser (Android, iOS, ...), subtitles integration and Windows desktop streaming. This server is easy to use.
KooRaRoo Media, a commercial DLNA media server and organizer for Windows. Includes on-the-fly transcoding, per-file and per-folder parental controls, powerful organizing features with dynamic playlists, Internet radio streaming, "Play To" functionality and remote device control, burned-in and external subtitles, extensive format support including RAW photo formats. Streams all files to all devices.
MediaMonkey, a free media player/tagger/editor with an UPnP/DLNA client and server for Microsoft Windows
Mezzmo, a commercial software package. Mezzmo streams music, movies, photos and subtitles to the UPnP and DLNA-enabled devices. It automatically finds and organizes music, movies and photos, imports multimedia files from iPad, iPhone, iPod, Audio CDs, iTunes, Windows Media Player and WinAmp. DLNA server supports all popular media file formats with real time transcoding to meet the device specifications.
PlayOn, a commercial UPnP/DLNA media server for Windows, includes a transcoder for streaming web video.
TVble, a cloud connected (Rotten tomatoes/TMDB etc.), Torrent streaming, DLNA enabled media server. Allows single file or playlist downloads.
Windows Media Connect from Microsoft, a free UPnP AV MediaServer and control point (server and client) for Microsoft Windows
WMC version 2.0 can be installed for usage with Windows Media Player 10 for Windows XP
WMC version 3.0 can be installed for usage with Windows Media Player 11 for Windows XP
WMC version 4.0 comes pre-installed on Windows Vista with its Windows Media Player 11
WMC can also refer to Windows Media Center. From the Windows Media Center entry in Wikipedia: In May 2015, Microsoft announced that Windows Media Center would be discontinued on Windows 10, and that it would be uninstalled when upgrading; but stated that those upgrading from a version of Windows that included the Media Center application would receive the paid Windows DVD Player app to maintain DVD playback functionality, the main purpose for Media Center's use. This is stated on a Windows 10 FAQ page.
macOS
Sundtek Streamingserver a native macOS TV Server providing DVB, ATSC and ISDB-T via UPnP/DLNA, it also supports streaming media files (it only supports TV devices from Sundtek).
FireStream by Cyaneous, Inc., a commercial UPnP/DLNA media server for macOS with advanced transcoding capabilities, per-device profiles and native Mac media organization.
ArkMS by Arkuda Digital, a full-featured UPnP/DLNA media server for macOS to stream video, music and pictures to UPnP/DLNA/Samsung Link compatible devices from Mac.
Hardware
ASUS DSL-N55U ADSL Modem Router, supports USB drive media sharing. (Dual Band WIFI, 10/100/1000 Mbit/s)
AVM FRITZ!Box, the newer revisions of these residential gateway devices come with a UPnP/DLNA compliant media server
Billion 7800xxx series modem-routers come with a built in uPnP/DLNA compliant media server
Buffalo WBMR-HP-G300H ADSL2+ Modem Router, supports USB drive media sharing, possible to install OpenWrt/DD-WRT. Fast NAS sharing too. (Dual Band WIFI, 10/100/1000 Mbit/s)
D-Link DNS-323 2-Bay Network Attached Storage Enclosure.
D-Link DNS-325 ShareCenter 2-Bay Network Attached Storage Enclosure.
Linksys WRT610N gigabit Wifi-N router supports UPnP with a USB hard drive, as a Storage feature
MELCO N1 UPnP ripping server
Naim NS01 ripping UPnP server and player
Naim NS02 ripping UPnP server and player
Naim NS03 ripping UPnP server and player
Naim HDX ripping UPnP server and player
Naim UnitiServe ripping UPnP server
Naim Uniti Core - ripping UPnP server
Naim Uniti Atom - All in one player with UPnP serving capability
Naim Uniti Star - All in one player with ripping and UPnP serving capability
Naim Uniti Nova - All in one player with UPnP serving capability
Netgear ReadyNAS Includes ReadyDLNA (branded version of miniDLNA) on all ReadyNAS products and some routers as the ReadyShare USB feature.
Noxon iRadio Series - UPnP player
PCEngine Alix and APU UPnP Media server based on ReadyMedia with EasyMPD
Raspberry Pi 2 and 3 UPnP Media server based on ReadyMedia with EasyMPD
SFR Neuf Cegetel NeufBox 5 (Gigabit LAN) and NeufBox 4 (10/100 Mbit/s with firmware 2.0.8) and USB drive key.
All technicolor DSL/GPON Gateways from the TG712 upwards have an embedded DLNA certified Server (sharing content of a USB attached HDD).
All Synology NAS are DLNA mediaserver and contain a webbased DLNA player which is also available as App for Android or iOS.
Several Panasonic TV and Panasonic DVB-Receivers/Recorders are able to activate via menu the DLNA Server
See also
List of NAS manufacturers – there are many uncatalogued NAS devices with UPNP.
UPnP AV clients
See also Digital Living Network Alliance#DLNA-certified software.
A UPnP client, also called a control point, functions as a digital audio/video remote control. Control points automatically detect UPnP servers on the network to browse content directories and request the transfer or streaming of media. A UPnP media renderer performs the actual audio or video rendering. Control points and media renderers most commonly run on separate devices, the control point being for example a tablet, and the renderer a television or a networked audio computer connected to an audio receiver. Some control points integrate a media renderer and may function as a complete music playing application.
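A minimal sketch of the discovery step a control point performs, using the standard SSDP multicast address, port, and MediaServer search target defined by UPnP; response parsing and error handling are simplified:

```python
# Minimal sketch of UPnP/SSDP discovery as performed by a control point:
# send an M-SEARCH request to the SSDP multicast group and collect responses
# from MediaServer devices. Response parsing and error handling are simplified.

import socket

SSDP_ADDR, SSDP_PORT = "239.255.255.250", 1900
SEARCH_TARGET = "urn:schemas-upnp-org:device:MediaServer:1"

def discover(timeout: float = 2.0) -> list:
    message = "\r\n".join([
        "M-SEARCH * HTTP/1.1",
        f"HOST: {SSDP_ADDR}:{SSDP_PORT}",
        'MAN: "ssdp:discover"',
        "MX: 2",
        f"ST: {SEARCH_TARGET}",
        "", ""
    ])
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(message.encode("ascii"), (SSDP_ADDR, SSDP_PORT))
    servers = []
    try:
        while True:
            data, addr = sock.recvfrom(65507)
            # each response carries a LOCATION header pointing at the device description XML
            for line in data.decode(errors="replace").splitlines():
                if line.lower().startswith("location:"):
                    servers.append((addr[0], line.split(":", 1)[1].strip()))
    except socket.timeout:
        pass
    finally:
        sock.close()
    return servers

print(discover())
```

The control point would then fetch each LOCATION URL to read the device description and find the ContentDirectory service it can browse.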
UPnP control points and player software
Cross-platform
Audionet RCP is an UPnP control point available for Windows and macOS
Banshee, an open source (MIT) media player with UPnP-client support since version 2.4
Kinsky is an open source UPnP control point for iPod/iPhone, iPad, Windows, macOS, Linux, Android and PocketPC.
Kodi (formerly XBMC), a cross-platform open source software media player/media center for Apple TV, Linux, macOS, Windows, Android and the custom XBMCbuntu distribution.
Neutron Music Player, a cross platform UPnP/DLNA client which is able to read music files from UPnP/DLNA Media Server or send processed audio to UPnP/DLNA Media Renderer as an endless stream. Available for Android, iOS, BlackBerry 10 & PlayBook platforms.
UPPlay, a desktop UPnP audio Control Point for Linux/Unix and MS Windows, is a light QT-based Control interface. It is free, open-source, and licensed under the GPL.
Plex, a cross-platform and open source (GPL) software media player and a closed source media server and entertainment hub, available for macOS, Microsoft Windows, Linux, as well as mobile clients for iOS (including Apple TV (2nd generation) onwards), Android, and Windows Phone. The desktop version of the media player is free while the mobile version is chargeable.
PlugPlayer is a cross-platform UPnP client, Media Renderer, Media Server and control point available for iPod/iPhone, iPad, Android, Google TV and macOS. In addition to UPnP servers, PlugPlayer can also utilize some cloud-based media services such as MP3tunes and CloudUPnP.
VLC media player, a free, open-source and cross-platform media player that has a built-in UPnP-client that lets the user access the contents listed from an UPnP Media server. Though a very complete media player in itself, it does not provide any UPnP Control Point capabilities, nor can the player be controlled as a UPnP compliant Media Renderer. (For Windows, macOS, Linux, iOS & Android.)
eezUPnP, a free software to play content from a media server on a client, has a built-in UPnP-client for music. (For Windows & Linux)
MusPnP, an open source control point (For Windows, Linux, & macOS)
Android
Audionet aMM is a UPNP/DLNA Controller
BubbleUPnP is a DLNA Controller
Bs.player is a free UPnP renderer/player
Flipps (formerly iMediaShare) is a free DLNA compliant Digital Media Controller, Server and Renderer
Gizmoot is a free UPnP AV Control Point and Renderer
MediaSteersman is a free UPnP control point for Android tablets with included UPnP renderer and server functionality
Mezzmo is a UPNP/DLNA server, renderer and controller. There is a free trial version and a paid licensed version.
Na Remote for UPNP/DLNA Android UPNP/DLNA controller
Network Audio Remote is a free DLNA Controller
Onkyo Remote is a free UPnP renderer/player/server/controller
Pixel Media Server is a free DLNA compliant Digital Media Server on Android platform. Has external subtitles support
Pixel Media Controller is a free DLNA compliant Digital Media Controller on Android platform to control DLNA certified/compliant Digital Media Server, Digital Media Render and Digital Media Printer on Android. Can browse contents from the Media Server and playback media at a DLNA certified TV or renderer devices, and print at DLNA certified Printer
Skifta (discontinued) - a DLNA Certified software app
Toaster Cast Android UPNP/DLNA Server, Controller and Renderer
UPnP-AV Control Point is a free UPnP control point and renderer
UPnPlay is a free UPnP renderer/player
UPnP Monkey is a multi-room control point and DLNA media server which offers the opportunity to stream media from a smartphone or a network hard drive to a media player
VidOn Player is a free DLNA compliant Digital Media Controller, Server and Renderer
YAACC is an open source UPNP/DLNA server, renderer and controller
ZappoTV is a free DLNA compliant Digital Media Controller, Server and Renderer
Archos Video Player has DLNA rendering/player capabilities
PlainUPnP (formerly DroidUPnP) is an open source DLNA controller
MediaMonkey for Android is a UPnP Renderer and Controller
mconnect Player is a UPnP/DLNA Controller with free and paid version
BlackBerry
All devices running the BlackBerry 10 operating system include native UPnP Media Server capabilities.
EnefceDMS is a free UPnP media server.
iOS
AllShare TV - DLNA/UPnP Media Server for iOS with features to push and control media using iOS
Audionet iMM UPnP control point
Infuse DLNA/UPnP streaming client for iPhone and iPad
nPlayer DLNA/UPnP Client for iPhone/iPad
VidOn Player (HD) DLNA/UPnP Client for iPhone/iPad
AirPlayer DLNA/UPnP Client for iPhone/iPad
media:connect DLNA server, controller and renderer
PlayerXtreme is a media playback and organizing solution
8player & Lite
GoodPlayer
Flipps (HD), formerly iMediaShare & iMediaShare Lite
ZappoTV (HD) DLNA/UPnP Client for iPhone/iPad
AcePlayer - player with downloading & background play features
Creation 5 - UPnP control point app and client supporting both audio and video from a variety of sources
mconnect Player is a UPnP/DLNA Controller with free and paid version
Glider Music Player supports UPnP, OpenHome & Chromecast audio devices
foobar2000 UPnP/DLNA Client (audio only), true gapless playback, ability to download a track/album from DLNA server for offline listening.
Linux
BRisa UPnP Framework, a free and open-source UPnP framework that allows the development of UPnP devices, as well as provides three implementation reference of UPnP applications: the BRisa Media server, the BRisa Media Renderer and the BRisa Control Point.
djmount, free software to mount as a Linux filesystem the media content of compatible UPnP AV devices.
Gnome Videos (Totem), a free and open-source Media Player part of the GNOME desktop, via the grilo plugin.
upmpdcli, a free and open-source UPnP media renderer front end to MPD, the Music Player Daemon
upplay, a free and open-source basic UPnP audio control point for the Unix Desktop, based on Qt.
GUpnp-tools supplies a free and open-source GUI control point for AV devices, gupnp-av-cp.
GMediaRender, a UPnP media renderer for POSIX-compliant systems, such as Linux or UNIX. It implements the server component that provides UPnP controllers a means to render media content (audio, video and images) from a UPnP media server.
GMRender-Resurrect, resource efficient UPnP/DLNA renderer, optimal for Raspberry Pi, CuBox or a general MediaServer. Fork of GMediaRenderer.
Rhythmbox is an audio player with built-in UPnP/DLNA support, and can act as a UPnP client.
JRiver Media Center, a media player/organizer with a DLNA/UPnP server, controller, and renderer, including conversion.
Microsoft Windows
Windows Media Player, bundled with MS Windows.
foobar2000, an audio player, supports UPnP via a plugin.
WinDVD, is a commercial DVD-Video and video-files playback software for Windows.
Nero MediaHome, a commercial software package containing both a UPnP client and server supporting music and video playback.
MediaMonkey, free media player/tagger/editor with an UPnP/DLNA client and server.
JRiver Media Center, a media player/organizer with a DLNA/UPnP server, controller, and renderer, including conversion.
Wild Media Server, a media player/server with DLNA/UPnP capabilities.
5KPlayer, a mixture of free (MKV) UHD video player, music player, AirPlay & DLNA enabled media streamer and online downloader.
macOS
MediaCloud Mac v2, UPnP Audio/Video Player & Control Point.
Songbook Mac, UPnP Control Point.
OPlayer, media player that has a built-in UPnP-client.
JRiver Media Center, a media player/organizer with a DLNA/UPnP server, controller, and renderer, including conversion.
AirBeamTV, a company that builds screen mirroring apps for Samsung, LG, Panasonic, Sony and Philips TVs based on the UPnP renderer in the TV; the Mac acts as the UPnP server as well as the UPnP control point.
5KPlayer, a mixture of free (MKV) UHD video player, music player, AirPlay & DLNA enabled media streamer and online downloader.
Symbian
Nokia N80 has client and server.
Nokia N82 has client and server, both DLNA certified.
Nokia N95 has client and server, only the server is DLNA certified.
Nokia N78 has client and server, both DLNA certified.
Nokia E72, the first Nokia E-Series device having client and server, both DLNA certified.
Nokia N97 has client and server, both DLNA certified.
Nokia 5630 XpressMusic has client and server.
Sony Ericsson Vivaz has server and is DLNA certified.
Sony Ericsson Vivaz Pro has server and is DLNA certified.
Samsung i8910 has server and is DLNA certified.
Windows Phone
Linada
Smart Player
myMediaHub
Other
GeeXboX, an open-source lightweight media center Live CD, no installation required
Nokia N800 and its native MediaStreamer application. This client can also control other hardware renderers such as the Roku Soundbridge
Nokia N900 through its Maemo 5 Operating System
Moovida (formerly Elisa) is a free, open-source and cross-platform media center solution
PlayStation 3 game-console with OS version 1.8 or later through the Xross Media Bar
Xbox 360 client from Windows Media Connect
OurJukebox client for Amazon Alexa
UPnP player/client hardware
PlayStation 3 Sony games console from home screen
PlayStation Vita Sony handheld games console from home screen
Sony TVs, DVD/Blu-ray Player, Google TV Box and Network Media Players with DLNA certification
Samsung smart TV
Roku Media Player on Roku Players and smart TVs
Panasonic Smart Viera TV
Philips 4000, 5000, 6000, 7000, 8000 and 9000 LCD-LED series with DLNA interface
Xbox One Microsoft games console with DLNA certification (After October 2014 Update)
Xtreamer Digital Media Players
UPnP control point hardware
Philips Streamium range of products
Philips 8000 and 9000 LCD series with DLNA interface
DIRECTV PLUS HD DVR hardware (HR20/HR21/HR21Pro/HR22/HR23)
UPnP media render hardware
Arcam Solo Neo
Archos Generation 5 All Archos models of the 5th Generation like 605 and 705
Archos TV+ Like the 5th generation models
Audionet DNA UPnP media player, DAC and integrated amplifier
Audionet DNC UPnP media player and DAC
Audionet DNP UPnP media player, DAC, preamplifier
Blaupunkt IR-40+ FM/UKW/DAB+, Internet Radio, UPnP media player
Boulder 1021 High Definition Multiple Format Audio Player
Cambridge Audio CXN Network Player
Cambridge Audio Stream Magic 6
Cambridge Audio Stream Magic 6 V2
Cambridge Audio Azur 851N
Cambridge Audio Minx Xi
Brite-View
Denon ASD-3N
Denon ASD-3W
Denon ASD-51N
Denon ASD-51W
Denon CHR-F103
Denon S-32
Denon S-52
Denon S-302
Denon AVP-A1HDCI(A)
Denon AVR-5308CI(A)
Denon AVR-4810CI
Denon AVR-4306
Denon AVR-4308CI
Denon AVR-4310CI
Denon AVR-3310CI
Denon AVR-990*
Denon DNP-730AE (audio)
Denon DNP-F109 (audio)
Grundig Ovation 2i CDS 9000 WEB
HP MediaSmart LCD High Definition Televisions
HTC Media Link
HTC Media Link HD
Kathrein UFC960 - HDTV Cable Receiver
Kodak EasyShare digital picture frames including at least the "W" series
LG televisions and Blu-ray players that include the "Netcast" feature
LG DP-1, DP-1W Digital Media Player
Linn Klimax Exakt DSM
Linn Klimax DSM
Linn Klimax DS
Linn Akurate DSM
Linn Akurate DS
Linn Akurate Renew DS
Linn Majik DSM
Linn Majik DS
Linn Majik DS-I
Linn Sekrit DSM
Linn Sekrit DS-I
Linn Sneaky DSM
Linn Sneaky Music DS
Linn Kiko DSM
Linn Renew DS
LOEWE. Art (SL210 / SL190 - 2013-2014)
LOEWE. Art UHD (SL3xx 2014-20??)
LOEWE. Connect ID (SL221 / SL212 - 2013-2014)
LOEWE. Connect UHD (SL3xx 2014-20??)
LOEWE. Individual Slim Frame (SL220 2012-2014)
LOEWE. Reference ID (SL220 2012-2014)
LOEWE. Reference UHD (SL3xx 2015-20??)
Moon by Simaudio 180D MiND - Streamer supporting 192/24 streams and most major formats.
Moon by Simaudio Neo 280D MiND - DAC with onboard streamer supporting 192/24 streams and most major formats including DSD.
Moon by Simaudio Neo 380D MiND - DAC with onboard streamer supporting 192/24 streams and most major formats including DSD.
Moon by Simaudio Evolution 780D - DAC with onboard streamer supporting 192/24 streams and most major formats including DSD.
Netgear MP101
Muvid ir815
Musica Pristina Musica Pristina Transducer - Digital Music Player and DAC with tube output supporting 192/24 streams and most major formats.
Musical Fidelity M1 Clic - High-End Digital Music Player and DAC that supports 192/24 streams and several formats.
Med100X3D media player 3D - digital video and music player that supports 192/24
Med600X3D media player3D - digital video and music player that supports 192/24
Naim ND5 XS - Streamingplayer
Naim NDX - Streamingplayer
Naim NDS - Streamingplayer
Naim UnitiQute - All-in-one player
Naim UnitiLite - All-in-one player
Naim NaimUniti2 - All-in-one player
Naim SuperUniti - All-in-one player
Naim Uniti Atom - All-in-one player
Naim Uniti Star - All-in-one player
Naim Uniti Nova - All-in-one player
Naim ND5XS2 - Streamer
Naim NDX2 - Streamer
Naim ND555 - Streamer
Olive OPUS music servers and players, Olive MELODY players
Onkyo HT-RC180
Onkyo PR-SC5507
Onkyo TX-NR609
Onkyo TX-NR807
Onkyo TX-NR905
Onkyo TX-NR1007
Onkyo TX-NR3007
Onkyo TX-NR5007
Onkyo TX-NR626
Onkyo TX-NR808
Oppo BDP-105
Oppo BDP-103
Panasonic Plasma Viera G30-Series
Panasonic Plasma Viera NeoPDP G20-Series
Panasonic Plasma Viera NeoPDP G15-Series
Panasonic Plasma Viera NeoPDP V10-Series
Panasonic Plasma Viera NeoPDP VT50-Series
Panasonic Plasma Viera NeoPDP GT50-Series
Panasonic Viera ST60 Series (2013, Plasma, DLNA Client)
Panasonic Viera VT60 Series (2013, Plasma, DLNA Client)
Panasonic Viera ZT60 Series (2013, Plasma, DLNA Client)
Panasonic Plasma Viera NeoPDP Z1-Series
PCEngines Alix (2d2, 3d2) and APU (APU1 and APU2) with EasyMPD
Philips 4000, 5000, 6000, 7000, 8000 and 9000 LCD-LED series with DLNA interface
PEAQ MUNET Link PMN400-B
Pioneer KRP-500A
Pioneer KRP-500ABG
Pioneer KRP-500AW
Pioneer KRP-600A
Pioneer N-30
Pioneer N-50
Pioneer PDP-LX5090H
Pioneer PDP-LX6090H
Pioneer PDX-Z9
Pioneer SC-LX71
Pioneer SC-LX72
Pioneer SC-LX81
Pioneer SC-LX82
Pioneer SC-LX83
Pioneer SC-LX85
Pioneer SC-LX86
Pioneer SC-LX87
Pioneer SC-LX90
Plato Class A
Plato Class B
Plato Pre
Plato Lite
PS Audio PerfectWave DAC in combination with Bridge
Pure Pure One Flow : FM, DAB, DAB+, UPnP/DLNA, Internet Radio
Raumfeld Base
Raumfeld Connector²
Raumfeld Connector
Raumfeld Controller
Raumfeld One
Raumfeld Speaker L
Raumfeld Speaker M
Raumfeld Speaker S
Roberts Radio Stream 83i Radio
Roberts Radio Revival iStream 3 Radio
Roku SoundBridge and SoundBridge Radio (also sold under the Pinnacle Systems brand)
Samsung UE40C8000
Samsung UE46B7090
Samsung UE60ES6300
Sangean WFR-20 (also supports Windows Shares)
Sonos all Sonos players (PLAY:1, PLAY:3, PLAY:5, Connect, Connect:AMP, PLAYBAR)
Sony BD-S580
Sony Network Speaker SA-NS310,SA-NS410, SA-NS510
Sony Receiver STR-DN840, STR-DN1030, STR-DN1040
Sony CMT-MX700NI Micro HIFI Component System
TerraTec Noxon iRadio
TerraTec Noxon M520
WD TV Live Western Digital
Xbox game console with XBMC4Xbox (originally XBMC), a free and open-source multimedia player.
Xbox 360 game console via the Xbox 360 Dashboard
Yamaha RX-V3900
Yamaha RX-Z7
Yamaha YMC-700
Yamaha RX-V2065
Yamaha RX-Vx067 series (i.e. 1067, 2067, 3067)
Yamaha RX-Vx73 series, beginning with RV-V473
Yamaha RX-Vx75 series, beginning with RV-V475
Yamaha Aventage Series (RX-A1000/A2000/A3000)
Ziova ClearStream Series (i.e. CS510)
See also
Comparison of set-top boxes
Comparison of UPnP AV media servers
Digital Living Network Alliance
Network-attached storage
Universal Plug and Play
References
External links
upnp-database.info - community-based effort to build a database of UPnP devices' capabilities
CH3SNAS & DNS-323 Hacking - how to use the widely deployed CH3SNAS and DNS-323 NAS devices as a download/media server
Digital media
Mobile content
Servers (computing)
Media servers
Clients (computing)
|
5278374
|
https://en.wikipedia.org/wiki/SQLFilter
|
SQLFilter
|
SQLFilter is a plugin for OmniPeek that indexes packets and trace files into an SQLite database. The packets can then be searched using SQL queries. The matching packets are loaded directly into OmniPeek and analyzed. The packet database can also be used to build multi-tier data mining and network forensics systems.
As more companies save large quantities of network traffic to disk, tools like the WildPackets SQLFilter make it practical to search through packet data efficiently. For network troubleshooters, this greatly simplifies the job of finding packets: the SQLFilter allows users to search for packets across thousands of trace files and loads the resulting packets directly into OmniPeek or EtherPeek. This removes many of the steps usually involved in the process and shortens the time needed to locate and fix a problem.
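The plugin's actual database schema is not documented here. As a rough illustration of the general approach, the sketch below queries a hypothetical SQLite packet index using Python's built-in sqlite3 module; the table and column names (packets, trace_file, packet_number, ts, src_ip, dst_ip, dst_port) are assumptions for illustration, not SQLFilter's real schema.

```python
import sqlite3

# Hypothetical schema: a "packets" table indexing per-packet metadata for each trace file.
conn = sqlite3.connect("packet_index.db")
rows = conn.execute(
    """
    SELECT trace_file, packet_number, ts, src_ip, dst_ip
    FROM packets
    WHERE dst_port = ? AND ts BETWEEN ? AND ?
    ORDER BY ts
    """,
    (443, "2023-01-01 00:00:00", "2023-01-02 00:00:00"),
).fetchall()

for trace_file, packet_number, ts, src_ip, dst_ip in rows:
    # Each hit identifies a packet that could then be reloaded into the analyzer.
    print(f"{trace_file}#{packet_number} {ts} {src_ip} -> {dst_ip}")

conn.close()
```

In a multi-tier setup of the kind described above, queries like this would be the first stage, narrowing thousands of trace files down to the handful of packets worth loading into the analyzer.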
External links
discussion of the SQLFilter Packet Data Mining and Network Forensics.
Network analyzers
Packets (information technology)
|
37434110
|
https://en.wikipedia.org/wiki/Illinois%20State%20University%20College%20of%20Applied%20Science%20and%20Technology
|
Illinois State University College of Applied Science and Technology
|
The Illinois State University College of Applied Science and Technology is one of the academic colleges of Illinois State University, a public university in Normal, Illinois. The college comprises departments and schools covering agriculture, criminal justice sciences, family and consumer sciences, health sciences, information technology, kinesiology and recreation, military science, and technology.
Department of Agriculture
The Department of Agriculture at Illinois State prepares students for careers in the food and agriculture industry. There are ten different undergraduate programs offered within the department, as well as a graduate program offering a Master of Science degree with several sequences to choose from.
Illinois State University Farm
Illinois State maintains a university farm to support teaching, research, and outreach activities of the Department of Agriculture. Illinois State University Farm is located near Lexington, Illinois, approximately 18 miles northeast of Normal. Enterprises at the University Farm include corn, soybeans, alfalfa, swine, beef, and sheep. The University Farm expands the Department of Agriculture's ability to do research.
Undergraduate programs
Agribusiness
Agriculture Communications and Leadership
Agriculture Education
Agronomy Management
Animal Industry Management
Animal Science
Crop and Soil Science
Food Industry Management
Horticulture and Landscape Management
Pre-veterinary Medicine
Graduate programs
The Department offers a Master of Science degree in agriculture with the choice of three sequences:
Agribusiness
Agriscience
Agriculture Education and Leadership
Department of Criminal Justice Sciences
The Department of Criminal Justice Sciences at Illinois State provides students with a systems-oriented introduction to the field of criminal justice, covering its principles as well as the behavioral and social issues involved. The program focuses on building knowledge of policing, courts, and corrections from a social science perspective. The program offers both an undergraduate and a graduate degree in criminal justice.
Undergraduate program
Criminal Justice Sciences
Graduate program
Criminal Justice Sciences
Department of Family and Consumer Sciences
The Department of Family and Consumer Sciences at Illinois State applies research on human environments and systems so that students are able to enrich human lives and provide leadership. The department offers bachelor's and master's degrees in several areas to prepare students for their future careers.
Accreditation
Illinois State is one of two programs in Illinois, and one of 59 programs nationwide to be accredited by the American Association of Family and Consumer Sciences. The program is also accredited with the American Dietetic Association, the world's largest organization of food and nutrition professionals, and the Council for Interior Design.
Undergraduate programs
Apparel, Merchandising, and Design
Food, Nutrition, and Dietetics
Human Development and Family Resources
Interior and Environmental Design
Teacher Education
Graduate programs
Apparel, Merchandising, and Design
Food, Nutrition, and Dietetics
Human Development and Family Resources
Interior and Environmental Design
Department of Health Sciences
With several areas to choose from, the Department of Health Sciences strives to educate students in a manner that turns them into successful professionals in their field. The Department is home to undergraduate programs in various health degrees specialized for students to find their niche.
Undergraduate programs
Environmental Health
Health Education
Health Information Management
Medical Laboratory Science
Safety
School of Information Technology
The School of Information Technology focuses on preparing students for the ever-changing world of IT. The School offers undergraduate and graduate programs for the education of computing and telecommunications professionals.
Accreditation
Both the Information Systems and Computer Science programs are accredited by the Computing Accreditation Commission of the Accreditation Board for Engineering and Technology (ABET).
Undergraduate programs
Computer Science
Information Systems
Network and Telecommunications Management
Graduate programs
The School offers two options for graduate study.
Master of Science in Information Systems
Graduate Certificate with a choice of sequence:
Enterprise Computing Systems
Information Assurance and Security
Internet Application Development
Systems Analyst
Telecommunication Management
School of Kinesiology and Recreation
The School of Kinesiology and Recreation provides nationally acclaimed programs that promote physically active lifestyles and a healthy use of sport and leisure through exemplary teaching, scholarship and service. The School offers various specialized undergraduate and graduate programs that enable students to choose from a diverse range of prospective careers.
Undergraduate programs
Athletic Training
Exercise Science
Physical Education Teacher Education
Recreation and Park Administration
Graduate programs
Athletic Training
Biomechanics
Exercise Physiology
Physical Education Teacher Education
Psychology of Sport and Physical Activity
Recreation Administration
Sport Management
Military Science
Through classes and field training, the Army ROTC program provides students with the tools to become an Army Officer while pursuing a college degree. The program offers a unique sequence that is designed to be a four-year program of study; however, the program can be completed in as little as two years.
Undergraduate program
Military Science
Department of Technology
The Department of Technology is recognized as one of the premier technology programs in the nation at both the undergraduate and graduate levels. This program is designed to prepare individuals for leadership positions as management-oriented technology professionals in six undergraduate programs and three areas of graduate study.
Accreditation
The Association of Technology, Management, and Applied Engineering, which sets industry standards for academic accreditation, certification and professional development, accredits Illinois State's Department of Technology. The Department of Technology's Construction Management program is accredited by the American Council for Construction Education, which advocates for quality construction education programs. The National Council for Accreditation of Teacher Education accredits the Technology and Engineering Education sequence at Illinois State.
Undergraduate programs
Computer Systems
Construction Management
Engineering Technology
Graphic Communications
Renewable Energy
Technology and Engineering Education
Graduate programs
The Department of Technology offers two types of graduate programs:
Master of Science with a choice of three sequences:
Project Management
Technology Education
Training and Development
Graduate Certificate:
Project Management
Training and Development
References
Illinois State University
|
1655001
|
https://en.wikipedia.org/wiki/List%20of%20video%20editing%20software
|
List of video editing software
|
The following is a list of video editing software.
The criterion for inclusion in this list is the ability to perform non-linear video editing. Most modern transcoding software supports transcoding a portion of a video clip, which would count as cropping and trimming. However, items in this list must meet at least one of the following conditions:
Can perform other non-linear video editing functions such as montage or compositing
Can do the trimming or cropping without transcoding
Free (libre) or open-source
The software listed in this section is either free software or open source, and may or may not be commercial.
Active and stable
Avidemux (Linux, macOS, Windows)
LosslessCut (Linux, macOS, Windows)
Blender VSE (Linux, FreeBSD, macOS, Windows)
Cinelerra (Linux, FreeBSD)
FFmpeg (Linux, macOS, Windows) – CLI only; no visual feedback
Flowblade (Linux)
Kdenlive (Linux, FreeBSD, macOS, Windows)
LiVES (BSD, IRIX, Linux, Solaris)
Olive (Linux, macOS, Windows) - currently in alpha
OpenShot (Linux, FreeBSD, macOS, Windows)
Pitivi (Linux, FreeBSD)
Shotcut (Linux, FreeBSD, macOS, Windows)
Inactive
Kino (Linux, FreeBSD)
VirtualDub (Windows)
VirtualDubMod (Windows)
VideoLAN Movie Creator (VLMC) (Linux, macOS, Windows)
Proprietary (non-commercial)
The software listed in this section is proprietary, and freeware or freemium.
Active
ActivePresenter (Windows) – Also screencast software
DaVinci Resolve (macOS, Windows, Linux)
Freemake Video Converter (Windows)
iMovie (iOS, macOS)
ivsEdits (Windows)
Lightworks (Windows, Linux, macOS)
Microsoft Photos (Windows)
showbox.com (Windows, macOS)
VideoPad Home Edition (Windows, macOS, iPad, Android)
VSDC Free Video Editor (Windows)
WeVideo (Web app)
Discontinued
Adobe Premiere Express (Web app)
Debugmode Wax (Windows)
Pixorial (Web app)
VideoThang (Windows)
Windows Movie Maker (Windows)
Proprietary (commercial)
The software listed in this section is proprietary and commercial.
Active
Adobe After Effects (macOS, Windows)
Adobe Premiere Elements (macOS, Windows)
Adobe Premiere Pro (macOS, Windows)
Adobe Presenter Video Express (macOS, Windows) – Also screencast software
Autodesk Flame
Avid Media Composer (Windows, macOS)
AVS Video Editor (Windows)
Blackbird (macOS, Windows, Linux)
Camtasia (Windows, macOS) – Also screencast software
Corel VideoStudio (Windows)
Cyberlink PowerDirector (Windows)
DaVinci Resolve Studio (macOS, Windows, Linux)
Edius (Windows)
Final Cut Pro X (macOS)
Kaltura (Web app)
Magix Movie Edit Pro (Windows)
Magix Vegas Pro (Windows) - previously Sony Vegas Pro
Media 100 Suite (macOS)
muvee Reveal (Windows, macOS)
Nacsport Video Analysis Software (Windows)
Pinnacle Studio (Windows)
Roxio Creator (Windows)
ScreenFlow (macOS)
Video Toaster (Windows, hardware suite)
VideoPad Masters Edition (Windows, macOS, iPad, Android)
Xedio (Windows)
Discontinued
Xpress Pro (Windows, OS X)
Pinnacle Videospin (Windows)
Final Cut Express (OS X)
Serif MoviePlus (Windows)
MPEG Video Wizard DVD (Windows)
ArcSoft ShowBiz (Windows)
Avid DS (Windows)
Clesh (Java on OS X, Windows, Linux)
See also
Comparison of video editing software
Comparison of video converters
Photo slideshow software
Video editing
References
Video editors
|
2145345
|
https://en.wikipedia.org/wiki/Ganglia%20%28software%29
|
Ganglia (software)
|
Ganglia is a scalable, distributed monitoring tool for high-performance computing systems, clusters and networks. The software is used to view either live or recorded statistics covering metrics such as CPU load averages or network utilization for many nodes.
Ganglia software is bundled with enterprise-level Linux distributions such as Red Hat Enterprise Linux (RHEL) or the CentOS repackaging of the same. Ganglia grew out of requirements for monitoring systems at the University of California, Berkeley, but now sees use by commercial and educational organisations such as Cray, MIT, NASA and Twitter.
Ganglia
It is based on a hierarchical design targeted at federations of clusters. It relies on a multicast-based listen/announce protocol to monitor state within clusters and uses a tree of point-to-point connections amongst representative cluster nodes to federate clusters and aggregate their state. It leverages widely used technologies such as XML for data representation, XDR for compact, portable data transport, and RRDtool for data storage and visualization. It uses carefully engineered data structures and algorithms to achieve very low per-node overheads and high concurrency. The implementation is robust, has been ported to an extensive set of operating systems and processor architectures, and is currently in use on over 500 clusters around the world. It has been used to link clusters across university campuses and around the world and can scale to handle clusters with 2000 nodes.
The ganglia system comprises two unique daemons, a PHP-based web front-end, and a few other small utility programs.
Ganglia Monitoring Daemon (gmond)
Gmond is a multi-threaded daemon which runs on each cluster node to be monitored. Installation does not require having a common NFS filesystem or a database back-end, installing special accounts or maintaining configuration files.
Gmond has four main responsibilities:
Monitor changes in host state.
Announce relevant changes.
Listen to the state of all other ganglia nodes via a unicast or multicast channel.
Answer requests for an XML description of the cluster state.
Each gmond transmits information in two different ways:
Unicasting or Multicasting host state in external data representation (XDR) format using UDP messages.
Sending XML over a TCP connection (illustrated in the sketch below).
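As a rough illustration of the second mechanism, the following sketch opens a TCP connection to a gmond instance and reads the XML dump of the cluster state until the daemon closes the connection. The host name is a placeholder, and 8649 is assumed here to be gmond's conventional default TCP port; both should be adjusted for a real deployment.

```python
import socket

def read_gmond_xml(host: str = "localhost", port: int = 8649) -> str:
    """Read the full XML cluster-state dump that gmond sends over a TCP connection."""
    chunks = []
    with socket.create_connection((host, port), timeout=10) as sock:
        while True:
            data = sock.recv(65536)
            if not data:          # gmond closes the connection after sending the tree
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

if __name__ == "__main__":
    xml_tree = read_gmond_xml()
    print(xml_tree[:400])
```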
Ganglia Meta Daemon (gmetad)
Federation in Ganglia is achieved using a tree of point-to-point connections amongst representative cluster nodes to aggregate the state of multiple clusters. At each node in the tree, a Ganglia Meta Daemon (gmetad) periodically polls a collection of child data sources, parses the collected XML, saves all numeric, volatile metrics to round-robin databases and exports the aggregated XML over a TCP socket to clients. Data sources may be either gmond daemons, representing specific clusters, or other gmetad daemons, representing sets of clusters. Data sources use source IP addresses for access control and can be specified using multiple IP addresses for failover. The latter capability is natural for aggregating data from clusters since each gmond daemon contains the entire state of its cluster.
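Continuing the previous sketch, the exported XML can be walked with a standard XML parser to pull out individual metrics. The CLUSTER, HOST and METRIC element names and their NAME/VAL attributes follow the Ganglia XML format, though the exact attribute set varies between versions; the metric name "load_one" is only an example.

```python
import xml.etree.ElementTree as ET

def extract_metric(xml_text: str, metric_name: str) -> dict:
    """Map host name -> value for one metric (e.g. "load_one") in a Ganglia XML dump."""
    root = ET.fromstring(xml_text)
    values = {}
    for cluster in root.iter("CLUSTER"):
        for host in cluster.iter("HOST"):
            for metric in host.iter("METRIC"):
                if metric.get("NAME") == metric_name:
                    values[host.get("NAME")] = metric.get("VAL")
    return values

# Example usage with the dump obtained in the previous sketch:
# print(extract_metric(xml_tree, "load_one"))
```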
Ganglia PHP Web Front-end
The Ganglia web front-end provides a view of the gathered information via real-time dynamic web pages. Most importantly, it displays Ganglia data in a meaningful way for system administrators and computer users. Although the web front-end to ganglia started as a simple HTML view of the XML tree, it has evolved into a system that keeps a colorful history of all collected data.
The Ganglia web front-end caters to system administrators and users. For example, one can view the CPU utilization over the past hour, day, week, month, or year. The web front-end shows similar graphs for memory usage, disk usage, network statistics, number of running processes, and all other Ganglia metrics.
The web front-end depends on the existence of the gmetad which provides it with data from several Ganglia sources. Specifically, the web front-end will open the local port 8651 (by default) and expects to receive a Ganglia XML tree. The web pages themselves are highly dynamic; any change to the Ganglia data appears immediately on the site. This behavior leads to a very responsive site, but requires that the full XML tree be parsed on every page access. Therefore, the Ganglia web front-end should run on a fairly powerful, dedicated machine if it presents a large amount of data.
The Ganglia web front-end is written in PHP, and uses graphs generated by gmetad to display history information. It has been tested on many flavours of Unix (primarily Linux) with the Apache webserver and the PHP5 module.
References
External links
Wikimedia Ganglia instance
Free network management software
Free software programmed in C
Free software programmed in Perl
Free software programmed in Python
Internet Protocol based network software
Network management
Parallel computing
System administration
System monitors
Software using the BSD license
|
44471760
|
https://en.wikipedia.org/wiki/Neck%20Amphora%20by%20Exekias%20%28Berlin%20F%201720%29
|
Neck Amphora by Exekias (Berlin F 1720)
|
The Neck Amphora by Exekias is a neck amphora in the black figure style by the Attic vase painter and potter Exekias. It is found in the possession of the Antikensammlung Berlin under the inventory number F 1720 and is on display in the Altes Museum. It depicts Herakles' battle with the Nemean lion on one side and the sons of Theseus on the other (their earliest appearance in Athenian art). The amphora could only be restored for the first time almost a hundred and fifty years after its original discovery due to negligence and political difficulties.
Description
The clay neck amphora is 40.5 cm high. It is dated to around 545/0 BC and is executed in the black figure style, which was still common at the time. The painter Exekias was a master of this style, which he brought to its peak. He added his own innovations and modifications, some of which also appear in this amphora. The vase is fragmentary, but large portions survive. Conspicuous absences include the loss of one of the two handles and a pair of sherds from the body of the vase. The surviving pieces are in good condition.
Both sides of the amphora's belly are framed above and below by chains of painted, stylised lotus flowers and buds. The area around the handles is decorated with volutes and palmettes. The scenes on each side are of similar size and are not divided into a main image on the front and a subordinate image on the reverse, as in later times. On the edge of the mouth there is a signature of Exekias, the most well-known Attic vase painter and potter, which reads ΕΞΣΕΚΙΑΣ ΕΓΡΑΦΣΕ ΚΑ ΠΟΕΣΕ ΜΕ, "Exekias painted and made me."
On one side the battle between Herakles and the Nemean lion is depicted – one of the twelve labours which the son of Zeus had to perform in the service of King Eurystheus. Herakles strangles the lion, whose skin could not be wounded, while his brother Iolaos and the goddess Athena look on, serving to frame the scene. The naked Herakles has his left arm on the neck of the lion and holds the paw of the lion in his right hand. The lion is attempting to free itself from the hero's grip. Many details are indicated in red paint, like Iolaos' beard, Athena's shield and details of the lion's mane. On the other side of the vase is a depiction of the two sons of Theseus, Akamas and Demophon with their horses, which are named by inscriptions (just like the sons) as Kalliphora and Phalios. Between the two horses, which are led to the right by their masters, is a vertical Kalos inscription, reading ΟΝΕΤΟΡΙΔΕΣ ΚΑΛΟΣ, "Onetorides is gorgeous". Both men carry a large round shield on their backs and two spears over their shoulders. The shields are detailed in white paint. Their helmets have high plumes painted in red. The sons of Theseus are presumably departing to fight in the Trojan War.
The scenes can be understood as combining two Greek regions which frequently interacted with each other: Herakles is the hero of the Peloponnese, while Theseus' sons represent the Athenians' conception of themselves. This vase marks the first appearance of the sons of Theseus in Attic art. The scene from the outbreak of the Trojan War stresses increasing Athenian self-importance. The participation of their heroes in the legendary Trojan War symbolically placed Athens on the same level as the traditionally important city-states of the Peloponnese, including the leading power of the time, Sparta. In subsequent Athenian art, the sons of Theseus were symbols of the new self-consciousness of the Athenian aristocracy.
Discovery and restoration
In addition to the art historical significance of the vase, the fate of the amphora and its individual sherds since its discovery is also of archaeological-historical significance. The vase was found in the Etruscan necropolis of Ponte dell' Abbadia near Vulci. In Athens, vases were produced largely for export to Etruria, where they were often used as grave goods. Thus, several works of Exekias have been found in Etruscan cemeteries. When the amphora was discovered in one of the Etruscan graves at Vulci which had been under excavation from 1828, it was already broken and was probably no longer complete. The sherds that were discovered were not very carefully collected. The reconstruction of the vase from its sherds was, by modern standards, faulty. As was common in the mid-nineteenth century, missing pieces were replaced and repainted to create the appearance of a complete work. After the restoration, the amphora came into the possession of the painter Eduard Magnus. The sale of smaller archaeological discoveries was common at the time, particularly when no other, more expensive and higher valued artworks (statuary or precious metals) could be found. Together with the painter's other pieces (known as the Dorow-Magnus Collection), the amphora soon entered the newly founded Museum at Lustgarten, in 1831. It stayed, with other items of portable art, in the semi-basement of the museum. According to Jakob Andreas Konrad Levezow's 1834 exhibition catalogue, the vase stood on one of the glass tables placed in a prominent position. When the portable art collection was transferred to the Neue Galerie New York, Exekias' Amphora was taken there as well.
In the 1920s, the amphora had to be restored for reasons which are no longer known. In the process, the retouching and additions from the original restoration were largely removed. The additions were now made clearly distinct from the original sherds. Because of the Second World War, the amphora was inventoried as "Berlin F 1720" and stored in box 167 in the Zoobunker. In 1945, the box was taken to the Soviet Union as booty. As part of the return of art to the GDR, the amphora was brought back in 1958 to the Antikensammlung Berlin (unlike many other pieces from box 167), which was by then divided between East and West Berlin. The Exekias neck amphora was one of the few vases which came into the possession of the East Berlin Pergamonmuseum, since the majority of the vases had been kept in storage before the war, were therefore stored in a different location during the war, and ended up in the West Berlin Antikensammlung in Charlottenburg afterwards. The amphora was on display as part of the regular exhibition of the museum.
In the 1970s, the archaeologist Erika Kunze-Götte, during work on a volume of the Corpus Vasorum Antiquorum in Munich, found a two-piece fragment which she suggested belonged to the Exekias neck amphora. As a result, a lively correspondence sprang up between Munich and East Berlin. Photos and line drawings were exchanged and measurements were produced. A silicone cast finally confirmed that the pieces belonged together. The individual sherds had probably been excavated later than the rest of the amphora or had mistakenly not been associated with it. There was a question of whether the museums ought to carry out an exchange or make a loan agreement, eventually settled in favour of the former option. In exchange for the sherds, the Staatliche Antikensammlung in Munich was to receive an ornamental, black-figure and polychrome painted lid from the Pergamonmuseum. Although this was quickly agreed at an academic level, it took a significant time to formalise the agreement, since GDR officials delayed matters for seven years. On 7 January 1988, the lid finally came to Munich in exchange for the sherds.
After the pieces were reunited, the vase had to be restored again in 1990. Firstly they attempted to remove the modern additions and to insert the new fragment. In the process it was discovered that the earlier restoration had miscalculated the size of the gaps – they were too small. As a result, the vase had to be disassembled. This turned out to be a blessing in disguise. For instance, during the process, Priska Schilling uncovered the letter "ο" in the caption reading (Ι)ΟΛΑΟΣ under a modern layer of paint. Where a handle had been reconstructed, the original handle attachment was discovered underneath. Several incised inscriptions were found on the interior sides of sherds, such as Ο ΠΑΙΣ ΚΑΛΟΣ, "The boy is gorgeous" and ΚΑΛΟΣ, "He is gorgeous." It is suggested that this was a hoax by an earlier restorer, possibly Domenico Campanari in the first half of the nineteenth century. The restoration was completed in 1991.
Today the amphora is on display in the Altes Museum in the Lustgarten, along with the Tombstones of Exekias and an amphora from the outer circle of Group E, which was probably made in Exekias' workshop. This contemporary amphora also depicts Herakles fighting with the Nemean lion.
Bibliography
Adolf Furtwängler. Beschreibung der Vasensammlung im Antiquarium, Berlin 1885, No. 1720.
John D. Beazley. Attic Black-figure Vase-painters. Oxford 1956, pp. 143–144 No. 1.
Ursula Kästner. "Ein deutsch-deutsches Vasenschicksal," EOS IX (November 1999), pp. VII-IX
Ursula Kästner. "Amphora des Töpfers und Vasenmalers Exekias," in Andreas Scholl and Gertrud Platz-Horster (Ed.): Altes Museum. Pergamonmuseum. Die Antikensammlung., von Zabern, Mainz 2007, p. 57
External links
32 Images and Description in the Perseus Digital Library
References
Amphorae
Archaeological discoveries in Italy
Antikensammlung Berlin
Heracles
Theseus
6th-century BC works
|
18529
|
https://en.wikipedia.org/wiki/Lynx%20%28web%20browser%29
|
Lynx (web browser)
|
Lynx is a customizable text-based web browser for use on cursor-addressable character cell terminals. It is the oldest web browser still being maintained, having started in 1992.
History
Lynx was a product of the Distributed Computing Group within Academic Computing Services of the University of Kansas, and was initially developed in 1992 by a team of students and staff at the university (Lou Montulli, Michael Grobe and Charles Rezac) as a hypertext browser used solely to distribute campus information as part of a Campus-Wide Information Server and for browsing the Gopher space. Beta availability was announced to Usenet on 22 July 1992. In 1993, Montulli added an Internet interface and released a new version (2.0) of the browser.
The support of communication protocols in Lynx is implemented using a version of libwww, forked from the library's code base in 1996. The supported protocols include Gopher, HTTP, HTTPS, FTP, NNTP and WAIS. Support for NNTP was added to libwww from ongoing Lynx development in 1994. Support for HTTPS was added to Lynx's fork of libwww later, initially as patches due to concerns about encryption.
Garrett Blythe created DosLynx in April 1994 and later joined the Lynx effort as well. Foteos Macrides ported much of Lynx to VMS and maintained it for a time. In 1995, Lynx was released under the GNU General Public License, and is now maintained by a group of volunteers.
Features
Browsing in Lynx consists of highlighting the chosen link using cursor keys, or having all links on a page numbered and entering the chosen link's number. Current versions support SSL and many HTML features. Tables are formatted using spaces, while frames are identified by name and can be explored as if they were separate pages. Lynx is not inherently able to display various types of non-text content on the web, such as images and video, but it can launch external programs to handle it, such as an image viewer or a video player.
Unlike most web browsers, Lynx does not support JavaScript, which many websites require to work correctly.
The speed benefits of text-only browsing are most apparent when using low bandwidth internet connections, or older computer hardware that may be slow to render image-heavy content.
Privacy
Because Lynx does not support graphics, web bugs that track user information are not fetched, meaning that web pages can be read without the privacy concerns of graphic web browsers. However, Lynx does support HTTP cookies, which can also be used to track user information. Lynx therefore supports cookie whitelisting and blacklisting, or alternatively cookie support can be disabled permanently.
As with conventional browsers, Lynx also supports browsing histories and page caching, both of which can raise privacy concerns.
Configurability
Lynx accepts configuration options from either command-line options or configuration files. There are 142 command line options according to its help message. The template configuration file lynx.cfg lists 233 configurable features. There is some overlap between the two, although there are command-line options such as -restrict which are not matched in lynx.cfg. In addition to pre-set options by command-line and configuration file, Lynx's behavior can be adjusted at runtime using its options menu. Again, there is some overlap between the settings. Lynx implements many of these runtime optional features, optionally (controlled through a setting in the configuration file) allowing the choices to be saved to a separate writable configuration file. The reason for restricting the options which can be saved originated in a usage of Lynx which was more common in the mid-1990s, i.e., using Lynx itself as a front-end application to the Internet accessed by dial-in connections.
Accessibility
Because Lynx is a text-based browser, it can be used for internet access by visually impaired users on a refreshable braille display and is easily compatible with text-to-speech software. As Lynx substitutes images, frames and other non-textual content with the text from alt, name and title HTML attributes and allows hiding the user interface elements, the browser becomes specifically suitable for use with cost-effective general purpose screen reading software. A version of Lynx specifically enhanced for use with screen readers on Windows was developed at Indian Institute of Technology Madras.
Remote access
Lynx is also useful for accessing websites from a remotely connected system in which no graphical display is available. Despite its text-only nature and age, it can still be used to effectively browse much of the modern web, including performing interactive tasks such as editing Wikipedia.
Web design and robots
Since Lynx will take keystrokes from a text file, it is still very useful for automated data entry, web page navigation, and web scraping. Consequently, Lynx is used in some web crawlers. Web designers may use Lynx to determine the way in which search engines and web crawlers see the sites that they develop. Online services that provide Lynx's view of a given web page are available.
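As a simple illustration of scripted use (not part of Lynx itself), the sketch below shells out to lynx -dump, which renders a page to plain text on standard output; the -nolist option suppresses the numbered link list at the end of the dump. It assumes the lynx executable is available on the PATH.

```python
import subprocess

def fetch_page_text(url: str) -> str:
    """Render a page to plain text using Lynx's -dump option."""
    result = subprocess.run(
        ["lynx", "-dump", "-nolist", url],  # -nolist drops the trailing list of links
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout

if __name__ == "__main__":
    text = fetch_page_text("https://example.org/")
    print(text[:500])
```

A scraper or crawler built this way sees exactly the text rendering that Lynx produces, which is also why web designers use the browser to approximate how crawlers see their sites.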
Lynx is also used to test website performance. Because the browser can be run from different locations over remote access technologies such as Telnet and SSH, Lynx can be used to compare a site's connection performance from different geographical locations simultaneously. Another possible web design application of the browser is quick checking of a site's links.
Supported platforms
Lynx was originally designed for Unix-like operating systems, though it was ported to VMS soon after its public release and to other systems, including DOS, Microsoft Windows, Classic Mac OS and OS/2. It was included in the default OpenBSD installation from OpenBSD 2.3 (May 1998) to 5.5 (May 2014), being in the main tree prior to July 2014, subsequently being made available through the ports tree, and can also be found in the repositories of most Linux distributions, as well as in the Homebrew and Fink repositories for macOS. Ports to BeOS, MINIX, QNX, AmigaOS and OS/2 are also available.
The sources can be built on many platforms; for example, builds for Google's Android operating system have been mentioned.
See also
Computer accessibility
Links (web browser)
ELinks
w3m
ModSecurity#Former Lynx browser blocking
Comparison of web browsers
Timeline of web browsers
Comparison of Usenet newsreaders
Notes
References
External links
1992 software
Cross-platform free software
Curses (programming library)
Free web browsers
Gopher clients
OS/2 web browsers
MacOS web browsers
Portable software
POSIX web browsers
RISC OS software
Software that uses S-Lang
Text-based web browsers
University of Kansas
Web browsers for AmigaOS
Web browsers for DOS
Free software programmed in C
|
50453257
|
https://en.wikipedia.org/wiki/Turris%20Omnia
|
Turris Omnia
|
The Turris Omnia is an open-source SOHO network router developed by the CZ.NIC association; it started as a crowdfunded project.
On 31 January 2016 the Turris Omnia was presented at FOSDEM 2016.
Routers from the crowdfunding campaign were delivered in 2016. After that, the router went on sale through various resellers, including Alza.cz, Amazon and local retailers.
Design
The Turris Omnia is designed to provide its owner with freedom in how the device is used. As such it runs open-source software, and the creators have also published the electrical schematics.
It also incorporates several security measures. It features automated software updates so that software vulnerabilities can be addressed quickly, a unique feature among SOHO routers. It also enables DNSSEC by default and allows owners to easily participate in a distributed adaptive firewall, which tries to identify attackers automatically by collecting data from numerous sources.
Apart from that, the router offers enough performance to handle gigabit traffic and to double as a home server, NAS or print server.
Funding
The Turris Omnia was initially funded via a crowdfunding campaign on Indiegogo, with a target of US$100,000 by 12 January 2016. When that deadline passed, funding had reached US$857,000.
By the end of the campaign, funding had reached US$1,223,230.
Since then, the router has been sold at retail through various resellers.
Specifications
It is powered by a 1.6 GHz dual-core Marvell Armada 385 ARM CPU. The base model now has 2 GB RAM and 8 GB flash storage, a real-time clock with battery backup, an SFP module and a hardware cryptographically secure pseudorandom number generator. Via Mini PCI Express it supports Wi-Fi in the form of 3×3 MIMO 802.11ac and the older 2×2 MIMO 802.11b/g/n.
Its connectivity consists of:
1 WAN and 5 LAN gigabit ports
2 USB 3.0 ports
2 Mini PCI Express
1 mSATA / mini PCI Express
1 SIM card slot
Initially the devices shipped with 1 GB RAM by default, with a 2 GB upgrade available; however, 2 GB is now the default configuration.
Software
The Turris Omnia runs Turris OS, an OpenWrt derivative. It can be managed through web interfaces as well as a CLI. The main web interface is now reForis, the successor of the legacy Foris; it offers features for regular users, such as WAN and LAN configuration or system reboot. Advanced users can utilize LuCI, the standard web user interface in OpenWrt.
References
External links
Wireless networking hardware
Hardware routers
Linux-based devices
Open-source hardware
|
52045298
|
https://en.wikipedia.org/wiki/Microsoft%20Forms
|
Microsoft Forms
|
Microsoft Forms (formerly Office Forms) is an online survey creator, part of Office 365. Released by Microsoft in June 2016, Forms allows users to create surveys and quizzes with automatic marking. The data can be exported to Microsoft Excel.
In 2019, Microsoft released a preview of Forms Pro which gives users the ability to export data into a Power BI dashboard.
Phishing & Fraud
Following a wave of phishing attacks utilizing Microsoft 365 in early 2021, Microsoft uses algorithms to automatically detect and block phishing attempts made with Microsoft Forms. Microsoft also advises Forms users not to submit personal information, such as passwords, in a form or survey, and places a similar advisory underneath the “Submit” button in every form created with Forms, warning users not to give out their password.
References
External links
Microsoft Office
Web applications
2016 software
|
101416
|
https://en.wikipedia.org/wiki/Virtual%20LAN
|
Virtual LAN
|
A virtual LAN (VLAN) is any broadcast domain that is partitioned and isolated in a computer network at the data link layer (OSI layer 2). LAN is the abbreviation for local area network and in this context virtual refers to a physical object recreated and altered by additional logic. VLANs work by applying tags to network frames and handling these tags in networking systems – creating the appearance and functionality of network traffic that is physically on a single network but acts as if it is split between separate networks. In this way, VLANs can keep network applications separate despite being connected to the same physical network, and without requiring multiple sets of cabling and networking devices to be deployed.
VLANs allow network administrators to group hosts together even if the hosts are not directly connected to the same network switch. Because VLAN membership can be configured through software, this can greatly simplify network design and deployment. Without VLANs, grouping hosts according to their resource needs requires the labor of relocating nodes or rewiring data links. VLANs allow devices that must be kept separate to share the cabling of a physical network and yet be prevented from directly interacting with one another. This managed sharing yields gains in simplicity, security, traffic management, and economy. For example, a VLAN can be used to separate traffic within a business based on individual users or groups of users or their roles (e.g. network administrators), or based on traffic characteristics (e.g. low-priority traffic prevented from impinging on the rest of the network's functioning). Many Internet hosting services use VLANs to separate customers' private zones from one another, allowing each customer's servers to be grouped in a single network segment no matter where the individual servers are located in the data center. Some precautions are needed to prevent traffic "escaping" from a given VLAN, an exploit known as VLAN hopping.
To subdivide a network into VLANs, one configures network equipment. Simpler equipment might partition only each physical port (if even that), in which case each VLAN runs over a dedicated network cable. More sophisticated devices can mark frames through VLAN tagging, so that a single interconnect (trunk) may be used to transport data for multiple VLANs. Since VLANs share bandwidth, a VLAN trunk can use link aggregation, quality-of-service prioritization, or both to route data efficiently.
Uses
VLANs address issues such as scalability, security, and network management. Network architects set up VLANs to provide network segmentation. Routers between VLANs filter broadcast traffic, enhance network security, perform address summarization, and mitigate network congestion.
In a network utilizing broadcasts for service discovery, address assignment and resolution and other services, as the number of peers on a network grows, the frequency of broadcasts also increases. VLANs can help manage broadcast traffic by forming multiple broadcast domains. Breaking up a large network into smaller independent segments reduces the amount of broadcast traffic each network device and network segment has to bear. Switches may not bridge network traffic between VLANs, as doing so would violate the integrity of the VLAN broadcast domain.
VLANs can also help create multiple layer 3 networks on a single physical infrastructure. VLANs are data link layer (OSI layer 2) constructs, analogous to Internet Protocol (IP) subnets, which are network layer (OSI layer 3) constructs. In an environment employing VLANs, a one-to-one relationship often exists between VLANs and IP subnets, although it is possible to have multiple subnets on one VLAN.
Without VLAN capability, users are assigned to networks based on geography and are limited by physical topologies and distances. VLANs can logically group networks to decouple the users' network location from their physical location. By using VLANs, one can control traffic patterns and react quickly to employee or equipment relocations. VLANs provide the flexibility to adapt to changes in network requirements and allow for simplified administration.
VLANs can be used to partition a local network into several distinctive segments, for instance:
Production
Voice over IP
Network management
Storage area network (SAN)
Guest Internet access
Demilitarized zone (DMZ)
A common infrastructure shared across VLAN trunks can provide a measure of security with great flexibility for a comparatively low cost. Quality of service schemes can optimize traffic on trunk links for real-time (e.g. VoIP) or low-latency requirements (e.g. SAN). However, VLANs as a security solution should be implemented with great care, as they can be defeated if not configured correctly.
In cloud computing VLANs, IP addresses, and MAC addresses in the cloud are resources that end users can manage. To help mitigate security issues, placing cloud-based virtual machines on VLANs may be preferable to placing them directly on the Internet.
Network technologies with VLAN capabilities include:
Asynchronous Transfer Mode (ATM)
Fiber Distributed Data Interface (FDDI)
Ethernet
HiperSockets
InfiniBand
History
After successful experiments with voice over Ethernet from 1981 to 1984, W. David Sincoskie joined Bellcore and began addressing the problem of scaling up Ethernet networks. At 10 Mbit/s, Ethernet was faster than most alternatives at the time. However, Ethernet was a broadcast network and there was no good way of connecting multiple Ethernet networks together. This limited the total bandwidth of an Ethernet network to 10 Mbit/s and the maximum distance between nodes to a few hundred feet.
By contrast, although the existing telephone network's speed for individual connections was limited to 56 kbit/s (less than one hundredth of Ethernet's speed), the total bandwidth of that network was estimated at 1 Tbit/s (100,000 times greater than Ethernet).
Although it was possible to use IP routing to connect multiple Ethernet networks together, it was expensive and relatively slow. Sincoskie started looking for alternatives that required less processing per packet. In the process, he independently reinvented transparent bridging, the technique used in modern Ethernet switches. However, using switches to connect multiple Ethernet networks in a fault-tolerant fashion requires redundant paths through that network, which in turn requires a spanning tree configuration. This ensures that there is only one active path from any source node to any destination on the network. This causes centrally located switches to become bottlenecks, limiting scalability as more networks are interconnected.
To help alleviate this problem, Sincoskie invented VLANs by adding a tag to each Ethernet frame. These tags could be thought of as colors, say red, green, or blue. In this scheme, each switch could be assigned to handle frames of a single color, and ignore the rest. The networks could be interconnected with three spanning trees, one for each color. By sending a mix of different frame colors, the aggregate bandwidth could be improved. Sincoskie referred to this as a multitree bridge. He and Chase Cotton created and refined the algorithms necessary to make the system feasible. This color is what is now known in the Ethernet frame as the IEEE 802.1Q header, or the VLAN tag. While VLANs are commonly used in modern Ethernet networks, they are not used in the manner first envisioned here.
In 1998, Ethernet VLANs were described in the first edition of the IEEE 802.1Q-1998 standard. This was extended with IEEE 802.1ad to allow nested VLAN tags in service of provider bridging. This mechanism was improved with IEEE 802.1ah-2008.
Configuration and design considerations
Early network designers often segmented physical LANs with the aim of reducing the size of the Ethernet collision domain—thus improving performance. When Ethernet switches made this a non-issue (because each switch port is a collision domain), attention turned to reducing the size of the data link layer broadcast domain. VLANs were first employed to separate several broadcast domains across one physical medium. A VLAN can also serve to restrict access to network resources without regard to physical topology of the network.
VLANs operate at the data link layer of the OSI model. Administrators often configure a VLAN to map directly to an IP network, or subnet, which gives the appearance of involving the network layer. Generally, VLANs within the same organization will be assigned different non-overlapping network address ranges. This is not a requirement of VLANs. There is no issue with separate VLANs using identical overlapping address ranges (e.g. two VLANs could each use the same private address range). However, it is not possible to route data between two networks with overlapping addresses without delicate IP remapping, so if the goal of VLANs is segmentation of a larger overall organizational network, non-overlapping addresses must be used in each separate VLAN.
A basic switch that is not configured for VLANs has VLAN functionality disabled or permanently enabled with a default VLAN that contains all ports on the device as members. The default VLAN typically uses VLAN identifier 1. Every device connected to one of its ports can send packets to any of the others. Separating ports by VLAN groups separates their traffic very much like connecting each group using a distinct switch for each group.
Remote management of the switch requires that the administrative functions be associated with one or more of the configured VLANs.
In the context of VLANs, the term trunk denotes a network link carrying multiple VLANs, which are identified by labels (or tags) inserted into their packets. Such trunks must run between tagged ports of VLAN-aware devices, so they are often switch-to-switch or switch-to-router links rather than links to hosts. (Note that the term 'trunk' is also used for what Cisco calls "channels" : Link Aggregation or Port Trunking). A router (Layer 3 device) serves as the backbone for network traffic going across different VLANs. It is only when the VLAN port group is to extend to another device that tagging is used. Since communications between ports on two different switches travel via the uplink ports of each switch involved, every VLAN containing such ports must also contain the uplink port of each switch involved, and traffic through these ports must be tagged.
Switches typically have no built-in method to indicate VLAN to port associations to someone working in a wiring closet. It is necessary for a technician to either have administrative access to the device to view its configuration, or for VLAN port assignment charts or diagrams to be kept next to the switches in each wiring closet.
Protocols and design
The protocol most commonly used today to support VLANs is IEEE 802.1Q. The IEEE 802.1 working group defined this method of multiplexing VLANs in an effort to provide multivendor VLAN support. Prior to the introduction of the 802.1Q standard, several proprietary protocols existed, such as Cisco Inter-Switch Link (ISL) and 3Com's Virtual LAN Trunk (VLT). Cisco also implemented VLANs over FDDI by carrying VLAN information in an IEEE 802.10 frame header, contrary to the purpose of the IEEE 802.10 standard.
Both ISL and IEEE 802.1Q perform explicit tagging – the frame itself is tagged with VLAN identifiers. ISL uses an external tagging process that does not modify the Ethernet frame, while 802.1Q uses a frame-internal field for tagging, and therefore does modify the basic Ethernet frame structure. This internal tagging allows IEEE 802.1Q to work on both access and trunk links using standard Ethernet hardware.
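As a rough illustration of frame-internal tagging, the sketch below uses the third-party Scapy packet-manipulation library to construct an Ethernet frame carrying an IEEE 802.1Q header; the MAC and IP addresses, VLAN ID and priority value are arbitrary example values.

```python
from scapy.all import Ether, Dot1Q, IP, ICMP  # requires the scapy package

# Build an 802.1Q-tagged frame: the Dot1Q header sits between the Ethernet header
# and the payload, carrying the 12-bit VLAN ID and the 3-bit priority (PCP) field.
frame = (
    Ether(src="00:11:22:33:44:55", dst="66:77:88:99:aa:bb")
    / Dot1Q(vlan=100, prio=3)
    / IP(src="192.0.2.10", dst="192.0.2.20")
    / ICMP()
)

frame.show()              # print the decoded layers, including the VLAN tag
raw_bytes = bytes(frame)  # the serialized frame as it would appear on a trunk link
```

Because the tag is carried inside the frame itself, a frame built this way can traverse any 802.1Q-aware trunk; an ISL-style approach would instead wrap the unmodified frame in an external header.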
IEEE 802.1Q
Under IEEE 802.1Q, the maximum number of VLANs on a given Ethernet network is 4,094 (4,096 values provided by the 12-bit field minus reserved values at each end of the range, 0 and 4,095). This does not impose the same limit on the number of IP subnets in such a network since a single VLAN can contain multiple IP subnets. IEEE 802.1ad extends the number of VLANs supported by adding support for multiple, nested VLAN tags. IEEE 802.1aq (Shortest Path Bridging) expands the VLAN limit to 16 million. Both improvements have been incorporated into the IEEE 802.1Q standard.
Cisco Inter-Switch Link
Inter-Switch Link (ISL) is a Cisco proprietary protocol used to interconnect switches and maintain VLAN information as traffic travels between switches on trunk links. ISL is provided as an alternative to IEEE 802.1Q. ISL is available only on some Cisco equipment and has been deprecated.
Cisco VLAN Trunking Protocol
VLAN Trunking Protocol (VTP) is a Cisco proprietary protocol that propagates the definition of VLANs on the whole local area network. VTP is available on most of the Cisco Catalyst Family products. The comparable IEEE standard in use by other manufacturers is GARP VLAN Registration Protocol (GVRP) or the more recent Multiple VLAN Registration Protocol (MVRP).
Multiple VLAN Registration Protocol
Multiple VLAN Registration Protocol is an application of Multiple Registration Protocol that allows automatic configuration of VLAN information on network switches. Specifically, it provides a method to dynamically share VLAN information and configure the needed VLANs.
Membership
VLAN membership can be established either statically or dynamically.
Static VLANs are also referred to as port-based VLANs. Static VLAN assignments are created by assigning ports to a VLAN. As a device enters the network, the device automatically assumes the VLAN of the port. If the user changes ports and needs access to the same VLAN, the network administrator must manually make a port-to-VLAN assignment for the new connection.
Dynamic VLANs are created using software or by protocol. With a VLAN Management Policy Server (VMPS), an administrator can assign switch ports to VLANs dynamically based on information such as the source MAC address of the device connected to the port or the username used to log onto that device. As a device enters the network, the switch queries a database for the VLAN membership of the port that device is connected to. Protocol methods include Multiple VLAN Registration Protocol (MVRP) and the somewhat obsolete GARP VLAN Registration Protocol (GVRP).
Protocol-based VLANs
In a switch that supports protocol-based VLANs, traffic may be handled on the basis of its protocol. Essentially, this segregates or forwards traffic from a port depending on the particular protocol of that traffic; traffic of any other protocol is not forwarded on the port. This allows, for example, IP and IPX traffic to be automatically segregated by the network.
VLAN cross connect
VLAN cross connect (CC or VLAN-XC) is a mechanism used to create switched VLANs. VLAN CC uses IEEE 802.1ad frames in which the S-Tag is used as a label, as in MPLS. IEEE approves the use of such a mechanism in part 6.11 of IEEE 802.1ad-2005.
See also
HVLAN, hierarchical VLAN
Multiple VLAN Registration Protocol, GARP VLAN Registration Protocol
Network virtualization
Private VLAN
Software-defined networking
Switch virtual interface
Virtual Extensible LAN (VXLAN)
Virtual Private LAN Service
Virtual private network
VLAN access control list
Wide area network
Notes
References
Further reading
Andrew S. Tanenbaum, 2003, "Computer Networks", Pearson Education International, New Jersey.
Local area networks
Network protocols
|
9876390
|
https://en.wikipedia.org/wiki/Patriotic%20hacking
|
Patriotic hacking
|
Patriotic hacking is a term for computer hacking or system cracking in which citizens or supporters of a country, traditionally industrialized Western countries but increasingly developing countries, attempt to perpetrate attacks on, or block attacks by, perceived enemies of the state.
Recent media attention has focused on efforts related to terrorists and their own attempts to conduct an online or electronic intifada - cyberterrorism. Patriot hacking is illegal in countries such as the United States yet is on the rise elsewhere. "The FBI said that recent experience showed that an increase in international tension was mirrored in the online world with a rise in cyber activity such as web defacements and denial of service attacks," according to the BBC.
Examples
War in Iraq - 2003
At the onset of the War in Iraq in 2003, the FBI was concerned about the increase in hack attacks as the intensity of the conflict grew. Since then, patriotic hacking has become increasingly common in North America, Western Europe and Israel, the regions that face the greatest threat from Islamic terrorism and its aforementioned digital counterpart.
Summer Olympics - 2008
Around the time of the 2008 Summer Olympics torch relay, which was marred by unrest in Tibet, Chinese hackers claim to have hacked the websites of CNN (accused of selective reporting on the 2008 Lhasa riots) and Carrefour (a French shopping chain, allegedly supporting Tibetan independence), while websites and forums gave tutorials on how to launch a DDoS attack specifically on the CNN website.
Op Vijaya by Indian hackers – 2015
In 2015, Indian hackers took down thousands of Pakistani websites, including pakistan.gov.pk and Right To Information Pakistan, in an attack named #OPvijaya and led by In73ct0r d3vil. This attack is considered to be a patriotic move by Indian hackers. The Government of India and India's National Security Adviser Ajit Doval showed support for the attack on his Twitter account.
Retaliation On India - 2017
The official websites of 10 different Indian universities were hacked and defaced in 2017. A group going by the name of ‘Pakistan Haxor Crew’ (PHC) claimed responsibility for the breach, saying it was retaliation for Pakistan’s railway ministry website having been hacked by an Indian crew a few days earlier.
Cyber attack on Aurat Foundation of Pakistan - 2022
The database of the Aurat Foundation of Pakistan was hacked in 2022. A group going by the name 'Ethic-Nity' took control of the foundation's SQL database. Other sources describe the attack as retaliation for ongoing cyber attacks by hackers from Pakistan.
See also
2007 cyberattacks on Estonia
Black hat hacking
Exploit (computer security)
Cyber spying
Cyber Storm Exercise
Cyber warfare
Grey hat
Hacker (computer security)
Hacker Ethic
Hack value
Hacktivism
Internet vigilantism
IT risk
Metasploit
Penetration test
Vulnerability (computing)
White hat (computer security)
References
Hacking (computer security)
National security
India–Pakistan relations
|
5702698
|
https://en.wikipedia.org/wiki/Protein%E2%80%93ligand%20docking
|
Protein–ligand docking
|
Protein–ligand docking is a molecular modelling technique. The goal of protein–ligand docking is to predict the position and orientation of a ligand (a small molecule) when it is bound to a protein receptor or enzyme. Pharmaceutical research employs docking techniques for a variety of purposes, most notably in the virtual screening of large databases of available chemicals in order to select likely drug candidates. There has been rapid development in the computational ability to determine protein structure with programs such as AlphaFold, and the demand for the corresponding protein–ligand docking predictions is driving the implementation of software that can produce accurate models. Once protein folding can be predicted accurately, along with how ligands of various structures will bind to the protein, drug development can progress at a much faster rate.
History
Computer-aided drug design (CADD) was introduced in the 1980s in order to screen for novel drugs. The underlying premise is that by parsing an extremely large data set of chemical compounds which may be viable for a given pharmaceutical, researchers can narrow down candidates without testing them all experimentally. The ability to accurately predict target binding sites is a newer phenomenon, however, which goes beyond simply parsing a data set of chemical compounds; due to increasing computational capability, it is now possible to inspect the actual geometries of the protein–ligand binding site in silico. Hardware advancements in computation have made these structure-oriented methods of drug discovery the next frontier of 21st-century biopharma. To train the new algorithms to capture the geometry of protein–ligand binding accurately, experimentally gathered datasets obtained with techniques such as X-ray crystallography or NMR spectroscopy can be used.
Available software
Several protein–ligand docking software applications that calculate the site, geometry and energy of small molecules or peptides interacting with proteins are available, such as AutoDock and AutoDock Vina, rDock, FlexAID, Molecular Operating Environment, and Glide. One program used to model peptides as the ligand binding to the protein is DockThor. Peptides are a highly flexible type of ligand that has proven difficult to predict in protein-binding programs; DockThor supports up to 40 rotatable bonds to help model these complex physicochemical bindings at the target site. Root-mean-square deviation (RMSD) is the standard metric for evaluating how well a given program reproduces the binding mode of a protein–ligand structure: it is the root-mean-square deviation between the software-predicted docking pose of the ligand and the experimentally determined binding mode. The RMSD is computed for all of the computer-generated poses of the possible bindings between the protein and ligand, and the program does not always rank the pose closest to the actual physical one first. To evaluate the strength of a docking algorithm, the ranking of RMSD values among the computer-generated candidates must therefore be examined to determine whether the experimental pose was generated but not selected.
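To make the RMSD criterion concrete, the following minimal C sketch computes the RMSD between a predicted ligand pose and the experimentally determined pose. It is illustrative only: the flat x,y,z array layout, the assumption that atom i in one array corresponds to atom i in the other, and the function name pose_rmsd are conventions chosen for this example, not part of any particular docking package.

#include <math.h>
#include <stddef.h>
#include <stdio.h>

/* Root-mean-square deviation between two poses of the same ligand, in the
 * same units as the input coordinates (typically angstroms). Both arrays
 * hold n_atoms points stored as consecutive x,y,z triples. */
double pose_rmsd(const double *predicted, const double *experimental, size_t n_atoms)
{
    double sum_sq = 0.0;
    for (size_t i = 0; i < n_atoms; ++i) {
        double dx = predicted[3*i]     - experimental[3*i];
        double dy = predicted[3*i + 1] - experimental[3*i + 1];
        double dz = predicted[3*i + 2] - experimental[3*i + 2];
        sum_sq += dx*dx + dy*dy + dz*dz;
    }
    return sqrt(sum_sq / (double)n_atoms);
}

int main(void)
{
    /* Toy three-atom ligand: the predicted pose is shifted by 0.5 along x,
     * so the RMSD is exactly 0.5. */
    const double experimental[] = { 0.0, 0.0, 0.0,   1.5, 0.0, 0.0,   3.0, 0.0, 0.0 };
    const double predicted[]    = { 0.5, 0.0, 0.0,   2.0, 0.0, 0.0,   3.5, 0.0, 0.0 };
    printf("RMSD = %.3f\n", pose_rmsd(predicted, experimental, 3));
    return 0;
}

A predicted pose with an RMSD below roughly 2 Å from the experimental binding mode is commonly treated as a successful reproduction, although the exact threshold varies between studies.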
Protein flexibility
Computational capacity has increased dramatically over the last two decades, making possible the use of more sophisticated and computationally intensive methods in computer-assisted drug design. However, dealing with receptor flexibility in docking methodologies is still a thorny issue. The main reason behind this difficulty is the large number of degrees of freedom that have to be considered in this kind of calculation. In most cases, however, neglecting receptor flexibility leads to poor docking results in terms of binding pose prediction in real-world settings. Using coarse-grained protein models to overcome this problem seems to be a promising approach. Coarse-grained models are often implemented in the case of protein–peptide docking, as such cases frequently involve large-scale conformational transitions of the protein receptor.
AutoDock is one of the computational tools frequently used to model the interactions between proteins and ligands during the drug discovery process. Although the classically used search algorithms assume the receptor protein to be rigid while the ligand is moderately flexible, newer approaches implement models with limited receptor flexibility as well. AutoDockFR is a newer model that simulates this partial flexibility within the receptor protein by letting side-chains of the protein take various poses within their conformational space. This allows the algorithm to explore a vastly larger space of energetically relevant poses for each ligand tested.
In order to simplify the complexity of the search space for prediction algorithms, various hypotheses have been tested. One such hypothesis is that side-chain conformational changes involving more atoms and rotations of greater magnitude are less likely to occur than smaller rotations, because of the energy barriers that arise. The steric hindrance and rotational energy cost introduced by these larger changes make it less likely that they are part of the actual protein–ligand pose. Findings such as these can help scientists develop heuristics that lower the complexity of the search space and improve the algorithms, as in the toy sketch below.
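The following toy C sketch is purely illustrative; the scoring terms, weights and function names are invented for this example and are not taken from AutoDock or any other package. It shows one way such a heuristic could be expressed: candidate side-chain rotations are scored with a penalty that grows with the magnitude of the rotation, so that large, energetically costly rearrangements are explored less eagerly by the search.

#include <math.h>
#include <stdio.h>

/* Hypothetical combined score: a (made-up) steric score for the pose plus a
 * penalty proportional to the magnitude of the side-chain rotation. Lower
 * scores are considered better. */
double candidate_score(double steric_score, double rotation_deg, double penalty_per_deg)
{
    return steric_score + penalty_per_deg * fabs(rotation_deg);
}

int main(void)
{
    const double rotations[] = { 5.0, 30.0, 120.0 };   /* degrees */
    const double steric[]    = { 1.2, 0.9, 0.7 };      /* arbitrary units */
    double best = 1e30;
    int best_i = 0;

    for (int i = 0; i < 3; ++i) {
        double s = candidate_score(steric[i], rotations[i], 0.01);
        printf("rotation %6.1f deg -> score %.3f\n", rotations[i], s);
        if (s < best) { best = s; best_i = i; }
    }
    printf("selected candidate with %.1f deg rotation\n", rotations[best_i]);
    return 0;
}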
Implementations
The original method of testing the molecular models of various binding sites was introduced in the 1980s: the receptor was roughly approximated by spheres occupying its surface clefts, and the ligand was approximated by further spheres occupying the relevant volume. A search was then executed to maximize the steric overlap between the ligand spheres and the receptor spheres.
Newer scoring functions for evaluating molecular dynamics and protein–ligand docking potential implement a supervised molecular dynamics approach. Essentially, the simulation is divided into a sequence of small time windows during which the distance between the centers of mass of the ligand and the protein is computed. The distance values are sampled at regular intervals and then fitted with a linear regression. When the slope is negative, the ligand is getting nearer to the binding site, and vice versa. When the ligand is departing from the binding site, the tree of possibilities is pruned at that moment so as to avoid unnecessary computation. The advantage of this method is speed, without introducing any energetic bias that could keep the model from mapping accurately to experimental results.
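A minimal C sketch of the distance-slope test at the heart of this supervised approach is shown below. It is illustrative only; the window length, sample spacing and function names are assumptions for this example, not the implementation of any published supervised molecular dynamics code. It fits a least-squares line to the ligand-protein centre-of-mass distances collected over one time window and uses the sign of the slope to decide whether the window is kept.

#include <stddef.h>
#include <stdio.h>

/* Least-squares slope of the distances over equally spaced samples
 * x = 0, 1, ..., n-1. A negative slope means the ligand's centre of mass
 * is approaching the binding site over this time window. */
double distance_slope(const double *dist, size_t n)
{
    double sx = 0, sy = 0, sxy = 0, sxx = 0;
    for (size_t i = 0; i < n; ++i) {
        sx  += (double)i;
        sy  += dist[i];
        sxy += (double)i * dist[i];
        sxx += (double)i * (double)i;
    }
    return ((double)n * sxy - sx * sy) / ((double)n * sxx - sx * sx);
}

int main(void)
{
    /* Hypothetical centre-of-mass distances (angstroms) sampled at regular
     * intervals within one short simulation window. */
    const double window[] = { 18.2, 17.6, 17.1, 16.4, 15.9 };
    double m = distance_slope(window, 5);

    if (m < 0.0)
        printf("slope %.3f: ligand approaching, keep this window\n", m);
    else
        printf("slope %.3f: ligand departing, discard and restart window\n", m);
    return 0;
}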
See also
Docking (molecular)
Protein–protein docking
Virtual screening
List of protein-ligand docking software
References
External links
BioLiP, a comprehensive ligand-protein interaction database
Molecular modelling
Computational chemistry
Cheminformatics
|
34580660
|
https://en.wikipedia.org/wiki/Hello%20World%21%20%28composition%29
|
Hello World! (composition)
|
"Hello World!" is a piece of contemporary classical music for clarinet-violin-piano trio composed by Iamus Computer in September 2011. It is arguably the first full-scale work entirely composed by a computer without any human intervention and automatically written in a fully-fledged score using conventional musical notation. Iamus generates music scores in PDF and the MusicXML format that can be imported in professional editors such as Sibelius and Finale.
Title
The title makes reference to the computer program Hello World, which is traditionally used to teach the most essential aspects of a programming language.
Dedication
The composition is dedicated to the memory of Raymond Scott, an electronic music pioneer and inventor of the Electronium.
Premiere
"Hello World!" was given its premiere performance on October 15, 2011 by Trio Energio at the Keroxen music festival in Santa Cruz de Tenerife, Spain. The performers were Cristo Barrios (clarinet), Cecilia Bercovich (violin), and Gustavo Díaz-Jerez (piano).
Reception
As neuroscientific studies suggest, critiques of "Hello World!" (and similarly created works) can be affected by an anti-computer prejudice that derives from knowing in advance (or not) the non-human nature of the author. The music critic Tom Service of The Guardian acknowledged as much in his review of a 2012 performance, writing, "Now, maybe I'm falling victim to a perceptual bias against a faceless computer program but I just don't think Hello World! is especially impressive." He continued:
Despite describing the piece as "more successful than previous attempts to produce generic musical compositions from computers," he added, "The real paradox of Iamus is why it's being used to attempt to fool humanity in this way. If you've got a computer program of this sophistication, why bother trying to compose pieces that a human, and not a very good human at that – well, not a compositional genius anyway – could write? Why not use it to find new realms of sound, new kinds of musical ideas?"
Conversely, the musicologist Peter Russell was asked to review "Hello World!" for the BBC, based on a video of the live premiere, but he was not given any information about the composer. In his critique, Russell writes "on listening to this delightful piece of chamber music I could not bring myself to say that it would probably be more satisfying to read the score than listen to it. In fact after repeated hearings, I came to like it".
See also
Algorithmic composition
Computer music
Iamus computer
References
External links
Melomics Homepage
Audio of "Hello World!" in Melomics site
Video of "Hello World!" in YouTube
Full text of "Hello World!" critique from Peter Russell
Compositions by Iamus
2011 compositions
Chamber music compositions
|
64798424
|
https://en.wikipedia.org/wiki/Harmony%20%28operating%20system%29
|
Harmony (operating system)
|
Harmony is an experimental computer operating system (OS) developed at the National Research Council Canada in Ottawa. It is a second-generation message passing system that was also used as the basis for several research projects, including robotics sensing and graphical workstation development. Harmony was actively developed throughout the 1980s and into the mid-1990s.
History
Harmony was a successor to the Thoth system developed at the University of Waterloo. Work on Harmony began at roughly the same time as that on the Verex kernel developed at the University of British Columbia. David Cheriton was involved in both Thoth and Verex, and would later go on to develop the V System at Stanford University. Harmony's principal developers included W. Morven Gentleman, Stephen A. MacKay, Darlene A. Stewart, and Marceli Wein.
Early ports of the system existed for a variety of Motorola 68000-based computers, including ones using the VMEbus and Multibus backplanes and in particular the Multibus-based Chorus multiprocessor system at Waterloo. Other hosts included the Atari 520 or 1040 ST. A port also existed for the Digital Equipment Corporation VAX.
Harmony achieved formal verification in 1995.
Features
Harmony was designed as a real-time operating system (RTOS) for robot control. It is a multitasking, multiprocessing system. It is not multi-user. Harmony provided a runtime system (environment) only; development took place on a separate system, originally an Apple Macintosh. For each processor in the system, an image is created that combines Harmony with the one multitask program for that processor at link time, an exception being a case where the kernel is programmed into a read-only memory (ROM).
Although the term did not appear in the original papers, Harmony was later referred to as a microkernel. A key in Harmony is its use of the term task, which in Harmony is defined as the "unit of sequential and synchronous execution" and "the unit of resource ownership". It is likened to a subroutine, but one that must be explicitly created and which runs independently of the task that created it. Programs are made up of a number of tasks. A task is bound to a given processor, which may be different from that of the instantiating task and which may host many tasks. All system resources are owned and managed by tasks.
Intertask communication is provided mostly by synchronous message passing and four associated primitives. Shared memory is also supported. Destruction of a task closes all of its connections. Input/output uses a data stream model.
Harmony is connection-oriented in that tasks that communicate with each other often maintain state information about each other. In contrast with some other distributed systems, connections in Harmony are inexpensive.
Applications and tools
An advanced debugger called Melody was developed for Harmony at the Advanced Real-Time Toolset Laboratory at Carleton University. It was later commercialized as Remedy.
The Harmony kernel underpinned the Actra project — a multiprocessing, multitasking Smalltalk.
Harmony was used in the multitasking, multiprocessor Adagio robotics simulation workstation.
Concepts from both Harmony and Adagio influenced the design of the Smalltalk-based Eva event driven user interface builder.
Harmony was used as the underlying OS for several experimental robotic systems.
Commercial
Harmony was commercialized by the Taurus Computer Products division of Canadian industrial computer company Dy4. When Dy4 closed down their software division, four of Taurus' former developers founded Precise Software Technologies and continued developing the OS as Precise/MPX, the predecessor to their later Precise/MQX product.
Another commercial operating system derived from Harmony is the Unison OS from Rowebot Research Inc.
References
Further reading
Real-time operating systems
National Research Council (Canada)
Microkernel-based operating systems
Robot operating systems
Operating system families
|
3733729
|
https://en.wikipedia.org/wiki/Picotux
|
Picotux
|
The Picotux is a single-board computer launched in 2005, running Linux. There are several different kinds of picotux available, the main one being the picotux 100. The Picotux was released on 18 May 2005. It measures 35 mm × 19 mm × 19 mm and is just barely larger than an 8P8C modular connector.
Technology
The picotux 100 is built around a 55 MHz 32-bit ARM7 Netsilicon NS7520 processor, with 2 MB of flash memory (750 KB of which contains the OS) and 8 MB of SDRAM. The operating system is μClinux 2.4.27 big endian. BusyBox 1.0 is used as the main shell. The picotux draws only 250 mA and runs at 3.3 V ± 5%.
Two communication interfaces are provided, 10/100 Mbit/s half/full duplex Ethernet and a serial port with up to 230,400 bit/s. Five additional lines can be used for either general input/output or serial handshaking.
External links
Picotux.com
See also
Microcontroller
References
Linux-based devices
|
8231284
|
https://en.wikipedia.org/wiki/Dallas%20Sartz
|
Dallas Sartz
|
Dallas Sartz (born July 8, 1983) is a former American football linebacker and assistant coach for the UC Davis football team. He was drafted by the Washington Redskins in the fifth round of the 2007 NFL Draft. He played college football at Southern California.
Sartz has also been a member of the Minnesota Vikings.
Early years
Sartz prepped at Granite Bay High School in Granite Bay, California. He was originally a powerful safety, but was later moved to linebacker in college. A highly touted recruit, he mulled attending Oregon and Washington before committing to Pete Carroll's Trojans.
College career
Sartz played college football at the University of Southern California, where he was a two-time team captain. He was a 2004 and 2006 Pac-10 conference honorable mention. Sartz was invited to play in the 2007 East-West Shrine Game.
Professional career
Washington Redskins
Sartz was drafted by the Washington Redskins in the fifth round of the 2007 NFL Draft with the 143rd overall pick. He ran a 4.58 40 at the NFL Combine. Sartz signed a contract with the Redskins in July 2007. On September 1, 2007, Sartz was released by the Redskins.
Minnesota Vikings
On March 17, 2008, Sartz signed with the Minnesota Vikings. He was later released on May 2.
Seattle Seahawks
In August 2008, Sartz was signed by the Seattle Seahawks, but released later that month.
Coaching
Sartz began working as a coach for the UC Davis Aggies football team, who compete at the FCS level of college football, during the 2010 season. As of 2014, Sartz is also coaching for his old high school team, the Granite Bay Grizzlies.
Personal
Sartz graduated from USC with a bachelor's degree in communications and a business law minor. Sartz' mother, Lori, was a track sprinter. Sartz' father, Jeff, played as a safety during his years at Oregon State University, and also attended Shadle Park High School in Spokane, Washington. He was coached by Gary Davis while at Shadle Park. His sister, Stephanie, attended Berkeley and worked as a Cal football recruiter and his grandfather was a boxer at Washington State University and a professional hydroplane racer.
References
External links
USC Trojans bio
Minnesota Vikings bio
Seattle Seahawks bio
Washington Redskins bio
1983 births
Living people
People from Granite Bay, California
American football linebackers
USC Trojans football players
Washington Redskins players
Minnesota Vikings players
Seattle Seahawks players
Players of American football from California
|
34992890
|
https://en.wikipedia.org/wiki/Franca%20IDL
|
Franca IDL
|
Franca Interface Definition Language (Franca IDL) is a formally defined, text-based interface description language. It is part of the Franca framework, which is a framework for definition and transformation of software interfaces. Franca applies model transformation techniques to interoperate with various interface description languages (e.g., D-Bus Introspection language, Apache Thrift IDL, Fibex Services).
Franca is a powerful framework for the definition and transformation of software interfaces. It is used for integrating software components from different suppliers, which are built on various runtime frameworks, platforms and IPC mechanisms. Its core is the Franca IDL (Interface Definition Language), a textual language for the specification of APIs.
History
The initial version of Franca was developed by the GENIVI consortium in 2011 as a common interface description language used for the standardization of an In-Vehicle Infotainment (IVI) platform. The first public version of Franca was released in March 2012 under the Eclipse Public License, version 1.0.
In 2013, Franca was proposed as an official Eclipse Foundation project.
Franca is mainly developed by the German company Itemis.
Features
Franca IDL provides a range of features for the specification of software interfaces:
declaration of interface elements: attributes, methods, broadcasts
major/minor versioning scheme
specification of the dynamic behaviour of interfaces based on finite-state machines (Protocol State Machines, short: PSM)
storage of meta-information (e.g., author, description, links) using structured comments
user-defined data types (i.e., array, enumeration, structure, union, map, type alias)
inheritance for interfaces, enumerations and structures
Architecture
In addition to the text-based IDL for the specification of interfaces, Franca provides an HTML documentation generator.
Franca is implemented on the Eclipse (software) tool platform. For the definition of the actual Franca IDL, the Xtext framework is used. For the user of Franca, this offers a range of benefits when reviewing and specifying software interfaces.
See also
Model transformation
Automatic programming
Eclipse (software)
Eclipse Modeling Framework
Xtext
References
External links
(at Eclipse Labs)
Resources
Specification languages
Data modeling languages
Inter-process communication
Component-based software engineering
Eclipse (software)
Object models
Remote procedure call
Object-oriented programming
|
31826382
|
https://en.wikipedia.org/wiki/ICMP%20hole%20punching
|
ICMP hole punching
|
ICMP hole punching is a technique employed in network address translator (NAT) applications for maintaining Internet Control Message Protocol (ICMP) packet streams that traverse the NAT. NAT traversal techniques are typically required for client-to-client networking applications on the Internet involving hosts connected in private networks, especially in peer-to-peer and Voice over Internet Protocol (VoIP) deployments.
ICMP hole punching establishes connectivity between two hosts communicating across one or more network address translators in either a peer-to-peer or client–server model. Typically, third-party hosts on the public transit network are used to establish UDP or TCP port states that may be used for direct communications between the communicating hosts; ICMP hole punching, however, requires no third-party involvement, passing information between one or more NATs by exploiting a NAT's loose acceptance of inbound ICMP Time Exceeded packets.
Once an ICMP Time Exceeded packet reaches the destination NAT, arbitrary data in the packet expected by the NAT allows the packet to reach the destination server, allowing the destination server to obtain the client's public IP address and other data stored in the packet from the client.
Overview
Currently the only method of ICMP hole punching, or hole punching without third-party involvement (autonomous NAT traversal), was developed by Samy Kamkar on January 22, 2010 and released in the open-source software pwnat; the method was later published in the IEEE. According to the paper:
The proposed technique assumes that the client has somehow learned the current external (globally routable) IP address of the server's NAT.

The key idea for enabling the server to learn the client's IP address is for the server to periodically send a message to a fixed, known IP address. The simplest approach uses ICMP ECHO REQUEST messages to an unallocated IP address, such as 1.2.3.4. Since 1.2.3.4 is not allocated, the ICMP REQUEST will not be routed by routers without a default route; ICMP DESTINATION UNREACHABLE messages that may be created by those routers can just be ignored by the server.

As a result of the messages sent to 1.2.3.4, the NAT will enable routing of replies in response to this request. The connecting client will then fake such a reply. Specifically, the client will transmit an ICMP message indicating TTL_EXPIRED. Such a message could legitimately be transmitted by any Internet router and the sender address would not be expected to match the server's target IP.

The server listens for (fake) ICMP replies and upon receipt initiates a connection to the sender IP specified in the ICMP reply.
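The server-side keep-alive described above amounts to periodically emitting ICMP echo requests toward a fixed address so that the server's NAT keeps a mapping open for ICMP replies. The following minimal C sketch for Linux illustrates only that part; it is not pwnat's actual code, the identifier value and send interval are illustrative, the address 1.2.3.4 is simply the one used in the paper, and sending on a raw ICMP socket requires root or CAP_NET_RAW.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/ip_icmp.h>
#include <arpa/inet.h>

/* Standard Internet checksum over the ICMP message. */
static unsigned short icmp_checksum(const void *data, size_t len)
{
    const unsigned short *p = data;
    unsigned long sum = 0;
    while (len > 1) { sum += *p++; len -= 2; }
    if (len == 1) sum += *(const unsigned char *)p;
    while (sum >> 16) sum = (sum & 0xffff) + (sum >> 16);
    return (unsigned short)~sum;
}

int main(void)
{
    int s = socket(AF_INET, SOCK_RAW, IPPROTO_ICMP); /* needs CAP_NET_RAW */
    if (s < 0) { perror("socket"); return 1; }

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof dst);
    dst.sin_family = AF_INET;
    inet_pton(AF_INET, "1.2.3.4", &dst.sin_addr); /* fixed address from the paper */

    for (unsigned short seq = 0; ; ++seq) {
        struct icmphdr icmp;
        memset(&icmp, 0, sizeof icmp);
        icmp.type = ICMP_ECHO;
        icmp.code = 0;
        icmp.un.echo.id = htons(0x1234);   /* arbitrary identifier (illustrative) */
        icmp.un.echo.sequence = htons(seq);
        icmp.checksum = icmp_checksum(&icmp, sizeof icmp);

        /* The echo request will never be answered; its only purpose is to
         * make the NAT accept inbound ICMP "replies", such as the client's
         * fake TTL-exceeded message. */
        if (sendto(s, &icmp, sizeof icmp, 0,
                   (struct sockaddr *)&dst, sizeof dst) < 0)
            perror("sendto");
        sleep(5);
    }
}

The client's side, forging the matching ICMP Time Exceeded reply, is more involved because that message must embed a plausible copy of the original headers; it is omitted here.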
See also
Hole punching (networking)
Port Control Protocol (PCP)
TCP hole punching
UDP hole punching
References
Computer network security
|
29995689
|
https://en.wikipedia.org/wiki/1957%20USC%20Trojans%20football%20team
|
1957 USC Trojans football team
|
The 1957 USC Trojans football team represented the University of Southern California (USC) in the 1957 NCAA University Division football season. In their first year under head coach Don Clark, the Trojans compiled a 1–9 record (1–6 against conference opponents), finished in a tie for seventh place in the Pacific Coast Conference, and were outscored by their opponents by a combined total of 204 to 86.
Tom Maudlin led the team in passing with 48 of 100 passes completed for 552 yards, no touchdowns and eight interceptions. Rex Johnston led the team in rushing with 74 carries for 304 yards. Larry Boies was the leading receiver with 14 catches for 144 yards and no touchdowns.
No member of the 1957 Trojans received first-team honors on the 1957 All-Pacific Coast Conference football team. Tackle Mike Henry received second-team honors from the conference coaches.
Schedule
References
USC
USC Trojans football seasons
USC Trojans football
|
24354311
|
https://en.wikipedia.org/wiki/Parasoft%20DTP
|
Parasoft DTP
|
Parasoft DTP (formerly Parasoft Concerto) is a development testing solution from Parasoft that acts as a centralized hub for managing software quality and application security. DTP provides a wide range of traditional software reports from normal software development activities such as coding and testing, and also is able to aggregate data from across all software testing practices (i.e. static code analysis, unit testing, and API testing) to present a comprehensive view of the state of the codebase. DTP provides software testing analytics via an internal intelligence engine.
Analytics are a way to provide actionable information and insights beyond the lists and graphs found in normal software development reports and dashboards. DTP comes with built-in algorithms that perform analytics commonly needed by software development managers. Examples include aggregated code coverage, which collects coverage data from multiple test runs as well as from different types of testing activities such as manual testing and unit testing, and change-based testing, a form of impact analysis that helps determine which tests need to be run in order to validate changes and which tests can safely be skipped.
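As a rough illustration of what aggregated code coverage means (this is a generic sketch, not Parasoft's implementation or data model), the following C program merges per-line coverage flags from two kinds of test runs and reports the combined percentage.

#include <stdio.h>

#define NUM_LINES 10

/* Mark a line as covered in the total if any run covered it. */
static void aggregate(const int *run, int *total, int n)
{
    for (int i = 0; i < n; ++i)
        total[i] = total[i] || run[i];
}

int main(void)
{
    /* Hypothetical per-line coverage from two different testing activities. */
    int unit_tests[NUM_LINES]   = { 1, 1, 1, 0, 0, 0, 0, 0, 1, 1 };
    int manual_tests[NUM_LINES] = { 0, 0, 1, 1, 1, 0, 0, 0, 0, 0 };
    int total[NUM_LINES] = { 0 };

    aggregate(unit_tests, total, NUM_LINES);
    aggregate(manual_tests, total, NUM_LINES);

    int covered = 0;
    for (int i = 0; i < NUM_LINES; ++i)
        covered += total[i];
    printf("aggregated coverage: %d of %d lines (%.0f%%)\n",
           covered, NUM_LINES, 100.0 * covered / NUM_LINES);
    return 0;
}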
DTP's web-based UI provides interactive reports and dashboards including a flexible, user-configurable reporting system with full open published APIs to put data in from any software development or testing tool. The Process Intelligence Engine (PIE) in DTP provides analytic capabilities and is open for developers and managers to customize to their individual needs as well as extend with new algorithms and analytics. The reports in DTP give developers and QA team members the ability to monitor and track how the software is being implemented across multiple builds and aggregated across all software testing practices.
Overview
Parasoft DTP was originally known as Parasoft Concerto and integrates with third-party tools such as HP Quality Center, IBM Rational RequisitePro, Concurrent Versions System, Subversion, and other development infrastructure components. It was introduced in 2009. In 2012 DTP won the "Best of Show" Embeddy award from VDC Research.
DTP can be used with:
Agile software development
Extreme Programming
Hybrid methodologies
Scrum
It includes pre-configured templates for:
American National Standards Institute 62304 for Medical Device Software development
DO-178B
IEC 61508 & Safety Integrity Level
U.S. Food and Drug Administration General Principles of Software Validation
ISO 26262 & ASIL
Joint Strike Fighter Program
Safety-critical Software Development
Motor Industry Research Association
Safety Integrity Level
The templates combine automated testing with the process recommendations and requirements outlined in common guidelines (e.g., integration of code review and defect prevention practices such as static analysis, unit testing, functional testing, software performance testing, and regression testing throughout the SDLC).
References
External links
Parasoft DTP Advanced Analytics and Reporting home page
Software development process
Software testing tools
Workflow applications
Software project management
|
6006007
|
https://en.wikipedia.org/wiki/Setcontext
|
Setcontext
|
setcontext is one of a family of C library functions (the others being getcontext, makecontext and swapcontext) used for context control. The setcontext family allows the implementation in C of advanced control flow patterns such as iterators, fibers, and coroutines. They may be viewed as an advanced version of setjmp/longjmp; whereas the latter allows only a single non-local jump up the stack, setcontext allows the creation of multiple cooperative threads of control, each with its own stack.
Specification
setcontext was specified in POSIX.1-2001 and the Single Unix Specification, version 2, but not all Unix-like operating systems provide them. POSIX.1-2004 obsoleted these functions, and in POSIX.1-2008 they were removed, with POSIX Threads indicated as a possible replacement. Citing IEEE Std 1003.1, 2004 Edition: With the incorporation of the ISO/IEC 9899:1999 standard into this specification it was found that the ISO C standard (Subclause 6.11.6) specifies that the use of function declarators with empty parentheses is an obsolescent feature. Therefore, using the function prototype:
void makecontext(ucontext_t *ucp, void (*func)(), int argc, ...);
is making use of an obsolescent feature of the ISO C standard. Therefore, a strictly conforming POSIX application cannot use this form. Therefore, use of getcontext(), makecontext(), and swapcontext() is marked obsolescent.
There is no way in the ISO C standard to specify a non-obsolescent function prototype indicating that a function will be called with an arbitrary number (including zero) of arguments of arbitrary types (including integers, pointers to data, pointers to functions, and composite types).
Definitions
The functions and associated types are defined in the ucontext.h system header file. This includes the ucontext_t type, with which all four functions operate:
typedef struct {
ucontext_t *uc_link;
sigset_t uc_sigmask;
stack_t uc_stack;
mcontext_t uc_mcontext;
...
} ucontext_t;
uc_link points to the context which will be resumed when the current context exits, if the context was created with makecontext (a secondary context). uc_sigmask is used to store the set of signals blocked in the context, and uc_stack is the stack used by the context. uc_mcontext stores execution state, including all registers and CPU flags, the instruction pointer, and the stack pointer; mcontext_t is an opaque type.
The functions are:
int setcontext(const ucontext_t *ucp)
This function transfers control to the context in ucp. Execution continues from the point at which the context was stored in ucp. setcontext does not return.
int getcontext(ucontext_t *ucp)
Saves current context into ucp. This function returns in two possible cases: after the initial call, or when a thread switches to the context in ucp via setcontext or swapcontext. The getcontext function does not provide a return value to distinguish the cases (its return value is used solely to signal error), so the programmer must use an explicit flag variable, which must not be a register variable and must be declared volatile to avoid constant propagation or other compiler optimisations.
void makecontext(ucontext_t *ucp, void (*func)(), int argc, ...)
The makecontext function sets up an alternate thread of control in ucp, which has previously been initialised using getcontext. The ucp->uc_stack member should be pointed to an appropriately sized stack; the constant SIGSTKSZ is commonly used. When ucp is jumped to using setcontext or swapcontext, execution will begin at the entry point to the function pointed to by func, with argc arguments as specified. When func terminates, control is returned to ucp->uc_link.
int swapcontext(ucontext_t *oucp, ucontext_t *ucp)
Transfers control to ucp and saves the current execution state into oucp.
Example
The example below demonstrates an iterator using setcontext.
#include <stdio.h>
#include <stdlib.h>
#include <ucontext.h>
#include <signal.h>
/* The three contexts:
* (1) main_context1 : The point in main to which loop will return.
* (2) main_context2 : The point in main to which control from loop will
* flow by switching contexts.
* (3) loop_context : The point in loop to which control from main will
* flow by switching contexts. */
ucontext_t main_context1, main_context2, loop_context;
/* The iterator return value. */
volatile int i_from_iterator;
/* This is the iterator function. It is entered on the first call to
* swapcontext, and loops from 0 to 9. Each value is saved in i_from_iterator,
* and then swapcontext used to return to the main loop. The main loop prints
* the value and calls swapcontext to swap back into the function. When the end
* of the loop is reached, the function exits, and execution switches to the
* context pointed to by main_context1. */
void loop(
ucontext_t *loop_context,
ucontext_t *other_context,
int *i_from_iterator)
{
int i;
for (i=0; i < 10; ++i) {
/* Write the loop counter into the iterator return location. */
*i_from_iterator = i;
/* Save the loop context (this point in the code) into loop_context,
 * and switch to other_context. */
swapcontext(loop_context, other_context);
}
/* The function falls through to the calling context with an implicit
 * setcontext(loop_context->uc_link); */
}
int main(void)
{
/* The stack for the iterator function. */
char iterator_stack[SIGSTKSZ];
/* Flag indicating that the iterator has completed. */
volatile int iterator_finished;
getcontext(&loop_context);
/* Initialise the iterator context. uc_link points to main_context1, the
* point to return to when the iterator finishes. */
loop_context.uc_link = &main_context1;
loop_context.uc_stack.ss_sp = iterator_stack;
loop_context.uc_stack.ss_size = sizeof(iterator_stack);
/* Fill in loop_context so that it makes swapcontext start loop. The
* (void (*)(void)) typecast is to avoid a compiler warning but it is
* not relevant to the behaviour of the function. */
makecontext(&loop_context, (void (*)(void)) loop,
3, &loop_context, &main_context2, &i_from_iterator);
/* Clear the finished flag. */
iterator_finished = 0;
/* Save the current context into main_context1. When loop is finished,
* control flow will return to this point. */
getcontext(&main_context1);
if (!iterator_finished) {
/* Set iterator_finished so that when the previous getcontext is
* returned to via uc_link, the above if condition is false and the
* iterator is not restarted. */
iterator_finished = 1;
while (1) {
/* Save this point into main_context2 and switch into the iterator.
* The first call will begin loop. Subsequent calls will switch to
* the swapcontext in loop. */
swapcontext(&main_context2, &loop_context);
printf("%d\n", i_from_iterator);
}
}
return 0;
}
NOTE: this example is not correct, but may work as intended in some cases. The function makecontext requires its additional parameters to be of type int, but the example passes pointers. Thus, the example may fail on 64-bit machines (specifically LP64 architectures, where sizeof(void*) > sizeof(int)). This problem can be worked around by breaking up and reconstructing 64-bit values, as in the sketch below, but that introduces a performance penalty.
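One common form of that workaround, sketched below for an LP64 platform, passes the 64-bit pointer as two 32-bit int arguments and reassembles it inside the started function. The names trampoline and the fixed stack size are illustrative choices for this example, not part of any standard API.

#include <stdio.h>
#include <stdint.h>
#include <ucontext.h>

static ucontext_t main_ctx, work_ctx;

/* Reassemble a 64-bit pointer that was split into two 32-bit halves.
 * Assumes an LP64 platform where int is 32 bits and pointers are 64 bits. */
static void trampoline(unsigned int hi, unsigned int lo)
{
    const char *msg = (const char *)(((uintptr_t)hi << 32) | (uintptr_t)lo);
    printf("got pointer argument: %s\n", msg);
    /* Falls through to main_ctx via uc_link. */
}

int main(void)
{
    static char stack[64 * 1024];
    const char *msg = "hello from a split pointer";
    uintptr_t addr = (uintptr_t)msg;

    getcontext(&work_ctx);
    work_ctx.uc_link = &main_ctx;
    work_ctx.uc_stack.ss_sp = stack;
    work_ctx.uc_stack.ss_size = sizeof stack;

    /* Pass the pointer as two int-sized halves, as makecontext requires. */
    makecontext(&work_ctx, (void (*)(void)) trampoline, 2,
                (unsigned int)(addr >> 32),
                (unsigned int)(addr & 0xffffffffu));

    swapcontext(&main_ctx, &work_ctx);
    return 0;
}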
For get and set context, a smaller context can be handy:
#include <stdio.h>
#include <ucontext.h>
#include <unistd.h>
int main(int argc, const char *argv[]){
ucontext_t context;
getcontext(&context);
puts("Hello world");
sleep(1);
setcontext(&context);
return 0;
}
This creates an infinite loop, because context stores the program counter saved by getcontext, so each setcontext call resumes execution immediately after the getcontext call.
References
External links
System V Contexts - The GNU C Library Manual
Unix
Control flow
C (programming language) libraries
Articles with example C code
Threads (computing)
|
3663940
|
https://en.wikipedia.org/wiki/Dennis%20Thurman
|
Dennis Thurman
|
Dennis Lee Thurman (born April 13, 1956) is an American football coach and former cornerback. He is currently the Defensive coordinator on Deion Sanders' inaugural staff at Jackson State University. He is a former coach in the National Football League for the Phoenix Cardinals, Baltimore Ravens and New York Jets, and in the Alliance of American Football for the Memphis Express. He played for the Dallas Cowboys and St. Louis Cardinals. He played college football at the University of Southern California.
Early years
Thurman attended Santa Monica High School, where he was a quarterback and defensive back. He was a part of three CIF Division I championship teams that combined to go 39–1–1.
Thurman also practiced baseball and basketball. He was recruited by major league baseball teams and to play college basketball.
College career
Thurman accepted a football scholarship from the University of Southern California. As a freshman, he was part of the 1974 National Champion team. Thurman played for John McKay and later for John Robinson. He started five games at flanker in his first two seasons, recording three receptions for 55 yards (18.3-yard avg.) and seven carries for 61 yards (8.7-yard avg.).
As a junior in 1976, Thurman was named the starter at free safety, leading the team and the Pacific-8 Conference with eight interceptions. He intercepted passes in seven straight contests. Thurman led the nation in interception return yardage (180). He also led the team with 17 punts for 68 yards.
As a senior in 1977, Thurman was second on the team with three interceptions. He was named the team's MVP and its Defensive Player of the Year. He played in the 1978 Senior Bowl and was a Playboy Pre-Season All-American.
Thurman is tied for sixth in school history with 13 interceptions, two of which were returned for touchdowns. He also had 77 tackles, six pass deflections and seven fumble recoveries. Thurman played on Trojan teams that won four bowl games (two Roses, a Liberty and a Bluebonnet). Teammate Ronnie Lott credited Thurman for his development as a player in his Pro Football Hall of Fame speech, stating Thurman was someone who "helped me become a better football player."
Professional career
Dallas Cowboys
Thurman was selected by the Dallas Cowboys in the 11th round (306th overall) of the 1978 NFL Draft, after dropping because he was considered too small and slow to play professional football. Although his college experience was at safety, he made the team as a backup cornerback. As a rookie, he also played on special teams, recovering an onside kick in Super Bowl XIII. He finished the season with 20 tackles and 2 interceptions.
In 1979, he regularly replaced outside linebacker D. D. Lewis on passing situations. He also played strong safety in place of an injured Randy Hughes. He started at cornerback in the season finale against the Washington Redskins. He had 37 tackles, one fumble recovery, one interception in the regular season and one in the divisional playoff game against the Los Angeles Rams.
In 1980, he started at free safety in place of an injured Hughes. In the ninth game against the St. Louis Cardinals, he returned an interception for a 78-yard touchdown. Although his play was inconsistent, he still tied Charlie Waters for the team lead with 5 interceptions. He also had 101 tackles (second on the team), 2 forced fumbles and 2 fumble recoveries.
In 1981, after Charlie Waters retired, cornerback Benny Barnes was moved to strong safety and rookie Michael Downs to free safety, so Thurman became the starter at right cornerback. He registered 76 tackles, one fumble recovery and 9 interceptions (third in team history), which was second on the team to Everson Walls' 11 picks. In the season opener against the Washington Redskins, he returned an interception 96 yards for a touchdown, which was the second longest in club history. He had 2 interceptions in the 28–27 win against the Miami Dolphins. In the fifteenth game against the Philadelphia Eagles, he tied a franchise record with 3 interceptions in a single-game, helping to clinch the NFC East championship. His 187 interception return yardage in the season ranked second in club history. He had 2 interceptions in the 38-0 playoff win against the Tampa Bay Buccaneers.
In 1982, he made 43 tackles and 3 interceptions. He returned an interception 60 yards for a touchdown against the Minnesota Vikings. He tied a club and an NFC playoff record with 3 interceptions, including a 39-yard return for a touchdown, to clinch a victory in the second round of the playoffs against the Green Bay Packers.
In 1983, he collected 66 tackles, one fumble recovery and led the team with 6 interceptions. He scored his fourth career touchdown when he recovered a fumble against the St. Louis Cardinals.
In 1984, he was moved to backup Downs at free safety and was more involved in third-down defensive schemes. He registered 34 tackles and 5 interceptions (second on the team).
During the 1985 season, quarterback Danny White nicknamed Thurman, fellow safety Michael Downs and cornerbacks Walls and Ron Fellows "Thurman's Thieves" for their opportunistic play in the secondary, as they combined for 33 total interceptions. He posted 41 tackles and 5 interceptions (second on the team). He returned an interception 21 yards for a touchdown in the season opener against the Washington Redskins, contributing to a 44–14 win. He had 2 interceptions against the Cleveland Browns, stopping 2 scoring opportunities in a 20–7 win.
Thurman was waived on August 26, 1986. He left holding the franchise record of 4 career interceptions returned for touchdowns, and ranked fourth in regular-season career interceptions (36), second in playoff interceptions (7) and third in interception return yardage (562). At the time, he also ranked third in league history for career playoff interceptions.
St. Louis Cardinals
On August 28, 1986, he was claimed off waivers by the St. Louis Cardinals, reuniting with head coach Gene Stallings who was his defensive secondary coach with the Cowboys. He played safety and started three games. He was released on December 22.
Thurman never missed a game during his 137-game career and finished with 36 interceptions, which he returned for 562 yards and four touchdowns, while also recovering seven fumbles.
Coaching career
He made his NFL coaching debut with the Phoenix Cardinals, coaching defensive backs for two seasons (1988–89). He coached from 1993 to 2000 for his alma mater, the USC Trojans where he mentored future NFLers Chris Cash, Kris Richard, Daylon McCutcheon, Brian Kelly and Troy Polamalu.
Baltimore Ravens (2002–2007)
Thurman was part of the Baltimore Ravens coaching staff from 2002 to 2007.
New York Jets (2008–2014)
Defensive backs coach (2008–2012)
Thurman was named defensive backs coach upon the hiring of Rex Ryan as head coach of the Jets. During his tenure, he coached Darrelle Revis and Antonio Cromartie. Revis and Cromartie were vital parts of the Jets defense, especially during the Jets' playoff appearances in 2009 and 2010. Under Thurman's coaching, Revis was described as "one of the best" corners in the league. After five seasons, Thurman was promoted to defensive coordinator after the 2012 season.
Defensive coordinator (2013–2014)
Thurman was named Defensive Coordinator prior to the 2013 season. In his first season, the Jets defense allowed 24.2 points per game. Thurman's defense was sixth in the league in his final season in New York, allowing 327.2 yards per game. They also finished in the top five among defenses against the running game. Following the 2014 season, he joined Rex Ryan's coaching staff for the Buffalo Bills.
Buffalo Bills (2015–2016)
On January 15, 2015, Thurman was hired by new head coach Rex Ryan to serve as the defensive coordinator. Thurman was credited for helping cornerback Stephon Gilmore emerge. The Bills ranked 19th in the league in defense in his first season and 14th in 2016. He was fired on January 14, 2017.
Memphis Express (2018)
In October 2018, Thurman was named defensive coordinator for the Memphis Express of the Alliance of American Football (AAF).
Personal life
Thurman is the older brother of Ulysses "Junior" Thurman, who also attended Santa Monica High School (1981) and the University of Southern California. He played defensive back in the National Football League (NFL) for the New Orleans Saints.
References
1956 births
Living people
Sportspeople from Santa Monica, California
Players of American football from Santa Monica, California
American football cornerbacks
USC Trojans football players
Dallas Cowboys players
St. Louis Cardinals (football) players
Phoenix Cardinals coaches
Ohio Glory coaches
USC Trojans football coaches
Baltimore Ravens coaches
New York Jets coaches
Buffalo Bills coaches
All-American college football players
National Football League defensive coordinators
Memphis Express (American football) coaches
Jackson State Tigers football coaches
|
22718105
|
https://en.wikipedia.org/wiki/Windows%20Embedded%20Automotive
|
Windows Embedded Automotive
|
Windows Embedded Automotive (formerly Microsoft Auto, Windows CE for Automotive, Windows Automotive, and Windows Mobile for Automotive) was an operating system subfamily of Windows Embedded based on Windows CE for use on computer systems in automobiles. The operating system is developed by Microsoft through the Microsoft Automotive Business Unit that formed in August 1995. The first automotive product built by Microsoft's Automotive Business Unit debuted on December 4, 1998 as the AutoPC, and also includes Ford Sync, Kia Uvo, and Blue&Me. Microsoft's Automotive Business Unit has built both the software platforms used for automotive devices as well as the devices themselves. The current focus is on the software platforms and includes two products, Microsoft Auto and Windows Automotive.
History
The Windows Embedded Automotive operating system was originally shipped with the AutoPC that was jointly developed by Microsoft and Clarion. The system was released in December 1998, and referred to the operating system itself as "Auto PC". Microsoft's Auto PC platform was based on Windows CE 2.0, and had been announced in January of that year.
On October 16, 2000, Microsoft officially announced the next version of the platform. This version of the operating system was renamed to "Windows CE for Automotive" and had new applications preinstalled like the Microsoft Mobile Explorer.
On October 21, 2002, Microsoft announced that the platform would be renamed to "Windows Automotive". The version added support for development using the .NET Compact Framework.
Windows Automotive 4.2 reached General Availability on June 1, 2003 and Windows Automotive 5.0 reached GA on August 8, 2005.
With the release of Ford Sync, Microsoft renamed the platform from "Windows Mobile for Automotive" to "Microsoft Auto".
Microsoft again renamed the operating system to "Windows Embedded Automotive", and updated its version to 7 on October 19, 2010. This is the latest version in the Microsoft Auto category, and is based on the Windows CE platform.
Windows Embedded Automotive 7 reached GA on March 1, 2011.
In December 2014, Ford announced that the company would be replacing Microsoft Auto with BlackBerry Limited's QNX.
References
External links
Windows Embedded Automotive official website
Windows CE
|
41731508
|
https://en.wikipedia.org/wiki/Digital%20Forensics%20Framework
|
Digital Forensics Framework
|
Digital Forensics Framework (DFF) was a computer forensics open-source software. It is used by professionals and non-experts to collect, preserve and reveal digital evidence without compromising systems and data.
User interfaces
Digital Forensics Framework offers a graphical user interface (GUI) developed in PyQt and a classical tree view. Features such as recursive view, tagging, live search and bookmarking are available. Its command line interface allows the user to remotely perform digital investigation. It comes with common shell functions such as completion, task management, globbing and keyboard shortcuts. DFF can run batch scripts at startup to automate repetitive tasks. Advanced users and developers can use DFF directly from a Python interpreter to script their investigation.
Distribution methods
In addition to the source code package and binary installers for Linux and Windows, Digital Forensics Framework is available in operating system distributions as is typical in free and open-source software (FOSS), including Debian, Fedora and Ubuntu.
Digital Forensics Framework is also available in digital-forensics-oriented distributions and live CDs:
DEFT Linux Live CD
Kali Linux
Publications
"Scriptez vos analyses forensiques avec Python et DFF" in the French magazine MISC
Several presentations about DFF at conferences: "Digital Forensics Framework" at ESGI Security Day and "An introduction to digital forensics" at RMLL 2013
Published books that mention Digital Forensics Framework are:
Digital Forensics with Open Source Tools (Syngress, 2011)
Computer Forensik Hacks (O'Reilly, 2012)
Malwares - Identification, analyse et éradication (Epsilon, 2013)
Digital Forensics for Handheld Devices (CRC Press Inc, 2012)
In literature
Saving Rain: The First Novel in The Rain Trilogy
White papers
Selective Imaging Revisited
A survey of main memory acquisition and analysis techniques for the windows operating system
Uforia : Universal forensic indexer and analyzer
Visualizing Indicators of Rootkit Infections in Memory Forensics
EM-DMKM Case Study Computer and Network Forensics
OV-chipcard DFF Extension
L'investigation numérique « libre »
Malware analysis method based on reverse technology
Prize
DFF was used to solve the 2010 Digital Forensic Research Workshop (DFRWS) challenge, which consisted of reconstructing a physical dump of a NAND flash memory.
References
External links
Computer forensics
Digital forensics software
Free security software
Hard disk software
Unix security-related software
|
57422748
|
https://en.wikipedia.org/wiki/University%20of%20Europe%20for%20Applied%20Sciences
|
University of Europe for Applied Sciences
|
The University of Europe for Applied Sciences, shortened as UE, is a private, for-profit university in Germany with its main campus and administrative headquarters in Iserlohn and two further campuses in Berlin and Hamburg.
It was formed in 2017 as the University of Applied Sciences Europe by a merger of the Business and Information Technology School and the Berliner Technische Kunsthochschule. The university was previously owned by the American company Laureate Education and was acquired by Global University Systems in 2018. In October 2020, the university changed its name to University of Europe for Applied Sciences.
Merged and affiliated institutions
Business and Information Technology School
Before its merger into what is now the University of Europe for Applied Sciences, the Business and Information Technology School (known informally as BiTS) was a state-approved, private higher education college (Hochschule). It was founded in 2000 by the German entrepreneur and author . The college's first and main campus was in a former British military hospital in Iserlohn. Initially, it taught business administration with a special emphasis on entrepreneurship, but over the years more and more subjects were added and the number of students grew steadily. In 2008, BiTS was bought by Laureate Education and established branches in Berlin in 2012 and Hamburg in 2013. In 2013 it had a total enrollment of approximately 1800 students, but by 2016 its numbers had begun to fall. It was at this point that Laureate Education initiated its merger with another Laureate-owned college, Berliner Technische Kunsthochschule, into the University of Applied Sciences Europe. Laureate then began looking for a buyer for the newly formed institution. Alumni of the Business and Information Technology School include the German politician Paul Ziemiak.
Berliner Technische Kunsthochschule
Also known as BTK, the Berliner Technische Kunsthochschule was a private, state-approved university for training designers located on Bernburger Straße in Berlin. It specialised in the interface of design, art, and new media. BTK was founded in 2006 by four individuals and received its initial accreditation in 2009. It was bought by Laureate Education in 2011. Following the acquisition, two further branches were established in Iserlohn in 2012 and Hamburg in 2013. In the 2013–14 academic year it had an enrollment of 502 students, 479 on its five bachelor's programmes and 23 on its master's programme. By the time of its merger into the University of Applied Sciences in 2017, BTK had added a bachelor's degree in video game design and a master's degree in photography. Several of its programmes had both German and English-language versions.
HTK Academy of Design
The HTK Academy of Design is a private state-approved vocational school which specialises in training graphic designers for the advertising and publishing sectors. It was founded in Hamburg in 1987 as the Hamburger Technische Kunstschule. It is closely affiliated to the University of Europe for Applied Sciences, a relationship that has continued from its affiliation with the Berliner Technische Kunsthochschule (BTK) which began in 2016. Its main site is now on Museumstraße in Hamburg which it shares with the University of Europe for Applied Sciences, with a further branch in Berlin which had opened in 2001 and is now located on the university's Berlin campus. Like BTK, HTK was bought by Laureate Education in 2011. It was subsequently acquired by Global University Systems in 2018 at the same time the company purchased the University of Applied Sciences Europe.
Programmes
The university has three faculties:
Business and Psychology, originally taught at the Business and Information Technology School
Sport, Media, and Event Management, originally taught at the Business and Information Technology School
Art and Design, originally taught at the Berliner Technische Kunsthochschule
Together, they offer a number of undergraduate and post-graduate degrees, some of which are taught in English. The degree programmes are accredited by FIBAA and .
Campuses
Iserlohn, located at Reiterweg 26B near the shores of Lake Seilersee, the original campus of the Business and Information Technology School (BiTS), and the administrative headquarters of UE
Berlin, located at Dessauer Straße 3–5 in the Kreuzberg district near the original campus of the Berliner Technische Kunsthochschule (BTK)
Hamburg, opened in 2014 and located at Museumstraße 39 in the Altona district near the Altona train station and the River Elbe
See also
Fachhochschule (the German term for universities of applied sciences in general)
References
External links
(in German and English)
Private universities and colleges in Germany
2017 establishments in Germany
Business schools in Germany
For-profit universities and colleges in Europe
Universities and colleges formed by merger in Germany
Universities and colleges in North Rhine-Westphalia
|
2280105
|
https://en.wikipedia.org/wiki/MIT%20in%20popular%20culture
|
MIT in popular culture
|
The Massachusetts Institute of Technology (MIT), a private research university in Cambridge, Massachusetts in the United States, has been mentioned in many works of cinema, television, music, and the written word. MIT's widespread overall reputation has greater influence on its role in popular culture than does any particular aspect of its history or its student lifestyle. Because MIT is well known as a seedbed for technology and technologists, the makers of modern media are able to use it to effectively establish character, in a way that mainstream and international audiences can immediately understand. A smaller number of creative works use MIT directly as their scene of action.
MIT as metaphor
The use of "MIT as metaphor" is relatively widespread, so much so that in popular culture, "the MIT of" is an idiom for "top science and engineering university", or "elite technical institution", like "Cadillac of" for "most luxurious", or "an Einstein" for "intelligent person". Similarly, any regionally prominent science or engineering school is likely to be called "the MIT of" that region. For example, the Georgia Institute of Technology and the University of Texas at Dallas have also been popularly and historically referred to as "the MIT of the South". Additionally, US Senator Richard Shelby (R-Alabama) touted the University of Alabama in Huntsville as a possible "MIT of the South". Other examples, make "X is the MIT of Y" an example of a snowclone (a family of formulaic clichés).
Films and television
Frequently, when a character in Hollywood cinema is required to have a science or engineering background, or in general possess an extremely high level of intelligence, the film establishes that he or she is an MIT graduate or associate. (MIT can also be a comparative or a metaphor for intellect in general: "Would they think of that at MIT?"). Numerous films and television series resort to this technique, including:
The Day the Earth Stood Still (1951)
Desk Set (1957)
The Phantom Planet (1961)
Help! (1965)
Operation Crossbow (1965)
The Time Tunnel (television show), episode 28, "The Hitchiker" (first broadcast on March 24, 1967): a computer on a planet in the solar system of the star Canopus in the year 8433 AD, while compiling all the data of the history of Earth, reveals that Tony Newman (played by James Darren) received his PhD from MIT in 1954
Bread and Circuses (Star Trek: The Original Series) (1968)
Colossus: The Forbin Project (1970)
WarGames (1983)
Ghostbusters (1984)
My Stepmother is an Alien (1988)
Hackers (1995)
Independence Day (1996)
Conceiving Ada (1997)
Contact (1997)
Orgazmo (1997)
X-Files (1993–2003)
Good Will Hunting (1997)
Armageddon (1998)
Sphere (1998)
The West Wing (1999-2006 TV series) – in Season 3 Episode 0
Space Cowboys (2000)
Gilmore Girls (2000) Season 1 Episode 1
Charlie's Angels (2000 film) Main character Natalie Cook
Malcolm in the Middle (2000–2006) – in Season 5, Episode 6
The Fast and the Furious (2001)
Antitrust (2001)
Undergrads (2001)
Smallville (2001-2011)
Arrested Development (2003-continuing TV series)
Las Vegas (2003-2008 TV series)
NCIS (2003-continuing TV series)
The Recruit (2003)
National Treasure (2004)
Supernatural (2005-continuing TV series)
Numbers (2005-2010 TV series)
Fantastic Four (2005)
Mr. & Mrs. Smith (2005)
Rent (2005)
E-Ring (2005)
21 (2008)
Seven Pounds (2008)
Death Race (2008)
Iron Man (2008)
Fringe (2008–2013) (characters Walter Bishop and Peter Bishop)
Knowing (2009)
House (TV series) (2009) Season 6, episode 9 "Ignorance Is Bliss"
Burn Notice (2009) (Season 3, Episode 5, "Spencer")
SGU Stargate Universe (2009-2011) (character Eli Wallace)
Edge of Darkness (2010)
Iron Man 2 (2010)
Take Me Home Tonight (2011)
No Strings Attached (2011)
City Hunter (TV series) (2011) Main character Lee Yoon-sung played by Lee Min-ho graduated from MIT with a doctorate degree and landed a job at South Korea's presidential palace
The Big Bang Theory (2007-2019)
Lie to Me (2009-2011, canceled TV series)
Castle (2009-2016 TV Series)
Person of Interest (2011-2016 TV series)
Breaking Bad (2008-2013) Season 4 Episode 4, "Bullet Points"
Iron Man 3 (2013)
Futurama Season 7, Near-Death Wish – Professor Farnsworth was accepted to MIT at age 14 but wasn't allowed to enroll
Revolution (2012-continuing TV series)
Arrow (2012-continuing TV series)
The Signal (2014)
Forever (2014 TV series) Episode 17, "Social Engineering"
Blackhat (film) (2015)
Project Almanac (2015)
Captain America: Civil War (2016)
Ghostbusters (2016)
The Simpsons (2016) Season 26, Episode 15, "Sky Police"; and Season 28, Episode 3, "The Town (The Simpsons)"
MacGyver (2016 TV series) (2016)
Orphan Black (Season 2, Episode 2, "The Preacher")
The Last Ship (Season 2, Episode 11, "Valkyrie")
Modern Family (Season 7, Episode 9, "F.N. Wilson")
Bad Hair Day (2015)
Keeping Up with the Joneses (2016) (character Jeff Gaffney pretending to be Dr. Rascal Flatts)
The Magicians (2016) (Season 1, Episode 9, "Kira")
Timeless (2016) (character Rufus Carlin)
Teenage Mutant Ninja Turtles: Out of the Shadows (2016)
Salvation (2017) (character Liam Cole)
The Defenders (2017) (character John Raymond)
Gifted (2017 film)
Spider-Man: Homecoming (2017)
Twin Peaks (2017) (character Tamara Preston)
Black Panther (2018) (character Erik "Killmonger" Stephens)
Venom (2018)
Shaft (2019) (title character)
Santa Clarita Diet (2019) (Season 3, Episode 8, "Forever!")
The Forest of Love (2019), in which Murata studied at MIT
Watchmen (TV series) (2019) Character Lady Trieu, who is also a reference to historical figure Lady Triệu
Avengers: Endgame (2019)
Rick and Morty (2019) (Season 4, Episode 5, "Rattlestar Ricklactica", in which Rick and Morty travel back in time to "Snake MIT")
South Park: Post Covid (2021), shows Dr. Kenny McCormick "lecturing at MIT"
Spider-Man: No Way Home (2021), depicts Peter Parker, Ned Leeds, and Michelle Jones-Watson getting rejected from MIT and challenging the rejection by appealing to MIT's Assistant Vice Chancellor, played by Paula Newsome.
Black Panther: Wakanda Forever (2022) is scheduled to include several scenes filmed on the MIT campus.
Peacemaker (2022), contains the following exchange. Jamil: "Why do you think I'm mopping floors, bro? I went to MIT. I don't want the responsibility." Peacemaker: "You went to MIT?" Jamil: "Oh, yeah."
Space Force (2022) Dr. Adrian Mallory, after a prank: "That's how it was done at MIT in the '70s."
James Burke's nonfiction television series The Day the Universe Changed (1985) explicitly employed the snowclone metaphor for a more academic purpose. In the episode "Point of View", which describes the discovery of perspective geometry and its ramifications, Burke spent some time in the Italian city of Padua. This city, which hosted the second-oldest Italian university after Bologna, boasted a large concentration of intellectuals. In Burke's phrase, Padua was "the MIT of the fifteenth century". An episode of his later series Connections 2 (1994) uses a similar shorthand to characterize the seventeenth-century Royal Society.
Films set at MIT are less common than those that use the MIT name as metaphor. Nevertheless, MIT has been part of movie settings, in such films as the action thriller Blown Away (1994), the drama Good Will Hunting (1997), the biographical drama A Beautiful Mind (2001), the heist drama 21 (2008), and the science fiction thriller Knowing (2009, also featuring exteriors of the Haystack Observatory). Most of the scenes for these movies, especially indoor scenes, are in fact filmed elsewhere due to MIT's reluctance to give permission to film on campus. Although portions of Blown Away were shot on the MIT campus, the film still makes several geographical errors about MIT and Boston in general. An incidental scene in neo-noir The Friends of Eddie Coyle (1973) was shot on location outside of MIT Baker House. A scene in the drama A Small Circle of Friends (1980) was shot in Walker Memorial, an MIT cafeteria and gymnasium; ironically the movie setting portrays Harvard University, but Harvard declined to allow the filming on their own campus.
The television series Numbers (2005–2010) has several different connections to MIT locales and people. The pilot episode was shot in Boston. Co-creator and Executive Producer Cheryl Heuton explained, "We originally tried to choose MIT for the show. We originally set the show in Boston, and Charlie [Eppes, one of the main characters,] was going to be a professor at MIT. We contacted MIT, and their answer was they're not in the film and TV business..." Multiple episodes of the show mention that Charlie studied at MIT. Dylan Bruno, the actor who plays Colby Granger, has earned a bachelor's degree in environmental engineering from MIT.
HBO's docudrama television miniseries From the Earth to the Moon (1998) contains segments set at MIT, most notably in the episode covering Apollo 14. The series portrays the Institute's denizens as very slightly eccentric engineers at the MIT Instrumentation Laboratory who do their part to keep the Apollo program running successfully.
Some cinematic references to MIT betray a mild anti-intellectualism, or at least a lack of respect for "book learning". For example, the adventure drama Space Cowboys (2000) features the seasoned hero (Clint Eastwood) trying to explain a piece of antiquated spacecraft technology to a whippersnapper novice. When the young astronaut fails to comprehend Eastwood's explanation, he brags that "I have two master's degrees from MIT", to which Eastwood replies, "Maybe you should get your money back". Similarly, Gus Van Sant's introduction to the published Good Will Hunting screenplay suggests that the lead character's animosity towards official MIT academia reflects a class struggle with ethnic undertones, in particular Will Hunting's Irish background versus the "English aristocracy" of the MIT faculty. Help! (1965), The Beatles' second film, ties MIT to the mad scientist stereotype when Professor Foot (Victor Spinetti) declares, "MIT was after me, you know. Wanted me to rule the world for them!"
"Inside" MIT references also appear in film and television without attribution. In the comedy Stir Crazy (1980), the opening close-up shot of Grossberger, played by Erland Van Lidth De Jeude (MIT Class of 1976, SB in Computer Science and Engineering), clearly reveals his actual "Brass Rat" class ring. In superhero film Iron Man (2008), several close-ups of Terrence Howard clearly show his character ("Jim Rhodes") to be wearing a Brass Rat; Robert Downey, Jr.'s character ("Tony Stark") appears to wear one as well in the movie.
In the second-season episode "Bread and Circuses" (1968) of Star Trek: The Original Series, the starship visits a planet dominated by a Roman Empire possessing 20th century technology. An establishing shot early in the program shows stock footage of the classic view of the MIT Great Dome, as viewed from Memorial Drive.
In The Adventures of Rocky and Bullwinkle (2000), a background image of "Whassamatta U." is recognizable as the centerpiece Great Dome of the main MIT building complex. A story arc from the original 1960s television series Rocky and Bullwinkle, "Goof Gas Attack", starts with a psychoactive gas attack that induces stupidity at the "Double Dome Institute of Advanced Thinking". The MIT campus is noted for its two prominent neoclassical domes.
MIT is referenced in some Japanese anime: the sci-fi series Neon Genesis Evangelion (1995) mentions MIT as the location of one of the replica MAGI supercomputers; and the comedy series Pani Poni Dash! (2001–2011) revolves around an 11-year-old student who graduated from MIT and travels to Japan to become a high school teacher. The CIA character "Ed Hoffman" in the action thriller Body of Lies (2008) can be seen wearing an MIT shirt in multiple shots.
Individual characters in single episodes of television series are often described as MIT graduates. For example, in the 1992 episode "The Corporate Veil" of the crime-solving television series Law & Order, both mother and son protagonists are said to be electrical engineering graduates of MIT. MIT was also mentioned in the year 2000 pilot episode of the comedy-drama television series Gilmore Girls.
In the comedy drama television series Las Vegas (2003), Mike Cannon (played by James Lesure), one of the main characters, is a highly intelligent and technically gifted engineer and MIT graduate. The character Eli Wallace in the science fiction television series SGU Stargate Universe (2009–2011) is a genius MIT dropout.
In separate episodes of the satirical Da Ali G Show (2003–2004), Ali G (played by Sacha Baron Cohen) interviewed two real-life MIT professors: Jerome Friedman, Institute Professor and Professor of Physics Emeritus; and Noam Chomsky, Institute Professor Emeritus.
Randal Pinkett, the 2005 winner of the reality television season 4 of The Apprentice, is an MIT alum, with an SM in Electrical Engineering (1998), an MBA from Sloan School of Management (1998), and a PhD in Media Arts & Sciences from the MIT Media Lab (2001).
Two lead characters in the science fiction crime-solving television series Fringe (2008–2013) have MIT backgrounds: Walter Bishop earned a PhD at MIT, and his son Peter Bishop has falsified an MIT degree.
The action comedy movie Keeping Up with the Joneses (2016) depicts its protagonist, Jeff Gaffney (Zach Galifianakis), pretending to be a scientist named Dr. Rascal Flatts, about whom his wife says, "He's very smart. MIT."
An episode of the fantasy television series The Magicians (2016) introduces a character named Kira (Yaani King Mondschein), who says, "I went to MIT, but I didn't study a lick of magic in school".
In the science fiction time-travel television series Timeless (2016), a protagonist named Rufus Carlin (Malcolm Barrett) often mentions on the show that he is an alumnus of MIT. In one episode, Carlin time travels to 1893 and meets real-life MIT alumna Sophia Hayden (MIT Class of 1890), who assumes that Carlin must be Robert Robinson Taylor (MIT Class of 1892), the first African-American student at MIT.
In the animated comedy series Tenacious D in Post-Apocalypto (2018), the protagonists meet a group of scientists who say, "Where are we from? MIT, where else? We are the top uttermost scientists in all of the world, surviving."
The title character of the action-comedy movie sequel Shaft (2019) is a cybersecurity expert with a degree from MIT.
In a 2019 episode of the science fiction television series Lost in Space, principal character Judy Robinson's biological father, Grant Kelly, is described as having had "a scholarship to MIT when he was 17 and graduated top of his class".
Radio, spoken word, and podcasts
Tom Magliozzi and his younger brother Ray were "Click and Clack, The Tappet Brothers", the hosts of National Public Radio's comedy car advice show Car Talk. Both were MIT alumni — Tom earned a degree in chemical engineering (1958), and Ray earned a degree in humanities and general science (1972) — and they regularly used that fact in their humorously self-deprecating attempts to establish their credibility on technical matters. After campaigning on-air for years, they were finally invited to speak at MIT's 1999 commencement exercise. Although their radio show had stopped new programming in 2012, and Tom died in 2014, archived episodes continue to be aired nationally as The Best of Car Talk.
The comedian James Mattern, in his comedy album No Segues (2019), tells this story: "When they invented emojis years ago in Cupertino, California, who had the gall to go to Steve Jobs like, 'Steve, I've got a great idea — how about drips of water?' 'Eureka, Merv, way to use your MIT degree.'"
Written works
Also see References in the main article, and the bibliography maintained by MIT's Institute Archives & Special Collections
Nonfiction works have examined MIT, its history, and its various subcultures. In addition to books like Nightwork, which recount the Institute's hacking tradition, Benson Snyder's The Hidden Curriculum (1970) describes the state of MIT student and faculty psychology in the late 1960s. Noted physicist and raconteur Richard Feynman built up a collection of anecdotes about his MIT undergraduate years, several of which are retold in his loose memoir Surely You're Joking, Mr. Feynman! Some of this material was incorporated into Matthew Broderick's film Infinity (1996), in addition to Feynman stories from Far Rockaway, Princeton University, and Los Alamos, New Mexico.
In fiction, the novel Now, Voyager (1941, by Olive Higgins Prouty) features a key character, Jeremiah Duvaux Durrance, who studied architecture at MIT. The novel The Gadget Maker (1955, by Maxwell Griffith) traces the life of aeronautical engineer Stanley Brack, who performs his undergraduate studies at MIT. Ben Bova's novel The Weathermakers (1966), about scientists developing methods to prevent hurricanes from reaching land, is also set in part at MIT. Patricia Vasquez visits (or comes from) MIT in Greg Bear's Eon (1985). Neal Stephenson hints at MIT in Quicksilver (2004), and other books of The Baroque Cycle, by having Daniel Waterhouse found the "Massachusetts Bay Colony Institute of the Technologickal Arts" in the 18th century.
Ayn Rand's 1943 novel The Fountainhead begins with architecture student Howard Roark being expelled from the fictional "Stanton Institute of Technology". As that institute is depicted as being located in a seashore suburb of Boston, it seems that MIT – specifically, its School of Architecture – was alluded to.
Focusing principally on campus architecture, Robert B. Parker wrote in Mortal Stakes (1975, the third Spenser novel), "Across the river MIT loomed like a concrete temple to The Great God Brown".
Jhumpa Lahiri's 2003 debut novel The Namesake features a character, Ashoke, who received his PhD in Fiber Optics from MIT.
When the novel The Magicians by Lev Grossman was first published in 2009, the principal review of the book in The New York Times described the story's academic location, Brakebills College, as "kind of like the M.I.T. [sic] for magic".
The 2012 historical fiction novel The Technologists, by Matthew Pearl, is set in the MIT of 1868, during its first decade of existence. The protagonists are some of the first students to enroll in the fledgling college, and include both fictional composite characters and real-life historical figures, such as Ellen Swallow Richards and Daniel Chester French. In response to a high-tech terrorist attack on the City of Boston, the students form a secret research laboratory to discover the perpetrator and to forestall further attacks. They interact closely with prominent historical figures, such as William Barton Rogers (the founder of MIT), Harvard professor Louis Agassiz (pioneer of modern geology and paleontology), and Charles William Eliot (then an MIT professor, and soon to become the longest-serving president of Harvard University). The author spent many hours doing background research in the MIT Archives while writing the novel, and weaves many historical details into his narrative of mystery and adventure.
Geeks & Greeks (2016) is a semi-autobiographical graphic novel by Steve Altes and Andy Fish, set at MIT. The story was inspired by MIT's hacking culture and Altes's experiences with fraternity hazing.
In Thornton Wilder's play "Our Town" (1938), the stage manager mentions the gravestone of Joe Crowell, whom he describes as "awful bright – graduated from high school here, head of his class. So he got a scholarship to Massachusetts Tech. Graduated head of his class there, too. It was all wrote up in the Boston paper at the time. Goin’ to be a great engineer, Joe was. But the war broke out and he died in France. – All that education for nothing."
In the 2008 historical novel People of the Book by Geraldine Brooks, Dr. Hanna Heath's research is funded by "an MIT math genius who'd invented an algorithm that led to some kind of toggle switch that was used in every silicon chip. Or something like that."
Kurt Vonnegut
MIT is a recurring motif in the works of Kurt Vonnegut, much like the planet Tralfamadore or the Vietnam War. In part, this recurrence may stem from Vonnegut family history: both his grandfather Bernard and his father Kurt, Sr. studied at MIT and received bachelor's degrees in architecture. His older brother, another Bernard, earned a bachelor's and a PhD in chemistry, also at MIT. Since so many of Vonnegut's stories are ambivalent or outright pessimistic with regard to technology's impact on humankind, it is hardly surprising that his references to the Institute express a mixed attitude.
In Hocus Pocus (1990), the Vietnam-veteran narrator Eugene Debs Hartke applies for graduate study in MIT's physics program, but his plans go awry when he tangles with a hippie at a Harvard Square Chinese restaurant. Hartke observes that men in uniform had become a ridiculous sight around colleges, even though both Harvard and MIT obtained much of their income from weapons research and development. ("I would have been dead if it weren't for that great gift to civilization from the Chemistry Department of Harvard, which was napalm, or sticky jellied gasoline.") Jailbird notes drily that MIT's eighth president was one of the three-man committee who upheld the Sacco and Vanzetti ruling, condemning the two men to death. As reported in The Tech, June 7, 1927:
President Samuel W. Stratton has recently been appointed a member of a committee that will advise Governor Alvan T. Fuller in his course of action in the Sacco-Vanzetti case, it was announced a few days ago by the metropolitan press. The President is one of a committee of three appointed, the others being President A. Lawrence Lowell of Harvard and Judge Robert Grant. It was stated at Dr. Stratton's office that this appointment was very reluctantly accepted, for not only has the President not had experience with criminal law procedure, but he has not been following the case at all in the newspapers. It is thought by some that this very fact may result in an entirely unbiased review of the case, which might not be possible had he followed the case closely.
Palm Sunday (1981), a loose collage of essays and other material, contains a markedly skeptical and humanist commencement address Vonnegut gave to Hobart and William Smith Colleges in Geneva, New York. Speaking of the role religion plays in modern society, Vonnegut notes:
We no longer believe that God causes earthquakes and crop failures and plagues when He gets mad at us. We no longer imagine that He can be cooled off by sacrifices and festivals and gifts. I am so glad we don't have to think up presents for Him anymore. What's the perfect gift for someone who has everything?
The perfect gift for somebody who has everything, of course, is nothing. Any gifts we have should be given to creatures right on the surface of the planet, it seems to me. If God gets angry about that, we can call in the Massachusetts Institute of Technology. There's a very good chance they can calm Him down.
Isaac Asimov
Kurt Vonnegut was friends with fellow humanist and writer Isaac Asimov, who resided for many years in Newton, Massachusetts. During much of this time, Asimov chose the date for the MIT Science Fiction Society's annual picnic, citing a superstition that he always picked a day with good weather. In his copious autobiographical writings, Asimov reveals a mild predilection for the institute's architecture, and an awareness of its aesthetic possibilities. For example, In Joy Still Felt (1980) describes a 1957 meeting with Catherine de Camp, who was checking out colleges for her teenage son. Asimov recalls:
I hadn't seen her for five years and she was forty-nine now, and I felt I would be distressed at seeing her beauty fade.
How wrong I was! I saw her coming down the long corridor at MIT and she looked almost as though it were still 1941, when I had first met her.
Asimov's work, too, trades on MIT's reputation for narrative effect, even touching upon an anti-academic theme. In the short story "The Dead Past" (1956), the scientist-hero Foster must overcome the attitudes his Institute physics training has entrenched in his mind, before he can make his critical breakthrough. Several jokes in Isaac Asimov's Treasury of Humor and its sequel Asimov Laughs Again hinge upon MIT, its reputation for scientific prowess, and the technocentric focus of its students. In a similar vein, the satirical newspaper The Onion published an article entitled "Corpse-Reanimation Technology Still 10 Years Off, Say MIT Mad Scientists", among many others in the same general tradition.
Joe Haldeman
From 1983 to 2014, science fiction writer Joe Haldeman was an adjunct professor teaching writing at MIT, and he came to know the Institute well. This is very evident in The Accidental Time Machine (2007), where MIT at various past and future times in its history plays a central role. The institution is described with considerable affection and much "insider" knowledge of the hidden corners of the MIT campus (as well as conspicuous parts of its geography such as the Green Building and the Infinite Corridor), of the relations between students and lecturers, and of various wild and rather illicit student practices.
The book begins with MIT student Matt Fuller accidentally discovering a phenomenon, and using it to create the time machine of the title. He jumps a decade forward to find that his professor has taken credit for his discovery and gotten a Nobel Prize for it; jumps centuries ahead and finds a theocracy where MIT is the Massachusetts Institute of Theology; and after more adventures winds up in the past, in the late 19th century when MIT was still in its original location on Boylston Street. In all time periods visited, under vastly differing circumstances, the protagonist becomes an MIT full professor.
Comic strips
Several comic strips make use of MIT. In Doonesbury, Kim Rosenthal almost earned her PhD in computer science but dropped out because it was "too easy". In the fall of 2006, Kim and Mike Doonesbury's daughter Alex entered MIT as a freshman. (The 3 October 2006 Doonesbury strip satirizes the "MIT of" snowclone; Zipper Harris declares the fictional Walden College to be "the MIT of southern Connecticut".)
Dilbert, the title character in the comic strip about engineers and corporate management, received a degree from MIT Course VI-1.
Bill Amend's FoxTrot has also made MIT allusions, in keeping with the strip's genial satire of nerd subcultures. On Christmas Day 2005, the comic strip Baby Blues featured a character reading the instruction manual accompanying a gadget that he has given to his child as a Christmas present. The first volume of instructions begins, "Assembly Instructions — Step 1: Obtain a master's degree in mechanical engineering from M.I.T. Step 2: ..."
Computer and video games
Some genres of computer and video games have characterization requirements like those of movies. For example, a game involving a team of commandos might require a member who can break into computers, crack security systems, or work with explosives. This character's background would typically have to be established very quickly and efficiently, perhaps within one screen of introductory text. Stating that a commando or top-secret operative "graduated from MIT" is one way to accomplish this.
MIT is mentioned in the computer games Area 51 (1995), Half-Life (1998), Half-Life 2 (2004), Metal Gear Solid (1998), and in the Fallout series (1997–2015).
In the case of the Half-Life series, the main protagonist, Gordon Freeman, is an MIT graduate and has a PhD in Theoretical Physics.
The Infocom game The Lurking Horror (1987), written by MIT alumnus and interactive fiction pioneer Dave Lebling, is set on the campus of the George Underwood Edwards Institute of Technology, which strongly resembles MIT. Its fictional culture also parodies the MIT culture. For instance, G.U.E. Tech's class ring is known as the "brass hyrax", parodying MIT's Brass Rat.
In the Fallout games, MIT is known as the "Commonwealth Institute of Technology". As nuclear war began, researchers from the university hid below the main building and continued with their research without making contact with other survivors. Eventually after many years, they took on the title of simply "The Institute", and became well known as a shady organization with extraordinary technology and the ability to create androids. The institute is featured as a major faction in the 2015 title, Fallout 4.
In Fallout: New Vegas, one of the main characters in the story, Robert Edwin House (also called Mr. House), is a graduate of the institute, as stated in his obituary.
Music
The song and music video "MIT" by PomDP the PhD rapper was released on May 23, 2020 and played during MIT's 2020 pre-commencement ceremony. The song and music video feature the contributions of a number of renowned alumni from MIT, including Richard Feynman, Patrick Winston, Isaac Chuang, Claude Shannon, Marvin Minsky, and Rafael Reif. The song introduces a new form of tech rap music, and was the first rap song to be featured in an MIT commencement.
In 2012, MIT students released "MIT Gangnam Style", a light-hearted parody of the K-pop viral hit video, "Gangnam Style". Within a week, the parody video reached 4 million views on YouTube, and it also won approval from Psy, the featured performer in the original music video.
"Weird Al" Yankovic's satirical song "White & Nerdy" (2006) riffs upon MIT, along with a plenitude of other geek culture references — Star Wars Holiday Special, pocket protectors, and editing Wikipedia, to name a few. Yankovic claims that he graduated "first in [his] class here at MIT" (however, in actuality MIT does not assign class rankings or confer traditional Latin honors upon its graduates).
The 2001 song "Etoh" by the Australian electronic music group The Avalanches describes MIT as "the home of complicated computers, which speak a mechanical language all their own". This lyric can be taken literally, or it can be read metaphorically as a description of MIT student life and culture.
"Nerdcore" rap artist MC Hawking's song "All My Shootin's Be Drive-bys" (1997) takes tropes associated with gangsta rap and plays them out in a more academic setting. He speaks of taking revenge for the death of a friend, part of his Cambridge, UK crew:
I saw Little Pookie just the other day.
Pookie was my boy we shared Kool-Aid in the park,
now some punks took his life in the dark.
I ask Doomsday who the ############s be,
"some punk ### ####### from MIT".
When the narrator learns the identity of Pookie's killers, he decides to "give a Newtonian demonstration, of a bullet its mass and its acceleration", leaving six MIT students dead in the street.
In the Broadway musical Rent (1996–2008), a major character, Tom Collins, is expelled from teaching at MIT, "for [his] theory of actual reality".
Rhythm and blues group Tony! Toni! Toné! mentions MIT in the song "Born Not to Know" from their 1988 debut album Who? In the song, a pretentious individual rattles off a long list of his impressive academic credentials—culminating with a "PhD from MIT"—only to then ask, "so, can I get a job?" Tony! Toni! Toné! responds with a resounding "No!"
Allan Sherman's 1963 paean to initialisms, "Harvey and Sheila", notes that Harvey "works for IBM; he went to MIT, got his PhD".
The mathematician and satirist Tom Lehrer taught for a time in MIT's political science department in the 1960s, lecturing on quantitative methods and statistics. This experience led him to write a song called "Sociology", played to the tune of Irving Berlin's "Choreography". The lyrics conclude,
They consult, sounding occult,
Talking like a mathematics Ph.D.
They can snow all their clients,
By calling it "science"—
Although it's only sociology!
MIT students have also written many of their own songs during their stays at the Institute. This tradition, which goes back at least to The Doormat Singers of the 1960s, continues with several present-day vocal groups, such as The Logarhythms and The Chorallaries.
List of fictional characters
List of fictional characters in movies
Ellie Arroway, Contact – SETI researcher (in Carl Sagan's novel, Ellie Arroway is a Harvard graduate)
Ben Chapeski, Orgazmo – "MIT graduate"
James Clayton, The Recruit – CIA trainee, degree in "non-linear cryptography"
Emma, No Strings Attached – Protagonist is an MIT graduate, played by Natalie Portman
Jack Florey
Benjamin Gates, National Treasure – historian and amateur cryptologist
Will Hunting, Good Will Hunting – Savant on-campus janitor
James O. Incandenza, Infinite Jest – Played tennis as an MIT student, optical expert
Invisible Woman, The Fantastic Four
Gerald Lambeau, portrayed by Stellan Skarsgard in Good Will Hunting – professor of mathematics and Fields Medal winner
David Levinson, Independence Day – manager at NYC cable station, degree in computer science
Lex Luthor, in Superman movies – MIT graduate
Sean Maguire, portrayed by Robin Williams in Good Will Hunting – Psychologist
Rockhound, Armageddon – Geologist with two MIT doctorates in Chemistry and Geology
Natalie Cook, Charlie's Angels (2000 film) – MIT PhD and leader of the Angels team, portrayed by Cameron Diaz
Richard Sumner, Desk Set – A "PhD from MIT in Science"
Tim Thomas AKA Ben Thomas, Seven Pounds – studied engineering at MIT
Peter Sullivan, Margin Call – Senior Risk Analyst with a "Ph.D. in Physics"
Richmond Valentine, Kingsman: The Secret Service – billionaire philanthropist
Tony Stark, Marvel Cinematic Universe – At the age of 15 Tony entered the undergraduate electrical engineering program at the Massachusetts Institute of Technology (MIT), and graduated with two master's degrees by age 19. Stark has a "Brass Rat" ring which can be seen during a dinner scene in the movie.
Erik "Killmonger" Stephens, Black Panther – attended graduate school at MIT with his graduate thesis, named "Project Liberator," on building an automated combat drone
Nicholas Hathaway and Chen Dawai, Blackhat (film), two hackers and computer experts, who co-wrote a remote access tool (RAT) during their time at MIT
Muri Forester, The Tomorrow War – PhD from MIT in biotechnology with an emphasis in genomics and immunology
List of fictional characters in TV shows
Sam Beckett, Quantum Leap – completed bachelor's degree in two years
Darcy, secretary in The Loop
Mike Cannon, Las Vegas – "MIT graduate degree"
Tony Newman, The Time Tunnel (television show) – one of the two lead characters; episode 28, "The Hitchiker" (first broadcast on March 24, 1967), reveals that he earned his PhD from MIT; played by James Darren
Zane Donovan, Eureka – expelled from MIT
Liam Cole and Darius Tanz, Salvation – Cole is an MIT graduate student; Tanz is a billionaire scientist who later becomes US Vice President and then President
Tobias Fünke, Arrested Development, completed his fellowship in psycholinguistics
Tim McGee, NCIS "has a Masters in Computing Forensics at MIT"
Howard Wolowitz, The Big Bang Theory – Masters in Engineering
Barney Stinson, How I Met Your Mother – May be an MIT alumnus as revealed in Season 7, Episode 16. Turns out it stood for Magicians Institute of Teaneck as told by him in Season 9, episode 15.
Walden Schmidt, Two and a Half Men – MIT dropout
Eli Wallace, SGU Stargate Universe – genius MIT dropout
Walter Bishop, Fringe – doctoral degree
Peter Bishop, Fringe – falsified an MIT degree
Harold Finch, Person of Interest – attended under the name Harold Wren
Nathan Ingram, Person of Interest – attended alongside his friend Harold Finch/Wren
Nolan Ross, Revenge – dropped out to start his own company, NolCorp
Felicity Smoak, Arrow – Master's degree in cyber security and computer sciences
Ash, Supernatural – Thrown out of MIT for fighting, computer genius
MacGyver, title character of the rebooted series MacGyver (2016 TV series), and Nikki Carpenter, his former girlfriend and a former agent for DAX
Lee Yoon-sung, City Hunter graduated from MIT with a doctorate degree and landed a job at South Korea's presidential palace
Mariana Adams Foster, main character in Good Trouble (TV series)
Kira, The Magicians
Rufus Carlin, Timeless
Ben Larson, Incorporated
Phillip "Lip" Gallagher, Shameless
John Raymond, The Defenders – graduated from MIT
Tamara Preston, Twin Peaks, on dean's list at MIT
Molly Griggs, Stuart Campbell, Winslow Schott, Gabriel Duncan, Alistair Kreig are among many villains from Smallville who graduated from MIT
Dabney Donovan, in Superman & Lois, is described by Lois Lane as having dual PhDs in genetics and molecular neurochemistry, and she says that he "left a tenure-track position at MIT to work for Edge". In Season 1 Episode 12, an aerial view of MIT's campus is shown, followed by a close-up of a sign saying "Metropolis Institute of Technology".
Argenthina Woolridge, in iCarly, describes herself as a "Harvard and MIT graduate".
Reagan Ridley, in Inside Job, described as having graduated MIT at the age of 13
List of fictional characters — other
Stanley Brack, in the novel The Gadget Maker
Dilbert – has an MIT degree
Alex Doonesbury – character in the comic strip Doonesbury, daughter of Mike Doonesbury and J. J.
Gordon Freeman, Half-Life – Degree in theoretical physics
Harvey, from Allan Sherman's song parody "Harvey and Sheila" ("He went to MIT and got his PhD")
, Q.E.D. (manga) – a 15-year-old genius who graduated from MIT's undergraduate mathematics program
Mei Ling, Metal Gear Solid
Black Mass (comics) – was a physicist at MIT before he was granted powers by the Overmaster
, Pani Poni Dash (manga), 11-year-old MIT graduate
Otacon, Metal Gear Solid
Jim Rhodes, Marvel Comics' Iron Man
Reed Richards (Mr. Fantastic), Marvel Comics' The Fantastic Four
Tony Stark, Marvel Comics' Iron Man – enrolled in MIT's undergraduate program and easily graduated with double honors majors in electrical engineering and physics at the age of 17.
Ed Straker, commander of SHADO
Djinn Makhmud, Virgil Ayres, and Anne Saint James met at MIT and teamed up as the engineers in the novel New Jersey's Famous Turnpike Witch by Brad Abruzzi. Makhmud received "wholly unsolicited admission to [MIT's] class of 2012", Ayres was Makhmud's classmate, and Saint James was a "townie [who had permission] to haunt with impunity the student-only computer clusters at MIT".
Alex Altschuler, in Mind's Eye by Douglas E. Richards "finished his doctorate at MIT in Electrical Engineering and Computer Science in only four years".
In the novel Split Second by Douglas E. Richards, the character Edgar Knight says, "Long story short, the head of Black Ops R&D got wind of my abilities and plucked me right up after I graduated MIT".
Elena Janev (also known as Sally Bins née Sally Petracova), in the novel 3:34 a.m by Nick Pirog, "was awarded a full-ride scholarship to MIT where she studied chemistry with a minor in psychology. She graduated in 1970."
Anna Thurman in the novel First Shift - Legacy by Hugh Howey. "Her research at MIT had been in wireless harmonics; remote charging technology; the ability to assume control of electronics via radio".
The novel MindWar by Douglas E. Richards includes the character Lucas, who "had just graduated from MIT with a PhD in physics and robotics, the youngest PhD the school had minted in over a decade".
In the novels Extinction Code and Extinction Countdown by James D. Prescott, Rajesh Viswanathan "help[ed] to pioneer [an android powered by artificial intelligence]'s creation, a move that has made him one of MIT's rising stars."
References
Massachusetts Institute of Technology
American universities and colleges in popular culture
Snowclones
|
3979875
|
https://en.wikipedia.org/wiki/On-line%20Debugging%20Tool
|
On-line Debugging Tool
|
On-line Debugging Tool (ODT) was the name given to several debugger programs developed for Digital Equipment Corporation (DEC) hardware. Various operating systems including OS/8, RT-11, RSX-11, and RSTS/E implemented ODT, as did the firmware console of all of the LSI-11-family processors including the 11/03, 11/23/24, 11/53, 11/73, and 11/83/84.
The debugger allowed access to memory using octal addresses and data. Within the software systems, the debugger accessed the process's address space. DEC's line of PDP-11 processors did not present programs with virtual memory in the modern sense; from an operating system perspective, each program ran in a fixed 16-bit address space that was mapped onto physical memory by Active Page Registers (APRs). Each of the eight APRs mapped one segment of up to 4K 16-bit words, so a program's address space was limited to 32K words. In the case of RSTS/E, this usually meant that a Runtime System, or RTS, was mapped into the upper portion of the address space and the user program resided in the lower portion. The RTS provided code to support access to the operating system on behalf of the user program; the RTS itself stored any of its non-static data in the address space of the user program, because the RTS was typically read-only. The operating system loaded a single copy of the RTS, and this copy was mapped, in 4K increments, into the upper portion of the address space of any user program that required it. So the BASIC-PLUS RTS (for the BASIC-PLUS programming language) typically mapped 16K words to itself, and the user program was mapped, in 4K increments, into the lower 16K. The RT11 RTS occupied 4K, so a user program, like the RT11-based Peripheral Interchange Program (PIP), could expand to a maximum of 28K.
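As a rough illustration of this segmented mapping, the Python sketch below models a 32K-word program address space split across eight 4K-word segments, each relocated by its own base value; the lower segments stand in for a user program and the upper ones for an RTS. The names and the flat word-addressed model are simplifying assumptions, not the actual PDP-11 memory-management register format.

# Simplified model of the segmented mapping described above: eight
# Active Page Registers (APRs), each relocating one 4K-word segment
# of a 32K-word program address space onto physical memory.
WORDS_PER_SEGMENT = 4096          # 4K 16-bit words per segment
NUM_APRS = 8                      # 8 segments -> 32K-word address space

def map_address(virtual_word_addr, apr_bases):
    """Translate a program word address to a physical word address."""
    if not 0 <= virtual_word_addr < NUM_APRS * WORDS_PER_SEGMENT:
        raise ValueError("address outside the 32K-word program space")
    apr = virtual_word_addr // WORDS_PER_SEGMENT      # which segment
    offset = virtual_word_addr % WORDS_PER_SEGMENT    # offset within it
    return apr_bases[apr] + offset

# Illustrative layout: user program in the low segments, RTS mapped high.
apr_bases = [0, 4096, 8192, 12288, 65536, 69632, 73728, 77824]
print(map_address(5000, apr_bases))    # falls in segment 1 -> 5000
print(map_address(28672, apr_bases))   # start of segment 7 -> 77824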
ODT could be used to "patch" binary modules, like an RTS, without requiring the re-compilation of the binary's source.
The firmware console implementation accessed physical memory.
ODT is a non-symbolic debugger and implements similar functionality to Advanced Debugger (adb) on Unix systems.
Console ODT
Console ODT replaced the lights and switches console of many of the earlier processors.
Access to console ODT is obtained either from power up (with appropriate power up mode selected), by the execution of a HALT instruction in kernel mode, or by use of the front panel halt switch or button.
Example
@1000/ xxxxxx 112737<LF>
001002 xxxxxx 101<LF>
001004 xxxxxx 177566<LF>
001006 xxxxxx 137<LF>
001010 xxxxxx 1000<CR>
>R7/xxxxxx 1000<CR>
>RS/340
This deposits the program
MOVB #101, @#177566 ; Move 'A' (octal 101) into console transmit register
JMP @#1000 ; Jump back to start
The deposit to the PC [Program Counter] sets the PC to the start of the program, and the deposit to the PSW [Program Status Word] locks out interrupts.
The effect of this will be to write a stream of "A" to the console. As there is no check for transmitter ready, it is highly probable that a large number of garbage characters will be displayed.
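The dialogue above follows a simple examine/deposit protocol: opening a location prints its current contents, typing a new value deposits it, and a line feed closes the location and opens the next word. The Python sketch below models that behaviour with a dictionary standing in for memory; the function names and simplified command handling are illustrative assumptions, not DEC's implementation.

# Minimal, illustrative model of an ODT-style examine/deposit dialogue.
# Octal addresses and 16-bit data words, with a dict standing in for RAM.
memory = {}

def examine(addr):
    """Open a location and return its current contents (default 0)."""
    return memory.get(addr, 0)

def deposit(addr, value):
    """Deposit a 16-bit value into an open location."""
    memory[addr] = value & 0xFFFF

def deposit_program(start, words):
    """Deposit consecutive words, as the <LF> sequence above does."""
    for i, word in enumerate(words):
        deposit(start + 2 * i, word)   # PDP-11 words are two bytes apart

# The five octal words typed in the console session above.
deposit_program(0o1000, [0o112737, 0o101, 0o177566, 0o137, 0o1000])

for addr in range(0o1000, 0o1012, 2):
    print(f"{addr:06o}/ {examine(addr):06o}")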
RSX-11M-Plus ODT
The RSX-11M-Plus ODT is essentially a superset of all other ODT implementations.
ODT is implemented as code that is linked with a task using the Task Builder /DA switch.
TKB HELLO/DA,HELLO/CR=HELLO
Once any task built with ODT is run, ODT is invoked on entry.
RUN HELLO
ODT:TT0
_
The underscore is the standard ODT prompt.
Addresses in the ODT debugger are 16 bit addresses in the mode in which ODT is operating, not the physical addresses used with console ODT.
OS/8 Octal Debugging Technique
The PDP-8's OS/8 operating system's ODT command invoked its Octal Debugging Technique tool.
As with the subsequent PDP-11 ODT programs, it was non-symbolic, and it could examine or modify memory, and also set breakpoints.
See also
Dynamic Debugging Technique (DDT)
Executive Debugging Technique (XDT)
References
Debuggers
Digital Equipment Corporation
|
55548770
|
https://en.wikipedia.org/wiki/List%20of%202018%E2%80%9319%20Premiership%20Rugby%20transfers
|
List of 2018–19 Premiership Rugby transfers
|
This is a list of player transfers involving Premiership Rugby teams before or during the 2018–19 season. The list is of confirmed deals that are either from or to a rugby union team in the Premiership during the 2018–19 season. Bristol Bears won promotion to the Premiership for the 2018–19 season, whilst London Irish were relegated to the RFU Championship for the 2018–19 season.
Bath
Players In
Jackson Willison from Worcester Warriors
Lucas Noguera Paz from Jaguares
Victor Delmas from Colomiers
Darren Atkins promoted from Academy
Ruaridh McConnochie from England Sevens
Jamie Roberts from Harlequins
Will Chudley from Exeter Chiefs
Joe Cokanasiga from London Irish
Alex Davies from Yorkshire Carnegie
Jacques van Rooyen from Lions
Players Out
Matt Banahan to Gloucester
Josh Lewis to Dragons
Ben Tapuai to Harlequins
Nick Auterac to Harlequins
James Phillips to Sale Sharks
Nathan Charles to Melbourne Rebels
Rory Jennings to London Scottish
Will Homer to Jersey Reds
Kane Palma-Newport to Colomiers
Shaun Knight to Rouen
James Wilson to Southland
Harry Davies to Bedford Blues
Jeff Williams to Rodez Aveyron
Darren Allinson released
Bristol Bears
Players In
Charles Piutau from Ulster
John Afoa from Gloucester
Shaun Malton from Exeter Chiefs
Nic Stirzaker from Melbourne Rebels
Yann Thomas from Rouen
Aly Muldowney from Grenoble
Tiff Eden from Nottingham
Harry Thacker from Leicester Tigers
Jake Heenan from Connacht
Jordan Lay from Edinburgh
Ollie Dawe promoted from Academy
Tom Lindsay from Bedford Blues
Jake Armstrong from Jersey Reds
Jake Woolmore from Jersey Reds
Tom Pincus from Jersey Reds
Lewis Thiede from Ealing Trailfinders
Piers O'Conor from Ealing Trailfinders
Luke Daniels from Ealing Trailfinders
Harry Randall from Gloucester
Ed Holmes from Exeter Chiefs
James Lay from Bay of Plenty
George Smith from Queensland Reds
Players Out
Jordan Williams to Dragons
Rhodri Williams to Dragons
Olly Robinson to Cardiff Blues
Max Crumpton to Harlequins
Ryan Bevington to Dragons
David Lemi to Chanlon
Jack O'Connell to Ealing Trailfinders
Tyler Gendall to Cornish Pirates
James Newey to Jersey Reds
Billy Searle to Wasps
Soane Tonga'uiha to Ampthill
Giorgi Nemsadze to Ospreys
Gaston Cortes to Leicester Tigers
Jack Wallace to Richmond
Dan Tuohy to Vannes
Jordan Liney to Hartpury College
Ross McMillan to Leicester Tigers
Alex Giltrow to Clifton
Jason Harris-Wright released
Thretton Palamo released
Ryan Glynn released
Ben Gompels released
Exeter Chiefs
Players In
Alex Cuthbert from Cardiff Blues
Santiago Cordero from Jaguares
Players Out
Kai Horstmann retired
Shaun Malton to Bristol Bears
Thomas Waldrom to Wellington Lions
Will Chudley to Bath
Ed Holmes to Bristol Bears
Julian Salvi retired
Carl Rimmer retired
Michele Campagnaro to Wasps
Gloucester
Players In
Matt Banahan from Bath
Franco Marais from Sharks
Jaco Kriel from Lions
Danny Cipriani from Wasps
Tom Hudson promoted from Academy
Gerbrandt Grobler from Munster
Will Safe promoted from Academy
Franco Mostert from Lions
Ruan Dreyer from Lions
Todd Gleave from London Irish
Kyle Traynor from Leicester Tigers
Mike Sherry from Munster (loan)
Players Out
Ross Moriarty to Dragons
Richard Hibbard to Dragons
John Afoa to Bristol Bears
Matt Scott to Edinburgh
Cameron Orr to Western Force
Andy Symons to Northampton Saints
Tom Denton to Ealing Trailfinders
Harry Randall to Bristol Bears
David Halaifonua to Coventry
Charlie Beckett to Jersey Reds
Jeremy Thrush to Western Force
Ed Bogue to Cinderford
Motu Matu'u to London Irish
Elliott Creed to Doncaster Knights
Billy Burns to Ulster
Alfie North to Ayr
Jacob Rowan retired
Carwyn Penny to Dragons
Mariano Galarza to Bordeaux
Mason Tonks to Worcester Warriors
Harlequins
Players In
Marcus Smith promoted from Academy
Nathan Earle from Saracens
Max Crumpton from Bristol Bears
Alex Dombrandt from Cardiff Metropolitan University
Ben Tapuai from Bath
Nick Auterac from Bath
Matt Symons from Wasps
Paul Lasike from Utah Warriors
Semi Kunatani from Toulouse
Players Out
Jamie Roberts to Bath
Winston Stanley retired
Adam Jones retired
Harry Sloan to Ealing Trailfinders
Sam Aspland-Robinson to Leicester Tigers
Charlie Matthews to Wasps
Ian Prior to Western Force
Cameron Holenstein to Jersey Reds
Sam Twomey to London Irish
Jono Kitto to Northland
Joe Gray to Northampton Saints (short-term deal)
Tim Swiel to Newcastle Falcons
Leicester Tigers
Players In
Guy Thompson from Wasps
Will Spencer from Worcester Warriors
David Denton from Worcester Warriors
James Voss from Jersey Reds
Sam Aspland-Robinson from Harlequins
Jimmy Stevens from Nottingham
Gaston Cortes from Bristol Bears
Owen Hills promoted from Academy
Charlie Thacker promoted from Academy
Fred Tuilagi promoted from Academy
George Worth promoted from Academy
Kyle Eastmond from Wasps
Campese Ma'afu from Northampton Saints
David Feao from Narbonne
Ross McMillan from Bristol Bears
Felipe Ezcurra from Jaguares (short-term deal)
Tom Varndell from Angouleme
Leonardo Sarto from Glasgow Warriors
Players Out
Harry Thacker to Bristol Bears
Dominic Barrow to Northampton Saints
Ben Betts to Ealing Trailfinders
Logovi'i Mulipola to Newcastle Falcons
George McGuigan to Newcastle Falcons
Joe Maksymiw to Connacht
Nick Malouf to Australia Sevens
George Catchpole retired
Michele Rizzo to Petrarca
Luke Hamilton to Edinburgh
Pat Cilliers to London Irish
Dominic Ryan retired
Afa Pakalani to NSW Country Eagles
Tom Brady to Carcassonne
Kyle Traynor to Gloucester
Chris Baumann released
Newcastle Falcons
Players In
Guy Graham from Hawick
Tom Arscott from Rouen
Logovi'i Mulipola from Leicester Tigers
George McGuigan from Leicester Tigers
Johnny Williams from London Irish
Connor Collett from North Harbour
Nemani Nagusa from Aurillac
Pedro Bettencourt from Carcassonne
Paul Mullen from Houston SaberCats (short-term deal)
Tim Swiel from Harlequins
John Hardie from Edinburgh
Rodney Ah You from Ulster
Players Out
Juan Pablo Socino to Edinburgh
Harrison Orr to Western Force
D.T.H. van der Merwe to Glasgow Warriors
Belisario Agulla to Hindu Club
Craig Willis to Ealing Trailfinders
Jake Ilnicki to Yorkshire Carnegie
Rob Vickers retired
Ally Hogg retired
Scott Lawson retired
Nick Civetta to Doncaster Knights
Maxime Mermoz to Toulouse
Nili Latu to Hino Red Dolphins
Evan Olmstead to Auckland
Ben Sowrey to Wharfedale
Cameron Cowell to Doncaster Knights (season-long loan)
Max Davies to Ealing Trailfinders
Andrew Davidson to Glasgow Warriors (short-term deal)
Scott Wilson retired
Northampton Saints
Players In
Dan Biggar from Ospreys
Taqele Naiyaravoro from NSW Waratahs
Will Davis from Ealing Trailfinders
Ben Franks from London Irish
Dominic Barrow from Leicester Tigers
Andy Symons from Gloucester
James Haskell from Wasps
Matt Worley from Racing 92
Charlie Davies from Dragons
Andrew Kellaway from NSW Waratahs
Joe Gray from Harlequins (short-term deal)
Players Out
Sam Dickinson to Ealing Trailfinders
Jordan Onojaife to Ealing Trailfinders
Nic Groom to Lions
Charlie Clare to Bedford Blues
Matt Beesley to Ealing Trailfinders
Christian Day retired
Rob Horne retired
George North to Ospreys
Ben Nutley to Coventry
Stephen Myler to London Irish
Tom Stephenson to London Irish
Kieran Brookes to Wasps
Tom Kessell to Coventry
Juan Pablo Estelles to Atlético del Rosario
Ben Foden to Rugby United New York
Jamie Elliott to Bedford Blues
Campese Ma'afu to Leicester Tigers
Alex Woolford to Coventry
Josh Peters to Blackheath
Michael Paterson released
Sale Sharks
Players In
Joe Jones from Perpignan
James Phillips from Bath
Rohan Janse van Rensburg from Lions
Chris Ashton from Toulon
Tom Bristow from Narbonne
Robert du Preez from Sharks (short-term deal)
Valery Morozov from Enisei-STM
Players Out
Mike Haley to Munster
Josh Charnley to Warrington Wolves
Will Addison to Ulster
David Seymour to Sale FC
Halani Aulika to Grenoble
TJ Ioane to London Irish
Marc Jones to Scarlets
Saracens
Players In
Alex Lewington from London Irish
David Strettle from Clermont
Tom Woolstencroft from London Irish
Viliami Hakalo from Nottingham
Christian Judge from Cornish Pirates (short-term loan)
Joe Gray from Northampton Saints
Hisa Sasagi from (short-term deal)
Chris van Zyl from Stormers (short-term deal)
Players Out
Schalk Brits retired
Nathan Earle to Harlequins
Chris Wyles retired
Kieran Longbottom to Western Force
Danny Cutmore to Cornish Pirates
Mark Flanagan to Bedford Blues
Matt Hankin retired
Mike Ellery to England Sevens
Joel Conlon retired
Wasps
Players In
Brad Shields from Hurricanes
Lima Sopoaga from Highlanders
Joe Atkinson from London Scottish
Ross Neal from London Scottish
Michael Le Bourgeois from Bedford Blues
Ben Morris from Nottingham
Billy Searle from Bristol Bears
Ambrose Curtis from Manawatu
Charlie Matthews from Harlequins
Tom West promoted from Academy
Will Stuart promoted from Academy
Nizaam Carr from Stormers
Kieran Brookes from Northampton Saints
Zurab Zhvania from Stade Francais
Michele Campagnaro from Exeter Chiefs
Players Out
Marty Moore to Ulster
Guy Thompson to Leicester Tigers
Sam Jones retired
Guy Armitage to Ealing Trailfinders
Will Owen to Nottingham
Danny Cipriani to Gloucester
James Haskell to Northampton Saints
Matt Symons to Harlequins
Alex Lundberg to Ealing Trailfinders
Kyle Eastmond to Leicester Tigers
Paul Doran-Jones to Rosslyn Park
Brendan Macken to London Irish
Christian Wade retired
Worcester Warriors
Players In
Callum Black from Ulster
Ashley Beck from Ospreys
Cornell du Preez from Edinburgh
Michael Heaney from Doncaster Knights
Isaac Miller from London Scottish
Scott van Breda from Jersey Reds
Jono Lance from Queensland Reds
Francois Venter from Cheetahs
Michael Fatialofa from Hurricanes
Duncan Weir from Edinburgh
Farai Mudariki from Tarbes
Justin Clegg promoted from Academy
Zac Xiourouppa promoted from Academy
Mason Tonks from Gloucester
Players Out
Donncha O'Callaghan retired
Huw Taylor to Dragons
Jackson Willison to Bath
Will Spencer to Leicester Tigers
David Denton to Leicester Tigers
Sam Olver to Ealing Trailfinders
Andrew Durutalo to Ealing Trailfinders
Michael Dowsett to Canon Eagles
Ben Howard to England Sevens
Kurt Haupt to SWD Eagles
Grayson Hart to London Scottish
Max Stelling to Hino Red Dolphins
Peter Stringer retired
Biyi Alo to Coventry
Tom Heathcote released
See also
List of 2018–19 Pro14 transfers
List of 2018–19 RFU Championship transfers
List of 2018–19 Super Rugby transfers
List of 2018–19 Top 14 transfers
List of 2018–19 Major League Rugby transfers
References
2018-19
transfers
|
41601904
|
https://en.wikipedia.org/wiki/Eye%20vein%20verification
|
Eye vein verification
|
Eye vein verification is a method of biometric authentication that applies pattern-recognition techniques to video images of the veins in a user's eyes. The complex and random patterns are unique, and modern hardware and software can detect and differentiate those patterns at some distance from the eyes.
Introduction
The veins in the sclera—the white part of the eyes—can be imaged when a person glances to either side, providing four regions of patterns: one on each side of each eye. Verification employs digital templates from these patterns, and the templates are then encoded with mathematical and statistical algorithms. These allow confirmation of the identity of the proper user and the rejection of anyone else.
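As a rough sketch of how such templates might be produced and compared, the Python example below enhances the vein pattern in an eye image, binarizes it into a fixed-size template, and scores two templates by pixel agreement. It relies on OpenCV and NumPy; the plain resize standing in for sclera segmentation, the filter parameters, and the acceptance threshold are illustrative assumptions, not the algorithm of any deployed system.

# Illustrative eye-vein template extraction and matching (not a
# production verification algorithm; parameters are placeholders).
import cv2
import numpy as np

def extract_template(image_path, size=(128, 64)):
    """Return a 0/1 vein-pattern template from an eye image."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # A real system would segment the sclera first; a fixed resize
    # of the whole image stands in for that step here.
    roi = cv2.resize(gray, size)
    # Local contrast enhancement makes the faint veins stand out.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(roi)
    # Adaptive thresholding yields a binary map of dark, vein-like lines.
    veins = cv2.adaptiveThreshold(enhanced, 255,
                                  cv2.ADAPTIVE_THRESH_MEAN_C,
                                  cv2.THRESH_BINARY_INV, 15, 4)
    return veins // 255

def match_score(template_a, template_b):
    """Fraction of pixels on which two templates agree (0..1)."""
    return float(np.mean(template_a == template_b))

# Hypothetical usage with two captures of the same eye:
# enrolled = extract_template("enrolled_eye.jpg")
# probe = extract_template("probe_eye.jpg")
# accepted = match_score(enrolled, probe) > 0.9   # threshold is illustrative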
Advocates of eye vein verification note that one of the technology's strengths is the stability of the pattern of eye blood vessels; the patterns do not change with age, alcohol consumption, allergies, or redness. Eye veins are clear enough that they can be reliably imaged by the cameras on most smartphones. The technology works through contacts and glasses, though not through sunglasses. At least one version of eye vein detection uses infrared illumination as part of the imaging, allowing imaging even in low-light conditions.
History
Dr. Reza Derakhshani at University of Missouri, Kansas City, developed the concept of using the veins in the whites of the eyes for identification. He holds several patents on the technology, including a 2008 patent for the concept of using the blood vessels seen in the whites of the eye as a unique identifier.
More recent research has explored using vein patterns in both the iris and the sclera for recognition.
Uses
Eye vein verification, like other methods of biometric authentication, can be used in a range of security situations, including mobile banking, government security, and in healthcare environments.
EyeVerify, a Kansas City, Kansas, company, markets eye vein verification with a system called Eyeprint. In 2012, EyeVerify licensed the technology developed and patented by Derakhshani, who now serves as the company's chief science officer.
Advantages
Eye vein patterns are unique to each person
Patterns do not change over time and are still readable with redness
Works with contacts and glasses
Resistant to false matches
Disadvantages
Phone must be held close to face
Not supported on devices without cameras or on older smartphones
See also
Iris recognition
Finger vein recognition
Fingerprint recognition
Biometrics
Voice recognition
Access control
References
Biometrics software
Authentication methods
Veins
Human eye
|
930698
|
https://en.wikipedia.org/wiki/Reseller
|
Reseller
|
A reseller is a company or individual (merchant) that purchases goods or services with the intention of selling them rather than consuming or using them. This is usually done for profit, although goods are sometimes resold at a loss. One example can be found in the telecommunications industry, where companies buy excess transmission capacity or call time from other carriers and resell it to smaller carriers.
According to the Institute for Partner Education & Development, a reseller's product fulfillment-based business model includes corporate resellers, retail, direct market resellers (DMR), and internet retailers (eTailers); less than 10 percent of its revenue comes from services.
Internet
Resellers are known to conduct operations on the Internet through sites on the web.
For example, this occurs where individuals or companies act as agents for ICANN-accredited registrars. They either sell on commission or for profit, and in most cases, but not all, the purchase from the registrar and the sale to the ultimate buyer occur in real time. These resellers are not to be confused with speculators, who purchase many domain names with the intention of holding them and selling them at some future time at a profit. Resellers, by the very nature of their business, are retailers, not wholesalers. It is not unheard of for online pawn shops like iPawn to also act as resellers, purchasing rather than loaning against valuables. Online auction and classifieds websites, such as those owned by eBay Inc. and Craigslist, provide services for resellers to sell their goods and services. However, although resellers are indeed retailers, it does not follow that all retailers are resellers.
Another common example of this is in the web hosting area, where a reseller will purchase bulk hosting from a supplier with the intention of reselling it to a number of consumers at a profit.
Software and ebooks
Software and ebooks are two products that are very easy for resellers to obtain. Their digital format makes them ideal for internet distribution. In many cases, such as brandable software, the reseller can even obtain the right to change the name of the software, claim it as their own, and resell it on an ebook shop hosting platform.
A software reseller is a consultant who sells the software from large companies under a licence. They have no legal employment status with the parent company and generally operate on a freelance basis.
Business model
The companies visited and pitched to by software resellers are often small and medium enterprises (SMEs), local businesses and niche operators. This benefits the software house, which may not have the resources for the legwork needed to spread its network at a smaller scale, while it benefits the reseller, who can build up a network of smaller clients and become their single point of contact for every aspect of the software, be it advice, training or updating.
Web resellers
A subcategory of reseller is a web operative who will buy a large amount of hosting space from an Internet service provider (ISP) and then resell some of this space to clients. Their hosting is often managed through a virtual private server (VPS) which allows them, through a control panel, to administer bandwidth, databases, passwords etc., for the client.
The popularity of this business model grew with the rise of freelance web designers as it enabled them to be the sole service provider for the client. After an initial consultation with the client they could subsequently design, develop and also host the site as a single operation.
See also
Arbitrage
Price discrimination
First-sale doctrine
Secondary market
Recommerce
References
External links
Sales
|
23132783
|
https://en.wikipedia.org/wiki/List%20of%20higher%20education%20institutions%20in%20Maharashtra
|
List of higher education institutions in Maharashtra
|
In Maharashtra, there is one central university, twenty-three state universities and twenty-one deemed universities.
Universities
Central University
Mahatma Gandhi Antarrashtriya Hindi Vishwavidyalaya, Wardha.
State Universities
National Law Universities
Deemed Universities
Private Universities
Private universities are approved by the UGC. They can grant degrees but they are not allowed to have off-campus affiliated colleges.
Engineering and Technology
Central Government
Central Institute of Plastics Engineering and Technology, Aurangabad
Indian Institute of Technology, Bombay
Indian Institute of Information Technology, Nagpur
Indian Institute of Information Technology, Pune
National Fire Service College, Nagpur
National Institute of Electronics & Information Technology, Aurangabad
National Institute of Industrial Engineering, Mumbai
National Power Training Institute, Nagpur
Visvesvaraya National Institute of Technology, Nagpur
Government of Maharashtra
University managed
Deemed University (State funded)
Medical
Central government
All India Institute of Medical Sciences, Nagpur
State government
Agriculture
Central Institute of Cotton Research, Nagpur
National Research Centre for Citrus, Nagpur
Armed force academies
Sainik School
Sainik School, Satara
Tri-service Institutes
National Defence Academy, Khadakwasla
Indian Army
Army Institute of Technology
College of Military Engineering, Pune
Defence Institute of Advanced Technology
Medical Personnel
Armed Forces Medical College
Other institutions
National School of Leadership
Bajaj Institute of Technology (http://bit.shikshamandal.org/)
Sarwodaya Research and Training Institute of Maharashtra
Music School
See also
List of institutions of higher education in Goa
References
M
|
5730131
|
https://en.wikipedia.org/wiki/Nuclear%20briefcase
|
Nuclear briefcase
|
A nuclear briefcase is a specially outfitted briefcase used to authorize the use of nuclear weapons; it is usually kept near the leader of a nuclear weapons state at all times.
France
In France, the nuclear briefcase does not exist officially. A black briefcase called the "mobile base" follows the president in all his trips, but it is not specifically devoted to nuclear force.
India
India does not have a nuclear briefcase. In India, the Political Council of the Nuclear Command Authority (NCA) must collectively authorize the use of nuclear weapons. The NCA Executive Council gives its opinion to the Political Council, which authorises a nuclear attack when deemed necessary. While the Executive Council is chaired by the National Security Advisor (NSA), the Political Council is chaired by the Prime Minister. This mechanism was implemented to ensure that Indian nuclear weapons remain firmly in civilian control and that there exists a sophisticated command and control mechanism to prevent their accidental or unauthorised use.
The Prime Minister is often accompanied by Special Protection Group personnel carrying a black briefcase. It contains foldable Kevlar protective armor and essential documents, and has a pocket that can hold a pistol.
Pakistan
On 11 April 2019, the BBC revealed footage of Prime Minister Imran Khan carrying a black briefcase that contains the codes to Pakistan's nuclear weapons.
Russia
Russia's "nuclear briefcase" is code-named Cheget. It "supports communication between senior government officials while they are making the decision whether to use nuclear weapons, and in its own turn is plugged into the special communication system Kazbek, which embraces all the individuals and agencies involved in command and control of the Strategic Nuclear Forces." It is usually assumed, although not known with certainty, that the nuclear briefcases are also issued to the Minister of Defense and the Chief of General Staff of the Russian Federation.
United States
Contents
Operation
Briefcases in fiction
Cinema and literature have approached this subject several times, notably:
Film
The Dead Zone (1983)
While shaking hands with Greg Stillson, a candidate for the United States Senate, at a campaign rally, Johnny Smith has a prophetic vision in which Stillson becomes President of the United States and launches a nuclear attack against Russia, scanning his palm on a computer terminal to validate the launch of the missiles.
The Peacekeeper (1997)
A group of rogue veterans turned terrorists manages to steal the briefcase.
Deterrence (1999)
Fictional President of the United States Walter Emerson uses his nuclear briefcase in this movie to authorize a nuclear attack on the city of Baghdad.
24 (TV series, 2005): Terrorists get their hands on the nuclear briefcase and steal a page from the playbook containing activation codes and locations for warheads.
Swing Vote (2008)
The incumbent president attempts to impress a key voter by letting him hold the nuclear football.
Salt (2010)
Near the end of the film, the President of the United States reacts to Russia's threatening nuclear posture, following the death of the Russian President at the apparent hands of an American agent, by deploying the briefcase and authenticating his identity. Shortly afterwards, a Soviet sleeper agent kills his security detail and uses the briefcase to issue nuclear attack orders.
Mission: Impossible – Ghost Protocol (2011)
G.I. Joe: Retaliation (2013)
White House Down (2013)
Scorpion, season 1 episode 15 (2015): A team must recover a "football" stolen sixteen years earlier in the course of a surgical operation. The thieves had already tried to launch a strike using an American nuclear silo in Iceland, but they failed.
The Fate of the Furious (2017)
Literature
Langelot et la Clef de la guerre, a children's spy novel by Vladimir Volkoff.
The key commanding the firing of nuclear missiles is stolen from the President of France.
See also
Letters of last resort – (United Kingdom)
References
External links
Shattered Shield. Cold-War Doctrines Refuse to Die By David Hoffman, Washington Post, March 15, 1998
Military communications
Nuclear command and control
United Kingdom nuclear command and control
Cabinet Office (United Kingdom)
|
1602071
|
https://en.wikipedia.org/wiki/Screencast
|
Screencast
|
A screencast is a digital recording of computer screen output, also known as a video screen capture or a screen recording, often containing audio narration. The term screencast compares with the related term screenshot; whereas a screenshot generates a single picture of a computer screen, a screencast is essentially a movie of the changes over time that a user sees on a computer screen, which can be enhanced with audio narration and captions.
Etymology
In 2004, columnist Jon Udell invited readers of his blog to propose names for the emerging genre. Udell selected the term "screencast", which was proposed by both Joseph McDonald and Deeje Cooley.
The terms "screencast" and "screencam" are often used interchangeably, due to the market influence of ScreenCam as a screencasting product of the early 1990s. ScreenCam, however, is a federal trademark in the United States, whereas screencast is not trademarked and has established use in publications as part of Internet and computing vernacular.
Uses
Screencasts can help demonstrate and teach the use of software features. Creating a screencast helps software developers show off their work. Educators may also use screencasts as another means of integrating technology into the curriculum. Students can record video and audio as they demonstrate the proper procedure to solve a problem on an interactive whiteboard.
Screencasts are useful tools for ordinary software users as well: they help in filing bug reports, where the screencast takes the place of a potentially unclear written explanation, and they help show others how a given task is accomplished in a specific software environment.
Organizers of seminars may choose to routinely record complete seminars and make them available to all attendees for future reference, and/or sell these recordings to people who cannot afford the fee of the live seminar or do not have the time to attend it. This generates an additional revenue stream for the organizers and makes the knowledge available to a broader audience.
This strategy of recording seminars is already widely used in fields where a simple video camera or audio recorder is insufficient to make a useful recording. Computer-related seminars need high-quality, easily readable recordings of screen contents, which a video camera pointed at the screen usually cannot achieve.
In classrooms, teachers and students can use this tool to create videos that explain content, vocabulary, and so on. Videos can make class time more productive for both teachers and students. Screencasts may increase student engagement and achievement and also free up class time in which students can work collaboratively in groups, supporting cooperative learning.
In addition, screencasts allow students to move at their own pace since they can pause or review content anytime and anywhere. Screencasts are excellent for those learners who just need an oral as well as a visual explanation of the content presented.
Software
The Xbox app included in Windows 10 has a built-in screen recorder.
Trial versions of screencasting programs often apply a watermark, encouraging users to purchase the full version in order to remove it.
Open-source tools such as Open Broadcaster Software and ShareX exist for both screencasting and live streaming the recorded video. Open Broadcaster Software in particular is widely used for video game live streaming because it can handle additional sources such as cameras and microphones.
Notable proprietary programs include Screencast-O-Matic, CloudApp, and Camtasia.
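At their core, most screencasting tools repeatedly capture the contents of the screen and encode the frames into a video file. The following minimal sketch illustrates that loop in Python; it assumes the third-party packages mss (screen capture) and opencv-python (video encoding) are installed, and it deliberately omits audio narration, which real tools record on a separate path.

```python
# Minimal screencast sketch (illustrative only): repeatedly grab the primary
# monitor and encode the frames into a video file. Assumes the third-party
# packages "mss" (screen capture) and "opencv-python" (encoding) are installed.
import time

import cv2
import numpy as np
from mss import mss

FPS = 10          # capture rate; real tools typically use 15-60 fps
DURATION_S = 5    # record five seconds in this example

with mss() as sct:
    monitor = sct.monitors[1]                      # primary monitor geometry
    first = np.array(sct.grab(monitor))            # BGRA buffer, shape (h, w, 4)
    height, width = first.shape[:2]                # use the real captured size
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    writer = cv2.VideoWriter("screencast.mp4", fourcc, FPS, (width, height))

    for _ in range(FPS * DURATION_S):
        start = time.time()
        frame = np.array(sct.grab(monitor))[:, :, :3]   # drop alpha -> BGR
        writer.write(np.ascontiguousarray(frame))
        # sleep out the rest of the frame interval to keep a steady rate
        time.sleep(max(0.0, 1.0 / FPS - (time.time() - start)))

    writer.release()
```

The frame size is taken from the first captured image rather than from the reported monitor geometry, since high-DPI displays may return a larger buffer than the nominal resolution.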
Hardware
An alternative solution for capturing a screencast is the use of a hardware RGB or DVI frame grabber card. This approach places the burden of the recording and compression process on a machine separate from the one generating the visual material being captured.
In popular culture
The films Unfriended, Unfriended: Dark Web, and Searching contain screencasts that were simulated for the purposes of the film.
See also
Comparison of screencasting software
Online lecture
Slidecast
Screenshot
Software vision mixer
Live streaming
Video capture
References
Further reading
External links
Articles containing video clips
Digital container formats
Film and video technology
Graphical user interfaces
Screencasting software
Training
|
19214033
|
https://en.wikipedia.org/wiki/NexentaStor
|
NexentaStor
|
NexentaStor is an OpenSolaris-based, and more recently Illumos-based, distribution optimized for virtualization, storage area networks, network-attached storage, and iSCSI or Fibre Channel applications employing the ZFS file system.
Like OpenSolaris, NexentaStor is a Unix-like operating system. Nexenta Systems started NexentaStor as a fork of another OpenSolaris distribution, Illumos.
NexentaStor supports iSCSI, unlimited incremental backups ('snapshots'), snapshot mirroring (replication), continuous data protection, integrated search within ZFS snapshots, and an API.
Nexenta distributes the operating system as a disk image. The Community Edition is available free of charge to users with up to 10 TB of used disk space who deploy the operating system in a non-production environment.
NexentaStor Community Edition includes all the common storage area network features of the production version, but if the amount of disk data addressed by the system exceeds 18 TB, the operating system locks most administration functions.
NexentaStor's predecessor was Nexenta OS.
References
External links
NexentaStor Community Edition
Computer storage devices
OpenSolaris-derived software distributions
|
42032995
|
https://en.wikipedia.org/wiki/Smartglasses
|
Smartglasses
|
Smartglasses or smart glasses are wearable computer glasses that add information alongside or to what the wearer sees. Alternatively, smartglasses are sometimes defined as wearable computer glasses that are able to change their optical properties at runtime. Smart sunglasses which are programmed to change tint by electronic means are an example of the latter type of smartglasses.
Superimposing information onto a field of view is achieved through an optical head-mounted display (OHMD) or embedded wireless glasses with a transparent heads-up display (HUD) or augmented reality (AR) overlay. These systems can reflect projected digital images while allowing the user to see through them or to see better with them. While early models could perform only basic tasks, such as serving as a front-end display for a remote system, as in the case of smartglasses utilizing cellular technology or Wi-Fi, modern smart glasses are effectively wearable computers which can run self-contained mobile apps. Some are hands-free and can communicate with the Internet via natural-language voice commands, while others use touch buttons.
Like other computers, smartglasses may collect information from internal or external sensors. They may control or retrieve data from other instruments or computers and may support wireless technologies such as Bluetooth, Wi-Fi, and GPS. A small number of models run a mobile operating system and function as portable media players, sending audio and video files to the user via a Bluetooth or Wi-Fi headset. Some smartglasses models also feature full lifelogging and activity tracker capability.
Smartglasses devices may also have features found on a smartphone. Some have activity tracker functionality features (also known as "fitness tracker") as seen in some GPS watches.
Features and applications
As with other lifelogging and activity tracking devices, the GPS tracking unit and digital camera of some smartglasses can be used to record historical data. For example, after the completion of a workout, data can be uploaded into a computer or online to create a log of exercise activities for analysis. Some smart watches can serve as full GPS navigation devices, displaying maps and current coordinates. Users can "mark" their current location and then edit the entry's name and coordinates, which enables navigation to those new coordinates.
Although some smartglasses models manufactured in the 21st century are completely functional as standalone products, most manufacturers recommend or even require that consumers purchase mobile phone handsets that run the same operating system so that the two devices can be synchronized for additional and enhanced functionality. The smartglasses can work as an extension, for head-up display (HUD) or remote control of the phone and alert the user to communication data such as calls, SMS messages, emails, and calendar invites.
Security applications
Smart glasses could be used as a body camera. In 2018, Chinese police in Zhengzhou and Beijing were using smart glasses to take photos which are compared against a government database using facial recognition to identify suspects, retrieve an address, and track people moving beyond their home areas.
Healthcare applications
Several proofs of concept for Google Glasses have been proposed in healthcare. In July 2013, Lucien Engelen started research on the usability and impact of Google Glass in health care. Engelen, who is based at Singularity University and in Europe at Radboud University Medical Center, is participating in the Glass Explorer program.
Key findings of Engelen's research included:
The quality of pictures and video is usable for healthcare education, reference, and remote consultation. The camera needs to be tilted to a different angle for most operative procedures.
Tele-consultation is possible—depending on the available bandwidth—during operative procedures.
A stabilizer should be added to the video function to prevent choppy transmission when a surgeon looks to screens or colleagues.
Battery life can be easily extended with the use of an external battery.
Controlling the device and/or programs from another device is needed for some features because of a sterile environment.
Text-to-speech ("Take a Note" to Evernote) exhibited a correction rate of 60 percent, without the addition of a medical thesaurus.
A protocol or checklist displayed on the screen of Google Glass can be helpful during procedures.
Dr. Phil Haslam and Dr. Sebastian Mafeld demonstrated the first concept for Google Glass in the field of interventional radiology. They demonstrated the manner in which the concept of Google Glass could assist a liver biopsy and fistulaplasty, and the pair stated that Google Glass has the potential to improve patient safety, operator comfort, and procedure efficiency in the field of interventional radiology. In June 2013, surgeon Dr. Rafael Grossmann was the first person to integrate Google Glass into the operating theater, when he wore the device during a PEG (percutaneous endoscopic gastrostomy) procedure. In August 2013, Google Glass was also used at Wexner Medical Center at Ohio State University. Surgeon Dr. Christopher Kaeding used Google Glass to consult with a colleague in a distant part of Columbus, Ohio. A group of students at The Ohio State University College of Medicine also observed the operation on their laptop computers. Following the procedure, Kaeding stated, "To be honest, once we got into the surgery, I often forgot the device was there. It just seemed very intuitive and fit seamlessly."
On 16 November 2013, in Santiago de Chile, the maxillofacial team led by Dr. Antonio Marino conducted the first orthognathic surgery in Latin America assisted by Google Glass, interacting with the devices and working with simultaneous three-dimensional navigation. The surgical team was interviewed by ADN radio. In January 2014, Indian orthopedic surgeon Selene G. Parekh conducted foot and ankle surgery using Google Glass in Jaipur, which was broadcast live on Google's website via the internet. The surgery was held during a three-day annual Indo-US conference attended by a team of experts from the US and co-organized by Ashish Sharma. Sharma said Google Glass allows a doctor to look at an X-ray or MRI without taking their eyes off the patient and to communicate with a patient's family or friends during a procedure.
In Australia, during January 2014, Melbourne tech startup Small World Social collaborated with the Australian Breastfeeding Association to create the first hands-free breastfeeding Google Glass application for new mothers. The application, named Google Glass Breastfeeding app trial, allows mothers to nurse their baby while viewing instructions about common breastfeeding issues (latching on, posture etc.) or call a lactation consultant via a secure Google Hangout, who can view the issue through the mother's Google Glass camera. The trial was successfully concluded in Melbourne in April 2014, and 100% of participants were breastfeeding confidently.
Display types
Various techniques have existed for see-through HMDs. Most of these techniques can be summarized into two main families: "Curved Mirror" (or Curved Combiner) based and "Waveguide" or "Light-guide" based. The mirror technique has been used in EyeTaps, by Meta in their Meta 1, by Vuzix in their Star 1200 product, by Olympus, and by Laster Technologies.
Various waveguide techniques have existed for some time. These techniques include diffraction optics, holographic optics, polarized optics, reflective optics, and projection:
Diffractive waveguide – slanted diffraction grating elements (nanometric, on the order of 10^-9 m). A Nokia technique now licensed to Vuzix.
Holographic waveguide – 3 holographic optical elements (HOE) sandwiched together (RGB). Used by Sony and Konica Minolta.
Reflective waveguide – A thick light guide with single semi-reflective mirror is used by Epson in their Moverio product. A curved light guide with partial-reflective segmented mirror array to out-couple the light is used by tooz technologies GmbH.
Virtual retinal display (VRD) – Also known as a retinal scan display (RSD) or retinal projector (RP), is a display technology that draws a raster display (like a television) directly onto the retina of the eye - developed by MicroVision, Inc.
The Technical Illusions castAR uses a different technique with clear glass. The glasses have a projector, and the image is returned to the eye by a reflective surface.
Smart sunglasses
Smart sunglasses which are able to change their light filtering properties at runtime generally use liquid crystal technology. As lighting conditions change, for example when the user goes from indoors to outdoors, the brightness ratio also changes and can cause undesirable vision impairment. An attractive solution for overcoming this issue is to incorporate dimming filters into smart sunglasses which control the amount of ambient light reaching the eye. An innovative liquid crystal based component for use in the lenses of smart sunglasses is PolarView by LC-Tec. PolarView offers analog dimming control, with the level of dimming being adjusted by an applied drive voltage.
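To make the dimming behaviour concrete, the sketch below shows one way a controller might map an ambient-light reading to a normalised drive level for a liquid-crystal dimming filter. It is purely illustrative: the function name, the lux thresholds, and the 0.0–1.0 drive scale are hypothetical and do not describe PolarView or any vendor's actual interface.

```python
# Illustrative controller sketch (hypothetical names and thresholds, not a
# vendor API): map an ambient-light reading in lux to a normalised drive
# level for a liquid-crystal dimming filter.
import math

def dim_level_for_lux(ambient_lux: float,
                      clear_below_lux: float = 200.0,
                      full_dim_above_lux: float = 100_000.0) -> float:
    """Return a drive level in [0.0, 1.0].

    0.0 keeps the lens fully clear (e.g. indoors); 1.0 applies maximum
    dimming (e.g. direct sunlight). Interpolation is linear on a log scale,
    which roughly matches how perceived brightness varies with illuminance.
    """
    if ambient_lux <= clear_below_lux:
        return 0.0
    if ambient_lux >= full_dim_above_lux:
        return 1.0
    span = math.log10(full_dim_above_lux) - math.log10(clear_below_lux)
    return (math.log10(ambient_lux) - math.log10(clear_below_lux)) / span

if __name__ == "__main__":
    for lux in (50, 500, 5_000, 50_000, 150_000):
        print(f"{lux:>7} lux -> drive level {dim_level_for_lux(lux):.2f}")
```

In a real device the resulting level would be converted to the drive voltage appropriate for the particular liquid-crystal cell.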
Another type of smart sunglasses uses adaptive polarization filtering (ADF). ADF-type smart sunglasses can change their polarization filtering characteristics at runtime. For example, ADF-type smart sunglasses can change from horizontal polarization filtering to vertical polarization filtering at the touch of a button.
The lenses of smart sunglasses can be manufactured out of multiple adaptive cells, so different parts of the lens can exhibit different optical properties. For example, the top of the lens can be electronically configured to have different polarization filter characteristics and different opacity than the lower part of the lens.
Human Computer Interface (HCI) control input
Head-mounted displays are not designed to be workstations, and traditional input devices such as the keyboard and mouse do not suit the concept of smartglasses. Instead, human–computer interface (HCI) control inputs that lend themselves to mobility and/or hands-free use are good candidates, for example:
Touchpad or buttons
Compatible devices (e.g. smartphones or control unit) for remote control
Speech recognition
Gesture recognition
Eye tracking
Brain–computer interface
Notable products
In development
b.g. (Beyond Glasses) by Meganesuper Co., Ltd. – adjustable wearable display that can be attached to regular prescription glasses
castAR by Technical Illusions – wearable AR device for gaming
Apple AR glasses – wearable AR device for Apple devices (not officially announced but rumored)
Xiaomi Smart Glasses by Xiaomi – wearable AR device
Essnz Berlin by tooz technologies GmbH
Current
Airscouter, a virtual retinal display made by Brother Industries
Epiphany Eyewear - smart glasses developed by Vergence Labs, a subsidiary of Snap Inc.
Epson Moverio BT-300/350 and Moverio Pro BT-2000/2200 – augmented reality smartglasses by Epson.
Everysight Raptor – smart glasses for cyclists.
EyeTap – eye-mounted camera and head-up display (HUD).
Ray-Ban Stories, built in a partnership with Facebook
Golden-i Infinity – a wearable smart screen for Android or Win10 host devices.
Google Glass – optical head-mounted display.
Iristick.G1 – The first industrial iOS and Android compatible smart safety glasses manufactured by Iristick.
Lucyd Lytes - The first 100+ hour battery life smartglasses
Magic Leap
Microsoft HoloLens - a pair of mixed reality smart glasses with high-definition 3D optical head-mounted display and spatial sound developed and manufactured by Microsoft, using the Windows Holographic platform.
Pivothead SMART – "Simple Modular Application-Ready Technology", released in October 2014
SixthSense – wearable AR device.
Spectacles - sunglasses with an embedded wearable camera by Snap Inc.
Vue - prescription smart glasses with audio features including playing music, making calls, activity tracking, notifications and voice assistants.
Vuzix – Augmented reality glasses for 3D gaming, manufacturing training, and military applications.
Photons - Wearable augmented reality smart glasses for fitness gaming created by PhotonLens, partnered with Shadow Creator.
MAD Gaze - Creators of several MR smart glasses such as Ares, X5, X5S, Vader, & GLOW, intended to take the place of tablets and laptops.
Discontinued
Looxcie – ear-mounted streaming video camera
History
2012
On 17 April 2012, Oakley's CEO Colin Baden stated that the company has been working on a way to project information directly onto lenses since 1997, and has 600 patents related to the technology, many of which apply to optical specifications.
On 18 June 2012, Canon announced the MR (Mixed Reality) System, which simultaneously merges virtual objects with the real world at full scale and in 3D. Unlike Google Glass, the MR System is aimed at professional use, with a price tag of $125,000 for the headset and accompanying system and $25,000 in expected annual maintenance.
2013
At MWC 2013, the Japanese company Brilliant Service introduced the Viking OS, an operating system for HMDs which was written in Objective-C and relies on gesture control as a primary form of input. It includes a facial recognition system and was demonstrated on a revamped version of Vuzix STAR 1200XL glasses ($4,999) which combined a generic RGB camera and a PMD CamBoard nano depth camera.
At Maker Faire 2013, the startup company Technical Illusions unveiled CastAR augmented reality glasses which are well equipped for an AR experience: infrared LEDs on the surface detect the motion of an interactive infrared wand, and a set of coils at its base are used to detect RFID chip loaded objects placed on top of it; it uses dual projectors at a frame rate of 120 Hz and a retro-reflective screen providing a 3D image that can be seen from all directions by the user; a camera sitting on top of the prototype glasses is incorporated for position detection, thus the virtual image changes accordingly as a user walks around the CastAR surface.
At D11 Conference 2013, the startup company Atheer Labs unveiled its 3D augmented reality glasses prototype. The prototype includes binocular lens, 3D images support, a rechargeable battery, WiFi, Bluetooth 4.0, accelerometer, gyro and an IR. User can interact with the device by voice commands and the mounted camera allows the users to interact naturally with the device with gestures.
2014
The Orlando Magic, Indiana Pacers, and other NBA teams used Google Glass on the CrowdOptic platform to enhance the in-game experience for fans.
Rhode Island Hospital's Emergency Department became the first emergency department to experiment with Google Glass applications.
2018
Intel announced Vaunt, a set of smart glasses designed to appear like conventional glasses; they were display-only, using retinal projection. The project was later shut down.
Zeiss and Deutsche Telekom partnered to form tooz technologies GmbH to develop optical elements for smart glass displays.
2021
Facebook Reality Labs and Ray-Ban announced a collaboration project called Ray-Ban Stories. Unlike previous smart glasses from other companies, Ray-Ban Stories have no HUD or AR display, but they have integrated cameras, speakers, and microphones running through a Qualcomm Snapdragon processor and connect via Bluetooth to integrate with Facebook on the user's phone.
Market structure
Analytics company IHS has estimated that shipments of smart glasses may rise from just 50,000 units in 2012 to as high as 6.6 million units in 2016. According to a survey of more than 4,600 U.S. adults conducted by Forrester Research, around 12 percent of respondents are willing to wear Google Glass or a similar device if it offers a service that piques their interest. Business Insider's BI Intelligence expects annual sales of 21 million Google Glass units by 2018. Samsung and Microsoft are expected to develop their own versions of Google Glass within six months with a price range of $200 to $500. Samsung has reportedly bought lenses from Lumus, a company based in Israel. Another source says Microsoft is negotiating with Vuzix. In 2006, Apple filed a patent for its own HMD device. In July 2013, APX Labs founder and CEO Brian Ballard stated that he knows of 25 to 30 hardware companies which are working on their own versions of smartglasses, some of which APX is working with.
In fact, only about 150,000 AR glasses were shipped to customers worldwide in 2016, despite the strong opinion of CEOs of leading tech companies that AR is entering our lives. This points to serious technical limitations that prevent OEMs from offering a product that balances functionality with customers' reluctance to wear a bulky facial or head-mounted device every day. One possible solution is to transfer the battery, processing power and connectivity from the AR glasses frame to an external wire-connected device such as a smart necklace, which could allow the development of AR glasses that serve as a display only: light, cheap and stylish.
Public reception for commercial usage
Critical reception
In November 2012, Google Glass received recognition by Time Magazine as one of the "Best Inventions of the Year 2012", alongside inventions such as the Curiosity Rover. After a visit to the University of Cambridge by Google's chairman Eric Schmidt in February 2013, Wolfson College professor John Naughton praised the Google Glass and compared it with the achievements of hardware and networking pioneer Douglas Engelbart. Naughton wrote that Engelbart believed that machines "should do what machines do best, thereby freeing up humans to do what they do best". Lisa A. Goldstein, a freelance journalist who was born profoundly deaf, tested the product on behalf of people with disabilities and published a review on 6 August 2013. In her review, Goldstein states that Google Glass does not accommodate hearing aids and is not suitable for people who cannot understand speech. Goldstein also explained the limited options for customer support, as telephone contact was her only means of communication.
In December 2013, David Datuna became the first artist to incorporate Google Glass into a contemporary work of art. The artwork debuted at a private event at The New World Symphony in Miami Beach, Florida, US and was moved to the Miami Design District for the public debut. Over 1500 people used Google Glass to experience Datuna's American flag from his "Viewpoint of Billions" series.
After a negative public reaction, the retail availability of Google Glass ended in January 2015, and the company moved to focus on business customers in 2017.
Privacy concerns
Google Glass's functionality and minimalist appearance have been compared to Steve Mann's EyeTap, also known as "Glass" or "Digital Eye Glass", although Google Glass is a "Generation-1 Glass" compared to EyeTap, which is a "Generation-4 Glass". According to Mann, both devices affect privacy and secrecy by introducing two-sided surveillance and sousveillance. Concerns have been raised by various sources regarding the intrusion of privacy and the etiquette and ethics of using the device in public and recording people without their permission. There is controversy over whether Google Glass would violate privacy rights due to security problems and other issues.
Privacy advocates are concerned that people wearing such eyewear may be able to identify strangers in public using facial recognition, or surreptitiously record and broadcast private conversations. Some companies in the U.S. have posted anti-Google Glass signs in their establishments. In July 2013, prior to the official release of the product, Stephen Balaban, co-founder of software company Lambda Labs, circumvented Google’s facial recognition app block by building his own, non-Google-approved operating system. Balaban then installed face-scanning Glassware that creates a summary of commonalities shared by the scanned person and the Glass wearer, such as mutual friends and interests. Additionally, Michael DiGiovanni created Winky, a program that allows a Google Glass user to take a photo with a wink of an eye, while Marc Rogers, a principal security researcher at Lookout, discovered that Glass can be hijacked if a user could be tricked into taking a picture of a malicious QR code.
Other concerns have been raised regarding the legality of Google Glass in a number of countries, particularly in Russia, Ukraine, and other post-USSR countries. In February 2013, a Google+ user noticed legal issues with Google Glass and posted in the Google Glass community about the issues, stating that the device may be illegal to use according to the current legislation in Russia and Ukraine, which prohibits the use of spy gadgets that can record video, audio or take photographs in an inconspicuous manner. Concerns were also raised regarding the privacy and security of Google Glass users in the event that the device is stolen or lost, an issue raised by a US congressional committee. As part of its response to the committee, Google stated in early July that it is working on a locking system and raising awareness of users' ability to remotely reset Google Glass from the web interface in the event of loss. Several facilities banned the use of Google Glass before its release to the general public, citing concerns over potential privacy-violating capabilities. Other facilities, such as Las Vegas casinos, banned Google Glass, citing their desire to comply with Nevada state law and common gaming regulations which ban the use of recording devices near gambling areas.
Safety considerations
Concerns have also been raised on operating motor vehicles while wearing the device. On 31 July 2013 it was reported that driving while wearing Google Glass is likely to be banned in the UK, being deemed careless driving, therefore a fixed penalty offense, following a decision by the Department for Transport. In the U.S., West Virginia state representative Gary G. Howell introduced an amendment in March 2013 to the state's law against texting while driving that would include bans against "using a wearable computer with head mounted display." In an interview, Howell stated, "The primary thing is a safety concern, it [the glass headset] could project text or video into your field of vision. I think there's a lot of potential for distraction."
In October 2013, a driver in California was ticketed for "driving with monitor visible to driver (Google Glass)" after being pulled over for speeding by a San Diego Police Department officer. The driver was reportedly the first to be ticketed for driving while wearing a Google Glass. While the judge noted that 'Google Glass fell under "the purview and intent" of the ban on driving with a monitor', the case was thrown out of court due to lack of proof the device was on at the time.
Functionality considerations
Today most AR devices look bulky, and applications such as navigation, a real-time tourist guide, and recording can drain smart glasses' batteries in about 1–4 hours. Battery life might be improved by using lower-power display systems (as with the Vaunt) or by wearing a battery pack elsewhere on the body (such as a belt pack or a companion smart necklace).
See also
Head-mounted display
Wearable technology
Quantified self
Bionic contact lens
References
Further reading
3D VIS Lab, University of Arizona – "Head-Mounted Display Systems" by Jannick Rolland and Hong Hua
Optinvent – "Key Challenges to Affordable See Through Wearable Displays: The Missing Link for Mobile AR Mass Deployment" by Kayvan Mirza and Khaled Sarayeddine
Optics & Photonics News – "A review of head-mounted displays (HMD) technologies and applications for consumer electronics" by Jannick Rolland and Ozan Cakmacki
Google Inc. – "A review of head-mounted displays (HMD) technologies and applications for consumer electronics" by Bernard Kress & Thad Starner (SPIE proc. # 8720, 31 May 2013)
SPIE Newsroom – Bernard Kress plenary: Designing the next generation of wearable displays (31 August 2015, SPIE Newsroom)
Display technology
Eyewear
Mixed reality
Multimodal interaction
Augmented reality
Emerging technologies
Wearable computers
Display devices
Mobile computers
Personal digital assistants
Human–computer interaction
Ubiquitous computing
Japanese inventions
Embedded Linux
Navigational equipment
|
21018561
|
https://en.wikipedia.org/wiki/Daniel%20Jackson%20%28computer%20scientist%29
|
Daniel Jackson (computer scientist)
|
Daniel Jackson (born 1963) is a professor of Computer Science at the Massachusetts Institute of Technology (MIT). He is the principal designer of the Alloy modelling language, and author of the book Software Abstractions: Logic, Language, and Analysis.
Biography
Jackson was born in London, England, in 1963.
He studied physics at the University of Oxford, receiving an MA in 1984. After completing his MA, Jackson worked for two years as a software engineer at Logica UK Ltd. He then returned to academia to study computer science at MIT, where he received an SM in 1988, and a PhD in 1992. Following the completion of his doctorate Jackson took up a position as an Assistant Professor of Computer Science at Carnegie Mellon University, which he held until 1997. He has been on the faculty of the Department of Electrical Engineering and Computer Science at MIT since 1997.
In 2017 he became a Fellow of the Association for Computing Machinery.
Jackson is also a photographer, and has an interest in the straight photography style. The MIT Museum commissioned a series of photographs of MIT laboratories from him, displayed from May to December 2012, to accompany an exhibit of images by Berenice Abbott.
Jackson is the son of software engineering researcher Michael A. Jackson, developer of Jackson Structured Programming (JSP), Jackson System Development (JSD), and the Problem Frames Approach.
Research
Jackson's research is broadly concerned with improving the dependability of software. He is a proponent of lightweight formal methods. Jackson and his students developed the Alloy language and its associated Alloy Analyzer analysis tool to provide support for lightweight specification and modelling efforts.
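The core idea behind the Alloy Analyzer is bounded exhaustive analysis: rather than proving a property for all possible models, it searches every model up to a small size for a counterexample. The Python sketch below is not Alloy and does not use Jackson's tools; it is only an illustration, under that "small scope" assumption, of what such a search looks like for a toy relational property.

```python
# Illustrative sketch only -- this is Python, not Alloy, and not Jackson's
# tools. It mimics the Alloy Analyzer's style of analysis: instead of proving
# a property for all models, exhaustively search every model up to a small
# bound for a counterexample.
from itertools import combinations, product

def is_transitive(rel):
    # rel is a set of ordered pairs; transitive means (a,b) and (b,c) imply (a,c)
    return all((a, c) in rel for a, b in rel for b2, c in rel if b == b2)

def find_counterexample(max_atoms=3):
    """Toy claim to check: every symmetric, irreflexive relation on up to
    max_atoms atoms is transitive. Returns a counterexample relation, if any."""
    for n in range(1, max_atoms + 1):
        pairs = [(a, b) for a, b in product(range(n), repeat=2) if a != b]
        for size in range(len(pairs) + 1):
            for rel in map(frozenset, combinations(pairs, size)):
                symmetric = all((b, a) in rel for a, b in rel)
                if symmetric and not is_transitive(rel):
                    return rel
    return None

print("counterexample:", find_counterexample())  # e.g. frozenset({(0, 1), (1, 0)})
```

The toy claim is false, and the bounded search finds a two-atom counterexample immediately, which reflects the practical observation behind lightweight formal methods that many specification errors already show up in very small models.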
Between 2004 and 2007, Jackson chaired a multi-year United States National Research Council study on dependable systems.
Selected publications
References
External links
Daniel Jackson MIT home page
Daniel Jackson photography website
1963 births
Living people
Photographers from London
Alumni of the University of Oxford
British computer programmers
British expatriate academics in the United States
MIT School of Engineering alumni
Carnegie Mellon University faculty
MIT School of Engineering faculty
English computer scientists
Formal methods people
Software engineering researchers
Computer science writers
20th-century British photographers
21st-century British photographers
Fellows of the Association for Computing Machinery
|
45622586
|
https://en.wikipedia.org/wiki/Danny%20Greefhorst
|
Danny Greefhorst
|
Danny Greefhorst (born 31 December 1972) is a Dutch enterprise architect and consultant at ArchiXL, known for his work in the field of enterprise architecture.
Biography
Greefhorst obtained his master's degree in computer science at Utrecht University in 1995 with the master's thesis "A Simulation Environment for Ariadne." Furthermore, he became an IBM Certified Senior IT Architect in 2004. He is TOGAF 9 level 2 and ArchiMate 2.0 certified.
After graduation Greefhorst started his career at the Software Engineering Research Centre in Utrecht in 1995. He worked in various roles from software researcher, designer, architect, developer, tester and webmaster to class instructor, coach, seminar organiser, and IT consultant, and published his first papers. In 2001 he moved to IBM, where he became Senior IT Architect for five years. After another year as Principal Consultant at the management consultancy firm Yellowtail, he started his own enterprise architecture consultancy firm named ArchiXL.
Since 2010 Greefhorst chairs the governing board of Via Nova Architectura, and since 2014 also chairs the governing board of the Special Interest Group on architecture of the Dutch Computer Society Ngi-NGN. In 2011 he received a medal of honor from the Dutch Architecture Forum for his contributions to the Dutch enterprise architecture community.
Work
Architecture Principles, 2011
In "Architecture Principles – The Cornerstones of Enterprise Architecture," (2011) Greefhorst and Proper present an extensive study of architecture principles. They presume that "enterprises, from small to large, evolve continuously. As a result, their structures are transformed and extended continuously. Without some means of control, such changes are bound to lead to an overly complex, uncoordinated and heterogeneous environment that is hard to manage and hard to adapt to future changes. Enterprise architecture principles provide a means to direct transformations of enterprises. As a consequence, architecture principles should be seen as the cornerstones of any architecture."
Furthermore, they argue, that this work "provide[s] both a theoretical and a practical perspective on architecture principles. The theoretical perspective involves a brief survey of the general concept of principle as well as an analysis of different flavors of principles. Architecture principles are regarded as a specific class of normative principles that direct the design of an enterprise, from the definition of its business to its supporting IT. The practical perspective on architecture principles is concerned with an approach to the formulation of architecture principles, as well as their actual use in organizations."
Publications
Danny Greefhorst has authored and co-authored numerous publications in the fields of enterprise architecture, software engineering and IT. The books he has co-authored:
Peter Beijer, Danny Greefhorst, Rob Kruijk, Martijn Sasse, Robert Slagter. Ruimte voor mens en organisatie - Visie en aanpak voor de digitale samenleving, BIM Media B.V., Den Haag, 2014.
Danny Greefhorst, Erik Proper. Architecture Principles – The Cornerstones of Enterprise Architecture, 1st Edition, Springer, 2011.
Articles, a selection:
Florijn, Gert, Timo Besamusca, and Danny Greefhorst. "Ariadne and HOPLa: flexible coordination of collaborative processes." Coordination Languages and Models. Springer Berlin Heidelberg, 1996. 197-214.
Bosch, J., Florijn, G., Greefhorst, D., Kuusela, J., Obbink, J. H., & Pohl, K. (2002). "Variability issues in software product lines." In Software Product-Family Engineering (pp. 13–21). Springer Berlin Heidelberg.
Greefhorst, Danny, Henk Koning, and Hans van Vliet. "The many faces of architectural descriptions." Information Systems Frontiers 8.2 (2006): 103-113.
Angelov, Samuil, P. W. P. J. Grefen, and Danny Greefhorst. "A classification of software reference architectures: Analyzing their success and effectiveness." Software Architecture, 2009 & European Conference on Software Architecture. WICSA/ECSA 2009. Joint Working IEEE/IFIP Conference on. IEEE, 2009.
Proper, Erik, and Danny Greefhorst. "The roles of principles in enterprise architecture." Trends in Enterprise Architecture Research. Springer Berlin Heidelberg, 2010. 57-70.
References
External links
Danny Greefhorst, Director, ArchiXL at opengroup.org
1972 births
Living people
Dutch computer scientists
Enterprise modelling experts
Utrecht University alumni
|
929796
|
https://en.wikipedia.org/wiki/Tampa%20Bay%20area
|
Tampa Bay area
|
The Tampa Bay area is a major populated area surrounding Tampa Bay on the west coast of Florida in the United States. It includes the main cities of Tampa, St. Petersburg, and Clearwater. It is the 18th largest metropolitan area in the United States, with an estimated population of over three million.
The exact boundaries of the metro area can differ in different contexts. Hillsborough County and Pinellas County (including the cities of Tampa, St. Petersburg, Clearwater, and several smaller communities) make up the most limited definition. The United States Census Bureau defines the Tampa–St. Petersburg–Clearwater Metropolitan Statistical Area (MSA) as including Hillsborough and Pinellas counties along with Hernando and Pasco counties to the north.
Other definitions are
the four counties in the MSA plus Citrus and Manatee Counties, used by the Tampa Bay Regional Planning Council
the four counties in the MSA plus Citrus, Manatee and Sarasota Counties, used by the Tampa Bay Area Regional Transportation Authority
the four counties in the MSA plus Citrus, Manatee, Sarasota and Polk Counties, used by the Tampa Bay Partnership and the Tampa Bay media market.
This wider area may also be known as Central West Florida as part of Central Florida.
Tampa–St. Petersburg–Clearwater Metropolitan Statistical Area
The population of the Tampa Bay MSA is estimated at 3,142,663 people as of 2018.
The following is a list of principal cities and unincorporated communities, including census-designated places (CDPs), located in the Tampa–St. Petersburg–Clearwater MSA based on the 2010 U.S. Census:
Principal cities
Each of these cities has a population in excess of 250,000 inhabitants:
Tampa
St. Petersburg
More than 100,000 inhabitants
Clearwater
Lakeland
Riverview (CDP)
Brandon (CDP)
Spring Hill (CDP)
More than 10,000 inhabitants
Demographics
According to the 2000 U.S. Census, the Tampa–St. Petersburg–Clearwater MSA consists of the following ethnic demographics:
Age
Ethnicity
Hispanic or Latino by origin
Geography
The Tampa Bay area is located along Tampa Bay, for which it is named. Pinellas County and St. Petersburg lie on a peninsula between Tampa Bay and the Gulf of Mexico, and much of the city of Tampa lies on a smaller peninsula jutting out into Tampa Bay.
Climate
The Tampa Bay area has a humid subtropical climate (Köppen Cfa), with hot, humid summers featuring daily thunderstorms, drier and predominantly sunny winters, and warm-to-hot springs with a pronounced dry-season maximum. On average, two days per year experience frost in the cooler parts of the Tampa Bay area, and frost occurs less than annually in the coastal parts. However, hard freezes (low temperatures below ) are very rare, occurring only a few times in the last 75 years. The United States Department of Agriculture designates the area as being in hardiness zones 9b and 10a. Coastal parts of the Tampa Bay area closely border a tropical savanna climate (As) with many tropical microclimates due to maritime influences of the Gulf of Mexico and the 400-square-mile Tampa Bay. Plant climate-indicator species such as coconut palms and royal palms, as well as other elements of south Florida's native tropical flora, reach their northern limits of reliable culture and native range in the area. Highs usually range between year-round. Tampa's official high has never reached ; the all-time record high temperature is . St. Petersburg's all-time record high is exactly .
Pinellas County lies on a peninsula between Tampa Bay and the Gulf of Mexico, and much of the city of Tampa lies on a smaller peninsula jutting out into Tampa Bay. This proximity to large bodies of water both moderates local temperatures and introduces large amounts of humidity into the atmosphere. In general, the communities farthest from the coast have more extreme temperature differences, both during a single day and throughout the seasons of the year.
Economy
As of July 1, 2019, the largest employers within the Tampa Bay area are:
Finance and insurance
Nearly one in four of the state's business and information services firms resides in Tampa Bay. These firms range from financial services firms to information technology providers to professional services organizations such as law firms, accounting firms, engineering firms, consulting and more. As a gateway to the Florida High Tech Corridor, Tampa Bay is home to many information technology firms along with many business services providers.
Financial services firms:
Bank of America
JPMorgan Chase
Citigroup
Wells Fargo
Depository Trust & Clearing Corporation
Raymond James Financial
Franklin Templeton
Metlife
USAA
Progressive Insurance
Transamerica
State Farm
New York Life
Health care
With more than 50 hospitals and dozens of clinics and ambulatory care centers, the Tampa Bay area has an abundance of top-rated health care facilities for children and adults. The region also has a wealth of well-trained medical professionals—nearly 53,000 nurses and more than 9,200 physicians (including physician assistants)—who provide care to Tampa Bay residents and visitors every year.
Information technology
Tampa Bay serves as the gateway to the Florida High Tech Corridor which spans 23 counties. Created as a partnership between the University of South Florida, University of Central Florida and now including the University of Florida, the Florida High Tech Corridor promotes the growth of the high-tech industry across Central Florida.
Higher education and research
Academic research is a key component of high-tech growth and a powerful economic engine. The presence of cutting-edge research in the region is vital to technology transfer, which enables innovative ideas discovered in academia to achieve commercialization in the marketplace. Tampa Bay has several powerhouse research centers that are engaged in both pure scientific research and aggressively pursuing technology transfer to enrich people's lives.
Researchers at the University of South Florida's Nanomaterials and Nanomanufacturing Research Center (NNRC), H. Lee Moffitt Cancer Center and the Center for Ocean Technology at USF's College of Marine Science are researching how to use nanotechnology for a myriad of targeted uses including drug delivery, mechanized microsurgery, customized laser microchips, ways to turn sunlight into electricity, purifying water, storing hydrogen in small nanotubes, designing and developing marine sensors using microelectromechanical systems (MEMS) and curing cancer.
The University of Tampa is located in downtown Tampa on the Hillsborough River; it is a historic university with links back to Teddy Roosevelt.
Housing
In 2008, the area's construction-based boom was brought to a sudden halt by the financial crisis of 2007–2010, and by 2009 it was ranked as the fourth-worst-performing housing market in the United States.
Changes in house prices for the area are publicly tracked on a regular basis using the Case–Shiller index; the statistic is published by Standard & Poor's and is also a component of S&P's 20-city composite index of the value of the U.S. residential real estate market.
Avionics, defense, and marine electronics
The University of South Florida's Center for Ocean Technology, which has been a leader in microelectromechanical systems research and development and has been using the technology to collect biological and chemical data to monitor water quality, provided underwater technology for port security at the 2004 Republican National Convention. USF's Center for Robot-Assisted Search and Rescue used its miniature robots to assist rescue teams at Ground Zero following the September 11 terrorist attacks.
Tampa Bay is also the location of three major military installations, MacDill Air Force Base, Coast Guard Air Station Clearwater and Coast Guard Station St. Petersburg. MacDill AFB is home to the 6th Air Mobility Wing (6 AMW) of the Air Mobility Command (AMC) and the 927th Air Refueling Wing (927 ARW) of the Air Force Reserve Command (AFRC). Both wings share flight operations of a fleet of KC-135R Stratotanker aircraft and the 6 AMW also operates a fleet of C-37A Gulfstream V aircraft. MacDill AFB also hosts multiple tenant organizations, to include two major combatant commands: United States Central Command (USCENTCOM), which directs military operations in Afghanistan, Iraq, and the Middle East; and United States Special Operations Command (USSOCOM), which has responsibility for all special operations forces in the U.S. Armed Forces. CGAS Clearwater is located at the St. Petersburg–Clearwater International Airport. It is the largest air station in the United States Coast Guard, operating HC-130H Hercules aircraft and MH-60T Jayhawk helicopters with principal missions focused on search and rescue, counternarcotics interdiction, and homeland security. The HC-130 aircraft are slated to be replaced by new HC-27J Spartan aircraft beginning in 2017. Coast Guard Station St. Petersburg is located on the site of the former Coast Guard Air Station St. Petersburg at Albert Whitted Airport. It is home to Coast Guard Sector St. Petersburg and is homeport for the USCGC Resolute (WMEC 620), USCGC Venturous (WMEC 625), and numerous smaller cutters and patrol boats.
Education
Primary and secondary education is provided by the school districts of the individual counties making up the region.
The area is home to several institutions of higher learning, including the main campus of the University of South Florida in Tampa and the satellite campuses of USF St. Petersburg. Eckerd College in St. Petersburg, the University of Tampa, Florida College in Temple Terrace, and Trinity College (Florida) in New Port Richey are all four-year institutions located in the area. Embry–Riddle Aeronautical University and Troy University also maintain satellite education centers at MacDill AFB.
There are two law schools in the area, Stetson University College of Law and Thomas M. Cooley Law School. Stetson University has campuses in Gulfport and Tampa. The newly built (May 2012) Thomas M. Cooley Law school is located in Riverview.
Hillsborough Community College, St. Petersburg College, Polk State College, and Pasco-Hernando State College are community colleges serving the area.
Culture
The Tampa Bay area is home to a high concentration of quality art museums. Long established communities, particularly those near the bay such as Cuban influenced Ybor City, Old Northeast in St. Petersburg, and Palma Ceia and Hyde Park in Tampa contain historic architecture.
Fresh seafood and locally grown produce are available in many restaurants and in weekly farmers' markets in multiple urban centers in the area. Yuengling, the largest American-owned brewer, operates a brewery in Tampa, as does the highly regarded craft brewer Cigar City Brewing. The area is also known for its influence on heavy metal music, specifically death metal. Within both the Florida death metal scene and the broader genre, Tampa Bay became known as the "capital of death metal."
Arts and culture make a big impact in Tampa Bay. In a single year, the economic impact of the cultural institutions in the Tampa Bay area was $521.3 million, according to a recent PricewaterhouseCoopers study. In 2004, 5.6 million people attended plays, musical performances, museum exhibits, and other cultural institutions in Tampa Bay, supporting 7,800 jobs.
Museums
Museum of Fine Arts near the Pier in downtown St. Petersburg
Salvador Dalí Museum in downtown St. Petersburg
Florida International Museum at St. Petersburg College in downtown St. Petersburg
Florida Holocaust Museum in downtown St. Petersburg
Tampa Museum of Art in downtown Tampa
USF Contemporary Art Museum on the USF's main Tampa campus
Florida Museum of Photographic Arts in downtown Tampa
Museum of Science and Industry adjacent to USF's main Tampa campus
Tampa Bay Automobile Museum in Pinellas Park
Leepa-Rattner Museum of Art on the Tarpon Springs Campus of St. Petersburg College
The Royal Theater & Manhattan Casino Historic Landmarks in St. Petersburg
The Carter J. Woodson African-American Museum St. Petersburg
The Tampa Bay History Center
Performing arts halls
Straz Center for the Performing Arts in Tampa
Ruth Eckerd Hall in Clearwater
Mahaffey Theater in St. Petersburg
Tarpon Springs Performing Arts Center
Cultural events
Gasparilla Pirate Festival held every January in Tampa
Florida Strawberry Festival held every March in Plant City
Clearwater Jazz Holiday held every October in Coachman Park in downtown Clearwater; in its 32nd year
Guavaween, a Latin-flavored Halloween celebration held every October in the Ybor City section of Tampa
Festa Italiana, annual festival of Italian heritage held every April in Ybor City, Tampa's Latin Quarter
Recreation
The Tampa Bay area is highly noted for its beaches, with warm, blue gulf waters and nearly 70 miles of barrier islands from North Pinellas south to Venice attracting tourists from all over the world. Three of the beaches in this area, Fort De Soto's North Beach (2005), Caladesi Island (2008), and Sarasota's Siesta Key (2011), have been named by Dr. Beach as America's Top Beach. The 15th IIFA Awards were held in the Tampa Bay area in April 2014.
Sports attractions, in addition to the teams listed below, include many professional-quality golf courses, tennis courts, and pools. Ybor and the Channel District in Tampa, downtown St. Petersburg, and the beaches along the coast all attract a vibrant nightlife.
Theme parks
Adventure Island in Tampa
Busch Gardens in Tampa
Dinosaur World in Plant City
Weeki Wachee Springs in Hernando County
Legoland Florida in Winter Haven, Polk County
Zoos and aquariums
Lowry Park Zoo in Tampa
Florida Aquarium in Tampa
Clearwater Marine Aquarium in Clearwater
Suncoast Seabird Sanctuary in Indian Shores
Botanical gardens
Florida Botanical Gardens, part of the Pinewood Cultural Park in Largo
Sunken Gardens in St. Petersburg, a former tourist attraction now run by the City of St. Petersburg
USF Botanical Gardens in Tampa
Notable public parks and recreation areas
The Tampa Bay area is home to an extensive system of state, county, and city parks. Hillsborough River State Park in Thonotosassa is one of Florida's original eight state parks, and Honeymoon Island State Park, near Dunedin, is Florida's most visited state park. Pinellas County is home to the Fred Marquis Pinellas Trail, a 37-mile running and cycling trail over a former railroad bed connecting Tarpon Springs to St. Petersburg. Skyway Fishing Pier State Park, the remnants of the approaches to the original Sunshine Skyway Bridge, forms the world's largest fishing pier in Pinellas and Manatee counties. The shallow waters and many mangrove islands of the bay and gulf make the area popular with kayakers. The gulf is also home to a large number of natural and artificial coral reefs that are popular for fishing and scuba diving. Away from the coast, Circle B Bar Reserve in Lakeland (Polk County) has been designated as a Great Florida Birding Trail site, a program of the Florida Fish and Wildlife Conservation Commission.
Sports
Sports teams
The Tampa Bay Area is home to three major professional sports teams: the Buccaneers (NFL), Rays (MLB), and Lightning (NHL). The Tampa Bay area also hosts a number of minor-league and college teams.
MLB spring training teams
Major League Baseball teams have come to the Tampa Bay area for spring training since the Chicago Cubs trained at Tampa's Plant Field in 1913 and the St. Louis Browns trained at St. Petersburg's Coffee Pot Park in 1914. Grapefruit League games remain a favorite pastime for residents and tourists alike every March. The following five Major League Baseball teams play spring training games in the Tampa Bay area:
The New York Yankees in Tampa
The Philadelphia Phillies in Clearwater
The Toronto Blue Jays in Dunedin
The Pittsburgh Pirates in Bradenton
The Detroit Tigers in Lakeland
Minor League baseball
Minor League baseball has also been a constant in the Tampa Bay area for over a century. The Tampa Smokers, St. Petersburg Saints, Lakeland Highlanders, and Bradenton Growers were charter members of the original Florida State League, which began play in 1919. Current local teams include:
Florida State League (Class A)
The Tampa Tarpons: George M. Steinbrenner Field in Tampa
The Clearwater Threshers: Spectrum Field in Clearwater
The Dunedin Blue Jays: TD Ballpark in Dunedin
The Bradenton Marauders: LECOM Park in Bradenton
The Lakeland Flying Tigers: Publix Field at Joker Marchant Stadium in Lakeland
The area is also home to several affiliates of the Gulf Coast League, a rookie league in which many young players gain their first experience in professional baseball.
Basketball
The Tampa Bay area does not have a basketball team in the NBA; the Orlando Magic, 85 miles to the east, are the closest team. The Toronto Raptors made Tampa their temporary home for the 2020–21 NBA season during the COVID-19 pandemic, necessitated by restrictions on travel between Canada and the United States that were in effect. Their "home" games were played at Amalie Arena.
The Tampa Bay area has had several teams in minor basketball leagues. The Tampa Bay Titans play in The Basketball League (TBL). Their home games are played at Pasco–Hernando State College. The St. Pete Tide and the Tampa Gunners play in the Florida Basketball Association (FBA). The Tide's home games are played at St. Petersburg Catholic High School, and the Gunners are a travel team.
Sporting events
Major League sports
Five Super Bowls have been held in Tampa: Super Bowl XVIII in 1984, Super Bowl XXV in 1991, Super Bowl XXXV in 2001, Super Bowl XLIII in 2009, and Super Bowl LV in 2021. Super Bowls XVIII and XXV were played at Tampa Stadium, while Super Bowls XXXV, XLIII and LV were played at Raymond James Stadium. The 1978 AFC–NFC Pro Bowl was held in Tampa at Tampa Stadium.
The 2008 MLB World Series: Games 1 and 2 were played in St. Petersburg at Tropicana Field.
The 1999 NHL All-Star Game was held in Tampa at the Ice Palace. It was held again in 2018, having been renamed Amalie Arena by then.
The 2004 NHL Stanley Cup Finals: Games 1, 2, 5 and 7 were played in Tampa at the St. Pete Times Forum. Games 1, 2, and 5 of both the 2015 and 2021 Stanley Cup Finals were played at Amalie Arena.
NCAA sports
The NCAA football Outback Bowl is held annually at Raymond James Stadium, usually on January 1. The Gasparilla Bowl is also held annually at Raymond James Stadium, usually in December; it began in 2008 at Tropicana Field in St. Petersburg and moved to Tampa in 2018. The NCAA football East–West Shrine Game has been held annually at Tropicana Field since 2012, usually in January.
The 2017 College Football Playoff National Championship was held at Raymond James Stadium on January 9, 2017.
Two NCAA football ACC Championship Games (2008 and 2009) have been played in Tampa at Raymond James Stadium.
Amalie Arena in Tampa has been the site for various rounds of NCAA Men's and Women's basketball championship tournament over the years, as well as conference tournaments. The 1999 NCAA Men's Final Four was held in St. Petersburg at Tropicana Field. The 2008, 2015 NCAA Women's Final Four and 2019 NCAA Division I Women's Basketball Tournament Final Four were held in Tampa at the Tampa Bay Times Forum/Amalie Arena.
Five NCAA Division I Men's Soccer Championships have been held in Tampa: 1978, 1979, 1980, 1990 and 1991.
The 2012 and 2016 NCAA Men's Frozen Four were held in Tampa at the Tampa Bay Times Forum/Amalie Arena.
Tampa will host the 2023 Division I NCAA Women's Volleyball Championship, the 2023 NCAA Division I Men's Frozen Four, the 2025 NCAA Division I Women's Basketball Final Four and the 2026 NCAA Division I Men's Basketball First and Second Rounds, all at Amalie Arena.
Transportation
Air
Tampa International Airport is the largest airport in the region with 21 carriers and more than 17 million passengers served last year. In addition to the recent opening of a new terminal, improvements are being planned to handle 25 million passengers by 2020.
St. Petersburg–Clearwater International Airport provides access to commercial airliners and smaller charter craft. The airport is currently planning an expansion which will include new terminal facilities and a runway extension. Dotting the landscape throughout the area are many general aviation airports serving aircraft enthusiasts and smaller corporate jets.
Rail
Amtrak provides passenger rail service from Union Station in Tampa. CSX provides freight rail service for the entire Tampa Bay region.
Water
The Cross-Bay Ferry has connected Tampa's Channelside District to Downtown St. Petersburg since 2016. The Pirate Water Taxi, also operating since 2016, has several stops along the waterways in the vicinity of Tampa's downtown area and Channelside District.
Transit systems
Bus service is provided in Hillsborough County by Hillsborough Area Regional Transit (HART), in Pinellas County by Pinellas Suncoast Transit Authority (PSTA), in Pasco County by Pasco County Public Transportation, and in Hernando County by THE Bus. HART and PSTA provide express services between Tampa and Pinellas County, and PSTA provides connections to Pasco County. HART also operates the TECO streetcar between Downtown Tampa and Ybor City. In 2013, HART also began operating a bus rapid transit system called MetroRapid that runs from Downtown Tampa to the University of South Florida.
On July 1, 2007, an intermodal transportation authority was created to serve the seven-county Tampa Bay area. The Tampa Bay Area Regional Transportation Authority (TBARTA) was formed to develop bus, rapid transit, and other transportation options for the region.
Roads and freeways
The Tampa Bay area is served by these interstate highways.
Interstate 4
Interstate 75
Interstate 175
Interstate 275
Interstate 375
Hillsborough County is also served by other roadways such as the Lee Roy Selmon Expressway (SR 618), which carries commuters from Brandon into downtown Tampa, and the Veterans Expressway/Suncoast Parkway (SR 589), which serves traffic from the Citrus/Hernando County border southward into Tampa. Both of these highways, which are built to limited-access freeway standards, are toll roads, as is the connecting junction between the Selmon Expressway and Interstate 4.
In Pinellas County, U.S. 19 is the main north–south route through the county and is being upgraded to freeway standards, complete with frontage roads, to ease congestion through the north part of the county. The Bayside Bridge also allows traffic to travel from Clearwater into St. Petersburg without having to use U.S. 19.
The Courtney Campbell Causeway (SR 60) is one of the three roads that connect Pinellas County to Hillsborough County across the bay; the other two are the Howard Frankland Bridge (I-275) and the Gandy Bridge (U.S. 92). The Sunshine Skyway Bridge, part of I-275, connects commuters from Bradenton and other parts of Manatee and Sarasota counties to Pinellas County.
See also
Media in the Tampa Bay area
Central Florida
Florida Suncoast
United States metropolitan area
Notes
References
Hillsborough County, Florida
Manatee County, Florida
Pasco County, Florida
Pinellas County, Florida
Regions of Florida
Central Florida
|
31822809
|
https://en.wikipedia.org/wiki/Nokia%20N9
|
Nokia N9
|
The Nokia N9 (codename Lankku) is a smartphone developed by Nokia, running on the Linux-based MeeGo mobile operating system. Announced in June 2011 and released in September, it was the first and only device from Nokia with MeeGo, partly because of the company's partnership with Microsoft announced that year. It was initially released in three colors: black, cyan and magenta, before a white version was announced at Nokia World 2011.
Despite a limited release, the N9 received widespread critical acclaim, with some describing it as Nokia's finest device to date. It was praised for both its software and hardware, including the MeeGo operating system, the buttonless 'swipe' user interface, and its high-end features. Its design is the same as that of the Windows Phone-powered Nokia Lumia 800 released later that year.
Background
The successor to the Nokia N900, internally known as the N9-00, was scheduled to be released in late 2010, approximately one year after the N900 launched. Pictures of the prototype leaked in August 2010 showed an industrial design and a 4-row keyboard. A software engineer working for Nokia's device division cited the N9-00 (the product number) in the public bug tracker for Qt, an open-source application development framework used in MeeGo. This device would later be known as the N950. This design was dropped, and Nokia started working on the N9-01, codenamed Lankku, a new variant without a keyboard.
Nokia planned in 2010 to make MeeGo its flagship smartphone platform, replacing Symbian, whose N8 flagship launched that year; the N9 was thus originally meant to be the company's flagship device. On 11 February 2011 Nokia partnered with Microsoft to use Windows Phone 7 as its flagship operating system to replace Symbian, with MeeGo also sidelined. Nokia CEO Stephen Elop promised to still ship one MeeGo device that year, which would end up being the N9.
The Nokia N9 was announced on 21 June 2011 at the Nokia Connection event in Singapore. At the time, the phone was expected to become available to the public in September 2011. Users could sign up to be notified via e-mail of the availability of the N9 in their country at the Nokia Online Store webpage. Since Nokia had closed its online shop in many countries, including Poland, Germany, the Netherlands, France, Italy, Spain, the United Kingdom, and the United States, on 30 June 2011, availability in those countries was left in the hands of retailers and operators.
Elop restated that the company would not continue development of MeeGo even if the N9 were a success, focusing solely on the future Lumia series, something that MeeGo supporters had already anticipated before the N9 announcement because of the Microsoft deal. They responded by creating a petition, "We want Nokia to keep MeeGo". The decision was felt all the more keenly because MeeGo was itself a continuation of Maemo Linux, created by combining Nokia's Maemo with Intel's Moblin under an alliance between Nokia and Intel formed for that purpose. Despite the alliance's progress, it was dissolved and MeeGo cancelled by Stephen Elop's decision; Intel officially expressed regret at the situation. After the N9's positive reception and the generally weak sales of the Lumia range, Elop was criticised for this move, which some have said contributed to the company's demise in the smartphone market. According to Elop, following the Microsoft alliance MeeGo became an experimental "project", with some of Harmattan's interface elements being used in the cancelled "Meltemi" project and later the Nokia Asha platform.
Availability
In August 2011, Nokia announced that the Nokia N9 would not be released in the United States. Other reports indicated that the device would not be available in other markets such as Japan, Canada and Germany. Nokia posted on its official blog in the last week of September 2011 that N9 phones were heading to stores. The initial retail price was announced to be around €480 (16 GB) and €560 (64 GB) before applicable taxes or subsidies. In Germany, devices imported from Switzerland were available online from Amazon and the German Cyberport GmbH. In January 2012, they were also made available in some major stores of the Saturn Media Markt chain. In February 2012, the Nokia N9 appeared on the Italian Nokia site, suggesting that the N9 had entered official Nokia distribution for the Italian market.
Prices in January 2012 were, depending on the size of the internal memory, between €500 and €630.
Hardware
Processors and memory
The Nokia N9 is powered by a Texas Instruments OMAP 3630, a system-on-a-chip based on a 45-nanometer CMOS process. It includes three processor units: a 1 GHz ARM Cortex-A8 CPU, which runs the operating system and applications; an Imagination Technologies PowerVR SGX530 GPU, supporting OpenGL ES 2.0 and capable of processing up to 14 million polygons per second; and a 430 MHz TI TMS320C64x digital signal processor, which handles image processing for the camera, audio processing for telephony, and data transmission. The system also has 1 GB of low-power single-channel RAM (Mobile DDR). Compcache uses part of this memory as compressed fast swap. It was, at the time, the most powerful device Nokia had created.
All user data is stored on the internal eMMC chip, available in 16 and 64 GB variants. The N9 was the first smartphone to offer 64 GB of storage.
Screen and input
The Nokia N9 has a capacitive touchscreen (supporting up to 6 simultaneous points) with a resolution of 854 × 480 pixels (FWVGA, 251 ppi) in a PenTile RGBG layout. According to Nokia, it is capable of displaying up to 16.7 million colors. The OLED screen is covered by curved, scratch-resistant Corning Gorilla Glass. The gap between the glass and the display has been reduced, and the screen is coated with an anti-glare polarizer to ease usability in daylight. There is a proximity sensor which deactivates the display and touchscreen when the device is brought near the face during a call. It also has an ambient light sensor that adjusts the display brightness.
The device also makes use of its accelerometer to rotate the screen in portrait/landscape mode for some applications, such as the web browser.
GPS
The N9 has an autonomous GPS feature with optional A-GPS functionality, Wi-Fi network positioning and a magnetometer, and comes pre-loaded with the Nokia Maps and Nokia Drive applications.
Nokia Maps is similar to Ovi Maps found on recent Symbian devices from Nokia and is mostly about finding nearby places (restaurants, metro stations, theaters, etc.) around the user. Nokia Maps for MeeGo is also integrated with the Contacts and Calendar applications. Nokia Drive is a dedicated application for car navigation and provides free lifetime turn-by-turn voice-guided car navigation. The Nokia N9 comes with preloaded maps of the continent where it was purchased, and as such, Nokia Drive does not require an active data connection and can work as a stand-alone GPS navigator.
Camera
The main (back) camera has autofocus, a dual-LED flash, is optimized for the 16:9 and 4:3 aspect ratios, and has 4× digital zoom for both video and stills. The sensor of the back camera is 8.7 megapixels (3552 × 2448 px); the effective resolution is 3552 × 2000 px (7.1 megapixels) for the 16:9 aspect ratio and 3248 × 2448 px (8 megapixels) for the 4:3 aspect ratio. Typically, a 16:9 picture format on a digital camera is achieved by cropping the top and bottom of a 4:3 image, since the sensor is 4:3. The Nokia N9 instead genuinely provides more picture width in the 16:9 option, by using the full 3552-pixel width of the sensor, and more picture height in the 4:3 option, by using the full 2448-pixel height of the sensor. The Carl Zeiss lens has quite unusual specifications for a mobile phone: a 28 mm wide-angle focal length, a fast (for this class) f/2.2 aperture, and a 10 cm-to-infinity focus range. It is capable of recording up to 720p video at 30 fps with stereo sound.
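The arithmetic behind these figures can be checked directly. The short Python sketch below (an illustration, not Nokia's own code) computes the largest 16:9 and 4:3 crops that fit inside a 3552 × 2448 sensor; the function name and rounding are assumptions, and the ideal crops it prints differ slightly from the official 3552 × 2000 and 3248 × 2448 figures, presumably because the firmware rounds crop dimensions for alignment.

    def largest_crop(sensor_w, sensor_h, ratio_w, ratio_h):
        """Return the largest ratio_w:ratio_h crop that fits inside the sensor."""
        # Try using the full sensor width first; fall back to the full height.
        h = sensor_w * ratio_h // ratio_w
        if h <= sensor_h:
            return sensor_w, h
        return sensor_h * ratio_w // ratio_h, sensor_h

    SENSOR_W, SENSOR_H = 3552, 2448  # the N9's 8.7-megapixel sensor

    for rw, rh in [(16, 9), (4, 3)]:
        w, h = largest_crop(SENSOR_W, SENSOR_H, rw, rh)
        print(f"{rw}:{rh} crop: {w} x {h} = {w * h / 1e6:.1f} MP")
    # Prints:
    #   16:9 crop: 3552 x 1998 = 7.1 MP   (official spec: 3552 x 2000)
    #   4:3 crop: 3264 x 2448 = 8.0 MP    (official spec: 3248 x 2448)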
Buttons
When holding the device facing the screen, on the right side, there is a power on/off (long press) and lock/unlock (short press) button and volume keys. The Nokia N9 has fewer hardware buttons than most smartphones and makes extensive use of the touchscreen to navigate the user interface. For example, to minimize a running application, the user has to swipe their finger from one side of the bezel surrounding the screen to the opposite side. There is also no dedicated shutter key for the camera; the touch screen is instead used to focus and take the picture. The screen can be unlocked by double tapping on it.
Audio and output
The N9 has two microphones and a loudspeaker situated at the bottom of the phone. The main microphone enables conversation and recording. The second microphone is located on the back of the device near the flash LEDs and main camera; it is used by the MeeGo system for noise cancellation, which makes phone conversations clearer in noisy environments. On the top, there is a 3.5 mm AV connector which simultaneously provides stereo audio output, with support for Dolby Headphone, and either microphone input or video output. Next to the 3.5 mm connector, there is a High-Speed USB 2.0 Micro-B connector provided for data synchronization, mass storage mode (client) and battery charging. The USB connector is protected by a small door.
The built-in Bluetooth v2.1 + EDR (Enhanced Data Rate) supports stereo audio output with the A2DP profile. Built-in car hands-free kits are also supported with the HFP profile. File transfer is supported (FTP), along with the OPP profile for sending and receiving objects. It is possible to remotely control the device with the AVRCP profile. The Bluetooth chip also functions as an FM receiver/transmitter, allowing one to listen to FM radio using headphones connected to the 3.5 mm jack as an antenna. As with the Nokia N800, N810 and N900, the device shipped without software support for this; however, an FM radio application from an independent developer is available in the Ovi Store.
NFC is also supported for sharing photos, contacts, or music with other NFC-enabled devices (e.g. Nokia C7, Nokia 701) and for pairing (connecting) stereo speakers (e.g. Nokia Play 360) and headsets (e.g. Nokia BH-505). More than one device can be connected simultaneously to the N9 via NFC.
Battery
The Nokia N9 has a BV-5JW 3.8V 1450mAh battery. According to Nokia, this provides from 7h to 11h of continuous talk time, from 16 to 19.5 days of standby, 4.5h of video playback and up to 50h of music playback.
The phone supports USB charging only.
Accessories
A number of devices can be used with the N9 via several connectivity options: external keyboards via Bluetooth, wireless headphones via NFC, wireless loudspeakers via NFC, and many others.
System software
MeeGo
Strictly speaking, the Nokia N9 does not run MeeGo 1.2 as its operating system. It instead runs what Nokia refers to as a "MeeGo instance". During the development of Harmattan (previously marketed as Maemo 6), Nokia and Intel merged their open-source projects into one new common project called MeeGo. To avoid delaying the development schedule, Nokia decided to keep the "core" of Harmattan, such as middleware components (GStreamer) and package managers (the Harmattan system uses Debian packages instead of RPM packages). Nonetheless, Harmattan is designed to be fully API compatible with MeeGo 1.2 via Qt. As far as end users and application developers are concerned, the distinction between Harmattan and MeeGo 1.2 is minimal. Since all marketing effort would have been directed to "MeeGo", Nokia dropped the Maemo branding and adopted MeeGo so as not to confuse customers.
Swipe User Interface
The Nokia N9 user experience consists of a Home view, made up of three panes, and a Lock Screen. Dragging or flicking horizontally navigates between the three panes of Home. The Home consists of:
Events: holds all notifications such as missed calls, upcoming meetings, unread messages/emails and feeds (web feeds, Facebook, Twitter, etc., if enabled from Notifications settings).
Applications: a menu with all the installed application shortcuts. It displays 4 columns that can be scrolled up and down as needed, depending on the number of applications.
Open Applications: a task manager that can be viewed in either 2 or 3 columns (a pinch gesture switches between the modes). If more applications are open than can be displayed on the screen, the user can scroll the open applications list up and down.
When in an application, a swipe gesture from one edge of the screen to the other returns the user to one of the three views of Home. This does not close the application; it is either suspended or kept running in the background, depending on the application. To close an application, the user can press and hold its thumbnail in the Open Applications view until a red "X" appears on its upper left corner, then tap the "X". The user may also close an app by swiping from the top of the device downward while in the application (with a fadeout effect). Tapping the status bar at the top of the screen while using an application displays a menu allowing the user to adjust the volume, change the active profile (silent, beep & ringing), Internet connection (WiFi, GSM data), Bluetooth control shortcut (if enabled in Bluetooth settings), media sharing (DLNA) shortcut (if enabled in media sharing settings, introduced in PR 1.2) and availability. The Lock Screen displays the status bar, a clock and some notifications. This screen also holds music controls (introduced in PR 1.1) when the music player is active. It is customizable by the end user.
The phone can be unlocked by double tapping on the screen. Sliding and holding the lock screen up reveals 4 shortcuts, called the Quick Launcher. The Quick Launcher can also be accessed while using an application.
The swiping UI of the N9, including the visual style and double-tap feature, was resurrected in the Nokia Asha platform, which was introduced on the Nokia Asha 501 device in 2013.
Reception
The Nokia N9 was announced at the Nokia Connection event in Singapore in June 2011. The reception for the device was very positive, citing the MeeGo 1.2 Harmattan UI, pseudo-buttonless design, polycarbonate unibody construction and its NFC capabilities. Still, many reviewers did not recommend buying the N9, solely because of Nokia's earlier decision to drop MeeGo in favour of Windows Phone for future smartphones – often questioning this decision at the same time. Engadget's editor Vlad Savov said in June 2011 that "it's a terrific phone that's got me legitimately excited to use it, but its future is clouded by a parent that's investing its time and money into building up a whole other OS." In a later review, Engadget writes: "Love at first sight — this is possibly the most beautiful phone ever made," and "MeeGo 1.2 Harmattan is such a breath of fresh air it will leave you gasping — that is, until you remember that you're dealing with a dead man walking." In a review for Ars Technica, Ryan Paul writes: "The N9 is an impressively engineered device that is matched with a sophisticated touch-oriented interface and a powerful software stack with open source underpinnings." The Verge writes: "The Nokia N9 is, without doubt, one of the most fascinating phones of the last few years."
The German magazine Der Spiegel titled its review "this could have been Nokia's winner", and the German magazine Stern described it as one of the best devices ever made by Nokia. Delimiter called the N9 Nokia's "most significant" handset since the Nokia N95.
Sales
The Nokia N9 was not released in most of the largest smartphone markets, such as the U.S., Canada, the UK, the Netherlands, Germany, France, Italy, Spain, and others. Nokia did not disclose sales figures for the N9.
Awards
In November 2011, the Nokia N9 won 3 out of 4 applicable titles (including design, camera and cellphone of the year) at a gala held by Swedish magazine and webzine Mobil.se.
In January 2012, the Nokia N9 Swipe UI was nominated for an IxDA Interaction Award.
In February 2012, the N9 reached number 1 in GSMArena's ranking by user rating, with a rating of 8.432 (out of 10) from 74,940 votes, and number 5 by daily interest hits.
In April 2012, the N9 was awarded a Design and Art Direction "Yellow Pencil", in the interactive product design category, beating among others the iPad 2 and the Nokia Lumia 800.
Open/closed source packages and community contributions
Nokia's approach was that of an open platform, with exceptions, and a closed user experience. As with Maemo 5 on the Nokia N900, the community can request that a closed-source component owned by Nokia be released as open source.
Hundreds of third-party applications, mostly free and open source, have been created for or ported to the MeeGo Harmattan platform.
Released updates
Ports for the N9
Android 2.3 port leak
Images of an N9 prototype running Android 2.3 were leaked to Sina Weibo by a user who had previously uploaded prototype images of Nokia's Sea Ray (later Lumia 800) Windows Phone. They were believed to be genuine, as Stephen Elop had mentioned that Nokia had considered Android in the past.
Android 4.1.1 Jelly Bean
An unofficial Android 4.1.1 port was made by the NITDroid community. The port provides general functionality but lacks some features, such as voice calling and use of the camera.
Sailfish OS
On 21 November 2012, Jolla announced and demonstrated Sailfish OS, which is a direct continuation of, and based on, MeeGo. More than 80% of the first Sailfish OS consisted of the open-source parts of MeeGo. The original MeeGo open-source code was developed further within Mer (a software distribution whose name comes from "MEego Reinstated"), which established the current standard for the middleware stack core, i.e. the software above the kernel and below the UI of an OS, and which is open source and free for vendors. The Harmattan UI and several software applications used in the N9 were closed and proprietary to Nokia and hence could be used neither in the Mer project nor in Sailfish OS. Jolla therefore introduced its own swipe UI, used the Mer core standard, and created Sailfish OS. Videos of Sailfish OS running on a Nokia N950 appeared on the Internet the same day as the announcement. As the N950 has similar technical specifications to the N9, with slight differences including a physical QWERTY keyboard, this led many owners of the N9 to believe that Sailfish OS could be ported to the N9. Jolla confirmed this, but also stated that it had no "official possibilities" for this kind of support for the N9, and that the community would instead provide the unofficial port of Sailfish OS. However, Jolla maintained that the experience would not be the same as Sailfish on official Jolla phones (Jolla released the first Jolla mobile phone on 27 November 2013). Sailfish OS is considered the first full MeeGo OS, as MeeGo Harmattan was only a "MeeGo instance" because the combining of Maemo and Moblin was never fully finished. Sailfish OS is actively developed and commonly regarded as the next and better incarnation of MeeGo, and the Jolla device is regarded as the unofficial successor of the N9 and its legacy.
KaiOS
In early 2019, KaiOS Technologies Inc. demonstrated devices running KaiOS, including the Nokia 8110 (2018), the Jio Phone, and one full-touch device suspected to be a Nokia N9.
See also
Nokia N950, a developer's phone for N9 software development
Jolla, the Finnish company that continued MeeGo smartphone manufacturing and employed almost the entire engineering team that designed the Nokia N9 and the original MeeGo OS
Sailfish OS, informally the next incarnation and successor of MeeGo, by Jolla
Jolla (smartphone), the first phone with Sailfish OS 1.0, considered the N9's successor
Sailfish Alliance, an alliance created with Jolla to promote the MeeGo-based Sailfish OS and the worldwide MeeGo ecosystem
Comparison of smartphones
List of open-source mobile phones
List of Nokia products
Nokia X family
Nokia 6
Nokia 8 Sirocco
References
External links
MeeGo devices
Open-source mobile phones
Smartphones
Nokia Nseries
|
6084017
|
https://en.wikipedia.org/wiki/Ewido%20Networks
|
Ewido Networks
|
Ewido Networks was a software company based in Germany known for creating Ewido Anti-Spyware. Ewido Anti-Spyware was software used to remove malware such as spyware, trojan horses, adware, dialers, and worms. It also featured real-time protection, automatic updates, and other privacy features. Ewido had both a free version and a paid version, which added real-time protection and automatic updates.
History
Ewido Networks was founded in Germany in 2004 by Andreas Rudyk, Peter Klapprodt and Tobias Graf. Their first product was Ewido Security Suite. Ewido was given Digital River's ICE award for "Best newcomer of the year".
Grisoft Acquisition
On April 19, 2006, it was announced that the Czech company Grisoft had acquired the German anti-malware company Ewido Networks. This was the birth of Grisoft's AVG Anti-Spyware, an anti-spyware product based on Ewido's engine. Grisoft went on to include Ewido in many of its security suites, bundles and antivirus products.
Ewido Anti-Spyware
This software began life as Ewido Security Suite, and the name was changed to Ewido Anti-malware in December 2005. With the release of version 4.0, it was changed again to Ewido Anti-Spyware.
Ewido Anti-Spyware included new features such as scheduled scans, a file shredder, a running process manager, and a new interface. It also included an LSP and BHO viewer. There was a free version without real-time protection or automatic updates (users could update manually). The last known price was $29.99.
Development of Ewido Anti-Spyware did not stop after Grisoft's acquisition, however. It continued to exist as the Ewido Online Scanner or Ewido Micro Scanner, using the full Ewido engine and signatures, without excluding heuristic detection options. As of AVG 8.0, AVG Anti-Spyware was integrated into AVG Anti-Virus and is no longer available as a standalone product, which means that AVG Anti-Spyware no longer receives updates.
Ewido works with many popular antivirus and anti-spyware products, such as:
AVG Anti-Virus
Ad-Aware
Avast! Antivirus
Avira Security Software
Comodo Internet Security
CounterSpy
Kaspersky Anti-Virus
McAfee
Norton AntiVirus
Sophos
Spybot Search & Destroy
Spyware Doctor
ZoneAlarm Security Suite
See also
Grisoft
Malware
Spyware
References
External links
Ewido's Official site
Software companies of Germany
Antivirus software
Spyware removal
Windows security software
Windows-only software
|
56030641
|
https://en.wikipedia.org/wiki/National%20Digital%20Preservation%20Program
|
National Digital Preservation Program
|
With the foresight that rapidly changing technologies lead to rampant digital obsolescence, in 2008 the R&D in IT Group, Ministry of Electronics and Information Technology, Government of India, envisaged an Indian digital preservation initiative. In order to learn from the experience of developed nations, an Indo-US Workshop on International Trends in Digital Preservation was organized by C-DAC, Pune during March 24–25, 2009, with sponsorship from the Indo-US Science & Technology Forum, which led to more constructive developments towards the formulation of the national program.
National Study Report on Digital Preservation Requirements of India
In April 2010, the Ministry of Electronics and Information Technology, Government of India, entrusted the Human-Centred Design & Computing Group, C-DAC, Pune, which was already active in the thematic area of heritage computing, with preparing the National Study Report on Digital Preservation Requirements of India. The objective of this project was to present a comprehensive study of the current situation in India against international trends in digital preservation, along with recommendations for undertaking the National Digital Preservation Program involving all stakeholder organizations.
Technical experts from around 24 organizations representing diverse domains such as e-governance, government and state archives, audio, video and film archives, cultural heritage repositories, health, science and education, insurance and banking, and law were included in the national expert group. Major institutions represented in the expert group were the Centre for Development of Advanced Computing (C-DAC), National Informatics Centre (NIC), Unique Identity Program, National Archives of India, National Film Archive of India, Indira Gandhi National Centre for the Arts, Information and Broadcasting (Doordarshan and All India Radio), National Remote Sensing Centre (NRSC) / ISRO, Controller of Certifying Authorities (CCA), National e-Governance Division (NeGD), Life Insurance Corporation, Reserve Bank of India (RBI), National Institute of Oceanography (NIO), Indian Institute of Public Administration, Defense Scientific Information & Documentation Centre (DSIDC) and several other organizations. The expert group members were asked to submit position papers highlighting the short-term and long-term plans for digital preservation with respect to their domains. The study report was presented to the Government of India in two volumes:
Volume I: Recommendations for the National Digital Preservation Program of India
Volume II: Position Papers by the National Expert Group Members
The report included an overview of international digital preservation projects, a study of legal imperatives (Information Technology Act 2000/2008), a study of technical challenges and standards, and the consolidated recommendations given by the national expert group for the National Digital Preservation Program.
One of the key recommendations given in this report was to harmonize the Public Records Act, Right to Information Act, Indian Evidence Act, Copyright Act and other related Acts with the Information Technology Act in order to address digital preservation needs. The foresight of this recommendation proved right, as in 2018 the Indian judiciary initiated the drafting of electronic evidence rules to be introduced under the Indian Evidence Act. In this context, the Joint Committee of High Court Judges visited C-DAC, Pune on 10 March 2018 to examine the technical aspects of the proposed electronic evidence rules in terms of extraction, encryption, preservation, retrieval and authentication of e-evidence in the court of law.
Centre of Excellence for Digital Preservation
As recommended in the national study report, in April 2011 the Centre of Excellence for Digital Preservation was launched as the flagship project under the National Digital Preservation Program, funded by the Ministry of Electronics and Information Technology, Government of India. The project was awarded to the Human-Centred Design & Computing Group, C-DAC Pune, India. The objectives of the Centre of Excellence were:
Conduct research and development in digital preservation to produce the required tools, technologies, guidelines and best practices.
Develop pilot digital preservation repositories and help nurture the network of Trustworthy Digital Repositories (the National Digital Preservation Infrastructure) as a long-term goal
Define digital preservation standards by involving experts from stakeholder organizations, and consolidate and disseminate the digital preservation best practices generated through various projects under the National Digital Preservation Program, acting as the nodal point for pan-India digital preservation initiatives.
Provide inputs to the Ministry of Electronics & Information Technology in the formation of a National Digital Preservation Policy
Spread awareness about the potential threats and risks of digital obsolescence and about digital preservation best practices.
The major outcomes of this project are briefly summarised hereafter.
Digital Preservation Standard and Guidelines
The digital preservation standard and guidelines were developed to help local data-intensive projects prepare for highly demanding standards such as ISO 16363 for Audit and Certification of Trusted Digital Repositories. The standard was notified by the Ministry of Electronics and Information Technology, Government of India, vide Notification No. 1(2)/2010-EG-II dated December 13, 2013, for all e-governance applications in India.
e-Governance standard for Preservation Information Documentation (eGOV-PID) of Electronic Records
The eGOV-PID provides a standard metadata dictionary and schema for automatically capturing preservation metadata in terms of cataloging information, enclosure information, provenance information, fixity information, representation information, digital signature information and access rights information immediately after an electronic record is produced by an e-governance system. It helps in producing an acceptable Submission Information Package (SIP) for an Open Archival Information System (OAIS), ISO 14721:2012.
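To illustrate the kind of information such a schema groups together, the following minimal Python sketch assembles a single SIP metadata record with the categories named above. The field names, the example producer name, and the choice of SHA-256 for fixity are illustrative assumptions only; they are not the normative eGOV-PID element names.

    import hashlib
    import json
    from datetime import datetime, timezone

    def build_sip_metadata(record_name, record_bytes):
        """Assemble illustrative preservation metadata for one e-record."""
        return {
            "cataloguing": {"title": record_name,
                            "captured": datetime.now(timezone.utc).isoformat()},
            "enclosure": {"attachments": []},                 # enclosures, if any
            "provenance": {"producer": "example-egov-app",    # assumed system name
                           "events": ["record created"]},
            "fixity": {"algorithm": "SHA-256",
                       "digest": hashlib.sha256(record_bytes).hexdigest()},
            "representation": {"format": "application/pdf"},  # assumed archival format
            "digital_signature": {"signed": False},           # filled in when a signature exists
            "access_rights": {"classification": "public"},
        }

    meta = build_sip_metadata("order-2013-001.pdf", b"%PDF-1.7 example bytes")
    print(json.dumps(meta, indent=2))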
Best practices and guidelines for Production of Preservable e-Records
The best practices and guidelines introduce five distinct steps of e-record management, namely e-record creation, e-record capturing, e-record keeping, e-record transfer to a trusted digital repository, and e-record preservation, which need to be adopted in all e-governance projects. They also specify the open-source and standards-based file formats for the production of e-records. The guidelines incorporate electronic records management practice as per ISO/TR 15489-1 and 2, Information and Documentation - Records Management.
Digital Preservation Tools and Solutions
However, the digital preservation standard is difficult to implement when the required tools and solutions are unavailable. Therefore, the standard and guidelines are supported with a variety of digital preservation tools and solutions which can be given to memory institutions and record-creating organizations for long-term preservation. The project team at C-DAC Pune has developed a software framework for digital archiving named DIGITĀLAYA (डिजिटालय in the Hindi language), which is customizable for various domains, data types and application contexts such as:
E-records management & archival (a variety of born-digital records produced by organizations on a day-to-day basis)
Large volume of e-governance records
Audiovisual archives
Digital libraries / document archives
DIGITĀLAYA (डिजिटालय) is designed and developed as per the CCSDS Open Archival Information System (OAIS) Reference Model, ISO 14721: 2012.
A number of digital preservation tools were developed to help in processing digital data:
e-SANGRAHAN (ई-संग्रहण): E-acquisition tool
e-RUPĀNTAR (ई-रूपांतर): Pre-archival data processing tool
DATĀNTAR (डेटांतर): E-records extraction tool
SUCHI SAMEKAN (सूची समेकन): Metadata importing and aggregation tool
META-PARIVARTAN (मेटा-परिवर्तन): Any to any metadata conversion tool
DATA HASTĀNTAR (डेटा-हस्तांतर): Data encryption and transfer tool
PDF/A converter tool
All the archival systems and digital preservation tools are developed in such a way that they can produce the evidence and reports required for the audit and certification of trustworthy digital repositories.
Pilot Digital Repositories
In order to test and demonstrate the effectiveness of the digital preservation tools, various pilot digital repositories were developed in collaboration with domain institutions such as the Indira Gandhi National Centre for the Arts, New Delhi; the National Archives of India, New Delhi; the Stamps and Registration Department, Hyderabad; and e-District. C-DAC Noida developed the pilot digital repository for e-Court in collaboration with the district courts of Delhi using the e-Goshwara e-Court solution.
The pilot digital repositories were selected from different domains with the following objectives:
Understand different data sets in terms of metadata, digital objects, file formats, authenticity, access control and requirements of designated users
Identify opportunities for development of tools and solutions in order to address domain specific requirements
Involve the stakeholders in digital preservation process
Generate proof of concept by deploying the solutions in the domain institutions
ISO 16363 Certified Trusted Digital Repository
As part of the pilot digital repositories, the National Cultural Audiovisual Archive (NCAA) at IGNCA, New Delhi, was established using DIGITĀLAYA (डिजिटालय). NCAA manages around 2 petabytes of rare cultural audiovisual data. In June 2017, the Primary Trustworthy Digital Repository Authorization Body (PTAB), UK, was accredited by the National Accreditation Board for Certification Bodies (NABCB), New Delhi, India. PTAB was engaged to audit the National Cultural Audiovisual Archive. Both the NCAA and C-DAC teams worked together during the audit process. Finally, NCAA was awarded certified status as a Trusted Digital Repository on 27 November 2017, as per ISO 16363. It is the first certified Trusted Digital Repository (Certificate No. PTAB-TDRMS 0001) as per ISO 16363 in India and the world.
Capacity Building for Audit and Certification
A high-level, 3-day training course on ISO 16363 for auditors and managers of digital repositories was conducted during 11–13 January 2017 at the India Habitat Centre, New Delhi, India. The training was organized as a deliverable of the Centre of Excellence for Digital Preservation by C-DAC Pune in collaboration with the Primary Trustworthy Digital Repository Authorization Body (PTAB), UK.
This initiative helped formally introduce ISO 16363 and ISO 16919 through the National Accreditation Board for Certification Bodies (NABCB) for the audit and certification of Indian digital repositories. The first batch of potential technical auditors was trained, comprising 27 participants from various stakeholder organisations. Apart from this, numerous digital preservation and DIGITĀLAYA (डिजिटालय) training sessions were organised for the staff of NAI, IGNCA and the 21 partner institutions contributing to the NCAA project.
Contribution to UNESCO Standard Setting Instrument on Preservation of Digital Heritage
The Principal Investigator of the Centre of Excellence for Digital Preservation, Dr. Dinesh Katre, represented India in the UNESCO International Experts Consultative Meet on Preservation and Access held on June 25–26, 2014 in Warsaw, Poland, which drafted the standard-setting instrument for the protection and preservation of digital heritage. The General Conference of UNESCO, at its 38th session on 1 and 2 July 2015, unanimously adopted the Recommendation Safeguarding the Memory of the World – Preservation of, and Access to, Documentary Heritage in the Digital Era (38 C/Resolutions – Annex V).
Based on the experience gained from this project, the Government of India is considering creating a national policy on digital preservation, which would be instrumental in establishing a national digital preservation infrastructure. The digital preservation initiative stands at a crux where it is crucial to close the gap between Digital India and the challenges posed by rampant technological obsolescence, to make it a truly sustainable vision.
See also
National Digital Information Infrastructure and Preservation Program (NDIIPP), USA
Internet Archive, USA
Wayback Machine
Internet Memory Foundation
Digital Curation Centre, UK
Digital Preservation Coalition (DPC), UK
Trustworthy Repositories Audit & Certification
Big Data
References
Digital preservation
Archival science
Information technology in India
|
16598003
|
https://en.wikipedia.org/wiki/British%20and%20Irish%20Steam%20Packet%20Company
|
British and Irish Steam Packet Company
|
The British and Irish Steam Packet Company Limited was a steam packet and passenger ferry company operating between ports in Ireland and in Great Britain between 1836 and 1992. It was latterly popularly called the B&I, and branded as B&I Line.
The company took over the business of the City of Dublin Steam Packet Company.
Private company
The B&I was established in Dublin in 1836, with an initial fleet of paddle steamers, by a group of Dublin businessmen including James Jameson, Arthur Guinness and Francis Carlton. The company was based on Eden Quay until it moved to No. 46 East Wall in 1860. The fleet changed to iron ships in the 1840s and 1850s to ply the company's routes of Falmouth–Torquay–Southampton–Portsmouth and London, together with Dublin–Wexford–Waterford. The company acquired the London service of the Waterford Steamship Company in 1870, through which it came to dominate this route.
The controlling owner of the B&I was the Liverpool Shipping Company. It was taken over by the Kylsant Royal Mail Company in 1917 and renamed Coast Lines, which by the end of 1917 held all the shares in the B&I. Among the operations of this group were:
Burns and Laird
City of Cork Steam Packet
The Dublin and Lancashire Shipping Co. (1922)
Dundalk and Lancashire Shipping Co. (1922)
Dundalk and Newry Steam Packet Company (1926)
City of Dublin Steam Packet Company, founded 1823 (1920)
The Belfast Steamship Company (1919)
Tedcastle and McCormack of Dublin (1919)
The 1930s were a difficult period for the B&I, and Coast Lines offered the Irish Government a share in the company, but it declined. This was regretted on the outbreak of World War II, when Coast Lines withdrew most of the vessels and placed them at the disposal of the British authorities. During the war, the company sustained casualties with the separate losses of two vessels in Liverpool in 1940: the Innisfallen, and another vessel sunk by a mine.
B&I had offices and owned several buildings and a yard at North Wall Quay (No. 9, the Cartage and Motor Haulage Department, and No. 12, further larger offices), which bore its name in large letters and were demolished in the 1990s to make way for the offices of Citibank, as well as a building at 27 Sir John Rogerson's Quay, which bore its name and is still standing as a protected structure as of 2020.
Nationalisation
B&I was taken over by the Irish Government in 1965. It had ten passenger and cargo vessels, many built in the late 1940s. The new management commenced a major programme of modernisation, launching the car ferries Munster, Innisfallen and Leinster (1969). The Munster and Leinster plied the Dublin–Liverpool route, and the new Innisfallen, sailing out of Cork, changed from Fishguard to Swansea in 1969. The company was also operating new freight ships.
On 25 April 1980 a jetfoil service from Dublin to Liverpool started, but it was withdrawn as it was not a commercial success. The company ran into major financial problems in 1981; these, together with labour disputes, persisted into early 1992, when the company was privatised and taken over by the Irish Continental Group.
References
External links
Irish Ferries Enthusiasts History of the B & I Line
Shipping companies of the Republic of Ireland
Companies established in 1836
Defunct shipping companies of the United Kingdom
Former state-sponsored bodies of the Republic of Ireland
1836 establishments in Ireland
1995 disestablishments in Ireland
Shipping companies of Ireland
Dublin Docklands
|
51357581
|
https://en.wikipedia.org/wiki/Meizu%20PRO%206
|
Meizu PRO 6
|
The Meizu PRO 6 is a smartphone designed and produced by the Chinese manufacturer Meizu, which runs on Flyme OS, Meizu's modified Android operating system. It is the company's latest model of the flagship PRO series, succeeding the Meizu PRO 5. It was unveiled on April 13, 2016, in Beijing.
History
In March 2016, rumors about the PRO 6 possibly featuring force touch technology appeared after a screenshot had been posted on social media.
Later that month, MediaTek announced that the Helio X25 system-on-a-chip had been co-developed with Meizu and would be used exclusively in the PRO 6.
On April 7, 2016, Meizu officially confirmed the launch event of the PRO 6 in Beijing for April 13, 2016.
Release
Pre-orders for the PRO 6 began after the launch event on April 13, 2016.
Sales in mainland China began on April 30, 2016.
Features
Flyme
The Meizu PRO 6 was released with an updated version of Flyme OS, a modified operating system based on Android Marshmallow. It features an alternative, flat design and improved one-handed usability. For the PRO 6, it has been extended with features for the pressure-sensitive 3D Press technology.
Hardware and design
The Meizu PRO 6 features a MediaTek Helio X25 with an array of ten ARM Cortex CPU cores, an ARM Mali-T880 MP4 GPU and 4 GB of RAM, which scores 96,765 points on the AnTuTu benchmark.
This represents an increase of 13% compared to its predecessor, the Meizu PRO 5.
The Meizu PRO 6 has a full-metal body with a slate form factor, being rectangular with rounded corners, and has only one central physical button at the front.
Unlike most other Android smartphones, the PRO 6 doesn't have capacitive buttons nor on-screen buttons. The functionality of these keys is implemented using a technology called mBack, which makes use of gestures with the physical button. This button also includes a fingerprint sensor called mTouch.
Furthermore, a haptic technology called 3D Press has debuted on the PRO 6, which allows the user to perform a different action by pressing the touchscreen instead of tapping.
The PRO 6 is available in three different colors (grey, silver and champagne gold) and comes with either 32 or 64 GB of internal storage.
The PRO 6 features a 5.2-inch Super AMOLED multi-touch capacitive touchscreen display with an FHD resolution of 1080 by 1920 pixels. The pixel density of the display is 426.3 ppi.
In addition to the touchscreen input and the front key, the device has a volume/zoom control and the power/lock button on the right side and a 3.5mm TRS audio jack, which is powered by a dedicated Cirrus Logic CS43L36 Hi-Fi amplifier.
Just like its predecessor, it uses USB-C for both data connectivity and charging.
The Meizu PRO 6 has two cameras. The rear camera has a resolution of 21.16 MP, a ƒ/2.2 aperture and a 6-element lens. Furthermore, the phase-detection autofocus of the rear camera is laser-supported.
The front camera has a resolution of 5 MP, a ƒ/2.0 aperture and a 5-element lens.
Reception
The PRO 6 received mostly favorable reviews. Android Authority gave an overall rating of 7.9 out of 10 points, concluding that the PRO 6 “gets a lot right, such as the build quality, the display [..], an extremely fast and accurate fingerprint scanner, and a great sounding speaker”.
AndroidPit noted that the device offers good performance for an attractive price, concluding that “for $370 you get a phone with above average performance”.
See also
Meizu
Meizu PRO 5
Comparison of smartphones
References
External links
Official product page Meizu
Android (operating system) devices
Mobile phones introduced in 2016
Meizu smartphones
Discontinued smartphones
Mobile phones with pressure-sensitive touch screen
|
300594
|
https://en.wikipedia.org/wiki/Gil%20Amelio
|
Gil Amelio
|
Gilbert Frank Amelio (born March 1, 1943) is an American technology executive. Amelio worked at Bell Labs, Fairchild Semiconductor, and the semiconductor division of Rockwell International, and is a former CEO of National Semiconductor and Apple Computer.
Early life and career
Amelio grew up in Miami, Florida, the son of Italian-born parents, and graduated from Miami High School. He received a bachelor's degree, master's degree, and PhD in physics from the Georgia Institute of Technology. While at Georgia Tech, Amelio was a member of the Pi Kappa Alpha fraternity.
Amelio joined Bell Labs as a researcher in 1968.
In 1970, Amelio was on the team that demonstrated the first working charge-coupled device (CCD).
He moved to Fairchild Semiconductor in 1971, where he led the development of the first commercial CCD image sensors in the early 1970s, and in 1977 became head of the MOS division.
He worked his way up to president of the semiconductor division of Rockwell International, and then its communications systems division.
Amelio joined National Semiconductor as president and chief executive in February 1991.
Apple Computer
In 1994 Amelio joined the board of directors of Apple. After his resignation from National Semiconductor, Amelio became Apple CEO on February 2, 1996, succeeding Michael Spindler. His salary was a reported $990,000 plus bonuses and a $5 million loan. He also received approximately $100,000 for the use of his business jet by Apple the previous year according to the section "Certain Transactions" in the Apple Proxy Statement for 1996.
Amelio cited several problems at Apple including a shortage of cash and liquidity, low-quality products, lack of a viable operating system strategy, undisciplined corporate culture, and fragmentation in trying to do too much and in too many directions. To address these problems Amelio cut costs, reduced Apple's work force by one third, discontinued the Copland operating system project, and oversaw the development of Mac OS 8.
To replace Copland and fulfill the need for a next generation operating system Amelio started negotiations to buy BeOS from Be Inc. but negotiations stalled when Be CEO Jean-Louis Gassée demanded $275 million; Apple was unwilling to offer more than $200 million. In November 1996 Amelio started discussions with Steve Jobs's NeXT, and bought the company on February 4, 1997, for $429 million.
During Amelio's tenure Apple's stock continued to slump and hit a 12-year low in Q2 1997 that was at least partially caused by a single sale of 1.5 million shares of Apple stock on June 26 by an anonymous party who was later confirmed to be Steve Jobs. Apple lost another $708 million. On the July 4, 1997 weekend, Jobs convinced the directors to oust Amelio in a boardroom coup; Amelio submitted his resignation less than a week later; and Jobs then became interim CEO on September 16. In a 2007 interview with technology journalist Gina Smith, Jobs quoted Amelio as having a saying:
Apple is like a ship with a hole in the bottom, leaking water, and my job is to get the ship pointed in the right direction.
It was reported that Amelio's contract gave him about $3.5 million in severance pay, after a $2.3 million performance bonus in 1996.
Post-Apple career
Since 1998 Amelio has been a venture capitalist. In February 2001, Amelio became CEO of Advanced Communications Technologies (ADC). ADC is the United States arm of an Australian firm that has developed a product for the wireless communications industry called SpectruCell.
He became senior partner at Sienna Ventures in Sausalito, California in May 2001.
In 2005 he co-founded Acquicor with ex-Apple CTO Ellen Hancock and Apple co-founder Steve Wozniak.
Acquicor acquired Jazz Semiconductor in early 2007, and sold it in 2008 for a loss.
Amelio was a director and chairman of the Semiconductor Industry Association. Since 1996 he has been an advisor to the Malaysia Multimedia Super Corridor and to Malaysia's Prime Minister. Amelio was a director of AT&T Inc., Pacific Telesis, Chiron Corporation, Sematech, the Georgia Tech Advisory Board (as chairman) and the American Film Institute. In June 2003 he was named chairman of the board of Ripcord Networks, where he joined Steve Wozniak, Ellen Hancock, and other Apple alumni. In October 2005 Amelio joined the board of advisors to Vanguard PAC (now TheVanguard.Org). Amelio is also a member of the board of directors of InterDigital, a wireless R&D company. Gil Amelio is on the advisory board of tech start-up Intelicloud.
He was a contributor to the report An American Imperative (1993), and author of the books Profit from Experience (1995) and On the Firing Line: My 500 Days at Apple (1998).
In November 2020, Amelio joined the board of directors for Nashville-based augmented reality startup VideoBomb.
Awards and honors
Amelio is an IEEE Fellow. He received the IEEE Masaru Ibuka Consumer Electronics Award in 1991 for contributions to the development of the charge-coupled device (CCD) image sensors in consumer video cameras. He has been awarded 16 patents.
References
External links
The Rise and Fall of Apple's Gil Amelio from Low End Mac
Apple CEO Gil Amelio's first interview after being fired by Apple
1943 births
Living people
Directors of Apple Inc.
Fellow Members of the IEEE
Georgia Tech alumni
American corporate directors
Apple Inc. executives
American people of Italian descent
|
1692411
|
https://en.wikipedia.org/wiki/Symarip
|
Symarip
|
Symarip (also known at various stages of their career as The Bees, The Pyramids, Seven Letters and Zubaba) were a British ska and reggae band, originating in the late 1960s, when Frank Pitter and Michael Thomas founded the band as The Bees. The band's name was originally spelled Simaryp, which is an approximate reversal of the word pyramids. Consisting of members of West Indian descent, Simaryp is widely regarded as one of the first skinhead reggae bands, being one of the first to target skinheads as an audience. Their hits included "Skinhead Girl", "Skinhead Jamboree" and "Skinhead Moonstomp", the latter based on the Derrick Morgan song "Moon Hop".
They moved to Germany in 1971, performing reggae and Afro-rock under the name Zubaba. In 1980, the single "Skinhead Moonstomp" was re-issued in the wake of the 2 Tone craze, hitting No. 54 on the UK Singles Chart. The band officially split in 1985 after releasing the album Drunk & Disorderly as The Pyramids. The album was released by Ariola Records and was produced by Stevie B.
Pitter and Ellis moved back to England, where Ellis continued performing as a solo artist, sometimes using the stage name 'Mr. Symarip'. Mike Thomas met a Finnish woman while living in Switzerland and relocated to Finland, where he laid the groundwork for Finnish reggae culture through his band 'Mike T. Saganor'. Monty Neysmith moved to the United States, where he toured as a solo artist.
In 2004, Trojan Records released a best of album including a new single by Neysmith and Ellis, "Back From the Moon". In 2005, Neysmith and Ellis performed together at Club Ska in England, and a recording of the concert was released on Moon Ska Records as Symarip – Live at Club Ska. In April 2008, they headlined the Ska Splash Festival in Lincolnshire as Symarip, and later performed at the Endorse-It and Fordham Festivals. Pitter and Thomas now perform in a different band as Symarip Pyramid. Their Back From The Moon Tour 2008–2009 was with The Pioneers. In 2009, to celebrate the rebirth of the band and the reunion of the two original members, Trojan Records released a compilation album, Ultimate Collection. Pitter holds all copyright and trademark rights for the name 'Symarip Pyramid'.
Line-up
Roy Ellis – Singer, trombone (1969–1985)
Josh Roberts – Guitar (1969–1985)
Michael "Mik" Thomas – Bass guitar (1969 – 1985, 2008 – present)
Frank Pitter – Drums (1969 – 1985, 2008 – present)
Monty Neysmith – Keyboards, including Hammond organ (1969 – 1985, 2010 – present)
Roy Bug Knight – Saxophone (2008 – present)
Johney Johnson – Trumpet (2008 – present)
Carl Grifith – Tenor & alto sax (2008 – present)
Partial discography
Albums
The Pyramids – The Pyramids – President – PTL-1021 (1968)
Symarip – Skinhead Moonstomp – Trojan – TBL-102 (1970)
Simaryp – Skinhead Moonstomp – Trojan – TRLS187 (1980)
The Pyramids – Drunk and Disorderly – Ariola (1985)
Symarip/The Pyramids/Seven Letters – The Best Of – Trojan TJACD154 (2004)
Symarip/The Pyramids – Ultimate Collection – Trojan (2009)
Singles
Blue Beat BB-386A "Jesse James Rides Again" (as The Bees) 1967
Blue Beat BB-386B "The Girl in My Dreams" (as The Bees) 1967
Columbia Blue Beat DB-101A "Jesse James Rides Again" (as The Bees) 1967
Columbia Blue Beat DB-101B "The Girl in My Dreams" (as The Bees) 1967
Columbia Blue Beat DB-111A "Prisoner from Alcatraz" (as The Bees) 1967
Columbia Blue Beat DB-111B "The Ska's The Limit" (as The Bees) 1967
President PT-161A "Train Tour To Rainbow City" 1967
President PT-161B "John Chewey" 1967
President PT-177A "Wedding in Peyton Place" 1968
President PT-177B "Girls, Girls, Girls" 1968
President PT-195A "All Change on the Bakerloo Line" 1968
President PT-195B "Playing Games" 1968
President PT-206A "Mexican Moonlight" 1968
President PT-206B "Mule" 1968
President PT-225A "Tisko My Darling" 1968
President PT-225B "Movement All Around" 1968
President PT-243A "Do Re Mi" 1969
President PT-243B "I'm Outnumbered" 1969
President PT-274A "I'm a Man" 1969
President PT-274B "Dragonfly" 1969
Attack ATT-8013A "I'm A Puppet" (as Symarip) 1969
Attack ATT-8013B "Vindication" (as Symarip) 1969
Doctor Bird DB-1189A "People Get Ready" (as Seven Letters) 1969
Doctor Bird DB-1189B "The Fit" (as Seven Letters) 1969
Doctor Bird DB-1194A "Please Stay" (as Seven Letters) 1969
Doctor Bird DB-1194B "Special Beat" (as Seven Letters) 1969
Doctor Bird DB-1195A "Flour Dumpling" (as Seven Letters) 1969
Doctor Bird DB-1195B "Equality" (as Seven Letters) 1969
Doctor Bird DB-1206A "Mama Me Want Girl" (as Seven Letters) 1969
Doctor Bird DB-1206B "Sentry" (as Seven Letters) 1969
Doctor Bird DB-1207A "Soul Crash (Soul Serenade)" (as Seven Letters) 1969
Doctor Bird DB-1207B "Throw Me Things" (as Seven Letters) 1969
Doctor Bird DB-1208A "There Goes My Heart" (as Seven Letters) 1969
Doctor Bird DB-1208B "Wish" (as Seven Letters) 1969
Doctor Bird DB-1209A "Bam Bam Baji" (as Seven Letters) 1969
Doctor Bird DB-1209B "Hold Him Joe" (as Seven Letters) 1969
Doctor Bird DB-1306A "Fung Sure" (as Simaryp) 1969
Doctor Bird DB-1306B "Tomorrow at Sundown" (as Simaryp) 1969
Doctor Bird DB-1307A "Stay With Him" 1969
Doctor Bird DB-1307B "Chicken Mary" 1969
Treasure Isle TI-7050A "Skinhead Moonstomp" 1969
Treasure Isle TI-7050B "Must Catch A Train" 1969
Treasure Isle TI-7054A "Parson's Corner" 1970
Treasure Isle TI-7054B "Redeem" 1970
Treasure Isle TI-7055A "La Bella Jig" 1970
Treasure Isle TI-7055B "Holiday by the Sea" 1970
Attack ATT-8013A "I'm A Puppet" 1970
Attack ATT-8013B "Vindication" 1970
Duke DU-80A "Geronimo" 1970
Duke DU-80B "Feel Alright" 1970
Trojan TR-7755A "Feel Alright" 1970
Trojan TR-7755B "Telstar" 1970
Trojan TR-7770A "To Sir With Love" 1970
Trojan TR-7770B "Reggae Shuffle" 1970
Trojan TR-7803A "All For You" 1971
Trojan TR-7803B "All For You" (version) 1971
Trojan TR-7814B (1) "Stingo" 1971
Trojan TR-7814B (2) "Geronimo" 1971
Creole CR-1003A "Mosquito Bite"
Creole CR-1003B "Mother's Bath"
Creole CR-1006A "Can't Leave Now"
Creole CR-1006B "Teardrops"
Rhino RNO-129A "Jesse James Rides Again" 1974
References
External links
Biography on Trojan Records site
Mr. Symarip – Roy Ellis
Interview with Symarip member
Symarip discography
Symarip Pyramids Myspace profile
Interview with Roy Ellis on Litopia
First-wave ska groups
British reggae musical groups
British ska musical groups
Skinhead
Trojan Records artists
Blue Beat Records artists
|
59583203
|
https://en.wikipedia.org/wiki/Gang%20Hua
|
Gang Hua
|
Gang Hua (born 1979) is a Chinese-American computer scientist who specializes in the field of computer vision and pattern recognition. He is an IEEE Fellow, an IAPR Fellow and an ACM Distinguished Scientist. He is a key contributor to Microsoft's facial recognition technologies.
Biography
Gang Hua is the Vice President and Chief Scientist of Wormpex AI Research. His research focuses on computer vision, pattern recognition, machine learning and robotics, working towards general artificial intelligence, with primary applications in cloud and edge intelligence and a current focus on new retail intelligence.
Before that, he served in various roles at Microsoft (2015-18) as the Science/Technical Adviser to the CVP of the Computer Vision Group, Director of the Computer Vision Science Team in Redmond and Taipei ATL, and Principal Researcher/Research Manager at Microsoft Research. He was an Associate Professor in Computer Science at Stevens Institute of Technology (2011-15). During 2014-15, he took a leave of absence and worked at Amazon on the Amazon Go project. He was a Visiting Researcher (2011-14) and a Research Staff Member (2010-11) at IBM Thomas J. Watson Research Center, a Senior Researcher (2009-10) at Nokia Research Center Hollywood, and a Scientist (2006-09) at Microsoft Live Labs.
He received his Ph.D. degree in Electrical Engineering and Computer Engineering from Northwestern University in 2006. He received his M.S. degree in Pattern Recognition and Intelligent System in 2002 and B.S. degree in Control Engineering and Science in 1999, both from Xi'an Jiaotong University. In 1994, he was selected to the Special Class for Gifted Young in Xi'an Jiaotong University.
Services
He is a general chair for IEEE/CVF International Conference on Computer Vision 2025. He is a program chair for IEEE/CVF Conference on Computer Vision and Pattern Recognition 2019 and 2022.
He is also a member of the editorial board of the International Journal of Computer Vision, an Associate Editor in Chief for Computer Vision and Image Understanding, and an Associate Editor for the IAPR journal Machine Vision and Applications. He was an Associate Editor for IEEE Transactions on Image Processing for two terms (2012-2015, 2017-2019) and for IEEE Transactions on Circuits and Systems for Video Technology (2015-2019), and Vision and View Department Editor for IEEE MultiMedia magazine (2011-16).
Awards
In 2018, Hua was elevated to a Fellow of Institute of Electrical and Electronics Engineers for contributions to Facial Recognition in Images and Videos. In 2016, Hua was elected as a Fellow of International Association for Pattern Recognition for contributions to visual computing and learning from unconstrained images and videos and a Distinguished Scientist of Association for Computing Machinery for contributions to Multimedia and Computer Vision. He is the recipient of the 2015 IAPR Young Biometrics Investigator Award for contributions to Unconstrained Face Recognition in Images and Videos.
References
1979 births
Living people
Chinese computer scientists
Microsoft people
Microsoft Research people
Fellow Members of the IEEE
Scientists from Hunan
Xi'an Jiaotong University alumni
Northwestern University alumni
Stevens Institute of Technology faculty
IBM people
|
67478609
|
https://en.wikipedia.org/wiki/T2%20SDE
|
T2 SDE
|
The T2 SDE (System Development Environment) is an open source Linux distribution kit. It is primarily developed by René Rebe.
History
ROCK Linux was started in the summer of 1998 by Claire Wolf. T2 SDE was forked from it in 2004, when developers were dissatisfied with the project. ROCK Linux was discontinued in 2010.
In August 2006, version 6.0 was released with ISO images for AMD64, i386, PPC64 and SPARC64. In July 2010, version 8.0 (codenamed "Phoenix") was released. In April 2021, version 21.4 was released.
Usage
Puppy Linux has used T2 SDE for compiling their packages. AskoziaPBX has used a fork of T2 SDE because it had support for Blackfin. Archivista made a document management system based on T2 SDE.
Hardware support
T2 SDE supports the x86-64, x86, arm64, arm, RISC-V (32 and 64 bit), ppc64le, ppc64-32, sparc64, MIPS64, mipsel, hppa, m68k, alpha, and ia64 architectures. The PowerPC platform is well supported. There are ISO images available, or users can build it themselves.
T2 SDE has been shown to run on the Nintendo Wii. It also supports the SGI Octane and the PlayStation 3.
See also
OpenEmbedded
Gentoo Linux
Linux From Scratch
References
External links
Distrowatch
YouTube channel
Original official website of ROCK Linux
Light-weight Linux distributions
Embedded Linux distributions
Linux distributions
|
5172349
|
https://en.wikipedia.org/wiki/Library%20and%20information%20science
|
Library and information science
|
Library and information science (LIS) (sometimes given as the plural library and information sciences) is a branch of academic disciplines that deals generally with organization, access, collection, and protection/regulation of information, whether in physical (e.g. art, legal proceedings) or digital forms. By the late 1960s, mainly due to the meteoric rise of computing power and the new academic disciplines formed therefrom, academic institutions began to add the term "information science" to their names. The first school to do this was at the University of Pittsburgh in 1964. More schools followed during the 1970s and 1980s, and by the 1990s almost all library schools in the USA had added information science to their names. Although there are exceptions, similar developments have taken place in other parts of the world. In Denmark, for example, the 'Royal School of Librarianship' changed its English name to The Royal School of Library and Information Science in 1997.
In spite of various trends to merge the two fields, some consider the two original disciplines, library science and information science, to be separate. However, it is common today to use the terms as synonyms or to drop the term "library" and to speak about information departments or I-schools. There have also been attempts to revive the concept of documentation and to speak of library, information and documentation studies (or science).
Relations between library science, information science and LIS
Tefko Saracevic (1992, p. 13) argued that library science and information science are separate fields.
Another indication of the different uses of the two terms is the indexing in UMI's Dissertations Abstracts. In Dissertations Abstracts Online in November 2011, 4,888 dissertations were indexed with the descriptor LIBRARY SCIENCE and 9,053 with the descriptor INFORMATION SCIENCE. For the year 2009 the numbers were 104 LIBRARY SCIENCE and 514 INFORMATION SCIENCE. 891 dissertations were indexed with both terms (36 in 2009).
It should be considered that information science grew out of documentation science and therefore has a tradition of considering scientific and scholarly communication, bibliographic databases, subject knowledge, terminology, etc. Library science, on the other hand, has mostly concentrated on libraries and their internal processes and best practices. It is also relevant to consider that information science used to be done by scientists, while librarianship has been split between public libraries and scholarly research libraries. Library schools have mainly educated librarians for public libraries and not shown much interest in scientific communication and documentation. When information scientists entered library schools from 1964 onward, they brought with them competencies in relation to information retrieval in subject databases, including concepts such as recall and precision, Boolean search techniques, query formulation and related issues. Subject bibliographic databases and citation indexes provided a major step forward in information dissemination - and also in the curriculum at library schools.
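To make the recall and precision measures mentioned above concrete, the short Python sketch below computes both for a single hypothetical query; the document identifiers and relevance judgments are illustrative assumptions, not data from any study cited here.

# Hypothetical relevance judgments for one query: the documents that are truly relevant.
relevant = {"doc1", "doc3", "doc4", "doc7"}

# Hypothetical result set returned by a Boolean search for the same query.
retrieved = {"doc1", "doc2", "doc3", "doc9"}

true_positives = relevant & retrieved                 # relevant documents that were actually found
precision = len(true_positives) / len(retrieved)      # fraction of retrieved items that are relevant
recall = len(true_positives) / len(relevant)          # fraction of relevant items that were retrieved

print(f"precision = {precision:.2f}, recall = {recall:.2f}")   # precision = 0.50, recall = 0.50

The trade-off between the two measures is exactly the kind of discrimination problem the text describes: broadening a query tends to raise recall at the cost of precision, and vice versa.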
Julian Warner (2010) suggests that the information and computer science tradition in information retrieval may broadly be characterized as query transformation, with the query articulated verbally by the user in advance of searching and then transformed by a system into a set of records. Librarianship and indexing, on the other hand, have placed an implicit stress on selection power, enabling the user to make relevant selections.
Difficulties defining LIS
"The question, 'What is library and information science?' does not elicit responses of the same internal conceptual coherence as similar inquiries as to the nature of other fields, e.g., 'What is chemistry?', 'What is economics?', 'What is medicine?' Each of those fields, though broad in scope, has clear ties to basic concerns of their field. [...] Neither LIS theory nor practice is perceived to be monolithic nor unified by a common literature or set of professional skills. Occasionally, LIS scholars (many of whom do not self-identify as members of an interreading LIS community, or prefer names other than LIS), attempt, but are unable, to find core concepts in common. Some believe that computing and internetworking concepts and skills underlie virtually every important aspect of LIS, indeed see LIS as a sub-field of computer science! [Footnote III.1] Others claim that LIS is principally a social science accompanied by practical skills such as ethnography and interviewing. Historically, traditions of public service, bibliography, documentalism, and information science have viewed their mission, their philosophical toolsets, and their domain of research differently. Still others deny the existence of a greater metropolitan LIS, viewing LIS instead as a loosely organized collection of specialized interests often unified by nothing more than their shared (and fought-over) use of the descriptor information. Indeed, claims occasionally arise to the effect that the field even has no theory of its own." (Konrad, 2007, p. 652-653).
A multidisciplinary, interdisciplinary or monodisciplinary field?
The Swedish researcher Emin Tengström (1993) described cross-disciplinary research as a process, not a state or structure. He differentiates three levels of ambition regarding cross-disciplinary research:
The "Pluridisciplinary" or "multidisciplinarity" level
The genuine cross-disciplinary level: "interdisciplinarity"
The discipline-forming level "transdisciplinarity"
What is described here is a view of social fields as dynamic and changing. Library and information science is viewed as a field that started as a multidisciplinary field based on literature, psychology, sociology, management, computer science etc., which is developing towards an academic discipline in its own right. However, the following quote seems to indicate that LIS is actually developing in the opposite direction:
Chua & Yang (2008) studied papers published in Journal of the American Society for Information Science and Technology in the period 1988-1997 and found, among other things: "Top authors have grown in diversity from those being affiliated predominantly with library/information-related departments to include those from information systems management, information technology, business, and the humanities. Amid heterogeneous clusters of collaboration among top authors, strongly connected crossdisciplinary
coauthor pairs have become more prevalent. Correspondingly, the distribution of top keywords’ occurrences that leans heavily on core information science has shifted towards other subdisciplines such as information technology and sociobehavioral science."
A more recent study revealed that 31% of the papers published in 31 LIS journals from 2007 through 2012 were by authors in academic departments of library and information science (i.e., those offering degree programs accredited by the American Library Association or similar professional organizations in other countries). Faculty in departments of computer science (10%), management (10%), communication (3%), the other social sciences (9%), and the other natural sciences (7%) were also represented. Nearly one-quarter of the papers in the 31 journals were by practicing librarians, and 6% were by others in non-academic (e.g., corporate) positions.
As a field with its own body of interrelated concepts, techniques, journals, and professional associations, LIS is clearly a discipline. But by the nature of its subject matter and methods LIS is just as clearly an interdiscipline, drawing on many adjacent fields (see below).
A fragmented adhocracy
Richard Whitley (1984, 2000) classified scientific fields according to their intellectual and social organization and described management studies as a ‘fragmented adhocracy’, a field with a low level of coordination around a diffuse set of goals and a non-specialized terminology; but with strong connections to the practice in the business sector. Åström (2006) applied this conception to the description of LIS.
Scattering of the literature
Meho & Spurgin (2005) found that in a list of 2,625 items published between 1982 and 2002 by 68 faculty members of 18 schools of library and information science, only 10 databases provided significant coverage of the LIS literature. Results also show that restricting the data sources to one, two, or even three databases leads to inaccurate rankings and erroneous conclusions. Because no database provides comprehensive coverage of the LIS literature, researchers must rely on a wide range of disciplinary and multidisciplinary databases for ranking and other research purposes. Even when the nine most comprehensive databases in LIS was searched and combined, 27.0% (or 710 of 2,635) of the publications remain not found.
The unique concern of library and information science
"Concern for people becoming informed is not unique to LIS, and thus is insufficient to differentiate LIS from other fields. LIS are a part of a larger enterprise." (Konrad, 2007, p. 655).
"The unique concern of LIS is recognized as: Statement of the core concern of LIS:
Humans becoming informed (constructing meaning) via intermediation between inquirers and instrumented records. No other field has this as its concern. " (Konrad, 2007, p. 660)
"Note that the promiscuous term information does not appear in the above statement circumscribing the field's central concerns: The detrimental effects of the ambiguity this term provokes are discussed above (Part III). Furner [Furner 2004, 427] has shown that discourse in the field is improved where specific terms are utilized in place of the i-word for specific senses of that term." (Konrad, 2007, p. 661).
Michael Buckland wrote: "Educational programs in library, information and documentation are concerned with what people know, are not limited to technology, and require wide-ranging expertise. They differ fundamentally and importantly from computer science programs and from the information systems programs found in business schools.".
Bawden and Robinson argue that while Information Science has overlaps with numerous other disciplines with interest in studying communication, it is unique in that it is concerned with all aspects of the communication chain. For example, Computer Science may be interested in the indexing and retrieval, sociology with user studies, and publishing (business) with dissemination, whereas information science is interested in the study of all of these individual areas and the interactions between them.
The organization of information and information resources is one of the fundamental aspects of LIS and is an example of both LIS's uniqueness and its multidisciplinary origins. Some of the main tools used by LIS toward this end to provide access to the digital resources of modern times (particularly theory relating to indexing and classification) originated in the 19th century to assist humanity's effort to make its intellectual output accessible by recording, identifying, and providing bibliographic control of printed knowledge. The origins of some of these tools are even earlier. For example, in the 17th century, during the 'golden age of libraries', publishers and sellers seeking to take advantage of the burgeoning book trade developed descriptive catalogs of their wares for distribution – a practice that was adopted and further extrapolated by many libraries of the time to cover areas like philosophy, sciences, linguistics, medicine, etc. In this way, a business concern of publishers – keeping track of and advertising inventory – was developed into a system for organizing and preserving information by the library.
The development of metadata is another area that exemplifies the aim of LIS to be something more than a mishmash of several disciplines – that uniqueness Bawden and Robinson describe. Pre-Internet classification systems and cataloging systems were mainly concerned with two objectives: 1. to provide rich bibliographic descriptions and relations between information objects and 2. to facilitate sharing of this bibliographic information across library boundaries. The development of the Internet and the information explosion that followed found many communities needing mechanisms for the description, authentication and management of their information. These communities developed taxonomies and controlled vocabularies to describe their knowledge, as well as unique information architectures to communicate these classifications, and libraries found themselves acting as liaison or translator between these metadata systems. Of course, the concerns of cataloging in the Internet era have gone beyond simple bibliographic descriptions. The need for descriptive information about the ownership and copyright of a digital product – a publishing concern – and description for the different formats and accessibility features of a resource – a sociological concern – show the continued development and cross-discipline necessity of resource description.
In the 21st century, the usage of open data, open source and open protocols like OAI-PMH has allowed thousands of libraries and institutions to collaborate on the production of global metadata services previously offered only by increasingly expensive commercial proprietary products. Examples include BASE and Unpaywall, which automate the search for academic papers across thousands of repositories held by libraries and research institutions.
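As an illustration of how such open protocols work in practice, the Python sketch below issues a single OAI-PMH ListRecords request and prints the Dublin Core titles it returns; the repository URL is a placeholder, and a real harvester would also follow resumption tokens and handle protocol errors.

from urllib.parse import urlencode
from urllib.request import urlopen
import xml.etree.ElementTree as ET

# Placeholder endpoint; any OAI-PMH-compliant repository exposes the same verbs.
BASE_URL = "https://repository.example.org/oai"

params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}   # request Dublin Core metadata
with urlopen(BASE_URL + "?" + urlencode(params)) as response:
    tree = ET.parse(response)

# Print the Dublin Core titles found in this page of results.
for title in tree.iter("{http://purl.org/dc/elements/1.1/}title"):
    print(title.text)

Because every compliant repository answers the same small set of verbs (Identify, ListRecords, GetRecord, and so on), aggregators can harvest metadata from thousands of institutions with essentially this one loop.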
Christopher M. Owusu-Ansah argued that many African universities have employed distance education to expand access to education, and that digital libraries can ensure seamless access to information for distance learners.
LIS theories
Julian Warner (2010, p. 4-5) suggests that
The domain analytic approach (e.g., Hjørland 2010) suggests that the relevant criteria for making discriminations in information retrieval are scientific and scholarly criteria. In some fields (e.g. evidence-based medicine) the relevant distinctions are very explicit. In other cases they are implicit or unclear. At the basic level, the relevance of bibliographical records are determined by epistemological criteria of what constitutes knowledge.
Among other approaches, Evidence Based Library and Information Practice should also be mentioned.
Journals
(see also List of LIS Journals in India page, :Category:Library science journals and Journal Citation Reports for listing according to Impact factor)
Some core journals in LIS are:
Annual Review of Information Science and Technology (ARIST) (1966–2011)
El Profesional de la Información (es) (EPI) (1992-) (Formerly Information World en Español)
Information Processing and Management
Information Research: An international electronic journal (IR) (1995-)
Italian Journal of Library and Information Studies (JLIS.it)
Journal of Documentation (JDoc) (1945-)
Journal of Information Science (JIS) (1979-)
Journal of the Association for Information Science and Technology (Formerly Journal of the American Society for Information Science and Technology) (JASIST) (1950-)
Knowledge Organization (journal)
Library Literature and Information Science Retrospective
Library Trends (1952-)
Scientometrics (journal) (1978-)
The Library Quarterly (LQ) (1931-)
Important bibliographical databases in LIS include, among others, Social Sciences Citation Index and Library and Information Science Abstracts.
Conferences
This is a list of some of the major conferences in the field.
American Library Association Annual Conference and Exhibition
Annual meeting of the American Society for Information Science and Technology
Conceptions of Library and Information Science
i-Schools' "iConferences"
ISIC - the Information Behaviour Conference
The International Federation of Library Associations and Institutions (IFLA): World Library and Information Congress
The international conferences of the International Society for Knowledge Organization (ISKO)
Common subfields
An advertisement for a full Professor in information science at the Royal School of Library and Information Science, spring 2011, provides one view of which subdisciplines are well-established: "The research and teaching/supervision must be within some (and at least one) of these well-established information science areas
a. Knowledge organization
b. Library studies
c. Information architecture
d. Information behavior
e. Interactive information retrieval
f. Information systems
g. Scholarly communication
h. Digital literacy (cf information literacy)
i. Bibliometrics or scientometrics
j. Interaction design and user experience"
k. Digital library
There are other ways to identify subfields within LIS, for example bibliometric mapping and comparative studies of curricula.
Bibliometric maps of LIS have been produced by, among others, Vickery & Vickery (1987, frontispiece), White & McCain (1998), Åström (2002), 2006) and Hassan-Montero & Herrero-Solana (2007).
An example of a curriculum study is Kajberg & Lørring, 2005. In this publication the following data are reported (p. 234):
"Degree of overlap of the ten curricular themes with subject areas in the current curricula of responding LIS schools
Information seeking and Information retrieval 100%
Library management and promotion 96%
Knowledge management 86%
Knowledge organization 82%
Information literacy and learning 76%
Library and society in a historical perspective (Library history) 66%
The Information society: Barriers to the free access to information 64%
Cultural heritage and digitisation of the cultural heritage (Digital preservation) 62%
The library in the multi-cultural information society: International and intercultural communication 42%
Mediation of culture in a special European context 26% "
There is often an overlap between these subfields of LIS and other fields of study. Most information retrieval research, for example, belongs to computer science. Knowledge management is considered a subfield of management or organizational studies.
See also
Archival science
Authority control
Bibliography
Digital Asset Management (DAM)
Documentation science
Education for librarianship
Glossary of library and information science
I-school
Information history
Information systems
Knowledge management
Library and information scientist
Metadata
Museology
Museum informatics
Records Management
References
Vickery, Brian & Vickery, Alina (1987). Information science in theory and practice. London: Bowker-Saur.
Further reading
External links
Birger Hjørland (2017). "Library and Information Science". In Encyclopedia of Library and Information Science, eds. Birger Hjørland and Claudio Gnoli.
Information science
Librarians
Library science education
|
889606
|
https://en.wikipedia.org/wiki/88th%20Infantry%20Division%20%28United%20States%29
|
88th Infantry Division (United States)
|
The 88th Infantry Division was an infantry division of the United States Army that saw service in both World War I and World War II. It was one of the first of the Organized Reserve divisions to be called into federal service, created nearly "from scratch" after the implementation of the draft in 1940. Previous divisions were composed of either Regular Army or National Guard personnel. Much of the experience in reactivating it was used in the subsequent expansion of the U.S. Army.
By the end of World War II the 88th Infantry Division had fought its way to the northernmost extreme of Italy. In early May 1945, troops of its 349th Infantry Regiment linked up at Vipiteno in the Italian Alps with the 103d Infantry Division of VI Corps, U.S. Seventh Army (part of the 6th Army Group), which had raced south through Bavaria into Innsbruck, Austria.
World War I
Activated: 5 August 1917, Camp Dodge, Iowa
Overseas: 7 September 1918
Major operations: Did not participate as a division
Casualties: Total-78 (KIA-12; WIA-66)
Commanders:
Maj. Gen. Edward H. Plummer (25 August 1917)
Brig. Gen. Robert N. Getty (27 November 1917)
Maj. Gen. Edward H. Plummer (19 February 1918)
Brig. Gen. Robert N. Getty (15 March 1918)
Brig. Gen. William D. Beach (24 May 1918)
Maj. Gen. William Weigel (10 September 1918)
Inactivated: 10 June 1919, Camp Dodge, Iowa
Composition
Initially, personnel for the division were furnished by Selective Service men from Illinois, Iowa, Minnesota, and North Dakota. The 88th Division, like many National Army divisions, suffered heavily from transfers to Regular Army and National Guard units preparing to go overseas, delaying its combat readiness. In October and November 1917, men were transferred to the 34th and 87th Divisions. In February 1918, 12,000 men arrived from Iowa and Minnesota to bring the division to full strength, but, subsequently, about 16,000 men were transferred to the 30th, 33rd, 35th, 82nd, and 90th Divisions. In May and June 1918, 10,000 Selective Service men, mostly from Missouri, Nebraska, and South Dakota, joined the division.
The division was composed of the following units:
Headquarters, 88th Division
175th Infantry Brigade
349th Infantry Regiment
350th Infantry Regiment
338th Machine Gun Battalion
176th Infantry Brigade
351st Infantry Regiment
352nd Infantry Regiment
339th Machine Gun Battalion
163rd Field Artillery Brigade
337th Field Artillery Regiment (155 mm)
338th Field Artillery Regiment (75 mm)
339th Field Artillery Regiment (155 mm)
313th Trench Mortar Battery
Headquarters Troop, 88th Division
337th Machine Gun Battalion
338th Engineer Regiment
313th Field Signal Battalion
313th Train Headquarters and Military Police
313th Ammunition Train
313th Supply Train
313th Engineer Train
313th Sanitary Train
349th, 350th, 351st, and 352nd Ambulance Companies and Field Hospitals
Interwar period
The division was reconstituted in the Organized Reserve on 24 June 1921 and assigned to the states of Minnesota, Iowa, and North Dakota. The headquarters was organized on 2 September 1921.
World War II
Ordered into active military service: 15 July 1942, Camp Gruber, Oklahoma
Overseas: 6 December 1943
Distinguished Unit Citations: 3
Campaigns: Rome-Arno, North Apennines, Po Valley
Days of combat: 344
Awards: Medal of Honor: 3; Distinguished Service Cross (United States): 40; Distinguished Service Medal (United States): 2; Silver Star: 522; Legion of Merit: 66; Soldier's Medal: 19; Bronze Star Medal: 3,784.
Unit citations: Third Battalion, 351st Infantry Regiment (action vicinity Laiatico; 9–13 July 1944). Second Battalion, 350th Infantry Regiment (action on Mt. Battaglia, 27 Sept – 3 Oct 1944). Second Battalion, 351st Infantry Regiment (action vicinity Mt. Cappello, 27 Sept – 1 Oct 1944).
Commanders:
Maj. Gen. John E. Sloan (July 1942 – September 1944)
Maj. Gen. Paul W. Kendall (September 1944 – July 1945)
Brig. Gen. James C. Fry (July–November 1945)
Maj. Gen. Bryant Moore (November 1945 to inactivation)
Inactivated: 24 October 1947 in Italy
Combat chronicle
First entered combat: Advance party on the night of 3–4 January 1944 in support of the Monte Cassino attacks.
First organization committed to the line: 2nd Battalion, 351st Infantry Regiment plus attachments
First combat fatality: 3 January 1944
Began post-war POW command: 7 June 1945. Responsible for guarding and later repatriating 324,462 German POWs.
The 88th Infantry Division was one of the first all-draftee divisions of the United States Army to enter the war. Ordered into active military service at Camp Gruber, Oklahoma, the division, commanded by Major General John E. Sloan, arrived at Casablanca, French Morocco on 15 December 1943, and moved to Magenta, Algeria, on 28 December for intensive training. Destined to spend the war fighting on the Italian Front, the 88th Division arrived at Naples, Italy on 6 February 1944, and concentrated around Piedimonte d'Alife for combat training. An advance element went into the line before Monte Cassino on 27 February, and the entire division relieved the battered British 46th Infantry Division along the Garigliano River in the Minturno area on 5 March. A period of defensive patrols and training followed. The 88th formed part of Major General Geoffrey Keyes's II Corps, part of the U.S. Fifth Army, under Lieutenant General Mark W. Clark.
After being inspected by the Fifth Army commander on 5 May, the 88th Division, six days later, drove north to take Spigno, Mount Civita, Itri, Fondi, and Roccagorga, reached Anzio, 29 May, and pursued the enemy into Rome, being the first unit of the Fifth Army into the city on 4 June, two days before the Normandy landings, after a stiff engagement on the outskirts of the city. An element of the 88th is credited with being first to enter the Eternal City. After continuing across the Tiber to Bassanelio the 88th retired for rest and training, 11 June. The division went into defensive positions near Pomerance on 5 July, and launched an attack toward Volterra on the 8th, taking the town the next day. Laiatico fell on the 11th, Villamagna on the 13th, and the Arno River was crossed on the 20th although the enemy resisted bitterly.
After a period of rest and training, the 88th Division, now commanded by Major General Paul Wilkins Kendall, opened its assault on the Gothic Line on 21 September, and advanced rapidly along the Firenzuola-Imola road, taking Mount Battaglia (Casola Valsenio, RA) on the 28th. The enemy counterattacked savagely and heavy fighting continued on the line toward the Po Valley. The strategic positions of Mount Grande and Farnetto were taken on 20 and 22 October. From 26 October 1944 to 12 January 1945, the 88th entered a period of defensive patrolling in the Mount Grande-Mount Cerrere sector and the Mount Fano area. From 24 January to 2 March 1945, the division defended the Loiano-Livergnano area and after a brief rest returned to the front. The drive to the Po Valley began on 15 April. Monterumici fell on the 17th after an intense artillery barrage and the Po River was crossed at Revere-Ostiglia on 24-25 April, as the 88th pursued the enemy toward the Alps. The cities of Verona and Vicenza were captured on the 25th and 28th and the Brenta River was crossed on 30 April. The 88th was driving through the Dolomite Alps toward Innsbruck, Austria where it linked up with the 103rd Infantry Division, part of the U.S. Seventh Army, when the hostilities ended on 2 May 1945. The end of World War II in Europe came six days later. Throughout the war the 88th Infantry Division was in combat for 344 days.
Casualties
Total battle casualties: 13,111
Killed in action: 2,298
Wounded in action: 9,225
Missing in action: 941
Prisoner of war: 647
Units
Units assigned to the division during World War II included:
Headquarters, 88th Infantry Division
349th Infantry Regiment
350th Infantry Regiment
351st Infantry Regiment
Headquarters and Headquarters Battery, 88th Infantry Division Artillery
337th Field Artillery Battalion
338th Field Artillery Battalion
339th Field Artillery Battalion
913th Field Artillery Battalion
313th Engineer Combat Battalion
313th Medical Battalion
88th Cavalry Reconnaissance Troop (Mechanized)
Headquarters, Special Troops, 88th Infantry Division
788th Ordnance Light Maintenance Company
88th Quartermaster Company
88th Signal Company
Military Police Platoon
Band
88th Counterintelligence Corps Detachment
Post war
After the war, the 88th Infantry Division absorbed some personnel and units from the 34th Infantry Division and served on occupation duty in Italy, guarding the Morgan Line from positions in Italy and Trieste until 15 September 1947, when the Italian peace treaty came into force. The 351st Infantry was relieved from assignment to the division on 1 May 1947 and served as the temporary military government of the Free Territory of Trieste, securing the new independent state between Italy and Yugoslavia on behalf of the United Nations Security Council. Designated TRUST (Trieste United States Troops), the command served as the front line in the Cold War from 1947 to 1954, including confrontations with Yugoslavian forces.
In October 1954 the mission ended upon the signing of the Memorandum of Understanding of London establishing a temporary civil administration in the Anglo-American Zone of the Free Territory of Trieste, entrusted to the responsibility of the Italian Government.
TRUST units, which included a number of 88th divisional support units, all bore a unit patch which was the coat of arms of the Free Territory of Trieste superimposed over the divisional quatrefoil, over which was a blue scroll containing the designation "TRUST" in white.
Cold War and beyond
The 88th Army Reserve Command (ARCOM) was formed at Fort Snelling in January, 1968, as one of 18 ARCOMs which were organized to provide command and control to Army Reserve units. The initial area of responsibility for the 88th ARCOM included Minnesota and Iowa, and this area was later expanded to include Wisconsin. (Note: ARCOMs were authorized to use the number and shoulder sleeve insignia of infantry divisions with the same number; however, ARCOMs did not inherit the lineage and honors of the divisions because it is against DA policy for a TDA unit, such as an ARCOM, to perpetuate the lineage and honors of a TO&E unit, such as a division.)
In 1996, when the Army Reserve's command structure was revised, the 88th Regional Support Command (88th RSC) was established at Fort Snelling. Its mission was to provide command and control for Reserve units in a six state region, which included Minnesota, Wisconsin, Illinois, Indiana, Michigan and Ohio. In addition, the 88th RSC ensured operational readiness, provided area support services, and supported emergency operations in its area of responsibility.
In 2003, the Army Reserve's command structure was again revised, and the 88th Regional Readiness Command (88th RRC) was formed at Fort Snelling with responsibility for USAR units in the same six states included in the 88th RSC. Various combat support units mobilized and deployed for Operation Iraqi Freedom in late 2003 to mid-2004.
In its 2005 BRAC Recommendations, DoD recommended to realign Fort Snelling, MN by disestablishing the 88th Regional Readiness Command. This recommendation was part of a larger recommendation to re-engineer and streamline the Command and Control structure of the Army Reserve that would create the Northwest Regional Readiness Command at Fort McCoy, WI.
In 2008, the 88th Regional Readiness Command (88th RRC) moved to Fort McCoy, Wisconsin. The mission was changed to providing base operations support to the new 19-state region, Welcome Home Warrior ceremonies, and the Yellow Ribbon weekends. The units assigned to the 88th include six Army Reserve bands and the Headquarters Company. It may supervise the 643rd Area Support Group at Whitehall, Ohio.
Current
The division shoulder patch is worn by the United States Army Reserve 88th Readiness Division at Fort Snelling, Minnesota; the division lineage is perpetuated by the 88th RD. RDs such as the 88th have the same number as inactivated divisions and are allowed to wear the shoulder patch, and division lineage and honors are inherited by an RD.
General
Shoulder patch: A blue (for Infantry) quatrefoil, formed by two Arabic numeral "8s". A rocker above it with the nickname "Blue Devils" was often worn.
During World War II, the Germans thought the 88th was an elite stormtrooper Division. This was most likely due to parallels between the "Blue Devil" nickname and patch rocker and the German SS's use of the Totenkopf death's head insignia.
See also
1st Lieutenant James Henry Taylor
Sgt Keith Matthew Maupin
References
Bibliography
The Army Almanac: A Book of Facts Concerning the Army of the United States U.S. Government Printing Office, 1950 reproduced at http://www.history.army.mil/html/forcestruc/cbtchron/cbtchron.html. (public domain, work of U.S. government)
About Face: The Odyssey of an American Warrior, by David Hackworth: pp 35, 308.
Brown, John Sloan. Draftee Division: the 88th Infantry Division in World War II. Lexington, KY: University Press of Kentucky, 1986.
Delaney, John P. The Blue Devils in Italy: a history of the 88th Infantry Division in World War II. Washington: Infantry Journal Press, [1947] 1988 reprint is also available.
External links
History of the 88th Division in the Great War
The 88th Division in the World War of 1914 – 1918
We Were There: From Gruber to the Brenner Pass
The battle of Cornuda, the 88th division's last battle of World War II
Oral history interview with Nicholas Cipu, a Staff Sergeant in the 88th Infantry Division, during World War II from the Veterans History Project at Central Connecticut State University
752nd Tank Battalion in World War II
088th Infantry Division, U.S.
Infantry Division, U.S. 088th
Military units and formations established in 1917
Military units and formations disestablished in 1947
United States Army divisions of World War I
Infantry divisions of the United States Army in World War II
|
66764971
|
https://en.wikipedia.org/wiki/AlmaLinux
|
AlmaLinux
|
AlmaLinux is a free and open source Linux distribution, created originally by CloudLinux to provide a community-supported, production-grade enterprise operating system that is binary-compatible with Red Hat Enterprise Linux (RHEL). The first stable release of AlmaLinux was published on March 30, 2021.
History
On December 8, 2020, Red Hat announced that development of CentOS Linux, a free-of-cost downstream fork of the commercial Red Hat Enterprise Linux (RHEL), would be discontinued and its official support would be cut short to focus on CentOS Stream, a rolling release officially used by Red Hat to preview what is intended for inclusion in updates to RHEL.
In response, CloudLinux – which maintains its own commercial Linux distribution, CloudLinux OS – created AlmaLinux to provide a community-supported spiritual successor to CentOS Linux, aiming for binary-compatibility with the current version of RHEL. A beta version of AlmaLinux was first released on February 1, 2021, and the first stable release of AlmaLinux was published on March 30, 2021. AlmaLinux 8.x will be supported until 2029. On March 30, 2021, the AlmaLinux OS Foundation was created to take over AlmaLinux development and governance from CloudLinux, which has promised $1 million in annual funding to the project.
The name of the distribution comes from the Spanish word "alma", meaning "soul", chosen to be an homage to the Linux community.
Releases
See also
Rocky Linux
References
External links
Enterprise Linux distributions
RPM-based Linux distributions
Linux distributions
|
1278286
|
https://en.wikipedia.org/wiki/Download.com
|
Download.com
|
Download.com is an Internet download directory website launched in 1996 as a part of CNET. Originally, the domain was download.com, which became download.com.com for a while, and is now download.cnet.com. The domain download.com attracted at least 113 million visitors annually by 2008 according to a Compete.com study.
Overview
The offered content is available in four major categories: software (including Windows, Mac and mobile), music, games, and videos, offered for download via FTP from Download.com's servers or third-party servers. Videos are streamed (at present), and music was all free MP3 downloads, or occasionally rights-managed WMAs or streams, until the music section was replaced with Last.fm.
The Software section includes over 100,000 freeware, shareware, and try-first downloads. Downloads are often rated and reviewed by editors and contain a summary of the file from the software publisher. Registered users may also write reviews and rate the product. Software publishers are permitted to distribute their titles via CNET's Upload.com site for free, or for a fee structure that offers enhancements.
Up until 2015, CNET used Spigot Inc to monetize the traffic to Download.com. According to Sean Murphy, then a General Manager at CNET, "Spigot continues to be a great partner to Download.com, sharing our desire to balance customer experience with revenue."
Malware distribution
In August 2011, Download.com introduced an installation manager called CNET TechTracker for delivering many of the software titles from its catalog. This installer included trojans and bloatware, such as toolbars. CNET admitted in their download FAQ that "a small number of security publishers have flagged the Installer as adware or a potentially unwanted application".
In December 2011, Gordon Lyon, writing under his pseudonym Fyodor, wrote of his strong dislike of the installation manager and the bundled software. His post was very popular on social networks and was reported by a few dozen media outlets. The main problem is the confusion between the content offered on Download.com and the software offered by the original authors; the accusations included deception as well as copyright and trademark violation.
In 2014, The Register and US-CERT warned that via Download.com's "foistware", an "attacker may be able to download and execute arbitrary code". In 2015, research by Emsisoft suggested that all free download portals bundled their downloads with potentially unwanted software, and that Download.com was the worst offender.
A study done by How-To Geek in 2015 revealed that Download.com was packaging malware inside their installers. The test was done in a virtual machine where the testers downloaded the Top 10 apps. These all contained crapware/malware; one example was the KMPlayer installer, which installed a rogue antivirus named 'Pro PC Cleaner' and attempted to execute WajamPage.exe. Some downloads, specifically YTD, were completely blocked by Avast.
Another study done by How-To Geek in 2015 revealed that Download.com was installing fake SSL certificates via its installers, similar to the Lenovo Superfish certificate. These fake certificates can completely compromise SSL encryption and allow man-in-the-middle attacks.
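One way to see which authority actually vouches for a site's certificate is to inspect the chain presented during the TLS handshake; the short Python sketch below does this against the system trust store, so an unexpected issuer (such as an installer-injected root) would stand out. The hostname is only an example, and this is a minimal illustration rather than a complete audit of installed root certificates.

import socket
import ssl

def issuer_of(hostname, port=443):
    """Connect using the system trust store and return the issuer of the server's certificate."""
    context = ssl.create_default_context()   # verifies the chain against installed root CAs
    with socket.create_connection((hostname, port)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    # The issuer is a sequence of relative distinguished names; flatten it for display.
    return {key: value for rdn in cert["issuer"] for key, value in rdn}

print(issuer_of("example.com"))

If a rogue root certificate has been added to the local trust store, a man-in-the-middle proxy can present certificates that still validate, which is why the Superfish-style injection described above is so damaging.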
However, in July 2016, How-To Geek discovered that Download.com no longer included adware/malware in its downloads and that its Installer program had been discontinued.
See also
Spigot Inc
References
CNET
Former CBS Interactive websites
Download websites
File hosting
Free music download websites
Internet properties established in 1996
Adware
American music websites
|
22734933
|
https://en.wikipedia.org/wiki/Cassis%20%28album%29
|
Cassis (album)
|
Cassis is the 20th solo studio album by Japanese singer-songwriter Yōsui Inoue, released in July 2002.
The album includes "Kimerareta Rhythm", which was featured in the Academy Award-nominated motion picture The Twilight Samurai, directed by Yoji Yamada. The song "You are the Top" was originally contributed to the same-titled musical by Kōki Mitani. Two singles were cut from the album: "Kono Yo no Sadame" and "Final Love Song"; both songs were featured in TV advertisements for Kirin Beverage starring Inoue himself.
Track listing
All songs written and composed by Yōsui Inoue, unless otherwise indicated
"" - 4:23
"" - 5:00
"Final Love Song" - 4:25
"" - 4:28
"" - 6:18
"" - 3:19
"" (Inoue/Natsumi Hirai) - 3:10
"" (Inoue/Hirai) - 2:28
"You are the Top" (Inoue/Hirai/Kōki Mitani) - 4:28
"" - 4:47
"" - 4:13
Personnel
Yōsui Inoue - vocals, acoustic guitar, electric guitar, hand-claps
Tsuyoshi Kon - electric guitar, acoustic guitar
Haruo Kubota - electric guitar
Takayuki Hijikata - electric guitar
Tsuneo Imabori - electric guitar, acoustic guitar, computer programming
Chiharu Mikuzuki - electric bass, upright bass, 12-string guitar
Masafumi Minato - drums
Hideo Yamaki - drums
Matarou Misawa - percussion
Tomohiro Yahiro - percussion
Nobuo Kurata - acoustic piano
Natsumi Hirai - acoustic piano
Yasuharu Nakanishi - acoustic piano, electric piano
Yoshiki Kojima - acoustic piano, keyboards, organ
Yūta Saitō - acoustic piano, keyboards
Masao Ōmura - keyboards
Banana U-G - keyboards, computer programming
Kazuya Miyazaki - computer programming
Tetsuo Ishikawa - computer programming
Yōichi Murata - trombone, bass trombone
Kōji Nishimura - trumpet
Shirou Sasaki - trumpet
Masahiko Sugasaka - trumpet
Takuo Yamamoto - saxophone
Bob Zang - saxophone
Masakuni Takeno - saxophone
Udai Shike - cello
Jun - background vocals
Chie Nagai - background vocals
Chart positions
Album
Singles
2002 albums
Yōsui Inoue albums
|
12034759
|
https://en.wikipedia.org/wiki/SolidThinking
|
SolidThinking
|
solidThinking is a software company developing Evolve, a 3D modeling and rendering software and Inspire, a concept generation tool.
History
Brothers Alex Mazzardo and Mario Mazzardo started the solidThinking project in 1991 with Guido Quaroni, today Vice President of Software R&D at Pixar Animation Studios. The design software was originally developed for NEXTSTEP, the operating system developed by NeXT, and it soon won the award for best new application in the "CAD and 3D" category at NeXTWORLD EXPO, held in San Francisco in June 1993.
A complete re-write of the application for the Windows platform was completed in 1998 with the release of solidThinking 3.0 for Windows. The Construction History and an extensive NURBS modeling toolset were the main additions. One year later solidThinking for Mac OS X was also released. Today all releases of solidThinking are simultaneously developed for both Windows and Mac OS X.
solidThinking version 8.0 and solidThinking Inspired 8.0, introducing the morphogenesis form-generation technology, were released in September 2009.
With the 9.0 release, solidThinking became Evolve and solidThinking Inspired became Inspire, both sold under the solidThinking brand.
References
External links
Official site
Computer-aided design software
3D graphics software
|
3178199
|
https://en.wikipedia.org/wiki/Tethering
|
Tethering
|
Tethering, or phone-as-modem (PAM), is the sharing of a mobile device's Internet connection with other connected computers. Connection of a mobile device with other devices can be done over wireless LAN (Wi-Fi), over Bluetooth or by physical connection using a cable, for example through USB.
If tethering is done over WLAN, the feature may be branded as a personal hotspot or mobile hotspot, which allows the device to serve as a portable router. Mobile hotspots may be protected by a PIN or password. The Internet-connected mobile device can act as a portable wireless access point and router for devices connected to it.
Mobile device's OS support
Many mobile devices are equipped with software to offer tethered Internet access. Windows Mobile 6.5, Windows Phone 7, Android (starting from version 2.2), and iOS 3.0 (or later) offer tethering over a Bluetooth PAN or a USB connection. Tethering over Wi-Fi, also known as Personal Hotspot, is available on iOS starting with iOS 4.2.5 (or later) on iPhone 4 or iPad (3rd gen), certain Windows Mobile 6.5 devices like the HTC HD2, Windows Phone 7, 8 and 8.1 devices (varies by manufacturer and model), and certain Android phones (varies widely depending on carrier, manufacturer, and software version).
For IPv4 networks, the tethering normally works via NAT on the handset's existing data connection, so from the network point of view, there is just one device with a single IPv4 network address, though it is technically possible to attempt to identify multiple machines.
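To make the NAT point concrete, the toy Python sketch below models the kind of translation table a tethering handset keeps: several private clients are mapped onto different source ports of the single public IPv4 address, which is why the carrier's network sees only one device. The addresses and ports are made up for illustration and the sketch ignores protocol details, timeouts and port exhaustion.

PUBLIC_IP = "203.0.113.7"        # the handset's single carrier-assigned address (documentation range)

nat_table = {}                   # (private_ip, private_port) -> allocated public source port
next_port = 40000

def translate(private_ip, private_port):
    """Return the (public_ip, public_port) pair used on the carrier side for this client's flow."""
    global next_port
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = next_port    # allocate a fresh public-side port for a new flow
        next_port += 1
    return PUBLIC_IP, nat_table[key]

# Two tethered laptops behind the phone share the one public address.
print(translate("192.168.43.10", 51000))   # ('203.0.113.7', 40000)
print(translate("192.168.43.11", 51000))   # ('203.0.113.7', 40001)

Because all outgoing traffic carries the same public address, carriers that want to detect tethering have to rely on indirect signals (such as TTL values or traffic patterns) rather than on seeing extra devices directly.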
On some mobile network operators, this feature is contractually unavailable by default, and may be activated only by paying to add a tethering package to a data plan or choosing a data plan that includes tethering, such as Lycamobile MVNO. This is done primarily because with a computer sharing the network connection, there is typically substantially more network traffic.
Some network-provided devices have carrier-specific software that may deny the inbuilt tethering ability normally available on the device, or enable it only if the subscriber pays an additional fee. Some operators have asked Google or mobile device manufacturers using Android to completely remove tethering capability from the operating system on certain devices. Handsets purchased SIM-free, without a network provider subsidy, are often unhindered with regard to tethering.
There are, however, several ways to enable tethering on restricted devices without paying the carrier for it, including third-party USB tethering apps such as PdaNet, or rooting Android devices or jailbreaking iOS devices and installing a tethering application on the device. Tethering is also available as a downloadable third-party application on most Symbian mobile phones as well as on the MeeGo platform and on webOS mobile phones.
In carriers' contracts
Depending on the wireless carrier, a user's cellular device may have restricted functionality. While tethering may be allowed at no extra cost, some carriers impose a one-time charge to enable tethering and others forbid tethering or impose added data charges. Contracts that advertise "unlimited" data usage often have limits detailed in a Fair usage policy.
United Kingdom
Since 2014, all pay-monthly plans from the Three network in the UK include a "personal hotspot" feature.
Earlier, two tethering-permitted mobile plans offered unlimited data: The Full Monty on T-Mobile, and The One Plan on Three. Three offered tethering as a standard feature until early 2012, retaining it on selected plans. T-Mobile dropped tethering on its unlimited data plans in late 2012.
United States
As cited in Sprint Nextel's "Terms of Service":
"Except with Phone-as-Modem plans, you may not use a mobile device (including a Bluetooth device) as a modem in connection with any computer. We reserve the right to deny or terminate service without notice for any misuse or any use that adversely affects network performance."
T-Mobile USA has a similar clause in its "Terms & Conditions":
"Unless explicitly permitted by your Data Plan, other uses, including for example, using your Device as a modem or tethering your Device to a personal computer or other hardware, are not permitted."
T-Mobile's Simple Family and Simple Business plans offer a "Hotspot" feature on devices that support it (such as the Apple iPhone), serving up to 5 connected devices. Since 27 March 2014, 1000 MB per month of hotspot data has been included free in the US with cellular service. The host device retains unlimited slow internet for the rest of the month, and all month while roaming in 100 countries, but without tethering. For an extra $10 or $20 per month per host device, the amount of data available for tethering can be increased markedly. Host device cellular service can be canceled, added, or changed at any time on a pro-rated basis, tethering data levels can be changed month to month, and T-Mobile no longer requires long-term service contracts, allowing users to bring their own devices or buy devices from T-Mobile regardless of whether they continue service.
Verizon Wireless and AT&T Mobility offer wired tethering to their plans for a fee, while Sprint Nextel offers a Wi-Fi connected "mobile hotspot" tethering feature at an added charge. However, actions by the FCC and a small claims court in California may make it easier for consumers to tether. On July 31, 2012, the FCC released an unofficial announcement of Commission action, decreeing Verizon Wireless must pay US$1.25 million to resolve the investigation regarding compliance of the C Block Spectrum (see US Wireless Spectrum Auction of 2008).
The announcement also stated that "(Verizon) recently revised its service offerings such that consumers on usage-based pricing plans may tether, using any application, without paying an additional fee." After that judgment, Verizon released "Share Everything" plans that enable tethering; however, users must drop old plans they were grandfathered into (such as the Unlimited Data plans) and switch, or pay a tethering fee.
In another instance, Judge Russell Nadel of the Ventura Superior Court awarded AT&T customer Matt Spaccarelli US$850, despite the fact that Spaccarelli had violated his terms of service by jailbreaking his iPhone in order to fully utilize its hardware. Spaccarelli demonstrated that AT&T had unfairly throttled his data connection; his data showed that AT&T had been throttling his connection after approximately 2 GB of data was used. Spaccarelli responded by creating a personal web page to provide information that allows others to file a similar lawsuit, commenting:
"Hopefully with all this concrete data and the courts on our side, AT&T will be forced to change something. Let’s just hope it chooses to go the way of Sprint, not T-Mobile."
While T-Mobile did eventually allow tethering, on August 31, 2015 the company announced that it would remove users who abuse its unlimited data plans by violating T-Mobile's tethering rules (tethered data, unlike standard data, carries a 7 GB cap before throttling takes effect) from those plans, requiring them to sign up for tiered data plans instead. T-Mobile said that only a small handful of users had abused the tethering rules, using an Android app that masks T-Mobile's tethering monitoring and consuming as much as 2 TB per month, causing speed issues for the majority of customers who do not abuse the rules.
See also
Internet Connection Sharing
Mobile broadband
Mobile Internet device (MID)
Mobile modems and routers
Open Garden
Smartbook
Smartphone
References
Wireless networking
Mobile telecommunications
Net neutrality
|
1722534
|
https://en.wikipedia.org/wiki/Gapless%20playback
|
Gapless playback
|
Gapless playback is the uninterrupted playback of consecutive audio tracks, such that relative time distances in the original audio source are preserved over track boundaries on playback. For this to be useful, artifacts other than timing-related ones should also not be introduced at track boundaries. Gapless playback is common with compact discs, gramophone records, and tapes, but is not always available with other formats that employ compressed digital audio. The absence of gapless playback is a source of annoyance to listeners of music where tracks are meant to segue into each other, such as some classical music (opera in particular), progressive rock, concept albums, electronic music, and live recordings with audience noise between tracks.
Causes of gaps
Playback latency
Various software, firmware and hardware components may add up to a substantial delay associated with starting playback of a track. If not accounted for, the listener is left waiting in silence while the player fetches the next file (see hard disk access time), updates metadata, and decodes the first block before it has any data to feed the hardware buffer. The gap can be half a second or more, which is very noticeable in "continuous" music such as certain classical or dance genres. In extreme cases, the hardware is even reset between tracks, creating a very short "click".
To account for the whole chain of delays, the start of the next track should ideally be readily decoded before the currently playing track finishes. The two decoded pieces of audio must be fed to the hardware continuously over the transition, as if the tracks were concatenated in software.
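The following Python sketch illustrates the ordering described above; decode() and AudioOut are hypothetical placeholders rather than a real audio API, and the point is only that the next track is opened and decoded before the current one has finished draining.

# Sketch of gapless playback: the next track is prepared ahead of time and
# the PCM stream is fed to the output without a break at the boundary.
# decode() and AudioOut are placeholders, not a real decoder or sound API.

def decode(path):
    """Yield fixed-size PCM chunks for one track (placeholder)."""
    yield from ()  # a real decoder would yield audio buffers here

class AudioOut:
    def write(self, chunk):
        pass  # a real implementation would block until the buffer drains

def play_gapless(playlist, audio_out):
    decoder = None
    for path in playlist:
        # Open and start decoding the next file *before* the previous track
        # has fully drained, so the hardware buffer never runs empty.
        next_decoder = decode(path)
        if decoder is not None:
            for chunk in decoder:          # finish the current track
                audio_out.write(chunk)
        decoder = next_decoder
    if decoder is not None:
        for chunk in decoder:              # last track in the playlist
            audio_out.write(chunk)

play_gapless(["track01.flac", "track02.flac"], AudioOut())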
Many older audio players on personal computers do not implement the required buffering to play gapless audio. Some of these rely on third-party gapless audio plug-ins to buffer output. Most recent players and newer versions of old players now support gapless playback directly.
Compression artifacts
Lossy audio compression schemes that are based on overlapping time/frequency transforms add a small amount of padding silence to the beginning and end of each track. These silences increase the playtime of the compressed audio data. If not trimmed off upon playback, the two silences played consecutively over a track boundary will appear as a pause in the original audio content. Lossless formats are not prone to this problem.
For some audio formats (e.g. Ogg Vorbis), where the start and end are precisely defined, the padding is implicitly trimmed off in the decoding process. Other formats may require extra metadata for the player to achieve the same. The popular MP3 format defines no way to record the amount of delay or padding for later removal. Also, the encoder delay may vary from encoder to encoder, making automatic removal difficult. Even if two tracks are decompressed and merged into a single track, a pause will usually remain between them.
CD recorded in TAO mode
Audio-CDs can be recorded in either disc at once (DAO) or track at once (TAO) mode. The latter is more flexible, but has the drawback of inserting approximately 2 seconds of silence between tracks.
Ways to eliminate the gaps
Precise gapless playback
As opposed to heuristic techniques, precise gapless playback usually means that playback timing is guaranteed to be identical to the source. By this definition, a precise gapless player may introduce neither gaps nor overlaps (crossfading) between successive tracks, and may not rely on guesswork.
Apart from accounting for playback latency, the preciseness here lies in treating lossless data as-is, and removing the correct amount of padding from lossy data. This is not possible for file formats with loosely defined encoder specifications and no metadata and therefore no way for encoders to record the duration of extraneous silence.
Approximate methods
Heuristics are used by some music players to detect silence between tracks and trim the audio as necessary on playback. Due to the loss of time resolution of lossy compression, this method is inexact. In particular, the silence is not exactly zero. If the silence threshold is too low, some silences go undetected. Too high, and entire sections of quiet music at the beginning or end of a track may be removed.
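A minimal Python sketch of such a silence-trimming heuristic follows; the threshold value and the sample data are illustrative assumptions, not taken from any particular player.

# Heuristic silence trimming: samples whose absolute value stays below a
# threshold at the start and end of a track are dropped.

def trim_silence(samples, threshold=0.001):
    """Return samples with quiet leading/trailing regions removed."""
    start = 0
    while start < len(samples) and abs(samples[start]) < threshold:
        start += 1
    end = len(samples)
    while end > start and abs(samples[end - 1]) < threshold:
        end -= 1
    return samples[start:end]

# A low threshold may miss noisy "silence"; a high one can cut quiet music.
print(trim_silence([0.0, 0.0002, 0.3, -0.4, 0.0001, 0.0]))  # [0.3, -0.4]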
Digital signal processing (DSP) algorithms can also be used to crossfade between tracks. This eliminates gaps that some listeners find distracting, but also greatly alters the audio signal, which may have undesirable effects on the listening experience. Some listeners dislike these effects more than the gap they attempt to remove. For example, crossfading is inappropriate for files that are already gapless, in which case the transition may feel artificially short and disturb the rhythm. Also, depending on the length of untrimmed silence and the particular crossfader, it may cause a large volume drop.
These methods defeat the purpose of intentional spacing between tracks. Not all albums are mix albums; perhaps more typically, there is an aesthetic pause between unrelated tracks. Also, the artist may intentionally leave in silences for dramatic effect, which should arguably be preserved regardless of whether there is a track boundary there.
Compared to precise gapless playback, these methods are a different approach to erroneous silence in audio files, but other required features are the same. However, this approach requires more computation. In portable digital audio players, this means a reduced playing time on batteries.
User workarounds
A common workaround is to encode consecutive tracks as one single file, relying on cue sheets (or something similar) for navigation. While this method results in gapless playback within consecutive tracks, it can be unwieldy because of the possibly large size of the resulting compressed file. Furthermore, unless the playback software or hardware can recognize the cue sheets, navigating between tracks may be difficult.
It may be possible to add gapless metadata to existing files. If the encoder is known, it is possible to guess the encoder delay. Also, if the compression was performed on CD audio, the original playback length will be an integer multiple of 588 samples, the size of one CD sector. Thus the total playback time can also be guessed. Adding such information to audio files will enable precise gapless playback in players that support this.
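The arithmetic behind this estimate can be sketched in Python as follows; the sample count and the guessed encoder delay below are purely illustrative.

# If a track came from CD audio, its original length is a whole number of
# 588-sample sectors (44100 Hz / 75 sectors per second). Given a guessed
# encoder delay, the trailing padding can then be estimated roughly.

SECTOR = 588

def estimate_trims(decoded_samples, encoder_delay):
    audible = decoded_samples - encoder_delay
    original = (audible // SECTOR) * SECTOR   # largest whole number of sectors
    padding = audible - original              # assumed trailing silence
    return original, padding

# e.g. a decode of 10,045,152 samples with a guessed 576-sample delay:
print(estimate_trims(10_045_152, 576))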
Prerequisites
Format support
Since lossless data compression cannot introduce padding, all lossless audio file formats are inherently gapless.
These lossy audio file formats have provisions for gapless encoding:
Musepack
Ogg Vorbis
Speex
Opus
Some other formats do not officially support gapless encoding, but some implementations of encoders or decoders may handle gapless metadata.
LAME-encoded MP3 can be gapless with players that support the LAME Mp3 info tag.
AAC in MP4 encoded with Nero Digital from Nero AG can be gapless with foobar2000, latest XMMS2, and iTunes 7.1.1.5 through 11.4.
AAC in MP4 encoded with iTunes (current and previous versions) is gapless in iTunes 7.0 through 11.4, 2nd generation iPod nanos, all video-capable iPods with the latest firmware, and recent versions of foobar2000.
iTunes-encoded MP3 is gapless when played back in iTunes 7.0 through 11.4, 2nd generation iPod nanos, and all video-capable iPods with the latest firmware.
Windows Media Audio encoded with Windows Media Player 9 can be gapless with Windows Media Player 9 and onwards.
Windows Media Audio encoded with Sound Player Lilith can be gapless with latest Sound Player Lilith onwards.
ATRAC on MiniDisc is gapless through the use of TOC (Table of Contents).
Player support
Optimal solutions:
Hardware
Apple:
iPod classic supports gapless playback of MP3s and AACs from the fifth generation onward
iPod nano second generation and later
iPod Touch
Archos Gmini XS202S
Cowon S9 supports gapless playback without software dependency since 2.31b firmware. Most newer Cowon players support gapless playback right out of the box (J3, X7, iAudio 9)
Linn Products DS network players
All players in the Logitech/Slim Devices Squeezebox range support gapless playback for all gapless formats (lame MP3, FLAC, Vorbis, etc.). Crossfading is also optionally available.
Microsoft Zune supports gapless playback with Zune 2.5 or later firmware, though some bugs remain and occasionally small pops or skips can be heard.
Rio Karma gapless hardware player with no software dependency (FLAC, Ogg, MP3, WMA), first portable DAP with the feature
Rockbox for various digital audio players.
Sony:
MiniDisc Walkman supports gapless playback (including non-Sony Walkman MiniDisc players)
CD Walkman (such as D-NE330) supports gapless playback of ATRAC-encoded CDs
VAIO Pocket supports gapless playback (through a firmware update) of ATRAC files
Network Walkman NW-HDx and NW-A (1x00, 3000, 60x, 80x) DAPs support gapless playback of ATRAC files. Later Walkman DAPs lost the feature when ATRAC support ceased, although it continued in Japan, where players still shipped with ATRAC. Gapless playback returned outside Japan five years later with the Walkman NWZ-F80x through the FLAC format.
Trekstor Vibes gapless hardware player with no software dependency
Victor Alneo V Series and C Series
Software
Amarok, for Linux
AIMP, for Windows
Audacious, for Linux
Banshee, for Linux
Clementine, cross-platform.
cmus, for Linux and BSD.
Cog, for OS X
DeaDBeeF, for Linux
foobar2000, for Windows
Groove Music, for Windows 10.
iTunes 7.0 through 11.4 supported gapless playback by default on Macintosh and Windows, without requiring tracks to be combined during encoding (a limitation of previous releases). Some users in unusual situations have complained that the one-time analysis is a system-intensive process that can stall or crash computers.
JRiver Media Center, for Windows
KODI, for Windows, Linux, OS X, Android and others.
mplayer2, for Linux, OS X, and Windows supports gapless playback of flac when used with option "-gapless-audio".
mpv (media player) for BSD, Linux, macOS, Windows.
MusicBee, for Windows
Music On Console, for Linux and other Unix-like platforms.
Music Player Daemon, for Linux and other Unix-like platforms.
Plex, for all supported platforms either through the platform player or PlexAmp
Qlab, for OS X
Quod Libet, multi-platform.
Rhythmbox, for Linux
Winamp, supports gapless playback for MP3, M4A/AAC, Ogg Vorbis and FLAC files (since version 5.3).
Windows Media Player, has supported gapless ripping and playback of WMA since Windows Media 9. Available on all current Windows machines.
XMPlay, supports gapless playback for all format files
Alternative or partial solutions:
XMMS2 – has native support for gapless MP3 / Ogg Vorbis and FLAC
See also
Segue, the technique in classical music
References
Notes
External links
MP3 players: Buyer Beware, a description of gapless playback in digital audio players
Digital audio
|
25170183
|
https://en.wikipedia.org/wiki/SmartQ%205
|
SmartQ 5
|
The SmartQ 5 is a budget mobile Internet device manufactured by the Chinese company Smart Devices. It was officially announced 11 February 2009.
Overview
The SmartQ 5 comes with a custom version of Ubuntu Linux installed which is adapted for use with a touchscreen. It uses the LXDE desktop environment.
Ubuntu's main pre-installed applications are:
Midori web browser
FBReader e-book reader
Claws email client
SMPlayer multimedia player
Abiword word processor
Gnumeric spreadsheet
Transmission torrent client
Sonata music player
Pidgin instant messenger
Evince PDF/document reader
rgbPaint painting program
GDebi package installer
PCMan File Manager
It is possible to install another Linux besides the default OS. Several Linux distributions like Mer and a ported Android support the SmartQ 5.
Smart Devices has obtained a Windows CE 6.0 royalty licence, and the OS has been made available on the official site, although a licence from Microsoft must be purchased to activate Windows CE.
Specifications
Samsung Mobile Application Processor S3C6410 based on ARM11 core at 667 MHz/800 MHz
128MB DDR 133/333 MHz SDRAM
1GB NAND FLASH (256 MB usable for storage)
AC97 audio codec & PCM 24-bit audio
SoC graphics unit, OpenGL ES 1.1/2.0, 4M triangles/sec @133Mhz (Transform only)
Integrated Wi-Fi 802.11b/g
Integrated Bluetooth 2.0 + EDR
800x480 resolution resistive touchscreen LCD, 4.3", 16.7 million colors
SDHC card slot (up to 32 GB)
Headphone output power up to 40 mW, frequency response 20 Hz-20,000 Hz, SNR 94 dB
Internal microphone
USB 2.0 OTG port (480Mbit/s)
Runs Ubuntu Linux
2,000 mAh rechargeable lithium polymer battery
Dimensions: 120 x 74 x 14 mm
Weight: 160 g
See also
SmartQ 7
SmartQ V5
SmartQ V7
SmartQ R7
References
External links
Official English Smart Devices site
Main English Community offering a blog, a board and a wiki
Android Covia Forum
Chronolytics Embedded Systems Platform Technologies
SmartQ MID Forum
Mobile Linux
Mobile computers
Android (operating system) devices
Linux-based devices
|
3532943
|
https://en.wikipedia.org/wiki/Enid%20Mumford
|
Enid Mumford
|
Enid Mumford (6 March 1924 – 7 April 2006) was a British social scientist, computer scientist and Professor Emerita of Manchester University and a Visiting Fellow at Manchester Business School, largely known for her work on human factors and socio-technical systems.
Biography
Enid Mumford was born in Merseyside in North West England, where her father Arthur McFarland was a magistrate and her mother Dorothy Evans was a teacher. She attended Wallasey High School and received her BA in Social Science from Liverpool University in 1946.
After graduation Enid Mumford spent time working in industry, first as personnel manager for an aircraft factory and later as production manager for an alarm clock manufacturer. The first job was important for her career as an academic, since it involved looking after personnel policy and industrial relations strategy for a large number of women staff. The second job also proved invaluable, as she was running a production department, providing a level of practical experience that is unusual among academics.
Enid Mumford then joined the Faculty of Social Science at Liverpool University in 1956. She later spent a year at the University of Michigan, where she worked for the University Bureau of Public Health Economics and studied Michigan medical facilities while her husband took a higher degree in dental science. On returning to England, she joined the newly formed Manchester Business School (MBS), where she undertook many research contracts investigating the human and organisational impacts of computer-based systems. During this time she became Professor of Organisational Behaviour and Director of the Computer and Work Design Research Unit (CAWDRU). She also directed the MBA programme for four years.
At Manchester Business School, Mumford gave formative advice to students starting research or engineering projects, advising them to choose topics of study that are interesting yet challenging. She also advised that research projects should use methods such as large-scale surveys, face-to-face interviews and close observation, and suggested that students maintain good, respectful relationships with everyone involved in their research.
She was a companion of the Chartered Institute of Personnel and Development, a Fellow of the British Computer Society (BCS), also an Honorary Fellow of the BCS in 1984, and also a founder member and ex-chairperson of the BCS Sociotechnical Group.
In 1983 Enid Mumford was awarded the American Warnier Prize for her contributions to information science. In 1996, she was given an Honorary Doctorate by the University of Jyväskylä in Finland. In 1999, she was the only British recipient of a Leo Lifetime Achievement Award for Exceptional Achievement in Information Systems, one of only four given that year. Leo Awards are given by the Association for Information Systems (AIS) and the International Conference on Information Systems (ICIS).
Work
Research in industrial relations
At the Faculty of Social Science at Liverpool University Mumford carried out research in industrial relations in the Liverpool docks and in the North West coal industry. To collect information for the dock research, she became a canteen assistant in the canteens used by the stevedores for meals. Each canteen was in a different part of the waterfront estate and served dockers working on different shipping lines and with different cargoes. The coal mine research required her to spend many months underground talking to miners at the coal face.
For Mumford, the purpose of research is understanding, explanation and prediction. She found that when gathering data through face-to-face interviewing, less formal methods often yield better-quality information, while observational research reveals patterns of behaviour and insights into why that behaviour is taking place. Such data is hard to treat statistically; a description of what has taken place, and why, is usually more useful.
Human factors and socio-technical systems
Early in her career Enid Mumford realised that the implementation of large computer systems generally resulted in failure to produce a satisfactory outcome. Such failure could arise even when the underlying technology was adequate. She demonstrated that the underlying cause was an inability to overcome human factors associated with the implementation and use of computers. Four decades later, despite the identification of these sociotechnical factors and the development of methodologies to overcome such problems, large scale computer implementations are often unsuccessful in practice.
Mumford recognised that user participation in system design is just as important as the technology being introduced. She believed it was important to take users' social and technical needs into account when creating an information system, and that user participation is needed for this to happen. Mumford described participation as the democratic process that allows staff to have control over their working environment and the future of their jobs.
Enid Mumford specifically emphasized the importance of participative system design, an emphasis that has been accepted within the context of IS development. One of the main success factors indicated by this design was the importance of continued, evolutionary improvement in the post-implementation environment. Enid Mumford's theory of the importance of user participation has been widely recognised as effective and beneficial.
Mumford also used Talcott Parsons and Edward Shils’ patterns variables to propose five different contracts that can be used to evaluate employer-employee relationships.
One of the contracts proposed was the work structure contract, which aimed to emphasize the importance of ensuring employees found their jobs both interesting and challenging. To implement this contract, Mumford states the need for the continual questioning of production processes and principles alongside the identification of tools, techniques, and technologies which can be considered efficient and humanistic.
Influencing all five contracts of the employer-employee relationship was the value contract. This contract specifically set out to develop a set of values both employees and management could agree on, simply because the values and interests of employees differ from those of the employers. Mumford described that employees were interested in being economically incentivised in exchange for the services they provide; however, the overall consensus was to produce values such as long-term humanistic profitability, ensuring both company economic success and employee motivation.
The socio-technical approach
While at MBS, Mumford developed a close relationship with the Tavistock Institute and became interested in their democratic socio-technical approach to work organisation. Since then, she has applied this approach to the design and implementation of computer-based systems and information technology. One of her largest socio-technical projects was with the Digital Equipment Corporation (DEC) in Boston. In the 1970s she became a member of the International Quality of Working Life Group, the goal of which was to spread the socio-technical message around the world. She later became a council member of the Tavistock Institute and was also a member of the US Socio-technical Round Table.
Mumford’s 2000 conference paper titled “Socio-Technical Design: An Unfulfilled Promise or a Future Opportunity?” discussed the origins and evolution of socio-technical design, starting with its beginnings at the Tavistock institute. Mumford outlined the promises and possibilities of socio-technical design that were apparent at the time of its conception. She highlighted the ways that it had moved from success into failure, and evaluated the socio-technical initiatives that had occurred in different nations.
Despite the replacement of socio-technical projects by more efficient systems such as lean production, socio-technical notions remain essential when conceptualizing frameworks involving humans and computers (Mumford, 2000).
Choosing which type of method to use depends on a number of factors. Mumford highlights the importance of the question "what will be most effective in enabling me to collect the data I need to test my hypothesis and answer my questions?". The chosen method may be a single technique but is preferably a blend of techniques that reinforce each other and provide different but complementary data. A mix of methods often produces the best results, as it not only addresses the political issues in research, such as differences of opinion between researchers over how the task should be carried out, but also allows the subject to be investigated fully to achieve the most accurate results.
Among Enid Mumford's accomplishments and pioneering ideas is the development of an integrated strategy for systems implementation named Effective Technical and Human Implementation of Computer Systems (ETHICS), which incorporates work design as part of the systems planning and implementation effort. Later research has asked why ETHICS initially rose in prominence and then declined over the years, applying Latour's (1999) five-loop framework describing the circulation of science. The findings reveal that Mumford enrolled and aligned many heterogeneous actors and resources that together contributed to the shaping of ETHICS. Because the substance of ETHICS was shaped by the interweaving of many elements, when some of these elements later changed and undermined their previous alignment, ETHICS was not reshaped, and consequently it lost its status and declined. This work closes by drawing more general lessons for IS research.
Future analysis: an attribute of most of today's computer systems is their flexibility in terms of work organization. To help systems designers, managers and other interested groups take advantage of this flexibility and achieve good organizational as well as good technical design, Mumford developed the ETHICS method.
Mumford suggests that those affected by change should be involved in it and have an input into the change if it is to be accepted. This reflects her ethical views, as she supports the idea of morality as a natural right. She makes it very clear that moral responsibility is personal and precious and that no one can take it away from a person. This is relevant to employees, who should be made aware of changes within their organization.
Enid Mumford was always passionate about developing the information systems research community. Her favoured method was action research, since it helps to promote the cooperative development of systems; this is illustrated by the influential Manchester conference of 1984, the first conference to genuinely question the broadly differing conceptions of what constitutes information systems research.
Enid Mumford's success drew on the implementation of socio-technical design, an organisational development method that focuses on the relationship between people and technology in the work environment. Its relationship with action research was highlighted by its evolution in the 1960s and 1970s, which improved general work practices as well as the relationship between management and workers. With the global economy in recession during the early 1980s, Enid Mumford's socio-technical design gave way to several cost-cutting methods, such as lean production and downsizing, that made technology more viable in the workplace during this period.
ETHICS Methodology of Systems Implementation
Enid Mumford devised the ETHICS approach to the design and implementation of computer-based information systems. She explains in her work that while others are more intent on improving the ‘bottom line’ of corporations with the use of IT, Enid’s approach was more focused on the everyday workers and IT’s impact on their working lives (Avison et al., 2006).
Her work placed the social context and human activities and needs at the centre of IS design. Findings from projects across the 1960s and 1970s were consolidated by Mumford and her peers, giving rise to the system development methodology known as ETHICS (Effective Technical & Human Implementation of Computer-based Systems).
Enid Mumford maintained that there can be progressive improvement in work and life. Bednar & Welch (2016) suggest that key values underpinned her work: a desire to improve job design to create safer and more enjoyable work systems, and a wish to see greater democracy in both the workplace and in wider society.
Enid Mumford described fifteen steps of the ETHICS methodology. Having framed ETHICS as a toolbox for organizational change, Bednar & Welch suggest, she explored key aspects of socio-technical design in practice and gave an overview of experiences of using her approach in different organizational settings, to identify where socio-technical facilitation can provide support.
In a later development of the method, Mumford included QUICKethics to support the business process; this made the business process more efficient and more effective in attaining business objectives, while offering a higher-quality working environment that inspires staff.
Furthermore, Mumford's work on the ETHICS methodology, change management, and the humanly acceptable development of systems to provide an ethically acceptable way of using technology was supported by Critical Research in Information Systems (CRIS), since many of the ideas that still dominate critical research aim to improve social reality. The overlapping themes between Mumford's work and CRIS relate to change and change management, which have links to the issues of power and coercion. Mumford also uses wording derived from the Marxist tradition of critical research, for example the "ideology of capitalism", and debated the commodification of computing and working time, which is also identified as a critical research area. This makes Enid Mumford's work on the ETHICS methodology and change important in today's economy.
The Effective Technical and Human Implementation of Computer Systems (ETHICS) method is designed to help integrate a company and its aims with those of its stakeholders. ETHICS uses a mix of technology and people participation to arrive at solutions, and can greatly help encourage people to embrace change and adopt new technological solutions, resulting in higher job satisfaction and efficiency. The method follows fifteen steps for designing new systems, starting with asking why change is needed and ending with evaluation and testing to see whether the system achieves what is required.
Designing Human Systems for New Technology presents the ETHICS method in the context of technologies that are transforming virtually every aspect of human life, interaction, and the process of work; such changes are drastically evident in the way in which human work is performed and organised. It states that the bridge builders in IT development should aim to understand users from the users' own perspective and work with them in collaboration on the development and growth of IT artifacts, which then serve the interests of the stakeholders.
Action Research
A theoretical foundation in Mumford’s career was Action research – an approach adopted from the Tavistock Institute in which analysis and theory are associated with remedial change. She believed "There should be no theory without practice and no practice without research." Whilst working at Turner’s Asbestos Cement, she used this approach to survey the sales office, who then discussed their problems internally and implemented a work structure that alleviated most of their efficiency and job satisfaction problems.
Enid Mumford: a tribute
Nineteen individuals influenced by Enid Mumford contributed to Enid Mumford: A Tribute, an article reflecting on Mumford's contributions.
Publications
Enid Mumford has produced a large number of publications and books in the field of sociotechnical design. A selection:
1989. XSEL's Progress: the continuing journey of an expert system. Wiley.
1995. Effective Systems Design and Requirements Analysis: the ETHICS Approach. Macmillan.
1996. Systems Design: Ethical Tools for Ethical Change. Macmillan.
1999. Dangerous Decisions: problem solving in tomorrow's world. Plenum.
2003. Redesigning Human Systems. Idea Publishing Group.
2006. Designing human systems: an agile update to ETHICS
Books and book chapters
Mumford, E. (1983). Designing Secretaries: The Participative Design of a Word Processing System. Manchester Business School, UK. ISBN 0-903-8082-5-0. First published 1983, http://www.opengrey.eu/item/display/10068/558836
Mumford Enid (1996). The past and the present. Chapter 1 Pp. 1-13. In “Systems design : ethical tools for ethical change”. Macmillan, Basingstoke, UK. ISBN 0-333-66946-0. First published January 1996,
Mumford, E. (1996). Systems design in an unstable environment. Systems Design Ethical Tools for Ethical Change, 30–45. https://doi.org/10.1007/978-1-349-14199-9_3, Macmillan, Basingstoke, UK. ISBN 0-333-66946-0. First published January 1996,
Mumford E. (1996). An Ethical Pioneer: Mary Parker Follett. Chapter 4. Pp 46-63. In “Systems Design Ethical Tools for Ethical Change”. Palgrave, London. ISBN 978-1-349-14199-9, First published: January 1996, https://doi.org/10.1007/978-1-349-14199-9_4
Mumford Enid (1996). Designing for freedom in the ethical company. Chapter 6. Pp79-98. In "Systems Design Ethical Tools for Ethical Change". Palgrave, London, UK. . First published on: 11 November 1996,
Mumford, E. (1996). Designing for the future. In. Systems Design Ethical Tools for Ethical Change Chapter 7 (pp. 99-107). Publisher. https://doi.org/10.1007/978-1-349-14199-9_7
Mumford Enid (1997). Requirements Analysis for Information Systems. Chapter 3. Pp 15-20. In “Systems for Sustainability”, which is edited by Frank A. Stowell, Ray L. Ison, Rosalind Armson, Jacky Holloway, Sue Jackson and Steve McRobb. Springer, Boston, MA. . First published 31st July 1997.
Mumford, E. (1999). The Problems of Problem Solving. In Dangerous Decisions: Problem Solving in Tomorrow’s World (pp. 13–24). Springer, Boston, MA. https://doi.org/10.1007/978-0-585-27445-4_2
Mumford, E. (1999). Dangerous Decisions Problem Solving in Tomorrow's World. [ebook] Chapter 4, Problem Solving and the Police Pp. 59-73. First published on: 31 May 1999 https://link.springer.com/book/10.1007/b102291
Mumford, E. (2001). Action Research: Helping Organizations to Change. Chapter 3.Pp. 46-77. In”Qualitative Research in IS: Issues and Trends”,edited by Trauth, Eileen M., UK.1-930708-06-8. First published:01 July 2000, https://www.igi-global.com/gateway/chapter/28259
Mumford Enid & Carolyn Axtell (2003). Tools and Methods to Support the Design and Implementation of New Work Systems. Chapter 17. Pp 331-346. In “The new workplace: a guide to the human impact of modern working practices”, edited by David Holman David Holman, Toby D. Wall, Chris W. Clegg, Paul Sparrow and Ann Howard. Wiley & Sons, Chichester, UK. . First published: 01 January 2002, Publisher link: villey.com
Mumford, E. (1996). Systems Design: Ethical Tools for Ethical Change. Palgrave Macmillan, London, UK. . First published: 11 November 1996,
Mumford Enid (2003) Redesigning Human Systems. Idea Group Publishing. ISBN: 1591401186 First published July 2002. https://doi.org/10.4018/978-1-59140-118-6
Enid Mumford, Steve Hickey, and Holly Matthies (2006). Designing Human Systems for New Technology - The ETHICS Method, by Enid Mumford (1983) Pages 37-51 https://books.google.com.qa/books?id=he9NuM64WN8C&lpg=PP1&pg=PP1#v=onepage&q&f=false
Mumford, Enid. “Designing for Freedom in a Technical World.” In InformationTechnology and Changes in Organizational Work, edited by Wanda J. Orlikowski,Geoff Walsham, Matthew R. Jones, and Janice I. Degross, 425–441. IFIP Advances in Information and Communication Technology. Boston, MA: Springer US, 1996. https://doi.org/10.1007/978-0-387-34872-8_25
Conference and journal papers
Mumford, E. (1994). New treatments or old remedies: is business process reengineering really socio-technical design? Journal of Strategic Information Systems, 3(4), 313–326. https://doi.org/10.1016/0963-8687(94)90036-1
Mumford, E. (1995). Contracts, complexity and contradictions: The changing employment relationship. Personnel Review, 24(8), 54–70.
Mumford, E. (1995). Review: Understanding and Evaluating Methodologies. International Journal of Information Management Vol 15, Issue 3, Pages 243-245. Published by Elsevier Science Ltd, , .
Mumford, E. (1996). Risky ideas in the risk society. Journal of Information Technology (Routledge, Ltd.), 11(4), 321. https://doi.org/10.1057/jit.1996.6
Facilitating Technology Transfer through Partnership: Learning from practice and research: IFIP TC8 WG8.6 International Working Conference on Diffusion, Adoption, and Implementation of Information Technology (25th-27th June 1997), Ambleside, Cumbria, UK.Book: 383 pages, part of the “IFIP Advances in Information and Communication Technology book series (IFIPAICT)”, edited by Tom McMaster, Enid Mumford, E. Burton Swanson, Brian Warboys, David Wastell. Springer, Boston, MA. ISBN: 978-0-387-35092-9. First published: 1997, https://doi.org/10.1007/978-0-387-35092-9
Mumford, E. (1998). Problems, knowledge, solutions: solving complex problems. The Journal Of Strategic Information Systems, 7(4), 255-269. https://doi.org/10.1016/S0963-8687(99)00003-7
Mumford Enid (1999), Choosing Problem Solving Methods Chapter 2. pp 25-39. In “Dangerous Decision: Problem Solving in Tomorrow’s World”), Springer, Boston, MA. eBook Packages Springer Book Archive. ISBN 978-0-585-27445-4. https://doi.org/10.1007/b102291
Mumford, E. (2000). Socio-Technical Design: An Unfulfilled Promise or a Future Opportunity? In R. Baskerville, J. Stage, & J. I. DeGross (Eds.), Organizational and Social Perspectives on Information Technology: IFIP TC8 WG8.2 International Working Conference on the Social and Organizational Perspective on Research and Practice in Information Technology June 9–11, 2000, Aalborg, Denmark (pp. 33–46). Springer US.
Mumford, E. (2001). Advice for an action researcher. Information Technology & People, 14(1), 12–27.
Mumford, E. (2006). Researching people problems: Some advice to a student. Inf. Syst. J., 16, 383–389. https://doi.org/10.1111/j.1365-2575.2006.00223.x.
Mumford, E. (2006). The story of socio-technical design: reflections on its successes, failures and potential. Information Systems Journal, 16(4), 317-342.
References
External links
last version of Enid Mumford website, on Internet Archive
Guardian obituary
1924 births
2006 deaths
British computer scientists
Information systems researchers
Fellows of the British Computer Society
British women computer scientists
University of Michigan people
20th-century British women scientists
|
66116255
|
https://en.wikipedia.org/wiki/Grindstone%20%28video%20game%29
|
Grindstone (video game)
|
Grindstone is a 2019 puzzle-adventure game created and published by Capybara Games. The game revolves around the player completing levels by clearing enemies using attacks. It was originally released for MacOS and iOS through Apple Arcade on September 16, 2019.
Gameplay
The game takes place on a grid, where the player moves Jorj by attacking monsters with his sword. Jorj can move in eight directions: up, down, left and right, as well as diagonally. Jorj has three ability slots which can be filled with unique powers, such as a bow that lets the player pick off one enemy from anywhere on the map. Jorj can only attack one color of monster at a time, but if the player smashes through 10 monsters, a Grindstone will spawn. This allows Jorj to switch the color of monster he is attacking, allowing him to chain together larger combos.
While most enemies can be dispatched in one hit, special and boss enemies have additional health points that must be depleted in order to defeat them. Over time, monsters become enraged, meaning that if Jorj lands next to them at the end of a turn, they will attack him. Each level has a goal, which could be defeating a certain number of enemies or fighting a boss. Levels also have secondary items to unlock or collect, such as a treasure chest. Once the goal is complete, the player can choose between leaving the level through the gate or continuing to fight to earn more rewards. If the player decides to stay, the game gets harder and more enemies become enraged, making movement difficult. If Jorj takes three hits and depletes his health bar, the player loses all progress in the level, making it risky to stay for higher rewards.
Development
The original concept for Grindstone dates back to the development of two of Capybara's previous games, Critter Crunch and Might & Magic: Clash of Heroes. The original idea for Grindstone was a color based matching game where the player would move around the board. The setting was designed to be a brutal, "barbarian" world, but with a cartoonish take on the idea. In the early stages of development, the game's standard enemies would be able to attack Jorj in diagonal directions, but the team found it too frustrating and restricted the ability to special enemies only. Grindstones were added to help the player create long combos. The game also had a coin based currency system, but it was removed as Capybara felt it didn't incentivise the player to create long combos.
In the original prototypes, the goal of the game was to get Jorj to the door, but this was scrapped in favor of having the player complete a challenge and then exit through the door. Capybara wanted a risk/reward aspect where the player could try to get more combos but risk dying, or could leave with the rewards they already had. To make the game less frustrating, the team also limited how many different colors of enemies there could be in each level. Items were added to give the player more options in each level, and to incentivize the player to complete more objectives to earn better rewards.
Release
Grindstone was released as a launch title for Apple Arcade on iOS and MacOS on September 16, 2019. A Nintendo Switch port was released later on December 15, 2020. Alongside the Switch version, a physical edition was made available through Iam8bit. A Microsoft Windows version released on May 20, 2021 on the Epic Games Store.
Reception
Grindstone received positive reviews from critics, who praised the game for its color-matching puzzles and mechanics. The game has "Generally favorable reviews" on Metacritic. Nathan Reinauer of TouchArcade enjoyed the lack of in-app purchases and the cartoon-like visuals. Writing for Destructoid, Jordan Devore praised the gameplay as being easy to pick up and play, but disliked the length of the game, feeling that many of the stages felt like padding.
Grindstone was nominated for Best Mobile Game at The Game Awards 2019. The game also won Best Mobile Game in Edge magazine's 2019 Game of The Year awards.
References
2019 video games
Apple Arcade games
Fantasy video games
IOS games
MacOS games
Nintendo Switch games
Puzzle video games
Video games developed in Canada
|
4426114
|
https://en.wikipedia.org/wiki/DES%20Challenges
|
DES Challenges
|
The DES Challenges were a series of brute force attack contests created by RSA Security to highlight the lack of security provided by the Data Encryption Standard.
The Contests
The first challenge began in 1997 and was solved in 96 days by the DESCHALL Project.
DES Challenge II-1 was solved by distributed.net in 39 days in early 1998. The plaintext message being solved for was "The secret message is: Many hands make light work."
DES Challenge II-2 was solved in just 56 hours in July 1998, by the Electronic Frontier Foundation (EFF), with their purpose-built Deep Crack machine. EFF won $10,000 for their success, although their machine cost $250,000 to build. The contest demonstrated how quickly a rich corporation or government agency, having built a similar machine, could decrypt ciphertext encrypted with DES. The text was revealed to be "The secret message is: It's time for those 128-, 192-, and 256-bit keys."
DES Challenge III was a joint effort between distributed.net and Deep Crack. The key was found in just 22 hours 15 minutes in January 1999, and the plaintext was "See you in Rome (second AES Conference, March 22-23, 1999)".
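The scale of these searches can be illustrated with some back-of-the-envelope Python. The search rate used below is an assumption, roughly of the order attributed to Deep Crack; only the 2^56 keyspace is a property of DES itself.

# Rough arithmetic for an exhaustive DES key search.

KEYSPACE = 2 ** 56                 # DES uses a 56-bit key
RATE = 9e10                        # assumed keys tested per second

worst_case_days = KEYSPACE / RATE / 86400
average_days = worst_case_days / 2  # on average, half the keyspace is searched

print(f"worst case: {worst_case_days:.1f} days, average: {average_days:.1f} days")
# At this assumed rate the average search takes only a few days, consistent
# with the contest results described above.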
Reaction
After the DES had been shown to be breakable, FBI director Louis Freeh told Congress, "That is not going to make a difference in a kidnapping case. It is not going to make a difference in a national security case. We don't have the technology or the brute force capability to get to this information."
It was not until special purpose hardware brought the time down below 24 hours that both industry and federal authorities had to admit that the DES was no longer viable. Although the National Institute of Standards and Technology started work on what became the Advanced Encryption Standard in 1997, they continued to endorse the DES as late as October 1999, with FIPS 46-3. However, Triple DES was preferred.
See also
RSA Factoring Challenge
RSA Secret-Key Challenge
References
Cryptography contests
Data Encryption Standard
Recurring events established in 1997
|
10443665
|
https://en.wikipedia.org/wiki/Apache%20Solr
|
Apache Solr
|
Solr (pronounced "solar") is an open-source enterprise-search platform, written in Java. Its major features include full-text search, hit highlighting, faceted search, real-time indexing, dynamic clustering, database integration, NoSQL features and rich document (e.g., Word, PDF) handling. Providing distributed search and index replication, Solr is designed for scalability and fault tolerance. Solr is widely used for enterprise search and analytics use cases and has an active development community and regular releases.
Solr runs as a standalone full-text search server. It uses the Lucene Java search library at its core for full-text indexing and search, and has REST-like HTTP/XML and JSON APIs that make it usable from most popular programming languages. Solr's external configuration allows it to be tailored to many types of applications without Java coding, and it has a plugin architecture to support more advanced customization.
Apache Solr is developed in an open, collaborative manner by the Apache Solr project at the Apache Software Foundation.
History
In 2004, Solr was created by Yonik Seeley at CNET Networks as an in-house project to add search capability for the company website.
In January 2006, CNET Networks decided to openly publish the source code by donating it to the Apache Software Foundation. Like any new Apache project, it entered an incubation period which helped solve organizational, legal, and financial issues.
In January 2007, Solr graduated from incubation status into a standalone top-level project (TLP) and grew steadily with accumulated features, thereby attracting users, contributors, and committers. Although quite new as a public project, it powered several high-traffic websites.
In September 2008, Solr 1.3 was released including distributed search capabilities and performance enhancements among many others.
In January 2009, Yonik Seeley along with Grant Ingersoll and Erik Hatcher joined Lucidworks (formerly Lucid Imagination), the first company providing commercial support and training for Apache Solr search technologies. Since then, support offerings around Solr have been abundant.
November 2009 saw the release of Solr 1.4. This version introduced enhancements in indexing, searching and faceting along with many other improvements such as rich document processing (PDF, Word, HTML), Search Results clustering based on Carrot2 and also improved database integration. The release also features many additional plug-ins.
In March 2010, the Lucene and Solr projects merged. Separate downloads continued, but the products were now jointly developed by a single set of committers.
In 2011 the Solr version number scheme was changed in order to match that of Lucene. After Solr 1.4, the next release of Solr was labeled 3.1, in order to keep Solr and Lucene on the same version number.
In October 2012 Solr version 4.0 was released, including the new SolrCloud feature. 2013 and 2014 saw a number of Solr releases in the 4.x line, steadily growing the feature set and improving reliability.
In February 2015, Solr 5.0 was released, the first release where Solr is packaged as a standalone application, ending official support for deploying Solr as a war. Solr 5.3 featured a built-in pluggable Authentication and Authorization framework.
In April 2016, Solr 6.0 was released. Added support for executing Parallel SQL queries across SolrCloud collections. Includes StreamExpression support and a new JDBC Driver for the SQL Interface.
In September 2017, Solr 7.0 was released. This release among other things, added support multiple replica types, auto-scaling, and a Math engine.
In March 2019, Solr 8.0 was released, including many bugfixes and component updates. Solr nodes can now listen for and serve HTTP/2 requests, and by default internal requests are also sent using HTTP/2. An admin UI login was added with support for BasicAuth and Kerberos, and plotting math expressions in Apache Zeppelin became possible.
In November 2020, Bloomberg donated the Solr Operator to the Lucene/Solr project. The Solr Operator helps deploy and run Solr in Kubernetes.
In February 2021, Solr was established as a separate Apache project (TLP), independent from Lucene.
Operations
In order to search a document, Apache Solr performs the following operations in sequence:
Indexing: converts the documents into a machine-readable format.
Querying: understanding the terms of a query asked by the user. These terms can be images or keywords, for example.
Mapping: Solr maps the user query to the documents stored in the database to find the appropriate result.
Ranking: as soon as the engine searches the indexed documents, it ranks the outputs by their relevance.
Community
Solr has both individuals and companies who contribute new features and bug fixes.
Integrating Solr
Solr is bundled as the built-in search in many applications such as content management systems and enterprise content management systems. Hadoop distributions from Cloudera, Hortonworks and MapR all bundle Solr as the search engine for their products marketed for big data. DataStax DSE integrates Solr as a search engine with Cassandra. Solr is supported as an end point in various data processing frameworks and Enterprise integration frameworks.
Solr exposes industry standard HTTP REST-like APIs with both XML and JSON support, and will integrate with any system or programming language supporting these standards. For ease of use there are also client libraries available for Java, C#, PHP, Python, Ruby and most other popular programming languages.
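As a minimal illustration of this HTTP interface, the following Python sketch queries a local Solr instance through the standard /select handler and reads a JSON response. The host, port and core name ("techproducts", the core used by Solr's bundled examples) are assumptions for the example.

import json
import urllib.parse
import urllib.request

# Build a query against a local Solr core named "techproducts" (assumed).
params = urllib.parse.urlencode({"q": "name:ipod", "wt": "json", "rows": 5})
url = "http://localhost:8983/solr/techproducts/select?" + params

with urllib.request.urlopen(url) as resp:
    data = json.loads(resp.read().decode("utf-8"))

# Matching documents are nested under response/docs in the JSON reply.
for doc in data["response"]["docs"]:
    print(doc.get("id"), doc.get("name"))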
See also
Open Semantic Framework
Search oriented architecture
List of information retrieval libraries
References
Bibliography
External links
Ansible role to install SolrCloud in a Debian environment
Solr
Free software programmed in Java (programming language)
Free search engine software
Search engine software
Database-related software for Linux
NoSQL
|
22546946
|
https://en.wikipedia.org/wiki/Juniper%20J%20series
|
Juniper J series
|
Juniper J series is a line of enterprise routers designed and manufactured by Juniper Networks. They are modular routers for enterprises running desktops, servers, VoIP, CRM / ERP / SCM applications. The J Series routers are typically deployed at remote offices or branch locations. These Services routers include the J2320 and J2350 for smaller offices, the J4350 for medium-size branches, and the J6350 for large branches or regional offices.
Platform development history
Juniper began working on J series in the middle of the telecom downturn (2002), while looking for ways to extend its product portfolio. The main idea behind the new product line was to create the cost-optimized routing system that could utilize increasingly powerful general-purpose CPUs and operate under fully-fledged, multi-threaded OS. This was a major departure from "traditional" branch router design, which dictated the use of low-end RISC CPUs working under simplified operating system with marginal multitasking and memory protection capabilities. The first iteration of J-series design was based on high-end Intel CPUs and featured Intel IXP-based interface cards running over PCI bus. Later models added PCI Express connectivity as well as specialized Cavium security processors. From the software perspective, the J series runs JUNOS with a real-time extensions for the forwarding plane function. This unique architecture allows J-series routers to avoid the "resource starvation" problem commonly seen on legacy software forwarding platforms.
Models and platforms
The J series of routers includes the models such as J2320, J2350, J4350 and J6350. The initial models were J2300, J4300 and J6300 routers.
J2320
The J2320 routers are entry-level service routers that give up to 600 Mbit/s of throughput and have four built-in Gigabit Ethernet ports. The J2320 has three PIM slots for additional LAN/WAN connectivity, Avaya VoIP Gateway, and WAN acceleration. It is typically used for one or two broadband, T1, or E1 interfaces with integrated services.
J2350
The J2350 router, which has four built-in Gigabit Ethernet ports, gives up to 700 Mbit/s of performance and has five PIM slots. It is usually used for multiple broadband, T1, or E1 interfaces with multiple integrated services.
J4350
The J4350 enterprise router gives up to 1 Gbit/s in performance. They are usually used for DS3, E3, and Metro Ethernet interfaces with integrated services. It has six PIM slots. Two of these slots are enhanced-performance slots that provide additional performance to multiple Gigabit Ethernet configurations.
J6350
The J6350 gives up to 2 Gbit/s in performance. It has six PIM slots for additional LAN/WAN connectivity, Avaya VoIP Gateway, and WAN acceleration. These routers have optional redundant power supplies for high system availability.
Features
The J-series routers run on Juniper's network operating system, JUNOS. These routers have four on-board GigE ports and expandable WAN and LAN interfaces via pluggable modules. They support a wide range of interfaces, including Serial, T1/E1, FE, DS3/E3, ISDN, ADSL2/2+, G.SHDSL and Gigabit Ethernet, and a wide array of Layer 2 access protocols including Frame Relay, Ethernet and Point-to-Point Protocol (PPP)/HDLC. Other features include Network Address Translation (NAT) and J-Flow accounting, as well as advanced services such as IPv6, MPLS, stateful firewall, quality of service, multicast, VPN, security services and IPSec. Juniper partnered with Avaya to deliver packet voice functionality.
J series routers benefit directly from the modular, fault-protected software design of the JUNOS operating system. Unlike traditional enterprise routers, each software module in JUNOS runs independently and therefore cannot impact other processes. The generalized JUNOS architecture provides complete separation of the routing and packet forwarding engines on platforms with both hardware and software forwarding planes.
Even under DDoS attack, J series routers retain complete control over system operation, allowing a console-connected operator to add new filters and policies in order to mitigate the threat. Parts of the J series technology were later reused in the SRX series products.
References
External links
http://www.juniper.net/us/en/products-services/routing/j-series/
Juniper Networks
Routers (computing)
|
13221503
|
https://en.wikipedia.org/wiki/Vault%20Corp.%20v.%20Quaid%20Software%20Ltd.
|
Vault Corp. v. Quaid Software Ltd.
|
Vault Corporation v Quaid Software Ltd. 847 F.2d 255 (5th Cir. 1988) is a case heard by the United States Court of Appeals for the Fifth Circuit that tested the extent of software copyright. The court held that making RAM copies as an essential step in utilizing software was permissible under §117 of the Copyright Act even if they are used for a purpose that the copyright holder did not intend. It also applied the "substantial noninfringing uses" test from Sony Corp. of America v. Universal City Studios, Inc. to hold that Quaid's software, which defeated Vault's copy protection mechanism, did not make Quaid liable for contributory infringement. It held that Quaid's software was not a derivative work of Vault's software, despite having approximately 30 characters of source code in common. Finally, it held that the Louisiana Software License Enforcement Act clause permitting a copyright holder to prohibit software decompilation or disassembly was preempted by the Copyright Act, and was therefore unenforceable.
Background information
Vault Corporation created and held the copyright for a program called PROLOK, which provided copy protection for software on floppy disks. Software companies purchased PROLOK from Vault in order to protect their software from end users making unauthorized copies. PROLOK worked by having an indelible "fingerprint" on each PROLOK protected disk in addition to the PROLOK software and the software to be protected. The PROLOK protected program allowed the software to function only if the fingerprint was present on the disk.
Quaid Software Ltd. created a program called RAMKEY, which allowed copies of Vault's clients' software to function without the original program disks. RAMKEY made PROLOK think that the necessary fingerprint was present even though it was not.
Actions and claims
Vault sought preliminary and permanent injunctions against Quaid to prevent them from advertising and selling RAMKEY. They also sought an order to impound all of Quaid's copies of RAMKEY, as well as $100,000,000 in monetary damages. Vault asserted the following claims:
Infringement of the exclusive right to copy by copying PROLOK into RAM for a purpose other than that intended by Vault
Contributory infringement by providing software that customers can use to infringe on Vault's and Vault's clients' copyrights
Creation of an infringing derivative work by one version of RAMKEY that contained approximately 30 characters of the PROLOK source code
Breach of license agreement based on Louisiana License Act, which Vault utilized to attempt to prohibit decompilation and disassembly of its software
Procedural history
The district court initially dismissed Vault's complaint for lack of personal jurisdiction, but this was reversed by the circuit court. On remand, the district court denied Vault's motion for a preliminary injunction. After agreement by both parties to submit the case for final decision, the district court entered a final judgment based on its decision on the preliminary injunction. Vault subsequently appealed.
Direct infringement claim
The district court held that Quaid's copying of the software into RAM was permissible under 17 U.S.C. §117(1), which permits copies "created as an essential step." Vault argued that the §117(1) exemption does not apply when the program is used in a manner not intended by the copyright holder. The circuit court disagreed with this argument, writing that the statute does not contain "language to suggest that the copy it permits must be employed for a use intended by the copyright owner."
Contributory infringement claim
Sony Corp. of America v. Universal City Studios, Inc. established the "substantial non-infringing use" test for contributory infringement. Quaid argued that RAMKEY passes this test because it can be used to create archival copies that are exempt under 17 U.S.C. §117(2). Vault argued that RAMKEY did not have any non-infringing use because one could create a sufficient archival copy without the use of RAMKEY. Vault asserted that the archival copy exemption of 17 U.S.C. §117(2) was designed to protect only against "destruction or damage by mechanical or electrical failure," but not against (for example) loss or destruction of a disk. The court declined to construe the archival exemption in this manner, saying that even though it had appeal, it was not the law and that only the Congress could decide to limit the exemption in that way.
Derivative work claim
One version of RAMKEY contained approximately 30 characters of source code from PROLOK. Vault alleged that this constituted an infringing derivative work.
The district court focused on the size of the copied code, arguing that it was not significant. Vault argued that the court should instead focus on the qualitative aspect of the copied code because the 30 characters were important to the correct operation of PROLOK. The circuit court rejected the argument that the copying was qualitatively significant on the basis that PROLOK and RAMKEY "serve opposing functions."
Louisiana Software License Enforcement Act claim
The license agreement for PROLOK depended on the Louisiana Software License Enforcement Act to give it the authority to prohibit users from decompiling or disassembling the software. The Act purported to permit certain license agreements to contain "...prohibitions on translating, reverse engineering, decompiling, disassembling, and/or creating derivative works based on the computer software." Vault had included such a provision in its license agreement and claimed that Quaid violated this provision when it reverse engineered PROLOK. The district court held that the Louisiana License Act was unenforceable because it was preempted by the Copyright Act. The circuit court ruled only on the clause permitting a licensor to prohibit decompilation or disassembly, holding that this clause was preempted by the exemptions of 17 U.S.C. §117, which grant permission to make "essential step" and archival copies.
See also
Sony Corp. of America v. Universal City Studios, Inc.
Software license
Reverse engineering
Copyright Act of 1976
External links
Vault Corp. v. Quaid Software Ltd. (opinion full text) at the Berkman Center for Internet & Society
United States Court of Appeals for the Fifth Circuit cases
United States copyright case law
1988 in United States case law
Computer memory
|
24448891
|
https://en.wikipedia.org/wiki/Where%27s%20My%20Water%3F
|
Where's My Water?
|
Where's My Water? is a puzzle video game developed by the American studio Creature Feep and published by Disney Mobile, a subsidiary of Disney Interactive Studios. Released for desktop web browsers and for devices running the iOS, Android, Windows Phone and BlackBerry 10 operating systems, the game requires players to route a supply of water to an alligator. Where's My Water? has been praised for its gameplay and its graphical style, with special recognition of its lead character, Swampy, the first original Disney character created for a mobile game, voiced by actor Justin T. Bowler.
The game has inspired multiple spin-offs, including Where's My Perry?, Where's My Mickey?, Where's My Water? featuring XYY and Where's My Valentine?. The game was also released for Microsoft Windows in 2011, and further mobile versions continued to be released through 2013.
In September 2013, a sequel titled Where's My Water? 2 was released.
Gameplay
Swampy, an alligator living in a city sewer system, hates being dirty, but whenever he tries to take a bath, Cranky, another alligator living in the sewers, disrupts the water flow to Swampy's home. Located somewhere on each level is a supply of water, either a finite amount pooled at various locations or an infinite amount flowing from a pipe. Players use the touch screen on their device to dig through the dirt and redirect the water towards an inlet leading to Swampy's bathtub. Occasionally, the water must be routed through other pipes or must interact with machines in order to open up a route to the inlet. When the required amount of water reaches the bathtub, the level is completed and the next level is unlocked. If all of the water flows away, the player loses the level. Also scattered around each level are three rubber ducks, which are collected when they absorb an amount of water. Select levels also include items hidden in the dirt that unlock bonus levels when three-item collections are completed.
Certain levels are also populated by hazards that must be avoided or removed. For example, some levels contain algae that will absorb water and grow. Other types of fluid sometimes appear, such as purple poison, reddish mud, and green ooze. A single drop of poison will contaminate pure water, turning it into poison as well, while the ooze will erode through dirt, pop balloons, and react with water, destroying both fluids. Mud eventually hardens into dirt unless water reaches it, in which case the water turns to mud instantly. If poison, ooze or mud reaches the inlet, the level is lost and automatically restarts. In addition, bombs destroy all objects and kill all rubber ducks on contact, while regular water kills Cranky ducks, and poison kills all other ducks. However, poison and ooze will also destroy the invasive algae on contact: the poison eliminates it, while the ooze causes it to solidify, creating a new barrier. Poison and ooze dissolve each other explosively if they touch, potentially opening up parts of the level to the benefit or detriment of the player.
Points are awarded for the amount of time taken to complete the level, for collecting rubber ducks, and for delivering more than the minimum amount of water to Swampy's tub. Collecting a certain number of rubber ducks will also unlock new groups of levels.
Development
Where's My Water? was developed by Creature Feep, a team of designers within the Disney Mobile division of Disney Interactive Studios. Creature Feep is headed by game design director Tim FitzRandolph, whose earlier works include the popular game JellyCar, which Disney later acquired and distributed. In an October 2011 interview, FitzRandolph explained that the goal for the development of Where's My Water? was "to contribute a new character to the company, while making a really fun game in the process".
The earliest phase of development centered on the game's core concept: players using their fingers to guide water to a goal. According to FitzRandolph, "We had a whole bunch of ideas, and at some point along the line, it kept coming back that water, water was very fresh and people hadn't done a lot of physics around water." Designers invested time in making sure the water flowed naturally and as a player might expect it to in real life, making the gameplay easier for newcomers to learn. In actuality, the water is rendered as many individual "drops" that interact with each other.
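A drop-based water model of this kind can be illustrated with a minimal particle sketch. The code below is a hypothetical illustration, not code from the game: each drop is a small particle pulled down by gravity, and nearby drops push each other apart so the collection loosely behaves like a fluid. The class name, constants and update rule are all assumptions chosen for readability.

```python
# Minimal sketch of drop-based water, assuming a simple 2-D particle model.
# All names and constants here are illustrative, not taken from the game.
from dataclasses import dataclass

GRAVITY = 0.5          # downward acceleration applied each step
REPEL_RADIUS = 4.0     # drops closer than this push each other apart
REPEL_STRENGTH = 0.2

@dataclass
class Drop:
    x: float
    y: float
    vx: float = 0.0
    vy: float = 0.0

def step(drops: list[Drop]) -> None:
    """Advance the simulation by one frame."""
    # Pairwise repulsion keeps drops from collapsing into a single point.
    for i, a in enumerate(drops):
        for b in drops[i + 1:]:
            dx, dy = b.x - a.x, b.y - a.y
            dist_sq = dx * dx + dy * dy
            if 0 < dist_sq < REPEL_RADIUS ** 2:
                dist = dist_sq ** 0.5
                push = REPEL_STRENGTH * (REPEL_RADIUS - dist) / dist
                a.vx -= dx * push
                a.vy -= dy * push
                b.vx += dx * push
                b.vy += dy * push
    # Apply gravity and integrate positions.
    for d in drops:
        d.vy += GRAVITY
        d.x += d.vx
        d.y += d.vy

# Example: three drops falling and spreading apart over ten frames.
drops = [Drop(0, 0), Drop(1, 0), Drop(2, 0)]
for _ in range(10):
    step(drops)
print([(round(d.x, 1), round(d.y, 1)) for d in drops])
```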
The place players were routing water towards became a bathtub, at which point the designers had to devise a reason for having a bathtub underground. That reason came from the urban legend of alligators living in city sewers, so the game's lead character became a "hygiene-conscious alligator". Unlike many mobile games released by Disney, where characters from the company's films are used, Where's My Water? represents the first time that Disney has produced an original character for a mobile game. In designing that character, Disney Mobile wanted one "that felt like it belonged when lined up with other Disney characters".
Release
Where's My Water? was launched with four chapters ("Meet Swampy", "Troubled Waters", "Under Pressure" and "Sink or Swim"), each containing 20 levels. New chapters are rolled out with updates, each featuring new gameplay mechanics. An October 2011 update added "Change is Good", a 20-level expansion that added the ability to change fluid types from one to another in order to complete levels. "Boiling Point", the game's sixth 20-level chapter, was released in a November 2011 update and included levels where players must convert steam into liquid water. A version for devices equipped with the Android operating system was released on the Android Market in North America on November 23, 2011 and included all six chapters available up to that point.
In December 2011, "Stretched Thin" was released to both platforms, adding 20 new levels, a Christmas overlay for the title screen and new water balloon obstacles. A free, ad-supported version of Where's My Water? was also released to both iOS and Android in December 2011. The free version includes 25 unique levels, plus the ability to unlock five popular levels taken from the main game.
"Caution to the Wind", a new 20-level chapter, was launched in March 2012, adding fans and vacuums that move water and other game elements around the level. In April 2012, "Rising Tide" was added, which introduced valves that can redirect water and the other fluids as needed to complete the chapter's 20 levels. In May 2012, a total of 20 levels that originally appeared in the free version were added to the full version, collectively known as "The Lost Levels". The levels in question are grouped into two holiday-themed chapters, "10 Days of Swampy" (Christmas) and "Hearts and Crafts" (Valentine's Day). June 2012 saw the release of "Out to Dry", which included levels involving wet mud that sets into dirt. The update also included two new in-app purchases: the "Mystery Duck" mode (see below) and Locksmith Duck, which would unlock a chapter without having to collect a certain number of ducks within the main game. In connection, Where's My Perry, a version of the game featuring Perry the Platypus from Phineas and Ferb, was released the same day as the "Out to Dry" update. On September 19, 2012, a new update brought a special Birthday Level, "Make a Wish", to Where's My Water and Swampy's 1 Year Birthday. 10 more Lost Levels were also added for free. An infographic of Where's my Water's history teased a new update with a black-and-white and Frankenweenie-based levels.
Cranky's Story
In January 2012, "Cranky's Story", a new subset of levels within the game, was added initially to the iOS version and later to the Android version. The gameplay in "Cranky's Story" is basically the same as in the main game, in that players must route a fluid to an inlet goal. However, this time the player must help Cranky by bringing the purple poisonous water into his lair to melt the algae that is covering his food. The ducks are now purple and can only be collected by being splashed with poison, while other fluids (including clean water) will kill them. If water or any of the other fluids enters the inlet, the level is failed (water causes Cranky's food to be covered with more algae, surprising him, while ooze turns it into a rock, which is kicked away; the same is true of mud). If all of the poison is lost, Cranky becomes very angry. The first five levels of the first chapter, "Cranky's First Course", are free to play, while the rest of the chapter and the whole of the second chapter, "Hunger Pains", are accessible through a one-time in-app purchase. The update also includes "Cranky's Challenge", a set of 12 challenges and four bonus stages for the player to complete. If those challenges are failed, Swampy cries as if the player had lost all the water. An all-new set of Food Groups and the third episode, "Bulking Up", were released on April 5, 2012, adding six challenges and two bonus stages. The final episode, "Overstuffed", was released on May 18, 2012. Cranky is also voiced by Justin T. Bowler.
Mystery Duck
In June 2012, a new game mode called "Mystery Duck" was introduced. It revisits previous levels from the main game, except the player has to deal with three special kinds of ducks: the Mega Duck, a large duck that requires a large amount of water to fill; Ducklings, a group of 10 tiny ducks (which can easily be filled with a drop of water); and the tuxedo-clad Mystery Duck, which moves around the level, either by disappearing and reappearing in certain spots or by physically moving up and down and side to side. As with "Cranky's Story", a one-time in-app purchase is required to play beyond the first five levels. On September 19, as part of the birthday update, 40 more levels were added to Mystery Duck. On October 30, as part of the release of Swampy's Underground Adventures, 20 more levels were added. On November 15, as part of the levels of the week, the last 40 levels were added to Mystery Duck.
Allie's Story
On 25 May 2013, a new subset of levels called "Allie's Story" was added. Allie is an organ player and Swampy's girlfriend. The gameplay in this mode is the same as in the other modes, but this time players must direct steam to operate Allie's makeshift pipe organ. Ducks are blue and can only be collected by being covered in steam, while other fluids (except clean water) will kill them. If water or other fluids get into the inlet, the level is failed. Only two episodes in this mode, "Warming Up" and "Tuning In", were available in the initial update; the last two chapters, "Rising to the Top" and "Symphony in Steam", were added on September 11, 2013. As with Cranky's Story and Mystery Duck, a one-time in-app purchase is required to play beyond the first five levels. Allie is voiced by Rebecca Metz.
Spin-offs
Disney has released several spin-off games under the Where's My moniker. They include:
Where's My Perry? is a game starring Perry the Platypus from Phineas and Ferb, who 'needs' water to enter the O.W.C.A. HQ.
Where's My Mickey? (released June 19, 2013) is a game starring Mickey Mouse (as seen in the Disney Channel shorts), who also 'needs' water.
Where's My Valentine? (released February 1, 2013) is a game starring Swampy and Perry.
Where's My Holiday? (December 1, 2012 - January 21, 2013) was another game starring Swampy and Perry. It also contained the old version of Where's My Valentine?.
Where's My Summer? is a game starring Perry.
Where's My Water? featuring XYY (April 2014) stars the popular Chinese cartoon character Pleasant Goat. The game was retired on August 29, 2014.
Reception
Where's My Water? has received universal acclaim from critics. Mike Thompson of Gamezebo said "anyone who enjoys physics puzzle titles would be out of their mind to miss picking this up". IGN's Justin Davis said players "will have a ton of fun figuring out how to get Swampy clean level after level". Chris Reed, writing for Slide To Play, called the game "a highly polished and appealing physics puzzler that nearly everyone can enjoy". Along with the gameplay, reviewers have made special mention of the game's graphical presentation. Gamezebo said the graphics "are particularly great, featuring crisp and cartoon-like visuals that look like something out of ... well, out of a Disney cartoon", and IGN said that Swampy "animates wonderfully and always appears incredibly adorable". Pocket Gamer's Steve McCaskill praised its visual design, stating that it gave "the impression of an interactive cartoon".
After only one day on Apple's U.S. App Store, Where's My Water? ascended to the top of the list of paid apps, surpassing Angry Birds. In its first month of release, Where's My Water? was downloaded more than one million times. The game remained on top of the App Store charts for three weeks, and it has also reached #1 on App Stores in 30 other countries. In March 2012, Apple announced that a copy of the free version of Where's My Water?, downloaded by a user from China, was the 25 billionth application downloaded from the App Store. Pocket Gamer awarded it Best Casual/Puzzle Game in 2012. During WWDC 2012, the app received a 2012 "Apple Design Award" for iPhone apps.
Impact
The popularity of Where's My Water?, and of Swampy in particular, has led Disney to develop a web series based on Swampy and other characters introduced in the game's cutscenes, including Allie, a female alligator who is the object of both Swampy's and Cranky's affection. Where's My Water?: Swampy's Underground Adventures debuted with a teaser on August 31, 2012 on the Disney.com website and features a 12-episode season, with each episode running around two minutes. The series is animated by Animax Entertainment. According to Mark Walker, the senior vice-president of Disney.com, the series will "build out the world and tell Swampy's story and that of other characters".
Fluids
Water
Water is the first fluid encountered in the game. Water approaching algae causes more algae to grow. Any amount of water approaching poison turns the water itself into poison. Contact with ooze causes both fluids to dissolve. Contact with mud makes the mud wet.
Poison Water
Purple-colored poison water is the second fluid encountered in the game. Any amount of poison approaching water will cause the water to become poison. Poison also removes algae. Approaching with ooze will cause an explosion.
Ooze
Ooze is the third fluid. It slowly eats through dirt. Approaching with water will cause both fluids to dissolve. Approaching algae will transform the algae into stone. Approaching with poison will cause an explosion.
Steam
Steam is the fourth fluid. Unlike the other fluids, steam floats upwards instead of falling.
Mud
Mud is the fifth fluid. It gradually dries itself into dirt. Dry mud can be changed to wet mud by adding water or poison. However, dirt cannot be changed back to mud.
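The pairwise interactions described above can be summarized as a small lookup table. The sketch below is an illustrative model of those rules, not code from the game; the outcome strings are assumptions chosen for readability.

```python
# Illustrative model of the fluid interaction rules described above.
# Keys are unordered pairs of substances; values are the described outcome.
INTERACTIONS = {
    frozenset({"water", "poison"}): "all water becomes poison",
    frozenset({"water", "ooze"}): "both fluids dissolve",
    frozenset({"water", "mud"}): "mud becomes wet",
    frozenset({"water", "algae"}): "algae grows",
    frozenset({"poison", "ooze"}): "explosion",
    frozenset({"poison", "algae"}): "algae is removed",
    frozenset({"ooze", "algae"}): "algae turns to stone",
    frozenset({"ooze", "dirt"}): "ooze slowly eats through dirt",
}

def interact(a: str, b: str) -> str:
    """Return the described outcome when two substances meet."""
    return INTERACTIONS.get(frozenset({a, b}), "no interaction described above")

print(interact("water", "poison"))   # all water becomes poison
print(interact("ooze", "poison"))    # explosion
print(interact("steam", "water"))    # no interaction described above
```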
See also
List of most downloaded Android applications
References
External links
2011 video games
Android (operating system) games
Apple Design Awards recipients
BlackBerry games
Disney Interactive franchises
Disney video games
Fictional crocodilians
IOS games
Universal Windows Platform apps
Video games about reptiles
Video games developed in the United States
Video game franchises introduced in 2011
Windows games
Windows Phone games
|
6359161
|
https://en.wikipedia.org/wiki/Games%20for%20Windows
|
Games for Windows
|
Games for Windows is a discontinued brand owned by Microsoft and introduced in 2006 to coincide with the release of the Windows Vista operating system. The brand itself represents a standardized technical certification program and online service for Windows games, bringing a measure of regulation to the PC game market in much the same way that console manufacturers regulate their platforms. The branding program was open to both first-party and third-party publishers.
Games for Windows was promoted through convention kiosks and through other forums as early as 2005. The promotional push culminated in a deal with Ziff Davis Media to rename the Computer Gaming World magazine to Games for Windows: The Official Magazine. The first GFW issue was published for November 2006. In 2008, Ziff Davis announced that the magazine would cease to be published, though online content would still be updated and maintained.
In 2013, Microsoft announced that Xbox PC Marketplace would cease operations, which would result in the discontinuation of the Games for Windows brand. In spite of this announcement, the company stated that content previously purchased could still be accessed via the Games for Windows – Live client software.
Certification
Games certified by Microsoft feature a prominent "Games for Windows" logo border across the top of their packaging, in a manner similar to games developed for the Xbox 360. Software must meet certain requirements mandated by Microsoft in order to display the brand on its packaging. These requirements include:
An "Easy Install" option that installs the title on a PC in the fewest possible steps and mouse clicks
Compatibility with Xbox 360 peripherals
An "Only on Xbox 360 and Windows Vista" or "Only on Windows Vista" stamp for game packaging
Compatibility with the Games Explorer
Compatibility with x64 processors with proper installation and execution on 64-bit versions of Windows Vista and Windows 7; games themselves can be 32-bit
Support for normal and widescreen resolutions, such as 4:3 aspect ratio (800 × 600, 1024 × 768), 16:9 aspect ratio (1280 × 720, 1920 × 1080), and 16:10 aspect ratio (1280 × 800, 1440 × 900, 1680 × 1050, 1920 × 1200)
Support for parental controls and family safety features
Support for launching from Windows Media Center
Microsoft claimed that it had increased its sales of Games for Windows-branded games in stores that had been giving the games greater focus, and stated that it planned to increase marketing efforts for the brand.
Features
Cross-platform compatibility
Certain games certified under the Games for Windows brand, including Shadowrun and UNO, featured cross-platform compatibility, allowing gamers to play against each other across Xbox 360 consoles and traditional Windows Vista or Windows 7 PCs.
Online play
Starting with Halo 2 on May 31, 2007, certain Games for Windows titles have access to Microsoft's Live network for online play and other features, including voice chat, instant messaging and friends lists, accessed from an in-game menu called the "Guide". Users can log in with their Xbox Live gamertags to earn achievements and to play games and chat across platforms with games that support cross-platform compatibility. Some features, including cross-platform multiplayer gaming and multiplayer achievements, initially required a subscription to Xbox Live Gold. However, on July 22, 2008, Microsoft announced that all Games for Windows functionality would be free for existing and future members, and that early adopters of the technology would receive refunds for previously incurred charges. In addition, Microsoft launched the Games for Windows – Live Marketplace, similar to the Xbox Live Marketplace, which allowed users to download or purchase content, such as game demos, add-ons, and gamer pictures, with Microsoft Points; the publisher of a title determined whether an item required purchase. At the same time, Microsoft announced its intention to make the Games for Windows – Live client software interface more user-friendly and to reduce the technical requirements for developers.
Games Explorer
The Games Explorer, included with all versions of Windows Vista and Windows 7, is a special folder that showcases the games installed on a user's computer and their related information, essentially making it a games gallery. When a compatible game is installed, the operating system adds a shortcut for the game to the Games Explorer and can optionally download additional information, such as game packaging and content rating information (e.g., ESRB, PEGI, ACB, CERO), through the developer's own game definition file or from information provided over the Internet, although this feature was discontinued in 2016. Windows Experience Index information is also displayed within the interface. The feature was removed entirely in Windows 10 version 1803.
Games Explorer supports custom commands for games and also includes shortcuts to configure various operating system components which may be pertinent to gamers, such as audio devices, display devices, firewall settings, and game controllers. In Windows Vista, Games Explorer allows developers to expose game metadata and thumbnails to the interface and to Windows Search through a shell handler. The Games Explorer is fully compatible with the parental controls feature included in Windows Vista and Windows 7. Parental controls allow parents to include or exclude certain games based on their content, rating, and/or title, and can also block games from being played altogether.
Compatibility typically depends on the age or popularity of a game, with newer games having better compatibility. If a game is incompatible, a user can manually add it by dragging and dropping it into the Games Explorer.
Tray and Play
Tray and Play is a technology developed by Microsoft for Windows Vista that allows users to insert a game disc into an optical disc drive and play the game while it installs itself in the background and streams off the disc with minimal or zero caching—in a manner similar to a game console. The first and only commercial game known to use this technology is the Windows version of Halo 2.
Xbox 360 peripheral compatibility
Part of the Games for Windows initiative involved ensuring that Xbox 360 peripherals, such as the Xbox 360 Controller and Wireless Gaming Receiver worked across Windows platforms. Xbox 360 peripherals not only work with certified games, but also with the default games included with Windows Vista, such as Minesweeper.
See also
DirectX
List of Games for Windows titles
List of Games for Windows – Live titles
List of Windows Games on Demand
List of Xbox games on Windows
Live Anywhere
PC Gaming Alliance
References
External links
Games for Windows Technical Requirements
Games for Windows Test Requirements
Products and services discontinued in 2013
Microsoft initiatives
Windows Vista
Xbox
Xbox network
Products introduced in 2006
|
24879552
|
https://en.wikipedia.org/wiki/Comparison%20of%20audio%20player%20software
|
Comparison of audio player software
|
The following comparison of audio players compares general and technical information for a number of software media player programs. For the purpose of this comparison, "audio players" are defined as any media player explicitly designed to play audio files, with limited or no support for video playback. Multi-media players designed for video playback, which can also play music, are included under comparison of video player software.
General
Operating system compatibility
This section lists the operating systems on which the player works. There may be multiple versions of a player for different operating systems.
Features
Audio format ability
Information about what audio formats the players understand. Footnotes lead to information about abilities of future versions of the players or plugins/filters that provide such functionality.
Container format ability
Information about what container formats the players understand. Footnotes lead to information about abilities of future versions of the players or filters that provide such functionality.
Scalable, composite and emulation format abilities
Protocol abilities
Information about which internet protocols the players understand, for receiving streaming media content. Footnotes lead to information about abilities of future versions of the players or plugins that provide such functionality.
Playlist format ability
Information about which playlist formats the players understand.
Metadata ability
Information about what metadata, or tagging, formats the players understand. Most other containers have their own metadata format and the players usually use them. Footnotes lead to information about abilities of future versions of the players or plugins that provide such functionality.
Optical media ability
Information about what kinds of optical discs the players can play. Footnotes lead to information about abilities of future versions of the players or plugins that provide such functionality.
Playback of Super Audio CD is not possible for any media player, because no suitable hardware exists.
All media players capable of audio CD playback will also play the Redbook core of any HDCD disc, providing no sound-quality benefits over standard audio CDs.
See also
List of codecs
Open source codecs and containers
Comparison of container formats
Comparison of portable media players
List of podcast clients
Notes
References
Audio player software
|
4015871
|
https://en.wikipedia.org/wiki/David%20Eppstein
|
David Eppstein
|
David Arthur Eppstein (born 1963) is an American computer scientist and mathematician. He is a Distinguished Professor of computer science at the University of California, Irvine. He is known for his work in computational geometry, graph algorithms, and recreational mathematics. In 2011, he was named an ACM Fellow.
Biography
Born in Windsor, England, in 1963, Eppstein received a B.S. in Mathematics from Stanford University in 1984, and later an M.S. (1985) and Ph.D. (1989) in computer science from Columbia University, after which he took a postdoctoral position at Xerox's Palo Alto Research Center. He joined the UC Irvine faculty in 1990, and was co-chair of the Computer Science Department there from 2002 to 2005. In 2014, he was named a Chancellor's Professor. In October 2017, Eppstein was one of 396 members elected as fellows of the American Association for the Advancement of Science.
Eppstein is also an amateur digital photographer.
Research interests
In computer science, Eppstein's research has included work on minimum spanning trees, shortest paths, dynamic graph data structures, graph coloring, graph drawing and geometric optimization. He has published also in application areas such as finite element meshing, which is used in engineering design, and in computational statistics, particularly in robust, multivariate, nonparametric statistics.
Eppstein served as the program chair for the theory track of the ACM Symposium on Computational Geometry in 2001, the program chair of the ACM-SIAM Symposium on Discrete Algorithms in 2002, and the co-chair for the International Symposium on Graph Drawing in 2009.
Selected publications
Republished in
Books
See also
Eppstein's algorithm
References
External links
David Eppstein's profile at the University of California, Irvine
1963 births
Living people
American computer scientists
British emigrants to the United States
Cellular automatists
Columbia School of Engineering and Applied Science alumni
Fellows of the American Association for the Advancement of Science
Fellows of the Association for Computing Machinery
Graph drawing people
Graph theorists
Palo Alto High School alumni
People from Irvine, California
Recreational mathematicians
Stanford University School of Humanities and Sciences alumni
Researchers in geometric algorithms
University of California, Irvine faculty
Science bloggers
Scientists at PARC (company)
American Wikimedians
|
21415032
|
https://en.wikipedia.org/wiki/Netgear%20WGR614L
|
Netgear WGR614L
|
The WGR614L (also known as the WGR614v8) is an 802.11b/g wireless network router created by Netgear. It was officially launched on June 30, 2008. The WGR614L runs open-source Linux firmware and supports the installation of third-party packages such as DD-WRT, Tomato, and OpenWrt.
Hardware
Broadcom BCM5354 240 MHz SoC
4 MB Flash memory
16 MB RAM
16 kB instruction cache
16 kB data cache
1000 byte pre-fetch cache
4 MB CPU cache
2 dBi gain antennas (1 internal and 1 external dipole)
802.11 b/g wireless support
Certified for use with Windows Vista
Features
Supports installation of OpenWrt, Tomato firmware, and DD-WRT
Supports Wi-Fi Protected Setup (WPS)
Automatically detects ISP type, exposed host (DMZ), MAC address authentication, URL content filtering, logs and email alerts of Internet activity
Static & dynamic routing with TCP/IP, VPN pass-through (IPsec, L2TP), NAT, PPTP, PPPoE, DHCP (client & server)
Applications
The WGR614L is designed to be used in home or business environments. It is often used in connection with third-party firmware and solutions, such as SputnikNet and Titan Hotspots. The router can also be used as a wireless client bridge (utilizing OpenWrt firmware) and as a wireless repeater bridge (using DD-WRT firmware).
External links
Press Release announcing WGR614L
Official Support Page
List Of WGR614L Resources
The WGR614L at DD-WRT.com
Using the WGR614L As A Wireless Repeater Bridge Using DD-WRT
Using the WGR614L As a Wireless Client Bridge using OpenWrt Firmware
Firmware downloads
DD-WRT Router Database (lookup WGR614L).
Old DD-WRT, Tomato and OpenWrt links over on My Open Router website.
WGR614L
Hardware routers
Linux
|
37481
|
https://en.wikipedia.org/wiki/Intranet
|
Intranet
|
An intranet is a computer network for sharing information, easier communication, collaboration tools, operational systems, and other computing services within an organization, usually to the exclusion of access by outsiders. The term is used in contrast to public networks, such as the Internet, but uses most of the same technology based on the Internet Protocol Suite.
A company-wide intranet can constitute an important focal point of internal communication and collaboration, and provide a single starting point to access internal and external resources. In its simplest form, an intranet is established with the technologies for local area networks (LANs) and wide area networks (WANs). Many modern intranets have search engines, user profiles, blogs, mobile apps with notifications, and events planning within their infrastructure.
An intranet is sometimes contrasted to an extranet. While an intranet is generally restricted to employees of the organization, extranets may also be accessed by customers, suppliers, or other approved parties. Extranets extend a private network onto the Internet with special provisions for authentication, authorization and accounting (AAA protocol).
Uses
Increasingly, intranets are being used to deliver tools such as collaboration software (to facilitate working in groups and teleconferencing), sophisticated corporate directories, sales and customer relationship management tools, and project management applications.
Intranets are also being used as corporate culture-change platforms. For example, large numbers of employees discussing key issues in an intranet forum application could lead to new ideas in management, productivity, quality, and other corporate issues.
In large intranets, website traffic is often similar to public website traffic and can be better understood by using web metrics software to track overall activity. User surveys also improve intranet website effectiveness.
Larger businesses allow users within their intranet to access public internet through firewall servers. They have the ability to screen messages coming and going, keeping security intact. When part of an intranet is made accessible to customers and others outside the business, it becomes part of an extranet. Businesses can send private messages through the public network, using special encryption/decryption and other security safeguards to connect one part of their intranet to another.
Intranet user-experience, editorial, and technology teams work together to produce in-house sites. Most commonly, intranets are managed by the communications, HR or CIO departments of large organizations, or some combination of these.
Because of the scope and variety of content and the number of system interfaces, intranets of many organizations are much more complex than their respective public websites. Intranets and their use are growing rapidly. According to the Intranet design annual 2007 from Nielsen Norman Group, the number of pages on participants' intranets averaged 200,000 over the years 2001 to 2003 and has grown to an average of 6 million pages over 2005–2007.
Benefits
Workforce productivity: Intranets can help users to locate and view information faster and use applications relevant to their roles and responsibilities. With the help of a web browser interface, users can access data held in any database the organization wants to make available, anytime and, subject to security provisions, from anywhere within the company workstations, increasing employees' ability to perform their jobs faster, more accurately, and with confidence that they have the right information. It also helps to improve the services provided to users.
Time: Intranets allow organizations to distribute information to employees on an as-needed basis; employees may link to relevant information at their convenience, rather than being distracted indiscriminately by email.
Communication: Intranets can serve as powerful tools for communication within an organization, particularly for vertically communicating strategic initiatives that have a global reach throughout the organization. The type of information that can easily be conveyed is the purpose of the initiative and what the initiative is aiming to achieve, who is driving the initiative, results achieved to date, and whom to speak to for more information. By providing this information on the intranet, staff have the opportunity to keep up-to-date with the strategic focus of the organization. Some examples of communication tools would be chat, email, and blogs. A real-world example of an intranet helping a company communicate is Nestle, which had a number of food processing plants in Scandinavia whose central support system had to deal with a number of queries every day. When Nestle decided to invest in an intranet, it quickly realized the savings; McGovern says the savings from the reduction in query calls was substantially greater than the investment in the intranet.
Web publishing allows cumbersome corporate knowledge to be maintained and easily accessed throughout the company using hypermedia and Web technologies. Examples include employee manuals, benefits documents, company policies, business standards, news feeds, and even training material, all of which can be accessed using common Internet standards (Acrobat files, Flash files, CGI applications). Because each business unit can update the online copy of a document, the most recent version is usually available to employees using the intranet.
Business operations and management: Intranets are also being used as a platform for developing and deploying applications to support business operations and decisions across the internetworked enterprise.
Workflow: automation of routine processes that reduces delay, such as meeting scheduling and vacation planning
Cost-effectiveness: Users can view information and data via a web browser rather than maintaining physical documents such as procedure manuals, internal phone lists and requisition forms. This can potentially save the business money on printing and document duplication, reduce document maintenance overhead, and benefit the environment. For example, the HRM company PeopleSoft "derived significant cost savings by shifting HR processes to the intranet". McGovern goes on to say the manual cost of enrolling in benefits was found to be US$109.48 per enrollment. "Shifting this process to the intranet reduced the cost per enrollment to $21.79; a saving of 80 percent". Another company that saved money on expense reports was Cisco. "In 1996, Cisco processed 54,000 reports and the amount of dollars processed was USD19 million".
Enhance collaboration: Information is easily accessible by all authorised users, which enables teamwork. Being able to communicate in real-time through integrated third party tools, such as an instant messenger, promotes the sharing of ideas and removes blockages to communication to help boost a business' productivity.
Cross-platform capability: Standards-compliant web browsers are available for Windows, Mac, and UNIX.
Built for one audience: Many companies dictate computer specifications which, in turn, may allow Intranet developers to write applications that only have to work on one browser (no cross-browser compatibility issues). Being able to specifically address one's "viewer" is a great advantage. Since intranets are user-specific (requiring database/network authentication prior to access), users know exactly who they are interfacing with and can personalize their intranet based on role (job title, department) or individual ("Congratulations Jane, on your 3rd year with our company!").
Promote common corporate culture: Every user has the ability to view the same information within the intranet.
Supports a distributed computing architecture: The intranet can also be linked to a company's management information system, for example a time keeping system.
Employee Engagement: Since "involvement in decision making" is one of the main drivers of employee engagement, offering tools (like forums or surveys) that foster peer-to-peer collaboration and employee participation can make employees feel more valued and involved.
Planning and creation
Most organizations devote considerable resources to the planning and implementation of their intranet, as it is of strategic importance to the organization's success. Some of the planning would include topics such as determining the purpose and goals of the intranet, identifying persons or departments responsible for implementation and management, and devising functional plans, page layouts and designs.
The appropriate staff would also ensure that implementation schedules and phase-out of existing systems were organized, while defining and implementing security of the intranet and ensuring it lies within legal boundaries and other constraints. In order to produce a high-value end product, systems planners should determine the level of interactivity (e.g. wikis, on-line forms) desired.
Planners may also consider whether the input of new data and updating of existing data is to be centrally controlled or devolved. These decisions sit alongside the hardware and software considerations (like content management systems), participation issues (like good taste, harassment, confidentiality), and features to be supported.
Intranets are often static sites; they are a shared drive, serving up centrally stored documents alongside internal articles or communications (often one-way communication). By leveraging firms which specialise in 'social' intranets, organisations are beginning to think of how their intranets can become a 'communication hub' for their entire team. The actual implementation would include steps such as securing senior management support and funding, conducting a business requirement analysis and identifying users' information needs.
From the technical perspective, there would need to be a co-ordinated installation of the web server and user access network, the required user/client applications and the creation of document framework (or template) for the content to be hosted.
The end-user should be involved in testing and promoting use of the company intranet, possibly through a parallel adoption methodology or pilot programme. In the long term, the company should carry out ongoing measurement and evaluation, including through benchmarking against other company services.
Maintenance
Some aspects are non-static.
Staying current
An intranet structure needs key personnel committed to maintaining the intranet and keeping content current. For feedback on the intranet, social networking can be done through a forum for users to indicate what they want and what they do not like.
Privacy protection
The European Union's General Data Protection Regulation went into effect May 2018.
Enterprise private network
An enterprise private network is a computer network built by a business to interconnect its various company sites (such as production sites, offices and shops) in order to share computer resources.
Beginning with the digitalisation of telecommunication networks, started in the 1970s in the US by AT&T, and propelled by the growth in computer systems availability and demands, enterprise networks have been built for decades without the need to append the term private to them. The networks were operated over telecommunication networks and, as for voice communications, a certain amount of security and secrecy was expected and delivered.
But with the Internet in the 1990s came a new type of network, virtual private networks, built over this public infrastructure and using encryption to protect the data traffic from eavesdropping. So enterprise networks are now commonly referred to as enterprise private networks in order to clarify that these are private networks, in contrast to public networks.
See also
eGranary Digital Library
Enterprise portal
Intranet portal
Intranet strategies
Intranet Wiki
Intraweb
Kwangmyong (intranet)
Virtual workplace
Web portal
References
Computer networks
Internet privacy
|
2860479
|
https://en.wikipedia.org/wiki/Debian-Installer
|
Debian-Installer
|
Debian-Installer is a system installer designed for the Debian Linux distribution. It originally appeared in the Debian release 3.1 (Sarge), released on June 6, 2005, although the first release of a Linux distribution that used it was Skolelinux (Debian-Edu) 1.0, released in June 2004.
It is also one of two official installers available for Ubuntu, the other being called Ubiquity (itself based on parts of debian-installer) which was introduced in Ubuntu 6.06 (Dapper Drake).
It makes use of cdebconf (a re-implementation of debconf in C) to perform configuration at install time.
Originally, it only supported a text-mode interface based on ncurses. A graphical front-end (using GTK with DirectFB) was first introduced in Debian 4.0 (Etch). Since Debian 6.0 (Squeeze), it runs over Xorg instead of DirectFB.
debootstrap
debootstrap is software which allows installation of a Debian base system into a subdirectory of another, already installed operating system. It needs access to a Debian repository and does not require an installation CD. It can also be installed and run from another operating system, or used for "cross-debootstrapping", that is, creating a root filesystem for a machine of a different architecture, for instance OpenRISC. There is also a largely equivalent version written in C, cdebootstrap, which is used in debian-installer.
debootstrap can be used to install Debian in a system without using an installation disk but can also be used to run a different Debian flavor in a chroot environment. This way it is possible to create a full (minimal) Debian installation which can be used for testing purposes, or for building packages in a "clean" environment (e.g., as pbuilder does).
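As an illustration of the workflow described above, the sketch below drives debootstrap from a script to install a minimal base system into a subdirectory and then runs a command inside it with chroot. It assumes a host with debootstrap installed and root privileges; the suite name, target directory and mirror URL are placeholder values, not prescribed settings.

```python
# Minimal sketch: install a Debian base system into a subdirectory and
# enter it with chroot. Must be run as root; values below are placeholders.
import subprocess

suite = "stable"                        # a Debian release name or code name
target = "/srv/debian-chroot"           # subdirectory that will hold the base system
mirror = "http://deb.debian.org/debian" # any reachable Debian mirror

# debootstrap SUITE TARGET MIRROR installs the base system into TARGET.
subprocess.run(["debootstrap", suite, target, mirror], check=True)

# Quick sanity check: list the root of the freshly created system.
subprocess.run(["chroot", target, "/bin/ls", "/"], check=True)
```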
Features
Set language
Select location
Configure keyboard
Configure network
Setup users and passwords
Configure clock
Partition disk
Create partition
Format device
LVM/Cryptsetup
Install system base
Configure package manager
Configure mirrorlist
Configure bootloader
See also
Anaconda
Calamares
Ubiquity
Wubi
References
External links
Debian
Free software programmed in C
Linux installation software
Software that uses GTK
|
1341263
|
https://en.wikipedia.org/wiki/Diskless%20node
|
Diskless node
|
A diskless node (or diskless workstation) is a workstation or personal computer without disk drives, which employs network booting to load its operating system from a server. (A computer may also be said to act as a diskless node, if its disks are unused and network booting is used.)
Diskless nodes (or computers acting as such) are sometimes known as network computers or hybrid clients. Hybrid client may either just mean diskless node, or it may be used in a more particular sense to mean a diskless node which runs some, but not all, applications remotely, as in the thin client computing architecture.
Advantages of diskless nodes can include lower production cost, lower running costs, quieter operation, and manageability advantages (for example, centrally managed software installation).
In many universities and in some large organizations, PCs are used in a similar configuration, with some or all applications stored remotely but executed locally—again, for manageability reasons. However, these are not diskless nodes if they still boot from a local hard drive.
Distinction between diskless nodes and centralized computing
Diskless nodes process data, thus using their own CPU and RAM to run software, but do not store data persistently—that task is handed off to a server. This is distinct from thin clients, in which all significant processing happens remotely, on the server—the only software that runs on a thin client is the "thin" (i.e. relatively small and simple) client software, which handles simple input/output tasks to communicate with the user, such as drawing a dialog box on the display or waiting for user input.
A collective term encompassing both thin client computing, and its technological predecessor, text terminals (which are text-only), is centralized computing. Thin clients and text terminals can both require powerful central processing facilities in the servers, in order to perform all significant processing tasks for all of the clients.
Diskless nodes can be seen as a compromise between fat clients (such as ordinary personal computers) and centralized computing, using central storage for efficiency, but not requiring centralized processing, and making efficient use of the powerful processing power of even the slowest of contemporary CPUs, which would tend to sit idle for much of the time under the centralized computing model.
Principles of operation
The operating system (OS) for a diskless node is loaded from a server, using network booting. In some cases, removable storage may be used to initiate the bootstrap process, such as a USB flash drive, or other bootable media such as a floppy disk, CD or DVD. However, the firmware in many modern computers can be configured to locate a server and begin the bootup process automatically, without the need to insert bootable media.
For network auto-booting, the Preboot Execution Environment (PXE) or Bootstrap Protocol (BOOTP) network protocols are commonly used to find a server with files for booting the device. Standard full-size desktop PCs are able to be network-booted in this manner with an add-on network card that includes a UNDI boot ROM. Diskless network booting is commonly a built-in feature of desktop and laptop PCs intended for business use, since it can be used on an otherwise disk-booted standard desktop computer to remotely run diagnostics, to install software, or to apply a disk image to the local hard drive.
After the bootstrapping process has been initiated, as described above, bootstrapping will take place according to one of three main approaches.
In the first approach (used, for example, by the Linux Terminal Server Project), the kernel is loaded into memory and then the rest of the operating system is accessed via a network filesystem connection to the server. (A small RAM disk may be created to store temporary files locally.) This approach is sometimes called the "NFS root" technique when used with Linux or Unix client operating systems.
In the second approach, the kernel of the OS is loaded, and part of the system's memory is configured as a large RAM disk, and then the remainder of the OS image is fetched and loaded into the RAM disk. This is the implementation that Microsoft has chosen for its Windows XP Embedded remote boot feature.
In the third approach, disk operations are virtualized and translated into a network protocol. The data that would normally be stored on a local disk drive is instead stored in virtual disk files hosted on a server. Disk operations, such as requests to read or write disk sectors, are translated into corresponding network requests and processed by a service or daemon running on the server side. This is the implementation used by Neoware Image Manager, Ardence, VHD and various "boot over iSCSI" products. This third approach differs from the first approach in that what is remote is not a file system but a disk device (or raw device), and the client OS is not aware that it is not running off a hard disk. This is why this approach is sometimes named "Virtual Hard Disk" or "Network Virtual Disk".
This third approach makes it easier to run a client OS than keeping a complete disk image in RAM or using a read-only file system. In this approach, the system uses a "write cache" that stores all data written by a diskless node. This write cache is usually a file stored on a server (or on the client's storage, if any), but it can also be a portion of the client's RAM. The write cache can be persistent or volatile. When it is volatile, all data written by a specific client to the virtual disk is discarded when that client is rebooted, yet user data can remain persistent if recorded in (roaming) user profiles or home folders stored on remote servers. The two major commercial products (one from Hewlett-Packard and one from Citrix Systems) that allow the deployment of diskless nodes booting Microsoft Windows or Linux client operating systems use such write caches. The Citrix product cannot use a persistent write cache, but the VHD and HP products can.
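The write-cache behaviour described above can be pictured as a copy-on-write overlay on a shared, read-only image: reads consult the client's private cache first and fall back to the shared image, while writes only ever touch the cache. The following is a minimal illustrative model of that idea, not the implementation of any of the products named above.

```python
# Illustrative copy-on-write model of a shared disk image with a per-client
# write cache, as described for the "virtual hard disk" approach above.
class VirtualDisk:
    def __init__(self, shared_image: dict[int, bytes]):
        self.shared_image = shared_image          # read-only base image (sector -> data)
        self.write_cache: dict[int, bytes] = {}   # this client's private overlay

    def read_sector(self, sector: int) -> bytes:
        # Prefer the client's own writes; otherwise fall back to the shared image.
        if sector in self.write_cache:
            return self.write_cache[sector]
        return self.shared_image.get(sector, b"\x00" * 512)

    def write_sector(self, sector: int, data: bytes) -> None:
        # Writes never touch the shared image, so N clients can share it safely.
        self.write_cache[sector] = data

    def discard_cache(self) -> None:
        # A volatile write cache is simply dropped when the client reboots.
        self.write_cache.clear()

# Example: two clients boot from the same base image.
base = {0: b"bootsector", 1: b"os-data"}
client_a, client_b = VirtualDisk(base), VirtualDisk(base)
client_a.write_sector(1, b"a-local-change")
print(client_a.read_sector(1))   # b'a-local-change'
print(client_b.read_sector(1))   # b'os-data'  (the other client is unaffected)
```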
Diskless Windows nodes
Windows 3.x and Windows 95 OSR1 supported remote boot operations from NetWare servers, Windows NT servers and even DEC Pathworks servers.
Third-party software vendors such as Qualystem (acquired by Neoware), LanWorks (acquired by 3Com), Ardence (acquired by Citrix), APCT and Xtreamining Technology have developed and marketed software products aimed at remote-booting newer versions of the Windows product line: Windows 95 OSR2 and Windows 98 were supported by Qualystem and LanWorks, Windows NT was supported by APCT and Ardence (called VenturCom at that time), and Windows 2000/XP/2003/Vista/Windows 7 are supported by Hewlett-Packard (which acquired Neoware, which had previously acquired Qualystem) and Citrix Systems (which acquired Ardence).
Comparison with fat clients
Software installation and maintenance
With essentially a single OS image for an array of machines (with perhaps some customizations for differences in hardware configurations among the nodes), installing and maintaining software can be more efficient. Furthermore, any system changes made during operation (due to user action, worms, viruses, etc.) can be either wiped out when the power is removed (if the image is copied to a local RAM disk, as with Windows XP Embedded remote boot) or prohibited entirely (if the image is a network filesystem). This allows use in public access areas (such as libraries) or in schools etc., where users might wish to experiment or attempt to "hack" the system.
However, it is not necessary to implement network booting to achieve either of the above advantages - ordinary PCs (with the help of appropriate software) can be configured to download and reinstall their operating systems on (e.g.) a nightly basis, albeit with extra work compared to using a shared disk image that diskless nodes boot from.
Modern diskless nodes can share the very same disk image, using a 1:N relationship (one disk image used simultaneously by N diskless nodes). This makes it very easy to install and maintain software applications: the administrator needs to install or maintain an application only once, and the clients get the new application as soon as they boot from the updated image. Disk image sharing is made possible by the write cache: no client ever writes into the shared disk image, because each client writes to its own cache.
All modern diskless node systems can also use a 1:1 client-to-disk-image relationship, where one client "owns" one disk image and writes directly into it. No write cache is used in that case.
A modification to a shared disk image is usually made as follows:
The administrator makes a copy of the shared disk image that he/she wants to update (this can be done easily because the disk image file is opened only for reading)
The administrator boots a diskless node in 1:1 mode (unshared mode) from the copy of the disk image he/she just made
The administrator makes any modification to the disk image (for instance install a new software application, apply patches or hotfixes)
The administrator shuts down the diskless node that was using the disk image in 1:1 mode
The administrator shares the modified disk image
The diskless nodes use the shared disk image (1:N) as soon as they are rebooted.
Centralized storage
The use of central disk storage also makes more efficient use of disk storage. This can cut storage costs, freeing up capital to invest in more reliable, modern storage technologies, such as RAID arrays which support redundant operation, and storage area networks which allow hot-adding of storage without any interruption. Further, it means that losses of disk drives to mechanical or electrical failure—which are statistically highly probable events over a timeframe of years, with a large number of disks involved—are often both less likely to happen (because there are typically fewer disk drives that can fail) and less likely to cause interruption (because they would likely be part of RAID arrays). This also means that the nodes themselves are less likely to have hardware failures than fat clients.
Diskless nodes share these advantages with thin clients.
Performance of centralized storage
However, this storage efficiency can come at a price: as often happens in computing, increased storage efficiency sometimes comes at the cost of decreased performance.
Large numbers of nodes making demands on the same server simultaneously can slow down everyone's experience. However, this can be mitigated by installing large amounts of RAM on the server (which speeds up read operations by improving caching performance), by adding more servers (which distributes the I/O workload), or by adding more disks to a RAID array (which distributes the physical I/O workload). In any case this is also a problem which can affect any client-server network to some extent, since, of course, fat clients also use servers to store user data.
Indeed, user data may be much more significant in size and may be accessed far more frequently than operating systems and programs in some environments, so moving to a diskless model will not necessarily cause a noticeable degradation in performance.
Greater network bandwidth (i.e. capacity) will also be used in a diskless model, compared to a fat client model. This does not necessarily mean that a higher capacity network infrastructure will need to be installed—it could simply mean that a higher proportion of the existing network capacity will be used.
Finally, the combination of network data transfer latencies (physically transferring the data over the network) and contention latencies (waiting for the server to process other nodes' requests before yours) can lead to an unacceptable degradation in performance compared to using local drives, depending on the nature of the application and the capacity of the network infrastructure and the server.
Other advantages
Another situation where a diskless node would be useful is a potentially hazardous environment where computers are likely to be damaged or destroyed, which makes inexpensive nodes with minimal hardware an advantage. Again, thin clients can also be used here.
Diskless machines may also consume little power and make little noise, which implies potential environmental benefits and makes them ideal for some computer cluster applications.
Comparison with thin clients
Major corporations tend to instead implement thin clients (using Microsoft Windows Terminal Server or other such software), since much lower specification hardware can be used for the client (which essentially acts as a simple "window" into the central server, which actually runs the user's operating system as a login session). Of course, diskless nodes can also be used as thin clients. Moreover, thin client computers are increasing in power to the point where they are becoming suitable as fully fledged diskless workstations for some applications.
Both thin client and diskless node architectures employ diskless clients which have advantages over fat clients (see above), but differ with regard to the location of processing.
Advantages of diskless nodes over thin clients
Distributed load The processing load of diskless nodes is distributed. Each user gets his or her own isolated processing environment, barely affecting other users in the network as long as their workload is not filesystem-intensive. Thin clients rely on the central server for processing and thus require a fast server. When the central server is busy and slow, both kinds of clients will be affected, but thin clients will be slowed completely, whereas diskless nodes will only be slowed when accessing data on the server.
Better multimedia performance Diskless nodes have advantages over thin clients in multimedia-rich applications that would be bandwidth-intensive if served entirely from the central server. For example, diskless nodes are well suited for video gaming because the rendering is local, lowering the latency.
Peripheral support Diskless nodes are typically ordinary personal computers or workstations with no hard drives supplied, which means the usual large variety of peripherals can be added. By contrast, thin clients are typically very small, sealed boxes with no possibility for internal expansion, and limited or non-existent possibility for external expansion. Even if e.g. a USB device can be physically attached to a thin client, the thin client software might not support peripherals beyond the basic input and output devices - for example, it may not be compatible with graphics tablets, digital cameras or scanners.
Advantages of thin clients over diskless nodes
The hardware is cheaper on thin clients, since processing requirements on the client are minimal, and 3D acceleration and elaborate audio support are not usually provided. Of course, a diskless node can also be purchased with a cheap CPU and minimal multimedia support, if suitable. Thus, cost savings may be smaller than they first appear for some organizations. However, many large organizations habitually buy hardware with a higher than necessary specification to meet the needs of particular applications and uses, or to ensure future proofing (see next point). There are also less "rational" reasons for overspecifying hardware which quite often come into play: departments wastefully using up budgets in order to retain their current budget levels for next year; and uncertainty about the future, or lack of technical knowledge, or lack of care and attention, when choosing PC specifications. Taking all these factors into account, thin clients may bring the most substantial savings, as only the servers are likely to be substantially "gold-plated" and/or "future-proofed" in the thin client model.
Future proofing is not much of an issue for thin clients, which are likely to remain useful for the entirety of their replacement cycle - one to four years, or even longer - as the burden is on the servers. There are issues when it comes to diskless nodes, as the processing load is potentially much higher, meaning more consideration is required when purchasing. Thin client networks may require significantly more powerful servers in the future, whereas a diskless node network may in future need a server upgrade, a client upgrade, or both.
Thin client networks potentially consume less network bandwidth, since much data is simply read by the server and processed there, and only transferred to the client in small pieces, as and when needed for display. Also, transferring graphical data to the display is usually more amenable to efficient data compression and optimisation technologies (see e.g. NX technology) than transferring arbitrary programs or user data. In many typical application scenarios, both total bandwidth consumption and "burst" consumption would be expected to be lower for an efficient thin client than for a diskless node.
See also
Thin client
Network block device
Diskless Remote Boot in Linux
Preboot Execution Environment
Notes
References
1989: Licentiate of Science in Technology thesis. Helsinki University of Technology, Department of Electrical Engineering. Hannu H. Kari: "Diskless Workstations in a Local Area Network".
Flaherty, James; Abrahams, Alan. 1992. Remote bootstrapping a node over communication link by initially requesting remote storage access program which emulates local disk to load other programs.
1993: A Workstation Architecture to Support Multimedia by Mark David Hayter
Abdous, Arave; Demortain, Stephane; Dalongvile, Didier. 1992. Remote booting of an operating system by a network.
1996: Operating systems support for the Desk Area Network by Ian Leslie and Derek McAuley (postscript file)
2004: Management of Diskless Windows 2000 and XP Stations from a Linux Server
External links
Network Block Device home page http://nbd.sourceforge.net/
|
1142729
|
https://en.wikipedia.org/wiki/FROG
|
FROG
|
In cryptography, FROG is a block cipher authored by Georgoudis, Leroux and Chaves. The algorithm can work with any block size between 8 and 128 bytes, and supports key sizes between 5 and 125 bytes. The algorithm consists of 8 rounds and has a very complicated key schedule.
It was submitted in 1998 by TecApro, a Costa Rican software company, to the AES competition as a candidate to become the Advanced Encryption Standard. Wagner et al. (1999) found a number of weak key classes for FROG. Other problems included very slow key setup and relatively slow encryption. FROG was not selected as a finalist.
Design philosophy
Normally a block cipher applies a fixed sequence of primitive mathematical or logical operators (such as additions, XORs, etc.) on the plaintext and secret key in order to produce the ciphertext. An attacker uses this knowledge to search for weaknesses in the cipher which may allow the recovery of the plaintext.
FROG's design philosophy is to hide the exact sequence of primitive operations even though the cipher itself is known. While other ciphers use the secret key only as data (which are combined with the plain text to produce the cipher text), FROG uses the key both as data and as instructions on how to combine these data. In effect an expanded version of the key is used by FROG as a program. FROG itself operates as an interpreter that applies this key-dependent program on the plain text to produce the cipher text. Decryption works by applying the same program in reverse on the cipher text.
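The "key as program" idea can be illustrated with a short Python sketch. This is a deliberately simplified toy, not the actual FROG specification: it only shows how an expanded key can drive which byte-wide XORs and substitutions are applied to the data.

def toy_round(block, record):
    # One round driven entirely by key material: the record supplies an XOR mask
    # and a 256-entry substitution table, both derived from the secret key.
    xor_mask = record["xor_mask"]      # one mask byte per block byte
    sbox = record["sbox"]              # a permutation of the values 0..255
    return bytes(sbox[b ^ m] for b, m in zip(block, xor_mask))

def toy_encrypt(block, expanded_key):
    # expanded_key plays the role of FROG's internal key: a list of per-round
    # records that act as the "program" the interpreter executes on the data.
    for record in expanded_key:
        block = toy_round(block, record)
    return block

Decryption in such a scheme applies the inverse substitutions and XORs in reverse round order, mirroring the description above.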
Description
The FROG key schedule (or internal key) is 2304 bytes long. It is produced recursively by iteratively applying FROG to an empty plain text. The resulting block is processed to produce a well-formatted internal key with 8 records. FROG has 8 rounds, with the operations of each round codified by one record in the internal key. All operations are byte-wide and consist of XORs and substitutions.
FROG is very easy to implement (the reference C version has only about 150 lines of code). Much of the code needed to implement FROG is used to generate the secret internal key; the internal cipher itself is a very short piece of code. It is possible to write an assembly routine of just 22 machine instructions that does full FROG encryption and decryption. The implementation runs well on 8-bit processors because it uses only byte-level instructions; no bit-specific operations are used. Once the internal key has been computed, the algorithm is fairly fast: a version implemented using 8086 assembler achieves processing speeds of over 2.2 megabytes per second when run on a 200 MHz Pentium PC.
Security
FROG's design philosophy is meant to defend against unforeseen or unknown types of attacks. Nevertheless, the very fact that the key is used as the encryption program means that some keys may correspond to weak encryption programs. David Wagner et al. found that 2^−33 of the keys are weak and that in these cases the key can be broken with 2^58 chosen plaintexts.
Another flaw of FROG is that the decryption function has much slower diffusion than the encryption function. Here 2^−29 of the keys are weak and can be broken using 2^36 chosen ciphertexts.
Notes
References
David Wagner, Niels Ferguson and Bruce Schneier, Cryptanalysis of FROG, in proceedings of the 2nd AES candidate conference, pp. 175–181, NIST, 1999.
Dianelos Georgoudis, Damian Leroux and Billy Simón Chaves, The FROG Encryption Algorithm, June 15, 1998.
External links
Specification of the FROG encryption algorithm
256bit Ciphers - FROG Reference implementation and derived code
Block ciphers
|
1666392
|
https://en.wikipedia.org/wiki/Autodesk%20Media%20and%20Entertainment
|
Autodesk Media and Entertainment
|
Autodesk Media and Entertainment is a division of Autodesk which offers animation and visual effects products, and was formed by the combination of multiple acquisitions. In 2018, the company began operating as a single operating segment and reporting unit.
History
Discreet Logic
Montreal-based Discreet Logic was founded in 1991 by former Softimage Company sales director Richard Szalwinski, to commercialize the 2D compositor Eddie, licensed from Australian production company Animal Logic. Eddie was associated with Australian software engineer Bruno Nicoletti, who later founded visual effects software company The Foundry, in London, England.
In 1992, Discreet Logic entered into a European distribution agreement with Softimage, and shifted its focus to Flame, one of the first software-only image compositing products, developed by Australian Gary Tregaskis. Flame, which was originally named Flash, was first shown at NAB in 1992, ran on the Silicon Graphics platform, and became the company's flagship product.
In July 1995, Discreet Logic's initial public offering raised about US$40 million.
On May 26, 1995, the company acquired the assets of Brughetti Corporation for about CDN$1 million, and in October acquired Computer-und Serviceverwaltungs AG, located in Innsbruck, Austria and some software from Innovative Medientechnik-und Planungs-GmbH in Geltendorf, Germany.
After a 2-for-1 stock split on October 16, 1995, a secondary offering in December 1995 raised an additional $28 million.
On April 15, Discreet invested $2.5 million in privately held Essential Communications Corporation.
Kinetix
Autodesk originally created a San Francisco multimedia unit in 1996 under the name Kinetix to publish 3D Studio Max, a product developed by The Yost Group.
In August 1998, Autodesk announced plans to acquire Discreet Logic and its intent to combine that operation with Kinetix.
At the time, it was Autodesk's largest acquisition, valued at about $410 million by the time it closed in March 1999 (down from an estimated $520 million when announced). The new business unit was named the Discreet division.
The combined Discreet-branded product catalog then encompassed all the Discreet Logic products, including Flame, Flint, Fire, Smoke, Effect and Edit, as well as Kinetix's products, including 3D Studio Max, Lightscape and Character Studio.
Media and Entertainment
In March 2005, Autodesk renamed its business unit Autodesk Media and Entertainment and discontinued the Discreet brand (still headquartered in Montreal).
Through the years, Autodesk augmented its entertainment division with many other acquisitions. One of the most significant was in October 2005, when Autodesk acquired Toronto-based Alias Systems Corporation for an estimated $182 million from Accel-KKR, and merged its animation business into its entertainment division.
Alias had been part of SGI until 2004.
In 2008, it acquired technology of the former Softimage Company from Avid Technology.
In 2011, Autodesk acquired Pixlr, a set of cloud-based image tools and utilities.
Industry usage
By 2011, these products were used in films that won the Academy Award for Best Visual Effects for 16 consecutive years.
Much of Avatar's visual effects were created with Autodesk Media and Entertainment software. Autodesk software enabled Avatar director James Cameron to aim a camera at actors wearing motion-capture suits in a studio and see them as characters in the fictional world of Pandora in the film. Autodesk software also played a role in the visual effects of Alice in Wonderland, The Curious Case of Benjamin Button, Harry Potter and the Deathly Hallows Part 1, Inception, Iron Man 2, King Kong, Gladiator, Titanic, Life of Pi, Hugo, The Adventures of Tintin: The Secret of the Unicorn and other films.
In November 2010, Ubisoft announced that Autodesk's 3D gaming technology was used in Assassin's Creed: Brotherhood.
Products
The division's products include Maya, 3ds Max (the new name of 3D Studio Max), Softimage, Mudbox, MotionBuilder, the game middleware Kynapse, the creative finishing products Flame, Flare, Lustre and Smoke, and the Stingray game engine (discontinued, but still supported until end of subscription).
Historical
Discreet Frost, introduced in 1996, an SGI-based, template-based on-air graphics system for news, weather and sports
Matchmover (now bundled with 3ds Max, Maya and Softimage), Retimer and VTour, all acquired from RealViz
Media Cleaner, a video-encoder for the Mac, and Edit, acquired from Media 100 in 2001
Lightscape, real-time radiosity software for Microsoft Windows acquired in December 1997 by Discreet, was incorporated in 3ds Max in 2003.
Discreet Plasma, released in 2002, a simplified version of 3ds Max for Adobe Flash authoring
Discreet GMax, a simplified version of 3ds Max customized for game modders
Autodesk Toxik, introduced in 2007, compositing software that allowed users to coordinate work on a production. The software could only be bought for a minimum of 3 PCs, underlining its focus on collaborative, database-driven workflow. With its collaborative functions and databases removed, and renamed "Composite", it is now bundled with Maya, 3ds Max, and Softimage.
Combustion - acquired as Illuminaire Paint and Composite from Denim Software, running on Windows NT and Mac OS. Rebranded as paint* and effect* and integrated into a suite with edit*. Finally unified as combustion, a desktop shot compositor and motion graphics application for Mac OS and Windows. It shared some technologies and user interface elements with the Discreet systems-based products (flame, smoke). It ran standalone and integrated with edit*, and eventually ran standalone only after edit* reached end of life.
SketchBook Pro
Creative finishing
IFF
Flame, Flint and Inferno (collectively known as IFF) is a series of compositing and visual effects applications originally created for MIPS architecture computers from Silicon Graphics (SGI), running Irix.
Flame was first released in January 1993; by mid-1995, it had become a market leader in visual effects software, with a price around US$175,000, or US$450,000 with a Silicon Graphics workstation. Time with the software was typically rented at a post-production house with an operator. The Flame software is licensed in a variety of forms, including Flint, a lower-priced version of Flame with fewer functions, and Inferno, introduced in 1995, a version intended for the film market, with a price of about US$225,000 without hardware. Traditionally Inferno ran on the SGI Onyx series, while Flame and Flint ran on SGI Indigo² and Octane workstations.
Flame/Inferno were implemented on Linux in 2006. Autodesk said the use of more powerful hardware allowed complex 3D composites to be rendered more than 20 times faster than on the previous SGI workstations.
The first movie to use Flame was Super Mario Bros.; the software was then still in beta. The software also saw use on PBS's 1995 graphics package, designed by PMcD Design and animated by Tape House Digital.
In the 1998 Academy Scientific and Technical Awards, Gary Tregaskis (design) and Dominique Boisvert, Phillippe Panzini and Andre Le Blanc (development and implementation) received a Scientific and Engineering Award for Inferno and Flame.
Flare and Smoke
Flare, a software-only subset of Flame for creative assistants, was introduced in 2009 at around one-fifth the cost of a full-featured Flame seat.
Autodesk Smoke is non-linear video editing software that integrates with Flame. When sold as a turnkey system, e.g. with an IBM Linux workstation, 2004 pricing started at US$68,000. A version for Mac OS X was announced in 2009, initially priced at US$14,995.
Lustre
Lustre is color grading software originally developed by Mark Jaszberenyi, Gyula Priskin and Tamas Perlaki at Colorfront in Hungary. The application was first packaged as a plugin for the Flame product under the name "Colorstar", emulating film-type color grading using printer-light controls. It was then developed as standalone software. It was introduced through the British company 5D under the Colossus name in private demonstrations at the IBC show in Amsterdam in 2001. Alpha and beta testing were held at Eclair Laboratoires in Paris. During the trials, Colossus ran on the Windows XP operating system, but the same code base was also used on the IRIX operating system.
After the demise of 5D in 2002, Autodesk acquired the license to distribute the Lustre software, and later acquired Colorfront entirely. In the 2009 Academy Scientific and Technical Awards the original developers received a Scientific and Engineering Award for Lustre.
Flame Premium
In September 2010, Autodesk introduced Flame Premium 2011, a single license for running Flame, Smoke Advanced and Lustre together on a single workstation. At launch, new licenses were priced from US$129,000 excluding hardware, with upgrades from existing Flame licenses priced from US$10,000. Existing users of Smoke Advanced or Lustre could upgrade from US$25,000.
References
External links
Official website
Media and Entertainment
1991 establishments in Quebec
3D graphics software
Companies based in Montreal
Companies based in New York (state)
Compositing software
IRIX software
Software companies of Canada
Software companies of the United States
Visual effects software
|
10733530
|
https://en.wikipedia.org/wiki/Internet%20in%20Africa
|
Internet in Africa
|
The Internet in Africa is limited by a lower penetration rate when compared to the rest of the world. Measurable parameters such as the number of ISP subscriptions, overall number of hosts, IXP traffic, and overall available bandwidth all indicate that Africa lies on the far side of the "digital divide". Moreover, Africa itself exhibits an inner digital divide, with most Internet activity and infrastructure concentrated in South Africa, Morocco and Egypt, as well as smaller economies like Mauritius and Seychelles.
While the telecommunications market in Africa is still in its early stages of development, it is also one of the fastest-growing in the world. In the 2000s, mobile telephone service in Africa has been rising, and mobile telephone use is now substantially more widespread than fixed line telephony. Telecommunication companies in Africa are looking at Broadband Wireless Access technologies as the key to make Internet available to the population at large. Projects are being completed that aim at the realization of Internet backbones that might help cut the cost of bandwidth in African countries.
The International Telecommunication Union has held the first Connect the World meeting in Kigali, Rwanda (in October 2007) as a demonstration that the development of telecommunications in Africa is considered a key intermediate objective for the fulfillment of the Millennium Development Goals.
Internet penetration in Africa, by country
Previous situation
The information available about the ability of people in Africa to use the internet (for instance ISP subscriptions, number of hosts, network traffic, available bandwidth and bandwidth cost) gives a broadly consistent picture. South Africa is the only African country with figures similar to those of Europe and North America; it is followed by some smaller, tourist-dependent economies such as Seychelles and Mauritius, and a few North African countries, notably Morocco and Egypt. The leading Subsaharan countries in telecommunication and internet development are South Africa and Kenya.
Current trend
As of December 2020, Kenya had an internet penetration rate of approximately 85.2 percent. This high rate is mainly because Kenya is home to M-Pesa, a mobile wallet provider whose secure payment system encourages internet access. As of October 2020, the majority of web traffic in leading digital markets in Africa originated from mobile devices in Nigeria, one of the countries with the largest number of internet users worldwide. Across the nation, 74 percent of web traffic was generated via smartphones and only 24 percent via PC devices. This is connected to the fact that mobile connections are much cheaper and do not require the infrastructure that is needed for traditional desktop PCs with fixed-line internet connections.
Context
Obstacles to the accessibility of Internet services in Africa include generally low levels of computer literacy in the population, poor infrastructures, and high costs of Internet services. Power availability is also scarce, with vast rural areas that are not connected to power grids as well as frequent black-outs in major urban areas such as Dar es Salaam.
In 2000, Subsaharan Africa as a whole had fewer fixed telephone lines than Manhattan, and in 2006 Africa accounted for only 2% of the world's overall telephone lines. As a consequence of this general lack of connectivity, most Africa-generated network traffic (somewhere between 70% and 85%) is routed through servers that are located elsewhere (mainly Europe).
Overall bandwidth in Africa is scarce, and its irregular distribution clearly reflects the African "inner digital divide". In 2007, 16 countries in Africa had just one international Internet connection with a capacity of 10 Mbit/s or lower, while South Africa alone had over 800 Mbit/s. The main backbones connecting Africa to the rest of the world via submarine cables, i.e., SAT-2 and SAT-3, provide for a limited bandwidth. In 2007, all these international connections from Africa amounted to roughly 28,000 Mbit/s, while Asia has 800,000 Mbit/s and Europe over 3,000,000 Mbit/s. The total bandwidth available to Africa was less than that available to Norway alone (49,000 Mbit/s).
As a consequence of the scarce overall bandwidth provided by cable connections, a large share of Internet traffic in Africa goes through expensive satellite links. In general, therefore, the cost of Internet access (and even more so broadband access) is unaffordable for most of the population. According to the Kenyan ISPs association, high costs are also a consequence of the subjection of African ISPs to European ISPs and the lack of clear international regulation of inter-ISP cost sharing. For example, while the ITU has long ratified that the cost of inter-provider telephonic connections must be charged to all involved providers in equal parts, in 2002 the Kenyan ISP association denounced the fact that all costs of Internet traffic between Europe and Africa are charged to African providers.
Internet access
According to 2011 estimates, about 13.5% of the African population has Internet access. While Africa accounts for 15.0% of the world's population, only 6.2% of the world's Internet subscribers are Africans. The share of Africans with access to broadband connections is estimated at 1% or less. In September 2007, there were 1,097,200 African broadband subscribers, with most of these subscriptions belonging to large companies or institutions.
Internet access is also irregularly distributed, with 2/3 of overall online activity in Africa being generated in South Africa (which only accounts for 5% of the continent's population). Most of the remaining 1/3 is in Morocco and Egypt. The largest percentage of Internet subscribers are found in small economies such as Seychelles, where as much as 37% of the population has Internet access (while in South Africa this value is 11% and in Egypt it is 8%).
It has been noted, however, that data on Internet subscribers only partially reflect the actual number of Internet users in Africa, and the impact of the network on African daily life and culture. For example, cybercafes and Internet kiosks are common in the urban areas of many African countries. There are also other informal means to "access" the Internet; for example, couriers that print e-mail messages and deliver them by hand to recipients in remote locations, or radio stations that broadcast information taken from the Internet.
Number of hosts
The picture provided by the figures for the number of network hosts is coherent with those above. At the end of 2007:
about 1.8 million hosts were in Africa, versus over 120 million in Europe, 67 million in Asia and 27 million in South America;
Africa as a whole had fewer hosts than Finland alone;
relatively developed Nigeria, despite its 155 million inhabitants, had one third of the hosts found in Liechtenstein with its 35,000 inhabitants; and
the largest number of African hosts (almost 90%) were in just three countries, South Africa, Morocco, and Egypt.
The table below lists the number of hosts for African countries with more than 1000 hosts in 2007 and 2013. These countries collectively account for 99% of Africa's overall hosts. The last column for each year provides the "host density" measured as the number of hosts per 1000 inhabitants; for comparison, consider that the average host density in the world was 43 hosts per 1000 inhabitants in 2007 and 72 hosts per 1000 inhabitants in 2013.
IXP traffic
An indirect measure that is sometimes used to assess the penetration of Internet technology in a given area is the overall amount of data traffic at Internet exchange points (IXPs). On African IXPs, traffic can be measured in kbit/s (kilobits per second) or Mbit/s (megabits per second), while in the rest of the world it is typically in the order of magnitude of Gbit/s (gigabits per second). The main IXP of Johannesburg, JINX (which is also the largest IXP in Africa) has about 6.5 Gbit/s traffic (in Sep 2012).
IXP traffic, however, is only a measure of local network traffic (mainly e-mail), while most African-generated traffic is routed through other continents, and most Web content created in Africa is hosted on Web servers located elsewhere. Additionally, measurable data do not consider private peering, i.e., inter-ISP traffic that does not go through IXPs. For example, the main academic network in South Africa, TENET, has 10 Gbit/s private peering with the ISP Internet Solutions both in Johannesburg and Cape Town.
Regulation
The privatization of the telecommunication market, as well as the regulation of competition in this market, are at an early stage of development in many African countries. Kenya and Botswana have started privatization processes for Telkom Kenya and Botswana Telecommunications Corporation (BTC), respectively. The mobile telephony market is generally more open and dynamic, and even more so is the Internet market.
The table below depicts the percentage of African countries where telecommunications markets (fixed line telephony, mobile telephony, Internet) are monopolistic, partially competitive, or fully competitive, either de iure or de facto (data refer to 2007).
The regulation of network businesses and the establishment of authorities to control them is widely recognised as a relevant objective by most African governments. A model for such regulation is provided by Morocco; after an authority was established in 1998, and Meditel entered the market in 1999 to compete with the main incumbent Maroc Telecom, the situation has been quickly developing. Based on such experiences and on the directions provided by ITU, most African countries now have local Internet authorities and are defining local regulation of the Internet market. In 2007, 83% of African countries had their own authority for Internet services and data traffic.
Benefits of Internet Access in Africa
It is widely recognized that increased availability of Internet technology in Africa would provide several key benefits. Specifically, some of the major issues of the continent might be tackled by applications of this technology, as demonstrated by some initiatives that have already been started and have proved successful. For example, organizations such as RANET (RAdio and interNET for The communication of Hydro-Meteorological and Climate-Related Information) and ACMAD (African Centre of Meteorological Application for Development) use the Internet to develop reliable weather models for the Sahel and other areas in Africa, with dramatic benefits for local agriculture.
Internet-based telemedicine and distance education could improve quality of life in the most remote rural areas of Africa. The availability of information on the network could benefit education in general, counterbalancing the general lack of local libraries. It has also been suggested that e-Government applications could indirectly alleviate widespread political issues, since they would help bridge the gap between institutions and remote rural areas. Most Web 2.0 applications developed in Africa so far have actually been created by governments.
African economy might also benefit from broadband availability, for example as a consequence of the applicability of e-commerce and outsourcing business models that have long proved effective in Europe and North America. Currently there are many small businesses (Cybercafes, local ISPs or Wireless ISPs) that benefit from broadband availability via satellite to provide Internet connectivity solutions to local customers.
One technology that has been utilized in many African countries for the provision of Internet broadband connectivity is VSAT, which allows businesses to access the European or US Internet backbone via satellite in regions that lack terrestrial Internet access. Fiber in Africa has been restricted to big coastal cities facing the North Atlantic, South Atlantic, and Indian Oceans. According to World Bank data, only 37% of Africa's 1.2 billion people actually live in those regions. Satellite therefore remains the most effective and viable way to reach rural areas, and thus a major portion of Africa's population. Satellite access in Africa is popular on Ku band and C band, with C band being the preferred access method in countries that have heavy rainfall.
Evolution and perspectives
Internet availability
The African telecommunication market is growing at a faster rate than in the rest of the world. In the 2000s this has especially been true for the mobile telephony market, that between 2004 and 2007 grew three times as fast as the world's average. In 2005, over 5 billion USD have been invested in Africa in telecommunication infrastructures.
Internet in Africa is now growing even faster than mobile telephony. Between 2000 and 2008, Internet subscriptions have grown by 1030.2%, versus the world's average of 290.6%.
The table below summarizes figures for the number of Internet subscription in Africa from 2000 to 2008, based on estimates made in 2008.
Infrastructure development
A number of projects have been started that aim at bringing more bandwidth to Africa, in order to cut down costs for both operators and end users. At least three projects for an underseas backbone in the Indian Ocean have been started. EASSy (East African Submarine cable System), sponsored by the World Bank and the Development Bank of Southern Africa, is a cable system that will connect Mtunzini (South Africa) and Port Sudan (Sudan), with branches to several countries on the eastern coast of Africa. The Kenyan government has started a similar project named TEAMS (The East Africa Marine System), with the collaboration of Etisalat. A third project, SEACOM, is completely African-owned. SEACOM bandwidth has already been sold to several customers, including the South African network TENET.
In South Africa, the SANReN network, with a 500 Gbit/s core, has been designed to become the fastest academic network in the world; the universities of Witwatersrand and Johannesburg are already using a bandwidth of 10 Gbit/s provided by this network.
According to the European Commission, a 10% rise in digital coverage could result in a more than 1% increase in African GDP. The European Investment Bank makes funding emerging developments on the continent a priority, in line with the EU's plan for African digital transformation.
Access
Efforts to connect previously disconnected parts of the world have been compared to previous rounds of infrastructure in Africa. The recent linking of East Africa to the global fibre-optic network generated similar visions and hopes to those that emerged in the Victorian era when railways were used to connect the previously disconnected.
With bandwidth becoming more available and less costly, the first to benefit will be institutions and companies that already have Internet access. In order for the network to reach a larger part of the population, solutions are needed for the last mile problem, i.e., making bandwidth available to the final user. To be feasible for Africa, last mile solutions must take into account the limited penetration of fixed telephony lines, especially in rural areas. Of about 400,000 rural communities that are estimated to exist in Africa, less than 3% have PSTN access. Note that providing network access to rural communities is one of the Millennium Goals defined by the World Summit on the Information Society.
Most studies on this subject identify Broadband Wireless Access (BWA) technologies such as WiMAX as the most promising solution for the end user's Internet access in Africa. These technologies can also benefit from the wide availability of the mobile telephony network. Even in smaller countries like Seychelles, most Internet users already access the network via the GSM network. Providers that have 3G licenses will be able to provide WiMAX services.
Some experimentation is already being conducted in a few countries. In Kenya, the Digital Village Scheme project aims at providing government services in rural areas via wireless access. In Nigeria, Horizon Wireless is running a broadband (3.5 GHz) wireless network. Since 2007, MTN Rwanda has been working to provide broadband wireless access in Kigali. In Algeria, the Icosnet ISP and Aperto Networks have been collaborating for a business WiMAX solution. The South African authority ICASA has already assigned WiMAX licences to several providers, and Neotel is implementing WiMAX-based last mile solutions in Johannesburg, Pretoria, Cape Town and Durban.
Wireless
A distinction can be drawn among the wireless broadband technologies deployed throughout Africa, from GSM and 3G to 4G/LTE and 5G services.
1G
2G
3G
4G/LTE
5G
Broadband
Dial up Internet
ADSL
Fibre to the home (FTTH)
See also
AfriNIC (regional Internet registry for Africa)
List of terrestrial fibre optic cable projects in Africa
Digital divide
Millennium Development Goals
Mobile telephony in Africa
Media of Africa
Africa Digital Awards
References
Jean-Michel Cornu (2005), How people use the Internet today in Africa, UNESCO,
Giancarlo Livraghi (2008), Dati sull'Internet in Africa,
Giancarlo Livraghi (2014), Dati sull'Internet in Africa,
Darren Waters (2007), Africa waiting for net revolution. «BBC News» October 29,
Balancing Act (2005), South Africa's MTN Spends USD60-70M on 3G Launch, «Balancing Act» nr. 264,
Balancing Act (2008), Private Investors Sign Up for Stake in TEAMS cable project in Kenya, «Balancing Act» n. 398,
Balancing Act (2008b), Mobile Internet Take-up Is Speeding the Take-up of IPv6 in Africa, «Balancing Act» n. 406,
BBC News (2002), The Great African Internet Robbery, April 15,
ITU (2007), Telecommunications/ICT Markets and Trends in Africa,
ITU (2010), Connect the World,
Internet World Stats (2008), African Internet Usage and Population Stats
MyBroadband (2007), Is SEACOM Racing Past EASSy?,
Banji Oyelaran-Oyeyinka and Catherine Nyaki Adeya (2002), Internet Access in Africa: An Empirical Exploration, May, United Nations University,
Pingdom (2008), Africa's Internet is Still Very Far Behind, March,
External links
African office of the Internet Society
Balancing Act, telecommunications, internet and broadcast in Africa
ACMAD
TENET, the main academic network in Africa
African online communities
Kenyan Pundit, Kenyan blog server
Mentalacrobatics, Kenyan blog server
Mashada, Kenyan forum
Urban Legend Kampala, Ugandan blog server
Kenyayote, Kenyan leading Campus blog server
|
771174
|
https://en.wikipedia.org/wiki/Cyberterrorism
|
Cyberterrorism
|
Cyberterrorism is the use of the Internet to conduct violent acts that result in, or threaten, the loss of life or significant bodily harm, in order to achieve political or ideological gains through threat or intimidation. Acts of deliberate, large-scale disruption of computer networks, especially of personal computers attached to the Internet, by means of tools such as computer viruses, computer worms, phishing, malicious software, hardware methods, and programming scripts, can all be forms of internet terrorism. Cyberterrorism is a controversial term. Some authors opt for a very narrow definition, relating to deployment by known terrorist organizations of disruption attacks against information systems for the primary purpose of creating alarm, panic, or physical disruption. Other authors prefer a broader definition, which includes cybercrime. Participating in a cyberattack affects the perception of the terror threat, even if it is not carried out with a violent approach. By some definitions, it may be difficult to distinguish which instances of online activities are cyberterrorism and which are cybercrime.
Cyberterrorism can also be defined as the intentional use of computers, networks, and the public internet to cause destruction and harm for personal objectives. Experienced cyberterrorists, who are highly skilled in hacking, can cause massive damage to government systems and might leave a country in fear of further attacks. The objectives of such terrorists may be political or ideological, since this can be considered a form of terror.
There is much concern from government and media sources about the potential damage that could be caused by cyberterrorism, and this has prompted efforts by government agencies such as the Federal Bureau of Investigation (FBI) and the Central Intelligence Agency (CIA) to put an end to cyber attacks and cyberterrorism.
There have been several major and minor instances of cyberterrorism. Al-Qaeda utilized the internet to communicate with supporters and even to recruit new members. Estonia, a Baltic country which is constantly evolving in terms of technology, became a battleground for cyberterror in April 2007 after disputes regarding the relocation of a World War II-era Soviet statue in Estonia's capital, Tallinn.
Overview
There is debate over the basic definition of the scope of cyberterrorism. These definitions can be narrow, such as the use of the Internet to attack other systems on the Internet in a way that results in violence against persons or property. They can also be broad, including any form of Internet usage by terrorists as well as conventional attacks on information technology infrastructures. There is variation in qualification by motivation, targets, methods, and centrality of computer use in the act. U.S. government agencies also use varying definitions, and none of these have so far attempted to introduce a standard that is binding outside their sphere of influence.
Depending on context, cyberterrorism may overlap considerably with cybercrime, cyberwar or ordinary terrorism. Eugene Kaspersky, founder of Kaspersky Lab, now feels that "cyberterrorism" is a more accurate term than "cyberwar". He states that "with today's attacks, you are clueless about who did it or when they will strike again. It's not cyber-war, but cyberterrorism." He also equates large-scale cyber weapons, such as the Flame Virus and NetTraveler Virus which his company discovered, to biological weapons, claiming that in an interconnected world, they have the potential to be equally destructive.
If cyberterrorism is treated similarly to traditional terrorism, then it only includes attacks that threaten property or lives, and can be defined as the leveraging of a target's computers and information, particularly via the Internet, to cause physical, real-world harm or severe disruption of infrastructure.
Many academics and researchers who specialize in terrorism studies suggest that cyberterrorism does not exist and is really a matter of hacking or information warfare. They disagree with labeling it as terrorism because of the unlikelihood of the creation of fear, significant physical harm, or death in a population using electronic means, considering current attack and protective technologies.
If death or physical damage that could cause human harm is considered a necessary part of the cyberterrorism definition, then there have been few identifiable incidents of cyberterrorism, although there has been much policy research and public concern. Modern terrorism and political violence is not easily defined, however, and some scholars assert that it is now "unbounded" and not exclusively concerned with physical damage.
There is an old saying that death or loss of property are the side products of terrorism; the main purpose of such incidents is to create terror in people's minds and harm bystanders. If any incident in cyberspace can create terror, it may be rightly called cyberterrorism. For those affected by such acts, the fears of cyberterrorism are quite real.
As with cybercrime in general, the threshold of required knowledge and skills to perpetrate acts of cyberterrorism has been steadily diminishing thanks to freely available hacking suites and online courses. Additionally, the physical and virtual worlds are merging at an accelerated rate, making for many more targets of opportunity which is evidenced by such notable cyber attacks as Stuxnet, the Saudi petrochemical sabotage attempt in 2018 and others.
Defining cyberterrorism
Assigning a concrete definition to cyberterrorism can be hard, due to the difficulty of defining the term terrorism itself. Multiple organizations have created their own definitions, most of which are overly broad. There is also controversy concerning overuse of the term, hyperbole in the media and by security vendors trying to sell "solutions".
One way of understanding cyberterrorism involves the idea that terrorists could cause massive loss of life, worldwide economic chaos and environmental damage by hacking into critical infrastructure systems. The nature of cyberterrorism covers conduct involving computer or Internet technology that:
is motivated by a political, religious or ideological cause
is intended to intimidate a government or a section of the public to varying degrees
seriously interferes with infrastructure
The term "cyberterrorism" can be used in a variety of different ways, but there are limits to its use. An attack on an Internet business can be labeled cyberterrorism, however when it is done for economic motivations rather than ideological it is typically regarded as cybercrime. Convention also limits the label "cyberterrorism" to actions by individuals, independent groups, or organizations. Any form of cyberwarfare conducted by governments and states would be regulated and punishable under international law.
The Technolytics Institute defines cyberterrorism as "[t]he premeditated use of disruptive activities, or the threat thereof, against computers and/or networks, with the intention to cause harm or further social, ideological, religious, political or similar objectives. Or to intimidate any person in furtherance of such objectives." The term appears first in defense literature, surfacing (as "cyber-terrorism") in reports by the U.S. Army War College as early as 1998.
The National Conference of State Legislatures, an organization of legislators created to help policymakers in the United States of America with issues such as the economy and homeland security, defines cyberterrorism as:
[T]he use of information technology by terrorist groups and individuals to further their agenda. This can include use of information technology to organize and execute attacks against networks, computer systems and telecommunications infrastructures, or for exchanging information or making threats electronically. Examples are hacking into computer systems, introducing viruses to vulnerable networks, web site defacing, Denial-of-service attacks, or terroristic threats made via electronic communication.
NATO defines cyberterrorism as "[a] cyberattack using or exploiting computer or communication networks to cause sufficient destruction or disruption to generate fear or to intimidate a society into an ideological goal".
The United States National Infrastructure Protection Center defined cyberterrorism as: "A criminal act perpetrated by the use of computers and telecommunications capabilities resulting in violence, destruction, and/or disruption of services to create fear by causing confusion and uncertainty within a given population, with the goal of influencing a government or population to conform to a political, social, or ideological agenda."
The FBI, another United States agency, defines "cyber terrorism" as "premeditated, politically motivated attack against information, computer systems, computer programs, and data which results in violence against non-combatant targets by subnational groups or clandestine agents".
These definitions tend to share the view of cyberterrorism as politically and/or ideologically inclined. One area of debate is the difference between cyberterrorism and hacktivism. Hacktivism is "the marriage of hacking with political activism". Both actions are politically driven and involve using computers, however cyberterrorism is primarily used to cause harm. It becomes an issue because acts of violence on the computer can be labeled either cyberterrorism or hacktivism.
Types of cyberterror capability
In 1999 the Center for the Study of Terrorism and Irregular Warfare at the Naval Postgraduate School in Monterey, California defined three levels of cyberterror capability:
Simple-Unstructured: the capability to conduct basic hacks against individual systems using tools created by someone else. The organization possesses little target-analysis, command-and-control, or learning capability.
Advanced-Structured: the capability to conduct more sophisticated attacks against multiple systems or networks and possibly, to modify or create basic hacking-tools. The organization possesses an elementary target-analysis, command-and-control, and learning capability.
Complex-Coordinated: the capability for a coordinated attack capable of causing mass-disruption against integrated, heterogeneous defenses (including cryptography). Ability to create sophisticated hacking tools. Highly capable target-analysis, command-and-control, and organization learning-capability.
Concerns
Cyberterrorism is becoming more and more prominent on social media today. As the Internet becomes more pervasive, individuals or groups can use the anonymity afforded by cyberspace to threaten other individuals, specific groups (with membership based, for example, on ethnicity or belief), communities and entire countries, without the inherent threat of identification, capture, injury, or death of the attacker that being physically present would bring.
Many groups such as Anonymous, use tools such as denial-of-service attacks to attack and censor groups which oppose them, creating many concerns for freedom and respect for differences of thought.
Many believe that cyberterrorism is an extreme threat to countries' economies, and fear an attack could potentially lead to another Great Depression. Several leaders agree that cyberterrorism ranks as the greatest threat among possible attacks on U.S. territory. Although natural disasters are considered a top threat and have proven to be devastating to people and land, there is ultimately little that can be done to prevent such events from happening. Thus, the expectation is to focus more on preventative measures that make Internet attacks impossible to execute.
As the Internet continues to expand, and computer systems continue to be assigned increased responsibility while becoming more complex and interdependent, sabotage or terrorism via the Internet may become a more serious threat and is possibly one of the top 10 events to "end the human race." People have much easier access to illegal involvement within cyberspace by the ability to access a part of the internet known as the Dark Web. The Internet of Things promises to further merge the virtual and physical worlds, which some experts see as a powerful incentive for states to use terrorist proxies in furtherance of objectives.
Dependence on the Internet is rapidly increasing on a worldwide scale, creating a platform for international cyber-terror plots to be formulated and executed as a direct threat to national security. For terrorists, cyber-based attacks have distinct advantages over physical attacks. They can be conducted remotely, anonymously, and relatively cheaply, and they do not require significant investment in weapons, explosives or personnel. The effects can be widespread and profound. Incidents of cyberterrorism are likely to increase. They can be expected to take place through denial-of-service attacks, malware, and other methods that are difficult to envision today. One example involves deaths linked to the Islamic State and the online social networks Twitter, Google, and Facebook, which led to legal action being taken against them and ultimately to lawsuits.
In an article about cyber attacks by Iran and North Korea, The New York Times observes: "The appeal of digital weapons is similar to that of nuclear capability: it is a way for an outgunned, outfinanced nation to even the playing field. 'These countries are pursuing cyberweapons the same way they are pursuing nuclear weapons,' said James A. Lewis, a computer security expert at the Center for Strategic and International Studies in Washington. 'It's primitive; it's not top of the line, but it's good enough and they are committed to getting it.'"
History
Public interest in cyberterrorism began in the late 1990s, when the term was coined by Barry C. Collin. As 2000 approached, the fear and uncertainty about the millennium bug heightened, as did the potential for attacks by cyber terrorists. Although the millennium bug was by no means a terrorist attack or plot against the world or the United States, it did act as a catalyst in sparking the fears of a possibly large-scale devastating cyber-attack. Commentators noted that many of the facts of such incidents seemed to change, often with exaggerated media reports.
The high-profile terrorist attacks in the United States on September 11, 2001, and the ensuing War on Terror by the US led to further media coverage of the potential threats of cyberterrorism in the years following. Mainstream media coverage often discusses the possibility of a large attack making use of computer networks to sabotage critical infrastructures with the aim of putting human lives in jeopardy or causing disruption on a national scale either directly or by disruption of the national economy.
Authors such as Winn Schwartau and John Arquilla are reported to have had considerable financial success selling books which described what were purported to be plausible scenarios of mayhem caused by cyberterrorism. Many critics claim that these books were unrealistic in their assessments of whether the attacks described (such as nuclear meltdowns and chemical plant explosions) were possible. A common thread throughout what critics perceive as cyberterror-hype is that of non-falsifiability; that is, when the predicted disasters fail to occur, it only goes to show how lucky we've been so far, rather than impugning the theory.
In 2016, for the first time ever, the Department of Justice charged Ardit Ferizi with cyberterrorism. He is accused of hacking into a military website and stealing the names, addresses, and other personal information of government and military personnel and selling it to ISIS.
On the other hand, it is also argued that, despite substantial studies on cyberterrorism, the body of literature is still unable to present a realistic estimate of the actual threat. For instance, in the case of a cyberterrorist attack on a public infrastructure such as a power plant or air traffic control through hacking, there is uncertainty as to its success because data concerning such phenomena are limited.
Current threats
Cyberterrorism ranks among the highest potential security threats in the world, and some analysts consider it more critical than the development of nuclear weapons or the current conflicts between nations. Due to the pervasiveness of the internet and the amount of responsibility assigned to this technology, digital weapons pose a threat to entire economic or social systems. Some of the most critical international security concerns include:
DDoS Attacks – Millions of denial-of-service attacks occur every year, and the resulting service disruption can cost hundreds of thousands of dollars for each hour a system is down. It is important to keep critical systems secured and redundant so that they remain online during these attacks.
Social Engineering – In 1997 an experiment conducted by the NSA concluded that thirty-five hackers were able to access critical Pentagon computer systems and could easily edit accounts, reformat data and even shut down entire systems. They often used phishing tactics such as calling offices and pretending to be technicians to obtain passwords.
Third Party Software – The top retailers are connected with thousands of separate third-party resources, and at least 23% of those assets have at least one critical vulnerability. These companies need to manage and reevaluate their network security in order to keep personal data safe.
Future threats
As technology becomes more integrated into society, new vulnerabilities and security threats open up on the complex networks we have built. If intruders were to gain access to these networks, they could threaten entire communities or economic systems. There is no certainty about what events will take place in the future, which is why it is important to build systems that can adapt to a changing environment.
The most apparent cyberterrorism threat in the near future involves the state of remote work during the COVID-19 pandemic. Companies cannot expect every home office to be up to date and secure, so they must adopt a zero-trust policy toward home devices. This means they must assume corporate resources and unsecured devices are sharing the same space and act accordingly.
The rise of cryptocurrency has also sparked additional threats in the realm of security. Cybercriminals are now hijacking home computers and company networks in order to mine certain cryptocurrencies such as bitcoin. This mining process requires an immense amount of computer processing power, which can cripple a business's network and lead to severe downtime if the issue is not resolved.
International attacks and response
Conventions
As of 2016 there have been eighteen conventions and major legal instruments that specifically deal with terrorist activities and cyberterrorism.
1963: Convention on Offences and Certain Other Acts Committed on Board Aircraft
1970: Convention for the Suppression of Unlawful Seizure of Aircraft
1971: Convention for the Suppression of Unlawful Acts Against the Safety of Civil Aviation
1973: Convention on the Prevention and Punishment of Crimes against Internationally Protected Persons
1979: International Convention against the Taking of Hostages
1980: Convention on the Physical Protection of Nuclear Material
1988: Protocol for the Suppression of Unlawful Acts of Violence at Airports Serving International Civil Aviation
1988: Protocol for the Suppression of Unlawful Acts against the Safety of Fixed Platforms Located on the Continental Shelf
1988: Convention for the Suppression of Unlawful Acts against the Safety of Maritime Navigation
1989: Supplementary to the Convention for the Suppression of Unlawful Acts against the Safety of Civil Aviation
1991: Convention on the Marking of Plastic Explosives for the Purpose of Detection
1997: International Convention for the Suppression of Terrorist Bombings
1999: International Convention for the Suppression of the Financing of Terrorism
2005: Protocol to the Convention for the Suppression of Unlawful Acts against the Safety of Maritime Navigation
2005: International Convention for the Suppression of Acts of Nuclear Terrorism
2010: Protocol Supplementary to the Convention for the Suppression of Unlawful Seizure of Aircraft
2010: Convention on the Suppression of Unlawful Acts Relating to International Civil Aviation
2014: Protocol to Amend the Convention on Offences and Certain Other Acts Committed on Board Aircraft
Motivations for cyberattacks
There are many different motives for cyberattacks, with the majority being for financial reasons. However, there is increasing evidence that hackers are becoming more politically motivated. Cyberterrorists are aware that governments are reliant on the internet and have exploited this as a result. For example, Mohammad Bin Ahmad As-Sālim's piece '39 Ways to Serve and Participate in Jihad' discusses how an electronic jihad could disrupt the West through targeted hacks of American websites, and other resources seen as anti-Jihad, modernist, or secular in orientation (Denning, 2010; Leyden, 2007).
Many cyberattacks are not conducted for money; rather, they are conducted because of differing ideological beliefs or out of personal revenge and outrage toward the company or individual being attacked. An employee, for example, might seek revenge on a company after being mistreated or wrongfully terminated.
Other motivations for cybercriminals include:
Political goals
Competition between companies
Cyberwarfare between two countries
Money
Political goals motivate cyberattackers when they are unhappy with candidates and want a preferred candidate to win an election; they may therefore try to alter election voting to help that candidate win.
Competition between two companies can also stir up a cyberattack: one company can hire a hacker to attack a rival, ostensibly to test its security. This can also benefit the instigator, because the rival's customers may come to believe the company is insecure after being attacked so easily and may not want their personal credentials leaked.
Cyberwarfare motivates countries that are fighting each other. It is mainly used to weaken the opposing country by compromising its core systems, its data, and other vulnerable information.
Money motivates cyberattacks involving ransomware, phishing, and data theft, as the cybercriminals can contact the victims directly and demand money in return for keeping the data safe.
International Institutions
The United Nations has several agencies that seek to address cyberterrorism, including the United Nations Office of Counter-Terrorism, the United Nations Office on Drugs and Crime, the United Nations Office for Disarmament Affairs, the United Nations Institute for Disarmament Research, the United Nations Interregional Crime and Justice Research Institute, and the International Telecommunication Union. Both EUROPOL and INTERPOL also notably specialize in the subject.
Europol and Interpol both specialize in operations against cyberterrorism, collaborate on operations, and host a yearly joint cybercrime conference. While both fight cybercrime, the two institutions operate differently: Europol sets up and coordinates cross-border operations against cybercriminals in the EU, while Interpol helps law enforcement and coordinates operations against cybercriminals globally.
Estonia and NATO
The Baltic state of Estonia was the target of a massive denial-of-service attack that ultimately rendered the country offline and shut out from services dependent on Internet connectivity in April 2007. The infrastructure of Estonia including everything from online banking and mobile phone networks to government services and access to health care information was disabled for a time. The tech-dependent state experienced severe turmoil and there was a great deal of concern over the nature and intent of the attack.
The cyber attack was a result of an Estonian-Russian dispute over the removal of a bronze statue depicting a World War II-era Soviet soldier from the center of the capital, Tallinn. In the midst of its armed conflict with Russia, Georgia likewise was subjected to sustained and coordinated attacks on its electronic infrastructure in August 2008. In both of these cases, circumstantial evidence points to coordinated Russian attacks, but attribution is difficult; though both countries blame Moscow for contributing to the cyber attacks, proof establishing legal culpability is lacking.
Estonia joined NATO in 2004, which prompted NATO to carefully monitor its member states' response to the attack. NATO also feared escalation and the possibility of cascading effects beyond Estonia's border to other NATO members. In 2008, directly as a result of the attacks, NATO opened a new center of excellence on cyberdefense to conduct research and training on cyber warfare in Tallinn.
The chaos resulting from the attacks in Estonia illustrated to the world the dependence countries had on information technology. This dependence then makes countries vulnerable to future cyber attacks and terrorism.
Quick information on the cyber attack on Estonia and its effects on the country:
Online services of Estonian banks and government agencies were taken down by uncontrollably high levels of internet traffic
Media outlets were also down, so broadcasters could not deliver news of the cyber attacks
Some services were under attack for 22 days, while other online services were taken down completely
Riots and looting went on for 48 hours in Tallinn, Estonia
The cyber attack served as a wake-up call for Estonia and for the entire world on the importance of cyber defence.
As cyberattacks continue to increase around the world, countries still look at the attacks on Estonia in 2007 as an example of how countries can fight future cyberattacks and terrorism. As a result of the attacks, Estonia is now one of the top countries in cyber defence and online safety, and its capital city of Tallinn is home to NATO's cyber defence hub. The government of Estonia continues to update its cyber defence protocols and national cybersecurity strategies. NATO's Cooperative Cyber Defence Centre in Tallinn also conducts research and training on cyber security, helping not just Estonia but other countries in the alliance.
China
The Chinese Defense Ministry confirmed the existence of an online defense unit in May 2011. Composed of about thirty elite internet specialists, the so-called "Cyber Blue Team", or "Blue Army", is officially claimed to be engaged in cyber-defense operations, though there are fears the unit has been used to penetrate secure online systems of foreign governments. China's leaders have invested in the foundations of cyber defense, quantum computing, and artificial intelligence. Thirty-nine Chinese soldiers were chosen to strengthen China's cyber defenses. The reason given by Ministry of National Defense spokesman Geng Yansheng was that the country's internet protection was currently weak. Geng claimed that the program was only temporary and intended to help improve cyber defenses.
India
To counter cyber terrorists, also called "white-collar jihadis", police in India have registered private citizens as volunteers who patrol the internet and report suspected cyber terrorists to the government. These volunteers are grouped into three categories: "Unlawful Content Flaggers", "Cyber Awareness Promoters" and "Cyber Experts". In August 2021, police arrested five suspected white-collar jihadis who were preparing a hit list of officers, journalists, social activists, lawyers and political functionaries to create fear among people. White-collar jihadis are considered the "worst kind of terrorists" as they remain anonymous and safe in other nations, but inflict an "immeasurable" amount of damage and brainwashing.
In India, the demand for cyber security professionals has increased over 100 per cent in 2021 and will rise 200 per cent by 2024.
Eighty-two percent of companies in India suffered a ransomware attack in 2020. The cost of recovering from a ransomware attack in India rose from $1.1 million in 2020 to $3.38 million in 2021. India tops the list of 30 countries for ransomware attacks.
A cyber-attack on the electricity grid in Maharashtra resulted in a power outage. This occurred in October 2020, and the authorities believe China was behind it.
Important information such as dates of birth and full names was leaked for thousands of patients who were tested for Covid-19. This information was made accessible on Google and was leaked from government websites. The job portal IIMjobs was attacked and the information of 1.4 million job seekers was leaked. The leaked information was quite extensive, including users' locations, names and phone numbers. The data of 500,000 Indian police personnel was sold on a forum in February 2021; it contained a great deal of personal information and came from a police exam taken in December 2019.
Korea
According to the 2016 Deloitte Asia-Pacific Defense Outlook, South Korea's 'Cyber Risk Score' was 884 out of 1,000, making it the most vulnerable country to cyber attacks in the Asia-Pacific region. Despite South Korea's high-speed internet and cutting-edge technology, its cyber security infrastructure is relatively weak. The 2013 South Korea cyberattack significantly damaged the Korean economy. The attack wounded the systems of two banks and the computer networks of three TV broadcasters. The incident was a massive blow, and the attacker was never identified, though it was theorized to be North Korea. The week before, North Korea had accused the United States and South Korea of shutting down its internet for two days. In 2017, a ransomware attack harassed private companies and users, who experienced personal information leakage. Additionally, North Korean cyber attacks threatened South Korea's national security.
In response, the South Korean government's countermeasure is to protect its information security centres through the National Intelligence Service (NIS). Currently, 'cyber security' is one of the major goals of the NIS. Since 2013, South Korea has established national cyber security policies and has sought to prevent cyber crises via sophisticated investigation of potential threats. Meanwhile, scholars emphasize improving national awareness of cyber attacks, as South Korea has already entered the so-called 'hyper-connected society'.
North Korea's cyberwarfare capability is remarkably efficient, and its hackers are among the best state-sponsored hackers. Those chosen to be hackers are selected when they are young and trained specifically in cyberwarfare. Hackers are trained to steal amounts of money from ATMs small enough not to be reported. North Korea is adept at zero-day exploits and will hack anyone it chooses. It steals secrets from companies and government agencies and steals money from financial systems to fund its hacking operations.
Pakistan
The Pakistani government has also taken steps to curb the menace of cyberterrorism and extremist propaganda. The National Counter Terrorism Authority (NACTA) is working on joint programs with different NGOs and other cyber security organizations in Pakistan to combat this problem. Surf Safe Pakistan is one such example: people in Pakistan can now report extremist and terrorist-related content online on the Surf Safe Pakistan portal. NACTA provides the Federal Government's leadership for the Surf Safe Campaign.
Ukraine
A series of powerful cyber attacks began 27 June 2017 that swamped websites of Ukrainian organizations, including banks, ministries, newspapers and electricity firms.
USA
The US Department of Defense (DoD) charged the United States Strategic Command with the duty of combating cyberterrorism. This is accomplished through the Joint Task Force-Global Network Operations, which is the operational component supporting USSTRATCOM in defense of the DoD's Global Information Grid. This is done by integrating GNO capabilities into the operations of all DoD computers, networks, and systems used by DoD combatant commands, services and agencies.
On November 2, 2006, the Secretary of the Air Force announced the creation of the Air Force's newest MAJCOM, the Air Force Cyber Command, which would be tasked to monitor and defend American interests in cyberspace. The plan was however replaced by the creation of Twenty-Fourth Air Force, which became active in August 2009 and would be a component of the planned United States Cyber Command.
On December 22, 2009, the White House named Howard Schmidt as its head of computer security, to coordinate U.S. government, military and intelligence efforts to repel hackers. He left the position in May 2012. Michael Daniel was appointed White House Coordinator of Cyber Security the same week and continued in the position during the second term of the Obama administration.
Obama signed an executive order enabling the US to impose sanctions on individuals or entities suspected of participating in cyber-related acts assessed to be possible threats to US national security, financial interests or foreign policy. U.S. authorities indicted a man over 92 cyberterrorism hacking attacks on computers used by the Department of Defense. A Nebraska-based consortium apprehended four million hacking attempts over the course of eight weeks. In 2011 cyberterrorism attacks grew 20%.
In May 2021, President Joe Biden announced an executive order aiming to improve America's cybersecurity. It came about after an increase in cybersecurity attacks aimed at the country's public and private sector. The plan aims to improve the government's cyberdefense by working on its ability to identify, deter, protect against, detect, and respond to attacks. The plan has 10 sections written into the document that include, to name a few, improving sharing of threat information, modernizing the government's cybersecurity, and establishing a Cybersecurity Review Board.
Examples
An operation can be carried out by anyone anywhere in the world, as it can be performed thousands of miles away from the target. An attack can cause serious damage to critical infrastructure, which may result in casualties.
Some attacks are conducted in furtherance of political and social objectives, as the following examples illustrate:
In 1996, a computer hacker allegedly associated with the White Supremacist movement temporarily disabled a Massachusetts ISP and damaged part of the ISP's record keeping system. The ISP had attempted to stop the hacker from sending out worldwide racist messages under the ISP's name. The hacker signed off with the threat: "you have yet to see true electronic terrorism. This is a promise."
In 1998, Spanish protesters bombarded the Institute for Global Communications (IGC) with thousands of bogus e-mail messages. E-mail was tied up and undeliverable to the ISP's users, and support lines were tied up with people who couldn't get their mail. The protestors also spammed IGC staff and member accounts, clogged their Web page with bogus credit card orders, and threatened to employ the same tactics against organizations using IGC services. They demanded that IGC stop hosting the Web site for the Euskal Herria Journal, a New York-based publication supporting Basque independence. Protestors said IGC supported terrorism because a section on the Web pages contained materials on the terrorist group ETA, which claimed responsibility for assassinations of Spanish political and security officials, and attacks on military installations. IGC finally relented and pulled the site because of the "mail bombings."
In 1998, ethnic Tamil guerrillas attempted to disrupt Sri Lankan embassies by sending large volumes of e-mail. The embassies received 800 e-mails a day over a two-week period. The messages read "We are the Internet Black Tigers and we're doing this to disrupt your communications." Intelligence authorities characterized it as the first known attack by terrorists against a country's computer systems.
During the Kosovo conflict in 1999, NATO computers were blasted with e-mail bombs and hit with denial-of-service attacks by hacktivists protesting the NATO bombings. In addition, businesses, public organizations and academic institutes received highly politicized virus-laden e-mails from a range of Eastern European countries, according to reports. Web defacements were also common. After the Chinese Embassy was accidentally bombed in Belgrade, Chinese hacktivists posted messages such as "We won't stop attacking until the war stops!" on U.S. government Web sites.
Since December 1997, the Electronic Disturbance Theater (EDT) has been conducting Web sit-ins against various sites in support of the Mexican Zapatistas. At a designated time, thousands of protestors point their browsers to a target site using software that floods the target with rapid and repeated download requests. EDT's software has also been used by animal rights groups against organizations said to abuse animals. Electrohippies, another group of hacktivists, conducted Web sit-ins against the WTO when they met in Seattle in late 1999. These sit-ins all require mass participation to have much effect, and thus are more suited to use by activists than by terrorists.
In 2000, a Japanese investigation revealed that the government was using software developed by computer companies affiliated with Aum Shinrikyo, the doomsday sect responsible for the sarin gas attack on the Tokyo subway system in 1995. "The government found 100 types of software programs used by at least 10 Japanese government agencies, including the Defense Ministry, and more than 80 major Japanese companies, including Nippon Telegraph and Telephone." Following the discovery, the Japanese government suspended use of Aum-developed programs out of concern that Aum-related companies may have compromised security by breaching firewalls, gaining access to sensitive systems or information, allowing invasion by outsiders, planting viruses that could be set off later, or planting malicious code that could cripple computer systems and key data systems.
In March 2013, The New York Times reported on a pattern of cyber attacks against U.S. financial institutions believed to be instigated by Iran as well as incidents affecting South Korean financial institutions that originate with the North Korean government.
In August 2013, media companies including The New York Times, Twitter and the Huffington Post lost control of some of their websites after hackers supporting the Syrian government breached the Australian Internet company that manages many major site addresses. The Syrian Electronic Army, a hacker group that has previously attacked media organisations that it considers hostile to the regime of Syrian president Bashar al-Assad, claimed credit for the Twitter and Huffington Post hacks in a series of Twitter messages. Electronic records showed that NYTimes.com, the only site with an hours-long outage, redirected visitors to a server controlled by the Syrian group before it went dark.
Pakistani Cyber Army is the name taken by a group of hackers who are known for their defacement of websites, particularly Indian, Chinese, and Israeli companies and governmental organizations, claiming to represent Pakistani nationalist and Islamic interests. The group is thought to have been active since at least 2008, and maintains an active presence on social media, especially Facebook. Its members have claimed responsibility for the hijacking of websites belonging to Acer, BSNL, India's CBI, Central Bank, and the State Government of Kerala.
British hacker Kane Gamble, sentenced to 2 years in youth detention, posed as CIA chief to access highly sensitive information. He also "cyber-terrorized" high-profile U.S. intelligence officials such as then CIA chief John Brennan or Director of National Intelligence James Clapper. The judge said Gamble engaged in "politically motivated cyber terrorism."
In March 2021, hackers affiliated with Russia were reported to have targeted Lithuanian officials and decision makers. The cyber-espionage group APT29, which is believed to have carried out the attacks, utilized the country's own IT infrastructure against organizations involved in the development of a COVID-19 vaccine.
On March 21, 2021, CNA was hit with a ransomware attack that left the company with no control over its network. CNA Financial Corporation is one of the largest insurance companies based in the United States and offers cyber insurance to its customers. The attack caused the organization to lose access to online services and business operations, and CNA ultimately paid 40 million dollars to regain control of its network. At first, CNA tried to ignore the hackers and solve the problem independently, but after failing to find a way it paid the group within a week. The group responsible for the attack is called Evil Corp. It used a new type of malware called Phoenix CryptoLocker, which encrypted 15,000 devices on the network as well as the devices of employees working remotely while logged into the company's VPN during the attack. The FBI strongly discourages companies from paying ransoms because it encourages more attacks in the future, and the data might not be returned.
On May 7, 2021, the Colonial Pipeline was hit with a cyberattack that disrupted oil distribution. The Colonial Pipeline carries almost half (45%) of the oil consumed on the East Coast of the United States. The attack caused the company to shut down the pipeline, something it had never done before. As a result, many people panic-bought gasoline at gas stations, and the government feared the attack would spread quickly. Ultimately, Colonial paid nearly 5 million dollars in cryptocurrency. Even after Colonial paid, the system did not come back online as rapidly as before. The group accused of the attack is called DarkSide. The money Colonial paid went to DarkSide, but other entities were involved as well. For now, DarkSide has decided to discontinue its operations.
On May 30, 2021, JBS was hit by a ransomware attack that delayed the company's meat production. JBS is the world's largest meat producer. The attack caused the shutdown of all nine of its beef plants in the United States and disrupted poultry and pork production. In addition, labor had to be cut because of the plant closures, and the cost of meat rose as production stopped. Ultimately, JBS paid 11 million dollars worth of cryptocurrency to regain control. A group called REvil was responsible for the attack; REvil is a Russia-based group that is also one of the most prolific ransomware organizations.
In the summer of 2021, crimes committed in Cyprus, Israel and Lithuania were classified by experts as Internet terrorism. Anonymous persons informed law enforcement authorities through the internet about mined business centers and office buildings. The main target was the gambling company Affise. According to Ambassador John R. Bolton, these occurrences are vivid examples of Internet terrorism. Amb. Bolton believes they are consequences of a financial conflict stirred up among the owners of Affise, PlayCash and the "CyberEye-25" group. According to the expert, all three companies gain illicit income associated with criminal activities on the Internet.
In early December 2021 it was reported that at least nine U.S. State Department employees had their phones hacked by an unknown attacker. All nine employees had Apple iPhones. The hack, which took place over several months, was carried out via iMessages carrying attached software that, once sent and without requiring any interaction, installed spyware known as Pegasus. The software was developed and sold by NSO Group, an Israel-based spyware development company.
In December 2021, at least five US defense and tech firms were hacked by a group operating from China. The group took advantage of an exploit in these organizations' software to conduct a campaign that came to light in the following months. The targets of the breaches were passwords, and the group also aimed to intercept private communications. The extent of the damage is unclear, as the breaches are ongoing.
Sabotage
Non-political acts of sabotage have caused financial and other damage. In 2000, disgruntled employee Vitek Boden caused the release of 800,000 litres of untreated sewage into waterways in Maroochy Shire, Australia.
More recently, in May 2007 Estonia was subjected to a mass cyber-attack in the wake of the removal of a Russian World War II war memorial from downtown Tallinn. The attack was a distributed denial-of-service attack in which selected sites were bombarded with traffic to force them offline; nearly all Estonian government ministry networks as well as two major Estonian bank networks were knocked offline; in addition, the political party website of Estonia's Prime Minister Andrus Ansip featured a counterfeit letter of apology from Ansip for removing the memorial statue. Despite speculation that the attack had been coordinated by the Russian government, Estonia's defense minister admitted he had no conclusive evidence linking cyber attacks to Russian authorities. Russia called accusations of its involvement "unfounded", and neither NATO nor European Commission experts were able to find any conclusive proof of official Russian government participation. In January 2008 a man from Estonia was convicted for launching the attacks against the Estonian Reform Party website and fined.
During the Russia-Georgia War, on 5 August 2008, three days before Georgia launched its invasion of South Ossetia, the websites for OSInform News Agency and OSRadio were hacked. The OSinform website at osinform.ru kept its header and logo, but its content was replaced by a feed to the Alania TV website content. Alania TV, a Georgian government-supported television station aimed at audiences in South Ossetia, denied any involvement in the hacking of the websites. Dmitry Medoyev, at the time the South Ossetian envoy to Moscow, claimed that Georgia was attempting to cover up information on events which occurred in the lead-up to the war. One such cyber attack caused the Parliament of Georgia and Georgian Ministry of Foreign Affairs websites to be replaced by images comparing Georgian president Mikheil Saakashvili to Adolf Hitler.
Other attacks involved denials of service to numerous Georgian and Azerbaijani websites, such as when Russian hackers allegedly disabled the servers of the Azerbaijani Day.Az news agency.
In June 2019, Russia conceded that it is "possible" its electrical grid is under cyber-attack by the United States. The New York Times reported that American hackers from the United States Cyber Command planted malware potentially capable of disrupting the Russian electrical grid.
Website defacement and denial of service
In October 2007, the website of Ukrainian president Viktor Yushchenko was attacked by hackers. A radical Russian nationalist youth group, the Eurasian Youth Movement, claimed responsibility.
In 1999 hackers attacked NATO computers, flooding them with email and hitting them with a denial-of-service attack. The hackers were protesting against the NATO bombing of the Chinese embassy in Belgrade. Businesses, public organizations and academic institutions were bombarded with highly politicized emails containing viruses from other European countries.
In December 2018, Twitter warned of "unusual activity" from China and Saudi Arabia. A bug was detected in November that could have revealed the country code of users' phone numbers. Twitter said the bug could have had ties to "state-sponsored actors".
In May 2021, successive waves of DDoS attacks aimed at Belnet, Belgium's public sector ISP, took down multiple government sites in Belgium. 200 sites were affected, leaving public offices, universities, and research centers unable to access the internet fully or partially.
In fiction
The Japanese cyberpunk manga, Ghost in the Shell (as well as its popular movie and TV adaptations) centers around an anti-cyberterrorism and cybercrime unit. In its mid-21st century Japan setting such attacks are made all the more threatening by an even more widespread use of technology including cybernetic enhancements to the human body allowing people themselves to be direct targets of cyberterrorist attacks.
Dan Brown's Digital Fortress.
Amy Eastlake's Private Lies.
In the movie Live Free or Die Hard, John McClane (Bruce Willis) takes on a group of cyberterrorists intent on shutting down the entire computer network of the United States.
The movie Eagle Eye involves a super computer controlling everything electrical and networked to accomplish the goal.
The plots of 24 Day 4 and Day 7 include plans to breach the nation's nuclear plant grid and then to seize control of the entire critical infrastructure protocol.
The Tom Clancy-created series Net Force was about an FBI/military team dedicated to combating cyberterrorists.
Much of the plot of Mega Man Battle Network is centered around cyberterrorism.
In the 2009 Japanese animated film Summer Wars, an artificial intelligence cyber-terrorist attempts to take control over the world's missiles in order to "win" against the main characters that attempted to keep it from manipulating the world's electronic devices.
In the 2012 film Skyfall, part of the James Bond franchise, main villain Raoul Silva (Javier Bardem) is an expert cyberterrorist who is responsible for various cyberterrorist incidents in the past.
Cyberterrorism plays a role in the 2012 video game Call of Duty: Black Ops II, first when main antagonist Raul Menendez cripples the Chinese economy with a cyberattack and frames the United States for it, starting a new Cold War between the two powers. Later, another cyberattack with a computer worm leads to Menendez seizing control of the entire U.S drone fleet. Finally, one of the game's endings leads to another attack similar to the latter, this time crippling the U.S' electrical and water distribution grids. An alternate ending depicts the cyberattack failing after it is stopped by one of the game's characters pivotal to the storyline.
The plot of the 2014 video game Watch Dogs is heavily influenced by cyber-terrorism. Players take control of the game's protagonist, Aiden Pearce, a murder suspect who hacks into the ctOS (Central Operating System), giving him complete control of Chicago's mainframe in order to hunt down his accusers.
The video game Metal Slug 4 focuses on Marco and Fio, joined by newcomers Nadia and Trevor, to battle a terrorist organization known as Amadeus that is threatening the world with a computer virus.
The visual novel Baldr Force has the main character Tooru Souma joining a military organization to fight cyberterrorism to avenge the death of his friend.
The Japanese manga and live action series Bloody Monday is highly influenced by hacking and cracking. The main character, Takagi Fujimaru, is a super-elite hacker who uses his hacking knowledge to fight against his enemies.
In the 2016 movie Death Note: Light Up the New World society is afflicted with cyber-terrorism.
In the television series Mr. Robot, the main plot line follows groups of hackers who engage in cyber terrorism as well as other events.
In "The President is Missing," a novel by Bill Clinton and James Patterson.
In The Fate of the Furious, the eight installment in the Fast and Furious franchise, a cyberterrorist named Cipher acts as the main antagonist blackmailing main character Dominic "Dom" Toretto into going rogue.
In Sneakers a 1992 film that is centered around a hacker named Martin Brice is tasked by the NSA to obtain a device known as the "black box" from the Russian government. The device is capable of breaking the encryption of almost any computer. The antagonist and former friend of Martin, Cosmo, is aiming to use the device to attack and destabalize the world's economy.
See also
2007 cyberattacks on Estonia
2008 cyberattacks during South Ossetia war
Anonymous (group)
Computer crime
Cyberwarfare
FBI Cyber Division
List of cyber warfare forces
Patriotic hacking
United States Computer Emergency Readiness Team (US-CERT)
References
Further reading
Bibi van Ginkel, "The Internet as Hiding Place of Jihadi Extremists" (International Centre for Counter-Terrorism – The Hague, 2012)
U.S. Army Cyber Operations and Cyber Terrorism Handbook 1.02
Rolón, Darío N., (2013) Control, vigilancia y respuesta penal en el ciberespacio, Latinamerican's new security thinking, Clacso.
Record, Jeffery: Bounding the Global War on Terrorism, Strategic Studies Institute, US Army War College, Leavenworth, 2003
Schmid, Alex and Jongmans, Albert et al.: Political Terrorism: A new guide to Action, Authors, Concepts, Data Bases, Theories and Literature, Transaction Books, New Brunswick, 1988
COE DAT Cyber Terrorism Course IV Mar 09
Hennessy, John L. and others: Information Technology for Counterterrorism, National Academies Press, Washington DC, 2003
Hoffman, Bruce: Inside Terrorism, Columbia University Press, New York, 2006
Laqueur, Walter: The New Terrorism: Fanaticism and the Arms of Mass Destruction, Oxford University Press, New York, 1999
Sageman, Marc: Understanding Terror Networks, Penn, Philadelphia, 2004
Wilkinson, Paul: Terrorism Versus Democracy, Routledge, London, 2006
External links
General
CRS Report for Congress – Computer Attack and Cyber Terrorism – 17/10/03
Cyber-Terrorism: Propaganda or Probability?
How terrorists use the Internet ABC Australia interview with Professor Hsinchun Chen
Department of Defense Cyber Crime Center
defcon.org
RedShield Association- Cyber Defense
Cyber Infrastructure Protection – Strategic Studies Institute
strategicstudiesinstitute.army.mil
Cyber-Terrorism and Freedom of Expression: Sultan Shahin Asks United Nations to Redesign Internet Governance New Age Islam
Global response to cyberterrorism and cybercrime: A matrix for international cooperation and vulnerability assessment
News
Cyber Security Task Force Takes 'Whole Government' Approach FBI, October 20, 2014
BBC News – US warns of al-Qaeda cyber threat – 01/12/06
BBC News – Cyber terrorism 'overhyped' – 14/03/03
Calls for anti-cyber terrorism bill resurface in South Korea – NK News
Cyberwarfare
Cybercrime
Terrorism by method
Cyberattacks
https://en.wikipedia.org/wiki/NetHack
NetHack
NetHack is an open source single-player roguelike video game, first released in 1987 and maintained by the NetHack DevTeam. The game is a software fork of the 1982 game Hack, itself inspired by the 1980 game Rogue. The player takes the role of one of several pre-defined character classes and descends through multiple dungeon floors, fighting monsters and collecting treasure, to recover the "Amulet of Yendor" on the lowest floor and then escape. As a traditional roguelike, NetHack features procedurally generated dungeons and treasure, hack and slash combat, tile-based gameplay (using ASCII graphics by default but with optional graphical tilesets), and permadeath, forcing the player to restart anew should their character die. While Rogue, Hack and other earlier roguelikes stayed true to a high fantasy setting, NetHack introduced humorous and anachronistic elements over time, including popular culture references to works such as Discworld and Raiders of the Lost Ark.
It is identified as one of the "major roguelikes" by John Harris. Comparing it with Rogue, Engadget's Justin Olivetti wrote that it took its exploration aspect and "made it far richer with an encyclopedia of objects, a larger vocabulary, a wealth of pop culture mentions, and a puzzler's attitude." In 2000, Salon described it as "one of the finest gaming experiences the computing world has to offer".
Gameplay
Before starting a game, players choose their character's race, role, sex, and alignment, or allow the game to assign the attributes randomly. There are traditional fantasy roles such as knight, wizard, rogue, and priest; but there are also unusual roles, including archaeologist, tourist, and caveman. The player character's role and alignment dictate which deity the character serves in the game, "how other monsters react toward you", as well as character skills and attributes.
After the player character is created, the main objective is introduced. To win the game, the player must retrieve the Amulet of Yendor, found at the lowest level of the dungeon, and offer it to their deity. Successful completion of this task rewards the player with the gift of immortality, and the player is said to "ascend", attaining the status of demigod. Along the path to the amulet, a number of sub-quests must be completed, including one class-specific quest.
The player's character is, unless they opt not to be, accompanied by a pet animal, typically a kitten or little dog, although knights begin with a saddled pony. Pets grow from fighting, and they can be changed by various means. Most of the other monsters may also be tamed using magic or food.
Dungeon levels
NetHack's dungeon spans about fifty primary levels, most of which are procedurally generated when the player character enters them for the first time. A typical level contains a way "up" and "down" to other levels; these may be stairways, ladders, trapdoors, etc. Levels also contain several "rooms" joined by corridors. These rooms are randomly generated rectangles (as opposed to the linear corridors) and may contain features such as altars, shops, fountains, traps, thrones, pools of water, and sinks, based on the randomly generated features of the room. Some specific levels follow one of many fixed designs or contain fixed elements. Later versions of the game added special branches of dungeon levels: optional routes that may feature more challenging monsters but can reward more desirable treasure than the main dungeon. Levels, once generated, remain persistent, in contrast to games that follow Moria-style level generation.
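The general room-and-corridor approach can be illustrated with a minimal sketch; the Python below is purely illustrative and is not NetHack's actual C generator. It places rectangular rooms at random non-overlapping positions on a character grid and joins them with L-shaped corridors, using the same '.', '#' and blank conventions as the game's ASCII display:

import random

WIDTH, HEIGHT, ROOMS = 60, 20, 5

def generate_level(seed=None):
    rng = random.Random(seed)
    grid = [[' '] * WIDTH for _ in range(HEIGHT)]
    rooms = []
    attempts = 0
    while len(rooms) < ROOMS and attempts < 200:
        attempts += 1
        w, h = rng.randint(4, 10), rng.randint(3, 6)
        x, y = rng.randint(1, WIDTH - w - 1), rng.randint(1, HEIGHT - h - 1)
        # Reject rooms that would overlap an existing one (with a 1-cell margin).
        if any(x < rx + rw + 1 and rx < x + w + 1 and
               y < ry + rh + 1 and ry < y + h + 1 for rx, ry, rw, rh in rooms):
            continue
        rooms.append((x, y, w, h))
        for j in range(y, y + h):
            for i in range(x, x + w):
                grid[j][i] = '.'          # room floor
    # Join consecutive rooms with L-shaped corridors drawn as '#'.
    for (x1, y1, w1, h1), (x2, y2, w2, h2) in zip(rooms, rooms[1:]):
        cx1, cy1 = x1 + w1 // 2, y1 + h1 // 2
        cx2, cy2 = x2 + w2 // 2, y2 + h2 // 2
        for i in range(min(cx1, cx2), max(cx1, cx2) + 1):
            if grid[cy1][i] == ' ':
                grid[cy1][i] = '#'
        for j in range(min(cy1, cy2), max(cy1, cy2) + 1):
            if grid[j][cx2] == ' ':
                grid[j][cx2] = '#'
    return '\n'.join(''.join(row) for row in grid)

print(generate_level(seed=42))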
Items and tools
NetHack features a variety of items: weapons (melee or ranged), armor to protect the player, scrolls and spellbooks to read, potions to quaff, wands, rings, amulets, and an assortment of tools, such as keys and lamps.
NetHack's identification of items is almost identical to Rogue's. For example, a newly discovered potion may be referred to as a "pink potion" with no other clues as to its identity. Players can perform a variety of actions and tricks to deduce, or at least narrow down, the identity of the potion. The most obvious is the somewhat risky tactic of simply drinking it. All items of a certain type have the same description. For instance, all "scrolls of enchant weapon" may be labeled "TEMOV", and once one has been identified, all "scrolls of enchant weapon" found later will be labeled unambiguously as such. Starting a new game scrambles the item descriptions again, so the "silver ring" that is a "ring of levitation" in one game might be a "ring of hunger" in another.
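A minimal sketch of this per-game scrambling (illustrative Python, not the game's own code) can make the idea concrete: unidentified appearance labels are shuffled against the true item types once per game, and identifying one item reveals every other item of that type.

import random

TRUE_SCROLLS = ["enchant weapon", "identify", "teleportation"]
APPEARANCES  = ["TEMOV", "ZELGO MER", "PRATYAVAYAH"]   # unidentified labels

class ItemKnowledge:
    def __init__(self, seed=None):
        rng = random.Random(seed)
        shuffled = APPEARANCES[:]
        rng.shuffle(shuffled)                    # new scramble for each game
        self.appearance_of = dict(zip(TRUE_SCROLLS, shuffled))
        self.identified = set()

    def describe(self, true_name):
        # Before identification the player only sees the scrambled label.
        if true_name in self.identified:
            return f"scroll of {true_name}"
        return f"scroll labeled {self.appearance_of[true_name]}"

    def identify(self, true_name):
        # Identifying one scroll reveals all scrolls of the same type.
        self.identified.add(true_name)

game = ItemKnowledge(seed=1)
print(game.describe("enchant weapon"))   # e.g. "scroll labeled TEMOV"
game.identify("enchant weapon")
print(game.describe("enchant weapon"))   # "scroll of enchant weapon"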
Blessings and curses
As in many other roguelike games, all items in NetHack are either "blessed", "uncursed", or "cursed". The majority of items are found uncursed, but the blessed or cursed status of an item is unknown until it is identified or detected through other means.
Generally, a blessed item will be more powerful than an uncursed item, and a cursed item will be less powerful, with the added disadvantage that once it has been equipped by the player, it cannot be easily unequipped. Where an object would bestow an effect upon the character, a curse will generally make the effect harmful, or increase the amount of harm done. However, there are very specific exceptions. For example, drinking a cursed "potion of gain level" will make the character literally rise through the ceiling to the level above, instead of gaining an experience level.
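The effect of beatitude can be sketched as a simple modifier. The Python below is an illustrative model of the "potion of gain level" exception described above, not NetHack's implementation; the blessed bonus is an assumption added only to show how a blessed variant might differ.

import random

def potion_of_gain_level(character, buc):
    """Illustrative model: blessed/uncursed grants an experience level,
    while the cursed potion instead moves the character up one dungeon level."""
    if buc == "cursed":
        character["dungeon_level"] -= 1      # rise through the ceiling to the level above
    else:
        character["experience_level"] += 1
        if buc == "blessed":
            character["hp"] += random.randint(1, 4)   # assumed small extra benefit
    return character

hero = {"experience_level": 5, "dungeon_level": 3, "hp": 30}
print(potion_of_gain_level(hero, "cursed"))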
Character death
As in other roguelike games, NetHack features permadeath: expired characters cannot be revived.
Although NetHack can be completed without any artificial limitations, experienced players can attempt "conducts" for an additional challenge. These are voluntary restrictions on actions taken, such as using no wishes, following a vegetarian or vegan diet, or even killing no monsters. While conducts are generally tracked by the game and are displayed at death or ascension, unofficial conducts are practiced within the community.
When a player dies, the cause of death and the final score are recorded and added to a list in which the player's character is ranked against previous characters. The prompt "Do you want your possessions identified?" is given by default at the end of any game, allowing the player to learn any unknown properties of the items in their inventory at death. The player's attributes (such as resistances, luck, and others), conduct (usually self-imposed challenges, such as playing as an atheist or a vegetarian), and a tally of creatures killed may also be displayed.
The game sporadically saves a level on which a character has died and then integrates that level into a later game. This is done via "bones files", which are saved on the computer hosting the game. A player using a publicly hosted copy of the game can thus encounter the remains and possessions of many other players, although many of these possessions may have become cursed.
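A bones file can be thought of as a serialized snapshot of the death level. The sketch below is hypothetical Python, not the game's actual on-disk format: it records the level, the dead character, and their possessions on the hosting machine, so that a later game can occasionally load that level, with some possessions marked cursed.

import json, os, random

BONES_DIR = "bones"   # hypothetical directory on the hosting machine

def save_bones(level_number, level_map, dead_character, inventory):
    """inventory is assumed to be a list of dicts that each carry a 'buc' key."""
    os.makedirs(BONES_DIR, exist_ok=True)
    bones = {
        "level": level_number,
        "map": level_map,
        "ghost": dead_character["name"],
        # Possessions left behind are often found cursed in the later game.
        "items": [dict(item, buc="cursed" if random.random() < 0.5 else item["buc"])
                  for item in inventory],
    }
    with open(os.path.join(BONES_DIR, f"bones.{level_number}"), "w") as f:
        json.dump(bones, f)

def maybe_load_bones(level_number, chance=0.25):
    """Occasionally integrate a previously saved death level into a new game."""
    path = os.path.join(BONES_DIR, f"bones.{level_number}")
    if os.path.exists(path) and random.random() < chance:
        with open(path) as f:
            return json.load(f)
    return None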
Because of the numerous ways a player character can die, through a combination of their own actions and the reactions of the game's interacting systems, players frequently refer to untimely deaths as "Yet Another Stupid Death" (YASD). Such deaths are considered part of learning to play NetHack, teaching the player to avoid the conditions under which the same death may happen again.
NetHack does allow players to save the game so that one does not have to complete it in one session, but on opening a saved game the previous save file is wiped to enforce permadeath. One option some players use is to make a backup copy of the save file before playing and, should their character die, to restore from the copied version, a practice known as "save scumming". Players can also manipulate the "bones files" in ways not intended by the developers. While these techniques help the player learn the game and get around the limits of permadeath, both are considered forms of cheating.
Culture around spoilers
NetHack is largely based on discovering secrets and tricks during gameplay. It can take years for one to become well-versed in them, and even experienced players routinely discover new ones. A number of NetHack fan sites and discussion forums offer lists of game secrets known as "spoilers".
Interface
NetHack was originally created with only a simple ASCII text-based user interface, although the option to use something more elaborate was added later in its development. Interface elements such as the environment, entities, and objects are represented by arrangements of ASCII or Extended ASCII glyphs, "DEC graphics", or "IBM graphics" mode. In addition to the environment, the interface also displays character and situational information.
A detailed example:
You see here a silver ring.
------------
##....._.....|
|...........# ------
#...........| |....|
--------------- ###------------ |...(|
|..%...........|########## ###-@...|
|...%...........### # ## |....|
+.......<......| ### ### |..!.|
--------------- # # ------
### ###
# #
---.----- ###
|.......| #
|........####
|.......|
|.......|
---------
Hacker the Conjurer St:11 Dx:13 Co:12 In:11 Wi:18 Ch:11 Neutral
Dlvl:3 $:120 HP:39(41) Pw:36(36) AC:6 Exp:5 T:1073
The player (the '@' sign, a wizard in this case) has entered the level via the stairs (the '<' sign) and killed a few monsters, leaving their corpses (the '%' signs) behind. Exploring, the player has uncovered three rooms joined by corridors (the '#' signs): one with an altar (the '_' sign), another empty, and the final one (that the player is currently in) containing a potion (the '!' sign) and chest (the '(' sign). The player has just moved onto a square containing a silver ring. Parts of the level are still unexplored (probably accessible through the door to the west (the '+' sign)) and the player has yet to find the downstairs (a '>' sign) to the next level.
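The glyph conventions described in this example can be summarized as a small lookup table; the Python sketch below covers only the symbols explained above (the full game distinguishes many more):

# Mapping of a few default ASCII glyphs to what they represent on the map.
GLYPHS = {
    "@": "the player character (here, a wizard)",
    "<": "staircase up",
    ">": "staircase down",
    "#": "corridor",
    "+": "door",
    "_": "altar",
    "%": "corpse or other food item",
    "!": "potion",
    "(": "tool or container, such as a chest",
    ".": "floor of a room",
    "-": "horizontal wall",
    "|": "vertical wall",
}

def describe(glyph):
    return GLYPHS.get(glyph, "unknown symbol")

print(describe("@"))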
Apart from the original termcap interface shown above, there are other interfaces that replace standard screen representations with two-dimensional images, or tiles, collectively known as "tiles mode". Graphic interfaces of this kind have been successfully implemented on the Amiga, the X Window System, the Microsoft Windows GUI, the Qt toolkit, and the GNOME libraries.
Enhanced graphical options also exist, such as the isometric perspective of Falcon's Eye and Vulture's Eye, or the three-dimensional rendering that noegnud offers. Vulture's Eye is a fork of the now defunct Falcon's Eye project. Vulture's Eye adds additional graphics, sounds, bug fixes and performance enhancements and is under active development in an open collaborative environment.
History and development
NetHack is a software derivative of Hack, which itself was inspired by Rogue. Hack was created by students Jay Fenlason, Kenny Woodland, Mike Thome, and Jonathan Payne at Lincoln-Sudbury Regional High School as part of a computer class, after seeing and playing Rogue at the University of California, Berkeley computer labs. The group had tried to get the source code of Rogue from Glenn Wichman and Michael Toy to build upon, but Wichman and Toy had refused, forcing the students to build the dungeon-creation routines on their own. As such, the game was named Hack in part for the hack-and-slash gameplay and in part because the code to generate the dungeons was considered a programming hack. After their classes ended, the students' work on the program also ended, though they had a working game. Fenlason provided the source code to a local USENIX conference, and eventually it was uploaded to USENET newsgroups. The code drew the attention of many players who started working to modify and improve the game as well as port it to other computer systems. Hack did not have any formal maintainer, and while one person was generally recognized to hold the main code to the current version of Hack, many software forks emerged from the unorganized development of the game.
Eventually, Mike Stephenson took on the role as maintainer of the Hack source code. At this point, he decided to create a new fork of the game, bringing in novel ideas from Izchak Miller, a philosophy professor at University of Pennsylvania, and Janet Walz, another computer hacker. They called themselves the DevTeam and renamed their branch NetHack since their collaboration work was done over the Internet. They expanded the bestiary and other objects in the game, and drew from other sources outside of the high fantasy setting, such as from Discworld with the introduction of the tourist character class. Knowing of the multiple forks of Hack that existed, the DevTeam established a principle that while the game was open source and anyone could create a fork as a new project, only a few select members in the DevTeam could make modifications to the main source repository of the game, so that players could be assured that the DevTeam's release was the legitimate version of NetHack.
Release history
The DevTeam's first release of NetHack was on 28 July 1987.
The core DevTeam had expanded with the release of NetHack 3.0 in July 1989. By that point, they had established a tight-lipped culture, revealing little, if anything, between releases. Owing to the ever-increasing depth and complexity found in each release, the development team enjoys a near-mythical status among fans. This perceived omniscience is captured in the initialism TDTTOE, "The DevTeam Thinks of Everything", in that many of the possible emergent gameplay elements that could occur due to the behavior of the complex game systems had already been programmed in by the DevTeam. Since version 3.0, the DevTeam has typically kept to minor bug fix updates, represented by a change in the third version number (e.g. v3.0.1 over v3.0.0), and only releases major updates (v3.1.0 over v3.0.0) when significant new features are added to the game, including support for new platforms. Many of those from the community that helped with the ports to other systems were subsequently invited to be part of the DevTeam as the team's needs grew, with Stephenson remaining the key member currently.
Updates to the game were generally regular from around 1987 through 2003, with the DevTeam releasing v3.4.3 in December 2003. Subsequent updates from the DevTeam included new tilesets and compatibility with variants of Mac OS, but no major updates to the game had been made. In the absence of new releases from the developers, several community-made updates to the code and variants developed by fans emerged.
On 7 December 2015, version 3.6.0 was released, the first major release in over a decade. While the patch did not add major new gameplay features, the update was designed to prepare the game for expansion in the future, with the DevTeam's patch notes stating "This release consists of a series of foundational changes in the team, underlying infrastructure and changes to the approach to game development". Stephenson said that despite the number of roguelike titles that had emerged since the v3.4.3 release, they saw that NetHack was still being talked about online in part due to its high degree of portability, and decided to continue its development. According to DevTeam member Paul Winner, they looked to evaluate what community features had been introduced in the prior decade to improve the game while maintaining the necessary balance. The update came shortly after the death of Terry Pratchett, whose Discworld had been influential on the game, and the new update included a tribute to him. With the v3.6.0 release, NetHack remains "one of the oldest games still being developed".
A public read-only mirror of the NetHack git repository was made available on 10 February 2016. Since v3.6.0, the DevTeam has continued to push updates to the title, the latest being v3.6.6 on 8 March 2020. Version 3.7.0 is currently in development.
The official source release supports the following systems: Windows, Linux, macOS, Windows CE, OS/2, Unix (BSD, System V, Solaris, HP-UX), BeOS, and VMS.
Licensing, ports, and derivative ports
NetHack is released under the NetHack General Public License, which was written in 1989 by Mike Stephenson and patterned after the license of GNU Bison (written by Richard Stallman in 1988). Like the Bison license, and Stallman's later GNU General Public License, the NetHack license was written to allow the free sharing and modification of the source code under its protection. At the same time, the license explicitly states that the source code is not covered by any warranty, thus protecting the original authors from litigation. The NetHack General Public License is a copyleft software license certified as an open-source license by the Open Source Initiative.
The NetHack General Public License allows anyone to port the game to a platform not supported by the official DevTeam, provided that they use the same license. Over the years this licensing has led to a large number of ports and internationalized versions in German, Japanese, and Spanish. The license also allows for software forks as long as they are distributed under the same license, except that the creator of a derivative work is allowed to offer warranty protection on the new work. The derivative work is required to indicate the modifications made and the dates of changes. In addition, the source code of the derivative work must be made available, free of charge except for nominal distribution fees. This has also allowed source code forks of NetHack including Slash'EM, UnNetHack, and dNethack.
Online support
Bugs, humorous messages, stories, experiences, and ideas for the next version are discussed on the Usenet newsgroup rec.games.roguelike.nethack.
A public server at nethack.alt.org, commonly known as "NAO", gives players access to NetHack through a Telnet or SSH interface. A browser-based client is also available on the same site. Ebonhack connects to NAO with a graphical tiles-based interface.
The annual /dev/null NetHack Tournament ran every November from 1999 to 2016. The November NetHack Tournament, initially conceived as a one-time tribute to the /dev/null tournament, has taken place each year since 2018. The Junethack Cross-Variant Summer Tournament has been held annually since 2011.
NetHack Learning Environment
The Facebook artificial intelligence (AI) research team, along with researchers at the University of Oxford, New York University, Imperial College London, and University College London, developed an open-source platform called the NetHack Learning Environment, designed to teach AI agents to play NetHack. The base environment is able to maneuver the agent and fight its way through dungeons, but the team seeks community help to build AI that can handle the complexities of NetHack's interconnected systems, using implicit knowledge that comes from player-made resources; the platform gives programmers a means to hook into the environment with additional resources. Facebook's research led the company to pose NetHack as a grand challenge in AI in June 2021, in part due to the game's permadeath and the inability to experiment with the environment without provoking a reaction from it. Facebook stated it would announce a competition in this area starting at the 2021 Conference on Neural Information Processing Systems.
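The platform exposes the game through a reinforcement-learning interface in the style of OpenAI Gym. The sketch below shows the typical agent loop, assuming the nle Python package is installed and registers an environment named "NetHackScore-v0"; the package and environment names are assumptions that should be checked against the project's documentation.

    import gym   # the classic Gym API (pre-0.26 reset/step signature)
    import nle   # assumed package name; importing it registers the NetHack environments

    env = gym.make("NetHackScore-v0")   # assumed environment id
    obs = env.reset()
    done = False
    total_reward = 0.0
    while not done:
        # A random agent: replace the sampled action with a learned policy for real experiments.
        obs, reward, done, info = env.step(env.action_space.sample())
        total_reward += reward
    env.close()
    print("episode return:", total_reward)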
See also
List of open-source games
References
External links
A Guide to the Mazes of Menace (Guidebook for NetHack)
Download page for Official Binary and Source Releases
Info page for NetHack's public git repository
The NetHack Wiki
NAO website
/dev/null NetHack Tournament
NetHack at SourceForge.net
Hall of Fame – NetHack at GameSpy
1987 video games
Acorn Archimedes games
Amiga games
Android (operating system) games
Atari ST games
Cross-platform software
Fantasy video games
Free and open-source Android software
Games with concealed rules
GP2X games
Linux games
Classic Mac OS games
Open-source video games
MacOS games
Roguelike video games
Role-playing video games
Video games with textual graphics
Windows games
Video games using procedural generation
VisiCorp
VisiCorp was an early personal computer software publisher. Its most famous products were Microchess, Visi On and VisiCalc.
It was founded in 1976 by Dan Fylstra and Peter R. Jennings as Personal Software, and first published Jennings' Microchess program for the MOS Technology KIM-1 computer, and later for the Commodore PET, Apple II, TRS-80, and Atari 8-bit computers. In 1979 it released VisiCalc, which would prove so successful that in 1982 the company was renamed "VisiCorp".
VisiCalc was the first electronic spreadsheet for personal computers, developed by Software Arts and published by VisiCorp.
Visi On was the first GUI for the IBM PC.
Bill Gates came to see Visi On at a trade show, and this appears to be what inspired him to create a windowed GUI for Microsoft. VisiCorp was larger than Microsoft at the time, and the two companies entered negotiations to merge, but could not agree on who would sit on the board of directors. When Microsoft Windows was released, it included a wide range of drivers, so it could run on many different PCs, while Visi On cost more and had stricter system requirements. Lotus released Lotus 1-2-3 in 1983, and Microsoft eventually released its own spreadsheet, Microsoft Excel.
Early alumni of this company included Ed Esber who would later run Ashton-Tate, Bill Coleman who would found BEA Systems, Mitch Kapor founder of Lotus Software and the Electronic Frontier Foundation, Rich Melmon who would co-found Electronic Arts, Bruce Wallace author of Asteroids in Space, and Brad Templeton who would found early dot-com company ClariNet and was the director of the Electronic Frontier Foundation from 2000 to 2010.
VisiCorp agreed in 1979 to pay 36-50% of VisiCalc revenue to Software Arts, compared to typical software royalties of 8-12%. VisiCalc accounted for 70% of VisiCorp's revenue in 1982 and 58% in 1983. By 1984 InfoWorld stated that although VisiCorp's $43 million in 1983 sales made it the world's fifth-largest microcomputer-software company, it was "a company under siege" with "rapidly declining" VisiCalc sales and mediocre Visi On sales. The magazine wrote that "VisiCorp's auspicious climb and subsequent backslide will no doubt become a How Not To primer for software companies of the future, much like Osborne Computer's story has become the How Not To for the hardware industry."
VisiCorp was sold to Paladin Software after a legal feud between Software Arts and VisiCorp.
References
Defunct computer companies based in Massachusetts
Software companies disestablished in 1984
Software companies established in 1976
Tellico (software)
Tellico is a KDE application for organizing various collections. It provides default templates for collections such as books, bibliographies, videos, music, video games, coins, stamps, trading cards, comic books, and wines. For custom collections, the data model can be freely modified. Data can be entered manually or by downloading it from various Internet sources. Even though Tellico also provides a default template for data files, it has no jukebox- or media-center-like features.
Released under the GNU General Public License, Tellico is free software.
Tellico stores its collection files in XML format instead of SQL databases, which makes it easy for users to export or visualize their data.
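Because the data is plain XML, it can be inspected with ordinary tools. The snippet below is a minimal sketch that lists entry titles from an unpacked Tellico data file; the file name and the element names ("entry", "title") are assumptions about a typical collection and should be checked against an actual file.

    import xml.etree.ElementTree as ET

    # A .tc file is a ZIP archive; this assumes the inner XML has been extracted
    # to "tellico.xml" (hypothetical name) in the current directory.
    root = ET.parse("tellico.xml").getroot()
    for element in root.iter():
        if element.tag.endswith("entry"):                      # tolerate XML namespaces
            title = next((child.text for child in element
                          if child.tag.endswith("title")), None)
            print(title)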
See also
Evergreen (software)
Koha (software)
PMB (software)
GCstar
References
External links
Software that uses Qt
KDE software
IP fragmentation
IP fragmentation is an Internet Protocol (IP) process that breaks packets into smaller pieces (fragments), so that the resulting pieces can pass through a link with a smaller maximum transmission unit (MTU) than the original packet size. The fragments are reassembled by the receiving host.
The details of the fragmentation mechanism, as well as the overall architectural approach to fragmentation, are different between IPv4 and IPv6.
Process
The IPv4 specification describes the procedure for IP fragmentation, and for the transmission and reassembly of IP packets. RFC 815 describes a simplified reassembly algorithm. The Identification field, together with the foreign and local internet addresses and the protocol ID, and the Fragment offset field, along with the Don't Fragment and More Fragments flags in the IP header, are used for fragmentation and reassembly of IP packets.
If a receiving host receives a fragmented IP packet, it has to reassemble the packet and pass it to the higher protocol layer. Reassembly is intended to happen in the receiving host, but in practice it may be done by an intermediate router; for example, network address translation (NAT) may need to reassemble fragments in order to translate data streams.
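Fragment offsets are expressed in eight-byte units, so every fragment except the last must carry a payload that is a multiple of eight bytes. The following sketch illustrates the arithmetic for IPv4 with a minimal 20-byte header; it is a simplified model of the calculation, not a packet-crafting implementation.

    def fragment(total_length, mtu, header_len=20):
        """Split an IPv4 packet's payload to fit the MTU.
        Returns (payload_bytes, offset_in_8_byte_units, more_fragments) tuples."""
        payload = total_length - header_len
        # All fragments except the last carry a multiple of 8 payload bytes.
        max_payload = (mtu - header_len) // 8 * 8
        fragments, offset = [], 0
        while payload > 0:
            chunk = min(max_payload, payload)
            payload -= chunk
            fragments.append((chunk, offset // 8, payload > 0))
            offset += chunk
        return fragments

    # A 4020-byte packet sent over a 1500-byte MTU link becomes three fragments:
    # payloads of 1480, 1480 and 1040 bytes at offsets 0, 185 and 370.
    print(fragment(4020, 1500))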
IPv4 and IPv6 differences
Under IPv4, a router that receives a network packet larger than the next hop's MTU has two options: drop the packet if the Don't Fragment (DF) flag bit is set in the packet's header and send an Internet Control Message Protocol (ICMP) message which indicates the condition Fragmentation Needed (Type 3, Code 4), or fragment the packet and send it over the link with a smaller MTU. Although originators may produce fragmented packets, IPv6 routers do not have the option to fragment further. Instead, network equipment is required to deliver any IPv6 packets or packet fragments smaller than or equal to 1280 bytes and IPv6 hosts are required to determine the optimal MTU through Path MTU Discovery before sending packets.
Though the header formats are different for IPv4 and IPv6, analogous fields are used for fragmentation, so the same algorithm can be reused for IPv4 and IPv6 fragmentation and reassembly.
In IPv4, hosts must make a best-effort attempt to reassemble fragmented IP packets with a total reassembled size of up to 576 bytes. They may also attempt to reassemble fragmented IP packets larger than 576 bytes, but they are also permitted to silently discard such larger packets. Applications are recommended to refrain from sending packets larger than 576 bytes unless they have prior knowledge that the remote host is capable of accepting or reassembling them.
In IPv6, hosts must make a best-effort attempt to reassemble fragmented packets with a total reassembled size of up to 1500 bytes, larger than IPv6's minimum MTU of 1280 bytes. Fragmented packets with a total reassembled size larger than 1500 bytes may optionally be silently discarded. Applications relying upon IPv6 fragmentation to overcome a path MTU limitation must explicitly fragment the packet at the point of origin; however, they should not attempt to send fragmented packets with a total size larger than 1500 bytes unless they know in advance that the remote host is capable of reassembly.
Impact on network forwarding
When a network has multiple parallel paths, technologies like LAG and CEF split traffic across the paths according to a hash algorithm. One goal of the algorithm is to ensure all packets of the same flow are sent out the same path to minimize unnecessary packet reordering.
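As a schematic illustration of per-flow hashing, the sketch below maps a flow's 5-tuple to one of several equal-cost links. Production routers use fast hardware hash functions rather than a cryptographic hash, so this only demonstrates the idea that every packet of one flow lands on the same link.

    import hashlib

    def pick_link(src_ip, dst_ip, protocol, src_port, dst_port, n_links):
        """Deterministically map a flow's 5-tuple onto one of n_links paths."""
        key = f"{src_ip}|{dst_ip}|{protocol}|{src_port}|{dst_port}".encode()
        digest = hashlib.sha256(key).digest()
        return int.from_bytes(digest[:4], "big") % n_links

    # Every packet of this TCP flow (protocol 6) hashes to the same link index.
    print(pick_link("192.0.2.1", "198.51.100.7", 6, 49152, 443, n_links=4))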
IP fragmentation can cause excessive retransmissions when fragments encounter packet loss and reliable protocols such as TCP must retransmit all of the fragments in order to recover from the loss of a single fragment. Thus, senders typically use two approaches to decide the size of IP packets to send over the network. The first is for the sending host to send an IP packet of size equal to the MTU of the first hop of the source-destination pair. The second is to run the Path MTU Discovery algorithm to determine the path MTU between two IP hosts so that IP fragmentation can be avoided.
IP fragmentation is considered fragile and often undesired due to its security impact.
See also
IP fragmentation attack
Protocol data unit and Service data unit
References
External links
What is packet fragmentation?
The Never-Ending Story of IP Fragmentation
Internet Protocol
SD card
Secure Digital, officially abbreviated as SD, is a proprietary non-volatile memory card format developed by the SD Association (SDA) for use in portable devices.
The standard was introduced in August 1999 by joint efforts between SanDisk, Panasonic (Matsushita) and Toshiba as an improvement over MultiMediaCards (MMCs), and has become the industry standard. The three companies formed SD-3C, LLC, a company that licenses and enforces intellectual property rights associated with SD memory cards and SD host and ancillary products.
The companies also formed the SD Association (SDA), a non-profit organization, in January 2000 to promote and create SD Card standards. SDA today has about 1,000 member companies. The SDA uses several trademarked logos owned and licensed by SD-3C to enforce compliance with its specifications and assure users of compatibility.
History
1999–2002: Creation
In 1999, SanDisk, Panasonic (Matsushita), and Toshiba agreed to develop and market the Secure Digital (SD) Memory Card. The card was derived from the MultiMediaCard (MMC) and provided digital rights management based on the Secure Digital Music Initiative (SDMI) standard and for the time, a high memory density.
It was designed to compete with the Memory Stick, a DRM product that Sony had released the year before. Developers predicted that DRM would induce wide use by music suppliers concerned about piracy.
The trademarked "SD" logo was originally developed for the Super Density Disc, which was the unsuccessful Toshiba entry in the DVD format war. For this reason the D within the logo resembles an optical disc.
At the 2000 Consumer Electronics Show (CES) trade show, the three companies announced the creation of the SD Association (SDA) to promote SD cards. The SD Association, headquartered in San Ramon, California, United States, started with about 30 companies and today consists of about 1,000 product manufacturers that make interoperable memory cards and devices. Early samples of the SD card became available in the first quarter of 2000, with production quantities of 32 and 64 MB cards available three months later.
2003: Mini cards
The miniSD form was introduced at March 2003 CeBIT by SanDisk Corporation which announced and demonstrated it. The SDA adopted the miniSD card in 2003 as a small form factor extension to the SD card standard. While the new cards were designed especially for mobile phones, they are usually packaged with a miniSD adapter that provides compatibility with a standard SD memory card slot.
2004–2005: Micro cards
The microSD removable miniaturized Secure Digital flash memory cards were originally named T-Flash or TF, abbreviations of TransFlash. TransFlash and microSD cards are functionally identical allowing either to operate in devices made for the other. microSD (and TransFlash) cards are electrically compatible with larger SD cards and can be used in devices that accept SD cards with the help of a passive adapter, which contains no electronic components, only metal traces connecting the two sets of contacts. Unlike the larger SD cards, microSD does not offer a mechanical write protect switch, thus an operating-system-independent way of write protecting them does not exist in the general case. SanDisk conceived microSD when its Chief Technology Officer (CTO) and the CTO of Motorola concluded that current memory cards were too large for mobile phones.
The card was originally called T-Flash, but just before product launch, T-Mobile sent a cease-and-desist letter to SanDisk claiming that T-Mobile owned the trademark on T-(anything), and the name was changed to TransFlash.
At CTIA Wireless 2005, the SDA announced the small microSD form factor, along with SDHC (Secure Digital High Capacity), a format for cards in excess of 2 GB with a minimum sustained read and write speed of 17.6 Mbit/s. SanDisk induced the SDA to administer the microSD standard. The SDA approved the final microSD specification on July 13, 2005. Initially, microSD cards were available in capacities of 32, 64, and 128 MB.
The Motorola E398 was the first mobile phone to contain a TransFlash (later microSD) card. A few years later, its competitors began using microSD cards.
2006–2008: SDHC and SDIO
The SDHC format, announced in January 2006, brought improvements such as 32 GB storage capacity and mandatory support for FAT32 file system. In April, the SDA released a detailed specification for the non-security related parts of the SD memory card standard and for the Secure Digital Input Output (SDIO) cards and the standard SD host controller.
In September 2006, SanDisk announced the 4 GB miniSDHC. Like the SD and SDHC, the miniSDHC card has the same form factor as the older miniSD card but the HC card requires HC support built into the host device. Devices that support miniSDHC work with miniSD and miniSDHC, but devices without specific support for miniSDHC work only with the older miniSD card. Since 2008, miniSD cards are no longer produced, due to market domination of the even smaller microSD cards.
2009–2019: SDXC
The storage density of memory cards increased significantly throughout the 2010s, allowing the earliest devices to support the SDXC standard, such as the Samsung Galaxy S III and Samsung Galaxy Note II mobile phones, to expand their available storage to several hundred gigabytes.
2009
In January 2009, the SDA announced the SDXC family, which supports cards up to 2 TB and speeds up to 300 MB/s. SDXC cards are formatted with the exFAT filesystem by default. SDXC was announced at Consumer Electronics Show (CES) 2009 (January 7–10). At the same show, SanDisk and Sony also announced a comparable Memory Stick XC variant with the same 2 TB maximum as SDXC, and Panasonic announced plans to produce 64 GB SDXC cards. On March 6, Pretec introduced the first SDXC card, a 32 GB card with a read/write speed of 400 Mbit/s. But only early in 2010 did compatible host devices come onto the market, including Sony's Handycam HDR-CX55V camcorder, Canon's EOS 550D (also known as Rebel T2i) Digital SLR camera, a USB card reader from Panasonic, and an integrated SDXC card reader from JMicron. The earliest laptops to integrate SDXC card readers relied on a USB 2.0 bus, which does not have the bandwidth to support SDXC at full speed.
2010
In early 2010, commercial SDXC cards appeared from Toshiba (64 GB), Panasonic (64 GB and 48 GB), and SanDisk (64 GB).
2011
In early 2011, Centon Electronics, Inc. (64 GB and 128 GB) and Lexar (128 GB) began shipping SDXC cards rated at Speed Class 10. Pretec offered cards from 8 GB to 128 GB rated at Speed Class 16. In September 2011, SanDisk released a 64 GB microSDXC card. Kingmax released a comparable product in 2011.
2012
In April 2012, Panasonic introduced MicroP2 card format for professional video applications. The cards are essentially full-size SDHC or SDXC UHS-II cards, rated at UHS Speed Class U1. An adapter allows MicroP2 cards to work in current P2 card equipment.
2013
Panasonic MicroP2 cards shipped in March 2013 and were the first UHS-II compliant products on the market; the initial offering included a 32 GB SDHC card and a 64 GB SDXC card. Later that year, Lexar released the first 256 GB SDXC card, based on 20 nm NAND flash technology.
2014
In February 2014, SanDisk introduced the first 128 GB microSDXC card, which was followed by a 200 GB microSDXC card in March 2015. September 2014 saw SanDisk announce the first 512 GB SDXC card.
2016
Samsung announced the world's first EVO Plus 256 GB microSDXC card in May 2016, and in September 2016 Western Digital (SanDisk) announced that a prototype of the first 1 TB SDXC card would be demonstrated at Photokina.
2017
In August 2017, SanDisk launched a 400 GB microSDXC card.
2018
In January 2018, Integral Memory unveiled its 512 GB microSDXC card. In May 2018, PNY launched a 512 GB microSDXC card. In June 2018 Kingston announced its Canvas series of MicroSD cards which were capable of capacities up to 512 GB, in three variations, Select, Go!, and React.
2019
In February 2019, Micron and SanDisk unveiled their microSDXC cards of 1 TB capacity.
2019–present: SDUC
The Secure Digital Ultra Capacity (SDUC) format supports cards up to 128 TB and offers speeds up to 985 MB/s.
Capacity
Secure Digital includes five card families available in three sizes. The five families are the original Standard-Capacity (SDSC), the High-Capacity (SDHC), the eXtended-Capacity (SDXC), the Ultra-Capacity (SDUC) and the SDIO, which combines input/output functions with data storage. The three form factors are the original size, the mini size, and the micro size. Electrically passive adapters allow a smaller card to fit and function in a device built for a larger card. The SD card's small footprint is an ideal storage medium for smaller, thinner, and more portable electronic devices.
SD (SDSC)
The second-generation Secure Digital (SDSC or Secure Digital Standard Capacity) card was developed to improve on the MultiMediaCard (MMC) standard, which continued to evolve, but in a different direction. Secure Digital changed the MMC design in several ways:
Asymmetrical shape of the sides of the SD card prevent inserting it upside down (whereas an MMC goes in most of the way but makes no contact if inverted).
Most SD cards are thicker than MMCs. The SD specification defines a card called Thin SD with a thickness of 1.4 mm, but it occurs only rarely, as the SDA went on to define even smaller form factors.
The card's electrical contacts are recessed beneath the surface of the card, protecting them from contact with a user's fingers.
The SD specification envisioned capacities and transfer rates exceeding those of MMC, and both of these functionalities have grown over time. For a comparison table, see below.
While MMC uses a single pin for data transfers, the SD card added a four-wire bus mode for higher data rates.
The SD card added Content Protection for Recordable Media (CPRM) security circuitry for digital rights management (DRM) content-protection.
Addition of a write-protect notch
Full-size SD cards do not fit into the slimmer MMC slots, and other issues also affect the ability to use one format in a host device designed for the other.
SDHC
The Secure Digital High Capacity (SDHC) format, announced in January 2006 and defined in version 2.0 of the SD specification, supports cards with capacities up to 32 GB. The SDHC trademark is licensed to ensure compatibility.
SDHC cards are physically and electrically identical to standard-capacity SD cards (SDSC). The major compatibility issues between SDHC and SDSC cards are the redefinition of the Card-Specific Data (CSD) register in version 2.0 (see below), and the fact that SDHC cards are shipped preformatted with the FAT32 file system.
Version 2.0 also introduces a High-speed bus mode for both SDSC and SDHC cards, which doubles the original Standard Speed clock to produce 25 MB/s.
SDHC host devices are required to accept older SD cards. However, older host devices do not recognize SDHC or SDXC memory cards, although some devices can do so through a firmware upgrade. Older Windows operating systems released before Windows 7 require patches or service packs to support access to SDHC cards.
SDXC
The Secure Digital eXtended Capacity (SDXC) format, announced in January 2009 and defined in version 3.01 of the SD specification, supports cards up to 2 TB, compared to a limit of 32 GB for SDHC cards in the SD 2.0 specification. SDXC adopts Microsoft's exFAT file system as a mandatory feature.
Version 3.01 also introduced the Ultra High Speed (UHS) bus for both SDHC and SDXC cards, with interface speeds from 50 MB/s to 104 MB/s for the four-bit UHS-I bus. (These figures have since been exceeded: a formerly proprietary SanDisk extension reaches 170 MB/s read, Lexar's 1066x line runs at 160 MB/s read and 120 MB/s write over UHS-I, and Kingston's Canvas Go! Plus also reaches 170 MB/s.)
Version 4.0, introduced in June 2011, allows speeds of 156 MB/s to 312 MB/s over the four-lane (two differential lanes) UHS-II bus, which requires an additional row of physical pins.
Version 5.0 was announced in February 2016 at CP+ 2016, and added "Video Speed Class" ratings for UHS cards to handle higher-resolution video formats like 8K. The new ratings define minimum write speeds of up to 90 MB/s.
SDUC
The Secure Digital Ultra Capacity (SDUC) format, described in the SD 7.0 specification, and announced in June 2018, supports cards up to 128 TB and offers speeds up to 985 MB/s, regardless of form factor, either micro or full size, or interface type including UHS-I, UHS-II, UHS-III or SD Express. The SD Express interface can also be used with SDHC and SDXC cards.
exFAT filesystem
SDXC and SDUC cards are normally formatted using the exFAT file system, thereby limiting their use to operating systems that support exFAT. As a result, exFAT-formatted SDXC cards are not a universally readable exchange medium. However, SD cards can be reformatted to any file system required.
Windows Vista (SP1) and later and OS X (10.6.5 and later) have native support for exFAT. (Windows XP and Server 2003 can support exFAT via an optional update from Microsoft.)
Most BSD and Linux distributions did not, for legal reasons, though Microsoft later open-sourced the specification and an exFAT driver was included in Linux kernel 5.4. Users of older kernels or BSD can manually install third-party implementations of exFAT (as a FUSE module) in order to mount exFAT-formatted volumes. However, SDXC cards can be reformatted to use any file system (such as ext4, UFS, or VFAT), alleviating the restrictions associated with exFAT availability.
Except for the change of file system, SDXC cards are mostly backward compatible with SDHC readers, and many SDHC host devices can use SDXC cards if they are first reformatted to the FAT32 file system.
Nevertheless, in order to be fully compliant with the SDXC card specification, some SDXC-capable host devices are firmware-programmed to expect exFAT on cards larger than 32 GB. Consequently, they may not accept SDXC cards reformatted as FAT32, even if the device supports FAT32 on smaller cards (for SDHC compatibility). Therefore, even if a file system is supported in general, it is not always possible to use alternative file systems on SDXC cards at all depending on how strictly the SDXC card specification has been implemented in the host device. This bears a risk of accidental loss of data, as a host device may treat a card with an unrecognized file system as blank or damaged and reformat the card.
The SD Association provides a formatting utility for Windows and Mac OS X that checks and formats SD, SDHC, SDXC, and SDUC cards.
Comparison
Speed
SD card speed is customarily rated by its sequential read or write speed. The sequential performance aspect is the most relevant for storing and retrieving large files (relative to block sizes internal to the flash memory), such as images and multimedia. Small data (such as file names, sizes and timestamps) falls under the much lower speed limit of random access, which can be the limiting factor in some use cases.
With early SD cards, a few card manufacturers specified the speed as a "times" ("×") rating, which compared the average speed of reading data to that of the original CD-ROM drive. This was superseded by the Speed Class Rating, which guarantees a minimum rate at which data can be written to the card.
The newer families of SD card improve card speed by increasing the bus rate (the frequency of the clock signal that strobes information into and out of the card). Whatever the bus rate, the card can signal to the host that it is "busy" until a read or a write operation is complete. Compliance with a higher speed rating is a guarantee that the card limits its use of the "busy" indication.
Bus
Default Speed
In Default Speed mode, SD cards read and write at up to 12.5 MB/s.
High Speed
High Speed mode (25 MB/s) was introduced in version 1.10 of the specification to support digital cameras.
Ultra High Speed (UHS)
The Ultra High Speed (UHS) bus is available on some SDHC and SDXC cards. The following ultra-high speeds are specified:
UHS-I
Specified in SD version 3.01. Supports a clock frequency of 100 MHz (a quadrupling of the original "Default Speed"), which in four-bit transfer mode can transfer 50 MB/s (SDR50). UHS-I cards declared as UHS104 (SDR104) also support a clock frequency of 208 MHz, which can transfer 104 MB/s. Double data rate operation at 50 MHz (DDR50) is also specified in version 3.01, and is mandatory for microSDHC and microSDXC cards labeled as UHS-I. In this mode, four bits are transferred when the clock signal rises and another four bits when it falls, transferring an entire byte on each full clock cycle, so 50 MB/s can be achieved with a 50 MHz clock.
There is a proprietary UHS-I extension, primarily from SanDisk, that increases the transfer speed further to 170 MB/s, called DDR208 (or DDR200). Unlike UHS-II, it does not use additional pins; it achieves this by using the 208 MHz frequency of the standard SDR104 mode with DDR transfers. This extension has since been used by Lexar for their 1066x series (160 MB/s), the Kingston Canvas Go Plus (170 MB/s), and the MyMemory PRO SD card (180 MB/s).
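The quoted figures follow directly from the clock rate, the bus width, and whether data is clocked on one or both clock edges. The small calculation below reproduces them; it illustrates the arithmetic only, not any particular card's real-world throughput.

    def bus_mb_s(clock_mhz, bus_bits, ddr=False):
        """Peak SD bus transfer rate: clock times bus width, doubled for DDR."""
        bits_per_cycle = bus_bits * (2 if ddr else 1)
        return clock_mhz * 1_000_000 * bits_per_cycle / 8 / 1_000_000  # MB/s

    print(bus_mb_s(100, 4))            # SDR50:  100 MHz, 4 bits      -> 50.0 MB/s
    print(bus_mb_s(208, 4))            # SDR104: 208 MHz, 4 bits      -> 104.0 MB/s
    print(bus_mb_s(50, 4, ddr=True))   # DDR50:  50 MHz, both edges   -> 50.0 MB/s
    print(bus_mb_s(208, 4, ddr=True))  # DDR208: theoretical ceiling  -> 208.0 MB/s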
UHS-II
Specified in version 4.0, further raises the data transfer rate to a theoretical maximum of 156 MB/s (full-duplex) or 312 MB/s (half-duplex) using an additional row of pins (a total of 17 pins for full-size and 16 pins for micro-size cards). While the first implementations appeared in compact system cameras three years after the specification (2014), it took several more years until UHS-II was implemented on a regular basis. At the beginning of 2021, there were more than 50 DSLR and compact system cameras using UHS-II.
UHS-III
Version 6.0, released in February 2017, added two new data rates to the standard. FD312 provides 312 MB/s while FD624 doubles that. Both are full-duplex. The physical interface and pin-layout are the same as with UHS-II, retaining backward compatibility.
Cards that comply with UHS show Roman numerals 'I', 'II' or 'III' next to the SD card logo, and report this capability to the host device. Use of UHS-I requires that the host device command the card to drop from 3.3-volt to 1.8-volt operation over the I/O interface pins and select the four-bit transfer mode, while UHS-II requires 0.4-volt operation.
The higher speed rates are achieved by using a two-lane low voltage (0.4 V pp) differential interface. Each lane is capable of transferring up to 156 MB/s. In full-duplex mode, one lane is used for Transmit while the other is used for Receive. In half-duplex mode both lanes are used for the same direction of data transfer allowing a double data rate at the same clock speed. In addition to enabling higher data rates, the UHS-II interface allows for lower interface power consumption, lower I/O voltage and lower electromagnetic interference (EMI).
SD Express
The SD Express bus was released in June 2018 with SD specification 7.0. It uses a single PCIe lane to provide full-duplex 985 MB/s transfer speed. Supporting cards must also implement the NVM Express storage access protocol. The Express bus can be implemented by SDHC, SDXC, and SDUC cards. For legacy application use, SD Express cards must also support High Speed bus and UHS-I bus. The Express bus re-uses the pin layout of UHS-II cards and reserves the space for additional two pins that may be introduced in the future.
Hosts which implement version 7.0 of the spec allow SD Cards to do direct memory access, which increases the attack surface of the host dramatically in the face of malicious SD cards.
Version 8.0 was announced on 19 May 2020, with support for two PCIe lanes with additional row of contacts and PCIe 4.0 transfer rates, for a maximum bandwidth of 3938 MB/s.
microSD Express
In February 2019, the SD Association announced microSD Express. The microSD Express cards offer PCI Express and NVMe interfaces, as the June 2018 SD Express release did, alongside the legacy microSD interface for continued backwards compatibility. The SDA also released visual marks to denote microSD Express memory cards to make matching the card and device easier for optimal device performance.
Bus speed Comparison
Compatibility
Note: if a card reader implements the proprietary DDR208 mode over the UHS-I pins, it can reach about 180 MB/s with compatible UHS-I cards.
Class
The SD Association defines standard speed classes for SDHC/SDXC cards indicating minimum performance (minimum serial data writing speed). Both read and write speeds must exceed the specified value. The specification defines these classes in terms of performance curves that translate into minimum read-write performance levels on an empty card and suitability for different applications.
The SD Association defines three types of Speed Class ratings: the original Speed Class, UHS Speed Class, and Video Speed Class.
(Original) Speed Class
Speed Class ratings 2, 4, and 6 assert that the card supports the respective number of megabytes per second as a minimum sustained write speed for a card in a fragmented state.
Class 10 asserts that the card supports 10 MB/s as a minimum non-fragmented sequential write speed and uses a High Speed bus mode. The host device can read a card's speed class and warn the user if the card reports a speed class that falls below an application's minimum need. By comparison, the older "×" rating measured maximum speed under ideal conditions, and was vague as to whether this was read speed or write speed.
The graphical symbol for the speed class has a number encircled with 'C' (C2, C4, C6, and C10).
UHS Speed Class
UHS-I and UHS-II cards can use UHS Speed Class rating with two possible grades: class 1 for minimum write performance of at least 10 MB/s ('U1' symbol featuring number 1 inside 'U') and class 3 for minimum write performance of 30 MB/s ('U3' symbol featuring 3 inside 'U'), targeted at recording 4K video. Before November 2013, the rating was branded UHS Speed Grade and contained grades 0 (no symbol) and 1 ('U1' symbol). Manufacturers can also display standard speed class symbols (C2, C4, C6, and C10) alongside, or in place of UHS speed class.
UHS memory cards work best with UHS host devices. The combination lets the user record HD resolution videos with tapeless camcorders while performing other functions. It is also suitable for real-time broadcasts and capturing large HD videos.
Video Speed Class
Video Speed Class defines a set of requirements for UHS cards to match the modern MLC NAND flash memory and supports progressive 4K and 8K video with minimum sequential writing speeds of 6 – 90 MB/s. The graphical symbols use a stylized 'V' followed by a number designating write speed (i.e. V6, V10, V30, V60, and V90).
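A compact way to summarize the three rating families is a lookup from the printed symbol to the guaranteed minimum sequential write speed. The table below is an illustrative sketch assembled from the classes described above, not an official SDA data source.

    # Minimum sequential write speed, in MB/s, implied by each speed-class symbol.
    MIN_WRITE_MB_S = {
        "C2": 2, "C4": 4, "C6": 6, "C10": 10,                  # original Speed Class
        "U1": 10, "U3": 30,                                    # UHS Speed Class
        "V6": 6, "V10": 10, "V30": 30, "V60": 60, "V90": 90,   # Video Speed Class
    }

    def sustains_bitrate(symbol, video_bitrate_mbit_s):
        """True if a card with this marking guarantees the given video bitrate."""
        return MIN_WRITE_MB_S[symbol] * 8 >= video_bitrate_mbit_s

    print(sustains_bitrate("V30", 100))   # 30 MB/s = 240 Mbit/s, enough for a 100 Mbit/s stream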
Comparison
Application Performance Class
Application Performance Class is a newly defined standard from the SD Specification 5.1 and 6.0 which not only define sequential Writing Speeds but also mandates a minimum IOPS for reading and writing. Class A1 requires a minimum of 1500 reading and 500 writing operations per second, while class A2 requires 4000 and 2000 IOPS. A2 class cards require host driver support as they use command queuing and write caching to achieve their higher speeds. If used in an unsupported host, they might even be slower than other A1 cards, and if power is lost before cached data is actually written from the card's internal RAM to the card's internal flash RAM, that data is likely to be lost.
"×" rating
The "×" rating, that was used by some card manufacturers and made obsolete by speed classes, is a multiple of the standard CD-ROM drive speed of 150 KB/s (approximately 1.23 Mbit/s). Basic cards transfer data at up to six times (6×) the CD-ROM speed; that is, 900 kbit/s or 7.37 Mbit/s. The 2.0 specification defines speeds up to 200×, but is not as specific as Speed Classes are on how to measure speed. Manufacturers may report best-case speeds and may report the card's fastest read speed, which is typically faster than the write speed. Some vendors, including Transcend and Kingston, report their cards' write speed. When a card lists both a speed class and an "×" rating, the latter may be assumed a read speed only.
Real-world performance
In applications that require sustained write throughput, such as video recording, the device might not perform satisfactorily if the SD card's class rating falls below a particular speed. For example, a high-definition camcorder may require a card of not less than Class 6, suffering dropouts or corrupted video if a slower card is used. Digital cameras with slow cards may take a noticeable time after taking a photograph before being ready for the next, while the camera writes the first picture.
The speed class rating does not totally characterize card performance. Different cards of the same class may vary considerably while meeting class specifications. A card's speed depends on many factors, including:
The frequency of soft errors that the card's controller must re-try
Write amplification: The flash controller may need to overwrite more data than requested. This has to do with performing read-modify-write operations on write blocks, freeing up (the much larger) erase blocks, while moving data around to achieve wear leveling.
File fragmentation: where there is not sufficient space for a file to be recorded in a contiguous region, it is split into non-contiguous fragments. This does not cause rotational or head-movement delays as with electromechanical hard drives, but may decrease speed — for instance, by requiring additional reads and computation to determine where on the card the file's next fragment is stored.
In addition, speed may vary markedly between writing a large amount of data to a single file (sequential access, as when a digital camera records large photographs or videos) and writing a large number of small files (a random-access use common in smartphones). A study in 2012 found that, in this random-access use, some Class 2 cards achieved a write speed of 1.38 MB/s, while all cards tested of Class 6 or greater (and some of lower Classes; lower Class does not necessarily mean better small-file performance), including those from major manufacturers, were over 100 times slower. In 2014, a blogger measured a 300-fold performance difference on small writes; this time, the best card in this category was a class 4 card.
Features
Card security
Cards can protect their contents from erasure or modification, prevent access by non-authorized users, and protect copyrighted content using digital rights management.
Commands to disable writes
The host device can command the SD card to become read-only (to reject subsequent commands to write information to it). There are both reversible and irreversible host commands that achieve this.
Write-protect notch
Most full-size SD cards have a "mechanical write protect switch" allowing the user to advise the host computer that the user wants the device to be treated as read-only. This does not protect the data on the card if the host is compromised: "It is the responsibility of the host to protect the card. The position of the write protect switch is unknown to the internal circuitry of the card." Some host devices do not support write protection, which is an optional feature of the SD specification, and drivers and devices that do obey a read-only indication may give the user a way to override it.
The switch is a sliding tab that covers a notch in the card. The miniSD and microSD formats do not directly support a write protection notch, but they can be inserted into full-size adapters which do.
When looking at the SD card from the top, the right side (the side with the beveled corner) must be notched.
On the left side, there may be a write-protection notch. If the notch is omitted, the card can be read and written. If the card is notched, it is read-only. If the card has a notch and a sliding tab which covers the notch, the user can slide the tab upward (toward the contacts) to declare the card read/write, or downward to declare it read-only. The diagram to the right shows an orange sliding write-protect tab in both the unlocked and locked positions.
Cards sold with content that must not be altered are permanently marked read-only by having a notch and no sliding tab.
Card password
A host device can lock an SD card using a password of up to 16 bytes, typically supplied by the user. A locked card interacts normally with the host device except that it rejects commands to read and write data. A locked card can be unlocked only by providing the same password. The host device can, after supplying the old password, specify a new password or disable locking. Without the password (typically, in the case that the user forgets the password), the host device can command the card to erase all the data on the card for future re-use (except card data under DRM), but there is no way to gain access to the existing data.
Windows Phone 7 devices use SD cards designed for access only by the phone manufacturer or mobile provider. An SD card inserted into the phone underneath the battery compartment becomes locked "to the phone with an automatically generated key" so that "the SD card cannot be read by another phone, device, or PC". Symbian devices, however, are some of the few that can perform the necessary low-level format operations on locked SD cards. It is therefore possible to use a device such as the Nokia N8 to reformat the card for subsequent use in other devices.
smartSD cards
A smartSD memory card is a microSD card with an internal "secure element" that allows the transfer of ISO 7816 Application Protocol Data Unit commands to, for example, JavaCard applets running on the internal secure element through the SD bus.
Some of the earliest versions of microSD memory cards with secure elements were developed in 2009 by DeviceFidelity, Inc., a pioneer in near field communication (NFC) and mobile payments, with the introduction of In2Pay and CredenSE products, later commercialized and certified for mobile contactless transactions by Visa in 2010. DeviceFidelity also adapted the In2Pay microSD to work with the Apple iPhone using the iCaisse, and pioneered the first NFC transactions and mobile payments on an Apple device in 2010.
Various implementations of smartSD cards have been done for payment applications and secured authentication. In 2012 Good Technology partnered with DeviceFidelity to use microSD cards with secure elements for mobile identity and access control.
microSD cards with Secure Elements and NFC (near field communication) support are used for mobile payments, and have been used in direct-to-consumer mobile wallets and mobile banking solutions, some of which were launched by major banks around the world, including Bank of America, US Bank, and Wells Fargo, while others were part of innovative new direct-to-consumer neobank programs such as moneto, first launched in 2012.
microSD cards with Secure Elements have also been used for secure voice encryption on mobile devices, which allows for one of the highest levels of security in person-to-person voice communications. Such solutions are heavily used in intelligence and security.
In 2011, HID Global partnered with Arizona State University to launch campus access solutions for students using microSD with Secure Element and MiFare technology provided by DeviceFidelity, Inc. This was the first time regular mobile phones could be used to open doors without need for electronic access keys.
Vendor enhancements
Vendors have sought to differentiate their products in the market through various vendor-specific features:
Integrated Wi-Fi – Several companies produce SD cards with built-in Wi-Fi transceivers supporting static security (WEP 40, 104, and 128; WPA-PSK; and WPA2-PSK). The card lets any digital camera with an SD slot transmit captured images over a wireless network, or store the images on the card's memory until it is in range of a wireless network. Examples include: Eye-Fi / SanDisk, Transcend Wi-Fi, Toshiba FlashAir, Trek Flucard, PQI Air Card and LZeal ez Share. Some models geotag their pictures.
Pre-loaded content – In 2006, SanDisk announced Gruvi, a microSD card with extra digital rights management features, which they intended as a medium for publishing content. SanDisk again announced pre-loaded cards in 2008, under the slotMusic name, this time not using any of the DRM capabilities of the SD card. In 2011, SanDisk offered various collections of 1000 songs on a single slotMusic card for about $40, now restricted to compatible devices and without the ability to copy the files.
Integrated USB connector – The SanDisk SD Plus product can be plugged directly into a USB port without needing a USB card reader. Other companies introduced comparable products, such as the Duo SD product of OCZ Technology and the 3 Way (microSDHC, SDHC, and USB) product of A-DATA, which was available in 2008 only.
Different colors – SanDisk has used various colors of plastic or adhesive label, including a "gaming" line in translucent plastic colors that indicated the card's capacity.
Integrated display – In 2006, A-DATA announced a Super Info SD card with a digital display that provided a two-character label and showed the amount of unused memory on the card.
SDIO cards
A SDIO (Secure Digital Input Output) card is an extension of the SD specification to cover I/O functions. SDIO cards are only fully functional in host devices designed to support their input-output functions (typically PDAs like the Palm Treo, but occasionally laptops or mobile phones). These devices can use the SD slot to support GPS receivers, modems, barcode readers, FM radio tuners, TV tuners, RFID readers, digital cameras, and interfaces to Wi-Fi, Bluetooth, Ethernet, and IrDA. Many other SDIO devices have been proposed, but it is now more common for I/O devices to connect using the USB interface.
SDIO cards support most of the memory commands of SD cards. SDIO cards can be structured as eight logical cards, although currently, the typical way that an SDIO card uses this capability is to structure itself as one I/O card and one memory card.
The SDIO and SD interfaces are mechanically and electrically identical. Host devices built for SDIO cards generally accept SD memory cards without I/O functions. However, the reverse is not true, because host devices need suitable drivers and applications to support the card's I/O functions. For example, an HP SDIO camera usually does not work with PDAs that do not list it as an accessory. Inserting an SDIO card into any SD slot causes no physical damage nor disruption to the host device, but users may be frustrated that the SDIO card does not function fully when inserted into a seemingly compatible slot. (USB and Bluetooth devices exhibit comparable compatibility issues, although to a lesser extent thanks to standardized USB device classes and Bluetooth profiles.)
The SDIO family comprises Low-Speed and Full-Speed cards. Both types of SDIO cards support SPI and one-bit SD bus types. Low-Speed SDIO cards are allowed to also support the four-bit SD bus; Full-Speed SDIO cards are required to support the four-bit SD bus. To use an SDIO card as a "combo card" (for both memory and I/O), the host device must first select four-bit SD bus operation. Two other unique features of Low-Speed SDIO are a maximum clock rate of 400 kHz for all communications, and the use of Pin 8 as "interrupt" to try to initiate dialogue with the host device.
Ganging cards together
The one-bit SD protocol was derived from the MMC protocol, which envisioned the ability to put up to three cards on a bus of common signal lines. The cards use open collector interfaces, where a card may pull a line to the low voltage level; the line is at the high voltage level (because of a pull-up resistor) if no card pulls it low. Though the cards shared clock and signal lines, each card had its own chip select line to sense that the host device had selected it.
The SD protocol envisioned the ability to gang 30 cards together without separate chip select lines. The host device would broadcast commands to all cards and identify the card to respond to the command using its unique serial number.
In practice, cards are rarely ganged together because open-collector operation has problems at high speeds and increases power consumption. Newer versions of the SD specification recommend separate lines to each card.
Compatibility
Host devices that comply with newer versions of the specification provide backward compatibility and accept older SD cards. For example, SDXC host devices accept all previous families of SD memory cards, and SDHC host devices also accept standard SD cards.
Older host devices generally do not support newer card formats, and even when they might support the bus interface used by the card, there are several factors that arise:
A newer card may offer greater capacity than the host device can handle (over 4 GB for SDHC, over 32 GB for SDXC).
A newer card may use a file system the host device cannot navigate (FAT32 for SDHC, exFAT for SDXC)
Use of an SDIO card requires the host device be designed for the input/output functions the card provides.
The hardware interface of the card was changed starting with version 2.0 (new high-speed bus clocks, redefinition of storage capacity bits) and the SDHC family (Ultra High Speed (UHS) bus)
UHS-II physically has more pins but is backward compatible with UHS-I and non-UHS interfaces for both slot and card.
Some vendors produced SDSC cards above 1 GB before the SDA had standardized a method of doing so.
Markets
Due to their compact size, Secure Digital cards are used in many consumer electronic devices, and have become a widespread means of storing several gigabytes of data in a small size. Devices in which the user may remove and replace cards often, such as digital cameras, camcorders, and video game consoles, tend to use full-sized cards. Devices in which small size is paramount, such as mobile phones, action cameras such as the GoPro Hero series, and camera drones, tend to use microSD cards.
Mobile phones
The microSD card has helped propel the smartphone market by giving both manufacturers and consumers greater flexibility and freedom.
While cloud storage depends on a stable internet connection and sufficiently generous data plans, memory cards in mobile devices provide location-independent, private storage expansion with much higher transfer rates and no latency, enabling applications such as photography and video recording. While data stored internally on a bricked device is inaccessible, data stored on a memory card can be salvaged and accessed externally by the user as a mass storage device. A benefit over USB On-The-Go storage expansion is better ergonomics. Using a memory card also protects the phone's non-replaceable internal storage from wear caused by heavy use, such as extensive camera recording or hosting a portable FTP server over Wi-Fi Direct. As memory card technology develops, users of existing mobile devices can expand their storage further and more cheaply over time.
Recent versions of major operating systems such as Windows Mobile and Android allow applications to run from microSD cards, creating possibilities for new usage models for SD cards in mobile computing markets, as well as clearing available internal storage space.
SD cards are not the most economical solution in devices that need only a small amount of non-volatile memory, such as station presets in small radios. They may also not present the best choice for applications that require higher storage capacities or speeds as provided by other flash card standards such as CompactFlash. These limitations may be addressed by evolving memory technologies, such as the new SD 7.0 specifications which allow storage capabilities of up to 128 TB.
Many personal computers of all types, including tablets and mobile phones, use SD cards, either through built-in slots or through an active electronic adapter. Adapters exist for the PC Card, ExpressCard, USB, FireWire, and parallel printer port interfaces. Active adapters also let SD cards be used in devices designed for other formats, such as CompactFlash. The FlashPath adapter lets SD cards be used in a floppy disk drive.
Some devices such as the Samsung Galaxy Fit (2011) and Samsung Galaxy Note 8.0 (2013) have an SD card compartment located externally and accessible by hand, while it is located under the battery cover on other devices. More recent mobile phones use a pin-hole ejection system for the tray which houses both the memory card and SIM card.
Counterfeits
Commonly found on the market are mislabeled or counterfeit Secure Digital cards that report a fake capacity or run slower than labeled.
Software tools exist to check and detect counterfeit products. Detection of counterfeit cards usually involves copying files with random data to the SD card until the card's capacity is reached, and copying them back. The files that were copied back can be tested either by comparing checksums (e.g. MD5), or trying to compress them. The latter approach leverages the fact that counterfeited cards let the user read back files, which then consist of easily compressible uniform data (for example, repeating 0xFFs).
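A minimal version of such a check can be scripted: write files of random data, read them back, and compare checksums. The sketch below is a simplified spot check (a thorough test fills the entire card, and cached data should be flushed before re-reading); the mount-point argument is simply the path where the card is mounted.

    import hashlib, os

    def spot_check(mount_point, n_files=8, chunk_mb=64):
        """Write random files to the card, read them back, and compare MD5 digests.
        Fake-capacity cards typically return corrupted data once the real flash is exhausted."""
        digests = {}
        for i in range(n_files):
            data = os.urandom(chunk_mb * 1024 * 1024)
            path = os.path.join(mount_point, f"capacity_check_{i}.bin")
            with open(path, "wb") as f:
                f.write(data)
                f.flush()
                os.fsync(f.fileno())        # push the data out to the card
            digests[path] = hashlib.md5(data).hexdigest()
        ok = True
        for path, expected in digests.items():
            with open(path, "rb") as f:
                if hashlib.md5(f.read()).hexdigest() != expected:
                    print("mismatch:", path)
                    ok = False
            os.remove(path)
        return ok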
Digital cameras
SD/MMC cards replaced Toshiba's SmartMedia as the dominant memory card format used in digital cameras. In 2001, SmartMedia had achieved nearly 50% use, but by 2005 SD/MMC had achieved over 40% of the digital camera market and SmartMedia's share had plummeted by 2007.
At this time, all the leading digital camera manufacturers used SD in their consumer product lines, including Canon, Casio, Fujifilm, Kodak, Leica, Nikon, Olympus, Panasonic, Pentax, Ricoh, Samsung, and Sony. Formerly, Olympus and Fujifilm used XD-Picture Cards (xD cards) exclusively, while Sony only used Memory Stick; by early 2010 all three supported SD.
Some prosumer and professional digital cameras continued to offer CompactFlash (CF), either on a second card slot or as the only storage, as CF supports much higher maximum capacities and historically was cheaper for the same capacity.
Secure Digital memory cards can be used in Sony XDCAM EX camcorders with an adapter and in Panasonic P2 card equipment with a MicroP2 adapter.
Personal computers
Although many personal computers accommodate SD cards as an auxiliary storage device using a built-in slot, or can accommodate SD cards by means of a USB adapter, SD cards cannot be used as the primary hard disk through the onboard ATA controller, because none of the SD card variants support ATA signalling. Primary hard disk use requires a separate SD controller chip or an SD-to-CompactFlash converter. However, on computers that support bootstrapping from a USB interface, an SD card in a USB adapter can be the primary hard disk, provided it contains an operating system that supports USB access once the bootstrap is complete.
In laptop and tablet computers, memory cards in an integrated card reader offer an ergonomic benefit over USB flash drives, as the latter stick out of the device and the user must be careful not to bump them while transporting the device, which could damage the USB port. Memory cards have a unified shape and do not occupy a USB port when inserted into a computer's dedicated card slot.
Since late 2009, newer Apple computers with installed SD card readers have been able to boot in macOS from SD storage devices, when properly formatted to Mac OS Extended file format and the default partition table set to GUID Partition Table. (See Other file systems below).
SD cards are increasingly used by owners of vintage computers such as the 8-bit Atari line. For example, the SIO2SD adapter (SIO is the Atari port for connecting external devices) is used today; the software library for an 8-bit Atari can fit on a single SD card of less than 4-8 GB (2019).
Embedded systems
In 2008, the SDA specified Embedded SD, "leverag[ing] well-known SD standards" to enable non-removable SD-style devices on printed circuit boards. However, this standard was not adopted by the market, and the MMC standard instead became the de facto standard for embedded systems. SanDisk provides such embedded memory components under the iNAND brand.
Most modern microcontrollers have built-in SPI logic that can interface to an SD card operating in its SPI mode, providing non-volatile storage. Even if a microcontroller lacks the SPI feature, the feature can be emulated by bit banging. For example, a home-brew hack combines spare General Purpose Input/Output (GPIO) pins of the processor of the Linksys WRT54G router with MMC support code from the Linux kernel. This technique can achieve throughput of up to .
Music distribution
Prerecorded microSD cards have been used to distribute music commercially under the brands slotMusic and slotRadio by SanDisk and MQS by Astell&Kern.
Technical details
Physical size
The SD card specification defines three physical sizes. The SD and SDHC families are available in all three sizes, but the SDXC and SDUC families are not available in the mini size, and the SDIO family is not available in the micro size. Smaller cards are usable in larger slots through use of a passive adapter.
Standard
SD (SDSC), SDHC, SDXC, SDIO, SDUC. Dimensions 32.0 mm × 24.0 mm × 2.1 mm; a rare Thin SD variant is as thin as MMC (1.4 mm).
miniSD
miniSD, miniSDHC, miniSDIO. Dimensions 21.5 mm × 20.0 mm × 1.4 mm.
microSD
microSD, microSDHC, microSDXC, microSDUC. The micro form factor is the smallest SD card format; dimensions 15.0 mm × 11.0 mm × 1.0 mm.
Transfer modes
Cards may support various combinations of the following bus types and transfer modes. The SPI bus mode and one-bit SD bus mode are mandatory for all SD families, as explained in the next section. Once the host device and the SD card negotiate a bus interface mode, the usage of the numbered pins is the same for all card sizes.
SPI bus mode: Serial Peripheral Interface Bus is primarily used by embedded microcontrollers. This bus type supports only a 3.3-volt interface. This is the only bus type that does not require a host license.
One-bit SD bus mode: Separate command and data channels and a proprietary transfer format.
Four-bit SD bus mode: Uses extra pins plus some reassigned pins. It uses the same protocol as the one-bit SD bus mode, but with one command line and four data lines for faster data transfer. All SD cards support this mode. UHS-I and UHS-II require this bus type.
Two differential lines SD UHS-II mode: Uses two low-voltage differential interfaces to transfer commands and data. UHS-II cards include this interface in addition to the SD bus modes.
The physical interface comprises 9 pins, except that the miniSD card adds two unconnected pins in the center and the microSD card omits one of the two VSS (Ground) pins.
Notes on the pin assignments: direction is relative to the card (I = input, O = output); PP = push-pull logic, OD = open-drain logic; S = power supply, NC = not connected (or logical high).
Interface
Command interface
SD cards and host devices initially communicate through a synchronous one-bit interface, where the host device provides a clock signal that strobes single bits in and out of the SD card. The host device thereby sends 48-bit commands and receives responses. The card can signal that a response will be delayed, but the host device can abort the dialogue.
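The command frame has a fixed layout: a start bit (0), a transmission bit (1 for host-to-card), a 6-bit command index, a 32-bit argument, a 7-bit CRC, and an end bit (1). The following Python sketch (function names are illustrative, not part of the specification) shows how such a 48-bit frame can be assembled, using the CRC-7 polynomial x^7 + x^3 + 1 used for SD/MMC commands.

def crc7(data):
    """CRC-7 used for SD/MMC commands: polynomial x^7 + x^3 + 1, MSB first."""
    crc = 0
    for byte in data:
        for i in range(7, -1, -1):
            feedback = ((byte >> i) & 1) ^ ((crc >> 6) & 1)
            crc = (crc << 1) & 0x7F
            if feedback:
                crc ^= 0x09
    return crc

def sd_command(index, argument):
    """Assemble a 48-bit (6-byte) command frame.

    Layout: start bit 0, transmission bit 1 (host to card), 6-bit command
    index, 32-bit argument, 7-bit CRC, end bit 1.
    """
    body = bytes([0x40 | (index & 0x3F)]) + argument.to_bytes(4, "big")
    return body + bytes([(crc7(body) << 1) | 1])

print(sd_command(0, 0).hex())   # CMD0 (GO_IDLE_STATE) -> "400000000095"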
Through issuing various commands, the host device can:
Determine the type, memory capacity, and capabilities of the SD card
Command the card to use a different voltage, different clock speed, or advanced electrical interface
Prepare the card to receive a block to write to the flash memory, or read and reply with the contents of a specified block.
The command interface is an extension of the MultiMediaCard (MMC) interface. SD cards dropped support for some of the commands in the MMC protocol, but added commands related to copy protection. By using only commands supported by both standards until determining the type of card inserted, a host device can accommodate both SD and MMC cards.
Electrical interface
All SD card families initially use a 3.3 volt electrical interface. On command, SDHC and SDXC cards can switch to 1.8 V operation.
At initial power-up or card insertion, the host device selects either the Serial Peripheral Interface (SPI) bus or the one-bit SD bus by the voltage level present on Pin 1. Thereafter, the host device may issue a command to switch to the four-bit SD bus interface, if the SD card supports it. For various card types, support for the four-bit SD bus is either optional or mandatory.
After determining that the SD card supports it, the host device can also command the SD card to switch to a higher transfer speed. Until determining the card's capabilities, the host device should not use a clock speed faster than 400 kHz. SD cards other than SDIO (see below) have a "Default Speed" clock rate of 25 MHz. The host device is not required to use the maximum clock speed that the card supports. It may operate at less than the maximum clock speed to conserve power. Between commands, the host device can stop the clock entirely.
Achieving higher card speeds
The SD specification defines four-bit-wide transfers. (The MMC specification supports this and also defines an eight-bit-wide mode; MMC cards with the extra bits were not accepted by the market.) Transferring several bits on each clock pulse improves the card speed. Advanced SD families have also improved speed by offering faster clock frequencies, double data rate operation, and a high-speed differential interface (UHS-II).
File system
Like other types of flash memory card, an SD card of any SD family is a block-addressable storage device, in which the host device can read or write fixed-size blocks by specifying their block number.
MBR and FAT
Most SD cards ship preformatted with one or more MBR partitions, where the first or only partition contains a file system. This lets them operate like the hard disk of a personal computer. Per the SD card specification, an SD card is formatted with MBR and the following file system:
For SDSC cards:
Capacity of less than 32,680 logical sectors (smaller than 16 MB): FAT12 with partition type 01h and BPB 3.0 or EBPB 4.1
Capacity of 32,680 to 65,535 logical sectors (between 16 MB and 32 MB): FAT16 with partition type 04h and BPB 3.0 or EBPB 4.1
Capacity of at least 65,536 logical sectors (larger than 32 MB): FAT16B with partition type 06h and EBPB 4.1
For SDHC cards:
Capacity of less than 16,450,560 logical sectors (smaller than 7.8 GB): FAT32 with partition type 0Bh and EBPB 7.1
Capacity of at least 16,450,560 logical sectors (larger than 7.8 GB): FAT32 with partition type 0Ch and EBPB 7.1
For SDXC cards: exFAT with partition type 07h
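A minimal Python sketch of the selection logic listed above (the function name and family labels are illustrative; sector counts are 512-byte logical sectors):

def sd_default_format(total_sectors, family):
    """Return (file system, MBR partition type) per the rules listed above."""
    if family == "SDSC":
        if total_sectors < 32680:
            return ("FAT12", 0x01)
        if total_sectors < 65536:
            return ("FAT16", 0x04)
        return ("FAT16B", 0x06)
    if family == "SDHC":
        if total_sectors < 16450560:
            return ("FAT32", 0x0B)
        return ("FAT32", 0x0C)
    if family == "SDXC":
        return ("exFAT", 0x07)
    raise ValueError("unknown SD family: " + family)

print(sd_default_format(125000000, "SDXC"))   # a 64 GB SDXC card -> ('exFAT', 7)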
Most consumer products that take an SD card expect that it is partitioned and formatted in this way. Universal support for FAT12, FAT16, FAT16B, and FAT32 allows the use of SDSC and SDHC cards on most host computers with a compatible SD reader, to present the user with the familiar method of named files in a hierarchical directory tree.
On such SD cards, standard utility programs such as Mac OS X's Disk Utility or Windows' SCANDISK can be used to repair a corrupted file system and sometimes recover deleted files. Defragmentation tools for FAT file systems may be used on such cards. The resulting consolidation of files may provide a marginal improvement in the time required to read or write the file, but not an improvement comparable to defragmentation of hard drives, where storing a file in multiple fragments requires additional physical, and relatively slow, movement of a drive head. Moreover, defragmentation performs writes to the SD card that count against the card's rated lifespan. The write endurance of the physical memory is discussed in the article on flash memory; newer technology to increase the storage capacity of a card provides worse write endurance.
When reformatting an SD card with a capacity of at least 32 MB (65,536 logical sectors or more), but not more than 2 GB, FAT16B with partition type 06h and EBPB 4.1 is recommended if the card is for a consumer device. (FAT16B is also an option for 4 GB cards, but it requires the use of 64 KB clusters, which are not widely supported.) FAT16B does not support cards above 4 GB at all.
The SDXC specification mandates the use of Microsoft's proprietary exFAT file system, which sometimes requires appropriate drivers (e.g. exfat-utils/exfat-fuse on Linux).
Other file systems
Because the host views the SD card as a block storage device, the card does not require MBR partitions or any specific file system. The card can be reformatted to use any file system the operating system supports. For example:
Under Windows, SD cards can be formatted using NTFS and, on later versions, exFAT.
Under macOS, SD cards can be partitioned as GUID devices and formatted with either HFS Plus or APFS file systems or still use exFAT.
Under Unix-like operating systems such as Linux or FreeBSD, SD cards can be formatted using the UFS, Ext2, Ext3, Ext4, btrfs, HFS Plus, ReiserFS or F2FS file system. Additionally under Linux, HFS Plus file systems may be accessed for read/write if the "hfsplus" package is installed, and partitioned and formatted if "hfsprogs" is installed. (These package names are correct under Debian, Ubuntu etc., but may differ on other Linux distributions.)
Any recent version of the above operating systems can format SD cards using the UDF file system.
Additionally, as with live USB flash drives, an SD card can have an operating system installed on it. Computers that can boot from an SD card (either using a USB adapter or inserted into the computer's flash media reader) instead of the hard disk drive may thereby be able to recover from a corrupted hard disk drive. Such an SD card can be write-locked to preserve the system's integrity.
The SD Standard allows only the above-mentioned Microsoft FAT file systems, and any card produced for the market must be preloaded with the corresponding standard file system on delivery. If an application or user re-formats the card with a non-standard file system, proper operation of the card, including interoperability, cannot be assured.
Risks of reformatting
Reformatting an SD card with a different file system, or even with the same one, may make the card slower or shorten its lifespan. Some cards use wear leveling, in which frequently modified blocks are mapped to different portions of memory at different times, and some wear-leveling algorithms are designed for the access patterns typical of FAT12, FAT16, or FAT32. In addition, the preformatted file system may use a cluster size that matches the erase region of the physical memory on the card; reformatting may change the cluster size and make writes less efficient. The SD Association provides freely downloadable SD Formatter software for Windows and Mac OS X to overcome these problems.
SD/SDHC/SDXC memory cards have a "Protected Area" on the card for the SD standard's security function. Neither standard formatters nor the SD Association formatter will erase it. The SD Association suggests that devices or software which use the SD security function may format it.
Power consumption
The power consumption of SD cards varies by speed mode, manufacturer, and model.
During transfer it may be in the range of 66–330 mW (20–100 mA at a supply voltage of 3.3 V). Specifications from TwinMos Technologies list a maximum of 149 mW (45 mA) during transfer. Toshiba lists 264–330 mW (80–100 mA). Standby current is much lower, less than 0.2 mA for one 2006 microSD card. If there is data transfer for significant periods, battery life may be reduced noticeably; for reference, the capacity of smartphone batteries is typically around 6 Wh (Samsung Galaxy S2: 1650 mAh @ 3.7 V).
Modern UHS-II cards can consume up to 2.88 W, if the host device supports bus speed mode SDR104 or UHS-II. Minimum power consumption in the case of a UHS-II host is 720 mW.
Storage capacity and compatibilities
All SD cards let the host device determine how much information the card can hold, and the specification of each SD family gives the host device a guarantee of the maximum capacity a compliant card reports.
By the time the version 2.0 (SDHC) specification was completed in June 2006, vendors had already devised 2 GB and 4 GB SD cards, either as specified in Version 1.01, or by creatively reading Version 1.00. The resulting cards do not work correctly in some host devices.
SDSC cards above 1 GB
A host device can ask any inserted SD card for its 128-bit identification string (the Card-Specific Data or CSD). In standard-capacity cards (SDSC), 12 bits identify the number of memory clusters (ranging from 1 to 4,096) and 3 bits identify the number of blocks per cluster (which decode to 4, 8, 16, 32, 64, 128, 256, or 512 blocks per cluster). The host device multiplies these figures (as shown in the following section) with the number of bytes per block to determine the card's capacity in bytes.
SD version 1.00 assumed 512 bytes per block. This permitted SDSC cards up to 4,096 × 512 × 512 B = 1 GB, for which there are no known incompatibilities.
Version 1.01 let an SDSC card use a 4-bit field to indicate 1,024 or 2,048 bytes per block instead. Doing so enabled cards with 2 GB and 4 GB capacity, such as the Transcend 4 GB SD card and the Memorette 4 GB SD card.
Early SDSC host devices that assume 512-byte blocks therefore do not fully support the insertion of 2 GB or 4 GB cards. In some cases, the host device can read data that happens to reside in the first 1 GB of the card. If the assumption is made in the driver software, success may be version-dependent. In addition, any host device might not support a 4 GB SDSC card, since the specification lets it assume that 2 GB is the maximum for these cards.
Storage capacity calculations
The format of the Card-Specific Data (CSD) register changed between version 1 (SDSC) and version 2.0 (which defines SDHC and SDXC).
Version 1
In version 1 of the SD specification, capacities up to 2 GB are calculated by combining fields of the CSD as follows:
Capacity = (C_SIZE + 1) × 2^(C_SIZE_MULT + READ_BL_LEN + 2)
where
0 ≤ C_SIZE ≤ 4095,
0 ≤ C_SIZE_MULT ≤ 7,
READ_BL_LEN is 9 (for 512 bytes/sector) or 10 (for 1024 bytes/sector)
Later versions state (at Section 4.3.2) that a 2 GB SDSC card shall set its READ_BL_LEN (and WRITE_BL_LEN) to indicate 1024 bytes, so that the above computation correctly reports the card's capacity; but that, for consistency, the host device shall not request (by CMD16) block lengths over 512 B.
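As a worked example, the version 1 formula can be evaluated directly; the following small Python sketch (the function name is illustrative) reproduces the 1 GB and 2 GB limits discussed above.

def sdsc_capacity(c_size, c_size_mult, read_bl_len):
    """Capacity in bytes from the version 1 CSD fields, per the formula above."""
    assert 0 <= c_size <= 4095 and 0 <= c_size_mult <= 7
    return (c_size + 1) * 2 ** (c_size_mult + read_bl_len + 2)

print(sdsc_capacity(4095, 7, 9))    # 1073741824 bytes: the 1 GB limit with 512-byte blocks
print(sdsc_capacity(4095, 7, 10))   # 2147483648 bytes: a 2 GB card must report 1024-byte blocks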
Versions 2 and 3
In the definition of SDHC cards in version 2.0, the C_SIZE portion of the CSD is 22 bits and it indicates the memory size in multiples of 512 KB (the C_SIZE_MULT field is removed and READ_BL_LEN is no longer used to compute capacity). Two bits that were formerly reserved now identify the card family: 0 is SDSC; 1 is SDHC or SDXC; 2 and 3 are reserved. Because of these redefinitions, older host devices do not correctly identify SDHC or SDXC cards nor their correct capacity.
SDHC cards are restricted to reporting a capacity not over 32 GB.
SDXC cards are allowed to use all 22 bits of the C_SIZE field. An SDHC card that did so (reported C_SIZE > 65,375 to indicate a capacity of over 32 GB) would violate the specification. A host device that relied on C_SIZE rather than the specification to determine the card's maximum capacity might support such a card, but the card might fail in other SDHC-compatible host devices.
Capacity is calculated thus:
Capacity = (C_SIZE + 1) × 524288 bytes
where for SDHC
4112 ≤ C_SIZE ≤ 65375
≈2 GB ≤ Capacity ≤ ≈32 GB
where for SDXC
65535 ≤ C_SIZE
≈32 GB ≤ Capacity ≤ 2 TB
Capacities above 4 GB can only be achieved by following version 2.0 or later of the specification. In addition, 4 GB cards must also follow it to guarantee compatibility.
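For SDHC and SDXC cards the calculation reduces to a single multiplication, as in this small illustrative Python sketch:

def sdhc_sdxc_capacity(c_size):
    """Capacity in bytes from the version 2.0+ CSD, per the formula above."""
    return (c_size + 1) * 524288     # multiples of 512 KB

print(sdhc_sdxc_capacity(65375))    # 34275852288 bytes: the largest SDHC value (about 32 GB)
print(sdhc_sdxc_capacity(65535))    # 34359738368 bytes: the smallest SDXC value shown above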
Openness of specification
Like most memory card formats, SD is covered by numerous patents and trademarks. Excluding SDIO cards, royalties for SD card licenses are imposed for manufacture and sale of memory cards and host adapters (US$1,000 per year plus membership at US$1,500 per year).
Early versions of the SD specification were available under a non-disclosure agreement (NDA) prohibiting development of open-source drivers. However, the system was eventually reverse-engineered and free software drivers provided access to SD cards not using DRM. Subsequent to the release of most open-source drivers, the SDA provided a simplified version of the specification under a less restrictive license helping reduce some incompatibility issues.
Under a disclaimers agreement, the simplified specification released by the SDA in 2006 – as opposed to that of SD cards – was later extended to the physical layer, ASSD extensions, SDIO, and SDIO Bluetooth Type-A. The Simplified Specification is available from the SDA website.
Again, most of the information had already been discovered and Linux had a fully free driver for it. Still, building a chip conforming to this specification caused the One Laptop per Child project to claim "the first truly Open Source SD implementation, with no need to obtain an SDI license or sign NDAs to create SD drivers or applications."
The proprietary nature of the complete SD specification affects embedded systems, laptop computers, and some desktop computers; many desktop computers do not have card slots, instead using USB-based card readers if necessary. These card readers present a standard USB mass storage interface to memory cards, thus separating the operating system from the details of the underlying SD interface. However, embedded systems (such as portable music players) usually gain direct access to SD cards and thus need complete programming information. Desktop card readers are themselves embedded systems; their manufacturers have usually paid the SDA for complete access to the SD specifications. Many notebook computers now include SD card readers not based on USB; device drivers for these essentially gain direct access to the SD card, as do embedded systems.
The SPI-bus interface mode is the only type that does not require a host license for accessing SD cards.
SD Express/UHS-II Verification Program (SVP)
The SD Association (SDA) developed the SD Express/UHS-II Verification Program (SVP) to verify the electronic interfaces of members' UHS-II and SD Express card, host, and ancillary products. Products passing SVP may be listed on the SDA website as Verified Products. SVP gives both consumers and businesses higher confidence that products passing SVP meet the interface standards, helping to ensure compatibility.
SVP tests products for compliance against the SDA's Physical Test Guideline. Products eligible for SVP include card, host, and ancillary products using the SD Express PCI Express (PCIe) interface or the SD UHS-II interface. The SDA selected Granite River Labs (GRL) as the first test provider, with labs located in Japan, Taiwan, and the US. SVP is a voluntary program available exclusively to SDA members. Members may choose to have products passing SVP tests listed on the SDA website.
PCIe and UHS-II are both high-speed differential interfaces, and meeting their specifications' demanding requirements is important to assure proper operation and interoperability. SVP serves the market by assuring better interoperability and by publishing a list of SVP Verified Products. This list allows members to promote their products and gives both consumers and OEMs more confidence when selecting products from it.
For a limited time, the SDA is subsidizing SVP costs and is providing its members with additional discount options via a Test Shuttle volume discount program. Test Shuttle leverages multiple members submitting products of the same type for bulk testing. Companies interested in creating products using SDA specifications and participating in SVP can join the SDA by visiting: https://www.sdcard.org/join/.
Comparison to other flash memory formats
Overall, SD is less open than CompactFlash or USB flash memory drives. Those open standards can be implemented without paying for licensing, royalties, or documentation. (CompactFlash and USB flash drives may require licensing fees for the use of the SDA's trademarked logos.)
However, SD is much more open than Sony's Memory Stick, for which no public documentation nor any documented legacy implementation is available. All SD cards can be accessed freely using the well-documented SPI bus.
xD cards are simply 18-pin NAND flash chips in a special package and support the standard command set for raw NAND flash access. Although the raw hardware interface to xD cards is well understood, the layout of its memory contents—necessary for interoperability with xD card readers and digital cameras—is totally undocumented. The consortium that licenses xD cards has not released any technical information to the public.
Data recovery
A malfunctioning SD card can be repaired using specialized equipment, as long as the middle part, containing the flash storage, is not physically damaged; the card's controller can be circumvented in this way. This may be harder or even impossible in the case of a monolithic card, where the controller resides on the same physical die as the flash memory.
See also
Comparison of memory cards
Flash memory
Microdrive
Serial Peripheral Interface Bus (SPI)
Universal Flash Storage
References
External links
SD simplified specifications
How to Use MMC/SDC elm-chan.org, December 26, 2019
Optimizing Linux with cheap flash drives lwn.net
Flash memory card: design, and List of cards and their characteristics linaro
Independent SD Card Speed Tests
Types of Memory Cards and Sizes
Computer-related introductions in 1999
Computer storage devices
Japanese inventions
Solid-state computer storage media
Brooklyn College Center for Computer Music
The Brooklyn College Center for Computer Music (BC-CCM) located at Brooklyn College of the City University of New York (CUNY) was one of the first computer music centers at a public university in the United States. The BC-CCM is a community of artists and researchers that began in the 1970s.
The mission of the BC-CCM is to explore the creative possibilities of technology in relation to the creation of music, sound art, sound design, and multimedia arts. Courses cover techniques of music composition with digital tools and instruments, theories and implementation of sound processing and sound synthesis, design and creation of new digital music and multimedia performance instruments, audio production, history and aesthetics of experimental music and sound art, and creative collaboration. The BC-CCM also sponsors residencies of visiting composers and media creators.
History
The Brooklyn College Center for Computer Music began when composer Robert Starer, then a member of the faculty of the Conservatory of Music at Brooklyn College, proposed the idea of creating an electronic music studio at Brooklyn College in the mid-1970s. The idea took root, and Jacob Druckman and Noah Creshevsky were the studio’s first Co-Directors. In those early days the equipment consisted largely of Moog analog synthesizers. Charles Dodge took over as Director in 1978, and he was responsible for having the studios designated as a center within Brooklyn College, the Center for Computer Music (CCM).
Charles Dodge was a pivotal figure in the history of the center. Dodge, originally from Iowa, earned a bachelor's degree at the University of Iowa and then an MA and doctorate (DMA) in music composition at Columbia University. While at Columbia, Dodge was very active at the Columbia-Princeton Electronic Music Center. In particular, he was an innovator in the emerging field of computer music composition (as opposed to analog electronic composition, the norm in the field through the 1970s). Dodge created some of the earliest notable works of computer music, including Earth’s Magnetic Field (1970), which mapped magnetic field data to musical sounds; Speech Songs (1974), which used analysis and resynthesis of human voices; and Any Resemblance is Purely Coincidental (1980), which combines live piano performance with a digitally manipulated recording of Enrico Caruso singing the aria "Vesti la giubba".
During his years as Professor of Composition and Director of the BC-CCM, Dodge brought the center to world-class standing in the field of computer music. He secured an initial donation of equipment from Bell Laboratories and then acquired large grants to fund the BC-CCM’s work. The facilities received funding through grants from the United States Office of Education, the National Endowment for the Arts, the City University of New York Faculty Research and Award Program, and the Rockefeller Foundation, and through donations from private individuals.
Under Dodge’s leadership and with the efforts of numerous students, guests, and artistic partners, the BC-CCM came to national prominence. At that time the United States was leading the world in the field of computer music, which made the BC-CCM one of the world’s most highly regarded centers. During these years, the BC-CCM presented summer workshops attended by musicians from around the world, and hosted residencies for many composers of national and international stature, including John Cage, Lejaren Hiller, Laurie Spiegel, Judy Klein, Larry Austin, the Fylkingen Group from Stockholm, EMS Sweden, Robert Dick, Bob Ostertag, Morton Subotnick, Pauline Oliveros, Jon Appleton, Noah Creshevsky, James C. Mobberley, Jean-Claude Risset, Lars-Gunnar Bodin, Sten Hanson, and the IMEB directors Françoise Barriere and Christian Clozier. This helped attract outstanding students, some of whom are leaders in the field today, including Curtis Bahn (faculty, Rensselaer Polytechnic Institute), Matthew Suttor (faculty, Yale), Jason Stanyek (faculty, NYU), and Madelyne Byrne (faculty, Palomar College).
In the early 1990s, after Charles Dodge stepped down as Director of the BC-CCM, Noah Creshevsky assumed the directorship, with George Brunner as Technical Director. It was at this time that the CCM began to host an International Electro-Acoustic Music Festival and concert series, offering performances of music, video, film, and live electronic works by artists from around the world. When Noah Creshevsky retired in 2000, George Brunner took over as Acting Director until Amnon Wolman was named Director in 2003. Douglas Cohen served as Acting Director while Wolman was on an extended leave, and Douglas Geers joined the faculty as Director of the BC-CCM in the fall of 2009.
Current faculty include composer Douglas Geers, Director; composer-producer George Brunner, Director of Music Technology; composer Doug Cohen, Associate Director; guitarist/composer David Grubbs; media artist John J.A. Jannone; audio producer Miguel Macias; and computer scientist Elizabeth Sklar.
Faculty
George Brunner
Douglas Cohen
Douglas Geers
David Grubbs
John J.A. Jannone
Miguel Macias
Nicholas Nelson
Elizabeth Sklar
References
Chadabe, J (1996). Electric Sound, Prentice Hall.
Dodge, C (1997). Computer Music, Schirmer.
Holmes, T (2008). Electronic and Experimental Music, Routledge.
Manning, P (2004). Electronic and Computer Music, Oxford University Press.
External links
Online interview with Charles Dodge
Electronic music organizations
Information technology organizations based in North America
Experimental Music Studios
Philosophy of information
The philosophy of information (PI) is a branch of philosophy that studies topics relevant to information processing, representational systems and consciousness, computer science, information science, and information technology.
It includes:
the critical investigation of the conceptual nature and basic principles of information, including its dynamics, utilisation and sciences
the elaboration and application of information-theoretic and computational methodologies to philosophical problems.
History
The philosophy of information (PI) has evolved from the philosophy of artificial intelligence, logic of information, cybernetics, social theory, ethics and the study of language and information.
Logic of information
The logic of information, also known as the logical theory of information, considers the information content of logical signs and expressions along the lines initially developed by Charles Sanders Peirce.
Cybernetics
One source for the philosophy of information can be found in the technical work of Norbert Wiener, Alan Turing (though his work has a wholly different origin and theoretical framework), William Ross Ashby, Claude Shannon, Warren Weaver, and many other scientists working on computing and information theory back in the early 1950s. See the main article on Cybernetics.
Some important work on information and communication was done by Gregory Bateson and his colleagues.
Study of language and information
Later contributions to the field were made by Fred Dretske, Jon Barwise, Brian Cantwell Smith, and others.
The Center for the Study of Language and Information (CSLI) was founded at Stanford University in 1983 by philosophers, computer scientists, linguists, and psychologists, under the direction of John Perry and Jon Barwise.
P.I.
More recently this field has become known as the philosophy of information. The expression was coined in the 1990s by Luciano Floridi, who has published prolifically in this area with the intention of elaborating a unified and coherent, conceptual frame for the whole subject.
Definitions of "information"
The concept information has been defined by several theorists.
Peirce
Charles S. Peirce's theory of information was embedded in his wider theory of symbolic communication he called the semeiotic, now a major part of semiotics. For Peirce, information integrates the aspects of signs and expressions separately covered by the concepts of denotation and extension, on the one hand, and by connotation and comprehension on the other.
Shannon and Weaver
Claude E. Shannon, for his part, was very cautious: "The word 'information' has been given different meanings by various writers in the general field of information theory. It is likely that at least a number of these will prove sufficiently useful in certain applications to deserve further study and permanent recognition. It is hardly to be expected that a single concept of information would satisfactorily account for the numerous possible applications of this general field." (Shannon 1993, p. 180). Thus, following Shannon, Weaver supported a tripartite analysis of information in terms of (1) technical problems concerning the quantification of information and dealt with by Shannon's theory; (2) semantic problems relating to meaning and truth; and (3) what he called "influential" problems concerning the impact and effectiveness of information on human behaviour, which he thought had to play an equally important role. And these are only two early examples of the problems raised by any analysis of information.
A map of the main senses in which one may speak of information is provided by the Stanford Encyclopedia of Philosophy article. The previous paragraphs are based on it.
Bateson
Gregory Bateson defined information as "a difference that makes a difference", which is based on Donald M. MacKay: information is a distinction that makes a difference.
Floridi
According to Luciano Floridi, four kinds of mutually compatible phenomena are commonly referred to as "information":
Information about something (e.g. a train timetable)
Information as something (e.g. DNA, or fingerprints)
Information for something (e.g. algorithms or instructions)
Information in something (e.g. a pattern or a constraint).
The word "information" is commonly used so metaphorically or so abstractly that the meaning is unclear.
Philosophical directions
Computing and philosophy
Recent creative advances and efforts in computing, such as semantic web, ontology engineering, knowledge engineering, and modern artificial intelligence provide philosophy with fertile ideas, new and evolving subject matters, methodologies, and models for philosophical inquiry. While computer science brings new opportunities and challenges to traditional philosophical studies, and changes the ways philosophers understand foundational concepts in philosophy, further major progress in computer science would only be feasible when philosophy provides sound foundations for areas such as bioinformatics, software engineering, knowledge engineering, and ontologies.
Classical topics in philosophy, namely, mind, consciousness, experience, reasoning, knowledge, truth, morality and creativity are rapidly becoming common concerns and foci of investigation in computer science, e.g., in areas such as agent computing, software agents, and intelligent mobile agent technologies.
According to Luciano Floridi, one can think of several ways of applying computational methods to philosophical matters:
Conceptual experiments in silico: As an innovative extension of an ancient tradition of thought experiment, a trend has begun in philosophy to apply computational modeling schemes to questions in logic, epistemology, philosophy of science, philosophy of biology, philosophy of mind, and so on.
Pancomputationalism: On this view, computational and informational concepts are considered to be so powerful that given the right level of abstraction, anything in the world could be modeled and represented as a computational system, and any process could be simulated computationally. Then, however, pancomputationalists have the hard task of providing credible answers to the following two questions:
how can one avoid blurring all differences among systems?
what would it mean for the system under investigation not to be an informational system (or a computational system, if computation is the same as information processing)?
Information and society
Numerous philosophers and other thinkers have carried out philosophical studies of the social and cultural aspects of electronically mediated information.
Albert Borgmann, Holding onto Reality: The Nature of Information at the Turn of the Millennium (Chicago University Press, 1999)
Mark Poster, The Mode of Information (Chicago Press, 1990)
Luciano Floridi, "The Informational Nature of Reality", Fourth International European Conference on Computing and Philosophy 2006 (Dragvoll Campus, NTNU Norwegian University for Science and Technology, Trondheim, Norway, 22–24 June 2006).
See also
Barwise prize
Complex system
Digital divide
Digital philosophy
Digital physics
Game theory
Freedom of information
Informatics
Information
Information art
Information ethics
Information theory
International Association for Computing and Philosophy
Logic of information
Philosophy of artificial intelligence
Philosophy of computer science
Philosophy of technology
Philosophy of thermal and statistical physics
Relational quantum mechanics
Social informatics
Statistical mechanics
Notes
Further reading
Luciano Floridi, "What is the Philosophy of Information?" Metaphilosophy, 33.1/2: 123-145. Reprinted in T.W. Bynum and J.H. Moor (eds.), 2003. CyberPhilosophy: The Intersection of Philosophy and Computing. Oxford – New York: Blackwell.
-------- (ed.), 2004. The Blackwell Guide to the Philosophy of Computing and Information. Oxford - New York: Blackwell.
Greco, G.M., Paronitti G., Turilli M., and Floridi L., 2005. How to Do Philosophy Informationally. Lecture Notes on Artificial Intelligence 3782, pp. 623–634.
External links
IEG site, the Oxford University research group on the philosophy of information.
It from bit and fit from bit. On the origin and impact of information in the average evolution - from bit to atom and ecosystem. Information philosophy which covers not only the physics of information, but also how life forms originate and from there evolve to become more and more complex, including evolution of genes and memes, into the complex memetics from organisations and multinational corporations and a "global brain", (Yves Decadt, 2000). Book published in Dutch with English paper summary in The Information Philosopher, http://www.informationphilosopher.com/solutions/scientists/decadt/
Luciano Floridi, "Where are we in the philosophy of information?" University of Bergen, Norway. Podcast dated 21.06.06.
Philosophy of artificial intelligence
Knowledge representation
Komodo Edit
Komodo Edit is a free and open source text editor for dynamic programming languages. It was introduced in January 2007 to complement ActiveState's commercial Komodo IDE. As of version 4.3, Komodo Edit is built atop the Open Komodo project. Komodo IDE is no longer supported or maintained by its developers.
History
Komodo Edit 4.0 was originally a freeware version of Komodo IDE 4.0, released on 2007-02-14.
On 2008-03-05, ActiveState Software Inc. announced that Komodo Edit 4.3 would be open-source software, licensed under the Mozilla Public License (MPL), the GNU General Public License (GPL), and the GNU Lesser General Public License (LGPL).
Open Komodo
Open Komodo is a subset of Komodo Edit, with an initial focus on web development. ActiveState created the Open Komodo code repository in August 2007, with the code planned to become available between late October and early November 2007.
On 2007-10-30, ActiveState Software Inc. announced the release of Open Komodo. The initial release was 1.0.0 Alpha 1.
Komodo Snapdragon
Komodo Snapdragon was an announced initiative from ActiveState to create an open-source development environment that promotes open standards on the web. It was to be based on Open Komodo.
Features
Many of Komodo's features are derived from an embedded Python interpreter.
Open Komodo uses the Mozilla and Scintilla code bases to provide its features, including support for many popular languages (including Python, Perl, PHP, Ruby, Tcl, SQL, Smarty, CSS, HTML, and XML), across all common operating systems (Linux, OS X, and Windows). The editor component is implemented using the Netscape Plugin Application Programming Interface (NPAPI), with the Scintilla view embedded in the XML User Interface Language (XUL) interface in the same manner as a web browser plugin.
Both Komodo Edit and IDE support user customizing via plug-ins and macros. Komodo plug-ins are based on Mozilla Add-ons and extensions can be searched for, downloaded, configured, installed and updated from within the application. Available extensions include a functions list, pipe features, additional language support and user interface enhancements.
Komodo IDE has features found in an integrated development environment (IDE), such as integrated debugger support, Document Object Model (DOM) viewer, interactive shells, source code control integration, and the ability to select the engine used to run regular expressions, to ensure compatibility with the final deployment target.
The commercial version also adds code browsing, a database explorer, collaboration, support for many popular source code control systems, and more. Independent implementations of some of these features, such as the database editor, Git support, and remote FTP file access, are available in the free version via Komodo Edit's plugin system.
References
External links
Open Komodo
Unix text editors
Windows text editors
MacOS text editors
Free text editors
Free software programmed in C
Linux text editors
Linux integrated development environments
Free integrated development environments
Gecko-based software
Free software programmed in C++
Free software programmed in Python
Software that uses XUL
Software that uses Scintilla
Software using the Mozilla license
Lossy data conversion
A lossy data conversion method is one where converting data from one storage format to another produces data that is "close enough" to be useful but may differ in some respects from the original. This type of conversion is used frequently between software packages that rely on different storage techniques. In many cases, a software package such as Microsoft Word will enable a document stored in one format to be saved in another, in particular HTML. The document saved in the lossy format may look identical, but the conversion can cause some loss of fidelity or functionality.
Types of lossy conversion
There are three basic types of lossy data conversion:
With in-place lossy data conversion, software packages such as IBM's Lotus Domino transform a proprietary rich text format into web-standard HTML as the page is requested. Because the page is served up just in time, it can rely on the existence of the software package to handle specialized data features that may not be available natively in the new format. On the other hand, the converted data may not be usable outside of the in-place context.
With file export lossy data conversion, software packages allow either a File Export to the new data storage format, or a File Save to the new data storage format. The former leaves the original content in its original format and creates a new lossy version in the named file. The latter changes the format of the existing file.
With extraction lossy data conversion, software packages take content stored by a different software package and extract out the content to the desired format. This may allow data to be extracted in a format not recognized by the original software package.
Other types of data
Graphic data (images) is often converted from one data storage format to another. Such conversions are usually described separately as either lossy data compression or lossless data compression.
See also
Round-trip format conversion
Transcoding
Computer file formats
Data compression