Political positions of Susan Collins
The political positions of Susan Collins are reflected by her United States Senate voting record, public speeches, and interviews. Susan Collins is a Republican senator from Maine who has served since 1997.
Collins is a self-described "moderate Republican". She has occasionally been referred to as a "liberal Republican" relative to her colleagues. In 2013, the National Journal gave Collins a score of 55% conservative and 45% liberal.
In 2017, The New York Times ranked Republican senators by ideology and placed Senator Collins as the most liberal Republican. According to GovTrack, Collins is the most moderate Republican in the Senate; GovTrack's 2017 analysis places her to the left of every other Republican and four Democrats. Another website, OnTheIssues.org, labels Collins a "Moderate Libertarian Liberal". It also gives politicians a "social score" and an "economic score": her social score is 60%, with 0% being the most conservative and 100% the most liberal, and her economic score is 53%, with 0% being the most liberal and 100% the most conservative. The American Conservative Union (ACU) gives her a lifetime rating of 46.03% conservative; in 2016, it gave her a score of 23%. The Americans for Democratic Action (ADA) gives her a rating of 45% liberal; in 2015, the ADA gave her a score of 30%.
According to CQ Roll Call, Collins sided with President Obama's position 75.9% of the time in 2013, one of only two Republicans to vote with him more than 70% of the time. FiveThirtyEight, which tracks Congressional votes, found that Collins voted with President Trump's positions about 67% of the time as of November 2020. Nonetheless, she voted with the GOP majority on party-line votes far more frequently during the Trump presidency than during the Obama presidency. During the Biden presidency, as of February 2022, FiveThirtyEight found that she had voted with Biden's positions approximately 75.6% of the time.
Bipartisanship and moderate Republicanism
Susan Collins has been considered by some groups and organizations to be a relatively bipartisan member of Congress. In 2018, Collins was named the most bipartisan senator for the fifth consecutive year by the Lugar Center, an organization founded by former Republican Senate colleague Richard Lugar. A study published by Congressional Quarterly found that Collins voted with her party on party-line votes 59% of the time between 1997 and 2016; currently, she is the Republican senator most likely to vote with Democrats. Her perceived bipartisanship is largely due to her roots as a Northeastern Republican. With regard to judicial nominees, however, Collins has voted with the GOP majority nearly 99% of the time over the last 22 years, though she also voted to confirm the Democratic Supreme Court nominees Sonia Sotomayor and Elena Kagan. Her office also noted that she has voted to confirm both Democratic and Republican judicial nominees 90% of the time during her tenure.
In 2014, her Senate colleague Angus King, an Independent who caucuses with the Democratic Party, endorsed her re-election campaign. In 2019, Democratic Senator Joe Manchin endorsed Collins for her 2020 re-election bid. She was also endorsed in 2020 by former Independent Senator, and 2000 Democratic vice-presidential nominee, Joe Lieberman. This bipartisanship and centrism has attracted some criticism from the conservative faction of the GOP. In 2005, the conservative magazine Human Events named her one of the top ten RINOs, its label for insufficiently conservative Republicans. Her highest conservative composite score from the National Journal was 62% in 2009, while her highest liberal composite score was 52.8% in 2006. The Tea Party threatened to challenge Collins over some of her votes. Collins, "who is fiscally conservative but holds socially moderate views, plays a unique role in the current Republican drama at a time when a strong Tea Party faction has pushed the GOP — and its leadership — to the right." She drew negative criticism from movement conservatives for her vote against repealing Obamacare.
Donald Trump
On August 8, 2016, Collins announced that she would not be voting for Donald Trump, the Republican nominee in the 2016 election. She said that as a lifelong Republican she did not make the decision lightly but felt that he was unsuitable for office, "based on his disregard for the precept of treating others with respect, an idea that should transcend politics." She considered voting for the Libertarian Party's ticket or a write-in candidate. During the Trump presidency, Collins voted with the GOP majority with much greater frequency (87% of the time on party-line votes in 2017).
Firing of FBI Director James Comey
Collins supported Trump's decision to fire FBI Director James Comey.
Travel ban
On January 28, 2017, Collins joined five other Republicans in opposing President Donald Trump's temporary ban on immigration from seven Muslim-majority countries, saying it was "overly broad and implementing it will be immediately problematic." She said, for example, that "it could interfere with the immigration of Iraqis who worked for American forces in Iraq as translators and bodyguards — people who literally saved the lives of our troops and diplomats during the last decade and whose lives are at risk if they remain in Iraq." She also objected to the religious aspects of the ban, saying, "As I stated last summer, religious tests serve no useful purpose in the immigration process and run contrary to our American values."
Investigations
Collins stated in February 2017 that she was open to subpoenaing President Trump's tax returns as part of an investigation into Russian interference in the 2016 presidential election. She also said that she was open to both public and secret hearings into Michael T. Flynn's covert communications with Russian officials.
In July 2017, after President Trump said it would be a violation for Special Counsel Robert Mueller to investigate his and his family's finances unrelated to the probe, Collins commented, "I understand how difficult and frustrating this investigation is for the president, but he should not say anything further about the special counsel, his staff or the investigation."
In October 2017, Collins stated her support for the Senate Intelligence Committee calling back Hillary Clinton campaign chairman John Podesta and former Democratic National Committee chairwoman Debbie Wasserman Schultz after it was revealed that the Clinton campaign and the DNC had paid for an opposition research dossier on Trump; under the committee's earlier questioning, Podesta and Wasserman Schultz had denied knowledge of any payment.
In a January 2018 interview, after reports surfaced that President Trump had considered firing Special Counsel Robert Mueller the previous June, Collins stated her openness to legislation protecting Mueller from being fired and her confidence in United States Deputy Attorney General Rod Rosenstein: "It probably wouldn't hurt for us to pass one of those bills. There are some constitutional issues with those bills, but it certainly wouldn't hurt to put that extra safeguard in place given the latest stories, but again, I have faith in the deputy attorney general." She added that firing Rosenstein would be a mistake and compared the scenario to the Saturday Night Massacre.
In November 2018, Collins expressed concern over comments made by Acting Attorney General of the United States Matthew Whitaker and support for the Senate bringing up "legislation that would put restrictions on the ability of President Donald Trump to fire the special counsel", adding that debating and passing the bill in the Senate would "send a powerful message that Mr. Mueller must be able to complete his work unimpeded."
Impeachment inquiry
Senator Collins has made several comments related to the impeachment inquiry against Donald Trump and statements the White House has made regarding the inquiry. She emphasized multiple times that it would be inappropriate for her to come to any conclusions prior to evidence being presented in the Senate, given her duties as a juror in such an event. When asked about the transcript of the July 25, 2019 telephone conversation between President Trump and Ukrainian President Volodymyr Zelensky, in which the resumption of a closed Ukrainian investigation into Hunter Biden was discussed, she said that it "raises a number of important questions." She followed up with the comment, "If there are articles of impeachment I would be a juror, and as a juror I think it's inappropriate for me to reach conclusions about evidence or to comment on the proceedings in the House."
On October 5, 2019, reacting to Trump's comments about China becoming involved in investigating his political opponents, Collins stated, "I thought the president made a big mistake by asking China to get involved in investigating a political opponent." She went on to state, "It's completely inappropriate." Collins said she hoped the impeachment process "will be done with the seriousness that any impeachment proceeding deserves" and repeated the point about her being a juror in the event of an impeachment trial: "Should the articles of impeachment come to the Senate — and right now I'm going to guess that they will — I will be acting as a juror as I did in the Clinton impeachment trial." During the same interview, Collins stated she was concerned that comments made by Representative Adam Schiff, chair of the House Intelligence Committee, "misrepresented and misled people about what was in the transcript in the call," referring to the July 25, 2019 call between Trump and Ukrainian President Zelensky.
Reacting to comments made by Acting White House Chief of Staff Mick Mulvaney about the Trump administration linking the release of military aid to Ukraine to the resumption of an investigation into Hunter Biden, Collins stated, "I was surprised that the president's chief of staff made that comment." She went on to state, "He then appeared to walk it back a few hours later. It shows why it's important to look at all the evidence in an objective way and that's what I have pledged to do if articles of impeachment reach the Senate which I suspect they will."
Senate trial
Senator Collins voted with her party on most initial motions, with the 53-47 votes falling along party caucus lines. On January 22, 2020, Collins was the first Republican to break from the GOP, voting with Democrats on a motion to increase the time allotted for responses.
Foreign policy and terrorism
On October 10, 2002, Collins voted with the majority in favor of the Iraq War Resolution authorizing President George W. Bush to go to war against Iraq. In November 2007, Collins was one of four Republicans to vote for a Democratic proposal of $50 billion that would condition further spending on a timeline for withdrawing troops, mandating that a withdrawal begin 30 days after the bill was enacted as part of a goal of removing all US troops from Iraq by December 15, 2008. The bill failed to get the sixty votes needed to overcome a filibuster. In April 2008, Collins and Democrats Ben Nelson and Evan Bayh met with Douglas Lute, President Bush's advisor on Iraq and Afghanistan, to express support for a prohibition on spending for major reconstruction projects; their proposal would have required Iraq to pay for the training and equipping of its own security forces and to reimburse the American military for the estimated $153 million a month it spent on fuel for combat operations in Iraq. Collins stated after the meeting that while the administration's view was not entirely the same as the senators', it at least seemed open to the idea. In June 2014, as violence grew in Iraq under the leadership of Prime Minister Nouri al-Maliki, Collins stated that the violence would have developed more slowly had a residual NATO force been present in Iraq and that the question was whether air strikes would be effective.
On September 19, 2007, she voted against a motion to invoke cloture on Senator Arlen Specter's amendment proposing to restore habeas corpus for those detained by the United States.
Collins, joining the Senate majority, voted in favor of the Protect America Act, an amendment to the Foreign Intelligence Surveillance Act of 1978. She later sponsored the Accountability in Government Contracting Act of 2007, approved unanimously by the Senate, which would create more competition among military contractors.
Agreeing with the majority in both parties, Collins voted in favor of the Kyl-Lieberman Amendment, which called for designating Iran's Islamic Revolutionary Guard Corps a terrorist organization and which critics argued could be read as an authorization for military force against Iran.
In January 2010, Collins was one of six senators to sign a letter to the Justice Department expressing concern "about using the U.S. criminal justice system for trying enemy combatants" and urged a reconsideration of the "decision to try Khalid Sheikh Mohammed and the other alleged conspirators in the September 11, 2001 attacks in the United States District Court for the Southern District of New York." The senators cited the September 11 attacks as an act of war with the perpetrators being "war criminals".
In December 2010, Collins voted for the ratification of New START, a nuclear arms reduction treaty between the United States and the Russian Federation that obliged both countries to deploy no more than 1,550 strategic warheads and 700 launchers over the next seven years and provided for a continuation of the on-site inspections that had halted when START I expired the previous year. It was the first arms treaty with Russia in eight years.
In September 2014, Collins voted for President Obama's plan to train and arm moderate Syrian rebels to battle the Islamic State as part of the administration's military campaign to destroy that group. She noted that she believed she had not been given enough information befitting her position as a member of the Senate Intelligence Committee and expressed concern "that the fighters that we train will be focused on what really motivates them, which is removing (Syrian President Bashar al-) Assad, not fighting ISIS."
In September 2016, in advance of UN Security Council Resolution 2334 condemning Israeli settlements in the occupied Palestinian territories, Collins signed an AIPAC-sponsored letter urging President Barack Obama to veto "one-sided" resolutions against Israel. In 2017, Collins supported an Anti-Boycott Act, which would make it legal for U.S. states to refuse to do business with contractors that engage in boycotts against Israel. On August 15, 2019, after the Israeli government announced that Representatives Ilhan Omar and Rashida Tlaib were barred from entering Israel, Collins tweeted that the Trump administration had made "a mistake in urging Israel to prevent them from entering the country" and instead "should have encouraged Israel to welcome the visit as an opportunity for Reps. Tlaib and Omar to learn from the Israeli people."
In August 2017, after President Trump threatened that North Korea would be "met with fire and fury like the world has never seen" if it continued threatening the United States, Collins said in a statement, "Given the credible and serious threat North Korea poses to our country, and in particular to U.S. forces and our allies in the region, I welcome the administration's success in securing new economic sanctions against North Korea at the United Nations." In July 2018, Collins called troubling a Washington Post report that North Korea was allegedly unwilling to denuclearize, citing North Korea's "long history of cheating on agreements that it's made with previous administrations." She recalled that her support for Trump communicating with North Korean leader Kim Jong-un was "because I do believe that has the potential for increasing our safety and eventually leading to the denuclearization of North Korea" and added that this could be achieved through "verifiable, unimpeded, reliable inspections."
In January 2018, in response to the Trump administration's failure to implement congressionally approved sanctions on Russia, Collins stated that it was confirmed Russia had tried to interfere in the 2016 U.S. presidential election, adding that "not only should there be a price to pay in terms of sanctions, but also we need to put safeguards in place right now for the elections for this year." She noted that the legislation had received bipartisan support and predicted that Russia would also attempt to interfere in the 2018 elections.
In May 2018, Collins and fellow Maine senator Angus King introduced the PRINT Act, a bill that, if enacted, would halt collection of countervailing and antidumping duties on Canadian newsprint and direct the U.S. Department of Commerce to conduct a study of the economic health of the printing and publishing industries. Proponents of the bill argued it would offer a lifeline to the publishing industry amid newsprint price increases, while critics accused it of setting "a dangerous precedent for future investigations into allegations of unfair trade practices."
In January 2019, Collins was one of eleven Republican senators to vote to advance legislation intended to block President Trump's plan to lift sanctions against three Russian companies. Collins told reporters that she disagreed with "the easing of the sanctions because I think it sends the wrong message to Russia and to the oligarch and close ally of Mr. Putin, Oleg Deripaska, who will in my judgement continue to maintain considerable [ownership] under the Treasury's plan."
Also in January 2019, following a report that President Trump had expressed interest in withdrawing from NATO several times during the previous year, Collins was one of eight senators to reintroduce legislation to prevent such a withdrawal by requiring two-thirds approval from the Senate for a president to suspend, terminate, or withdraw American involvement in the alliance.
In 2019, after President Trump cut aid to Honduras, Guatemala, and El Salvador as part of an effort to curb immigration to the United States from those countries, Collins opined "that cutting aid may have the opposite effect" and could possibly "make the lives of these individuals even worse and thus encourage more of them to flee the countries that they are now leaving. So I'd actually like to see the president consider a different approach, an opposite approach." She added that increasing aid could "help the countries stem some of the problems that are causing people to leave."
Afghanistan
In September 2009, Collins stated that she was unsure if adding more American troops in Afghanistan was the solution to ending the conflict, but cited the need for "more American civilians to help build up institutions" and growth of the Afghan army. She opined that the US was "dealing with widespread corruption, a very difficult terrain, and I'm just wondering where this ends and how we'll know when we've succeeded."
In October 2010, along with Chuck Grassley, Tom Coburn, and Claire McCaskill, Collins was one of four senators to send a letter to President Obama requesting that he remove Arnold Fields from his position as Special Inspector General for Afghanistan Reconstruction, citing their repeated expressions of concern about the SIGAR and their disappointment with the Obama administration's "ongoing failure to take decisive action."
In August 2017, after President Trump gave a national speech on the war in Afghanistan announcing an increase in troops there and a prioritization of partnerships among the US, Pakistan, and India, Collins commended Trump for providing clarity after years in which the US lacked a "clear focus and defined strategy" with respect to Afghanistan, and said he had made the case that the Afghan government needed to participate "in defending its people, ending havens for terrorists, and curtailing corruption." Collins confirmed she had spoken to Homeland Security Advisor Tom Bossert.
China
Ahead of a June 2013 informal retreat between President Obama and President of the People's Republic of China Xi Jinping, Collins cosponsored legislation that would expand American law to authorize the Commerce Department to impose "countervailing duties" in response to subsidized imports, by mandating that the Commerce Department investigate whether currency manipulation counts as a form of subsidization. The bill also contained a provision mandating that the Treasury Department designate countries with "fundamentally misaligned currencies" and was introduced shortly after the Chinese currency had risen to its highest level since 2005.
Following reports of China-based hackers breaking into the computer networks of the U.S. government personnel office and stealing information identifying at least 4 million federal workers in June 2015, Collins commented that the hacking was "yet another indication of a foreign power probing successfully and focusing on what appears to be data that would identify people with security clearances."
In April 2018, Collins stated her belief that the US needed "a more nuanced approach" in dealing with China but gave President Trump "credit for levying these tariffs against the Chinese, with whom we've talked for a decade about their unfair trade practices and their theft of intellectual property from American firms." She added that while the US needed to toughen its stance against China, it would need to do so in a manner that did not create "a trade war and retaliation that will end up with our European and Asian competitors getting business that otherwise would have come to American farmers."
In June 2018, Collins cosponsored a bipartisan bill that would reinstate penalties on ZTE for export control violations in addition to barring American government agencies from either purchasing or leasing equipment or services from ZTE or Huawei. The bill was offered as an amendment to the National Defense Authorization Act and was in direct contrast to the Trump administration's announced intent to ease sanctions on ZTE.
In January 2019, Collins was a cosponsor of legislation unveiled by Marco Rubio and Mark Warner intended to "combat tech-specific threats to national security posed by foreign actors like China and ensure U.S. technological supremacy by improving interagency coordination across the U.S. government" through the formation of a White House Office of Critical Technologies and Security. The proposed office would be responsible for coordinating across agencies and for developing a long-term, whole-of-government strategy to protect "against state-sponsored technology theft and risks to critical supply chains."
In February 2019, amid a report by the Commerce Department that ZTE had been caught illegally shipping goods of American origin to Iran and North Korea, Collins was one of seven senators to sponsor a bill reimposing sanctions on ZTE in the event that ZTE did not honor both American laws and its agreement with the Trump administration.
In February 2019, Collins signed a letter to President Trump noting that China "has not opened their market to fresh potatoes from the United States and has left U.S. potato growers without a clear path forward on how to resolve concerns that are standing in the way of opening this important market" and requesting that the administration treat the issue with high priority in its talks with China regarding a trade deal.
In February 2019, during ongoing disputes between the United States and China on trade, Collins was one of ten senators to sign a bipartisan letter to Homeland Security Secretary Kirstjen Nielsen and Energy Secretary Rick Perry asserting that the American government "should consider a ban on the use of Huawei inverters in the United States and work with state and local regulators to raise awareness and mitigate potential threats" and urged them "to work with all federal, state and local regulators, as well as the hundreds of independent power producers and electricity distributors nation-wide to ensure our systems are protected."
On July 15, 2019, Collins introduced the Huawei Prohibition Act of 2019 with Utah Senator Mitt Romney, a bill that would bar the removal of companies such as Huawei from the U.S. Commerce Department's Entity List of sanctioned companies until it was certified to Congress that neither the company in question nor any of its senior officers had engaged in actions violating sanctions imposed by the United States or the United Nations in the five years preceding the certification. Collins was also part of a group of senators that signed a July 17 letter to Senate leadership requesting that Chairman Jim Inhofe and ranking member of the U.S. Senate Armed Services Committee Jack Reed include a provision similar to the Collins-Romney bill in the final National Defense Authorization Act for Fiscal Year 2020, calling it "imperative that Congress address known threats to the security of technology in the United States", identifying Huawei as one such threat, and asserting that "Huawei must meet strict standards to assure it is no longer a national security threat before being removed from the entities list."
Cuba
In 2016, Collins authored a provision to allow aircraft traveling to or returning from Cuba on transatlantic routes to make refueling stops in the US at the Bangor, Maine airport. The provision was approved as part of an amendment to a spending bill and drew an objection from the Treasury Department, which warned that the provision's language could be used by airlines or countries not allowed to fly in the US to land planes on American soil.
In May 2019, Collins was one of thirteen senators to support a bipartisan proposal that would remove restrictions on private financing for exports in an effort to remove a barrier for farmers interested in selling products to Cuba. Collins and Angus King said in a statement that the proposal was intended to even "the playing field for American farmers to open up a significant new export opportunity."
Defense
In August 2019, Collins was one of eight senators to sign a letter led by Marco Rubio and Pat Toomey to United States Secretary of Defense Mark Esper calling for an expansion of the F-35 program in response to the Trump administration ejecting Turkey from the group of F-35 partner nations.
Iran
Collins was one of seven Senate Republicans who did not sign a March 2015 letter to the leadership of the Islamic Republic of Iran attempting to cast doubt on the Obama administration's authority to engage in nuclear-proliferation negotiations with Iran. In reference to the letter, Collins told reporters, "I don't think that the ayatollah is going to be particularly convinced by a letter from members of the Senate, even one signed by a number of my distinguished and high-ranking colleagues." A deal between the United States and other world powers with the stated aim of keeping Iran from being able to produce an atomic weapon for at least 10 years was announced in July 2015. Collins was reluctant to evaluate the effectiveness of the agreement as described: "A verifiable diplomatic agreement that prevents Iran’s pursuit of nuclear weapons and dismantles its nuclear infrastructure is the desired outcome; however, it is far from clear that this agreement will accomplish those goals." In September 2015, Collins announced her opposition to the Joint Comprehensive Plan of Action in a Senate floor speech, stating that the agreement was "fundamentally flawed because it leaves Iran as capable of building a nuclear weapon at the expiration of the agreement as it is today" and predicted that following the agreement's expiration, Iran "will be a more dangerous and stronger nuclear threshold state – exactly the opposite of what this negotiation should have produced."
In September 2016, Collins was one of thirty-four senators to sign a letter to United States Secretary of State John Kerry advocating that the United States use "all available tools to dissuade Russia from continuing its airstrikes in Syria" launched from an Iranian airbase near Hamadan, airstrikes "that are clearly not in our interest", and stating that the US should make clear that the airstrikes violated "a legally binding Security Council Resolution" on Iran.
In June 2017, Collins voted for legislation that imposed new sanctions on Russia targeting the country's mining, metals, shipping and railways in response to Russian meddling in the 2016 Presidential election and implemented new sanctions on Iran regarding its ballistic missile program as well as other activities that were not related to the Joint Comprehensive Plan of Action. In July 2017, Collins voted in favor of the Countering America's Adversaries Through Sanctions Act that placed sanctions on Iran together with Russia and North Korea.
In August 2018, after President Trump imposed sanctions on Iran while remaining "open to reaching a more comprehensive deal that addresses the full range of the regime's malign activities, including its ballistic missile program and its support for terrorism", Collins opined that it was likely unilateral sanctions would make Iran "less likely to come back to the negotiating table."
In June 2019, following President Trump's decision to halt an air strike on Iran planned as a response to an American surveillance drone being downed by Iran, Collins stated that the US could not "allow Iran to continue to launch this kind of attack" but warned miscalculations by either side "could lead to a war in the Middle East, and that is something I don’t think anyone wants to see happen."
Saudi Arabia and Yemen
In March 2018, Collins was one of five Republican senators to vote against tabling a resolution that would cease the U.S. military's support for Saudi Arabia's bombing operations in Yemen. In August, Collins was one of nine senators, and one of two Republicans among them, to sign a letter to Secretary of State Mike Pompeo urging the Trump administration to comply with a law requiring certification that Saudi Arabia and the United Arab Emirates were meeting humanitarian criteria or else lose American military assistance. The letter argued that the continuation of the Yemen civil war posed a threat to American interests. In October 2018, Collins was one of seven senators to sign a letter to Secretary of State Pompeo expressing that they found it "difficult to reconcile known facts with at least two" of the Trump administration's certifications that Saudi Arabia and the United Arab Emirates were attempting to protect Yemeni civilians and were in compliance with US laws on arms sales, citing their inability to understand "a certification that the Saudi and Emirati governments are complying with applicable agreements and laws regulating defense articles when the [memo] explicitly states that, in certain instances, they have not done so." In December, Collins was one of seven Republican senators to vote for the resolution withdrawing American armed forces' support for the Saudi-led coalition in Yemen and for an amendment by Todd Young ensuring that mid-air refueling between American and Saudi aircraft did not resume.
In February 2019, Collins was one of seven senators to reintroduce legislation requiring sanctions on Saudi officials involved in the killing of Jamal Khashoggi and seeking to address support for the Yemen civil war through prohibiting some weapons sales to Saudi Arabia and U.S. military refueling of Saudi coalition planes. Collins was one of seven Republicans who voted to end US support for the war in Yemen in February 2019, and, in May 2019, she was again one of seven Republicans who voted to override Trump's veto of the resolution on Yemen. In June 2019, Collins was one of seven Republicans to vote to block President Trump's Saudi arms deal providing weapons to Saudi Arabia, United Arab Emirates and Jordan, and was one of five Republicans to vote against an additional 20 arms sales.
Social issues
Abortion laws
Collins is a pro-choice Republican. The now-defunct Republican Majority for Choice, a pro-choice Republican PAC, supported Senator Collins. By July 2018, Collins was one of three Republican Senators, the others being Shelley Moore Capito and Lisa Murkowski, who publicly supported the Roe v. Wade decision. In 2020, she was one of 13 GOP Senators who declined to sign an amicus brief asking the Supreme Court to overturn Roe v. Wade. She did, however, declare that she is opposed to repealing the Hyde Amendment, which prohibits federal funding for abortions.
On October 21, 2003, Collins joined Senate Democrats as one of three Republican Senators to oppose the Partial-Birth Abortion Ban Act. She did, however, join the majority of Republicans in voting for Laci and Conner's Law, which increased penalties for killing a fetus while committing a violent crime against the mother. On March 30, 2017, Collins again joined Senator Lisa Murkowski (R-AK) in breaking party lines on a vote, this time against a bill allowing states to defund Planned Parenthood; Vice President Pence was forced to break a 50–50 tie in favor of the bill. She was one of three Republicans, with Capito and Murkowski, who opposed a bill to repeal the Affordable Care Act that included a provision to defund Planned Parenthood. She was one of seven Republicans, including Capito and Murkowski, who voted against a bill to repeal the ACA without replacement that would also have defunded Planned Parenthood. In 2018, Collins voted with the majority of Senate Democrats against a bill that would ban abortion after 20 weeks of pregnancy. She was also one of two Republicans who voted against an amendment to ban federal funds for facilities that provide abortion services and family planning. In 2019, Collins joined a majority of Republicans, and three Democrats, in voting for a bill that required doctors to provide care and medical intervention for infants born alive after a failed abortion. Also in 2019, she announced her opposition to laws that ban abortions even in cases of rape or incest, stating specifically that such laws run counter to Supreme Court rulings.
Planned Parenthood, which rates politicians' support for pro-choice issues, has given Collins a lifetime rating of 70%. In 2017, Planned Parenthood gave her a rating of 61%. Also in 2017, Planned Parenthood gave Collins an award given to Republicans who vote closely in line with their positions. NARAL Pro-Choice America, which also provides ratings, gave her a score of 90% in 2014 and a 45% in 2017. Conversely, National Right to Life, which opposes abortion and rates support for pro-life issues, gave Collins a rating of 25% during the 114th Congress and a 50% in 2019.
Age discrimination
In February 2019, along with Democrats Patrick Leahy and Bob Casey, Jr. and Republican Chuck Grassley, Collins was one of four senators to introduce the Protecting Older Workers Against Discrimination Act (POWADA), a bill that sought to undo the standard imposed by the 2009 U.S. Supreme Court ruling in Gross v. FBL Financial Services and restore the requirement that plaintiffs show only that age was a factor in an adverse employment decision, as opposed to the deciding factor.
Agriculture
In September 2017, Collins was one of four senators to introduce the Cultivating Revitalization by Expanding American Agricultural Trade and Exports Act (CREAATE Act), legislation that would increase funding for both the Market Access Program (MAP) and the Foreign Market Development Program (FMDP) of the Agriculture Department. The National Corn Growers Association (NCGA) stated that the CREAATE Act would double annual MAP funding from $200 million to $400 million, and increase annual FMDP funding from $34.5 million to $69 million over a five-year period.
In November 2017, following an announcement of the Agriculture Department's National Institute of Food and Agriculture (NIFA) awarding a grant of $388,000 to the University of Maine at Orono, Collins and fellow Maine Senator Angus King said the funding would "support the University of Maine's cutting-edge research into potato breeding and help the state build on our strong agricultural traditions so we can make Maine potato products more economically resilient."
In February 2018, Collins and Democrat Bob Casey introduced the Organic Agriculture Research Act of 2018, a bill reauthorizing increased funding for the Organic Agriculture Research and Extension Initiative (OREI) of the USDA to ensure continued investment in organic agricultural research. The bill reauthorized OREI for five more years and increased its funding from $30 million in fiscal year 2019 to $50 million in fiscal year 2023. Collins commented that the legislation would "provide some funding for research into organic farming methods and help offset part of the cost that the state uses to certify farms as complying with USDA standards for organic farming."
In August 2018, Collins was one of thirty-one senators to vote against the Protect Interstate Commerce Act of 2018, a proposed amendment to the 2018 United States farm bill that would have barred states from restricting sales of agricultural products that are not prohibited under federal law. After the farm bill passed in December, Collins and Angus King released a statement expressing their satisfaction that the amendment had not been included, as there were a "number of state laws in Maine that would have been undermined if this amendment was adopted, including those on crate bans for livestock, consumer protections for blueberry inspections, and environmental safeguards for cranberry cultivation."
In 2019, Collins worked with Democrats Patrick Leahy and Sherrod Brown and fellow Republican David Perdue on a bipartisan effort meant to ensure that students have access to local foods, which would also benefit local farmers and childhood health. The proposal would assist the Farm to School Grant Program administered through the Agriculture Department, raise the program's authorized funding level from $5 million to $15 million, and increase the maximum grant award to $250,000.
In March 2019, Collins was one of thirty-eight senators to sign a letter to United States Secretary of Agriculture Sonny Perdue warning that dairy farmers "have continued to face market instability and are struggling to survive the fourth year of sustained low prices" and urging his department to "strongly encourage these farmers to consider the Dairy Margin Coverage program."
Animal fighting
In February 2019, Collins and Democrat Kamala Harris introduced the Help Extract Animals from Red Tape Act (HEART Act), a bill meant to assist animals rescued by the federal government from animal fighting operations. Collins stated that animals needed to be placed in "loving homes as soon as it is safely possible" and that the HEART Act "would reduce the minimum amount of time animals must be held in shelters and alleviate the financial burdens that fall on those who care for seized animals."
Cybersecurity
In February 2019, Collins and Rhode Island Senator Jack Reed introduced the Cybersecurity Disclosure Act of 2019, a bill that would require publicly traded companies to disclose to investors in their Securities and Exchange Commission filings whether any member of the company's board of directors is a cybersecurity expert. Collins stated that cyberattacks had become more common and called on Congress to take action "to better protect Americans from hackers attempting to steal sensitive data and personal information." Collins also cited statistics from the Identity Theft Resource Center and Deloitte demonstrating an increased number of cyberattacks across numerous industries in the United States, and noted that financial institutions had named cybersecurity as one of the top three risks expected to rise in importance for businesses over the following two years. The bill was referred to the Senate Banking, Housing, and Urban Affairs Committee for consideration.
Elections
On December 21, 2017, Collins was one of six senators to introduce the Secure Elections Act, legislation that would authorize block grants for states to update outdated voting technology, form an independent panel of experts to develop cybersecurity guidelines for election systems that states could choose to implement, and offer states resources to put the recommendations in place.
In October 2018, Collins cosponsored, together with Chris Van Hollen and Ben Cardin, a bipartisan bill that, if passed, would block "any persons from foreign adversaries from owning or having control over vendors administering U.S. elections." The Protect Our Elections Act would require companies involved in administering elections to disclose foreign owners and to inform local, state, and federal authorities if that ownership changes. Companies failing to comply would face fines of $100,000.
In May 2019, Collins and Democrat Amy Klobuchar introduced the Invest in Our Democracy Act of 2019, legislation that would direct the Election Assistance Commission to provide grants supporting continuing education in election administration or cybersecurity for election officials and their employees. Klobuchar stated that the bill "would ensure that election officials have the training and resources to improve cyber-defenses ahead of future elections."
Hate crimes
In April 2017, along with Democrats Kamala Harris and Dianne Feinstein and fellow Republican Marco Rubio, Collins cosponsored a resolution condemning hate crimes related to ethnicity, religion, and race. The resolution's text cited incidents reflecting an uptick in anti-Semitic hate crimes throughout the United States and incidents of Islamic centers and mosques being burned in Texas, Washington, and Florida, in addition to asking the federal government to cooperate with state and local officials to speed up its investigations into hate crimes. In a statement, Collins said, "The recent rise in the number of hate crimes is truly troubling and is counter to American values. No individual in our society should have to live in fear of violence or experience discrimination."
LGBT issues
In 2004, Susan Collins was one of six Republicans who voted against the Federal Marriage Amendment, a proposed constitutional amendment intended to ban same-sex marriage. In June 2006, she voted against the Federal Marriage Amendment a second time, joining six other Republicans, including Olympia Snowe and John McCain, in voting against the effort to ban gay marriage.
On December 18, 2010, Collins voted in favor of the Don't Ask, Don't Tell Repeal Act of 2010 and was the primary Republican sponsor of the repeal effort.
In May 2012, in their capacity as members of the Homeland Security and Governmental Affairs Committee, Collins and Joe Lieberman sponsored a bill intended to extend benefits to same-sex partners of American government workers. They stated that the legislation was meant to help the government compete with the private sector for top employees and to assure fair treatment for those in same-sex relationships, rather than to address the issue of same-sex marriage. The bill cleared the committee on a voice vote.
In September 2013, Collins and Democrat Tammy Baldwin introduced the Domestic Partnership Benefits and Obligations Act of 2013, legislation that would extend employee benefit programs to cover federal employees' same-sex domestic partners to the same extent as married opposite-sex spouses of federal employees. Collins stated that implementing the bill would be "both fair policy and good business practice" and that the federal government "must compete with the private sector when it comes to attracting the most qualified, skilled, and dedicated employees."
Collins stated her support for same-sex marriage on June 25, 2014, after previously declining to publicly state her views, citing a policy of not discussing state-level issues as well as a belief that each state's voters should decide the issue. When she won reelection in 2014, she became the first Republican senator to be reelected while supporting same-sex marriage.
Collins voted for the Employment Non-Discrimination Act to prevent job discrimination based on sexuality and gender identity. In 2015, she was one of 11 Republican Senators who voted to give social security benefits to same-sex couples in states where same-sex marriage was not yet recognized. The Human Rights Campaign, which rates politicians' support for LGBT issues, gave Collins a score of 85% during the 114th Congress. She received a 33% during the 115th Congress.
In 2017, Collins and New York Senator Kirsten Gillibrand "introduced a bipartisan amendment to protect transgender service members from President Trump's plan to ban them from the military." Collins and Gillibrand were joined by Jack Reed in reintroducing the legislation in February 2019, after the Supreme Court upheld the Trump administration's ban on transgender individuals serving in the military. In a statement, Collins said that "if individuals are willing to put on the uniform of our country and risk their lives for our freedoms, then we should be expressing our gratitude to them, not trying to kick them out of the military." In 2019, Collins co-sponsored legislation with Jeff Merkley (D-Oregon) to extend the Civil Rights Act of 1964 to people regardless of sexual orientation or gender identity. She later withdrew her co-sponsorship of that legislation. In May 2019, she also introduced legislation, co-sponsoring the bill with Independent Senator Angus King (Maine) and Democratic Senator Tim Kaine (Virginia), to prohibit housing discrimination against LGBT people.
In February 2021, Collins voted to confirm Pete Buttigieg as Secretary of Transportation, making him the first openly gay presidential cabinet member to be confirmed by the Senate. After previously sponsoring the Equality Act, Collins announced that she would not co-sponsor it in the 117th Congress. In March 2021, Collins voted in favor of a failed amendment that would have rescinded funds from public schools that allow trans youth to participate on the sports teams corresponding to their gender identity, which was seen as a departure from her past support for LGBT rights. Later in the month, she was one of two Republicans on the HELP Committee to vote in favor of advancing the nomination of Dr. Rachel Levine, a trans woman and physician, in a 13-9 vote, and one of two Republicans who voted to confirm Dr. Levine in a vote of the full Senate.
Maternal mortality
In June 2019, Collins and Democrat Debbie Stabenow introduced legislation to grant funding for new community partnerships that would respond to the high rate of maternal and infant mortality in the US.
Opioids
In 2016, Collins authored the Safe Treatments and Opportunities to Prevent Pain Act, a provision intended to encourage the National Institutes of Health to further its research into opioid therapy alternatives for pain management, and the Infant Plan of Safe Care Act, which mandated that states ensure safe care plans are developed for drug-dependent infants before they are discharged from hospitals. These provisions were included in the Comprehensive Addiction and Recovery Act, legislation that created programs and expanded treatment access alongside implementing $181 million in new spending as part of an attempt to curb heroin and opioid addiction.
In May 2017, Collins was one of six senators to introduce the Medicaid Coverage for Addiction Recovery Expansion Act, legislation that would allow treatment facilities with up to 40 beds to be reimbursed by Medicaid for 60 consecutive days of inpatient services, a modification of the Medicaid Institutions for Mental Disease law, which authorized Medicaid coverage only for facilities with 16 beds or fewer. Each senator who introduced the bill said that their state had been affected by opioid addiction and would benefit from the bill's passage.
In December 2017, Collins was one of nine senators to sign a letter to Senate Majority Leader Mitch McConnell and Senate Minority Leader Chuck Schumer describing opioid use as a non-partisan issue that is "ravaging communities in every state and preys upon individuals and families regardless of party affiliation" and requesting that the pair "make every effort to ensure that new, substantial and sustained funding for the opioid epidemic is included in any legislative package."
In September 2018, Collins authored two bills as part of the "Opioid Crisis Response Act", a bipartisan package of 70 Senate bills that would alter programs across multiple agencies in an effort to prevent opioids from being shipped through the U.S. Postal Service and to grant doctors the ability to prescribe medications designed to wean patients off opioid addiction. The package passed 99 to 1.
In April 2019, Collins cosponsored the Protecting Jessica Grubb's Legacy Act, legislation that would allow the medical records of patients being treated for substance use disorder to be shared among healthcare providers when the patient has consented to providing the information. Cosponsor Shelley Moore Capito stated that the bill would also prevent medical providers from unintentionally providing opioids to individuals in recovery.
Pharmaceutical drugs
In 2015, Collins recounted that drug manufacturers had claimed their price increases were necessary to cover research and development costs, and that she happened to know "in the case of [the antimalarial drug] Daraprim, that it's been around since the 1950s, and Turing [which owns Daraprim] was founded in 2015."
In 2016, Collins and Democrat Claire McCaskill signed a letter to Pfizer CEO Ian Read in which they noted that drug overdoses were the leading cause of accidental death in the US and requested an explanation of "the number and amount of price increases and decreases taken by Hospira between 2009 and 2014 for naloxone" along with "how Hospira came to the decision to raise the price, as well as how much the increases contributed to research and development into improving the product, and whether any issues of patient access arose."
In January 2017, along with Chuck Grassley, Sherrod Brown, and Bob Casey, Jr., Collins introduced the Pharmacy and Medically Underserved Areas Enhancement Act, a bill that would grant Medicare the ability to reimburse for immunizations, preventive screenings, and chronic disease management, and recognize pharmacists as healthcare providers in "medically underserved areas" through an amendment of title XVIII of the Social Security Act.
In December 2017, along with Democrats Amy Klobuchar and Tammy Baldwin, Collins was one of three senators to sign a letter to Strongbridge Biopharma CEO Matthew Pauls that stated their commitment "to combatting sudden astronomical price increases as well as any anticompetitive conduct and attempts to game the regulatory process at the expense of Americans in need of life-saving therapies." The senators requested that the company roll back the price increase on Keveyis, confirm compliance with relevant laws, and provide a written response regarding its acquisition of dichlorphenamide.
In January 2019, Collins sent a letter to the Department of Health and Human Services citing a Wall Street Journal article that reported how over three dozen pharmaceutical companies had raised the prices of hundreds of drugs on New Year's Day, and requested that the department take action in keeping with a Trump administration pledge to alter drug rebates. Collins wrote that the price increases were "shocking, but they are unfortunately not unusual, nor are they unexpected" and noted the potential necessity of legislation to reform the drug rebate system.
In February 2019, Collins was a cosponsor of the Creating and Restoring Equal Access To Equivalent Samples (CREATES) Act of 2019, a bipartisan bill preventing brand-name pharmaceutical and biologic companies from stifling competition through blockage of the entry of lower-cost generic drugs into the market. The CREATES Act was placed on the U.S. Senate Legislative Calendar under General Orders.
In June 2019, when Collins and other members of the Problem Solvers Caucus announced guiding principles as a framework for legislation related to lowering the costs of prescription drugs, she said in part, "I look forward to working with our partners in the House to pass legislation to help Americans facing exorbitant costs for the medications they need, particularly seniors, 90 percent of whom take a prescription drug."
In July 2019, along with Democrats Jeanne Shaheen and Tom Carper and fellow Republican Kevin Cramer, Collins was one of four senators to cosponsor the Insulin Price Reduction Act, a bill that would prohibit insurers and pharmacy benefit managers (PBMs) from engaging in rebate schemes with insulin manufacturers, bring 2020 list prices in line with 2006 levels, and mandate that price hikes not exceed medical inflation.
Robocalls
In July 2019, Collins introduced the Anti-Spoofing Penalties Modernization Act of 2019, a bill that would double the penalties for robocalling from $10,000 to $20,000 upon violation and increase the maximum fine from $1 million to $2 million. Collins reflected on the 93 million robocalls received in her home state of Maine the previous year and asserted that ending illegal robocalls would "take an aware public, aggressive action by regulators and law enforcement agencies, and a coordinated effort at every level of our telecommunications industry", citing the Anti-Spoofing Penalties Modernization Act as an important tool in this effort.
United States Postal Service
In March 2019, Collins was a cosponsor of a bipartisan resolution led by Gary Peters and Jerry Moran that opposed privatization of the United States Postal Service (USPS), citing the USPS as a self-sustaining establishment and noting concerns that privatization could cause higher prices and reduced service for USPS customers, particularly in rural communities.
Judicial appointments
In May 2005, Collins was one of fourteen senators (seven Democrats and seven Republicans) to forge a compromise on the Democrats' use of the judicial filibuster, thus allowing the Republican leadership to end debate without having to exercise the nuclear option. Under the agreement, the minority party agreed that it would filibuster President George W. Bush's judicial nominees only in "extraordinary circumstances"; three Bush appellate court nominees (Janice Rogers Brown, Priscilla Owen, and William Pryor) would receive a vote by the full Senate; and two others, Henry Saad and William Myers, were expressly denied such protection (both eventually withdrew their names from consideration).
Collins voted for the confirmation of George W. Bush's Supreme Court nominees Samuel Alito and John G. Roberts, as well as Barack Obama's Supreme Court nominees Elena Kagan and Sonia Sotomayor.
After President Obama nominated Merrick Garland to the Supreme Court, Collins publicly opposed the Senate Republican leadership's decision to refuse to consider the nomination, and urged her Republican colleagues to "follow regular order" and give Garland a confirmation hearing and a vote in the Senate Judiciary Committee in the normal fashion.
In 2017, Collins voted for the confirmation of President Trump's nomination of John K. Bush for Circuit Judge of the United States Court of Appeals for the Sixth Circuit. During his confirmation hearings it was disclosed that he had authored pseudonymous blog posts in which he disparaged gay rights, compared abortion to slavery, and linked to articles on right-wing conspiracy theory websites.
In 2017 and 2018, Collins was one of two Senate Republicans (the other being Lisa Murkowski) who were opposed to efforts by Senate Majority Leader Mitch McConnell and the rest of the Senate Republican leadership to change the Senate's rules in order to speed up Senate confirmation of President Donald Trump's judicial nominees.
Also in 2018, Collins was one of three Republican Senators, along with Jeff Flake (Arizona) and Murkowski, who supported an FBI investigation into sexual assault allegations made against Trump's second Supreme Court nominee, Brett Kavanaugh. She later announced her decision to vote in favor of his confirmation, stating that the "presumption of innocence" should be retained regarding Kavanaugh's sexual assault allegations and that she did not believe he would overturn Roe v. Wade. Her vote sparked opposition, including fundraising for her next hypothetical opponent, and increased speculation about possible Democratic challengers in 2020. Collins stated that she felt "vindication" in December 2018 when Kavanaugh voted with the court's liberal justices to decline to hear two cases against Planned Parenthood, thus allowing lower court rulings in favor of Planned Parenthood to stand. However, in February 2019, Kavanaugh voted to uphold a Louisiana abortion law which effectively shuttered most of the state's abortion clinics (the law was blocked by the Court's majority).
Collins endorsed another controversial judicial nominee in 2018: Thomas Farr, whose federal court nomination by President Trump was controversial due to his support for North Carolina laws that were ruled to be discriminatory toward African-American voters.
In March 2019, Collins became the first Republican to announce opposition to Chad Readler's nomination for the 6th U.S. Circuit Court of Appeals, citing his "role in the government's failure to defend provisions under current law that protect individuals with pre-existing conditions". In May 2019, she was the only Republican to vote against the confirmation of Wendy Vitter as a federal judge, citing controversial statements that Vitter had made about abortion as well as her declining to say whether Brown v. Board of Education was rightly decided. She also opposed the nomination of Matthew Kacsmaryk as a district judge over his opposition to LGBTQ rights and his comments against abortion rights, and was the only Republican to vote against advancing Kacsmaryk's nomination.
By June 2019, Collins, who has stated that she is pro-choice, had supported more than 90% of President Trump's judicial nominees; 32 of these judges had indicated that they opposed abortion rights, according to the abortion rights organization NARAL. A spokeswoman for Collins said that Collins has voted for 90% of both Democratic and Republican nominees, and that she does not consider the personal beliefs of judicial nominees but rather whether they "can set aside these beliefs and rule fairly and impartially." In December 2019, Collins was the only Republican to vote against ending debate as well as against the final confirmation of Sarah Pitlyk as a federal judge, citing concerns about Pitlyk's opposition to abortion and fertility treatments.
In October 2020, Collins was one of two Republican senators, the other being Lisa Murkowski, to vote against a motion to enter an executive session that would move forward with a vote to confirm Judge Amy Coney Barrett, President Trump's third nominee to the Supreme Court. Collins intended to vote "no" on final confirmation, while Murkowski announced she would vote "yes". Collins cited the standard set during the Merrick Garland Supreme Court nomination and noted that Barrett's nomination was being voted on much closer to the election. On October 26, 2020, she joined all members of the Democratic caucus in voting against Barrett's confirmation.
As of August 2021, Collins had voted for all of President Joe Biden's judicial nominees to date. In November 2021, Collins was one of two Republicans, along with Lisa Murkowski, who voted with the Democratic caucus, in a 51-45 vote, to confirm Beth Robinson, the first openly lesbian judge confirmed to a federal circuit court, to the US Court of Appeals for the Second Circuit.
Immigration and trade
Collins has voted against free-trade agreements including the Dominican Republic – Central America Free Trade Agreement. In 1999 she was one of four Republicans (along with her colleague Olympia Snowe) to vote for a Wellstone amendment to the Trade and Development Act of 2000 which would have conditioned trade benefits for Caribbean countries on "compliance with internationally recognized labor rights".
Collins coauthored, along with Senator Joe Lieberman (D-CT/I-CT), the Collins-Lieberman Intelligence Reform and Terrorism Prevention Act of 2004. This law implemented many of the recommendations of the 9-11 Commission, modernizing and improving America's intelligence systems. In October 2006, President George W. Bush signed into law major port security legislation coauthored by Collins and Washington Senator Patty Murray. The new law includes major provisions to significantly strengthen security at US ports.
As ranking member of the Homeland Security and Governmental Affairs Committee, Collins and committee Chairman Joe Lieberman voiced concerns about budget, outside contractors, privacy and civil liberties relating to the National Cybersecurity Center, the Comprehensive National Cybersecurity Initiative and United States Department of Homeland Security plans to enhance Einstein, the program which protects federal networks. Citing improved security and the benefits of information sharing, as of mid-2008, Collins was satisfied with the response the committee received from Secretary Michael Chertoff.
In 2007, she voted against the McCain-Kennedy proposal, which would have provided a path to legal status for undocumented immigrants. In 2010, Collins voted against the DREAM Act. However, in 2013, Collins was one of fourteen Republicans who voted in favor of a comprehensive immigration bill that included border security and a pathway to citizenship for undocumented immigrants.
In November 2014, following President Obama's decision to pursue immigration reform through executive action with a plan to give deportation relief to as many as 5 million undocumented immigrants, Collins stated that the president had made "a huge mistake from both the political and policy perspective" and that members of his own party agreed with her.
In 2016, Collins cosponsored a bill requiring the Department of Homeland Security to evaluate security threats at the northern border, and said that it would mandate the federal government to consider the tools border security officials would need to prevent drug and human trafficking.
Collins criticized President Donald Trump's 2017 executive order to ban entry to the U.S. to citizens of seven Muslim-majority countries, stating: "The worldwide refugee ban set forth in the executive order is overly broad and implementing it will be immediately problematic." In 2018, Susan Collins co-sponsored bipartisan comprehensive immigration reform which would have granted a pathway to citizenship to 1.8 million Dreamers while also giving $25 billion to border security; at the same time, Collins voted against the McCain/Coons proposal for a pathway to citizenship without funding for a border wall as well as against the Republican proposal backed by Trump to reduce and restrict legal immigration.
When President Trump and Jeff Sessions announced a 'zero-tolerance' policy on migrants at the border and separated children from parents, Susan Collins opposed the move and urged Trump to "put an end" to the separation of families, saying that separating children from parents at the border is "inconsistent with American values." However, she said that she did not support the Democratic bill to stop the separation of families, instead supporting the bipartisan bill she had proposed in February to give a pathway to citizenship to 2 million undocumented immigrants and provide $25 billion in border security. In 2019, she introduced bipartisan legislation to oppose Trump's declaration of a national emergency at the southern border to build a wall, and was one of a dozen Republicans who broke with their party, joining all Democrats, to vote for the resolution rejecting the emergency declaration.
In October 2018, after President Trump announced his intent to issue an executive order that would revoke birthright citizenship for the children of noncitizens and unauthorized immigrants born in the United States, Collins stated that she disagreed entirely with the planned executive order and that anyone born in the US was an American. Collins speculated that the executive order would face a court challenge and be invalidated by the courts.
In June 2019, Collins and fellow Maine senator Angus King released a joint statement confirming that they had questioned U.S. Customs and Border Protection "on the process being used to clear" asylum seekers for transportation to Portland, Maine, and opined that it was "clearly not a sustainable approach to handling the asylum situation." The senators added that they were both "interested in providing additional resources to the federal agencies that process asylum claims, so we can reduce the existing backlog and adjudicate new claims in a more timely fashion." In December 2019, Collins introduced legislation to allow migrants seeking asylum to work in the US sooner, diverging from the Trump administration's position.
Economic issues
Susan Collins had a mixed record on the Bush tax cuts. In 2004, she joined other "Senate moderates -- John McCain of Arizona, Olympia J. Snowe...of Maine, and Lincoln Chafee of Rhode Island" in opposing how the Bush administration wanted to implement the tax cuts; the four Republicans cited deficit concerns. Collins voted for the Bush tax cuts in 2003 and for their extension in 2006.
She offered an amendment to the original bill allowing tax credits for schoolteachers who purchase classroom materials.
Ultimately, Collins was one of just three Republican lawmakers to vote for the American Recovery and Reinvestment Act, earning heated criticism from the right for crossing party lines on the bill.
In mid-December 2009, she was again one of three Republican senators to back a $1.1 trillion appropriations bill for the fiscal year beginning in 2010, joining Thad Cochran (R-Mississippi) and Kit Bond (R-Missouri) in compensating for three Democratic "nay" votes to pass the bill over a threatened GOP filibuster.
In May 2011, Collins was one of seventeen senators to sign a letter to Commodity Futures Trading Commission Chairman Gary Gensler requesting a regulatory crackdown on speculative Wall Street trading in oil contracts, asserting that they had entered "a time of economic emergency for many American families" while noting that the average retail price of regular grade gasoline was $3.95 nationwide. The senators requested that the CFTC adopt speculation limits in regard to markets where contracts for future delivery of oil are traded.
In February 2012, after Senate leaders reached a compromise to lower the threshold for the number of votes needed to pass bills, Collins was one of fourteen Republican senators to vote for legislation that extended a 2 percentage-point cut in the payroll tax for the remainder of the year and provided an extension of federal unemployment benefits along with preventing doctors' payments under Medicare from being cut.
In April 2014, the United States Senate debated the Minimum Wage Fairness Act (S. 1737; 113th Congress). The bill would amend the Fair Labor Standards Act of 1938 (FLSA) to increase the federal minimum wage for employees to $10.10 per hour over the course of a two-year period. The bill was strongly supported by President Barack Obama and many of the Democratic Senators, but strongly opposed by Republicans in the Senate and House. Collins tried to negotiate a compromise bill that centrist Republicans could agree to, but was unable to do so.
Collins argued that the Congressional Budget Office report predicting 500,000 jobs lost if the minimum wage were increased to $10.10 also said that an increase to $9.00 would lead to only 100,000 jobs lost, but the argument did not seem to persuade her fellow centrists. She said, "I'm confident that the votes are not there to pass a minimum wage increase up to $10.10, therefore it seems to me to make sense for senators on both sides of the aisle to get together and see if we can come up with a package that would help low-income families without causing the kind of job loss that the Congressional Budget Office has warned against."
Collins announced that she was opposed to cutting the tax rate for income earners making more than $1 million a year and to eliminating the estate tax, stating that she did not see a need to eliminate it. She was also one of two Republicans to vote with Democrats against budget cuts.
In December 2017, Collins voted to pass the 2017 Republican tax plan. The bill would greatly reduce corporate taxes, reduce taxes for some individuals but increase them for other individuals by removing some popular deductions, and increase the deficit. The bill also repeals the individual mandate of the Affordable Care Act, which would leave 13 million Americans uninsured and raise premiums by an estimated additional 10% per year. After the vote, Collins said that she received assurances from congressional leaders that they would pass legislation intended to mitigate some of the adverse effects of the repeal of the individual mandate. When asked how she could vote for a bill that would raise the deficit by an estimated $1 trillion (over ten years) after having railed against the deficit during the Obama administration, Collins insisted that the tax plan would not raise the deficit. She said she had been advised in this determination by economists Glenn Hubbard, Larry Lindsey, and Douglas Holtz-Eakin, but Hubbard and Holtz-Eakin later denied stating that the plan would not increase the deficit.
In March 2018, Collins and fellow Maine senator Angus King introduced the Northern Border Regional Commission Reauthorization Act, a bill that would bolster the Northern Border Regional Commission and was included in the 2018 United States farm bill. In June 2019, when Collins and King announced the Northern Border Regional Commission (NBRC) would award grant funding to the University of Maine, the senators called the funding an investment in the forest economy of Maine that would "help those who have relied on this crucial sector for generations" and "bolster efforts by UMaine to open more opportunities in rural communities."
In May 2018, Collins was one of twelve senators to sign a letter to Chairman of the Federal Labor Relations Authority Colleen Kiko urging the FLRA to end efforts to close its Boston regional office until Congress debated the matter, adding that closing the FLRA's seven regional offices would place staff farther from the federal employees whose rights they protect.
In August 2018, it was reported that House Republicans were considering another round of tax cuts upon returning to Congress. Collins responded by saying she was opposed to further cuts and was instead interested in amending the Tax Cuts and Jobs Act to address "certain inequities", citing as examples reducing the corporate tax cut and using the revenue to make the individual tax cuts permanent.
On December 6, 2018, Senator Collins cast the deciding vote to make Kathy Kraninger the Director of the Consumer Financial Protection Bureau, which cleared the United States Senate by a margin of 50–49, with all 50 present Republicans voting in support and all 49 Democrats voting in opposition.
In January 2019, Collins voted for both Republican and Democratic bills to end a government shutdown. She was one of six Republicans to break with their party and vote for the Democratic proposal. Later that month, after President Trump signed a bill reopening the government for three weeks, Collins stated that the shutdown had not accomplished anything and advocated for Congress to pass a spending measure funding the government for the remainder of the fiscal year. She further stated that they "cannot have the threat of a government shutdown hanging over our people and our economy." In March, Collins was the only Republican senator to sign a letter opining that contractor workers and by extension their families "should not be penalized for a government shutdown that they did nothing to cause" while noting that there were bills in both chambers of Congress that if enacted would provide back pay to compensate contractor employees for lost wages before urging the Appropriations Committee "to include back pay for contractor employees in a supplemental appropriations bill for FY2019 or as part of the regular appropriations process for FY2020."
In March 2019, after President Trump proposed a $4.7 trillion budget that reduced domestic spending by 5 percent while increasing defense spending by 4 percent to $750 billion and included $8.6 billion for his proposed border wall, Collins stated that they needed to "come together and decide on a new package for what the spending caps are going to be" and that there would be a reset to the Budget Control Act of 2011 if the proposed budget's spending caps were not reset.
In April 2019, Collins, Shelley Moore Capito, and Chris Coons introduced the Sustainable Chemistry Research and Development Act of 2019, a bill that would further the development of new and innovative chemicals, products, and processes, promote the efficient use of resources, and reduce or eliminate exposure to hazardous substances. Collins commented that the bill would authorize grants and training and educational opportunities for scientists and engineers.
In May 2019, Collins, Angus King, and Tennessee Senator Lamar Alexander joined Assistant Secretary in the Office of Energy Efficiency and Renewable Energy Daniel Simmons and Maine officials in announcing the formation of a research collaboration between the University of Maine and the Oak Ridge National Laboratory to advance attempts to 3D print using wood products. Collins stated that the initiative was a win for all parties involved that would "bolster the cutting-edge research performed at the University of Maine as well as support job creation in our state" and called the project "an outstanding example of our national labs working cooperatively with universities to drive American innovation and strengthen our economy."
In 2019, Collins worked with Democrat Kyrsten Sinema on the Senior Security Act, legislation intended to form a task force at the Securities and Exchange Commission (SEC) that would "examine and identify challenges facing senior investors" and report its findings to Congress along with recommended regulatory or statutory changes every two years.
In 2019, while President Trump and top aides met with Republican leadership for discussions about avoiding a budget debacle that fall, Collins observed, "A lot of the cuts that they made in the president's budget were arbitrary and made without any consultation at all. An example would be zeroing out the Community Development Block Grant fund." She added that the aforementioned fund was the one most requested by members of the Appropriations panel to fund.
In June 2019, Collins and Democrat Tom Carper introduced a bill they described as combatting "problems federal firefighters face when they try to prove their injuries took place in the line of duty" and stated that federal laws have placed burdens on federal firefighters so that they have to prove cancers and other diseases were the result of exposure during their work.
In 2021, Collins crossed the aisle to vote with the Democratic caucus on several votes related to economic issues. In August 2021, she was among the 19 Republicans in the Senate who voted with the Democratic caucus to pass the bipartisan $1.2 trillion infrastructure bill. In October 2021, she was one of 15 Republicans who voted with the Democratic caucus to approve a temporary spending bill in order to avoid a government shutdown. That same month, she joined 10 other Republicans in voting with the Democratic majority to break the filibuster on raising the debt ceiling. However, she voted with all other Republicans against the bill itself to raise the debt ceiling.
Education
In July 2007, after the Senate voted 95 to 0 to boost the amount of federal aid low-income students can receive and undo some conflicts of interest in the student-loan industry, Collins stated that the reauthorization "brings back a balance between [lender] subsidies and financial aid" by shifting some funds away from lenders without cutting them out of the system completely, and that private lenders were "healthy for the marketplace."
In June 2014, along with Bob Corker and Lisa Murkowski, Collins was one of three Republicans to vote for the Bank on Students Emergency Loan Refinancing Act, a Democratic proposal authored by Elizabeth Warren that would have allowed more than 25 million people to refinance their student loans at lower interest rates of less than 4 percent. The bill received 56 votes, short of the 60 needed to overcome a Republican filibuster.
In September 2017, along with Republican Rob Portman and Democrats Bob Casey, Jr. and Tammy Baldwin, Collins cosponsored a bipartisan bill that would extend the Perkins Loan Program by two years when it was then set to expire by the end of the month. Collins noted that in her state "more than 4,000 students received a Perkins Loan last year, providing nearly $8.6 million in aid," and that the extension would "provide students in Maine and across our country with the critical certainty required to plan for and afford higher education."
In February 2019, Collins was one of twenty senators to sponsor the Employer Participation in Repayment Act, enabling employers to contribute up to $5,250 to the student loans of their employees as a means of granting employees relief and incentivizing applicants to apply to jobs with employers who implement the policy.
Healthcare
In April 1997, Collins was one of seven Republicans cosponsoring legislation introduced by Ted Kennedy and Orrin Hatch that would provide children's health insurance by raising the cigarette tax. Later that month, Collins said she disapproved of the cigarette tax provision and that she wanted other ways to finance the insurance.
On January 29, 2009, Collins voted in favor of the State Children's Health Insurance Program Reauthorization Act (H.R. 2).
During negotiations on passing stimulus legislation during the financial crisis, Collins successfully removed $870 million from the legislation which was intended for pandemic protection.
Collins opposed President Barack Obama's health reform legislation, the Patient Protection and Affordable Care Act, and voted against it in December 2009. She voted against the Health Care and Education Reconciliation Act of 2010. Senate Republicans attempted to delay or kill the health care legislation through a filibuster of the defense spending bill; however, the filibuster was defeated, and Collins was one of three Republicans who voted with Democrats to end it.
With the passage of the Obama administration-supported 21st Century Cures Act in December 2016, legislation increasing funding for disease research while addressing flaws in the American mental health systems and altering drugs and medical devices' regulatory system, Collins stated, "I doubt that there is a family in America who will not be touched by this important legislation."
In January 2017, at the beginning of the Congress, Collins voted in favor of a bill to begin the repeal of the Affordable Care Act ("Obamacare"). Collins and fellow Republican Senator Bill Cassidy of Louisiana have proposed legislation that permits states to either keep the ACA or move to a replacement program to be funded in part by the federal government. In January 2017, Collins "was the only Republican to vote for a defeated amendment...that would have prevented the Senate from adopting legislation cutting Social Security, Medicare or Medicaid."
In March 2017, Collins said that she could not support the American Health Care Act, the House Republicans' plan to repeal and replace the ACA. Collins announced she would vote against the Senate version of the Republican bill to repeal Obamacare. Collins has also clarified that she is against repealing the Affordable Care Act without a replacement proposal.
On July 26, 2017, Collins was one of seven Republicans to vote against repealing the ACA without a suitable replacement. The following day, Collins joined two other Republicans in voting against the "skinny" repeal of the ACA.
In August 2017, Collins and Democrat Jeanne Shaheen sent a letter to Centers for Medicare and Medicaid Services (CMS) Administrator Seema Verma requesting CMS offer Medicare coverage for clinically appropriate treatment, opining that the effectiveness of diabetes management was "crucial to holding down health care costs and helping seniors manage their diabetes successfully to allow them to continue to live healthy and productive lives" and urged the CMS to conduct a "careful review of Medicare coverage policies for patch pumps and other life-saving therapies for diabetes, in accordance with applicable laws and regulations, and to review the procedures at CMS that have resulted in these disparities in coverage."
In October 2017, Collins called for President Trump to support a bipartisan Congressional effort led by Lamar Alexander and Patty Murray to reinstate insurer payments, stating that what Trump was doing was "affecting people's access and the cost of health care right now".
In December 2017, Collins voted for a tax bill that repealed the Affordable Care Act's individual mandate, which the CBO estimated would increase the number of uninsured Americans by 13 million while causing higher health care premiums for those who remain insured. Collins made a deal with Senate Majority Leader Mitch McConnell, trading her opposition to repealing the Affordable Care Act's individual mandate provision in exchange for legislation that would financially stabilize the remaining health insurance program. But after Collins voted for the tax reform package, McConnell reneged and never brought the stabilization bill up for a vote. In 2018, she was the only Republican who voted with Democrats on a resolution, which ultimately did not pass, against the "low cost, low coverage" insurance plans allowed by an executive order of President Trump.
In June 2018, Collins and fellow Maine Senator Angus King released a statement endorsing a proposal by FCC Chairman Ajit Pai intended to boost funding for the Rural Health Care Program of the Universal Service Fund, stating that "with demand for RHC funding continuing to rise, any further inaction would risk leaving rural healthcare practitioners without lifesaving telemedicine services. This long-overdue funding increase would be a boon to both healthcare providers and patients in rural communities across our country."
In December 2018, Collins criticized the decision by a judge to overturn the Affordable Care Act. Asked if she regretted voting for the Republican tax reform of 2017 which zeroed out the individual mandate of the ACA and was used as a justification for the judge's ruling, Collins said she did not regret it.
In January 2019, Collins was a cosponsor of the Community Health Investment, Modernization, and Excellence (CHIME) Act, a bipartisan bill that would continue federal funding of community health centers and the National Health Service Corps (NHSC) for five years beyond that year's September 30 deadline and provide both with yearly federal funding increases beginning in fiscal year 2020.
In March 2019, Collins, Shelley Moore Capito, and Debbie Stabenow introduced the Improving Health Outcomes, Planning, and Education (HOPE) for Alzheimer's Act, legislation mandating the United States Department of Health and Human Services (HHS) conduct outreach to health care practitioners regarding several Alzheimer's disease care services and benefits and would be followed by HHS reporting on the rates of utilization of the services and barriers to access.
In April 2019, in response to the Justice Department announcing that it would side with U.S. District Judge Reed O'Connor's ruling that the Affordable Care Act's individual mandate was unconstitutional and that the rest of the law was thereby invalid, Collins sent a letter to United States Attorney General William Barr expressing her disappointment with the decision and warning that the department's support for the ruling put "critical consumer provisions" of the ACA at risk. She opined that the Trump administration "should not attempt to use the courts to bypass Congress."
In a May 2019 letter to Attorney General Barr, Collins and Democrat Joe Manchin wrote that the Affordable Care Act "is quite simply the law of the land, and it is the Administration's and your Department's duty to defend it" and asserted that Congress could "work together to fix legislatively the parts of the law that aren't working" without letting the position of a federal court "stand and devastate millions of seniors, young adults, women, children and working families."
On May 21, 2019, Collins and Democrat Tammy Duckworth introduced the Veterans Preventive Health Coverage Fairness Act, legislation that would eliminate out-of-pocket costs for preventive health medications and prescription drugs along with introducing preventive medications and services to the list of no-fee treatments covered by the Veterans Affairs Department. Collins said the bill "would protect patients from experiencing serious illnesses that are costly to treat and promote the health and well-being of our veterans" through abolishing the copayment requirement related to preventive health care.
In October 2019, Collins was one of twenty-seven senators to sign a letter to Senate Majority Leader Mitch McConnell and Senate Minority Leader Chuck Schumer advocating for the passage of the Community Health Investment, Modernization, and Excellence (CHIME) Act, which was set to expire the following month. The senators warned that if the funding for the Community Health Center Fund (CHCF) was allowed to expire, it "would cause an estimated 2,400 site closures, 47,000 lost jobs, and threaten the health care of approximately 9 million Americans." In March 2021, she was the only Republican to vote with Democrats to confirm Xavier Becerra as HHS Secretary; with one Democrat not voting, the nomination was confirmed 50-49.
Environmental issues
In September 2008, Collins joined the Gang of 20, a bipartisan group seeking a comprehensive energy reform bill. The group is pushing for a bill that would encourage state-by-state decisions on offshore drilling and authorize billions of dollars for conservation and alternative energy.
In September 2010, Collins backed a bill introduced by Senate Energy Committee Chair Jeff Bingaman and Sam Brownback that would establish a Renewable Electricity Standard (RES) requiring utilities to generate 15 percent of their power from renewable sources by 2021. United Steelworkers union President Leo Gerard said the legislation would "protect and create hundreds of thousands of good-paying jobs and keep America in the clean energy race."
The Carbon Limits and Energy for America's Renewal (CLEAR) Act (S. 2877), also called the Cantwell-Collins bill, would have directed the Secretary of the Treasury "to establish a program to regulate the entry of fossil carbon into commerce in the United States to promote renewable energy, jobs and economic growth."
In November 2011, as the Obama administration drew condemnation from Republicans over the president's climate policy, Collins was one of six Republicans to vote against a resolution by Kentucky Senator Rand Paul that would have overturned the Cross-State Air Pollution Rule, which mandated a reduction in smog- and particulate-forming pollution from power plants in 27 states.
In February 2017, Collins was the only Republican to vote against the Congressional Review Act (CRA) challenge undoing the Stream Protection Rule of the Interior Department. It was the first attempt by the Trump administration to undo an environmental regulation imposed by the Obama administration.
In February 2017, Collins was the only Senate Republican to vote against confirmation of Scott Pruitt to lead the Environmental Protection Agency. Fourteen months later, on CNN's "State of the Union," she said regarding his actions as the EPA head, "whether it's trying to undermine the Clean Power Plan or weaken the restrictions on lead or undermine the methane rules," his behavior has validated her "no" vote.
In May 2017, Collins was one of three Republicans who joined Democrats in voting against a repeal of Obama's regulations for drilling on public lands; the repeal effort was rejected by a 49-51 margin.
In September 2017, along with Lamar Alexander, Collins was one of two Republican senators on the Senate Appropriations Committee to vote for an amendment by Jeff Merkley restoring funding for the U.N. Framework Convention on Climate Change in the State Department appropriations bill, funding the US had provided annually since 1992 and that President Trump had proposed ending in his first budget earlier that year.
In September 2017, Collins and John Hoeven sent a letter to United States Secretary of Health and Human Services Tom Price in which they called the Low-Income Home Energy Assistance Program "the main federal program that helps low-income households and seniors with their energy bills, providing critical assistance during the cold winter and hot summer months" and advocated for the program to be distributed as quickly as possible.
In 2019, Collins was a cosponsor of the Securing Energy Infrastructure Act, a bill that would form a two-year pilot program with national laboratories to study security vulnerabilities and to research and test technology for isolating the most critical systems from cyberattacks, with a focus on segments of the energy sector where cybersecurity incidents could cause the most damage. Collins stated that the potential for a devastating cyberattack increased each day and cited the importance of taking "commonsense steps now to eliminate vulnerabilities and protect our energy infrastructure from future disruption." The bill passed in the Senate in July of that year, and its companion version in the House was passed as an amendment to the Intelligence Authorization Act.
On February 28, 2019, Collins was the only Republican senator to vote against the confirmation of Andrew Wheeler as EPA administrator, saying in a statement that she believed Wheeler was qualified for the position but that she had "too many concerns with the actions he has taken during his tenure as Acting Administrator to be able to support his promotion."
Collins has been a strong proponent of wood biomass as "a cost-effective, renewable, and environmentally friendly source of energy". In her arguments for biomass on the Senate floor, she repeated almost verbatim talking points from a biomass industry group's website.
In March 2019, Collins and Lisa Murkowski were the only Republican senators to sign a letter to the Trump administration advocating for the inclusion of funding for the Low-Income Home Energy Assistance Program (LIHEAP), which they credited with helping "to ensure that eligible recipients do not have to choose between paying their energy bills and affording other necessities like food and medicine", and the Weatherization Assistance Program (WAP) in the fiscal year 2020 budget proposal.
In March 2019, in response to a proposal the EPA had released the previous December that would revoke findings asserting the necessity of mercury emissions regulations, Collins was one of six senators to send a letter to EPA Administrator Andrew Wheeler criticizing the proposal and expressing the position that evidence showed the effectiveness of the Mercury Rule.
In March 2019 Collins joined all Senate Republicans, three Democrats, and Angus King in voting against the Green New Deal resolution, a proposal that strove for net-zero greenhouse gas emissions in the US.
In April 2019, Collins was one of four senators to sponsor a bill granting a $7,000 tax credit for the next 400,000 buyers of vehicles from an automaker that has exceeded the initial 200,000-vehicle sales cap. Collins argued in a statement that the legislation "would continue the momentum towards cleaner transportation and help tackle harmful transportation emissions."
In April 2019, Collins was one of five senators to cosponsor the Land and Water Conservation Fund Permanent Funding Act, bipartisan legislation that would provide permanent and dedicated funding for the Land and Water Conservation Fund (LWCF) at a level of $900 million as part of an effort to protect public lands.
In June 2019, Collins was one of eight senators to cosponsor the bipartisan Save our Seas 2.0 Act, a bill unveiled by Dan Sullivan and Bob Menendez intended to spur innovation, reduce the creation of plastic waste, find ways to use existing plastic waste to keep it from entering the oceans, and address the problem on a global scale. The bill was meant to respond to the plastic pollution crisis threatening oceans, shorelines, marine life, and coastal economies, and served as a continuation of the Save Our Seas Act.
In July 2019, Collins and Democrat Tammy Duckworth introduced the Sensible, Timely Relief for America's Nuclear Districts' Economic Development (STRANDED) Act, a bill that would give economic impact grants to local government entities to offset the economic impacts of stranded nuclear waste, form a task force to identify existing funding that could benefit the affected communities, and create a competitive innovative solutions prize competition to aid those communities in their search for alternatives to "nuclear facilities, generating sites, and waste sites." Collins said the bill would "take interim steps to assist these adversely impacted communities" while stating that the federal government was required to move forward lawfully with a lasting solution for nuclear waste.
In July 2019, along with Democrats Chris Coons, Jeanne Shaheen, and Jack Reed, Collins was one of four senators and the only Republican to sponsor a bill to extend the Weatherization Assistance Program through 2024, lauding the program as a "cost-effective way to reduce energy usage and cut low-income homeowners’ energy bills for the long-term."
Gun policy
In January 2013, after President Obama unveiled a gun control plan that included his calling on Congress to approve both an assault weapons ban and provide background checks for all gun buyers, Collins stated that she was disappointed none of Obama's proposals "include a National Commission on Violence that I, along with several of my colleagues on both sides of the aisle, had recommended" and called for an examination into "whether states are reporting data on mentally ill individuals found to be a danger to themselves or others to the national background check database designed to prevent gun purchases by such individuals."
Collins voted for the unsuccessful Manchin–Toomey bill to amend federal law to expand background checks for gun purchases in 2013, but voted against a ban on high-capacity magazines holding more than 10 rounds. She has received a C+ grade on gun rights from the NRA and a D- from Gun Owners of America.
In 2018, Collins was a cosponsor of the NICS Denial Notification Act, legislation developed in the aftermath of the Stoneman Douglas High School shooting that would require federal authorities to inform states within a day when a prohibited person attempting to buy a firearm fails the National Instant Criminal Background Check System. Collins noted that Maine was one of thirty-seven states where law enforcement is not required to be notified of such attempted purchases, and promoted the bill as helping prevent "dangerous people" from obtaining illegal firearms while preserving the rights of law-abiding gun owners.
In February 2019, Collins supported the Terrorist Firearms Prevention Act, legislation enabling the attorney general to deny the sale of a firearm to individuals on the no-fly list or selectee list that subject airline passengers to more screening. Collins stated, "If you are considered to be too dangerous to fly on an airplane, you should not be able to buy a firearm. This bill is a sensible step we can take right now to reform our nation's gun laws while protecting the Second Amendment rights of law-abiding Americans."
In July 2019, Collins and Democrat Patrick Leahy introduced the Stop Illegal Trafficking in Firearms Act of 2019, a bill that would establish penalties for individuals who transfer a firearm when there is reasonable cause to believe it will be used in a drug crime, crime of violence, or act of terrorism. Collins praised the bill as strengthening federal law by "making it easier for prosecutors to go after gun traffickers and straw purchasers" while entirely protecting "the rights of the vast majority of gun owners who are law-abiding citizens."
In an August 15, 2019 interview, Collins stated that it made sense to her if "we can come together working with President Trump working with colleagues from both sides of the aisle and say that if you advertise the sale of a firearm over the Internet there should be a background check" and confirmed she was working with Senators Pat Toomey and Joe Manchin and the White House "on making sure that we close some loopholes in the background check system to ensure that people with criminal records or who are mentally ill can not purchase firearms." Collins added that the difference between the new calls for gun control and previous instances was that three mass shootings had occurred "so close together", and that it was her hope that the "Democrats truly want a solution and some progress" on the matter as opposed to playing political games.
Other issues
In June 2004, Collins voted for a proposal increasing the maximum penalty the Federal Communications Commission could issue in response to decency violations on television and radio from $27,500 to $275,000, and setting a limit of $3 million for a violation that either receives or produces multiple complaints.
In April 2014, the United States Senate debated the Paycheck Fairness Act (S. 2199; 113th Congress). It was a bill that "punishes employers for retaliating against workers who share wage information, puts the justification burden on employers as to why someone is paid less and allows workers to sue for punitive damages of wage discrimination." Collins voted against ending debate on the bill, saying that one of her reasons for doing so was that Majority Leader Harry Reid had refused to allow votes on any of the amendments that Republicans had suggested for the bill.
In 2015, as part of the Obama administration's fiscal year 2016 budget, the United States Department of Veterans Affairs sought congressional authorization to spend $6.8 million toward leasing 56,600 square feet at an unspecified location in Portland, Maine to expand a clinic, allowing southern Maine veterans to receive basic medical and mental health care locally. Collins supported the proposal, releasing a statement alongside Angus King in which they said that ensuring Maine veterans had access to high quality care "is one of our top priorities, and we’re pursuing the input of local veterans and interested stakeholders to understand their perspective about the proposal."
In September 2016, Collins and Democrat Mark Warner unveiled a bill that directed the Departments of Labor and Treasury to authorize employers and sole-proprietors to file one form for the satisfaction of reporting requirements as opposed to forms for each individual plan. Collins stated in a press release that Americans were not "saving enough to be able to afford a comfortable retirement" and cited an estimate by the Center for Retirement Research that there was roughly a $7.7 trillion gap between the funds Americans have saved for retirement and what they actually need.
In January 2017, both Collins and Senator Lisa Murkowski voted for Donald Trump's selection for Secretary of Education, Betsy DeVos, in the Senate Health, Education, Labor and Pensions Committee, advancing DeVos' nomination by a vote of 12–11 to allow the full Senate to vote on the nominee. Collins justified her committee vote by citing her belief that "Presidents are entitled to considerable deference in the selection of Cabinet members". Later, Collins and Murkowski became the only Republicans to break party lines and vote against the nominee, causing a 50–50 tie that was broken by Vice President Mike Pence, in his capacity as President of the Senate, to confirm DeVos' appointment.
Collins was also notably involved in the Trump Cabinet confirmation process when she formally introduced Sen. Jeff Sessions (R-AL) to the Judiciary Committee for its hearings on Sessions' nomination to be Attorney General.
On December 14, 2017, the same day that the FCC was set to hold a vote on net neutrality, Collins, along with Angus King, sent a letter to the FCC asking that the vote be postponed so as to allow for public hearings on the merits of repealing net neutrality. Collins and King expressed concerns that repealing net neutrality could adversely affect the US economy. As part of this drive, Collins was reported to support using the authority under the Congressional Review Act to nullify the FCC's repeal vote. In 2018, Collins was one of three Republicans voting with Democrats to repeal the rule changes enacted by the Republican-controlled FCC; the measure was meant to restore Obama-era net neutrality rules.
In February 2019, Collins was one of twenty-five senators to serve as original cosponsors of the Restore Our Parks Act, a bill that would create the National Park Service Legacy Restoration Fund to reduce the maintenance backlog; the fund would draw on 50 percent of existing revenues the government receives from onshore and offshore energy development that are not otherwise allocated and would normally be deposited into the General Treasury, up to $1.3 billion per year for the following five years.
In June 2019, Collins and Democrat Doug Jones cosponsored the American Broadband Buildout Act of 2019, a bill that requested $5 billion for a matching funds program administered by the Federal Communications Commission to "give priority to qualifying projects" and mandated that at least 15% of funding go to high-cost and geographically challenged areas. The legislation also authorized recipients of the funding to form "public awareness" and "digital literacy" campaigns to further awareness of the "value and benefits of broadband internet access service", and served as a companion to the Broadband Data Improvement Act.
In June 2019, Collins was one of thirty-three senators to cosponsor legislation that would establish a "National Post-Traumatic Stress Awareness Day" on June 27 in addition to designating the month of June as "National Post-Traumatic Stress Awareness Month." Kevin Cramer, a cosponsor of the bill, said June being designated as "National Post-Traumatic Stress Awareness Month shines a light on the resources available to veterans and reaffirms our commitment to ensuring they receive the care and assistance they need."
Notable legislation
Collins introduced a bill in June 2013 that would define a "full-time employee" as someone who works for 40 hours per week (instead of 30 hours). The Affordable Care Act (ACA) defined a full-time worker as someone who works 30 hours per week. Collins is cited as saying that her bill would help avoid employers reducing workers' hours to below 30 per week in order to comply with the ACA.
In September 2013, Collins introduced a bill aimed at preventing Sudden Unexpected Infant Death Syndrome (SUIDS). The bill, dubbed The Child Care Infant Mortality Prevention Act, aims to increase provider training in infant care settings, including enhanced CPR and first aid training. Backers of the bill hope it will make a dent in the roughly 4,000 infant deaths attributed to SUIDS every year. It would require the Health and Human Services Department to update its materials and improve its training resources for primary providers.
In May 2019, Collins introduced the TICK Act with Democrat Tina Smith, legislation devoting over $100 million in new federal spending to address Lyme and other tick-borne diseases. Collins noted in a Senate floor speech that tick-borne diseases had become a larger public health issue in the last 15 years and presented "grave risks to our public health and serious harm to our families and communities".
|
30747613
|
https://en.wikipedia.org/wiki/OpenComRTOS
|
OpenComRTOS
|
OpenComRTOS is a commercial network-centric, formally developed real-time operating system, aimed primarily at the embedded systems market.
Overview
OpenComRTOS is a network-centric RTOS (real-time operating system) that was developed using formal methods. It transparently supports heterogeneous multi-processor systems, independently of the processor type (16-bit, 24-bit, 32-bit, 64-bit) and the communication medium (shared memory, buses, point-to-point links, or virtual links on top of existing communication mechanisms). Typical code size on a 32-bit target processor is about 5 KiBytes.
OpenComRTOS is based on the meta-modelling paradigm of Interacting Entities. In OpenComRTOS the unit of execution is a "Task" (a function with its local workspace or stack). Task entities synchronise and communicate through intermediate "Hub" entities that are decoupled from the interacting Tasks. Hubs are formally modelled as "Guarded Actions". The current implementation provides the functionality of traditional RTOS services like Events, Semaphores, Ports, FIFOs, Resources, Packet Pools and Memory Pools. Users can also create their own Hub types.
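To make the Hub-centred programming style concrete, the fragment below sketches two Tasks synchronising through a Semaphore Hub. It is a hypothetical sketch: the header name, the entry-point signature, and the L1_*_W service names (the _W suffix denoting a waiting call) imitate the naming style used in Altreonic's documentation, but none of them should be read as verbatim OpenComRTOS API.

    /* Hypothetical sketch only: the header name, entry-point signature and
       the L1_* service names are assumptions in the style of the vendor
       documentation, not verbatim API. */
    #include "L1_api.h"   /* assumed OpenComRTOS application header */

    /* Task entry points; SemaHub is a Semaphore Hub identifier that the
       Visual Designer would generate from the application topology diagram. */
    void ProducerTask_EP(L1_TaskArguments Arguments)
    {
        for (;;) {
            /* ... produce a work item ... */
            L1_SignalSemaphore_W(SemaHub);  /* signal the Hub, not a peer Task */
        }
    }

    void ConsumerTask_EP(L1_TaskArguments Arguments)
    {
        for (;;) {
            L1_TestSemaphore_W(SemaHub);    /* block until the Hub is signalled */
            /* ... consume the work item ... */
        }
    }

Because each Task names only the Hub and never its peer, the same source code runs unchanged whether the two Tasks share a processing node or sit on different processors, which the packet-based system services described below make transparent.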
OpenComRTOS uses a uniform architecture with a Kernel Task, driver Tasks and application Tasks, each having a Task input Port. The same interface is used for the Interrupt Service Routines.
The underlying architecture relies on the use of prioritised Packet switching with communication and routing being part of the underlying system services. One of the results is that the source code of the Tasks is independent of the mapping of Tasks and Hubs to the processing nodes in the target system.
History
The initial purpose for developing OpenComRTOS was to provide a software runtime environment supporting a coherent and unified systems engineering methodology based on Interacting Entities. This was originally developed by Open License Society since 2005, and since 2008 further developed and commercialised by Altreonic. A previously developed RTOS called Virtuoso served as a guideline. Virtuoso was a distributed RTOS, developed by Eonic Systems until the technology was sold to Wind River Systems in 2001. Its overall functionality of transparent parallel processing (called the Virtual Single Processor runtime model) was a major driving force to redevelop it in a better way. OpenComRTOS is conceptually a fourth generation of Virtuoso although it was a clean room development. The Virtuoso RTOS had its origin in the pioneering INMOS Transputer, a partial hardware implementation of C.A.R. Hoare's Communicating Sequential Processes (CSP) process algebra.
Most challenging applications:
An oil exploration system with 12,000 processors, featuring microcontrollers, fixed-point and floating-point DSPs, and a Linux host in a single network.
A sonar system with 1,600 floating-point DSPs.
The Rosetta and Giotto ESA space missions.
Converting a 400,000-line application running on a POSIX-style RTOS to OpenComRTOS.
Formal development approach
For the development of OpenComRTOS a systematic but iterative engineering process was followed. After requirements and specifications were defined, models were developed in Leslie Lamport's Temporal Logic of Actions (TLA+) and then model-checked with the corresponding TLC model checker. Based on these models the code was written, and a third person then created new models in TLA+ to verify that the implementation was still isomorphic to the models. The timer and associated time-out functionality for services were model-checked using the Uppaal model checker. In 2011 Springer published a book on the OpenComRTOS project.
OpenComRTOS Designer: development environment and tools
OpenComRTOS comes with a number of tools. Visual Designer is a visual modelling environment in which the user specifies the node topology and the application topology graphically; from these diagrams an application-specific runtime model is generated. Application-specific code is provided in ANSI-C for each task. Runtime execution, as well as inter-processor interactions, is visualised using the Event Tracer, and a System Inspector allows reading out and modifying the data structures.
Additional modules include host server modules (which allow any task access to the host node services) and a Safe Virtual Machine for C. The latter requires about 3 KiBytes (10 KiBytes for program and data) and allows dynamically downloading binary-compiled C code at runtime.
Portability
OpenComRTOS was developed for embedded systems and is written in portable ANSI-C, except for the context switch and the ISR interfaces.
OpenComRTOS has been ported to the following targets: Freescale PowerPC, Texas Instruments C66xx DSP, Melexis MLX16, ARM Cortex M3/4, Xilinx MicroBlaze, LEON3, NXP CoolFlux DSP, and to MS-Windows and Linux. The latter versions allow transparent integration of host nodes and also serve as cross-development and simulation systems. As the RTOS kernel is identical for single- or multi-processor nodes, supporting a multi-processor system requires only writing a small task-level driver that can send and receive Packets.
OpenComRTOS is made available in binary, source code and Open Technology licenses. The latter provides formal models, design documents, source code and test suites.
|
23370128
|
https://en.wikipedia.org/wiki/Ken%20Thompson
|
Ken Thompson
|
Kenneth Lane Thompson (born February 4, 1943) is an American pioneer of computer science. Thompson worked at Bell Labs for most of his career where he designed and implemented the original Unix operating system. He also invented the B programming language, the direct predecessor to the C programming language, and was one of the creators and early developers of the Plan 9 operating system. Since 2006, Thompson has worked at Google, where he co-developed the Go programming language.
Other notable contributions included his work on regular expressions and early computer text editors QED and ed, the definition of the UTF-8 encoding, and his work on computer chess that included the creation of endgame tablebases and the chess machine Belle. He won the Turing Award in 1983 with his long-term colleague Dennis Ritchie.
Early life and education
Thompson was born in New Orleans, Louisiana. When asked how he learned to program, Thompson stated, "I was always fascinated with logic and even in grade school I'd work on arithmetic problems in binary, stuff like that. Just because I was fascinated."
Thompson received a Bachelor of Science in 1965 and a master's degree in 1966, both in electrical engineering and computer science, from the University of California, Berkeley, where his master's thesis advisor was Elwyn Berlekamp.
Career and research
Thompson was hired by Bell Labs in 1966. In the 1960s at Bell Labs, Thompson and Dennis Ritchie worked on the Multics operating system. While writing Multics, Thompson created the Bon programming language. He also created a video game called Space Travel. After Bell Labs withdrew from the Multics project, Thompson found an old PDP-7 machine and rewrote Space Travel on it so he could keep playing the game. Eventually, the tools developed by Thompson became the Unix operating system: working on a PDP-7, a team of Bell Labs researchers led by Thompson and Ritchie, and including Rudd Canaday, developed a hierarchical file system, the concepts of computer processes and device files, a command-line interpreter, pipes for easy inter-process communication, and some small utility programs. In 1970, Brian Kernighan suggested the name "Unix", in a pun on the name "Multics". After initial work on Unix, Thompson decided that Unix needed a system programming language and created B, a precursor to Ritchie's C.
In the 1960s, Thompson also began work on regular expressions. Thompson had developed the CTSS version of the editor QED, which included regular expressions for searching text. QED and Thompson's later editor ed (the standard text editor on Unix) contributed greatly to the eventual popularity of regular expressions, and regular expressions became pervasive in Unix text processing programs. Almost all programs that work with regular expressions today use some variant of Thompson's notation. He also invented Thompson's construction algorithm used for converting regular expressions into nondeterministic finite automata in order to make expression matching faster.
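The core of the construction is small enough to show in code. The sketch below is a compact C illustration modelled on Russ Cox's well-known exposition of Thompson's 1968 CACM paper, not Thompson's original code; it builds an NFA from a postfix regular expression in which 'ab.' means "a then b", 'ab|' means "a or b", and 'a*' means "zero or more a", then prints the resulting states.

```c
#include <stdio.h>
#include <stdlib.h>

enum { Split = 256, Match = 257 };

typedef struct State State;
struct State {
    int c;              /* input character, or Split / Match */
    State *out, *out1;  /* outgoing edges (out1 used only by Split) */
    int id, seen;       /* bookkeeping for printing */
};

static State *newstate(int c, State *out, State *out1) {
    static int n = 0;
    State *s = calloc(1, sizeof *s);
    s->c = c; s->out = out; s->out1 = out1; s->id = n++;
    return s;
}

/* A fragment is a partial NFA: a start state plus a list of dangling
 * State* slots still to be connected. The list is threaded through the
 * unused pointer fields themselves (Cox's trick). */
typedef union Ptrlist Ptrlist;
union Ptrlist { Ptrlist *next; State *s; };
typedef struct { State *start; Ptrlist *out; } Frag;

static Ptrlist *list1(State **outp) {
    Ptrlist *l = (Ptrlist *)outp; l->next = NULL; return l;
}
static void patch(Ptrlist *l, State *s) {      /* point all slots at s */
    Ptrlist *next;
    for (; l; l = next) { next = l->next; l->s = s; }
}
static Ptrlist *append(Ptrlist *a, Ptrlist *b) {
    Ptrlist *head = a;
    while (a->next) a = a->next;
    a->next = b;
    return head;
}

static State *post2nfa(const char *postfix) {
    Frag stack[64], *sp = stack, e1, e2, e;
    State *s;
    for (; *postfix; postfix++) {
        switch (*postfix) {
        case '.':                                /* concatenation */
            e2 = *--sp; e1 = *--sp;
            patch(e1.out, e2.start);
            *sp++ = (Frag){e1.start, e2.out};
            break;
        case '|':                                /* alternation */
            e2 = *--sp; e1 = *--sp;
            s = newstate(Split, e1.start, e2.start);
            *sp++ = (Frag){s, append(e1.out, e2.out)};
            break;
        case '*':                                /* zero or more */
            e = *--sp;
            s = newstate(Split, e.start, NULL);
            patch(e.out, s);
            *sp++ = (Frag){s, list1(&s->out1)};
            break;
        default:                                 /* literal character */
            s = newstate(*postfix, NULL, NULL);
            *sp++ = (Frag){s, list1(&s->out)};
            break;
        }
    }
    e = *--sp;
    patch(e.out, newstate(Match, NULL, NULL));   /* accepting state */
    return e.start;
}

static void print_nfa(State *s) {
    if (!s || s->seen) return;
    s->seen = 1;
    if (s->c == Match)      printf("s%d: MATCH\n", s->id);
    else if (s->c == Split) printf("s%d: eps -> s%d, s%d\n",
                                   s->id, s->out->id, s->out1->id);
    else                    printf("s%d: '%c' -> s%d\n", s->id, s->c, s->out->id);
    print_nfa(s->out);
    print_nfa(s->out1);
}

int main(void) {
    print_nfa(post2nfa("ab|*"));  /* the NFA for the regex (a|b)* */
    return 0;
}
```

Each operator combines smaller NFA fragments and leaves a list of dangling arrows to be patched later, which is why the construction runs in time linear in the length of the expression.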
1970s
Throughout the 1970s, Thompson and Ritchie collaborated on the Unix operating system; they were so prolific on Research Unix that Doug McIlroy later wrote, "The names of Ritchie and Thompson may safely be assumed to be attached to almost everything not otherwise attributed." In a 2011 interview, Thompson stated that the first versions of Unix were written by him, and that Ritchie began to advocate for the system and helped to develop it:
Feedback from Thompson's Unix development was also instrumental in the development of the C programming language. Thompson would later say that the C language "grew up with one of the rewritings of the system and, as such, it became perfect for writing systems".
In 1975, Thompson took a sabbatical from Bell Labs and went to his alma mater, UC Berkeley. There, he helped to install Version 6 Unix on a PDP-11/70. Unix at Berkeley would later become maintained as its own system, known as the Berkeley Software Distribution (BSD).
In early 1976, Thompson wrote the initial version of Berkeley Pascal at the Computer Science Division, Department of Electrical Engineering and Computer Science, UC Berkeley (with extensive modifications and additions following later that year by William Joy, Charles Haley and faculty advisor Susan Graham).
Ken Thompson wrote a chess-playing program called "chess" for the first version of Unix (1971). Later, along with Joseph Condon, Thompson created the hardware-assisted program Belle, a world champion chess computer. He also wrote programs for generating the complete enumeration of chess endings, known as endgame tablebases, for all 4, 5, and 6-piece endings, allowing chess-playing computer programs to make "perfect" moves once a position stored in them is reached. Later, with the help of chess endgame expert John Roycroft, Thompson distributed his first results on CD-ROM. In 2001, the ICGA Journal devoted almost an entire issue to Ken Thompson's various contributions to computer chess.
1980s
In 1983, Thompson and Ritchie jointly received the Turing Award "for their development of generic operating systems theory and specifically for the implementation of the UNIX operating system". His acceptance speech, "Reflections on Trusting Trust", presented the persistent compiler backdoor attack now known as the Thompson hack or trusting trust attack, and is widely considered a seminal computer security work in its own right.
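The shape of the attack fits in a few lines. The toy below is only an illustration of the idea from the lecture, not code from it: a "compiler" that merely copies its input to its output, except that it quietly adds lines when it recognizes that it is compiling the login program or itself.

```c
#include <stdio.h>
#include <string.h>

/* Toy model of the trusting-trust attack (illustrative only). The
 * "compiler" is a pass-through filter, but it recognizes two inputs. */
int main(void) {
    char line[512];
    while (fgets(line, sizeof line, stdin)) {
        fputs(line, stdout);
        /* Pattern 1: compiling login -> add a hidden master password. */
        if (strstr(line, "/* program: login */"))
            puts("if (strcmp(pw, \"backdoor\") == 0) uid = 0;");
        /* Pattern 2: compiling the compiler -> re-insert both checks,
         * so the backdoor survives even after the compiler's source
         * has been audited and cleaned. */
        if (strstr(line, "/* program: compiler */"))
            puts("/* re-insert pattern 1 and pattern 2 here */");
    }
    return 0;
}
```

The second pattern is the crucial one: once a compromised binary exists, no amount of reading the compiler's source code will reveal the backdoor, since every compiler built with that binary re-inserts it.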
Throughout the 1980s, Thompson and Ritchie continued revising Research Unix, which adopted a BSD codebase for the 8th, 9th, and 10th editions. In the mid-1980s, work began at Bell Labs on a new operating system as a replacement for Unix. Thompson was instrumental in the design and implementation of Plan 9 from Bell Labs, a new operating system utilizing the principles of Unix but applying them more broadly to all major system facilities. Some programs that were part of later versions of Research Unix, such as mk and rc, were also incorporated into Plan 9.
Thompson tested early versions of the C++ programming language for Bjarne Stroustrup by writing programs in it, but later refused to work in C++ due to frequent incompatibilities between versions. In a 2009 interview, Thompson expressed a negative view of C++, stating, "It does a lot of things half well and it's just a garbage heap of ideas that are mutually exclusive."
1990s
In 1992, Thompson developed the UTF-8 encoding scheme together with Rob Pike. UTF-8 encoding has since become the dominant character encoding for the World Wide Web, accounting for more than 90% of all web pages in 2019.
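The scheme is compact enough to show directly. The function below is a minimal sketch of UTF-8's published layout (ASCII bytes unchanged; a leading byte whose high bits encode the sequence length; continuation bytes starting with the bits 10). It is an illustration, not code by Thompson or Pike, and omits validity checks such as rejecting surrogate code points.

```c
#include <stdio.h>

/* Encode one Unicode code point (up to U+10FFFF) into 1-4 UTF-8 bytes;
 * returns the number of bytes written. Validity checks omitted. */
static int utf8_encode(unsigned cp, unsigned char out[4]) {
    if (cp < 0x80) {                      /* 0xxxxxxx: plain ASCII */
        out[0] = (unsigned char)cp;
        return 1;
    }
    if (cp < 0x800) {                     /* 110xxxxx 10xxxxxx */
        out[0] = 0xC0 | (cp >> 6);
        out[1] = 0x80 | (cp & 0x3F);
        return 2;
    }
    if (cp < 0x10000) {                   /* 1110xxxx 10xxxxxx 10xxxxxx */
        out[0] = 0xE0 | (cp >> 12);
        out[1] = 0x80 | ((cp >> 6) & 0x3F);
        out[2] = 0x80 | (cp & 0x3F);
        return 3;
    }
    out[0] = 0xF0 | (cp >> 18);           /* 11110xxx + 3 continuations */
    out[1] = 0x80 | ((cp >> 12) & 0x3F);
    out[2] = 0x80 | ((cp >> 6) & 0x3F);
    out[3] = 0x80 | (cp & 0x3F);
    return 4;
}

int main(void) {
    unsigned char buf[4];
    int n = utf8_encode(0x20AC, buf);     /* U+20AC, the euro sign */
    for (int i = 0; i < n; i++)
        printf("%02X ", buf[i]);          /* prints: E2 82 AC */
    printf("\n");
    return 0;
}
```

Because a reader can always tell from a byte's top bits whether it starts a character, UTF-8 is self-synchronizing, one of the properties that drove its adoption.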
In the 1990s, work began on the Inferno operating system, another research operating system that was based around a portable virtual machine. Thompson and Ritchie continued their collaboration with Inferno, along with other researchers at Bell Labs.
2000s
In late 2000, Thompson retired from Bell Labs. He worked at Entrisphere, Inc. as a fellow until 2006 and now works at Google as a Distinguished Engineer. Recent work has included the co-design of the Go programming language. Referring to himself along with the other original authors of Go, he states:
According to a 2009 interview, Thompson now uses a Linux-based operating system.
Awards
National Academies
In 1980, Thompson was elected to the National Academy of Engineering for "designing UNIX, an operating system whose efficiency, breadth, power, and style have guided a generation's exploitation of minicomputers". In 1985 he was elected a Member of the National Academy of Sciences (NAS).
Turing Award
In 1983, Thompson and Ritchie jointly received the Turing Award "for their development of generic operating systems theory and specifically for the implementation of the UNIX operating system"; his acceptance speech, "Reflections on Trusting Trust", is discussed in the 1980s section above.
IEEE Richard W. Hamming Medal
In 1990, both Thompson and Dennis Ritchie received the IEEE Richard W. Hamming Medal from the Institute of Electrical and Electronics Engineers (IEEE), "for the origination of the UNIX operating system and the C programming language".
Fellow of the Computer History Museum
In 1997, both Thompson and Ritchie were inducted as Fellows of the Computer History Museum for "the co-creation of the UNIX operating system, and for development of the C programming language".
National Medal of Technology
On April 27, 1999, Thompson and Ritchie jointly received the 1998 National Medal of Technology from President Bill Clinton for co-inventing the UNIX operating system and the C programming language which together have "led to enormous advances in computer hardware, software, and networking systems and stimulated growth of an entire industry, thereby enhancing American leadership in the Information Age".
Tsutomu Kanai Award
In 1999, the Institute of Electrical and Electronics Engineers chose Thompson to receive the first Tsutomu Kanai Award "for his role in creating the UNIX operating system, which for decades has been a key platform for distributed systems work".
Japan Prize
In 2011, Thompson, along with Dennis Ritchie, was awarded the Japan Prize for Information and Communications for the pioneering work in the development of the Unix operating system.
Personal life
Ken Thompson is married and has a son.
See also
Brian Kernighan
Dennis Ritchie
References
Sources
External links
Ken Thompson, Bell Labs
Reflections on Trusting Trust 1983 Turing Award Lecture
Unix and Beyond: An Interview with Ken Thompson IEEE Computer Society
Ken Thompson: A Brief Introduction The Linux Information Project (LINFO)
Computer Chess Comes of Age: Photos Computer History Museum
Computer Chess Comes of Age: Video of Interview with Ken Thompson Computer History Museum
Reading Chess paper by HS Baird and Ken Thompson on optical character recognition
Members of the United States National Academy of Sciences
American computer scientists
American computer programmers
American technology writers
Turing Award laureates
Multics people
Unix people
Plan 9 people
Inferno (operating system) people
Scientists at Bell Labs
National Medal of Technology recipients
UC Berkeley College of Engineering alumni
Programming language designers
Google employees
1943 births
Living people
Members of the United States National Academy of Engineering
20th-century American inventors
People from New Orleans
Wolfenstein (2009 video game)
Wolfenstein is a first-person shooter video game developed by Raven Software and published by Activision, part of the Wolfenstein video game series. It serves as a loose sequel to the 2001 entry Return to Castle Wolfenstein, and uses an enhanced version of id Software's id Tech 4. The game was released in August 2009 for Microsoft Windows, PlayStation 3 and Xbox 360.
Wolfenstein received lukewarm to positive reviews from critics and sold poorly, moving a combined 100,000 copies within its first month. It was the final game id Software oversaw as an independent developer, released two months after the company's acquisition by ZeniMax Media in June 2009. The game was loosely succeeded by Wolfenstein: The New Order, released on May 20, 2014.
Plot and setting
The story is set during World War II in the fictional town of Isenstadt, where the Nazis have imposed martial law in order to excavate rare Nachtsonne crystals needed to access the "Black Sun" dimension. As the game progresses, happenings in Isenstadt become stranger (military patrols are replaced by supernatural creatures, etc.). Locations include the town's sewers, a tavern, a hospital, a farm, an underground mining facility, a church, the SS headquarters, a dig site and caverns, a cannery, a radio station, a paranormal base, a general's home, a castle, an airfield and a large zeppelin.
Story
In an introduction sequence, special agent William "B.J." Blazkowicz steals a medallion from a general on the German battleship Tirpitz. Discovered and captured, he unwittingly unleashes the power of the medallion, which kills all his foes for him. Hijacking a plane from the Tirpitz, he escapes and returns to OSA headquarters. During a meeting there, he learns that the medallion needs crystals called Nachtsonne, mined only in the German city of Isenstadt, to make use of its full power. The Nazis have begun digging for the crystals, led by a general named Viktor Zetta. Blazkowicz hands the medallion over to the OSA for further research. Shortly afterwards, he is sent to Isenstadt, but by the time he arrives by train, his cover has already been blown by an unknown informant. He then meets up with agents of the Kreisau Circle, a German resistance group dedicated to fighting the Nazis, and with their help makes it into Isenstadt.
In Isenstadt, Blazkowicz meets the brothers Stefan and Anton Kriege, who run the Black Market, where he can upgrade all of his weapons and powers. (He pays for upgrades with gold earned from missions or found scattered throughout the game.) He also meets the leader of the Kreisau Circle, a former schoolteacher named Caroline Becker, and her lieutenant Erik Engle. Becker sends Blazkowicz on a mission into a dig site, where he frees a young Russian named Sergei Kovlov. There he also finds an exact copy of the medallion he took from the Nazi warship, which Kovlov calls the Thule Medallion. Kovlov introduces Blazkowicz to the Golden Dawn, a group of scholars specializing in the occult, founded and led by Dr. Leonid Alexandrov, and shows him how to use the Thule Medallion. With a crystal provided by Kovlov, Blazkowicz is able to enter the Veil, a barrier between Earth and a dimension known as the Black Sun. In the Veil the player is able to run faster, spot enemies in the dark and walk through doors that bear the Black Sun symbol. Using the Veil, he manages to escape. As Blazkowicz completes more missions, he gains new weapons and new defensive and offensive powers for the Thule Medallion: the yellow crystal allows him to slow down time and dodge projectiles, the blue crystal deploys a shield around B.J. that grants him temporary invulnerability, and the red crystal greatly enhances the damage caused by his weapons. Through his missions he learns that the Nazis are trying to harness the power of the Black Sun dimension in order to turn the tide of the war against the Allies. Eventually, he manages to confront and kill General Zetta, who turns out to be a monster when viewed through the Veil. The Black Market, the Kreisau Circle, and the Golden Dawn then move to a new location in downtown Isenstadt to escape retaliation for Zetta's death.
Shortly after the move, Caroline Becker is captured and held in a nearby castle, and Blazkowicz helps the Kreisau Circle stage a rescue mission. He confronts Zetta's replacement, Obergruppenführer Wilhelm "Deathshead" Strasse, who is eager for revenge after Blazkowicz destroyed his Übersoldat program in Return to Castle Wolfenstein. During a struggle, Caroline appears to be killed by Hans Grosse, Deathshead's henchman. Upon Blazkowicz's return to Isenstadt, Stefan Kriege informs him that he has killed his own brother, Anton, believing Anton to be the mole who betrayed both Blazkowicz and Caroline. Blazkowicz then finds out that a Nazi superweapon, powered by Black Sun energy, is about to be fired at the city from a zeppelin that has been hovering over Isenstadt since he first arrived. He boards the airship, where he discovers that Dr. Alexandrov was the real traitor all along; Alexandrov's treachery is rewarded only with execution at the hands of Hans Grosse. To prepare the weapon, Deathshead and Grosse enter the Black Sun through a portal that Nazi scientists had excavated and reassembled, and Blazkowicz jumps in after them. In the Black Sun, he encounters Hans Grosse guarding the machine that powers Deathshead's superweapon. Grosse greets him in a mechanical suit outfitted with two chainguns (recreating his earlier appearance in Wolfenstein 3D) and a Thule Medallion identical to Blazkowicz's own. Blazkowicz kills Grosse by jamming the Nachtsonne crystals from his medallion into Grosse's. He then destroys the machine, but Deathshead flees through the portal before B.J. can capture him. The explosion takes out the portal and destabilizes the zeppelin, destroying all means of accessing the Black Sun dimension. B.J. grabs a parachute and leaps from the railing; shortly afterward, the zeppelin falls from the sky and crashes into the distant castle, severely damaging it in a giant series of explosions. In a post-credits cutscene, a wounded Deathshead is seen clambering out of the burning zeppelin and castle debris, screaming in frustration.
Development
Wolfenstein uses an improved version of id Software's id Tech 4 game engine, the technology behind Doom 3 and Enemy Territory: Quake Wars. The game was developed by Raven Software for Windows, PlayStation 3 and Xbox 360. The modifications to the game engine include depth-of-field effects, soft shadowing, post-processing effects, Havok physics, and the addition of a supernatural realm called the Veil. While in the Veil the player has access to certain special abilities, such as the power to slow down time, to get around obstacles that exist on Earth, or to defeat enemies that have an otherwise impenetrable shield (similar to "Spirit Walk" from the previous id Tech 4 title Prey). The actress Carrie Coon started her career by doing motion capture work for Wolfenstein. The multiplayer portion was developed by Endrant Studios. Wolfenstein was the first in a string of id Software games with no planned native Linux port (a pattern that continued from Rage onward), with the person in charge of Linux ports at id, Timothee Besset, commenting: "It is unlikely the new Wolfenstein title is going to get a native Linux release. None of it was done in house, and I had no involvement in the project."
On the day of Wolfenstein's release, a PC patch was released to address several issues with the online multiplayer component. The multiplayer development studio, Endrant Studios, laid off some of its workforce soon after completing development of Wolfenstein's multiplayer.
Motion comics
Four promotional motion comics, each about three minutes long, were released. Each was based on a particular installment in the Wolfenstein series and served as a nostalgic reminder. The first recreated Wolfenstein 3D's escape from Castle Wolfenstein, the killing of Hans Grosse and the final battle against Adolf Hitler. The second was based on Wolfenstein 3D's prequel game Spear of Destiny, and recreated its final battle, in which B.J. fights the cybernetic Death Knight and the Angel of Death for control of the Spear. The third comic was based on Return to Castle Wolfenstein and recreated the battle with Olaric, the destruction of an experimental V2 rocket and the final battle against Heinrich I. The fourth comic was based on Wolfenstein's own cinematic introduction and shows B.J. infiltrating a Nazi battleship and stealing the first Thule Medallion.
Reception
The game received "average" reviews on all platforms according to the review aggregation website Metacritic. IGN gave the game a positive review, though Jason Ocampo said of it, "...you can't help but wish that they developed the kernel of ideas in this game into something more. As it is, this new Wolfenstein comes off as an engaging, if otherwise forgettable, shooter."
411Mania gave the Xbox 360 version eight out of ten and said that it "holds up this tradition of mindless fun, although it doesn't do anything revolutionary." The Daily Telegraph gave the PlayStation 3 version seven out of ten and called it "a game that swings wildly in quality on an almost minute-by-minute basis, and a rather vanilla multiplayer offering doesn't do much to quicken the pulse." However, The A.V. Club gave the same console version a C+ and said that the multiplayer "feels jerky and unbalanced." Edge gave the same console version five out of ten and said, "For all its foibles, Raven's brand of brazen, aimless carnage is a gruesome thrill with just enough dynamism in each battle to keep its anachronistic heart beating." Ben "Yahtzee" Croshaw of Zero Punctuation found the game so dull that he resorted to writing his review in limerick form, though years later he viewed it much more kindly in comparison with more recent shooters such as Call of Duty.
As a result of low sales figures (only 100,000 copies were sold in its first month), Activision laid off employees from Raven Software. The game has been unavailable digitally on Xbox Live, PlayStation Network, or Steam since 2014 for unknown reasons.
References
External links
2009 video games
Activision games
Alternate history video games
Games for Windows certified games
Experimental medical treatments in fiction
Id Software games
Id Tech games
Multiplayer and single-player video games
Multiplayer online games
Nazism in fiction
PlayStation 3 games
Video game reboots
Video games about Nazi Germany
Video games developed in the United States
Video games scored by Bill Brown
Video games set in Germany
Video games set in psychiatric hospitals
Video games using Havok
Windows games
Wolfenstein
World War II first-person shooters
Xbox 360 games
Laocoön and His Sons
The statue of Laocoön and His Sons, also called the Laocoön Group, has been one of the most famous ancient sculptures ever since it was excavated in Rome in 1506 and placed on public display in the Vatican, where it remains. It is very likely the same statue that was praised in the highest terms by the main Roman writer on art, Pliny the Elder. The figures are near life-size and the group is a little over 2 m (6 ft 7 in) in height, showing the Trojan priest Laocoön and his sons Antiphantes and Thymbraeus being attacked by sea serpents.
The group has been called "the prototypical icon of human agony" in Western art, and unlike the agony often depicted in Christian art showing the Passion of Jesus and martyrs, this suffering has no redemptive power or reward. The suffering is shown through the contorted expressions of the faces (Charles Darwin pointed out that Laocoön's bulging eyebrows are physiologically impossible), which are matched by the struggling bodies, especially that of Laocoön himself, with every part of his body straining.
Pliny attributes the work, then in the palace of Emperor Titus, to three Greek sculptors from the island of Rhodes: Agesander, Athenodoros and Polydorus, but does not give a date or patron. In style it is considered "one of the finest examples of the Hellenistic baroque" and certainly in the Greek tradition, but it is not known whether it is an original work or a copy of an earlier sculpture, probably in bronze, or made for a Greek or Roman commission. The view that it is an original work of the 2nd century BC now has few if any supporters, although many still see it as a copy of such a work made in the early Imperial period, probably of a bronze original. Others see it as probably an original work of the later period, continuing to use the Pergamene style of some two centuries earlier. In either case, it was probably commissioned for the home of a wealthy Roman, possibly of the Imperial family. Various dates have been suggested for the statue, ranging from about 200 BC to the 70s AD, though "a Julio-Claudian date [between 27 BC and 68 AD] ... is now preferred".
Although mostly in excellent condition for an excavated sculpture, the group is missing several parts, and analysis suggests that it was remodelled in ancient times and has undergone a number of restorations since it was excavated. It is on display in the Museo Pio-Clementino, a part of the Vatican Museums.
Subject
The story of Laocoön, a Trojan priest, came from the Greek Epic Cycle on the Trojan Wars, though it is not mentioned by Homer. It had been the subject of a tragedy, now lost, by Sophocles and was mentioned by other Greek writers, though the events around the attack by the serpents vary considerably. The most famous account of these is now in Virgil's Aeneid (see the Aeneid quotation at the entry Laocoön), but this dates from between 29 and 19 BC, which is possibly later than the sculpture. However, some scholars see the group as a depiction of the scene as described by Virgil.
In Virgil, Laocoön was a priest of Poseidon who was killed with both his sons after attempting to expose the ruse of the Trojan Horse by striking it with a spear. In Sophocles, on the other hand, he was a priest of Apollo, who should have been celibate but had married. The serpents killed only the two sons, leaving Laocoön himself alive to suffer. In other versions he was killed for having had sex with his wife in the temple of Poseidon, or simply making a sacrifice in the temple with his wife present. In this second group of versions, the snakes were sent by Poseidon and in the first by Poseidon and Athena, or Apollo, and the deaths were interpreted by the Trojans as proof that the horse was a sacred object. The two versions have rather different morals: Laocoön was either punished for doing wrong, or for being right.
The snakes are depicted as both biting and constricting, and are probably intended as venomous, as in Virgil. Pietro Aretino thought so, praising the group in 1537:
...the two serpents, in attacking the three figures, produce the most striking semblances of fear, suffering and death. The youth embraced in the coils is fearful; the old man struck by the fangs is in torment; the child who has received the poison, dies.
In at least one Greek telling of the story the older son is able to escape, and the composition seems to allow for that possibility.
History
Ancient times
The style of the work is agreed to be that of the Hellenistic "Pergamene baroque" which arose in Greek Asia Minor around 200 BC, and whose best known undoubtedly original work is the Pergamon Altar, dated c. 180–160 BC, and now in Berlin. Here the figure of Alcyoneus is shown in a pose and situation (including serpents) which is very similar to those of Laocoön, though the style is "looser and wilder in its principles" than the altar.
The execution of the Laocoön is extremely fine throughout, and the composition very carefully calculated, even though it appears that the group underwent adjustments in ancient times. The two sons are rather small in scale compared to their father, but this adds to the impact of the central figure. The fine white marble used is often thought to be Greek, but has not been identified by analysis.
Pliny
In Pliny's survey of Greek and Roman stone sculpture in his encyclopedic Natural History (XXXVI, 37), he says:
....in the case of several works of very great excellence, the number of artists that have been engaged upon them has proved a considerable obstacle to the fame of each, no individual being able to engross the whole of the credit, and it being impossible to award it in due proportion to the names of the several artists combined. Such is the case with the Laocoön, for example, in the palace of the Emperor Titus, a work that may be looked upon as preferable to any other production of the art of painting or of [bronze] statuary. It is sculptured from a single block, both the main figure as well as the children, and the serpents with their marvellous folds. This group was made in concert by three most eminent artists, Agesander, Polydorus, and Athenodorus, natives of Rhodes.
It is generally accepted that this is the same work as is now in the Vatican. It is now very often thought that the three Rhodians were copyists, perhaps of a bronze sculpture from Pergamon, created around 200 BC. It is noteworthy that Pliny does not address this issue explicitly, in a way that suggests "he regards it as an original". Pliny states that it was located in the palace of the emperor Titus, and it is possible that it remained in the same place until 1506 (see "Findspot" section below). He also asserts that it was carved from a single piece of marble, though the Vatican work comprises at least seven interlocking pieces. The phrase translated above as "in concert" (de consilii sententia) is regarded by some as referring to their commission rather than the artists' method of working, giving in Nigel Spivey's translation: " [the artists] at the behest of council designed a group...", which Spivey takes to mean that the commission was by Titus, possibly even advised by Pliny among other savants.
The same three artists' names, though in a different order (Athenodoros, Agesander, and Polydorus), together with the names of their fathers, are inscribed on one of the sculptures at Tiberius's villa at Sperlonga (though they may predate his ownership), but it seems likely that not all three masters were the same individuals. Though broadly similar in style, many aspects of the execution of the two groups differ drastically, with the Laocoön group being of much higher quality and finish.
Some scholars used to think that honorific inscriptions found at Lindos in Rhodes dated Agesander and Athenodoros, recorded as priests, to a period after 42 BC, making the years 42 to 20 BC the most likely date for the Laocoön group's creation. However the Sperlonga inscription, which also gives the fathers of the artists, makes it clear that at least Agesander is a different individual from the priest of the same name recorded at Lindos, though very possibly related. The names may have recurred across generations, a Rhodian habit, within the context of a family workshop (which might well have included the adoption of promising young sculptors). Altogether eight "signatures" (or labels) of an Athenodoros are found on sculptures or bases for them, five of these from Italy. Some, including that from Sperlonga, record his father as Agesander. The whole question remains the subject of academic debate.
Renaissance
The group was unearthed in February 1506 in the vineyard of Felice De Fredis; informed of the fact, Pope Julius II, an enthusiastic classicist, sent for his court artists. Michelangelo was called to the site of the unearthing of the statue immediately after its discovery, along with the Florentine architect Giuliano da Sangallo and his eleven-year-old son Francesco da Sangallo, later a sculptor, who wrote an account over sixty years later:
The first time I was in Rome when I was very young, the pope was told about the discovery of some very beautiful statues in a vineyard near Santa Maria Maggiore. The pope ordered one of his officers to run and tell Giuliano da Sangallo to go and see them. So he set off immediately. Since Michelangelo Buonarroti was always to be found at our house, my father having summoned him and having assigned him the commission of the pope’s tomb, my father wanted him to come along, too. I joined up with my father and off we went. I climbed down to where the statues were when immediately my father said, "That is the Laocoön, which Pliny mentions". Then they dug the hole wider so that they could pull the statue out. As soon as it was visible everyone started to draw (or "started to have lunch"), all the while discoursing on ancient things, chatting as well about the ones in Florence.
Julius acquired the group on March 23, giving De Fredis a job as a scribe as well as the customs revenues from one of the gates of Rome. By August the group was placed for public viewing in a niche in the wall of the brand new Belvedere Garden at the Vatican, now part of the Vatican Museums, which regard this as the start of their history. As yet it had no base, which was not added until 1511, and from various prints and drawings from the time the older son appears to have been completely detached from the rest of the group.
In July 1798 the statue was taken to France in the wake of the French conquest of Italy, though the replacement parts were left in Rome. It was on display when the new Musée Central des Arts, later the Musée Napoléon, opened at the Louvre in November 1800. A competition was announced for new parts to complete the composition, but there were no entries. Some plaster sections by François Girardon, over 150 years old, were used instead. After Napoleon's final defeat at the Battle of Waterloo in 1815 most (but certainly not all) the artworks plundered by the French were returned, and the Laocoön reached Rome in January 1816.
Restorations
When the statue was discovered, Laocoön's right arm was missing, along with part of the hand of one child and the right arm of the other, and various sections of snake. The older son, on the right, was detached from the other two figures. The age of the altar used as a seat by Laocoön remains uncertain. Artists and connoisseurs debated how the missing parts should be interpreted. Michelangelo suggested that the missing right arms were originally bent back over the shoulder. Others, however, believed it was more appropriate to show the right arms extended outwards in a heroic gesture.
According to Vasari, in about 1510 Bramante, the Pope's architect, held an informal contest among sculptors to make replacement right arms, which was judged by Raphael, and won by Jacopo Sansovino. The winner, in the outstretched position, was used in copies but not attached to the original group, which remained as it was until 1532, when Giovanni Antonio Montorsoli, a pupil of Michelangelo, added his even more straight version of Laocoön's outstretched arm, which remained in place until modern times. In 1725–1727 Agostino Cornacchini added a section to the younger son's arm, and after 1816 Antonio Canova tidied up the group after their return from Paris, without being convinced by the correctness of the additions but wishing to avoid a controversy.
In 1906 Ludwig Pollak, archaeologist, art dealer and director of the Museo Barracco, discovered a fragment of a marble arm in a builder's yard in Rome, close to where the group was found. Noting a stylistic similarity to the Laocoön group, he presented it to the Vatican Museums: it remained in their storerooms for half a century. In 1957 the museum decided that this arm (bent, as Michelangelo had suggested) had originally belonged to this Laocoön, and replaced it. According to Paolo Liverani: "Remarkably, despite the lack of a critical section, the join between the torso and the arm was guaranteed by a drill hole on one piece which aligned perfectly with a corresponding hole on the other."
In the 1980s the statue was dismantled and reassembled, again with the Pollak arm incorporated. The restored portions of the children's arms and hands were removed. In the course of disassembly, it was possible to observe breaks, cuttings, metal tenons, and dowel holes which suggested that in antiquity, a more compact, three-dimensional pyramidal grouping of the three figures had been used or at least contemplated. According to Seymour Howard, both the Vatican group and the Sperlonga sculptures "show a similar taste for open and flexible pictorial organization that called for pyrotechnic piercing and lent itself to changes at the site, and in new situations".
The more open, planographic composition along a plane, used in the restoration of the Laocoön group, has been interpreted as "apparently the result of serial reworkings by Roman Imperial as well as Renaissance and modern craftsmen". A different reconstruction was proposed by Seymour Howard, to give "a more cohesive, baroque-looking and diagonally-set pyramidal composition", by turning the older son as much as 90°, with his back to the side of the altar, and looking towards the frontal viewer rather than at his father. The findings Seymour Howard documented do not change his belief about the organization of the original. But dating the reworked coil ends by measuring the depth of the surface crust and comparing the metal dowels in the original and reworked portions allows one to determine the provenance of the parts and the sequence of the repairs. Other suggestions have been made.
Influence
The discovery of the Laocoön made a great impression on Italian artists and continued to influence Italian art into the Baroque period. Michelangelo is known to have been particularly impressed by the massive scale of the work and its sensuous Hellenistic aesthetic, particularly its depiction of the male figures. The influence of the Laocoön, as well as the Belvedere Torso, is evidenced in many of Michelangelo's later sculptures, such as the Rebellious Slave and the Dying Slave, created for the tomb of Pope Julius II. Several of the ignudi and the figure of Haman in the Sistine Chapel ceiling draw on the figures. Raphael used the face of Laocoön for his Homer in his Parnassus in the Raphael Rooms, expressing blindness rather than pain.
The Florentine sculptor Baccio Bandinelli was commissioned to make a copy by the Medici Pope Leo X. Bandinelli's version, which was often copied and distributed in small bronzes, is in the Uffizi Gallery, Florence, the Pope having decided it was too good to send to François I of France as originally intended. A bronze casting, made for François I at Fontainebleau from a mold taken from the original under the supervision of Primaticcio, is at the Musée du Louvre. There are many copies of the statue, including a well-known one in the Grand Palace of the Knights of St. John in Rhodes. Many still show the arm in the outstretched position, but the copy in Rhodes has been corrected.
The group was rapidly depicted in prints as well as small models, and became known all over Europe. Titian appears to have had access to a good cast or reproduction from about 1520, and echoes of the figures begin to appear in his works, two of them in the Averoldi Altarpiece of 1520–1522. A woodcut, probably after a drawing by Titian, parodied the sculpture by portraying three apes instead of humans. It has often been interpreted as a satire on the clumsiness of Bandinelli's copy, or as a commentary on debates of the time around the similarities between human and ape anatomy. It has also been suggested that this woodcut was one of a number of Renaissance images that were made to reflect contemporary doubts as to the authenticity of the Laocoön Group, the 'aping' of the statue referring to the incorrect pose of the Trojan priest who was depicted in ancient art in the traditional sacrificial pose, with his leg raised to subdue the bull. Over 15 drawings of the group made by Rubens in Rome have survived, and the influence of the figures can be seen in many of his major works, including his Descent from the Cross in Antwerp Cathedral.
As noted above, the original was seized and taken to Paris by Napoleon Bonaparte's forces after his conquest of Italy, and installed in a place of honour in the Musée Napoléon at the Louvre. Following the fall of Napoleon, it was returned by the Allies to the Vatican in 1816.
Laocoön as an ideal of art
Pliny's description of the Laocoön as "a work to be preferred to all that the arts of painting and sculpture have produced" has led to a long tradition of debate over whether the sculpture is the greatest of all artworks. Johann Joachim Winckelmann (1717–1768) wrote about the paradox of admiring beauty while seeing a scene of death and failure. The most influential contribution to the debate, Gotthold Ephraim Lessing's essay Laocoon: An Essay on the Limits of Painting and Poetry, examines the differences between visual and literary art by comparing the sculpture with Virgil's verse. He argues that the artists could not realistically depict the physical suffering of the victims, as this would be too painful. Instead, they had to express suffering while retaining beauty.
Johann Wolfgang von Goethe said the following in his essay Upon the Laocoon: "A true work of art, like a work of nature, never ceases to open boundlessly before the mind. We examine, we are impressed with it, it produces its effect; but it can never be all comprehended, still less can its essence, its value, be expressed in words."
The most unusual intervention in the debate, William Blake's annotated print Laocoön, surrounds the image with graffiti-like commentary in several languages, written in multiple directions. Blake presents the sculpture as a mediocre copy of a lost Israelite original, describing it as "Jehovah & his two Sons Satan & Adam as they were copied from the Cherubim Of Solomons Temple by three Rhodians & applied to Natural Fact or History of Ilium". This reflects Blake's theory that the imitation of ancient Greek and Roman art was destructive to the creative imagination, and that Classical sculpture represented a banal naturalism in contrast to Judeo-Christian spiritual art.
The central figure of Laocoön served as loose inspiration for the Indian in Horatio Greenough's The Rescue (1837–1850) which stood before the east facade of the United States Capitol for over 100 years.
Near the end of Charles Dickens' 1843 novella, A Christmas Carol, Ebenezer Scrooge self-describes "making a perfect Laocoön of himself with his stockings" in his hurry to dress on Christmas morning.
John Ruskin disliked the sculpture and compared its "disgusting convulsions" unfavourably with work by Michelangelo, whose fresco of The Brazen Serpent, on a corner pendentive of the Sistine Chapel, also involves figures struggling with snakes: the fiery serpents of the Book of Numbers. He invited contrast between the "meagre lines and contemptible tortures of the Laocoon" and the "awfulness and quietness" of Michelangelo, saying "the slaughter of the Dardan priest" was "entirely wanting" in sublimity. Furthermore, he attacked the composition on naturalistic grounds, contrasting the carefully studied human anatomy of the restored figures with the unconvincing portrayal of the snakes.
In 1910 the critic Irving Babbitt used the title The New Laokoon: An Essay on the Confusion of the Arts for an essay on contemporary culture at the beginning of the 20th century. In 1940 Clement Greenberg adapted the concept for his own essay entitled Towards a Newer Laocoön, in which he argued that abstract art now provided an ideal for artists to measure their work against. A 2007 exhibition at the Henry Moore Institute in turn copied this title while exhibiting work by modern artists influenced by the sculpture.
Findspot
The location where the buried statue was found in 1506 was always known to be "in the vineyard of Felice De Fredis" on the Oppian Hill (the southern spur of the Esquiline Hill), as noted in the document recording the sale of the group to the Pope. But over time, knowledge of the site's precise location was lost, beyond "vague" statements such as Sangallo's "near Santa Maria Maggiore" (see above) or it being "near the site of the Domus Aurea" (the palace of the Emperor Nero); in modern terms near the Colosseum. An inscribed plaque of 1529 in the church of Santa Maria in Aracoeli records the burial of De Fredis and his son there, covering his finding of the group but giving no occupation. Research published in 2010 has recovered two documents in the municipal archives (badly indexed, and so missed by earlier researchers), which have established a much more precise location for the find: slightly to the east of the southern end of the Sette Sale, the ruined cistern for the successive imperial baths at the base of the hill by the Colosseum.
The first document records De Fredis' purchase of a vineyard of about 1.5 hectares from a convent for 135 ducats on 14 November 1504, exactly 14 months before the finding of the statue. The second document, from 1527, makes it clear that there is now a house on the property, and clarifies the location; by then De Fredis was dead and his widow rented out the house. The house appears on a map of 1748, and still survives as a substantial building of three storeys, in the courtyard of a convent. The area remained mainly agricultural until the 19th century, but is now entirely built up. It is speculated that De Fredis began building the house soon after his purchase, and as the group was reported to have been found some four metres below ground, at a depth unlikely to be reached by normal vineyard-digging operations, it seems likely that it was discovered when digging the foundations for the house, or possibly a well for it.
The findspot was inside and very close to the Servian Wall, which was still maintained in the 1st century AD (possibly converted to an aqueduct), though no longer the city boundary, as building had spread well beyond it. The spot was within the Gardens of Maecenas, founded by Gaius Maecenas the ally of Augustus and patron of the arts. He bequeathed the gardens to Augustus in 8 BC, and Tiberius lived there after he returned to Rome as heir to Augustus in 2 AD. Pliny said the Laocoön was in his time at the palace of Titus (qui est in Titi imperatoris domo), then heir to his father Vespasian, but the location of Titus's residence remains unknown; the imperial estate of the Gardens of Maecenas may be a plausible candidate. If the Laocoön group was already in the location of the later findspot by the time Pliny saw it, it might have arrived there under Maecenas or any of the emperors. The extent of the grounds of Nero's Domus Aurea is now unclear, but they do not appear to have extended so far north or east, though the newly rediscovered findspot-location is not very far beyond them.
Notes
References
Barkan, Leonard, Unearthing the Past: Archaeology and Aesthetics in the Making of Renaissance Culture, 1999, Yale University Press
Beard, Mary, "Arms and the Man: The restoration and reinvention of classical sculpture", Times Literary Supplement, 2 February 2001 (subscription required); reprinted in Confronting the Classics: Traditions, Adventures and Innovations, 2013, EBL ebooks online, Profile Books, Google Books
Boardman, John, ed., The Oxford History of Classical Art, 1993, OUP
"Chronology": Frischer, Bernard, Digital Sculpture Project: Laocoon, "An Annotated Chronology of the "Laocoon" Statue Group", 2009
Clark, Kenneth, The Nude: A Study in Ideal Form, orig. 1949, various edns, page refs from Pelican edn of 1960
Cook, R.M., Greek Art, Penguin, 1986 (reprint of 1972)
Farinella, Vincenzo, Vatican Museums: Classical Art, 1985, Scala
Haskell, Francis, and Penny, Nicholas, Taste and the Antique: The Lure of Classical Sculpture 1500–1900, 1981, Yale University Press, cat. no. 52, pp. 243–47
Herrmann, Ariel, review of Sperlonga und Vergil by Roland Hampe, The Art Bulletin, Vol. 56, No. 2, Medieval Issue (Jun. 1974), pp. 275–277
Howard, Seymour, "Laocoon Rerestored", American Journal of Archaeology, Vol. 93, No. 3 (Jul. 1989), pp. 417–422
Isager, Jacob, Pliny on Art and Society: The Elder Pliny's Chapters on the History of Art, 2013, Routledge, Google Books
Rice, E. E., "Prosopographika Rhodiaka", The Annual of the British School at Athens, Vol. 81 (1986), pp. 209–250
Spivey, Nigel, Enduring Creation: Art, Pain, and Fortitude, 2001, University of California Press
Smith, R.R.R., Hellenistic Sculpture: A Handbook, Thames & Hudson, 1991
Stewart, A., "To Entertain an Emperor: Sperlonga, Laokoon and Tiberius at the Dinner-Table", The Journal of Roman Studies, Vol. 67 (1977), pp. 76–90
"Volpe and Parisi": Digital Sculpture Project: Laocoon, "Laocoon: The Last Enigma", translation by Bernard Frischer of Volpe, Rita, and Parisi, Antonella, "Laocoonte. L'ultimo enigma", Archeo 299, January 2010, pp. 26–39
Warden, P. Gregory, "The Domus Aurea Reconsidered", Journal of the Society of Architectural Historians, Vol. 40, No. 4 (Dec. 1981), pp. 271–278
External links
University of Virginia's Digital Sculpture Project 3D models, bibliography, annotated chronology of the Laocoon
Laocoon photos
Laocoon and his Sons in the Census database
FlickR group "Responses To Laocoön", a collection of art inspired by the Laocoön group
Lessing's Laocoon etext on books.google.com
Laocoonte: variazioni sul mito ("Laocoön: variations on the myth"), with a gallery of the literary and iconographic sources on Laocoön, edited by the Centro studi classicA, La Rivista di Engramma no. 50, July/September 2006
Note on the Sperlonga cycle and its relation to the Vatican Laocoön ("Nota sul ciclo di Sperlonga e sulle relazioni con il Laocoonte Vaticano"), edited by the Centro studi classicA, La Rivista di Engramma no. 50, July/September 2006
Note on the interpretations of the passage in Pliny, Nat. Hist. XXXVI, 37 ("Nota sulle interpretazioni del passo di Plinio"), edited by the Centro studi classicA, La Rivista di Engramma no. 50, July/September 2006
Chronological record of the restorations of the Laocoön ("Scheda cronologica dei restauri del Laocoonte"), by Marco Gazzola, La Rivista di Engramma no. 50, July/September 2006
1st-century BC sculptures
1506 archaeological discoveries
Antiquities acquired by Napoleon
Hellenistic sculpture
Roman copies of Greek sculptures
Sculptures of the Vatican Museums
Tourist attractions in Rome
Archaeological discoveries in Italy
Hellenistic-style Roman sculptures
Nude sculptures
Snakes in art
Open Web Analytics
Open Web Analytics (OWA) is open-source web analytics software created by Peter Adams. OWA is written in PHP and uses a MySQL database, which allows it to run on an AMP solution stack on various web servers. OWA is comparable to Google Analytics, though OWA is server software that anyone can install and run on their own host, while Google Analytics is a software service offered by Google. OWA supports tracking with WordPress and MediaWiki, two popular web site frameworks. The application helps webmasters track and analyze visits to their websites; it can also be used to track competitors' websites and to compare a company's growth against the site in question.
See also
List of web analytics software
References
External links
Web software
Web analytics
Free web analytics software
Free software programmed in PHP
WordPress
Outline of chess
The following outline is provided as an overview of and topical guide to chess:
Chess is a two-player board game played on a chessboard (a square-checkered board with 64 squares arranged in an eight-by-eight grid). In a chess game, each player begins with sixteen pieces: one king, one queen, two rooks, two knights, two bishops, and eight pawns. The object of the game is to checkmate the opponent's king, whereby the king is under immediate attack (in "check") and there is no way to remove or defend it from attack, or force the opposing player to forfeit.
Nature of chess
Chess can be described as all of the following:
Form of entertainment – form of activity that holds the attention and interest of an audience, or gives pleasure and delight.
Form of recreation – activity of leisure, leisure being discretionary time.
Form of play – voluntary, intrinsically motivated activity normally associated with recreational pleasure and enjoyment.
Game – structured playing, usually undertaken for enjoyment and sometimes used as an educational tool. Games are distinct from work, which is usually carried out for remuneration, and from art, which is more often an expression of aesthetic or ideological elements. However, the distinction is not clear-cut, and many games are also considered to be work (such as professional players of spectator sports/games) or art (such as jigsaw puzzles or games involving an artistic layout such as Mahjong, solitaire, or some video games).
Board game – game in which counters or pieces are placed, removed, or moved on a premarked surface or "board" according to a set of rules. Games may be based on pure strategy, chance or a mixture of the two and usually have a goal which a player aims to achieve.
Strategy game – game (e.g. computer, video or board game) in which the players' uncoerced, and often autonomous decision-making skills have a high significance in determining the outcome. Almost all strategy games require internal decision tree style thinking, and typically very high situation awareness.
Two-player game – game played by just two players, usually against each other.
Sport – form of play, but sport is also a category of entertainment in its own right (see immediately below for description)
Sport – organized, competitive, entertaining, and skillful activity requiring commitment, strategy, and fair play, in which a winner can be defined by objective means. It is governed by a set of rules or customs. Chess is recognized as a sport by the International Olympic Committee.
Mind sport – game where the outcome is determined mainly by mental skill, rather than by pure chance.
Chess equipment
Essential equipment
Chessboard – board with 64 squares (eight rows and eight columns) arranged in two alternating colors (light and dark). The colors are called "black" and "white", although the actual colors vary: usually they are dark green and buff for boards used in competition, and often natural shades of light and dark woods for home boards. Chess boards can be built into chess tables, or dispensed with (along with pieces) if playing mental chess, computer chess, Internet chess and sometimes correspondence chess.
Rank – horizontal row of squares on the chessboard.
File – vertical column of squares on the chessboard (i.e. running in the direction from one player to the other).
Chess set – all the pieces required to play a game of chess. Chess sets come in various materials and styles, and some are considered collectors' items and works of art. The most popular style for competitive play is the Staunton chess set, named after Howard Staunton; its pieces are described below, although some regions have alternative standard shapes for some pieces. The relative point values given are approximate and depend on the current game situation.
Chess pieces – two armies of 16 chess pieces, one army designated "white", the other "black". Each player controls one of the armies for the entire game. The pieces in each army include:
1 king – most important piece, and one of the weakest (until the endgame). The object of the game is checkmate: placing the enemy king in check in such a way that it cannot escape capture on the next move. The top of the piece bears a cross.
1 queen – most powerful piece in the game, with a relative value of 9 points. The top of the piece is crown-like. Official tournament chess sets have 2 queens of each color, to deal with pawns being promoted.
2 rooks – look like castle towers and have a relative value of 5 points each.
2 bishops – stylized after mitres (bishops' hats), and have a relative value of 3 points each.
2 knights – usually look like horse heads and have a relative value of 3 points each.
8 pawns – smallest pieces in the game, each topped by a ball. Pawns have a relative value of 1 point each.
Specialized equipment
Game clock – dual timer used to monitor each player's thinking time. Only the timer of the player who is to move is active. Used for speed chess, and to regulate time in tournament games.
Score sheet and writing implement – Tournament games require scores to be kept, and many players like to record other games for later analysis.
Rules of chess
The modern rules of chess (and breaking them) are discussed in separate articles, and briefly in the following subsections:
Rules of chess – rules governing the play of the game of chess.
White and Black in chess – one set of pieces is designated "white" and the other is designated "black". White moves first. Some older sets used white and red; some modern sets use tan and brown.
Cheating in chess – methods that have been used to gain an unfair advantage by breaking the rules.
Initial set up
Initial set up – initial placement of the pieces on the chessboard before any moves are made.
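One common way to represent this placement in software is an 8×8 array of characters. The convention below (uppercase for White, lowercase for Black, '.' for an empty square, with rank 8 as the top row) belongs to this sketch, not to the rules themselves.

```c
#include <stdio.h>

/* The standard initial position as an 8x8 character array. */
static const char initial[8][9] = {
    "rnbqkbnr",   /* rank 8: Black's back rank  */
    "pppppppp",   /* rank 7: Black's pawns      */
    "........",
    "........",
    "........",
    "........",
    "PPPPPPPP",   /* rank 2: White's pawns      */
    "RNBQKBNR",   /* rank 1: White's back rank  */
};

int main(void) {
    for (int r = 0; r < 8; r++)
        printf("%d %s\n", 8 - r, initial[r]);   /* rank number + squares */
    printf("  abcdefgh\n");                     /* file letters */
    return 0;
}
```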
Moves
Capture – move of a piece to a square occupied by an opposing piece, which is removed from the board and from play.
Check – situation in which the king would be subject to capture (but the king is never actually captured).
Checkmate – a winning move which makes capture of the opposing king inevitable.
How each piece moves
Moving a pawn – pawns move straight forward one square at a time, but capture diagonally (within a one-square range). On its first move, a pawn may instead move two squares forward (no capture is allowed on a two-square move). Pawns are also subject to the en passant and promotion rules (see below).
En passant – on the very next move after a player moves a pawn two squares forward from its starting position, an opposing pawn that is guarding the skipped square may capture the pawn (taking it "as it passes"), by moving to the passed square as if the pawn had stopped there. If this is not done on the very next move, the right to do so is lost.
Pawn promotion – when a pawn reaches its eighth rank it is exchanged for the player's choice of a queen, rook, bishop or knight (usually a queen, since it is the most powerful piece).
Moving a knight – knights move two squares horizontally and one square vertically from their original position, or two squares vertically and one square horizontally, jumping directly to the destination while ignoring any pieces in the intervening spaces.
Moving a bishop – bishops move any distance in a straight line in either direction along squares connected diagonally. One bishop in each army moves diagonally on white squares only, and the other bishop is restricted to moving along black squares.
Moving a rook – rook may move any distance along a rank or a file (forward, backward, left, or right), and can also be used for castling (see below).
Castling – special move available to each player once in the game (with restrictions, see below) where the king is moved two squares to the left or right and the rook on that side is moved to the other side of the king.
Requirements for castling – Castling is legal if the following conditions are all met (a code sketch of this check appears after the piece-movement list below):
1. Neither the king nor the rook involved have previously moved.
2. There are no pieces in between the king and chosen rook.
3. The king is not currently in check. (For clarification, the involved rook may be currently under attack. Additionally, the king may have previously been in check, as long as the king did not move to resolve it.)
4. The king does not pass through a square that is under attack by an enemy piece. (For clarification, the rook may pass through a square that is under attack by an enemy piece; the only such square is the one adjacent to the rook when castling queenside, b1 for White and b8 for Black.)
5. The king does not end in a square that is under attack by an enemy piece.
Moving the queen – queen can move like a rook or like a bishop (horizontally, vertically, or diagonally), but no castling.
Moving the king – king may move one square in any direction, but may not move into check. It may also make a special move called "castling" (see above).
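The five castling conditions translate directly into a legality check. The sketch below is a minimal illustration for White's kingside castle only, over deliberately simplified inputs (the first-rank pieces and the squares Black attacks passed in as arrays); it is a sketch of the rule, not a complete move-legality engine.

```c
#include <stdbool.h>
#include <stdio.h>

/* Minimal sketch of the five castling conditions, for White's kingside
 * castle. board[file] holds the piece on rank 1 ('\0' = empty, files
 * a..h = indices 0..7), attacked[file] says whether Black attacks that
 * rank-1 square, and the two flags track condition 1. */
bool can_castle_kingside(const char board[8], const bool attacked[8],
                         bool king_moved, bool rook_moved) {
    if (king_moved || rook_moved) return false;  /* condition 1 */
    if (board[5] || board[6])     return false;  /* condition 2: f1, g1 empty */
    if (attacked[4])              return false;  /* condition 3: e1 not in check */
    if (attacked[5])              return false;  /* condition 4: f1 not attacked */
    if (attacked[6])              return false;  /* condition 5: g1 not attacked */
    return true;
}

int main(void) {
    /* Back rank with the f1 bishop and g1 knight already developed. */
    const char board[8]    = {'R', 'N', 'B', 'Q', 'K', 0, 0, 'R'};
    const bool attacked[8] = {false};
    printf("castling %s\n",
           can_castle_kingside(board, attacked, false, false)
               ? "allowed" : "not allowed");
    return 0;
}
```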
End of the game
Resigning – a player may end the game by resigning, which cedes victory to the opponent.
Checkmate – object of the game – a king is in check and has no move to get out of check, losing the game.
Draw – neither side wins or loses. In competition this usually counts as a half-win for each player.
Draw by agreement – players may agree that the game is a draw.
Stalemate – if the player whose turn it is to move has no legal move and is not in check, the game is a draw by stalemate.
Fifty-move rule – if in the last fifty moves by each side no pawn has moved and no capture has been made, a player may claim a draw.
Threefold repetition – if the same position has occurred three times with the same player to move, a player may claim a draw (a tracking sketch for this rule and the fifty-move rule follows this section).
Perpetual check – situation in which one king cannot escape an endless series of checks but cannot be checkmated. Perpetual check was formerly an explicit drawing rule and is still used informally, but it has been superseded by the threefold repetition and fifty-move rules, which cover it implicitly.
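A minimal sketch of how the threefold-repetition and fifty-move conditions can be tracked in code. It assumes each position has been reduced to a hashable key; in a real implementation the key must also encode the side to move, castling rights, and en passant availability for two positions to count as "the same".

    from collections import Counter

    class DrawTracker:
        """Track the threefold-repetition and fifty-move draw conditions."""

        def __init__(self):
            self.position_counts = Counter()
            self.halfmoves_since_progress = 0

        def record_move(self, position_key, was_pawn_move, was_capture):
            # The fifty-move counter resets on any pawn move or capture.
            if was_pawn_move or was_capture:
                self.halfmoves_since_progress = 0
            else:
                self.halfmoves_since_progress += 1
            self.position_counts[position_key] += 1

        def draw_claimable(self, position_key):
            # Fifty moves by each side = 100 halfmoves without progress.
            return (self.position_counts[position_key] >= 3
                    or self.halfmoves_since_progress >= 100)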
Competition rules and other features
Adjournment – play stops, and the game is resumed later. This has become rare since the advent of computer analysis of chess games.
Chess notation – system of recording chess moves.
Algebraic chess notation – most common method of recording moves (a square-naming sketch follows this section).
Descriptive chess notation – obsolete method of recording moves; it was widely used, especially in English- and Spanish-speaking countries, and is still sometimes seen.
Draw by agreement – the two players agree to call the game a draw, as neither is likely to win.
Time control – each player must complete either a specified number of moves or all of his moves before a certain time elapses on his game clock.
Touch-move rule – if a player touches his own piece, he must move it if it has a legal move. If he touches an opponent's piece, he must capture it if he can legally.
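As a small taste of algebraic notation in code, here is a toy sketch that converts a square index to its algebraic name, assuming the common a1 = 0, h8 = 63 numbering used in many programs:

    def square_name(sq):
        """Convert a 0..63 square index (a1 = 0, h8 = 63) to its
        algebraic-notation name, e.g. 4 -> 'e1'."""
        return "abcdefgh"[sq % 8] + str(sq // 8 + 1)

    # A king move to f2 is written "Kf2" in algebraic notation.
    print("K" + square_name(13))  # Kf2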
Minor variants
Blindfold chess – one or both players play without seeing the board and pieces.
Chess handicap – one of the players gives a handicap to the other player, usually starting the game without a certain piece.
Fast chess – chess played with a time control limiting each player to a specified time of 60 minutes or less (can be as low as 1 minute).
Gameplay
Blunder – very bad move.
Candidate move – move that upon initial observation of the position, warrants further analysis. Spotting these moves is the key to higher-level play.
Compensation – positional or other advantages that offset a material or other disadvantage.
Chess handicap – way to enable a weaker player to have a chance of winning against a stronger one. There are a variety of such handicaps, such as material odds (the stronger player surrenders a certain piece or pieces), extra moves (the weaker player has an agreed number of extra moves at the beginning of the game), extra time on the chess clock, and special conditions (such as requiring the odds-giver to deliver checkmate with a specified piece or pawn). Various permutations of these, such as "pawn and two moves", are also possible.
Chess piece relative value – conventional values assigned to the pieces, reflecting their relative power (a material-counting sketch follows this section).
Premove – used in fast online games, it refers to a player making his next move while his opponent is thinking about his move. After the opponent's move, the premove will be made, if legal, taking only 0.1 seconds on the game clock.
Priyome – typical maneuver or technique in chess.
Ply – half-turn, that is, one player's portion of a turn.
Tempo – a "unit" similar to time, equal to one chess move, e.g. to lose a tempo is to waste a move or give the opponent the opportunity of an extra move. Sometimes a player may want to lose a tempo.
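To illustrate the conventional relative values (pawn 1, knight and bishop 3, rook 5, queen 9; these exact numbers are a common convention rather than a fixed rule), here is a minimal material-counting sketch:

    # Conventional relative values; engines and authors tune these.
    PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

    def material_balance(white_pieces, black_pieces):
        """Positive: White is ahead in material; negative: Black is."""
        def total(pieces):
            return sum(PIECE_VALUES[p] for p in pieces)
        return total(white_pieces) - total(black_pieces)

    # A side that has won "the exchange" (rook for bishop) is up 5 - 3 = 2.
    print(material_balance(["R", "P"], ["B", "P"]))  # 2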
General situations
En prise – when an unguarded piece is in position to be captured.
Initiative – situational advantage in which a player can make threats that cannot be ignored, forcing the opponent to use his turns to respond to threats rather than make his own.
Transposition – sequence of moves resulting in a position which may also be reached by another common sequence of moves. Transpositions are particularly common in openings, where a given position may be reached by different sequences of moves. Players sometimes use transpositions deliberately in order to avoid variations they dislike, lure opponents into unfamiliar or uncomfortable territory or simply to worry opponents.
Time trouble – having little thinking time left in a timed game, thereby increasing the likelihood of making weak or losing moves or overlooking strong or winning moves.
Zugzwang – situation in which a player would prefer to pass and make no move, because he has no move that does not worsen his position.
Pawn structure
Pawn structure – describes features of the positions of the pawns. Pawn structure may be used for tactical or strategic effect, or both.
Backward pawn – pawn that is behind all pawns of the same color on the adjacent files and cannot safely advance.
Connected pawns – pawns of the same color on adjacent files so that they can protect each other.
Doubled pawns – two pawns of the same color on the same file, so that one blocks the other.
Half-open file – file that has pawns of one color only.
Isolated pawn – pawn with no pawns of the same color on adjacent files.
Maróczy Bind – formation with white pawns on c4 and e4, after the exchange of White's d-pawn for Black's c-pawn.
Open file – file with no pawns of either color.
Passed pawn – pawn that can advance to its eighth rank without being blocked by an opposing pawn and without the possibility of being captured by a pawn on an adjacent file (a detection sketch follows this section).
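A minimal detection sketch for a passed pawn, assuming White's pawns are given as 0-based (file, rank) pairs and White moves toward higher ranks; both assumptions belong to the example, not to the definition.

    def is_passed_pawn(pawn, enemy_pawns):
        """True if the White pawn at (file, rank) has no Black pawn ahead
        of it on its own file or on an adjacent file."""
        file, rank = pawn
        return not any(abs(ef - file) <= 1 and er > rank
                       for ef, er in enemy_pawns)

    # White pawn on e5 (4, 4); Black pawns on f7 (5, 6) and a7 (0, 6).
    print(is_passed_pawn((4, 4), [(5, 6), (0, 6)]))  # False: f7 can stop it
    print(is_passed_pawn((4, 4), [(0, 6)]))          # True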
Chess tactics
Chess tactics – a chess tactic is a move or sequence of moves which may result in tangible gain or limits the opponent's options. Tactics are usually contrasted with strategy, in which advantages take longer to be realized, and the opponent is less constrained in responding.
Anti-computer tactics – tactics used by humans in games against computers that the program cannot handle very well.
Capture – to remove an opposing piece from the board by taking it with one of your own. Except in the case of an en passant capture, the capturing piece replaces the captured piece on its square. Also, a move that captures. Captures can be executed offensively or defensively.
Combination – series of moves, often with an exchange or sacrifice, to achieve some advantage.
Exchange – capturing a piece in return for allowing another piece to be captured.
The exchange – exchange of a bishop or knight for a rook. The rook is generally the stronger piece unless a player obtains other advantages for allowing the exchange.
Flight square – square that the king can retreat to, if attacked.
Fundamental tactics
Fundamental tactics include:
Battery – two or more pieces that can move and attack along a shared path, situated on the same rank, file, or diagonal; e.g., the queen and a bishop, or the queen and a rook, or both rooks, or the queen and both rooks.
Block (blocking an attack) – interposing a piece between another piece and its attacker. When the piece being attacked is the king, this is blocking a check.
Deflection – tactic that forces an opposing piece to leave the square, rank or file it occupies, thus exposing the king or a valuable piece.
Discovered attack – moving a piece uncovers an attack by another piece along a straight line.
Fork – attack on two or more pieces by one piece.
Interference – blocking the line along which an enemy piece is defended, leaving it vulnerable to capture.
Overloading – giving a defensive piece an additional defensive assignment which it cannot complete without abandoning its original defensive assignment.
Pin – piece is under attack and either cannot legally move because it would put its king in check or should not move because it will allow an attack on a more valuable piece.
Skewer – attack on a piece that, if it moves, will expose another piece behind it to attack.
Undermining – capturing a defensive piece, leaving one of the opponent's pieces undefended or underdefended. Also known as "removal of the guard".
X-ray – (1) synonym for skewer. The term is also sometimes used to refer to a tactic where a piece either (2) indirectly attacks an enemy piece through another piece or pieces or (3) defends a friendly piece through an enemy piece.
Offensive tactics
Battery – two or more pieces that can move and attack along a shared path, situated on the same rank, file, or diagonal; e.g., the queen and a bishop, or the queen and a rook, or both rooks, or the queen and both rooks.
Alekhine's gun – formation named after the former World Chess Champion, Alexander Alekhine, which consists of placing the two rooks stacked one behind another and the queen at the rear.
Cross-check – tactic in which a check is played in response to a check, especially when the original check is blocked by a piece that itself either delivers check or reveals a discovered check from another piece.
Decoy – ensnaring a piece, usually the king or queen, by forcing it to move to a poisoned square with a sacrifice on that square.
Deflection – forces an opposing piece to leave the square, rank or file it occupies, thus exposing the king or a valuable piece.
Discovered attack – attack revealed when one piece moves out of the way of another.
Discovered check – discovered attack that is also a check.
Domination – occurs when a piece has a relatively wide choice of destination squares, but nevertheless cannot avoid being captured.
Double attack – attack on two pieces at once, such as in a fork, or via a discovered attack where the piece that was blocked attacks one piece while the piece moving out of the way threatens another.
Double check – check delivered by two pieces at the same time. In chess notation, it is sometimes symbolized by "++".
Fork – when a piece attacks two or more enemy pieces at the same time.
Interference – interrupting the line between an attacked piece and its defender by sacrificially interposing a piece. Opportunities for interference are rare because the defended object must be more valuable than the sacrificed piece, and the interposition must itself represent a threat.
King walk – several successive movements of the king, usually in the endgame to get it from a safe square (where it was hiding during the middlegame) to a more active position. Not to be confused with "king hunt", where a player forces his opponent's king out of safety and chases it across the board with a series of checks.
Outpost – square where a piece can attack the opponent's position without being attacked by enemy pawns. Knights are good pieces to occupy outposts.
Overloading – giving a defensive piece an additional defensive assignment which it cannot complete without abandoning its original defensive assignment.
Pawn promotion – moving a pawn to the back row to be promoted to a knight, a bishop, a rook, or a queen. While this is a rule, it is also a type of move, with tactical significance. Pawn promotion, or the threat of it, often decides the result of a chess endgame.
Underpromotion – promotion to a knight, bishop, or rook is known as an "underpromotion". Although these pieces are less powerful than the queen, there are some situations where it is advantageous to underpromote. For example, since the knight moves in a way which the queen cannot, knight underpromotions can be very useful, and are the most common type of underpromotion. Promoting to a rook or bishop is advantageous in cases where promoting to a queen would result in an immediate stalemate.
In FIDE tournament play, spare queens are provided, one of each color. In a tournament game between Emil Szalanczy and Thi Mai Hung Nguyen in Budapest, 2009, six queens were on the board at the same time.
Pawn storm – several pawns are moved in rapid succession toward the opponent's defenses.
Pin – piece is under attack and either cannot legally move because it would put its king in check or should not move because it will allow an attack on a more valuable piece.
Absolute pin – pin against the king is called absolute since the pinned piece cannot legally move (as moving it would expose the king to check).
Relative pin – where the piece shielded by the pinned piece is a piece other than the king, but typically more valuable than the pinned piece.
Partial pin – when a rook or queen is pinned along a file or rank, or a bishop or queen is pinned along a diagonal
Situational pin – when a pinned piece is shielding a square and moving out of the way will allow the enemy to move there, resulting in a detrimental situation for the player of the pinned piece, such as checkmate.
Sacrifice – move which deliberately allows the loss of material, either because the player can win the material back or deliver checkmate if it is taken (sham sacrifice or pseudosacrifice), or because the player judges he will have positional compensation (true or positional sacrifice).
Greek gift sacrifice – typical sacrifice of a bishop by White playing Bxh7+ or Black playing Bxh2+.
Queen sacrifice – sacrifice of the queen, invariably tactical in nature.
Plachutta – a piece sacrifices itself on a square where it could be captured by two different pieces in order to deflect them both from crucial squares.
Skewer – attack upon two pieces in a line and is similar to a pin. In fact, a skewer is sometimes described as a "reverse pin"; the difference is that in a skewer, the more valuable piece is in front of the piece of lesser or equal value.
Absolute skewer – when the King is skewered, forcing him to move out of check, exposing the piece behind him in the line of attack.
Relative skewer – the skewered piece can be moved, but doesn't have to be (because it is not the King in check).
Swindle – ruse by which a player in a losing position tricks his opponent, and thereby achieves a win or draw instead of the expected loss. It may also refer more generally to obtaining a win or draw from a clearly losing position.
The exchange – see § Chess tactics above
Triangulation – technique of making three moves to wind up in the same position while the opponent has to make two moves to wind up in the same position. The reason is to lose a tempo and put the opponent in zugzwang.
Undermining – capturing a defensive piece, leaving one of the opponent's pieces undefended or underdefended. Also known as "removal of the guard".
Windmill – repeated series of discovered checks which the opponent cannot avoid, winning large amounts of material.
X-ray attack – indirect attack of a piece through another piece.
Zwischenzug ("intermediate move") – an in-between move played before the expected reply in order to gain an advantage.
Checkmate patterns
Checkmate pattern – a particular checkmate arrangement. Some checkmate patterns occur sufficiently frequently, or are otherwise of such interest to scholars, that they have acquired specific names in chess commentary. Here are some of the best known:
Back-rank checkmate – checkmate accomplished by a rook or queen on the opponent's first rank, because the king is blocked in by its own pieces (almost always pawns) on its second rank.
Bishop and knight checkmate – fundamental checkmate with a minimum amount of material. It is notoriously difficult to achieve.
Boden's Mate – checkmate pattern characterized by a king being mated by two bishops on criss-crossing diagonals, with possible flight squares blocked by friendly pieces.
Fool's mate – shortest possible checkmate, on Black's second move. It is rare in practice.
Scholar's mate – checkmate in as few as four moves by a player accomplished by a queen supported by a bishop (usually) in an attack on the f7 or f2 square. It is fairly common at the novice level.
Smothered mate – checkmate accomplished by only a knight because the king's own pieces occupy squares to which it would be able to escape.
Defensive tactics
Artificial castling (also known as "castling by hand") – taking several moves to get the king to the position it would be in if castling could have been done.
Block (blocking an attack) – interposing a piece between another piece and its attacker. When the piece being attacked is the king, this is blocking a check.
Blockade – to block a passed pawn with a piece.
Desperado – piece that seems determined to give itself up, typically either (1) to sell itself as dearly as possible in a situation where both sides have hanging pieces or (2) to bring about stalemate if it is captured (or in some instances, to force a draw by threefold repetition if it is not captured).
Luft – German for "air": a flight square made for a castled king, typically by advancing a pawn in front of it, so that the king can escape a back-rank check.
X-ray defense – indirect defense of a piece through another piece.
Possible responses to an attack
Capture the attacking piece
Move the attacked piece
Block – interpose another piece in between the two
Guard the attacked piece and permit an exchange
Pin the attacking piece so the capture becomes illegal or unprofitable
Use a zwischenzug
Create a counter-threat
Chess strategy
Chess strategy – aspect of chess playing concerned with evaluation of chess positions and setting of goals and long-term plans for future play. While evaluating a position strategically, a player must take into account such factors as the relative value of the pieces on the board, pawn structure, king safety, position of pieces, and control of key squares and groups of squares (e.g. diagonals, open files, individual squares).
Corresponding squares – usually used as a tool in king and pawn endgames, a pair of corresponding squares are such that if one king is on one of them, the opposing king needs to be on the other.
Fianchetto – developing a bishop to the long diagonal by advancing the pawn in front of the knight one square and placing the bishop on the square the pawn vacated (e.g. g2–g3 followed by Bg2).
Permanent brain – thinking when it is the opponent's turn to move.
Prophylaxis – a move that forestalls the opponent's plans or prevents particular tactical ideas.
First-move advantage in chess – theory that White's having the first move gives him an advantage.
Schools of chess
School of chess – group of players that share common ideas about the strategy of the game. There have been several schools in the history of modern chess. Today there is less dependence on schools – players draw on many sources and play according to their personal style.
Modenese Masters – school of chess thought based on teachings of 18th century Italian masters, it emphasized an attack on the opposing king.
Hypermodernism – school of thought based on ideas of some early 20th century masters: rather than occupying the center of the board with pawns in the opening, control the center by attacking it with knights and bishops from the flanks.
Game phases
Chess opening – first phase of the game, where pieces are developed before the main battle begins.
Chess middlegame – second phase of the game, usually where the main battle is. Many games end in the middlegame.
Chess endgame – third and final phase of the game, where there are only a few pieces left.
Chess openings
Chess opening – group of initial moves of a chess game. Recognized sequences of opening moves are referred to as openings as finished by White, or defenses as finished by Black, but opening is also used as the general term.
Fool's mate – also known as the Two-Move Checkmate, it is the quickest possible checkmate in chess. A prime example consists of the moves: 1.f3 e5 2.g4 Qh4#
Scholar's mate – checkmate achieved by the moves: 1.e4 e5 2.Qh5 Nc6 3.Bc4 Nf6? 4.Qxf7#. The moves might be played in a different order or in slight variation, but the basic idea is the same: the queen and bishop combine in a simple mating attack on f7 (or f2 if Black is performing the mate).
Smothered mate – checkmate delivered by a knight in which the mated king is unable to move because he is surrounded (or smothered) by his own pieces.
Back rank checkmate – checkmate delivered by a rook or queen along a back rank (that is, the row on which the pieces (not pawns) stand at the start of the game) in which the mated king is unable to move up the board because the king is blocked by friendly pieces (usually pawns) on the second rank (Burgess 2009:16).
Boden's mate – checkmating pattern in chess characterized by bishops on two criss-crossing diagonals (for example, bishops on a6 and f4 delivering mate to a king on c8), with possible flight squares for the king being occupied by friendly pieces. Most often the checkmated king has castled queenside, and is mated on c8 or c1.
Epaulette mate – checkmate where two parallel retreat squares for a checked king are occupied by his own pieces, preventing his escape. The most common Epaulette mate involves the king on his back rank, trapped between two rooks.
Légal's mate – chess opening trap, characterized by a queen sacrifice followed by checkmate with minor pieces if Black accepts the sacrifice. The trap is named after the French player Sire de Légal (1702–1792).
Chess Informant
Chess opening theory table
Encyclopaedia of Chess Openings
Gambit – sacrifice of material (usually a pawn) to gain a positional advantage (usually faster development of pieces).
List of chess openings
List of chess openings named after people
List of chess openings named after places
e4 Openings
King's Pawn Game – Games that start with White moving 1.e4.
Open Game – Games that start with 1.e4 followed by 1...e5 by Black.
Semi-Open Game – Games that start with 1.e4 followed by a move other than 1...e5 by Black.
King's Knight Openings
King's Knight Opening –
Damiano Defense
Elephant Gambit
Evans Gambit
Four Knights Game
Giuoco Piano
Greco Defense
Gunderam Defense
Halloween Gambit
Hungarian Defense
Inverted Hungarian Opening
Irish Gambit
Italian Gambit
Italian Game
Italian Game, Blackburne Shilling Gambit
Jerome Gambit
Konstantinopolsky Opening
Latvian Gambit
Petrov's Defense
Philidor Defense
Ponziani Opening
Rousseau Gambit
Ruy Lopez
Ruy Lopez, Exchange Variation
Scotch Game
Three Knights Opening
Two Knights Defense
Two Knights Defense, Fried Liver Attack
Sicilian Defense
Sicilian Defense –
Chekhover Sicilian
Sicilian Defense, Accelerated Dragon
Sicilian Defense, Alapin Variation
Sicilian Defense, Dragon Variation
Sicilian Defense, Najdorf Variation
Sicilian Defense, Scheveningen Variation
Sicilian, Dragon, Yugoslav attack, 9.Bc4
Smith–Morra Gambit
Wing Gambit
Other e4 opening variations
Alapin's Opening
Alekhine's Defense
Balogh Defense
Bishop's Opening
Bongcloud Attack
Caro–Kann Defense
Center Game
Danish Gambit
Falkbeer Countergambit
Fischer Defense
Frankenstein–Dracula Variation
French Defense
King's Gambit
Lopez Opening
Modern Defense
Monkey's Bum
Napoleon Opening
Nimzowitsch Defense
Owen's Defense
Pirc Defense
Pirc Defense, Austrian Attack
Portuguese Opening
Rice Gambit
Scandinavian Defense
St. George Defense
Vienna Game
Wayward Queen Attack
d4 Openings
Queen's Pawn Game
Closed Game
Semi-Closed Game
Queen's Gambit Openings
Queen's Gambit –
Queen's Gambit Accepted
Queen's Gambit Declined
Albin Countergambit
Baltic Defense
Cambridge Springs Defense
Chigorin Defense
Marshall Defense
Semi-Slav Defense
Slav Defense
Symmetrical Defense
Tarrasch Defense
Indian Defense
Indian Defense –
Black Knights' Tango
Bogo-Indian Defense
Budapest Gambit
East Indian Defense
Grünfeld Defense
Grünfeld Defense, Nadanian Variation
King's Indian Defense
King's Indian Defense, Four Pawns Attack
Neo-Indian Attack
Nimzo-Indian Defense
Old Indian Defense
Queen's Indian Defense
Torre Attack
Trompowsky Attack
Other d4 opening variations
Alapin–Diemer Gambit
Benko Gambit
Benoni Defense
Blackmar–Diemer Gambit
Blumenfeld Gambit
Catalan Opening
Diemer–Duhm Gambit
Dutch Defense
English Defense
Englund Gambit
Keres Defense
London System
Queen's Knight Defense
Polish Defense
Richter–Veresov Attack
Staunton Gambit
Wade Defense
Flank openings
Benko's Opening
Bird's Opening
English Opening
Flank opening
Larsen's Opening
Réti Opening
Zukertort Opening
Réti Opening, King's Indian Attack
Irregular Openings
Amar Opening
Anderssen's Opening
Barnes Opening
Clemenz Opening
Desprez Opening
Dunst Opening
Durkin Opening
Grob's Attack
Irregular chess opening
Mieses Opening
Saragossa Opening
Sokolsky Opening
Van 't Kruijs Opening
Ware Opening
Openings including a trap
Fool's mate
Scholar's mate
Elephant Trap
Halosar Trap
Kieninger Trap
Lasker Trap
Légal Trap
Magnus Smith Trap
Marshall Trap
Monticelli Trap
Mortimer Trap
Noah's Ark Trap
Rubinstein Trap
Siberian Trap
Tarrasch Trap
Würzburger Trap
Endgames
Endgame – phase of the game after the middlegame when there are few pieces left on the board
Checkmate patterns – Patterns of checkmate that occur reasonably often.
Chess endgame literature – Literature on chess endgames.
Endgame maneuvers
Prokeš maneuver – maneuver from an endgame study that sometimes occurs in games.
Endgame positions
Endgame study – A composed position with a goal of either winning or drawing
Réti endgame study – endgame study illustrating how a king can pursue two goals at the same time.
Saavedra position – endgame study in which a surprising underpromotion leads to a win.
Particular endgame situations
Bare king – situation in which one player has only the king left on the board.
Fortress – position in which the side with weaker material is able to keep the stronger side at bay and draw the game instead of losing it.
King and pawn versus king endgame – fundamental endgame with a king and pawn versus a king.
Key square – square that a player needs to occupy (usually by the king in a king and pawn endgame) to achieve some goal.
Opposite-colored bishops endgame – Endgames in which each side has one bishop and the bishops are on opposite colors of the board.
Opposition – when two kings face each other with one square in between (with generalizations; a detection sketch follows this section).
Pawnless chess endgame – Endgames without pawns.
Queen and pawn versus queen endgame – difficult endgame with a queen and pawn versus a queen.
Queen versus pawn endgame – fundamental endgame with a queen versus an advanced pawn protected by its king.
Rook and bishop versus rook endgame – well-studied endgame with a rook and bishop versus a rook.
Rook and pawn versus rook endgame – fundamental and well-studied endgame with a rook and pawn versus a rook.
Lucena position – one of the most famous and important positions in chess endgame theory: if the side with the pawn can reach this type of position, it can force a win.
Philidor position – if the side without the pawn reaches the Philidor position, the defender can force a draw.
Two knights endgame – endgame in which two knights versus a lone king cannot force checkmate, though a win can sometimes be forced if the defender has a pawn.
Wrong bishop – situation in some endgames where a player's bishop is on the wrong color of square to accomplish something, i.e. the result would be different if the bishop were on the other color.
Wrong rook pawn – an endgame situation very closely related to the wrong bishop, where having the other rook pawn would have a different result.
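As a small illustration, here is a sketch that detects direct opposition from (file, rank) coordinates; the player not obliged to move is said to have the opposition.

    def direct_opposition(white_king, black_king):
        """True if the kings stand on the same file or rank with exactly
        one square between them (direct opposition)."""
        df = abs(white_king[0] - black_king[0])
        dr = abs(white_king[1] - black_king[1])
        return (df, dr) in ((0, 2), (2, 0))

    # Kings on e4 (4, 3) and e6 (4, 5) face each other across one square.
    print(direct_opposition((4, 3), (4, 5)))  # True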
Endgame principles
Tarrasch rule – guideline that rooks should usually be placed behind passed pawns – both its own pawns and the opponent's.
Endgame tablebase – computer database of endgame positions giving optimal moves for both sides and the result of optimal moves (a win for one player or a draw).
Venues (who and where to play)
Casual play
Chess clubs
Chess club
Online chess
Internet chess server
Chess.com
RedhotPawn.com
Schemingmind.com
GameKnot.com
Lichess.org (Open source)
Playchess (Chessbase.com)
chess24.com
Correspondence chess
Correspondence chess –
Correspondence chess server – arguably the most convenient form of correspondence chess.
Competitive chess
Chess around the world –
Chess rating system – dynamic rating system based on a player's performance, with a higher number indicating a better player (an update-formula sketch follows this section).
Chess tournament – chess competition among several to many players.
Swiss-system tournament – A tournament format designed to handle a relatively large number of players playing a small number of rounds in a relatively short time.
Round-robin tournament – A tournament format for a small to moderate number of players in which each player plays every other player. It may be lengthy, depending on the number of rounds played.
Knockout tournament – A tournament format of several stages in which players are paired off and half are eliminated in each stage.
Internet Computer Chess Tournament – tournament for chess engines held over the Internet.
FIDE World Rankings – list of the highest-rated players in the world.
Simultaneous exhibition – demonstration in which one player plays against a large number of opponents simultaneously.
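As an illustration of a performance-based rating system, here is a sketch of the widely used Elo update formulas; the K-factor of 20 is only an example, since federations use various values.

    def elo_update(rating, opponent_rating, score, k=20):
        """Return the new rating after one game.

        score is 1 for a win, 0.5 for a draw, 0 for a loss; the K-factor
        k controls how quickly ratings move.
        """
        expected = 1 / (1 + 10 ** ((opponent_rating - rating) / 400))
        return rating + k * (score - expected)

    # A 1500-rated player beating a 1700-rated player gains about 15 points.
    print(round(elo_update(1500, 1700, 1), 1))  # 1515.2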
Titles
Chess title –
Grandmaster – the highest title other than World Champion
International Master – lower title than Grandmaster
FIDE Master – lower title than International Master
Candidate Master – Lower title than FIDE Master
Chess expert – A title awarded by the United States Chess Federation to players of below master strength
Woman Grandmaster – Available to women only, lower requirements than Grandmaster
Woman International Master – Available to women only, lower requirements than International Master
Woman FIDE Master – Available to women only, lower requirements than FIDE Master
Woman Candidate Master – Available to women only, lower requirements than Candidate Master
International Correspondence Chess Grandmaster – The highest title awarded by the International Correspondence Chess Federation
FIDE titles – lifetime titles awarded by FIDE
Computer chess
Computer chess –
Chess engine –
Human–computer chess matches –
Internet chess server –
Chess software –
History of chess
History of chess
Shatranj – old form of chess, from which modern chess gradually developed, that came to the Western world from India via Sassanid Persia.
Romantic chess –
Café de la Régence –
Human–computer chess matches –
Deep Blue (chess computer) –
Deep Blue versus Garry Kasparov
Deep Blue – Kasparov, 1996, Game 1
Deep Blue – Kasparov, 1997, Game 6
Online chess
Famous games
Immortal Game
Immortal losing game
Immortal Zugzwang Game
Immortal Draw
Evergreen game
Polish Immortal
Peruvian Immortal
The Game of the Century
Lasker versus Bauer, Amsterdam, 1889
Morphy versus the Duke of Brunswick and Count Isouard (the Opera Game)
Kasparov versus the World
Poole versus HAL 9000
more...
History of chess, by period
Timeline of chess
Years in chess
1914 in chess
1915 in chess
1916 in chess
1917 in chess
1918 in chess
1932 in chess
1933 in chess
1939 in chess
1940 in chess
1941 in chess
1942 in chess
1943 in chess
1944 in chess
1945 in chess
1962 in chess
1969 in chess
1970 in chess
1971 in chess
1972 in chess
1973 in chess
1974 in chess
1975 in chess
1976 in chess
1988 in chess
1989 in chess
1990 in chess
1991 in chess
1992 in chess
1993 in chess
1994 in chess
1995 in chess
1996 in chess
1997 in chess
1998 in chess
1999 in chess
2000 in chess
2001 in chess
2002 in chess
2003 in chess
2004 in chess
2005 in chess
2006 in chess
2007 in chess
2008 in chess
2009 in chess
2010 in chess
2011 in chess
2012 in chess
2013 in chess
2014 in chess
2015 in chess
2016 in chess
2017 in chess
2018 in chess
2019 in chess
2020 in chess
Chess players
Chess prodigy – child who plays chess so well as to be able to beat Masters and even Grandmasters, often at a very young age.
List of chess families
List of chess grandmasters
List of chess players
Comparison of top chess players throughout history
World chess championship
World Chess Championships
World Chess Championship 1886
World Chess Championship 1889
World Chess Championship 1891
World Chess Championship 1892
World Chess Championship 1894
World Chess Championship 1897
World Chess Championship 1907
World Chess Championship 1908
World Chess Championship 1910 (Lasker–Janowski)
World Chess Championship 1910 (Lasker–Schlechter)
World Chess Championship 1921
World Chess Championship 1927
World Chess Championship 1929
World Chess Championship 1934
World Chess Championship 1935
World Chess Championship 1937
World Chess Championship 1948
World Chess Championship 1951
World Chess Championship 1954
World Chess Championship 1957
World Chess Championship 1958
World Chess Championship 1960
World Chess Championship 1961
World Chess Championship 1963
World Chess Championship 1966
World Chess Championship 1969
World Chess Championship 1972
World Chess Championship 1975
World Chess Championship 1978
World Chess Championship 1981
World Chess Championship 1984
World Chess Championship 1985
World Chess Championship 1986
World Chess Championship 1987
World Chess Championship 1990
World Chess Championship 1993
Classical World Chess Championship 1995
Classical World Chess Championship 2000
Classical World Chess Championship 2004
FIDE World Chess Championship 1996
FIDE World Chess Championship 1998
FIDE World Chess Championships 1998–2004
FIDE World Chess Championship 1999
FIDE World Chess Championship 2000
FIDE World Chess Championship 2002
FIDE World Chess Championship 2004
FIDE World Chess Championship 2005
World Chess Championship 2006
World Chess Championship 2007
World Chess Championship 2008
World Chess Championship 2010
World Chess Championship 2012
World Chess Championship 2013
World Chess Championship 2014
World Chess Championship 2016
World Chess Championship 2018
World Chess Championship 2021
Women's World Chess Championship
List of chess world championship matches
World Amateur Chess Championship
Candidates Tournament
World Championship of Chess Composition
World Computer Chess Championship
World Computer Speed Chess Championship
Interregnum of World Chess Champions
Interzonal
World Junior Chess Championship
World Senior Chess Championship
World Chess Solving Championship
World Team Chess Championship
World Youth Chess Championship
Science of chess
Psychology and chess
Chess blindness –
Chess as mental training –
Chess therapy –
Chess programming
Board representation – how the board and pieces are encoded in data structures, e.g. square-centric arrays or bitboards.
Chess engine – program that analyzes positions and chooses moves.
Minimax – search algorithm underlying classical engines, in which each side is assumed to choose the move best for itself (a toy sketch follows this list).
Null-move heuristic – pruning technique in which the side to move "passes" in order to obtain a quick bound on the position's value.
Portable Game Notation – plain-text format for recording chess games.
Transposition table – cache of already-searched positions, used to avoid repeating work when the same position arises from a different move order.
Endgame tablebase – precomputed database giving exact results for endgame positions (see § Endgame principles above).
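Because a full chess move generator is far too long for a sketch, here is minimax in its compact negamax form applied to a toy take-away game (take one or two stones; whoever takes the last stone wins). The toy game is an illustrative stand-in, but the recursion has the same shape a chess engine uses.

    def negamax(stones, depth):
        """Value of the position for the side to move: +1 win, -1 loss,
        0 if the search horizon is reached before the game is decided."""
        if stones == 0:
            return -1                # no stones left: side to move has lost
        if depth == 0:
            return 0                 # horizon reached: call it unclear
        # Best of the negated values of the positions one move deeper.
        return max(-negamax(stones - take, depth - 1)
                   for take in (1, 2) if take <= stones)

    print(negamax(3, 10))  # -1: with 3 stones the side to move loses
    print(negamax(4, 10))  # +1: take one stone, leaving the opponent 3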
Chess theory
Chess theory –
First-move advantage in chess –
Chess opening theory table –
Chess problem –
Chess composer –
Endgame study –
Glossary of chess problems –
Motif (chess composition) –
Rundlauf –
Types of chess problems
Directmates – White to move first and checkmate Black within a specified number of moves against any defense. These are often referred to as "mate in n", where n is the number of moves within which mate must be delivered. In composing and solving competitions, directmates are further broken down into three classes:
Two-movers – White to move and checkmate Black in two moves against any defense.
Three-movers – White to move and checkmate Black in no more than three moves against any defense.
More-movers – White to move and checkmate Black in n moves against any defense, where n is some particular number greater than three.
Fairy chess – chess problems that differ from classical (also called orthodox) chess problems in that they are not direct mates. Although the term "fairy chess" is sometimes used for games, it is usually applied to problems with new stipulations, new rules, a new board, or fairy chess pieces, to express an idea or theme impossible in "orthochess". See also the section on chess variants, below.
Helpmates – Black to move first cooperates with White to get Black's own king mated in a specified number of moves.
Selfmates – White moves first and forces Black (in a specified number of moves) to checkmate White.
Helpselfmates – White to move first cooperates with Black to get a position of selfmate in one move.
Reflexmates – form of selfmate with the added stipulation that each side must give mate if it is able to do so. (When this stipulation applies only to Black, it is a semi-reflexmate.)
Seriesmovers – one side makes a series of moves without reply to achieve a stipulated aim. Check may not be given except on the last move. A seriesmover may take various forms:
Seriesmate – directmate with White playing a series of moves without reply to checkmate Black.
Serieshelpmate – helpmate in which Black plays a series of moves without reply after which White plays one move to checkmate Black.
Seriesselfmate – selfmate in which White plays a series of moves leading to a position in which Black is forced to give mate.
Seriesreflexmate – reflexmate in which White plays a series of moves leading to a position in which Black can, and therefore must, give mate.
Chess puzzle –
Joke chess problem –
Combinatorial game theory
Solving chess –
Retrograde analysis –
Chess in culture
Chess aesthetics
Chess game collections
Chess libraries
Chess media
Chess in popular media
Chess organizations
Chess venues (who and where to play)
Chess variants
Chess media
Chess columns in newspapers –
Chess endgame literature –
Chess libraries
Chess essays
The Morals of Chess, by Benjamin Franklin
Chess video games
Battle Chess
Chessmaster
Fritz
Chess books
A History of Chess
Basic Chess Endings
Chess endgame literature
Chess opening book
Encyclopedia of Chess Openings
Göttingen manuscript
Handbuch des Schachspiels
Lasker's Manual of Chess
Modern Chess Openings
My 60 Memorable Games
My Great Predecessors
My System
The Game and Playe of the Chesse
The Game of Chess
The Oxford Companion to Chess
more...
Periodicals
British Chess Magazine
Chess Informant
Chess Life
CHESS magazine
EG
New In Chess
Shakhmatny Bulletin
Shakhmaty v SSSR
The Week in Chess
64
more...
Chess websites
ChessCafe.com – publishes endgame studies, book reviews and other articles related to chess on a weekly basis. It was founded in 1996 by Hanon Russell, and is well known as a repository of articles about chess and its history.
Chessgames.com – Internet chess community with over 197,000 members. The site maintains a large database of chess games, where each game has its own discussion page for comments and analysis.
FIDE Online Arena – Fédération internationale des échecs or World Chess Federation's (FIDE) commercial Internet chess server devoted to chess playing and related activities.
Internet chess servers – websites that allow players to play each other online
Free Internet Chess Server – volunteer-run Internet chess server. It was organized as a free alternative to the Internet Chess Club (ICC), after that site began charging for membership.
Internet Chess Club – commercial Internet chess server devoted to the play and discussion of chess and chess variants.
Playchess – commercial Internet chess server edited by ChessBase devoted to the play and discussion of chess and chess variants.
SchemingMind – privately owned international correspondence chess club founded in 2002. Most games and tournaments are played on a correspondence chess server owned by the club for this purpose.
The Week in Chess – one of the first, if not the first, Internet-based chess news services.
Chess in popular media
Chess in the arts and literature
Chess in early literature
Chess-themed movies
Knight Moves
Pawn Sacrifice
Searching for Bobby Fischer
Chess organizations
FIDE
Professional Chess Association
Some influential chess persons
Paul Morphy – (June 22, 1837 – July 10, 1884) – American chess player. He is considered to have been the greatest chess master of his era and an unofficial World Chess Champion. He was called "The Pride and Sorrow of Chess" because he had a brief and brilliant chess career, retiring from the game at the age of 21.
Wilhelm (later William) Steinitz (May 17, 1836 – August 12, 1900) – Austrian and then American chess player and the first undisputed world chess champion from 1886 to 1894. From the 1870s onwards, commentators have debated whether Steinitz was effectively the champion earlier.
Emanuel Lasker (December 24, 1868 – January 11, 1941) – German chess player, mathematician, and philosopher who was the second formally recognized World Chess Champion, a position from which he dominated chess for 27 years (from 1894 to 1921).
José Raúl Capablanca (19 November 1888 – 8 March 1942) – Cuban chess player who was world chess champion from 1921 to 1927. Nicknamed the "Human Chess Machine" due to his mastery over the board and his relatively simple style of play, he was renowned for his exceptional endgame skill and speed of play, and is widely regarded as the most naturally talented chess player in history.
15 Founders of FIDE – established FIDE on July 20, 1924, at the 1st unofficial Chess Olympiad.
Alexander Alekhine (October 31, 1892 – March 24, 1946) – in 1927, he became the fourth World Chess Champion by defeating Capablanca, widely considered invincible, in what would stand as the longest chess championship match held until 1985. Alekhine is highly regarded as a chess writer and theoretician, producing innovations in a wide range of chess openings, and giving his name to Alekhine's Defense and several other opening variations.
Mikhail Botvinnik (August 4, 1911 – May 5, 1995) – Soviet and Russian International Grandmaster and three-time World Chess Champion. Working as an electrical engineer and computer scientist at the same time, he was one of the very few famous chess players who achieved distinction in another career while playing top-class competitive chess. He was also a pioneer of computer chess. He was World Champion from 1948 to 1963, with two interruptions. He briefly lost the World Championship to Vasily Smyslov and then to Mikhail Tal, but won it back from both of them in rematches.
Mikhail Tal (November 9, 1936 – June 28, 1992) – Soviet-Latvian chess Grandmaster and the eighth World Chess Champion, widely regarded as a creative genius and the best attacking player of all time, known above all for improvisation and unpredictability. Every game, he once said, was as inimitable and invaluable as a poem.
Vasily Smyslov – Soviet and Russian chess grandmaster, and World Chess Champion (from 1957 to 1958) known for his positional style, and, in particular, his precise handling of the endgame, but many of his games featured spectacular tactical shots as well. He made enormous contributions to chess opening theory in many openings, including the English Opening, Grünfeld Defense, and the Sicilian Defense.
Tigran Petrosian (June 17, 1929 – August 13, 1984) – Soviet Armenian grandmaster, and World Chess Champion from 1963 to 1969. He was nicknamed "Iron Tigran" due to his almost impenetrable defensive playing style, which emphasized safety above all else.
Boris Spassky (born January 30, 1937) – the 10th World Chess Champion and a prominent Soviet and, later, French player.
Bobby Fischer (March 9, 1943 – January 17, 2008) – American chess Grandmaster and the 11th World Chess Champion. He is widely considered the greatest chess player of all time. Fischer was also a best-selling chess author.
Anatoly Karpov (born May 23, 1951) – Russian chess grandmaster and former World Champion, a position he held from 1975 to 1985 and from 1993 to 1999, when he resigned his title in protest against FIDE's new world championship rules.
Garry Kasparov – (born 13 April 1963) – Russian (formerly Soviet) chess grandmaster, a former World Chess Champion, writer and political activist, considered by many to be the greatest chess player of all time. He held the official FIDE world title from 1985 until 1993, when a dispute with FIDE led him to set up a rival organization, the Professional Chess Association.
Viswanathan Anand (born 11 December 1969) – Indian chess Grandmaster. Anand has won the World Chess Championship five times (2000, 2007, 2008, 2010, 2012), and was undisputed World Champion from 2007 to 2013.
Magnus Carlsen (born 30 November 1990) – Norwegian chess Grandmaster, former chess prodigy, and current World Chess Champion, who is the number-one-ranked player in the world. As of January 2021, his peak rating is the highest in history.
more...
Some influential persons who played chess
Ben Franklin
Chess variants
Chess variant – games similar to chess but with different rules or pieces.
Fairy chess piece – pieces used in chess variants other than the usual pieces.
Variants with a different starting position
Displacement chess – starting position is slightly altered to negate players' knowledge of openings.
Chess960 – variant created by Bobby Fischer in which the starting positions of the pieces on the 1st and 8th ranks are randomized, giving 960 possible starting positions. White's and Black's arrangements must be mirrored, the bishops must start on opposite-colored squares, and the king must start between the rooks so that castling remains possible (a generator sketch follows this section).
Transcendental Chess – similar to Chess960, except that there is no castling, the starting positions are not necessarily mirrored, and the bishops must start on opposite-colored squares. There are 8,294,400 possible starting positions.
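A minimal rejection-sampling sketch of generating one of the 960 legal Chess960 back ranks, using plain piece letters rather than any particular chess library:

    import random

    def chess960_back_rank():
        """Shuffle the back rank until the Chess960 constraints hold:
        bishops on opposite-colored squares, king between the rooks."""
        while True:
            rank = list("RNBQKBNR")
            random.shuffle(rank)
            bishops = [i for i, p in enumerate(rank) if p == "B"]
            rooks = [i for i, p in enumerate(rank) if p == "R"]
            king = rank.index("K")
            if (bishops[0] % 2 != bishops[1] % 2
                    and rooks[0] < king < rooks[1]):
                return "".join(rank)

    print(chess960_back_rank())  # e.g. 'RKNBBQNR', one of 960 arrangements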
Variants with different forces
Chess handicap – giving an advantage to a weaker player to allow equal chances of winning. Usually the advantage given is in material, extra moves or extra time.
Dunsany's Chess – Black starts just as in traditional chess, while White starts with only 32 pawns. Black wins by taking all the pawns while avoiding stalemate, White wins by checkmating the black king.
Variants with a different board
Minichess – board has fewer squares, e.g. 3×3, 5×5, 5×6, etc.
Los Alamos chess – 6×6 variant without bishops.
Grid chess – 8×8 board overlaid with a 4×4 grid, dividing the board into 16 zones of 2×2 squares each. Play is as in traditional chess, except that every move must cross at least one grid line.
Cylinder chess – played on a cylinder, which results in joining the right and left sides of the board.
Circular chess – variant played on a circular board.
Alice Chess – played with two boards, one of which starts empty. After the completion of each move, the piece that moved is transferred to the same square of the other board (after a move on the second board, the piece returns to the first board).
Hexagonal chess – any of various variants played on a hexagonal board or board with hexagonal cells.
Three-dimensional chess – any of various variants with multiple boards at different levels, resulting in gameplay in three dimensions.
Star Trek Tri-Dimensional Chess
Cubic Chess – pieces are replaced by cubes with the piece figures on their sides, making it easier to change piece types under its special rules of promotion.
Flying chess – played with two boards, one of which represents the upper level, the other the lower. Only some pieces are allowed to move on the upper level.
Dragonchess – created by Gary Gygax, co-creator of the famed role-playing game Dungeons & Dragons; the pieces are inspired by characters and monsters from the fantasy RPG.
Variants with unusual rules
Losing chess – objective of each player is to lose all their pieces instead of checkmating the enemy king. Capturing, as in checkers, is compulsory.
Atomic chess – whenever a capture occurs, the surrounding pieces are also captured, resembling the idea of an explosion.
Three checks chess – a player wins by checking the opponent king three times.
Extinction chess – the objective is to capture all of a particular type of piece of the opponent (e.g., both knights, all pawns, or the queen).
Crazyhouse – a captured piece can be introduced back to the board by the player who captured it, as a piece of his own.
Knight relay chess – pieces defended by a knight may move as a knight. Knights cannot capture or be captured.
Andernach chess – after a capture, the capturing piece changes its color.
Checkless chess – any move resulting in check is not allowed, except checkmate.
Circe chess – captured pieces instantly return to their starting positions.
Legan chess – starting positions of pieces are concentrated on opposite corners of the board. Pawn movement becomes diagonal and capturing orthogonal.
Madrasi chess – whenever a piece is attacked by an enemy piece of the same type, it cannot move.
Monochromatic chess – a piece may only move to a square of the same color as the one it occupies. Knights follow special rules for movement.
Patrol chess – capturing and checking are not allowed unless the capturing or checking piece is guarded by a friendly piece.
PlunderChess – capturing pieces gain a limited ability to move as the captured piece.
Variants with incomplete information and elements of chance
Kriegspiel – a player can see his own pieces, but not the enemy pieces.
Dark chess – a player can only see the squares occupied by his own pieces and squares his pieces could move to.
Penultima – spectators of the game secretly decide the moving and capturing rules for each piece, which the players gradually find out during the game.
Dice chess – players roll dice before each move to determine which piece types may be moved.
Knightmare Chess – fantasy variant published by Steve Jackson Games, including cards that change aspects of the game.
Multimove variants
Marseillais chess – each player moves twice per turn. If the first move gives check, the player doesn't make the second move that turn.
Progressive chess – the number of moves played each turn increases progressively. White starts with one move, then Black plays two moves, then White plays 3 moves, etc.
Avalanche chess – after each move, it is obligatory for the player to move an opponent pawn one square towards himself.
Monster chess – Black plays as in traditional chess, but White has only one king and four pawns, and moves twice a turn.
Kung-fu chess – a variant with no turns, pieces can be moved freely, each piece having its own delay time between two moves. A real-time strategy game, played mostly online.
Multiplayer variants
Bughouse chess – variant with four players and two boards, 2 vs 2; pieces captured by a player are transferred to his partner, who may introduce them onto his own board.
Three-player chess – specially connected three-sided board for three players.
Four-player chess – extended cross-shaped board for four players.
Forchess – four-player variant on a regular board, with a specific initial configuration.
Djambi – 9×9 variant for four players with special pieces and rules.
Bosworth – four player variant on a 6×6 board, pieces are put into play gradually as the game progresses.
Enochian chess – four player variant with complex rules created by William Wynn Westcott, one of the three founders of the Hermetic Order of the Golden Dawn.
Variants with unusual pieces
Fairy chess piece
Hippogonal
Grasshopper
Grasshopper chess
Berolina chess
Maharajah and the Sepoys
Omega Chess
Stealth Chess
Pocket Mutation Chess
Baroque chess
Butterfly chess
Chess with different armies
Duell
Gess
Wildebeest Chess
Variants with bishop+knight and rook+knight compounds
Seirawan chess
Janus Chess
Capablanca Chess
Capablanca Random Chess
Embassy Chess
Modern chess
Grand Chess
Games inspired by chess
Arimaa
Icehouse pieces
Martian chess
Historical variants
History of chess
Cox–Forbes theory
Liubo
Chaturanga
Chaturaji
Shatranj
Abu Bakr bin Yahya al-Suli
Tamerlane chess
Hiashatar
Senterej
Lewis chessmen
Xiangqi and variants
Xiangqi
Encyclopedia of Chinese Chess Openings
Banqi
Giog
Shogi and variants
Shogi
Shogi strategy and tactics
History of shogi
Meijin
Ryu-oh
Computer shogi
Shogi variant
Micro shogi
Minishogi
Kyoto shogi
Judkins shogi
Whale shogi
Tori shogi
Yari shogi
Heian shogi
Sho shogi
Cannon shogi
Hasami shogi
Annan shogi
Unashogi
Wa shogi
Chu shogi
Heian dai shogi
Akuro
Dai shogi
Tenjiku shogi
Dai dai shogi
Maka dai dai shogi
Ko shogi
Tai shogi
Taikyoku shogi
Sannin shogi
Yonin shogi
Edo-era shogi sources
Other national variants
Janggi
Makruk
Sittuyin
Chess combined with other sports and pastimes
Chess boxing
Human chess
Shot chess
Strip chess
Chess variants software
ChessV
Fairy-Max
Fictional variants
Wizard's chess
See also
Glossary of chess
Glossary of chess problems
Hippogonal
Morphy number
External links
Predator at the Chessboard – A Field Guide to Chess Tactics – Learn chess tactics
The Blue Book of Chess; "Teaching the Rudiments of the Game, and Giving an Analysis of All the Recognized Openings" by Howard Staunton
Chess Strategy, free lessons on basic elements.
ChessGames.com – online chess database and community
Chess records – details of longest game, most passed pawns, fewest captures etc.
A sample chess game
How to Play Chess For Beginners and Parents
International organizations
FIDE – World Chess Federation
Official rules – FIDE Laws of Chess
FIDE list of top rated players
ICCF – International Correspondence Chess Federation
ACP – Association of Chess Professionals
News
Chessbase news
The Week in Chess
Online play
Chess.com Play Online Against Human Players
ChessFriends.com
Sparkchess
386BSD
386BSD (also known as "Jolix") is a discontinued Unix-like operating system based on the Berkeley Software Distribution (BSD). It was released in 1992 and ran on PC-compatible computer systems based on the 32-bit Intel 80386 microprocessor. 386BSD innovations included role-based security, ring buffers, self-ordered configuration and modular kernel design.
History
386BSD was written mainly by Berkeley alumni Lynne Jolitz and William Jolitz. William Jolitz had considerable experience with prior BSD releases while at the University of California at Berkeley (2.8 and 2.9BSD), and both contributed code developed at Symmetric Computer Systems during the 1980s to Berkeley. Work on porting 4.3BSD-Reno and later 4.3BSD Net/2 to the Intel 80386 was done for the University of California by William Jolitz at Berkeley. 4.3BSD Net/2 was an incomplete, non-operational release, with portions withheld by the University of California as encumbered (i.e. subject to an AT&T UNIX source code license). The 386BSD releases made to the public beginning in 1992 were based on portions of the 4.3BSD Net/2 release coupled with additional code (see "Missing Pieces I and II", Dr. Dobb's Journal, May–June 1992) written by William and Lynne Jolitz to make a complete operational release.
The port began in 1989, and the first, incomplete traces of it can be found in 4.3BSD Net/2 of 1991. The port was made possible because Keith Bostic, partly influenced by Richard Stallman, had started removing proprietary AT&T code from BSD in 1988. The port was first released in March 1992 (version 0.0) and in a much more usable version on July 14, 1992 (version 0.1). The porting process, with code, was extensively documented in a 17-part series written by Lynne Jolitz and William Jolitz in Dr. Dobb's Journal beginning in January 1991.
FreeBSD and NetBSD
After the release of 386BSD 0.1, a group of users began collecting bug fixes and enhancements, releasing them as an unofficial patchkit. Due to differences of opinion between the Jolitzes and the patchkit maintainers over the future direction and release schedule of 386BSD, the maintainers of the patchkit founded the FreeBSD project in 1993 to continue their work. Around the same time, the NetBSD project was founded by a different group of 386BSD users, with the aim of unifying 386BSD with other strands of BSD development into one multi-platform system. Both projects continue to this day.
Lawsuit
Due to a lawsuit (UNIX System Laboratories, Inc. v. Berkeley Software Design, Inc.), it was agreed that some potentially encumbered source code had been distributed within the Berkeley Software Distribution Net/2 from the University of California, and a subsequent release (4.4BSD-Lite, 1993) was made by the university to correct this issue. However, 386BSD, Dr. Dobb's Journal, William Jolitz, and Lynne Jolitz were never parties to these or subsequent lawsuits or settlements arising from this dispute with the University of California, and they continued to publish and work on the 386BSD code base before, during, and after these lawsuits without limitation. There have never been any legal filings or claims from the university, USL, or other responsible parties with respect to 386BSD. Finally, no code developed for 386BSD by William Jolitz and Lynne Jolitz was at issue in any of these lawsuits.
Release 1.0
In late 1994, the finished 386BSD Release 1.0 was distributed by Dr. Dobb's Journal on CD-ROM only, due to the immense size (600 MB) of the release (the "386BSD Reference CD-ROM"), and it was a best-selling CD-ROM for three years (1994–1997). 386BSD Release 1.0 contained a completely new kernel design and implementation, and began the process of incorporating recommendations made by earlier Berkeley designers that had never been attempted in BSD.
Release 2.0
On August 5, 2016, an update named version 2.0 was pushed to the 386BSD GitHub repository by developer Ben Jolitz. According to the official website, Release 2.0 "built upon the modular framework to create self-healing components." However, almost all of the documentation remains the same as for version 1.0, and no changelog was made available.
Relationship with BSD/386
386BSD is often confused with BSD/386, a different project developed by BSDi, a Berkeley spinout, starting in 1991. BSD/386 used the same 386BSD code contributed to the University of California in 4.3BSD Net/2. Although Jolitz worked briefly for UUNET (which later spun out BSDi) in 1991, the work he did for them diverged from that contributed to the University of California and did not appear in 386BSD. Instead, William Jolitz gave regular code updates to Donn Seeley of BSDi for packaging and testing, and returned all materials when he left the company following fundamental disagreements on its direction and goals.
Copyright and use of the code
All rights with respect to 386BSD and JOLIX are now held exclusively by William Jolitz and Lynne Jolitz. 386BSD public releases ended in 1997, as the code is now available through the many 386BSD-derived operating systems, such as FreeBSD, NetBSD and OpenBSD. Portions of 386BSD may also be found in other open systems such as OpenSolaris.
Further reading
Jolitz, William F. and Jolitz, Lynne Greer: Porting UNIX to the 386: A Practical Approach, 17-part series in Dr. Dobb's Journal, January 1991 – July 1992:
Jan/1991: DDJ "Designing a Software Specification"
Feb/1991: DDJ "Three Initial PC Utilities"
Mar/1991: DDJ "The Standalone System"
Apr/1991: DDJ "Language Tools Cross-Support"
May/1991: DDJ "The Initial Root Filesystem"
Jun/1991: DDJ "Research and the Commercial Sector: Where Does BSD Fit In?"
Jul/1991: DDJ "A Stripped-Down Kernel"
Aug/1991: DDJ "The Basic Kernel"
Sep/1991: DDJ "Multiprogramming and Multiprocessing, Part I"
Oct/1991: DDJ "Multiprogramming and Multiprocessing, Part II"
Nov/1991: DDJ "Device Autoconfiguration"
Feb/1992: DDJ "UNIX Device Drivers, Part I"
Mar/1992: DDJ "UNIX Device Drivers, Part II"
Apr/1992: DDJ "UNIX Device Drivers, Part III"
May/1992: DDJ "Missing Pieces, Part I"
Jun/1992: DDJ "Missing Pieces, Part II"
Jul/1992: DDJ "The Final Step: Running Light with 386BSD"
Jolitz, William F. and Jolitz, Lynne Greer: Operating System Source Code Secrets Vol 1 The Basic Kernel, 1996,
Jolitz, William F. and Jolitz, Lynne Greer: Operating System Source Code Secrets Vol 2 Virtual Memory, 2000,
References
External links
William Jolitz's 386bsd Notebook
Jolix.com
Porting UNIX to the 386: A Practical Approach
Memories of 386BSD releases by Lynne Jolitz
The unknown hackers - Salon.com
386BSD Design Notes Professional Video Series
Frequently asked questions of 386BSD - active Q/A by authors
Raising Top Quality Rabble; article mentioning 386BSD
Archived comment on "Raising Top Quality Rabble" with remarks on the history of 386BSD by Lynne Jolitz
Remarks on the history of 386BSD by Greg Lehey
More information on the various releases of 386BSD
Browsable 386BSD kernel sources
Berkeley Software Distribution
Discontinued operating systems
Free software operating systems
1992 software
|
79099
|
https://en.wikipedia.org/wiki/Maple%20%28software%29
|
Maple (software)
|
Maple is a symbolic and numeric computing environment as well as a multi-paradigm programming language. It covers several areas of technical computing, such as symbolic mathematics, numerical analysis, data processing, visualization, and others. A toolbox, MapleSim, adds functionality for multidomain physical modeling and code generation.
Maple's capabilities for symbolic computing include those of a general-purpose computer algebra system. For instance, it can manipulate mathematical expressions and find symbolic solutions to certain problems, such as those arising from ordinary and partial differential equations.
Maple is developed commercially by the Canadian software company Maplesoft. The name 'Maple' is a reference to the software's Canadian heritage.
Overview
Core functionality
Users can enter mathematics in traditional mathematical notation. Custom user interfaces can also be created. There is support for numeric computations, to arbitrary precision, as well as symbolic computation and visualization. Examples of symbolic computations are given below.
Maple incorporates a dynamically typed imperative-style programming language (resembling Pascal) which permits lexically scoped variables. There are also interfaces to other languages (C, C#, Fortran, Java, MATLAB, and Visual Basic), as well as to Microsoft Excel.
Maple supports MathML 2.0, which is a W3C format for representing and interpreting mathematical expressions, including their display in web pages. There is also functionality for converting expressions from traditional mathematical notation to markup suitable for the typesetting system LaTeX.
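For instance, a minimal sketch of the LaTeX conversion (the inert Int keeps the integral unevaluated; the exact markup produced varies by Maple version):
latex(Int(exp(-x^2), x = 0 .. infinity));   # prints LaTeX markup for the integral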
Architecture
Maple is based on a small kernel, written in C, which provides the Maple language. Most functionality is provided by libraries, which come from a variety of sources. Most of the libraries are written in the Maple language; these have viewable source code. Many numerical computations are performed by the NAG Numerical Libraries, ATLAS libraries, or GMP libraries.
Different functionality in Maple requires numerical data in different formats. Symbolic expressions are stored in memory as directed acyclic graphs. The standard interface and calculator interface are written in Java.
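A minimal sketch of inspecting that internal representation (dismantle is a Maple introspection command; its output format varies by version):
expr := (x + y)^2 + sin(x + y):   # x + y occurs twice but is stored once in the DAG
dismantle(expr);                  # prints Maple's internal structure of expr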
History
The first concept of Maple arose from a meeting in late 1980 at the University of Waterloo. Researchers at the university wished to purchase a computer powerful enough to run the Lisp-based computer algebra system Macsyma. Instead, they opted to develop their own computer algebra system, named Maple, that would run on lower cost computers. Aiming for portability, they began writing Maple in programming languages from the BCPL family (initially using a subset of B and C, and later on only C). A first limited version appeared after three weeks, and fuller versions entered mainstream use beginning in 1982. By the end of 1983, over 50 universities had copies of Maple installed on their machines.
In 1984, the research group arranged with Watcom Products Inc to license and distribute the first commercially available version, Maple 3.3. In 1988 Waterloo Maple Inc. (Maplesoft) was founded. The company’s original goal was to manage the distribution of the software, but eventually it grew to have its own R&D department, where most of Maple's development takes place today (the remainder being done at various university laboratories).
In 1989, the first graphical user interface for Maple was developed and included with version 4.3 for the Macintosh. X11 and Windows versions of the new interface followed in 1990 with Maple V. In 1992, Maple V Release 2 introduced the Maple "worksheet" that combined text, graphics, and input and typeset output. In 1994 a special issue of a newsletter created by Maple developers called MapleTech was published.
In 1999, with the release of Maple 6, Maple included some of the NAG Numerical Libraries. In 2003, the current "standard" interface was introduced with Maple 9. This interface is primarily written in Java (although portions, such as the rules for typesetting mathematical formulae, are written in the Maple language). The Java interface was criticized for being slow; improvements have been made in later versions, although the Maple 11 documentation recommends the previous ("classic") interface for users with less than 500 MB of physical memory.
Between 1995 and 2005 Maple lost significant market share to competitors due to a weaker user interface. With Maple 10 in 2005, Maple introduced a new "document mode" interface, which has since been further developed across several releases.
In September 2009 Maple and Maplesoft were acquired by the Japanese software retailer Cybernet Systems.
Version history
Maple 1.0: January, 1982
Maple 1.1: January, 1982
Maple 2.0: May, 1982
Maple 2.1: June, 1982
Maple 2.15: August, 1982
Maple 2.2: December, 1982
Maple 3.0: May, 1983
Maple 3.1: October, 1983
Maple 3.2: April, 1984
Maple 3.3: March, 1985 (first publicly available version)
Maple 4.0: April, 1986
Maple 4.1: May, 1987
Maple 4.2: December, 1987
Maple 4.3: March, 1989
Maple V: August, 1990
Maple V R2: November 1992
Maple V R3: March 15, 1994
Maple V R4: January, 1996
Maple V R5: November 1, 1997
Maple 6: December 6, 1999
Maple 7: July 1, 2001
Maple 8: April 16, 2002
Maple 9: June 30, 2003
Maple 9.5: April 15, 2004
Maple 10: May 10, 2005
Maple 11: February 21, 2007
Maple 11.01: July, 2007
Maple 11.02: November, 2007
Maple 12: May, 2008
Maple 12.01: October, 2008
Maple 12.02: December, 2008
Maple 13: April 28, 2009
Maple 13.01: July, 2009
Maple 13.02: October, 2009
Maple 14: April 29, 2010
Maple 14.01: October 28, 2010
Maple 15: April 13, 2011
Maple 15.01: June 21, 2011
Maple 16: March 28, 2012
Maple 16.01: May 16, 2012
Maple 17: March 13, 2013
Maple 17.01: July, 2013
Maple 18: Mar 5, 2014
Maple 18.01: May, 2014
Maple 18.01a: July, 2014
Maple 18.02: Nov, 2014
Maple 2015.0: Mar 4, 2015
Maple 2015.1: Nov, 2015
Maple 2016.0: March 2, 2016
Maple 2016.1: April 20, 2016
Maple 2016.1a: April 27, 2016
Maple 2017.0: May 25, 2017
Maple 2017.1: June 28, 2017
Maple 2017.2: August 2, 2017
Maple 2017.3: October 3, 2017
Maple 2018.0: March 21, 2018
Maple 2019.0: March 14, 2019
Maple 2020.0: March 12, 2020
Features
Features of Maple include:
Support for symbolic and numeric computation with arbitrary precision
Elementary and special mathematical function libraries
Complex numbers and interval arithmetic
Arithmetic, greatest common divisors and factorization for multivariate polynomials over the rationals, finite fields, algebraic number fields, and algebraic function fields
Limits, series and asymptotic expansions
Gröbner basis
Differential Algebra
Matrix manipulation tools including support for sparse arrays
Mathematical function graphing and animation tools
Solvers for systems of equations, diophantine equations, ODEs, PDEs, DAEs, DDEs and recurrence relations
Numeric and symbolic tools for discrete and continuous calculus including definite and indefinite integration, definite and indefinite summation, automatic differentiation and continuous and discrete integral transforms
Constrained and unconstrained local and global optimization
Statistics including model fitting, hypothesis testing, and probability distributions
Tools for data manipulation, visualization and analysis
Tools for probability and combinatoric problems
Support for time-series and unit based data
Connection to online collection of financial and economic data
Tools for financial calculations including bonds, annuities, derivatives, options etc.
Calculations and simulations on random processes
Tools for text mining including regular expressions
Tools for signal processing and linear and non-linear control systems
Discrete math tools including number theory
Tools for visualizing and analysing directed and undirected graphs
Group theory including permutation and finitely presented groups
Symbolic tensor functions
Import and export filters for data, image, sound, CAD, and document formats
Technical word processing including formula editing
Programming language supporting procedural, functional and object-oriented constructs
Tools for adding user interfaces to calculations and applications
Tools for connecting to SQL, Java, .NET, C++, Fortran and http
Tools for generating code for C, C#, Fortran, Java, JavaScript, Julia, Matlab, Perl, Python, R, and Visual Basic (see the sketch after this list)
Tools for parallel programming
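As a minimal sketch of the code generation tools (the CodeGeneration package and its C and Java translators exist in modern Maple; the procedure f below is a made-up example):
f := proc(x) 2*x^2 - x + 1 end proc:
CodeGeneration[C](f);      # emits a C function equivalent to f
CodeGeneration[Java](f);   # emits a Java translation of the same procedure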
Examples of Maple code
The following code, which computes the factorial of a nonnegative integer, is an example of an imperative programming construct within Maple:
myfac := proc(n::nonnegint)
    local out, i;
    out := 1;              # running product
    for i from 2 to n do
        out := out * i     # multiply in each successive factor
    end do;
    out                    # the last expression evaluated is the return value
end proc;
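For example, a brief usage note (calling the procedure defined above):
myfac(5);
120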
Simple functions can also be defined using the "maps to" arrow notation:
myfac := n -> product(i, i = 1..n);
Integration
Find the indefinite integral ∫ cos(x/a) dx:
int(cos(x/a), x);
Output:
a*sin(x/a)
Determinant
Compute the determinant of a matrix.
M := Matrix([[1,2,3], [a,b,c], [x,y,z]]); # example Matrix
LinearAlgebra:-Determinant(M);
Series expansion
series(tanh(x), x = 0, 15)
Solve equations numerically
The following code numerically calculates the roots of a high-order polynomial:
f := x^53-88*x^5-3*x-5 = 0
fsolve(f)
-1.097486315, -.5226535640, 1.099074017
The same command can also solve systems of equations:
f := (cos(x+y))^2 + exp(x)*y+cot(x-y)+cosh(z+x) = 0:
g := x^5 - 8*y = 2:
h := x+3*y-77*z=55:
fsolve( {f,g,h} );
{x = -1.543352313, y = -1.344549481, z = -.7867142955}
Plotting of function of single variable
Plot x*sin(x) with x ranging from -10 to 10:
plot(x*sin(x), x = -10..10);
Plotting of function of two variables
Plot x^2 + y^2 with x and y ranging from -1 to 1:
plot3d(x^2+y^2, x = -1..1, y = -1..1);
Animation of functions
Animation of function of two variables
f := sin(k*x - t):   # an example expression in x, t and a parameter k (assumed; any such expression works)
plots:-animate(subs(k = 0.5, f), x=-30..30, t=-10..10, numpoints=200, frames=50, color=red, thickness=3);
Animation of functions of three variables
plots:-animate3d(cos(t*x)*sin(3*t*y), x=-Pi..Pi, y=-Pi..Pi, t=1..2);
Fly-through animation of 3-D plots.
M := Matrix([[400,400,200], [100,100,-400], [1,1,1]], datatype=float[8]):
plot3d(1, x=0..2*Pi, y=0..Pi, axes=none, coords=spherical, viewpoint=[path=M]);
Laplace transform
Laplace transform
f := (1+A*t+B*t^2)*exp(c*t);
inttrans:-laplace(f, t, s);
inverse Laplace transform
inttrans:-invlaplace(1/(s-a), s, x);
Fourier transform
Fourier transform
inttrans:-fourier(sin(x), x, w)
Integral equations
Find functions f that satisfy the integral equation
f(x) - 3 ∫_{-1}^{1} (x y + x^2 y^2) f(y) dy = h(x).
eqn:= f(x)-3*Int((x*y+x^2*y^2)*f(y), y=-1..1) = h(x):
intsolve(eqn,f(x));
Use of the Maple engine
The Maple engine is used within several other products from Maplesoft:
Moebius, DigitalEd’s online testing suite, uses Maple to algorithmically generate questions and grade student responses.
MapleNet allows users to create JSP pages and Java Applets. MapleNet 12 and above also allow users to upload and work with Maple worksheets containing interactive components.
MapleSim, an engineering simulation tool.
Maple Quantum Chemistry Package from RDMChem computes and visualizes the electronic energies and properties of molecules.
Listed below are third-party commercial products that no longer use the Maple engine:
Versions of Mathcad released between 1994 and 2006 included a Maple-derived algebra engine (MKM, aka Mathsoft Kernel Maple), though subsequent versions use MuPAD.
Symbolic Math Toolbox in MATLAB contained a portion of the Maple 10 engine, but now uses MuPAD (starting with the MATLAB R2007b release).
Older versions of the mathematical editor Scientific Workplace included Maple as a computational engine, though current versions include MuPAD.
See also
Comparison of computer algebra systems
Comparison of numerical-analysis software
Comparison of programming languages
Comparison of statistical packages
List of computer algebra systems
List of computer simulation software
List of graphing software
List of numerical-analysis software
Mathematical software
SageMath (an open source algebra program)
References
External links
Maplesoft, division of Waterloo Maple, Inc. – official website
Maple Online Help – online documentation
MaplePrimes – a community website for Maple users
MapleCloud – an online Maple application viewer
C (programming language) software
Computational notebook
Computer algebra system software for Linux
Computer algebra system software for MacOS
Computer algebra system software for Windows
Computer algebra systems
Cross-platform software
Data mining and machine learning software
Data visualization software
Data-centric programming languages
Econometrics software
Functional languages
Interactive geometry software
IRIX software
Linear algebra
Maplesoft
Mathematical optimization software
Mathematical software
Numerical analysis software for Linux
Numerical analysis software for MacOS
Numerical analysis software for Windows
Numerical programming languages
Numerical software
Parallel computing
Physics software
Plotting software
Products introduced in 1982
Proprietary commercial software for Linux
Proprietary cross-platform software
Regression and curve fitting software
Simulation programming languages
Software modeling language
Statistical programming languages
Theorem proving software systems
Time series software
|
33518395
|
https://en.wikipedia.org/wiki/ClickSoftware%20Technologies
|
ClickSoftware Technologies
|
ClickSoftware, a Salesforce company, offers automated mobile workforce management and service optimization solutions for enterprises and small businesses, covering both mobile and in-house resources. It has over 700 employees in offices in North America, EMEA, and APAC.
ClickSoftware coined the term service chain optimization in 1996.
History
ClickSoftware was founded in 1997 by Moshe BenBassat and headquartered in Givat Shmuel, Israel. The company's name had earlier been ClickService Software, and IET–Intelligent Electronics before that. BenBassat has held faculty positions at the University of Southern California, at Tel Aviv University and at UCLA. In an interview with The Wall Street Transcript, BenBassat explained that the company developed from what had originally been his own consulting practice.
On October 21, 2002, ClickSoftware announced that it would restate its financial statements for 2000 and 2001, as well as the first six months of 2002, following a review of the statements by the company's audit committee.
Shares of ClickSoftware began trading on the NASDAQ in 2000, at the height of the dot-com bubble. In 2003 ClickSoftware was chosen to optimize the scheduling and street-level routing of Deutsche Telekom's T-Com division's field service engineers. At the Summer Olympics in China in 2008 ClickSoftware was responsible for coordinating the activities of hundreds of telecommunications technicians.
In 2013 ClickSoftware integrated its software with the native Salesforce1 platform, making its products available on the Salesforce AppExchange.
In 2014, ClickSoftware acquired Xora Inc., a cloud-based mobile workforce management company.
On April 30, 2015, ClickSoftware announced the signing of a definitive agreement to be acquired by private funds managed by Francisco Partners Management L.P., a technology-focused private equity firm, in an all-cash transaction valued at approximately $438 million. With the completion of the transaction on July 13, 2015, ClickSoftware became a privately held company, and its ordinary shares ceased to trade on the NASDAQ Global Select Market. The company continued to operate under the same brand as ClickSoftware Technologies Ltd. With the conclusion of the deal, Moshe BenBassat, ClickSoftware's founder and CEO, retired from the CEO role, though he remained a board member and an active advisor on the company's strategic goals and growth objectives. Paul Ilse, an operating partner with Francisco Partners, was initially named Chief Executive Officer. On October 1, 2015, Tom Heiser was appointed CEO, replacing Paul Ilse.
ClickSoftware's street-level routing gives service organizations visibility into how best to direct field technicians from job to job, taking into consideration the exact layout of the roads.
In 2017, ClickSoftware announced the signing of a global reseller agreement with SAP enabling the reselling of the ClickSoftware Field Service Edge solution as the SAP Scheduling and Resource management application by ClickSoftware.
In 2018, former Autotask CEO Mark Cattini joined ClickSoftware as CEO, followed by previous Autotask co-workers Elmer Lai who joined ClickSoftware as CFO and Patrick Burns who joined as Senior Vice President of Product Management.
On August 7, 2019, Salesforce.com announced an agreement to acquire ClickSoftware.
Acquisitions
Acquisition of Xora
On March 5, 2014 it was announced that ClickSoftware would acquire Xora Inc., based in Mountain View, California, which provides software-as-a-service (SaaS) solutions for companies whose success depends on the productivity and efficiency of ‘always mobile’ workers. The acquisition was for approximately $15 million, including all working capital and net cash adjustments.
Partners
ClickSoftware has a large partner ecosystem of resellers, system integrators and OEMs, such as IBM, SAP, Salesforce.com, Accenture, Capgemini, Infosys, EPI-USE, Atos, Infor, Diabsolut and more. StreetSmart products are sold through carrier partners such as Verizon, AT&T and Sprint Corporation.
See also
Decision support system
Service chain optimization
Field service management
Enterprise resource planning
Enterprise mobility management
Customer relationship management
References
Software companies based in Massachusetts
Human resource management software
Companies based in Burlington, Massachusetts
Software companies established in 1979
1979 establishments in Israel
Software companies of the United States
Salesforce
|
50058607
|
https://en.wikipedia.org/wiki/Modeling%20and%20simulation%20of%20batch%20distillation%20unit
|
Modeling and simulation of batch distillation unit
|
Aspen Plus, Aspen HYSYS, ChemCAD, PRO/II, and MATLAB are commonly used process simulators for the modeling, simulation, and optimization of distillation processes in the chemical industries. Distillation is the technique of preferentially separating the more volatile components from the less volatile ones in a feed, followed by condensation. The vapor produced is richer in the more volatile components. The distribution of a component between the two phases is governed by the vapour–liquid equilibrium relationship. In practice, distillation may be carried out by either of two principal methods. The first method is based on producing a vapor by boiling the liquid mixture to be separated and condensing the vapors without allowing any liquid to return to the still; there is no reflux. The second method is based on returning part of the condensate to the still under such conditions that this returning liquid is brought into intimate contact with the vapors on their way to the condenser.
Chemical process modeling
Chemical process modeling is a technique used in chemical engineering process design. Process modeling is the physical, mathematical, or logical representation of a real process, system, or phenomenon, built using the model library of a process simulator. With a process simulator, the engineer defines a system of interconnected components; a system is a group of objects joined together in some regular interaction or interdependence toward the accomplishment of some purpose. The system is then solved so that its steady-state or dynamic behavior can be predicted. The components of the system and their connections are represented as a process flow diagram. A flow diagram for the ammonia process (Finlayson, 2006), drawn with Aspen Plus, is shown in figure 1 below.
The most important result of developing a mathematical model of a chemical engineering system is the understanding gained of what really makes the process tick. Mathematical models can be useful in all phases of chemical engineering, from research and development to plant operations, and even in business and economics studies. The basis for such models are fundamental physical and chemical laws, such as the laws of conservation of mass, energy, and momentum, together with degrees-of-freedom analysis. Mathematical modeling is very much an art: it takes experience, practice, and brain power to be a good mathematical modeler.
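As a minimal illustration of such a conservation-law model (a single well-mixed tank with assumed constant flows; the numbers are hypothetical, and Maple is used here simply as a convenient equation solver rather than a flowsheet simulator):
Fin := 2.0:  Fout := 1.5:            # inlet and outlet flows, m^3/h (assumed)
ode := diff(V(t), t) = Fin - Fout:   # conservation of mass on the tank holdup V(t)
dsolve({ode, V(0) = 10}, V(t));      # returns V(t) = 10 + 0.5*t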
Process simulation
A simulation is the representation of a real-world process or system over a period of time. Simulation can be done by hand or on a computer; it involves generating an artificial history of the system and observing that history to draw inferences about the operating characteristics of the real system. Thus, simulation modelling can be used both as an analysis tool for predicting the effect of changes to an existing system and as a design tool to predict the performance of a new system under varying sets of circumstances. Process simulation describes a process flow diagram in which various unit operations are present and connected by product streams.
It is extensively used both in education and in industry to predict the behavior of a process using material balance equations, equilibrium relationships, reaction kinetics, etc.
Batch distillation
In batch distillation, the feed is charged to the still pot, to which heat is supplied continuously through a steam jacket or a steam coil. As the mixture boils, it generates a vapour richer in the more volatile components. But as boiling continues, the concentration of the more volatile components in the liquid decreases. It is generally assumed that equilibrium vaporization occurs in the still. The vapour is led to a condenser, and the condensate, or top product, is collected in the receiver. At the beginning the condensate is rich in the more volatile components, but their concentration decreases as condensate accumulates in the receiver. The condensate is usually withdrawn intermittently, giving products or cuts of different concentrations. Batch distillation is used when the feed rate is not large enough to justify the installation of a continuous distillation unit. It may also be used when the constituents differ greatly in volatility. Figure 1 shows the batch distillation setup.
Batch distillation of binary mixture
Let L be the moles of liquid in the still, x the mole fraction of the volatile component (i.e. A) in that liquid, and D the moles of accumulated condensate; the concentration of the equilibrium vapour is y*. Over a small time interval, the change in the amount of liquid in the still is dL and the amount of vapour withdrawn is dD. The following differential mass balance equations may be written:
Total material balance: dD = -dL ----- (i)
Component A balance: y* dD = -d(Lx) ----- (ii), which expands to y* dD = -(L dx + x dL) ----- (iii)
Equation (i) means that the total amount of vapour generated must equal the decrease in the total amount of liquid. Similarly, equation (ii) means that the loss in the number of moles of A from the still because of vaporization is the same as the amount of A in the small amount of vapour generated.
Putting equation (i) into equation (iii) and rearranging,
dL/L = dx/(y* - x) ----- (iv)
If distillation starts with F moles of feed of concentration x_F and continues till the amount of liquid reduces to W moles (composition x_W), the above equation can be integrated to give
ln(F/W) = ∫_{x_W}^{x_F} dx/(y* - x) ----- (v)
Equation (v) is the basic equation of batch distillation and is called the Rayleigh equation; it is used for design calculations on batch distillation columns.
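A minimal numeric sketch of equation (v) in Maple, assuming a constant relative volatility alpha = 2.5 so that the equilibrium curve is y* = alpha*x/(1 + (alpha - 1)*x); both that value and the feed data below are hypothetical:
alpha := 2.5:
ystar := x -> alpha*x/(1 + (alpha - 1)*x):              # assumed equilibrium relationship
F := 100:  xF := 0.5:  xW := 0.2:                       # feed moles and mole fractions (assumed)
rayleigh := evalf(Int(1/(ystar(x) - x), x = xW .. xF)): # right-hand side of (v)
W := F*exp(-rayleigh);                                  # moles left in the still, since ln(F/W) equals the integral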
Aspen Plus software
History
During the 1970s, researchers at the Massachusetts Institute of Technology (MIT) developed a novel technology with United States Department of Energy funding. The undertaking, known as the Advanced System for Process Engineering (ASPEN) Project, was originally intended to produce nonlinear simulation software that could aid in the development of synthetic fuels. In 1981, AspenTech was founded to commercialize the simulation software package. AspenTech went public in October 1994 and has acquired 19 industry-leading companies as part of its mission to offer complete, integrated solutions to the process industries.
As the complexity of a plant with several integrated process units increases, solving the resulting large set of equations becomes a challenge. In such situations, process flowsheet simulators are usually used.
Type of Aspen simulator package
The sophisticated Aspen software tools can simulate large processes with a high degree of accuracy. The model library includes mixers, splitters, phase separators, heat exchangers, distillation columns, reactors, pressure changers, manipulators, etc. By interconnecting several unit operations, a process flow diagram (PFD) for a complete plant can be developed. The Fortran code required to solve the model equations of a single unit or of a complete chemical plant is built into the Aspen simulator.
Aspen simulators have been developed for the simulation of a wide variety of processes, such as chemical and petrochemical, petroleum refining, polymer, and coal-based processes.
Several Aspen packages are available, each aimed at a different kind of simulation. Briefly, some of them are presented below.
Aspen Plus – used for steady-state simulation in the chemicals, petrochemicals and petroleum industries, as well as for performance monitoring, design, optimization and business planning.
Aspen Dynamics – used for dynamic studies and closed-loop control of several process industries; Aspen Dynamics is integrated with Aspen Plus.
Aspen Batch CAD – typically used for batch processing, reactions and distillations; it allows reaction and kinetics information to be derived from experimental data to create a process simulation.
Aspen Chromatography – a dynamic simulation package used for both batch chromatography and simulated-moving-bed chromatography processes.
Aspen Properties – used for thermophysical property calculations.
Aspen Polymer Plus – a modeling tool for steady-state and dynamic simulation and optimization of polymer processes; it is available within Aspen Plus or Aspen Properties rather than via an external menu.
Aspen HYSYS – a process modeling package typically used for steady-state simulation, performance monitoring, design, optimization and business planning in petroleum refining and the oil and gas industries.
Aspen simulates the performance of the designed process. A solid understanding of the underlying chemical engineering principles is needed to supply reasonable values for the input parameters and to analyse the results obtained. In addition to the process flow diagram, the input information required to simulate a process comprises the setup, component properties, streams, and blocks.
Simulation result of batch distillation unit
BatchFrac is a rigorous model for the simulation of batch distillation columns, available in the model library of the software. It can also include reactions occurring on any stage of the separator. The BatchFrac model does not consider column hydraulics, and assumes negligible vapour holdup and constant liquid holdup. Here, modeling and simulation of a batch distillation unit is carried out with Aspen Plus, one of the most important process simulators used in the chemical industry, using the data given in the table; the simulation results are then examined.
The steps involved in the simulation of a batch distillation column using Aspen Plus are:
Understand the problem statement and input stream data
Specifying the components- add Ethanol and water from the components list
Specifying the property method-UNIFAC
Go to the simulation library
Creating the flowsheet with the help of the model library present in the software; for batch distillation, choose the BatchFrac model from the library (figure 2)
Specifying input stream information that is temperature, pressure, Composition type and flow rate of the components
Temperature = 373 K, Pressure=1 bar
Flow basis = Volume (50 L/hr)
Composition type = mole fraction
Ethanol - 0.5, Water – 0.5
Specifying block information, i.e. what type of distillation column is present (pot + overhead condenser type)
Configuring settings – type of global units used – METCBR
Running the simulation
Viewing the results – check the stream results or stream table obtained from running the simulation (figure 3). Similarly, simulations can be run for other distillation columns, such as fractional distillation, by using the RadFrac model from the model library present in the software.
See also
Flash distillation
Fractional distillation
Steam distillation
Process optimization
Process design
Process simulation software
Modeling and simulation
Computer simulation
Advanced Simulation Library
List of chemical process simulators
References
Distillation
Simulation
|
3309319
|
https://en.wikipedia.org/wiki/Architecture%20of%20macOS
|
Architecture of macOS
|
The architecture of macOS describes the layers of the operating system that is the culmination of Apple Inc.'s decade-long research and development process to replace the classic Mac OS.
After the failures of their previous attempts -- Pink, which started as an Apple project but evolved into a joint venture with IBM called Taligent, and Copland, which started in 1994 and was cancelled two years later -- Apple began development of Mac OS X with the acquisition of NeXT's NeXTSTEP in 1997.
Note that Mac OS X was renamed to OS X in 2012 and then again to macOS in 2016.
Development
NeXTSTEP
NeXTSTEP used a hybrid kernel that combined the Mach 2.5 kernel developed at Carnegie Mellon University with subsystems from 4.3BSD. NeXTSTEP also introduced a new windowing system based on Display PostScript, which was intended to achieve better WYSIWYG by using the same language to draw content on monitors as on printers. NeXT also included object-oriented programming tools based on the Objective-C language that they had acquired from Stepstone, and a collection of Frameworks (or Kits) intended to speed software development. NeXTSTEP originally ran on Motorola's 68k processors, but was later ported to Intel's x86, Hewlett-Packard's PA-RISC and Sun Microsystems' SPARC processors. Later, the developer tools and frameworks were released as OpenStep, a development platform that would run on other operating systems.
Rhapsody
On February 4, 1997, Apple acquired NeXT and began development of the Rhapsody operating system. Rhapsody built on NeXTSTEP, porting the core system to the PowerPC architecture and adding a redesigned user interface based on the Platinum user interface from Mac OS 8. An emulation layer called Blue Box allowed Mac OS applications to run within an actual instance of the Mac OS, and an integrated Java platform was included. The Objective-C developer tools and Frameworks were referred to as the Yellow Box and were also made available separately for Microsoft Windows. The Rhapsody project eventually bore the fruit of all Apple's efforts to develop a new-generation Mac OS, which finally shipped in the form of Mac OS X Server.
Mac OS X
At the 1998 Worldwide Developers Conference (WWDC), Apple announced a move that was intended as a response to complaints from Macintosh software developers who were not happy with the two options (Yellow Box and Blue Box) available in Rhapsody. Mac OS X would add another developer API to the existing ones in Rhapsody. Key APIs from the Macintosh Toolbox would be implemented in Mac OS X to run directly on the BSD layers of the operating system instead of in the emulated Macintosh layer. This modified interface, called Carbon, would eliminate approximately 2000 troublesome API calls (of about 8000 total) and replace them with calls compatible with a modern OS.
At the same conference, Apple announced that the Mach side of the kernel had been updated with sources from the OSFMK 7.3 (Open Source Foundation Mach Kernel) and the BSD side of the kernel had been updated with sources from the FreeBSD, NetBSD and OpenBSD projects. They also announced a new driver model called I/O Kit, intended to replace the Driver Kit used in NeXTSTEP citing Driver Kit's lack of power management and hot-swap capabilities and its lack of automatic configuration capability.
At the 1999 WWDC, Apple revealed Quartz, a new Portable Document Format (PDF) based windowing system for the operating system that was not encumbered with licensing fees to Adobe like the Display PostScript windowing system of NeXTSTEP. Apple also announced that the Yellow Box layer had been renamed Cocoa and began to move away from their commitment to providing the Yellow Box on Windows. At this WWDC, Apple also showed Mac OS X booting off an HFS Plus-formatted drive for the first time.
The first public release of Mac OS X released to consumers was a Public Beta released on September 13, 2000.
References
External links
Official Website
MacOS Mojave
Mac OS X Reviews
Mac OS X Internals
Operating systems by architecture
MacOS
|
55923897
|
https://en.wikipedia.org/wiki/Constellation%20Software
|
Constellation Software
|
Constellation Software is a Canadian diversified software company. It is based in Toronto, Canada, is listed on the Toronto Stock Exchange, and is a constituent of the S&P/TSX 60.
The company was founded by Mark Leonard, a former venture capitalist, in 1995. It went public in 2006, and now has 13,000 employees spread over six operating segments.
Business
The company's business strategy is to acquire software companies and then hold them for the long term. It has acquired over 500 businesses since its founding. It focuses on vertical market software companies (i.e., those that create software for a particular industry or market, as opposed to software usable in a wide variety of markets). Most of its acquisitions are relatively small (less than $5 million), although the company has indicated that it may pursue larger acquisitions in the future. For instance, Constellation acquired Acceo Solutions for $250 million in January 2018, the second-largest acquisition in its history. Although the company has experienced great success with this strategy (its stock has increased 30-fold since its IPO in 2006), it has faced more competition in acquiring companies in recent years, especially from private equity and hedge funds. As of 2016, 67% of revenue was from customers in the public sector and 33% from the private sector; 12% of revenue was from Canada, 52% from the US, 30% from Europe, and 5% from the rest of the world.
Operating Segments
Constellation Software has six operating segments:
Volaris Group: focuses on acquiring software businesses serving various areas, including agri-business, financial services, and education. It has approximately 45 constituent software businesses.
N. Harris Computer Corporation: provides mission-critical software solutions for the public sector, healthcare, utilities, and private-sector verticals throughout North America, Europe, Asia, and Australia. It has 31 constituent businesses.
Jonas Software: operates 70 companies, primarily in the hospitality and construction sectors.
Vela Software: operates 8 divisions, primarily focused on the industrial sector, including oil and gas and manufacturing.
Perseus Operating Group: operates 56 companies in a variety of industries, including home building, pulp and paper, dealership, finance, healthcare, digital marketing, and real estate.
Total Specific Solutions (TSS): focuses on software companies in the UK and Europe. Total was acquired in December 2013 for $360 million. In January 2021, this operating segment was spun-off to Topicus.com.
Controversy
The founder and chairman of Constellation Software, Mark Leonard, has long maintained a low profile, declining media interviews and making few public appearances.
In 2016, the founder of Innoprise Software sued Harris Computer Systems for giving away its software for free, thus reducing the value of a revenue-sharing agreement.
In mid-2018, the company cancelled its quarterly earnings calls, a highly unusual step for a public company. Analysts suggest the company took this step because it was worried about leaking information about potential acquisitions to its competitors.
Management Team
Mark Leonard - President & Chairman of the Board
Jamal Baksh - Chief Financial Officer
Mark Miller - Director, CEO & COO
Bernard Anzarouth - Chief Investment Officer
References
External links
Software companies of Canada
Companies listed on the Toronto Stock Exchange
Software companies established in 1995
Canadian companies established in 1995
1995 establishments in Ontario
Companies based in Toronto
2006 initial public offerings
|
30736670
|
https://en.wikipedia.org/wiki/Cities%20in%20Motion
|
Cities in Motion
|
Cities in Motion is a business simulation game developed by Colossal Order and published by Paradox Interactive. It was released for Microsoft Windows on February 23, 2011, with OS X and Linux ports coming at later dates. The goal of the game is to implement and improve a public transport system in four European cities: Amsterdam, Berlin, Helsinki and Vienna. This can be achieved by building lines for metro trains, trams, boats, buses and helicopters.
The game is available for purchase as a physical disc, as a download via Steam, and as a DRM-free download via various other distributors.
History
On April 5, 2011 Paradox Interactive released the DLC Cities in Motion: Design Classics, followed on May 20, 2011 by Cities in Motion: Design Marvels, featuring five new vehicles in each release. A third DLC, Cities in Motion: Design Now, was released on 14 June 2011, and included 5 new vehicles for each method of transportation. Cities in Motion: Metro Stations was released on 14 June 2011 featuring 2 new metro stations.
On May 19, 2011 Paradox Interactive announced Cities in Motion: Tokyo, an expansion containing a new city, Tokyo, and campaign, new vehicles and the introduction of the Monorail to the game. Tokyo was released on 31 May 2011. A second expansion, German Cities, was released on 14 September 2011. It contained 2 new cities, Cologne and Leipzig. A poll on the game's Facebook page made the city of Munich a free download for all users in addition to the expansion pack. During their Holiday Teaser, Paradox Interactive released a photo of the Statue of Liberty with the title Cities in Motion. U.S. Cities was soon revealed in a press conference in January 2012. The game was released on 17 January 2012, featuring New York City and San Francisco as the two new cities. In addition, 5 new vehicles and 2 new methods of transportation were added to the game, making it the largest expansion yet.
On May 20, 2011 Paradox Interactive released the Mac version of Cities in Motion.
On November 20, 2012, the London DLC was released.
A port of Cities in Motion to Linux was announced by Paradox Interactive in 2013, with it eventually arriving via Steam on January 9, 2014.
Sequel
On August 14, 2012 at the annual Gamescom video games trade fair in Cologne, Paradox Interactive announced the sequel, named Cities in Motion 2. It was released six months later on April 2, 2013.
See also
Traffic Giant
Transport Tycoon
Cities in Motion 2
Cities: Skylines - a full city simulator also by Colossal Order
References
External links
Official webpage
2011 video games
Business simulation games
Linux games
MacOS games
Paradox Interactive games
Transport simulation games
Video games developed in Finland
Video games set in Tokyo
Windows games
Single-player video games
|
68792288
|
https://en.wikipedia.org/wiki/Surface%20Laptop%20Studio
|
Surface Laptop Studio
|
Surface Laptop Studio is a product line introduced by Microsoft at their Surface event on September 22, 2021. It was announced alongside the Surface Go 3, Surface Pro 8, Surface Duo 2 and several Surface accessories. The device is a new form factor featuring a dual-pivoting screen that flips into tablet mode. The laptop runs the Windows 11 operating system.
Features
Windows 11 operating system
Intel Tiger Lake 11th Gen Core i5 or Core i7 processor
Intel Iris Xe graphics, Nvidia GeForce RTX 3050 Ti (Consumer), or NVIDIA RTX A2000 (Enterprise) GPU with 4GB of GDDR6 RAM
120Hz refresh rate and Dolby Vision support
16 or 32GB of LPDDR4X RAM
256GB to 2TB NVME SSD storage
2 Thunderbolt 4 USB-C ports
Configurations
Hardware
A completely overhauled, new three-position display.
14.4-inch touch display, 2400 × 1600 pixels, 201 PPI, 3:2 aspect ratio
The first Surface Laptop to contain 2 USB-C ports with Thunderbolt 4.
It comes with a removable SSD.
Precision Haptic touchpad
Software
Surface Laptop Studio will be powered by the new Windows 11 operating system with a 30-day trial of Microsoft 365. Consumer models will get the Home edition and the business models will get the Pro edition of the operating system. The device also supports Windows Hello login using biometric facial recognition.
Timeline
References
Studio
2-in-1 PCs
Computer-related introductions in 2021
|
11381701
|
https://en.wikipedia.org/wiki/Usage%20share%20of%20operating%20systems
|
Usage share of operating systems
|
The usage share of operating systems is the percentage of computing devices that run each operating system (OS) at any particular time. All such figures are necessarily estimates because data about operating system share is difficult to obtain. There are few reliable primary sources and no agreed methodologies for its collection. Operating systems are used in numerous device types, from embedded devices without a screen through to supercomputers.
Most device types that people interact with access the web, so using web access statistics helps compare the usage share of operating systems across most device types, and also the usage share of operating systems used for the same types.
Android, an operating system using the Linux kernel, is the world's most-used operating system when judged by web use. It has 41% of the global market, followed by Windows with 32%, Apple iOS with 16%, and then Chrome OS, which also uses the Linux kernel, at 1.1%. These numbers do not include embedded devices or game consoles.
For smartphones and other pocket-sized devices, Android leads with 73% market share, and Apple's iOS has 27%.
For desktop and laptop computers, Windows is the most used at 75%, followed by Apple's macOS at 16%, and Linux-based operating systems, including Google's Chrome OS, at 5% (thereof "desktop Linux" at 2.35%).
With tablets, Apple's iOS has 55% and Android has 45%.
For the above devices, smartphones and other pocket-sized devices make up 55%, desktops and laptops 43%, and tablets 2.5%.
Linux has completely dominated the supercomputer field since 2017, with 100% of the top 500 most powerful supercomputers in the world running a Linux distribution. Linux is also the most used operating system for (web) servers, most often in the form of Ubuntu, the most common Linux distribution.
Embedded devices are the most numerous type of device (with specific operating systems made for them), yet a high percentage are standalone or lack a web browser, which makes their usage share difficult to measure. It is possible that some operating system used in embedded devices is more widely deployed than any of those mentioned above.
Worldwide device shipments
In May 2020, Gartner predicted a decline in all market segments for 2020 (from an already declining market in 2019) due to COVID-19, forecasting a 13.6% decline for all devices, while the "Work from Home Trend Saved PC Market from Collapse", with PCs predicted to decline by only 10.5%. However, in the end, according to Gartner, PC shipments grew "10.7% in Fourth Quarter of 2020 and [...] reached 275 million units in 2020, a 4.8% increase from 2019 and the highest growth in ten years." Apple, in 4th place for PCs, had the largest growth in shipments for a company in Q4, at 31.3%, while "the fourth quarter of 2020 was another remarkable period of growth for Chromebooks, with shipments increasing around 200% year over year to reach 11.7 million units. In 2020, Chromebook shipments increased over 80% to total nearly 30 million units, largely due to demand from the North American education market." Chromebooks outsold Apple's Macs worldwide.
According to Gartner, the following is the worldwide device shipments (referring to wholesale) by operating system, which includes smartphones, tablets, laptops and PCs together.
Shipments (to stores) do not equal sales to consumers, which do not necessarily occur in the year of shipment, so these numbers only loosely indicate popularity or usage. Smartphones not only sell in higher unit numbers than traditional PCs but also account for far more in dollar value, with the gap projected to widen to well over double.
For 2015 (and earlier), Gartner reported that "the year, worldwide PC shipments declined for the fourth consecutive year, which started in 2012 with the launch of tablets", with an 8% decline in PC sales for 2015 (not counting the cumulative decline over the previous years). Gartner includes Macs (running macOS) in its PC sales numbers (but not, e.g., iPads and Androids), and Macs individually had a slight increase in sales in 2015.
On 28 May 2015, Google announced that there were 1.4 billion Android users and 1 billion Google play users active during that month. This changed to 2 billion monthly active users in May 2017.
On 27 January 2016, Paul Thurrott summarized the operating system market the day after Apple announced "one billion devices".
Microsoft backed away from their goal of one billion Windows 10 devices in three years (or "by the middle of 2018") and reported on 26 September 2016 that Windows 10 was running on over 400 million devices, and in March 2019 on more than 800 million.
By late 2016, Android was described as "killing" Apple's iOS market share, with iOS smartphone sales declining not just relatively but also in number of units, even as the whole market increased.
As of 9 May 2019, the biggest smartphone companies (by market share) were Samsung, Huawei and Apple, respectively.
Gartner's own press release said, "Apple continued its downward trend with a decline of 7.7 percent in the second quarter of 2016"; that figure is based on the absolute number of units and, with the overall market increasing, understates the relative decline, as does the reported "1.7 percent [point]" drop. A 1.7-percentage-point drop, from 14.6% down to 12.9%, means an 11.6% relative decline.
Although in units sold Apple is declining, they are almost the only vendor making any profit in the smartphone sector from hardware sales alone. In Q3 2016 for example, they captured 103.6% of the market profits.
There are more mobile phone owners than toothbrush owners, mobile phones being the fastest-growing technology in history. There are a billion more active mobile phones in the world than people (and many more than 10 billion sold so far, with less than half still in use), explained by the fact that some people have more than one, such as an extra for work. All of these phones have an operating system, but only a fraction of them are smartphones with an OS capable of running modern applications. Currently 3.1 billion smartphones and tablets are in use across the world (tablets, a small fraction of the total, generally run the same operating systems, Android or iOS, the latter being more popular on tablets; in 2019, a variant of iOS called iPadOS, built for iPad tablets, was released).
Tablet computers
In 2015, eMarketer estimated at the beginning of the year that the tablet installed base would hit one billion for the first time (with China's use at 328 million, which Google Play doesn't serve or track, and the United States's use second at 156 million). At the end of the year, because of cheap tablets not counted by all analysts, that goal was met, even excluding cumulative sales of previous years.
This conflicts with statistics from IDC, which say the tablet market contracted by 10% in 2015, with only fifth-ranked Huawei making big gains, more than doubling its share. For the fourth quarter of 2015, the five biggest vendors were the same, except that Amazon's Fire tablets, with their Fire OS Android derivative, ranked third worldwide, new on the list, after not quite tripling their market share to 7.9%.
Web clients
The most recent data from various sources published during the last twelve months is summarized in the table below. All of these sources monitor a substantial number of web sites; statistics related to one web site only are excluded.
Android currently ranks highest, above Windows (incl. Xbox console) systems. Windows Phone accounted for 0.51% of the web usage, before it was discontinued.
Considering all personal computing devices, Microsoft Windows is well below 50% usage share on every continent: for example, it is at 31% in the US, lower still in many countries such as China, and at 19% in India. Windows' lowest share globally was 30% in July 2021, and 28% in the US.
On weekends iOS tops Windows in the US (and on some weekends Android is also more popular than Windows), and iOS alone got even with Windows for the month of November 2019, in large part due to the spike in sales for the 5 days around Thanksgiving. That season iOS had a 46% lead over Windows and, along with Android, contributed to a higher market share of mobile devices over desktops for 6 weeks. Before iOS became the most popular operating system in any independent country, it was most popular in Guam, an unincorporated territory of the United States, for four consecutive quarters in 2017-18, although Android is now the most popular there. iOS has been the highest ranked OS in Jersey (a British Crown dependency in Europe) for years, by a wide margin, and iOS was also highest ranked in Falkland Islands, a British Overseas Territory, for one quarter in 2019, before being overtaken by Android in the following quarter. iOS is competitive with Windows in Sweden, where some days it is more used.
The designation of an "Unknown" operating system is strangely high in a few countries such as Madagascar, where it stood at 32.44% (it is no longer nearly as high). This may be because StatCounter uses browser detection to derive OS statistics, and the browsers most common there are not well recognized. The version breakdown for browsers in Madagascar shows "Other" at 34.9%, with Opera Mini 4.4 the most popular known browser at 22.1% (plus, e.g., 3.34% for Opera 7.6). Browser statistics without a version breakdown, however, put Opera at 48.11%, with the "Other" category very small.
In China, Android became the highest-ranked operating system in July 2016 (Windows has occasionally topped it since then, though since April 2016 non-mobile operating systems as a whole have not outranked the mobile operating systems, i.e. Android plus iOS). In the Asian continent as a whole, Android has been ranked highest since February 2016 and alone has the majority share, because of a large majority in all the most populous countries of the continent, up to 84% in Bangladesh, where it has had over 70% share for over four years. Since August 2015, Android has ranked first in the African continent, at 48.36% in May 2016, when it took a big jump ahead of Windows 7, and thereby Africa joined Asia as a mobile-majority continent. China is no longer a desktop-majority country, joining India, which has a mobile majority of 71%, confirming Asia's significant mobile majority.
Online usage of Linux kernel derivatives (Android + Chrome OS + other Linux) exceeds that of Windows. This has been true since some time between January and April 2016, according to W3Counter and StatCounter.
However, even before that, the figure for all Unix-like OSes, including those from Apple, was higher than that for Windows.
Notes
Desktop and laptop computers
Windows is still the dominant desktop OS, but its dominance varies by region and it has gradually lost market share to other desktop operating systems (not just to mobile). The slide is very noticeable in the US, where macOS usage more than quadrupled from January 2009 to December 2020, reaching 30.62% in that Christmas month (and 34.72% in April 2020, in the middle of COVID-19; iOS was more popular overall that year, and globally Windows lost to Android that year, as in the two years prior), with Windows down to 61.14%, Chrome OS at 5.46%, and traditional Linux at 1.73%.
There is little openly published information on the device shipments of desktop and laptop computers. Gartner publishes estimates, but the way the estimates are calculated is not openly published. Another source of market share of various operating systems is StatCounter basing its estimate on web use (although this may not be very accurate). Also, sales may overstate usage. Most computers are sold with a pre-installed operating system, with some users replacing that OS with a different one due to personal preference, or installing another OS alongside it and using both. Conversely, sales underestimate usage by not counting unauthorized copies. For example, in 2009, approximately 80% of software sold in China consisted of illegitimate copies. In 2007, the statistics from an automated update of IE7 for registered Windows computers differed with the observed web browser share, leading one writer to estimate that 25–35% of all Windows XP installations were unlicensed.
The usage share of Windows 10, then Microsoft's latest operating system, slowly increased from July/August 2016, reaching around 27.15% (of all Windows versions, not of all desktop or all operating systems) in December 2016. It eventually reached 79.79% on 5 October 2021, the same day on which its successor, Windows 11, was released.
Web analysis shows significant variation in different parts of the world. For example, macOS use varies a lot by region: in North America it claims 16.82% (17.52% in the US), whereas in Asia it is only 4.4%. In the United States, usage of Windows XP has dropped to 0.38% (of all Windows versions), and its global average to 0.59%, while in Africa it is still at 2.71% and retains double-digit share in at least one country.
The 2019 Stack Overflow developer survey provides no detail about particular versions of Windows. The desktop operating system share among those identifying as professional developers was:
Windows: 45.3%
macOS: 29.2%
Linux: 25.3%
BSD/Unix: 0.1%
Microsoft data on Windows usage
In June 2016, Microsoft claimed Windows 10 had half the market share of all Windows installations in the US and UK, as quoted by BetaNews:
Desktop computer games
The digital video game distribution platform Steam publishes a monthly "Hardware & Software Survey". These figures, as reported by Steam, do not include SteamOS statistics.
Mobile devices
Smartphones
By Q1 2018, mobile operating systems on smartphones were dominated by Google's Android (and variants) and Apple's iOS, which combined held an almost 100% market share.
Smartphone penetration vs. desktop use differs substantially by country. Some countries, like Russia, still have smartphone use as low as 22.35% (as a fraction of all web use), but in most Western countries smartphone use is close to 50% of all web use. This does not mean that only half of the population has a smartphone; it could mean that almost all do, just that other platforms see about equal use. Smartphone usage share is much higher in developing countries: in Bangladesh, for example, Android smartphones have had up to 84% (currently 70%) share, and in Mali smartphones had over 90% (up to 95%) share for almost two years. (A section below has more information on regional trends in the move to smartphones.)
There is a clear correlation between the GDP per capita of a country and that country's respective smartphone OS market share, with users in the richest countries being much more likely to choose Apple's iPhone, with Google's Android being predominant elsewhere.
Note
The table shows only mobile OS market share, not overall market share. Wikimedia Foundation statistics count tablets as part of the mobile OS market share.
Tablet computers
Tablet computers, or simply tablets, became a significant OS market share category starting with Apple's iPad. In Q1 2018, iOS had 65.03% market share and Android had 34.58% market share. Windows tablets may not get classified as such by some analysts, and thus barely register; e.g. 2-in-1 PCs may get classified as "desktops", not tablets.
Since 2016, Android tablets have gained the majority in South America (and in Cuba in North America), and in Asia in 2017 Android was slightly more popular than the iPad, which had a 49.05% usage share in October 2015. In Africa, Android tablets are much more popular, while elsewhere the iPad has a safe margin.
Android has made steady gains toward becoming the most popular tablet operating system: that is the trend in many countries, and it has already gained the majority in large countries (India at 63.25%, Indonesia at 62.22%) and in the African continent, at 62.22% (Africa was the first continent to gain an Android majority, in late 2014, with steady gains from 20.98% in August 2012; Egypt at 62.37%, Zimbabwe at 62.04%), and in South America, at 51.09% in July 2015 (Peru at 52.96%). Asia is at 46%. In Nepal, Android gained the majority lead in November 2014 but lost it, dropping to 41.35% with iOS at 56.51%. In Taiwan, as of October 2016, Android, after having gained a confident majority, has been on a losing streak. China is a major exception to Android's gains in Asia (there, Android phablets are much more popular than Android tablets, and similar devices get classified as smartphones), with the iPad/iOS at 82.84% in March 2015.
Crossover to smartphones having majority share
According to StatCounter web use statistics (a proxy for all use), smartphones are more popular than desktop computers globally (and Android in particular is more popular than Windows). Counting tablets with mobiles/smartphones, as they also run so-called mobile operating systems, mobiles including tablets are more popular than the older, originally desktop-oriented operating systems (such as Windows and macOS) even in the United States (and most countries). Windows in the US (at 33.42%) has only an 8% head start (2.55 percentage points) over iOS alone; adding Android, those two mobile operating systems hold a 52.14% majority. Alternatively, Apple, with iOS plus its non-mobile macOS (9.33%), has 20% more share (6.7 percentage points more) than Microsoft's Windows in the country where both companies were built.
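The comparison above mixes percentage points and relative percentages, which is easy to misread. A minimal sketch of the arithmetic, using only the shares quoted in this paragraph (the arithmetic, not the data, is the point):

```python
# Illustrative arithmetic only; shares are the US figures quoted above.
windows = 33.42            # US Windows web-use share, %
ios = windows - 2.55       # iOS share implied by the quoted 2.55-point gap
macos = 9.33               # US macOS share, %

gap_points = windows - ios                 # absolute gap, percentage points
gap_relative = 100 * gap_points / ios      # same gap relative to iOS's share

apple_total = ios + macos                  # iOS + macOS combined
lead_points = apple_total - windows
lead_relative = 100 * lead_points / windows

print(f"Windows leads iOS by {gap_points:.2f} points ({gap_relative:.0f}% relatively)")
print(f"Apple leads Windows by {lead_points:.2f} points ({lead_relative:.0f}% relatively)")
```

Running this reproduces the figures in the text: a 2.55-point gap is about 8% of iOS's share, and Apple's combined 6.78-point lead is about 20% of Windows' share.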
Although desktop computers are still popular in many countries (overall down to 44.9% in the first quarter of 2017), smartphones are more popular even in many developed countries. A few countries on every continent are desktop-minority with Android more popular than Windows: for example Poland in Europe, about half of the countries in South America, and many in North America (e.g. Guatemala, Honduras, Haiti), up to most countries in Asia and Africa, which are smartphone-majority because of Android; Poland and Turkey are the highest in Europe, at 57.68% and 62.33% respectively. In Ireland, smartphone use at 45.55% outnumbers desktop use, and mobile as a whole gains the majority when the 9.12% tablet share is included. Spain was also slightly desktop-minority. As of July 2019, Sweden had been desktop-minority for eight weeks in a row.
Measured mobile web use varies a lot by country; a StatCounter press release recognized "India amongst world leaders in use of mobile to surf the internet" (among the big countries), where the share is around (or over) 80% and desktop is at 19.56%, with Russia trailing at 17.8% mobile use (and desktop the rest).
Smartphones (discounting tablets) first gained majority in December 2016 (desktop majority was lost the month before), and it wasn't a Christmas-time fluke: while only close to majority in the intervening period, smartphone majority happened again in March 2017.
In the week of 7–13 November 2016, smartphones alone (without tablets) overtook desktop for the first time, albeit for a short period, ranking as high as 52.13% (on 27 November 2016) and up to 49.02% for a full week. Examples of mobile-majority countries include Paraguay in South America, Poland in Europe, Turkey, and most of Asia and Africa; at least one country on every continent has turned desktop-minority (for at least a month). Some of the world is still desktop-majority, for example the United States at 54.89%, though not on all days: the US has been desktop-minority for up to four days in a row and up to a five-day average, as have the UK, Ireland, and Australia (and Oceania as a whole) on some days. In some territories of the United States, such as Puerto Rico, desktop is significantly under majority, with Windows just under 25%, overtaken by Android.
On 22 October 2016 (and subsequent weekends), mobile showed majority. Since 27 October, desktop hasn't had a majority, including on weekdays. Smartphones alone showed majority from 23 December to the end of the year, with the share topping out at 58.22% on Christmas Day. Adding tablets to the smartphone share gave a 63.22% mobile majority. While that was an unusually high top, a similar high also occurred on Monday 17 April 2017, with the smartphone share slightly lower and the tablet share slightly higher, combining to 62.88%.
Previously, according to a StatCounter press release, the world had turned desktop-minority, at about 49% desktop use for that month, though mobile alone wasn't ranked higher; tablet share had to be added to it to exceed desktop share. Desktop-minority stretched over an 18-week (4-month) period from 28 June to 31 October 2016, although July, August, and September 2016 each showed desktop-majority as whole months (many long sub-periods in the stretch showed desktop-minority, and similarly only Fridays, Saturdays, and Sundays are desktop-minority). The biggest continents, Asia and Africa, have shown a vast mobile-majority for a long time (on any day of the week), and several individual countries elsewhere have also turned mobile-majority: Poland, Albania (and Turkey) in Europe, and Paraguay and Bolivia in South America.
For the Christmas season, i.e. the last two weeks of December 2016, Australia (and Oceania in general) was desktop-minority for the first time for an extended period, i.e. on every day from 23 December (a temporary shift; outside the season, desktop-minority and smartphone-majority still occur on weekends).
In South America, smartphones alone took the majority from desktops on Christmas Day, but on a full-week average desktop was still at least at 58%.
In the UK, desktop share dropped to 44.02% on Christmas Day and remained in the minority for the eight days to the end of the year. Ireland joined some other European countries in smartphone-majority for three days after Christmas, peaking at 55.39%.
In the US, desktop-minority happened for three days on and around Christmas (while a longer four-day stretch happened in November, and happens frequently on weekends).
According to StatCounter's web use statistics, Saturday 28 May 2016 was the first day smartphones ("mobile" at StatCounter, which now counts tablets separately) became the most used platform, ranking first at 47.27%, above desktop. The next day, desktops slightly outnumbered "mobile" (unless tablets are counted with smartphones: some analysts count tablets with smartphones, others separately, and others with desktops, even though most tablets are iPad or Android devices, not Windows).
Since Sunday 27 March 2016, the first day the world dipped to desktop-minority, it has happened almost every week, and by the week of 11–17 July 2016 the world was desktop-minority for the full week, as it was the next week, and then for a three-week period. The trend is still stronger on weekends; e.g. 17 July 2016 showed desktop at 44.67% and "mobile" at 49.5%, plus tablets at 5.7%. Recent weekly data shows a downward trend for desktops.
According to StatCounter web use statistics (a proxy for overall use), on weekends desktops worldwide lose about five percentage points, e.g. down to 51.46% on 15 August 2015, with the loss in (relative) web use going to mobile (and a minuscule increase for tablets), mostly because Windows 7, ranked first on workdays, declines in web use, shifting to Android and, to a lesser degree, iOS.
Two continents have already crossed over to mobile-majority (because of Android), based on StatCounter's web use statistics. In June 2015, Asia became the first continent where mobile overtook desktop (followed by Africa in August; Nigeria had a mobile majority as early as October 2011, first because of Symbian, which at one point had a 51% share, then Series 40, and later Android as the dominant operating system). As far back as October 2014, StatCounter had reported this trend on a large scale in a press release: "Mobile usage has already overtaken desktop in several countries including India, South Africa and Saudi Arabia". In India, desktop went from a majority in July 2012 down to 32%. In Bangladesh, desktop went from a majority in May 2013 down to 17%, with Android alone now accounting for the majority of web use. Only a few African countries were still desktop-majority, and many have a large mobile majority, including Ethiopia and Kenya, where mobile usage is over 72%.
The popularity of mobile use worldwide has been driven by the huge rise of Android in Asian countries, where Android is the highest-ranked operating system in virtually every South-East Asian country, and it also ranks most popular in almost every African country. Poland has been desktop-minority since April 2015 because Android is vastly more popular there, and other European countries, such as Albania (and Turkey), have also crossed over. The South American continent is somewhat further from losing desktop-majority, but Paraguay had already lost it. Android and mobile browsing in general have also become hugely popular in the other continents, where desktop has a large installed base and the trend to mobile is not as clear as a fraction of total web use.
While some analysts count tablets with desktops (as some of them run Windows), others count them with mobile phones (as the vast majority of tablets run so-called mobile operating systems, such as Android or iOS on the iPad). The iPad has a clear lead globally, but has clearly lost the majority to Android in South America and in a number of Eastern European countries such as Poland; it has lost virtually all African countries, and it twice lost the majority in Asia but gained it back (while many individual countries, e.g. India and most of the Middle East, have a clear Android majority on tablets). Android on tablets is thus the second most popular after the iPad.
In March 2015, for the first time in the US, the number of mobile-only adult internet users exceeded the number of desktop-only internet users, with 11.6% of the digital population using only mobile compared to 10.6% using only desktop; this also means the majority, 78%, use both desktop and mobile to access the internet. A few smaller countries in North America, such as Haiti (because of Android), have gone mobile-majority (mobile went up to 72.35%, and was at 64.43% in February 2016).
Revenue
The region with the largest Android usage also has the largest mobile revenue.
Public servers on the Internet
Internet-based servers' market share can be measured with statistical surveys of publicly accessible servers, such as web servers, mail servers or DNS servers, on the Internet: the operating systems powering such servers are found by inspecting raw response messages. This method gives insight only into the market share of operating systems that are publicly accessible on the Internet.
Results will differ depending on how the sample is drawn and how observations are weighted. The surveys are usually not based on a random sample of all IP addresses, domain names, hosts or organisations, but on servers found by some other method. Additionally, many domains and IP addresses may be served by one host, and some domains may be served by several hosts or by one host with several IP addresses.
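As a concrete illustration of the technique, the sketch below fetches one raw HTTP response and reads its Server header, the kind of signal such surveys aggregate at scale. It is illustrative only: example.com is a placeholder host, and real surveys sample many servers and weight the observations.

```python
# Fetch one raw HTTP response and read the Server header, a crude hint of
# the host's software stack and, indirectly, its operating system. The
# header is optional and easily spoofed, one reason different surveys
# disagree.
from urllib.request import urlopen

with urlopen("http://example.com") as resp:   # example.com is a placeholder
    server = resp.headers.get("Server", "unknown")

print(f"Server header: {server}")
# e.g. "Apache/2.4.52 (Ubuntu)" hints at Linux; "Microsoft-IIS/10.0" at Windows
```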
Note: Revenue comparisons often include "operating system software, other bundled software" and are not appropriate for usage comparison, as the Linux operating system costs nothing (including "other bundled software"), except when optionally using commercial distributions such as Red Hat Enterprise Linux (in that case, the cost of all software bundled with the hardware has to be known for all operating systems involved, and subtracted). In cases where no-cost Linux is used, such comparisons underestimate Linux server popularity and overestimate proprietary operating systems such as Unix and Windows.
Mainframes
Mainframes are larger and more powerful than servers, but not supercomputers. They are used to process large sets of data, for example enterprise resource planning or credit card transactions.
The most common operating system for mainframes is IBM's z/OS. Operating systems for IBM Z generation hardware include IBM's proprietary z/OS, Linux on IBM Z, z/TPF, z/VSE and z/VM.
Gartner reported on 23 December 2008 that Linux on System z was used on approximately 28% of the "customer z base", and expected this to increase to over 50% in the following five years. For Linux on IBM Z, Red Hat and Micro Focus compete to sell RHEL and SLES respectively:
Prior to 2006, Novell claimed a market share of 85% or more for SUSE Linux Enterprise Server.
Red Hat has since claimed 18.4% in 2007 and 37% in 2008.
Gartner reported at the end of 2008 that Novell's SUSE Linux Enterprise Server had an 80% share of mainframe Linux.
Decline
Echoing today's shift from personal computers to mobile devices, in 1984 estimated sales of desktop computers ($11.6 billion) exceeded those of mainframe computers ($11.4 billion) for the first time. IBM received the vast majority of mainframe revenue.
From 1991 to 1996, AT&T Corporation briefly owned NCR, one of the major original mainframe producers. During the same period, companies found that servers based on microcomputer designs could be deployed at a fraction of the acquisition price and offer local users much greater control over their own systems given the IT policies and practices at that time. Terminals used for interacting with mainframe systems were gradually replaced by personal computers. Consequently, demand plummeted and new mainframe installations were restricted mainly to financial services and government. In the early 1990s, there was a rough consensus among industry analysts that the mainframe was a dying market as mainframe platforms were increasingly replaced by personal computer networks.
In 2012, NASA powered down its last mainframe, an IBM System z9. However, the z9's successor, the z10, had led a New York Times reporter to state four years earlier that "mainframe technology—hardware, software and services—remains a large and lucrative business for IBM, and mainframes are still the back-office engines behind the world's financial markets and much of global commerce". While mainframe technology represented less than 3% of IBM's revenues, it "continue[d] to play an outsized role in Big Blue's results".
Supercomputers
The TOP500 project lists and ranks the 500 fastest supercomputers for which benchmark results are submitted. Since the early 1990s, the field of supercomputers has been dominated by Unix or Unix-like operating systems, and since 2017 every one of the top 500 fastest supercomputers has used Linux as its operating system.
The last supercomputer to rank #1 while using an operating system other than Linux was ASCI White, which ran AIX. It held the title from November 2000 to November 2001, and was decommissioned in 2006. Then in June 2017, two AIX computers held rank 493 and 494, the last non-Linux systems before they dropped off the list.
Historically, all kinds of Unix operating systems dominated; in the end, only Linux remains.
Market share by category
See also
Comparison of operating systems
List of operating systems
Timeline of operating systems
Usage share of web browsers
Mobile OS market share
Notes
References
Usage share of operating systems
Operating systems
|
44432875
|
https://en.wikipedia.org/wiki/Lords%20of%20Xulima
|
Lords of Xulima
|
Lords of Xulima is a 2014 role-playing video game developed by Numantian Games and released for Microsoft Windows, Mac OS X, and Linux. It is the first title by Numantian Games, an indie game development studio based in Madrid, Spain. After a successful Kickstarter campaign, Lords of Xulima was first released on Steam for Windows, followed by Mac and Linux versions.
Gameplay
Lords of Xulima is a turn-based role-playing video game with a first-person combat view. Movement and exploration take place in a 2D isometric view.
Parties are made up of six characters, with nine classes to choose from.
The turn-based combat in Lords of Xulima is not timed, so players can take their time deciding their next move. Enemies on the map are finite, and players receive a bonus for completely clearing an area of enemies. Some enemies are static, while the rest are initiated as random encounters.
Development
Lords of Xulima was developed by Numantian Games, an independent game studio, using a custom-made engine.
On August 8, 2014, Lords of Xulima entered Steam Early Access. Four months later, it officially released on Steam on November 14, 2014.
Reception
Lords of Xulima has received "mixed or average" reviews, holding a 71/100 score on Metacritic based on 9 critic reviews.
Sequel
Lords of Xulima II is in the early development stages with no projected release date.
References
External links
Crowdfunded video games
Dungeon crawler video games
Fantasy video games
Indie video games
Kickstarter-funded video games
Linux games
MacOS games
Role-playing video games
Single-player video games
Steam Greenlight games
Video games developed in Spain
Video games featuring protagonists of selectable gender
Windows games
|
39282484
|
https://en.wikipedia.org/wiki/Nike%20Zeus
|
Nike Zeus
|
Nike Zeus was an anti-ballistic missile (ABM) system developed by the US Army during the late 1950s and early 1960s that was designed to destroy incoming Soviet intercontinental ballistic missile warheads before they could hit their targets. It was designed by Bell Labs' Nike team, and was initially based on the earlier Nike Hercules anti-aircraft missile. The original, Zeus A, was designed to intercept warheads in the upper atmosphere, mounting a 25 kiloton W31 nuclear warhead. During development, the concept changed to protect a much larger area and intercept the warheads at higher altitudes. This required the missile to be greatly enlarged into the totally new design, Zeus B, given the tri-service identifier XLIM-49, mounting a 400 kiloton W50 warhead. In several successful tests, the B model proved itself able to intercept warheads, and even satellites.
The nature of the strategic threat changed dramatically during the period that Zeus was being developed. Originally, the system was expected to face only a few dozen ICBMs, making a nationwide defense feasible, although expensive. In 1957, growing fears of a Soviet sneak attack led to it being repositioned as a way to protect Strategic Air Command's bomber bases, ensuring a retaliatory strike force would survive. But when the Soviets claimed to be building hundreds of missiles, the US faced the problem of building enough Zeus missiles to match them. The Air Force argued that the US should close this missile gap by building more ICBMs of its own instead. Adding to the debate, a number of technical problems emerged that suggested Zeus would have little capability against any sort of sophisticated attack.
The system was the topic of intense inter-service rivalry throughout its lifetime. When the ABM role was given to the Army in 1958, the United States Air Force began a long series of critiques on Zeus, both within defense circles and in the press. The Army returned these attacks in kind, taking out full page advertisements in popular mass market news magazines to promote Zeus, as well as spreading development contracts across many states in order to garner the maximum political support. As deployment neared in the early 1960s, the debate became a major political issue. The question ultimately became whether or not a system with limited effectiveness would be better than nothing at all.
The decision whether to proceed with Zeus eventually fell to President John F. Kennedy, who became fascinated by the debate about the system. In 1963, the United States Secretary of Defense, Robert McNamara, convinced Kennedy to cancel Zeus. McNamara directed its funding towards studies of new ABM concepts being considered by ARPA, selecting the Nike-X concept which addressed Zeus' various problems by using an extremely high-speed missile, Sprint, along with greatly improved radars and computer systems. The Zeus test site built at Kwajalein was briefly used as an anti-satellite weapon.
History
Early ABM studies
The first known serious study of attacking ballistic missiles with interceptor missiles was carried out by the Army Air Forces in 1946, when two contracts were let as Project Wizard and Project Thumper to consider the problem of shooting down missiles of the V-2 type. These projects identified the main problem as one of detection: the target could approach from anywhere within hundreds of miles and reach its target in only five minutes. Existing radar systems would have difficulty seeing the missile launch at those ranges, and even assuming one had detected the missile, existing command and control arrangements would have serious problems forwarding that information to the battery in time for it to attack. The task appeared impossible at that time.
These results also suggested that the system might be able to work against longer-ranged missiles. Although these traveled at very high speeds, their higher altitude trajectories made detection simpler, and the longer flight times provided more time to prepare. Both projects were allowed to continue as research efforts. They were transferred to the US Air Force when that force separated from the Army in 1947. The Air Force faced significant budget constraints and canceled Thumper in 1949 in order to use its funds to continue its GAPA surface-to-air missile (SAM) efforts. The next year, Wizard's funding was also rolled into GAPA to develop a new long-range SAM design, which would emerge a decade later as the CIM-10 Bomarc. ABM research at the Air Force practically, although not officially, ended.
Nike II
By the early 1950s the Army was firmly established in the surface-to-air missile field with their Nike and Nike B missile projects. These projects had been led by Bell Labs, working with Douglas.
The Army contacted the Johns Hopkins University Operations Research Office (ORO) to consider the task of shooting down ballistic missiles using a Nike-like system. The ORO study took three years to complete, and the resulting report, The Defense of the United States Against Aircraft and Missiles, was comprehensive. While this study was still progressing, in February 1955 the Army began initial talks with Bell, and in March they contracted Bell's Nike team to begin a detailed 18-month study of the problem under the name Nike II.
The first section of the Bell study was returned to the Army Ordnance department at the Redstone Arsenal on 2 December 1955. It considered the full range of threats including existing jet aircraft, future ramjet powered aircraft flying at up to , short-range ballistic missiles of the V-2 type flying at about the same speed, and an ICBM reentry vehicle (RV) traveling at . They suggested that a missile with a common rocket booster could serve all of these roles by changing between two upper stages; one with fins for use in the atmosphere against aircraft, and another with vestigial fins and thrust vectoring for use above the atmosphere against missiles.
Considering the ICBM problem, the study went on to suggest that the system would have to be effective between 95 and 100% of the time in order to be worthwhile. They considered attacks against the RV while the missile was in the midcourse, just as it reached the highest point in its trajectory and was traveling at its slowest speed. Practical limitations eliminated this possibility, as it required the ABM to be launched at about the same time as the ICBM in order to meet in the middle, and they could not imagine a way to arrange this. Working at much shorter ranges, during the terminal phase, seemed the only possible solution.
Bell returned a further study, delivered on 4 January 1956, that demonstrated the need to intercept the incoming warheads at altitude, and suggested that this was within the abilities of an upgraded version of the Nike B missile. Given a terminal speed of up to 5 miles per second, combined with the time it would take an interceptor missile to climb to the RV's altitude, the system required that the RV be initially detected at about range. Due to the RV's relatively small size and limited radar signature, this would demand extremely powerful radars.
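The detection-range requirement follows from simple arithmetic: the radar must see the RV at least as far out as the distance it covers while a track is formed and the interceptor climbs. A back-of-the-envelope sketch, where the 5 mi/s figure is from the study but the 60-second engagement time is an assumed, illustrative value:

```python
# 5 mi/s is the RV terminal speed from the text; the 60 s engagement time
# (track formation + launch decision + interceptor climb) is assumed.
rv_speed_mi_s = 5
engage_time_s = 60

detection_range_mi = rv_speed_mi_s * engage_time_s
print(f"RV must be detected at least {detection_range_mi} miles out")  # 300
```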
To ensure the destruction of the RV, or at least render the warhead within it unusable, the W31 would have to be fired when it was within a few hundred feet of the RV. Given the angular resolution of existing radars, this limited the maximum effective range significantly. Bell considered an active radar seeker, which improved accuracy as it flew toward the RV, but these proved too large to be practical. A command guidance system like the early Nike systems seemed to be the only solution.
The interceptor would lose maneuverability as it climbed out of the atmosphere and its aerodynamic surfaces became less effective, so it would have to be directed onto the target as rapidly as possible, leaving only minor fine-tuning later in the engagement. This required that accurate tracks be developed for both the warhead and outgoing missile very quickly in comparison to a system like Nike B where the guidance could be updated throughout the engagement. This, in turn, demanded new computers and tracking radars with much higher processing rates than the systems used on earlier Nikes. Bell suggested that the recently introduced transistor offered the solution to the data processing problem.
After running 50,000 simulated intercepts on analog computers, Bell returned a final report on the concept in October 1956, indicating that the system was within the state of the art. A 13 November 1956 memo gave new names to the entire Nike series; the original Nike became Nike Ajax, Nike B became Nike Hercules, and Nike II became Nike Zeus.
Army vs. Air Force
The Army and Air Force had been involved in interservice fighting over missile systems since they split in 1947. The Army considered surface-to-surface missiles (SSM) an extension of conventional artillery, and surface-to-air designs as the modern replacement for their anti-aircraft artillery. The Air Force considered the nuclear SSM to be an extension of their strategic bombing role, and any sort of long-range anti-aircraft system to be their domain as it would integrate with their fighter fleet. Both forces were developing missiles for both roles, leading to considerable duplication of effort which was widely seen as wasteful.
By the mid-1950s some of these projects were simply tit-for-tat efforts. When the Army's Hercules began deployment, the Air Force complained that it was inferior to their Bomarc and that the Army was "unfit to guard the nation". When the Army started its Jupiter missile efforts, the Air Force worried it would trump their Atlas ICBM effort and responded by starting its own IRBM, Thor. And when the Army announced Nike II, the Air Force reactivated Wizard, this time as a long-range anti-ICBM system of much greater performance than Zeus.
In a 26 November 1956 memorandum, US Secretary of Defense Charles Erwin Wilson attempted to end the fighting between the forces and prevent duplication of effort. His solution was to limit the Army to weapons with range, and those involved in surface-to-air defense to only . The memo also placed limits on Army air operations, severely limiting the weight of the aircraft it was allowed to operate. To some degree, this simply formalized what had largely already been the case in practice, but Jupiter fell outside the range limits and the Army was forced to hand them to the Air Force.
The result was another round of fighting between the two forces. Jupiter had been designed to be a highly accurate weapon able to attack Soviet military bases in Europe, as compared to Thor, which was intended to attack Soviet cities and had accuracy on the order of several miles. Losing Jupiter, the Army was eliminated from any offensive strategic role. In return, the Air Force complained that Zeus was too long-ranged and the ABM effort should center on Wizard. But the Jupiter handover meant that Zeus was now the only strategic program being carried out by the Army, and its cancellation would mean "virtually the surrender of the defense of America to the U.S.A.F at some future date."
Gaither Report, missile gap
In May 1957, Eisenhower tasked the President's Science Advisory Committee (PSAC) with a report on the potential effectiveness of fallout shelters and other means of protecting the US population in the event of a nuclear war. Chaired by Horace Rowan Gaither, the PSAC team completed its study in September, publishing it officially on 7 November as Deterrence & Survival in the Nuclear Age, known today as the Gaither Report. After ascribing an expansionist policy to the USSR, and claiming the Soviets were developing their military more heavily than the US, the Report suggested that a significant gap in capability would open in the late 1950s due to spending levels.
While the report was being prepared, in August 1957 the Soviets launched their R-7 Semyorka (SS-6) ICBM, and followed this up with the successful launch of Sputnik 1 in October. Over the next few months, a series of intelligence reviews resulted in ever increasing estimates of the Soviet missile force. National Intelligence Estimate (NIE) 11-10-57, issued in December 1957, stated that the Soviets would have perhaps 10 prototype missiles in service by mid-1958. But after Nikita Khrushchev claimed to be producing them "like sausages", the numbers began to rapidly inflate. NIE 11-5-58, released in August 1958, suggested there would be 100 ICBMs in service by 1960, and 500 by 1961 or 1962 at the latest.
With the NIE reports suggesting the existence of the gap Gaither predicted, near panic broke out in military circles. In response, the US began to rush its own ICBM efforts, centered on the SM-65 Atlas. These missiles would be less susceptible to attack by Soviet ICBMs than their existing bomber fleet, especially in future versions which would be launched from underground silos. But even as Atlas was rushed, it appeared there would be a missile gap; NIE estimates made during the late 1950s suggested the Soviets would have significantly more ICBMs than the US between 1959 and 1963, at which point US production would finally catch up.
With even a few hundred missiles, the Soviets could afford to target every US bomber base. With no warning system in place, a sneak attack could destroy a significant amount of the US bomber fleet on the ground. The US would still have the airborne alert force and its own small ICBM fleet, but the USSR would have its entire bomber fleet and any missiles they did not launch, leaving them with a massive strategic advantage. To ensure this could not happen, the Report called for the installation of active defenses at SAC bases, Hercules in the short term and an ABM for the 1959 period, along with new early warning radars for ballistic missiles to allow alert aircraft to get away before the missiles hit. Even Zeus would come too late to cover this period, and some consideration was given to an adapted Hercules or a land based version of the Navy's RIM-8 Talos as an interim ABM.
Zeus B
Douglas Aircraft had been selected to build the missiles for Zeus, known under the company designation DM-15. This was essentially a scaled-up Hercules with an improved, more powerful single piece booster replacing Hercules' cluster of four smaller boosters. Intercepts could take place at the limits of the Wilson requirements, at ranges and altitudes of about . Prototype launches were planned for 1959. For more rapid service entry there had been some consideration given to an interim system based on the original Hercules missile, but these efforts were dropped. Likewise, early requirements for a secondary anti-aircraft role were also eventually dropped.
Wilson signaled his intention to retire in early 1957, and Eisenhower began looking for a replacement. During his exit interview, only four days after Sputnik, Wilson told Eisenhower that "trouble is rising between the Army and the Air Force over the 'anti-missile-missile'." The new Secretary of Defense, Neil McElroy, took office on 9 October 1957. McElroy was previously president of Procter & Gamble and was best known for the invention of the concept of brand management and product differentiation. He had little federal experience, and the launch of Sputnik left him little time to ease into the position.
Shortly after taking office, McElroy formed a panel to investigate ABM issues. The panel examined the Army and Air Force projects, and found the Zeus program considerably more advanced than Wizard. McElroy told the Air Force to stop work on ABM missiles and use Wizard funding for the development of long-range radars for early warning and raid identification. These were already under development as the BMEWS network. The Army was handed the job of actually shooting down the warheads, and McElroy gave them free hand to develop an ABM system as they saw fit, free of any range limitations.
The team designed a much larger missile with a greatly enlarged upper fuselage and three stages, more than doubling the launch weight. This version extended range, with interceptions taking place as far as downrange and over in altitude. An even larger booster took the missile to hypersonic speeds while still in the lower atmosphere, so the missile fuselage had to be covered over completely with a phenolic ablative heat shield to protect the airframe from melting. Another change was to combine the aerodynamic controls used for control in the lower atmosphere with the thrust vectoring engines, using a single set of movable jet vanes for both roles.
The new DM-15B Nike Zeus B (the earlier model retroactively becoming the A) received a go-ahead for development on 16 January 1958, the same date the Air Force was officially told to stop all work on a Wizard missile. On 22 January 1958, the National Security Council gave Zeus S-Priority, the highest national priority. Additional funds were requested to the Zeus program to ensure an initial service date in the fourth quarter of 1962, but these were denied, delaying service entry until some time in 1963.
Exchange ratio and other problems
With their change of fortunes after McElroy's 1958 decision, Army General James M. Gavin publicly stated that Zeus would soon replace strategic bombers as the nation's main deterrent. In response to this turn of events, the Air Force stepped up its policy-by-press-release efforts against the Army, as well as agitating behind the scenes within the Defense Department.
As part of their Wizard research, the Air Force had developed a formula that compared the cost of an ICBM to the ABM needed to shoot it down. The formula, later known as the cost-exchange ratio, could be expressed as a dollar figure; if the cost of the ICBM was less than that figure, the economic advantage was in favor of the offense – they could build more ICBMs for less money than the ABMs needed to shoot them down. A variety of scenarios demonstrated that it was almost always the case that the offense had the advantage. The Air Force ignored this inconvenient problem while they were still working on Wizard, but as soon as the Army was handed sole control of the ABM efforts, they immediately submitted it to McElroy. McElroy identified this as an example of interservice fighting, but was concerned that the formula might be correct.
For an answer, McElroy turned to the Re-entry Body Identification Group (RBIG), a sub-group of the Gaither Committee led by William E. Bradley, Jr. that had been studying the issue of penetrating a Soviet ABM system. The RBIG had delivered an extensive report on the topic on 2 April 1958 which suggested that defeating a Soviet ABM system would not be difficult. Their primary suggestion was to arm US missiles with more than one warhead, a concept known as Multiple Re-entry Vehicles (MRV). Each warhead would also be modified with radiation hardening, ensuring only a near miss could damage it. This would mean that the Soviets would have to launch at least one interceptor for each US warhead, while the US could launch multiple warheads without building a single new missile. If the Soviets added more interceptors to counter the increased number of US warheads, the US could counter this with a smaller number of new missiles of their own. The cost balance was always in favor of the offense. This basic concept would remain the primary argument against ABMs for the next two decades.
Turning this argument around, the RBIG delivered a report to McElroy agreeing with the Air Force's original claims about the cost-ineffectiveness of ABMs. But they then considered the Zeus system itself, and noted that its use of mechanically steered radars, with one radar per missile, meant that Zeus could only launch a small number of missiles at once. If the Soviets also deployed MRV, even a single ICBM would cause several warheads to arrive at the same time, and Zeus would simply not have time to shoot at them all. They calculated that only four warheads arriving within one minute would result in one of them hitting the Zeus base 90% of the time. Thus one or two Soviet missiles could destroy all 100 Zeus missiles at a base. The RBIG noted that an ABM system "demands such a high rate of fire from an active defense system, in order to intercept the numerous reentry bodies which arrive nearly simultaneously, that the expense of the required equipment may be prohibitive". They went on to question the "ultimate impossibility" of an ABM system.
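The RBIG's 90% figure can be back-calculated under a simple model, assuming each of four simultaneous warheads is engaged independently and the base survives only if all four are intercepted. Both the independence assumption and the solved-for probability are illustrative, not from the report:

```python
# Assumes each warhead is intercepted independently with probability p and
# the base is hit if any warhead leaks through: 1 - p**4 = 0.9.
n_warheads = 4
p_base_hit = 0.9

p_intercept = (1 - p_base_hit) ** (1 / n_warheads)   # solve p**4 = 0.1
print(f"Implied single-shot intercept probability: {p_intercept:.2f}")  # ~0.56
```

Under these assumptions, even a respectable 56% single-shot intercept probability leaves the base with only a 10% chance of surviving a four-warhead salvo.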
Project Defender
McElroy responded to the RBIG report in two ways. First, he turned to the newly created ARPA group to examine it. ARPA, directed by Chief Scientist Herbert York, returned another report broadly agreeing with everything the RBIG had said. Considering both the need to penetrate a Soviet ABM and a potential US ABM system, York noted that:
When this report was received, McElroy then charged ARPA to begin studying long-term solutions to the ICBM defense, looking for systems that would avoid the apparently insurmountable problem presented by the exchange ratio.
ARPA responded by forming Project Defender, initially considering a wide variety of far-out concepts like particle beam weapons, lasers and huge fleets of space-borne interceptor missiles, the latter known as Project BAMBI. In May 1958, York also began working with Lincoln Labs, MIT's radar research lab, to begin researching ways to distinguish warheads from decoys by radar or other means. This project emerged as the Pacific Range Electromagnetic Signature Studies, or Project PRESS.
More problems
In the midst of the growing debate over Zeus' abilities, the US conducted its first high yield, high altitude tests – Hardtack Teak on 1 August 1958, and Hardtack Orange on 12 August. These demonstrated a number of previously unknown or underestimated effects, notably that nuclear fireballs grew to very large size and caused all of the air in or immediately below the fireball to become opaque to radar signals, an effect that became known as nuclear blackout. This was extremely worrying for any system like Zeus, which would not be able to track warheads in or behind such a fireball, including those of the Zeus' own warheads.
If this were not enough, there was a growing awareness that simple radar reflectors could be launched along with the warhead and would be indistinguishable to Zeus' radars. This problem was first alluded to in 1958, in public talks that mentioned Zeus' inability to discriminate targets. If the decoys spread apart further than the lethal radius of the Zeus' warhead, several interceptors would be required to guarantee that the warhead hiding among the decoys was destroyed. Decoys are lightweight, and would slow down when they began to reenter the upper atmosphere, allowing the warhead to be picked out, or decluttered. But by that time it would be so close to the Zeus base that there might not be time for the Zeus to climb to altitude.
In 1959, the Defense Department ordered one more study of the basic Zeus system, this time by the PSAC. They put together a heavyweight group with some of the most famous and influential scientists forming its core, including Hans Bethe, who had worked on the Manhattan Project and later on the hydrogen bomb; Wolfgang Panofsky, the director of the High-Energy Physics Lab at Stanford University; and Harold Brown, director of the Lawrence Livermore weapons lab, among similar luminaries. The PSAC report was almost a repeat of the RBIG's. It recommended that Zeus not be built, at least without significant changes allowing it to better deal with the emerging problems.
Throughout, Zeus was the focus of fierce controversy in both the press and military circles. Even as testing started, it was unclear if development would continue. President Eisenhower's defense secretaries, McElroy (1957–59) and Thomas S. Gates, Jr. (1959–61), were unconvinced that the system was worth the cost. Eisenhower was highly skeptical, questioning whether an effective ABM system could be developed in the 1960s. Another harsh critic on cost grounds was Edward Teller, who simply stated that the exchange ratio meant the solution was to build more ICBMs.
Kennedy and Zeus
John F. Kennedy campaigned on the platform that Eisenhower was weak on defense and was not doing enough to solve the looming missile gap. After his win in the 1960 election he was flooded with calls and letters urging that Zeus be continued. This was a concentrated effort on the part of the Army, which was fighting back against similar Air Force tactics. The Army also deliberately spread the Zeus contracts over 37 states in order to gain as much political and industrial support as possible, while taking out advertisements promoting the system in major mass-market magazines like Life and The Saturday Evening Post.
Kennedy appointed Army General Maxwell D. Taylor as his Chairman of the Joint Chiefs of Staff. Taylor, like most Army brass, was a major supporter of the Zeus program. Kennedy and Taylor initially agreed to build a huge Zeus deployment with seventy batteries and 7,000 missiles. Robert McNamara was also initially in favor of the system, but suggested a much smaller deployment of twelve batteries with 1,200 missiles. A contrary note was put forth by Jerome Wiesner, recently appointed as Kennedy's scientific advisor, and chair of the 1959 PSAC report. He began to educate Kennedy on the technical problems inherent to the system. He also had lengthy discussions with David Bell, the budget director, who came to realize the enormous cost of any sort of reasonable Zeus system.
Kennedy was fascinated by the Zeus debate, especially the way that scientists were lined up on diametrically opposed positions for or against the system. He commented to Wiesner, "I don’t understand. Scientists are supposed to be rational people. How can there be such differences on a technical issue?" His fascination grew and he eventually compiled a mass of material on Zeus which took up one corner of a room where he spent hundreds of hours becoming an expert on the topic. In one meeting with Edward Teller, Kennedy demonstrated that he knew more about the Zeus and ABMs than Teller. Teller then expended considerable effort to bring himself up to the same level of knowledge. Wiesner would later note that the pressure to make a decision built up until "Kennedy came to feel that the only thing anybody in the country was concerned about was Nike-Zeus."
To add to the debate, it was becoming clear that the missile gap was fictional. The first Corona spy satellite mission, in August 1960, put limits on the Soviet program that appeared to be well below the lower bound of any of the estimates, and a follow-up mission in late 1961 clearly demonstrated that the US had a massive strategic lead. A new intelligence report published in 1961 stated that the Soviets had no more than 25 ICBMs and would not be able to add more for some time. It was later demonstrated that the actual number of ICBMs in the Soviet fleet at that time was four.
Nevertheless, Zeus continued slowly moving towards deployment. On 22 September 1961, McNamara approved funding for continued development, and approved initial deployment of a Zeus system protecting twelve selected metropolitan areas. These included Washington/Baltimore, New York, Los Angeles, Chicago, Philadelphia, Detroit, Ottawa/Montreal, Boston, San Francisco, Pittsburgh, St. Louis, and Toronto/Buffalo. However, the deployment was later overturned, and in January 1962 only the development funds were released.
Nike-X
In 1961, McNamara agreed to continue development funding through FY62, but declined to provide funds for production. He summed up both the positives and the concerns this way:
Looking for a near-term solution, McNamara once again turned to ARPA, asking it to consider the Zeus system in depth. The agency returned a new report in April 1962 containing four basic concepts. First was the Zeus system in its current form, outlining what sort of role it might play in various war-fighting scenarios. Zeus could, for instance, be used to protect SAC bases, thereby requiring the Soviets to expend more of their ICBMs to attack those bases, which would presumably mean less damage to other targets. Another concept considered the addition of new passive electronically scanned array radars and computers to Zeus, which would allow it to attack dozens of targets at once over a wider area. In its final concept, ARPA replaced Zeus with a new very high-speed, short-range missile designed to intercept the warhead at altitudes as low as , by which time any decoys or fireballs would be long gone. This last concept became Nike-X, an ad hoc name suggested by Jack Ruina while describing the ARPA report to PSAC.
Perfect or nothing
As work on Nike-X began, high-ranking military and civilian officials began to press for Zeus deployment as an interim system in spite of the known problems. They argued the system could be upgraded in-place as the new technologies became available. McNamara was opposed to early deployment, while Congressman Daniel J. Flood would be a prime force for immediate deployment.
McNamara's argument against deployment rested on two primary issues. One was the apparent ineffectiveness of the system, and especially its benefit-cost ratio compared to other options. For instance, fallout shelters would save more Americans for far less money, and in an excellent demonstration of his approach to almost any defense issue, he noted:
The second issue, ironically, came about due to concerns about a Soviet ABM system. The US's existing SM-65 Atlas and SM-68 Titan both used re-entry vehicles with blunt noses that greatly slowed the warheads as they entered the lower atmosphere and made them relatively easy to attack. The new LGM-30 Minuteman missile used sharp-nosed reentry shapes that traveled at much higher terminal speeds, and included a number of decoy systems that were expected to make interception very difficult for the Soviet ABMs. This would guarantee the US's deterrent. If there was a budget choice to be made, McNamara supported Minuteman, although he tried not to say this.
In one particularly telling exchange between McNamara and Flood, McNamara initially refused to choose one option over the other:
But later, Flood managed to get a more accurate statement out of him:
Cancellation and the ABM gap
By 1963, McNamara had convinced Kennedy that Zeus was simply not worth deploying. The earlier concerns about cost and effectiveness, as well as new difficulties in terms of attack size and decoy problems, led McNamara to cancel the Zeus project on 5 January 1963. In its place, they decided to continue work on Nike-X. Nike-X development was based in the existing Nike Zeus Project Office until its name was changed to Nike-X on 1 February 1964.
While reporting to the Senate Armed Services Committee in February, McNamara noted that they expected the Soviets to have an initial ABM system deployed in 1966, and then later stated that the Nike-X would not be ready for use until 1970. Noting a "defensive gap", Strom Thurmond began an effort to deploy the existing Zeus as an interim system. Once again the matter spilled over into the press.
On 11 April 1963, Thurmond led Congress in an effort to fund deployment of Zeus. In the first closed session of the Senate in twenty years, Zeus was debated and the decision was made to continue with the planned development of Nike-X with no Zeus deployment. The Army continued the testing program until December 1964 at White Sands Missile Range, and May 1966 at Kwajalein Missile Range.
Testing
As the debate over Zeus raged, the Nike team was making rapid progress developing the actual system. Test firings of the original A models of the missile began in 1959 at White Sands Missile Range. The first attempt on 26 August 1959 was of a live booster stage and dummy sustainer, but the booster broke up shortly before booster/sustainer separation. A similar test on 14 October was a success, followed by the first two-stage attempt on 16 December. The first complete test of both stages with active guidance and thrust vectoring was successfully carried out on 3 February 1960. Data collected from these tests led to changes to the design to improve speed during the ascent. The first test of the Zeus B took place in May 1961. A number of Zeus missiles broke up during early test flights due to excessive heating of the control surfaces, and numerous changes were worked into the system to address this.
Additional tracking tests were carried out by Target Tracking Radars (TTRs) at Bell's Whippany, NJ labs and an installation on Ascension Island. The latter was first used in an attempt to track an SM-68 Titan on 29 March 1961, but the data download from Cape Canaveral simulating Zeus Acquisition Radar (ZAR) information failed. A second test on 28 May was successful. Later in the year the Ascension site tracked a series of four test launches, two Atlas and two Titan, generating tracking information for as long as 100 seconds. A ZAR at White Sands reached initial operation in June 1961, and was tested against balloons, aircraft, parachutes deployed from sounding rockets, and Hercules missiles. A TTR was completed at White Sands in November, and testing of the complete system of ZAR, TTR and Missile Tracking Radar (MTR) ("all-up" tests) began that month. On 14 December a Zeus passed within of a Nike Hercules being used as a test target, a success that was repeated in March 1962. On 5 June 1963, President Kennedy and Vice President Lyndon Johnson visited White Sands to view missile launches, including a Zeus launch.
The need to test Zeus against targets flying realistic ICBM profiles presented a problem. While White Sands was fine for testing the basic missile and guidance systems, it was too small to test Zeus at its maximum range. Such testing began at Point Mugu in California, where the Zeus missiles could fly out over the Pacific. Consideration was given to using Point Mugu to launch against ICBMs flying from Cape Canaveral, but range safety requirements placed limits on the potential tests. Likewise, the Atlantic Test Range, to the northeast of Canaveral, had a high population density and little land available for building accurate downrange tracking stations, Ascension being the only suitable location.
Eventually Kwajalein Island was selected, as it was 4,800 miles from California, perfect for ICBMs, and already had a US Navy base with considerable housing stocks and an airstrip. The Zeus site, known as the Kwajalein Test Site, was officially established on 1 October 1960. As it grew in size, it eventually led to the entire island complex being handed over to the Army from the Navy on 1 July 1964. The site took up a considerable amount of the empty land to the north side of the airfield. The launchers were located on the far southwestern corner of the island, with the Target Tracking Radars, Missile Tracking Radars (MTRs) and various control sites and generators running along the northern side of the airfield. The ZAR transmitter and receiver were some distance away, off the northeastern edge of the airfield.
A minor Army-Air Force fight then broke out about what targets would be used for the Kwajalein tests. The Army favored using its Jupiter design, fired from Johnston Atoll in the Pacific, while the Air Force recommended using Atlas fired from Vandenberg AFB in California. The Army had already begun converting the former Thor launchers to Jupiter when an Ad Hoc Panel formed by the Department of Defense considered the issue. On 26 May 1960 they decided in favor of Atlas, and this was made official on 29 June when the Secretary of Defense ended pad conversion and additional Jupiter production earmarked for Zeus testing.
A key development of the testing program was a miss-distance indicator system, which independently measured the distance between the Zeus and the target at the instant the computers initiated detonation of the warhead. There were concerns that if the Zeus' own radars were used for this measurement, any systematic error in ranging would also be present in the test data, and would thus be hidden. The solution was a separate UHF-frequency transmitter in the warhead reentry vehicle and a receiver in the Zeus. The received signal was retransmitted to the ground, where its Doppler shift was examined to extract the range information. These instruments eventually demonstrated that the Zeus' own tracking information was accurate. For visual tracking, a small conventional warhead was used, providing a flash that could be seen on long-exposure photographs of the interceptions.
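How a Doppler record yields a miss distance can be reconstructed from the flyby geometry: for a straight-line pass at constant relative speed V with miss distance d, the radial velocity at time t after closest approach is v_r(t) = V^2 t / sqrt(d^2 + (V t)^2), and the one-way Doppler shift is f_d = f0 * v_r / c. The sketch below inverts one sample of that relation; all numbers are invented, as the actual Zeus instrumentation parameters were not published in this form:

```python
import math

# All parameters are invented for illustration.
C = 3.0e8        # speed of light, m/s
F0 = 400e6       # assumed UHF carrier frequency, Hz
V = 3000.0       # assumed relative speed of Zeus and RV, m/s
D_TRUE = 150.0   # true miss distance, m (the quantity to recover)

def doppler_shift(t):
    """One-way Doppler shift at time t after closest approach."""
    v_r = V**2 * t / math.sqrt(D_TRUE**2 + (V * t)**2)  # radial velocity
    return F0 * v_r / C

# Take one Doppler sample 0.05 s after the zero crossing (closest approach),
# convert it back to a radial velocity, and invert v_r(t) for d.
t = 0.05
v_r = C * doppler_shift(t) / F0
d = math.sqrt((V**2 * t / v_r)**2 - (V * t)**2)
print(f"recovered miss distance: {d:.1f} m")  # ~150.0
```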
On 24 January 1962, the Zeus Acquisition Radar at Kwajalein achieved its first returns from an ICBM target, and on 18 April was used to track Kosmos 2. On 19 April it reacquired Kosmos 2 and successfully transferred the track to one of the TTRs. On 26 June the first all-up test against an Atlas target was attempted. The ZAR began successfully tracking the target at and properly handed off to a TTR. The TTR switched tracks from the missile fuselage to the warhead at . When the fuselage began to break up, the computer switched to clutter mode, which watched the TTR data for any deviation from the originally calculated trajectory, which would indicate that it had begun tracking debris. It also continued to predict the location of the warhead, and if the system decided it was tracking debris, it would wait for the debris and warhead to separate enough to begin tracking them again. However, the system failed to properly record when the warhead was lost, and tracking was never regained.
A second test on 19 July was a partial success, with the Zeus passing within of the target. The control system ran out of hydraulic fluid during the last 10 seconds of the approach, causing the large miss distance, but the test was otherwise successful. The guidance program was updated to stop the rapid control cycling that led to the fluid running out. A third attempt on 12 December successfully brought the missile to very close distances, but the second missile of the planned two-missile salvo failed to launch due to an instrument problem. A similar test on 22 December also suffered a failure in the second missile, but the first passed only from its target.
Of the tests carried out over the two-year test cycle, ten were successful in bringing the Zeus within its lethal range.
Anti-satellite use
In April 1962, McNamara asked the Nike team to consider using the Zeus site on Kwajalein as an operational anti-satellite base after the main Zeus testing was complete. The Nike team responded that a system could be readied for testing by May 1963. The concept was given the name Project Mudflap.
Development was a straightforward conversion of the DM-15B into the DM-15S. The changes were mainly concerned with providing more upper-stage maneuverability through the use of a new two-stage hydraulic pump, batteries providing 5 minutes of power instead of 2, and an improved fuel in the booster to provide higher peak altitudes. A test of the new booster with a DM-15B upper stage was carried out at White Sands on 17 December 1962, reaching an altitude of , the highest of any launch from White Sands to that point. A second test with a complete DM-15S on 15 February 1963 reached .
Testing then moved to Kwajalein. The first test on 21 March 1963 failed when the MTR failed to lock onto the missile. A second on 19 April also failed when the missile's tracking beacon failed 30 seconds before intercept. The third test, this time using an actual target consisting of an Agena-D upper stage equipped with a Zeus miss-distance transmitter, was carried out on 24 May 1963, and was a complete success. From that point until 1964, one DM-15S was kept in a state of instant readiness and teams continually trained on the missile.
After 1964 the Kwajalein site was no longer required to be on alert, and returned primarily to Zeus testing. The system was kept active in a non-alert role between 1964 and 1967, known as Program 505. In 1967 it was replaced by a Thor-based system, Program 437. A total of 12 launches, including those at White Sands, were carried out as part of the 505 program between 1962 and 1966.
Description
Nike Zeus was originally intended to be a straightforward development of the earlier Hercules system, giving it the ability to hit ICBM warheads at about the same range and altitude as the maximum performance of the Hercules. In theory, hitting a warhead is no more difficult than hitting an aircraft; the interceptor does not have to travel any further or faster, the computers that guide it simply have to select an intercept point farther in front of the target to compensate for the target's much higher speed. In practice, the difficulty is detecting the target early enough that the intercept point is still within range of the missile. This demands much larger and more powerful radar systems, and faster computers.
Early detection
When Zeus was still in the early stages of design, Bell Labs suggested using two similar radars to provide extended range tracking and improve reaction times. Located at the Zeus bases would be the Local Acquisition Radar (LAR), a UHF monopulse radar able to track between 50 and 100 targets. The Forward Acquisition Radar (FAR) would be positioned ahead of the Zeus bases to provide 200 to 300 seconds of early warning and tracking data on up to 200 targets. The FAR would broadcast 10 MW pulses at UHF between 405 and 495 MHz, allowing it to detect a 1 square metre radar reflection at or a more typical 0.1 m2 target at . Each track would be stored as a 200 bit record including location, velocity, time of measure and a measure of the quality of the data. Clouds of objects would be tracked as a single object with additional data indicating the width and length of the cloud. Tracks could be updated every five seconds while the target was in view, but the antenna rotated at a relatively slow 4 RPM so targets moved significantly between rotations. Each FAR could feed data to up to three Zeus sites.
By the time the Zeus plans were being finalized in 1957, plans for FAR were deemphasized, and LAR had been upgraded to become the Zeus Acquisition Radar (ZAR) which provided wide area early warning and initial tracking information. This enormously powerful radar was driven by multiple 1.8 MW klystrons and broadcast through three wide antennas arranged as the outside edges of a rotating equilateral triangle. The ZAR spun at 10 RPM, but with three antennas it simulated a single antenna rotating three times as fast. Each target was scanned every two seconds, providing much more data than the earlier FAR/LAR concept.
The signal was received on a separate set of three antennas, situated at the centre of an diameter Luneburg lens, which rotated synchronously with the broadcaster under a diameter dome. Multiple feed horns were used in the receiver to allow reception from many vertical angles at once. Around the receiver dome was a large field of wire mesh, forming a flat ground plane reflector. The ZAR operated in the UHF on various frequencies between 495 and 605 MHz, giving it frequency agility. ZAR had detection range on the order of on a 0.1 m2 target.
The entire transmitter was surrounded by a high clutter fence located away from the antenna, which reflected the signal away from local objects on the ground that would otherwise create false returns. The ZAR was so powerful that the microwave energy at close range was far beyond the mandated safety limits and potentially lethal within . In order to allow for maintenance while the radar was operating, the equipment areas were shielded in a partial Faraday cage of metal foil, and a metal tunnel was run from the outside of the clutter fence, which blocked the signal outside the fence line. The other radars completing the system featured similar protection.
Battery layout
Data from the ZARs were passed to the appropriate Zeus Firing Battery to attack, with each ZAR being able to send its data to up to ten batteries. Each battery was self-contained after handoff, including all of the radars, computers and missiles needed to perform an intercept. In a typical deployment, a single Zeus Defense Center would be connected to three to six batteries, spread out by as much as .
Targets picked out by the ZAR were then illuminated by the Zeus Discrimination Radar (ZDR, also known as Decoy Discrimination Radar, DDR or DR). ZDR imaged the entire cloud using a chirped signal that allowed the receiver to accurately determine range within the cloud by passing each frequency in the chirp to a separate range gate. The range resolution was 0.25 microseconds, about . As the signal was spread out over the entire cloud, it had to be very powerful; the ZDR produced 40 MW 2 µs pulses in the L-band between 1270 and 1400 MHz. To ensure no signal was lost by scanning areas that were empty, the ZDR used a Cassegrain reflector that could be moved to focus the beam as the cloud approached to keep the area under observation constant.
Data from the ZDR was passed to the All-Target Processor (ATP), which ran initial processing on as many as 625 objects in a cloud. As many as 50 of these could be picked out for further processing in the Discrimination and Control Computer (DCC), which ran more tests on those tracks and assigned each one a probability of being the warhead or a decoy. The DCC was able to run 100 different tests. For exoatmospheric targets the tests included measuring the radar return pulse-to-pulse to look for tumbling objects, as well as variations in signal strength due to changes in frequency. Within the atmosphere, the primary method was examining the velocities of the objects to determine their mass.
Any target with a high probability was then passed to the Battery Control Data Processor (BCDP), which selected missiles and radars for an attack. This started with the assignment of a Target Tracking Radar (TTR) to a target passed to it from the DCC. TTRs operated in the C band from 5250 to 5750 MHz at 10 MW, allowing tracking of a 0.1 m2 target at , a range they expected to be able to double with a new maser-based receiver design. Once targets were being successfully tracked and a firing order was received, the BCDP selected available Zeus missiles for launch and assigned a Missile Tracking Radar (MTR) to follow them. These were much smaller radars operating in the X-band between 8500 and 9600 MHz and assisted by a transponder on the missile, using only 300 kW to provide missile tracking to . The wide variety of available frequencies allowed up to 450 MTRs to be operating in a single Defense Center. Information from the ZDR, TTRs and MTRs was all fed to the Target Intercept Computer (TIC), which handled the interceptions. This used twistor memory for ROM and core memory for RAM. Guidance commands were sent to the missiles in-flight via modulation of the MTR signal.
The nominal battery consisted of a single DR, three TTRs, two TICs driving six MTRs, and 24 missiles. This basic battery layout could attack three warheads at once, normally using two missiles per salvo in case one failed in flight. More typically, two targets would be attacked while the third system stood by as a hot backup that could take over in-flight. A maximally expanded battery included three DRs, ten TTRs, six TICs driving eighteen MTRs and 72 missiles. Sites requiring higher traffic handling would not build larger systems, but instead deploy additional batteries fed from the same ZAR and Defense Center.
It was expected that the ZAR would take 20 seconds to develop a track and hand off a target to one of the TTRs, and 25 seconds for the missile to reach the target. With these sorts of salvo rates, a fully expanded Zeus installation was expected to be able to successfully attack 14 "bare" warheads per minute. Its salvo rate against warheads with decoys is not recorded, but would depend on the ZDR's processing rate more than any physical limit. The actual engagement would normally take place at about due to accuracy limitations; beyond that, missiles could not be guided accurately enough to bring them within their lethal range against a shielded warhead.
Zeus missiles
The original Zeus A was similar to the original Hercules, but featured a revised control layout and gas puffers for maneuvering at high altitudes where the atmosphere was too thin for the aerodynamic surfaces to be effective. The Zeus B interceptor was longer at , wide, and in diameter. This was so much larger than the earlier Hercules that no attempt was made to have them fit into the existing Hercules/Ajax launchers. Instead, the B models were launched from silos, thus the change of numbering from MIM (mobile surface launched) to LIM (silo launched). Since the missile was designed to intercept its targets in space, it did not need the large maneuvering fins of the A model. Rather, it featured a third rocket stage with small control jets to allow it to maneuver in space. Zeus B had a maximum range of and altitude of .
Zeus A was designed to attack warheads through shock effects, like the Hercules, and was to be armed with a relatively small nuclear warhead. As the range and altitude requirements grew, along with a better understanding of weapons effects at high altitude, the Zeus B was intended to attack its targets through the action of neutron heating. This relied on the interceptor's warhead releasing a huge number of high-energy neutrons (similar to the neutron bomb), some of which would hit the enemy warhead. These would cause fission to occur in some of the warhead's own nuclear fuel, rapidly heating the "primary", ideally enough to cause it to melt. For this to work, the Zeus mounted the W50, a 400 kt enhanced radiation warhead, and had to maneuver within 1 km of the target warhead. Against shielded targets, the warhead would be effective to as little as .
Specifications
There are at least five Zeus models mentioned in various sources, A, B, C, S and X2, the last of which became Spartan. None of the sources explicitly list the differences of all of these in a single table. Different sources appear to confuse measures between the Zeus A, B and Spartan. The A and Spartan figures are taken from US Strategic and Defensive Missile Systems 1950–2004, B from the Bell Labs history.
See also
Project Wizard was the US Air Force's on-again, off-again ABM system that was ultimately replaced by Nike Zeus.
Violet Friend was a Royal Air Force project similar to Zeus in many ways.
The A-35 anti-ballistic missile system was a Soviet system roughly equivalent to the Nike Zeus.
The A-135 anti-ballistic missile system replaced the A-35, and is roughly equivalent to Nike-X.
Explanatory notes
References
Citations
General bibliography
External links
AT&T Archives: Nike Zeus Missile System, made early in the program
The Range Goes Green, movie of a Zeus test launch at White Sands
AT&T Archives: A 20-year History of Antiballistic Missile Systems, has several shots showing Atlas RVs and their resulting debris cloud, giving an indication of the magnitude of the discrimination problem.
Anti-ballistic missiles of the United States
Anti-satellite missiles
Cold War surface-to-air missiles of the United States
Missile defense
Project Nike
|
53862244
|
https://en.wikipedia.org/wiki/IOS%2011
|
IOS 11
|
iOS 11 is the eleventh major release of the iOS mobile operating system developed by Apple Inc., the successor to iOS 10. It was announced at the company's Worldwide Developers Conference on June 5, 2017, and released on September 19, 2017. It was succeeded by iOS 12 on September 17, 2018.
Changes
Among iOS 11's changes: the lock screen and Notification Center were combined, allowing all notifications to be displayed directly on the lock screen. The various pages of the Control Center were unified, gaining custom settings and the ability to 3D Touch icons for more options. The App Store received a visual overhaul to focus on editorial content and daily highlights. A "Files" file manager app allowed direct access to files stored locally and in cloud services. Siri was updated to translate between languages and use a privacy-minded "on-device learning" technique to better understand a user's interests and offer suggestions. The camera had new settings for improved portrait-mode photos and used new encoding technologies to reduce file sizes on newer devices. In a later release, Messages was integrated with iCloud to better synchronize messages across iOS and macOS devices. A previous point release also added support for person-to-person Apple Pay payments. The operating system also introduced the ability to record the screen, limited forms of drag-and-drop functionality, and support for augmented reality. Certain new features appeared only on iPad, including an always-accessible application dock, cross-app drag-and-drop, and a new user interface to show multiple apps at once.
History
Introduction and initial release
iOS 11 was introduced at the Apple Worldwide Developers Conference keynote address on June 5, 2017. The first developer beta version was released after the keynote presentation, with the first public beta released on June 26, 2017.
iOS 11 was officially released by Apple on September 19, 2017.
Updates
System features
Lock screen
The lock screen and Notification Center are combined, allowing users to see all notifications directly on the lock screen. Scrolling up and down will either show or hide notifications.
Control Center
The Control Center redesign unifies its pages and allows users to 3D Touch (or long press on devices without 3D Touch) buttons for more options. Sliders adjust volume and brightness. The Control Center is customizable via the Settings app, and allows more settings to be shown, including cellular service, Low Power Mode, and a shortcut to the Notes app.
Siri
The Siri intelligent personal assistant has a more human voice and supports language translation, with English, Chinese, French, German, Italian and Spanish available at launch. It also supports follow-up questions, and users can type queries to Siri instead of speaking.
Siri can also use "on-device learning", a privacy-minded local learning technique, to understand a user's behavior and interests inside different apps and offer better suggestions and recommendations.
Settings
A new "Do Not Disturb While Driving" mode lets users block unnecessary notifications as long as their iPhone is connected to a vehicle through Bluetooth. An auto-reply feature sends a specific reply to senders of messages to let them know the user is currently unavailable through text. Passengers can be granted full notification access to the phone.
A new "Smart Invert" feature, dubbed a "dark mode" by some publications, inverts the colors on the display, except for images, some apps, and some user interface elements. Using the iPhone X, which utilizes OLED technology, some news outlets have reported that this feature can conserve battery life by turning off pixels when black, saving energy by preventing itself from displaying a white pixel.
Users get expanded control over apps' location usage, with every app featuring a "While Using the App" location toggle in Settings. This differs from previous iOS versions, in which apps were only required to have "Never" or "Always" location options.
Users can remove rarely used apps without losing the app's data using the "Offload App" button. This allows for a later reinstallation of the app (if available on the App Store), in which data returns and usage can continue. Users can also have those apps removed automatically with the "Offload Unused Apps" setting. When an app is offloaded, the app appears on the home screen as a grayed-out icon.
Personalized suggestions will help the user free up storage space on their device, including emptying Photos trash, backing up messages, and enabling iCloud Photo Library for backing up photos and videos.
iPad
Certain new iOS 11 features will appear only on iPad. The application dock gets an overhaul, bringing it closer to the design seen on macOS, and is accessible from any screen, letting users more easily open apps in split-screen view. Users can also drag-and-drop files across different apps. A new multitasking interface shows multiple apps on the screen at the same time in floating "windows." Additionally, through a combination of "slide over," "split view," and "picture-in-picture" modes, users can have up to four active apps on-screen at the same time.
Each letter on the iPad keyboard features an alternative background number or symbol, accessible by pulling down on the respective key and releasing.
The Control Center is visible in the multitasking window on iPads.
With iOS 11, the 9.7-inch, 10.5-inch and second-generation 12.9-inch iPad Pro models gain flashlight support.
Camera
iOS 11 introduces optical image stabilization, flash photography and high dynamic range for portrait photos.
Live Photos receives new "Loop", "Bounce" and "Long Exposure" effects, and uses High Efficiency Image File Format to decrease photo sizes.
On devices with an Apple A10 chip or newer, photos can be compressed in the new High Efficiency Image File Format and videos can be encoded in the new High Efficiency Video Coding video compression format, enabling improved quality while also decreasing size by half.
Wallpapers
Apple significantly changed the wallpapers available for use with iOS 11. In the initial beta version, released after Apple's developer conference, Apple included one new wallpaper and removed all six "Live" animated fish wallpapers introduced with the iPhone 6S in 2015. The iOS 11.2 release later brought iPhone X/8/8 Plus-exclusive wallpapers to older iPhones.
iPhone X exclusively features six "Live" wallpapers and seven new "Dynamic" wallpapers.
Other changes
iOS 11 introduces native support for QR code scanning through the Camera app. Once a QR code is positioned in front of the camera, a notification is created offering suggestions for actions based on the scanned content. Users have discovered that supported actions include joining Wi-Fi networks and adding someone to the contacts list.
Third-party keyboards can add a one-handed mode.
Users are able to record the screen natively. In order to record the screen, users must first add the feature to the Control Center through the Settings app. Once added, users can start and stop recordings from a dedicated Control Center icon, with a distinctly colored bar appearing at the top of the screen indicating active recording. Pressing the bar gives the option to end recording, and videos are saved to the Photos app.
When an iOS 11 device is attempting to connect to a Wi-Fi network, nearby iOS 11 or macOS High Sierra devices already connected can wirelessly send the password, streamlining the connection process.
The volume change overlay no longer covers the screen while playing video, and a smaller scrubber appears on the top right of the screen.
After a user takes a screenshot, a thumbnail of the screenshot will appear at the bottom left of the screen. The user can then tap the thumbnail to bring up an interface that allows them to crop, annotate, or delete the screenshot.
Third-party apps are also able to take advantage of iCloud Keychain to allow autofilling passwords.
The user's airline flight information can be viewed in Spotlight through a dedicated widget.
iOS 11 switches the top-left cellular network strength icons from five dots to four signal bars, similar to the design used before iOS 7.
A new automatic setup feature called "Quick Start" aims to simplify the first-time setup of new devices, with wireless transfer between the old and new device, transferring preferences, Apple ID and Wi-Fi info, preferred Settings, and iCloud Keychain passwords.
Similar to iPad, drag-and-drop file support is available on iPhone, though with more limitations; specifically, it is only supported within apps, not between them.
Many of Apple's pre-installed applications, including Notes, Contacts, Reminders, Maps, and App Store, have redesigned home screen icons.
An "Emergency SOS" feature was added that disables Touch ID after pressing the Sleep/Wake button five times in quick succession. It prevents Touch ID from working until the iPhone's passcode has been entered.
iOS 11 adds support for 8-bit and 10-bit HEVC. Devices with an Apple A9 chip or newer support hardware decoding, while older devices support software-based decoding.
When a device running iOS 11 or later is activated, Apple's verification server will check the device's UDID before it can be set up. If the device's UDID is malformed or not present in Apple's database, the device cannot be activated and will be denied access to the verification server. If the device is then connected to iTunes, an error message will appear stating that the iPhone could not be activated because "the activation information could not be obtained from the device."
App features
Mail
Where there is empty space in the Mail app, users can draw inline.
Messages
The Messages application synchronizes messages across iOS and macOS through iCloud, reflecting message deletion across devices. This feature was temporarily removed in the fifth beta release and returned on May 29, 2018, when iOS 11.4 was released.
At the time of the iOS 11 announcement in June 2017, Apple presented functionality letting users send person-to-person payments with Apple Pay through Messages. By the time of the iOS 11 release in September 2017, the feature was not present, having been removed in an earlier beta version, with Apple announcing the feature as "coming this fall with an update to iOS 11". It was launched a few days after the iOS 11.2 update went live, although initially only available in the United States.
A new app drawer for iMessage apps aims to simplify the experience of using apps and stickers, and an optimized storage system reduces the backup size of messages.
The Messages app also incorporates a "Business Chat" feature for businesses to communicate directly with customers through the app. This can be accessed through a message icon next to search results of businesses. However, this feature was not included with the initial release of iOS 11 (instead launching with iOS 11.3).
The Messages app on the iPhone X introduces face-tracking emoji called "Animoji" (animated emoji), using Face ID.
App Store
The App Store receives a complete redesign, with a greater focus on editorial content such as daily highlights, and a design described as "cleaner and more consistent" with other apps developed by Apple. The app's design mimics that of the Apple Music app in iOS 10.
Maps
At select locations, Apple Maps will offer indoor maps for shopping malls and airports.
New lane guidance and speed limit features aim to guide drivers on unfamiliar roads.
Photos
The Photos app in iOS 11 gains support for viewing animated GIFs. Users can access GIF images inside an album titled "Animated".
Memories can be viewed while the phone is in portrait orientation.
Podcasts
The Podcasts app receives a redesign similar to the App Store, with a focus on editorial content.
Notes
The Notes app has a built-in document scanner using the device's camera, and the feature removes artifacts such as glare and perspective.
An "Instant Notes" feature on the iPad Pro allows the user to start writing a note from the lock screen by putting the Apple Pencil onto the screen.
The app also allows users to input inline tables.
Where there is open space in the Notes app, the user can draw inline.
Files
A new "Files" app lets users browse the files stored on their device, as well as those stored across various cloud services, including iCloud Drive, Dropbox, OneDrive, and Google Drive. The app supports organization through structured sub-folders and various file-based options, and it also includes a built-in player for FLAC audio files. The Files app is available on both iPad and iPhone.
Safari
The user's flight information can be found in the Safari app.
Calculator
The Calculator app receives a redesign, with rounded buttons replacing the grid-style buttons introduced in iOS 7.
Developer APIs
A new "ARKit" application programming interface (API) lets third-party developers build augmented reality apps, taking advantage of a device's camera, CPU, GPU, and motion sensors. The ARKit functionality is only available to users of devices with Apple A9 and later processors. According to Apple, this is because "these processors deliver breakthrough performance that enables fast scene understanding and lets you build detailed and compelling virtual content on top of real-world scenes."
A new "Core ML" software framework will speed up app tasks involving artificial intelligence, such as image recognition.
A new "Depth" API allows third-party camera app developers to take advantage of the iPhone 7 Plus, iPhone 8 Plus, and iPhone X's dual-camera "Portrait mode". This will let apps implement the same depth-sensing technology available in the default iOS Camera app, to simulate a shallow depth-of-field.
A new "Core NFC" framework gives developers limited access to the near field communication (NFC) chip inside supported iPhones, opening potential use cases in which apps can scan nearby environments and give users more information.
Removed functionality
Apps must be compiled for 64-bit architecture in order to be supported on iOS 11. 32-bit apps are not supported or shown in the App Store in iOS 11, and users who attempt to open such apps receive an alert about the app's incompatibility.
iOS 11 drops the native system integration with Twitter, Facebook, Flickr, and Vimeo.
The iCloud Drive app is removed and replaced by the Files app.
The ability to trigger multitasking using 3D Touch was removed from the original iOS 11 release. In response to a bug report, an Apple engineer wrote: "Please know that this feature was intentionally removed". Apple's software engineering chief Craig Federighi wrote in reply to an email that the company had to "temporarily drop support" due to a "technical constraint," pledging to bring it back in a future update to iOS 11. It was brought back in iOS 11.1.
In iOS 11.2, the Wi-Fi and Bluetooth toggles in the Control Center were changed so that they no longer fully disable the radios; instead, the connections are only suspended until the next day. Fully turning off Wi-Fi or Bluetooth requires the Settings app.
Reception
iOS 11 received mixed reviews. Critics praised the application dock and new multitasking interface on the iPad, crediting them for renewing the user experience. Further praise was directed at the redesigned Control Center offering customizable toggles; criticism was widely focused on its lack of third-party app support, lack of Wi-Fi network selection ability and for difficult usage on small screen sizes, along with its instability. Critics also noted the new augmented reality development tools, but said their impact would depend on third-party apps and how fast developers embraced them. Praise was also directed at the App Store's redesign and the new file-management tools. Shortly after release, it was discovered that disabling Wi-Fi and Bluetooth connections through the Control Center does not disable the respective chips in the device in order to remain functional for background connectivity, a design decision sparking criticism for "misleading" users and reducing security due to potential vulnerabilities in inactive open connections. The iOS 11.2 update added warning messages and a new toggle color to explain the new functions. iOS 11 has also received continuous criticism from critics and end-users for its perceived stability and performance issues, particularly on older devices; Apple has issued numerous software updates to address such issues and has dedicated iOS 12 mainly toward stability and performance improvements in response.
Two months after release, 52% of iOS devices were running iOS 11, a slower adoption rate than previous iOS versions. The number increased to 85% of devices by September 2018.
Dieter Bohn of The Verge liked the new Control Center setup, including customizable toggles and 3D Touch-expandable options, writing that "there are a few panels that I'm really impressed with", specifically highlighting the Apple TV remote as a possible replacement of the normal remote. He did, however, note the lack of third-party access to Control Center, with a hope for support in the future, and a lack of Wi-Fi network selection ability. He praised the screen-recording functionality, calling it "super neat". Bohn severely criticized the notifications view, writing that he has a "very serious disagreement" with Apple on how to manage it, elaborating that he prefers to use that screen as much as possible while stating that "Apple's philosophy is that I'm trying way too hard" to control speedy notifications. Bohn liked the new Files app, new drag-and-drop functionality on the home screen enabling users to drag multiple apps at once, and significantly praised multitasking on iPad. Writing that "Multitasking on the iPad is a near-revelatory experience", he enjoyed the application dock and the ability to place up to three apps on the screen at once with more freedom on placement. Bohn conceded that "It's not as intuitive nor as simple as easy to manipulate as a traditional windowing system like you'll get on a Mac, PC, or Chromebook", but still praised it for being "radically more powerful than what has ever been available on an iPad before". Finally, Bohn praised Siri for improvements to the voice, highlighted augmented reality allowing for "incredible games", and reiterated an earlier sentiment that iOS 11 is "the most ambitious software update from Apple in a very, very long time".
Macworld's Jason Snell wrote that the hype surrounding iOS 11 is "justified". Snell praised the new "smoother" transfer mode of data and settings between an old iPhone and a new iPhone, referring to the previous experience of doing it manually as "a frustrating exercise in entering in passwords repeatedly while tapping through a long series of questions about activating or deactivating numerous iOS features." He also praised the Control Center design, calling it "a great upgrade", though also highlighting the inability to easily switch Wi-Fi networks. Snell noted that the App Store's design had been unchanged for years, but received a full redesign in iOS 11, and wrote that Apple's commitment to editorial pages was "impressive", making the App Store "a richer, more fun experience." Regarding the introduction of augmented reality, he stated that most apps using it were "bad", though some also "mind-blowingly good," adding that the "huge potential" depended on how third-party apps were using it. Snell also praised improvements to the iPad experience, including multitasking and drag-and-drop across apps, the latter of which he stated "actually surpasses my expectations" due to ease of use. His review summarization states that iOS 11 is "Apple’s most ambitious and impressive upgrade in years."
Romain Dillet of TechCrunch focused mostly on the iPad in his review, writing that iOS 11 "turns your iPad into a completely different machine", with "much more efficient" multitasking and improved ease of access with the application dock. He also praised the design overhaul of the App Store, calling it "a huge improvement compared to the previous App Store", and also highlighted design changes in other apps, including "a huge bold header with the name of the app or section". Although he acknowledged that "Many tech friends have told me that they hate this change," Dillet stated that "I think most people will like it. It’s visually pleasing and distinctive." He stated that augmented reality will become more relevant in the days following the iOS 11 release as third-party developers incorporate features into their apps, and praised Apple for creating the ARKit development tools as it "makes it much easier to implement augmented reality features". In conclusion, Dillet wrote that "Ten years ago, iOS started as a constrained operating system. It is now one of the biggest digital playgrounds".
Devindra Hardawar of Engadget stated that the focus of iOS 11 was "all about transforming iOS into something more desktop-like", with many enhancements for iPad while "leaving the iPhone a bit behind." He had mixed feelings about the Control Center, writing that, on small phone screen sizes, it "feels like a jumbled mess," and adding that true comfort may only be present with larger screens, a troubling situation for owners of non-Plus devices. However, he praised the ability to customize the buttons, including removing those the user never uses, and the ability to quickly record the screen or enable accessibility features. He called the new app designs "attractive", and favorably pointed out the new app drawer at the bottom of conversations in the Messages app, referring to it as "a big improvement over the messy interface of last year." He praised Siri for an improved voice, the Photos app for creating better Memories, and new social features in Apple Music, though noting the lack of people in his social circle using the service. Referencing IKEA's "IKEA Place" app, which uses augmented reality to virtually place objects in a room, he significantly praised the performance of the augmented reality technology on iPhone, writing that "It did a great job of rendering furniture in physical spaces using both the iPhone 8, and, even more impressively, it ran smoothly on my iPhone 6S". Finally, Hardawar also enjoyed new functionality on iPad, calling multitasking, the application dock and drag-and-drop "dramatic changes," and highlighting the "particularly useful" experience of dragging Internet content directly from the web into the new Files app. In summarization, he recognized the significant strides made for iPad with iOS 11, writing that "it's a shame that iOS 11 doesn't bring more to the table on the iPhone", though acknowledging the rise of augmented reality.
In November 2017, Apple's App Store support page was updated to reflect that 52% of iOS devices were running iOS 11, a slower migration rate than for the release of iOS 10 the year prior, which saw 60% user adoption by October 2016. The number increased to 59% of devices by December 2017.
Design inconsistencies and software bugs
In September 2017, Jesus Diaz of Fast Company criticized design details in iOS 11 and Apple's built-in apps not adhering to Apple's user interface guidelines. Headers not being aligned properly between different apps, elements not being centered, and different colors and sizing caused Diaz to write that "When it comes to software, Apple’s attention to detail is crumbling away." However, he also looked back in history, mentioning that Apple Music's original design, a lack of optical typography alignment in the Calendar app, and previously-fixed iOS design mistakes being ported to the macOS software had established that "This inconsistency and lack of attention to detail are not new at Apple." He firmly stated: "Perhaps this is inevitable, given the monumental task of having to update the operating system every year. But for a company that claims to have an obsessive attention to detail, this is not acceptable."
In November 2017, Gizmodo's Adam Clark Estes wrote extensively on software bugs and product imperfections experienced while using iOS 11. Estes pointed to issues such as keyboard covering up messages and a disappearing reply field in the Messages app, the letter "i" converting to a Unicode symbol, and the screen becoming unresponsive, writing that "The new operating system has turned my phone into a bug-infested carcass of its former self, and the frustration of trying to use it sometimes makes me want to die, too." He also wrote on the aspect of technology becoming more advanced and sophisticated, explaining that "back when the iPhone 4 came out [...] smartphones were a lot simpler. The cameras were joyfully crappy. The screens were small. The number of apps we could download and things we could connect to was paltry compared to today. [...] We should expect some bugs, I guess. More complex pieces of technology contain more points of failure, and I’m oversimplifying the issue." He concluded by theorizing on technological development, writing: "However, I am trying to understand exactly how my life with computers veered so dramatically from the days of Windows 95 when nothing worked right, to the golden age of the iPhone 4 when everything seemed perfect, to now when just a handful of iOS bugs make me feel like the world is falling apart. [...] Maybe I’m the annoying thing, the whiny one who’s upset that nothing seems perfect any more. Or maybe, just maybe, Apple is slipping, and we were wrong to trust it all along."
Problems
Wi-Fi and Bluetooth Control Center toggles
Shortly after iOS 11 was released, Vice's Motherboard discovered new behaviors by the Wi-Fi and Bluetooth toggles in the Control Center. When users tap to turn off the features, iOS 11 only disconnects the chips from active connections, but does not disable the respective chips in the device. The report further states that "it's a feature, not a bug", referencing documentation pages by Apple confirming the new toggle behaviors as a means to disconnect from connections but remain active for AirDrop transfers, AirPlay streaming, Apple Pencil input, handoff and other features. Security researcher Andrea Barisani told Motherboard that the new user interface was "not obvious at all", making the user experience "more uncomfortable". In October 2017, the Electronic Frontier Foundation published an article, calling the interface "misleading" and "bad for user security", due to a higher risk of security vulnerabilities with Wi-Fi and Bluetooth chips activated while not in active use. The Foundation recommended that Apple fix the "loophole in connectivity", writing that "It's simply a question of communicating better to users, and giving them control and clarity when they want their settings off - not "off-ish"".
iOS 11.2 changes this behavior slightly by turning the toggles white and showing a warning message that explains the functions of the toggles in the Control Center when they are turned off.
Battery drain issues
Some users have experienced battery drain problems after updating to iOS 11. In a poll on its website, 70% of 9to5Mac visitors reported decreased battery life after updating to the new operating system. However, in an article featuring Twitter complaints of battery life, Daily Express wrote that "honestly, this is to be expected. It happens every year, and it's completely normal. Major iOS releases will hammer the battery on your device much faster during the first few days of use", with Forbes stating in an article that "The days after you install a new version of iOS, your iDevice is busy doing all sorts of housekeeping. Practically all of your apps have updates, so iOS is busy downloading and installing them in the background. [...] Additionally, after you install a new version of iOS, it has to do something called "re-indexing." During this process, iOS 11 will comb through all of the data on your device so that it can be cataloged for quick Spotlight searching." The article further states that "The good news is that both of these things are temporary".
Within a week of the launch of the 11.3.1 update, users began reporting continued battery-drain issues. Some of these reports indicated drains from 57% down to 3% in just 3 minutes. Even users whose battery health measured 96% noticed their iPhones draining at around 1% per minute. In addition to battery drains, some iPhone users noticed their devices building up excessive heat.
Technology experts recommended that users not upgrade until the release of a version subsequent to 11.3.1 unless they were specifically affected by the 'third party display issue'.
Calculator bug
In October 2017, users reported on Reddit that quickly typing an equation into the built-in iOS calculator app gives incorrect answers, most notably making the query "1+2+3" result in "24" rather than "6". Analysts blamed an animation lag introduced in the redesign of the app in iOS 11. The problem could be worked around by typing the numbers slowly, or by downloading alternative calculator apps from the App Store that did not have this problem. With a large number of bug reports filed, Apple employee Chris Espinosa indicated on Twitter that the company was aware of the issue. iOS 11.2 fixed the issue.
Keyboard autocorrect bugs
In November 2017, users reported a bug in the default iOS keyboard, in which pressing "I" resulted in the system rendering the text as "!" or "A" along with an incomprehensible symbol featuring a question mark in a box. The symbol is known as Variation Selector 16 for its intended purpose of merging two characters into an emoji. Apple acknowledged the issue in a support document, advising users to set up a Text Replacement feature in their device's keyboard settings as a temporary workaround. The company confirmed to The Wall Street Journal that devices using older iOS 11 versions, as opposed to just the latest 11.1 version at the time of the publication, were affected by the issue, and an Apple spokesperson announced that "A fix will be released very soon". iOS 11.1.1 was released on November 9, 2017, fixing the issue.
At the end of the month, another keyboard autocorrection bug was reported, this time replacing the word "It" with "I.T." MacRumors suggested users set up the Text Replacement feature the same way they did for the earlier autocorrection issue, though its report notes that "some users insist this solution does not solve the problem". It was fixed with the release of iOS 11.2.
December 2 crashes
In early December, users wrote on Twitter and Reddit that, at exactly 12:15 a.m. local time on December 2, any App Store app that sends local notifications would cause the device to repeatedly restart. Reddit users reported that disabling notifications or turning off background app refresh would stop the issue, while Apple staff on Twitter described the problem as a bug in date handling and recommended that users manually set the date to before December 2. MacRumors wrote that the issue "looks like it's limited to devices running iOS 11.1.2", with users on the 11.2 beta release not affected. iOS 11.2, released on the same day, fixed the issue.
iOS 11.2 HomeKit vulnerability
In December 2017, 9to5Mac uncovered a security vulnerability in iOS 11.2 within Apple's HomeKit smart home system, allowing unauthorized access to smart locks and garage door openers. It noted that Apple had already issued a server-side fix that, while preventing unauthorized access, also limited HomeKit functionality, with an upcoming software fix for the iOS operating system intended to restore the lost functionality. On December 13, 2017, Apple released iOS 11.2.1, which fixed the limitation on remote access.
Supported devices
iOS 11 drops support for devices with a 32-bit processor: specifically the iPhone 5, iPhone 5C, and the fourth-generation iPad. It is the first version of iOS to run exclusively on iOS devices with 64-bit processors.
iPhone
iPhone 5S
iPhone 6
iPhone 6 Plus
iPhone 6S
iPhone 6S Plus
iPhone SE (1st generation)
iPhone 7
iPhone 7 Plus
iPhone 8
iPhone 8 Plus
iPhone X
iPod Touch
iPod Touch (6th generation)
iPad
iPad Air
iPad Air 2
iPad (2017)
iPad (2018)
iPad Mini 2
iPad Mini 3
iPad Mini 4
iPad Pro (9.7-inch)
iPad Pro (10.5-inch)
iPad Pro (12.9-inch 1st generation)
iPad Pro (12.9-inch 2nd generation)
See also
iOS version history
Android Oreo
Windows 10 Mobile
References
External links
11
2017 software
Tablet operating systems
|
26092568
|
https://en.wikipedia.org/wiki/Plan%20Calcul
|
Plan Calcul
|
Plan Calcul was a French governmental program to promote a national or European computer industry and associated research and education activities.
The plan was approved in July 1966 by President Charles de Gaulle, in the aftermath of two key events that made his government worry about French dependence on the US computer industry. In the mid-1960s, the United States denied export licenses for American-made IBM and CDC computers to the French Commissariat à l'énergie atomique in order to prevent it from perfecting its H bomb. Meanwhile, in 1964, General Electric had acquired a majority of Compagnie des Machines Bull, the largest French computer manufacturer, which had the second highest market share in France, after IBM, and was a leading IT equipment maker in Europe. Following this partial takeover, known as "Affaire Bull", GE-Bull dropped two Bull computers from its product line.
Responsibility for administering the plan was given to a newly created government agency, the Délégation à l'informatique (Information Bureau), answering directly to the prime minister.
As part of the program, in December 1966, the Compagnie internationale pour l'informatique (CII) was established as a manufacturer of commercial and scientific computers, initially under licence from Scientific Data Systems. The new company was intended to compete not only in the process control and military market, where its staff was already seasoned, but also in the office computing sector of the French market, where IBM and Bull were dominant at the time. The plan enacted government subsidies for CII between 1967 and 1971, and was renewed for another four years. A minor side of the plan was devoted to peripherals, while CII's main parent company, Thomson-CSF, received government support to develop its semiconductor plants and R&D. Overall, while CII mainframes benefitted from preferential procurement by the French government, the Plan Calcul left makers of peripherals, components and small computers to compete on the free market. The same went for software companies, which were already thriving in France.
On the research side, the program also led to the creation of L'Institut de recherche en informatique et en automatique (IRIA) in 1967, which later became INRIA. It was accompanied by a vast educational effort in programming and computer science.
In the late 1960s, CII announced its new, internally designed mainframes, the Iris 50 (1970) and Iris 80 (1971), and developed a mini-computer, the Mitra 15 (1971), which became a commercial success in the following decade. The company was also a minority participant in the production of magnetic peripherals through part ownership of Magnetic Peripherals Inc.
IBM had more than 50% market share in almost every European country. The head of the Information Bureau warned that international cooperation was necessary, however, as "something must happen or there won't be a European computer industry". The French government had spent more than $100 million on Plan Calcul in the first five years, and planned to spend more than that amount in the next five. France expected CII to reach $200 million in revenue before 1975. That year, CII began negotiations with Siemens and Philips to form a joint European company, Unidata, which shipped its first computers in 1974. Yet in 1974 a new President of the Republic was elected, former Finance minister Giscard d'Estaing, who was a strong opponent of the Plan Calcul; meanwhile, CII's sleeping partner, CGE-Alcatel, woke up to oppose the domination of its archrival Siemens over the European computer industry. Unidata was terminated and CII was absorbed into Honeywell-Bull in 1976.
This government initiative was ultimately deemed a failure.
See also
References
History of computing
Politics of France
Science and technology in France
1966 in France
Computer-related introductions in 1966
|
37138481
|
https://en.wikipedia.org/wiki/Act-On
|
Act-On
|
Act-On Software is a software-as-a-service product for marketing automation developed by Act-On, a company headquartered in Portland, Oregon. The company was founded in 2008, initially retailing its software exclusively through Cisco, which provided $2 million in funding. The software is used mostly by medium-sized businesses. Having raised $74 million in funding, the company developed an internal sales department to market the software directly to users. Act-On has received positive reviews for use by small to medium-sized businesses due to its ease of use, simplicity and cost.
History
Act-On was founded in 2008 by Raghu Raghavan, formerly a founder of Responsys, after he saw "potential for a sophisticated, but affordable SaaS marketing tool mid-market companies could easily use." Act-On initially sold through alternate channels, but later created its own sales team. Entering the market after several competitors had been established, Act-On had a second-mover advantage, learning from the success and failures of earlier market entrants. Raghavan commented in a 2013 interview, "We came in, we saw all of the things that had not been done right in this space, and I think it allowed us to build a company in a whole new way to attack what we saw in the monsters' market."
In 2009, the company's investors tried to convince founder Raghu Raghavan to move Act-On to Silicon Valley, but it remained in Oregon. The company raised a second round of funding of $4 million in November 2010 and a third round of $10 million in June 2011. In March 2011, Act-On re-launched its software with a new user interface. Later that year, Act-On expanded into larger offices in Beaverton, Oregon and Roseville, California, and also established a new location in Silicon Valley. An additional $16 million in funding was raised the following year.
By 2013, the company had 140 employees, up from 11 in 2010 and 35 in 2011. In early 2016, it expanded again into larger offices in Portland.
In April 2014, $42 million in additional funding was raised, which was the largest funding round in the Oregon technology market since the dot com bubble. Act-On now employs 425 people, up from 250 in 2014, and has expanded to include multiple offices in the U.S. and two abroad. It serves over 3,000 customers worldwide. In 2015, Act-On expanded its executive team with a new CFO, formerly of the company Jive, a VP of demand generation, previously with ExactTarget, and a VP of cloud operations.
In June 2020, Act-On Software rebranded itself to reflect the needs of modern marketing tools, prioritizing customer engagement and updating its services and product inventory.
Features
Act-On is a subscription-based software-as-a-service (SaaS) product for marketing automation. Its software products are for email marketing, landing pages, social media prospecting, CRM integration, lead management, webinar management, and analytics.
Act-On has a Twitter Prospector tool introduced in 2010 that automates the publishing and monitoring of content on Twitter, tracking prospective customers and measuring their activity. An Act-On Insight tool, released in June 2012, compares a company's social media marketing performance to competitors. Its Hot Prospects tool, introduced at the 2011 Dreamforce conference, creates a dashboard in Salesforce.com that scores the likelihood a prospect is ready to make a purchasing decision. A set of software tools for search engine optimization, pay-per-click advertising and other inbound tactics was introduced in May 2013 under the name Act-On Inbound. Act-On also introduced a mobile app and mobile optimization features. In July 2014, Act-On announced a set of product updates intended to improve data visualization and customize the user experience. Enhancements included a responsive email composer and expanded CRM integrations.
In March 2015, the company introduced Act-On Anywhere, a Chrome application giving users access to marketing automation data and functionality from any web-based browser. By allowing users to embed calls-to-action in web pages and blogs from any web-based content management system, this extension, along with Act-On's open APIs, supports a larger vision for an open marketing ecosystem in which third-party applications can plug and play with Act-On, ensuring that end users can continue to leverage their current systems and augment them using engagement data collected within marketing automation.
In June 2015, Act-On released Data Studio, an advanced data access and analytics tool allowing users to visualize, select, configure and export Act-On data to any Business Intelligence (BI) platform. Offered as part of Act-On's enterprise package, the feature equips users with built-in wizards, filters and templates to extract and report on engagement data in real-time.
Act-On is intended for marketing departments of between one and 15 people, as a low-cost alternative to enterprise software suites. It is one of few marketing automation vendors with adoption outside the technology industry, and currently has a 7 percent share of the marketing automation market overall.
Users can manage WebEx and GoToWebinar events within the software. It also integrates with data and analytics services, such as Google Analytics. More connectors are available for Microsoft Dynamics, WordPress, Salesforce, SugarCRM, Oktopost and others.
As part of its Open Marketing Ecosystem, the Act-On platform offers native integrations with all major CRM systems, allowing it to remain vendor-agnostic. A separate version for agencies has an agency dashboard to centrally manage multiple client campaigns and is sold at a lower bulk price. Act-On also promotes its agency partners and third-party applications on the Act-On Partner Exchange (APEX), manages an educational resource called the Act-On Center of Excellence (ACE) and provides professional services.
Reception
Act-On has drawn praise for its growth, product and leadership. The company was named to the Inc. 500 List of Fastest-Growing Privately Held Companies in 2013, 2014 and 2015, Deloitte's Technology Fast 500 List in 2013 and 2014, and the Portland Business Journal's List of 100 Fastest-Growing Private Companies in 2013, 2014, and 2015.
In early 2015, the Portland Business Journal named Act-On's CEO Raghu Raghavan its Technology CEO of the Year, citing his ability to manage the company's growth across multiple offices and continents.
In February 2015, Act-On published the results of a three-month survey detailing differences between top- and average-performing B2B companies. The report found that few marketers owned the customer lifecycle "end to end," and that top performers were likelier than their mid-size, average-performing peers to focus on customer retention and expansion. For its efforts promoting this report, Act-On was shortlisted for MarketingProfs' 2015 B2B Bright Bulb Awards.
In the 2014 Forrester Wave Report on Lead-to-Revenue Management Vendors, Act-On was ranked a leader in both categories: Small Marketing Teams and Large Enterprises. Forrester noted that Act-On did an admirable job delivering functionality for "nearly every criterion" evaluated: Simplicity, feature-set, software and support. In Gleanster's Gleansight 2014 benchmark, Act-On received a "Best" ranking in three out of four categories: Ease of deployment, ease of use and overall value. In the 2014 VEST report from analyst firm Raab Associates, Act-On was awarded "good scores" on the "product and vendor dimensions," numbering among "strong leaders" in the "crowded small-to-midsize company segment." The 2015 VEST report named Act-On as "the only privately held company to rank as a leader across all major categories of business."
References
External links
Official website
HA Advantage case study by Target Marketing
Software companies based in Oregon
Companies based in Portland, Oregon
American companies established in 2008
Software companies of the United States
2008 establishments in Oregon
Software companies established in 2008
|
46305796
|
https://en.wikipedia.org/wiki/Actran
|
Actran
|
ACTRAN (acronym of ACoustic TRANsmission, also known as the Acoustic NASTRAN) is a finite element-based computer-aided engineering software package modeling the acoustic behavior of mechanical systems and parts. Actran is developed by Free Field Technologies, a Belgian software company founded in 1998 by Jean-Pierre Coyette and Jean-Louis Migeot. Free Field Technologies has been a wholly owned subsidiary of the MSC Software Corporation since 2011; both companies have been part of Hexagon AB since 2017.
History
The development of Actran started in 1998, when Jean-Pierre Coyette, now a professor at the Louvain School of Engineering (Université catholique de Louvain), and Jean-Louis Migeot, now a professor at the Université Libre de Bruxelles and past president of the Royal Academy of Science, Letters and Fine Arts of Belgium (Académie royale des sciences, des lettres et des beaux-arts de Belgique), cofounded the software company Free Field Technologies SA.
The original idea was to develop a finite element-based simulation tool for vibro-acoustic applications able to overcome the limitations of the then dominant Boundary Element Method. The use of finite elements enabled the simulation of complex noise sources, the combination of multiple materials in the same model and the handling of multi-million degrees-of-freedom models. The initial target application was the prediction of the acoustic transmission through complex partitions (hence the name ACTRAN: ACoustic TRANsmission). A central feature of Actran was the use of Infinite Elements (IE) as an alternative to BEM for modelling non-reflecting boundary conditions and calculating the far field. Actran uses conjugated infinite elements, an extension of the wave envelope technique.
Early developments were funded by an industrial consortium, and the first commercial release was made broadly available in 2002, after the three-year exclusivity period given to the members of the consortium ended.
Software modules
Actran is written in the Python and C++ languages and is compatible with both Linux and Windows operating systems.
The Actran software is currently divided into different modules, licensed separately depending on the target application and the physics involved:
Actran Acoustics: basic module for acoustic radiation analysis and weakly coupled vibro-acoustic simulations; typical applications are noise radiation from powertrains and noise transmission through mufflers and silencers;
Actran VibroAcoustics: module dedicated to strongly coupled vibro-acoustic simulations; typical applications are sound transmission through structures (walls, windows, etc.), loudspeakers and underwater acoustics;
Actran AeroAcoustics: module dedicated to computational aeroacoustics; typical applications are HVAC ducts, centrifugal and axial fans, and side-window noise;
Actran for Trimmed Bodies: module dedicated to trimmed-body analyses; typical applications are car cabins and aircraft fuselages;
Actran SEA: module dedicated to SEA analyses; typical applications are studies of transportation vehicles at mid and high frequencies;
Actran TM: module dedicated to turbomachinery noise; typical applications are turbofan engine inlets;
Actran DGM: module solving the linearized Euler equations with a time-domain explicit solver whose numerical scheme is the Discontinuous Galerkin Method (DGM); typical applications are turbofan engine by-pass exhaust ducts and turbine exhaust ducts;
Actran VI: user interface common to all modules, used to pre-process Actran models (including generating and modifying acoustic meshes) and to post-process the results;
Actran Student Edition: limited release of the software, freely available to students.
Software Interoperability
Actran is integrated with MSC Nastran for vibro-acoustic simulations: either an MSC Nastran model is translated into an Actran input file, or structural modes are used as part of an Actran analysis. Structural modes can also be computed with other third-party software.
Actran is coupled with other MSC Software time domain solvers:
MSC Adams for moving mechanisms and impact noise studies;
Dytran and MSC Nastran SOL700 for sloshing noise analysis;
MSC Marc for acoustic radiation analysis from objects subject to large deformations and strain.
See also
MSC Software
Nastran
References
External links
Product lifecycle management
Simulation software
Computer-aided engineering software for Linux
|
55566453
|
https://en.wikipedia.org/wiki/Banktivity
|
Banktivity
|
Banktivity, formerly known as iBank, is a personal finance management suite designed for the macOS and iOS platforms by IGG Software; it debuted in 2003 as Mac desktop software.
History
There have been a total of 7 releases for the desktop version since its inception. Over this same period, IGG Software Inc. also released iOS versions for the iPhone (2009), the iPad (2012), and Apple Watch. Features differ between the apps, specifically the lack of reporting on the iPhone version.
In February 2016, IGG Software officially dropped the iBank name and rebranded as Banktivity. The name was derived from "Bank" and "Activity", according to IGG Software's president, Ian Gillespie.
As noted by Forbes, Macworld, MacLife magazine, Engadget and others, Banktivity is considered in the Mac user community to be a viable alternative to Quicken for Mac.
Banktivity's features include a proprietary cloud sync that can be used between macOS and iOS devices, traditional and envelope budgets, multi-currency support and investment portfolio management.
References
Business software
MacOS-only software
Accounting software
Financial software
PIM-software for MacOS
|
5689970
|
https://en.wikipedia.org/wiki/Flow-based%20programming
|
Flow-based programming
|
In computer programming, flow-based programming (FBP) is a programming paradigm that defines applications as networks of "black box" processes, which exchange data across predefined connections by message passing, where the connections are specified externally to the processes. These black box processes can be reconnected endlessly to form different applications without having to be changed internally. FBP is thus naturally component-oriented.
FBP is a particular form of dataflow programming based on bounded buffers, information packets with defined lifetimes, named ports, and separate definition of connections.
Introduction
Flow-based programming defines applications using the metaphor of a "data factory". It views an application not as a single, sequential process, which starts at a point in time, and then does one thing at a time until it is finished, but as a network of asynchronous processes communicating by means of streams of structured data chunks, called "information packets" (IPs). In this view, the focus is on the application data and the transformations applied to it to produce the desired outputs. The network is defined externally to the processes, as a list of connections which is interpreted by a piece of software, usually called the "scheduler".
The processes communicate by means of fixed-capacity connections. A connection is attached to a process by means of a port, which has a name agreed upon between the process code and the network definition. More than one process can execute the same piece of code. At any point in time, a given IP can only be "owned" by a single process, or be in transit between two processes. Ports may either be simple, or array-type, as used e.g. for the input port of the Collate component described below. It is the combination of ports with asynchronous processes that allows many long-running primitive functions of data processing, such as Sort, Merge, Summarize, etc., to be supported in the form of software black boxes.
Because FBP processes can continue executing as long as they have data to work on and somewhere to put their output, FBP applications generally run in less elapsed time than conventional programs, and make optimal use of all the processors on a machine, with no special programming required to achieve this.
The network definition is usually diagrammatic, and is converted into a connection list in some lower-level language or notation. FBP is often a visual programming language at this level. More complex network definitions have a hierarchical structure, being built up from subnets with "sticky" connections. Many other flow-based languages and runtimes are built around more traditional programming languages; the most notable example is RaftLib, which uses C++ iostream-like operators to specify the flow graph.
FBP has much in common with the Linda language in that it is, in Gelernter and Carriero's terminology, a "coordination language": it is essentially language-independent. Indeed, given a scheduler written in a sufficiently low-level language, components written in different languages can be linked together in a single network. FBP thus lends itself to the concept of domain-specific languages or "mini-languages".
FBP exhibits "data coupling", described in the article on coupling as the loosest type of coupling between components. The concept of loose coupling is in turn related to that of service-oriented architectures, and FBP fits a number of the criteria for such an architecture, albeit at a more fine-grained level than most examples of this architecture.
FBP promotes a high-level, functional style of specification that simplifies reasoning about system behavior. An example of this is the distributed data flow model for constructively specifying and analyzing the semantics of distributed multi-party protocols.
History
Flow-Based Programming was invented by J. Paul Morrison in the early 1970s, and initially implemented in software for a Canadian bank. FBP at its inception was strongly influenced by some IBM simulation languages of the period, in particular GPSS, but its roots go all the way back to Conway's seminal paper on what he called coroutines.
FBP has undergone a number of name changes over the years: the original implementation was called AMPS (Advanced Modular Processing System). One large application in Canada went live in 1975 and, as of 2013, had been in continuous production use, running daily, for almost 40 years. Because IBM considered the ideas behind FBP "too much like a law of nature" to be patentable, it instead put the basic concepts of FBP into the public domain by means of a Technical Disclosure Bulletin, "Data Responsive Modular, Interleaved Task Programming System", in 1971. An article describing its concepts and experience using it was published in 1978 in the IBM Systems Journal under the name DSLM. A second implementation was done as a joint project of IBM Canada and IBM Japan, under the name "Data Flow Development Manager" (DFDM), and was briefly marketed in Japan in the late '80s under the name "Data Flow Programming Manager".
Generally the concepts were referred to within IBM as "Data Flow", but this term was felt to be too general, and eventually the name "Flow-Based Programming" was adopted.
From the early '80s to 1993, J. Paul Morrison and IBM architect Wayne Stevens refined and promoted the concepts behind FBP. Stevens wrote several articles describing and supporting the FBP concept, and included material about it in several of his books. In 1994, Morrison published a book describing FBP and providing empirical evidence that FBP led to reduced development times.
Concepts
The following diagram shows the major entities of an FBP diagram (apart from the Information Packets). Such a diagram can be converted directly into a list of connections, which can then be executed by an appropriate engine (software or hardware).
A, B and C are processes executing code components. O1, O2, and the two INs are ports connecting the connections M and N to their respective processes. It is permitted for processes B and C to be executing the same code, so each process must have its own set of working storage, control blocks, etc. Whether or not they do share code, B and C are free to use the same port names, as port names only have meaning within the components referencing them (and at the network level, of course).
M and N are what are often referred to as "bounded buffers", and have a fixed capacity in terms of the number of IPs that they can hold at any point in time.
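This structure can be sketched directly in code. Below is a minimal Python approximation, not any particular FBP runtime: processes become threads, connections M and N become fixed-capacity queues, and it assumes A's ports O1 and O2 feed M and N into the IN ports of B and C (only the process, port and connection names come from the diagram; everything else is illustrative).

import threading, queue

CAPACITY = 4          # bounded buffers: only a fixed number of IPs may be in transit
EOS = object()        # end-of-stream sentinel (real FBP closes ports instead)

def generate(ports):
    # Process A: creates IPs and sends them alternately out ports O1 and O2.
    for i in range(10):
        ports["O1" if i % 2 == 0 else "O2"].put(f"IP-{i}")
    ports["O1"].put(EOS)
    ports["O2"].put(EOS)

def printer(name, ports):
    # Processes B and C execute this same component code, each with its own state.
    while (ip := ports["IN"].get()) is not EOS:
        print(f"{name} received {ip}")

# Connections M and N are defined externally to the component code.
M = queue.Queue(maxsize=CAPACITY)
N = queue.Queue(maxsize=CAPACITY)

threads = [
    threading.Thread(target=generate, args=({"O1": M, "O2": N},)),
    threading.Thread(target=printer, args=("B", {"IN": M})),
    threading.Thread(target=printer, args=("C", {"IN": N})),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

Because each queue has a fixed capacity, a fast sender simply blocks when a buffer fills, which is the back-pressure behaviour expected of FBP's bounded buffers.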
The concept of ports is what allows the same component to be used at more than one place in the network. In combination with a parametrization ability, called Initial Information Packets (IIPs), ports provide FBP with a component reuse ability, making FBP a component-based architecture. FBP thus exhibits what Raoul de Campo and Nate Edwards of IBM Research have termed configurable modularity.
Information Packets or IPs are allocated in what might be called "IP space" (just as Linda's tuples are allocated in "tuple space"), and have a well-defined lifetime until they are disposed of and their space is reclaimed - in FBP this must be an explicit action on the part of an owning process. IPs traveling across a given connection (actually it is their "handles" that travel) constitute a "stream", which is generated and consumed asynchronously - this concept thus has similarities to the lazy cons concept described in the 1976 article by Friedman and Wise.
IPs are usually structured chunks of data - some IPs, however, may not contain any real data, but are used simply as signals. An example of this is "bracket IPs", which can be used to group data IPs into sequential patterns within a stream, called "substreams". Substreams may in turn be nested. IPs may also be chained together to form "IP trees", which travel through the network as single objects.
The system of connections and processes described above can be "ramified" to any size. During the development of an application, monitoring processes may be added between pairs of processes, processes may be "exploded" to subnets, or simulations of processes may be replaced by the real process logic. FBP therefore lends itself to rapid prototyping.
This is really an assembly line image of data processing: the IPs travelling through a network of processes may be thought of as widgets travelling from station to station in an assembly line. "Machines" may easily be reconnected, taken off line for repair, replaced, and so on. Oddly enough, this image is very similar to that of unit record equipment that was used to process data before the days of computers, except that decks of cards had to be hand-carried from one machine to another.
Implementations of FBP may be non-preemptive or preemptive - the earlier implementations tended to be non-preemptive (mainframe and C language), whereas the latest Java implementation (see below) uses Java Thread class and is preemptive.
Examples
"Telegram Problem"
FBP components often form complementary pairs. This example uses two such pairs. The problem described seems very simple as described in words, but in fact is surprisingly difficult to accomplish using conventional procedural logic. The task, called the "Telegram Problem", originally described by Peter Naur, is to write a program which accepts lines of text and generates output lines containing as many words as possible, where the number of characters in each line does not exceed a certain length. The words may not be split and we assume no word is longer than the size of the output lines. This is analogous to the word-wrapping problem in text editors.
In conventional logic, the programmer rapidly discovers that neither the input nor the output structures can be used to drive the call hierarchy of control flow. In FBP, on the other hand, the problem description itself suggests a solution:
"words" are mentioned explicitly in the description of the problem, so it is reasonable for the designer to treat words as information packets (IPs)
in FBP there is no single call hierarchy, so the programmer is not tempted to force a sub-pattern of the solution to be the top level.
Here is the most natural solution in FBP (there is no single "correct" solution in FBP, but this seems like a natural fit):
where DC and RC stand for "DeCompose" and "ReCompose", respectively.
As mentioned above, Initial Information Packets (IIPs) can be used to specify parametric information such as the desired output record length (required by the rightmost two components), or file names. IIPs are data chunks associated with a port in the network definition which become "normal" IPs when a "receive" is issued for the relevant port.
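For illustration, here is a rough single-threaded Python approximation of this network, with generators standing in for asynchronous FBP processes and the IIP (the desired line length) represented as an ordinary parameter; the component names follow the diagram, and the rest is invented for the example.

def decompose(lines):
    # DC: split each incoming line IP into one IP per word
    for line in lines:
        yield from line.split()

def recompose(words, max_len):
    # RC: pack as many words as fit into each output line IP
    current = ""
    for word in words:
        if not current:
            current = word
        elif len(current) + 1 + len(word) <= max_len:
            current += " " + word
        else:
            yield current
            current = word
    if current:
        yield current

for line in recompose(decompose(["the quick brown fox", "jumps over the lazy dog"]), 12):
    print(line)
# the quick
# brown fox
# jumps over
# the lazy dog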
Batch update
This type of program involves passing a file of "details" (changes, adds and deletes) against a "master file", and producing (at least) an updated master file and one or more reports. Update programs are generally quite hard to write using synchronous, procedural code, as two (sometimes more) input streams have to be kept synchronized, even though there may be masters without corresponding details, or vice versa.
In FBP, a reusable component (Collate), based on the unit record idea of a Collator, makes writing this type of application much easier as Collate merges the two streams and inserts bracket IPs to indicate grouping levels, significantly simplifying the downstream logic. Suppose that one stream ("masters" in this case) consists of IPs with key values of 1, 2 and 3, and the second stream IPs ("details") have key values of 11, 12, 21, 31, 32, 33 and 41, where the first digit corresponds to the master key values. Using bracket characters to represent "bracket" IPs, the collated output stream will be as follows:
( m1 d11 d12 ) ( m2 d21 ) ( m3 d31 d32 d33 ) ( d41 )
As there was no master with a value of 4, the last group consists of a single detail (plus brackets).
The structure of the above stream can be described succinctly using a BNF-like notation such as
{ ( [m] d* ) }*
Collate is a reusable black box which only needs to know where the control fields are in its incoming IPs (even this is not strictly necessary as transformer processes can be inserted upstream to place the control fields in standard locations), and can in fact be generalized to any number of input streams, and any depth of bracket nesting. Collate uses an array-type port for input, allowing a variable number of input streams.
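A minimal Python sketch of the collate idea follows, assuming both input streams arrive sorted by key and using the characters "(" and ")" to stand in for bracket IPs; the data layout is invented for the example, and this is not Morrison's actual component.

from itertools import groupby

def collate(masters, details):
    # Merge two key-sorted streams, grouping each master with its
    # details between bracket IPs, represented here as "(" and ")".
    mi = iter(masters)
    master = next(mi, None)
    for key, group in groupby(details, key=lambda d: d[0]):
        # emit any masters that have no details of their own
        while master is not None and master[0] < key:
            yield from ("(", master[1], ")")
            master = next(mi, None)
        yield "("
        if master is not None and master[0] == key:
            yield master[1]
            master = next(mi, None)
        yield from (d[1] for d in group)
        yield ")"
    while master is not None:
        yield from ("(", master[1], ")")
        master = next(mi, None)

masters = [(1, "m1"), (2, "m2"), (3, "m3")]
details = [(1, "d11"), (1, "d12"), (2, "d21"),
           (3, "d31"), (3, "d32"), (3, "d33"), (4, "d41")]
print(" ".join(str(ip) for ip in collate(masters, details)))
# ( m1 d11 d12 ) ( m2 d21 ) ( m3 d31 d32 d33 ) ( d41 )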
Multiplexing processes
Flow-based programming supports process multiplexing in a very natural way. Since components are read-only, any number of instances of a given component ("processes") can run asynchronously with each other.
When computers usually had a single processor, this was useful when a lot of I/O was going on; now that machines usually have multiple processors, this is starting to become useful when processes are CPU-intensive as well. The diagram in this section shows a single "Load Balancer" process distributing data between three processes, labeled S1, S2 and S3, respectively, which are instances of a single component, which in turn feed into a single process on a "first-come, first served" basis.
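The distribution rule itself can be sketched in a few lines of Python (a single-threaded illustration; the queue layout and the None sentinel are invented for the example): each incoming IP goes to whichever downstream connection has the fewest IPs waiting.

import queue

def load_balancer(inport, outports):
    # Route each IP to the downstream queue with the fewest IPs waiting.
    while (ip := inport.get()) is not None:
        min(outports, key=lambda q: q.qsize()).put(ip)

inport = queue.Queue()
outports = [queue.Queue() for _ in range(3)]   # feeds S1, S2, S3
for i in range(9):
    inport.put(f"IP-{i}")
inport.put(None)                               # end-of-stream sentinel
load_balancer(inport, outports)
print([q.qsize() for q in outports])           # [3, 3, 3]: evenly spread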
Simple interactive network
In this general schematic, requests (transactions) coming from users enter the diagram at the upper left, and responses are returned at the lower left. The "back ends" (on the right side) communicate with systems at other sites, e.g. using CORBA, MQSeries, etc. The cross-connections represent requests that do not need to go to the back ends, or requests that have to cycle through the network more than once before being returned to the user.
As different requests may use different back-ends, and may require differing amounts of time for the back-ends (if used) to process them, provision must be made to relate returned data to the appropriate requesting transactions, e.g. hash tables or caches.
The above diagram is schematic in the sense that the final application may contain many more processes: processes may be inserted between other processes to manage caches, display connection traffic, monitor throughput, etc. Also the blocks in the diagram may represent "subnets" - small networks with one or more open connections.
Comparison with other paradigms and methodologies
Jackson Structured Programming (JSP) and Jackson System Development (JSD)
This methodology assumes that a program must be structured as a single procedural hierarchy of subroutines. Its starting point is to describe the application as a set of "main lines", based on the input and output data structures. One of these "main lines" is then chosen to drive the whole program, and the others are required to be "inverted" to turn them into subroutines (hence the name "Jackson inversion"). This sometimes results in what is called a "clash", requiring the program to be split into multiple programs or coroutines. When using FBP, this inversion process is not required, as every FBP component can be considered a separate "main line".
FBP and JSP share the concept of treating a program (or some components) as a parser of an input stream.
In Jackson's later work, Jackson System Development (JSD), the ideas were developed further.
In JSD the design is maintained as a network design until the final implementation stage. The model is then transformed into a set of sequential processes according to the number of available processors. Jackson discusses the possibility of directly executing the network model that exists prior to this step, in section 1.3 of his book (italics added):
The specification produced at the end of the System Timing step is, in principle, capable of direct execution. The necessary environment would contain a processor for each process, a device equivalent to an unbounded buffer for each data stream, and some input and output devices where the system is connected to the real world. Such an environment could, of course, be provided by suitable software running on a sufficiently powerful machine. Sometimes, such direct execution of the specification will be possible, and may even be a reasonable choice.
FBP was recognized by M. A. Jackson as an approach that follows his method of "Program decomposition into sequential processes communicating by a coroutine-like mechanism".
Applicative programming
W.B. Ackerman defines an applicative language as one which does all of its processing by means of operators applied to values. The earliest known applicative language was LISP.
An FBP component can be regarded as a function transforming its input stream(s) into its output stream(s). These functions are then combined to make more complex transformations, as shown here:
If we label streams, as shown, with lower case letters, then the above diagram can be represented succinctly as follows:
c = G(F(a),F(b));
Just as in functional notation F can be used twice because it only works with values, and therefore has no side effects, in FBP two instances of a given component may be running concurrently with each other, and therefore FBP components must not have side-effects either. Functional notation could clearly be used to represent at least a part of an FBP network.
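As a toy illustration of the expression above, with Python lists standing in for streams and two invented components F and G:

def F(stream):
    # Side-effect-free, so it can safely be applied twice
    # (or, in FBP, run as two concurrent process instances).
    return [x * 2 for x in stream]

def G(s1, s2):
    return [x + y for x, y in zip(s1, s2)]

a, b = [1, 2, 3], [10, 20, 30]
c = G(F(a), F(b))
print(c)   # [22, 44, 66]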
The question then arises whether FBP components can themselves be expressed using functional notation. W.H. Burge showed how stream expressions can be developed using a recursive, applicative style of programming, but this work was in terms of (streams of) atomic values. In FBP, it is necessary to be able to describe and process structured data chunks (FBP IPs).
Furthermore, most applicative systems assume that all the data is available in memory at the same time, whereas FBP applications need to be able to process long-running streams of data while still using finite resources. Friedman and Wise suggested a way to do this by adding the concept of "lazy cons" to Burge's work. This removed the requirement that both of the arguments of "cons" be available at the same instant of time. "Lazy cons" does not actually build a stream until both of its arguments are realized - before that it simply records a "promise" to do this. This allows a stream to be dynamically realized from the front, but with an unrealized back end. The end of the stream stays unrealized until the very end of the process, while the beginning is an ever-lengthening sequence of items.
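Python generators provide a convenient modern analogue of this behaviour (a sketch, not Friedman and Wise's formulation): the front of the stream is realized on demand while the back end remains an unfulfilled promise.

from itertools import islice

def integers_from(n):
    # Nothing past a yield runs until a consumer asks for the next
    # element: the rest of the stream is only a promise.
    while True:
        yield n
        n += 1

stream = integers_from(1)         # nothing computed yet
print(list(islice(stream, 5)))    # realizes just the front: [1, 2, 3, 4, 5]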
Linda
Many of the concepts in FBP seem to have been discovered independently in different systems over the years. Linda, mentioned above, is one such. The difference between the two techniques is illustrated by the Linda "school of piranhas" load balancing technique - in FBP, this requires an extra "load balancer" component which routes requests to the component in a list which has the smallest number of IPs waiting to be processed. Clearly FBP and Linda are closely related, and one could easily be used to simulate the other.
Object-oriented programming
An object in OOP can be described as a semi-autonomous unit comprising both information and behaviour. Objects communicate by means of "method calls", which are essentially subroutine calls, done indirectly via the class to which the receiving object belongs. The object's internal data can only be accessed by means of method calls, so this is a form of information hiding or "encapsulation". Encapsulation, however, predates OOP - David Parnas wrote one of the seminal articles on it in the early 70s - and is a basic concept in computing. Encapsulation is the very essence of an FBP component, which may be thought of as a black box, performing some conversion of its input data into its output data. In FBP, part of the specification of a component is the data formats and stream structures that it can accept, and those it will generate. This constitutes a form of design by contract. In addition, the data in an IP can only be accessed directly by the currently owning process. Encapsulation can also be implemented at the network level, by having outer processes protect inner ones.
A paper by C. Ellis and S. Gibbs distinguishes between active objects and passive objects. Passive objects comprise information and behaviour, as stated above, but they cannot determine the timing of this behaviour. Active objects on the other hand can do this. In their article Ellis and Gibbs state that active objects have much more potential for the development of maintainable systems than do passive objects. An FBP application can be viewed as a combination of these two types of object, where FBP processes would correspond to active objects, while IPs would correspond to passive objects.
Actor model
FBP treats Carl Hewitt's actor as an asynchronous process with two ports: one for input messages and one for control signals. A control signal is emitted by the actor itself after each round of execution. The purpose of this signal is to avoid parallel execution of the actor's body, allowing the fields of the actor object to be accessed without synchronization.
See also
Active objects
Actor model
Apache NiFi
BMDFM
Communicating Sequential Processes (CSP)
Concurrent computing
Dataflow
Data flow diagram
Dataflow programming
FBD - Function Block Diagrams (a programming language in the IEC 61131 standard)
Functional reactive programming
Linda (coordination language)
Low-code development platforms
MapReduce
Node-RED
Pipeline programming
VRL Studio
Wayne Stevens
XProc
Yahoo Pipes
References
External links
Concurrent programming languages
Parallel computing
Programming paradigms
Visual programming languages
|
92171
|
https://en.wikipedia.org/wiki/Webcam
|
Webcam
|
A webcam is a video camera that feeds or streams an image or video in real time to or through a computer network, such as the Internet. Webcams are typically small cameras that sit on a desk, attach to a user's monitor, or are built into the hardware. Webcams can be used during a video chat session involving two or more people, with conversations that include live audio and video.
Webcam software enables users to record a video or stream the video on the Internet. As video streaming over the Internet requires much bandwidth, such streams usually use compressed formats. The maximum resolution of a webcam is also lower than that of most handheld video cameras, as higher resolutions would be reduced during transmission. The lower resolution enables webcams to be relatively inexpensive compared to most video cameras, but the effect is adequate for video chat sessions.
The term "webcam" (a clipped compound) may also be used in its original sense of a video camera connected to the Web continuously for an indefinite time, rather than for a particular session, generally supplying a view for anyone who visits its web page over the Internet. Some of them, for example, those used as online traffic cameras, are expensive, rugged professional video cameras.
Technology
Webcams typically include a lens, an image sensor, support electronics, and may also include one or even two microphones for sound.
Image sensor
Image sensors can be CMOS or CCD, the former being dominant for low-cost cameras, but CCD cameras do not necessarily outperform CMOS-based cameras in the low-price range. Most consumer webcams are capable of providing VGA-resolution video at a frame rate of 30 frames per second. Many newer devices can produce video in multi-megapixel resolutions, and a few can run at high frame rates such as the PlayStation Eye, which can produce 320×240 video at 120 frames per second. The Wii Remote contains an image sensor with a resolution of 1024×768 pixels. Common resolutions of laptops' built-in webcams are 720p (HD), and in lower-end laptops 480p. The earliest known laptops with 1080p (Full HD) webcams like the Samsung 700G7C were released in the early 2010s.
As the Bayer filter is proprietary, any webcam contains some built-in image processing, separate from compression.
Optics
Various lenses are available, the most common in consumer-grade webcams being a plastic lens that can be manually moved in and out to focus the camera. Fixed-focus lenses, which have no provision for adjustment, are also available. As a camera system's depth of field is greater for small image formats and is greater for lenses with a large f-number (small aperture), the systems used in webcams have a sufficiently large depth of field that the use of a fixed-focus lens does not impact image sharpness to a great extent.
Most models use simple, focal-free optics (fixed focus, factory-set for the usual distance from the user to the monitor to which the camera is fastened) or manual focus.
Compression
Digital video streams are represented by huge amounts of data, burdening transmission (from the image sensor, where the data is continuously created) and storage alike.
Most, if not all, cheap webcams come with a built-in ASIC to do video compression in real time.
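The scale of the problem is easy to quantify with nominal VGA figures (an illustrative calculation, not a measurement of any particular camera):

# Uncompressed 24-bit RGB VGA video at 30 frames per second
width, height, bytes_per_pixel, fps = 640, 480, 3, 30
raw_bytes_per_second = width * height * bytes_per_pixel * fps
print(f"{raw_bytes_per_second / 1e6:.1f} MB/s")   # about 27.6 MB/s

At roughly 27.6 MB/s, a raw VGA stream is far beyond the 1.5 MB/s of full-speed USB 1.1, which is why early webcams in particular had to compress on-board.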
Support electronics read the image from the sensor and transmit it to the host computer. The camera pictured to the right, for example, uses a Sonix SN9C101 to transmit its image over USB. Typically, each frame is transmitted uncompressed in RGB or YUV or compressed as JPEG. Some cameras, such as mobile-phone cameras, use a CMOS sensor with supporting electronics "on die", i.e. the sensor and the support electronics are built on a single silicon chip to save space and manufacturing costs. Most webcams feature built-in microphones to make video calling and videoconferencing more convenient.
Interface
Typical interfaces used by products marketed as a "webcam" are USB, Ethernet and IEEE 802.11 (denominated as an IP camera). Further interfaces, such as composite video, S-Video or FireWire, were also available.
The USB video device class (UVC) specification allows inter-connectivity of webcams to computers without the need for proprietary device drivers.
Software
Various proprietary as well as free and open-source software is available to handle the UVC stream, such as Guvcview or GStreamer and GStreamer-based software. Other software, such as MotionEye, can read multiple USB cameras attached to the host computer and broadcast their streams at once over (wireless) Ethernet. MotionEye can be installed onto a Raspberry Pi as MotionEyeOS or added afterwards on Raspbian; since Raspbian is a variant of Debian, MotionEye can also be set up on Debian itself. MotionEye 4.1.1 (August 2021) runs only on Debian 10 "Buster" (oldstable) and Python 2.7; newer Python versions such as 3.x are not supported as of that release, according to Ccrisan, MotionEye's founder and author.
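As a simple illustration of consuming such a stream programmatically, the following Python sketch grabs one frame with OpenCV, which reads UVC webcams through the operating system's capture backend (it assumes the opencv-python package is installed and a camera exists at device index 0; the file name is arbitrary):

import cv2

cap = cv2.VideoCapture(0)      # device index 0: the first attached camera
if not cap.isOpened():
    raise RuntimeError("no webcam found at index 0")
ok, frame = cap.read()         # grab a single BGR frame
if ok:
    cv2.imwrite("snapshot.jpg", frame)
cap.release()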
Characteristics
Webcams are known for their low manufacturing cost and their high flexibility, making them the lowest-cost form of videotelephony. As webcams evolved simultaneously with display technologies, USB interface speeds and broadband internet speeds, the resolution went up gradually from 320×240 to 640×480, and some now even offer 1280×720 (720p) or 1920×1080 (1080p) resolution.
Webcams can come with different presets and fields of view (FOV). Individual users can make use of less than 90° horizontal FOV for home offices and live streaming. Webcams with as much as 360° horizontal FOV can be used for small- to medium-sized rooms (sometimes even large rooms). Depending on the user's purposes, webcams on the market can display the whole room or just the general vicinity.
Despite the low cost, the resolution offered as of 2019 is impressive, with low-end webcams offering 720p, mid-range webcams offering 1080p, and high-end webcams offering 4K resolution at 60 fps.
Webcams have become a source of security and privacy issues, as some built-in webcams can be remotely activated by spyware. To address this concern, many webcams come with a physical lens cover.
Uses
The most popular use of webcams is the establishment of video links, permitting computers to act as videophones or videoconference stations. For example, Apple's iSight camera, which is built into Apple laptops, iMacs and a majority of iPhones, can be used for video chat sessions, using the Messages instant messaging program. Other popular uses include security surveillance, computer vision, video broadcasting, and for recording social videos.
The video streams provided by webcams can be used for a number of purposes, each using appropriate software:
Video monitoring
Webcams may be installed at places such as childcare centres, offices, shops and private areas to monitor security and general activity.
Commerce
Webcams have been used for augmented reality experiences online. One such function has the webcam act as a "magic mirror" to allow an online shopper to view a virtual item on themselves. The Webcam Social Shopper is one example of software that utilizes the webcam in this manner.
Videocalling and videoconferencing
With webcams added to instant messaging, text chat services such as AOL Instant Messenger, and VoIP services such as Skype, one-to-one live video communication over the Internet has reached millions of mainstream PC users worldwide. Improved video quality has helped webcams encroach on traditional video conferencing systems. New features such as automatic lighting controls, real-time enhancements (retouching, wrinkle smoothing and vertical stretch), automatic face tracking and autofocus assist users by providing substantial ease of use, further increasing the popularity of webcams.
Since the middle of 2020, remote and hybrid work has increased the popularity of webcams. Businesses, schools, and individuals have relied on video conferencing instead of spending on business travel for meetings. Moreover, the number of video conferencing cameras and software have multiplied since then due to their popularity.
Webcam features and performance can vary by program, computer operating system, and also by the computer's processor capabilities. Video calling support has also been added to several popular instant messaging programs.
Video security
Webcams can be used as security cameras. Software is available to allow PC-connected cameras to watch for movement and sound, recording both when they are detected. These recordings can then be saved to the computer, e-mailed, or uploaded to the Internet. In one well-publicised case, a computer e-mailed images of the burglar during the theft of the computer, enabling the owner to give police a clear picture of the burglar's face even after the computer had been stolen.
Unauthorized access of webcams can present significant privacy issues (see "Privacy" section below).
In December 2011, Russia announced that 290,000 Webcams would be installed in 90,000 polling stations to monitor the 2012 Russian presidential election.
Video clips and stills
Webcams can be used to take video clips and still pictures. Various software tools in wide use can be employed for this, such as PicMaster and Microsoft's Camera app (for use with Windows operating systems), Photo Booth (Mac), or Cheese (with Unix systems). For a more complete list see Comparison of webcam software.
Input control devices
Special software can use the video stream from a webcam to assist or enhance a user's control of applications and games. Video features, including faces, shapes, models and colors can be observed and tracked to produce a corresponding form of control. For example, the position of a single light source can be tracked and used to emulate a mouse pointer, a head-mounted light would enable hands-free computing and would greatly improve computer accessibility. This can be applied to games, providing additional control, improved interactivity and immersiveness.
FreeTrack is a free webcam motion-tracking application for Microsoft Windows that can track a special head-mounted model in up to six degrees of freedom and output data to mouse, keyboard, joystick and FreeTrack-supported games. By removing the IR filter of the webcam, IR LEDs can be used, which has the advantage of being invisible to the naked eye, removing a distraction from the user. TrackIR is a commercial version of this technology.
The EyeToy for the PlayStation 2, the PlayStation Eye for the PlayStation 3, and the Xbox Live Vision camera and Kinect motion sensor for the Xbox 360 are color digital cameras that have been used as control input devices by some games.
Small webcam-based PC games are available as either standalone executables or inside web browser windows using Adobe Flash.
Astro photography
With very-low-light capability, a few specific models of webcam are popular among astronomers and astrophotographers for photographing the night sky. Mostly, these are manual-focus cameras containing an older CCD array instead of a comparatively newer CMOS array. The lenses of the cameras are removed and the cameras are then attached to telescopes to record images, video, or both. In newer techniques, video of very faint objects is taken over a couple of seconds and all the frames of the video are then "stacked" together to obtain a still image of respectable contrast.
Laser beam profiling
A webcam's CCD response is linearly proportional to the incoming light, so webcams are suitable for recording laser beam profiles once the lens is removed. The resolution of a laser beam profiler depends on the pixel size. Commercial webcams are usually designed to record color images. The size of a webcam's color pixel depends on the model and may lie in the range of 5 to 10 µm. However, a color pixel consists of four black-and-white pixels, each equipped with a color filter (for details see Bayer filter). Although these color filters work well in the visible spectrum, they may be rather transparent in the near infrared. By switching a webcam into Bayer mode it is possible to access the information of the single pixels, and a resolution below 3 µm has been achieved.
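Accessing the single pixels amounts to slicing the colour planes out of the raw mosaic. A hypothetical NumPy sketch, assuming an 8-bit sensor with an RGGB layout (the actual layout varies by camera, and real data would come from the capture driver rather than random numbers):

import numpy as np

# Stand-in for one raw Bayer frame from the sensor
raw = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)

red   = raw[0::2, 0::2]    # one sample per 2x2 Bayer cell
green = (raw[0::2, 1::2].astype(np.uint16) + raw[1::2, 0::2]) // 2   # average the two green samples
blue  = raw[1::2, 1::2]
print(red.shape, green.shape, blue.shape)   # (240, 320) each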
History
Early development (Early 1990s)
First developed in 1991, a webcam was pointed at the Trojan Room coffee pot in the Cambridge University Computer Science Department (initially operating over a local network instead of the web). The camera was finally switched off on August 22, 2001. The final image captured by the camera can still be viewed at its homepage. The oldest continuously operating webcam, San Francisco State University's FogCam, has run since 1994; it updates every 20 seconds and was slated to be turned off in August 2019. After the publicity following extensive news coverage about the planned shutdown, SFSU agreed to continue maintaining the FogCam and keep it running.
IndyCam
The SGI Indy, released in 1993, was the first commercial computer to have a standard video camera, and the first SGI computer to have standard video inputs.
The maximum supported input resolution is 640×480 for NTSC or 768×576 for PAL. A fast machine is required to capture at either of these resolutions, though; an Indy with the slower R4600PC CPU, for example, may require the input resolution to be reduced before storage or processing. However, the Vino hardware is capable of DMAing video fields directly into the framebuffer with minimal CPU overhead.
Commercial webcam (Mid 90s)
Connectix QuickCam
The first widespread commercial webcam, the black-and-white QuickCam, entered the marketplace in 1994, created by the U.S. computer company Connectix. QuickCam was available in August 1994 for the Apple Macintosh, connecting via a serial port, at a cost of $100. Jon Garber, the designer of the device, had wanted to call it the "Mac-camera", but was overruled by Connectix's marketing department; a version with a PC-compatible parallel port and software for Microsoft Windows was launched in October 1995. The original QuickCam provided 320×240-pixel resolution with a grayscale depth of 16 shades at 60 frames per second, or 256 shades at 15 frames per second. These cameras were also tested on several Delta II launches, using a variety of communication protocols including CDMA, TDMA, GSM and HF.
In 2010, Time Magazine named the QuickCam as one of the top computer devices of all time.
Videoconferencing via computers already existed, and at the time client-server based videoconferencing software such as CU-SeeMe had started to become popular.
RS/6000 integrated webcam
The first widely known laptop with an integrated webcam option, at a price point starting at US$12,000, was the IBM RS/6000 860 laptop and its ThinkPad 850 sibling, released in 1996.
Entering the mainstream (Late 90s - 2000s)
One of the most widely reported-on webcam sites was JenniCam, created in 1996, which allowed Internet users to observe the life of its namesake constantly, in the same vein as the reality TV series Big Brother, launched four years later. Other cameras are mounted overlooking bridges, public squares, and other public places, their output made available on a public web page in accordance with the original concept of a "webcam". Aggregator websites have also been created, providing thousands of live video streams or up-to-date still pictures, allowing users to find live video streams based on location or other criteria.
In the late 1990s, Microsoft NetMeeting was the only videoconferencing software on PC in widespread use, making use of webcams. In the following years, instant messaging clients started adding webcam support: Yahoo Messenger introduced this with version 5.5 in 2002, allowing video calling in 20 frames per second using a webcam. MSN Messenger gained this in version 5.0 in 2003.
Around the turn of the 21st century, computer hardware manufacturers began building webcams directly into laptop and desktop screens, thus eliminating the need to use an external USB or FireWire camera. Gradually webcams came to be used more for telecommunications, or videotelephony, between two people, or among several people, than for offering a view on a Web page to an unknown public.
Later developments (2010s - Present)
For less than US$100 in 2012, a three-dimensional space webcam became available, producing videos and photos in 3D anaglyph image with a resolution up to 1280 × 480 pixels. Both sender and receiver of the images must use 3D glasses to see the effect of three dimensional image.
Webcams are considered an essential accessory for working from home, mainly to compensate for the lower-quality video processing of the average laptop's built-in camera. As a result of the COVID-19 pandemic, webcams initially sold out, or their prices were marked up by third-party sellers. Most laptops before and during the pandemic were made with cameras capping out at 720p recording quality at best, compared to the 1080p or 4K standard seen in smartphones and televisions from the same period. The lag in built-in webcam development is partly a design constraint: many laptops are too thin for 7 mm camera modules to fit inside and instead resort to modules of roughly 2.5 mm. Better camera components are also more expensive, and demand for the feature has not been high; companies like Apple went years without updating their webcams (Apple's remained unchanged since 2012). Smartphones started to be used as a backup option or webcam replacement, with kits including lighting and tripods, or downloadable apps.
Privacy
Many users do not wish the continuous exposure for which webcams were originally intended, but rather prefer privacy. Such privacy is lost when malware allows malicious hackers to activate the webcam without the user's knowledge, providing the hackers with a live video and audio feed. This is a particular concern on many laptop computers, as such cameras normally cannot be physically disabled if hijacked by such a Trojan horse program or other similar spyware programs.
Cameras such as Apple's older external iSight cameras include lens covers to thwart this. Some webcams have built-in hardwired LED indicators that light up whenever the camera is active, sometimes only in video mode. However, it is possible for malware to circumvent the indicator and activate the camera surreptitiously, as researchers demonstrated in the case of a MacBook's built-in camera in 2013.
Various companies sell sliding lens covers and stickers that allow users to retrofit a computer or smartphone to close access to the camera lens as needed. One such company reported having sold more than 250,000 such items from 2013 to 2016. However, any opaque material will work. Prominent users include former FBI director James Comey.
The process of attempting to hack into a person's webcam and activate it without the webcam owner's permission has been called camfecting, a portmanteau of cam and infecting. The remotely activated webcam can be used to watch anything within the webcam's field of vision. Camfecting is most often carried out by infecting the victim's computer with a virus.
In January 2005, some search engine queries were published in an online forum which allowed anyone to find thousands of Panasonic and Axis high-end web cameras, provided that they had a web-based interface for remote viewing. Many such cameras run on a default configuration, which does not require any password login or IP address verification, making them viewable by anyone.
In the 2010 Robbins v. Lower Merion School District "WebcamGate" case, plaintiffs charged that two suburban Philadelphia high schools secretly spied on students by surreptitiously remotely activating iSight webcams embedded in school-issued MacBook laptops the students were using at home—thereby infringing on their privacy rights. School authorities admitted to secretly snapping over 66,000 photographs, including shots of students in the privacy of their bedrooms, and some with teenagers in various states of undress. The school board involved quickly disabled their laptop spyware program after parents filed lawsuits against the board and various individuals.
Effects on modern society
Webcams allow for inexpensive, real-time video chat and webcasting, in both amateur and professional pursuits. They are frequently used in online dating and for online personal services, offered mainly by women when camgirling. However, the ease of webcam use through the Internet for video chat has also caused issues. For example, the moderation systems of various video chat websites such as Omegle have been criticized as ineffective, with sexual content still rampant. In a 2013 case, the transmission of nude photos and videos via Omegle from a teenage girl to a schoolteacher resulted in a child pornography charge.
YouTube is a popular website hosting many videos made using webcams. News websites such as the BBC also produce professional live news videos using webcams rather than traditional cameras.
Webcams can also encourage telecommuting, enabling people to work from home via the Internet, rather than traveling to their office. This usage was crucial to the survival of many businesses during the COVID-19 pandemic, when in-person office work was discouraged.
The popularity of webcams among teenagers with Internet access has raised concern about the use of webcams for cyber-bullying. Webcam recordings of teenagers, including underage teenagers, are frequently posted on popular Web forums and imageboards such as 4chan.
Descriptive names and terminology
Videophone calls (also: videocalls and video chat), differ from videoconferencing in that they expect to serve individuals, not groups. However that distinction has become increasingly blurred with technology improvements such as increased bandwidth and sophisticated software clients that can allow for multiple parties on a call. In general everyday usage the term videoconferencing is now frequently used instead of videocall for point-to-point calls between two units. Both videophone calls and videoconferencing are also now commonly referred to as a video link.
Webcams are popular, relatively low cost devices which can provide live video and audio streams via personal computers, and can be used with many software clients for both video calls and videoconferencing.
A videoconference system is generally higher cost than a videophone and deploys greater capabilities. A videoconference (also known as a videoteleconference) allows two or more locations to communicate via live, simultaneous two-way video and audio transmissions. This is often accomplished by the use of a multipoint control unit (a centralized distribution and call management system) or by a similar non-centralized multipoint capability embedded in each videoconferencing unit. Again, technology improvements have circumvented traditional definitions by allowing multiple party videoconferencing via web-based applications.
A separate webpage article is devoted to videoconferencing.
A telepresence system is a high-end videoconferencing system and service usually employed by enterprise-level corporate offices. Telepresence conference rooms use state-of-the art room designs, video cameras, displays, sound-systems and processors, coupled with high-to-very-high capacity bandwidth transmissions.
Typical use of the various technologies described above include calling or conferencing on a one-on-one, one-to-many or many-to-many basis for personal, business, educational, deaf Video Relay Service and tele-medical, diagnostic and rehabilitative use or services. New services utilizing videocalling and videoconferencing, such as teachers and psychologists conducting online sessions, personal videocalls to inmates incarcerated in penitentiaries, and videoconferencing to resolve airline engineering issues at maintenance facilities, are being created or evolving on an ongoing basis.
See also
Action camera
Camera phone
Camfecting
Camgirling
CCTV
Comparison of webcam software
Document camera
IP camera
iSight and IBM UltraPort cameras
List of webcameras and videophones
Optic Nerve (GCHQ)
Pan tilt zoom camera
QuickCam
Trail camera – special outdoor digital camera that operates on batteries and saves motion-detected images to an SD card
References
Bibliography
Mulbach, Lothar; Bocker, Martin; Prussog, Angela. "Telepresence in Videocommunications: A Study on Stereoscopy and Individual Eye Contact", Human Factors, June 1995, Vol. 37, No. 2, p. 290, Gale Document Number: GALE|A18253819. Accessed December 23, 2011 via General Science eCollection (subscription).
Further reading
Bajaj, Vikas. Transparent Government, Via Webcams in India, The New York Times, July 18, 2011, p.B3. Published online: July 17, 2011.
Computing input devices
English inventions
Film and video technology
Privacy
Teleconferencing
Videotelephony
World Wide Web
Articles containing video clips
|
16352281
|
https://en.wikipedia.org/wiki/0.0.0.0
|
0.0.0.0
|
In the Internet Protocol Version 4, the address 0.0.0.0 is a non-routable meta-address used to designate an invalid, unknown or non-applicable target. This address is assigned specific meanings in a number of contexts, such as on clients or on servers.
As a host address
Uses include:
A way to specify "any IPv4 address at all". It is used in this way when configuring servers (i.e. when binding listening sockets); this is known to TCP programmers as INADDR_ANY, and a short code sketch appears after this list. (bind(2) binds to addresses, not interfaces.)
The address a host claims as its own when it has not yet been assigned an address, such as when sending the initial DHCPDISCOVER packet when using DHCP.
The address a host assigns to itself when an address request via DHCP has failed, provided the host's IP stack supports this. This usage has been replaced with the APIPA mechanism in modern operating systems.
A way to explicitly specify that the target is unavailable.
A way to route a request to a nonexistent target instead of the original target. Often used for adblocking purposes.
In the context of servers, 0.0.0.0 can mean "all IPv4 addresses on the local machine". If a host has two IP addresses and a server running on the host is configured to listen on 0.0.0.0, it will be reachable at both of those addresses.
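A minimal Python sketch of the INADDR_ANY usage described above (the port number is arbitrary):

import socket

# Bind to 0.0.0.0: accept connections arriving on any of the host's IPv4 addresses
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 8080))
srv.listen()
print("listening on all local IPv4 addresses, port 8080")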
Routing
In the context of routing tables, a network destination of 0.0.0.0 is used with a network mask of 0 to depict the default route as a destination subnet. This destination is expressed as 0.0.0.0/0 in CIDR notation. It matches all addresses in the IPv4 address space and is present on most hosts, directed towards a local router.
In routing tables, 0.0.0.0 can also appear in the gateway column. This indicates that the gateway to reach the corresponding destination subnet is unspecified. This generally means that no intermediate routing hops are necessary because the system is directly connected to the destination.
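Python's standard ipaddress module can be used to illustrate why 0.0.0.0/0 matches every IPv4 address (a worked example, not part of any routing implementation):

import ipaddress

default = ipaddress.ip_network("0.0.0.0/0")
print(ipaddress.ip_address("203.0.113.7") in default)   # True: every IPv4 address matches
print(default.num_addresses)                            # 4294967296, the entire IPv4 space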
In IPv6
In IPv6, the all-zeros address is typically represented by :: (two colons), which is the short notation of 0:0:0:0:0:0:0:0. The IPv6 variant serves the same purpose as its IPv4 counterpart.
See also
Reserved IP addresses
localhost
References
The notation { 0, 0 } is used to designate 0.0.0.0/x (x being anything from 0 to 32). Quote: "{ 0, 0 } This host on this network. MUST NOT be sent, except as a source address as part of an initialization procedure by which the host learns its own IP address."
External links
Routing
IP addresses
0 (number)
|
31946148
|
https://en.wikipedia.org/wiki/List%20of%20Linux%20adopters
|
List of Linux adopters
|
Linux adopters are companies, organizations and individuals who have moved from other operating systems to Linux (i.e. "desktop Linux"). Desktop Linux has not displaced Microsoft Windows to a large degree, though Linux is dominant on servers. However, Microsoft has adopted the Linux kernel in recent versions of Windows 10, in addition to mainly using its own kernel. Linux is the most popular operating system running on Microsoft Azure, i.e. among Microsoft's customers. Microsoft also offers Azure Sphere, an operating system based solely on the Linux kernel.
Government
As local governments come under pressure from institutions such as the World Trade Organization and the International Intellectual Property Alliance, some have turned to Linux and other free software as an affordable, legal alternative to both pirated software and expensive proprietary computer products from Microsoft, Apple and other commercial companies. The spread of Linux affords some leverage for these countries when companies from the developed world bid for government contracts (since a low-cost option exists), while furnishing an alternative path to development for countries like India and Pakistan that have many citizens skilled in computer applications but cannot afford technological investment at "First World" prices. The cost factor is not the only consideration, though: many governmental institutions (in the public and military sectors) in North America and the European Union have made the transition to Linux for its superior stability and the openness of its source code, which in turn strengthens information security.
Africa
The South African Social Security Agency (SASSA) deployed multi-station Linux desktops to address budget and infrastructure constraints in 50 rural sites.
First National Bank switched more than 12,000 desktop computers to Linux by 2007.
Asia
East
The People's Republic of China exclusively uses Linux as the operating system for its Loongson processor family, with the aim of technology independence.
Kylin, used by the People's Liberation Army in the People's Republic of China. The first version was based on FreeBSD, but since release 3.0 it has been based on Linux.
State owned Industrial and Commercial Bank of China (ICBC) is installing Linux in all of its 20,000 retail branches as the basis for its web server and a new terminal platform. (2005)
North Korea uses a Linux distribution developed by the Korea Computer Center, called Red Star OS, on their computers. Prior to its release in 2008, Red Hat Linux or Windows XP were used.
West
In 2003, the Turkish government decided to create its own Linux distribution, Pardus, developed by UEKAE (National Research Institute of Electronics and Cryptology). The first version, Pardus 1.0, was officially announced on 27 December 2005.
North
In late 2010, Vladimir Putin signed a plan to move the Russian Federation government towards free software including Linux in the second quarter of 2012.
South
Government of India's CDAC developed an Indian Linux distribution, BOSS GNU/Linux (Bharat Operating System Solutions). It is customized to suit India's digital environment and supports most of the Indian languages.
The Government of Kerala, India, announced its official support for free/open-source software in its State IT Policy of 2001, which was formulated after the first-ever free software conference in India, "Freedom First!", held in July 2001 in Trivandrum, the capital of Kerala, where Richard Stallman inaugurated the Free Software Foundation of India. Since then, Kerala's IT Policy has been significantly influenced by FOSS, with several major initiatives such as IT@School Project, possibly the largest single-purpose deployment of Linux in the world, and leading to the formation of the International Centre for Free and Open Source Software (ICFOSS) in 2009.
In March 2014, with the end of support for Windows XP, the Government of Tamil Nadu, India has advised all its departments to install BOSS Linux (Bharat Operating System Solutions).
The Government of Pakistan established a Technology Resource Mobilization Unit in 2002 to enable groups of professionals to exchange views and coordinate activities in their sectors and to educate users about free software alternatives. Linux is an option for poor countries which have little revenue for public investment; Pakistan is using open-source software in public schools and colleges, and hopes to run all government services on Linux eventually.
South-East
In 2010, the Philippines fielded an Ubuntu-powered national voting system.
By July 2010, Malaysia had switched 703 of its 724 state agencies to free and open-source software with a Linux-based operating system. The Chief Secretary to the Government cited "(the) general acceptance of its promise of better quality, higher reliability, more flexibility and lower cost".
Americas
North
Cuba
Students from the University of Information Science in Cuba launched their own distribution of Linux, called Nova, to promote the replacement of Microsoft Windows on civilian and government computers, a project that is now supported by the Cuban government. By early 2011, the Universidad de Ciencias Informáticas announced that it would migrate more than 8,000 PCs to this new operating system.
U.S.
In July 2001, the White House started switching whitehouse.gov to an operating system based on Red Hat Linux and using the Apache HTTP Server. The installation was completed in February 2009. In October 2009, the White House servers adopted Drupal, an open-source content management system software distribution.
The United States Department of Defense uses Linux - "the U.S. Army is the single largest installed base for Red Hat Linux" and the US Navy nuclear submarine fleet runs on Linux, including their sonar systems.
In June 2012, the US Navy signed a US$27,883,883 contract with Raytheon to install Linux ground control software for its fleet of vertical take-off and landing (VTOL) Northrop Grumman MQ-8B Fire Scout drones. The contract involves Naval Air Station Patuxent River, Maryland, which had already spent US$5,175,075 in preparation for the Linux systems.
In April 2006, the US Federal Aviation Administration announced that it had completed a migration to Red Hat Enterprise Linux in one third of the scheduled time and about US$15 million under budget. The switch saved a further US$15 million in datacenter operating costs.
The US National Nuclear Security Administration operates the world's tenth fastest supercomputer, the IBM Roadrunner, which uses Red Hat Enterprise Linux along with Fedora as its operating systems.
The city government of Largo, Florida uses Linux and has won international recognition for their implementation, indicating that it provides "extensive savings over more traditional alternatives in city-wide applications."
South
Brazil uses PC Conectado, a program utilizing Linux.
In 2004, Venezuela's government approved the 3390 decree, to give preference to using free software in public administration. One result of this policy is the development of Canaima, a Debian-based Linux distribution.
Europe
Austria
Austria's city of Vienna has chosen to start migrating its desktop PCs to Debian-based Wienux. However, the idea was largely abandoned, because the necessary software was incompatible with Linux.
Czech Republic
Czech Post migrated 4000 servers and 12,000 clients to Novell Linux in 2005
France
In 2007, France's national police force (the National Gendarmerie) started moving its 90,000 desktops from Windows XP to an Ubuntu-based OS, GendBuntu, over concerns about the additional training costs of moving to Windows Vista and following the success of OpenOffice.org roll-outs. The force saved about €50 million on software licensing between 2004 and 2008. The migration was largely completed in 2014.
France's Ministry of Agriculture uses Mandriva Linux.
The French Parliament switched to using Ubuntu on desktop PCs in 2007. However, in 2012, it was decided to let each Member of Parliament choose between Windows and Linux.
Germany
The city government of Munich, Germany, chose in 2003 to start migrating its 14,000 desktops to Debian-based LiMux. Even though more than 80 percent of workstations used OpenOffice and 100 percent used Firefox/Thunderbird five years later (November 2008), adoption of Linux itself stood at only 20 percent in June 2010. The effort was later reorganized, focusing on smaller deployments and on winning over staff to the value of the program. By the end of 2011 the program had exceeded its goal and had changed over 9,000 desktops to Linux. At the end of 2012 the city reported that the migration to Linux had been highly successful and had already saved the city over €11 million (US$14 million). Deputy Mayor Josef Schmid later said that the city was putting together an independent expert group to look at moving back to Microsoft because of issues with LiMux. The primary issue was compatibility: users in the rest of Germany who use other (Microsoft) software had trouble with the files generated by Munich's open-source applications. The second was price, with Schmid saying that the city now had the impression that "Linux is very expensive" due to custom programming. The independent group was to advise on the best course of action, and Schmid said that if it recommended Microsoft software, a switch back would not be impossible. The city council said it had already saved more than US$10 million and that there was no major issue with the switch to Linux. Some observers, such as Silviu Stahie of Softpedia, suggested that the attempted rejection of Linux was influenced by Microsoft and its supporters, and that the question was predominantly political rather than technical; Microsoft's German headquarters committed to move to Munich during this period. In February 2017, the city council considered moving from the Linux-based OS to Windows 10, around the time Microsoft Germany moved its headquarters to Munich.
The Federal Employment Office of Germany (Bundesagentur für Arbeit) has migrated 13,000 public workstations from Windows NT to openSUSE.
Iceland
In March 2012, Iceland announced that it wished to migrate to open-source software in public institutions. Schools had already migrated from Windows to Ubuntu Linux.
North Macedonia
Republic of North Macedonia's Ministry of Education and Science deployed more than 180,000 Ubuntu based classroom desktops, and has encouraged every student in the Republic of North Macedonia to use Ubuntu computer workstations.
The Netherlands
The Dutch Police Internet Research and Investigation Network (iRN) has only used free and open-source software based on open standards, publicly developed with the source code available on the Internet for audit, since 2003. They use 2200 Ubuntu workstations.
Russia
In 2014, Russia announced plans to move their Ministry of Health to Linux as a counter to sanctions over the annexation of Crimea by the Russian Federation and a means of hurting US corporate interests, such as Microsoft.
In 2018, Russia began adopting Astra Linux, an operating system which is certified to handle data classified as "special importance", on their military computer systems.
Spain
Spain was noted as the furthest along the road to Linux adoption in 2003, for example with the Linux distribution LinEx.
The regional government of Andalusia (the Andalusian Autonomous Government) in Spain developed its own Linux distribution, called Guadalinex, in 2004.
The city government of Barcelona in Spain announced in 2018 that it would migrate all desktop software from proprietary to free/open-source alternatives, and would gradually migrate from proprietary operating systems to Linux.
Switzerland
Switzerland's Canton of Solothurn decided in 2001 to migrate its computers to Linux, but in 2010 the canton made a U-turn, deciding to use Windows 7 for desktop clients.
United Kingdom
Hackney London Borough Council used Linux laptops for its 4000 employees to allow working from home during the COVID-19 pandemic.
Education
Linux is often used in technical disciplines at universities and research centres. This is due to several factors, including that Linux is available free of charge and includes a large body of free/open-source software. To some extent, technical competence of computer science and software engineering academics is also a contributor, as is stability, maintainability, and upgradability. IBM ran an advertising campaign entitled "Linux is Education" featuring a young boy who was supposed to be "Linux".
Examples of large scale adoption of Linux in education include the following:
The OLPC XO-1 (previously called the MIT $100 laptop and The Children's Machine) is an inexpensive laptop running Linux, distributed to millions of children as part of the One Laptop Per Child project, especially in developing countries.
Europe
Germany
Germany has announced that 560,000 students in 33 universities will migrate to Linux.
In 2012, the Leibniz-Rechenzentrum (Leibniz Supercomputing Centre) (LRZ) of the Bavarian Academy of Sciences and Humanities unveiled the SuperMUC, the world's fourth most powerful supercomputer. The computer is x86-based and features 155,000 processor cores with a maximum speed of 3 petaflops of processing power and 324 terabytes of RAM. Its operating system is SUSE Linux Enterprise Server.
Italy
Schools in Bolzano, Italy, with a student population of 16,000, switched to a custom distribution of Linux (FUSS Soledad GNU/Linux) in September 2005.
North Macedonia
Republic of North Macedonia deployed 5,000 Linux desktops running Ubuntu across all 468 public schools and 182 computer labs (December 2005). Later in 2007, another 180,000 Ubuntu thin client computers were deployed.
U.K.
In 2013, Westcliff High School for Girls in the United Kingdom successfully moved from Windows to openSUSE.
Orwell High School, a school with about 1,000 students in Felixstowe, England, has switched to Linux. The school has just received Specialist School for Technology status through a government initiative.
Switzerland
All primary and secondary public schools in the Swiss Canton of Geneva switched to using Ubuntu for the PCs used by teachers and students in 2013–14. The switch was completed by all of the 170 primary public schools, covering over 2,000 computers. The migration of the canton's 20 secondary schools was planned for the school year 2014–15.
Americas
Brazil has 35 million students in over 50,000 schools using 523,400 computer stations all running Linux.
22,000 students in the US state of Indiana had access to Linux Workstations at their high schools in 2006.
In 2009, Venezuela's Ministry of Education began a project called Canaima-educativo, to provide all students in public schools with "Canaimita" laptop computers with the Canaima Debian-based Linux distribution pre-installed, as well as with open-source educational content.
Asia
China
The Chinese government is buying 1.5 million Linux Loongson PCs as part of its plans to support its domestic industry. In addition the province of Jiangsu will install as many as 150,000 Linux PCs, using Loongson processors, in rural schools starting in 2009.
Indonesia
By December 2013, about 500 Indonesian schools were running openSUSE.
Georgia
In 2004, Georgia began running all its school computers and LTSP thin clients on Linux, mainly using Kubuntu, Ubuntu and stripped Fedora-based distros.
India
The Indian government's tablet computer initiative for student use employs Linux as the operating system as part of its drive to produce a tablet PC for under 1,500 rupees (US$35).
The Indian state of Tamil Nadu plans to distribute 100,000 Linux laptops to its students.
Government officials of Kerala, India announced they will use only free software, running on the Linux platform, for computer education, starting with the 2,650 government and government-aided high schools.
The Indian state of Tamil Nadu has issued a directive to local government departments asking them to switch over to open-source software, in the wake of Microsoft's decision to end support for Windows XP in April 2014.
Philippines
The Philippines has deployed 13,000 desktops running Fedora; the first 10,000 were delivered in December 2007 by Advanced Solutions Inc. Another 10,000 desktops running Edubuntu and Kubuntu are planned.
Russia
Russia announced in October 2007 that all its school computers would run on Linux, to avoid the cost of licensing software that was then unlicensed.
Home
Sony's PlayStation 3 came with a hard disk (20 GB, 60 GB or 80 GB) and was specially designed to allow easy installation of Linux on the system. However, Linux was prevented from accessing certain functions of the PlayStation such as 3D graphics. Sony also released a Linux kit for its PlayStation 2 console (see Linux for PlayStation 2). PlayStation hardware running Linux has occasionally been used in small-scale distributed computing experiments, due to the ease of installation and the relatively low price of a PS3 compared to other hardware offering similar performance. On April 1, 2010, Sony disabled the ability to install Linux "due to security concerns", starting with firmware version 3.21.
In 2008, many netbook models were shipped with Linux installed, usually with a lightweight distribution, such as Xandros or Linpus, to reduce resource consumption on their limited resources.
Through 2007 and 2008, Linux distributions with an emphasis on ease of use such as Ubuntu became increasingly popular as home desktop operating systems, with some OEMs, such as Dell, offering models with Ubuntu or other Linux distributions on desktop systems.
In 2011, Google introduced its Chromebooks, web thin clients based on Linux and supplying just a web browser, file manager and media player. They also have the ability to remote desktop into other computers via the free Chrome Remote Desktop extension. In 2012 the first Chromebox, a desktop equivalent of the Chromebook, was introduced. By 2013 Chromebooks had captured 20-25% of the US market for sub-$300 laptops.
Android, created by Google in 2007, is a smartphone and tablet operating system which, as of late 2013, ran on 80% of smartphones and 60% of tablets worldwide; it is pre-installed on devices by brand hardware manufacturers.
In 2013, Valve publicly released ports of Steam and the Source engine to Linux, allowing many popular Valve titles such as Team Fortress 2 and Half-Life 2 to be played on Linux. Later that same year, Valve announced its upcoming Steam Machine consoles, which would by default run SteamOS, an operating system based on the Linux kernel. Valve has since created a compatibility layer called Proton, which makes it possible to run many Windows games on Linux.
In March 2014, Ubuntu claimed 22,000,000 users.
Businesses and non-profits
Linux is used extensively on servers in businesses, and has been for a long time. Linux is also used in some corporate environments as the desktop platform for their employees, with commercially available solutions including Red Hat Enterprise Linux, SUSE Linux Enterprise Desktop, and Ubuntu.
Free I.T. Athens, founded in 2005 in Athens, Georgia, United States, is a non-profit organization dedicated to rescuing computers from landfills, recycling them or refurbishing them using Linux exclusively.
Burlington Coat Factory has used Linux exclusively since 1999.
Ernie Ball, known for its famous Super Slinky guitar strings, has used Linux as its desktop operating system since 2000.
Novell is undergoing a migration from Windows to Linux. Of its 5500 employees, 50% were successfully migrated as of April 2006. This was expected to rise to 80% by November.
Wotif, the Australian hotel booking website, migrated from Windows to Linux servers to keep up with the growth of its business.
Union Bank of California announced in January 2007 that it would standardize its IT infrastructure on Red Hat Enterprise Linux in order to lower costs.
Peugeot, the European car maker, announced plans to deploy up to 20,000 copies of Novell's Linux desktop, SUSE Linux Enterprise Desktop, and 2,500 copies of SUSE Linux Enterprise Server, in 2007.
Mindbridge, a software company, announced in September 2007 that it had migrated a large number of Windows servers onto a smaller number of Linux servers and a few BSD servers. It claims to have saved "bunches of money."
Virgin America, the low cost U.S. airline, uses Linux to power its in-flight entertainment system, RED.
Amazon.com, the US based mail-order retailer, uses Linux "in nearly every corner of its business".
Google uses a version of Ubuntu internally nicknamed Goobuntu. In August 2017, Google announced that it would be replacing Goobuntu with gLinux, an in-house distro based on the Debian Testing branch.
IBM does extensive development work for Linux and also uses it on desktops and servers internally. The company also created a TV advertising campaign: IBM supports Linux 100%.
Wikimedia Foundation moved to running its Wikipedia servers on Ubuntu in late 2008, after having previously used a combination of Red Hat and Fedora.
DreamWorks Animation has used Linux since 2001, and runs more than 1,000 Linux desktops and more than 3,000 Linux servers.
The Chicago Mercantile Exchange employs an all-Linux computing infrastructure and has used it to process over a quadrillion dollars' worth of financial transactions.
The Chi-X pan-European equity exchange runs its MarketPrizm trading platform software on Linux.
The London Stock Exchange uses the Linux-based MillenniumIT Millennium Exchange software for its trading platform and predicts that moving to Linux from Windows will give it an annual cost savings of at least £10 million ($14.7 million) from 2011 to 2012.
The New York Stock Exchange uses Linux to run its trading applications.
Mobexpert Group, the leading furniture manufacturer and retailer in Romania, extensively uses Linux, LibreOffice and other free software in its data communications and processing systems, including some desktops.
American electronic music composer Kim Cascone migrated from Apple Mac to Ubuntu for his music studio, performance use and administration in 2009.
Laughing Boy Records under the direction of owner Geoff Beasley switched from doing audio recording on Windows to Linux in 2004 as a result of Windows spyware problems.
Nav Canada's new Internet Flight Planning System, for roll-out in 2011, is written in Python and runs on Red Hat Linux.
Electrolux Frigidaire Infinity i-kitchen is a "smart appliance" refrigerator that uses a Linux operating system, running on an embedded 400 MHz Freescale i.MX25 processor with 128 MB of RAM and a 480×800 touch panel.
DukeJets LLC (USA) and Duke Jets Ltd. (Canada), air charter brokerage companies, switched from Windows to Ubuntu Linux in 2012.
Banco do Brasil, the biggest bank in Brazil, has moved nearly all desktops to Linux, except some corporate ones and a few that are needed to operate specific hardware. It began migrating its servers to Linux in 2002. Branch servers and ATMs all run Linux. The distribution of choice is openSUSE 11.2.
KLM, the Royal Aviation Company of the Netherlands, uses Linux on the OSS-based version of its KLM WebFarm.
Ocado, the online supermarket, uses Linux in its data centres.
Kazi Farms Group, a large poultry and food products company in Bangladesh, migrated 1000 computers to Linux. An associated TV channel, Deepto TV, as well as an associated daily newspaper Dhaka Tribune also migrated to Linux.
Zando Computer, an IT consulting company located in Bucharest, Romania, uses Linux for its business needs (server and desktop). The company recommends to its clients, and actively deploys, Linux, LibreOffice (OpenDocument format solutions) and other categories of free software.
At the 2015 Consumer Electronics Show, Nvidia CEO Jen-Hsun Huang made his extensive presentations using Ubuntu Linux.
The Statistical Office of the Republic of Serbia's ICT report for 2017 showed that 19.8% of companies in Serbia use Linux as their main operating system (up from 14.5% in 2016). Linux is largely used in Serbian large enterprises (companies that fulfil two of three conditions: 250+ employees, revenue of 35+ million euros, total assets of 17.5+ million euros), where Linux adoption has reached 40.9%.
Scientific institutions
NASA decided to switch the International Space Station laptops running Windows XP to Debian 6.
Both CERN and Fermilab use Scientific Linux in all their work; this includes running the Large Hadron Collider, the Dark Energy Camera, and the 20,000 internal servers of CERN.
WLCG is composed of 576 sites with more than 390,000 processors and 150 petabytes of storage and uses Linux on all its nodes.
Canada's largest supercomputer, the IBM iDataPlex cluster computer at the University of Toronto, uses Linux as its operating system.
The Internet Archive uses hundreds of x86 servers to catalogue the Internet, all of them running Linux.
The ASV Roboat autonomous robotic sailboat runs on Linux.
Tianhe-I, the world's fastest supercomputer as of October 2010, located at the National Centre for Supercomputing in Tianjin, China, runs Linux.
The University of Portsmouth in the United Kingdom has deployed a "cost effective" high performance computer that will be used to analyse data from telescopes around the world, run simulations and test the current theories about the universe. Its operating system is Scientific Linux. Dr David Bacon of the University of Portsmouth said: "Our Institute of Cosmology is in a great position to use this high performance computer to make real breakthroughs in understanding the universe, both by analysing the very latest astronomical observations, and by calculating the consequences of mind-boggling new theories...By selecting Dell’s industry-standard hardware and open-source software we’re able to free up budget that would have normally been spent on costly licences and reinvest it."
In September 2011, ten autonomous unmanned air vehicles were flown in flocking flight by the École Polytechnique Fédérale de Lausanne's Laboratory of Intelligence Systems in Lausanne, Switzerland. The UAVs sense each other and control their own flight in relation to each other; each has an independent processor running Linux to accomplish this.
Celebrities
British actor Stephen Fry, in August 2012 stated that he uses Linux. "Do I use Linux on any of my devices? Yes – I use Ubuntu these-days; it seems the friendliest."
In 2008, Jamie Hyneman, co-host of the American television series Mythbusters, advocated Linux-based operating systems as a solution to software bloat.
Science fiction writer Cory Doctorow uses Ubuntu.
Actor Wil Wheaton sometimes uses Linux distributions, but not as his primary operating system.
See also
Comparison of open source and closed source
References
Linux
|
29324444
|
https://en.wikipedia.org/wiki/Engineering%20Code%20Snippets%20Project
|
Engineering Code Snippets Project
|
ECS (Engineering Code Snippets) is an open-source project for engineering software code and programs, developed at the Katholieke Universiteit Leuven. ECS was started in order to provide code snippets and complete programs which can prove useful to engineers, scientists and technologists worldwide. Code snippets for different platforms such as C, C++, Java, MATLAB, PHP, C#, and HTML are being made available through this project. All submitted code is reviewed to ensure that it runs without errors, and only then is it published. The project uses a unique tagging feature for each code snippet, which includes a title, a brief description of the code, specific instructions for running the code, and the creation date.
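As an illustration of that tagging scheme, the per-snippet metadata could be modeled in C roughly as follows; the field names are our own assumptions, not taken from the ECS project itself:

/* Illustrative sketch of the metadata the tagging feature is said to
   attach to each snippet; all field names are hypothetical. */
struct ecs_snippet_tag {
    const char *title;        /* short name of the snippet */
    const char *description;  /* brief description of the code */
    const char *instructions; /* specific instructions for running it */
    const char *created;      /* creation date, e.g. "2009-11-15" */
};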
Motivation and History
The web is cluttered with snippets of code that often serve no meaningful purpose. This makes a programmer's life difficult: unreliable code that does not compile gets inserted into large software projects, resulting in bugs that go undetected until reported by users. This project therefore aims to provide a comprehensive framework in which code snippets are well categorized and serve the specific needs of engineering projects. The attempt made in this project is to organize code snippets by category and provide an open-source framework that makes it easy for programmers to access a growing, uncluttered database online and use it for their own software development efforts. This open-source resource also aims to facilitate the use of snippets in teaching engineering principles, by providing examples that fit the curricula of engineering schools.
The foundations of this project were laid in November 2009, and an intranet effort was launched as a starting point. Later the project was moved to the World Wide Web to facilitate global access. The project pages are licensed under a Creative Commons Attribution license, and the project aims to fulfill the GNU Project's aims of freedom to run the program, freedom to access the code, freedom to redistribute the program to anyone, and freedom to improve the software.
See also
GNU Free Documentation License
Free Software Foundation
ECS Official Site: https://sites.google.com/site/engineeringcodesnippets
References
Computer programming
Free software culture and documents
|
55364
|
https://en.wikipedia.org/wiki/MMX%20%28instruction%20set%29
|
MMX (instruction set)
|
MMX is a single instruction, multiple data (SIMD) instruction set architecture designed by Intel, introduced on January 8, 1997 with its Pentium P5 microarchitecture-based line of microprocessors, named "Pentium with MMX Technology". It developed out of a similar unit introduced on the Intel i860, and earlier the Intel i750 video pixel processor. MMX is a processor supplementary capability that is supported on IA-32 processors by Intel and other vendors.
The New York Times described the initial push, including Super Bowl advertisements, as focused on "a new generation of glitzy multimedia products, including videophones and 3-D video games."
MMX has subsequently been extended by several new instruction sets from Intel and others: 3DNow!, Streaming SIMD Extensions (SSE), and ongoing revisions of Advanced Vector Extensions (AVX).
Overview
Naming
MMX is officially a meaningless initialism trademarked by Intel; unofficially, the initials have been variously explained as standing for
MultiMedia eXtension,
Multiple Math eXtension, or
Matrix Math eXtension.
Advanced Micro Devices (AMD), during one of its many court battles with Intel, produced marketing material from Intel indicating that MMX stood for "Matrix Math Extensions". Since an initialism cannot be trademarked, this was an attempt to invalidate Intel's trademark. In 1995, Intel filed suit against AMD and Cyrix Corp. for misuse of its trademark MMX. AMD and Intel settled, with AMD acknowledging MMX as a trademark owned by Intel, and with Intel granting AMD rights to use the MMX trademark as a technology name, but not a processor name.
Technical details
MMX defines eight processor registers, named MM0 through MM7, and operations that operate on them. Each register is 64 bits wide and can be used to hold either 64-bit integers, or multiple smaller integers in a "packed" format: one instruction can then be applied to two 32-bit integers, four 16-bit integers, or eight 8-bit integers at once.
MMX provides only integer operations. When originally developed, for the Intel i860, the use of integer math made sense (both 2D and 3D calculations required it), but as graphics cards that did much of this became common, integer SIMD in the CPU became somewhat redundant for graphical applications. Alternatively, the saturation arithmetic operations in MMX could significantly speed up some digital signal processing applications.
To avoid compatibility problems with the context switch mechanisms in existing operating systems, the MMX registers are aliases for the existing x87 floating-point unit (FPU) registers, which context switches would already save and restore. Unlike the x87 registers, which behave like a stack, the MMX registers are each directly addressable (random access).
Any operation involving the floating-point stack might also affect the MMX registers and vice versa, so this aliasing makes it difficult to work with floating-point and SIMD operations in the same program. To maximize performance, software often used the processor exclusively in one mode or the other, deferring the relatively slow switch between them as long as possible.
Each 64-bit MMX register corresponds to the mantissa part of an 80-bit x87 register. The upper 16 bits of the x87 registers thus go unused in MMX, and these bits are all set to ones, making them Not a Number (NaN) data types, or infinities in the floating-point representation. This can be used by software to decide whether a given register's content is intended as floating-point or SIMD data.
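A minimal sketch using the MMX compiler intrinsics from mmintrin.h (assuming a GCC-style x86 toolchain, e.g. gcc -mmmx) ties these details together: one instruction performs eight saturating 8-bit additions in an MM register, and EMMS is issued afterwards because of the x87 aliasing described above:

#include <stdio.h>
#include <stdint.h>
#include <mmintrin.h>  /* MMX intrinsics */

int main(void) {
    /* Unions keep the 8-byte alignment __m64 expects. */
    union { uint8_t b[8]; __m64 v; } x = {{250, 251, 252, 253, 1, 2, 3, 4}};
    union { uint8_t b[8]; __m64 v; } y = {{ 10,  10,  10,  10, 10, 10, 10, 10}};
    union { uint8_t b[8]; __m64 v; } r;

    /* PADDUSB: eight unsigned 8-bit additions with saturation, so
       250 + 10 clamps to 255 instead of wrapping around to 4. */
    r.v = _mm_adds_pu8(x.v, y.v);

    _mm_empty();  /* EMMS: release the aliased x87 registers before FP use */

    for (int i = 0; i < 8; i++)
        printf("%d ", r.b[i]);
    printf("\n");  /* prints: 255 255 255 255 11 12 13 14 */
    return 0;
}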
Software support
Software support for MMX developed slowly. Intel's C Compiler and related development tools gained intrinsics for invoking MMX instructions, and Intel released libraries of common vectorized algorithms using MMX. Both Intel and Metrowerks attempted automatic vectorization in their compilers, but the operations in the C programming language mapped poorly onto the MMX instruction set, and custom algorithms as of 2000 typically still had to be written in assembly language.
Successors
AMD, a competing x86 microprocessor vendor, enhanced Intel's MMX with its own 3DNow! instruction set. 3DNow! is best known for adding single-precision (32-bit) floating-point support to the SIMD instruction set, among other integer and more general enhancements.
Following MMX, Intel's next major x86 extension was the Streaming SIMD Extensions (SSE), introduced with the Pentium III family in 1999, roughly a year after AMD's 3DNow! was introduced.
SSE addressed the core shortcomings of MMX (inability to mix integer-SIMD ops with any floating-point ops) by creating a new 128-bit wide register file (XMM0–XMM7) and new SIMD instructions for it. Like 3DNow!, SSE focused exclusively on single-precision floating-point operations (32-bit); integer SIMD operations were still performed using the MMX register and instruction set. However, the new XMM register-file allowed SSE SIMD-operations to be freely mixed with either MMX or x87 FPU ops.
Streaming SIMD Extensions 2 (SSE2), introduced with the Pentium 4, further extended the x86 SIMD instruction set with integer (8/16/32-bit) and double-precision floating-point data support for the XMM register file. SSE2 also allowed the MMX operation codes (opcodes) to use XMM register operands; these registers were later extended to the even wider YMM and ZMM registers by the AVX and AVX-512 extensions.
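A comparable sketch on SSE2 (again assuming a GCC-style toolchain, e.g. gcc -msse2) shows what the XMM register file changed: four 32-bit integer additions in one instruction, with no EMMS bookkeeping because the XMM registers are independent of the x87 stack:

#include <stdio.h>
#include <emmintrin.h>  /* SSE2 intrinsics */

int main(void) {
    /* _mm_set_epi32 takes lanes from high to low. */
    __m128i a = _mm_set_epi32(4, 3, 2, 1);
    __m128i b = _mm_set_epi32(40, 30, 20, 10);
    __m128i c = _mm_add_epi32(a, b);  /* PADDD on a 128-bit register */

    int out[4];
    _mm_storeu_si128((__m128i *)out, c);
    printf("%d %d %d %d\n", out[0], out[1], out[2], out[3]);  /* 11 22 33 44 */
    return 0;
}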
MMX in embedded applications
Intel's and Marvell Technology Group's XScale microprocessor cores, starting with the PXA270, include a SIMD instruction set extension to the ARM architecture core named Intel Wireless MMX Technology (iwMMXt), whose functions are similar to those of the IA-32 MMX extension. It provides arithmetic and logic operations on 64-bit integer numbers, in which the software may choose to instead perform two 32-bit, four 16-bit or eight 8-bit operations in a single instruction. The extension contains 16 data registers of 64 bits and eight control registers of 32 bits. All registers are accessed through the standard ARM architecture coprocessor mapping mechanism. iwMMXt occupies coprocessor 0 and 1 space, and some of its opcodes clash with the opcodes of the earlier floating-point extension, FPA.
Later versions of Marvell's ARM processors support both Wireless MMX (WMMX) and Wireless MMX2 (WMMX2) opcodes.
See also
Extended MMX
References
External links
Intel Intrinsics Guide
Intel Pentium Processor with MMX Technology Documentation
IA Software Developer's Manual, Vol 1 (PDF), see chapter 8 for MMX programming
Computer-related introductions in 1997
SIMD computing
X86 instructions
|
5514699
|
https://en.wikipedia.org/wiki/Files%20transferred%20over%20shell%20protocol
|
Files transferred over shell protocol
|
Files transferred over Shell protocol (FISH) is a network protocol that uses Secure Shell (SSH) or Remote Shell (RSH) to transfer files between computers and manage remote files.
The advantage of FISH is that all it requires on the server side is an SSH or RSH implementation, a Unix shell, and a set of standard Unix utilities (like ls, cat or dd), unlike other methods of remote access to files via a remote shell, such as scp, which requires scp on the server side. Optionally, there can be a special FISH server program (called start_fish_server) on the server, which executes FISH commands instead of the Unix shell and thus speeds up operations.
The protocol was designed by Pavel Machek in 1998 for the Midnight Commander software tool.
Protocol messages
The client sends text requests of the following form:
#FISH_COMMAND arguments...
equivalent shell commands,
which may be multi-line
FISH commands are all predefined; their shell equivalents may vary.
FISH commands always have priority: the server is expected to execute a FISH command if it understands it. If it does not, however, it can try to execute the equivalent shell command instead.
When there is no special server program, the Unix shell ignores the FISH command as a comment and executes the equivalent shell command(s).
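As an illustration of this fallback, a hypothetical directory-listing request might look as follows (the command name and reply here are illustrative, not quoted from the protocol specification):

#LIST /tmp
ls /tmp; echo '### 200'

A server running start_fish_server recognizes the #LIST line and executes its own implementation; a plain Unix shell skips that line as a comment and runs the ls line instead, so either way the client receives a listing followed by the expected end-of-reply line.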
Server replies are multi-line, but always end with a line of the form
### xyz<optional text>
where ### is a prefix marking the final line and xyz is the return code.
Return codes are a superset to those used in FTP.
The codes 000 and 001 are special; their meaning depends on the presence of server output before the end line.
Session initiation
The client initiates an SSH or RSH connection with echo FISH:;/bin/sh as the command executed on the remote machine. This should make it possible for the server to distinguish FISH connections from normal RSH or SSH connections.
The first two commands sent to the server are FISH and VER to negotiate FISH protocol, its version and extensions.
#FISH
echo; start_fish_server; echo '### 200'
#VER 0.0.2 <feature1> <feature2> <...>
echo '### 000'
The server may reply to the VER command with lines like
VER 0.0.0 <feature2> <...>
### 200
which indicates supported version of the FISH protocol and supported extensions.
Implementations
Midnight Commander
Lftp
fish:// KDE kioslave (with konqueror, Krusader or Dolphin)
tramp-fish.el implemented it in Emacs TRAMP (though it may have since been removed for lack of use); Emacs TRAMP overall has goals similar to FISH: remote access to files through a remote Unix shell.
See also
SSHFS
SSH File Transfer Protocol
External links
README.fish from Midnight Commander
Network file transfer protocols
|
955672
|
https://en.wikipedia.org/wiki/IEEE%20802.16
|
IEEE 802.16
|
IEEE 802.16 is a series of wireless broadband standards written by the Institute of Electrical and Electronics Engineers (IEEE). The IEEE Standards Board established a working group in 1999 to develop standards for broadband for wireless metropolitan area networks. The Workgroup is a unit of the IEEE 802 local area network and metropolitan area network standards committee.
Although the 802.16 family of standards is officially called WirelessMAN in IEEE, it has been commercialized under the name "WiMAX" (from "Worldwide Interoperability for Microwave Access") by the WiMAX Forum industry alliance. The Forum promotes and certifies compatibility and interoperability of products based on the IEEE 802.16 standards.
The 802.16e-2005 amendment version was announced as being deployed around the world in 2009.
The version IEEE 802.16-2009 was amended by IEEE 802.16j-2009.
Standards
Projects publish draft and proposed standards with the letter "P" prefixed. Once a standard is ratified and published, the "P" is dropped and replaced by a trailing dash and the year of publication as a suffix.
Projects
802.16e-2005 Technology
The 802.16 standard essentially standardizes two aspects of the air interface – the physical layer (PHY) and the media access control (MAC) layer. This section provides an overview of the technology employed in these two layers in the mobile 802.16e specification.
PHY
802.16e uses scalable OFDMA to carry data, supporting channel bandwidths of between 1.25 MHz and 20 MHz, with up to 2048 subcarriers. It supports adaptive modulation and coding, so that in conditions of good signal, a highly efficient 64 QAM coding scheme is used, whereas when the signal is poorer, a more robust BPSK coding mechanism is used. In intermediate conditions, 16 QAM and QPSK can also be employed. Other PHY features include support for multiple-input multiple-output (MIMO) antennas in order to provide good non-line-of-sight propagation (NLOS) characteristics (or higher bandwidth) and hybrid automatic repeat request (HARQ) for good error correction performance.
Although the standards allow operation in any band from 2 to 66 GHz, mobile operation is best in the lower bands which are also the most crowded, and therefore most expensive.
MAC
The 802.16 MAC describes a number of Convergence Sublayers which describe how wireline technologies such as Ethernet, Asynchronous Transfer Mode (ATM) and Internet Protocol (IP) are encapsulated on the air interface, and how data is classified, etc. It also describes how secure communications are delivered, by using secure key exchange during authentication, and encryption using Advanced Encryption Standard (AES) or Data Encryption Standard (DES) during data transfer. Further features of the MAC layer include power saving mechanisms (using sleep mode and idle mode) and handover mechanisms.
A key feature of 802.16 is that it is a connection-oriented technology. The subscriber station (SS) cannot transmit data until it has been allocated a channel by the base station (BS). This allows 802.16e to provide strong support for quality of service (QoS).
QoS
Quality of service (QoS) in 802.16e is supported by allocating each connection between the SS and the BS (called a service flow in 802.16 terminology) to a specific QoS class. In 802.16e, there are five QoS classes: Unsolicited Grant Service (UGS), Real-time Polling Service (rtPS), Extended Real-time Polling Service (ertPS), Non-real-time Polling Service (nrtPS) and Best Effort (BE).
The BS and the SS use a service flow with an appropriate QoS class (plus other parameters, such as bandwidth and delay) to ensure that application data receives QoS treatment appropriate to the application.
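For illustration only, the five scheduling services can be written down as a simple C enumeration; the identifiers follow the standard's abbreviations, while the comments paraphrase their intent:

/* Illustrative sketch: the five 802.16e QoS scheduling classes. */
enum wimax_qos_class {
    QOS_UGS,   /* Unsolicited Grant Service: fixed-size periodic grants */
    QOS_RTPS,  /* Real-time Polling Service: variable-size real-time data */
    QOS_ERTPS, /* Extended rtPS: real-time flows with varying rates, e.g. VoIP */
    QOS_NRTPS, /* Non-real-time Polling Service: delay-tolerant streams */
    QOS_BE     /* Best Effort: no throughput or delay guarantees */
};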
Certification
Because the IEEE only sets specifications but does not test equipment for compliance with them, the WiMAX Forum runs a certification program wherein members pay for certification. WiMAX certification by this group is intended to guarantee compliance with the standard and interoperability with equipment from other manufacturers. The mission of the Forum is to promote and certify compatibility and interoperability of broadband wireless products.
See also
WiBro
WiMAX
WiBAS
WiMAX MIMO
Wireless mesh network
4G LTE
References
External links
IEEE Std 802.16-2004
IEEE Std 802.16e-2005
IEEE Std 802.16-2009
IEEE Std 802.16-2012
IEEE Std 802.16.1-2012
IEEE Std 802.16.1-2017
The WiMAX Forum
The implications of WiMAX for competition and regulation A paper of the OECD, Organisation for Economic Co-operation and Development
IEEE 802.16m Technology Introduction
IEEE 802
WiMAX
Wireless networking standards
|
13503628
|
https://en.wikipedia.org/wiki/Wii%20system%20software
|
Wii system software
|
The Wii system software is a discontinued set of updatable firmware versions and a software frontend on the Wii home video game console. Updates, which could be downloaded over the Internet or read from a game disc, allowed Nintendo to add additional features and software, as well as to patch security vulnerabilities used by users to load homebrew software. When a new update became available, Nintendo sent a message to the Wii Message Board of Internet-connected systems notifying them of the available update.
Most game discs, including first-party and third-party games, include system software updates so that systems that are not connected to the Internet can still receive them. The system menu will not start such games if their updates have not been installed, which effectively forces users to install updates in order to play these games. Some games, such as the online titles Super Smash Bros. Brawl and Mario Kart Wii, contain specific extra updates, such as the ability to receive Wii Message Board posts from game-specific addresses; these games therefore always require that an update be installed before they first run on a given console.
Technology
IOS
The Wii's firmware has many active branches known as IOSes, thought by Wii homebrew developers to stand for "Input Output Systems" or "Internal Operating Systems". The currently active IOS, also referred to simply as "IOS", runs on a separate ARM926EJ-S processor, unofficially nicknamed Starlet. The patent for the Wii U shows a similar device, simply named "Input/Output Processor". IOS controls I/O between the code running on the main Broadway processor and the various Wii hardware that does not also exist on the GameCube.
Except for bug fixes, new IOS versions do not replace existing IOS versions. Instead, Wii consoles have multiple IOS versions installed. All native Wii software (including games distributed on Nintendo optical discs, the System Menu itself, Virtual Console games, WiiWare, and Wii Channels), with the exception of certain homebrew applications, have the IOS version hardcoded into the software.
When software is run, the hardcoded IOS is loaded by the Wii, which then loads the software itself. If that IOS does not exist on the Wii, in the case of disc-based software, it is installed automatically (after the user is prompted). With downloaded software, this should theoretically not happen, as the user cannot access the shop to download software unless they have all the IOS versions it requires. However, if homebrew is used to forcefully install or run a piece of software when the required IOS does not exist, the user is returned to the system menu.
Nintendo created this system so that new updates would not unintentionally break compatibility with older games, but it does have the side effect that it uses up space on the Wii's internal NAND Flash memory. IOSes are referred to by their number, which can theoretically be between 3 and 255, although many numbers are skipped, presumably being development versions that were never completed.
Only one IOS version can run at any given time. The only time an IOS is not running is when the Wii enters GameCube backward compatibility mode, during which the Wii runs a variant of IOS specifically for GameCube games, MIOS, which contains a modified version of the GameCube's IPL. Custom IOSes, called cIOSes, can be installed with homebrew. The main purpose of cIOS is to allow homebrew users to use other homebrew apps such as USB Loader GX (allows games stored in the WBFS file format to be run from a USB stick).
User interface
The system provides a graphical interface to the Wii's abilities. All games run directly on the Broadway processor, and either directly interface with the hardware (for the hardware common to the Wii and GameCube), or interface with IOS running on the ARM architecture processor (for Wii-specific hardware). The ARM processor does not have access to the screen, and therefore neither does IOS. This means that while a piece of software is running, everything seen on the screen (including the HOME button menu) comes from that software, and not from any operating system or firmware. Therefore, the version number reported by the Wii is actually only the version number of the System Menu. This is why some updates do not result in a change of the version number: the System Menu itself is not updated, only (for example) IOSes and channels. As a side effect, this means it is impossible for Nintendo to implement any functions that would affect the games themselves, for example an in-game system menu (similar to the Xbox 360's in-game Dashboard or the PlayStation 3's in-game XMB).
The Wii Menu (known internally as the System Menu) is the name of the user interface for the Wii game console, and it is the first thing seen when the system boots up. Similar to many other video game consoles, the Wii is not only about games; for example, it is possible to install applications such as Netflix to stream media (without requiring a disc). The Wii Menu lets users access both game and non-game functions through built-in applications called Channels, which are designed to represent television channels. There are six primary channels: the Disc Channel, Mii Channel, Photo Channel, Wii Shop Channel, Forecast Channel and News Channel, although the latter two were not initially included and only became available via system updates. Some of the functions provided by these Channels used to be limited to a computer, such as a full-featured web browser and digital photo viewer. Users can also use Channels to create and share cartoon-like digital avatars called Miis and download new games and Channels directly from the Wii Shop Channel. Newer Channels include, for example, the Everybody Votes Channel and the Internet Channel. Channels are graphically displayed in a grid and can be navigated using the pointer capability of the Wii Remote. Users can also rearrange the Channels if they are not satisfied with how they are originally organized on the menu.
Network features
The Wii system supports wireless connectivity with the Nintendo DS handheld console with no additional accessories. This connectivity allows players to use the Nintendo DS microphone and touch screen as inputs for Wii games. Pokémon Battle Revolution is the first example Nintendo has given of a game using Nintendo DS-Wii connectivity. Nintendo later released the Nintendo Channel for the Wii allowing its users to download game demos or additional data to their Nintendo DS.
Like many other video game consoles, the Wii console is able to connect to the Internet, although this is not required for the Wii system itself to function. Each Wii has its own unique 16-digit Wii Code for use with Wii's non-game features. With Internet connection enabled users are able to access the established Nintendo Wi-Fi Connection service. Wireless encryption by WEP, WPA (TKIP/RC4) and WPA2 (CCMP/AES) is supported. AOSS support was added in System Menu version 3.0.
As with the Nintendo DS, Nintendo does not charge for playing via the service; the 12-digit Friend Code system controls how players connect to one another. The service has a few features for the console, including the Virtual Console, WiiConnect24 and several Channels. The Wii console can also communicate and connect with other Wii systems through a self-generated wireless LAN, enabling local wireless multiplayer on different television sets. The system also implements console-based software, including the Wii Message Board. One can connect to the Internet with third-party devices as well.
The Wii console also includes a web browser known as the Internet Channel, which is a version of the Opera 9 browser with menus. It is meant to be a convenient way to access the web on the television screen, although it is far from offering a comfortable user interface compared with modern Internet browsers. A virtual keyboard pops up when needed for input, and the Wii Remote acts like a mouse, making it possible to click anywhere on the screen and navigate through web links. However, the browser cannot always handle all the features of most normal web pages, although it does support Adobe Flash, thus capable of playing Flash games. Some third-party services such as the online BBC iPlayer were also available on the Wii via the Internet Channel browser, although BBC iPlayer was later relaunched as the separate BBC iPlayer Channel on the Wii. In addition, Internet access including the Internet Channel and system updates may be restricted by the parental controls feature of the Wii.
Backward compatibility
The original designs of the Wii console, more specifically the models made before 2011, were fully backward compatible with GameCube devices including game discs, memory cards and controllers. This was because the Wii hardware had ports for GameCube memory cards and peripherals, and its slot-loading drive was able to accept and read the previous console's discs. GameCube games work on the Wii without any additional configuration, but a GameCube controller is required to play GameCube titles; neither the Wii Remote nor the Classic Controller functions in this capacity. The Wii supports progressive-scan output in 480p-enabled GameCube titles. Peripherals can be connected via a set of four GameCube controller sockets and two Memory Card slots (concealed by removable flip-open panels). The console retains connectivity with the Game Boy Advance and e-Reader through the Game Boy Advance Cable, which is used in the same manner as with the GameCube; however, this feature can only be accessed on select GameCube titles which previously utilized it.
There are also a few limitations in the backward compatibility. For example, online and LAN features of certain GameCube games are not available, since the Wii does not have serial ports for the Nintendo GameCube Broadband Adapter and Modem Adapter. The Wii uses a proprietary port for video output and is incompatible with all Nintendo GameCube audio/video cables (composite video, S-Video, component video and RGB SCART). The console also lacks the GameCube footprint and high-speed port needed for Game Boy Player support. Furthermore, only GameCube functions are available when playing a GameCube game, and only compatible memory cards and controllers can be used, because the Wii's internal memory does not save GameCube data.
Because of the original device's backward compatibility with earlier Nintendo products, players can enjoy a massive selection of older games on the console in addition to hundreds of newer Wii titles. However, South Korean units lack GameCube backward compatibility, and the redesigned Wii Family Edition and Wii Mini, launched in 2011 and 2013 respectively, had this compatibility stripped out. Nevertheless, another service called Virtual Console allows users to download older games from prior Nintendo platforms (namely the Nintendo Entertainment System, Super NES and Nintendo 64) onto their Wii console, as well as games from non-Nintendo platforms such as the Genesis and TurboGrafx-16.
List of additional Channels
This is a list of new Wii Channels released beyond the four initial Channels (i.e. Disc Channel, Mii Channel, Photo Channel and Wii Shop Channel) included in the original consoles. The News Channel and the Forecast Channel were released as part of system updates so separate downloads were not required. As of January 30, 2019, all channels listed below have been discontinued with the exception of the Wii Fit Channel and the Internet Channel.
History of updates
System version 1.0 was released on launch day, and was designed mainly for offline use, as connecting to the internet would trigger an update prompt to install 2.0. For a while after that, the Wii received new features such as the Forecast Channel, as well as bug fixes.
Some of these updates also included fixes to block the early forms of homebrew, the first of which was an SSL issue in the Wii Shop Channel. Later in 2007, Nintendo added code to block the GameCube Action Replay, although this update was bundled with several other features in the 3.0 update.
A week after Wii Freeloader was released, Nintendo released an update containing a new IOS with the bug exploited by Freeloader fixed, although this new IOS was not used by the Wii Menu. Later that year, Nintendo released a new Wii Menu that copied this fix to the IOS used by the Wii Menu. In addition, code was added to the Wii Menu to delete the primary homebrew entry point on every boot, although this code was very buggy and was easily bypassed. Nintendo also patched the hole used to extract the private encryption keys of the Wii, and finally made a small change to the Mii Channel to convince people to update.
Nintendo's next few updates made similar small changes to various channels, and one of them copied the fix for the previous IOS bug to every IOS, along with a few other exploit fixes. A few weeks later, Nintendo ported these new fixes to every IOS, made a failed attempt to block a specific homebrew IOS, and made its second attempt at fixing the main homebrew entry point. This attempt was superseded by a successful one in 2009, along with other IOS fixes and some new features.
Later that year, Nintendo released another homebrew-blocking update, but unlike the previous updates it offered no new features; instead, it updated the Wii Shop Channel to require the new version. In addition to fixing homebrew bugs, the update aggressively checked for the Homebrew Channel and deleted it if present, replaced several IOSes used by homebrew with nonfunctional versions, and updated a bootloader to overwrite the one used by homebrew, unexpectedly causing many consoles to refuse to boot. Two similar updates were released throughout 2010; the only attempts to stop Wii homebrew after that were in the Wii U's Wii Mode feature.
The final update delivered in the PAL and American regions added support for transferring content to the Wii U. However, two updates were released in Japan after this point that affected only Dragon Quest X players, solely updating the IOS used by Dragon Quest X.
See also
Nintendo Wi-Fi Connection
WiiConnect24
Wii Shop Channel
Other gaming platforms from Nintendo:
Nintendo 3DS system software
Nintendo DSi system software
Wii U system software
Nintendo Switch system software
Other gaming platforms from the next generation:
PlayStation 4 system software
PlayStation Vita system software
Xbox One system software
Other gaming platforms from this generation:
PlayStation 3 system software
PlayStation Portable system software
Xbox 360 system software
References
External links
Wii System Menu and Feature Updates
Site documenting the changes made in each update and how they affect homebrew and other hacks
Wii
Game console operating systems
Proprietary operating systems
|
17815480
|
https://en.wikipedia.org/wiki/Proofpoint%2C%20Inc.
|
Proofpoint, Inc.
|
Proofpoint, Inc. is an American enterprise security company based in Sunnyvale, California that provides software as a service and products for email security, data loss prevention, electronic discovery, and email archiving.
In 2021, Proofpoint was acquired by private equity firm Thoma Bravo for $12.3 billion.
History
Company
The company was founded in June 2002 by Eric Hahn, formerly the CTO of Netscape Communications. It launched July 21, 2003, after raising a $7 million Series A funding round, releasing its first product, and lining up six customers as references; it was backed by venture investors Benchmark Capital and Stanford University. An additional $9 million in Series B funding led by New York-based RRE Ventures was announced in October 2003.
Proofpoint became a publicly traded company in April 2012. At the time of its initial public offering (IPO), the company's shares traded at $13 apiece; investors purchased more than 6.3 million shares through the IPO, raising more than $80 million.
On April 26, 2021, Proofpoint announced that it had agreed to be acquired by the private equity firm Thoma Bravo.
Product history
The company's first product was the Proofpoint Protection Server (PPS) for medium and large businesses. It incorporated what was described as "MLX Technology", proprietary machine learning algorithms applied to the problem of accurately identifying spam email, using 10,000 different attributes to differentiate between spam and valid email. The company joined dozens of other anti-spam software providers in a business opportunity fueled by an exponential increase in spam volume that was threatening worker productivity and making spam a top business priority. Later releases increased the number of spam detection attributes to more than 50,000.
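MLX itself is proprietary and undocumented, so only the general idea can be illustrated. A minimal sketch of attribute-weighted spam scoring, with invented attribute tests, weights, and bias, might look like the following Python fragment:

    # Illustrative sketch of attribute-based spam scoring. This is not
    # Proofpoint's MLX; every attribute, weight, and bias here is invented.
    import math

    # Each attribute pairs a predicate over the message with a learned weight.
    ATTRIBUTES = [
        (lambda m: "free money" in m["body"].lower(), 2.1),
        (lambda m: m["sender_domain"] not in m["prior_correspondents"], 0.7),
        (lambda m: m["html_to_text_ratio"] > 0.9, 1.3),
    ]

    def spam_probability(message):
        # Sum the weights of the attributes that fire, then squash with a
        # logistic function to obtain a probability-like score.
        score = sum(w for test, w in ATTRIBUTES if test(message))
        return 1.0 / (1.0 + math.exp(-(score - 2.0)))  # 2.0 is an invented bias

    message = {
        "body": "Claim your FREE MONEY now",
        "sender_domain": "example.net",
        "prior_correspondents": set(),
        "html_to_text_ratio": 0.95,
    }
    print(spam_probability(message))  # roughly 0.9, i.e. likely spam

A production system would learn the weights from labeled mail rather than hard-coding them, and would evaluate tens of thousands of attributes rather than three.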
In 2004, strict new HIPAA regulations governing financial disclosures and the privacy of health care data prompted Proofpoint to begin developing new products that would automatically identify and intercept outbound email containing sensitive information.
In March 2004, Proofpoint introduced its first hardware appliance, the P-Series Message Protection Appliance (later renamed Proofpoint Messaging Security Gateway), using a hardened Linux kernel and Proofpoint's Protection Server 2.0 software. It was tested by InfoWorld and found to stop 94% of spam.
Another product introduction in November 2004 included Protection Server 3.0, with Email Firewall and MLX-based Dynamic Reputation Analysis, and the Content Security Suite, plug-in modules designed for scanning outbound messages and their attachments to assist in compliance with data protection regulations such as Sarbanes-Oxley, HIPAA, and Gramm-Leach-Bliley. In combination, this was known as the Proofpoint Messaging Security Gateway Appliance. It was reviewed by ChannelWeb, which observed that it used a "combination of technologies: policy-based management, a spam-filtering engine and adaptive learning technology."
Proofpoint introduced a new product, the Network Content Sentry, as an add-on appliance to the Content Security Suite in August 2005. Designed to monitor online messaging other than email, the appliance monitors Web mail, message boards, blogs and FTP-based communications. Proofpoint also introduced policy-based email encryption features, using identity-based encryption technology licensed from Voltage Security.
Virtual appliance development
In a step towards simpler operational requirements, the Proofpoint Messaging Security Gateway Virtual Edition was released in April 2007. The product runs as a virtual appliance on a host running VMware's virtual server software. Moving a dedicated hardware appliance to a virtual server eliminates problems associated with proprietary hardware and reduces upgrade costs, though it does require knowledge of VMware's virtual server architecture.
Proofpoint Messaging Security Gateway V5.0 was released in June 2007, and was based on a new, integrated architecture, combining all its capabilities into a single platform. It could be run either as a dedicated appliance, virtual appliance, or software suite.
ICSA Labs, an independent division of Verizon Business, announced in April 2007, that it had certified six anti-spam products under their new testing program, one of which was the Proofpoint Messaging Security Gateway. The goal of ICSA Labs' anti-spam product testing and certification is to evaluate product effectiveness in detecting and removing spam. The guidelines also address how well the products recognize e-mail messages from legitimate sources.
Software as a service
Moving into the software-as-a-service business, Proofpoint introduced Proofpoint on Demand, a hosted version of its email security and data loss prevention offerings. In May 2008, the company's hosted offerings were expanded with the introduction of Proofpoint on Demand—Standard Edition. The product is targeted at small-to-medium size businesses that need email security but do not run their own servers or have on-site IT personnel.
Products
Proofpoint products are designed to solve three business problems: advanced cybersecurity threats, regulatory compliance, and brand-impostor fraud (which it calls "digital risk"). These products work across email, social media, mobile devices, and the cloud.
Email security
Proofpoint offers software or SaaS aimed at different facets of email security. Its flagship product is the Proofpoint Messaging Security Gateway, a web-based application that offers spam protection based on both user-defined rules and dynamically updated definitions, anti-virus scanning, and configurable email firewall rules.
Additionally, in June 2008, Proofpoint acquired Fortiva, Inc., a provider of on-demand email archiving software for legal discovery, regulatory compliance and email storage management. Fortiva used Exchange journaling to automatically archive all internal and external communications so that end users can search all archived messages, including attachments, directly from a search folder in Outlook.
Cybersecurity
Proofpoint's security portfolio includes products that stop both traditional cyberattacks (delivered via malicious attachments and URLs) and socially engineered attacks, such as business email compromise (BEC) and credential phishing, that do not use malware. It uses a blend of sandbox analysis, reputational analysis, automated threat data, human threat intelligence, and message attributes such as sender/recipient relationship, headers, and content to detect potential threats. Automated encryption, data-loss prevention and forensics-gathering tools are designed to speed incident response and mitigate the damage and costs of any threats that do get through. The portfolio also includes protection from social-media account takeovers, harmful mobile apps, and rogue Wi-Fi networks.
Regulatory compliance
Proofpoint's compliance products are designed to reduce the manual labor involved in identifying potentially sensitive data, managing and supervising it in compliance with government and industry rules, and producing it quickly in e-discovery legal requests.
Digital risk
Proofpoint's digital risk products are aimed at companies seeking to stop cybercriminals from impersonating their brand to harm customers, partners, and the brand's reputation. Its email digital risk portfolio includes authentication technology to prevent email domain spoofing. On social media, it stops scams in which fraudsters create fake customer-service accounts to find people seeking help over social media and trick them into handing over account credentials or visiting a malicious website. And in mobile, it finds counterfeit apps distributed through mobile app stores.
In the 2016 Forrester Wave for Digital Risk Monitoring, Q3 2016, the Proofpoint digital risk / social media product was included in an evaluation of the nine top vendors in this emerging market. These vendors monitor "digital" (i.e. social, mobile, web, and dark web) channels to detect and prevent malicious or unwanted content that could undermine organizational efforts to build brand across all major social media platforms. On October 23, 2014 Proofpoint acquired Nexgate, Inc., a social media and security compliance vendor. On November 4, 2015 Proofpoint acquired Socialware Inc., a compliance workflow & content capture and review technology company.
Acquisitions
References
External links
2002 establishments in California
2012 initial public offerings
Anti-spam
Companies based in Sunnyvale, California
Companies formerly listed on the Nasdaq
Computer archives
Computer security software companies
Networking companies of the United States
Software companies based in the San Francisco Bay Area
Software companies established in 2002
Software companies of the United States
Spam filtering
|
913183
|
https://en.wikipedia.org/wiki/Scrolling
|
Scrolling
|
In computer displays, filmmaking, television production, and other kinetic displays, scrolling is sliding text, images or video across a monitor or display, vertically or horizontally. "Scrolling," as such, does not change the layout of the text or pictures but moves (pans or tilts) the user's view across what is apparently a larger image that is not wholly seen. A common television and movie special effect is to scroll credits, while leaving the background stationary. Scrolling may take place completely without user intervention (as in film credits) or, on an interactive device, be triggered by touchscreen or a keypress and continue without further intervention until a further user action, or be entirely controlled by input devices.
Scrolling may take place in discrete increments (perhaps one or a few lines of text at a time), or continuously (smooth scrolling). Frame rate is the speed at which an entire image is redisplayed. It is related to scrolling in that changes to text and image position can only happen as often as the image can be redisplayed. When frame rate is a limiting factor, one smooth scrolling technique is to blur images during movement that would otherwise appear to "jump".
Computing
Implementation
Scrolling is often carried out on a computer by the CPU (software scrolling) or by a graphics processor. Some systems feature hardware scrolling, where an image may be offset as it is displayed, without any frame buffer manipulation (see also hardware windowing). This was especially common on 8- and 16-bit video game consoles.
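A rough sketch of the idea in Python, assuming a one-dimensional buffer wider than the visible window: the hardware selects the displayed region by changing a start offset rather than copying pixel data.

    # Sketch of hardware-style scrolling: the visible window is a view into a
    # larger buffer, moved by changing an offset instead of copying pixels.
    WORLD_WIDTH, SCREEN_WIDTH = 16, 8
    framebuffer = list(range(WORLD_WIDTH))  # stand-in for a row of pixel data

    def visible(offset):
        # Wrap around the buffer edge, as many tile-based systems did.
        return [framebuffer[(offset + x) % WORLD_WIDTH]
                for x in range(SCREEN_WIDTH)]

    for offset in range(3):
        print(visible(offset))  # each frame shifts the view by one unit

Real hardware applied the same offset trick per scanline or per tile layer; the sketch only shows the addressing arithmetic.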
UI paradigms
In a WIMP-style graphical user interface (GUI), user-controlled scrolling is carried out by manipulating a scrollbar with a mouse, or using keyboard shortcuts, often the arrow keys. Scrolling is often supported by text user interfaces and command line interfaces. Older computer terminals changed the entire contents of the display one screenful ("page") at a time; this paging mode requires fewer resources than scrolling. Scrolling displays often also support page mode. Typically certain keys or key combinations page up or down; on PC-compatible keyboards the page up and page down keys or the space bar are used; earlier computers often used control key combinations. Some computer mice have a scroll wheel, which scrolls the display, often vertically, when rolled; others have scroll balls or tilt wheels which allow both vertical and horizontal scrolling.
Some software supports other ways of scrolling. Adobe Reader has a mode identified by a small hand icon ("hand tool") on the document, which can then be dragged by clicking on it and moving the mouse as if sliding a large sheet of paper. When this feature is implemented on a touchscreen it is called kinetic scrolling. Touch-screens often use inertial scrolling, in which the scrolling motion of an object continues in a decaying fashion after release of the touch, simulating the appearance of an object with inertia. An early implementation of such behavior was in the "Star7" PDA of Sun Microsystems ca. 1991–1992.
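The decaying motion of inertial scrolling can be modeled with a simple per-frame velocity decay. A minimal Python sketch, with the decay factor, time step, and stop threshold all chosen arbitrarily for illustration:

    # Sketch of inertial ("kinetic") scrolling: after the finger is released,
    # the velocity decays exponentially each frame until it is negligible.
    def inertial_scroll(position, velocity, decay=0.95, dt=1 / 60, stop=1.0):
        frames = [position]
        while abs(velocity) > stop:
            position += velocity * dt
            velocity *= decay          # friction-like decay per frame
            frames.append(position)
        return frames

    path = inertial_scroll(position=0.0, velocity=600.0)  # px/s at release
    print(len(path), round(path[-1], 1))  # frame count and resting position

Real implementations tune the decay curve, and often add an edge "bounce", to match the feel of a physical object.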
Scrolling can be controlled in other software-dependent ways by a PC mouse. Some scroll wheels can be pressed down, functioning like a button. Depending on the software, this allows both horizontal and vertical scrolling by dragging in the direction desired; when the mouse is moved to the original position, scrolling stops. A few scroll wheels can also be tilted, scrolling horizontally in one direction until released. On touchscreen devices, scrolling is a multi-touch gesture, done by swiping a finger on the screen vertically in the direction opposite to where the user wants to scroll to.
If any content is too wide to fit on a display, horizontal scrolling is required to view all of it. In applications such as graphics and spreadsheets there is often more content than can fit either the width or the height of the screen at a comfortable scale, and scrolling in both directions is necessary.
Text
In languages written horizontally, such as most Western languages, text documents longer than will fit on the screen are often displayed wrapped and sized to fit the screen width, and scrolled vertically to bring desired content into view. It is possible to display lines too long to fit the display without wrapping, scrolling horizontally to view each entire line. However, this requires inconvenient constant line-by-line scrolling, while vertical scrolling is only needed after reading a full screenful.
Software such as word processors and web browsers normally uses word-wrapping to display as many words in a single line as will fit the width of the screen or window or, for text organised in columns, each column.
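Python's standard library ships a simple greedy word-wrapper, which is enough to illustrate the behavior described above:

    # Word-wrapping fits as many whole words per line as the width allows,
    # so only vertical scrolling is needed to read the entire text.
    import textwrap

    text = ("Software such as word processors and web browsers "
            "normally wraps long lines to the window width.")
    for line in textwrap.wrap(text, width=28):
        print(line)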
Demos
Scrolling texts, also referred to as scrolltexts or scrollers, played an important part in the birth of the computer demo culture. Software crackers often used their deep knowledge of computer platforms to transform the information that accompanied their releases into crack intros. The sole role of these intros was to scroll the text on the screen in an impressive way.
Many scrollers were plain horizontal scrollers, but demo coders also paid a lot of attention to creating new and different types of scrolling. The characters could, for example, continuously alter their shape, take unusual flying paths or incorporate color effects such as raster bars. Sometimes these effects made the text nearly unreadable.
Film and television
Scrolling is commonly used to display the credits at the end of films and television programs.
Scrolling is often used in the form of a news ticker towards the bottom of the picture for content such as television news, scrolling sideways across the screen, delivering short-form content.
Video games
In computer and video games, scrolling of a playing field allows the player to control an object in a large contiguous area. Early examples of this method include Taito's 1974 vertical-scrolling racing video game Speed Race, Sega's 1976 forward-scrolling racing games Moto-Cross (Fonz) and Road Race, and Atari's 1977 Super Bug. Before scrolling became common, the flip-screen method was used to present areas larger than a single screen.
The Namco Galaxian arcade system board introduced with Galaxian in 1979 pioneered a sprite system that animated pre-loaded sprites over a scrolling background, which became the basis for Nintendo's Radar Scope and Donkey Kong arcade hardware and home consoles such as the Nintendo Entertainment System.
Parallax scrolling, which was first featured in Moon Patrol, involves several semi-transparent layers (called playfields), which scroll on top of each other at varying rates in order to give an early pseudo-3D illusion of depth.
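The arithmetic behind the effect is just a per-layer scaling of the camera position; distant layers use smaller factors and therefore move more slowly. A sketch in Python, with invented layer names and factors:

    # Parallax scrolling: each background layer scrolls at a fraction of the
    # camera's speed; smaller factors read as farther away.
    LAYERS = {"stars": 0.2, "hills": 0.5, "road": 1.0}  # invented factors

    def layer_offsets(camera_x):
        return {name: camera_x * factor for name, factor in LAYERS.items()}

    print(layer_offsets(camera_x=100))
    # {'stars': 20.0, 'hills': 50.0, 'road': 100.0}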
Belt scrolling is a method used in side-scrolling beat 'em up games with a downward camera angle, where players can move up and down in addition to left and right.
Studies
A 1993 article by George Fitzmaurice studied spatially aware palmtop computers. These devices had a 3D sensor, and moving the device caused the contents to move as if they were fixed in place. This interaction could be referred to as “moving to scroll.” If the user moved the device away from their body, the view zoomed in; conversely, it zoomed out when the user pulled the device closer. Smartphone cameras using “optical flow” image analysis apply this technique today.
A 1996 research paper by Jun Rekimoto analyzed tilting operations as scrolling techniques on small screen interfaces. Users could not only tilt to scroll, but also tilt to select menu items. These techniques proved especially useful for field workers, since they only needed to hold and control the device with one hand.
A more recent study from 2013 by Selina Sharmin, Oleg Špakov, and Kari-Jouko Räihä explored the action of reading text on a screen while the text auto-scrolls based on the user's eye tracking patterns. The control group simply read text on a screen and manually scrolled. The study found that participants preferred to read primarily at the top of the screen, so the screen scrolled down whenever participants’ eyes began to look toward the bottom of the screen. This auto-scrolling caused no statistically significant difference in reading speed or performance.
See also
Flip page – an alternate visual effect for navigating digital publications
Notes
References
Television technology
Video game design
Computer graphics
Demo effects
User interface techniques
|
12552512
|
https://en.wikipedia.org/wiki/List%20of%20LDAP%20software
|
List of LDAP software
|
The following is a list of software programs that can communicate with and/or host directory services via the Lightweight Directory Access Protocol (LDAP).
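As an illustration of what these clients do on the wire, a minimal search with the third-party Python ldap3 library is sketched below; the host, bind DN, password, and base DN are placeholders.

    # Minimal LDAP bind and search using the ldap3 library (pip install ldap3).
    # All names below (host, bind DN, password, base DN) are placeholders.
    from ldap3 import Server, Connection, ALL

    server = Server("ldap://ldap.example.com", get_info=ALL)
    conn = Connection(server, user="cn=admin,dc=example,dc=com",
                      password="secret", auto_bind=True)
    conn.search("dc=example,dc=com", "(objectClass=person)",
                attributes=["cn", "mail"])
    for entry in conn.entries:
        print(entry.entry_dn)
    conn.unbind()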
Client software
Cross-platform
Admin4 - an open source LDAP browser and directory client for Linux, OS X, and Microsoft Windows, implemented in Python.
Apache Directory Server/Studio - an LDAP browser and directory client for Linux, OS X, and Microsoft Windows, and as a plug-in for the Eclipse development environment.
FusionDirectory - a web application licensed under the GNU General Public License, developed in PHP, for managing LDAP directories and associated services.
JXplorer - a Java-based browser that runs in any operating environment.
JXWorkBench - a Java-based plugin to JXplorer that includes LDAP reporting using the JasperReports reporting engine.
LDAP Account Manager - a PHP based webfrontend for managing various account types in an LDAP directory.
phpLDAPadmin - a web-based LDAP administration tool for creating and editing LDAP entries in any LDAP server.
LDAP User Manager - a simple PHP interface to add LDAP users and groups. Also has a self-service password change feature. Designed to be run as a Docker container.
SLAMD - an open source load generation software suite, for testing multiple application protocols, including LDAP. Also contains tools for creating test data and test scripts.
RoundCube - a free and open source PHP IMAP client with support for LDAP-based address books.
GOsa² - a framework for managing accounts and systems in LDAP databases
web2ldap - a web application licensed under the Apache License 2.0, developed in Python, for managing LDAP directories.
OpenDJ - a Java-based LDAP server and directory client that runs in any operating environment, licensed under the CDDL
Linux/UNIX
Evolution - the contacts part of GNOME's PIM can query LDAP servers.
KAddressBook - the address book application for KDE, capable of querying LDAP servers.
OpenLDAP - a free, open source implementation.
OpenDJ - a free, open source implementation.
diradm / diradm-2 - A nearly complete nss/shadow suite for managing POSIX users/groups/data in LDAP.
System Security Services Daemon (SSSD) - a system service to access remote directories and authentication mechanisms
Mac OS X
Contacts - an LDAP-aware address book application built into Mac OS X.
Directory Utility - a utility for configuring access to several types of directory servers, including LDAP; built into Mac OS X.
Workgroup Manager - a utility for configuring access to several types of directory servers, including LDAP; built into Mac OS X Server and one of Apple's Server Admin Tools.
OpenDJ - a free, open source implementation.
Slapd - from the University of Michigan
Microsoft Windows
Active Directory Explorer - a freeware LDAP client tool from Microsoft
LDAP Admin - a free, open source LDAP directory browser and editor
Ldp - an LDAP client included with Microsoft Windows
NetTools - a freeware utility for AD troubleshooting that includes an LDAP client
OpenDJ - a free, open source implementation
Middleware
Json2Ldap - a JSON-RPC-to-LDAP gateway
Server software
Notes
References
Directory services
Lists of software
|
2368499
|
https://en.wikipedia.org/wiki/Brendan%20Kennelly
|
Brendan Kennelly
|
Brendan Kennelly (17 April 1936 – 17 October 2021) was an Irish poet and novelist. He was Professor of Modern Literature at Trinity College, Dublin until 2005. Following his retirement he was titled "Professor Emeritus" by Trinity College.
Early life
Kennelly was born in Ballylongford, County Kerry, on 17 April 1936. He was one of eight children of Tim Kennelly and Bridie (Ahern). His father worked as a publican and garage proprietor; his mother was a nurse. Kennelly was educated at the inter-denominational St. Ita's College, Tarbert, County Kerry. He was then awarded a scholarship to study English and French at Trinity College Dublin. There he was editor of Icarus and captained the Trinity Gaelic Football Club. He graduated from Trinity in 1961 with first-class honours, before obtaining a Doctor of Philosophy there five years later. He also studied at Leeds University for one year under the tutelage of Norman Jeffares.
Poetry
Kennelly's poetry can be scabrous, down-to-earth, and colloquial. He avoided intellectual pretension and literary posturing, and his attitude to poetic language could be summed up in the title of one of his epic poems, "Poetry my Arse". Another long (400-page) epic poem, "The Book of Judas", published in 1991, topped the Irish best-seller list.
A prolific and fluent writer, Kennelly published more than fifty volumes of poetry, including My Dark Fathers (1964), Collection One: Getting Up Early (1966), Good Souls to Survive (1967), Dream of a Black Fox (1968), Love Cry (1972), The Voices (1973), Shelley in Dublin (1974), A Kind of Trust (1975), Islandman (1977), A Small Light (1979), and The House That Jack Didn't Build (1982).
Kennelly edited several other anthologies, including "Between Innocence and Peace: Favourite Poems of Ireland" (1993), "Ireland's Women: Writings Past and Present, with Katie Donovan and A. Norman Jeffares" (1994), and "Dublines," with Katie Donovan (1995). He also authored two novels, "The Crooked Cross" (1963) and "The Florentines" (1967), and three plays in a Greek Trilogy, Antigone, Medea, and The Trojan Women.
Kennelly was an Irish language (Gaelic) speaker, and translated Irish poems in "A Drinking Cup" (1970) and "Mary" (Dublin 1987). A selection of his collected translations was published as "Love of Ireland: Poems from the Irish" (1989).
Style
Language was important in Kennelly's work – in particular the vernacular of the small and isolated communities in North Kerry where he grew up, and of the Dublin streets and pubs where he became both roamer and raconteur for many years. His language is also grounded in the Irish-language poetic tradition, oral and written, which can be both satirical and salacious in its approach to human follies.
Regarding the oral tradition, Kennelly was a great reciter of verse with tremendous command and the rare ability to recall extended poems by memory, both his own work and others, and recite them on call verbatim. He commented on his own use of language: "Poetry is an attempt to cut through the effects of deadening familiarity … to reveal that inner sparkle."
Personal life
Kennelly married Margaret (Peggy) O'Brien in 1969. They were colleagues at the time, and she taught English at the University of Massachusetts, Amherst at the time of his death. Together, they had one child, Doodle Kennelly. They resided in Sandymount before divorcing, a separation Kennelly attributed to his overindulgence in alcohol. He ultimately became teetotal in about 1985. Doodle died in April 2021, six months before her father.
Kennelly died on 17 October 2021, at a care home in Listowel, where he resided in the two years leading up to his death. He was 85 years old.
Awards and honours
1967 Æ Memorial Prize
1988 Critics Special Harvey's Award
1996 IMPAC International Dublin Literary Award
1999 American Ireland Fund Literary Award
2003 The Ireland Funds of France Wild Geese Award
2010 Irish PEN Award
List of works
Cast a Cold Eye (1959) with Rudi Holzapfel
The Rain, the Moon (1961) with Rudi Holzapfel
The Dark About Our Loves (1962) with Rudi Holzapfel
Green Townlands (1963) with Rudi Holzapfel
Let Fall No Burning Leaf (1963)
The Crooked Cross (1963) novel;
My Dark Fathers (1964)
Up and at It (1965)
Collection One: Getting Up Early (1966)
Good Souls to Survive (1967)
The Florentines (1967) novel
Dream of a Black Fox (1968)
Selected Poems (1969)
A Drinking Cup, Poems from the Irish (1970)
The Penguin Book of Irish Verse (1970, 1981) editor
Bread (1971)
Love Cry (1972)
Salvation, The Stranger (1972)
The Voices (1973)
Shelley in Dublin (1974)
A Kind of Trust (1975)
New and Selected Poems (1976)
The Boats Are Home (Gallery Press, 1980)
Moloney Up and at It (Mercier Press, 1984)
Cromwell (Beaver Row Press, 1983; Bloodaxe Books, 1987)
Mary, from the Irish of Muireadach Albanach Ó Dálaigh (Aisling Press, 1987)
Landmarks of Irish Drama (Methuen, 1988)
Love of Ireland: Poems from the Irish (Mercier Press, 1989) [anthology]
A Time for Voices: Selected Poems 1960–1990 (Bloodaxe Books, 1990)
Euripides' Medea (Bloodaxe Books, 1991)
The Book of Judas (Bloodaxe Books, 1991)
Breathing Spaces: Early Poems (Bloodaxe Books, 1992)
Euripides' The Trojan Women (Bloodaxe Books, 1993)
Journey into Joy: Selected Prose, ed. Åke Persson (Bloodaxe Books, 1994)
Between Innocence and Peace: Favourite Poems of Ireland (Mercier Press, 1994) [anthology]
Poetry My Arse (Bloodaxe Books, 1995)
Dublines, with Katie Donovan (Bloodaxe Books, 1996) [anthology]
Sophocles' Antigone: a new version (Bloodaxe Books, 1996)
Lorca: Blood Wedding (Bloodaxe Books, 1996)
The Man Made of Rain (Bloodaxe Books, 1998)
The Singing Tree (Abbey Press, 1998)
Begin (Bloodaxe Books, 1999)
Glimpses (Bloodaxe Books, 2001)
The Little Book of Judas (Bloodaxe Books, 2002)
Martial Art (Bloodaxe Books, 2003) [versions of Martial]
Familiar Strangers: New & Selected Poems (Bloodaxe Books, 2004)
Now (Bloodaxe Books, 2006)
When Then Is Now: Three Greek Tragedies (Bloodaxe Books, 2006) [versions of Sophocles' Antigone and Euripides''' Medea and The Trojan Women]
Reservoir Voices (Bloodaxe Books, 2009)
The Essential Brendan Kennelly: Selected Poems (Bloodaxe Books, UK & Ireland, 2011, Wake Forest University Press, USA, 2011)
Guff'' (Bloodaxe Books, 2013)
References
External links
Bloodaxe Books (Publisher of Kennelly's work in the UK and Ireland)
Wake Forest University Press (US publisher)
1936 births
2021 deaths
Academics of Trinity College Dublin
Alumni of Trinity College Dublin
Alumni of the University of Leeds
Irish dramatists and playwrights
Irish male dramatists and playwrights
Irish editors
People from County Kerry
Translators from Irish
20th-century Irish novelists
20th-century Irish male writers
Irish male novelists
20th-century Irish poets
Irish male poets
21st-century Irish poets
Irish PEN Award for Literature winners
20th-century Irish translators
21st-century Irish translators
21st-century Irish male writers
|
26206476
|
https://en.wikipedia.org/wiki/MeeGo
|
MeeGo
|
MeeGo is a discontinued Linux distribution hosted by the Linux Foundation, using source code from the operating systems Moblin (produced by Intel) and Maemo (produced by Nokia). Primarily targeted at mobile devices and information appliances in the consumer electronics market, MeeGo was designed to act as an operating system for hardware platforms such as netbooks, entry-level desktops, nettops, tablet computers, mobile computing and communications devices, in-vehicle infotainment devices, SmartTV / ConnectedTV, IPTV-boxes, smart phones, and other embedded systems.
Nokia intended to make MeeGo its primary smartphone operating system in 2010, but after a change in direction the work was stopped in February 2011, leaving Intel alone in the project. The Linux Foundation canceled MeeGo in September 2011 in favor of Tizen, which Intel then joined in collaboration with Samsung. A community-driven successor called Mer was formed that year. A Finnish start-up, Jolla, picked up Mer to develop a new operating system, Sailfish OS, and launched the Jolla smartphone at the end of 2013. Another Mer derivative called Nemo Mobile was also developed.
History
MeeGo was first announced at Mobile World Congress in February 2010 by Intel and Nokia in a joint press conference. The stated aim was to merge the efforts of Intel's Moblin and Nokia's Maemo projects into one common project that would drive a broad third-party application ecosystem. According to Intel, MeeGo was developed because Microsoft did not offer comprehensive Windows 7 support for the Atom processor. On 16 February 2010 a tech talk notice was posted about the former Maemo development project founded in 2009 and code-named Harmattan, which was originally slated to become Maemo 6. The notice stated that Harmattan was now considered a MeeGo instance (though not a MeeGo product), and that Nokia was giving up the Maemo branding for Harmattan on the Nokia N9 and beyond. (Any previous Maemo versions up to Maemo 5, a.k.a. Fremantle, would still be referred to as Maemo.) It was also made clear that only the naming was given up: development on Harmattan would continue so that existing schedules would be met.
Aminocom and Novell also played a large part in the MeeGo effort, working with the Linux Foundation on their build infrastructure and official MeeGo products. Amino was responsible for extending MeeGo to TV devices, while Novell increasingly introduced technology originally developed for openSUSE (including the Open Build Service, ZYpp for package management, and other system management tools). In November 2010, AMD also joined the alliance of companies that were actively developing MeeGo.
The project's setup changed noticeably on 11 February 2011, when Nokia officially announced its switch to Windows Phone 7, abandoning MeeGo and the partnership. Nokia CEO Stephen Elop said in an interview with Engadget: "What we’re doing is not thinking of MeeGo as the Plan B. We’re thinking about MeeGo and related development work as what’s the next generation." Nokia did eventually release one MeeGo smartphone that year running "Harmattan", the Nokia N9.
On 27 September 2011, Intel employee Imad Sousou announced that, in collaboration with Samsung, MeeGo would be replaced by Tizen during 2012.
Community developers from the Mer (software distribution) project, however, continued MeeGo development without Intel and Nokia. Later, some former MeeGo developers from Nokia founded the company Jolla, which built Sailfish OS, an OS platform based on MeeGo's free successor Mer.
Overview
MeeGo is intended to run on a variety of hardware platforms including hand-helds, in-car devices, netbooks and televisions. All platforms share the MeeGo core, with different "User Experience" ("UX") layers for each type of device. MeeGo was designed by combining the best of both Intel's Fedora-based Moblin and Nokia's Debian-based Maemo. When it was first announced, the then President and CEO of Nokia, Olli-Pekka Kallasvuo, said that MeeGo would create an ecosystem that was the best among operating systems and would represent players from different countries.
System requirements
MeeGo provides support for both ARM and Intel x86 processors with SSSE3 enabled and uses btrfs as the default file system.
User interfaces
Within the MeeGo project there are several graphical user interfaces – internally called User Experiences ("UX").
Netbook
The Netbook UX is a continuation of the Moblin interface. It is written using the Clutter-based Mx toolkit, and uses the Mutter window manager.
The Samsung NP-N100 netbook uses MeeGo as its operating system.
MeeGo's netbook version uses several Linux applications in the background, such as Evolution (Email, calendar), Empathy (instant messaging), Gwibber (microblogging), Chromium (web browser), and Banshee (multimedia player), all integrated into the graphical user interface.
Handset
The Handset UX is based on Qt, with GTK+ and Clutter included to provide compatibility for Moblin applications. To support the hundreds of Hildon-based Maemo applications, users have to install the Hildon library ported by the maemo.org community. Depending on the device, applications will be provided from either the Intel AppUp or the Nokia Ovi digital software distribution systems.
The MeeGo Handset UX's "Day 1" prerelease was on 30 June 2010. The preview was initially available for the Aava Mobile Intel Moorestown platform, and a 'kickstart' file provided for developers to build an image for the Nokia N900.
Smartphone
MeeGo OS v1.2 "Harmattan" is used in Nokia N9 and N950 phones.
Tablet
Intel demonstrated the Tablet UX on a Moorestown-based tablet PC at COMPUTEX Taipei in early June 2010.
Since then, some information has appeared on the MeeGo website indicating that a Tablet UX will be part of the MeeGo project, but it is not known whether this UX will be the one demonstrated by Intel. This Tablet UX will be fully free like the rest of the MeeGo project and will be coded with Qt and the MeeGo Touch Framework. Intel has expressed interest in combining Qt with Wayland instead of X11 in MeeGo Touch in order to utilize the latest graphics technologies supported by the Linux kernel, which should improve user experiences and reduce system complexity.
Minimum hardware requirements are currently unknown.
The WeTab runs MeeGo with a custom user interface and was made available in September 2010.
In-Vehicle infotainment
The GENIVI Alliance, a consortium of several car makers and their industry partners, uses Moblin with Qt as the base for its 'GENIVI 1.0 Reference Platform' for In-Vehicle Infotainment (IVI) and automotive navigation systems, as a unified mobile computing platform. Graham Smethurst of the GENIVI Alliance and BMW Group announced the switch from Moblin to MeeGo in April 2010.
Smart TV
Intel planned to develop a version of MeeGo for IPTV set-top boxes, but has since cancelled those plans.
Licensing
The MeeGo framework consists of a wide variety of original and upstream components, all of which are licensed under licenses certified by the Open Source Initiative (such as the GNU General Public License). In order to allow hardware vendors to personalize their devices' user experiences, the project's license policy requires that MeeGo's reference User Experience subsystems be licensed under a permissive free software license, except for libraries that extend MeeGo APIs (which were licensed under the GNU Lesser General Public License to help discourage fragmentation) and applications (which can be licensed separately).
Technical foundations
The MeeGo Core integrates elements of two other Linux distributions: Maemo (a distribution which Nokia derived from Debian) and Moblin (which Intel derived from Fedora).
MeeGo uses RPM software repositories. It is one of the first Linux distributions to deploy Btrfs as the default file system.
Although most MeeGo software uses the Qt widget toolkit, GTK+ is also supported. The final revision of MeeGo includes Qt v4.7, Qt Mobility v1.0, and OpenGL ES v2.0. MeeGo also supports the Accounts & SSO, Maliit, and oFono software frameworks.
MeeGo compiles software with the openSUSE Build Service.
Derivatives
As with Moblin before, MeeGo also serves as a technology pool from which software vendors can derive new products.
MeeGo/Harmattan
Even though MeeGo was initiated as a collaboration between Nokia and Intel, the collaboration was formed when Nokia was already developing the next incarnation of its Maemo Linux distribution. As a result, the Maemo 6 base operating system was kept intact while the Handset UX was shared, with the name changed to "MeeGo/Harmattan".
On 21 June 2011, Nokia announced its first MeeGo/Harmattan smartphone device, Nokia N9.
Mer
The original Mer project was a free re-implementation of Maemo, ported to the Nokia Internet Tablet N800. When MeeGo first appeared this work was discontinued and the development effort went to MeeGo.
After both Nokia and then Intel abandoned MeeGo, the Mer project was revived and continued to develop the MeeGo codebase and tools. It is now being developed in the open by a meritocratic community. Mer provides a Core capable of running various UXs developed by various other projects, and will include maintained application development APIs, such as Qt, EFL, and HTML5/WAC.
Some of the former MeeGo user interfaces have already been ported to run on top of Mer, such as the handset reference UX, now called Nemo Mobile. There are also a couple of new tablet UXes available, such as Cordia and Plasma Active. Mer is considered the legitimate successor of MeeGo, as the other follow-up project, Tizen (see below), changed the APIs fundamentally.
Nemo Mobile
Nemo Mobile is a community-driven operating system incorporating Mer, targeted at mobile phones and tablets.
Sailfish OS
Sailfish OS is an operating system developed by the Finnish startup Jolla. It also incorporates Mer. After Nokia abandoned their participation in the MeeGo project, the directors and core professionals from Nokia's N9 team left the company and together formed Jolla, to bring MeeGo back into the market mainstream. This effort eventually resulted in the creation of the Sailfish OS.
The Sailfish OS and the Sailfish OS SDK are based on the core and the tools of the Mer core distribution, which is a revival of the core of the MeeGo project (a meritocracy-governed and managed successor of the MeeGo OS, but without its own Graphical User Interface and system kernel). Sailfish includes a multi-tasking user interface that Jolla intends to use to differentiate its smartphones from others and as a competitive advantage against devices that run Google's Android or Apple's iOS.
Among other things, the Sailfish OS is characterised by:
can be used with a wide range of devices in the same way as MeeGo
Jolla continues to use the MeeGo APIs (via Mer), which consist of:
Qt 4.7 [Qt47]
Qt Mobility 1.0 [QtMob]
OpenGL ES 2.0 [OGLES]
updated versions, such as Qt 5.0, are or will be used in or via the Mer core;
an in-house Jolla GUI (successor of swipe UI) for smartphone devices;
uses QML, Qt and HTML5;
thanks to Mer, the core can run on various hardware like Intel, ARM and any other which has a kernel able to work with the Mer core;
open source, except for some of Jolla's UI elements. Those interested in further development can become involved through the Mer project or the Sailfish Alliance or Jolla;
Jolla, i.e. the Sailfish team, is an active contributor to the Mer project
Tizen
Although Tizen was initially announced as a continuation of the MeeGo effort, there is little shared effort and architecture between these projects, since Tizen inherited much more from Samsung's LiMo than from MeeGo. As most of the Tizen work happened behind closed doors and was done by Intel and Samsung engineers, the people involved in the former MeeGo open source project continued their work under Mer and projects associated with it. Because Tizen does not use the Qt framework, which is the core part of MeeGo's API (see above), Tizen cannot technically be considered a derivative of MeeGo.
SUSE and Smeegol Linux
On 1 June 2010, Novell announced that they would ship a SUSE Linux incarnation with MeeGo's Netbook UX (MeeGo User Experience) graphical user interface.
A MeeGo-based Linux distribution with this user interface is already available from openSUSE's Goblin Team under the name Smeegol Linux; this project combines MeeGo with openSUSE to create a new netbook-oriented Linux distribution. What makes Smeegol Linux unique compared to upstream MeeGo or openSUSE is that the distribution is at its core based on openSUSE but has the MeeGo User Experience, along with a few other changes such as the Mono-based Banshee media player, NetworkManager-powered network configuration, and a newer version of Evolution Express. End-users can also build their own customized Smeegol Linux OS using SUSE Studio.
Fedora
Fedora 14 contains a selection of software from the MeeGo project.
Linpus
Linpus Technologies is working on bringing their services on top of MeeGo Netbook and MeeGo Tablet.
Splashtop
The latest version of the instant-on Splashtop OS platform (by Splashtop Inc., previously named DeviceVM Inc.) is compliant with MeeGo, and future versions of Splashtop will be based on MeeGo, with commercial availability planned for the first half of 2011.
Release schedule
It was announced at the Intel Developer Forum 2010 that MeeGo would follow a six-month release schedule. Version 1.0 for Atom netbooks and a code drop for the Nokia N900 became available for download.
Project planning
Launch
In February 2011, Nokia announced a partnership with Microsoft for mobile handsets and the departure of Nokia's MeeGo team manager Alberto Torres, leading to speculation about Nokia's future participation in MeeGo development and its adoption of Windows Phone.
In September 2011, Nokia began shipping the first MeeGo smartphone, the Nokia N9, ahead of the Windows Phone 7 launch expected later that year. The first MeeGo-based tablet, the WeTab, was launched in 2010 by Neofonie.
In early July 2012, Nokia's MeeGo development lead Sotiris Makyrgiannis and other team members left Nokia.
Companies supporting the project
See also
Comparison of mobile operating systems
Sailfish OS – the operating system by Jolla with the Mer core; the legacy of the MeeGo OS from the Nokia–Intel partnership, developed further by Jolla
Mer core – the core stack of code from merproject.org and one of the main parts of Sailfish OS; free and open source software that initially consisted of about 80% of the original MeeGo open source code.
Nokia X platform – the next Linux project by Nokia
KaiOS
Hongmeng OS
References
External links
ARM operating systems
Discontinued Linux distributions
RPM-based Linux distributions
Free mobile software
Intel software
Linux Foundation projects
Mobile operating systems
Nokia platforms
Tablet operating systems
Linux distributions
|
46787942
|
https://en.wikipedia.org/wiki/Estoire%20des%20Engleis
|
Estoire des Engleis
|
Estoire des Engleis (English: History of the English) is a chronicle of English history composed by Geffrei Gaimar. Written for the wife of a landholder in Lincolnshire and Hampshire, it is the oldest known history chronicle in the French language. Scholars have proposed various dates for the chronicle's writing; the middle-to-late 1130s is commonly accepted. Largely based upon, or directly translated from, pre-existing chronicles, the Estoire des Engleis documents English history from the 495 landing of Cerdic of Wessex to the death of William II in 1100. The original chronicle opened with England's mythical Trojan beginnings, but all portions which document the period before Cerdic have been lost.
History
Geffrei Gaimar wrote the Estoire des Engleis for Constance, the wife of Ralf FitzGilbert. FitzGilbert, who, according to Gaimar in the chronicle's epilogue, commissioned its writing, possessed land in Lincolnshire and Hampshire. Gaimar himself may have been FitzGilbert's chaplain, or perhaps a secular clerk. Scholars have varying opinions concerning the date of the chronicle's writing, with commonly accepted ranges including March 1136 – April 1137 and 1135–1140. Gaimar possibly started the chronicle's composition in Hampshire and completed it in Lincolnshire.
Ian Short, an emeritus professor of French at Birkbeck, University of London, stated that the chronicle was written "to provide a vast panorama of the Celto-British, Anglo-Saxon, and Anglo-Norman dynasties in the British Isles from Trojan times until the death of William Rufus." It is the oldest known history chronicle written in the French language. The chronicle was written with a Norman bias, stating that the Normans were the true successors to the English throne. As mentioned in the chronicle's epilogue, it opened with England's mythical Trojan beginnings when it was first written. However, this first portion of the chronicle, known as the Estoire des Troiiens, along with another early part named the Estoire des Bretuns, has been lost. The present-day copy begins with the 495 landing of Cerdic of Wessex in England, and ends with William II's death in 1100. The chronicle was written in couplets, containing 6,532 octosyllables. Four manuscripts of the Estoire des Engleis currently exist. The title Estoire des Engleis derives from the British Library version.
The Estoire des Engleis is based upon pre-existing chronicles. For instance, the now-lost portions of the chronicle, the Estoire des Troiiens and Estoire des Bretuns, likely used information from the Historia Regum Britanniae, which was written by Geoffrey of Monmouth. After this, starting with Cerdic, the chronicle is primarily a translation of the Anglo-Saxon Chronicle to about 959. Its accuracy is questioned by scholars, but the chronicle is nonetheless recognized as valuable in other areas of study.
References
1130s books
12th-century history books
English chronicles
|
36481204
|
https://en.wikipedia.org/wiki/Teledyne%20CARIS
|
Teledyne CARIS
|
Teledyne CARIS, a business unit of Teledyne Digital Imaging, Inc., is a Canadian software company that develops and supports geomatics software for marine and land applications. The company is headquartered in Fredericton, New Brunswick, Canada. CARIS also has offices in the Netherlands, the United States and Australia, and has re-sellers offering sales and support of software products in more than 75 countries.
History
The company was founded in Fredericton in 1979 as Universal Systems Ltd. and was a spin-off from research into data structures and computer-aided cartography at the University of New Brunswick's Department of Survey Engineering (now the Department of Geodesy and Geomatics Engineering). The company's first commercial software product was called "CARIS", an acronym for "Computer Aided Resource Information System". The company name was changed from Universal Systems Ltd. to CARIS in the early 2000s in recognition of this first product and recognized brand. CARIS was wholly acquired on May 3, 2016, by international conglomerate Teledyne Technologies and renamed "Teledyne CARIS".
Products
Teledyne CARIS Inc. provides Geographic Information Systems (GIS) and related software for terrestrial applications such as land management, municipal planning and geology, as well as marine and hydrographic applications. The marine and hydrographic software is designed to handle the hydrographic workflow from the time sensor data is recorded through to its inclusion in a nautical chart or other GIS product; the company terms this the "Ping-to-Chart" workflow and has trademarked the term.
A CARIS Spatial Archive, often referred to as a CSAR file, is a data storage mechanism designed for storing large amounts of bathymetric data.
Standards work
Teledyne CARIS Inc. is a member of the Open Geospatial Consortium (OGC) and is a proponent of interoperability between different GIS systems. It markets its own GIS server which implements several OGC specifications, including Web Map Service (WMS) and Web Feature Service (WFS).
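For example, a WMS GetMap call is an ordinary HTTP request with standardized query parameters. A sketch using Python's requests library, where the endpoint and layer name are placeholders rather than a real CARIS server:

    # Sketch of an OGC WMS 1.3.0 GetMap request; the endpoint URL and layer
    # name are placeholders, not an actual CARIS service.
    import requests

    params = {
        "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
        "LAYERS": "bathymetry", "STYLES": "", "CRS": "EPSG:4326",
        "BBOX": "40,-70,45,-60",  # lat/lon axis order in WMS 1.3.0
        "WIDTH": 800, "HEIGHT": 600, "FORMAT": "image/png",
    }
    response = requests.get("https://wms.example.com/wms", params=params)
    with open("map.png", "wb") as f:
        f.write(response.content)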
Through liaisons with the International Hydrographic Organization (IHO), and with the hydrographic community, Teledyne CARIS Inc. has been closely involved in the development of the S-57, S-100, and other IHO standards. The company has also participated in the development, implementation, production and usage of electronic chart specifications such as Electronic Navigational Chart (ENC) and Digital Nautical Chart (DNC).
References
External links
www.caris.com - official website
Software companies of Canada
Companies based in Fredericton
GIS software
Teledyne Technologies
Canadian companies established in 1979
Software companies established in 1979
2016 mergers and acquisitions
Canadian subsidiaries of foreign companies
|
34459484
|
https://en.wikipedia.org/wiki/Hackerspace.gr
|
Hackerspace.gr
|
Hackerspace.gr ('hsgr') is a hackerspace in Athens, Greece, established in 2011. It operates as a cultural center, computer laboratory and meeting place (with free wireless access). Hackerspace.gr promotes creative coding and hardware hacking through its variety of activities. According to its website: "Hackerspace.gr is a physical space dedicated to creative code and hardware hacking, in Athens".
Vision
Hackerspace.gr's vision is inspired by the Open Source philosophy. The main values, according to its vision page, are Excellence, Sharing, Consensus, and Do-ocracy. It is a self-funded community, supported through membership fees, individual donations and supporters. Every year Hackerspace.gr publishes its annual financial balance, titled "The cost of Hacking".
Events
It organizes workshops, lectures, and entertainment and informational events; the events calendar lists several events weekly. Furthermore, hackerspace.gr is open to visitors as long as any of the administrators are on the premises.
Projects
Hackerspace.gr is an incubator space for many projects. Currently there is an OpenROV task force at Hackerspace.gr. The Verese community, a project participating in Mozilla WebFWD, hosts its regular meetings at Hackerspace.gr. Ardupad was also incubated at Hackerspace.gr. A USB drop is located in the central area of the hackerspace. Anadelta, a custom open-hardware delta 3D printer design, is being developed to cover its members' need for a large 3D printer.
Services
Hackerspace.gr provides several online services to its members, visitors, and the general public. In particular, some of its members run an instance of the Etherpad Lite collaborative editor, a Diaspora pod and a Jabber/XMPP service.
Mobile Hackerspace
Hackerspace.gr usually deploys its geodesic dome to establish a mobile hackerspace when a large number of its members participate in events and venues away from its physical location, providing tools, equipment and free-of-charge services for attendees.
Libre Space Foundation
hackerspace.gr is utilized as the headquarters of Libre Space Foundation, an open space technologies non-profit, as its laboratory and main working space. Libre Space Foundation shares its testing and manufacturing equipment with hackerspace.gr's users and visitors.
Libre Space Foundation has deployed its first SatNOGS ground station on the rooftop of hackerspace.gr and has used its machining and electronics facilities for the manufacture, integration and initial testing of UPSat, the first open-source satellite and also the first satellite made in Greece.
References
External links
Hackerspaces
Cultural centers
Culture in Athens
Computer clubs in Greece
DIY culture
|
35819849
|
https://en.wikipedia.org/wiki/Vagrant%20%28software%29
|
Vagrant (software)
|
Vagrant is an open-source software product for building and maintaining portable virtual software development environments; e.g., for VirtualBox, KVM, Hyper-V, Docker containers, VMware, and AWS. It tries to simplify the software configuration management of virtualization in order to increase development productivity. Vagrant is written in the Ruby language, but its ecosystem supports development in a few other languages.
History
Vagrant was first started as a personal side-project by Mitchell Hashimoto in January 2010. The first version of Vagrant was released in March 2010. In October 2010, Engine Yard declared that they were going to sponsor the Vagrant project. The first stable version, Vagrant 1.0, was released in March 2012, exactly two years after the original version was released. In November 2012, Mitchell formed an organization called HashiCorp to support the full-time development of Vagrant; Vagrant remained permissively licensed free software. HashiCorp now works on creating commercial editions and provides professional support and training for Vagrant.
Vagrant was originally tied to VirtualBox, but version 1.1 added support for other virtualization software such as VMware and KVM, and for server environments like Amazon EC2. Vagrant is written in Ruby, but it can be used in projects written in other programming languages such as PHP, Python, Java, C#, and JavaScript. Since version 1.6, Vagrant natively supports Docker containers, which in some cases can serve as a substitute for a fully virtualized operating system.
Architecture
Vagrant uses "Provisioners" and "Providers" as building blocks to manage the development environments. Provisioners are tools that allow users to customize the configuration of virtual environments. Puppet and Chef are the two most widely used provisioners in the Vagrant ecosystem (Ansible has been available since at least 2014). Providers are the services that Vagrant uses to set up and create virtual environments. Support for VirtualBox, Hyper-V, and Docker virtualization ships with Vagrant, while VMware and AWS are supported via plugins.
Vagrant sits on top of virtualization software as a wrapper and helps the developer interact easily with the providers. It automates the configuration of virtual environments using Chef or Puppet, so the user does not have to directly use any other virtualization software. Machine and software requirements are written in a file called a "Vagrantfile" to execute the necessary steps to create a development-ready box. A "box" is a packaged format and file extension (.box) for Vagrant environments that can be copied to another machine to replicate the same environment. The official Vagrant documentation details the installation, command-line usage, and relevant configuration of Vagrant.
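A minimal example of such a Vagrantfile, written in Vagrant's Ruby-based configuration DSL; the box name and shell command are illustrative placeholders:

    # Minimal Vagrantfile: declares a base box and a shell provisioner.
    # The box name and the provisioning command are illustrative.
    Vagrant.configure("2") do |config|
      config.vm.box = "ubuntu/bionic64"
      config.vm.provision "shell", inline: "apt-get update"
    end

Running "vagrant up" in the directory containing this file downloads the box if necessary, boots the virtual machine with the chosen provider, and applies the provisioner.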
References
External links
List of Vagrant boxes
Cross-platform software
Provisioning
Virtualization-related software for Linux
|
2602235
|
https://en.wikipedia.org/wiki/PCI%20eXtensions%20for%20Instrumentation
|
PCI eXtensions for Instrumentation
|
PCI eXtensions for Instrumentation (PXI) is one of several modular electronic instrumentation platforms in current use. These platforms are used as a basis for building electronic test equipment, automation systems, and modular laboratory instruments. PXI is based on industry-standard computer buses and permits flexibility in building equipment. Often modules are fitted with custom software to manage the system.
Overview
PXI is designed for measurement and automation applications that require high performance and a rugged industrial form factor. With PXI, one can select modules from a number of vendors and integrate them into a single PXI system; over 1,150 module types were available in 2006. A typical 3U PXI module measures approximately 4 × 6 inches, and a typical 8-slot PXI chassis is 4U high and half-rack width; full-width chassis contain up to 18 PXI slots.
PXI uses PCI-based technology under an industry standard governed by the PXI Systems Alliance (PXISA), which ensures standards compliance and system interoperability. PXI modules are available for almost every conceivable test, measurement, and automation application, from ubiquitous switching modules and DMMs to high-performance microwave vector signal generation and analysis. There are also companies that specialize in writing software for PXI modules, as well as companies providing PXI hardware-software integration services.
PXI is based on CompactPCI and offers all the benefits of the PCI architecture, including performance, industry adoption, and COTS technology. To this, PXI adds a rugged CompactPCI mechanical form factor; an industry consortium that defines hardware, electrical, software, power, and cooling requirements; and integrated timing and synchronization used to route synchronization clocks and triggers internally. PXI is intended to be future-proof and is designed to be simply and quickly reprogrammed as test, measurement, and automation requirements change.
Most PXI instrument modules are register-based products, which use software drivers hosted on a PC to configure them as useful instruments, taking advantage of the increasing power of PCs to improve hardware access and simplify embedded software in the modules. The open architecture allows hardware to be reconfigured to provide new facilities and features that are difficult to emulate in comparable bench instruments. PXI system data bandwidth performance easily exceeds the performance of the older VXI test standard. There is debate within the technical community as to whether newer standards such as LXI will surpass PXI in both performance and overall cost of ownership.
PXI modules providing the instrument functions are plugged into a PXI chassis which may include its own controller running an industry standard operating system such as Windows 7, Windows XP, Windows 2000, or Linux (which is not yet PXI System Alliance approved), or a PCI-to-PXI bridge that provides a high-speed link to a desktop PC controller. Likewise, multiple PXI racks can be linked together with PCI bridge cards, to build very large systems such as multiple source microwave signal generator test stands for complex ATE applications.
CompactPCI and PXI products are interchangeable, i.e., they can be used in either CompactPCI or PXI chassis, but installation in the alternative chassis type may eliminate certain clocking and triggering features. For example, a CompactPCI network interface controller could be mounted in a PXI rack to provide additional network interfaces to a test stand. Conversely, a PXI module installed in a CompactPCI chassis would not be able to use the additional clocking and triggering features of the PXI module.
PXI Systems Alliance
PCI eXtensions for Instrumentation (PXI) is a modular instrumentation platform originally introduced in 1997 by National Instruments. PXI is promoted by the 69-member PXI Systems Alliance (PXISA), whose sponsor members are (in alphabetical order) ADLINK, Cobham Wireless, Keysight Technologies, Marvin Test Solutions, National Instruments, Pickering Interfaces and Teradyne.
Executive members of the alliance include Alfamation, Beijing Pansino Solutions Technology Co., CHROMA ATE Inc., GOEPEL electronic, MAC Panel, and Virginia Panel Corp. Another 56 associate member organizations without voting rights support PXI and use the PXI logo on their products and marketing material.
Two major players in 2006
By 2006, two major players were active in the PXI test market: National Instruments and Agilent Technologies (now Keysight Technologies).
National Instruments: introduced the CompactPCI-based PXI standard in 1997 and remains the major PXI provider on the market.
Acqiris: acquired by Agilent in November 2006.
PXIT: an early PXI entrant, also acquired by Agilent in November 2006.
Keithley Instruments: launched a range of 35 data acquisition and instrumentation PXI cards in November 2006.
Market estimate
The worldwide PXI market, as estimated in 2007, was expected to grow substantially by 2010, and was already larger than the VME eXtensions for Instrumentation (VXI) market as estimated that year; the LAN eXtensions for Instrumentation (LXI) standard had also been growing toward a comparable size.
PXI Express
PXI Express is an adaptation of PCI Express to the PXI form factor, developed in 2005. This increases the available system data rate to 6 GByte/s in each direction. PXI Express also allows for the use of hybrid slots, compatible with both PXI and PXI Express modules. In 2015 National Instruments extended the standard to use PCI Express 3.x, increasing the system bandwidth to 24 GByte/s.
MXI link
An MXI link provides a PC with direct control over the PXI backplane and the connected cards, using a PXI card cabled to a PCI/CompactPCI card in the host. The link runs over fiber-optic or copper cabling, with fiber-optic connections supporting the greatest cable lengths.
PXI MultiComputing (PXImc)
PXImc is an interconnection standard that allows multiple PXI systems to be linked together, with each system potentially including both instrumentation and processing. Using PXImc, data gathered from one system can be processed in parallel on multiple computing nodes, or a single PC can access instruments in several PXI chassis.
References
External links
pxisa.org - Overview of PXI
pxionline.com - PXI Test and Technology magazine
pickeringtest.com - PXImate Book provides overview of the PXI Standard, module types and cabling
PXI / PXI Express / PXImc tutorial
Peripheral Component Interconnect
Resilient control systems
In modern society, computerized or digital control systems have been used to reliably automate many industrial operations that we take for granted, from the power plant to the automobiles we drive. However, the complexity of these systems and how designers integrate them, the roles and responsibilities of the humans that interact with them, and the cybersecurity of these highly networked systems have led to a new research paradigm for next-generation control systems. Resilient control systems consider all of these elements, together with the disciplines that contribute to a more effective design, such as cognitive psychology, computer science, and control engineering, to develop interdisciplinary solutions. These solutions address questions such as how to tailor control system operating displays so the user can make an accurate and reproducible response, how to design in cybersecurity protections so that the system defends itself from attack by changing its behaviors, and how to better integrate widely distributed computer control systems to prevent cascading failures that disrupt critical industrial operations. In the context of cyber-physical systems, resilient control systems focus on the unique interdependencies of a control system, as compared to information technology computer systems and networks, owing to its role in operating critical industrial operations.
Introduction
Originally intended to provide a more efficient mechanism for controlling industrial operations, digital control systems allowed flexibility in integrating distributed sensors and operating logic while maintaining a centralized interface for human monitoring and interaction. The ease of adding sensors and logic through software, which was once done with relays and isolated analog instruments, has led to wide acceptance and integration of these systems in all industries. However, these digital control systems have often been integrated in phases to cover different aspects of an industrial operation and connected over a network, leading to a complex, interconnected, and interdependent system. While the control theory applied is often nothing more than a digital version of its analog counterpart, the dependence of digital control systems upon communications networks has precipitated the need for cybersecurity due to potential effects on the confidentiality, integrity, and availability of information. To achieve resilience in the next generation of control systems, therefore, addressing the complex control system interdependencies, including human-system interaction and cybersecurity, is a recognized challenge.
Defining resilience
Research in resilience engineering over the last decade has focused on two areas: organizational and information technology. Organizational resilience considers the ability of an organization to adapt and survive in the face of threats, including the prevention or mitigation of unsafe, hazardous, or compromising conditions that threaten its very existence. Information technology resilience has been considered from a number of standpoints: networking resilience has been treated as quality of service, and computing has considered issues such as dependability and performance in the face of unanticipated changes. However, based upon the application of control dynamics to industrial processes, functionality and determinism are primary considerations that are not captured by the traditional objectives of information technology.
Considering the paradigm of control systems, one suggested definition is that "resilient control systems are those that tolerate fluctuations via their structure, design parameters, control structure and control parameters". However, this definition is taken from the perspective of control theory applied to a control system. The malicious actor and cyber security are not directly considered, which might suggest a second proposed definition, "an effective reconstitution of control under attack from intelligent adversaries". This definition, though, focuses only on resilience in response to a malicious actor. To consider the cyber-physical aspects of a control system, a definition of resilience should account for both benign and malicious human interaction, in addition to the complex interdependencies of the control system application.
The term "recovery" has been used in the context of resilience, paralleling the way a rubber ball stays intact when a force is exerted on it and recovers its original dimensions after the force is removed. Considering the rubber ball as a system, resilience could then be defined as its ability to maintain a desired level of performance or normalcy without irrecoverable consequences. While resilience in this context is based upon the yield strength of the ball, control systems require an interaction with the environment, namely the sensors, valves, and pumps that make up the industrial operation. To be reactive to this environment, control systems require an awareness of their state in order to make corrective changes to the industrial process and maintain normalcy. With this in mind, in consideration of the cyber-physical aspects of human systems integration and cyber security discussed above, as well as other definitions for resilience at the broader critical infrastructure level, the following definition of a resilient control system can be deduced:
"A resilient control system is one that maintains state awareness and an accepted level of operational normalcy in response to disturbances, including threats of an unexpected and malicious nature"
Considering the flow of a digital control system as a basis, a resilient control system framework can be designed. Referring to the left side of Fig. 1, a resilient control system holistically considers the measures of performance or normalcy for the state space. At the center, an understanding of performance and priority provide the basis for an appropriate response by a combination of human and automation, embedded within a multi-agent, semi-autonomous framework. Finally, to the right, information must be tailored to the consumer to address the need and position a desirable response. Several examples or scenarios of how resilience differs and provides benefit to control system design are available in the literature.
Areas of resilience
Some primary tenets of resilience, as contrasted with traditional reliability, have presented themselves in considering an integrated approach to resilient control systems. These cyber-physical tenets complement the fundamental concept of dependable or reliable computing by characterizing resilience in regard to control system concerns, including design considerations that provide a level of understanding and assurance in the safe and secure operation of an industrial facility. These tenets are discussed individually below to summarize some of the challenges that must be addressed in order to achieve resilience.
Human systems
The benign human has the ability to quickly understand novel solutions and to adapt to unexpected conditions. This behavior can provide additional resilience to a control system, but reproducibly predicting human behavior is a continuing challenge. The ability to capture historic human preferences can be applied to Bayesian inference and Bayesian belief networks, but ideally a solution would consider direct understanding of human state using sensors such as an EEG. Considering control system design and interaction, the goal would be to tailor the amount of automation necessary to achieve some level of optimal resilience for this mixed-initiative response. The human would then be presented with the actionable information that provides the basis for a targeted, reproducible response.
Cyber security
In contrast to the challenges of predicting and integrating the benign human with control systems, the ability of a malicious actor (or hacker) to undermine desired control system behavior also creates a significant challenge to control system resilience. Application of the dynamic probabilistic risk analysis used in human reliability can provide some basis for the benign actor. However, the decidedly malicious intentions of an adversarial individual, organization, or nation make the human element variable in both objectives and motives, and correspondingly difficult to model. Yet in defining a control system response to such intentions, the malicious actor depends on some level of recognizable behavior to gain an advantage and find a pathway to undermining the system. Whether performed separately in preparation for a cyber attack, or on the system itself, such reconnaissance can enable a successful attack without detection. Therefore, in considering resilient control system architectures, atypical designs that embed actively and passively implemented randomization of attributes are suggested to reduce this advantage.
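As a loose illustration of actively implemented randomization (our own sketch, not a description of any fielded design), the fragment below derives a service's listening port from a shared secret and the current time window, so that cooperating controllers agree on the port while an attacker's reconnaissance quickly goes stale; the secret, window length, and port range are assumptions of the example.

```python
import hashlib
import time

def current_port(secret: bytes, window_seconds: int = 300,
                 low: int = 20000, high: int = 60000) -> int:
    """Derive the service port for the current time window from a shared
    secret: cooperating nodes compute the same port, while an outside
    observer cannot predict the next rotation without the secret."""
    window = int(time.time()) // window_seconds
    digest = hashlib.sha256(secret + window.to_bytes(8, "big")).digest()
    return low + int.from_bytes(digest[:4], "big") % (high - low)

# Both endpoints re-derive the port every five-minute window.
print(current_port(b"example-shared-secret"))
```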
Complex networks and networked control systems
While much of the current critical infrastructure is controlled by a web of interconnected control systems, in architectures termed distributed control systems (DCS) or supervisory control and data acquisition (SCADA), the application of control is moving toward a more decentralized state. In moving to a smart grid, the complex interconnected nature of individual homes, commercial facilities, and diverse power generation and storage creates both an opportunity and a challenge to ensuring that the resulting system is more resilient to threats. The ability to operate these systems to achieve a global optimum for multiple considerations, such as overall efficiency, stability, and security, will require mechanisms to holistically design complex networked control systems. Multi-agent methods suggest a mechanism for tying a global objective to distributed assets, allowing management and coordination of assets for optimal benefit, together with semi-autonomous but constrained controllers that can react rapidly to maintain resilience under rapidly changing conditions.
Base Metrics for Resilient Control Systems
Establishing a metric that can capture the resilience attributes can be complex, at least if considered based upon differences between the interactions or interdependencies. Evaluating control, cyber, and cognitive disturbances, especially if considered from a disciplinary standpoint, leads to measures that have already been established. However, if the metric were instead based upon a normalizing dynamic attribute, such as a performance characteristic that can be impacted by degradation, an alternative is suggested. Specifically, applications of base metrics to resilience characteristics are given as follows by type of disturbance:
Physical Disturbances:
Time Latency Affecting Stability
Data Integrity Affecting Stability
Cyber Disturbances:
Time Latency
Data Confidentiality, Integrity and Availability
Cognitive Disturbances:
Time Latency in Response
Data Digression from Desired Response
Such performance characteristics exist with both time and data integrity. Time, both in terms of mission delay and communications latency, and data, in terms of corruption or modification, are the normalizing factors. In general, the idea is to base the metric on "what is expected" and not necessarily on the actual initiator of the degradation. Considering time as a metric basis, resilient and unresilient systems can be observed in Fig. 2.
Depending upon the abscissa metrics chosen, Fig. 2 reflects a generalization of the resiliency of a system. Several common terms are represented on this graphic, including robustness, agility, adaptive capacity, adaptive insufficiency, resiliency, and brittleness. The following explanations of each are provided below:
Agility: The derivative of the disturbance curve. This average defines the ability of the system to resist degradation on the downward slope, but also to recover on the upward. Primarily considered a time based term that indicates impact to mission. Considers both short term system and longer term human responder actions.
Adaptive Capacity: The ability of the system to adapt or transform from impact and maintain minimum normalcy. Considered a value between 0 and 1, where 1 is fully operational and 0 is the resilience threshold.
Adaptive Insufficiency: The inability of the system to adapt or transform from impact, indicating an unacceptable performance loss due to the disturbance. Considered a value between 0 and -1, where 0 is the resilience threshold and -1 is total loss of operation.
Brittleness: The area under the disturbance curve as intersected by the resilience threshold. This indicates the impact from the loss of operational normalcy.
Phases of Resilient Control System Preparation and Disturbance Response:
Recon: Maintaining proactive state awareness of system conditions and degradation
Resist: System response to recognized conditions, both to mitigate and counter
Respond: System degradation has been stopped and returning system performance
Restore: Longer term performance restoration, which includes equipment replacement
Resiliency: The converse of brittleness, which for a resilient system is "zero" loss of minimum normalcy.
Robustness: A positive or negative number associated with the area between the disturbance curve and the resilience threshold, indicating either the capacity or insufficiency, respectively.
On the abscissa of Fig. 2, it can be recognized that cyber and cognitive influences can affect both the data and the time, which underscores the importance of recognizing these forms of degradation in resilient control designs. For cybersecurity, a single cyberattack can degrade a control system in multiple ways. Additionally, control impacts can be characterized as indicated. While these terms are fundamental and may seem of little value to those correlating impact in terms such as cost, the development of use cases provides a means by which this relevance can be codified. For example, given the impact on system dynamics or data, the performance of the control loop can be directly ascertained, showing the approach to instability and the operational impact.
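As a rough sketch of how such normalized metrics could be computed from a sampled performance trace (the synthetic disturbance, the normalization of 1.0 as fully operational and 0.0 as the resilience threshold, and all names below are illustrative assumptions, not taken from the literature):

```python
import numpy as np

# Synthetic performance trace: 1.0 = fully operational, 0.0 = resilience
# threshold; values below 0.0 represent adaptive insufficiency.
t = np.linspace(0.0, 10.0, 401)               # time, arbitrary units
p = np.ones_like(t)
dip = (t >= 2.0) & (t <= 6.0)                 # disturbance window
p[dip] = 1.0 - 1.3 * np.sin(np.pi * (t[dip] - 2.0) / 4.0)

# Robustness: signed area between the disturbance curve and the threshold.
robustness = np.trapz(p, t)

# Brittleness: area below the resilience threshold (lost minimum normalcy).
brittleness = np.trapz(np.where(p < 0.0, -p, 0.0), t)

# Agility: average magnitude of the curve's slope (degradation and recovery).
agility = np.mean(np.abs(np.gradient(p, t)))

print(f"robustness={robustness:.2f}  brittleness={brittleness:.2f}  "
      f"agility={agility:.3f}")
```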
Resilience Manifold for Design and Operation
The very nature of control systems implies a starting point for the development of resilience metrics. That is, the control of a physical process is based upon quantifiable performance and measures, including first-principles and stochastic ones. The ability to provide this measurement, which is the basis for correlating operational performance and adaptation, then also becomes the starting point for correlating the data and time variations that can come from cognitive and cyber-physical sources. Effective understanding is based upon developing a manifold of adaptive capacity that correlates the design (and operational) buffer. For a power system, this manifold is based upon the real and reactive power assets, the latitude the controllable assets have to maneuver, and the impact of disturbances over time. For a modern distribution system (MDS), these assets can be aggregated from the individual contributions, as shown in Fig. 3. For this figure, these assets include: a) a battery, b) an alternate tie line source, c) an asymmetric P/Q-conjectured source, d) a distribution static synchronous compensator (DSTATCOM), and e) a low-latency, four-quadrant source with no energy limit.
Examples of Resilient Control System Developments
1) In current digital control system designs, cybersecurity depends upon border protections, i.e., firewalls, passwords, etc. If a malicious actor compromised the digital control system of an industrial operation through a man-in-the-middle attack, data could be corrupted within the control system. The industrial facility operator would have no way of knowing the data had been compromised until someone such as a security engineer recognized that an attack was occurring. As operators are trained to provide a prompt, appropriate response to stabilize the industrial facility, it is likely that the corrupted data would prompt an operator reaction that leads to a plant upset. In a resilient control system, as per Fig. 1, cyber and physical data are fused to recognize anomalous situations and warn the operator.
2) As our society becomes more automated, driven by factors including energy efficiency, the need to implement ever more effective control algorithms naturally follows. However, advanced control algorithms depend upon data from multiple sensors to predict the behavior of the industrial operation and make corrective responses. This type of system can become very brittle, insofar as any unrecognized degradation in a sensor can lead to incorrect responses by the control algorithm and potentially a worsened condition relative to the desired operation of the industrial facility. Therefore, implementing advanced control algorithms in a resilient control system also requires diagnostic and prognostic architectures that recognize sensor degradation, as well as failures in the industrial process equipment associated with the control algorithms.
Resilient Control System Solutions and the Need for Interdisciplinary Education
In a world of advancing automation, our dependence upon these technologies will require educated skill sets from multiple disciplines. The challenges may appear simply rooted in better design of control systems for greater safety and efficiency. However, the evolution of the technologies in the current design of automation has created a complex environment in which a cyber attack, human error (whether in design or operation), or a damaging storm can wreak havoc on basic infrastructure. The next generation of systems will need to consider the broader picture to ensure a path forward where failures do not lead to ever greater catastrophic events. One critical resource is students, who are expected to develop the skills necessary to advance these designs and who require both a perspective on the challenges and exposure to the contributions of other disciplines. Addressing this need, courses have been developed to provide these perspectives and relevant examples that overview the issues and provide opportunities to create resilient solutions, at universities such as George Mason University and Northeastern University. The tie to critical infrastructure operations is an important aspect of these courses.
Through the development of technologies designed to set the stage for next-generation automation, it has become evident that effective teams comprise several disciplines. However, developing a level of effectiveness can be time consuming, and when done in a professional environment can expend a great deal of energy and time that provides little obvious benefit to the desired outcome. It is clear that the earlier these STEM disciplines can be successfully integrated, the more effective their members are at recognizing each other's contributions and working together to achieve a common set of goals in the professional world. Team competitions at venues such as Resilience Week are a natural outcome of developing such an environment, allowing interdisciplinary participation and providing an exciting challenge to motivate students to pursue a STEM education.
Standardizing Resilience and Resilient Control System Principles
Standards and policy that define resilience nomenclature and metrics are needed to establish a value proposition for investment by government, academia, and industry. The IEEE Industrial Electronics Society has taken the lead in forming a technical committee toward this end. The purpose of this committee is to establish metrics and standards associated with codifying promising technologies that promote resilience in automation. This effort is distinct from the supply chain community's focus on resilience and security, such as the efforts of ISO and NIST.
Notes
References
Attribution
National security policies
Computer security
Control engineering
GEC 4000 series
The GEC 4000 was a series of 16/32-bit minicomputers produced by GEC Computers Ltd in the United Kingdom during the 1970s, 1980s and early 1990s.
History
GEC Computers was formed in 1968 as a business unit of the GEC conglomerate. It inherited the ageing Elliott 900 series from Elliott Automation and needed to develop a new range of systems. Three ranges were identified, known internally as Alpha, Beta, and Gamma. Alpha appeared first and became the GEC 2050 8-bit minicomputer. Beta followed and became the GEC 4080. Gamma was never developed, and a few of its planned enhancements were instead pulled back into the 4080. The principal designer of the GEC 4080 was Dr. Michael Melliar-Smith, and the principal designer of the 4060 and 4090 was Peter Mackley.
The 4000 series systems were developed and manufactured in the UK at GEC Computers' Borehamwood offices in Elstree Way. Development and manufacture transferred to the company's new factories in Woodside Estate, Dunstable in the late 1970s. In 1979, GEC Computers was awarded the Queen's Award for Technical Achievement for the development of the 4000 series, particularly Nucleus. By 1991, the number of systems manufactured was falling off, so manufacture was transferred to GPT's Beeston, Nottinghamshire factory and development returned to Borehamwood. The last systems were manufactured around 1995. There were still a few GEC 4220 systems operating in 2018 with maintenance provided by Telent, and some GEC 4310 systems were operating until 2013. London Underground continues to use GEC 4190 systems in 2022.
Nucleus
The GEC 4000 series hardware and firmware included a pioneering facility known as Nucleus. Nucleus implements a number of features that are more usually implemented within an operating system kernel, so operating systems running on GEC 4000 series systems do not need to provide these features themselves. Nucleus firmware cannot be reprogrammed by any code running on the system, which made the systems particularly attractive for a number of security applications.
Nucleus performs:
process scheduling
context switching
efficient semaphores
asynchronous message passing
memory segmentation and protection
error handling
I/O directly by processes, and routing of interrupts back to processes
There is no provision for running any supervisor/privileged/kernel mode code on the 4000 systems—all operating system code runs as processes. Hence, device drivers, file system code, and other features which are often found within operating system kernels must be run in processes on the 4000 systems. Inherent in this is that they are all running in their own address spaces, protected from the actions of each other, just as all processes are.
Nucleus is configured by a set of system tables, and processes which have a need to modify the operation of nucleus are given access to the relevant system tables. This would be the case for processes which directly change the state of other processes, processes which allocate and delete memory segments, processes which can change the routing of messages between other processes or change the mapping of I/O devices to processes, etc. Normally system table access is limited to relatively few trusted processes, and other processes which need to perform operations such as loading processes, allocating memory, etc. will pass a message to the relevant trusted process which it will vet before performing the action and replying.
Instruction set
The 4000 series has a CISC instruction set. It has 8-bit bytes, big-endian, byte-addressable memory, two's complement arithmetic, and base-16 excess-64 floating point format (same as IBM System/360).
The model numbers below 4090 are 16-bit processors, and model numbers from 4090 upwards are mixed 16-bit and 32-bit. This relates to pointer sizes available to programs. All systems support 16-bit pointers, which is known as CST (Current Segment Table) addressing. The 32-bit systems also support 32-bit pointers, known as PAS (Paged Address Space) addressing. Each process has a PAST (Program Accessible Segment Table) which lists which of the system's memory segments the program is permitted to access. CST addressing allows four of the PAST entries to be mapped at addresses 0, 16KiB, 32KiB, and 48KiB, giving the 16-bit/64KiB address space. Programs which use more than 64KiB of memory must explicitly map the PAST entries they require at any moment into their 4 CST entries, although Nucleus will automatically map different code segments into the CSTs. PAS addressing allows programs to view their address space as a flat 32-bit address space, with successive PAST entries appearing every 16KiB, and Nucleus performing the PAST entry segment mapping automatically. The 32-bit systems support mixing of CST and PAS addressing in the same process. All instructions are 16 bits wide, except for some PAS addressing instructions which are 32 bits. Instructions can only be run from CST address space.
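The segment arithmetic described above can be summarized in a short sketch; the function names are ours, and the code illustrates only the 16 KiB mapping, not the hardware's protection checks.

```python
SEGMENT = 16 * 1024   # each segment table entry covers 16 KiB

def cst_decode(addr16: int) -> tuple:
    """Map a 16-bit CST address to (CST entry 0-3, offset in segment):
    the four CST windows sit at 0, 16 KiB, 32 KiB and 48 KiB."""
    assert 0 <= addr16 < 4 * SEGMENT          # 64 KiB address space
    return addr16 // SEGMENT, addr16 % SEGMENT

def pas_decode(addr32: int) -> tuple:
    """Map a flat 32-bit PAS address to (PAST entry, offset): successive
    PAST entries appear every 16 KiB."""
    return addr32 // SEGMENT, addr32 % SEGMENT

print(cst_decode(0x9000))   # (2, 4096): third CST window, 4 KiB in
```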
The 32-bit A register is the main accumulator register. There is a 32-bit B register too, which is most commonly used together with the A register as a 64-bit BA register for double-precision floating point operations. A 16-bit X register is used mainly for array indexing, and two 16-bit Y and Z registers are used as 16-bit pointers. A 16-bit L register points to function local data, and a G register always contains zero which can be used as a 16-bit global pointer, and also an 8-bit, 16-bit, or 32-bit zero value. The 16-bit S (sequence) register points to the next instruction to be obeyed. The 8-bit EC register contains condition codes bits. (Some of this is illustrated in the much simpler instruction set of the GEC 2050.) A read-only 'keys' register allows programs to read the value set on the front panel toggle switches by the operations staff. No 32-bit PAS pointer register exists—32-bit PAS pointers reside in memory in the 16-bit CST address space, and are accessed by using a 16-bit pointer. There is no instruction set support for a stack. There are a number of registers inaccessible to programs which are used by Nucleus, such as the hardware segment registers which point to the running process's four CSTs, master segment and PAS segments, and the system tables.
The instruction set contains instructions which operate register-register, store-register, register-store, and store-store. There are a set of string manipulation instructions which operate on variable lengths of store, copying, comparing, or scanning for a pattern. There are a number of Nucleus instructions for tasks such as sending a message to another process or a peripheral device, receiving a message or interrupt, changing a CST entry to point to a different segment which is accessible to the process, etc.
The 4080 has a two-stage instruction pipeline. This becomes a four-stage pipeline for the 4220, the highest-performing system in the series. The entry-level 415x and 4x6x systems have only a single-stage pipeline.
The normal operating mode of the CPU is called Full Nucleus. All systems also support a limited mode of operation called Basic Test. In Basic Test mode, Nucleus is disabled, I/O is performed differently, and only a single program can run, restricted to the bottom 64KiB of store, but all other non-nucleus and non-PAS instructions operate normally. This mode is used very early during booting to set up the system tables required by Nucleus, before obeying a Switch Full Nucleus instruction. Once the system has switched to Full Nucleus, it cannot return to Basic Test mode without operator intervention at the front panel, in effect killing any operating system which was running. Basic Test mode is also used to run certain test software (hence the name).
Input/output
The 4000 I/O design is based around a number of Input/Output Processors known as IOPs, each of which interfaces between the store and a set of I/O controllers. The IOPs are controlled by the Nucleus function in the CPU, but once an I/O event is triggered, they operate autonomously without interaction with the CPU until the I/O completes. The Normal Interface IOPs can each support up to 255 or 256 simultaneous I/O operations, each on a separate Way. The I/O controllers on each IOP would each occupy one or more Ways, depending on how many simultaneous I/O operations they need to handle. The IOP polices each Way's access to main store, allowing only access to successive memory locations defined for the I/O operation that Way is currently performing. The earlier IOPs performed 8-bit and 16-bit wide store accesses, with a burst mode for doing up to 8 transfers together for higher throughput I/O controllers. The later IOPs added 32-bit wide store accesses.
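A toy model of that policing rule (our own construction, not GEC's implementation) is sketched below: each Way is granted a base address and a length when an I/O operation is set up, and transfers must walk successive locations within that window.

```python
class Way:
    """Models one IOP Way: an I/O operation may only touch the successive
    memory locations it was set up with."""
    def __init__(self, base: int, length: int):
        self.base, self.limit = base, base + length
        self.next_addr = base            # transfers must proceed in order

    def transfer(self, addr: int) -> None:
        if addr != self.next_addr or addr >= self.limit:
            raise PermissionError(f"Way access to {addr:#x} rejected")
        self.next_addr += 1              # advance to the next location

way = Way(base=0x4000, length=16)
way.transfer(0x4000)     # allowed: start of the configured window
way.transfer(0x4001)     # allowed: successive location
# way.transfer(0x5000)   # would raise: outside the configured window
```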
All systems have at least one IOP. On the 4080, this first IOP was called the Basic Multiplexer Channel, or BMC, and the 4080 front panel provides for controlling both the CPU and the BMC. The entry level 415x and 4x6x systems have their first IOP (Integral Multiplexer Channel, or IMC) integrated into the Nucleus firmware, and thus I/O operations on the IMC did have some impact on CPU performance, although the 4x6x systems could have external IOPs added. The 4000 series Nucleus I/O instructions and system tables allow for up to 8 IOPs, although most of the models in the 4000 series range had some type of hardware limitation which reduced this. The 408x systems had 4-ported store, with the CPU and first IOP sharing one of these, and up to three additional IOPs connected to the remaining store ports. (Early documentation shows these additional store ports were also designed to connect additional CPUs, although this was not a configuration which was ever sold using 4080 processors.) Later models had more store ports, depending on how many store port boards could be fitted into the system. The 4190 could support the full complement of eight IOPs, and the 4190D supported eight IOPs with two CPUs.
Some commonly used I/O Controllers are the interval timer, system console controller, punched tape reader and punch controllers, line printer controller (all these use a single Way), a number of SMD (and earlier disk bus interface) disk controllers for controlling up to four drives (all using two Ways), Pertec PPC magnetic tape controllers for up to four ½" tape drives, and a number of multi-ported synchronous and asynchronous serial communication controllers (using between 4 and 32 Ways). A digital I/O board (using four Ways) was commonly used for direct process control interfacing, and for providing a fast parallel link between systems. A CAMAC crate controller was also available (again, used for process control interfacing). The Normal Interface bus which these controllers plug into is a published interface, and many customers also built their own controllers for their own specific process control requirements. The earlier GEC 2050 minicomputer used an 8-bit version of the Normal Interface, and most I/O Controllers could be used on both ranges of systems.
All the IOPs designed and built through the 1970s provided the same Normal Interface bus for I/O Controllers, and the I/O controllers could generally be used in any of them. In the 1980s, some more specialised IOPs were designed. A Direct Memory Access Director (DMAD) IOP allowed for a new type of I/O controller which had more freedom to access main memory, and allowed the design of more intelligent communications controllers. A SCSI IOP generated a SCSI bus for attaching more modern disks, and also included an integrated Interval Timer, system console controller, and Calendar Clock so that an additional Normal Interface IOP and separate controllers was not required to support just these functions.
Customers
Users of GEC 4000 series systems included many British university physics and engineering departments, the central computing service of University College London (Euclid) and Keele University, the JANET academic/research network X.25 switching backbone, Rutherford-Appleton Laboratory, Daresbury Laboratory, Harwell Laboratory, NERC, Met Office, CERN, ICI, British Telecom, SIP (Italian telco), and Plessey. British Steel Corporation and BHP Steel used them for real-time control of rolling steel mills, British Rail and London Underground for real-time train scheduling, London Fire Brigade and Durham Fire Brigade for command and control systems. The computers controlled most of the world's national Videotex systems, including the Prestel viewdata service.
At the Rutherford-Appleton Laboratory a GEC 4000 system was used to control the synchrotron and injectors used for the ISIS neutron spallation source until 1998.
A GEC 4080M was used as the central processor for the radar system of the ill-fated Nimrod AEW.3 airborne early warning aircraft.
The Central Electricity Generating Board used GEC 4080 processors at three of their Grid Control Centres. Known as GI74, they were used to collect data from substations and display this on the wall diagrams and tabular VDUs.
Models
A number of variants of the GEC 4000 processor were produced, including (in approximate chronological order):
4080: original 1973 model with 64–256 KiB of core memory
4082: 4080 with up to 1 MiB of memory
4070: entry-level model without memory interleaving
4085: 4082 with semiconductor memory
4060: entry-level model based on AMD Am2900 bit-slice processors
4062/4065: 4060 supporting up to 1 MiB memory
4080M: compact ruggedised 4080 for military applications
4090: Am2900-based with 32-bit addressing extensions and up to 4 MiB of memory
4190: revised 4090 with up to 16 MiB memory
4180: cheaper, slower version of the 4190 (no memory cache, no fast multiply unit)
4060M: compact ruggedised 4060 for military applications
4160: 4065 with the 4090 32-bit addressing extensions
4150: desktop 4160
4162: 4160 with DMAD IOP(s) for high speed communications controllers
4195: compact 4190
4185: cheaper, slower version of the 4195 (no memory cache, no fast multiply unit)
4151: rackmount 4150
4190D: dual-processor 4190
4193: 4195 with SCSI IOP replacing the default Normal Interface IOP
4220: reimplementation of the 4190 using gate array processor technology
4310: Motorola 88100 MVME187-based system emulating a GEC 4220
Software
Several operating systems were available for the GEC 4000 series, including:
COS: Core Operating System, for diskless real-time systems
DOS: Disk Operating System, for real-time systems, providing a filesystem and swapping facilities
OS4000: a multi-user system supporting batch and interactive use, and transaction processing
SCP-2: Secure Operating System (DOD A1/B3) multilevel security
Programming languages available included Babbage (a high-level assembly language), FORTRAN IV, CORAL 66, ALGOL, APL and BASIC.
See also
GEC Series 63
GEC 2050 8-bit minicomputer
References
External links
25 years of GEC 4000 series
"GEC 4000 family", Which Computer?, May 1979
"GEC 4000 Computer", The Centre for Computing History – Computer Museum
Minicomputers
GEC Computers
Computers using bit-slice designs
Mothership HackerMoms
Mothership HackerMoms is a nonprofit hackerspace/makerspace in Berkeley, California, founded in 2011. It was the first all-women hackerspace.
The 1,000-square-foot space gives members room to work on projects and hold events while their children are onsite with child care. Members have participated in DIY culture events such as Maker Faire, Bizarre Bazaar, and Oakland Art Murmur, and have access to another San Francisco Bay Area hackerspace, Ace Monster Toys. The members deliberately define Mothership HackerMoms as a hackerspace in order to include themselves in the open source and free culture movements. Members have held workshops on using Drupal and Illustrator and on laying resin coatings on canvas, and have done painting, drawing, and photography and made linoprints.
See also
Double Union
Noisebridge
References
External links
Culture of San Francisco
DIY culture
Hackerspaces
Hackerspaces in the San Francisco Bay Area
Non-profit organizations based in San Francisco
2011 establishments in California
Women in California
PLECS
PLECS (Piecewise Linear Electrical Circuit Simulation) is a software tool for system-level simulations of electrical circuits developed by Plexim. It is especially designed for power electronics but can be used for any electrical network. PLECS includes the possibility to model controls and different physical domains (thermal, magnetic and mechanical) besides the electrical system.
Most circuit simulation programs model switches as highly nonlinear elements. Due to steep voltage and current transients, simulation becomes slow whenever switches are commutated. In the simplest applications, switches are modelled as variable resistors that alternate between a very small and a very large resistance; in other cases, they are represented by sophisticated semiconductor models.
When simulating complex power electronic systems, however, the processes during switching are of little interest. In these situations it is more appropriate to use ideal switches that toggle instantaneously between a closed and an open circuit. This approach, which is implemented in PLECS, has two major advantages: Firstly, it yields systems that are piecewise-linear across switching instants, thus resolving the otherwise difficult problem of simulating the non-linear discontinuity that occurs in the equivalent-circuit at the switching instant. Secondly, to handle discontinuities at the switching instants, only two integration steps are required (one for before the instant, and one after). Both of these advantages speed up the simulation considerably, without sacrificing accuracy. Thus the software is ideally suited for modelling and simulation of complex drive systems and modular multilevel converters, for example.
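A minimal sketch of this idea, assuming a simple RC circuit fed through an ideal switch (and not reproducing PLECS's actual solver), is shown below: each on/off segment is a linear ODE with an exact exponential solution, so no small integration steps are needed around the switching instants.

```python
import math

def simulate(v0=0.0, vin=12.0, r_on=10.0, r_load=100.0, c=1e-4,
             f_sw=1e3, duty=0.5, cycles=50):
    """Ideal-switch RC circuit: the capacitor charges toward vin while the
    switch is closed and discharges through the load while it is open.
    Each segment is linear, so it is solved exactly in a single step
    (the load branch is ignored during the on phase for brevity)."""
    t_on, t_off = duty / f_sw, (1.0 - duty) / f_sw
    v = v0
    for _ in range(cycles):
        v = vin + (v - vin) * math.exp(-t_on / (r_on * c))   # switch closed
        v = v * math.exp(-t_off / (r_load * c))              # switch open
    return v

print(f"Capacitor voltage after 50 cycles: {simulate():.2f} V")
```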
In recent years, PLECS has been extended to also support model-based development of controls with automatic code generation. In addition to software, the PLECS product family includes real-time simulation hardware for both hardware-in-the-loop (HIL) testing and rapid control prototyping.
Integration with MATLAB/Simulink or Standalone
The PLECS software is available in two editions: PLECS Blockset for integration with MATLAB®/Simulink®, and PLECS Standalone, a completely independent product.
When using PLECS Blockset, the control loops are usually created in Simulink, while the electrical circuits are modelled in PLECS. PLECS Standalone on the other hand can be operated independently from other software and offers an all-in-one solution for modelling electrical circuits and controls in a single environment. Both editions are interoperable with each other.
The main difference between the two versions is that PLECS Standalone runs faster than PLECS Blockset due to its optimised engine.
Add-on PLECS Coder
The PLECS Coder is an add-on to PLECS Blockset and PLECS Standalone. It generates ANSI-C code from a PLECS model, which can be compiled to execute on the simulation host or on a separate target; the target can be an embedded control platform or a real-time digital simulator. The PLECS Coder can also produce embedded code for specific hardware targets.
Add-on PLECS PIL
In the Model-based design of control loops, Processor-in-the-Loop (PIL) simulation can accelerate the development process. It allows engineers to test their control algorithms on the real hardware inside a virtual circuit simulator. As an add-on to PLECS Blockset and PLECS Standalone, PLECS PIL provides that solution.
Hardware for Real-Time Simulations
The PLECS RT Box is a real-time simulator specially designed for power electronics applications. It is a processing unit for both real-time hardware-in-the-loop (HIL) testing and rapid control prototyping. A PLECS RT Box can be programmed and operated from PLECS. Thus, a software license of PLECS (Blockset or Standalone) and a PLECS Coder license are required to operate the hardware.
References
External links
Electronic circuit simulators
Simulation software
Power electronics
MacBook (2006–2012)
The MacBook is a line of Macintosh notebook computers designed, manufactured and sold by Apple Inc. from May 2006 to February 2012. A new line of computers by the same name was released in 2015, serving the same purpose as an entry-level laptop. It replaced the iBook series of notebooks as a part of Apple's transition from PowerPC to Intel processors. Positioned as the low end of the MacBook family, below the premium ultra-portable MacBook Air and the powerful MacBook Pro, the MacBook was aimed at the consumer and education markets. It was the best-selling Macintosh ever. For five months in 2008, it was the best-selling laptop of any brand in US retail stores. Collectively, the MacBook brand is the "world's top-selling line of premium laptops."
There were three successive designs of this MacBook. The original model used a combination of polycarbonate and fiberglass casing modeled after the iBook G4. The second was introduced in October 2008 alongside the 15-inch MacBook Pro; the MacBook shared the more expensive laptop's unibody aluminum casing, but omitted FireWire. The third design, introduced in late 2009, had a polycarbonate unibody casing.
On July 20, 2011, the MacBook was discontinued for consumer purchase as it had been effectively superseded by the MacBook Air which had a lower entry price. Apple continued to sell the MacBook to educational institutions until February 2012.
1st generation: Polycarbonate
The original MacBook, available in black or white colors, was released on May 16, 2006, and used the Intel Core Duo processor and 945GM chipset, with Intel's GMA 950 integrated graphics on a 667 MHz front side bus. Later revisions of the MacBook moved to the Core 2 Duo processor and the GM965 chipset, with Intel's GMA X3100 integrated graphics on an 800 MHz system bus. Sales of the black polycarbonate MacBook ceased in October 2008, after the introduction of the aluminum MacBook.
While thinner than its predecessor – the iBook G4 – the MacBook is wider than the 12-inch model due to its widescreen display. In addition, the MacBook was one of the first (the first being the MacBook Pro) to adopt Apple's MagSafe power connector and it replaced the iBook's mini-VGA display port with a mini-DVI display port. The iBook's discrete graphics chip was initially replaced by an integrated Intel GMA solution, though the latest revisions of the MacBook were upgraded with the more powerful Nvidia GeForce 9400M and later the 320M.
While the MacBook Pro largely followed the industrial design standard set by the PowerBook G4, the MacBook was Apple's first notebook to use features now standard across its notebooks: the glossy display, the sunken keyboard design, and the non-mechanical magnetic latch. With the late 2007 revision, the keyboard received several changes to closely mirror the one shipped with the iMac, adding the same multimedia keyboard shortcuts and removing the embedded numeric keypad and the Apple logo from the command keys.
A more expensive black model was offered until the introduction of the unibody aluminum MacBook. The polycarbonate MacBook was the only Macintosh notebook (until the new 2015 model) to be offered in more than one color since the iBook G3 (Clamshell).
Ports
The ports are all on the left edge; on early models, from left to right, they are the MagSafe power connector, Gigabit Ethernet, mini-DVI, FireWire 400, 2 USB 2.0 ports, audio in, audio out and Kensington Security Slot.
For the unibody polycarbonate MacBook (2009), the ports from left to right are the MagSafe power connector, Gigabit Ethernet, Mini DisplayPort, 2 USB 2.0 ports, audio out and Kensington Security Slot.
On the front, there is a power light and an infrared receiver, while on the right edge, there is only the optical drive.
User serviceability
The polycarbonate Intel MacBook is easier for users to fix or upgrade than its predecessor. Where the iBook required substantial disassembly to access internal components such as the hard drive, users only need to remove the battery and the RAM door to access or replace the internal hard disk drive. Apple provides do-it-yourself manuals for these tasks.
Quality problems
In February 2007, the MacBook was recalled because the graphics card and hard drive caused the computer to overheat, forcing the unit to shut down.
Some early polycarbonate MacBook models suffered from random shutdowns; Apple released a firmware update to resolve them.
There were also cases reported of discolored or chipping palmrests. In such cases, Apple asked affected owners to contact AppleCare.
Some 2007 models had problems with batteries not being recognized by the MacBook; this was caused by a logic board fault rather than a fault with the battery.
In February 2010, Apple announced a recall of MacBooks bought between 2006 and 2007 for hard drive issues caused by heat and other problems.
Model specifications
Apple used the A1181 code, printed on the case, for this family of models, though 17 variations may be counted if color is included.
{| class="wikitable collapsible" style="width: 100%;"
|-
! style="text-align:center;" colspan="9"| Table of models for MacBook A1181 family
|-
!style="background:#ffdead;width:8%;"|Model
!style="background:#ffdead;width:11.5%;"|Mid 2006
!style="background:#ffdead;width:11.5%;"|Late 2006
!style="background:#ffdead;width:11.5%;"|Mid 2007
!style="background:#ffdead;width:11.5%;"|Late 2007 (Santa Rosa)
!style="background:#ffdead;width:11.5%;"|Early 2008
!style="background:#ffdead;width:11.5%;"|Late 2008 (White)
!style="background:#ffdead;width:11.5%;"|Early 2009 (White)
!style="background:#ffdead;width:11.5%;"|Mid 2009 (White)
|-
!style="width:8%;"|Component
! Intel Core Duo
!colspan=7| Intel Core 2 Duo
|-
! Release date
|May 16, 2006
|November 8, 2006
|May 15, 2007
|November 1, 2007
|February 26, 2008
|October 14, 2008
|January 21, 2009
|May 27, 2009
|-
! Model numbers
|MA254*/A MA255*/A MA472*/A
|MA699*/A MA700*/A MA701*/A
|MB061*/A MB062*/A MB063*/A
|MB061*/B MB062*/B MB063*/B
|MB402*/A MB403*/A MB404*/A
|MB402*/B
|MB881*/A
|MC240*/A
|-
! Model identifier
|MacBook1,1
|colspan=2|MacBook2,1
|MacBook3,1
|MacBook4,1
|MacBook4,2
|colspan=2|MacBook5,2
|-
! Display
|colspan=8|13.3-inch glossy widescreen LCD, 1280 × 800 pixel resolution (WXGA, 16:10 = 8:5 aspect ratio)
|-
! Front side bus
|colspan=3|667 MHz
|colspan=3|800 MHz
|colspan=2|1066 MHz
|-
! Processor
|1.83 GHz or 2 GHz Intel Core Duo (T2400/T2500)
|1.83 GHz or 2 GHz Intel Core 2 Duo (T5600/T7200)
|2 GHz or 2.16 GHz Intel Core 2 Duo (T7200/T7400)
|2 GHz or 2.2 GHz Intel Core 2 Duo (T7300/T7500)
|2.1 GHz or 2.4 GHz Intel Core 2 Duo (T8100/T8300)
|2.1 GHz Intel Core 2 Duo (T8100)
|2 GHz Intel Core 2 Duo (P7350)
|2.13 GHz Intel Core 2 Duo (P7450)
|-
! Memory (two slots for DDR2 SDRAM)
|512 MB (two 256 MB) 667 MHz PC2-5300; expandable to 2 GB
|512 MB (two 256 MB) or 1 GB (two 512 MB) 667 MHz PC2-5300; expandable to 4 GB (3 GB usable)<sup>5</sup>
|1 GB (two 512 MB) 667 MHz PC2-5300; expandable to 4 GB (3 GB usable)<sup>5</sup>
|colspan=2|1 GB (two 512 MB) or 2 GB (two 1 GB) 667 MHz PC2-5300; expandable to 6 GB (4 GB supported by Apple)
|1 GB (two 512 MB) 667 MHz PC2-5300; expandable to 6 GB (4 GB supported by Apple)
|2 GB (two 1 GB) 667 MHz PC2-5300; expandable to 8 GB 800 MHz PC2-6400 (4 GB supported by Apple)<sup>6</sup>
|2 GB (two 1 GB) 800 MHz PC2-6400; expandable to 8 GB (4 GB supported by Apple)<sup>6</sup>
|-
! Graphics (shared with system memory)
|colspan=3|Intel GMA 950 using 64 MB RAM (up to 224 MB in Windows through Boot Camp).
|colspan=3|Intel GMA X3100 using 144 MB RAM (up to 384 MB available in Windows through Boot Camp)
|colspan=2|Nvidia GeForce 9400M using 256 MB RAM
|-
! rowspan=2| Hard drive<sup>2</sup>
|60 or 80 GB; optional 100 or 120 GB
|60, 80, 120 GB; optional 160 or 200 GB, 4200-rpm
|80, 120, 160 GB; optional 200 GB, 4200-rpm
|80, 120, 160 GB; optional 250 GB
|120, 160, 250 GB
|120 GB; optional 160 or 250 GB
|120 GB; optional 160, 250, 320 GB
|160 GB; optional 250, 320, 500 GB
|-
| colspan="8" | Serial ATA 5400-rpm unless specified
|-
! Combo drive<sup>3</sup> (base model only)
|8× DVD read, 24× CD-R and 10× CD-RW recording
|colspan=4|8× DVD read, 24× CD-R and 16× CD-RW recording
|colspan=3| N/A
|-
! Internal slot-loading SuperDrive<sup>3</sup>
|8× double-layer disc read, 4× DVD±R & RW recording, 24× CD-R and 10× CD-RW recording
|2.4× DVD+R DL writes, 6× DVD±R read, 4× DVD±RW writes, 24× CD-R, and 10× CD-RW recording
|colspan=6|4× DVD+R DL writes, 8× DVD±R read, 4× DVD±RW writes, 24× CD-R, and 10x CD-RW recording
|-
! Connectivity
| Integrated AirPort Extreme 802.11a/b/g; Gigabit Ethernet; Bluetooth 2.0 + EDR
| Integrated AirPort Extreme 802.11a/b/g/n; Gigabit Ethernet; Bluetooth 2.0 + EDR
|colspan=4| Integrated AirPort Extreme 802.11a/b/g/n; Gigabit Ethernet; Bluetooth 2.0 + EDR
| colspan="2" | Integrated AirPort Extreme 802.11a/b/g/n; Gigabit Ethernet; Bluetooth 2.1 + EDR
|-
! Peripherals
|colspan=8| 2 × USB 2.0; 1 × FireWire 400; 1 × optical digital / analog audio line-in; 1 × optical digital / analog audio line-out
|-
! Camera
|colspan=8| iSight Camera (640 × 480 0.3 MP)
|-
! Video out
|colspan=6| Mini DVI-I (integrated digital + analog)
|colspan=2| Mini DVI-D (digital-only, no analog)
|-
! Original operating system
|colspan=3|Mac OS X 10.4 Tiger
| colspan="5" |Mac OS X 10.5 Leopard
|-
! Latest release operating system
|Mac OS X 10.6.8 Snow Leopard
|colspan=5|Mac OS X 10.7.5 Lion
|colspan=2|OS X 10.11 El Capitan
|-
! Battery
| colspan="8" | 55-watt-hour removable lithium-polymer
|-
! Dimensions
|colspan=8|1.08 in × 12.78 in × 8.92 in (27.5 mm × 325 mm × 227 mm)
|}
Notes:
1 Requires the purchase of wireless-N enabler software from Apple in order to enable the functionality. Also enabled in Mac OS X 10.6 and later.
2 Hard drives noted are options available from Apple. As the hard drive is a user-replaceable part, there are custom configurations available, including use of 7200-rpm drives or SSDs.
3 Given optical drive speed is its maximum.
4 Beginning with the early 2008 revision, the Apple Remote became an optional add-on.
5 Expandable to 4 GB, with 3.3 GB usable.
6 Expandable to 8 GB, but with only 6 GB working stably with a Mac OS X older than 10.6.6 due to a software bug.
1st generation: Aluminum Unibody
On October 14, 2008, Apple announced a MacBook featuring a new Nvidia chipset at a Cupertino, California press conference with the tagline "The spotlight turns to notebooks". This model is still considered first-generation, as it replaced only the black polycarbonate models as the new higher-spec MacBook. It was superseded by the 13-inch MacBook Pro the following year.
The chipset brought a 1066 MHz system bus, DDR3 system memory, and integrated Nvidia GeForce 9400M graphics. Other changes included a display with LED backlighting (replacing the fluorescent tube backlights used in the previous model) and arsenic-free glass, a new Mini DisplayPort (replacing the polycarbonate MacBook's Mini-DVI port), a multi-touch glass trackpad which also acts as the mouse button, and the removal of the FireWire 400 port (and thus of Target Disk Mode, used for data transfers or operating system repairs without booting the system).
There was only one product cycle of the aluminum MacBook, as Apple rebranded the next revision in June 2009 as a 13-inch MacBook Pro using the same chassis with an added FireWire port and SD card slot.
Design
The design incorporated stylistic traits of the MacBook Air that were also carried over to the MacBook Pro. This model is thinner than the original polycarbonate MacBooks and uses a unibody aluminum case with tapered edges. The keyboard of the higher-end model included a backlight.
Reception
Although Gizmodo concluded it was "our favorite MacBook to date," it judged the display inferior to those of the MacBook Pro and MacBook Air at the time, citing a narrower viewing angle, washed-out colors, and dimmer backlighting. Similarly, AppleInsider and Engadget concluded that it "may well be Apple's best MacBook to date" and that "these are terrific choices—not only from an industrial design standpoint, but in specs as well," respectively, while also drawing attention to a lower-quality display compared with the MacBook Pro and MacBook Air. Charlie Sorrel of Wired News reached a similar conclusion about the MacBook display, citing its poor contrast and lack of vertical viewing angle in comparison with the MacBook Pro and even the older white MacBook. Writing for Macworld, Peter Cohen discussed the loss of the FireWire port, saying "The absence of FireWire ports is certainly an inconvenience for some users. But it shouldn’t be considered a deal-breaker for most of us, anyway."
Model specifications
Notes:1 Hard drives noted are options available from Apple. As the hard drive is a user-replaceable part, there are custom configurations available, including use of 7,200-rpm drives and SSDs.
2 Given optical drive speed is its maximum.
2nd generation: Polycarbonate Unibody
On October 20, 2009, Apple released the second generation MacBook that introduced a new polycarbonate (plastic) unibody design, faster DDR3 memory, a multi-touch trackpad, an LED-backlit display, and a built-in seven-hour battery. The polycarbonate unibody MacBook, like its aluminum predecessor, lacks FireWire and, like the 13-inch MacBook Pro, has a combined audio in/out port. There is no infrared port and the Apple Remote is not included. On May 18, 2010, the MacBook was refreshed with a faster processor, a faster graphics card, improved battery life, and the ability to pass audio through the Mini DisplayPort connector. On July 20, 2011, the MacBook was discontinued for consumer purchases, but was still available to educational institutions until February 2012. It was the last Mac to use a plastic shell, as every Mac since has used aluminum.
Design
Unlike the MacBook Air, the MacBook follows the design first seen in the MacBook Pro, though it is rounder on the edges than previous laptops in the MacBook line. This model has an all-white, fingerprint-resistant glossy palm rest, unlike the grayish surface of its predecessor, and uses a multi-touch glass trackpad like the one found on the MacBook Pro. The video-out port is Mini DisplayPort. The bottom of the MacBook features a rubberized non-slip finish, which was prone to peeling off; Apple offered free replacements, fitted by authorised agents internationally, until at least 2015. The built-in battery of the late 2009 revision, a feature introduced earlier in the year with the MacBook Pro, is claimed by Apple to last seven hours, compared with five hours in the older models. In tests conducted by Macworld, however, the battery lasted only about four hours while playing video at full brightness with AirPort turned off; Apple's figure was calculated with brightness at the middle setting while browsing websites and editing word-processing documents, not while playing video at full brightness. Gizmodo reached about the same conclusion in its tests, but with AirPort turned on. The battery included in the mid-2010 model holds an additional five watt-hours over the previous model's and is claimed to last up to ten hours.
Reception
Despite being hailed by Slashgear as "one of the best entry-level notebooks Apple have produced," the unibody MacBook was criticized for its lack of a FireWire port and SD card slot. Nilay Patel of Engadget added that the USB ports were easily dented and that the bottom of the laptop became worn and discolored after a few days. He also drew particular attention to the fact that the price was not lowered, stating that the small price difference between the MacBook and the MacBook Pro made it a "wasted pricing opportunity." Most critics agreed, however, that the unibody MacBook's display is significantly better than its predecessor's: AppleInsider stated that the new display "delivers significantly better color and viewing angle performance" than the previous MacBook, but is still "not as vivid and wide-angle viewable as the MacBook Pro screens."
Model specifications
|}Notes:
1 Memory configurations noted are the options available from Apple. As memory is a user-replaceable part, custom configurations are possible, including two 2 GB modules for 4 GB of RAM, two 4 GB modules for 8 GB of RAM, and two 8 GB modules for 16 GB of RAM. Modules must be PC3-8500S, CL 7, 1.5 volts. Mixed pairs are also possible: 2 + 1 = 3 GB; 4 + 1 = 5 GB; 8 + 1 = 9 GB; 4 + 2 = 6 GB; 8 + 2 = 10 GB; 8 + 4 = 12 GB. Modules may be 1Rx8 or 2Rx8.
2 Hard drives noted are options available from Apple. As the hard drive is a user-replaceable part, there are custom configurations possible, including capacities up to 2 TB and SSDs. For rotating drives, 5,400 rpm is recommended, for power and cooling reasons.
3 Noted optical drive speed is its maximum. It is possible to replace the optical drive with a caddy which accommodates an SSD or a second hard drive; caddies intended for MacBook A1342 models should be used, as the similar caddies intended for Mac mini models are slightly different.
Criticisms and defects
The rubber bottom of unibody MacBooks has been known to peel off. Apple has acknowledged this as a flaw and will replace the bottom free of charge, with or without a warranty. Some consumers have also reported defects in the LCD displays of mid-2010–2011 models.
The MagSafe power adapter of MacBooks has been known to fray, break, and stop working. Following a lawsuit, Apple replaces affected adapters for US residents who purchased them (or received them as a gift) with a computer or as an accessory.
Some MacBooks are affected by the iSeeYou vulnerability, which potentially allows their iSight cameras to record the user without the user's knowledge.
Supported operating systems
See also
Comparison of Macintosh models
MacBook family
MacBook (12-inch)
MacBook Air
MacBook Pro
References
External links
MacBook Developer Note
MacBook Buyer's Guide
Computer-related introductions in 2006
X86 Macintosh computers
Products and services discontinued in 2012
|
10649253
|
https://en.wikipedia.org/wiki/Berkeley%20Packet%20Filter
|
Berkeley Packet Filter
|
The Berkeley Packet Filter (BPF) is a technology used in certain computer operating systems for programs that need to, among other things, analyze network traffic; eBPF is an extended BPF implemented as a JIT virtual machine in the Linux kernel. BPF provides a raw interface to data link layers, permitting raw link-layer packets to be sent and received. It is available on most Unix-like operating systems, and eBPF is available for Linux and for Microsoft Windows. In addition, if the driver for the network interface supports promiscuous mode, the interface can be put into that mode so that all packets on the network can be received, even those destined for other hosts.
BPF supports filtering packets, allowing a userspace process to supply a filter program that specifies which packets it wants to receive. For example, a tcpdump process may want to receive only packets that initiate a TCP connection. BPF returns only packets that pass the filter that the process supplies. This avoids copying unwanted packets from the operating system kernel to the process, greatly improving performance.
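A minimal sketch of how a process supplies such a filter through the libpcap API (the library described later in this article); the capture device, filter expression, and abbreviated error handling are illustrative choices, not details from the article.

/* Sketch: compile a textual filter into a BPF program with libpcap and
 * attach it, so unwanted packets are dropped before reaching user space.
 * Build (typically): cc filter.c -lpcap
 */
#include <pcap.h>
#include <stdio.h>

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *handle = pcap_open_live("any", 65535, 0, 1000, errbuf);
    if (!handle) { fprintf(stderr, "%s\n", errbuf); return 1; }

    /* Keep only TCP packets with the SYN flag set, i.e. packets that
     * initiate a TCP connection, as in the tcpdump example above. */
    struct bpf_program prog;
    if (pcap_compile(handle, &prog, "tcp[tcpflags] & tcp-syn != 0",
                     1 /* optimize */, PCAP_NETMASK_UNKNOWN) == -1 ||
        pcap_setfilter(handle, &prog) == -1) {
        fprintf(stderr, "%s\n", pcap_geterr(handle));
        return 1;
    }

    /* From here, pcap_loop()/pcap_next() deliver only matching packets. */
    pcap_freecode(&prog);
    pcap_close(handle);
    return 0;
}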
BPF is sometimes used to refer to just the filtering mechanism, rather than to the entire interface. Some systems, such as Linux and Tru64 UNIX, provide a raw interface to the data link layer that is distinct from BPF's raw interface, but still use the BPF filtering mechanisms for that raw interface.
Raw interface
BPF provides pseudo-devices that can be bound to a network interface; reads from the device will read buffers full of packets received on the network interface, and writes to the device will inject packets on the network interface.
In 2007, Robert Watson and Christian Peron added zero-copy buffer extensions to the BPF implementation in the FreeBSD operating system, allowing kernel packet capture in the device driver interrupt handler to write directly to user process memory in order to avoid the requirement for two copies for all packet data received via the BPF device. While one copy remains in the receipt path for user processes, this preserves the independence of different BPF device consumers, as well as allowing the packing of headers into the BPF buffer rather than copying complete packet data.
Filtering
BPF's filtering capabilities are implemented as an interpreter for a machine language for the BPF virtual machine, a 32-bit machine with fixed-length instructions, one accumulator, and one index register. Programs in that language can fetch data from the packet, perform arithmetic operations on that data, compare the results against constants or against other data in the packet, or test bits in the results, accepting or rejecting the packet based on the results of those tests.
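The flavor of this machine language can be seen in Linux's classic socket-filter interface, which exposes BPF instructions as C macros. The following sketch, which accepts only IPv4 Ethernet frames, is an illustration (requiring root on Linux) rather than an example from the article.

/* Hand-written classic BPF program attached with SO_ATTACH_FILTER:
 * the accumulator machine described above is visible in the
 * BPF_LD / BPF_JMP / BPF_RET opcodes.
 */
#include <sys/socket.h>
#include <linux/filter.h>
#include <linux/if_ether.h>
#include <arpa/inet.h>
#include <stdio.h>

int main(void)
{
    struct sock_filter code[] = {
        /* A <- 16-bit EtherType field at offset 12 of the frame */
        BPF_STMT(BPF_LD | BPF_H | BPF_ABS, 12),
        /* if (A == ETH_P_IP) fall through to accept, else skip to drop */
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, ETH_P_IP, 0, 1),
        /* accept: return up to 0xFFFF bytes of the packet */
        BPF_STMT(BPF_RET | BPF_K, 0xFFFF),
        /* drop: return 0 bytes */
        BPF_STMT(BPF_RET | BPF_K, 0),
    };
    struct sock_fprog prog = {
        .len = sizeof(code) / sizeof(code[0]),
        .filter = code,
    };

    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0) { perror("socket"); return 1; }
    if (setsockopt(fd, SOL_SOCKET, SO_ATTACH_FILTER, &prog, sizeof(prog)) < 0) {
        perror("setsockopt");
        return 1;
    }
    /* recv() on fd now sees only IPv4 frames. */
    return 0;
}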
BPF is often extended by "overloading" the load (ld) and store (st) instructions.
Traditional Unix-like BPF implementations can be used in user space, despite being written for kernel space; this is accomplished using preprocessor conditionals.
Extensions and optimizations
Some projects use BPF instruction sets or execution techniques different from the originals.
Some platforms, including FreeBSD, NetBSD, and WinPcap, use a just-in-time (JIT) compiler to convert BPF instructions into native code in order to improve performance. Linux includes a BPF JIT compiler which is disabled by default.
Kernel-mode interpreters for that same virtual machine language are used in raw data link layer mechanisms in other operating systems, such as Tru64 Unix, and for socket filters in the Linux kernel and in the WinPcap and Npcap packet capture mechanism.
Since version 3.18, the Linux kernel has included an extended BPF virtual machine with ten 64-bit registers, termed extended BPF (eBPF). It can be used for non-networking purposes, such as attaching eBPF programs to various tracepoints. Since kernel version 3.19, eBPF filters can be attached to sockets, and, since kernel version 4.1, to traffic control classifiers for the ingress and egress networking data path. The original version has been retroactively renamed classic BPF (cBPF) and is now obsolete: the Linux kernel runs eBPF only, and loaded cBPF bytecode is transparently translated into an eBPF representation in the kernel before program execution. All bytecode is verified before running to prevent denial-of-service attacks. Until Linux 5.3, the verifier prohibited the use of loops.
A user-mode interpreter for BPF is provided with the libpcap/WinPcap/Npcap implementation of the pcap API, so that, when capturing packets on systems without kernel-mode support for that filtering mechanism, packets can be filtered in user mode; code using the pcap API will work on both types of systems, although, on systems where the filtering is done in user mode, all packets, including those that will be filtered out, are copied from the kernel to user space. That interpreter can also be used when reading a file containing packets captured using pcap.
Another user-mode interpreter is uBPF, which supports JIT compilation and eBPF (without cBPF). Its code has been reused to provide eBPF support in non-Linux systems; Microsoft's "eBPF on Windows" builds on uBPF and the PREVAIL formal verifier. rBPF, a Rust rewrite of uBPF, is used by the Solana blockchain platform as its execution engine.
Programming
Classic BPF is generally emitted by a program from some very high-level textual rule describing the pattern to match; one such representation is found in libpcap. Classic BPF and eBPF can also be written either directly as machine code, or using an assembly language for a textual representation. Notable assemblers include the Linux kernel's bpf_asm tool (cBPF), bpfc (cBPF), and the ubpf assembler (eBPF). The bpftool command can also act as a disassembler for both flavors of BPF. The assembly languages are not necessarily compatible with each other.
eBPF bytecode has recently become a target of higher-level languages. LLVM added eBPF support in 2014, and GCC followed in 2019. Both toolkits allow compiling C and other supported languages to eBPF. A subset of P4 can also be compiled into eBPF using BCC, an LLVM-based compiler kit.
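As a rough sketch of what such compilation looks like in practice, the following hypothetical eBPF program is written in the restricted C dialect accepted by the LLVM BPF backend, using common libbpf conventions; the tracepoint, program name, and build command are illustrative assumptions rather than details from the article.

/* Minimal eBPF sketch: log a message on every process exec.
 * Built with the LLVM backend mentioned above, e.g.:
 *     clang -O2 -target bpf -c exec_trace.c -o exec_trace.o
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Attach point: the sched_process_exec tracepoint. */
SEC("tracepoint/sched/sched_process_exec")
int trace_exec(void *ctx)
{
    bpf_printk("process exec observed\n");
    return 0;
}

/* A GPL-compatible license is required to use certain kernel helpers. */
char LICENSE[] SEC("license") = "GPL";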
History
The original paper was written by Steven McCanne and Van Jacobson in 1992 while at Lawrence Berkeley Laboratory.
In August 2003, SCO Group publicly claimed that the Linux kernel was infringing Unix code which it owned. Programmers quickly discovered that one example it gave was the Berkeley Packet Filter, which in fact SCO never owned. SCO has not explained or acknowledged the mistake, but the ongoing legal action may eventually force an answer.
Security concerns
The Spectre attack could leverage the Linux kernel's eBPF interpreter or JIT compiler to extract data from other kernel processes. A JIT hardening feature in the kernel mitigates this vulnerability.
The Chinese computer-security firm Pangu Lab has claimed that BPF was used by the NSA in a sophisticated backdoor for Linux systems.
See also
Data link layer
References
Further reading
External links
– an example of conventional BPF
eBPF.io - Introduction, Tutorials & Community Resources
bpfc, a Berkeley Packet Filter compiler, Linux BPF JIT disassembler (part of netsniff-ng)
BPF Documentation, for Linux kernel
Linux filter documentation, for both cBPF and eBPF bytecode formats
Internet Protocol based network software
Packets (information technology)
|
52444382
|
https://en.wikipedia.org/wiki/Operations%20management%20for%20services
|
Operations management for services
|
Operations management for services has the functional responsibility for producing the services of an organization and providing them directly to its customers. It specifically deals with decisions required by operations managers for simultaneous production and consumption of an intangible product. These decisions concern the process, people, information and the system that produces and delivers the service. It differs from operations management in general, since the processes of service organizations differ from those of manufacturing organizations.
In a post-industrial economy, service firms provide most of the GDP and employment. As a result, management of service operations within these service firms is essential for the economy.
The services sector treats services in three ways: as intangible products, as a customer experience, and as a package of facilitating goods and services. Significant aspects of service as a product are a basis for guiding decisions made by service operations managers. The extent and variety of services industries in which operations managers make decisions provides the context for decision making.
The six types of decisions made by operations managers in service organizations are: process, quality management, capacity & scheduling, inventory, service supply chain and information technology.
Definition of services
There have been many different definitions of service. Russell and Taylor (2011) state that one of the most pervasive and earliest definitions is "services are intangible products". According to this definition, service is something that cannot be manufactured. It can be added after manufacturing (e.g. product repair) or it can stand alone as a service (e.g. dentistry) delivered directly to the customer. This definition has been expanded to include such ideas as "service is a customer experience". In this case the customer is brought into the definition as the experience the customer receives while "consuming" the service.
A third definition of service concerns the perceived service as consisting of physical facilitating goods, explicit service and implicit service. In this case the facilitating goods are the buildings and inventory used to provide the service. For example, in a restaurant the facilitating goods are the building and the food. The explicit service is what is perceived as the observable part of the service (the sights, sounds and look of the service). In a restaurant the explicit service is the time spent waiting for service, the appearance of the facility and the employees, and the ambience of sounds and light and the decor. The implicit service is the feeling of safety, psychological well-being and happiness associated with the service.
Comparison of manufacturing and services
According to Fitzsimmons, Fitzsimmons and Bordoloi (2014) differences between manufactured goods and services are as follows:
Simultaneous production and consumption. High contact services (e.g. haircuts) must be produced in the presence of the customer, since they are consumed as produced. As a result, services cannot be produced in one location and transported to another, like goods. Service operations are therefore highly dispersed geographically close to the customers. Furthermore, simultaneous production and consumption allows the possibility of self-service involving the customer at the point of consumption (e.g. gas stations). Only low-contact services produced in the "backroom" (e.g., check clearing) can be provided away from the customer.
Perishable. Since services are perishable, they cannot be stored for later use. In manufacturing companies, inventory can be used to buffer supply and demand. Since buffering is not possible in services, highly variable demand must be met by operations or demand modified to meet supply.
Ownership. In manufacturing, ownership is transferred to the customer. Ownership is not transferred for service. As a result, services cannot be owned or resold.
Tangibility. A service is intangible making it difficult for a customer to evaluate the service in advance. In the case of a good, customers can see it and evaluate it. Assurance of quality service is often done by licensing, government regulation, and branding to assure customers they will receive a quality service.
These four comparisons indicate how management of service operations is quite different from manufacturing regarding such issues as capacity requirements (highly variable), quality assurance (hard to quantify), location of facilities (dispersed), and interaction with the customer during delivery of the service (product and process design).
Service industries
Industries have been defined by economists as consisting of four parts: Agriculture, Mining and Construction, Manufacturing, and Service. Services have existed for centuries. Early service was associated with servants. Servants were hired to do tasks that the wealthy did not want to do for themselves (e.g. cleaning the house, cooking, and washing clothes). Later, services became more organized and were provided to the general public.
In 1900 the U.S. service industry (consisting of, for example, banks, professional services, schools and general stores) was fragmented, except for the railroads and communications. Services were largely local in nature and owned by entrepreneurs and families. In 1900 the U.S. had 31% employment in services, 31% in manufacturing and 38% in agriculture.
Services have now evolved to become the dominant form of employment in industrialized economies. Much of the world has progressed, or is progressing, from agricultural to industrial and now post-industrial economies. The U.S. Bureau of Labor Statistics provides a table of the employment of the 151 million people by industry in the U.S. for 2014.
The table shows that service industries now constitute 83% of employment in the U.S., while agriculture, mining, construction and manufacturing are only 17% of the total employment. Service industries are very diversified ranging from those that are highly capital intensive (e.g. banks, utilities, airlines, and hospitals) to those that are highly people intensive (e.g. retail, wholesale, and professional services). In capital intensive services the focus is more on technology and automation, while in people intensive services the focus is more on managing service employees that deliver the service.
Service and manufacturing industries are highly interrelated. Manufacturing provides tangible facilitating goods needed to provide services; and services such as banking, accounting and information systems provide important service inputs to manufacturing. Manufacturing companies have an opportunity to provide more services along with their products. This can be an important point of product differentiation, leading to increased sales and profitability for manufacturers.
While the focus is often on service industries, there is an opportunity to apply service principles to internal services in an organization, particularly by focusing on internal customers. Internal services such as payroll, accounting, legal, information systems or human resources often have not identified their internal customers, nor do they understand their customer needs. Service ideas ranging from process design to lean systems, quality management, capacity and scheduling have been widely applied to internal services.
Service design
Service design begins with a business strategy and service strategy. The business strategy defines what business the firm is in, for example, the Walt Disney Company defines its business strategy "as making people happy." A business strategy also defines the target market, competitors, financial goals, new products, how the company competes, and perhaps some aspects of operations.
Following from the business strategy is the service concept. It must provide the rationale for why the customer should buy the service offered. It defines what the customer is receiving and what the service organization is providing. The service concept includes:
Organizing Idea. The vision and essence of the service.
Service Provided. The process and results designed by the provider.
Service Received. The customer experience and outcomes expected.
Managers can use the service concept to create organizational alignment and develop new services. It provides a means for describing the service business from an operations point of view.
After defining the service concept, operations can proceed to define the service-product bundle (or service package) for the organization. It consists of five parts: service facility, facilitating goods, information, explicit service and implicit service. It is important to carefully define each of these elements so that operations can subsequently design and manage a service operation. The service-product bundle must be defined before operations decisions are made.
An example of service-product bundle characteristics follows:
Service Facility: Accessible by public transportation, sufficient parking, interior decorating, architecture, facility layout and traffic flow
Facilitating goods: sufficient inventory, quality and selection
Information: accurate, up-to-date, timely, and useful to the customer and service providers
Explicit service: waiting time, training and appearance of personnel, and consistency
Implicit service: Sense of well-being, privacy and security, atmosphere, attitude of service providers.
Once the service package is specified, operations is ready to make decisions concerning the process, quality, capacity, inventory, supply chain and information systems. These are the six decision responsibilities of service operations. Other decision responsibilities such as market choice, product positioning, pricing, advertising and channels belong to the marketing function. Finance takes care of financial reporting, investments, capitalization, and profitability.
Operations decisions
Process decisions
Process decisions include the physical processes and the people that deliver the services to the customer. A service process consists of all the routines, tasks and steps that are used to deliver service to customers along with the jobs and training for service employees. There are many ways to organize a process to provide customer service in an effective and efficient manner to deliver the service-product bundle. Several ideas have been advanced on how to design a service process.
Customer contact
Design of a service system must consider the degree of customer contact. The importance of customer contact was first noted by Chase and Tansik (1983). They argued that high customer contact processes should be designed and managed differently from low-contact processes. High-contact processes have the customer in the system while the service is provided. This can lead to difficulties in standardizing the service, or inefficiencies when the customer makes demands or expects unique services. On the other hand, high contact also provides the possibility of self-service, where customers provide part of the service themselves (e.g. filling your own gas tank, or packing your own groceries). Low-contact services are performed away from the customer in what is often called "the back room". In this case, the service process can be more standardized and efficient (e.g. check clearing in a bank, filling orders in a warehouse), since the customer is not in the system to request preferences, customization or changes. Low-contact services can be managed more like manufacturing; high-contact services cannot.
Production-line approach
In 1972 Levitt introduced the "production-line approach to service". He argued that service processes could be made more efficient by standardizing and automating them like manufacturing. He gave the example of McDonald's, which has standardized both the services at the front counter and the backroom production of the food. It has limited the menu, simplified the jobs, trained the managers (at "Hamburger U"), automated production and instituted standards for courtesy, cleanliness, speed and quality. As a result, McDonald's has become a model for other service processes designed for high efficiency, not only in fast food but in many other services. At the same time, this leaves open the option of more customized and flexible services for customers who are willing to pay more for "better" or more personalized service. While these services are less efficient, they cater more to unique customers' needs.
Service process matrices
Many different service process matrices have been proposed for explaining the relationship between service products that are selected and corresponding processes. One of these is shown below.
The Service Delivery System Matrix by Collier and Meyer (1998) illustrates the various types of routings used for service processes, depending on the amount of customization and customer involvement in the process. With high levels of customization and customer involvement, there are many pathways and jumbled flows for service. As a result, the service delivery of customer-routed services is less efficient than that of co-routed or provider-routed processes, which have less customization and less customer involvement. The process that should be used for each combination of customization and customer involvement is shown on the diagonal of this matrix.
Self-service
Self-service is in wide use. For example, in the 1960s gas station attendants came out and pumped the gas, cleaned the windshield and even checked the oil. Fast food is famous for self-service, since customers have been trained to order their own food, pay immediately, find a table, and clean up the trash. ATMs have replaced many traditional tellers, and online banking provides even more self-service.
When self-service is accepted by the customer, it can reduce costs and even provide better service in the customer's eyes—faster service with less hassle. Self-service falls in the provider-routed or co-routed part of the Service delivery matrix. Services that were previously customer-routed have been moved down the diagonal to be more efficient and accepted by customers.
Service Blueprint
The service blueprint is a way to describe the flow of a customer through a service operation from start to finish, along with the actions provided by the service providers, both in interaction with the customer and in the "back room" out of sight of the customer. For example, if a customer wishes to purchase a suit, the service blueprint starts with entry to the store; the customer is greeted by a sales representative and provides information on his or her needs; the sales representative searches for appropriate suits; one or more suits are selected and tried on for a fitting; a suit is chosen and alterations are done (which take place away from the customer); finally, the customer pays for the suit and returns later to pick it up. A blueprint flowchart shows every step in the process and can be used to illustrate the process and improve it.
Lean thinking
If lean thinking is applied, the time taken for each step in a service blueprint flowchart can be recorded, or a separate value-stream map can be constructed. The process can then be analyzed for time reductions to reduce waiting and non-value-added steps, and changes made to reduce time and waste in the process. Waste is anything that does not add value to the process, including waiting time in line, the possibility of more self-service, customer hassle, and defects in service. But lean thinking also requires attention to the customer and the people providing the service. It is important to apply principles such as completely solving the customer's problem, not wasting the customer's time, and providing exactly what the customer requires.
Leite and Vieira (2015) state that service managers must realize that the customer will be happy if the service provided meets or exceeds expectations. Also the interaction between the customer and the people providing the service is essential to achieve satisfied customers. Employee involvement is often emphasized as part of lean thinking to achieve high levels of commitment by service employees.
Queuing
Queuing is an analytic method for determining waiting time when customers must wait in line to get service. The length of the queue and the waiting time can be calculated based on the arrival rate, service rate, number of servers and type of lines. There are many formulas for various types of queuing theory problems. The formulas generally predict that the average service time must be significantly less than the average time between arrivals when there is randomness in arrivals and/or service times, because a long line builds up whenever arrivals happen to come faster than average while service times run longer than average. If the distributions of arrival times and service times are known, formulas are available for calculating the exact waiting times and line lengths for many different queuing configurations of servers, types of lines, server distributions and arrival distributions.
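As a minimal illustration of such formulas, the following sketch computes the standard results for an M/M/1 queue (one server, Poisson arrivals, exponential service times); the arrival and service rates are made-up numbers.

/* M/M/1 queue: the line stays finite only while the utilization
 * rho = lambda/mu is below 1, i.e. the "service faster than
 * arrivals on average" condition described above.
 */
#include <stdio.h>

int main(void)
{
    double lambda = 8.0;  /* arrival rate, customers per hour */
    double mu     = 10.0; /* service rate, customers per hour */

    double rho = lambda / mu;             /* server utilization */
    double L   = rho / (1.0 - rho);       /* average number in system */
    double Lq  = rho * rho / (1.0 - rho); /* average number waiting in line */
    double W   = 1.0 / (mu - lambda);     /* average time in system, hours */
    double Wq  = rho / (mu - lambda);     /* average wait in line, hours */

    printf("utilization %.2f, in system %.2f, in queue %.2f\n", rho, L, Lq);
    printf("time in system %.2f h, wait in line %.2f h\n", W, Wq);
    return 0;
}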
Service-profit chain
Heskett, Sasser and Schlensinger (1997) proposed the service-profit chain as a way to design service processes. The service-profit chain links various aspects and tasks required to deliver superior service and profits. It starts with a high level of internal quality leading to employee satisfaction and productivity to deliver superior external customer service leading to customer satisfaction, customer loyalty and finally high revenues and profits.
Every link in this chain is important, and the linkage between the service providers and the customer is essential in service operations. The service manager should not break any of the links, if the result is to be high profitability and growth.
Quality management
SERVQUAL measurement
Using the customer experience approach, a questionnaire called SERVQUAL has been developed to measure the customer's perception of the service. The dimensions of SERVQUAL are designed to measure the customer experience in both explicit and implicit measures. The dimensions are:
Tangible: Cleanliness, appearance of facilities and employees
Reliability: Accurate, dependable and consistent services without errors
Responsiveness: Assisting customers promptly
Assurance: Conveying knowledge, trust and confidence
Empathy: Caring, approachability and relating to customers
A debate has ensued about whether SERVQUAL should measure customer service in absolute terms or relative to expectations. Some argue that if high levels on all SERVQUAL dimensions are provided, then the service is high quality. Others argue that the service result is ultimately judged by the customer relative to the customer's expectations, not by the service provider, so that if customer expectations are low, even low levels on SERVQUAL dimensions provide high quality.
Quality management approaches
Quality management practices for services have much in common with manufacturing, despite the fact that the product is intangible. The following approaches are widely used for quality improvement in both manufacturing and services:
The Baldrige Awards: A comprehensive framework for quality improvement in organizations
The W. Edwards Deming Management Method: Fourteen Points for Management
Joseph Juran's Approach: Planning, Improvement and Control
Six Sigma: DMAIC (Define, Measure, Analyze, Improve and Control)
These approaches have several things in common. They begin with defining and measuring the customer's needs (e.g. using SERVQUAL). Any service that does not meet a customer's need is considered a defect. Then these approaches seek to reduce defects through statistical methods, cause-and-effect analysis, problem solving teams, and involvement of employees. They focus on improving the processes that underlie production of the service.
In addition to the issue of intangibility, there are two approaches to quality that are unique to service operations management.
Service recovery
For manufactured products, quality problems are handled through warranties, returns and repair after the product is delivered. In high contact services there is no time to fix quality problems later; they must be handled by service recovery as the service is delivered. For example, if soup is spilled on the customer in a restaurant, the waiter might apologize, offer to pay to have the suit cleaned and provide a free meal. If a hotel room is not ready when promised, the staff could apologize, offer to store the customer's luggage or provide an upgraded room. Service recovery is intended to fix the problem on the spot and go even further to offer the customer some form of consolation and compensation. The objective is to make the customer satisfied with the situation, even though there was a service failure.
Service guarantee
A service guarantee is similar to a manufacturing guarantee, except the service product cannot be returned. A service guarantee provides a specific monetary reward for failure of service delivery. Some examples are:
Your package will be delivered by the time promised or you will not pay.
We will fix your automobile or give you $100 if you must bring it back for repair.
Customers who are not satisfied with their haircut get the next haircut free.
Service guarantees serve to assure the customer of quality and they provide a way for the employees to know the cost of service failure.
Capacity and scheduling
Forecasting
Forecasting demand is a prerequisite for managing capacity and scheduling. Forecasting demand often uses big data to predict customer behavior. The data comes from scanners at retail locations or other service locations. In some cases traditional time series methods are also used to predict trends and seasonality. Future demand is forecasted based on past demand patterns. Many of the same time-series and statistical methods for forecasting are used for manufacturing or service operations.
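A minimal sketch of one such time-series method, simple exponential smoothing, follows; the demand history and smoothing constant are illustrative assumptions, not data from the text.

/* Simple exponential smoothing: each new forecast blends the latest
 * actual demand with the previous forecast.
 */
#include <stdio.h>

int main(void)
{
    double demand[] = {120, 132, 101, 140, 156, 148, 160};
    int n = sizeof(demand) / sizeof(demand[0]);
    double alpha = 0.3;          /* smoothing constant, 0 < alpha < 1 */
    double forecast = demand[0]; /* initialize with the first observation */

    for (int t = 1; t < n; t++)
        forecast = alpha * demand[t] + (1.0 - alpha) * forecast;

    printf("forecast for the next period: %.1f\n", forecast);
    return 0;
}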
Capacity planning
Capacity planning is quite different between manufacturing and services, given that services cannot be stored or shipped to another location. As a result, service locations are widely dispersed to be near the customer. Customers are only willing to travel short distances to receive most services. Exceptions are health care when the illness requires a specialist, airline transportation when the service is to move the customer, and other services where local expertise is not available. Aside from these exceptions, location analysis depends on the "drawing power" based on the distance a customer is willing to travel to a service site relative to competitive offerings and locations. The drawing power of a site for a particular customer is high if the site is close by and provides the required service. High drawing power is related to high sales and profits. This is very different from manufacturing locations, which depend on the cost of building a factory plus the cost of transporting the goods to the customers. Manufacturing plants are located on the basis of low costs, whereas service sites are located on the basis of high revenues and profits.
A second difference from manufacturing is planning for capacity utilization once a facility is built. Since the product cannot be stored in inventory and sold later, service capacity is perishable and must meet peak demand at any point in time. There are two ways to deal with this problem. First, management can attempt to reduce peak demand and level it over time by the following actions.
Higher prices during peak-demand times
A reservation system to limit peak demand
Advertising and promotion to shift peak demand
Management can also use various methods to manage the supply of services including:
Part-time labor
Hiring and layoff of employees
Using overtime
Subcontracting
While some of these same mechanisms are used in manufacturing, they are much more crucial in service operations.
Revenue management
Revenue management is unique to services, since capacity is perishable. A classic example is the airline industry: when the plane leaves the runway, empty seats generate no revenue, but the cost of the flight is almost the same. As a result, mathematical models have been formulated to allocate capacity at various prices and times as the flight is booked in advance. Initially, a certain number of seats are reserved for first class, coach, premium coach and various other categories. Based on the elasticity of demand, seat prices are lowered at the last minute to fill empty seats and maximize the revenue of the flight. Similar models have also been developed for revenue management in hotels, where capacity is also perishable.
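The article does not name a specific model; one classical two-fare example is Littlewood's rule, sketched below with a made-up discrete demand distribution: keep protecting seats for the high fare as long as the high fare, weighted by the probability that high-fare demand exceeds the seats already protected, is worth more than selling the next seat at the low fare.

/* Littlewood's two-fare rule: protect the (y+1)-th seat for the high
 * fare while  high_fare * P(high-fare demand > y) > low_fare.
 */
#include <stdio.h>

int main(void)
{
    double high_fare = 400.0, low_fare = 150.0;
    /* p[d] = probability that exactly d high-fare customers show up */
    double p[] = {0.05, 0.10, 0.15, 0.20, 0.20, 0.15, 0.10, 0.05};
    int max_d = sizeof(p) / sizeof(p[0]) - 1;

    int protect = 0;
    for (int y = 0; y <= max_d; y++) {
        double tail = 0.0;            /* P(high-fare demand > y) */
        for (int d = y + 1; d <= max_d; d++)
            tail += p[d];
        if (high_fare * tail > low_fare)
            protect = y + 1;          /* protecting one more seat pays */
    }
    printf("protect %d seats for the high fare; sell the rest low\n", protect);
    return 0;
}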
Scheduling
Scheduling has some differences between manufacturing and services. In manufacturing, jobs are scheduled through a factory to sequence them in the best order to meet due dates and reduce costs. In services, it is customers who are being scheduled, so waiting time becomes much more critical: while manufacturing orders do not mind waiting in line or in inventory, real customers do. Some scheduling applications for services are scheduling patients to operating rooms in hospitals and scheduling students to classes. Many scheduling problems have been solved using operations research methods to optimize the schedule.
Inventory
Inventory management and control is needed in service operations with facilitating goods. Almost every service uses some amount of facilitating goods. The presence of facilitating goods is critical in retail and wholesale operations, but these operations do not manufacture anything; rather, they distribute goods and provide service while doing so. One difference from manufacturing inventories is that services use only finished goods, while manufacturing has finished-goods, work-in-process and raw-materials inventories. As a result, manufacturing uses a materials requirements planning system, while services do not. Services use replenishment inventory control systems such as order-point and periodic-review systems. A sketch of the kind of calculation an order-point system performs follows this paragraph.
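In the sketch below, the reorder point is computed as expected demand during the lead time plus safety stock; the demand figures, lead time and service-level factor are illustrative assumptions.

/* Continuous-review order point: reorder when inventory on hand falls
 * to lead-time demand plus safety stock. Compile with -lm for sqrt().
 */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double daily_demand   = 40.0; /* average units used per day */
    double demand_stddev  = 8.0;  /* standard deviation of daily demand */
    double lead_time_days = 5.0;  /* supplier lead time */
    double z = 1.65;              /* safety factor for ~95% service level */

    double safety_stock  = z * demand_stddev * sqrt(lead_time_days);
    double reorder_point = daily_demand * lead_time_days + safety_stock;

    printf("safety stock: %.0f units\n", safety_stock);
    printf("reorder when inventory falls to %.0f units\n", reorder_point);
    return 0;
}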
Service supply chains
Supply chains for service operations are critical to supply facilitating goods. A typical hospital supply chain is an example. A hospital will use many goods from suppliers to construct and furnish the building. During day-to-day operation of the hospital, inventories of supplies will be held for the operating rooms and throughout the building. The pharmacy will hold drugs and the kitchen will need supplies of food. The supply chain of facilitating goods in hospitals is extensive.
Purchasing controls a large part of costs in retail and wholesale operations; approximately 75% of all costs are for purchased goods. Outside of retail and wholesale operations, facilitating goods are a much smaller part of total costs, reaching a low of about 10% for most professional services. Both manufacturing and service organizations purchase goods and must deal with outsourcing and offshoring, as well as domestic products.
Service inputs are critical for manufacturing including capital from banks, energy, information systems and human resources. Services are part of the manufacturing supply chain, just like the physical inputs of products from other manufacturing companies.
Both manufacturing and service operations can purchase services from outside the organization. Internal business services such as accounting, legal, human resources, call centers, and information systems may be outsourced in part or entirely. Some of these services can also be purchased from offshore. Logistics services may be outsourced to Third Party Logistics (3PL) providers. These services include transportation, warehousing, order fulfillment, returns and tariffs.
Information technology
The Internet and information technology have dramatically changed the delivery of services. Some of the major changes are as follows:
Providing information and knowledge directly to consumers. Before the Internet, consumers used a variety of sources for acquiring knowledge including libraries, phone calls, universities and personal contacts. Now information can be provided immediately as a service by searching the Internet.
Providing service at a distance. Services such as call centers, banking, entertainment and legal services can be provided over long distances, even internationally.
Reservations can be made on the Internet, reserving capacity more easily than by calling ahead.
Facilitating goods can be ordered directly over the Internet and delivered without a trip to a retail store. The services provided include browsing for merchandise, order entry, order checking, payment, order confirmation, notification of delivery and return services.
Internal information systems now provide an array of management information to help managers make better decisions.
Management science and operations research (MSOR)
Analysis using MSOR methods has been extensive in services. Areas where they have been heavily applied are in inventory, capacity, scheduling, queuing and forecasting. With the advent of the Internet, information systems, big data and analytics, there are many opportunities to make improvements in decision making for services. The analytic techniques include statistics, management science and operations research.
References
Further reading
Management by type
|
22905574
|
https://en.wikipedia.org/wiki/GEC%202050
|
GEC 2050
|
The GEC 2050 was an 8-bit minicomputer produced during the 1970s, initially by Marconi Elliott Computer Systems of the UK, before the company renamed itself GEC Computers Limited. The first models were labeled MECS 2050, before being renamed GEC 2050.
The GEC 2050 was commonly used as a Remote Job Entry station, supporting a punched card reader, line printer, system console, and a data link to a remote mainframe computer system, and GEC Computers sold a complete RJE package including the system, peripherals, and RJE software. Another turnkey application was a ticketing system, whose customers included Arsenal Football Club. The system was also commonly used for road traffic control and industrial process automation.
The GEC 2050 supported up to 64KiB of magnetic core memory in 4KiB, 8KiB and 16KiB modules. The system had a single Channel Controller for performing autonomous I/O, and used the same peripheral I/O controllers as the GEC 4000 series minicomputer.
Instruction set
Although CISC, the instruction set is sufficiently simple to be tabulated in its entirety.
Using the opcode 29 as an illustration, the assembler code (AD X2,X1,offset) causes the contents of the memory location 'offset(X1)' to be added to register X2. Thus, register X1 is being used as the index register, and the offset, v, is specified in the second byte of the instruction. G is a dummy index register whose value is always zero, and hence causes the offsets to be treated as absolute addresses in the zeroth (global) segment. (Incidentally, since X3 is the standard index register, the assembler program allows ',X3,address' to be abbreviated to ',address'.)
The conditional jump instructions are listed in pairs, the former opcode is for a forward jump, and the latter one for a backward jump. Again, the offset of the jump is obtained from the second byte of the instruction. Thus, all instructions in rows 0 to 7 and row 9 consist of two bytes (the opcode and a data byte) while all the other instructions consist of just a single opcode byte.
The main accumulator register, A, can be set to be 1, 2, 3 or 4 bytes in length, using the SETL instructions. This controls how many bytes are loaded (or stored) in a memory-access instruction. The JIL instruction performs a Jump Indirect, like the JI instruction, but saves the value of the program counter, S, into index register X2. This allows very simple non-recursive subroutine calls to be achieved. More complex subroutine calls involve the use of the PREP instruction, which saves the return information in the first bytes of the current memory segment. Such calls, too, cannot be recursive.
User experience
This section describes a work session on this computer, at one typical installation in 1975. The programmer might arrive, to work on a Fortran-II program that he had already started writing in the previous session, carrying a teleprinter paper listing of that program annotated with the new changes to be made, and the punched tape containing the machine-readable source code of the program. He would first need to turn on the computer at the switch on the conventional mains socket on the wall, and then at the front-panel on/off switch. Since the magnetic core memory, which is non-volatile, would generally still contain the previous user's program, the programmer might need to load the punched tape called Minisystem (containing the object code of a small memory-monitor program). This tape, which was stored in a small cardboard box on a shelf near the computer, would be entered from the left of the tape-reader. The tape-reader was an integral part of the front panel of the computer, and would spill out the tape that it had read on to the floor, on the right-hand side. Once read, the Minisystem could be started by flicking the Run switch on the front panel.
COMMAND
>L
L 049A
A 0522
D 063E
LINK 0691
EDIT 1090
MAIN 155E
28A2 3FFF
>
The text editor program, EDIT, could then be called from the teleprinter keyboard, at the Minisystem's '>' prompt. The programmer would then load the source tape into the reader, and while this, too, was being read in, and spilled out all over the floor, the programmer could be busy winding up the Minisystem tape, into a tidy reel again, using a hand-turned winch.
Eventually, once the source tape had finished being read, the text editor program would prompt for a new command, which was the invitation to edit the program. Though having changed little in effect over the decades, editing has changed enormously in feeling: only one line of the program was 'displayed' at a time (physically printing it out on the paper); inserted text was printed below the point in the line where it was being inserted, and the rubout key merely crossed-out the text that was to be deleted; the string-find and string-substitute facilities were very rudimentary; and the teleprinter worked at 110 baud (making an enormous clunking and whirring racket as it did so).
At the end of the edit session, the new version of the source program would be output: both as a typed listing, and as a new punched tape. Whilst the paper-tape punch was doing this, again spilling out its product (albeit not so fast as the reader, and off to the left of the machine) from its front panel mounting, the programmer could be winding up the old version of the source tape, for it to be kept as a backup version. The free end of the new tape, which was still being punched out, could be labelled in pencil with its program name, version number, and date of punching.
Unfortunately, with only 16 KiBytes of core store, the Minisystem and Fortran compiler would not both fit in memory together, so the next stage would be to load the Fortran compiler tape (which was stored in another cardboard box on the shelf in the computer room). Whilst this was being read in, and spilling out the other side, the newly punched source tape could be torn off from the free end that was protruding out of the punch, and wound up using the hand winch. It would be loaded into the tape reader once the compiler had finished being read in, and the compiler tape would be wound back into a tidy reel.
The first pass of the source tape through the tape reader was generally used just for checking for syntax errors in the program, so the generation of the object tape from the tape punch would be suppressed. If any errors or warnings were detected, it would be necessary to load the Minisystem tape again, and to run the editor program to make the corrections, and to generate a new version of the source tape. Otherwise, the source tape could be wound up again, and loaded back into the tape reader for a second pass. This time, it would be read in, haltingly, whilst the paper tape punch worked flat-out to produce the corresponding object tape (usually two or three times longer in length than the Fortran source tape).
At the end, with two tapes all over the floor, the Minisystem would have to be read once again, whilst the object and source tapes were being wound up. The linking-loader program, LINK, could then be called from the keyboard, at the Minisystem's prompt, and the object tape fed through the reader. The linking-loader also required the library tape, containing the Fortran library functions, to be loaded into the reader. Both tapes would eventually need to be wound up, but this tended not to be done immediately, because of the programmer's eagerness at finally being in a position to run the program. The user's program (called MAIN) could be called at the Minisystem's prompt.
Depending on what happened during the program execution, the programmer might need to read the newest source tape back in to the editor program, yet again, ready to go round the software development cycle once more.
See also
GEC Computers Limited
GEC 4000 series
External links
Computing at Chilton, GEC 2050 Remote Job Entry Station
GEC 2050 processor
Minicomputers
GEC Computers
Remote job entry
|
17980101
|
https://en.wikipedia.org/wiki/SOHH
|
SOHH
|
SOHH (Support Online Hip Hop) is a hip hop news website. Felicia Palmer and Steven Samuel founded the website in 1996. In 2000, Rolling Stone magazine writer Mark Binelli called it the "best overall hip-hop site".
History
Felicia Palmer attended Cornell University to become a veterinarian, but due to the intense science courses, she changed her focus to business management studies. After graduating, she worked for a small licensing firm founded by two young women. She credited her experience there with giving her the motivation to launch SOHH. Her husband, Steven Samuel, was part of the rap group the Troubleneck Brothers, which released its first album, Fuck All Y'all, in 1992. After leaving the group, Samuel worked as a postal foot messenger in New York. Palmer and Samuel officially launched the website in 1996. After twelve months, membership had grown to 75,000 people. Samuel stated, "We picked up a book on HTML and pretty much figured out how to launch the site in a week". SOHH began as a magazine, but was changed to a website due to high printing costs. It is the longest-running online hip hop community. The website averages 1.5 million unique visitors a month.
Defacement
In late June 2008, a series of attacks took place against the website. The attack took place in stages: the message board section was infiltrated first, and SOHH then shut it down. On June 23, 2008, an apparent hate group identifying itself as "Anonymous" organized DDoS attacks against the website, successfully eliminating 60% of the website's service capacity. On June 27, 2008, the hackers used cross-site scripting to deface the website's main page with satirical images and headlines referencing numerous racial stereotypes and slurs, and also successfully stole information from SOHH employees.
Following the defacement, the website was shut down by its administration. AllHipHop, an unrelated website, also had its forum raided. By the evening of June 27, 2008, AllHipHop.com was back online and released an official statement in which it referred to the perpetrators as "cyber terrorists" and announced that it would cooperate with SOHH "...to ensure the capture of these criminals and prevention of repeat offenses." On June 30, 2008, SOHH placed an official statement regarding the attack on its main page. The statement alleged that the attackers were "specifically targeting Black, Hispanic, Asian and Jewish youth who ascribe to hip-hop culture," and listed several hip hop oriented websites which it claimed were also attacked by the hackers. It concluded with a notice that it would be cooperating with the FBI.
When interviewed, Felicia Palmer, co-founder of SOHH, confirmed that an FBI probe was ongoing, and that each time the website was attacked, data on the suspects was retrieved. Palmer indicated that some of the attackers were "located within the United States, between the ages of 16-21" and that a few of them were based in Waco, Texas. Initially under the impression that the hackers were pranksters, she came to believe they were "beyond pranksters" and the attack was racist in nature.
References
External links
Companies based in Jersey City, New Jersey
Hip hop websites
Internet properties established in 1996
Mass media in Hudson County, New Jersey
American entertainment news websites
|
25201872
|
https://en.wikipedia.org/wiki/Indira%20Gandhi%20Delhi%20Technical%20University%20for%20Women
|
Indira Gandhi Delhi Technical University for Women
|
Indira Gandhi Delhi Technical University for Women (IGDTUW) is an all-women's university located in New Delhi, India, on the heritage campus at Kashmere Gate, Delhi. It was founded as the Indira Gandhi Institute of Technology in 1998. In May 2013 it gained autonomy and became the first women's technical university in India established by the Govt. of Delhi. The university offers BTech, MTech, and PhD programs in four branches of engineering: Computer Science and Engineering, Electronics and Communication Engineering, Mechanical and Automation Engineering, and Information Technology. It also offers a BBA program, a five-year Bachelor of Architecture (B.Arch.) program, and a two-year postgraduate program in Urban Planning (M.Plan).
History
Indira Gandhi Institute of Technology (IGIT) was established by the Department of Training and Technical Education, Government of Delhi, in 1998 as the first women's engineering college in India. It was the first institute to become a constituent college of Guru Gobind Singh Indraprastha University. In May 2013, IGIT acquired the status of the first women's technical university under the Government of Delhi and was rechristened Indira Gandhi Delhi Technical University for Women.
Indira Gandhi Delhi Technical University for Women (IGDTUW) was established by the Government of NCT of Delhi in May 2013, vide Delhi Act 09 of 2012, as a non-affiliating university to facilitate and promote studies, research, technology, innovation, incubation and extension work in emerging areas of professional education among women, with a focus on engineering, technology, applied sciences, architecture and allied areas, and with the objective of achieving excellence in these and related fields.
The erstwhile Indira Gandhi Institute of Technology (IGIT) was established in 1998 by the Directorate of Training and Technical Education, Government of NCT of Delhi, as the first engineering college for women only. In 2002, it became the first constituent college of Guru Gobind Singh Indraprastha University. Over the years, IGIT contributed significantly to the growth of quality technical education in the country, becoming not only one of the premier institutions of Delhi but also one of the most prestigious colleges in north India.
Administration
Dr. Nupur Prakash was the founding Vice-Chancellor of Indira Gandhi Delhi Technical University for Women, established by the Government of Delhi in 2013 as the first women's technical university. Dr. R. K. Singh holds the position of Officiating Registrar of the university. Currently, Dr. Amita Dev is the Vice-Chancellor.
Departments
The departments offering courses include:
Computer Science and Engineering
Electronics and Communication Engineering
Information Technology
Mechanical and Automation Engineering
Applied Sciences & Humanities
Management
Architecture and Planning
Courses and admissions
The university offers undergraduate Bachelor of Technology courses in four fields: Computer Science and Engineering (CSE), Electronics and Communication Engineering (ECE), Information Technology (IT), and Mechanical and Automation Engineering (MAE). It also offers an undergraduate Bachelor of Architecture (B.Arch.) course, a postgraduate M.Plan (Urban Planning) course, and other postgraduate courses, along with PhD programs.
Undergraduate Admissions
For BTech courses, students are selected through Joint Admission Counseling (JAC), based on their JEE Main rank.
The JAC Delhi members are:
1. Delhi Technological University
2. Netaji Subhas University of Technology
3. Indraprastha Institute of Information Technology Delhi (IIIT-Delhi)
4. Indira Gandhi Delhi Technical University for Women (IGDTUW)
Postgraduate Admissions
For M.Tech. and M.Plan courses, students are admitted based on valid GATE scores, which also make them eligible for a monthly GATE scholarship. PhD admission is based on an entrance test and an interview.
Scholarships and Financial Aid
IGDTUW offers scholarships to all GATE-qualified M.Tech/M.Plan students and to a few full-time research scholars registered in the PhD program under various fellowship schemes and sponsored research projects. Fellowships are granted to candidates who qualify in the RAT examination and an interview. The number of fellowships is limited and subject to the availability of financial assistance.
Rankings
The National Institutional Ranking Framework (NIRF) ranked it 145 among engineering colleges in 2020.
Facilities
The university campus has an auditorium, a library, sporting facilities, academic laboratories, a dispensary, a computer centre, a bank and a guest house. It also offers a common room for students equipped with fitness equipment, a yoga facility and indoor games.
Library cum Learning Resource Centre (LRC)
The Learning Resource Centre (LRC) serves as the premier source of academic information for the IGDTUW community through its collections and its educational and consulting services. The LRC has a highly selective collection of print, electronic, and audiovisual materials in the areas of science, engineering, technology and management to support the learning and research activities of students and faculty. A number of e-journals are subscribed to through consortia-mode subscription; all of these journals are available online to LRC members over the campus LAN. The Digital Library section holds e-materials such as CDs, DVDs and the digital theses of final-year students, which are available through an open-source institutional repository within the campus premises.
Issue/return timings: 9:15 AM to 5:00 PM (Monday to Friday)
Lunch break: 1:45 PM to 2:20 PM
Weekend students: 10 AM to 4 PM (Saturday and Sunday)
The library is closed on university holidays.
Computer Center
The university has an on-campus computing facility (computer lab) housed in centralized, air-conditioned premises. The Computer Centre is equipped with recently procured, high-end computer systems for the students of the institute.
Dispensary
The dispensary is equipped with over-the-counter medications, a bed for resting, medical equipment, physical screening tools and first-aid supplies. A team of one registered medical practitioner and one registered nurse is available from 9:00 am to 5:00 pm.
Bank
A branch of Punjab and Sind Bank operates on the university premises.
Opening hours (lunch break: 2 pm to 2:30 pm):
Monday to Friday: 10 am to 4 pm
Saturday: 10 am to 1 pm
Guest House
The guest house within the campus offers limited accommodation for parents and other visitors to the campus. The rooms are comfortable, with modern facilities available at nominal charges.
Residential student halls
The university campus has two hostel wings, Krishna Hostel and Kaveri Hostel, which together accommodate approximately 340 students. The hostels provide a safe, secure and clean environment in which students away from home can grow, learn and mature, and the hostel authorities work to create an environment in which students can study, do well academically and focus on their careers and futures. All rooms are on a twin or triple sharing basis and are equipped with individual beds, chairs, built-in cupboards and study tables.
Student Life and Culture
Taarangana, the cultural fest of IGDTUW, was held for the first time on 31 January and 1 February 2014 and has since been organised every year in late January or early February. Innerve, the annual technical fest of all the technical departments of IGDTUW, is organised every year in October. The Entrepreneurship Summit is a two-day event that celebrates the spirit of entrepreneurship by bringing the various stakeholders of the entrepreneurial ecosystem under one roof; it is organised by the EDC every year in March. Other fests organised by IGDTUW include Espectro, Impulse, Tremors and Exebec.
IGDTUW has clubs for extra-curricular activities, such as Technoliterati (the literary society), Antargat (the creative society for waste-management practices), the Economics Society, Greensphere (the environmental society), Tarannum (the music society), ZENA (the fashion society), RAHNUMA (the dramatics society) and HYPNOTICS (the dance society). Students also participate in national competitions such as Baja SAE India and Supra SAE India, gaining practical exposure that complements their engineering coursework.
Notable alumni
Durga Shakti Nagpal, Indian Administrative Service
References
1998 establishments in Delhi
All India Council for Technical Education
Colleges of the Guru Gobind Singh Indraprastha University
Education in Delhi
Educational institutions established in 1998
Engineering colleges in Delhi
Women's engineering colleges in India
Women's universities and colleges in Delhi
Monuments and memorials to Indira Gandhi
|
208588
|
https://en.wikipedia.org/wiki/Charles%20Geschke
|
Charles Geschke
|
Charles Matthew "Chuck" Geschke (September 11, 1939 – April 16, 2021) was an American businessman and computer scientist best known for co-founding the graphics and publishing software company Adobe Inc. with John Warnock in 1982, and for co-creating the PDF document format with him.
Early life and education
Charles Matthew Geschke was born in Cleveland, Ohio, on September 11, 1939. He attended Saint Ignatius High School.
Geschke earned an AB in classics in 1962 and an MS in mathematics in 1963, both from Xavier University. He taught mathematics at John Carroll University from 1963 to 1968. In 1972, he completed his PhD studies in computer science at Carnegie Mellon University under the advice of William Wulf. He was a co-author of Wulf's 1975 book The Design of an Optimizing Compiler.
Career
Geschke started working at Xerox's Palo Alto Research Center (PARC) in October 1972. His first project was to build a mainframe computer. Afterward, he worked on programming languages and developed tools that were used to build the Xerox Star workstation.
In 1978, Geschke started the Imaging Sciences Laboratory at PARC, and conducted research in the areas of graphics, optics, and image processing. He hired John Warnock, and together they developed Interpress, a page description language (PDL) that could describe forms as complex as typefaces. Unable to convince Xerox management of the commercial value of Interpress, the two left Xerox to start their own company.
Adobe
Geschke and Warnock founded Adobe in Warnock's garage in 1982, naming the company after Adobe Creek, which ran behind Warnock's home. Interpress eventually evolved into PostScript. Its use on Apple computers resulted in one of the first desktop publishing (DTP) systems, which allowed users to compose documents on a personal computer and see them on screen exactly as they would appear in print, a process known as WYSIWYG, an acronym for What You See Is What You Get. Previously, graphic designers had been forced to work on their documents in text-only form, seeing the finished appearance only when they printed or hit "print preview". Because of the high quality and speed with which printing and composing could be done in WYSIWYG, the innovation "spawned an entire industry" in modern printing and publishing.
From December 1986 until July 1994 Geschke was Adobe's Chief Operating Officer, and from April 1989 until April 2000 he was the company's president. Geschke retired as president of Adobe in 2000, shortly before his partner Warnock left as CEO. Geschke had also served as Co-Chairman of the Board of Adobe from September 1997 to 2017.
Adobe was included in Forbes' list of the 400 Best Big Companies in 2009, and was ranked 1,069th on the Forbes Global 2000 list in 2010.
1992 kidnapping
On the morning of May 26, 1992, as Geschke was arriving for work in Mountain View, California, he was kidnapped at gunpoint from the Adobe parking lot by two men, Mouhannad Albukhari, 26, of San Jose, and Jack Sayeh, 25, of Campbell. A spokesperson for the FBI reported that the agency had monitored phone calls that the kidnappers had made to Geschke's wife, demanding a ransom. The spokesperson added that Albukhari had been arrested after he had picked up the $650,000 ransom that Geschke's daughter had left at a drop-off point. An FBI agent explained that, "[a]fter a gentlemanly discussion", Albukhari had brought them to a bungalow in Hollister, where Sayeh had been holding Geschke hostage. Geschke was released unhurt after being held for four days, although he stated that he had been chained. The two kidnappers were eventually sentenced to life terms in state prison.
Awards
In 1999, Geschke was inducted as a fellow of the Association for Computing Machinery (ACM).
In 2002, he was made a fellow of the Computer History Museum for "his accomplishments in the commercialization of desktop publishing with John Warnock and for innovations in scalable type, computer graphics and printing."
In October 2006, Geschke, along with co-founder John Warnock, received the annual AeA Medal of Achievement, making them the first software executives to receive this award. In 2008 he received the Computer Entrepreneur Award from the IEEE Computer Society, and he also won the 2008 National Medal of Technology and Innovation, awarded by President Barack Obama. On October 15, 2010, the Marconi Society co-awarded Geschke and Warnock the Marconi Prize.
On Sunday, May 20, 2012, Geschke delivered the commencement speech at John Carroll University in University Heights, Ohio, where he had been a mathematics professor early in his career, and he was awarded an honorary doctorate of Humane Letters.
Affiliations
Geschke served on the boards of the San Francisco Symphony, the National Leadership Roundtable on Church Management, the Commonwealth Club of California, Tableau Software, the Egan Maritime Foundation, and the Nantucket Boys and Girls Club. He was also a member of the computer science advisory board at Carnegie Mellon University.
In 1995, he was elected to the National Academy of Engineering. In 2008, he was elected to the American Academy of Arts and Sciences. In 2010, he completed his term as Chairman of the Board of Trustees of the University of San Francisco. In 2012, he was elected to the American Philosophical Society.
Personal life
Geschke was a Catholic and met his wife Nancy "Nan" McDonough at a religious conference on social action in the spring of 1961. They married in 1964. Both were graduates of Catholic institutions. In 2012 they received the St. Elizabeth Ann Seton Award from the National Catholic Educational Association (NCEA) for their contributions to Catholic education.
Geschke's mother was a bankruptcy court paralegal. Both Geschke's father and paternal grandfather worked as letterpress photo engravers. Geschke's father helped during the early days of Adobe by checking color separation work with his engraver's loupe. Geschke described his father's acknowledgment of the high quality of the halftone patterns as "a wonderful moment".
Death
Geschke, a longtime resident of Los Altos, died on April 16, 2021, at the age of 81. The cause of death was cancer.
He is survived by his wife, three children and seven grandchildren.
References
External links
Biography at Computer History Museum
Biography on Adobe Web site
Los Altos Town Crier: A dramatic kidnapping revisited (part 1/4)
Los Altos Town Crier: Two days of terror, uncertainty (part 2/4)
Los Altos Town Crier: Chuck's dramatic rescue (part 3/4)
Los Altos Town Crier: Aftermath of a kidnapping (part 4/4)
Driving Adobe: Co-founder Charles Geschke on Challenges, Change and Values interview of Charles Geschke's roles in Adobe
Image of Charles Geschke
Online Copy of Geschke's PhD Thesis
Publications on DBLP
Profile at the ACM Digital Library
The Legacy Of Chuck Geschke, Co-Founder Of Adobe April 26, 2021 Obituary on All Things Considered
American technology company founders
Engineers from California
1939 births
2021 deaths
Adobe Inc. people
Fellows of the Association for Computing Machinery
Members of the United States National Academy of Engineering
National Medal of Technology recipients
Carnegie Mellon University alumni
Saint Ignatius High School (Cleveland) alumni
Xavier University alumni
Businesspeople from Cleveland
Scientists from Cleveland
People from Los Altos, California
Scientists at PARC (company)
Catholics from California
Members of the American Philosophical Society
|
48707
|
https://en.wikipedia.org/wiki/GNU%20Octave
|
GNU Octave
|
GNU Octave is software featuring a high-level programming language, primarily intended for numerical computations. Octave helps in solving linear and nonlinear problems numerically and in performing other numerical experiments, using a language that is mostly compatible with MATLAB. It may also be used as a batch-oriented language. As part of the GNU Project, it is free software under the terms of the GNU General Public License.
History
The project was conceived around 1988; at first it was intended to be a companion to a chemical reactor design course. Real development was started by John W. Eaton in 1992. The first alpha release dates back to 4 January 1993, and version 1.0 was released on 17 February 1994. Version 6.4.0 was released on 30 October 2021.
The program is named after Octave Levenspiel, a former professor of the principal author. Levenspiel was known for his ability to perform quick back-of-the-envelope calculations.
Developments
In addition to its use on desktops for personal scientific computing, Octave is used in academia and industry. For example, Octave was used on a massively parallel computer at the Pittsburgh Supercomputing Center to find vulnerabilities related to guessing social security numbers.
Acceleration with OpenCL or CUDA is also possible with the use of GPUs.
Technical details
Octave is written in C++ using the C++ standard library.
Octave uses an interpreter to execute the Octave scripting language.
Octave is extensible using dynamically loadable modules.
The Octave interpreter has an OpenGL-based graphics engine to create plots, graphs and charts, and to save or print them; alternatively, gnuplot can be used for the same purpose.
Octave includes a Graphical User Interface (GUI) in addition to the traditional Command Line Interface (CLI); see #User interfaces for details.
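As a brief illustration of the plotting engine described above, a minimal sketch (the output file name is arbitrary):
x = linspace (0, 2*pi, 100);  # 100 evenly spaced points from 0 to 2*pi
y = sin (x);
plot (x, y);                  # rendered by the OpenGL engine (or gnuplot)
xlabel ("x");
ylabel ("sin (x)");
print ("sine.png");           # save the current figure to a PNG file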
Octave, the language
The Octave language is an interpreted programming language. It is a structured programming language (similar to C) and supports many common C standard library functions, as well as certain UNIX system calls and functions. However, it does not support passing arguments by reference, although function arguments are copy-on-write to avoid unnecessary duplication.
Octave programs consist of a list of function calls or a script. The syntax is matrix-based and provides various functions for matrix operations. It supports various data structures and allows object-oriented programming.
Its syntax is very similar to MATLAB, and careful programming of a script will allow it to run on both Octave and MATLAB.
Because Octave is made available under the GNU General Public License, it may be freely changed, copied and used. The program runs on Microsoft Windows and most Unix and Unix-like operating systems, including Linux, Android, and macOS.
Notable features
Command and variable name completion
Typing a TAB character on the command line causes Octave to attempt to complete variable, function, and file names (similar to Bash's tab completion). Octave uses the text before the cursor as the initial portion of the name to complete.
Command history
When running interactively, Octave saves the commands typed in an internal buffer so that they can be recalled and edited.
Data structures
Octave includes a limited amount of support for organizing data in structures. In this example, we see a structure "x" with elements "a", "b", and "c", (an integer, an array, and a string, respectively):
octave:1> x.a = 1; x.b = [1, 2; 3, 4]; x.c = "string";
octave:2> x.a
ans = 1
octave:3> x.b
ans =
1 2
3 4
octave:4> x.c
ans = string
octave:5> x
x =
{
a = 1
b =
1 2
3 4
c = string
}
Short-circuit Boolean operators
Octave's && and || logical operators are evaluated in a short-circuit fashion (like the corresponding operators in the C language), in contrast to the element-by-element operators & and |.
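For example, a minimal interactive sketch: the right-hand operand of || is skipped when the left is true, while & always evaluates both sides element-by-element.
octave:1> true || error ("never reached")   # short-circuit: error() is not called
ans = 1
octave:2> [1, 0] & [0, 1]                   # element-by-element, both sides evaluated
ans =
  0  0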
Increment and decrement operators
Octave includes the C-like increment and decrement operators ++ and -- in both their prefix and postfix forms.
Octave also does augmented assignment, e.g. x += 5.
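A brief sketch of these operators at the prompt:
octave:1> x = 5;
octave:2> x++;      # postfix increment; x is now 6
octave:3> ++x;      # prefix increment; x is now 7
octave:4> x += 5    # augmented assignment
x = 12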
Unwind-protect
Octave supports a limited form of exception handling modelled after the unwind_protect of Lisp. The general form of an unwind_protect block looks like this:
unwind_protect
body
unwind_protect_cleanup
cleanup
end_unwind_protect
As a general rule, GNU Octave recognizes as termination of a given block either the keyword end (which is compatible with the MATLAB language) or a more specific keyword end_block. As a consequence, an unwind_protect block can be terminated either with the keyword end_unwind_protect as in the example, or with the more portable keyword end.
The cleanup part of the block is always executed. In case an exception is raised by the body part, cleanup is executed immediately before propagating the exception outside the block unwind_protect.
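A concrete sketch (the file name is a hypothetical example): the cleanup clause closes the file even though the body raises an error.
fid = fopen ("results.txt", "w");
unwind_protect
  fprintf (fid, "partial output\n");
  error ("simulated failure");     # exception raised in the body
unwind_protect_cleanup
  fclose (fid);                    # always executed before the error propagates
end_unwind_protect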
GNU Octave also supports another form of exception handling (compatible with the MATLAB language):
try
body
catch
exception_handling
end
This latter form differs from an unwind_protect block in two ways. First, exception_handling is only executed when an exception is raised by body. Second, after the execution of exception_handling the exception is not propagated outside the block (unless a rethrow( lasterror ) statement is explicitly inserted within the exception_handling code).
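A minimal sketch of this form; the rethrow line, commented out here, would propagate the exception further:
try
  x = some_undefined_name;      # raises an "undefined" error
catch
  disp (lasterr ());            # inspect the error message
  # rethrow (lasterror ());     # uncomment to propagate the exception
end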
Variable-length argument lists
Octave has a mechanism for handling functions that take an unspecified number of arguments without explicit upper limit. To specify a list of zero or more arguments, use the special argument varargin as the last (or only) argument in the list.
function s = plus (varargin)
if (nargin==0)
s = 0;
else
s = varargin{1} + plus (varargin{2:nargin});
end
end
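Called with any number of arguments, this function sums them recursively; for instance:
octave:1> plus (1, 2, 3, 4)
ans = 10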
Variable-length return lists
A function can be set up to return any number of values by using the special return value varargout. For example:
function varargout = multiassign (data)
for k=1:nargout
varargout{k} = data(:,k);
end
end
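Each requested output then receives one column of the input; a brief usage sketch:
octave:1> [a, b] = multiassign ([1, 2; 3, 4]);
octave:2> a
a =
  1
  3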
C++ integration
It is also possible to execute Octave code directly in a C++ program. For example, here is a code snippet for calling rand([10,1]):
#include <octave/oct.h>
...
ColumnVector NumRands(2);               // argument vector; will hold [10, 1]
NumRands(0) = 10;
NumRands(1) = 1;
octave_value_list f_arg, f_ret;
f_arg(0) = octave_value(NumRands);      // wrap the argument for the interpreter
f_ret = feval("rand", f_arg, 1);        // call rand([10,1]), requesting one output
Matrix unis(f_ret(0).matrix_value());   // extract the returned matrix
C and C++ code can be integrated into GNU Octave by creating oct files, or using the MATLAB compatible MEX files.
MATLAB compatibility
Octave has been built with MATLAB compatibility in mind, and shares many features with MATLAB:
Matrices as fundamental data type.
Built-in support for complex numbers.
Powerful built-in math functions and extensive function libraries.
Extensibility in the form of user-defined functions.
Octave treats incompatibility with MATLAB as a bug; therefore, it could be considered a software clone, which does not infringe software copyright, as per the Lotus v. Borland court case.
MATLAB scripts from MathWorks' FileExchange repository are in principle compatible with Octave. However, although they are often provided and uploaded by users under an Octave-compatible, proper open-source BSD license, the FileExchange terms of use prohibit any usage besides MathWorks' proprietary MATLAB.
Syntax compatibility
There are a few purposeful, albeit minor, syntax additions (several are illustrated in the sketch after this list):
Comment lines can be prefixed with the # character as well as the % character;
Various C-based operators ++, --, +=, *=, /= are supported;
Elements can be referenced without creating a new variable by cascaded indexing, e.g. [1:10](3);
Strings can be defined with the double-quote " character as well as the single-quote ' character;
When the variable type is single (a single-precision floating-point number), Octave calculates the "mean" in the single domain (MATLAB calculates it in the double domain), which is faster but gives less accurate results;
Blocks can also be terminated with more specific Control structure keywords, i.e., endif, endfor, endwhile, etc.;
Functions can be defined within scripts and at the Octave prompt;
Presence of a do-until loop (similar to do-while in C).
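A short sketch touching several of these additions:
# comments may start with the hash character as well as %
s = "a double-quoted string";   % both quote styles work
x = [1:10](3)                   # cascaded indexing without a temporary variable; x = 3
do
  x--;                          # C-style decrement
until (x == 1)                  # do-until loop, similar to C's do-while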
Function compatibility
Many, but not all, of the numerous MATLAB functions are available in GNU Octave, some of them accessible through packages in Octave Forge. The functions available as part of either core Octave or Forge packages are listed online.
A list of unavailable functions is included in the Octave function __unimplemented.m__. Unimplemented functions are also listed under many Octave Forge packages in the Octave Wiki.
When an unimplemented function is called, the following error message is shown:
octave:1> guide
warning: the 'guide' function is not yet implemented in Octave
Please read <http://www.octave.org/missing.html> to learn how you can contribute missing functionality.
error: 'guide' undefined near line 1 column 1
User interfaces
Octave comes with an official graphical user interface (GUI) and an integrated development environment (IDE) based on Qt. It has been available since Octave 3.8, and has become the default interface (over the command line interface) with the release of Octave 4.0.
It was well received by an EDN contributor, who said "[Octave] now has a very workable GUI."
Several third-party graphical front-ends have also been developed, such as ToolboX for coding education.
GUI applications
With Octave code, the user can create GUI applications (see "GUI Development" in the GNU Octave version 6.3.0 documentation). Below are some examples:
Button, edit control, checkbox:
# create figure and panel on it
f = figure;
# create a button (default style)
b1 = uicontrol (f, "string", "A Button", "position", [10 10 150 40]);
# create an edit control
e1 = uicontrol (f, "style", "edit", "string", "editable text", "position", [10 60 300 40]);
# create a checkbox
c1 = uicontrol (f, "style", "checkbox", "string", "a checkbox", "position", [10 120 150 40]);

Textbox:
prompt = {"Width", "Height", "Depth"};
defaults = {"1.10", "2.20", "3.30"};
rowscols = [1,10; 2,20; 3,30];
dims = inputdlg (prompt, "Enter Box Dimensions", rowscols, defaults);

Listbox with message boxes:
my_options = {"An item", "another", "yet another"};
[sel, ok] = listdlg ("ListString", my_options, "SelectionMode", "Multiple");
if (ok == 1)
  msgbox ("You selected:");
  for i = 1:numel (sel)
    msgbox (sprintf ("\t%s", my_options{sel(i)}));
  endfor
else
  msgbox ("You cancelled.");
endif

Radiobuttons:
# create figure and panel on it
f = figure;
# create a button group
gp = uibuttongroup (f, "Position", [0 0.5 1 1]);
# create buttons in the group
b1 = uicontrol (gp, "style", "radiobutton", "string", "Choice 1", "Position", [10 150 100 50]);
b2 = uicontrol (gp, "style", "radiobutton", "string", "Choice 2", "Position", [10 50 100 30]);
# create a radiobutton not in the group
b3 = uicontrol (f, "style", "radiobutton", "string", "Not in the group", "Position", [10 50 100 50]);
Packages
Octave also has packages available for free; these are hosted at Octave Forge. Available packages are:
bim - Package for solving Diffusion Advection Reaction (DAR) Partial Differential Equations
bsltl - The BSLTL package is a free collection of OCTAVE/MATLAB routines for working with the biospeckle laser technique
cgi - Common Gateway Interface for Octave
communications - Digital Communications, Error Correcting Codes (Channel Code), Source Code functions, Modulation and Galois Fields
control - Computer-Aided Control System Design (CACSD) Tools for GNU Octave, based on the proven SLICOT Library
data-smoothing - Algorithms for smoothing noisy data
database - Interface to SQL databases, currently only postgresql using libpq
dataframe - Data manipulation toolbox similar to R data
dicom - Digital communications in medicine (DICOM) file io
divand - divand performs an n-dimensional variational analysis (interpolation) of arbitrarily located observations
doctest - The Octave-Forge Doctest package finds specially-formatted blocks of example code within documentation files
econometrics - Econometrics functions including MLE and GMM based techniques
fem-fenics - pkg for the resolution of partial differential equations based on fenics
financial - Monte Carlo simulation, options pricing routines, financial manipulation, plotting functions and additional date manipulation tools
fits - The Octave-FITS package provides functions for reading, and writing FITS (Flexible Image Transport System) files
fpl - Collection of routines to export data produced by Finite Elements or Finite Volume Simulations in formats used by some visualization programs
fuzzy-logic toolkit - A mostly MATLAB-compatible fuzzy logic toolkit for Octave (fails to install due to long-standing bug)
ga - Genetic optimization code
general - General tools for Octave
generate_html - This package provides functions for generating HTML pages that contain the help texts for a set of functions
geometry - Library for geometric computing extending MatGeom functions
gsl - Octave bindings to the GNU Scientific Library
image - The Octave-forge Image package provides functions for processing images
image-acquisition - The Octave-forge Image Acquisition package provides functions to capture images from connected devices
instrument-control - Low level I/O functions for serial, i2c, parallel, tcp, gpib, vxi11, udp and usbtmc interfaces
interval - The interval package for real-valued interval arithmetic allows one to evaluate functions over subsets of their domain
io - Input/Output in external formats e.g. Excel
level-set - Routines for calculating the time-evolution of the level-set equation and extracting geometric information from the level-set function
linear-algebra - Additional linear algebra code, including general SVD and matrix functions
lssa - A package implementing tools to compute spectral decompositions of irregularly-spaced time series
ltfat - The Large Time/Frequency Analysis Toolbox (LTFAT) is a MATLAB/Octave toolbox for working with time-frequency analysis, wavelets and signal processing
mapping - Simple mapping and GIS .shp and raster file functions
mataveid - System identification package for both MATLAB and GNU Octave
matavecontrol - Control toolbox for both MATLAB and GNU Octave
miscellaneous - Miscellaneous tools that would fit nowhere else
mpi - Octave bindings for basic Message Passing Interface (MPI) functions for parallel computing
msh - Create and manage triangular and tetrahedral meshes for Finite Element or Finite Volume PDE solvers
mvn - Multivariate normal distribution clustering and utility functions
nan - A statistics and machine learning toolbox for data with and w/o missing values
ncarray - Access a single or a collection of NetCDF files as a multi-dimensional array
netcdf - A MATLAB compatible NetCDF interface for Octave
nurbs - Collection of routines for the creation, and manipulation of Non-Uniform Rational B-Splines (NURBS), based on the NURBS toolbox by Mark Spink
ocs - Package for solving DC and transient electrical circuit equations
octclip - This package allows users to do boolean operations with polygons using the Greiner-Hormann algorithm
octproj - This package allows users to call functions of PROJ
optics - Functions covering various aspects of optics
optim - Non-linear optimization toolkit
optiminterp - An optimal interpolation toolbox for octave
parallel - Parallel execution package
quaternion - Quaternion package for GNU Octave, includes a quaternion class with overloaded operators
queueing - The queueing package provides functions for queueing networks and Markov chains analysis
secs1d - A Drift-Diffusion simulator for 1d semiconductor devices
secs2d - A Drift-Diffusion simulator for 2d semiconductor devices
secs3d - A Drift-Diffusion simulator for 3d semiconductor devices
signal - Signal processing tools, including filtering, windowing and display functions
sockets - Socket functions for networking from within octave
sparsersb - Interface to the librsb package implementing the RSB sparse matrix format for fast shared-memory sparse matrix computations
splines - Additional spline functions
statistics - Additional statistics functions for Octave
stk - The STK is a (not so) Small Toolbox for Kriging
strings - Additional functions for manipulation and analysis of strings
struct - Additional structure manipulation functions
symbolic - The Octave-Forge Symbolic package adds symbolic calculation features to GNU Octave
tisean - Port of TISEAN 3
tsa - Stochastic concepts and maximum entropy methods for time series analysis
vibes - The VIBes API allows one to easily display results (boxes, pavings) from interval methods
video - A wrapper for ffmpeg's libavformat and libavcodec, implementing addframe, avifile, aviinfo and aviread
vrml - 3D graphics using VRML
windows - Provides COM interface and additional functionality on Windows
zeromq - ZeroMQ bindings for GNU Octave
Comparison with other similar software
Other free alternatives to MATLAB include Scilab and FreeMat. Octave is more compatible with MATLAB than Scilab is, and FreeMat has not been updated since June 2013.
See also
List of numerical-analysis software
Comparison of numerical-analysis software
List of statistical packages
List of numerical libraries
MATLAB
Notes
References
Further reading
External links
Array programming languages
Articles with example MATLAB/Octave code
Cross-platform free software
Data analysis software
Data mining and machine learning software
Free educational software
Free mathematics software
Free software programmed in C++
Octave
High-priority free software projects
Numerical analysis software for Linux
Numerical analysis software for MacOS
Numerical analysis software for Windows
Numerical programming languages
Science software that uses Qt
Software that uses Qt
|
175015
|
https://en.wikipedia.org/wiki/The%20Specials
|
The Specials
|
The Specials, also known as The Special AKA, are an English 2 Tone and ska revival band formed in 1977 in Coventry. After some early changes, the first stable line-up of the group consisted of Terry Hall and Neville Staple on vocals, Lynval Golding and Roddy Radiation on guitars, Horace Panter on bass, Jerry Dammers on keyboards, John Bradbury on drums, and Dick Cuthell and Rico Rodriguez on horns. Their music combines a "danceable ska and rocksteady beat with punk's energy and attitude". Lyrically, they present a "more focused and informed political and social stance".
The band wore mod-style "1960s period rude boy outfits (pork pie hats, tonic and mohair suits and loafers)". In 1980, the song "Too Much Too Young", the lead track on their The Special AKA Live! EP, reached No. 1 in the UK Singles Chart. In 1981, the recession-themed single "Ghost Town" also hit No. 1 in the UK.
After seven consecutive UK Top 10 singles between 1979 and 1981, main lead vocalists Hall and Staple, along with guitarist Golding, left to form Fun Boy Three. Continuing as "The Special AKA" (a name they used frequently on earlier Specials releases), a substantially revised Specials line-up issued new material until 1984, including the top 10 UK hit single "Free Nelson Mandela". After this, founder and songwriter Jerry Dammers dissolved the band and pursued political activism.
The group re-formed in 1993, and have continued to perform and record with varying line-ups, none of them involving Dammers.
Career
Founding and early years (1977–78)
The group was formed in 1977 by songwriter/keyboardist Dammers, vocalist Tim Strickland, guitarist/vocalist Lynval Golding, drummer Silverton Hutchinson and bassist Horace Panter (a.k.a. Sir Horace Gentleman). Strickland was replaced by Terry Hall shortly after the band's formation. The band was first called the Automatics, then the Coventry Automatics. Vocalist Neville Staple and guitarist Roddy Byers (usually known as Roddy Radiation) joined the band the following year; the new line-up changed their name to the Special AKA. Joe Strummer of the Clash had attended one of their concerts, and invited the Special AKA to open for his band in their "On Parole" UK tour. This performance gave the Special AKA a new level of national exposure, and they briefly shared the Clash's management.
The Specials began at the same time as Rock Against Racism, which was first organised in 1978. According to Dammers, anti-racism was intrinsic to the formation of the Specials, in that the band was formed with the goal of integrating black and white people. Many years later Dammers stated that "Music gets political when there are new ideas in music, ...punk was innovative, so was ska, and that was why bands such as the Specials and the Clash could be political".
Ascendancy of the Specials (1979–81)
In 1979, shortly after drummer Hutchinson left the band to be replaced by John Bradbury, Dammers formed the 2 Tone Records label and released the band's debut single "Gangsters", a reworking of Prince Buster's "Al Capone". The record became a Top 10 hit that summer. The band had begun wearing mod/rude boy/skinhead-style two-tone tonic suits, along with other elements of late 1960s teen fashions. Changing their name to the Specials, they recorded their eponymous debut album in 1979, produced by Elvis Costello. Horn players Dick Cuthell and Rico Rodriguez were featured on the album, but would not be official members of the Specials until their second album.
The Specials led off with Dandy Livingstone's "Rudy, A Message to You" (slightly altering the title to "A Message to You, Rudy") and also had covers of Prince Buster and Toots & the Maytals songs from the late 1960s. In 1980, the EP Too Much Too Young (predominantly credited to the Special A.K.A.) was a No. 1 hit in the UK Singles Chart, despite controversy over the song's lyrics, which reference teen pregnancy and promote contraception.
The band, reverting once again to the name the Specials, released its second album, More Specials, which was less commercially successful and was recorded at a time when, according to Hall, conflicts had developed within the band. Female backing vocalists on the Specials' first two studio albums included Chrissie Hynde; Rhoda Dakar (then of the Bodysnatchers and later of the Special AKA); and Belinda Carlisle, Jane Wiedlin and Charlotte Caffey of the Go-Go's. In the first few months of 1981, the band took a break from recording and touring, and then released "Ghost Town", a non-album single, which hit No. 1 in 1981. At their Top of the Pops recording of the song, however, Staple, Hall and Golding announced they were leaving the band. Golding later said: "We didn't talk to the rest of the guys. We couldn't even stay in the same dressing room. We couldn't even look at each other. We stopped communicating. You only realise what a genius Jerry was years later. At the time, we were on a different planet." Shortly afterwards, the three left the band to form Fun Boy Three.
Band split, rebirth as the Special AKA (1982–84)
For the next few years, the group was in a seemingly constant state of flux. Adding Dakar to the permanent line-up, the group recorded "The Boiler" with Dakar on vocals, Dammers on keyboards, Bradbury on drums, John Shipley (from the Swinging Cats) on guitar, Cuthell on brass and Nicky Summers on bass. The single was credited to "Rhoda with the Special AKA". The track describes an incident of date rape, and its frank and harrowing depiction of the matter meant that airplay was severely limited. Nevertheless, it managed to reach No. 35 on the UK charts, and American writer Dave Marsh later identified "The Boiler" as one of the 1,001 best "rock and soul" singles of all time in his book The Heart of Rock & Soul.
After going on tour with Rodriguez, the band (without Dakar, and as "Rico and the Special AKA") also recorded the non-charting (and non-album) single "Jungle Music". The line-up for the single was Rodriguez (vocal, trombone), Cuthell (cornets), Dammers (keyboards), Bradbury (drums), Shipley (guitar), returning bassist Panter, and new additions Satch Dickson and Groco (percussion) and Anthony Wymshurst (guitar).
Rodriguez and the three newcomers were all dropped for the next single, "War Crimes", which brought back Dakar and added new co-vocalists Egidio Newton and Stan Campbell, as well as violinist Nick Parker. Follow-up single "Racist Friend" was a minor hit (UK No. 60), with the band establishing themselves as a septet: Dakar, Newton, Campbell, Bradbury, Cuthell, Dammers and Shipley.
The new line-up (still known as the Special AKA) finally issued a new full-length album In the Studio in 1984. Officially, the band was now a sextet: Dakar, Campbell, Bradbury, Dammers, Shipley and new bassist Gary McManus. Cuthell, Newton, Panter and Radiation all appeared on the album as guests; as did saxophonist Nigel Reeve, and Claudia Fontaine and Caron Wheeler of the vocal trio Afrodiziak. Both critically and commercially, In The Studio was less successful than previous efforts, although the 1984 single "Free Nelson Mandela" was a No. 9 UK hit. The latter contributed to making Mandela's imprisonment a cause célèbre in the UK, and became popular with anti-apartheid activists in South Africa. Dammers then dissolved the band and pursued political activism.
Later developments
Since the break-up of the original line-up, various members of the band performed in other bands and have reformed several times to tour and record in Specials-related projects. However, there has never been a complete reunion of the original line-up.
After their departure from the Specials, Golding, Hall and Staple founded the pop band Fun Boy Three and enjoyed commercial success from 1981 to 1983 with hits such as "Tunnel of Love", "It Ain't What You Do (It's the Way That You Do It)", "Our Lips Are Sealed" and "The Lunatics (Have Taken Over the Asylum)". The group ended with Hall's sudden departure, leading to a 15-year rift with Staple.
After Fun Boy Three, Staple and Golding joined Pauline Black of the Selecter in the short-lived band Sunday Best, releasing the single "Pirates on the Airwaves".
In 1990, Bradbury, Golding, Panter and Staple teamed up with members of the Beat to form Special Beat, performing the music of the two bands and other ska and Two Tone classics. The group, undergoing many line-up changes, toured and released several live recordings through the 1990s.
A 1994 single credited to "X Specials" featured Staple, Golding, Radiation, and Panter. A cover of the Slade song "Coz I Love You", the project was produced by Slade's Jim Lea.
Moving into production and management, Staple "discovered" and produced bhangra pop fusion artist Johnny Zee. Throughout the 1980s and 1990s, Staple would stay active producing and guesting with a variety of artists, including International Beat, Special Beat, Unwritten Law, Desorden Publico, the Planet Smashers and others, as well as leading his own bands and starting the Rude Wear clothing line. He sang with the 1990s Specials line-up, and again from 2009 to 2012.
Panter went on to join with members of the Beat and Dexys Midnight Runners to form General Public, and then Special Beat. He joined the 1990s Specials before training as a primary school teacher at the University of Central England in Birmingham. He continued to play with latter-day Special Neol Davies in the blues outfit Box of Blues. However, he rejoined the band for their 2009 reunion and continues as a member.
Golding teamed up with Dammers for a brief spell of club DJing, and then worked with Coventry band After Tonight. After Special Beat, he went on to lead the Seattle-based ska groups Stiff Upper Lip, and more recently, Pama International, as well as many collaborations with other ska bands. He has also toured with the Beat. He joined the 1990s Specials line-up, but left in 2000. He rejoined in 2009, and continues with the group.
Radiation fronted and worked with numerous artists including the Tearjerkers (a band that he had begun in the last months of the Specials), the Bonediggers, the Raiders and Three Men & Black (including Jean-Jacques Burnel of the Stranglers), Jake Burns (Stiff Little Fingers), Pauline Black, Bruce Foxton (the Jam), Dave Wakeling (the Beat, General Public) and Nick Welsh (Skaville UK). He also fronts the Skabilly Rebels, a band that mixes rockabilly with ska. He joined the 1990s Specials line-up and again in 2009, continuing to 2014.
Bradbury continued through the Special AKA era, then formed the band JB's Allstars, before moving into production. He joined Special Beat for several years, then a reformed Selecter, before retiring from music to work as an IT specialist. He rejoined the band for their 2009 reunion, and continued to perform with them until his death in 2015.
From 1984 until 1987, Hall fronted the Colourfield, with some commercial success. After they disbanded, he pursued a solo career, working mostly in the new wave genre. He co-wrote a number of early Lightning Seeds releases. He also performed some vocals for a Dub Pistols album. He and Eurythmics member David A. Stewart formed the duo Vegas in the early 1990s, releasing an eponymous album in 1992. Hall joined the Specials for their 2009 reunion, and continues to perform with them.
In 2006, Dammers formed large, jazz-style ensemble the Spatial AKA Orchestra.
Reunions and current events
The Specials Mk.2 (1993-1998)
The first reunion under the Specials name occurred in 1993. Producer Roger Lomas was asked by Trojan Records to assemble some musicians to back ska legend Desmond Dekker on a new album. He approached all the members of the Specials; the four willing to participate were Roddy Radiation, Neville Staple, Lynval Golding and Horace Panter, and they were joined by Aitch Bembridge, who had been the drummer in the Selecter. Bembridge had played with soul singer Ray King in the 1970s; King mentored and worked with Dammers, Staple, Golding and Hutchinson in their days before the Specials. A group of studio musicians filled out the band, including keyboardist Mark Adams. The album, released by Trojan Records as King of Kings, was credited to Desmond Dekker and the Specials.
The release of the album with Desmond Dekker created some buzz for the band and led to an offer from a Japanese promoter to book a tour. Rejoined by Golding, along with Bembridge and Adams from the King of Kings sessions, the band added horn players Adam Birch and Jonathan Read and began rehearsing and playing live. Initially using the names the Coventry Specials, the X Specials and Specials2, they soon reverted to the Specials after accepting that it was the name promoters were using anyway, although the line-up was referred to as Specials MkII by those involved. This line-up went on to tour internationally and released two studio albums: Today's Specials, a collection of mostly reggae and ska covers, in 1996, and Guilty 'til Proved Innocent!, a collection of original compositions, in 1998. The band toured heavily in support of both releases (including headlining the Vans Warped Tour) and received positive reviews of their live shows.
Despite the live success, the band fizzled out after a 1998 Japan tour (which Panter missed due to illness), although limited touring with a different line-up continued into 2000. The release of the earlier Trojan sessions, Skinhead Girl in 2000 and Conquering Ruler in 2001, would be the last heard from the band for some time.
After completing a similar project with The Selecter in 1999, Roger Lomas brought the group back into the studio to record a number of classic songs from the Trojan Records back catalogue. Two weeks before this project, Golding left the group to concentrate on domestic life in Seattle. Turning to another Selecter veteran for help, the band replaced him on guitar with Neol Davies. Davies, Staple, Radiation and Panter, joined by a group of session musicians, recorded a wealth of tracks that eventually saw release by Trojan sub-label Receiver Records as Skinhead Girl in 2000 and Conquering Ruler in 2001.
The Specials reunion (2008-present)
Terry Hall & Friends
In 2007, Hall teamed up with Golding for the first time in 24 years, to play Specials songs at two music festivals. At Glastonbury Festival, they appeared on the Pyramid Stage with Lily Allen to perform "Gangsters". In May 2009, Golding claimed that Allen's reuniting him with Hall played a "massive part" in the group's later reformation. Later the same day, they played on the Park Stage, with Damon Albarn of Blur on piano and beatboxer Shlomo providing rhythm, to perform "A Message to You, Rudy". At GuilFest, Golding joined the Dub Pistols to again perform "Gangsters". In 2007, Golding regularly performed concerts and recorded with Pama International, a collective of musicians who were members of Special Beat.
On 30 March 2008, Hall stated that the Specials would be reforming for tour dates in autumn 2008, and possibly for some recording. This was officially confirmed on 7 April 2008. On 6 September 2008, six members of the band performed on the main stage at the Bestival, billed as the "Surprise Act". By December 2008, the band had announced 2009 tour dates to celebrate their 30th anniversary, although founder member Dammers was not joining the band on the tour.
Hall was quoted as saying, "The door remains open to him". However, Dammers described the new reunion as a "takeover" and claimed he had been forced out of the band. Around that same time, longtime Specials fan Amy Winehouse joined Dammers onstage at Hyde Park, singing the song he wrote for the Specials, "Free Nelson Mandela", for Mandela's 90th birthday concert, dubbed 46664 after Mandela's prison number, and also the name of his AIDS charity, which received money raised by the birthday bash.
30th Anniversary Tour and beyond
On 10 April 2009, the Specials guested on BBC Two's Later... with Jools Holland. The following month, Bradbury and Golding expressed their intentions to release further original Specials material at a later date. On 8 June 2009, it was announced that the Specials would embark on a second leg of their 30th anniversary tour, taking in the locations and venues that they missed earlier in the year. In July and August 2009, the Specials toured Australia and Japan. In October, the band picked up the Inspiration Award at the Q Awards. In 2010, they performed at the Dutch festival Lowlands.
In an interview at the Green Room in Manchester in November 2010, Hall confirmed that there would be further Specials dates in the autumn of 2011, and confessed to having enjoyed playing live again: "It's a celebration of something that happened in your life that was important, and we're going to do that again next year, but then maybe that'll be it". In late 2010, the band re-released "A Message to You, Rudy" as a Haiti Special Fund available to download from iTunes in both the UK and the US, with proceeds going to aid the UNICEF effort to help children in earthquake-stricken Haiti.
In February 2012, it was announced that the Specials would perform at Hyde Park with Blur and New Order to celebrate the 2012 Summer Olympics closing ceremony. Panter said that the band were excited to be involved in such a momentous event: "We have been keeping it under our pork pie hats for a month or so now. I think it is going to be the only chance people get to see the Specials performing in the UK this year." The Specials' performance was said to have remained synonymous with Britain's political and social upheaval of the late 1970s and early 1980s.
In August 2012, the Specials released a new live album, More... Or Less. – The Specials Live, featuring "the best of the best" performances from their 2011 European tour, selected by the band themselves on a double-disc CD and double-vinyl LP.
Neville Staple leaves The Specials
In January 2013, the Specials announced the departure of Staple with the following message on their website: "We are very sad Neville cannot join us on the Specials' UK tour in May 2013 or indeed on the future projects we have planned. He has made a huge contribution to the fantastic time and reception we have received since we started and reformed in 2009. However, he missed a number of key shows last year due to ill health, and his health is obviously much more important. We wish him the very best for the future".
The Specials completed a North American tour in 2013, performing to sold-out crowds in Chicago, Los Angeles, San Diego, San Francisco, Seattle, Portland and Vancouver.
Roddy Radiation leaves The Specials
In February 2014, it was revealed that Roddy Radiation had left the reformed group. In spite of his departure, the Specials played an extensive tour in the autumn of 2014 with Steve Cradock (Ocean Colour Scene, Paul Weller) as lead guitarist.
John Bradbury passes away
Drummer John Bradbury died on 28 December 2015 at the age of 62. On 22 March 2016, the Specials announced that The Libertines drummer Gary Powell would be performing on their upcoming tours. Powell was replaced by PJ Harvey/Jazz Jamaica drummer Kenrick Rowe on the Encore album and subsequent tour.
Change of direction for the band and No.1 album
In 2017, the band invited 20-year-old Birmingham native Saffiyah Khan to a show after a photo of her, wearing a Specials t-shirt, confronting an "English Defence League goon" at a counter-demonstration went viral. Less than two years later, Khan had performed on stage for the first time, recorded a song and toured North America with the band.
On 29 October 2018, the Specials announced a UK tour in 2019 to coincide with the release of a new album, Encore.
On 1 February 2019, the band announced a Spring North American tour to promote the 1 February 2019 release of Encore (out via Island Records). The following week, Encore debuted at number 1 on the UK Albums Chart, giving the band their first chart-topping album since 1980. During late 2019, The Specials invited 17-year-old artist and photographer Sterling Chandler to join and photograph the band on the remaining leg of the tour.
In March 2021, the band announced a UK tour. On 7 July 2021, Horace Panter announced a new 12-track Specials album that was released on 23 August 2021, titled Protest Songs 1924-2012. Vocalist Hannah Hu joined the band for their 2021 tour and also sang on the new album.
Members
Current members
Lynval Golding – rhythm guitar, vocals (1977–81, 1993, 1994–1998, 2008–present)
Horace Panter – bass guitar (1977–81, 1982, 1993, 1994–1998, 2000-2001, 2008–present)
Terry Hall – vocals (1977–81, 2008–present)
Current touring musicians
Tim Smart – trombone (2008–present)
Nikolaj Torp Larsen – keyboards (2008–present)
Steve Cradock – lead guitar (2014–present)
Pablo Mendelssohn – trumpet (2014–present)
Jake Fletcher – guitar (2017–present)
Kenrick Rowe – drums (2019–present)
Hannah Hu – backing vocals (2021–present)
Stan Samuel – rhythm guitar (2021–present)
Sid Gauld – trumpet (2021–present)
Former members
The Coventry Automatics
Silverton Hutchinson – drums (1977–79)
Tim Strickland – vocals (1977)
The Specials (original line-up)
Jerry Dammers – keyboards, principal songwriter (1977–84)
Roddy Radiation – lead guitar, vocals (1978–81, 1993, 1996–2001, 2008–14)
Neville Staple – toasting, vocals, percussion (1978–81, 1993, 1996–2001, 2008–12)
John Bradbury – drums (1979–84, 2008–15; his death)
Dick Cuthell – flugel horn (1979–84)
Rico Rodriguez – trombone (1979–81, 1982; died 2015)
The Special A.K.A.
Rhoda Dakar – vocals (1981–82, 1982–84)
John Shipley – guitar (1981–84)
Satch Dixon – percussion (1982)
Tony 'Groco' Uter – percussion (1982)
Anthony Wimshurst – guitar (1982)
Stan Campbell – vocals (1982–84)
Egidio Newton – vocals, percussion (1982–83)
Nick Parker – violin (1982)
Gary McManus – bass guitar (1983–84)
The Specials Mk.2. (1993-1998)
Jon Read – trumpet, percussion, bass (1996–1998, 2008–14)
Adam Birch – trumpet (1996–1998)
Mark Adams – keyboards (1993, 1994–1998)
Kendell – vocals (1998)
Charley Harrington Bembridge (Aitch Hyatt) – drums (1993, 1994–1998)
Trojan era Specials
Neol Davies – rhythm guitar, vocals (2000-2001)
Anthony Harty - drums, percussion (2000-2001)
Justin Dodsworth - keyboards (2000-2001)
Steve Holdway - trombone (2000-2001)
Paul Daleman - trumpet (2000-2001)
Leigh Malin - tenor sax (2000-2001)
The Specials (2008-present)
Drew Stansall – saxophone, flute (2008–2012)
Gary Powell – drums (2016–2019)
Timeline
Discography
As The Specials
The Specials (1979)
More Specials (1980)
Today's Specials (1996)
Guilty 'til Proved Innocent! (1998)
Skinhead Girl (2000)
Conquering Ruler (2001)
Encore (2019)
Protest Songs 1924-2012 (2021)
As The Special A.K.A.
In the Studio (1984)
References
Further reading
Williams, Paul (1995) You're Wondering Now – A History of the Specials, ST Publishing.
Panter, Horace (2007) Ska'd for Life – A Personal Journey with the "Specials", Sidgwick & Jackson,
Chambers, Pete (2008) 2-Tone-2: Dispatches from the Two Tone City, 30 Years on, Tencton Planet Publications.
Staple, Neville (2009) Original Rude Boy, Aurum Press.
Williams, Paul (2009) You're Wondering Now-The Specials From Conception to Reunion, Cherry Red Books.
Thompson, Dave (2011) Wheels Out of Gear: 2-Tone, the Specials and a World In Flame, Soundcheck Books.
External links
The Specials history
The Specials on Facebook
The Specials YouTube channel
The official Specials fan forum
The Specials official website
The Specials profile Unofficial 2 Tone website
"The Specials at the Rico", BBC Coventry & Warwickshire, 15 May 2009
Musical groups from Coventry
Musical groups established in 1977
English ska musical groups
Second-wave ska groups
Political music groups
NME Awards winners
2 Tone Records artists
Chrysalis Records artists
|
3598781
|
https://en.wikipedia.org/wiki/Extreme%20programming%20practices
|
Extreme programming practices
|
Extreme programming (XP) is an agile software development methodology used to implement software projects. This article details the practices used in this methodology. Extreme programming has 12 practices, grouped into four areas, derived from the best practices of software engineering.
Fine scale feedback
Pair programming
Pair programming means that all code is produced by two people programming on one task on one workstation. One programmer has control over the workstation and is thinking mostly about the coding in detail. The other programmer is more focused on the big picture, and is continually reviewing the code that is being produced by the first programmer. Programmers trade roles at intervals ranging from a few minutes to an hour.
The pairs are not fixed; programmers switch partners frequently, so that everyone knows what everyone is doing, and everybody remains familiar with the whole system, even the parts outside their skill set. This way, pair programming also can enhance team-wide communication. (This also goes hand-in-hand with the concept of Collective Ownership).
Planning game
The main planning process within extreme programming is called the Planning Game. The game is a meeting that occurs once per iteration, typically once a week. The planning process is divided into two parts:
Release Planning: This is focused on determining what requirements are included in which near-term releases, and when they should be delivered. The customers and developers are both part of this. Release Planning consists of three phases:
Exploration Phase: In this phase the customer will provide a shortlist of high-value requirements for the system. These will be written down on user story cards.
Commitment Phase: Within the commitment phase business and developers will commit themselves to the functionality that will be included and the date of the next release.
Steering Phase: In the steering phase the plan can be adjusted, new requirements can be added and/or existing requirements can be changed or removed.
Iteration Planning: This plans the activities and tasks of the developers. In this process the customer is not involved. Iteration Planning also consists of three phases:
Exploration Phase: Within this phase the requirement will be translated to different tasks. The tasks are recorded on task cards.
Commitment Phase: The tasks will be assigned to the programmers and the time it takes to complete will be estimated.
Steering Phase: The tasks are performed and the end result is matched with the original user story.
The purpose of the Planning Game is to guide the product into delivery. Instead of predicting the exact dates of when deliverables will be needed and produced, which is difficult to do, it aims to "steer the project" into delivery using a straightforward approach. The Planning Game approach has also been adopted by non-software projects and teams in the context of business agility.
Release planning
Exploration phase
This is an iterative process of gathering requirements and estimating the work impact of each of those requirements.
Write a Story: Business comes with a problem; during a meeting, development will try to define the problem and gather requirements. Based on the business problem, a story (user story) has to be written. This is done by business, who point out what they want a part of the system to do. It is important that development has no influence on this story. The story is written on a user story card.
Estimate a Story: Development estimates how long it will take to implement the work implied by the story card. Development can also create spike solutions to analyze or solve the problem. These solutions are used for estimation and discarded once everyone has a clear understanding of the problem. Again, this may not influence the business requirements.
Split a Story: Every design-critical complexity has to be addressed before starting the iteration planning. If development is not able to estimate the story, it needs to be split up and written again.
When business cannot come up with any more requirements, one proceeds to the commitment phase.
Commitment phase
This phase involves the determination of costs, benefits, and schedule impact. It has four components:
Sort by Value: Business sorts the user stories by Business Value.
Sort by Risk: Development sorts the stories by risk.
Set Velocity: Development determines at what speed they can perform the project.
Choose scope: The user stories that will be finished in the next release will be picked. Based on the user stories the release date is determined.
Sort by value
The business side sorts the user stories by business value. They will arrange them into three piles:
Critical: stories without which the system cannot function or has no meaning.
Significant Business Value: Non-critical user stories that have significant business value.
Nice to have: User stories that do not have significant business value.
Sort by risk
The developers sort the user stories by risk. They also sort them into three piles: low, medium, and high risk user stories. The following is an example of an approach to this:
Determine Risk Index: Give each user story an index from 0 to 2 on each of the following factors:
Completeness (do we know all of the story details?)
Complete (0)
Incomplete (1)
Unknown (2)
Volatility (is it likely to change?)
low (0)
medium (1)
high (2)
Complexity (how hard is it to build?)
simple (0)
standard (1)
complex (2)
All indexes for a user story are added together, assigning the user story a risk index of low (0–1), medium (2–4), or high (5–6).
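That scoring rule reduces to a few lines of code. The Python sketch below is illustrative only: the per-factor scores and the bucket thresholds come from the example above, while the function name and shape are assumptions.

```python
# Illustrative risk-index scoring: each factor contributes 0-2 points and
# the total is bucketed into low (0-1), medium (2-4) or high (5-6).
def risk_index(completeness: int, volatility: int, complexity: int) -> str:
    total = completeness + volatility + complexity
    if total <= 1:
        return "low"
    if total <= 4:
        return "medium"
    return "high"

# Example: unknown story details (2), medium volatility (1), simple build (0).
print(risk_index(completeness=2, volatility=1, complexity=0))  # -> "medium"
```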
Steering phase
Within the steering phase the programmers and business people can "steer" the process. That is to say, they can make changes. Individual user stories, or relative priorities of different user stories, might change; estimates might prove wrong. This is the chance to adjust the plan accordingly.
Iteration planning
The number of story points planned for an iteration is based on the team's velocity. Iteration duration can be 1 to 3 weeks.
Exploration phase
The exploration phase of the iteration planning is about creating tasks and estimating their implementation time.
Translate the requirement to tasks: Place on task cards.
Combine/Split task: If the programmer cannot estimate the task because it is too small or too big, the programmer will need to combine or split the task.
Estimate task: Estimate the time it will take to implement the task.
Commitment phase
Within the commitment phase of the iteration planning programmers are assigned tasks that reference the different user stories.
A programmer accepts a task: Each programmer picks a task for which he or she takes responsibility.
Programmer estimates the task: Because the programmer is now responsible for the task, he or she should give the eventual estimation of the task.
Set load factor: The load factor represents the ideal amount of hands-on development time per programmer within one iteration. For example, in a 40-hour week, with 5 hours dedicated to meetings, this would be no more than 35 hours.
Balancing: When all programmers within the team have been assigned tasks, a comparison is made between the estimated time of the tasks and the load factor. Then the tasks are balanced out among the programmers. If a programmer is overcommitted, other programmers must take over some of his or her tasks and vice versa.
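As a rough sketch of that balancing comparison, the following Python snippet flags overcommitted programmers against a load factor; all names, numbers and the reporting format are hypothetical.

```python
# Illustrative balancing check: compare each programmer's committed task
# estimates (in hours) against the load factor for one iteration.
LOAD_FACTOR_HOURS = 35  # e.g. a 40-hour week minus 5 hours of meetings

def balance_report(commitments: dict) -> None:
    for programmer, task_estimates in commitments.items():
        committed = sum(task_estimates)
        if committed > LOAD_FACTOR_HOURS:
            print(f"{programmer}: overcommitted by {committed - LOAD_FACTOR_HOURS}h, hand off tasks")
        else:
            print(f"{programmer}: {LOAD_FACTOR_HOURS - committed}h of slack, can take on more")

balance_report({"alice": [10, 15, 14], "bob": [8, 12]})  # alice is over by 4h
```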
Steering phase
The implementation of the tasks is done during the steering phase of the iteration.
Get a task card: The programmer gets the task card for one of the tasks to which he or she has committed.
Find a Partner: The programmer will implement this task along with another programmer. This is further discussed in the practice Pair Programming.
Design the task: If needed, the programmers will design the functionality of the task.
Implement the task using Test-driven development (TDD) (see below)
Run Functional test: Functional tests (based on the requirements in the associated user story and task card) are run.
Test driven development
Unit tests are automated tests that test the functionality of pieces of the code (e.g. classes, methods). Within XP, unit tests are written before the eventual code is coded. This approach is intended to stimulate the programmer to think about conditions in which his or her code could fail. XP says that the programmer is finished with a certain piece of code when he or she cannot come up with any further conditions under which the code may fail.
Test driven development proceeds by quickly cycling through the following steps, with each step taking minutes at most, preferably much less. Since each user story will usually require one to two days of work, a very large number of such cycles will be necessary per story.
Write unit test: The programmers write a minimal test that should fail because the functionality hasn't been fully implemented in the production code.
Watch the new test fail: The programmers verify the test does indeed fail. While it may seem like a waste of time, this step is critical because it verifies that your belief about the state of the production code is correct. If the test does not fail, the programmers should determine whether there is a bug in the test code, or whether the production code already supports the functionality described by the new test.
Write code: The programmers write just enough production code so the new test will pass.
Run test: The unit tests are executed to verify that the new production code passes the new test, and that no other tests are failing.
Refactor: Remove any code smells from both the production and test code.
For a more intense version of the above process, see Uncle Bob's Three Rules of TDD.
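One red-green cycle of the steps above, sketched with Python's built-in unittest module; the add function is a deliberately trivial, hypothetical example, shown here after the cycle has completed.

```python
# Step 1 writes the test first; step 3 then adds just enough production
# code to make it pass. Both are shown together after one full cycle.
import unittest

def add(a, b):
    return a + b  # step 3: the minimal production code that passes the test

class TestAdd(unittest.TestCase):
    def test_adds_two_numbers(self):  # step 1: written before add() existed
        self.assertEqual(add(2, 3), 5)

if __name__ == "__main__":
    unittest.main()  # steps 2 and 4: run the suite, watch it fail, then pass
```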
Whole team
Within XP, the "customer" is not the one who pays the bill, but the one who really uses the system. XP says that the customer should be on hand at all times and available for questions. For instance, the team developing a financial administration system should include a financial administrator.
Continuous process
Continuous integration
The development team should always be working on the latest version of the software. Since different team members may have versions saved locally with various changes and improvements, they should try to upload their current version to the code repository every few hours, or when a significant break presents itself. Continuous integration will avoid delays later on in the project cycle, caused by integration problems.
Design improvement
Because XP doctrine advocates programming only what is needed today, and implementing it as simply as possible, at times this may result in a system design that is difficult to change. One of the symptoms of this is the need for dual (or multiple) maintenance: functional changes start requiring changes to multiple copies of the same (or similar) code. Another symptom is that changes in one part of the code affect many other parts. XP doctrine says that when this occurs, the system is telling you to refactor your code by changing the architecture, making it simpler and more generic.
Small releases
The delivery of the software is done via frequent releases of live functionality that create concrete value. The small releases help the customer gain confidence in the progress of the project. This also reinforces the whole-team concept, as the customer can now make suggestions on the project based on real experience.
Shared understanding
Coding standard
A coding standard is an agreed-upon set of rules that the entire development team adheres to throughout the project. The standard specifies a consistent style and format for source code, within the chosen programming language, as well as various programming constructs and patterns that should be avoided in order to reduce the probability of defects. The coding standard may be the standard conventions specified by the language vendor (e.g. the Code Conventions for the Java Programming Language, recommended by Sun), or custom-defined by the development team.
Extreme Programming backers advocate code that is self-documenting to the furthest degree possible. This reduces the need for code comments, which can get out of sync with the code itself.
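As a small illustration of that preference, compare a comment-dependent function with a self-documenting rewrite; both versions are hypothetical.

```python
# Comment-dependent: the meaning lives in a comment that can drift.
def calc(d):
    # d is days overdue; fine is 50 cents per day, capped at 1000 cents
    return min(d * 50, 1000)

# Self-documenting: the names carry the meaning, so no comment is needed.
DAILY_FINE_CENTS = 50
MAX_FINE_CENTS = 1000

def overdue_fine_cents(days_overdue: int) -> int:
    return min(days_overdue * DAILY_FINE_CENTS, MAX_FINE_CENTS)
```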
Collective code ownership
Collective code ownership (also known as "team code ownership" and "shared code") means that everyone is responsible for all the code; therefore, everybody is allowed to change any part of the code. Collective code ownership is not only an organizational policy but also a feeling. "Developers feel team code ownership more when they understand the system context, have contributed to the code in question, perceive code quality as high, believe the product will satisfy the user needs, and perceive high team cohesion." Pair programming, especially overlapping pair rotation, contributes to this practice: by working in different pairs, programmers better understand the system context and contribute to more areas of the code base.
Collective code ownership may accelerate development because a developer who spots an error can fix it immediately, which can reduce bugs overall. However, programmers may also introduce bugs when changing code that they do not understand well. Sufficiently well-defined unit tests should mitigate this problem: if unforeseen dependencies create errors, then when unit tests are run, they will show failures.
Collective code ownership may lead to better member backup, greater distribution of knowledge and learning, shared responsibility for the code, greater code quality, and reduced rework. But it may also lead to increased member conflict, more bugs, interruptions to developers' mental flow and reasoning, increased development time, or a shallower understanding of the code.
Simple design
Programmers should take a "simple is best" approach to software design. Whenever a new piece of code is written, the author should ask themselves 'is there a simpler way to introduce the same functionality?'. If the answer is yes, the simpler course should be chosen. Refactoring should also be used to make complex code simpler.
System metaphor
The system metaphor is a story that everyone – customers, programmers, and managers – can tell about how the system works. It is a naming concept for classes and methods that should make it easy for a team member to guess the functionality of a particular class or method from its name alone. For example, a library system may create a loan_records class for borrowers, and if an item were to become overdue it may perform a make_overdue operation on a catalogue class. For each class or operation, the functionality is obvious to the entire team.
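A minimal sketch of that library metaphor, with the class and operation names taken from the example above and everything else invented for illustration:

```python
# The names alone should let a team member guess each piece's purpose.
class Borrower:
    def __init__(self, name: str):
        self.name = name

class LoanRecord:
    def __init__(self, borrower: Borrower, item: str):
        self.borrower = borrower
        self.item = item

class Catalogue:
    def __init__(self):
        self.overdue_items = []

    def make_overdue(self, loan: LoanRecord) -> None:
        self.overdue_items.append(loan.item)

catalogue = Catalogue()
catalogue.make_overdue(LoanRecord(Borrower("Ada"), "Design Patterns"))
print(catalogue.overdue_items)  # -> ['Design Patterns']
```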
Programmer welfare
Sustainable pace
The concept is that programmers or software developers should not work more than 40-hour weeks, and if there is overtime one week, the next week should not include more overtime. Since the development cycles are short cycles of continuous integration, and full development (release) cycles are more frequent, the projects in XP do not follow the typical crunch time that other projects require.
Also, included in this concept is that people perform best and most creatively if they are well rested.
A key enabler of sustainable pace is frequent code merging and always-executable, test-covered, high-quality code. The constant refactoring way of working demands fresh and alert minds, and the intense collaboration within the team drives a need to recharge over weekends.
Well-tested, continuously integrated, frequently deployed code and environments also minimize the frequency of unexpected production problems and outages, and the associated after-hours nights and weekends work that is required.
See also
Extreme programming
Continuous integration
Multi-stage continuous integration
Class-responsibility-collaboration card
References
External links
XP Practices
Kent Beck XP Practices
Ron Jeffries XP Practices
Software development process
Extreme programming
|
46977873
|
https://en.wikipedia.org/wiki/Virtual%20currency%20law%20in%20the%20United%20States
|
Virtual currency law in the United States
|
United States virtual currency law is financial regulation as applied to transactions in virtual currency in the U.S. The Commodity Futures Trading Commission has regulated and may continue to regulate virtual currencies as commodities. The Securities and Exchange Commission also requires registration of any virtual currency traded in the U.S. if it is classified as a security and of any trading platform that meets its definition of an exchange.
The regulatory structure also includes tax regulations and FINCEN transparency regulations between financial exchanges and the individuals and corporations with whom they conduct business.
The regulatory and market environment
The Internal Revenue Service (IRS) describes Virtual Currencies (VCs) as "a digital representation of value that functions as a medium of exchange, a unit of account, and/or a store of value [and] does not have legal tender status in any jurisdiction." Although electronic payment systems have been part of American life since at least 1871, when Western Union "introduced money transfer" through the telegraph and in 1914 "introduced the first consumer charge-card", virtual currencies differ from these digital payment structures because, unlike traditional digital transfers of value, virtual currencies do not represent a claim on value; rather, the virtual currency is the value.
The National Automated Clearing House Association (NACHA), through the Automated Clearing House (ACH), moves almost $39 trillion and 22 billion electronic financial transactions each year. These electronic transfers of money through the ACH Network represent a claim to physical legal tender. Alternatively, "unlike electronic money, a VC, particularly in its decentralised variant, does not represent a claim on the issuer."
Electronic payment networks, such as the ACH, have decreased the costs and time required to transfer value and increased reliability and transparency. However, traditional electronic payment networks, even with transnational networks and satellite communications, differ from a virtual currency. For example, the Bitcoin exchange Coinbase charges only 1% on all Bitcoin exchanges to legal tender, compared to 2%–4% for traditional online payment systems, like PayPal and credit card companies, or a global average of 7.49% for remittances sent through major remittance corridors. The lower cost of transferring value is a great incentive to both users and merchants. Faster transaction speed is also an advantage of using VC. VC may also help to reduce identity theft because of the cryptographic nature of some of the currencies.
Some experts predict various types of VCs will continue to increase, and the demand for the financial system to adopt methods of accepting these currencies will continue to grow. In 2011, Microsoft's Director of Corporate Affairs sent a letter to the Reserve Bank of Australia asking, "whether the domestic payments infrastructure could be modified or adjusted in some way to facilitate and manage the exchange of value beyond traditional currencies".
The online sale of goods and services in the United States accounted for an annual total of $283 billion in transactions from the start of the 3rd quarter of 2013 to the end of the 2nd quarter of 2014 (adjusted for seasonal variation). VCs are increasing as a percentage of these transactions. The Bitcoin exchange company Coinbase offers a payment service that allows merchants to receive Bitcoin and then automatically exchange the Bitcoin into fiat currency. The speed of this exchange helps merchants to avoid the volatility of Bitcoin. In September 2014, eBay announced that its payment processor Braintree would be accepting Bitcoin. As of November 2014, the market capitalization of Bitcoin was just below $5 billion, though it has reached historic highs close to $14 billion. Internet use and the virtual world are also growing: world Internet use increased from 15.8% in 2005 to 38.1% in 2013.
This Internet growth is characterized by a consumer demand for a decentralized Internet experience that is not limited by or dependent on traditional institutions and governments. This movement aims to create an Internet based on the idea of Virtual, Distributed Parallel (VDP) States, "acting as a kind of organizational counterpoint to that State's governing bodies". Cryptocurrency and other virtual currencies are the VDP movement's currency alternative to traditional currency and traditional financial institutions.
Regulatory authority
The U.S. Congress has the power to regulate VCs as securities, through its power to coin money and prohibit private currencies, and through its constitutional power to regulate interstate commerce. In a November 2014 decision, the Court upheld the power of regulators to prosecute a defendant who "designed, created and minted coins called 'Liberty Dollars,' coins 'in resemblance or in similitude' [or made to look like] of U.S. coins." Although the defendant did not pass the Liberty Dollars currency as a counterfeit, the currency was in close enough "resemblance of coins of the United States or of foreign countries" and consequently fell under the authority of 18 U.S.C.A. § 486. The Court has not decided if § 486 includes the power to prohibit VCs, but if a Court decides that the purpose and intent of VC resembles United States or foreign currency it may fall under § 486.
The Stamp Payment Act of 1862 prohibits anyone from "mak[ing], issu[ing], circulat[ing], or pay[ing] out any note, check, memorandum, token, or other obligation for a less sum than $1, intended to circulate as money or to be received or used in lieu of lawful money of the United States". The Court has not decided if Congress has the power to prohibit VCs under this Act or any other existing regulation or statute.
Tax regulations
The IRS treats VC as property and requires gains or losses to be calculated upon an exchange of VC. This means that every VC user must track the gains or losses of every one of their VC transactions to stay in compliance with IRS regulations. The Tax Foundation, a tax policy research organization, argues that the IRS got it wrong by categorizing VC as property, because the required record keeping creates compliance obstacles and because this categorization ignores how VC is actually used, treating it instead as something that people hold for an investment.
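A simplified Python sketch of the computation that property treatment requires on every disposal; the function shape and the figures are hypothetical, and details such as holding periods are ignored.

```python
# Illustrative only: gain or loss on disposing of virtual currency treated
# as property is proceeds minus cost basis for the units spent.
def gain_or_loss(units_spent: float,
                 cost_basis_per_unit: float,
                 fair_market_value_per_unit: float) -> float:
    """Positive result is a reportable gain, negative a loss (in dollars)."""
    return units_spent * (fair_market_value_per_unit - cost_basis_per_unit)

# Spending 0.01 BTC (acquired at $30,000/BTC) while BTC trades at $40,000
# produces a gain the user must track, even for a small purchase.
print(gain_or_loss(0.01, 30_000, 40_000))  # about 100.0 (a reportable gain)
```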
The pseudonymity of VC accounts allows users to hide funds and evade taxes. Similar to receiving cash, merchants may not report the earnings to the IRS if the merchant believes the IRS will not be able to account for the transaction. The IRS may be able to audit a VC exchange the merchant uses, but if the merchant is using a personal VC account or using multiple exchanges the IRS may not be able to track these transactions.
Electronic Fund Transfer Act
Virtual currencies lack many of the regulations and consumer protections that legal tender currencies have. Under U.S. law, a cardholder of a credit card is protected from liability in excess of $50 if the card was used for an unauthorized transaction.
The Electronic Fund Transfer Act (EFTA) was written to protect consumers in transfers through ATMs, point-of-sale terminals, ACH systems, remote transfers, and remittance transfers. However, the EFTA does not apply to VCs, and due to the nature of many VCs, it may not be possible for VCs to be in complete compliance with the Act. For example, the regulations require that a consumer be allowed 30 minutes to cancel an electronic transfer. Many VCs, such as Bitcoin, do not allow chargebacks, so cancelling a Bitcoin transfer is not possible. Additionally, a credit card that transacts in VC is not protected by the fifty-dollar maximum liability for the holder of the credit card.
FinCEN regulations
In 2013, the Financial Crimes Enforcement Network (FinCEN) released a paper stating exchanges and administrators of VC are subject to the Bank Secrecy Act (BSA) and must register as a Money Services Business (MSB). The stated purpose of this legislation was to prevent financial exchanges from being used to launder money or finance crime, including terrorism. The European Central Bank has also recommended registering exchanges to "reduce the incentive for terrorists, criminals and money launderers to make use of these virtual currency schemes for illegal purposes".
Monetary policy
The current amount of VC use in the global market is unlikely to significantly affect the Federal Reserve's ability to conduct monetary policy; however, if the size of the VC market were to grow larger it may affect monetary policy. Even with the impact VC could have on monetary policy, the Reserve does not have the authority to supervise or regulate VC.
At the May 9, 2014 meeting of the Federal Advisory Council and Board of Governors of the Federal Reserve, Bitcoin was deemed to "not present a threat to economic activity by disrupting traditional channels of commerce" but rather a potential "boon": its "global transmissibility opens new markets to merchants and service providers" and "capital flows from the developed to the developing world should increase". In its 2009 Report to Congress, the U.S. Treasury claimed that the dollar would continue to be a major reserve currency "as long as the United States maintains sound macroeconomic policies and deep, liquid, and open financial markets".
According to former CIA CTO Gus Hunt, the "Government's going to learn from Bitcoin, and all the official government currencies are going to become crypto currencies themselves". Under 12 U.S. Code § 411, the Federal Reserve has the authority to issue Federal Reserve notes, and under 12 U.S.C.A. § 418, the Treasury Department "in order to furnish suitable notes for circulation...shall cause plates and dies to be engraved" and print numbered quantities. The Secretary of the Treasury has the authority to "mint and issue coins". However, it is uncertain if this authority includes the authority to "mint" electronic coins for a government-backed cryptocurrency protocol. According to the Federal Reserve Bank of St. Louis's Director of Research, "the most important aspect of this technology revolution is, in my view, the threat of entry into the money and payment system and what I think it will do is to force traditional institutions, including central banks, to either adapt or die".
Illegal activities with virtual currency
Money laundering
The culture of laundering money in the Bitcoin network is so prevalent there is even a website called bitlaunder.com. The company bitlaunder.com claims they are "experts at laundering Bitcoin" and they "use the most sophisticated methods available to completely anonymize your Bitcoins and obscure their history from forensic tracing". The U.S. Government Accountability Office reported that the pseudonymity in VCs makes it difficult for the government to detect money laundering and other financial crimes, and it may be necessary to rely on international cooperation to address these crimes. Similarly, the European Banking Authority claimed that regulations should strive for "global coordination, otherwise it will be difficult to achieve a successful regulatory regime". In spite of the best regulations from the United States and the European Union, the inherent nature of the Bitcoin protocol allows for pseudonymous transfers of Bitcoins to or from anywhere in the world, so illegal transactions will not be completely eliminated through regulations.
Anonymity in Bitcoins and Altcoins (forks from the Bitcoin protocol) can be increased by adding software augmentations to the VC. Zerocoin, for example, uses an algorithmic process called "zero-knowledge proof" to hide the value of the coins. Dark Wallet anonymously combines transfers of VC to obscure the origin of the transfer, and the developers intend to integrate the software into a Tor network in the future. One of the developers of Dark Wallet described it as "just money laundering software". He said, "I want a private means for black market transactions", "whether they're for non-prescribed medical inhalers, MDMA for drug enthusiasts, or weapons." A crypto-currency known as Darkcoin offers even more anonymity than Bitcoin. Similar to Dark Wallet, Darkcoin combines transactions to increase the difficulty of analyzing where the currency was sent. "Some users may be trading Bitcoins for Darkcoins and back again, using the Darkcoin network as a giant bitcoin-laundering service."
Other forms of VC have also been used for making illegal transactions. The VC service and exchange Liberty Reserve allegedly laundered over 6 billion dollars from crimes such as "credit card fraud, identity theft, investment fraud, computer hacking, child pornography, and narcotics trafficking". E-gold, a company with a VC tied to the value of gold, pleaded guilty to money laundering and running an unlicensed money transmitting business, and consequently had to forfeit $45,816,817.84 to the government.
Although the Bank Secrecy Act (BSA) applies to VC exchanges and administrators, VC is still used to finance crime and launder money, because not every transaction in VC networks is required to comply with the BSA and not every online exchange complies with the BSA. In September 2014, Robert M. Faiella, a/k/a "BTCKing", pleaded guilty to operating an unlicensed exchange that traded over a million dollars in cash for Bitcoin used on the criminal marketplace known as "Silk Road". Despite BSA regulations, Faiella and the users of his exchange were able to hide their identities through both pseudonymous Bitcoin addresses and an anonymous network that hid their IP addresses.
Transactions on Tor
In November 2014, the FBI, "as part of a coordinated international law enforcement action", seized dozens of "dark markets", including Silk Road II operating on the anonymous Tor network. These markets accepted payment in Bitcoins or similar crypto-currencies, and operated both domestically and internationally. Although the FBI was successful in cracking through the anonymous Tor network and discovering the origin of the illegal Bitcoin markets Silkroad I and II and similar illegal markets, the methods the FBI used may not be legal or available, in every case, under the U.S. Constitution's prohibition against unreasonable searches and seizures.
In October 2014, the court decided the fate of the defendant regarding his role in the first Silkroad, but the court refused to decide whether his Fourth Amendment rights were violated because he never pleaded that he had a right to privacy in the server that was searched. The Court claimed that the defendant did not plead a violation of his Fourth Amendment rights because either "he in fact has no personal privacy interest in the Icelandic server, or because he has made a tactical decision not to reveal that he does", thus claiming that Ulbricht "therefore has no basis to challenge". This is significant because the Court did not decide if the techniques the FBI used to locate the defendant's IP address violated the Fourth Amendment.
Operating behind the anonymous Tor network might give a subjective expectation of privacy, but this may not be a reasonable expectation of privacy that would survive the Katz test, because the Tor software explicitly states that it "can't solve all anonymity problems". Under Warshak, the defendant had a "reasonable expectation of privacy" in the content of his email; however, unlike an email, an IP address is generally visible to everyone. The FBI claimed they found Silkroad's IP address by "typing in miscellaneous entries into the username, password, and CAPTCHA fields contained in the interface" to find an IP address associated with an application misconfigured for the Tor network.
Securities fraud
The Securities and Exchange Commission (SEC) treats Bitcoin and VCs as money for the purpose of securities crimes, and it is likely that anti-gambling regulations will be enforced with the same reasoning. In July 2013, Trendon T. Shavers was charged by the SEC with "defrauding investors in a Ponzi scheme involving Bitcoin" that amounted to over 700,000 Bitcoin, or $4.5 million based on the average price of Bitcoin in 2011 and 2012 when the investments were offered and sold. Shavers implemented the scheme through Bitcoin Savings and Trust (BTCST), "an unincorporated online investment scheme" that was not registered with the SEC. "The collective loss to BTCST investors who suffered net losses (there were also net winners) was 265,678 bitcoins, or more than $149 million at current exchange rates" as of September 2014.
Shavers attempted to argue the investments were not securities because Bitcoin is not money. However, in a precedent-setting decision, the magistrate judge determined that Bitcoin is money, and thus the investments were securities. The magistrate judge stated, "[i]t is clear that Bitcoin can be used as money. It can be used to purchase goods or services, and as Shavers stated, used to pay for individual living expenses. The only limitation of Bitcoin is that it is limited to those places that accept it as currency. However, it can also be exchanged for conventional currencies, such as the U.S. dollar, Euro, Yen, and Yuan. Therefore, Bitcoin is a currency or form of money, and investors wishing to invest in BTCST provided an investment of money." This decision paved the way for other regulators to treat Bitcoin and VCs as money, so it is likely this decision will be cited if regulators decide to prosecute VC transactions under the UIGEA, Illegal Gambling Business Act, Wire Act, or any other regulation involving financial transactions.
Consumer warnings
In August 2014, the Consumer Financial Protection Bureau (CFPB) released a consumer advisory to warn consumers of the risk of VCs. The advisory warned consumers of hackers, scammers, loss of VCs by losing the private key, fewer regulations, and an inability to make chargebacks. States have also released consumer advisories and warned users that VCs are not insured by the FDIC, highly volatile, often associated with criminal enterprises, new, and unproven technology. David S. Cohen, the Under Secretary for Terrorism and Financial Intelligence at the Treasury Department, stated that VCs pose "clear risks to consumers and investors" because the "anonymity and transaction irrevocability [of VCs] expose[s] them to fraud and theft, [a]nd unlike FDIC insured banks and credit unions that guarantee the safety of deposits, there are no such safeguards provided to virtual wallets".
This weak regulatory environment leaves VCs prone to volatility, market manipulation, money laundering, fraud, and illegal transactions. Alongside its August 11, 2014 advisory, the CFPB also began accepting complaints on VC products and services.
Online gambling
The federal legality of online gambling with Bitcoins in the United States has not yet been decided; however, the legality of online gambling with legal tender currency has been decided. In April 2011, the FBI indicted the "founders of the three largest Internet poker companies doing business in the United States—PokerStars, Full Tilt Poker, and Absolute Poker...with bank fraud, money laundering, and illegal gambling". In 2006, the United States enacted the Unlawful Internet Gambling Enforcement Act (UIGEA), yet the poker companies continued to operate until the 2011 indictment. Similar to the 2011 indictment, the Justice Department may be collecting evidence and building a case against the Bitcoin gambling sites before they launch an indictment.
The UIGEA does not expressly prohibit Internet gambling, but it does make it illegal for an online gambling business to knowingly accept fund transfers. The Bitcoin gambling sites are currently circumventing this legislation by keeping their funds in bitcoin cryptocurrency wallets. However, in order for these sites to exchange their Bitcoins for a fiat currency they must use a financial exchange, so even by receiving their earnings with Bitcoin, the online gambling sites may come into jurisdiction of the UIGEA if the gambling business accepts payment through "(i) automated clearing house (ACH) systems, (ii) card systems, (iii) check collection systems, (iv) money transmitting businesses, and (v) wire transfer systems."
The Illegal Gambling Business Act may also prohibit Bitcoin gambling sites because the act broadly prohibits all gambling businesses that are in (i) "violation of the law of a State or political subdivision in which it is conducted; (ii) involves five or more persons who conduct, finance, manage, supervise, direct, or own all or part of such business; and (iii) has been or remains in substantially continuous operation for a period in excess of thirty days or has a gross revenue of $2,000 in any single day." Under IRS regulations, Bitcoin and other VCs are treated as property, so losses and gains must be calculated to determine the value of the virtual currency. If an online gambling business earned the value of at least $2,000 in Bitcoin "in any single day", it may fall under this act.
The Federal Wire Act (Wire Act) prohibits "bets or wagers on any sporting event or contest". Some Bitcoin gambling sites have a mixture of betting on sports and traditional casino games, and it is conceivable the bets on sporting events could fall within the language of the Wire Act. The Wire Act expressly mentions "money or credit as a result of bets or wagers", and VCs may fall under the intent of the Wire Act because they operate as credits that can be redeemed or exchanged at VC exchanges, and they operate like money because they facilitate transactions.
Some online wagers do not fit under the typical definition of gambling or a game of chance. The Commodity Futures Trading Commission (CFTC) refers to these as "Event Contracts". In December 2011, the CFTC ordered an online business to cease listing Political Events Contracts (i.e., betting on who will be elected) for trade, as contrary to the public interest. The CFTC's jurisdiction is being tested by online businesses that accept virtual currency for event contracts. One such website, predictious.com, which accepts Bitcoin and other VCs, lists trades such as who will be elected, whether a celebrity will have a boy or a girl, or who will win a science competition.
Federal Deposit Insurance Corporation
The Federal Deposit Insurance Corporation (FDIC) does not insure VCs.
Federal Election Commission
In a May 2014 Advisory Opinion, the Federal Election Commission (FEC) decided that Bitcoin donations are permitted under FEC laws. This decision will permit microdonations, and it may encourage more people to donate to campaigns.
See also
Bitcoin
Digital currency exchanger
Exchange (organized market)
Federal Reserve
Liberty Reserve
Legality of bitcoin by country or territory
Cryptocurrencies in Europe
References
External links
Edward V. Murphy, M Maureen Murphy, Michael V. Seitzinger (2015), Bitcoin: Questions, Answers, and Analysis of Legal Issues, Congressional Research Service.
E-commerce in the United States
Alternative currencies
Financial regulation in the United States
|
34991135
|
https://en.wikipedia.org/wiki/MediaGoblin
|
MediaGoblin
|
GNU MediaGoblin (also shortened to MediaGoblin or GMG) is a free, decentralized Web platform (server software) for hosting and sharing many forms of digital media. It strives to provide an extensible, federated, and freedom-respecting software replacement for major media publishing services such as Flickr, DeviantArt, and YouTube.
History
The origins of GNU MediaGoblin date back to 2008, when a gathering was held at the Free Software Foundation in order to discuss the path that Internet communities should take.
The answer was that restrictive and centralized structures were both technically and ethically doubtful, and may harm the typical fairness and availability of the Internet.
Several projects have since appeared to prevent this, including Identi.ca, Libre.fm, Diaspora, among others.
The MediaGoblin project remains in active development.
Design and features
MediaGoblin is part of GNU, and its code is released under the terms of the GNU Affero General Public License, meaning that it adheres to the principles of free and open-source software. The copyright on everything else (e.g. design, logo) is dedicated to the public domain. Christine Lemmer-Webber, the core developer, came up with the name "MediaGoblin", which also puns on the pronunciation of "gobbling".
The main page displays an upper banner with MediaGoblin's typeface and an authentication section for users. The remaining space is left to show thumbnails of the latest posted works. Each user owns a personal profile comprising two vertical sections – one for uploads arranged as a gallery and another with a customizable text box. For displaying media, the platform focuses on the work itself rather than overstocking the page with options and buttons; nonetheless, comments can be added under the artwork description. Some other features like tags, metadata, theming, Creative Commons licensing and GPS support can be enabled as separate plug-ins to enrich the usage of GNU MediaGoblin.
The platform successfully hosts and displays many sorts of media:
As of version 0.3.1, it includes support for plain text (ASCII art) and images (PNG and JPEG).
HTML5 capabilities are widely used to play video and/or audio contained in the WebM format, while FLAC, WAV and MP3 uploads are automatically transcoded to Vorbis audio and then encapsulated into WebM (a sketch of this step follows the list).
Support for 3D models (preview and rendering) was added on 22 October 2012 and is achieved by means of HTML5 Canvas, Thingiview, WebGL and Blender.
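The audio transcode in the list above can be sketched as a thin wrapper around the ffmpeg command-line tool. This is not MediaGoblin's actual processing code; it is only an assumed illustration of the described FLAC/WAV/MP3-to-Vorbis-in-WebM step, and it requires an ffmpeg binary on the PATH.

```python
# Illustrative only: re-encode an uploaded audio file to Vorbis audio in a
# WebM container by shelling out to ffmpeg (assumed to be installed).
import subprocess

def transcode_to_webm_vorbis(src: str, dst: str) -> None:
    subprocess.run(
        ["ffmpeg", "-i", src,   # input: e.g. FLAC, WAV or MP3
         "-c:a", "libvorbis",   # re-encode the audio stream to Vorbis
         dst],                  # a .webm extension selects the WebM container
        check=True,             # raise if ffmpeg exits with an error
    )

transcode_to_webm_vorbis("upload.flac", "upload.webm")
```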
Mascot
The project mascot is a purple goblin called Gavroche wearing clothing that resembles a stereotypical artist costume.
See also
PeerTube
Plumi
Creative Commons
Free culture movement
List of software under the GNU AGPL
List of computing mascots
References
External links
GNU MediaGoblin website
Free content management systems
Free image galleries
Free server software
GNU Project software
Internet services supporting OpenID
Software using the GNU AGPL license
Video hosting software
Web hosting
|
43672606
|
https://en.wikipedia.org/wiki/Openpass
|
Openpass
|
OpenPass is a method for recording data on RFID cards in an integrated access control system whose components may run proprietary software from different providers.
OpenPass is released under the GPL license.
The OpenPass system
The OpenPass system consists of:
contactless smartcards (ISO 15693), without proprietary restrictions;
ticket counters and gates compatible with the standard;
a central server and open platform for data collection;
real-time or batch data transmission over a web service connection.
To allow the exchange of information between heterogeneous access control systems, OpenPass defines an interchange format based on the XML metalanguage. Access credentials are stored on the smartcard and organized through the use of markers, represented in XML. The XML representations are made public by the OpenPass server.
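For illustration only, a credential record of the kind described might be built as follows; every element and attribute name below is invented, since the actual marker vocabulary is the one published by the OpenPass server.

```python
# Hypothetical sketch of an XML access-credential record; the element and
# attribute names are invented for illustration, not the real OpenPass schema.
import xml.etree.ElementTree as ET

credential = ET.Element("credential", uid="E0040100019C8E2B")  # card UID
ticket = ET.SubElement(credential, "ticket")
ET.SubElement(ticket, "domain").text = "Skipass Lombardia"
ET.SubElement(ticket, "validFrom").text = "2024-01-15"
ET.SubElement(ticket, "validTo").text = "2024-01-16"

print(ET.tostring(credential, encoding="unicode"))
```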
With a single RFID card, the user can access all sites in the system: each company is able to issue its own tickets, and the card is recognized independently by every member of the system.
The OpenPass standard defines a distributed network of data centers connected to the central server via web services. The data centers collect information on sales and gate passages and send it to the central server. The exchange format between the server and the collection centers is XML.
The OpenPass server receives the data and stores them in a centralized SQL database, where each data item is related to a UID and any personal details of the customer.
Highlights of OpenPass
The OpenPass methodology is characterized by:
data accessible and understandable by users;
fast and "hands-free" passage to the gates;
online recharge and pre-sale of passes;
security: access credentials are encrypted at issuance;
multifunctionality;
management of the division of profits from sales of tickets valid for multiple domains;
possibility of batch operation of the control system, in hostile contexts where the connection is not continuous;
reduced cost of cards and equipment, since the system is not bound to proprietary software.
OpenPass projects
For now, OpenPass is applied to access control systems for ski lifts in Italy and France:
Italy: Skipass Lombardia
Skipass Lombardia was the first example in Europe of an open standard for integrated access control systems with heterogeneous and proprietary software. OpenPass created an integrated system for all ski lift companies in the Lombardy Region: 310 ski lifts and 46 companies in 30 ski areas.
France: Nordic Pass Rhône Alpes
The Federation of Nordic skiing in the French region of Rhône-Alpes (FRAN) has promoted the Nordic Pass Rhône Alpes: a project of integrated access to 5000 km of ski runs in 83 Nordic skiing stations, using the OpenPass standard.
Italy: SkiArea VCO
In the Alps of Piedmont, Neveazzurra ski resort has implemented SkiArea VCO: a project of integrated access to the stations of Neveazzurra resort: Alpe Devero, Antrona Cheggio, Ceppo Morelli, Domobianca (Domodossola), Druogno, Formazza, Macugnaga, Mottarone, Piana di Vigezio, Pian di Sole, San Domenico (Varzo).
References
Free software
|
39802440
|
https://en.wikipedia.org/wiki/Trusted%20execution%20environment
|
Trusted execution environment
|
A trusted execution environment (TEE) is a secure area of a main processor. It guarantees code and data loaded inside to be protected with respect to confidentiality and integrity. A TEE as an isolated execution environment provides security features such as isolated execution, integrity of applications executing with the TEE, along with confidentiality of their assets. In general terms, the TEE offers an execution space that provides a higher level of security for trusted applications running on the device than a rich operating system (OS) and more functionality than a 'secure element' (SE).
History
The Open Mobile Terminal Platform (OMTP) first defined TEE in their "Advanced Trusted Environment: OMTP TR1" standard, defining it as a "set of hardware and software components providing facilities necessary to support Applications" which had to meet the requirements of one of two defined security levels. The first security level, Profile 1, was targeted against only software attacks, while Profile 2 was targeted against both software and hardware attacks.
Commercial TEE solutions based on ARM TrustZone technology, conforming to the TR1 standard, were later launched, such as Trusted Foundations developed by Trusted Logic.
Work on the OMTP standards ended in mid 2010 when the group transitioned into the Wholesale Applications Community (WAC).
The OMTP standards, including those defining a TEE, are hosted by GSMA.
Details
The TEE typically consists of a hardware isolation mechanism, plus a secure operating system running on top of that isolation mechanism; however, the term has been used more generally to mean a protected solution. Whilst a GlobalPlatform TEE requires hardware isolation, others such as EMVCo use the term TEE to refer to both hardware/software-based and software-only solutions. FIDO uses the concept of TEE in the restricted operating environment for TEEs based on hardware isolation. Only trusted applications running in a TEE have access to the full power of a device's main processor, peripherals and memory, while hardware isolation protects these from user-installed apps running in the main operating system. Software and cryptographic isolation inside the TEE protect the trusted applications contained within from each other.
Service providers, mobile network operators (MNO), operating system developers, application developers, device manufacturers, platform providers and silicon vendors are the main stakeholders contributing to the standardization efforts around the TEE.
To prevent simulation of hardware with user-controlled software, a so-called "hardware root of trust" is used. This is a set of private keys that are embedded directly into the chip during manufacturing; one-time programmable memory such as eFuses is usually used on mobile devices. These keys cannot be changed, even after device resets; their public counterparts reside in a manufacturer database, together with a non-secret hash of a public key belonging to the trusted party (usually a chip vendor), which is used to sign trusted firmware alongside the circuits doing cryptographic operations and controlling access. The hardware is designed in a way which prevents all software not signed by the trusted party's key from accessing the privileged features. The public key of the vendor is provided at runtime and hashed; this hash is then compared to the one embedded in the chip. If the hash matches, the public key is used to verify a digital signature of trusted vendor-controlled firmware (such as a chain of bootloaders on Android devices or 'architectural enclaves' in SGX). The trusted firmware is then used to implement remote attestation.
When an application is attested, its untrusted component loads its trusted component into memory; the trusted application is protected from modification by untrusted components with hardware. A nonce is requested by the untrusted party from the verifier's server, and is used as a part of a cryptographic authentication protocol, proving integrity of the trusted application. The proof is passed to the verifier, which verifies it. A valid proof cannot be computed in simulated hardware (e.g. QEMU) because in order to construct it, access to the keys baked into hardware is required; only trusted firmware has access to these keys and/or the keys derived from them or obtained using them. Because only the platform owner is meant to have access to the data recorded in the foundry, the verifying party must interact with the service set up by the vendor. If the scheme is implemented improperly, the chip vendor can track which applications are used on which chip and selectively deny service by returning a message indicating that authentication has not passed.
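The boot-time key check described above amounts to hashing the supplied public key and comparing it, in constant time, with the hash fused into the chip. The following Python sketch is a simplified illustration with invented data; real implementations go on to verify full firmware signature chains rather than stopping at this hash.

```python
# Simplified, illustrative sketch of the root-of-trust key check: hash the
# public key supplied at runtime and compare it with the hash that was
# burned into one-time programmable memory at manufacturing time.
import hashlib
import hmac

def key_matches_root_of_trust(supplied_pubkey: bytes, fused_hash: bytes) -> bool:
    computed = hashlib.sha256(supplied_pubkey).digest()
    return hmac.compare_digest(computed, fused_hash)  # constant-time compare

# Invented example values: a vendor key and the hash set at manufacturing.
vendor_pubkey = b"example-vendor-public-key"
fused_hash = hashlib.sha256(vendor_pubkey).digest()

assert key_matches_root_of_trust(vendor_pubkey, fused_hash)
assert not key_matches_root_of_trust(b"attacker-key", fused_hash)
```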
To simulate hardware in a way which enables it to illicitly pass remote authentication, an attacker would have to extract keys from the hardware, which is costly because of the equipment and technical skill required to do so (for example, focused ion beams, scanning electron microscopes, microprobing, and chip decapsulation), or even impossible if the hardware is designed in such a way that reverse-engineering destroys the keys. In most cases, the keys are unique for each piece of hardware, so that a key extracted from one chip cannot be used by others (for example, with physically unclonable functions).
Though deprivation of ownership is not an inherent property of TEEs (it is possible to design the system in a way that allows only the user who has obtained ownership of the device first to control the system), in practice all such systems in consumer electronics are intentionally designed so as to allow chip manufacturers to control access to attestation and its algorithms. It allows manufacturers to grant access to TEEs only to software developers who have a (usually commercial) business agreement with the manufacturer, and to enable such use cases as tivoization and DRM.
Uses
There are a number of use cases for the TEE. Though not all possible use cases exploit the deprivation of ownership, TEE is usually used exactly for this.
Premium Content Protection/Digital Rights Management
Note: Much TEE literature covers this topic under the definition "premium content protection" which is the preferred nomenclature of many copyright holders. Premium content protection is a specific use case of Digital Rights Management (DRM), and is controversial among some communities, such as the Free Software Foundation. It is widely used by copyrights holders to restrict the ways in which end users can consume content such as 4K high definition films.
The TEE is a suitable environment for protecting digitally encoded information (for example, HD films or audio) on connected devices such as smart phones, tablets and HD televisions. This suitability comes from the ability of the TEE to deprive owner of the device from reading stored secrets, and the fact that there is often a protected hardware path between the TEE and the display and/or subsystems on devices.
The TEE is used to protect the content once it is on the device: while the content is protected during transmission or streaming by the use of encryption, the TEE protects the content once it has been decrypted on the device by ensuring that decrypted content is not exposed to any environment not approved by the app developer or platform vendor.
Mobile financial services
Mobile commerce applications such as mobile wallets, peer-to-peer payments, contactless payments or using a mobile device as a point of sale (POS) terminal often have well-defined security requirements. TEEs can be used, often in conjunction with near field communication (NFC), SEs and trusted backend systems, to provide the security required to enable financial transactions to take place.
In some scenarios, interaction with the end user is required, and this may require the user to expose sensitive information such as a PIN, password or biometric identifier to the mobile OS as a means of authenticating the user. The TEE optionally offers a trusted user interface which can be used to construct user authentication on a mobile device.
With the rise of cryptocurrency, TEEs are increasingly used to implement crypto-wallets, as they offer the ability to store tokens more securely than regular operating systems, and can provide the necessary computation and authentication applications.
Authentication
The TEE is well-suited for supporting biometric ID methods (facial recognition, fingerprint sensor and voice authorization), which may be easier to use and harder to steal than PINs and passwords. The authentication process is generally split into three main stages:
Storing a reference "template" identifier on the device for comparison with the "image" extracted in next stage.
Extracting an "image" (scanning the fingerprint or capturing a voice sample, for example).
Using a matching engine to compare the "image" and the "template".
A TEE is a good area within a mobile device to house the matching engine and the associated processing required to authenticate the user. The environment is designed to protect the data and establish a buffer against the non-secure apps located in mobile OSes. This additional security may help to satisfy the security needs of service providers in addition to keeping the costs low for handset developers.
Enterprise, government, and cloud
The TEE can be used by governments, enterprises, and cloud service providers to enable the secure handling of confidential information on mobile devices and on server infrastructure. The TEE offers a level of protection against software attacks generated in the mobile OS and assists in the control of access rights. It achieves this by housing sensitive, 'trusted' applications that need to be isolated and protected from the mobile OS and any malware that may be present. Through utilizing the functionality and security levels offered by the TEE, governments and enterprises can be assured that employees using their own devices are doing so in a secure and trusted manner. Likewise, server-based TEEs help defend against internal and external attacks against backend infrastructure.
Secure modular programming
With the rise of software assets and reuse, modular programming is the most productive process for designing software architecture, decoupling functionality into small independent modules. As each module contains everything necessary to execute its desired functionality, the TEE allows the complete system to be organized with a high level of reliability and security, while preventing each module from being exposed to the vulnerabilities of the others.
In order for the modules to communicate and share data, TEE provide means to securely have payloads sent/received between the modules, using mechanisms such as objects serialization, in conjunction with proxies.
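As a rough illustration of this pattern, the hypothetical Python sketch below shows two independent modules exchanging a serialized payload through a proxy. The class names and the JSON wire format are invented for this example; a real TEE inter-module channel would additionally authenticate and encrypt the payloads.

```python
import json

# Illustrative sketch only: two independent modules exchange a serialized
# payload through a proxy. The class names and JSON wire format are
# invented; a real TEE channel would add authentication and encryption.

class KeyModule:
    """A small, self-contained module that only sees serialized payloads."""
    def receive(self, payload):
        request = json.loads(payload)
        if request["method"] == "derive_key":
            result = {"key_id": "key-for-" + request["args"]["label"]}
        else:
            result = {"error": "unknown method"}
        return json.dumps(result)

class ModuleProxy:
    """Proxy that serializes requests before crossing the module boundary."""
    def __init__(self, module):
        self._module = module

    def call(self, method, **kwargs):
        payload = json.dumps({"method": method, "args": kwargs})  # serialize
        return json.loads(self._module.receive(payload))          # deserialize

proxy = ModuleProxy(KeyModule())
print(proxy.call("derive_key", label="payments"))  # {'key_id': 'key-for-payments'}
```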
See Component-based software engineering
TEE Operating Systems
Hardware support
The following hardware technologies can be used to support TEE implementations:
AMD:
Platform Security Processor (PSP)
AMD Secure Encrypted Virtualization and the Secure Nested Paging extension
ARM:
TrustZone
IBM:
IBM Secure Service Container, formerly zACI, first introduced in IBM z13 generation machines (including all LinuxONE machines) in driver level 27.
IBM Secure Execution, introduced in IBM z15 and LinuxONE III generation machines on April 14, 2020.
Intel:
Trusted Execution Technology
Software Guard Extensions (SGX)
"Silent Lake" (available on Atom processors)
RISC-V:
MultiZone™ Security Trusted Execution Environment
Keystone Customizable TEE Framework
Penglai Scalable TEE for RISC-V
See also
Open Mobile Terminal Platform
Trusted Computing Group
FIDO Alliance
Java Card
Intel Management Engine
Intel LaGrande
Software Guard Extensions
AMD Platform Security Processor
Trusted Platform Module
ARM TrustZone
NFC Secure Element
Next-Generation Secure Computing Base
References
Security
Security technology
Mobile security
Mobile software
Standards
|
53900562
|
https://en.wikipedia.org/wiki/List%20of%20the%20most%20common%20passwords
|
List of the most common passwords
|
This is a list of the most common passwords, discovered in various data breaches. Common passwords generally are not recommended on account of low password strength.
List
NordPass
NordPass conducted research into the most breached passwords in 2021. The company gathered the top 200 worst passwords of the year from a database of 275,699,516 passwords.
SplashData
The Worst Passwords List is an annual list of the 25 most common passwords from each year as produced by internet security firm SplashData. Since 2011, the firm has published the list based on data examined from millions of passwords leaked in data breaches, mostly in North America and Western Europe, over each year. In the 2016 edition, the 25 most common passwords made up more than 10% of the surveyed passwords, with the most common password of 2016, "123456", making up 4%.
Keeper
Password manager Keeper compiled its own list of the 25 most common passwords in 2016, from 25 million passwords leaked in data breaches that year.
National Cyber Security Centre
The National Cyber Security Centre (NCSC) compiled its own list of the 20 most common passwords in 2019, from 100 million passwords leaked in data breaches that year.
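As a practical illustration of why such lists are published, a signup form can reject any password that appears on one. The following minimal Python sketch uses a tiny hard-coded sample of entries drawn from the lists above; a real deployment would load a much larger published list from a file.

```python
# A minimal sketch of rejecting passwords found on a common-password list.
# The entries below are a tiny sample drawn from published lists; a real
# deployment would load a much larger published list from a file.

COMMON_PASSWORDS = {"123456", "password", "qwerty", "111111", "abc123"}

def is_acceptable(password):
    """Reject any password appearing on the list (case-insensitive)."""
    return password.lower() not in COMMON_PASSWORDS

print(is_acceptable("123456"))                        # False: on the list
print(is_acceptable("correct horse battery staple"))  # True
```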
See also
Password cracking
10,000 most common passwords
Notes
References
External links
Skullsecurity list of breached password collections
Security
Password authentication
|
56393307
|
https://en.wikipedia.org/wiki/Amber%20Road%2C%20Inc.
|
Amber Road, Inc.
|
Amber Road, Inc. was a US-based software company specializing in Global Trade Management (GTM) solutions. It was acquired by E2open for $425 million in 2019. Amber Road was headquartered in East Rutherford, New Jersey, with its European headquarters in Munich, Germany. The company had offices in Tysons Corner, VA and Raleigh, NC in the United States, Hong Kong, Shanghai and Shenzhen (China), as well as Bangalore (India).
History
The company was founded in 1990 by James and John Preuninger under the name Management Dynamics in the USA. In 2011, the company was renamed Amber Road.
Its Europe, Middle East and Africa headquarters opened in Munich in 2013, and the company was listed on the New York Stock Exchange in 2014.
The company acquired the following companies after its founding: Bridgepoint (2005), NextLinx (2005), EasyCargo (2013), Global Trade Academy (2015) and ecVision (2015).
Product
Amber Road developed Global Trade Management solutions, which were offered as Software as a Service. The task of Global Trade Management software is to ensure transparency and automation in foreign trade and supply chain management.
See also
International trade
Supply chain management
Software as a service
References
External links
Software companies of the United States
Companies formerly listed on the New York Stock Exchange
1990 establishments in New Jersey
American companies established in 1990
Companies based in Bergen County, New Jersey
East Rutherford, New Jersey
2014 initial public offerings
2019 mergers and acquisitions
1990 establishments in the United States
Software companies established in 1990
Companies established in 1990
|
56968115
|
https://en.wikipedia.org/wiki/Phyllis%20Schneck
|
Phyllis Schneck
|
Dr. Phyllis Schneck is an American executive and cybersecurity professional. In May 2017, she became a managing director at Promontory Financial Group. Schneck served in the Obama administration as Deputy Under Secretary for Cybersecurity and Communications for the National Protection and Programs Directorate (NPPD) at the Department of Homeland Security.
Career
She holds a Ph.D. in Computer Science from Georgia Tech.
She was chairman of the Board of Directors of the National Cyber-Forensics and Training Alliance, a partnership between corporations, government and law enforcement for cyber analysis to combat international cybercrime. Schneck also served as the Vice Chairman of the National Institute of Standards and Technology Information Security and Privacy Advisory Board. Schneck spent eight years as chairman of the National Board of Directors of the FBI's InfraGard program, growing the organization from 2,000 to over 30,000 members nationwide.
Schneck served as vice president of research integration for Secure Computing Corporation, where she conceived and built the early intelligence practice into a data-as-a-service program. She also worked as vice president of corporate strategy at SecureWorks, and was founder and CEO of Avalon Communications, which was acquired by SecureWorks.
Prior to joining government in 2013, Schneck worked in the private sector, at McAfee. She testified before Congress on cybersecurity technology and policy.
Government position
From 2013 to 2017, Schneck served in the Obama Administration as the Deputy Under Secretary for Cybersecurity and Communications for the National Protection and Programs Directorate (NPPD). She was the chief cybersecurity official for the Department of Homeland Security (DHS) and supported its mission of strengthening the security and resilience of the nation's critical infrastructure.
Awards
Loyola University Maryland David D. Lattanze Center 2012 Executive of the Year
Information Security Magazine's Top 25 Women Leaders in Information Security
Johns Hopkins University Woodrow Wilson Award 2016
References
Living people
Obama administration personnel
Johns Hopkins University alumni
Georgia Tech people
Year of birth missing (living people)
|
695391
|
https://en.wikipedia.org/wiki/Commercial%20software
|
Commercial software
|
Commercial software, rarely also called payware, is computer software that is produced for sale or that serves commercial purposes. Commercial software can be proprietary software or free and open-source software.
Background and challenge
While software creation by programming is a time- and labor-intensive process, comparable to the creation of physical goods, the reproduction, duplication, and sharing of software as a digital good is disproportionately easy in comparison. No special machines or expensive additional resources are required, unlike for almost all physical goods and products. Once a piece of software is created, it can be copied in infinite numbers, for almost zero cost, by anyone. This made commercialization of software for the mass market impossible at the beginning of the computing era. Unlike hardware, software was not seen as a tradeable, commercializable good. Software was plainly shared for free (hacker culture) or distributed bundled with sold hardware, as part of the service to make the hardware usable for the customer.
Due to changes in the computer industry in the 1970s and 1980s, software slowly became a commercial good in itself. In 1969, IBM, under threat of antitrust litigation, led the industry change by starting to charge separately for (mainframe) software and services, and by ceasing to supply source code. In 1983, binary software became copyrightable by the Apple v. Franklin court decision; before that, only source code was copyrightable. Additionally, the growing availability of millions of computers based on the same microprocessor architecture created, for the first time, a compatible mass market ready for binary retail software commercialization.
Commercialization models for software
Common business wisdom holds that software, as a digital good, can be commercialized for the mass market most successfully as a proprietary good, that is, when the free sharing and copying of software by users ("software piracy") can be prevented. Control over this can be achieved by copyright which, along with contract law, software patents, and trade secrets, provides a legal basis for the software's owner, the intellectual property (IP) holder, to establish exclusive rights on distribution and therefore commercialization. Technical mechanisms that try to enforce the exclusive distribution right include copy-protection mechanisms, often bound to the physical media (floppy disc, CD, etc.) of the software, and digital rights management (DRM) mechanisms, which try to achieve the same in the media-less digital distribution of software.
When software is sold on the market in binary form only ("closed source"), exclusive control over software derivatives and further development is additionally achieved. Reconstructing complex software from its binary form back to source code by reverse engineering, which is required for unauthorized third-party adaptation and development, is a burdensome and often impossible process. This creates another commercialization opportunity: selling software in source code form for a higher price, e.g. by licensing a game engine's source code to another game developer for flexible use and adaptation.
This business model, also called "research and development model", "IP-rent model" or "proprietary software business model", was described by Craig Mundie of Microsoft in 2001 as follows: "[C]ompanies and investors need to focus on business models that can be sustainable over the long term in the real world economy…. We emphatically remain committed to a model that protects the intellectual property rights in software and ensures the continued vitality of an independent software sector that generates revenue and will sustain ongoing research and development. This research and development model … based on the importance of intellectual property rights [was the] foundation in law that made it possible for companies to raise capital, take risks, focus on the long term, and create sustainable business models…. [A]n economic model that protects intellectual property and a business model that recoups research and development costs have shown repeatedly that they can create impressive economic benefits and distribute them very broadly."
Free and open-source software commercialization
While less common than commercial proprietary software, free and open-source software (FOSS) may also be commercial software. But unlike the proprietary model, commercialization is achieved in the FOSS commercialization model without limiting users in their capability to share, reuse, and duplicate the software freely. This is a fact that the Free Software Foundation emphasizes, and it is the basis of the Open Source Initiative.
Under a FOSS business model, software vendors may charge a fee for distribution and offer pay support and software customization services.
Proprietary software uses a different business model, where a customer of the proprietary software pays a fee for a license to use the software.
This license may grant the customer the ability to configure some or no parts of the software themselves.
Often some level of support is included in the purchase of proprietary software, but additional support services (especially for enterprise applications) are usually available for an additional fee.
Some proprietary software vendors will also customize software for a fee.
Free software is often available at no cost and can result in permanently lower costs compared to proprietary software.
With free software, businesses can fit software to their specific needs by changing the software themselves or by hiring programmers to modify it for them.
Free software often has no warranty, and more importantly, generally does not assign legal liability to anyone.
However, warranties are permitted between any two parties upon the condition of the software and its usage.
Such an agreement is made separately from the free software license.
Reception and impact
All or parts of software packages and services that support commerce are increasingly made available as FOSS software.
This includes products from Red Hat, Apple Inc., Sun Microsystems, Google, and Microsoft.
Microsoft uses the term "commercial software" to describe its business model, but its software is also mostly proprietary.
A report by Standish Group says that adoption of open source has caused a drop in revenue to the proprietary software industry by about $60 billion per year.
See also
Commercial off-the-shelf
Retail software
Proprietary software
Gratis versus Libre
Shareware
References
Software distribution
|
217392
|
https://en.wikipedia.org/wiki/Executable
|
Executable
|
In computing, executable code, an executable file, or an executable program, sometimes simply referred to as an executable or binary, causes a computer "to perform indicated tasks according to encoded instructions", as opposed to a data file that must be interpreted (parsed) by a program to be meaningful.
The exact interpretation depends upon the use. "Instructions" is traditionally taken to mean machine code instructions for a physical CPU. In some contexts, a file containing scripting instructions (such as bytecode) may also be considered executable.
Generation of executable files
Executable files can be hand-coded in machine language, although it is far more convenient to develop software as source code in a high-level language that can be easily understood by humans. In some cases, source code might be specified in assembly language instead, which remains human-readable while being closely associated with machine code instructions.
The high-level language is compiled into either an executable machine code file or a non-executable machine code file, an object file of some sort; the equivalent process on assembly language source code is called assembly. Several object files are linked to create the executable. Object files, whether executable or not, are typically stored in a container format, such as Executable and Linkable Format (ELF) or Portable Executable (PE), which is operating-system-specific. This gives structure to the generated machine code, for example dividing it into sections such as .text (executable code), .data (initialized global and static variables), and .rodata (read-only data, such as constants and strings).
Executable files typically also include a runtime system, which implements runtime language features (such as task scheduling, exception handling, calling static constructors and destructors, etc.) and interactions with the operating system, notably passing arguments, environment, and returning an exit status, together with other startup and shutdown features such as releasing resources like file handles. For C, this is done by linking in the crt0 object, which contains the actual entry point and does setup and shutdown by calling the runtime library.
Executable files thus normally contain significant additional machine code beyond that directly generated from the specific source code. In some cases, it is desirable to omit this, for example for embedded systems development, or simply to understand how compilation, linking, and loading work. In C, this can be done by omitting the usual runtime, and instead explicitly specifying a linker script, which generates the entry point and handles startup and shutdown, such as calling main to start and returning exit status to the kernel at the end.
Execution
In order to be executed by the system (such as an operating system, firmware, or boot loader), an executable file must conform to the system's application binary interface (ABI). In simple interfaces, a file is executed by loading it into memory and jumping to the start of the address space and executing from there. In more complicated interfaces, executable files have additional metadata specifying a separate entry point. For example, in ELF, the entry point is specified in the header's e_entry field, which specifies the (virtual) memory address at which to start execution. In the GCC (GNU Compiler Collection) this field is set by the linker based on the _start symbol.
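To make the e_entry field concrete, here is a small Python sketch (not any particular tool's implementation) that reads the entry-point address from a 64-bit little-endian ELF header; the offsets follow the public ELF-64 layout, and error handling is deliberately minimal.

```python
import struct

# A sketch of reading e_entry from a 64-bit little-endian ELF header.
# Offsets follow the public ELF-64 layout: after the 16-byte e_ident,
# e_type (2), e_machine (2) and e_version (4) come first, so e_entry is
# the 8-byte field at offset 24. Error handling is deliberately minimal.

def elf64_entry_point(path):
    with open(path, "rb") as f:
        header = f.read(64)  # the ELF-64 file header is 64 bytes long
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    (entry,) = struct.unpack_from("<Q", header, 24)
    return entry

print(hex(elf64_entry_point("/bin/ls")))
```

On a position-independent executable, the printed value is an offset from the load base rather than an absolute address, since the loader relocates the image at run time.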
See also
Comparison of executable file formats
Executable compression
Executable text
References
External links
EXE File Format at What Is
Computer file systems
Programming language implementation
|
16474580
|
https://en.wikipedia.org/wiki/Stanley%2C%20Inc.
|
Stanley, Inc.
|
Stanley, Inc. (NYSE:SXE), acquired by CGI Group in 2010, was an information technology company based in Arlington, Virginia. Founded in 1966 as a small, entrepreneurial consulting company, it evolved into an employee-owned corporation with almost 5,000 full-time employees before being acquired by CGI Group Inc.
Stanley made its initial public offering on the New York Stock Exchange in October 2006, selling 6.3 million shares at $13.00 per share and raising $81.9 million. A majority of the stock was owned by officers, directors, and employees (the latter through an employee stock ownership plan).
The company’s largest customer was the U.S. Army. It also held contracts with the U.S. Marine Corps, U.S. Navy, Department of State, and Department of Homeland Security. It operated facilities for the production of United States passports and for mailroom work and data entry for applications for U.S. visa and citizenship.
Union Matter
The company controversially reduced wages when it took over the management of two facilities that process immigration documents in St. Albans, Vermont, and Laguna Niguel, California, in December 2007. The United Electrical workers union (UE) became involved in an effort to create a union to protect low-paid employees who process immigration records. As it was about to assume control, Stanley announced that it would be changing job classifications at the facilities to match the classifications specified in its contract, resulting in a pay decrease of about 12 percent. This prompted Sen. Bernie Sanders of Vermont to call on the Labor Department to investigate what he charged was a violation of the Service Contract Act.
Department of State Investigation
In March 2008, the U.S. Department of State, which is responsible for issuing U.S. passports, stated that two employees of a Stanley Associates subcontractor had been fired for improperly accessing the passport application file of (then) presidential candidate Barack Obama. The company published a press release on March 21, 2008. In each case, Stanley took immediate disciplinary action, and the employees were terminated the day the unauthorized search occurred. Stanley also stated that its general policy and practice is to cooperate fully with any potential government investigation.
Additional information on the breach of passport information has yet to be disclosed to the public.
Corporate Mergers
In February 2006, Stanley acquired Morgan Research Corporation. At the time of acquisition, MORGAN had annual revenues of approximately $70 million and 480 employees in Alabama, Florida, Oklahoma, and Texas, supporting customers including the U.S. Army Aviation and Missile Command (AMCOM), U.S. Army PEO STRI (Program Executive Officer for Simulation, Training, & Instrumentation), and NASA.
On June 10, 2008, Stanley announced that it was acquiring Oberon Associates, a fellow defense contractor also based in Virginia. In April 2007, Stanley had acquired Lawton, Okla.-based Techrizon LLC, a provider of software, training, simulation, and information security solutions.
On May 7, 2010, CGI Group Inc., a leading provider of information technology and business processing services, announced that it had entered into a definitive merger agreement to acquire Stanley. The merger was finalized in August 2010.
Corporate Recognition
Washington Technology listed Stanley at the 50th position in its 2007 list of the top 100 U.S. federal government prime contractors. In the 2008 listing, Stanley rose to the 48th position. In 2009, Stanley rose to the 45th position.
Fortune Magazine included Stanley in its 2007, 2008, and 2009 lists of the "100 Best Companies to Work For". The methodology that Fortune Magazine follows in determining the companies listed in the ranking includes an independent survey of a random sampling of company employees.
Stanley was ranked in the 10th position in the large companies category on the 2009 Best Places to Work in Oklahoma.
The 2009 Washington Business Journal Best Places to Work ranked Stanley in the 17th position in the large companies category.
References
External links
CGI To Acquire Stanley
Business Analytics Solutions
Information technology consulting firms of the United States
Companies based in Alexandria, Virginia
|
39974798
|
https://en.wikipedia.org/wiki/Paul%20Wheaton
|
Paul Wheaton
|
Paul Wheaton is an American permaculture author, master gardener, software engineer, and disciple of the natural agriculturist Sepp Holzer. He is known for writing the book "Building a Better World in Your Backyard", for founding Permies, the largest website devoted to permaculture, and for creating and publishing articles, videos, and podcasts on the subject of permaculture.
Wheaton is also the founder of Coderanch, formerly called Javaranch, an online community for Java programmers. He received three Jolt Awards from Dr. Dobb's Journal for his work related to Javaranch. As a software engineer, he has worked on the ground system for the satellite that took pictures for Google Earth and DigitalGlobe.
Wheaton has participated in several documentaries, TED Talk shows, and conferences, on topics related to permaculture, energy, and software engineering.
Early life
Born in Moscow, Idaho, Wheaton was raised in Eastern Oregon and Missoula, Montana.
He began his career as a software engineer and continued to work for several private companies with software and programming. In the early 1990s, Wheaton developed Bananacom, a terminal emulator, which was used by bulletin board system operators in the United States. At one time, Wheaton had hired 14 programmers from Missoula, Montana to work on the Bananacom project. The product gained popularity among its users due to its usability.
Later in 2000, Wheaton worked on the ground system software for the spacecraft that took pictures for Google Earth and DigitalGlobe.
Projects
Permies
In 2000, Wheaton shared his views on lawn care on his website richsoil. Later that year, he launched the website permies, a place for people to discuss lawn care and permaculture. By 2012, permies had become the largest online community dedicated to permaculture. When Wheaton devised his own design for natural homes, the first of them was documented on Permies. The site has attracted notable personalities such as Geoff Lawton and Toby Hemenway, rocket mass heater developers Ernie and Erica Wisner, medical herbalist Michael Pilarski, and others who explore a range of eclectic permaculture topics.
Coderanch
In 1997, Kathy Sierra created javaranch.com, which she transferred to Wheaton in January 2000. In 2009, Javaranch was extended to a new domain, coderanch.com, with a forum. As of May 2020, Coderanch had more than 3 million posts created by over 332,000 registered members.
Publications
Books and articles
Paul Wheaton authored the book "Building a Better World in Your Backyard, instead of being angry at the bad guys", an iteration of his philosophy of choosing to "build good things rather than be angry at bad guys". He is also the author of many articles, published both on his own website and in other major publications. According to Wheaton, the book describes a collection of things people can do individually to make a significant positive global impact.
An article by Wheaton on Hügelkultur appeared in LifeHacker in 2011, which suggested the use of wood to form the backbone of a Hügelkultur bed. His article on aphids and ants was published in Countryside.
Podcasts
In 2011, Wheaton launched a monthly podcast titled "Paul Wheaton Permaculture Podcast". The podcasts mainly consisted of interviews with notable figures in permaculture and educational discussions on various topics of permaculture. In 2019, his Permaculture Podcast ranked number 2 in Feedspot's “Top 15 Permaculture Audio Podcasts & Radio You Must Subscribe and Listen to in 2019."
Wheaton has also participated in other podcasts such as thesurvivalpodcast.com, in which he discussed various topics including Sepp Holzer, wofati, permaculture profitability, rocket mass heaters, light bulbs, and irrigation, and on The Pagan Homesteader podcast where they discussed Hügelkultur and Wofati. Wheaton has also expressed his positions on energy saving methods in a podcast hosted by Abundant Edge.
Other media
In 2008, Wheaton created his YouTube channel called paulwheaton12, which had over 86,000 subscribers and 24 million views as of June 2020. In his videos, Wheaton discusses various topics related to permaculture, including organic horticulture, rocket mass heaters, natural building, alternative energy, homesteading, frugality, raising chickens, wildcrafting, aquaculture, paddock shift systems, and colony collapse disorder. His videos also include interviews with Sepp Holzer and other notable people in the field of permaculture. He further presented his findings during his TEDx Talk, "REALLY Saving Energy: Paul Wheaton at TEDxWhitefish".
Other projects
In 2013, Wheaton produced a documentary about wood-burning stoves, highlighting sustainable ways to heat, which consisted of four segments called "Fire Science", "Sneaky Heat", "Boom Squish", and "Hot Rocket". The documentary was distributed on DVDs, in addition to online streaming and downloadable videos. According to Wheaton, his design of wood-burning stove uses 1/10 of the wood of conventional wood-burning stoves. He also claimed that his stoves produce only 1/1000 of the smoke of other stoves.
In 2014, Paul Wheaton crowd-funded and produced a deck of Permaculture Playing Cards, where each card contained information about a different permaculture technique or notable people of permaculture.
Later in the same year, Wheaton produced another documentary titled "World Domination Gardening", which featured a three-day workshop on Hügelkultur, earthworks, ponds, and swales. The documentary was distributed in sets of three DVDs, called "Sealing a Pond Without a Liner", "Ditches and Swales", and "Hugelkultur and Terracing".
In 2015, Wheaton launched a Kickstarter project to make a documentary of rocket mass heaters. The documentary has been distributed in DVDs, streaming media, downloadable video files, and PDF plans.
In 2017, Wheaton hosted a Permaculture Design Course and an "Appropriate Technology Course", which consisted of a 14-day workshop. The project raised the pledged funds within 22 hours of its Kickstarter campaign being launched.
Later in 2018, Wheaton produced a documentary called "Rocket Ovens", which he described as an efficient way to cook and bake food while producing less than 1% of the carbon footprint of an electric oven. Distributed on DVDs and as streaming videos, the documentary featured environmentally friendly ways of cooking, baking, and dehydrating food with less wood than other forms of wood-fired ovens.
Experiments, demonstrations, and opinions
Wheaton set up an experiment to demonstrate that compact fluorescent lamps (CFL bulbs) are not better than incandescent light bulbs. He used a combination of warm clothing, incandescent lights that produce heat as well as light, and various heating devices to keep warm while his 700-square-foot house in Montana was set to 40 degrees Fahrenheit all winter. With this demonstration, Wheaton concluded that by heating the person instead of the air, a person can remain comfortable and save hundreds of dollars in energy costs.
Wheaton introduced a lifestyle model called the Wheaton Eco Scale in 2010, in which he categorized different lifestyles into 10 levels, with level 0 producing the highest carbon footprint and level 10 the lowest.
In 2011, Wheaton demonstrated how hand washing a standard-sized load of dishes can use only around a gallon of water. By comparison, he noted that a standard dishwasher uses 15 gallons of water per load, while an energy-efficient dishwasher uses around 9 gallons per load.
In 2014, Wheaton produced a documentary on mason bees, which was featured in TreeHugger. In the documentary, Wheaton compared mason bees with honey bees and discussed what humans can do to help, including keeping bees in refrigerators.
Wheaton has also performed tests on heating the person rather than the whole house, aiming to save 90% on a heating bill while staying warm.
See also
Ralph Borsodi
Food Not Lawns
David Holmgren
Hugelkultur
Bill Mollison
Natural farming
Scott and Helen Nearing
Mike Oehler
Sepp Holzer
Ran Prieur
References
External links
Permies: permaculture website and forum
Wheaton's blog
Paul Wheaton's page for events at his property, "Wheaton Labs"
"Building a Better World in Your Backyard, instead of being angry at the bad guys" book on Amazon
Living people
American gardeners
Organic gardeners
Permaculturalists
Sustainability advocates
Writers from Missoula, Montana
American software engineers
Year of birth missing (living people)
|
33082159
|
https://en.wikipedia.org/wiki/Enterprise%20release%20management
|
Enterprise release management
|
Enterprise release management (ERM) is a multi-disciplinary IT governance framework for managing software delivery and software change across multiple departments in a large organization. ERM builds upon release management and combines it with other aspects of IT management including Business-IT alignment, IT service management, IT Governance, and Configuration management. ERM places considerable emphasis on project management and IT portfolio management supporting the orchestration of people, process, and technology across multiple departments and application development teams to deliver large, highly integrated software changes within the context of an IT portfolio.
Managing Multiple Releases
Just as traditional release management packages changes together for execution and delivery, so an enterprise release is a mechanism for integrating and managing multiple, independent programs and projects that impact the enterprise. ERM takes an end-to-end life cycle perspective addressing the (strategic) planning, execution and delivery of an organization’s entire change portfolio, even though in reality it is often confined to the latter integration, test and implementation stages of delivery.
An enterprise release consolidates and integrates the deliverables of multiple projects (or more generally, change initiatives) that have to be time-boxed or synchronised so they can be tested and released as a whole. By stressing the need for a cohesive release architecture, ERM aims to supplement portfolio prioritization with greater design governance that serves to improve productivity and reduce change disruption by executing related features together.
While traditional release management addresses fine-grained changes and provides technical support for the project, ERM supports enterprise portfolio/project management (PPM) and brings a pragmatic architectural and execution perspective to the selection and planning to an enterprise release.
Influence of Continuous Delivery and DevOps
Organizations practicing Enterprise Release Management often support software projects across a wide spectrum of software development methodologies. An IT portfolio often incorporates more traditional Waterfall model projects alongside more iterative projects using Agile software development. With the increasing popularity of agile development, a new approach to software releases known as Continuous delivery is starting to influence how software transitions from development to release. With continuous delivery, transitions from development to release are continuously automated: changes are committed to code repositories, builds and tests are run immediately in a continuous integration system, and changes can be released to production without the ceremony that accompanies the traditional Software release life cycle.
While continuous delivery and agile software development provide for faster execution on a project level, the accelerated pace made possible by continuous delivery creates challenges for less-agile components in an IT portfolio. ERM provides organizations with a comprehensive view of software change across a large collection of related systems allowing project managers and IT managers to coordinate projects that have adopted more continuous approaches to software delivery with projects that require a slower, more sequential approach for application development.
Enterprise Release Management provides enterprises with a model that can adapt the localized effects of both DevOps and Continuous delivery to the larger IT department.
References
Taborda, L.J. (2011). Enterprise Release Management: Agile Delivery of a Strategic Change Portfolio, Artech House.
Change management
|
38340
|
https://en.wikipedia.org/wiki/Frame%20Relay
|
Frame Relay
|
Frame Relay is a standardized wide area network (WAN) technology that specifies the physical and data link layers of digital telecommunications channels using a packet switching methodology. Originally designed for transport across Integrated Services Digital Network (ISDN) infrastructure, it may be used today in the context of many other network interfaces.
Network providers commonly implement Frame Relay for voice (VoFR) and data as an encapsulation technique used between local area networks (LANs) over a WAN. Each end-user gets a private line (or leased line) to a Frame Relay node. The Frame Relay network handles the transmission over a frequently changing path that is transparent to end users and to the widely used WAN protocols they run. It is less expensive than leased lines, and that is one reason for its popularity. The extreme simplicity of configuring user equipment in a Frame Relay network offers another reason for Frame Relay's popularity.
With the advent of Ethernet over fiber optics, MPLS, VPN and dedicated broadband services such as cable modem and DSL, Frame Relay has become less popular in recent years.
Technical description
The designers of Frame Relay aimed to provide a telecommunication service for cost-efficient data transmission for intermittent traffic between local area networks (LANs) and between end-points in a wide area network (WAN). Frame Relay puts data in variable-size units called "frames" and leaves any necessary error-correction (such as retransmission of data) up to the end-points. This speeds up overall data transmission. For most services, the network provides a permanent virtual circuit (PVC), which means that the customer sees a continuous, dedicated connection without having to pay for a full-time leased line, while the service-provider figures out the route each frame travels to its destination and can charge based on usage.
An enterprise can select a level of service quality, prioritizing some frames and making others less important. Frame Relay can run on fractional T-1 or full T-carrier system carriers (outside the Americas, E1 or full E-carrier). Frame Relay complements and provides a mid-range service between basic rate ISDN, which offers bandwidth at 128 kbit/s, and Asynchronous Transfer Mode (ATM), which operates in somewhat similar fashion to Frame Relay but at speeds from 155.520 Mbit/s to 622.080 Mbit/s.
Frame Relay has its technical base in the older X.25 packet-switching technology, designed for transmitting data on analog voice lines. Unlike X.25, whose designers expected analog signals with a relatively high chance of transmission errors, Frame Relay is a fast packet switching technology operating over links with a low chance of transmission errors (usually practically lossless like PDH), which means that the protocol does not attempt to correct errors. When a Frame Relay network detects an error in a frame, it simply drops that frame. The end points have the responsibility for detecting and retransmitting dropped frames. (However, digital networks offer an incidence of error extraordinarily small relative to that of analog networks.)
Frame Relay often serves to connect local area networks (LANs) with major backbones, as well as on public wide-area networks (WANs) and also in private network environments with leased lines over T-1 lines. It requires a dedicated connection during the transmission period. Frame Relay does not provide an ideal path for voice or video transmission, both of which require a steady flow of transmissions. However, under certain circumstances, voice and video transmission do use Frame Relay.
Frame Relay originated as an extension of integrated services digital network (ISDN). Its designers aimed to enable a packet-switched network to transport over circuit-switched technology. The technology has become a stand-alone and cost-effective means of creating a WAN.
Frame Relay switches create virtual circuits to connect remote LANs to a WAN. The Frame Relay network exists between a LAN border device, usually a router, and the carrier switch. The technology used by the carrier to transport data between the switches is variable and may differ among carriers (i.e., to function, a practical Frame Relay implementation need not rely solely on its own transportation mechanism).
The sophistication of the technology requires a thorough understanding of the terms used to describe how Frame Relay works. Without a firm understanding of Frame Relay, it is difficult to troubleshoot its performance.
The Frame Relay frame structure essentially mirrors that defined for LAP-D. Traffic analysis can distinguish the Frame Relay format from LAP-D by its lack of a control field.
Protocol data unit
Each Frame Relay protocol data unit (PDU) consists of the following fields:
Flag Field. The flag is used to perform high-level data link synchronization which indicates the beginning and end of the frame with the unique pattern 01111110. To ensure that the 01111110 pattern does not appear somewhere inside the frame, bit stuffing and destuffing procedures are used.
Address Field. Each address field may occupy either octet 2 to 3, octet 2 to 4, or octet 2 to 5, depending on the range of the address in use. A two-octet address field comprises the EA (address field extension) bits and the C/R (command/response) bit; a decoding sketch follows this list.
DLCI-Data Link Connection Identifier Bits. The DLCI serves to identify the virtual connection so that the receiving end knows which information connection a frame belongs to. Note that this DLCI has only local significance. A single physical channel can multiplex several different virtual connections.
FECN, BECN, DE bits. These bits report congestion:
FECN=Forward Explicit Congestion Notification bit
BECN=Backward Explicit Congestion Notification bit
DE=Discard Eligibility bit
Information Field. A system parameter defines the maximum number of data bytes that a host can pack into a frame. Hosts may negotiate the actual maximum frame length at call set-up time. The standard specifies the maximum information field size (supportable by any network) as at least 262 octets. Since end-to-end protocols typically operate on the basis of larger information units, Frame Relay recommends that the network support the maximum value of at least 1600 octets in order to avoid the need for segmentation and reassembling by end-users.
Frame Check Sequence (FCS) Field. Since one cannot completely ignore the bit error-rate of the medium, each switching node needs to implement error detection to avoid wasting bandwidth due to the transmission of erred frames. The error detection mechanism used in Frame Relay uses the cyclic redundancy check (CRC) as its basis.
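As referenced in the address field description above, the following Python sketch decodes a two-octet address field into the DLCI and the individual control bits. The sample octets are invented, and real tooling would of course also validate the frame's FCS before trusting the contents.

```python
# Decodes the two-octet Frame Relay address field described above into
# the 10-bit DLCI and the C/R, FECN, BECN, DE and EA bits. The sample
# octets are invented; real tooling would also validate the frame's FCS.

def decode_address(octet1, octet2):
    dlci_high = (octet1 >> 2) & 0x3F   # upper 6 bits of the DLCI
    dlci_low = (octet2 >> 4) & 0x0F    # lower 4 bits of the DLCI
    return {
        "dlci": (dlci_high << 4) | dlci_low,
        "cr":   bool(octet1 & 0x02),   # command/response bit
        "fecn": bool(octet2 & 0x08),   # forward congestion notification
        "becn": bool(octet2 & 0x04),   # backward congestion notification
        "de":   bool(octet2 & 0x02),   # discard eligibility
        "ea":   bool(octet2 & 0x01),   # extension bit ends the address field
    }

# DLCI 100 with BECN set; the final EA bit marks the end of the field.
print(decode_address(0x18, 0x45))
```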
Congestion control
The Frame Relay network uses a simplified protocol at each switching node. It achieves simplicity by omitting link-by-link flow control. As a result, the offered load largely determines the performance of Frame Relay networks. When the offered load is high, due to bursts in some services, temporary overload at some Frame Relay nodes causes a collapse in network throughput. Therefore, Frame Relay networks require some effective mechanisms to control congestion.
Congestion control in Frame Relay networks includes the following elements:
Admission Control. This is the principal mechanism used in Frame Relay to guarantee resource requirements once a connection is accepted. It also serves generally to achieve high network performance. The network decides whether to accept a new connection request based on the relation of the requested traffic descriptor to the network's residual capacity. The traffic descriptor consists of a set of parameters communicated to the switching nodes at call set-up time or at service-subscription time, which characterize the connection's statistical properties. The traffic descriptor consists of three elements:
Committed Information Rate (CIR). The average rate (in bit/s) at which the network guarantees to transfer information units over a measurement interval T. This T interval is defined as: T = Bc/CIR.
Committed Burst Size (Bc). The maximum number of information units transmittable during the interval T.
Excess Burst Size (Be). The maximum number of uncommitted information units (in bits) that the network will attempt to carry during the interval.
Once the network has established a connection, the edge node of the Frame Relay network must monitor the connection's traffic flow to ensure that the actual usage of network resources does not exceed this specification. Frame Relay defines some restrictions on the user's information rate. It allows the network to enforce the end user's information rate and discard information when the subscribed access rate is exceeded.
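As a worked illustration of this arithmetic, with invented example values, the measurement interval follows directly from the definition T = Bc/CIR:

```python
# A worked illustration of the traffic-descriptor arithmetic above, with
# invented example values: the measurement interval is T = Bc/CIR, and up
# to Bc + Be bits may be attempted within each interval.

def measurement_interval(bc_bits, cir_bps):
    """T = Bc / CIR, in seconds."""
    return bc_bits / cir_bps

cir = 64_000  # committed information rate: 64 kbit/s
bc = 32_000   # committed burst size: 32 kbit
be = 16_000   # excess burst size: 16 kbit

t = measurement_interval(bc, cir)
print(f"T = {t:.2f} s")                         # T = 0.50 s
print(f"max attempted per T = {bc + be} bits")  # committed plus excess
```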
Explicit congestion notification is proposed as the congestion avoidance policy. It tries to keep the network operating at its desired equilibrium point so that a certain quality of service (QoS) for the network can be met. To do so, special congestion control bits have been incorporated into the address field of the Frame Relay: FECN and BECN. The basic idea is to avoid data accumulation inside the network.
FECN means forward explicit congestion notification. The FECN bit can be set to 1 to indicate that congestion was experienced in the direction of the frame transmission, so it informs the destination that congestion has occurred.
BECN means backwards explicit congestion notification. The BECN bit can be set to 1 to indicate that congestion was experienced in the network in the direction opposite of the frame transmission, so it informs the sender that congestion has occurred.
Origin
Frame Relay began as a stripped-down version of the X.25 protocol, freeing itself from the error-correcting burden most commonly associated with X.25. When Frame Relay detects an error, it simply drops the offending packet. Frame Relay uses the concept of shared access and relies on a technique referred to as "best-effort", whereby error correction is practically nonexistent and there is practically no guarantee of reliable data delivery. Frame Relay provides an industry-standard encapsulation, utilizing the strengths of a high-speed, packet-switched technology able to service multiple virtual circuits and protocols between connected devices, such as two routers.
Although Frame Relay became very popular in North America, it was never very popular in Europe. X.25 remained the primary standard until the wide availability of IP made packet switching almost obsolete.
It was sometimes used as a backbone for other services, such as X.25 or IP traffic. While Frame Relay was also used in the USA as a carrier for TCP/IP traffic, backbones for IP networks in Europe often used ATM or PoS, later replaced by Carrier Ethernet.
Relationship to X.25
X.25 was an important early WAN protocol, and is often considered to be the grandfather of Frame Relay as many of the underlying protocols and functions of X.25 are still in use today (with upgrades) by Frame Relay.
X.25 provides quality of service and error-free delivery, whereas Frame Relay was designed to relay data as quickly as possible over low error networks. Frame Relay eliminates a number of the higher-level procedures and fields used in X.25. Frame Relay was designed for use on links with error-rates far lower than available when X.25 was designed.
X.25 prepares and sends packets, while Frame Relay prepares and sends frames. X.25 packets contain several fields used for error checking and flow control, most of which are not used by Frame Relay. The frames in Frame Relay contain an expanded link layer address field that enables Frame Relay nodes to direct frames to their destinations with minimal processing. The elimination of functions and fields over X.25 allows Frame Relay to move data more quickly, but leaves more room for errors and larger delays should data need to be retransmitted.
X.25 packet switched networks typically allocated a fixed bandwidth through the network for each X.25 access, regardless of the current load. This resource allocation approach, while apt for applications that require guaranteed quality of service, is inefficient for applications that are highly dynamic in their load characteristics or which would benefit from a more dynamic resource allocation. Frame Relay networks can dynamically allocate bandwidth at both the physical and logical channel level.
Virtual circuits
As a WAN protocol, Frame Relay is most commonly implemented at Layer 2 (data link layer) of the Open Systems Interconnection (OSI) seven layer model. Two types of circuits exist: permanent virtual circuits (PVCs) which are used to form logical end-to-end links mapped over a physical network, and switched virtual circuits (SVCs). The latter are analogous to the circuit-switching concepts of the public switched telephone network (PSTN), the global phone network.
Local management interface
Initial proposals for Frame Relay were presented to the Consultative Committee on International Telephone and Telegraph (CCITT) in 1984. Lack of interoperability and standardization prevented any significant Frame Relay deployment until 1990, when Cisco, Digital Equipment Corporation (DEC), Northern Telecom, and StrataCom formed a consortium to focus on its development. They produced a protocol that provided additional capabilities for complex inter-networking environments. These Frame Relay extensions are referred to as the local management interface (LMI).
Datalink connection identifiers (DLCIs) are numbers that refer to paths through the Frame Relay network. They are only locally significant, which means that when device-A sends data to device-B it will most likely use a different DLCI than device-B would use to reply. Multiple virtual circuits can be active on the same physical end-points (performed by using subinterfaces).
The LMI global addressing extension gives Frame Relay data-link connection identifier (DLCI) values global rather than local significance. DLCI values become DTE addresses that are unique in the Frame Relay WAN. The global addressing extension adds functionality and manageability to Frame Relay internetworks. Individual network interfaces and the end nodes attached to them, for example, can be identified by using standard address-resolution and discovery techniques. In addition, the entire Frame Relay network appears to be a typical LAN to routers on its periphery.
LMI virtual circuit status messages provide communication and synchronization between Frame Relay DTE and DCE devices. These messages are used to periodically report on the status of PVCs, which prevents data from being sent into black holes (that is, over PVCs that no longer exist).
The LMI multicasting extension allows multicast groups to be assigned. Multicasting saves bandwidth by allowing routing updates and address-resolution messages to be sent only to specific groups of routers. The extension also transmits reports on the status of multicast groups in update messages.
Committed information rate
Frame Relay connections are often given a committed information rate (CIR) and an allowance of burstable bandwidth known as the extended information rate (EIR). The provider guarantees that the connection will always support the CIR, and sometimes the EIR should there be adequate bandwidth. Frames that are sent in excess of the CIR are marked as discard eligible (DE), which means they can be dropped should congestion occur within the Frame Relay network. Frames sent in excess of the EIR are dropped immediately.
Market reputation
Frame Relay aimed to make more efficient use of existing physical resources, permitting the over-provisioning of data services by telecommunications companies to their customers, as clients were unlikely to be using a data service 100 percent of the time. In more recent years, Frame Relay has acquired a bad reputation in some markets because of excessive bandwidth overbooking.
Telecommunications companies often sell Frame Relay to businesses looking for a cheaper alternative to dedicated lines; its use in different geographic areas depended greatly on governmental and telecommunication companies' policies. Some of the early companies to make Frame Relay products included StrataCom (later acquired by Cisco Systems) and Cascade Communications (later acquired by Ascend Communications and then by Lucent Technologies).
As of June 2007, AT&T was the largest Frame Relay service provider in the US, with local networks in 22 states, plus national and international networks.
FRF.12
When multiplexing packet data from different virtual circuits or flows, quality of service concerns often arise. This is because a frame from one virtual circuit may occupy the line for a long enough period of time to disrupt a service guarantee given to another virtual circuit. Fragmentation is a method for addressing this: an incoming long packet is broken up into a sequence of shorter packets, and enough information is added to reassemble the long frame at the far end. FRF.12 is a specification from the Frame Relay Forum which specifies how to perform fragmentation on Frame Relay traffic, primarily to support voice traffic. The FRF.12 specification describes the method of fragmenting Frame Relay frames into smaller frames.
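The following Python sketch illustrates the general fragmentation-and-reassembly idea rather than the actual FRF.12 header format: each fragment carries a sequence number and an end flag so the receiver can rebuild the original long frame.

```python
# An illustrative sketch of the fragmentation idea behind FRF.12, not the
# actual FRF.12 header format: each fragment carries a sequence number and
# an end flag so the far end can rebuild the original long frame.

def fragment(payload, max_size):
    pieces = [payload[i:i + max_size] for i in range(0, len(payload), max_size)]
    return [
        {"seq": n, "final": n == len(pieces) - 1, "data": piece}
        for n, piece in enumerate(pieces)
    ]

def reassemble(fragments):
    ordered = sorted(fragments, key=lambda f: f["seq"])
    assert ordered[-1]["final"], "last fragment is missing"
    return b"".join(f["data"] for f in ordered)

original = b"a long frame that would otherwise delay voice traffic"
frags = fragment(original, max_size=10)
print(len(frags), reassemble(frags) == original)  # 6 True
```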
See also
Multiprotocol label switching
List of device bit rates
References
External links
– Multiprotocol Interconnect over Frame Relay
– PPP in Frame Relay
– Multiprotocol Interconnect over Frame Relay
Broadband Forum - IP/MPLS Forum, MPLS Forum, ATM, and Frame Relay Forum Specifications
Cisco Frame Relay Tutorial
Frame Relay animation
CCITT I.233 ISDN Frame Mode Bearer Services
Network protocols
Link protocols
|
14135609
|
https://en.wikipedia.org/wiki/Rainlendar
|
Rainlendar
|
Rainlendar is a calendar program for Windows, Mac OS X and Linux. Versions prior to version 2 are licensed under the GNU GPL as free software, but subsequent versions are proprietary shareware.
Rainlendar is characterized by very small space and memory requirements, stability, and an easily customizable user interface (using skins). The calendar can be transparently placed on the desktop and can be managed using the Windows notification area. It has common functions such as a task list and a reminder alarm. Different event types can be represented with different symbols. Calendars can also be imported or synchronized using common file formats such as Outlook and iCal files (using a plugin).
In addition to the stand-alone calendar program, a Rainlendar-server is available for Windows and Linux, which can synchronize distributed Rainlendar applications. The program can also be used as a LiteStep plugin.
Rainlendar is available as of 2016 in about 60 languages (via language packs), and thousands of individual skin designs.
Alternatives to Rainlendar include Korganizer, the GNOME calendar (both for Linux) and Lightning.
External links
Last free software version
Skin archives
customize.org
Skinbase
WinCustomize
Calendaring software
Cross-platform software
Formerly free software
Software that uses wxWidgets
|
8049742
|
https://en.wikipedia.org/wiki/Public%20Knowledge%20Project
|
Public Knowledge Project
|
The Public Knowledge Project (PKP) is a non-profit research initiative that is focused on the importance of making the results of publicly funded research freely available through open access policies, and on developing strategies for making this possible including software solutions. It is a partnership between the Faculty of Education at the University of British Columbia, the Canadian Centre for Studies in Publishing at Simon Fraser University, the University of Pittsburgh, Ontario Council of University Libraries, the California Digital Library and the School of Education at Stanford University. It seeks to improve the scholarly and public quality of academic research through the development of innovative online environments.
History
The PKP was founded in 1998 by John Willinsky in the Department of Language and Literacy Education at the Faculty of Education at the University of British Columbia, in Vancouver, British Columbia, Canada, based on his research in education and publishing. Willinsky is a leading advocate of open access publishing, and has written extensively on the value of public research.
The PKP's initial focus was on increasing access to scholarly research and output beyond the traditional academic environments. This soon led to a related interest in scholarly communication and publishing, and especially on ways to make it more cost effective and less reliant on commercial enterprises and their generally restricted access models. PKP has developed free, open source software for the management, publishing, and indexing of journals, conferences, and monographs.
The PKP has collaborated with a wide range of partners interested in making research publicly available, including the Scholarly Publishing and Academic Resources Coalition (SPARC), the Brazilian Institute for Information Science and Technology (IBICT), and the International Network for the Availability of Scientific Publications (INASP).
Together with INASP, the PKP is working with publishers, librarians, and academics in the development of scholarly research portals in the developing world, including African Journals OnLine (AJOL) and Asia Journals Online.
As of 2008, the PKP has joined the Synergies Canada initiative, contributing their technical expertise to integrating work being done within a five-party consortium to create a decentralized national platform for social sciences and humanities research communication in Canada.
Growth 2005 to 2009
The Public Knowledge Project grew between 2005 and 2009. In 2006, there were approximately 400 journals using Open Journal Systems (OJS), 50 conferences using Open Conference Systems (OCS), 4 organizations using the Harvester, and 350 members registered on the online support forum. In 2009, over 5000 journals were using OJS, more than 500 conferences were using OCS, at least 10 organizations are using the Harvester, and there were over 2400 members on the support forum.
Since 2005, there have been major releases (version 2) of three software modules (OJS, OCS, Harvester), as well as the addition of Lemon8-XML, with a growing number of downloads being recorded every month for all of the software. From June 12, 2009 to December 21, 2009, there were 28,451 downloads of OJS, 6,329 of OCS, 1,255 of the Harvester, and 1,096 of Lemon8-XML. A new module, Open Monograph Press (a publication management system for monographs), has also been released.
The PKP also witnessed increased community programming contributions, including new plugins and features, such as the subscription module, allowing OJS to support full open access, delayed open access, or full subscription-only access. A growing number of translations have been contributed by community members, with Croatian, English, French, German, Italian, Japanese, Portuguese, Russian, Spanish, Turkish, and Vietnamese versions of OJS completed, and several others in production.
Growth from 2010
A German platform based on OJS is being developed by the Center for Digital Systems (CeDiS) at the Free University of Berlin and two other institutions. Funding by the German Research Foundation (DFG) initially runs from 2014 to 2016.
Growth from 2021
According to statistics collected by the PKP Beacon project, presented at the Open Publishing Fest under the title "Location of known journals using PKP's Open Journal Systems", OJS is currently used by at least 25,000 journals across the world. A daily updated map is available at the PKP site. PKP has also released the underlying data (updated yearly) as a dataset in Dataverse, together with the Beacon source code.
PKP conferences
The PKP holds a biennial conference. The First PKP Scholarly Publishing Conference was held in Vancouver, British Columbia, Canada on July 11–13, 2007, and the Second PKP Scholarly Publishing Conference was also held in Vancouver on July 8–10, 2009. The Third PKP Scholarly Publishing Conference was held in Berlin, Germany between 26 and 28 September 2011. The Fourth PKP Scholarly Publishing Conference was held in Mexico City, Mexico on August 19–21, 2013.
Notes on the presentations were recorded on a scholarly publishing blog for both the 2007 and 2009 conferences, and selected papers from the 2007 conference were published in a special issue of the online journal First Monday. Papers from the 2009 conference are available in the inaugural issue of the journal Scholarly and Research Communication.
The most recent conference was held on 20 November in Barcelona.
Software
The PKP's suite of software includes several separate but inter-related applications to demonstrate the feasibility of open access: Open Journal Systems, Open Preprint Systems, Open Monograph Press, Open Conference Systems (archived), and the PKP Open Archives Harvester (archived). PKP briefly experimented with a new application, Lemon8-XML, but has since opted to incorporate the XML functionality into the existing applications. All of the products are open source and freely available to anyone interested in using them. They share similar technical requirements (PHP, MySQL or PostgreSQL, Apache or Microsoft IIS 6, and a Linux, BSD, Solaris, Mac OS X, or Windows operating system) and need only a minimal level of technical expertise to get up and running. In addition, the software is well supported through a free online support forum, and a growing body of publications and extensive documentation is available on the project web site.
Increasingly, institutions are combining the PKP software, using OJS to publish their research results, OCS to organize their conferences and publish the proceedings, and the OAI Harvester to organize and make the metadata from these publications searchable. Together with other open source software applications such as DSpace (for creating institutional research repositories), institutions are creating their own infrastructure for sharing their research output.
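To make the harvesting step concrete, the sketch below issues a standard OAI-PMH ListRecords request of the kind the PKP Harvester sends to a journal's OAI endpoint and prints the Dublin Core titles it returns. This is a minimal illustration, not PKP code: the endpoint URL is a hypothetical placeholder, and only the verb and metadataPrefix parameters come from the OAI-PMH 2.0 protocol itself.

```python
# Minimal OAI-PMH harvest sketch; the endpoint URL below is hypothetical.
from urllib.parse import urlencode
from urllib.request import urlopen
import xml.etree.ElementTree as ET

BASE_URL = "https://journals.example.org/index.php/demo/oai"  # placeholder OJS endpoint
params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}  # standard OAI-PMH 2.0

with urlopen(f"{BASE_URL}?{urlencode(params)}") as resp:
    tree = ET.parse(resp)

# Dublin Core <dc:title> elements carry the article titles.
for title in tree.iter("{http://purl.org/dc/elements/1.1/}title"):
    print(title.text)
```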
Involved parties
It is a partnership among the following entities:
Simon Fraser University Library
The University of British Columbia Library
Canadian Centre for Studies in Publishing at Simon Fraser University
University of Pittsburgh
Ontario Council of University Libraries
Graduate School of Education at Stanford University
See also
List of open access journals
List of open-access projects
Open access
References
External links
Public Knowledge Project official site
Open Journal Systems
Open Conference Systems
PKP Open Archive Harvester
PKP Open Monograph Press
Installation manual
Academic journal online publishing platforms
Academic publishing
Open access projects
Publication management software
Esbjörn Segelod
Esbjörn Segelod (born 1951) is a Swedish organizational theorist and Professor in Business Administration at the Mälardalen University College, School of Business, Society and Engineering. He is best known for his work on "knowledge in the software development process" and "software innovativeness".
Life and work
Segelod obtained his PhD at the University of Gothenburg in 1986 with the thesis, entitled "Kalkylering och avvikelser : empiriska studier av stora projekt i kommuner och industri" (Capital expenditure planning and planning deviations).
After graduation, Segelod started his academic career at the University of Gothenburg. In the late 1980s, Segelod moved to Uppsala University, where he was affiliated with its Företagsekonomiska institutionen (Institute for Business Administration). In the late 1990s, Segelod was appointed Professor in Business Administration at the Mälardalen University College.
Segelod's research interests have broadened over the years, moving from "sophisticated methods of capital budgeting" in the 1980s and "corporate control of investments" in the 1990s to the "software development process" and "software innovativeness" in the new millennium.
Selected publications
Segelod, Esbjörn. Capital investment appraisal: towards a contingency theory. Chartwell Bratt, 1991.
Segelod, Esbjörn. Renewal through internal development. Avebury, 1995.
Articles, a selection:
Segelod, Esbjörn. "Capital budgeting in a fast-changing world." Long Range Planning 31.4 (1998): 529-541.
Segelod, Esbjörn. "A comparison of managers’ perceptions of short-termism in Sweden and the US." International Journal of Production Economics 63.3 (2000): 243-254.
Segelod, Esbjörn, and Gary Jordan. "The use and importance of external sources of knowledge in the software development process." R&D Management 34.3 (2004): 239-252.
Jordan, Gary, and Esbjörn Segelod. "Software innovativeness: outcomes on project performance, knowledge enhancement, and external linkages." R&D Management 36.2 (2006): 127-142.
Segelod, Esbjörn, and Leif Carlsson. "The emergence of uniform principles of cost accounting in Sweden 1900–36." Accounting, Business & Financial History 20.3 (2010): 327-363.
References
External links
Esbjörn Segelod, Mälardalen University
1951 births
Living people
Swedish business theorists
Swedish economists
University of Gothenburg alumni
Stockholm School of Economics faculty
People from Västra Götaland County
20th-century economists
5283 Pyrrhus
5283 Pyrrhus is a large Jupiter trojan from the Greek camp, approximately in diameter. It was discovered on 31 January 1989, by American astronomer Carolyn Shoemaker at the Palomar Observatory in California. The dark Jovian asteroid is among the 100 largest Jupiter trojans and has a rotation period of 7.3 hours. It was named after Achilles' son Neoptolemus (also called Pyrrhus) from Greek mythology.
Orbit and classification
Pyrrhus is a dark Jovian asteroid orbiting in the leading Greek camp at Jupiter's Lagrangian point, 60° ahead of the gas giant's orbit in a 1:1 resonance. It is also a non-family asteroid in the Jovian background population. It orbits the Sun at a distance of 4.4–6.0 AU once every 11 years and 10 months (4,335 days; semi-major axis of 5.2 AU). Its orbit has an eccentricity of 0.15 and an inclination of 17° with respect to the ecliptic. The body's observation arc begins with a precovery taken at Palomar in November 1951, more than 37 years prior to its official discovery observation.
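The quoted period is consistent with Kepler's third law, under which a heliocentric orbital period in years is the 3/2 power of the semi-major axis in astronomical units. A quick back-of-the-envelope check (my arithmetic, not from the source):

```python
# Kepler's third law for a heliocentric orbit: P [yr] = a [AU] ** 1.5
a = 5.2                               # semi-major axis in AU
period_years = a ** 1.5               # ~11.86 yr, i.e. about 11 years and 10 months
period_days = period_years * 365.25   # ~4,331 d, close to the quoted 4,335 d
print(f"{period_years:.2f} yr ~ {period_days:.0f} d")
```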
Naming
This minor planet was named by the discoverer from Greek mythology after Achilles' son Neoptolemus (see 2260 Neoptolemus), also known as Pyrrhus. The alternative name Pyrrhus derives from the red color of his hair. After his father's death, he was brought by Odysseus to the Trojan War, where he became the most ruthless of all the Greeks. He brutally killed King Priam and several other princes during the destruction of the city of Troy, and took away Hector's wife, Andromache, as his prize. The official naming citation was published by the Minor Planet Center on 4 June 1993 ().
Physical characteristics
Pyrrhus is an assumed C-type asteroid, while most larger Jupiter trojans are D-type asteroids. It has a typical V–I color index of 0.95 (also see table).
Rotation period
In September 1996, the first photometric observations of Pyrrhus were obtained by Italian astronomer Stefano Mottola using the Bochum 0.61-metre Telescope at ESO's La Silla Observatory in Chile. The lightcurve, however, showed very little variation. Follow-up observations by Mottola at the Calar Alto Observatory with its 1.2-meter telescope in March 2002 gave a rotation period of hours with a brightness amplitude of 0.11 magnitude ().
Diameter and albedo
According to the surveys carried out by the NEOWISE mission of NASA's Wide-field Infrared Survey Explorer, the Infrared Astronomical Satellite IRAS, and the Japanese Akari satellite, Pyrrhus measures between 48.36 and 69.93 kilometers in diameter and its surface has an albedo between 0.072 and 0.100. The Collaborative Asteroid Lightcurve Link derives an albedo of 0.0564 and a diameter of 64.26 kilometers based on an absolute magnitude of 9.7.
References
External links
Lightcurve Database Query (LCDB), at www.minorplanet.info
Dictionary of Minor Planet Names, Google books
Discovery Circumstances: Numbered Minor Planets (5001)-(10000) – Minor Planet Center
Asteroid 5283 Pyrrhus at the Small Bodies Data Ferret
005283
Discoveries by Carolyn S. Shoemaker
Minor planets named from Greek mythology
Named minor planets
19890131
Quantum readout
Quantum readout is a method to verify the authenticity of an object. The method is secure provided that the object cannot be copied or physically emulated.
Hands-off versus hands-on authentication of objects
When authenticating an object, one can distinguish two cases.
Hands-on authentication: The object is fully under the control of the verifier. The verifier can see if the object is of the correct type, size, weight, etc. For example, he can see the difference between a real tooth and a hologram representing the tooth.
Hands-off authentication: The verifier does not have full control. For example, he has line-of-sight but cannot touch the object.
In the hands-on scenario, physical unclonable functions (PUFs) of various types can serve as strong authentication tokens. Their physical unclonability, combined with the verifier's ability to detect spoofing, makes it exceedingly hard for an attacker to create an object that will pass as a PUF clone. However, hands-on authentication requires that the holder of the PUF relinquish control of it, which may not be acceptable, especially if there is a risk that the verifier is an impostor.
In the hands-off scenario, however, reliable authentication is much more difficult to achieve. It is prudent to assume that the challenge-response behavior of each PUF is known publicly. (An attacker may get hold of a genuine PUF for a while and perform a lot of measurements on it without being discovered.) This is a "worst case" assumption as customary in security research. It poses no problem in the hands-on case, but in the hands-off case it means that spoofing becomes a real danger. Imagine for instance authentication of an optical PUF through a glass fiber. The attacker does not have the PUF, but he knows everything about it. He receives the challenge (laser light) through the fiber. Instead of scattering the light off a physical object, he does the following:
measure the incoming wave front;
look up the corresponding response in his database;
prepare laser light in the correct response state and send it back to the verifier.
This attack is known as "digital emulation".
For a long time spoofing in the hands-off scenario has seemed to be a fundamental problem that cannot be solved.
The traditional approach to remote object authentication is to somehow enforce a hands-on environment, e.g. by having a tamper-proof trusted remote device probing the object. Drawbacks of this approach are (a) cost and (b) unknown degree of security in the face of ever more sophisticated attacks.
Quantum-physical readout of a PUF
The basic scheme
The problem of spoofing in the hands-off case can be solved using two fundamental information-theoretic properties of quantum physics:
A single quantum in an unknown state cannot be cloned.
When a quantum state is measured, most of the information it contains is destroyed.
Based on these principles, the following scheme was proposed.
Enrollment. The usual PUF enrollment. No quantum physics needed. The enrollment data is considered public.
Challenge. A single quantum (e.g. a photon) is prepared in a random state. It is sent to the PUF.
Response. The quantum interacts with the PUF (e.g. coherent scattering), resulting in a unitary transform of the state.
Verification. The quantum is returned to the verifier. He knows exactly what the response state should be. This knowledge enables him to perform a "yes/no" verification measurement.
Steps 2-4 are repeated multiple times in order to exponentially lower the false accept probability.
The crucial point is that the attacker cannot determine what the actual challenge is, because that information is packaged in a "fragile" quantum state. If he tries to investigate the challenge state by measuring it, he destroys part of the information. Not knowing where exactly to look in his challenge-response database, the attacker cannot reliably produce correct responses.
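A toy numerical sketch may help make the yes/no verification concrete. In the model below (my illustration, not the published protocol code), a Haar-random unitary stands in for the PUF's scattering, the verifier projects the returned state onto the expected response, and an attacker who must answer without reading the challenge passes with probability of only about 1/K per round.

```python
# Toy model of quantum readout: a random unitary plays the PUF, and
# verification projects the returned state onto the expected response.
import numpy as np

rng = np.random.default_rng(42)
K = 64  # dimension of the challenge space (illustrative value)

def random_state(k):
    v = rng.normal(size=k) + 1j * rng.normal(size=k)
    return v / np.linalg.norm(v)

def haar_unitary(k):
    # Phase-corrected QR of a complex Gaussian matrix is Haar-distributed.
    z = rng.normal(size=(k, k)) + 1j * rng.normal(size=(k, k))
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

puf = haar_unitary(K)            # verifier knows this from public enrollment

challenge = random_state(K)      # step 2: random challenge state
honest = puf @ challenge         # step 3: genuine PUF response
expected = puf @ challenge       # step 4: verifier's predicted response

blind = puf @ random_state(K)    # attacker responding without the challenge
p_honest = abs(np.vdot(expected, honest)) ** 2
p_blind = abs(np.vdot(expected, blind)) ** 2   # one draw; averages to ~1/K
print(f"honest pass: {p_honest:.3f}, blind guess: {p_blind:.4f} (1/K = {1/K:.4f})")
```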
A continuous-variable quantum authentication of PUFs has been also proposed in the literature, which relies on standard wave-front shaping and homodyne detection techniques.
Security assumptions
The scheme is secure only if the following conditions are met:
Physical unclonability of the PUF.
The attacker cannot perform arbitrary unitary transformations on the challenge quantum (i.e. physical emulation of the PUF is supposed to be infeasible).
In multiple-scattering optical systems the above requirements can be met in practice.
Quantum readout of PUFs is unconditionally secure against digital emulation, but only conditionally secure against physical cloning and physical emulation.
Special security properties
Quantum readout of PUFs achieves
Hands-off object authentication without trusted hardware at the side of the object.
Authentication of a quantum communication channel without a priori shared secrets and without shared entangled particles. The authentication is based on public information.
Imagine Alice and Bob wish to engage in quantum key distribution on an ad hoc basis, i.e. without ever having exchanged data or matter in the past. They both have an enrolled optical PUF. They look up each other's PUF enrollment data from a trusted source. They run quantum key distribution through both optical PUFs; with a slight modification of the protocol, they get quantum key distribution and two-way authentication. The security of their key distribution is unconditional, but the security of the authentication is conditional on the two assumptions mentioned above.
Security proofs
Security has been proven in the case of Challenge Estimation attacks, in which the attacker tries to determine the challenge as best he can using measurements. There are proofs for n = 1, for quadrature measurements on coherent states, and for a fixed number of quanta n > 1. The result for dimension K and n quanta is that the false acceptance probability in a single round cannot exceed (n+1)/(n+K).
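Plugging illustrative numbers into this bound shows how repeating the rounds suppresses false acceptance exponentially (the parameter values below are mine, chosen only for illustration):

```python
# Per-round false-accept bound (n+1)/(n+K), compounded over R independent rounds.
n, K, R = 1, 64, 20                  # illustrative parameters
per_round = (n + 1) / (n + K)        # 2/65 ~ 0.031
overall = per_round ** R             # ~6e-31 after 20 rounds
print(f"per round: {per_round:.3e}, after {R} rounds: {overall:.3e}")
```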
The security of the continuous-variable quantum authentication of PUFs against an emulation attack has also been addressed in the framework of Holevo's bound and Fano's inequality, as well as against a man-in-the-middle attack.
All of the above security proofs assume a tamper-resistant authentication set-up, which is hard to justify in a remote authentication scenario.
Experimental realization
Quantum readout of speckle-based optical PUFs has been demonstrated in the lab. This realization is known under the name Quantum-Secure Authentication.
This protocol, as well as the protocol in reference, is limited to short distances (< 10 km), due to practical issues associated with the transmission of quantum states. In a classical setting, by encrypting the entries in the database of challenge-response pairs, one can build a protocol which operates over arbitrary distances and offers security against both classical and quantum adversaries (including the emulation attack).
References
External links
http://theconversation.com/quantum-physics-can-fight-fraud-by-making-card-verification-unspoofable-35632
Cryptographic primitives
Quantum cryptography
Quantum information science
LogMeIn Hamachi
LogMeIn Hamachi is a virtual private network (VPN) application developed and released in 2004 by Alex Pankratov. It is capable of establishing direct links between computers that are behind network address translation ("NAT") firewalls without requiring reconfiguration (when the user's PC can be accessed directly without relays from the Internet/WAN side); in other words, it establishes a connection over the Internet that emulates the connection that would exist if the computers were connected over a local area network ("LAN").
Hamachi was acquired from Pankratov by LogMeIn in 2009. It is currently available as a production version for Microsoft Windows and macOS, as a beta version for Linux, and as a system-VPN-based client compatible with Android and iOS.
For paid subscribers, Hamachi runs in the background on idle computers. The feature was previously available to all users but became restricted to paid subscribers only.
Operational summary
Hamachi is a proprietary centrally-managed VPN system, consisting of the server cluster managed by the vendor of the system and the client software, which is installed on end-user devices.
Client software adds a virtual network interface to a computer, which is used for intercepting outbound as well as injecting inbound VPN traffic. Outbound traffic sent by the operating system to this interface is delivered to the client software, which encrypts and authenticates it and then sends it to the destination VPN peer over a specially initiated UDP connection. Hamachi currently handles tunneling of IP traffic, including broadcasts and multicast. The Windows version also recognizes and tunnels IPX traffic.
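As a rough illustration of that per-packet path, the sketch below authenticates and encrypts one captured IP packet and forwards it to a peer over UDP. This is a generic encrypt-then-forward data plane, not Hamachi's actual (proprietary) implementation: the AES-GCM cipher, the key, and the peer address are all stand-in assumptions.

```python
# Conceptual encrypt-and-forward of one captured IP packet over UDP.
# Not Hamachi's real code; AES-GCM stands in for its unspecified cipher suite.
import os
import socket
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

key = AESGCM.generate_key(bit_length=256)   # assumed to come from a prior key exchange
aead = AESGCM(key)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def forward_packet(ip_packet: bytes, peer: tuple) -> None:
    """Authenticate and encrypt one outbound packet, then send it to a VPN peer."""
    nonce = os.urandom(12)                    # 96-bit nonce, unique per packet
    sealed = aead.encrypt(nonce, ip_packet, None)
    sock.sendto(nonce + sealed, peer)

forward_packet(b"example raw IP packet bytes", ("198.51.100.7", 40000))  # illustrative peer
```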
Each client establishes and maintains a control connection to the server cluster. When the connection is established, the client goes through a login sequence, followed by the discovery process and state synchronization. The login step authenticates the client to the server and vice versa. The discovery process is used to determine the topology of the client's Internet connection, specifically to detect the presence of NAT and firewall devices on its route to the Internet. The synchronization step brings a client's view of its private networks in sync with other members of these networks.
When a member of a network goes online or offline, the server instructs other network peers to either establish or tear down tunnels to it. When establishing tunnels between the peers, Hamachi uses a server-assisted NAT traversal technique, similar to UDP hole punching. Detailed information on how it works has not been made public. This process does not work on certain combinations of NAT devices, requiring the user to explicitly set up a port forward. Additionally, the 1.0 series of the client software is capable of relaying traffic through vendor-maintained relay servers.
In the event of unexpectedly losing its connection to the server, the client retains all its tunnels and starts actively checking their status. When the server unexpectedly loses a client's connection, it informs the client's peers of the fact and expects them to also begin liveness checks. This enables Hamachi tunnels to withstand transient network problems on the route between the client and the server, as well as short periods of complete server unavailability.
Some Hamachi clients may also encounter closed ports on other clients, a problem that cannot be fixed by port forwarding.
Hamachi is frequently used for gaming and remote administration. The vendor provides free basic service, and extra features for a fee.
In February 2007, an IP-level block was imposed by Hamachi servers on parts of Vietnamese Internet space due to "the scale of the system abuse originating from blocked addresses". The company said it was working on a less intrusive solution to the problem.
Addressing
Each Hamachi client is normally assigned an IP address when it logs into the system for the first time. To avoid conflicts with existing private networks on the client side, the normal private IP address blocks 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16 are not used.
Before November 19, 2012, the 5.0.0.0/8 range was used. This range was previously unallocated, but was allocated to RIPE NCC in late 2010, and space from this range is now being used by hosting providers on the public internet. Hamachi switched to the 25.0.0.0/8 block.
The 25.0.0.0/8 block is allocated to the British Ministry of Defence. Organisations who need to communicate with the MOD may experience problems when more specific Internet routes attract traffic that was meant for internal hosts, or may alternatively find themselves unable to reach the legitimate users of those addresses because those addresses are being used internally; such "squatting" is against the established practice of the Internet.
The client now supports IPv6, and if this is selected then the address assigned is picked from a range registered to LogMeIn.
The IP address assigned to the Hamachi client is henceforth associated with the client's public crypto key. As long as the client retains its key, it can log into the system and use this IP address. Hamachi creates a single broadcast domain between all clients. This makes it possible to use LAN protocols that rely on IP broadcasts for discovery and announcement services over Hamachi networks.
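Python's standard ipaddress module makes the address conflict easy to see: the 25.0.0.0/8 addresses Hamachi assigns are not in any RFC 1918 private range, so the operating system treats them as publicly routable unless the virtual interface claims them first (a small illustrative check, not Hamachi code):

```python
# Hamachi's 25.0.0.0/8 addresses are publicly routable, unlike RFC 1918 space.
import ipaddress

hamachi_net = ipaddress.ip_network("25.0.0.0/8")
addr = ipaddress.ip_address("25.14.3.99")   # example Hamachi-style address

print(addr in hamachi_net)   # True: inside the block Hamachi assigns from
print(addr.is_private)       # False: may collide with real MOD-allocated routes
print(ipaddress.ip_address("192.168.1.10").is_private)  # True: RFC 1918 private space
```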
Security
The following considerations apply to Hamachi's use as a VPN application:
Additional risk of disclosure of sensitive data which is stored or may be logged by the mediation server — minimal where data is not forwarded.
The security risks due to vulnerable services on remote machines otherwise not accessible behind a firewall, common to all VPNs.
Hamachi is stated to use strong, industry-standard algorithms to secure and authenticate the data, and its security architecture is open. Despite this, security cannot necessarily be guaranteed.
The existing client-server protocol documentation contains a number of errors, some of which have been confirmed by the vendor, pending correction, with others not yet confirmed.
For the product to work, a "mediation server", operated by the vendor, is required.
This server stores the nickname, maintenance password, statically-allocated 25.0.0.0/8 IP address and the associated authentication token of the user. As such, it can potentially log actual IP addresses of the VPN users as well as various details of the session.
Compatibility
The current builds of Hamachi are available for the following operating systems:
Microsoft Windows (XP or later)
macOS (Mac OS X 10.6 or newer; runs only on x86, not on PowerPC or Apple Silicon (ARM))
Linux (officially supported on Ubuntu 16.04 / CentOS 7.2 x86/x64; beta armel/armhf versions available). Note: stability issues have been reported on Ubuntu 20.04 LTS.
FreeBSD users can install and use the Linux version; a port exists in the FreeBSD Ports collection.
iOS (via iOS system VPN)
Android (via Android system VPN)
Prior to versions 1.0.2.0 and 1.0.2.1 for the Windows release, many Windows Vista users experienced compatibility and connection issues while using Hamachi. As of March 30, 2007, the software includes Vista tweaks that address these OS-related problems, among other specific fixes.
See also
Network address translation (NAT) Overview, related RFCs: RFC 4008, RFC 3022, RFC 1631 (obsolete)
Pertino
Simple Traversal of UDP over NATs (STUN), a NAT traversal protocol defined in RFC 3489 (obsoleted by RFC 5389)
Session Traversal Utilities for NAT (Updated STUN, as defined in RFC 5389)
UDP hole punching another NAT traversal technique
Virtual Private LAN Service
XLink Kai
References
External links
Internet Protocol based network software
Internet software for Linux
MacOS Internet software
Tunneling software
Virtual private networks
Windows Internet software
Protecting Cyber Networks Act
The Protecting Cyber Networks Act (H.R. 1560) is a bill introduced in the 114th Congress by Rep. Devin Nunes (R-CA), chairman of the House Permanent Select Committee on Intelligence. The legislation would allow companies and the government to share information concerning cyber threats. To overcome privacy concerns, the bill expressly forbids companies from sharing information with the National Security Agency (NSA) or Department of Defense (DOD).
Background
A number of major hacking events occurred in 2014 and 2015:
In April 2014, Home Depot's computer systems were breached by hackers who stole the credit card accounts and email addresses of tens of millions of people.
In November 2014, hackers infiltrated Sony Pictures' systems and were able to get access to confidential employee and corporate information.
In January 2015, Anthem was hacked.
In April 2015, Premera Blue Cross had its system compromised. A threat existed that hackers might have accessed the medical and financial information of 11 million people.
Additionally, major U.S. businesses including Target and JPMorgan Chase have been victims of large-scale cyberattacks resulting in the theft of customer identity information.
The legislation was introduced as response to threats posed by these and other cyberattacks. On April 22, 2015, The Hill newspaper wrote, "Congress has contemplated some form of this law for nearly five years. But catastrophic data breaches within the last year have laid bare hundreds of millions of Americans' credit card data and Social Security numbers, raising public awareness and putting the onus on Capitol Hill to act."
Legislative history
On March 19, 2015, the House Permanent Select Committee on Intelligence held a hearing called "Growing Cyber Threat and its Impact on American Business." In his opening remarks as the committee's chairman, Nunes stated that U.S. companies and American consumers must feel confident that their confidential information stored on IT systems is secure. He said that in light of the major cyber attacks in 2014 and 2015, there is little assurance that personal and corporate information is safe. He said that because of those reasons, Congress needs to strengthen the security of the country's digital infrastructure by creating better methods for businesses and the government to share information on cyber threats.
Five days later, Nunes introduced H.R. 1560: Protecting Cyber Networks Act. On April 13, the House Permanent Select Committee on Intelligence passed an amended version of the bill. On April 22, the House passed the bill by a vote of 307-116. Before final passage of the bill, the House passed an amendment from Rep. Andre Carson (D-Ind.) that would require the inspector general to report on how agencies remove personal information from the information they receive. The amendment was proposed in response to concerns from privacy advocates, including many Democratic House members.
After passage in the House, the bill was sent to the Senate. As of June 28, 2016, the Senate had not taken action on the bill. However, a companion bill exists in the Senate: the Cybersecurity Information Sharing Act (CISA, S. 754). On October 27, 2015, the Senate approved S. 754 by a vote of 74-21.
Major provisions
Information sharing
The Protecting Cyber Networks Act (PCNA) would allow companies to share certain information with other companies and the government. They would be allowed to share only cybersecurity information; that is, information concerning the protection of their own systems.
PCNA would require the Director of National Intelligence to create regulations that would allow sharing the following types of information:
classified cyber threat indicators with representatives of the private sector with appropriate security clearances;
classified cyber threat indicators that may be declassified and shared at an unclassified level; and
any information in the possession of the Federal Government about imminent or ongoing cyber threats that may allow private companies to prevent or mitigate those threats.
The bill requires the President to submit to Congress policies and procedures on how the government should receive threat indicators when submitted by the private sector, as well as how to develop defensive measures within the federal government. It would require that agencies that receive threat information share it in real time with other relevant agencies.
Defensive protection
The legislation gives private companies the authority to go on the counter-offensive against hackers, meaning a company that was hacked could perform more assertive defensive measures than are currently allowed under the law. However, companies would not be allowed to hack back into other systems or manipulate systems for which they do not have consent to control.
According to the official legislative summary of the bill, the bill "Permits private entities to monitor or operate defensive measures to prevent or mitigate cybersecurity threats or security vulnerabilities, or to identify the source of a threat, on: (1) their own information systems; and (2) with written authorization, the information systems of other private or government entities."
Privacy
PCNA includes safeguards that support privacy. For example, the bill requires that companies scrub "unrelated" data of personally identifying information before they send the information to the government. Once government agencies receive the information, the agencies must examine the information to ensure that no personally identifiable information is included.
Liability
The bill offers protection from liability for companies that share cybersecurity information lawfully under the bill's provisions.
Support
The White House supports the legislation.
The legislation also received public support from the following organizations:
Agricultural Retailers Association (ARA)
Airlines for America (A4A)
Alliance of Automobile Manufacturers
American Bankers Association (ABA)
American Cable Association (ACA)
American Council of Life Insurers (ACLI)
American Fuel & Petrochemical Manufacturers (AFPM)
American Gaming Association
American Gas Association (AGA)
American Insurance Association (AIA)
American Petroleum Institute (API)
American Public Power Association (APPA)
American Water Works Association (AWWA)
ASIS International
Association of American Railroads (AAR)
BITS–Financial Services Roundtable
College of Healthcare Information Management Executives (CHIME)
CompTIA–The Computing Technology Industry Association
CTIA–The Wireless Association
Edison Electric Institute (EEI)
Federation of American Hospitals (FAH)
Food Marketing Institute (FMI)
GridWise Alliance
HIMSS–Healthcare Information and Management Systems Society
HITRUST–Health Information Trust Alliance
Large Public Power Council (LPPC)
National Association of Chemical Distributors (NACD)
National Association of Manufacturers (NAM)
National Association of Mutual Insurance Companies (NAMIC)
National Association of Water Companies (NAWC)
National Business Coalition on e-Commerce & Privacy
National Cable & Telecommunications Association (NCTA)
National Rural Electric Cooperative Association (NRECA)
NTCA–The Rural Broadband Association
Property Casualty Insurers Association of America (PCI)
The Real Estate Roundtable
Securities Industry and Financial Markets Association (SIFMA)
Society of Chemical Manufacturers & Affiliates (SOCMA)
Telecommunications Industry Association (TIA)
Transmission Access Policy Study Group (TAPS)
United States Telecom Association (USTelecom)
U.S. Chamber of Commerce
Utilities Telecom Council (UTC)
Opposition
Fifty-five civil liberties groups and security experts publicly opposed the legislation in a signed letter to Congress. "PCNA would significantly increase the National Security Agency's (NSA's) access to personal information, and authorize the federal government to use that information for a myriad of purposes unrelated to cybersecurity," the letter stated.
According to the House Permanent Select Committee on Intelligence, the PCNA expressly forbids companies from sharing information with the National Security Agency (NSA) or Department of Defense (DOD).
A group called Access, along with the ACLU and several other groups, launched a website called StopCyberspying.com. The site hosts a petition asking the President to veto PCNA or the Senate version of the bill.
The civil liberties groups that oppose the bill are:
Access
Advocacy for Principled Action in Government
American-Arab Anti-Discrimination Committee
American Civil Liberties Union
American Library Association
Association of Research Libraries
Bill of Rights Defense Committee
Brennan Center for Justice
Center for Democracy & Technology
Center for National Security Studies
Constitutional Alliance
The Constitution Project
Council on American-Islamic Relations
Cyber Privacy Project
Defending Dissent Foundation
Demand Progress
DownSizeDC.org
Electronic Frontier Foundation
Fight for the Future
Freedom of the Press Foundation
FreedomWorks
Free Press Action Fund
Government Accountability Project
Hackers/Founders
Human Rights Watch
Liberty Coalition
Media Alliance
National Association of Criminal Defense Lawyers
New America's Open Technology Institute
OpenTheGovernment.org
PEN American Center
Restore the Fourth
R Street
Student Net Alliance
Venture Politics
X-Lab
References
External links
Bill summary
"Congress must act to protect against cyberattacks" by Reps. Devin Nunes (R-CA) and Adam Schiff (D-CA), joint op-ed, The Hill. 3-31-2015.
House Report 114-63 - PROTECTING CYBER NETWORKS ACT. Published by the U.S. Government Publishing Office.
"The Growing Cyber Threat and its Impact on American Business", Congressional hearing. House Permanent Select Committee on Intelligence.
Floor statement upon introduction of the legislation (video).
Transcript, committee markup of H.R. 1560
Proposed legislation of the 114th United States Congress
November 22
November 22 is the 326th day of the year (327th in leap years) in the Gregorian calendar; 39 days remain until the end of the year. In ancient astrology, it is the cusp day between Scorpio and Sagittarius.
Events
Pre-1600
498 – After the death of Anastasius II, Symmachus is elected Pope in the Lateran Palace, while Laurentius is elected Pope in Santa Maria Maggiore.
845 – The first duke of Brittany, Nominoe, defeats the Frankish king Charles the Bald at the Battle of Ballon near Redon.
1307 – Pope Clement V issues the papal bull Pastoralis Praeeminentiae which instructed all Christian monarchs in Europe to arrest all Templars and seize their assets.
1574 – Spanish navigator Juan Fernández discovers islands now known as the Juan Fernández Islands off Chile.
1601–1900
1635 – Dutch colonial forces on Taiwan launch a pacification campaign against native villages, resulting in Dutch control of the middle and south of the island.
1718 – Royal Navy Lieutenant Robert Maynard attacks and boards the vessels of the British pirate Edward Teach (best known as "Blackbeard") off the coast of North Carolina. The casualties on both sides include Maynard's first officer Mister Hyde and Teach himself.
1837 – Canadian journalist and politician William Lyon Mackenzie calls for a rebellion against the United Kingdom in his essay "To the People of Upper Canada", published in his newspaper The Constitution.
1869 – In Dumbarton, Scotland, the clipper Cutty Sark is launched.
1873 – The French steamer SS Ville du Havre sinks in 12 minutes after colliding with the Scottish iron clipper Loch Earn in the Atlantic, with a loss of 226 lives.
1901–present
1908 – The Congress of Manastir establishes the Albanian alphabet.
1935 – The China Clipper inaugurates the first commercial transpacific air service, connecting Alameda, California with Manila.
1940 – World War II: Following the initial Italian invasion, Greek troops counterattack into Italian-occupied Albania and capture Korytsa.
1942 – World War II: Battle of Stalingrad: General Friedrich Paulus sends Adolf Hitler a telegram saying that the German 6th Army is surrounded.
1943 – World War II: Cairo Conference: U.S. President Franklin D. Roosevelt, British Prime Minister Winston Churchill, and Chinese Premier Chiang Kai-shek meet in Cairo, Egypt, to discuss ways to defeat Japan.
1943 – Lebanon gains independence from France.
1948 – Chinese Civil War: Elements of the Chinese Communist Second Field Army under Liu Bocheng trap the Nationalist 12th Army, beginning the Shuangduiji Campaign, the largest engagement of the Huaihai Campaign.
1955 – The Soviet Union launches RDS-37, a 1.6-megaton two-stage hydrogen bomb designed by Andrei Sakharov. The bomb was dropped over Semipalatinsk.
1956 – The Summer Olympics, officially known as the games of the XVI Olympiad, are opened in Melbourne, Australia.
1963 – U.S. President John F. Kennedy is assassinated and Texas Governor John Connally is seriously wounded by Lee Harvey Oswald, who also kills Dallas police officer J. D. Tippit while fleeing the scene. U.S. Vice President Lyndon B. Johnson is sworn in as the 36th President of the United States afterwards.
1967 – UN Security Council Resolution 242 is adopted, establishing a set of principles aimed at guiding negotiations for an Arab–Israeli peace settlement.
1971 – In Britain's worst mountaineering tragedy, the Cairngorm Plateau Disaster, five children and one of their leaders are found dead from exposure in the Scottish mountains.
1974 – The United Nations General Assembly grants the Palestine Liberation Organization observer status.
1975 – Juan Carlos is declared King of Spain following the death of Francisco Franco.
1977 – British Airways inaugurates a regular London to New York City supersonic Concorde service.
1988 – In Palmdale, California, the first prototype B-2 Spirit stealth bomber is revealed.
1989 – In West Beirut, a bomb explodes near the motorcade of Lebanese President René Moawad, killing him.
1990 – British Prime Minister Margaret Thatcher withdraws from the Conservative Party leadership election, confirming the end of her premiership.
1995 – Toy Story is released as the first feature-length film created completely using computer-generated imagery.
1995 – The 7.3 Gulf of Aqaba earthquake shakes the Sinai Peninsula and Saudi Arabia region with a maximum Mercalli intensity of VIII (Severe), killing eight and injuring 30, and generating a non-destructive tsunami.
2002 – In Nigeria, more than 100 people are killed in an attack aimed at the contestants of the Miss World contest.
2003 – Baghdad DHL attempted shootdown incident: Shortly after takeoff, a DHL Express cargo plane is struck on the left wing by a surface-to-air missile and forced to land.
2003 – England defeats Australia in the 2003 Rugby World Cup Final, becoming the first side from the Northern Hemisphere to win the tournament.
2004 – The Orange Revolution begins in Ukraine, resulting from the presidential elections.
2005 – Angela Merkel becomes the first female Chancellor of Germany.
2012 – Ceasefire begins between Hamas in the Gaza Strip and Israel after eight days of violence and 150 deaths.
2015 – A landslide in Hpakant, Kachin State, northern Myanmar kills at least 116 people near a jade mine, with around 100 more missing.
Births
Pre-1600
1329 – Elisabeth of Meissen, Burgravine of Nuremberg (d. 1375)
1428 – Richard Neville, 16th Earl of Warwick, English kingmaker (d. 1471)
1515 – Mary of Guise, Queen of Scots (d. 1560)
1519 – Johannes Crato von Krafftheim, German humanist and physician (d. 1585)
1532 – Anne of Denmark, Electress of Saxony (d. 1585)
1533 – Alfonso II d'Este, Duke of Ferrara, Italian noble (d. 1597)
1564 – Henry Brooke, 11th Baron Cobham, English politician, Lord Lieutenant of Kent (d. 1610)
1601–1900
1602 – Elisabeth of France (d. 1644)
1635 – Francis Willughby, English ornithologist and ichthyologist (d. 1672)
1643 – René-Robert Cavelier, Sieur de La Salle, French-American explorer (d. 1687)
1690 – François Colin de Blamont, French pianist and composer (d. 1760)
1698 – Pierre de Rigaud, Marquis de Vaudreuil-Cavagnial, Canadian-American soldier and politician, 10th Governor of Louisiana (d. 1778)
1709 – Franz Benda, Czech violinist and composer (d. 1786)
1710 – Wilhelm Friedemann Bach, German organist and composer (d. 1784)
1721 – Joseph Frederick Wallet DesBarres, Swiss-Canadian cartographer and politician, Lieutenant Governor of Nova Scotia (d. 1824)
1728 – Charles Frederick, Grand Duke of Baden (d. 1811)
1744 – Abigail Adams, American wife of John Adams, 2nd First Lady of the United States (d. 1818)
1780 – Conradin Kreutzer, German composer (d. 1849)
1780 – José Cecilio del Valle, Honduran journalist, lawyer, and politician, Foreign Minister of Mexico (d. 1834)
1787 – Rasmus Rask, Danish linguist, philologist, and scholar (d. 1823)
1808 – Thomas Cook, English businessman, founded Thomas Cook Group (d. 1892)
1814 – Serranus Clinton Hastings, American lawyer and politician, 1st Chief Justice of California (d. 1893)
1819 – George Eliot, English novelist and poet (d. 1880)
1820 – Katherine Plunket, Irish supercentenarian (d. 1932)
1824 – Georg von Oettingen, Estonian-German physician and ophthalmologist (d. 1916)
1836 – George Barham, English businessman, founded Express County Milk Supply Company (d. 1913)
1845 – Aleksander Kunileid, Estonian composer and educator (d. 1875)
1849 – Christian Rohlfs, German painter and academic (d. 1938)
1852 – Paul-Henri-Benjamin d'Estournelles de Constant, French politician and diplomat, Nobel Prize laureate (d. 1924)
1856 – Heber J. Grant, American religious leader, 7th President of The Church of Jesus Christ of Latter-day Saints (d. 1945)
1857 – George Gissing, English novelist (d. 1903)
1859 – Cecil Sharp, English folk song scholar (d. 1924)
1861 – Ranavalona III of Madagascar (d. 1917)
1868 – John Nance Garner, American lawyer and politician, 32nd Vice President of the United States (d. 1967)
1869 – André Gide, French novelist, essayist, and dramatist, Nobel Prize laureate (d. 1951)
1870 – Howard Brockway, American pianist, composer, and educator (d. 1951)
1870 – Harry Graham, Australian cricketer (d. 1911)
1873 – Leo Amery, Indian-English journalist and politician, Secretary of State for the Colonies (d. 1955)
1873 – Johnny Tyldesley, English cricketer (d. 1930)
1876 – Percival Proctor Baxter, American lawyer and politician, 53rd Governor of Maine (d. 1969)
1876 – Emil Beyer, American gymnast and triathlete (d. 1934)
1877 – Endre Ady, Hungarian journalist and poet (d. 1919)
1877 – Joan Gamper, Swiss-Spanish footballer, founded FC Barcelona (d. 1930)
1881 – Enver Pasha, Ottoman general and politician (d. 1922)
1884 – C. J. "Jack" De Garis, Australian entrepreneur (d. 1926)
1884 – Sulaiman Nadvi, Pakistani historian, author, and scholar (d. 1953)
1890 – Charles de Gaulle, French general and politician, 18th President of France (d. 1970)
1891 – Edward Bernays, Austrian-American publicist (d. 1995)
1893 – Harley Earl, American businessman (d. 1969)
1893 – Lazar Kaganovich, Soviet politician (d. 1991)
1896 – David J. Mays, American lawyer and author (d. 1971)
1897 – Paul Oswald Ahnert, German astronomer and educator (d. 1989)
1897 – Harry Wilson, English-American actor and singer (d. 1987)
1898 – Wiley Post, American pilot (d. 1935)
1899 – Hoagy Carmichael, American singer-songwriter, pianist, and actor (d. 1981)
1900 – Tom Macdonald, Welsh journalist and author (d. 1980)
1900 – Helenka Pantaleoni, American actress and humanitarian, co-founded U.S. Fund for UNICEF (d. 1987)
1901–present
1901 – Béla Juhos, Hungarian-Austrian philosopher from the Vienna Circle (d. 1971)
1901 – Joaquín Rodrigo, Spanish pianist and composer (d. 1999)
1902 – Philippe Leclerc de Hauteclocque, French general (d. 1947)
1902 – Emanuel Feuermann, Austrian-American cellist and educator (d. 1942)
1902 – Humphrey Gibbs, English-Rhodesian politician, 15th Governor of Southern Rhodesia (d. 1990)
1902 – Albert Leduc, Canadian ice hockey player (d. 1990)
1902 – Ethel Smith, American organist (d. 1996)
1904 – Miguel Covarrubias, Mexican painter and illustrator (d. 1957)
1904 – Louis Néel, French physicist and academic, Nobel Prize laureate (d. 2000)
1904 – Fumio Niwa, Japanese author (d. 2005)
1906 – Jørgen Juve, Norwegian football player and journalist (d. 1983)
1909 – Mikhail Mil, Russian engineer, founded the Mil Moscow Helicopter Plant (d. 1970)
1910 – Mary Jackson, American actress (d. 2005)
1911 – Ralph Guldahl, American golfer (d. 1987)
1912 – Doris Duke, American art collector and philanthropist (d. 1993)
1913 – Benjamin Britten, English pianist, composer, and conductor (d. 1976)
1913 – Gardnar Mulloy, American tennis player and coach (d. 2016)
1913 – Cecilia Muñoz-Palma, Filipino lawyer and jurist (d. 2006)
1913 – Jacqueline Vaudecrane, French figure skater and coach (d. 2018)
1914 – Peter Townsend, Burmese-English captain and pilot (d. 1995)
1915 – Oswald Morris, British cinematographer (d. 2014)
1917 – Jon Cleary, Australian author and playwright (d. 2010)
1917 – Andrew Huxley, English physiologist and biophysicist, Nobel Prize laureate (d. 2012)
1917 – Sir Keith Shann, Australian diplomat (d. 1988)
1918 – Claiborne Pell, American captain and politician (d. 2009)
1919 – Máire Drumm, Irish politician (d. 1976)
1920 – Baidyanath Misra, Indian economist (d. 2019)
1920 – Anne Crawford, Israeli-English actress (d. 1956)
1921 – Brian Cleeve, Irish sailor, author, and playwright (d. 2003)
1921 – Rodney Dangerfield, American comedian, actor, rapper, and screenwriter (d. 2004)
1922 – Fikret Amirov, Azerbaijani composer (d. 1984)
1922 – Wiyogo Atmodarminto, Indonesian general and politician, 10th Governor of Jakarta (d. 2012)
1922 – Eugene Stoner, American engineer and weapons designer, designed the AR-15 rifle (d. 1997)
1923 – Arthur Hiller, Canadian actor, director, and producer (d. 2016)
1923 – Dika Newlin, American singer-songwriter and pianist (d. 2006)
1924 – Geraldine Page, American actress and singer (d. 1987)
1924 – Les Johnson, Australian politician (d. 2015)
1925 – Jerrie Mock, American pilot (d. 2014)
1925 – Gunther Schuller, American horn player, composer, and conductor (d. 2015)
1926 – Lew Burdette, American baseball player and coach (d. 2007)
1926 – Arthur Jones, American businessman, founded Nautilus, Inc. and MedX Corporation (d. 2007)
1927 – Steven Muller, German-American scholar and academic (d. 2013)
1927 – Robert E. Valett, American psychologist, teacher, and author (d. 2008)
1928 – Tim Beaumont, English priest and politician (d. 2008)
1929 – Staughton Lynd, American lawyer, historian, author, and activist
1929 – Keith Rayner, Australian Archbishop
1930 – Peter Hall, English actor, director, and manager (d. 2017)
1930 – Peter Hurford, English organist and composer
1932 – Robert Vaughn, American actor and director (d. 2016)
1933 – Merv Lincoln, Australian Olympic athlete (d. 2016)
1934 – Rita Sakellariou, Greek singer (d. 1999)
1935 – Ludmila Belousova, Soviet ice skater (d. 2017)
1936 – John Bird, English actor and screenwriter
1936 – Archie Gouldie, Canadian-American wrestler (d. 2016)
1937 – Nikolai Kapustin, Russian pianist and composer (d. 2020)
1938 – John Eleuthère du Pont, American businessman and philanthropist, founded Delaware Museum of Natural History (d. 2010)
1938 – Henry Lee, Chinese-American criminologist and academic
1939 – Tom West, American engineer and author (d. 2011)
1939 – Mulayam Singh Yadav, Indian politician, 24th Indian Minister of Defence
1940 – Terry Gilliam, American-English actor, director, animator, and screenwriter
1940 – Roy Thomas, American author
1940 – Andrzej Żuławski, Polish director and screenwriter (d. 2016)
1941 – Tom Conti, Scottish actor and director
1941 – Jacques Laperrière, Canadian ice hockey player and coach
1941 – Ron McClure, American jazz bassist
1941 – Volker Roemheld, German physiologist and biologist (d. 2013)
1941 – Terry Stafford, American singer-songwriter (d. 1996)
1941 – Jesse Colin Young, American singer-songwriter and bass player
1942 – Guion Bluford, American colonel, pilot, and astronaut
1942 – Floyd Sneed, Canadian drummer
1943 – Yvan Cournoyer, Canadian ice hockey player and coach
1943 – Billie Jean King, American tennis player and sportscaster
1943 – William Kotzwinkle, American novelist and screenwriter
1943 – Ricky May, New Zealand-Australian jazz singer (d. 1988)
1943 – Mushtaq Mohammad, Pakistani cricketer
1943 – Roger L. Simon, American author and screenwriter
1945 – Elaine Weyuker, American computer scientist, engineer, and academic
1945 – Kari Tapio, Finnish singer (d. 2010)
1946 – Aston Barrett, Jamaican bass player and songwriter
1947 – Sandy Alderson, American businessman and academic
1947 – Rod Price, English guitarist and songwriter (d. 2005)
1947 – Nevio Scala, Italian footballer and manager
1947 – Salt Walther, American race car driver (d. 2012)
1947 – Valerie Wilson Wesley, American journalist and author
1948 – Radomir Antić, Serbian footballer and manager (d. 2020)
1948 – Stewart Guthrie, New Zealand police officer (d. 1990)
1948 – Saroj Khan, Indian dance choreographer, known as "The Mother of Dance/Choreography in India" (d. 2020)
1949 – Richard Carmona, American physician and politician, 17th Surgeon General of the United States
1949 – David Pietrusza, American author and historian
1950 – Lyman Bostock, American baseball player (d. 1978)
1950 – Jim Jefferies, Scottish footballer and manager
1950 – Paloma San Basilio, Spanish singer-songwriter and producer
1950 – Art Sullivan, Belgian singer (d. 2019)
1950 – Steven Van Zandt, American singer-songwriter, guitarist, producer, and actor
1950 – Tina Weymouth, American singer-songwriter and bass player
1951 – Kent Nagano, American conductor, director, and manager
1952 – Nicholas Suntzeff, American astronomer and cosmologist
1953 – Wayne Larkins, English cricketer and footballer
1954 – Denise Epoté, Cameroonian journalist at the head of the Africa management of TV5 Monde
1954 – Paolo Gentiloni, Italian politician, 57th Prime Minister of Italy
1954 – Carol Tomcala, Australian sports shooter
1955 – George Alagiah, British journalist
1955 – James Edwards, American basketball player
1956 – Lawrence Gowan, Scottish-Canadian singer-songwriter and keyboard player
1956 – Richard Kind, American actor
1956 – Ron Randall, American author and illustrator
1957 – Donny Deutsch, American businessman and television host
1957 – Alan Stern, American engineer and planetary scientist
1958 – Horse, Scottish singer-songwriter and guitarist
1958 – Jamie Lee Curtis, American actress
1958 – Lee Guetterman, American baseball player
1958 – Ibrahim Ismail of Johor, Sultan of Johor
1958 – Chic McSherry, Scottish musician, businessman and writer
1958 – Jason Ringenberg, American singer-songwriter and guitarist
1959 – Eddie Frierson, American actor
1959 – Frank McAvennie, Scottish footballer
1959 – Fabio Parra, Colombian cyclist
1959 – Lenore Zann, Australian-Canadian actress, singer, and politician
1960 – Jim Bob, English singer-songwriter and guitarist
1960 – Leos Carax, French actor, director, and screenwriter
1961 – Mariel Hemingway, American actress
1961 – Stephen Hough, English-Australian pianist and composer
1961 – Randal L. Schwartz, American computer programmer and author
1962 – Sumi Jo, South Korean soprano
1962 – Victor Pelevin, Russian engineer and author
1962 – Rezauddin Stalin, Bangladeshi poet and educator
1963 – Hugh Millen, American football player and sportscaster
1963 – Tony Mowbray, English footballer and manager
1963 – Kennedy Pola, Samoan-American football player and coach
1963 – Brian Robbins, American actor, director, producer, and screenwriter
1963 – Corinne Russell, English model, actress, and dancer
1964 – Robbie Slater, English-Australian footballer and sportscaster
1965 – Valeriya Gansvind, Estonian chess player
1965 – Olga Kisseleva, Russian artist
1965 – Jörg Jung, German footballer and manager
1965 – Mads Mikkelsen, Danish actor
1965 – Kristin Minter, American actress
1965 – Sen Dog, Cuban-American rapper and musician
1966 – Ed Ferrara, American wrestler and manager
1966 – Mark Pritchard, English lawyer and politician
1966 – Richard Stanley, South African director, producer, and screenwriter
1967 – Boris Becker, German-Swiss tennis player and coach
1967 – Tom Elliott, Australian investment banker
1967 – Quint Kessenich, American lacrosse player and sportscaster
1967 – Mark Ruffalo, American actor and activist
1967 – Bart Veldkamp, Dutch-Belgian speed skater, coach, and sportscaster
1968 – Sidse Babett Knudsen, Danish actress
1968 – Rasmus Lerdorf, Greenlandic-Canadian computer scientist and programmer, created PHP
1968 – Sarah MacDonald, Canadian organist and conductor
1969 – Byron Houston, American basketball player
1969 – Marjane Satrapi, Iranian author and illustrator
1970 – Marvan Atapattu, Sri Lankan cricketer and coach
1970 – Chris Fryar, American drummer
1970 – Stel Pavlou, English author and screenwriter
1971 – Cath Bishop, English rower
1971 – Kyran Bracken, Irish-English rugby player
1971 – Cecilia Suárez, Mexican actress and producer
1972 – Olivier Brouzet, French rugby player
1972 – Russell Hoult, English footballer, coach, and manager
1972 – Jay Payton, American baseball player and sportscaster
1973 – Dmitri Linter, Russian-Estonian activist
1973 – Chad Trujillo, American astronomer and scholar
1973 – Andrew Walker, Australian rugby player
1974 – Joe Nathan, American baseball player
1974 – David Pelletier, Canadian figure skater and coach
1975 – Aiko, Japanese singer-songwriter
1975 – Joshua Wheeler, American sergeant (d. 2015)
1975 – Yusaku Maezawa, Japanese billionaire entrepreneur and art collector
1976 – Adrian Bakalli, Belgian footballer
1976 – Torsten Frings, German footballer and coach
1976 – Regina Halmich, German boxer and businesswoman
1976 – Ville Valo, Finnish singer-songwriter
1977 – Kerem Gönlüm, Turkish basketball player
1977 – Annika Norlin, Swedish singer-songwriter and guitarist
1977 – Michael Preston, English footballer
1978 – Colin Best, Australian rugby league player
1978 – Mélanie Doutey, French actress and singer
1978 – Karen O, South Korean-American singer-songwriter and pianist
1979 – Jeremy Dale, American illustrator (d. 2014)
1979 – Christian Terlizzi, Italian footballer
1980 – David Artell, English-Gibraltarian footballer and coach
1980 – Shawn Fanning, American computer programmer and businessman, founded Napster
1980 – Rait Keerles, Estonian basketball player
1980 – Yaroslav Rybakov, Russian high jumper
1981 – Asmaa Abdol-Hamid, Arab-Danish social worker and politician
1981 – Ben Adams, English-Norwegian singer-songwriter and producer
1981 – Song Hye-kyo, South Korean actress and singer
1981 – Pape Sow, Senegalese basketball player
1981 – Jenny Owen Youngs, American singer-songwriter and guitarist
1981 – Shangela Laquifa Wadley, American drag queen, comedian and reality television personality
1982 – Xavier Doherty, Australian cricketer
1982 – Alasdair Duncan, Australian journalist and author
1982 – Isild Le Besco, French actress, director, and screenwriter
1982 – Yakubu, Nigerian footballer
1983 – Sei Ashina, Japanese actress
1983 – Corey Beaulieu, American guitarist and songwriter
1983 – Tyler Hilton, American singer-songwriter, guitarist, and actor
1983 – Peter Ramage, English footballer
1983 – Xiao Yu, Taiwanese singer and songwriter
1984 – Scarlett Johansson, American actress
1984 – Nathalie Nordnes, Norwegian singer-songwriter
1985 – Austin Brown, American singer-songwriter, dancer, and producer
1985 – Asamoah Gyan, Ghanaian footballer
1985 – Dieumerci Mbokani, Congolese footballer
1985 – Ava Leigh, English singer-songwriter
1985 – Mandy Minella, Luxembourgian tennis player
1985 – James Roby, English rugby league player
1985 – DeVon Walker, American football player
1986 – Erika Padilla, Filipino actress and host
1986 – Oscar Pistorius, South African sprinter
1987 – Martti Aljand, Estonian swimmer
1987 – Marouane Fellaini, Belgian footballer
1988 – Jamie Campbell Bower, English actor and singer
1988 – Austin Romine, American baseball player
1989 – Candice Glover, American singer-songwriter and actress
1989 – Minehiro Kinomoto, Japanese actor
1989 – Chris Smalling, English footballer
1989 – Gabriel Torje, Romanian footballer
1990 – Jang Dongwoo, South Korean singer and dancer
1990 – Kartik Aaryan, Indian actor
1990 – Brock Osweiler, American football player
1991 – Tarik Black, American professional basketball player
1993 – Tridha Choudhury, Indian actress
1993 – Adèle Exarchopoulos, French actress
1994 – Keiji Tanaka, Japanese figure skater
1994 – Nicolás Stefanelli, Argentine footballer
1994 – Samantha Bricio, Mexican volleyball player
1994 – Dacre Montgomery, Australian actor
1995 – Katherine McNamara, American actress
1996 – Hailey Baldwin, American model
1996 – JuJu Smith-Schuster, American football player
2000 – Auliʻi Cravalho, Hawaiian-American actress and singer
2000 – Baby Ariel, American social media vlogger and singer
2001 – Zhong Chenle, Chinese singer, songwriter, dancer, and actor
Deaths
Pre-1600
365 – Antipope Felix II
950 – Lothair II of Italy (b. 926)
1249 – As-Salih Ayyub, ruler of Egypt
1286 – Eric V of Denmark (b. 1249)
1318 – Mikhail of Tver (b. 1271)
1392 – Robert de Vere, Duke of Ireland (b. 1362)
1538 – John Lambert, English Protestant martyr
1601–1900
1617 – Ahmed I, Sultan of the Ottoman Empire and Caliph of Islam (b. 1590)
1694 – John Tillotson, English archbishop (b. 1630)
1697 – Libéral Bruant, French architect and academic, designed Les Invalides (b. 1635)
1718 – Blackbeard, English pirate (b. 1680)
1758 – Richard Edgcumbe, 1st Baron Edgcumbe, English politician, Lord Lieutenant of Cornwall (b. 1680)
1774 – Robert Clive, English general and politician, Lord Lieutenant of Shropshire (b. 1725)
1794 – John Alsop, American merchant and politician (b. 1724)
1813 – Johann Christian Reil, German physician, physiologist, and anatomist (b. 1759)
1819 – John Stackhouse, English botanist and phycologist (b. 1742)
1871 – Oscar James Dunn, African American activist and politician, Lieutenant Governor of Louisiana 1868–1871 (b. 1826)
1875 – Henry Wilson, American colonel, journalist, and politician, 18th Vice President of the United States (b. 1812)
1886 – Mary Boykin Chesnut, American author (b. 1823)
1896 – George Washington Gale Ferris Jr., American engineer, invented the Ferris wheel (b. 1859)
1900 – Arthur Sullivan, English composer and scholar (b. 1842)
1901–present
1902 – Walter Reed, American physician and entomologist (b. 1851)
1913 – Tokugawa Yoshinobu, Japanese shōgun (b. 1837)
1916 – Jack London, American novelist and journalist (b. 1876)
1917 – Teoberto Maler, Italian-German archaeologist and explorer (b. 1842)
1919 – Francisco Moreno, Argentinian explorer and academic (b. 1852)
1920 – Manuel Pérez y Curis, Uruguayan poet and author (b. 1884)
1923 – Andy O'Sullivan, Irish republican who died on hunger strike
1926 – Darvish Khan, Iranian tar player (b. 1872)
1932 – William Walker Atkinson, American merchant, lawyer, and author (b. 1862)
1941 – Werner Mölders, German colonel and pilot (b. 1915)
1943 – Lorenz Hart, American playwright and composer (b. 1895)
1944 – Arthur Eddington, English astrophysicist and astronomer (b. 1882)
1946 – Otto Georg Thierack, German jurist and politician, German Minister of Justice (b. 1889)
1948 – Fakhri Pasha, Turkish general and politician (b. 1868)
1954 – Jess McMahon, American wrestling promoter, co-founded Capitol Wrestling Corporation (b. 1882)
1955 – Shemp Howard, American actor and comedian (b. 1895)
1956 – Theodore Kosloff, Russian-American actor, ballet dancer, and choreographer (b. 1882)
1963 – Wilhelm Beiglböck, Austrian-German physician (b. 1905)
1963 – Aldous Huxley, English novelist and philosopher (b. 1894)
1963 – John F. Kennedy, American lieutenant and politician, 35th President of the United States (b. 1917)
1963 – C. S. Lewis, British writer, critic and Christian apologist (b. 1898)
1963 – J. D. Tippit, American police officer (Dallas Police Department) (b. 1924)
1967 – Pavel Korin, Russian painter (b. 1892)
1976 – Sevgi Soysal, Turkish author (b. 1936)
1980 – Jules Léger, Canadian journalist and politician, 21st Governor General of Canada (b. 1913)
1980 – Norah McGuinness, Irish painter and illustrator (b. 1901)
1980 – Mae West, American actress, singer, and screenwriter (b. 1893)
1981 – Hans Adolf Krebs, German-English physician and biochemist, Nobel Prize laureate (b. 1900)
1986 – Scatman Crothers, American actor and comedian (b. 1910)
1988 – Luis Barragán, Mexican architect and engineer, designed the Torres de Satélite (b. 1908)
1989 – C. C. Beck, American illustrator (b. 1910)
1989 – René Moawad, Lebanese lawyer and politician, 13th President of Lebanon (b. 1925)
1992 – Sterling Holloway, American actor (b. 1905)
1993 – Anthony Burgess, English novelist, playwright, and critic (b. 1917)
1994 – Minni Nurme, Estonian writer and poet (b. 1917)
1994 – Forrest White, American businessman (b. 1920)
1996 – María Casares, Spanish-French actress (b. 1922)
1996 – Terence Donovan, English photographer and director (b. 1936)
1996 – Mark Lenard, American actor (b. 1924)
1997 – Michael Hutchence, Australian singer-songwriter (b. 1960)
1998 – Stu Ungar, American poker player (b. 1953)
2000 – Christian Marquand, French actor, director, and screenwriter (b. 1927)
2000 – Emil Zátopek, Czech runner (b. 1922)
2001 – Mary Kay Ash, American businesswoman, founded Mary Kay, Inc. (b. 1915)
2001 – Theo Barker, English historian and academic (b. 1923)
2001 – Norman Granz, American-Swiss record producer, founded Verve Records (b. 1918)
2002 – Parley Baer, American actor (b. 1914)
2004 – Arthur Hopcraft, English screenwriter and journalist (b. 1932)
2005 – Bruce Hobbs, American jockey and trainer (b. 1920)
2006 – Asima Chatterjee, Indian chemist (b. 1917)
2006 – Pat Dobson, American baseball player and coach (b. 1942)
2007 – Maurice Béjart, French-Swiss dancer, choreographer, and director (b. 1929)
2007 – Verity Lambert, English television producer (b. 1935)
2008 – MC Breed, American rapper (b. 1971)
2010 – Jean Cione, American baseball player and educator (b. 1928)
2010 – Frank Fenner, Australian virologist and microbiologist (b. 1914)
2011 – Svetlana Alliluyeva, Russian-American author and educator (b. 1926)
2011 – Sena Jurinac, Bosnian-Austrian soprano and actress (b. 1921)
2011 – Lynn Margulis, American biologist and academic (b. 1938)
2011 – Paul Motian, American drummer and composer (b. 1931)
2012 – Pearl Laska Chamberlain, American pilot (b. 1909)
2012 – Bryce Courtenay, South African-Australian author (b. 1933)
2012 – Bennie McRae, American football player (b. 1939)
2012 – P. Govinda Pillai, Indian journalist and politician (b. 1926)
2013 – Don Dailey, American computer programmer (b. 1956)
2013 – Brian Dawson, English singer (b. 1939)
2013 – Jancarlos de Oliveira Barros, Brazilian footballer (b. 1983)
2013 – Tom Gilmartin, Irish businessman (b. 1935)
2013 – Georges Lautner, French director and screenwriter (b. 1926)
2013 – Alec Reid, Irish priest and activist (b. 1931)
2014 – Fiorenzo Angelini, Italian cardinal (b. 1916)
2014 – Don Grate, American baseball and basketball player (b. 1923)
2014 – Marcel Paquet, Belgian-Polish philosopher and author (b. 1947)
2014 – Émile Poulat, French sociologist and historian (b. 1920)
2015 – Abubakar Audu, Nigerian banker and politician, Governor of Kogi State (b. 1947)
2015 – Salahuddin Quader Chowdhury, Bangladeshi politician (b. 1949)
2015 – Ali Ahsan Mohammad Mojaheed, Bangladeshi politician (b. 1948)
2015 – Robin Stewart, Indian-English actor and game show host (b. 1946)
2015 – Kim Young-sam, South Korean soldier and politician, 7th President of South Korea (b. 1929)
2016 – M. Balamuralikrishna, Indian vocalist and singer (b. 1930)
2017 – Bob Avakian, American music producer (b. 1919)
2017 – Dmitri Hvorostovsky, Russian operatic baritone (b. 1962)
2017 – Tommy Keene, American singer-songwriter (b. 1958)
2020 – Otto Hutter, Austrian-born British physiologist (b. 1924)
Holidays and observances
Arbour Day (British Virgin Islands)
Christian feast day:
Amphilochius of Iconium
Cecilia
George (Eastern Orthodox, a national holiday in Georgia)
Herbert
Philemon and Apphia
Pragmatius of Autun
November 22 (Eastern Orthodox liturgics)
Day of Justice (Azerbaijan)
Day of the Albanian Alphabet (Albania and ethnic Albanians)
Independence Day, celebrates the independence of Lebanon from France in 1943.
Teacher's Day (Costa Rica)
References
External links
Days of the year
November
|
3069189
|
https://en.wikipedia.org/wiki/WinFax
|
WinFax
|
WinFax (also known as WinFax PRO) was a Microsoft Windows-based software product designed to let computers equipped with fax-modems communicate directly to stand-alone fax machines, or other similarly equipped computers.
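Products of this kind typically drove the fax-modem through the Class 1 or Class 2 fax extensions of the Hayes AT command set, with the software preparing page data and the modem handling line negotiation. As a rough illustration only (WinFax's actual internals are not public, and the serial port name and fax number below are hypothetical), the opening of a Class 2 fax session can be sketched in a few AT commands; this is a minimal Python sketch using the third-party pyserial library:

import serial  # third-party pyserial package, assumed installed

# Hypothetical port and number; a real transmission also needs page data
# encoded per ITU-T T.4 and the full T.30 handshake, which the modem and
# fax software negotiate between them.
port = serial.Serial("/dev/ttyS0", 19200, timeout=2)

def send(cmd):
    # Send one AT command and return the modem's raw response.
    port.write(cmd.encode("ascii") + b"\r")
    return port.read(64).decode("ascii", "replace")

print(send("AT+FCLASS=2"))     # switch the modem into Class 2 fax mode
print(send("ATD01234567890"))  # dial the remote fax machine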
History
The product was created by developer Tony Davis at Toronto-based Delrina in 1990, and soon became the company's flagship product. Delrina started out by producing a set of electronic form products known as PerForm and later, FormFlow.
In 1990 Delrina devoted a relatively small space to WinFax at that year's COMDEX, where it easily garnered the most attention of any Delrina product being demonstrated at that show. This interest convinced Delrina of the commercial viability of the product. The rapid acceptance of this program in the market soon overtook that of the initial forms product in terms of revenues, and within a few years of its launch, WinFax would account for 80% of the company's revenues.
Several versions of WinFax were released over the next few years, initially for Windows 3.x and then a Windows 95-based version. Versions were also created for the Apple Macintosh ("Delrina Fax Pro") and DOS ("DosFax"). The Windows versions were also localized to major European and Asian languages. The company made further inroads by establishing tie-ins with modem manufacturers such as U.S. Robotics and Supra that bundled simple versions of the product (called "WinFax LITE") that offered basic functionality. Those wanting more robust features were encouraged to upgrade to the "PRO" version, and were offered significant discounts over the standalone retail version. All of this rapidly established WinFax as the de facto fax software. By 1994, almost 100 companies were bundling versions of WinFax, including IBM, Compaq, AST Research, Gateway 2000, Intel and Hewlett-Packard.
WinFax was frequently used by business travelers as an ad hoc printer. By connecting to a regular phone line, or to an office/hotel room phone via an adapter, a user could send a document to a fax machine (in an era when nearly all business-class hotels had a fax machine at the front desk, but very few offered printers for guest use). While the 200 dpi fax resolution was not as smooth as the maximum 300 dpi offered by high-end laser printers, it was generally superior to dot-matrix output.
WinFax PRO 3.0 was launched in November 1992 for Windows 3.x machines, and was followed by a version for Macintosh systems. This version of the product saw a long life, with a "non-PRO" edition bundled with various fax modems toward the end of its product cycle.
The release of WinFax PRO 4.0 in March 1994 brought together a number of key features and technologies. It introduced an improved OCR engine, introduced improvements aimed specifically at mobile fax users, better on-screen fax viewing capabilities and a focus on consistency and usability of the interface. It also included for the first time the ability to integrate directly with popular new email products such as cc:Mail and Microsoft Mail. It was soon followed by a networked version of the same product, which allowed a number of users to share a single fax modem on a networked system. This version of the product was also bundled with a grayscale scanner manufactured by Fujitsu, and sold as WinFax Scanner.
In 1994 the firm acquired AudioFile, a company that specialized in computer-based voice technology. The company created a product called TalkWorks, which enabled users to use certain fax/modems as a voice mail client. This program would later be bundled with subsequent versions of WinFax and the CommSuite 95 product.
Fate
In January 1995 The New York Times called WinFax "the leader in fax software with two-thirds of the market." By mid-1995, "more than 10 million copies of WinFax" were sold (worldwide).
The final Delrina-made version of WinFax was WinFax PRO 7.0, which shipped in November 1995. There was no intervening version 5.0 or 6.0; the jump to version 7.0 was purely a marketing decision, made to keep pace with Microsoft's Office suite, which was then at the same version number. It was the first Delrina product designed to work with the Windows 95 operating system, and was a full 32-bit application, setting it apart from its competition at the time.
By the time WinFax PRO 7.0 was being sold from retail shelves, Delrina had been acquired by Symantec in 1995. Symantec continued to market, develop and release four additional major versions of the WinFax PRO software product under the Symantec WinFax PRO banner.
WinFax PRO 10, released in February 2000, was the last major version of WinFax PRO to be developed. In 1999, John W. Thompson, a former IBM executive in sales, marketing and development, replaced Gordon Eubanks as Symantec's CEO. Thompson decided to focus on one technology category: security. Symantec's business model changed from the retail channel software company, to an enterprise security software based company with a retail channel. In a Black Enterprise September 2004 article, Thompson is quoted "We (Symantec) had Java development tools, we had personal contact management systems. We had a whole range of things that didn't relate to anything in common, except they could be moved through the same distribution channel. And my answer is: Who cares about that?".
By the end of 2001, the remaining WinFax PRO developers and support personnel were terminated from their positions. Technical support for WinFax PRO from 2002 through 2006 was outsourced to third-party call centers based in Oregon and Texas, and later India. Symantec discontinued sales and support of WinFax PRO on June 30, 2006. A web-based community support forum exists for users of WinFax PRO.
Version history
WinFax 1.0 — 1990 (Windows 3.x)
Delrina WinFax PRO 2.0 — 1991 (Windows 3.x)
DosFax — 1992 (DOS)
Delrina WinFax PRO 3.0 — November 1992 (Windows 3.x)
Delrina WinFax PRO for Networks 3.0 — (Windows 3.x, Windows for Workgroups 3.1x)
Delrina Fax PRO – 1993 (Macintosh)
Delrina WinFax PRO 4.0 — March 1994 (Windows 3.x, later revisions supported Windows 95)
Delrina WinFax PRO for Networks 4.0 — (Windows 3.x, Windows for Workgroups 3.1x)
Delrina WinFax PRO for Networks 4.1 — (Windows 3.x, Windows for Workgroups 3.1x, Windows 95)
Delrina WinFax Scanner — 1994 (Windows 95)
Delrina WinFax PRO 7.0 — November 1995 (Windows 95)
WinFax PRO 7.5 (bundled with TalkWorks) — October 1996 (Windows 95)
WinFax PRO 8.0 (bundled with TalkWorks PRO) — March 1997 (Windows 95, Windows NT)
WinFax PRO for Networks 5.0 Server — July 1997 (Windows 3.x, Windows for Workgroups 3.1x, Windows 95)
TalkWorks PRO 2.0 — August 1998 (Windows 95, Windows NT)
WinFax PRO 9.0 — August 1998 (Windows 9x, Windows NT/2000/XP)
TalkWorks PRO 3.0 — August 1999 (Windows 9x, Windows NT/2000/XP)
WinFax PRO 10.0 — February 2000 (Windows 9x, Windows NT/2000/XP)
WinFax PRO 10.01 — January 2001 (Windows 9x, Windows Me, Windows NT/2000/XP)
WinFax PRO 10.02 — August 2001 (Windows 9x, Windows Me, Windows NT/2000/XP)
WinFax PRO 10.03 — November 2002 (Windows 9x, Windows Me, Windows NT/2000/XP)
WinFax PRO 10.04 — January 2005 (update patch only from version 10.03)
French WinFax, FaxTools elsewhere
BVRP (founded by Bruno Vanryb and Roger Politis), a French startup, partnered with Hayes Microcomputer Products and used the boosted sales of its database program, Directory, to move into fax software, fearing it "would be crushed" in the database market. In France the company chose to use the name WinFax, whereas elsewhere its product is sold as "FaxTools".
References
External links
WinFax PRO Support forums (Not affiliated with Symantec)
WinFax PRO Technical Support site (Not affiliated with Symantec)
Federal Junk Fax Prevention Act
Fax software
Communication software
NortonLifeLock software
Windows-only software
|
12840543
|
https://en.wikipedia.org/wiki/Internet%20in%20the%20United%20Kingdom
|
Internet in the United Kingdom
|
The United Kingdom has been involved with the Internet throughout its origins and development. The telecommunications infrastructure in the United Kingdom provides Internet access to businesses and home users in various forms, including fibre, cable, DSL, wireless and mobile.
The share of households with Internet access in the United Kingdom grew from 9 percent in 1998 to 93 percent in 2019. Virtually all adults aged 16 to 44 years in the UK were recent internet users (99%) in 2019, compared with 47% of adults aged 75 years and over; in aggregate, the third-highest in Europe. Online shoppers in the UK spend more per household than consumers in any other country. Internet bandwidth per Internet user was the seventh highest in the world in 2016, and average and peak internet connection speeds were top-quartile in 2017. Internet use in the United Kingdom doubled in 2020.
The Internet country code top-level domain (ccTLD) for the United Kingdom is .uk and is run by Nominet.
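For readers who want to see the delegation in practice, the name servers for the .uk zone can be queried directly from the DNS. A minimal sketch, assuming the third-party dnspython library is installed; it simply prints whatever servers Nominet currently publishes:

import dns.resolver  # third-party dnspython package, assumed installed

# Ask the local resolver for the NS records of the .uk top-level domain.
for record in dns.resolver.resolve("uk.", "NS"):
    print(record.target)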
History
The UK has been involved in the research and development of packet switching, communication protocols, and internetworking since their origins. The development of these technologies was international from the beginning, although much of the research and development that led to the Internet protocol suite was driven and funded by the United States.
Early years
Pioneering research and development of computers in Britain in the 1940s led to partnerships between the public and private sectors. These relationships brought about sharing and transfer of personnel and concepts between industry and academia or national research bodies. The trackball was invented in 1946 by Ralph Benjamin, while working for the Royal Navy Scientific Service. At the National Physical Laboratory (NPL), Alan Turing worked on computer design, assisted by Donald Davies in 1947.
Christopher Strachey, who became Oxford University's first professor of computation, filed a patent application for time-sharing in 1959. He passed the concept on to J. C. R. Licklider at a UNESCO-sponsored conference on Information Processing in Paris that year.
Packet switching and national data network proposal
After meeting with Licklider in 1965, Donald Davies conceived the idea of packet switching for data communications. He proposed a commercial national data network and developed plans to implement the concept in a local area network, the NPL network, which operated from 1969 to 1986. He and his team, including Derek Barber and Roger Scantlebury, carried out work to analyse and simulate the performance of packet switching networks, including datagram networks. Their research and practice was adopted by the ARPANET in the United States, the forerunner of the Internet, and influenced other researchers in the UK and Europe including Louis Pouzin.
TCP/IP and the early Internet
Donald Davies, Derek Barber and Roger Scantlebury joined the International Networking Working Group (INWG) in 1972 along with researchers from the United States and France. Bob Kahn and Vint Cerf acknowledged Davies and Scantlebury in their 1974 paper "A Protocol for Packet Network Intercommunication".
Peter Kirstein's research group at University College London was one of the first two international connections on the ARPANET in 1973, alongside the Norwegian Seismic Array (NORSAR) which connected via Sweden's Tanum satellite station. The specification of the Transmission Control Program was developed in the U.S. in 1974 through research funded and led by DARPA. The following year, testing began with concurrent implementations at University College London, Stanford University, and BBN. Kirstein co-authored with Vint Cerf one of the most significant early technical papers on the internetworking concept in 1978. His research group at UCL adopted TCP/IP in 1982, a year ahead of ARPANET, and played a significant role in the very earliest experimental Internet work. Kirstein's group included Sylvia Wilbur who programmed the computer used as the local node for the network.
The Royal Signals and Radar Establishment (RSRE) was involved in early research and testing of TCP/IP. The first email sent by a head of state was sent from the RSRE over the ARPANET by Queen Elizabeth II in 1976. RSRE was allocated class A Internet address range 25 in 1979, which later became the Ministry of Defence address space, providing 16.7 million IPv4 addresses.
When American researchers Jon Postel and Paul Mockapetris were designing the Domain Name System in 1984, British researchers expressed a desire to use a country designation. Postel used the ISO standard country abbreviations, except that he followed the "UK" convention already in use in the Name Registration Scheme rather than the ISO standard "GB". The .uk Internet country code top-level domain (ccTLD) was registered in July 1985, seven months after the original generic top-level domains such as .com, and was the first country code after .us. At the time, ccTLDs were delegated by Postel to a "responsible person", and Andrew McDowell at UCL managed .uk, the first country code delegation. He later passed it to Dr Willie Black at the UK Education and Research Networking Association (UK ERNA). Black managed the "Naming Committee" until he and John Carey formed Nominet UK in 1996. As one of the first professional ccTLD operators, it became the model for many other operators worldwide.
The UK's national research and education network (NREN), JANET connected with the National Science Foundation Network (NSFNET) in the United States in 1989. JANET adopted Internet Protocol on its existing network in 1991. In the same year, Dai Davies introduced Internet technology into the pan-European NREN, EuropaNet.
Ivan Pope's company, NetNames, developed the concept of a standalone commercial domain name registrar, which would sell domain registration and other associated services to the public. Network Solutions Inc. (NSI), the domain name registry for the .com, .net, and .org top-level domains (TLDs), assimilated this model, which ultimately led to the separation of registry and registrar functions.
Jon Crowcroft and Mark Handley received multiple awards for their work on Internet technology in the 1990s and 2000s. Karen Banks pioneered the use of the Internet to empower women around the world.
Other protocols and networks
During the early 1970s, the NPL team researched internetworking and worked on the European Informatics Network (EIN). Based on datagrams, the network linked Euratom, the French research centre INRIA and the UK’s National Physical Laboratory in 1976. The transport protocol of the EIN was the basis of the one adopted by the International Networking Working Group.
A number of academic and research networks in the early 1970s serving the Science Research Council community became SRCnet, later called SERCnet. Other local and regional academic and research networks were built.
In 1973, Clifford Cocks invented a public-key cryptography algorithm equivalent to what would become (in 1978) the RSA algorithm while working at the Government Communications Headquarters (GCHQ).
Post Office Telecommunications developed an experimental public packet switching network, EPSS, in the 1970s. This was one of the first public data networks in the world when it began operating in 1977. EPSS was replaced with the Packet Switch Stream (PSS) in 1980. PSS connected to the International Packet Switched Service (IPSS), which was created in 1978 through a collaboration between Post Office Telecommunications and two US telecoms companies. IPSS provided worldwide networking infrastructure.
British research contributed to the development of the X.25 standard agreed by the CCITT in 1976 which was utilised by PSS and IPSS. The UK academic community defined the Coloured Book protocols, which came into use as "interim" X.25 standards. These protocols gained some acceptance internationally as the first complete X.25 standard, and gave the UK "several years lead over other countries".
Logica, together with the French company SESA, set up a joint venture in 1975 to undertake the Euronet development, using X.25 protocols to form virtual circuits. It established a network linking a number of European countries in 1979 before being handed over to national PTTs in 1984.
Peter Collinson brought Unix to the University of Kent at Canterbury (UKC) in 1976 and set up the first UUCP test service to Bell Labs in the U.S. in 1979. UKC provided the first connections to non-academic users in the early 1980s. The first UUCP emails from the U.S. arrived in the UK in 1979 and email to Europe (the Netherlands and Denmark) started in 1980, becoming a regular service via EUnet in 1982. Four commercial companies provided electronic mail services in Britain by 1985, enabling subscribers to send email over telephone connections or data networks such as Packet Switch Stream.
In the early 1980s, British academic networks started a standardisation and interconnection effort based on X.25 and the Coloured Book protocols. Known as the United Kingdom Education and Research Networking Association (UK ERNA), and later JNT Association, this became JANET, the UK's high-speed academic and research network that linked all universities, higher education establishments, and publicly funded research laboratories. It began operation in 1984, two years ahead of the NSFNET in the United States.
The National Computing Centre's 1976 publication 'Why Distributed Computing', which came from considerable research into future configurations for computer systems, resulted in the UK presenting the case for an international standards committee to cover this area at the ISO meeting in Sydney in March 1977. This international effort ultimately led to the OSI model as an international reference model, published in 1984. For a period in the late 1980s and early 1990s, engineers, organizations and nations became polarized over the issue of which standard, the OSI model or the Internet protocol suite, would result in the best and most robust computer networks.
Commercial networking services between the UK and the US were being developed in late 1990.
World Wide Web
In 1989, Tim Berners-Lee, working at CERN in Switzerland, wrote a proposal for "a large hypertext database with typed links". The following year, he specified HTML, the hypertext language, and HTTP, the protocol. These concepts became a world-wide information system known as the World Wide Web (WWW). Operating on the Internet, it allows documents to be created for reading or accessing services with connections to other documents or services, accessed by clicking on hypertext links, enabling the user to navigate from one document or service to another. Nicola Pellow worked with Berners-Lee and Robert Cailliau on the WWW project at CERN.
BT (British Telecommunications plc) began using the WWW in 1991 during a collaborative project called the Oracle Alliance Program. It was founded in 1990 by Oracle Corporation, based in California, to provide information for its corporate partners and about those partners. BT became involved in May 1991. File sharing was required as part of the program and, initially, floppy disks were sent through the post. Then in July 1991 access to the Internet was implemented by BT network engineers using the BT packet switching network. A link was established from Ipswich to London for access to the Internet backbone. The first file transfers made via a NeXT-based WWW interface were completed in October 1991.
The BBC registered with the DDN-NIC in 1989, establishing Internet access via Brunel University. In 1991, bbc.co.uk was registered through the JANET NRS, and the BBC's first website went online in 1994. Other early websites hosted in the UK, which went online in 1993, included JumpStation, the first WWW search engine, hosted at the University of Stirling in Scotland; The Internet Movie Database, hosted by the computer science department of Cardiff University in Wales; and Kent Anthropology, one of the first social science sites (among the first 200 web servers). The Web brought many social and commercial uses to the Internet, which was previously a network for academic institutions. It began to enter everyday use in 1993–94.
An early attempt to provide access to the Web on television was being developed in 1995.
Dial-up
Pipex, established in 1990, began providing dial-up Internet access in March 1992 as the UK's first commercial Internet provider. By November 1993 Pipex provided Internet service to 150 customer sites. One of its first customers was Demon Internet, which popularised dial-up, modem-based internet access in the UK. Other commercial Internet service providers, and web-hosting companies aimed at businesses and individuals, developed in the 1990s. By May 1998 Demon Internet had 180,000 subscribers.
This narrowband service has been almost entirely replaced by the new broadband technologies, and is now generally only used as a backup. BT trialled its first ISDN 'broadband' connection in 1992. The first commercial service was available from Telewest in 2000.
Broadband
Broadband allowed the signal in one line to be split between telephone and Internet data, meaning users could be online and make phone calls at the same time. It also enabled faster connections, making it easier to browse the Internet and download files. Broadband Internet access in the UK was, initially, provided by a number of regional cable television and telephone companies which gradually merged into larger groups. The development of digital subscriber line (DSL) technology has allowed broadband to be delivered via traditional copper telephone cables. Also, Wireless Broadband is now available in some areas. These three technologies (cable, DSL and wireless) now compete with each other.
More than half of UK homes had broadband in 2007, with an average connection speed of 4.6 Mbit/s. Bundled communications deals mixing broadband, digital TV, mobile phone and landline phone access were adopted by forty per cent of UK households in the same year, up by a third over the previous year. This high level of service is considered the main driver for the recent growth in online advertising and retail.
In 2006 the UK market was dominated by six companies, with the top two, Virgin Media (28% share) and BT (23%), taking 51% between them.
By July 2011 BT's share had grown by six percent and the company became the broadband market leader.
The UK broadband market is overseen by the government watchdog Ofcom. According to Ofcom's 2007 report the average UK citizen used the Internet for 36 minutes every day.
The Ofcom Communications Market 2018 report showed that 42% of adults had access to and use of a smart TV by 2018, compared with just 5% in 2012, illustrating the extra bandwidth demanded of broadband providers' networks.
Cable
Cable broadband uses coaxial cables or optical fibre cables. The main cable service provider in the UK is Virgin Media, and the current maximum speed available to its customers is 1.1 Gbit/s (subject to change).
Digital subscriber line (DSL)
Asymmetric digital subscriber line (ADSL) was introduced to the UK in trial stages in 1998 and a commercial product was launched in 2000. In the United Kingdom, most exchanges, local loops and backhauls are owned and managed by BT Wholesale, who then wholesale connectivity via Internet service providers, who generally provide the connectivity to the Internet, support, billing and value added services (such as web hosting and email). A customer typically expects a British telephone socket to connect their modem to the broadband.
As of October 2021, BT operates 5,630 exchanges across the UK, with the vast majority enabled for ADSL. Only a relative handful – fewer than 100 of the smallest and most rural exchanges – have not been upgraded to support ADSL products. Some exchanges, numbering under 1,000, have been upgraded to support SDSL products; these are often the larger exchanges based in major towns and cities, so they still cover a large proportion of the population. SDSL products are aimed more at business customers and are priced higher than ADSL services.
Unbundled local loop
Many companies now operate their own services using local loop unbundling. Initially, Bulldog Communications operated in the London area, and Easynet (through its sister consumer provider UK Online) enabled exchanges across the country from London to central Scotland.
In November 2010, having purchased Easynet in the preceding months, Sky closed the business-centric UK Online with little more than a month's notice. Although Easynet continued to offer business-grade broadband connectivity products, UK Online customers could not migrate to an equivalent Easynet service, being offered only a Migration Authorisation Code (MAC) to move to another provider or the option of becoming a customer of the residential-only Sky Broadband ISP with an introductory discounted period. Also, some previously available service features like fastpath (useful for time-critical protocols like SIP) were not made available on Sky Broadband, leaving business users with a difficult choice, particularly where UK Online was the only LLU provider. Since then, Sky Broadband has become a significant player in the quad-play telecoms market, offering ADSL line rental and call packages to customers (who have to pay a supplement if they are not also Sky television subscribers).
Whilst Virgin Media is the nearest direct competitor, its quad-play product is available to fewer homes, given the fixed nature of its cable infrastructure. TalkTalk is the next DSL-based ISP with a mature quad-play product portfolio (EE, formed from the merger of the Orange and T-Mobile service providers, focused its promotion on forthcoming fibre broadband and 4G LTE products).
Market consolidation and expansion have permitted service providers to offer faster and less expensive services, with typical speeds of up to 24 Mbit/s downstream (subject to ISP and line length). They can offer products at sometimes considerably lower prices, due to not necessarily having to conform to the same regulatory requirements as BT Wholesale: for example, 8 unbundled LLU pairs can deliver 10 Mbit/s over 3775 m for half the price of a similar fibre connection.
In 2005, another company, Be, started offering speeds of up to 24 Mbit/s downstream and 2.5 Mbit/s upstream using ADSL2+ with Annex M, eventually from over 1,250 UK exchanges. Be was taken over by O2's parent company Telefónica in 2007. On 1 March 2013 Telefónica sold Be to Sky, which has since migrated O2 and Be customers onto the somewhat slower Sky network.
Exchanges continue to be upgraded, subject to demand, across the country, although at a somewhat slower pace since BT's commencement of FTTC rollout plans and near-saturation in key geographical areas.
IPstream
Up until the launch of "Max" services, the only ADSL packages available via BT Wholesale were known as IPstream Home 250, Home 500, Home 1000 and Home 2000 (contention ratio of 50:1); and Office 500, Office 1000, and Office 2000 (contention ratio of 20:1). The number in the product name indicates the downstream data rate in kilobits per second. The upstream data rate is up to 250 kbit/s for all products.
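The contention ratio gives the worst-case share of bandwidth per subscriber: the advertised downstream rate divided by the number of users who may be sharing it. A small illustrative Python calculation, using the product figures from the paragraph above:

# Worst-case per-user downstream share = product rate / contention ratio.
products = {
    "Home 500": (500, 50),     # (downstream kbit/s, contention ratio)
    "Home 2000": (2000, 50),
    "Office 500": (500, 20),
    "Office 2000": (2000, 20),
}
for name, (rate_kbps, contention) in products.items():
    print(f"{name}: {rate_kbps / contention:.0f} kbit/s worst case")

For example, a Home 500 line shared at 50:1 could fall to roughly 10 kbit/s per user at peak times, while an Office 500 line at 20:1 would bottom out at about 25 kbit/s.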
For BT Wholesale ADSL products, users initially had to live within 3.5 kilometres of the local telephone exchange to receive ADSL, but this limit was increased thanks to rate-adaptive digital subscriber line (RADSL), although users with RADSL possibly had a reduced upstream rate, depending on the quality of their line. There are still areas that cannot receive ADSL because of technical limitations, not least of which networks in housing areas built with aluminium cable rather than copper in the 1980s and 1990s, and areas served by optical fibre (TPON), though these are slowly being serviced with copper.
In September 2004, BT Wholesale removed the line-length/loss limits for 500 kbit/s ADSL, instead employing a tactic of "suck it and see" — enabling the line, then seeing if ADSL would work on it. This sometimes included the installation of a filtered faceplate on the customer's master socket, so as to eliminate poor-quality telephone extension cables inside the customer's premises, which can be a source of high-frequency noise.
In the past, the majority of home users used packages with 500 kbit/s (downstream) and 250 kbit/s (upstream) with a 50:1 contention ratio. However, BT Wholesale introduced the option of a new charging structure to ISPs which means that the wholesale service cost was the same regardless of the ADSL data rate, with charges instead being based on the amount of data transferred. Nowadays, most home users use a package whose data rate is only limited by the technical limitations of their telephone line. Initially this was 2 Mbit/s downstream. Until the advent of widespread FTTC, most home products were first ADSL Max-based (up to 7.15 Mbit/s), using ADSL G.992.1 and then later ADSL2+ (up to 21 Mbit/s).
Max and Max Premium
Following successful trials, BT announced the availability of higher speed services known as BT ADSL Max and BT ADSL Max Premium in March 2006. BT made the "Max" product available to more than 5300 exchanges, serving around 99% of UK households and businesses.
Both Max services offered downstream data rates of up to 7.15 Mbit/s. Upstream data rates were up to 400 kbit/s for the standard product and up to 750 kbit/s for the premium product. (Whilst the maximum downstream data rate for IPStream Max is often touted as 8 Mbit/s, this is in fact misleading because, in a departure from previous practice, it actually refers to the gross ATM data rate. The maximum data rate available at the IP level is 7.15 Mbit/s; the maximum TCP payload rate – the rate one would actually see for file transfer – would be about 7.0 Mbit/s.)
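The gap between the headline figure and the quoted rates is a consequence of ATM framing: each 53-byte ATM cell carries only 48 bytes of payload, and each IP packet gains an AAL5 trailer plus padding before segmentation into cells. The sketch below reproduces the approximate figures; it assumes the commonly quoted 8,128 kbit/s sync rate for "up to 8 Mbit/s" ADSL and a full-size 1500-byte IP packet, and ignores PPP/LLC header bytes for simplicity:

import math

sync_kbps = 8128     # gross ATM sync rate for "8 Mbit/s" ADSL (common figure)
ip_packet = 1500     # bytes in a full-size IP packet
aal5_trailer = 8     # bytes appended by AAL5 before cell segmentation

# AAL5 frames are padded so the payload fills a whole number of 48-byte cells.
cells = math.ceil((ip_packet + aal5_trailer) / 48)
wire_bytes = cells * 53                  # each cell is 53 bytes on the wire

ip_rate = sync_kbps * ip_packet / wire_bytes
tcp_payload = sync_kbps * (ip_packet - 40) / wire_bytes  # minus TCP/IP headers

print(f"IP-level rate: {ip_rate:.0f} kbit/s")        # about 7,190 kbit/s
print(f"TCP payload rate: {tcp_payload:.0f} kbit/s")  # about 7,000 kbit/s

The result, roughly 7.2 Mbit/s at the IP level and 7.0 Mbit/s of TCP payload, is in line with the figures quoted above.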
The actual downstream data rate achieved on any given Max line is subject to the capabilities of the line. Depending on the stable ADSL synchronisation rate negotiated, BT's '20CN' system applied a fixed rate limit chosen from the following ladder: 160, 250, 500, or 750 kbit/s; 1.0, 1.25, 1.5, 1.75, or 2.0 Mbit/s; then 500 kbit/s steps up to 7.0 Mbit/s; and a final maximum rate of 7.15 Mbit/s.
Speeds
On 13 August 2004 the ISP Wanadoo (formerly Freeserve and now EE in the UK) was told by the Advertising Standards Authority to change the way it advertised its 512 kbit/s broadband service in Britain, removing the words "full speed", which rival companies claimed were misleading people into thinking it was the fastest available service.
In a similar way, on 9 April 2003 the Advertising Standards Authority ruled against ISP NTL, saying that NTL's 128 kbit/s cable modem service must not be marketed as "broadband". Ofcom reported in June 2005 that there were more broadband than dial-up connections for the first time in history.
In the third quarter of 2005, with the merger of NTL and Telewest, a new alliance was formed with the largest market share of broadband users. This alliance brought about large increases in bandwidth allocations for cable customers – the minimum speed increased from the industry norm of 512 kbit/s to 2 Mbit/s for home lines, with both companies planning to upgrade all domestic customers to at least 4 Mbit/s downstream, and up to 10 Mbit/s and beyond, by mid-2006 – along with the supply of integrated services such as digital TV and phone packages.
March 2006 saw the nationwide launch of BT Wholesale's up to "8 Mbit/s" ADSL services, known as ADSL Max. "Max"-based packages are available to end users on any broadband-enabled BT exchange in the UK.
Since 2003, BT has been introducing SDSL to exchanges in many of the major cities. Services are currently offered at upload/download speeds of 256 kbit/s, 512 kbit/s, 1 Mbit/s or 2 Mbit/s. Unlike ADSL, which is typically 256 kbit/s upload, SDSL upload speeds are the same as the download speed. BT usually provide a new copper pair for SDSL installs, which can be used only for the SDSL connection. At a few hundred pounds a quarter, SDSL is significantly more expensive than ADSL, but is significantly cheaper than a leased line. SDSL is marketed to businesses and offers low contention ratios, and in some cases, a service level agreement. At present, the BT Wholesale SDSL enablement programme has stalled, most probably due to a lack of uptake.
As late as 2015 it was common, even in highly developed areas such as Aldgate in London, for consumers to be limited to ADSL speeds of up to 8 Mbit/s. This had a major effect on the London rental market, as limited broadband service can affect the readiness of prospective tenants to sign a lease.
In March 2020, the UK government set the Universal Service Obligation to 10 Mbit/s download and 1 Mbit/s upload.
As of 2 May 2020, 96.9% of UK households could receive "superfast broadband", defined as 30 Mbit/s, and 19.29% of UK households could receive gigabit speeds, either via FTTP or DOCSIS 3.1, while 1.07% of UK households had broadband slower than the legal USO.
In September 2020, the UK dropped 13 places in the 2020 Worldwide Broadband Speed League and was among the slowest in Europe, with a mean download speed of 37.82 Mbit/s. Cable.co.uk blames this low speed on Openreach, which had kept entry-level FTTC packages at 30–35 Mbit/s and 'fast' FTTC at 60–70 Mbit/s for more than five years with no significant changes. The UK was somewhat late to deploy full fibre (FTTP/FTTH) owing to its reliance on FTTC/VDSL technologies, whose deployment was largely driven by the lack of political appetite and funding for FTTP at the time.
Developments since 2006
Since 2006, the UK market has changed significantly; companies that previously provided telephone and television subscriptions also began to offer broadband.
TalkTalk offered customers ‘free’ broadband if they had a telephone package. Orange responded by offering ‘free’ broadband for some mobile customers. Many smaller ISPs now offer similar packages. O2 also entered the broadband market by taking over LLU provider Be, while Sky (BSkyB) had already taken over LLU broadband provider Easynet. In July 2006, Sky announced 2 Mbit/s broadband to be available free to Sky TV customers and a higher speed connection at a lower price than most rivals.
In 2007 BT announced service trials for ADSL2+. Entanet, BT Wholesale and BT Retail were chosen as the three service providers for the first service trial, in the West Midlands.
In 2011, BT began offering 100 Mbit/s FTTP broadband in Milton Keynes. By 2014 the service operated at speeds in excess of 300 Mbit/s.
Virgin Media stated that 13 million UK homes were covered by its optical fibre broadband network, and that by the end of 2012 it would be able to offer 100 Mbit/s broadband. Over 100 towns in the UK now have access to this service.
In October 2011, British operator Hyperoptic launched a 1 Gbit/s FTTH service in London.
In October 2012, British operator Gigler UK launched a 1 Gbit/s downstream and 500 Mbit/s upstream FTTH service in Bournemouth using the CityFibre network.
In 2015, BT unveiled universal 5–10 Mbit/s broadband and the rollout of 500 Mbit/s G.fast. The aim was to push "ultra-fast speeds" of 300–500 Mbit/s to 10 million homes using the existing landline cables. The roll-out of G.fast was paused in 2019 as Openreach focused on FTTP. BT has also proposed switching off its copper network by 2027.
In 2015, BT began the roll-out of G.INP (a physical-layer retransmission scheme) on its FTTC network, to help improve line stability and reduce overheads and latency. The roll-out was paused on ECI broadband cabinet equipment owing to its lack of support for upstream retransmission, which caused network slowdowns and higher latency. The roll-out of G.INP on Huawei broadband cabinets was completed in 2015, while G.INP on ECI equipment had re-entered the trial stage as of May 2020.
In September 2016, Sky "completed" their roll-out of IPv6 with 95% of their customers getting IPv6 access. BT rolled out IPv6 support for "all BT Broadband lines" two months later in November 2016.
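Whether a given connection actually has working IPv6 can be checked from the operating system resolver. A minimal Python sketch (the hostname is purely illustrative): it asks for IPv6 addresses only and reports failure if none are returned:

import socket

host = "www.example.co.uk"  # hypothetical name, for illustration only
try:
    # Request IPv6 (AAAA) results only.
    infos = socket.getaddrinfo(host, 443, socket.AF_INET6)
    print("IPv6 addresses:", sorted({info[4][0] for info in infos}))
except socket.gaierror:
    print("No IPv6 address found (or the local resolver lacks IPv6 support).")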
During the 2019 general election campaign, Boris Johnson pledged full fibre for all of the UK by 2025. This was later rolled back to "gigabit-capable" broadband, meaning that mixed technologies are allowed: for example, Virgin Media can continue to use its cable infrastructure, since DOCSIS 3.1 is "gigabit-capable", and other ISPs can also sell 5G broadband.
In January 2020, Openreach announced that it would deploy FTTP technology in 200 rural locations by March 2021.
In late April 2020, UK Rural ISP B4RN launched their 10 Gbit/s symmetrical home broadband.
Openreach reported that on 29 April 2020 it saw a record peak of 10 petabytes of data passing through its network in one hour. This increase in internet traffic was a result of the COVID-19 lockdown in the UK.
In May 2020, Openreach announced that their FTTP network has covered 2.5 million UK premises.
The UK had a 31.15% IPv6 adoption rate as of early May 2020.
In July 2020, availability of full fibre (FTTP) Internet in the UK reached 15%.
Wireless broadband
The term "wireless broadband" generally refers to the provision of a wireless router with a broadband connection, although it can also refer to alternative wireless methods of broadband delivery, such as satellite or radio-based technology. These alternative delivery models are often deployed in areas that are physically or commercially unfeasible to reach by traditional fixed methods.
Mobile broadband
Mobile broadband is high-speed Internet access provided by mobile phone operators using a device that requires a SIM card to access the service (such as the Huawei E220).
A newer mobile broadband technology in the United Kingdom is 4G, which is intended to replace the older 3G technology currently in use and could see download speeds increased to 300 Mbit/s. EE was the first company to start developing a full-scale 4G network throughout the United Kingdom, later followed by other UK telecommunications companies such as O2 (Telefónica) and Vodafone.
Children's access to the Internet
Educational computer networks are maintained by organisations such as JANET and East Midlands Public Services Network.
According to a 2017 Ofcom report, 'Children and Parents: Media Use and Attitudes Report', more young children were going online than in 2016, with much of the growth coming from increased use of tablets.
A survey of UK school children's access to the Internet, commissioned by security company Westcoastcloud in 2011, found that half had no parental controls installed on their internet-connected devices, and half of parents said they had concerns about the lack of controls on their children's devices.
Call for better oversight
In June 2018 Tom Winsor, Her Majesty's chief inspector of constabulary, said technologies like encryption should be breakable if law enforcers have a warrant. Winsor said the public was running out of patience with organisations like Facebook, Telegram and WhatsApp. Winsor said, "There is a handful of very large companies with a highly dominant influence over how the internet is used. In too many respects, their record is poor and their reputation tarnished. The steps they take to make sure their services cannot be abused by terrorists, paedophiles and organised criminals are inadequate; the commitment they show and their willingness to be held to account are questionable."
See also
Alternative media in the United Kingdom
Digital Britain
Internet censorship in the United Kingdom
Illegal file sharing in the United Kingdom
Internet rush hour
Media in the United Kingdom
Open Rights Group
References
External links
Government loses way in computer networks New Scientist, 1975
How the Brits invented packet switching and made the internet possible Computer Weekly, 2010
The British invented much of the Internet ZD Net, 2010
History of computing in the United Kingdom
|
48439944
|
https://en.wikipedia.org/wiki/Iris%20mesopotamica
|
Iris mesopotamica
|
Iris mesopotamica, the Mesopotamian iris, is a species in the genus Iris; it is also in the subgenus Iris. It is a rhizomatous perennial from the Middle East, within the countries of Iraq, Turkey, Syria and Israel. It has broad, linear, grey-green or green leaves and a tall stem with 2–3 branches holding up to 9 scented flowers, in shades of violet, purple, lavender-blue and light blue, with a yellow-and-white or orange-and-white beard. It is listed as a synonym of Iris germanica in some sources. It is cultivated as an ornamental plant in temperate regions, including being planted in graveyards and cemeteries.
Description
It is often confused with Iris trojana (now classed as a synonym of Iris germanica) and Iris cypriana. It is similar in form to Iris cypriana, but its outer bract (spathe) is brown and papery in the upper third only.
It is a geophyte with thick, stoloniferous rhizomes that sit semi-buried in the ground.
It has linear, green, or grey-green, glaucous leaves.
The sheathing leaves can grow up to 5 cm wide, and are wider than those of Iris cypriana.
It has a tall stem, or peduncle, with 2 or 3 branches.
The stem has broad spathes (leaves of the flower bud), which are green in the lower half and scarious (membranous) or brown and papery in the upper third.
The stems and their branches hold between 3 and 8, or 9, flowers; each branch carries 2–3 flowers at its terminal end, and there is always a single terminal flower on the main stem.
It blooms early in the season, between late spring and early summer, in May and June.
The large flowers are scented, and come in shades of violet, purple, lavender-blue (similar in shade to Iris junonia), and light blue. There are occasionally bi-toned flowers.
Like other irises, it has 2 pairs of petals, 3 large sepals (outer petals), known as the 'falls', and 3 inner, smaller petals (or tepals), known as the 'standards'. The falls are obovate or cuneate (wedge-shaped), with a white haft (the section closest to the stem) that has bronzy-purple veins or lines. In the centre of each fall is a row of hairs called a beard, which is yellow or orange-yellow at the base, turning white at the front of the petal. The standards are obovate or unguiculate (claw-shaped); they are paler than the falls, and have a pale haft that is also marked with bronzy-purple.
It has a long perianth tube, which is wider and shorter than that of Iris cypriana. It has a rounded ovary, blue-purple style arms, violet crests, white filaments and cream anthers.
After the iris has flowered, it produces an oblong or trigonal seed capsule. Inside the capsule are large, pyriform (pear-shaped), brown, wrinkled seeds.
Genetics
Since most irises are diploid, having two sets of chromosomes, chromosome count can be used to identify hybrids and to classify groupings.
Iris mesopotamica is a tetraploid iris, which has developed from an autoploid.
Its chromosome count was recorded by Sturtevant and Randolph in 1945 as 2n=45.
Taxonomy
It is commonly known as 'Mesopotamian iris', or 'Aram Naharayim iris', Aram Naharayim being the old Hebrew name for Mesopotamia.
It is sometimes called 'Mardin Iris', which is also a common name for Iris germanica.
It is known in Hebrew as אִירוּס אֲרַם-נַהֲרַיִם
It is written in Arabic as سوسن عراقي (meaning 'Iraqi iris').
The Latin specific epithet mesopotamica refers to the former region of Mesopotamia, which equates to the current countries of Iraq, Syria and Kuwait.
In the 1800s, Michael Foster was sent several rhizomes of wild plants collected in Turkey and the eastern part of the Mediterranean. These included Iris cypriana Foster & Baker and Iris trojana A. Kerner ex Stapf.
Several iris rhizomes were then sent to William Dykes at Charterhouse School (in Surrey) from Mardin in Armenia, by another Charterhouse schoolteacher. Some were later classified as Iris gatesii, and others were then named and described as Iris mesopotamica by Dykes.
It was first published and described by William Rickatson Dykes in his book, 'The Genus Iris' (Gen. Iris), page 176, in 1913.
It was also published in The Gardeners' Chronicle, Vol. 73, page 237, on 21 October 1922 (with an illustration).
Later, Brian Mathew broadened Iris germanica to include other tall 48-chromosome tetraploids, including Iris cypriana, Iris mesopotamica, and Iris trojana. Iris kashmiriana and Iris croatica are also connected with this group.
Some authors still regard Iris mesopotamica as a form of Iris germanica, but others disagree.
It is not completely known whether this is a true natural species of iris or a cultivar.
In the iris trade, it is often confused with Iris cypriana and with Iris trojana (which is commonly listed as a synonym of Iris germanica).
It was verified by United States Department of Agriculture and the Agricultural Research Service on 4 April 2003 and then updated on 1 March 2007.
It is listed in the Encyclopedia of Life, as a synonym of Iris germanica.
It is listed as a synonym of Iris germanica by The Plant List.
Iris mesopotamica is listed as a synonym by the RHS.
Distribution and habitat
It is native to the Middle East, or Levant (eastern Mediterranean), region of Asia.
Range
It is found in Turkey (including the region of Hatay Province), Syria, and Israel (within Mount Hermon, Galilee, and the Golan). It is endemic in Israel.
It was formerly also found in Armenia and Cyprus, but no longer occurs there.
Paul Mouterde (French botanist, 1892–1972) stated that wild populations exist in the mountains of northern Syria.
Habitat
It grows on dry rocky slopes, in grasslands, and in semi-steppe shrublands.
Conservation
It was thought not to grow wild anywhere apart from Israel. Populations can be found on Mount Hermon, where it is listed as common; on Mount Gilboa and in the Bet Shean Valley it is listed as very rare.
These populations are all protected.
Cultivation
It is hardy to European Zone H2, meaning hardy to −15 to −20 °C (5 to −4 °F), or RHS hardiness rating H5 (−15 to −10 °C (5 to 14 °F)).
It prefers well drained soils, but can tolerate heavy soils.
It prefers positions in full sun.
The rhizomes can be susceptible to 'iris root rot', and the leaves may be affected by leaf spot (Heterosporium gracile).
The leaves can also be eaten by slugs and snails.
Dykes recommended planting between August and September.
It can be found for sale in some specialised nurseries in Europe.
Propagation
Irises can generally be propagated by division or from seed.
It sometimes produces seedlings with tall, widely branching stems that are too weak to hold up the flowers.
Hybrids and cultivars
Michael Foster was the first to use the species in hybridisation, crossing it with Iris germanica to create larger plants. In the early 20th century, William Mohr and Sydney B. Mitchell (of California) used the iris in breeding programmes for tall bearded varieties.
The first tetraploid forms appeared in 1900; by 1943 there were up to 145 diploid, 23 triploid and 247 tetraploid cultivars.
Known Iris mesopotamica cultivars include Iris 'Ricardi' and Iris 'Ricardi Alba'.
Known Iris mesopotamica crosses include:
Iris lutescens X Iris mesopotamica – 'Autumn Gleam'
Iris mesopotamica X Iris germanica – 'Eglamour', 'Father Time' and 'Mme. Claude Monet'
Iris mesopotamica X Iris pallida – 'Andree Autissier', 'Blanc Bleute', 'Carthusian', 'Mlle. Jeanne Bel' and 'Mlle. Schwartz'
Iris iberica X Iris mesopotamica – 'Ib-Ric'
Cultivar 'Purissima' (Stern 1946) comes from Iris cypriana x Iris pallida and Iris 'Juniata' x Iris mesopotamica
Toxicity
Like many other irises, most parts of the plant are poisonous (rhizome and leaves); if mistakenly ingested they can cause stomach pains and vomiting. Handling the plant may also cause skin irritation or an allergic reaction.
Uses
Iris mesopotamica has been used in the past in folk medicine for various purposes, including treating animal bites and poisonings, haemorrhoids, sexual diseases, internal diseases, inflammations and skin diseases.
The rhizomes also contain plenty of starch, as well as isoflavones and essential oils, which are used in perfumery, similar to Iris florentina.
Culture
In the past, up to hundreds of years ago, Arabs and Muslims in the Levant planted Iris albicans (another white-flowering bearded iris) and Iris mesopotamica as ornamentals beside graves in cemeteries and graveyards, including in Israel, Palestine, North Africa and Syria (since the 16th century). Some graveyards and cemeteries were later abandoned, allowing the iris to become naturalised at some sites.
References
Sources
Danin, A. 2004. Distribution atlas of plants in the Flora Palaestina area.
Mathew, B. 1981. The Iris. 27.
Zohary, M. & N. Feinbrun-Dothan. 1966–. Flora palaestina.
External links
Has many images of the iris flowers
Images of the iris in Lebanon
mesopotamica
Plants described in 1913
Garden plants
Flora of Turkey
Flora of Syria
Flora of Israel
Flora of Palestine (region)
Medicinal plants
|
16124187
|
https://en.wikipedia.org/wiki/Curtis%20Youel
|
Curtis Youel
|
Curtis Youel (June 8, 1911 – August 3, 1968) was an American football player and coach. He was the head football coach of Santa Monica City College from 1936 to 1954 and its athletic director until 1968.
Collegiate athletic career
Youel played for Howard Jones' Thundering Herd from 1931 to 1933. The USC Trojans won two national championships in a row in 1931 and 1932. Youel played the position of center and lettered all three years.
The 1932 team reportedly had the best defense in the history of the program, with the defensive unit allowing only two touchdowns all season. The defensive line consisted of All-American Aaron Rosenberg, Tay Brown, Ernie Smith, J. Dye, Byron Gentry, Ray Sparling, Robert Erskine, Curt Youel, and Julius Bescos. Curtis Youel wore number 35 and is on the list of all-time 35s as noted on the Tribute to Troy website and the USC alumni site. The Trojans beat Pittsburgh in the 1933 Rose Bowl, 35–0, completing a record defensive year.
Youel also lettered in baseball in the 1932 season. He played first base. He later turned down a professional baseball contract with the Chicago White Sox to coach instead, according to his son Bradley.
Coaching career
He also coached baseball and golf. His golf teams were renowned in the 1950s, winning more than 100 matches while losing six, according to an August 1968 column, "Follow the Ball," by sports editor Carl White in the Santa Monica Evening Outlook.
References
External links
1911 births
1968 deaths
Santa Monica Corsairs football coaches
USC Trojans football players
USC Trojans baseball players
Baseball first basemen
|
923805
|
https://en.wikipedia.org/wiki/Fle3
|
Fle3
|
Fle3 is a Web-based learning environment or virtual learning environment. More precisely, Fle3 is server software for computer supported collaborative learning (CSCL).
Fle3 is designed to support learner and group centered work that concentrates on creating and developing expressions of knowledge (i.e. knowledge artefacts). Fle3 supports study groups to implement knowledge building, creative problem solving and scientific method in an inquiry learning process, for example the progressive inquiry method.
The Fle3 user interface has been translated into more than 20 languages, including most European languages and Chinese. Fle3 is used in more than 70 countries.
Fle3 is a Zope product, written in Python. Fle3 is open-source free software released under the GNU General Public License (GPL).
Origin of the name
The abbreviation FLE comes from the words Future Learning Environment. The number 3 in the name refers to the number of times the software has been built from scratch.
Components
Fle3 provides various tab-accessed screens affording functionality considered important for progressive inquiry.
Fle3 WebTop is the interface that scaffolds students in storing, organizing and sharing their own knowledge resources, including documents, files, links and notes.
Fle3 Knowledge Building is the interface that scaffolds students' knowledge building (and associated) discourse. The Knowledge Building tool provides Knowledge Types to scaffold and structure the process.
Fle3 Jamming is the interface that scaffolds members of a group in collaboratively constructing and improving digital artifacts such as pictures, text, audio and video. Key features include the ability of a user to upload an artifact, assign sharing permissions, and view a graphical representation of the artifact's versioning history.
There are also interfaces that scaffold teachers and administrators in managing users and analyzing student/project activity.
Knowledge type sets
An important feature of the Knowledge Building tool is the knowledge type sets that scaffold and structure the discussions. For instance, for progressive inquiry learning, teachers may use a knowledge type set designed for this purpose. The Progressive Inquiry knowledge type set contains the following five knowledge types: Problem, My Explanation, Scientific Explanation, Evaluation of the Process and Summary. Every time a pupil posts something to the discussion, she must choose which knowledge type her note represents.
The knowledge types guide pupils to think about relevant and important aspects of the process, and in this way help them write more substantial notes to the discussion forum. As an aid to following the knowledge building discourse, users may take different views of the discussion by sorting the notes as a discussion thread, by writer, by knowledge type, or by date.
The Knowledge Building tool contains two default knowledge type sets: (1) Progressive Inquiry, and (2) Design Thinking. Depending on the selected knowledge type set, users get guidelines and a checklist on how to write their notes to the discussion forum. Each knowledge type is also color-coded, making them quick to recognize and learn.
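As an illustration of this posting rule, the following is a minimal sketch in Python (not Fle3's actual Zope code; the class and method names are invented for the example) of a discussion in which every note must carry one type from the active knowledge type set:

from dataclasses import dataclass, field

# The five knowledge types of the Progressive Inquiry set described above.
PROGRESSIVE_INQUIRY = frozenset({
    "Problem", "My Explanation", "Scientific Explanation",
    "Evaluation of the Process", "Summary",
})

@dataclass
class Note:
    author: str
    knowledge_type: str
    text: str

@dataclass
class Discussion:
    knowledge_type_set: frozenset = PROGRESSIVE_INQUIRY
    notes: list = field(default_factory=list)

    def post(self, author, knowledge_type, text):
        # Reject notes whose type is not in the active set, mirroring how
        # Fle3 requires each pupil to classify a note before posting it.
        if knowledge_type not in self.knowledge_type_set:
            raise ValueError(f"Unknown knowledge type: {knowledge_type!r}")
        self.notes.append(Note(author, knowledge_type, text))

    def by_type(self, knowledge_type):
        # One of the sorting views mentioned above: filter notes by type.
        return [n for n in self.notes if n.knowledge_type == knowledge_type]

A note posted with, say, the type "Problem" would be accepted, while an unclassified or mistyped note would be rejected.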
History
Development of Fle3 software started in 1998 in the Future Learning Environment research and development project at Media Lab, Helsinki, Finland. Fle3 software is based on the Future Learning Environment concept, which promotes a learning process that differs from traditional teacher- and didactic-based teaching by emphasising students' active role in the learning process.
The objective was to study alternative approaches of using information and communication technologies (ICT) in teaching and learning, and to design alternative learning practices and tools. At that time, the strong e-learning movement was seen to promote rather naïve conception of human learning. The acquisition metaphor of learning, which emphasizes learning as a process where students are supplied with pieces of knowledge, was getting stronger.
The researchers of the FLE project brought to the discussion more advanced conceptions of learning, such as the participation metaphor and the knowledge creation metaphor. However, the objective of the FLE project was not only to bring new theoretical approaches to the discussion, but also to design learning practices and technology based on the theories. The results of this work culminated in the progressive inquiry learning model and in the Fle3 software.
The history of virtual learning environments shows how e-learning and Learning Management Systems (LMS), with their course-centred outlook and focus on delivery of learning content, were, and probably still are, the mainstream approach to the use of ICT in education. However, in the mid-2000s the growing popularity of blogs and wikis, and the consideration of how they could be used in teaching and learning, led the mainstream research and development community of virtual learning to reconsider the existing e-learning paradigms.
Fle-Tools (1998–1999)
In the late 1990s the research, development and design team members of the first FLE project were greatly influenced by the work of Carl Bereiter and Marlene Scardamalia, and their concept of Knowledge building. In the design of FLE software, Bereiter's and Scardamalia's Computer-Supported Intentional Learning Environment (CSILE) was used as a reference.
However, there are also many differences between the two systems. For instance, already in the early design specifications of FLE there was the idea of shared artefacts that are collaboratively constructed alongside the knowledge building activities. In FLE vocabulary the activity, and the tool supporting it, is called "jamming". The idea of archiving the results of the study work and making them available to other study groups also makes FLE very different from CSILE, which was primarily designed to be a system used in a classroom. Another difference is related to the Web. From the very beginning FLE has been a web-based system and has taken advantage of that flexible platform, whereas the original CSILE was a client-server system.
The first prototype server with FLE software was set up in 1998 and announced in early 1999. The software was named Fle-Tools. The software was designed at Media Lab in Helsinki but programmed by a Finnish company called NSD Consulting Oy.
Fle-Tools was developed in a Future Learning Environment project funded by Tekes - the National Technology Agency of Finland. The project partners were Media Lab of the University of Art and Design Helsinki (coordinator), Centre for Research on Networked Learning and Knowledge Building in the Department of Psychology of the University of Helsinki, Finnish Ministry of Education, Finnish new media company Grey Interactive, Finnish teleoperator Sonera, and Finnish educational publisher SanomaWSOY.
Fle-Tools was described as: (1) a www-based service for computer supported collaborative learning (CSCL); (2) an on-line learning community and teamwork environment; (3) a collection of server-based applications and databases; and (4) cross-platform for end users (www-browser in Linux, Mac, Win PC, WebTV, Nokia Communicator, etc.). The tools of Fle-Tools were:
WebTop: Personal open desk top in the web to store and share digital materials;
Knowledge Building: Asynchronous conferencing system with 'Categories of Inquiry' and different searching capabilities (by date, person, category of inquiry, users own notes, answers to own notes);
Jam Session: Asynchronous multi-user environment for collaborative design, writing, software development, etc.;
Library: Adaptive medium to publish and browse multimedia course materials
Administration: Tools for administering users, groups, courses and course materials.
In 1999 Fle-Tools was tested in several university courses at the University of Helsinki and the University of Art and Design Helsinki. The results from the pilots were published in scientific conferences and journals.
At the end of 1999, the Finnish operator Sonera, a key partner in the Future Learning Environment project, decided to withdraw from the project and started to develop its own product based on Fle-Tools. This resulted in the breakdown of the original Future Learning Environment research and development project.
Fle2 (1999–2001)
The next generation of FLE software was developed during the years 1999–2001. Because of the collapse of the original Future Learning Environment project consortium in 1999, the research and design partners felt that they had to continue FLE development on their own.
In 2000 the Fle2 project got funding from the Nordic Council of Ministers NordUnet 2 program. The partners of the project were Media Lab in Helsinki, Department of Communication, Journalism and Computer Science of Roskilde University in Denmark, and Department of Psychology of the University of Helsinki.
The new software was named Fle2 and released online for free downloading in April 2001. Fle2 was based on the design of Fle-Tools, but this time the software development and programming were carried out at Media Lab Helsinki. Fle2 was built on top of the BSCW (Basic Support for Cooperative Work) software developed by the Fraunhofer Society in Germany.
The tools of Fle2 were personal WebTops and Knowledge Building. Fle2 did not have the Jam Session or Library tools of the earlier Fle-Tools. The Fle2 WebTop had two new features compared to the earlier Fle-Tools: (1) "yellow notes", which made it possible to leave notes on other people's WebTops when visiting them, and (2) a chat with whiteboard for chatting and drawing with other users online.
In 2002 the design and development team in Helsinki realized that BSCW was not the right platform for developing the FLE software. The main reason for giving up the development of Fle2 on top of BSCW was the fact that BSCW was not open-source/free software.
Fle3 (2002–2006)
Fle3 is the third and the latest version of FLE software. The first version of Fle3 was released on February 15, 2002. The latest version 1.5 was released in April 2005.
Fle3 was largely developed in the Innovative Technology for Collaborative Learning and Knowledge Building (ITCOLE) project, funded by the European Commission in the Information Society Technologies (IST) framework's 'School of Tomorrow' program.
The ITCOLE project was coordinated by the Media Lab in Helsinki. The technical development partners included the developers of BSCW – Basic Support for Cooperative Work in Fraunhofer Society and Department of Computer Science of the University of Murcia in Spain. Testing of different software in schools was coordinated by Helsinki City Education Department and pedagogical research was carried out by researcher from University of Helsinki, University of Amsterdam, University of Salerno, University of Rome La Sapienza, University of Athens and University of Utrecht.
In the ITCOLE project there were several parallel software development projects. At first, Fle3 was developed at Media Lab in Helsinki as a user interface and interaction demonstration for the main software development taking place at Fraunhofer and based on their BSCW system. The University of Murcia's main task was to develop the synchronous communication tools, which were then integrated experimentally into BSCW and Fle3. The software based on BSCW was at first named Synergeia and later BSCL (Basic Support for Collaborative Learning).
During the course of the project, Fle3 was found user friendly, accessible, and technically reliable enough for wider use. It thus ended up being not only a UI and interaction prototype but also a software product in its own right. Ultimately, Fle3 became one of the main results of the ITCOLE project.
There has been no noticeable development in the Fle3 project since 2006.
Fle4 (2009–2015)
The core component of FLE3, the scaffolded knowledge-building discussion board, was re-created as a simple plugin for WordPress, the ubiquitous open-source web publishing system. FLE4 was not intended to be a new system, but instead a re-implementation of the most important element of the previous system in a very user-friendly technology. Two sets of scaffolds are available in FLE4: Knowledge Building with its 5 knowledge types and de Bono's Six Hat Thinking with its 6 knowledge types.
In 2013, a map view was added, providing an automatically generated 2D view of the discussion in which users can both view and post. While FLE4 is not under active development (2015), occasional updates and bug fixes are released.
References
Free learning support software
Learning management systems
Learning
Pedagogy
|
50407274
|
https://en.wikipedia.org/wiki/Client%20hypervisor
|
Client hypervisor
|
In computing, a client hypervisor is a hypervisor designed for use on client computers such as laptops, desktops or workstations, rather than on servers. It is a technique of host virtualization that enables the parallel execution of multiple operating systems (or virtual machines) on shared hardware. These guest systems may be used for a wide variety of tasks normally performed by dedicated physical computer systems. Client hypervisors are included in cloud computing and IaaS (Infrastructure as a Service) designs. Some well-known client hypervisors are VMware Workstation, VirtualBox and VirtualPC. Client hypervisors are categorized into two types:
Type 1 (Bare metal): this type of client hypervisor runs directly on the host machine's hardware and serves as the host operating system, providing hardware access to guests via its own drivers. It also creates a layer above the hardware that allocates system resources to all installed virtual machines.
Type 2 (Virtualized): this type of client hypervisor operates inside the host operating system as a stand-alone application and invokes the host operating system for access to the physical computer's resources.
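As an aside, a guest operating system can often query which hypervisor it is running under. The following minimal sketch in Python (assuming a Linux guest with systemd; the helper function name is invented for the example) shells out to the standard systemd-detect-virt utility:

import subprocess

def detect_hypervisor() -> str:
    """Return the detected VM technology (e.g. 'kvm', 'oracle', 'vmware'), or 'none'."""
    result = subprocess.run(
        ["systemd-detect-virt", "--vm"],  # limit detection to VMs, not containers
        capture_output=True, text=True,
    )
    # The utility prints the technology name and exits non-zero when
    # no virtualization is detected.
    return result.stdout.strip() or "none"

print(detect_hypervisor())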
References
Virtualization software
|
4765152
|
https://en.wikipedia.org/wiki/Worldwide%20Military%20Command%20and%20Control%20System
|
Worldwide Military Command and Control System
|
The Worldwide Military Command and Control System, or WWMCCS , was a military command and control system implemented for the command and control of the United States military. It was created in the days following the Cuban Missile Crisis. WWMCCS was a complex of systems that encompassed the elements of warning, communications, data collection and processing, executive decision-making tools and supporting facilities. It was decommissioned in 1996 and replaced by the Global Command and Control System.
Background
The worldwide deployment of U.S. forces required extensive long-range communications systems that could maintain contact with all of those forces at all times. To enable national command authorities to exercise effective command and control of their widely dispersed forces, a communications system was established to enable those authorities to disseminate their decisions to all subordinate units, under any conditions, within minutes.
Such a command and control system, WWMCCS, was created by Department of Defense Directive S-5100.30, titled "Concept of Operations of the Worldwide Military Command and Control System," which set the overall policies for the integration of the various command and control elements that were rapidly coming into being in the early 1960s.
As initially established, WWMCCS was an arrangement of personnel, equipment (including Automated Data Processing equipment and hardware), communications, facilities, and procedures employed in planning, directing, coordinating, and controlling the operational activities of U.S. military forces.
This system was intended to provide the President and the Secretary of Defense with a means to receive warning and intelligence information, assign military missions, provide direction to the unified and specified commands, and support the Joint Chiefs of Staff in carrying out their responsibilities. The directive establishing the system stressed five essential system characteristics: survivability, flexibility, compatibility, standardization, and economy.
Problems
Despite the original intent, WWMCCS never realized the full potential that had been envisioned for the system. The services' approach to WWMCCS depended upon the availability of both technology and funding to meet individual requirements, so no truly integrated system emerged. Indeed, during the 1960s, WWMCCS consisted of a loosely knit federation of nearly 160 different computer systems, using 30 different general purpose software systems at 81 locations. One study claimed that WWMCCS was "more a federation of self-contained subsystems than an integrated set of capabilities."
The problems created by these diverse subsystems were apparently responsible for several well-publicized failures of command and control during the latter part of the 1960s.
During hostilities between Israel and Egypt in June 1967, the USS Liberty, a naval reconnaissance ship, was ordered by the JCS to move further away from the coastlines of the belligerents. Five high-priority messages to that effect were sent to the Liberty, but none arrived for more than 13 hours. By that time the ship had become the victim of an attack by Israeli aircraft and patrol boats that killed 34 Americans.
A congressional committee investigating this incident concluded, "The circumstances surrounding the misrouting, loss and delays of those messages constitute one of the most incredible failures of communications in the history of the Department of Defense."
Furthermore, the demands for communications security (COMSEC) frustrated upgrades and remote site computer and wiring installation. TEMPEST requirements of the Cold War day required both defense from wire tapping and electromagnetic signal intercept, special wire and cabinet shielding, physical security, double locks, and special access passes and passwords.
Growth and development
The result of these various failures was a growth in the centralized management of WWMCCS, occurring at about the same time that changing technology brought in computers and electronic displays.
For example, 27 command centers were equipped with standard Honeywell 6000 computers and common programs so there could be a rapid exchange of information among the command centers.
An Assistant Secretary of Defense for Telecommunications was established, and a 1971 DOD directive gave that person the primary staff responsibility for all WWMCCS-related systems. That directive also designated the Chairman of the Joint Chiefs of Staff as the official responsible for the operation of WWMCCS.
The Worldwide Military Command and Control System (WWMCCS) Intercomputer Network (WIN) was a centrally managed information processing and exchange network consisting of large-scale computer systems at geographically separate locations, interconnected by a dedicated wide-band, packet-switched communications subsystem. The architecture of the WIN consisted of WWMCCS-standard AN/FYQ-65(V) host computers and their WIN-dedicated Honeywell 6661 Datanets and Datanet 8's connected through Bolt Beranek and Newman, Inc. (BBN) C/30 and C/30E packet switching computers called Packet Switching Nodes (PSNs) and wideband, encrypted, dedicated, data communications circuits.
Modernization
By the early 1980s, it was time to modernize this system. The replacement, proposed by the Deputy Secretary of Defense, was an evolutionary upgrade program known as the WWMCCS Information System [WIS], which provided a range of capabilities appropriate for the diverse needs of the WWMCCS sites.
During Operations Desert Shield and Desert Storm, WWMCCS performed flawlessly 24 hours a day, seven days a week, providing critical data to combat commanders worldwide in deploying, relocating and sustaining allied forces.
However, WWMCCS was dependent on a proprietary mainframe environment. Information could not be easily entered or accessed by users, and the software could not be quickly modified to accommodate changing mission requirements. Operational flexibility and adaptability were limited, since most of the information and software were stored on the mainframe. The system architecture was unresponsive, inflexible, and expensive to maintain.
This new WWMCCS Information System configuration continued to be refined until 1992 when the Assistant Secretary of Defense for Command, Control, Communications, and Intelligence terminated this latest attempt to modernize the WWMCCS ADP equipment.
The continuing need to meet established requirements that could not be fulfilled, coupled with growing dissatisfaction among users with the existing WWMCCS system, drove the conceptualization of a new system, called GCCS.
On August 30, 1996, Lieutenant General Albert J. Edmonds, Director, Defense Information Systems Agency, officially deactivated the Worldwide Military Command and Control System (WWMCCS) Intercomputer Network (WIN). Concurrently, the Joint Staff declared the Global Command and Control System (GCCS) as the joint command and control system of record.
Computer hardware
Honeywell 6000 Series
The Air Force Systems Command’s Electronic Systems Division awarded a fixed-price, fixed-quantity contract to Honeywell Information Systems, Inc. for 46 million dollars on 15 October 1971. The contract included 35 Honeywell 6000 series systems, some having multiple processors. System models from the H-6060 through the H-6080 were acquired. They ran a specially secured variant of Honeywell’s General Comprehensive Operating Supervisor (GCOS), and for years the vendor maintained and enhanced both the commercial GCOS and the "WWMCCS" GCOS in parallel. Digital transmissions were secured (aka 'scrambled') using Secure Telephone Unit (STU) or Secure Telephone Element modems.
Network
Prototype WWMCCS Network
The Joint Chiefs of Staff issued JCS Memorandum 593-71, "Research, Development, Test, and Evaluation Program in Support of the Worldwide Military Command and Control Standard System," in September 1971. The memorandum proposed what they called a Prototype WWMCCS Intercomputer Network (PWIN), pronounced "pee-win". PWIN was created to test the operational benefits of networking WWMCCS. If the prototype proved successful, it would provide a baseline for an operational network. These experiments were conducted from 1971 to 1977.
PWIN included three sites at the Pentagon, Reston, Virginia, and Norfolk, Virginia. The sites included Honeywell H6000 computers, Datanet 355 front end processors and local computer terminals for system users. Connections were provided for remote terminals using microwave, cable, satellite, or landline connections. The PWIN network was based on technology supplied by BBN Technologies and experience gained from the ARPANET. Honeywell H716 computers, used as Interface Message Processors (IMPs), provided packet switching to network the PWIN sites together. The TELNET protocol was made available to the WWMCCS community for the first time to access remote sites.
The first comprehensive test plan for PWIN was approved on 29 October 1973. On 4 September 1974, the Joint Chiefs recommended that the prototype network be expanded from three sites to six. The recommendation was approved on 4 December 1974. The new sites included the Alternate National Military Command Center; the Military Airlift Command at Scott AFB; and the US Readiness Command headquarters at MacDill AFB.
Testing was conducted in 1976, in two rounds called Experiment 1 and Experiment 2. Experiment 1, held in September, took a crisis scenario borrowed from a previous exercise and provided a controlled environment to test PWIN. Experiment 2 was held in October, during an exercise called Elegant Eagle 76. Experiment 2 was less controlled, so as to provide information about PWIN's ability to handle user demands during a crisis. The results of the experiments were mixed.
Another test, called Prime Target 77, was conducted during the spring of 1977. It added two new sites and had even more problems than Experiments 1 and 2. Ultimately, operational requirements trumped the problems, and development of an operational network was recommended during 1977. The Joint Chiefs of Staff approved PWIN's operational requirements on 18 July 1977. PWIN expanded to include a number of other WWMCCS sites and became the operational WWMCCS Intercomputer Network (WIN). Six initial WIN sites in 1977 increased to 20 sites by 1981.
References
Pearson, David E., The World Wide Military Command and Control System, Maxwell Air Force Base, Alabama: Air University Press., 2000.
External links
WWMCCS Worldwide Military Command and Control System (globalsecurity.org)
C2 Policy Evolution at the U.S. Department of Defense, David Dick and John D. Comerford
The Worldwide Military Command and Control System: Evolution and Effectiveness, David E. Pearson
Annotated bibliography on nuclear command and control from the Alsos Digital Library for Nuclear Issues
Command and control in the United States Department of Defense
Military communications of the United States
United States nuclear command and control
1996 disestablishments in the United States
Command and control systems of the United States military
|
337220
|
https://en.wikipedia.org/wiki/Personal%20identification%20number
|
Personal identification number
|
A personal identification number (PIN), or sometimes redundantly a PIN number or PIN code, is a numeric (sometimes alpha-numeric) passcode used in the process of authenticating a user accessing a system.
The PIN has been the key to facilitating the private data exchange between different data-processing centers in computer networks for financial institutions, governments, and enterprises. PINs may be used to authenticate banking systems with cardholders, governments with citizens, enterprises with employees, and computers with users, among other uses.
In common usage, PINs are used in ATM or POS transactions, secure access control (e.g. computer access, door access, car access), internet transactions, or to log into a restricted website.
History
The PIN originated with the introduction of the automated teller machine (ATM) in 1967, as an efficient way for banks to dispense cash to their customers. The first ATM system was that of Barclays in London, in 1967; it accepted cheques with machine-readable encoding, rather than cards, and matched the PIN to the cheque. In 1972, Lloyds Bank issued the first bank card to feature an information-encoding magnetic strip, using a PIN for security. James Goodfellow, the inventor who patented the first personal identification number, was awarded an OBE in the 2006 Queen's Birthday Honours.
Mohamed M. Atalla invented the first PIN-based hardware security module (HSM), dubbed the "Atalla Box," a security system that encrypted PIN and ATM messages and protected offline devices with an un-guessable PIN-generating key. In 1972, Atalla filed a patent for his PIN verification system, which included an encoded card reader and described a system that utilized encryption techniques to assure telephone link security while entering personal ID information that was transmitted to a remote location for verification.
He founded Atalla Corporation (now Utimaco Atalla) in 1972, and commercially launched the "Atalla Box" in 1973. The product was released as the Identikey. It was a card reader and customer identification system, providing a terminal with plastic card and PIN capabilities. The system was designed to let banks and thrift institutions switch to a plastic card environment from a passbook program. The Identikey system consisted of a card reader console, two customer PIN pads, intelligent controller and built-in electronic interface package. The device consisted of two keypads, one for the customer and one for the teller. It allowed the customer to type in a secret code, which is transformed by the device, using a microprocessor, into another code for the teller. During a transaction, the customer's account number was read by the card reader. This process replaced manual entry and avoided possible key stroke errors. It allowed users to replace traditional customer verification methods such as signature verification and test questions with a secure PIN system. In recognition of his work on the PIN system of information security management, Atalla has been referred to as the "Father of the PIN".
The success of the "Atalla Box" led to the wide adoption of PIN-based hardware security modules. Its PIN verification process was similar to the later IBM 3624. By 1998 an estimated 70% of all ATM transactions in the United States were routed through specialized Atalla hardware modules, and by 2003 the Atalla Box secured 80% of all ATM machines in the world, increasing to 85% as of 2006. Atalla's HSM products protect 250million card transactions every day as of 2013, and still secure the majority of the world's ATM transactions as of 2014.
Financial services
PIN usage
In the context of a financial transaction, usually both a private "PIN code" and public user identifier are required to authenticate a user to the system. In these situations, typically the user is required to provide a non-confidential user identifier or token (the user ID) and a confidential PIN to gain access to the system. Upon receiving the user ID and PIN, the system looks up the PIN based upon the user ID and compares the looked-up PIN with the received PIN. The user is granted access only when the number entered matches the number stored in the system. Hence, despite the name, a PIN does not personally identify the user. The PIN is not printed or embedded on the card but is manually entered by the cardholder during automated teller machine (ATM) and point of sale (POS) transactions (such as those that comply with EMV), and in card not present transactions, such as over the Internet or for phone banking.
PIN length
The international standard for financial services PIN management, ISO 9564-1, allows for PINs from four up to twelve digits, but recommends that for usability reasons the card issuer not assign a PIN longer than six digits. The inventor of the ATM, John Shepherd-Barron, had at first envisioned a six-digit numeric code, but his wife could only remember four digits, and that has become the most commonly used length in many places, although banks in Switzerland and many other countries require a six-digit PIN.
PIN validation
There are several main methods of validating PINs. The operations discussed below are usually performed within a hardware security module (HSM).
IBM 3624 method
One of the earliest ATM models was the IBM 3624, which used the IBM method to generate what is termed a natural PIN. The natural PIN is generated by encrypting the primary account number (PAN), using an encryption key generated specifically for the purpose. This key is sometimes referred to as the PIN generation key (PGK). This PIN is directly related to the primary account number. To validate the PIN, the issuing bank regenerates the PIN using the above method, and compares this with the entered PIN.
Natural PINs cannot be user selectable because they are derived from the PAN. If the card is reissued with a new PAN, a new PIN must be generated.
Natural PINs allow banks to issue PIN reminder letters as the PIN can be generated.
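The derivation can be sketched as follows in Python — a simplified illustration using the pycryptodome library, not IBM's exact specification: the decimalization table, validation-data layout and key are placeholders, and real systems perform this inside an HSM:

from Crypto.Cipher import DES

# Placeholder decimalization table: hex digits A–F wrap around to 0–5.
DECIMALIZATION = str.maketrans("0123456789ABCDEF", "0123456789012345")

def natural_pin(pan: str, pin_generation_key: bytes, pin_length: int = 4) -> str:
    # Build 16 hex digits of validation data from the account number.
    validation_data = pan[-16:].rjust(16, "0")
    # Encrypt the validation data with the PIN generation key (PGK).
    ciphertext = DES.new(pin_generation_key, DES.MODE_ECB).encrypt(
        bytes.fromhex(validation_data))
    # Decimalize the result and take the leftmost digits as the natural PIN.
    return ciphertext.hex().upper().translate(DECIMALIZATION)[:pin_length]

Because the PIN is a pure function of the PAN and the key, the issuer can regenerate it at will — which is what makes both validation and PIN reminder letters possible, as noted above.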
IBM 3624 + offset method
To allow user-selectable PINs it is possible to store a PIN offset value. The offset is found by subtracting the natural PIN from the customer-selected PIN using modulo 10. For example, if the natural PIN is 1234, and the user wishes to have a PIN of 2345, the offset is 1111.
The offset can be stored either on the card track data, or in a database at the card issuer.
To validate the PIN, the issuing bank calculates the natural PIN as in the above method, then adds the offset and compares this value to the entered PIN.
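The offset arithmetic is simple digit-wise modulo-10 subtraction and addition, as this short sketch shows (illustrative code, using the example values from the text above):

def pin_offset(selected_pin: str, natural_pin: str) -> str:
    # Digit-wise (selected - natural) mod 10.
    return "".join(str((int(s) - int(n)) % 10)
                   for s, n in zip(selected_pin, natural_pin))

def validate(entered_pin: str, natural_pin: str, offset: str) -> bool:
    # Digit-wise (natural + offset) mod 10 must match the entered PIN.
    derived = "".join(str((int(n) + int(o)) % 10)
                      for n, o in zip(natural_pin, offset))
    return entered_pin == derived

assert pin_offset("2345", "1234") == "1111"   # the example above
assert validate("2345", "1234", "1111")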
VISA method
The VISA method is used by many card schemes and is not VISA-specific. The VISA method generates a PIN verification value (PVV). Similar to the offset value, it can be stored on the card's track data, or in a database at the card issuer. This is called the reference PVV.
The VISA method takes the rightmost eleven digits of the PAN excluding the checksum value, a PIN validation key index (PVKI, chosen from one to six; a PVKI of 0 indicates that the PIN cannot be verified through PVS), and the required PIN value to make a 64-bit number; the PVKI selects a validation key (PVK, of 128 bits) to encrypt this number. From this encrypted value, the PVV is found.
To validate the PIN, the issuing bank calculates a PVV value from the entered PIN and PAN and compares this value to the reference PVV. If the reference PVV and the calculated PVV match, the correct PIN was entered.
Unlike the IBM method, the VISA method doesn't derive a PIN. The PVV value is used to confirm that the PIN entered at the terminal was also the one used to generate the reference PVV. The PIN used to generate a PVV can be randomly generated, user-selected, or even derived using the IBM method.
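A simplified sketch of the PVV calculation in Python (assuming pycryptodome's triple-DES for the 128-bit PVK; key handling is reduced to a bare function argument, whereas real deployments keep the PVK inside an HSM):

from Crypto.Cipher import DES3

def pvv(pan: str, pvki: str, pin: str, pvk: bytes) -> str:
    # TSP: rightmost 11 PAN digits excluding the check digit + PVKI + PIN,
    # giving 16 hex digits (64 bits).
    tsp = pan[-12:-1] + pvki + pin
    ciphertext = DES3.new(pvk, DES3.MODE_ECB).encrypt(bytes.fromhex(tsp))
    hex_result = ciphertext.hex().upper()
    # First pass collects decimal digits in order; a second pass maps any
    # remaining A–F to 0–5. The first four digits form the PVV.
    digits = [c for c in hex_result if c.isdigit()]
    digits += [str(int(c, 16) - 10) for c in hex_result if not c.isdigit()]
    return "".join(digits[:4])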
PIN security
Financial PINs are often four-digit numbers in the range 0000–9999, resulting in 10,000 possible combinations. Banks in Switzerland issue six-digit PINs by default.
Some systems set up default PINs and most allow the customer to set up a PIN or to change the default one, and on some a change of PIN on first access is mandatory. Customers are usually advised not to set up a PIN based on their or their spouse's birthdays, on driver license numbers, consecutive or repetitive numbers, or other such schemes. Some financial institutions do not give out or permit PINs where all digits are identical (such as 1111, 2222, ...), consecutive (1234, 2345, ...), numbers that start with one or more zeroes, or the last four digits of the cardholder's social security number or birth date.
Many PIN verification systems allow three attempts, thereby giving a card thief a putative 0.03% probability of guessing the correct PIN before the card is blocked. This holds only if all PINs are equally likely and the attacker has no further information available, which has not been the case with some of the many PIN generation and verification algorithms that financial institutions and ATM manufacturers have used in the past.
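The 0.03% figure follows directly from the stated assumption of 10,000 equally likely PINs and three distinct guesses, as this quick check shows:

from fractions import Fraction

total = 10_000  # possible four-digit PINs
# Probability that three distinct guesses all miss, then its complement.
p_miss = Fraction(total - 1, total) * Fraction(total - 2, total - 1) \
         * Fraction(total - 3, total - 2)
p_success = 1 - p_miss
assert p_success == Fraction(3, total)   # 3/10000 = 0.03%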
Research has been done on commonly used PINs. The result is that without forethought, a sizable portion of users may find their PIN vulnerable. "Armed with only four possibilities, hackers can crack 20% of all PINs. Allow them no more than fifteen numbers, and they can tap the accounts of more than a quarter of card-holders."
Breakable PINs can worsen with length.
Implementation flaws
In 2002, two PhD students at Cambridge University, Piotr Zieliński and Mike Bond, discovered a security flaw in the PIN generation system of the IBM 3624, which was duplicated in most later hardware. Known as the decimalization table attack, the flaw would allow someone who has access to a bank's computer system to determine the PIN for an ATM card in an average of 15 guesses.
Reverse PIN hoax
Rumours circulating by e-mail and on the Internet claim that entering a PIN into an ATM backwards will instantly alert law enforcement, while money is issued as normal, as if the PIN had been entered correctly. The intention of this scheme would be to protect victims of muggings; however, despite the system being proposed for use in some US states, there are no ATMs currently in existence that employ this software.
Mobile phone passcodes
A mobile phone may be PIN protected. If enabled, the PIN (also called a passcode) for GSM mobile phones can be between four and eight digits and is recorded in the SIM card. If such a PIN is entered incorrectly three times, the SIM card is blocked until a personal unblocking code (PUC or PUK), provided by the service operator, is entered. If the PUC is entered incorrectly ten times, the SIM card is permanently blocked, requiring a new SIM card from the mobile carrier service.
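The attempt-counting behaviour described above amounts to a small state machine, sketched here in Python (the class and attribute names are invented for the illustration; the counts of three PIN attempts and ten PUK attempts match the GSM behaviour described above):

class SimCard:
    def __init__(self, pin: str, puk: str):
        self._pin, self._puk = pin, puk
        self.pin_tries, self.puk_tries = 3, 10
        self.blocked = False   # PUK entry required
        self.dead = False      # permanently blocked; new SIM required

    def enter_pin(self, attempt: str) -> bool:
        if self.dead or self.blocked:
            return False
        if attempt == self._pin:
            self.pin_tries = 3
            return True
        self.pin_tries -= 1
        if self.pin_tries == 0:
            self.blocked = True
        return False

    def enter_puk(self, attempt: str, new_pin: str) -> bool:
        if self.dead:
            return False
        if attempt == self._puk:
            self._pin, self.pin_tries, self.blocked = new_pin, 3, False
            return True
        self.puk_tries -= 1
        if self.puk_tries == 0:
            self.dead = True
        return False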
PINs are also commonly used in smartphones, as a form of personal authentication, so that only those who know the PIN will be able to unlock the device. After a number of failed attempts of entering the correct PIN, the user may be blocked from trying again for an allocated amount of time, all of the data stored on the device may be deleted, or the user may be asked to enter alternate information that only the owner is expected to know to authenticate. Whether any of the formerly mentioned phenomena occur after failed attempts of entering the PIN depends largely upon the device and the owner's chosen preferences in its settings.
See also
ATM SafetyPIN software
Transaction authentication number
References
Banking terms
Identity documents
Password authentication
|
531566
|
https://en.wikipedia.org/wiki/Zombie%20%28computing%29
|
Zombie (computing)
|
In computing, a zombie is a computer connected to the Internet that has been compromised by a hacker via a computer virus, computer worm, or trojan horse program and can be used to perform malicious tasks under the remote direction of the hacker. Zombie computers often coordinate together in a botnet controlled by the hacker, and are used for activities such as spreading e-mail spam and launching distributed denial-of-service attacks (DDoS attacks) against web servers. Most victims are unaware that their computers have become zombies. The concept is similar to the zombie of Haitian Voodoo folklore, which refers to a corpse resurrected by a sorcerer via magic and enslaved to the sorcerer's commands, having no free will of its own. A coordinated DDoS attack by multiple botnet machines also resembles a "zombie horde attack", as depicted in fictional zombie films.
Advertising
Zombie computers have been used extensively to send e-mail spam; as of 2005, an estimated 50–80% of all spam worldwide was sent by zombie computers. This allows spammers to avoid detection and presumably reduces their bandwidth costs, since the owners of zombies pay for their own bandwidth. This spam also greatly increases the spread of Trojan horses, as Trojans are not self-replicating. They rely on the movement of e-mails or spam to grow, whereas worms can spread by other means. For similar reasons, zombies are also used to commit click fraud against sites displaying pay-per-click advertising. Others can host phishing or money mule recruiting websites.
Distributed denial-of-service attacks
Zombies can be used to conduct distributed denial-of-service (DDoS) attacks, a term which refers to the orchestrated flooding of target websites by large numbers of computers at once. The large number of Internet users making simultaneous requests of a website's server is intended to result in crashing and the prevention of legitimate users from accessing the site. A variant of this type of flooding is known as distributed degradation-of-service. Committed by "pulsing" zombies, distributed degradation-of-service is the moderated and periodical flooding of websites intended to slow down rather than crash a victim site. The effectiveness of this tactic springs from the fact that intense flooding can be quickly detected and remedied, but pulsing zombie attacks and the resulting slow-down in website access can go unnoticed for months and even years.
The computing facilitated by the Internet of Things (IoT) has been productive for modern-day usage, but it has also played a significant role in the increase of such web attacks. The potential of IoT enables every device to communicate efficiently, but this increases the need for policy enforcement regarding security threats. Among these devices, the most prominent attacking behavior is the DDoS attack. Research has been conducted to study the impact of such attacks on IoT networks and their compensating provisions for defense.
Notable incidents of distributed denial- and degradation-of-service attacks in the past include the attack upon the SPEWS service in 2003, and the one against Blue Frog service in 2006. In 2000, several prominent Web sites (Yahoo, eBay, etc.) were clogged to a standstill by a distributed denial of service attack mounted by ‘MafiaBoy’, a Canadian teenager.
Smartphones
Beginning in July 2009, similar botnet capabilities have also emerged for the growing smartphone market. Examples include the July 2009 release in the "wild" of the Sexy Space text message worm, the world's first botnet-capable SMS worm, which targeted the Symbian operating system in Nokia smartphones. Later that month, researcher Charlie Miller revealed a proof of concept text message worm for the iPhone at Black Hat Briefings. Also in July, United Arab Emirates consumers were targeted by the Etisalat BlackBerry spyware program. In the 2010s, the security community was divided as to the real-world potential of mobile botnets. But in an August 2009 interview with The New York Times, cyber security consultant Michael Gregg summarized the issue this way: "We are about at the point with [smart]phones that we were with desktops in the '80s."
See also
BASHLITE
Botnet
Denial-of-service attack
Low Orbit Ion Cannon
Malware
RDP shop
Trojan horse (computing)
References
External links
Botnet operation controlled 1.5 million PCs
Is Your PC a Zombie? on About.com
Intrusive analysis of a web-based proxy zombie network
A detailed account of what a zombie machine looks like and what it takes to "fix" it
Correspondence between Steve Gibson and Wicked
Zombie networks, comment spam, and referer [sic] spam
The New York Times: Phone Hacking Threat is Low, But It Exists
Hackers Target Cell Phones, WPLG-TV/ABC-10 Miami
Researcher: BlackBerry Spyware Wasn’t Ready for Prime Time
Forbes: How to Hijack Every iPhone in the World
Hackers Plan to Clobber the Cloud, Spy on Blackberries
SMobile Systems release solution for Etisalat BlackBerry spyware
LOIC IRC-0 - An Open-Source IRC Botnet for Network Stress Testing
An Open-Source IRC and Webpage Botnet for Network Stress Testing
Computer network security
Denial-of-service attacks
Zombies
Botnets
|
68646739
|
https://en.wikipedia.org/wiki/Delver%20%28video%20game%29
|
Delver (video game)
|
Delver is a 2018 first-person roguelike action dungeon crawler video game developed by Priority Interrupt. It was released for Microsoft Windows, macOS, and Linux on February 2, 2018.
Gameplay
Delver is a first-person roguelike in which players assume the role of an explorer delving into dungeons in search of the Yithidian Orb. The game's mechanics and presentation are similar to the dungeons of The Legend of Zelda, while incorporating random, procedurally-generated levels in the manner of a roguelike game. There is no jump button. The game always starts at a campfire where weapons and scrolls can be acquired, with randomly generated loot that includes two weapons and two potions or food items. On each level, the player must explore until they find the rope ladder that takes them to the next level, while avoiding traps and enemies. Dungeons contain varied loot, such as potions (whose effects are not revealed until the player actually consumes them), lamps and candles (which work as light sources), books, armor, skulls, luxury items (which are automatically traded for gold), wands, arrows, and melee weapons. Defeating enemies grants experience; once enough experience is gathered, the player advances one level, which grants them an extra hit point and one point to assign to three random stats that vary with every level reached. The player's health is tracked by a number of hearts; if the character loses all of their hearts, the game ends in permadeath and the player must start over from a freshly-generated dungeon.
Development and release
On April 20, 2012, an alpha was commercially released for Android; it received its last update on December 26, 2013. The game was placed on an open vote on Steam Greenlight on August 30, 2012, and was greenlit a year later. On September 6, 2013, the game was released on Steam Early Access. The game continued to receive updates, and on February 2, 2018, Priority Interrupt officially released the game out of early access for Windows, macOS, and Linux, with a level editor and Steam Workshop support. On November 15, 2018, the source code for the game was released on GitHub under the terms of the GNU General Public License version 2; it is now licensed under the zlib License.
Reception
A 2012 alpha build for Android was reviewed by Paul Devlin of Pocket Gamer, who rated it 4/5, praising the alpha build's "fluid experience" but criticizing the "clunky" inventory system. The 2013 initial Steam Early Access release was reviewed by John Walker of Rock Paper Shotgun, who criticized the lack of collectibles, noted a lack of "purpose," and suggested the inclusion of shops, while also praising its potential. A 2016 build was reviewed by Brendan Caldwell (also of Rock Paper Shotgun), who praised the opening moments, liked the absence of any plot, and welcomed the additions made since the initial 2013 release. It was included in the "10 Mac games you need to play from February 2018" list by Andrew Hayward for Macworld.
References
External links
2018 video games
Android (operating system) games
Commercial video games with freely available source code
Dungeon crawler video games
Early access video games
Indie video games
Linux games
macOS games
Open-source video games
Single-player video games
Software using the zlib license
Steam Greenlight games
Windows games
Roguelike video games
Video games developed in the United States
Video games with Steam Workshop support
|
54255300
|
https://en.wikipedia.org/wiki/Scanitto
|
Scanitto
|
Scanitto Pro is a Windows-based software application for image scanning, direct printing and copying, basic editing, and text recognition (OCR).
History
The program was first unveiled in 2009 as a spin-off of Scanitto Lite, a scanning application for Windows that replaced the various standard scanning tools supplied with TWAIN scanners. In its first years, the software drew criticism from independent reviewers for the absence of OCR features. In less than two years, the application added text recognition in English, closely followed by French, German, Italian, Russian, and Spanish vocabularies. In 2011, the application received its first award. By 2014, the application supported 10 languages and had gained new features: picture upload to Dropbox and Google Drive cloud storage and posting to social media. In 2016, the application was reviewed by a Korean author, who criticized the absence of multi-core CPU support. By early 2017, the application was in active development and had been included among the top five applications in its category by the Polish edition of Computer Bild (Komputer Swiat) magazine.
Product Overview
Scanitto employs a TWAIN or WIA driver to interact with the scanner. The software does not include any post-processing filters, so the image is scanned as is – output image quality and scanning speed may vary according to resolution, color depth, and device specifications. Once scanning is complete, the user can rotate the image, resize the output by trimming unwanted fragments, and fix skews manually or automatically. Scanitto can also recognize simple texts, with formatting cleared. The available output formats for text are the TXT, RTF, and DOCX file extensions.
Additional Features
Pre-scanning with low resolution, and area selection
Scanning into PDF, BMP, JPG, TIFF, JP2, and PNG
Blank page skipping
Support for sheet feed scanners
Direct printing of scanned documents
Multi-page PDF creation with embedded search
Personalized scanning profiles (presets)
Automatic and manual duplex scanning
References
External links
Official website
Proprietary software
Image scanning
Graphics software
Photo software
Windows graphics-related software
Shareware
Optical character recognition
|
4750384
|
https://en.wikipedia.org/wiki/List%20of%20newspapers%20in%20Connecticut
|
List of newspapers in Connecticut
|
This is a list of newspapers in Connecticut.
Daily newspapers (currently published)
This is a list of daily newspapers currently published in Connecticut. For weekly and university newspapers, see the sections below.
CTNewsJunkie – Hartford
The Advocate – Stamford
The Bristol Press – Bristol
The Bulletin – Norwich
the Chronicle – Willimantic
The Connecticut Examiner - Old Lyme
The Connecticut Mirror – Hartford
Connecticut Post – Bridgeport
The Day – New London
Fairfield County CT Inquirer – Norwalk
Greenwich Time – Greenwich
Hartford Courant – Hartford
New Britain Herald – New Britain
The Hour – Norwalk
Journal Inquirer – Manchester
The Middletown Press – Middletown
New Haven Register – New Haven
The News-Times – Danbury
Record-Journal – Meriden
The Register Citizen – Torrington
Republican-American – Waterbury
Weekly newspapers (currently published)
Afroasia Newspaper – Stamford
Amity Observer – Amity
The Bloomfield Messenger – Bloomfield
The Branford Review – Guilford
The Brookfield Journal – New Milford
the Chronicle Weekly – Willimantic
The Commercial Record – South Windsor
The Darien Times – Darien
Darien News-Review – Darien
East Hartford Gazette – East Hartford
East Haddam News – East Haddam
Fairfield Citizen-News – Fairfield
Fairfield Minuteman – Fairfield
Glastonbury Citizen – Glastonbury
Haddam-Killingworth News - Haddam, Killingworth
Herald Press, 1996 – present
Huntington Herald – Shelton
Inquirer Group – Hartford
Jewish Ledger – West Hartford
Kent Good Times Dispatch – Kent
Killingly Villager – Killingly
Litchfield Enquirer – Litchfield
Monroe Courier – Monroe
Mystic River Press – Mystic
The New Canaan News – New Canaan
New Haven Advocate – New Haven
New Milford Times – New Milford
Newington Town Crier – Bristol
The Newtown Bee – Newtown
North Haven Citizen – North Haven
Northend Agents – Hartford
Pictorial-Gazette – Guilford
Putnam Villager – Putnam, Connecticut
The Redding Pilot – Redding
Reminder News – Vernon
The Ridgefield Press – Ridgefield
Shoreline Times – Guilford
Sol, El – Stamford
The Sound – Madison
Thomaston Express – Thomaston
Thompson Villager – Thompson, Connecticut
Town Times – Durham and Middlefield
Town Times – Watertown
Town Tribune – New Fairfield
Tribuna (a.k.a. La Tribuna, "The Tribune") – Danbury
Trumbull Times – Shelton
Valley News – Bristol
Voices and Voices Weekender – Southbury
West Hartford News – Bristol
The Weston Forum – Weston
The Windsor Locks Journal Weekly – Windsor Locks
The Windsor Journal Weekly – Windsor
Westport Minuteman – Westport
Westport News – Westport
The Wilton Bulletin – Wilton
Windsor Journal – Bristol
Windsor Locks Journal – Bristol
Woodstock Villager – Woodstock, Connecticut
University newspapers
The Campus Lantern – Eastern Connecticut State University (Willimantic)
Charger Bulletin – University of New Haven (West Haven)
The Daily Campus – UConn. (Storrs)
The Echo – Western Connecticut State University (Danbury)
Fairfield Mirror – Fairfield University (Fairfield)
The Hartford Informer – University of Hartford (West Hartford)
Wesleyan Argus – Wesleyan University (Middletown)
Yale Daily News – Yale (New Haven)
The Southern News – Southern Connecticut State University (New Haven)
The Spectrum – Sacred Heart University (Fairfield)
Defunct
American Sentinel, including 1823-1826, weekly
Connecticut Spectator, including May 1814 - December 1814, weekly
The Constitution, former weekly newspaper, including during 1842-1884
The Daily Herald, former daily newspaper
Evening Press, including 1918 - 1919, daily ex. Sun.
The Hartford Times
Middlesex Gazette, including 1790 - 1834 (with gaps), weekly
Middletown Daily Constitution, including 1872 - 1876, daily ex. Sun.
Middletown Daily Sentinel, including January 1876 - June 1876, daily ex. Sun.
Middletown Sun, including 1908 - 1914, daily ex. Sun.
Regional Standard – Guilford
The Middletown Times, daily newspaper in Middletown during 1913-1914 or during 1914-January 1915
The Middletown Tribune, Republican newspaper in Middletown, Connecticut including 1893-1906, daily ex. Sun
News and Advertiser, including 1851-1854, weekly
Penny Press, including 1884 - 1939, daily ex. Sun.
The Sentinel and Witness, former weekly newspaper, including 1869-1884
Danbury
Newspapers published in Danbury, Connecticut:
Farmers Chronicle. W., June 17, 1793-Sept. 19, 1796.
Farmer's Journal. W., March 18, 1790 – June 3, 1793.
Fairfield
Newspapers published in Fairfield, Connecticut:
Fairfield Gazette. W., July 13, 1786-Feb. (?), 1787.
Fairfield Gazette, Or The Independent Intelligencer. W., Feb. 15 (?), 1787-Aug. (?), 1787.
Hartford
Newspapers published in Hartford, Connecticut:
American Mercury. W., July 12, 1784-June 25, 1833.
The Connecticut Courant. W., Oct. 29, 1764-May 31, 1774; Feb. 17, May 5, 12, 1778; Mar. 21, 1791-Dec. 29, 1800+
E. Wilder Spaulding. The Connecticut Courant, a Representative Newspaper in the Eighteenth Century. The New England Quarterly, Vol. 3, No. 3 (Jul., 1930), pp. 443–463.
The Connecticut Courant, And Hartford Weekly Intelligencer. W., June 7, 1774-Feb. 10, 1778.
The Connecticut Courant And The Weekly Intelligencer. W., Feb. 24, 1778-Mar. 14, 1791.
The Freeman's Chronicle, Or The American Advertiser. W., Sept. 1, 1783-July 8, 1784.
Hartford Gazette. S.W., W., Jan. 13, 1794-Mar. 19, 1795.
Litchfield
Newspapers published in Litchfield, Connecticut:
Collier's Litchfield Weekly Monitor. W., Jan. 7-June 16, 1788.
The Farmer's Monitor. W., Mar. 5-Dec. 31, 1800+
Litchfield-County Monitor. W., Dec. 11, 1790-Jan. 3, 1791.
Litchfield Monitor. W., Jan. 10, 1791-Jan. 4, 1792; Aug. 27, 1794-June 3, 1795.
Litchfield Monitor And Agricultural Register. W., June 10, 1795 – May 11, 1796.
The Litchfield Weekly Monitor. June 23, 1788.
The Monitor. W., Jan. 11, 1792-Aug. 20, 1794; Feb. 28, 1798-Feb. 26, 1800+
Weekly Monitor. W., Mar. or Apr. 1785-Nov. 28, 1786; June 11-Dec. 31, 1787; June 30, 1788-May 11, 1789; Nov. 17, 1789-Dec. 4, 1790; May 18, 1796-Feb. 21, 1798.
Weekly Monitor And American Advertiser. W., Dec. 21, 1784-Mar. 1785.
Weekly Monitor And Litchfield Town And County Recorder. W., Dec. 5, 1786-June 4, 1787.
Weekly Monitor And The Litchfield Advertiser. W., May 18- June 8, 1789.
Middletown
Newspapers published in Middletown, Connecticut:
The Middlesex Gazette. W., Nov. 8, 1785-Oct. 29, 1787; Mar. 3, 1792-Dec. 26, 1800+
The Middlesex Gazette, Or Federal Adviser. W., Nov. 5, 1787-Feb. 25, 1792.
New Haven
Newspapers published in New Haven, Connecticut:
The Connecticut Gazette. W., July 5, 1765-Feb. 19, 1768.
Connecticut Gazette, With The Freshest Advices Foreign And Domestick. W., Apr. 12, 1755-Apr. 14, 1764.
The Connecticut Journal. W., Sept. 13, 1775-Jan. 3, 1799; Apr. 4, 1799-Dec. 31, 1800+
The Connecticut Journal, And New-Haven Post-Boy. W., Oct. 23, 1767-Sept. 6, 1775.
Connecticut Journal And Weekly Advertiser. W., Jan. 10- Mar. 28, 1799.
The New-Haven Chronicle. W., Apr. 18, 1786-Sept. 11, 1787.
New-Haven Gazette. W., May 13, 1784-Feb. 9, 1786.
The New-Haven Gazette. W., Jan. 5-June 29, 1791.
The New-Haven Gazette, And The Connecticut Magazine. W., Feb. 16, 1786-June 18, 1789.
New London
Newspapers published in New London, Connecticut:
The Bee. W., June 14, 1797-Dec. 31, 1800+
The Connecticut Gazette. W., May 11, 1787-Dec. 25, 1799.
Connecticut Gazette, And The Commercial Intelligencer. W., Jan. 1-Dec. 31, 1800+
Connecticut Gazette, And The Universal Intelligencer. W., Dec. 17, 1773-May 4, 1787.
The New-London Advertiser. W., Mar. 2-Apr. 13, 1795.
The New-London Gazette. W., Nov. 18, 1763-Dec. 10, 1773.
The New-London Summary. W., sometime between May 6 and 17, 1763-Sept. 23, 1763.
The New-London Summary, Or The Weekly Advertiser. W., Aug. 8, 1758-sometime between May 6 and June 10, 1763.
Springer's Weekly Oracle. W., Oct. 21, 1797-Dec. 28, 1800+
Newfield
Newspapers published in Newfield, Connecticut:
American Telegraphe. W., Apr. 8, 1795-July 6, 1796; Apr. 5, 1797-Oct. 29, 1800.
American Telegraphe, & Fairfield County Gazette. W., July 13, 1796-Mar. 2, 1797.
Norwich
Newspapers published in Norwich, Connecticut:
Chelsea Courier. W., Nov. 30, 1796-May 24, 1798.
The Courier. W., May 31, 1798-Mar. 15, 1800.
The Oxford English Dictionary attests the first recorded use of the term "Hello" to The Courier in 1826.
The Norwich Packet. W., Dec. 29, 1777-June 1, 1779; June 9, 1791-Dec. 30, 1800+
The Norwich Packet; And The Connecticut, Massachusetts, New-Hampshire, And Rhode-Island Weekly Advertiser. W., Oct. 7, 1773-Dec. 22, 1777.
The Norwich Packet And The Country Journal. W., Feb. 8, 1787-Sept. 24, 1790.
The Norwich Packet And The Weekly Advertiser. W., June 8, 1779-Sept. 26, 1782.
The Norwich Packet, Or The Chronicle Of Freedom. W., Oct. 30, 1783-Apr. 7, 1785.
The Norwich-Packet, Or The Country Journal. W., Apr. 14, 1785-Feb. 1, 1787.
Vox Populi Norwich Packet. W., Oct. 1, 1790-June 2, 1791.
The Weekly Register. W., Nov. 29, 1791-Aug. 19, 1795.
Sharon
Newspapers published in Sharon, Connecticut:
Rural Gazette. W. Mar. 31, 1800.
Stonington
Newspapers published in Stonington, Connecticut:
Impartial Journal. W., Oct. 8 (?) 1799-Dec. 30, 1800+
Journal of the Times. W., Oct. 10, 1798-Sept. 17, 1799.
Windham
Newspapers published in Windham, Connecticut:
The Phenix, Or Windham Herald. W., Mar. 12, 1791-Apr. 12, 1798.
Windham Herald. W., Apr. 19, 1798-Dec. 26, 1800+
See also
List of newspapers in Connecticut in the 18th century
List of radio stations in Connecticut
List of television stations in Connecticut
Adjoining states
List of newspapers in Massachusetts
List of newspapers in New York
List of newspapers in Rhode Island
References
Further reading
External links
http://www.cslib.org/newspaper/newshistory.htm
|
9137253
|
https://en.wikipedia.org/wiki/Demetrius%20of%20Scepsis
|
Demetrius of Scepsis
|
Demetrius of Scepsis was a Greek grammarian of the time of Aristarchus and Crates (Strab. xiii. p. 609), in the first half of the second century BC. He is sometimes called simply the Scepsian (Strab. ix. pp. 438, 439, x. pp. 456, 472, 473, 489), and sometimes simply Demetrius (Strab. xii. pp. 551, 552, xiii. pp. 596, 600, 602).
Diogenes Laërtius mentions him in a list of well-known namesakes.
He was the author of a very extensive and frequently cited work bearing the title Trōikos diakosmos ("Trojan Battle-Order"). It consisted of at least twenty-six books (Strab. xiii. p. 603 and passim; Athen. iii. pp. 80, 91; Steph. Byz. s.v.). This work was an historical and geographical commentary on that part of the second book of the Iliad in which the forces of the Trojans are enumerated, known as the Trojan Battle Order or Trojan Catalogue (compare Harpocrat. s. vv.; Schol. ad Apollon. Rhod. i. 1123, 1165). The numerous other passages in which Demetrius of Scepsis is mentioned or quoted are collected by Westermann on Vossius, De Hist. Graec., p. 179, &c.
Demetrius's work has been used by several ancient authors as an important source for the Troad region. Among these authors is Strabo, in Book 13 of his Geographica. Some of the fragments are also quoted by Athenaeus in his Deipnosophistae.
References
Ancient Greek grammarians
|
2236556
|
https://en.wikipedia.org/wiki/Small%20form%20factor%20%28desktop%20and%20motherboard%29
|
Small form factor (desktop and motherboard)
|
Small form factor (SFF or SFX) is a term used for desktop computers, and their enclosures and motherboards, to indicate that they are designed in accordance with one of several standardized computer form factors intended to minimize the volume and footprint of a desktop computer compared to the standard ATX form factor.
For comparison purposes, the size of an SFF case is usually measured in litres. SFFs are available in a variety of sizes and shapes, including shoeboxes, cubes, and book-sized PCs. Their smaller and often lighter construction has made them popular as home theater PCs and as gaming computers for attending LAN parties. Manufacturers also emphasize the aesthetic and ergonomic design of SFFs since users are more likely to place them on top of a desk or carry them around. Advancements in component technology together with reductions in size mean a powerful computer is no longer restricted to the huge towers of old.
Small form factors do not include computing devices that have traditionally been small, such as embedded or mobile systems. However, "small form factor" lacks a normative definition and is consequently open to interpretation and misuse. Manufacturers often provide definitions that serve the interests of their products. Depending on marketing strategy, one manufacturer may label a product "small form factor" while other manufacturers use different marketing names (such as "Minitower", "Microtower" or "Desktop") for personal computers of similar or even smaller footprint.
History
The acronym SFF originally stood for "Shuttle Form Factor," describing shoebox-sized personal computers with two expansion slots. The meaning of SFF evolved to include other, similar PC designs from brands such as AOpen and First International Computer, with the word "Small" replacing the word "Shuttle".
SFF originally referred to systems smaller than the microATX form factor. The term SFF is used in contrast with terms for larger systems such as "mini-towers" and "desktops."
Features
Small form factor computers are generally designed to support the same features as modern desktop computers, but in a smaller space. Most accept standard x86 microprocessors, standard DIMM memory modules, standard 8.9 cm (3.5") hard disks, and standard 13.3 cm (5.25") optical drives.
However, the small size of SFF cases may limit expansion options; many commercial offerings provide only one 8.9 cm (3.5") drive bay and one or two 13.3 cm (5.25") external bays. Standard CPU heatsinks do not always fit inside an SFF computer, so some manufacturers provide custom cooling systems. Though limited to one or two expansion cards, a few have the space for full-length cards such as the GeForce GTX 295. Most SFF computers use highly integrated motherboards containing many on-board peripherals, reducing the need for expansion cards. As of 2020 many SFF PC cases do not include any expansion bays larger than 2.5 inches (large enough to accommodate SATA SSDs), due to the declining popularity of optical disc drives and 3.5 inch hard drives in the consumer space.
Even if labeled "SFF," cube-style cases that support full-sized (PS2 form factor) power supplies actually have a microATX form factor. True SFF systems use SFX, TFX or smaller power supplies, and some require a laptop-style external "power brick."
Some SFF computers even include compact components designed for mobile computers, such as notebook optical drives, notebook memory modules, notebook processors, and external AC adapters, rather than the internal power supply units found in full-size desktop computers.
Enthusiast Community & Crowdfunding
Crowdfunding and the availability of rapid prototyping tools have enabled the production of several mini-ITX cases that focus on efficiently organizing commercial computer components into small volumes, including the Ghost S1, DAN A4-SFX, and the upcoming Thor Zone MJOLNIR. Communities of enthusiasts and reviewers now develop and promote enhanced SFF assembly, maintenance, and performance criteria. 3D printing and laser cutting have enabled customization and one-off production by both manufacturers like Lazer3D and individual users with access to the relevant equipment.
SFF types
The many different types of SFFs are categorized loosely by their shape and size. The types below are among those available.
Cubic / Shoebox
Many SFF computers have a cubic shape. Smaller models are typically sold as barebones units, including a case, motherboard, and power supply designed to fit together. The motherboard lies flat against the base of the case. Upgrade options may be limited by the non-standard motherboards, cramped interior space, and power and airflow concerns. The Power Mac G4 Cube, released in 2000, and the Shuttle XPC are good examples of this design. MSI and Asus produce similar designs. The Xi3 Modular Computer is an example of a cube computer with somewhat more upgrade possibilities.
Shuttle has adapted several of its XPC models (some 5-series and most later) to alternately accept mini-ITX motherboards. The base of the XPC is provided with mounting points which accommodate both "Shuttle form factor" (ShFF) and mini-ITX motherboards. In order to accommodate mini-ITX motherboards, two of the ShFF mounting points are simply relocated (the remaining mini-ITX mounting points are in common with the remaining ShFF mounting points). A "standard" ShFF motherboard is 20.6 cm (8 1/8″) wide by 27.3 cm (10 3/4″) deep, with the I/O shield and the two PCI slots being located in common with mini-ITX motherboards. Most ShFF systems utilize Shuttle's proprietary heat pipe (liquid cooling) system, "Integrated Cooling Engine" (ICE), for the processor, although several also feature heat pipe cooling for the voltage regulator and/or the chip set (Northbridge). When an ShFF system is upgraded to a mini-ITX motherboard, an Intel or compatible processor fan must replace the ICE cooler. The ShFF's ICE computer fan is so designed that it may be repurposed as a case fan when the case is upgraded to mini-ITX use. When so upgraded, the repurposed fan would be connected to the motherboard's case fan connector (3-pin) while the new CPU fan would be connected to the motherboard's CPU fan connector (4-pin).
AOpen Inc. produced a stackable S120 case, allowing the user to stack up to four components vertically or horizontally. These layers can be for add-on cards, optical drives, and hard drives, using either internal power supplies or external AC adapter power sources. After the S120, AOpen made more small form factor cases for systems with Micro ATX and Mini-ITX.
Nettop
Until 2005, SFF cases were usually sold as barebones units (case, power supply, and motherboard) to system integrators and home-based builders. In 2005, Apple Inc. introduced its Mac Mini (volume of 1.4 L, excluding the external power brick). Later in the same year, the first AOpen mini PC, the MP915 (renamed to XC mini in 2007 since "mini PC" could not be registered as a trademark), was announced. The size of the XC mini series PC, 16.5(W) × 5.0(H) × 16.5(D) cm, makes it one of the smallest desktop PC systems (1.3 L volume). It was criticized for resembling the Apple Mac Mini, but Apple took no action on the matter. In February 2007, AOpen redesigned the case of the mini PC MP945 series.
Since 2006, major OEM PC brands such as HP and Dell have begun to sell fully assembled SFF systems. These are often described as bookshelf units since they resemble a miniature tower case small enough to fit on a bookshelf. The HP Slimline series and Dell Dimension C521 (volume 1.65 L) are good examples of this trend. The Maxdata Favorite 300XS is another mini computer. The HP Slimline uses a non-standard motherboard that is very similar in size to Mini-ITX.
In addition to its industrial use, the extremely small Mini-ITX motherboard form factor has also been incorporated into SFF computers. These are often extremely compact, incorporating low-power components such as the VIA C3 processors. The Travla C134 is an example of this design. At 17.8 × 25.4 × 5.1 cm (7 × 10 × 2"), the Travla C134 is somewhat larger than the Mac mini, which is 16.5 × 16.5 × 5.1 cm (6.5 × 6.5 × 2"), and barely bigger than a standard 13.3 cm (5.25") optical drive.
Beginning in 2007, several other companies have released other very small computers that, besides a small size, focus on a low price and extremely high power efficiency (typically 10 W or below in use). These include the Zonbu, fit-PC, Linutop, and A9home. With the release of the Intel Atom CPU, AOpen also made Nettop systems: the uBox series with models LE200 and LE210. The uBox series is equipped with a dual core Intel Atom 270/330 processor, single channel DDR-II 533/667 memory, an Intel 945GC+ICH7 chipset, three SATA connectors and 5.1 channel high definition audio output.
Home theatre boxes
Essentially a bookshelf-style case lying on its side, a miniature HTPC replicates the look of other smaller-than-rack-sized home theatre components such as a DVR or mini audio receiver. The front panel interface is emphasized: the optical disc drive is rotated relative to the case to maintain horizontal mounting, and more motherboard port connectors (such as those for USB) are routed to the front panel. Such systems are normally as powerful as desktop PCs.
Computer-on-module
A computer-on-module (COM) is a complete computer built on a single circuit board. They are often used as embedded systems due to their small physical size and low power consumption. Gumstix is one manufacturer of COMs.
Ultra-Small Form Factor
Each model of Dell's OptiPlex line of computers typically includes an Ultra-Small Form Factor (USFF) chassis option. In the Core 2 era, these machines used 8.9 cm (3.5") desktop hard drives and external power supplies, such as the OptiPlex 745 and 755. More recent units use 6.4 cm (2.5") laptop hard drives and have integrated power supplies, such as the OptiPlex 990 USFF. The compact size comes at the cost of restricted expandability, as USFF models have no PCI or PCIe slots and may have limited CPU and memory options.
Micro
Starting from Series 5, USFF was replaced with Micro variants, an even smaller size option that uses external power supplies and does not have optical drives.
Ultra-compact Form Factor
Understood as comprising nano-ITX (12 × 12 cm) and pico-ITX (10 × 7.2 cm) boards, the format was championed by VIA Technologies. Intel now describes its own Next Unit of Computing (NUC) products (10.2 x 10.2 cm or 4 × 4") as UCFF.
See also
ATX
Case modding
Nettop
PC-on-a-stick
Mac mini
Business SFF-class nettops: Dell OptiPlex, Fujitsu Esprimo, Lenovo ThinkCentre, HP ProDesk and EliteDesk
Single-board microcontroller
List of Arduino compatibles
Small Form Factor Committee
Small Form Factor Special Interest Group (SFF-SIG)
Low-profile video card
Mini-ITX
References
Motherboard form factors
|
23222029
|
https://en.wikipedia.org/wiki/Ptrace
|
Ptrace
|
ptrace is a system call found in Unix and several Unix-like operating systems. By using ptrace (the name is an abbreviation of "process trace") one process can control another, enabling the controller to inspect and manipulate the internal state of its target. ptrace is used by debuggers and other code-analysis tools, mostly as aids to software development.
Uses
ptrace is used by debuggers (such as gdb and dbx), by tracing tools like strace and ltrace, and by code coverage tools. ptrace is also used by specialized programs to patch running programs, to avoid unfixed bugs or to overcome security features. It can further be used as a sandbox and as a run-time environment simulator (like emulating root access for non-root software).
By attaching to another process using the ptrace call, a tool has extensive control over the operation of its target. This includes manipulation of its file descriptors, memory, and registers. It can single-step through the target's code, can observe and intercept system calls and their results, and can manipulate the target's signal handlers and both receive and send signals on its behalf. The ability to write into the target's memory allows not only its data store to be changed, but also the application's own code segment, allowing the controller to install breakpoints and patch the running code of the target.
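The following minimal Linux sketch (illustrative only, not drawn from any particular debugger) attaches to a running process, reads a single word of its memory, and detaches; the target pid and address are supplied on the command line, and error handling is kept to the essentials:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(int argc, char *argv[])
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <pid> <hex-address>\n", argv[0]);
        return 1;
    }
    pid_t pid = (pid_t)atoi(argv[1]);
    unsigned long addr = strtoul(argv[2], NULL, 16);

    /* Attach to the target; it is stopped before we may inspect it. */
    if (ptrace(PTRACE_ATTACH, pid, NULL, NULL) == -1) {
        perror("PTRACE_ATTACH");
        return 1;
    }
    waitpid(pid, NULL, 0);              /* wait for the stop to complete */

    /* Read one word of the target's memory. errno is cleared first
       because -1 is also a legitimate value for the word itself. */
    errno = 0;
    long word = ptrace(PTRACE_PEEKDATA, pid, (void *)addr, NULL);
    if (word == -1 && errno != 0)
        perror("PTRACE_PEEKDATA");
    else
        printf("word at 0x%lx: 0x%lx\n", addr, (unsigned long)word);

    ptrace(PTRACE_DETACH, pid, NULL, NULL);   /* let the target resume */
    return 0;
}

Real debuggers layer breakpoint insertion, register access (for example via PTRACE_GETREGS) and signal handling on top of this same attach, peek/poke, detach cycle.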
As the ability to inspect and alter another process is very powerful, ptrace can attach only to processes that the owner can send signals to (typically only their own processes); the superuser account can ptrace almost any process (except init on kernels before 2.6.26). In Linux systems that feature capabilities-based security, the ability to ptrace is further limited by the CAP_SYS_PTRACE capability or by the YAMA Linux Security Module. In FreeBSD, it is limited by FreeBSD jails and Mandatory Access Control policies.
Limitations
Communications between the controller and target take place using repeated calls of ptrace, passing a small fixed-size block of memory between the two (necessitating two context switches per call); this is acutely inefficient when accessing large amounts of the target's memory, as this can only be done in word-sized blocks (with a ptrace call for each word). For this reason the 8th edition of Unix introduced procfs, which allows permitted processes direct access to the memory of another process; 4.4BSD followed, and the use of /proc for debugger support was inherited by Solaris, BSD, and AIX, and mostly copied by Linux. Some, such as Solaris, have removed ptrace as a system call altogether, retaining it as a library call that reinterprets calls to ptrace in terms of the platform's procfs. Such systems use ioctls on the file descriptor of the opened /proc file to issue commands to the controlled process. FreeBSD, on the other hand, extended ptrace to remove the problems mentioned, and declared procfs obsolete due to its inherent design problems.
ptrace only provides the most basic interface necessary to support debuggers and similar tools. Programs using it must have intimate knowledge of the specifics of the OS and architecture, including stack layout, application binary interface, system call mechanism, name mangling, the format of any debug data, and are responsible for understanding and disassembling machine code themselves. Further, programs that inject executable code into the target process or (like gdb) allow the user to enter commands that are executed in the context of the target must generate and load that code themselves, generally without the help of the program loader.
Support
Unix and BSD
ptrace was first implemented in Version 6 Unix, and was present in both the SVr4 and 4.3BSD branches of Unix. ptrace is available as a system call on IRIX, IBM AIX, NetBSD, FreeBSD, OpenBSD, and Linux. ptrace is implemented as a library call on Solaris, built on the Solaris kernel's procfs filesystem; Sun notes that ptrace on Solaris is intended for compatibility, and recommends that new implementations use the richer interface that proc supplies instead. UnixWare also features a limited ptrace but like Sun, SCO recommends implementers use the underlying procfs features instead. HP-UX supported ptrace until release 11i v3 (it was deprecated in favour of ttrace, a similar OS-specific call, in 11i v1).
macOS
Apple's macOS also implements ptrace as a system call. Apple's version adds a special option PT_DENY_ATTACH: if a process invokes this option on itself, subsequent attempts to ptrace the process will fail. Apple uses this feature to limit the use of debuggers on programs that manipulate DRM-ed content, including iTunes. PT_DENY_ATTACH also disables DTrace's ability to monitor the process. Debuggers on OS X typically use a combination of ptrace and the Mach VM and thread APIs. ptrace (again with PT_DENY_ATTACH) is available to developers for the Apple iPhone.
Linux
Linux also gives processes the ability to prevent other processes from attaching to them. Processes can call the prctl syscall and clear their PR_SET_DUMPABLE flag; in later kernels this prevents non-root processes from ptracing the calling process; the OpenSSH authentication agent uses this mechanism to prevent ssh session hijacking via ptrace. Later Ubuntu versions ship with a Linux kernel configured to prevent ptrace attaches from processes other than the traced process' parent; this allows gdb and strace to continue to work when running a target process, but prevents them from attaching to an unrelated running process. Control of this feature is performed via the /proc/sys/kernel/yama/ptrace_scope setting. On systems where this feature is enabled, commands like "gdb --attach" and "strace -p" will not work.
Starting in Ubuntu 10.10, ptrace is only allowed to be called on child processes.
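A minimal fragment of the self-protection mechanism described above, assuming a Linux system (the call itself is standard; the surrounding program is omitted):

#include <sys/prctl.h>

/* Clearing the dumpable flag stops non-root processes from
   ptrace-attaching to this process (and suppresses core dumps). */
prctl(PR_SET_DUMPABLE, 0, 0, 0, 0);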
Android
For some Android phones with a locked boot loader, ptrace is used to gain control over the init process to enable a '2nd boot' and replace the system files.
References
External links
Article from Linux Gazette about ptrace
Article about ptrace in linux journal
Unix
Debugging
|
1577861
|
https://en.wikipedia.org/wiki/Xgl
|
Xgl
|
Xgl is an obsolete display server implementation supporting the X Window System protocol designed to take advantage of modern graphics cards via their OpenGL drivers, layered on top of OpenGL. It supports hardware acceleration of all X, OpenGL and XVideo applications and graphical effects by a compositing window manager such as Compiz or Beryl. The project was started by David Reveman of Novell and first released on January 2, 2006. It was removed from the X.org server in favor of AIGLX on June 12, 2008.
History
Xgl was originally developed on public mailing lists, but for a long time, until January 2, 2006, most development of Xgl was done behind closed doors. On that day the source to Xgl was re-opened to the public and included in freedesktop.org, along with major restructuring to allow a wider range of supported display drivers. X server backends used by Xgl include Xglx and Xegl. In February 2006 the server gained wide publicity after a public display where the Novell desktop team demonstrated a desktop using Xgl with several visual effects such as translucent windows and a rotating 3D desktop. The effects had first been implemented in a composite manager called glxcompmgr (not to be confused with xcompmgr), now deprecated because several effects could not be adequately implemented without tighter interaction between the window manager and the composite manager. As a solution David Reveman developed Compiz, the first proper OpenGL compositing window manager for the X Window System. Later, in September 2006, the Beryl compositing window manager was released as a fork of the original Compiz. Compiz and Beryl merged back together in April 2007, resulting in the development of Compiz Fusion.
Backends
OpenGL does not specify how to initialize a display and manipulate drawing contexts. Instead these operations are handled by an API specific to the native windowing system. So far there are two different backend approaches to solving this initialization problem. Most likely the majority of each backend will contain the same code and the differences will primarily be in the initialization portions of the servers.
Xglx
Xglx was the first backend implemented for this architecture. It requires an already existing X server to run on top of and uses GLX to create an OpenGL window which Xgl then uses, similar to Xnest. In the future this mode is intended only for development, as requiring an existing X server for Xgl to run on top of is redundant.
At XDevConf 2006 (the 2006 X development conference), NVIDIA made a presentation arguing that this is the wrong direction to take because the layered server abstracts features of the cards away. This makes driver specific capabilities like support for 3D glasses and dual monitor support much more difficult.
However, delegating initialization to an existing X server allows the developers to immediately focus on server functionality rather than dedicating substantial time to specifics of interfacing with numerous video hardware. At the moment, Xglx does not officially support multiple monitors, although it has been achieved on Ubuntu Dapper / ATI / NVIDIA (twinview).
Xegl
Xegl was a long-term goal of X server development. It shares much of the drawing code with the Xglx server, but the initialization of the OpenGL drawable and context management is handled by the EGL API developed by Khronos (EGL is a window system-independent equivalent to the GLX and WGL APIs, which respectively enable OpenGL support in X and Microsoft Windows). The current implementation uses Mesa-solo to provide OpenGL rendering directly to the Linux framebuffer or DRI to the graphics hardware. Xegl can only be run using Radeon R200 graphics hardware and development is currently stalled. It is likely that it will remain so until the Xglx server has proven itself and the closed source drivers add support for the EGL API, when it should be a transparent replacement for the nested Xglx server.
Rationale
Structuring all rendering on top of OpenGL could potentially simplify video driver development. It removes the artificial separation of 2D and 3D acceleration. This is advantageous as 2D operations are frequently unaccelerated (which is counterintuitive, since 2D is a subset of 3D).
It also removes all driver-dependent code from the X server itself, and allows for accelerated Composite and Render operations independent of the graphics driver.
Competitors
Hardware acceleration of 2D drawing operations has been a common feature of many window systems (including X11) for many years. The novelty of Xgl and similar systems is the use of APIs specifically developed for 3D rendering for accelerating 2D desktop operations. Prior to the adoption of anti-aliased drawing by X11, the use of 3D rendering APIs for 2D desktop rendering was undesirable because such APIs did not make the pixel accurate rendering guarantees that are part of the original X11 protocol definition.
Hardware-accelerated OpenGL window and desktop rendering, limited to using OpenGL for texture composition, has been in use in Mac OS X, in a technology called Quartz Extreme, since Mac OS X v10.2. Quartz 2D Extreme is an enhancement of this feature and more directly comparable to Xgl. Like Xgl, Quartz 2D Extreme brings OpenGL acceleration to all 2D drawing operations (not just desktop compositing) and ships with Mac OS X v10.4, but is disabled by default pending a formal declaration of production-readiness. Core Animation is the extension of this effort for Leopard (Mac OS X v10.5).
Several desktop interfaces based on 3D APIs have been developed, more recently OpenCroquet and Sun Microsystems' Project Looking Glass ; these take advantage of 3D acceleration for software built within their own framework, but do not appear to accelerate existing 2D desktop applications rendered within their environment (often via mechanisms like VNC).
Microsoft developed a similar technology based on DirectX, named the DWM, as part of its Windows Vista operating system. This technology was first shown publicly at Microsoft's October 2003 PDC.
Availability
The Xgl X server (and related components, including the Compiz compositing manager and associated graphical configuration tools) ships as a non-default option in one major Linux distribution, SUSE 10.1, and is included in Frugalware Linux and SUSE Linux Enterprise Desktop 10. Xgl can be set up fairly easily for Ubuntu 6.06 LTS (Dapper Drake) and 6.10 (Edgy Eft) and for Freespire with binary packages from unofficial repositories. Xgl is also available as an overlaid package in Gentoo Linux, and as a PKGBUILD for Arch Linux.
Mandriva Linux 2007 includes official packages to run Compiz, using Xgl and AIGLX. Mandriva provides drak3d, a tool to configure a 3D Desktop in two clicks.
Ubuntu 6.10 "Edgy Eft" and later use AIGLX, not Xgl, by default.
Xgl was removed from X11R7.5 in 2009 due to its being an unmaintained server variant.
See also
X Window System
AIGLX
VirtualGL
OpenGL
Compiz
Beryl
References
External links
Xegl
EGL specifications
Article: The State of Linux Graphics — overview of various approaches to replace the current X server
the video demonstrating Compiz on Xgl
Freedesktop.org
OpenGL
X servers
|
28379436
|
https://en.wikipedia.org/wiki/Epoll
|
Epoll
|
epoll is a Linux kernel system call for a scalable I/O event notification mechanism, first introduced in version 2.5.44 of the Linux kernel. Its function is to monitor multiple file descriptors to see whether I/O is possible on any of them. It is meant to replace the older POSIX select(2) and poll(2) system calls, to achieve better performance in more demanding applications, where the number of watched file descriptors is large (unlike the older system calls, which operate in O(n) time, epoll operates in O(1) time).
epoll is similar to FreeBSD's kqueue, in that it consists of a set of user-space functions, each taking a file descriptor argument denoting the configurable kernel object, against which they cooperatively operate. epoll uses a red-black tree (RB-tree) data structure to keep track of all file descriptors that are currently being monitored.
API
int epoll_create1(int flags);
Creates an epoll object and returns its file descriptor. The flags parameter allows epoll behavior to be modified. It has only one valid value, EPOLL_CLOEXEC. epoll_create() is an older variant of epoll_create1() and is deprecated as of Linux kernel version 2.6.27 and glibc version 2.9.
int epoll_ctl(int epfd, int op, int fd, struct epoll_event *event);
Controls (configures) which file descriptors are watched by this object, and for which events. op can be EPOLL_CTL_ADD, EPOLL_CTL_MOD or EPOLL_CTL_DEL.
int epoll_wait(int epfd, struct epoll_event *events, int maxevents, int timeout);
Waits for any of the events registered for with epoll_ctl, until at least one occurs or the timeout elapses. Returns the events that occurred in events, up to maxevents at once.
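The three calls are typically combined as in the following minimal sketch, which watches standard input with a single level-triggered wait rather than the event loop a real server would use:

#include <stdio.h>
#include <unistd.h>
#include <sys/epoll.h>

int main(void)
{
    int epfd = epoll_create1(0);        /* create the epoll object */
    if (epfd == -1) {
        perror("epoll_create1");
        return 1;
    }

    /* Register standard input for readability (level-triggered). */
    struct epoll_event ev;
    ev.events = EPOLLIN;
    ev.data.fd = STDIN_FILENO;
    if (epoll_ctl(epfd, EPOLL_CTL_ADD, STDIN_FILENO, &ev) == -1) {
        perror("epoll_ctl");
        return 1;
    }

    /* Block until input arrives or five seconds pass. */
    struct epoll_event events[8];
    int n = epoll_wait(epfd, events, 8, 5000);
    if (n > 0)
        printf("fd %d is ready\n", events[0].data.fd);
    else if (n == 0)
        printf("timed out\n");

    close(epfd);
    return 0;
}

In a long-running program, epoll_wait would sit inside a loop, and each ready descriptor returned in events would be serviced before waiting again.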
Triggering modes
epoll provides both edge-triggered and level-triggered modes. In edge-triggered mode, a call to epoll_wait will return only when a new event is enqueued with the epoll object, while in level-triggered mode, epoll_wait will return as long as the condition holds.
For instance, if a pipe registered with epoll has received data, a call to epoll_wait will return, signaling the presence of data to be read. Suppose the reader consumed only part of the data from the buffer. In level-triggered mode, further calls to epoll_wait will return immediately, as long as the pipe's buffer contains data to be read. In edge-triggered mode, however, epoll_wait will return only once new data is written to the pipe.
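The triggering mode is chosen per file descriptor at registration time. An illustrative fragment, assuming epfd and a non-blocking fd already exist:

/* EPOLLET requests edge-triggered delivery for this descriptor. */
struct epoll_event ev;
ev.events = EPOLLIN | EPOLLET;
ev.data.fd = fd;
epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);

/* After epoll_wait reports fd, the application must read until read()
   fails with EAGAIN; data left unread produces no further edge and
   might otherwise never be noticed. */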
Criticism
Bryan Cantrill pointed out that epoll had mistakes that could have been avoided, had it learned from its predecessors: input/output completion ports, event ports (Solaris) and kqueue. However, a large part of his criticism was addressed by epoll's EPOLLONESHOT and EPOLLEXCLUSIVE options. EPOLLONESHOT was added in version 2.6.2 of the Linux kernel mainline, released in February 2004. EPOLLEXCLUSIVE was added in version 4.5, released in March 2016.
See also
Input/output completion port (IOCP)
kqueue
libevent
References
External links
epoll manpage
epoll patch
Events (computing)
Linux kernel features
|
143256
|
https://en.wikipedia.org/wiki/Wormhole%20switching
|
Wormhole switching
|
Wormhole flow control, also called wormhole switching or wormhole routing, is a system of simple flow control in computer networking based on known fixed links. It is a subset of flow control methods called Flit-Buffer Flow Control.
Switching is a more appropriate term than routing, as "routing" defines the route or path taken to reach the destination. The wormhole technique does not dictate the route to the destination but decides when the packet moves forward from a router.
Wormhole switching is widely used in multicomputers because of its low latency and the small buffer requirements it places on the nodes.
Wormhole routing supports very low-latency, high-speed, guaranteed delivery of packets suitable for real-time communication.
Mechanism principle
In the wormhole flow control, each packet is broken into small pieces called flits (flow control units).
Commonly, the first flits, called the header flits, hold information about this packet's route (for example, the destination address) and set up the routing behavior for all subsequent flits associated with the packet. The header flits are followed by zero or more body flits, which contain the actual payload of data. Some final flits, called the tail flits, perform some bookkeeping to close the connection between the two nodes.
In wormhole switching, each buffer is either idle, or allocated to one packet. A header flit can be forwarded to a buffer if this buffer is idle. This allocates the buffer to the packet. A body or trailer flit can be forwarded to a buffer if this buffer is allocated to its packet and is not full. The last flit frees the buffer. If the header flit is blocked in the network, the buffer fills up, and once full, no more flits can be sent: this effect is called "back-pressure" and can be propagated back to the source.
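The allocation rule can be stated compactly in code. The following C sketch is purely illustrative (the types, the packet identifiers, and the omission of the downstream drain that eventually frees the buffer are simplifications, not part of any standard):

typedef enum { FLIT_HEADER, FLIT_BODY, FLIT_TAIL } flit_type;

typedef struct {
    int owner;       /* packet currently holding the buffer; -1 = idle */
    int used;        /* flits queued in the buffer                     */
    int capacity;
} flit_buffer;

/* Returns 1 if the flit may advance into the buffer, 0 if it must
   stall (back-pressure). Draining flits to the next hop, which
   decrements 'used' and clears 'owner' after the tail leaves, is
   omitted for brevity. */
int try_forward(flit_buffer *b, flit_type t, int packet_id)
{
    if (b->used == b->capacity)
        return 0;                    /* buffer full: stall */
    if (t == FLIT_HEADER) {
        if (b->owner != -1)
            return 0;                /* buffer held by another packet */
        b->owner = packet_id;        /* header claims the buffer */
    } else if (b->owner != packet_id) {
        return 0;                    /* only the owning packet may enter */
    }
    b->used++;
    return 1;
}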
The name "wormhole" plays on the way packets are sent over the links: the address is so short that it can be translated before the message itself arrives. This allows the router to quickly set up the routing of the actual message and then "bow out" of the rest of the conversation. Since a packet is transmitted flit by flit, it may occupy several flit buffers along its path, creating a worm-like image.
This behaviour is quite similar to cut-through switching, commonly called "virtual cut-through," the major difference being that cut-through flow control allocates buffers and channel bandwidth on a packet level, while wormhole flow control does this on the flit level.
In case of circular dependency, this back-pressure can lead to deadlock.
In most respects, wormhole is very similar to ATM or MPLS forwarding, with the exception that the cell does not have to be queued.
One thing special about wormhole flow control is the implementation of virtual channels:
A virtual channel holds the state needed to coordinate the handling of the flits of a packet over a channel. At a minimum, this state identifies the output channel of the current node for the next hop of the route and the state of the virtual channel (idle, waiting for resources, or active). The virtual channel may also include pointers to the flits of the packet that are buffered on the current node and the number of flit buffers available on the next node.
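That per-channel state maps naturally onto a small structure; the following is a hypothetical sketch with illustrative field names, not taken from any implementation:

typedef enum { VC_IDLE, VC_WAITING, VC_ACTIVE } vc_state;

typedef struct {
    vc_state state;        /* idle, waiting for resources, or active  */
    int      out_channel;  /* output channel for the next hop         */
    int      head, tail;   /* this packet's flits buffered locally    */
    int      credits;      /* flit buffers available on the next node */
} virtual_channel;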
Example
Consider the 2x2 network of the figure on the right, with 3 packets to be sent: a pink one, made of 4 flits, 'UVWX', from C to D; a blue one, made of 4 flits, 'abcd', from A to F; and a green one, made of 4 flits, 'ijkl', from E to H. We assume that the routing has been computed, as drawn, and implies a conflict over a buffer in the bottom-left router. The throughput is one flit per time unit.
First, consider the pink flow: at time 1, the flit 'U' is sent to the first buffer; at time 2, the flit 'U' goes through the next buffer (assuming the computation of the route takes no time), and the flit 'V' is sent to the first buffer, and so on.
The blue and green flows require a step-by-step presentation:
Time 1: Both the blue and green flows send their first flits, 'i' and 'a'.
Time 2: The flit 'i' can go on into the next buffer. But a buffer is dedicated to a packet from its first to its last flit, and so the 'a' flit cannot be forwarded. This is the start of a back-pressure effect. The 'j' flit can replace the 'i' flit. The 'b' flit can be sent.
Time 3: The green packet goes on. The blue 'c' flit cannot be forwarded (the buffer is occupied with the 'b' and 'a' flits): this back-pressure effect reaches the packet source.
Time 4: As in time 3.
Time 5: The green packet no longer uses the bottom-left buffer. The blue packet is unblocked and can be forwarded (assuming that the 'unblocked' information can be forwarded in zero time).
Time 6-10: The blue packet goes through the network.
Advantages
Wormhole flow control makes more efficient use of buffers than cut-through. Where cut-through requires many packets' worth of buffer space, the wormhole method needs very few flit buffers (comparatively).
An entire packet need not be buffered to move on to the next node, decreasing network latency compared to store-and-forward switching.
Bandwidth and channel allocation are decoupled.
Usage
Wormhole techniques are primarily used in multiprocessor systems, notably hypercubes. In a hypercube computer each CPU is attached to several neighbours in a fixed pattern, which reduces the number of hops from one CPU to another. Each CPU is given a number (typically only 8-bit to 16-bit), which is its network address, and packets to CPUs are sent with this number in the header. When the packet arrives at an intermediate router for forwarding, the router examines the header (very quickly), sets up a circuit to the next router, and then bows out of the conversation. This reduces latency (delay) noticeably compared to store-and-forward switching that waits for the whole packet before forwarding. More recently, wormhole flow control has found its way to applications in Network On Chip systems (NOCs), of which multi-core processors are one flavor. Here, many processor cores, or on a lower level, even functional units can be connected in a network on a single IC package. As wire delays and many other non-scalable constraints on linked processing elements become the dominating factor for design, engineers are looking to simplify organized interconnection networks, in which flow control methods play an important role.
The IEEE 1355 and SpaceWire technologies use wormhole.
Virtual channels
An extension of worm-hole flow control is Virtual-Channel flow control, where several virtual channels may be multiplexed across one physical channel. Each unidirectional virtual channel is realized by an independently managed pair of (flit) buffers. Different packets can then share the physical channel on a flit-by-flit basis.
Virtual channels were originally introduced to avoid the deadlock problem, but they can also be used to reduce wormhole blocking, improving network latency and throughput. Wormhole blocking occurs when a packet acquires a channel, thus preventing other packets from using the channel and forcing them to stall.
Suppose a packet P0 has acquired the channel between two routers. In absence of virtual channels, a packet P1 arriving later would be blocked until the transmission of P0 has been completed.
If virtual channels are implemented, the following improvements are possible:
Upon arrival of P1, the physical channel can be multiplexed between them on a flit-by-flit basis, so that both packets proceed at half speed (depending on the arbitration scheme).
If P0 is a full-length packet whereas P1 is only a small control packet a few flits long, then this scheme allows P1 to pass through both routers while P0 is slowed down for a short time, corresponding to the transmission of a few flits. This reduces latency for P1.
Assume that P0 is temporarily blocked downstream from the current router. Throughput is increased by allowing P1 to proceed at the full speed of the physical channel. Without virtual channels, P0 would be occupying the channel, without actually using the available bandwidth (since it is being blocked).
Using virtual channels to reduce wormhole blocking has many similarities to using virtual output queueing to reduce head-of-line blocking.
Routing
A mix of source routing and logical routing may be used in the same wormhole-switched packet.
The value of the first byte of a Myrinet or SpaceWire packet is the address of the packet.
Each SpaceWire switch uses the address to decide how to route the packet.
Source routing
With source routing, the packet sender chooses how the packet is routed through the switch. If the first byte of an incoming SpaceWire packet is in the range 1 to 31, it indicates the corresponding port 1 to 31 of the SpaceWire switch. The SpaceWire switch then discards that routing character and sends the rest of the packet out that port, exposing the next byte of the original packet to the next SpaceWire switch. The packet sender may use source routing to explicitly specify the complete path through the network to the final destination in this fashion.
Logical routing
With logical routing, the SpaceWire switch itself decides how to route the packet. If the address (the first byte) of an incoming SpaceWire packet is in the range 32 to 255, the SpaceWire switch uses that value as an index into an internal routing table that indicates which port(s) to send the packet out of and whether to delete or retain that first byte. Address 0 is used to communicate directly with the switch, and may be used to set the routing table entries for that switch.
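The header-byte convention described above can be sketched as follows; the types and table contents are hypothetical, not taken from any SpaceWire implementation:

typedef struct {
    unsigned char out_port;  /* where to forward the packet         */
    int delete_header;       /* delete or retain the address byte?  */
} route_entry;

/* Decide the output port for a packet from its first byte. Returns the
   port number (or -1 when the packet is addressed to the switch itself)
   and sets *strip when the address byte is to be discarded. */
int route_first_byte(unsigned char b, const route_entry table[256], int *strip)
{
    if (b == 0) {            /* address 0: packet is for the switch  */
        *strip = 1;
        return -1;           /* handled locally, e.g. table updates  */
    }
    if (b <= 31) {           /* source routing: byte names the port  */
        *strip = 1;          /* discard the routing character        */
        return b;
    }
    /* logical routing: byte indexes the internal routing table */
    *strip = table[b].delete_header;
    return table[b].out_port;
}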
See also
IEEE 1355
SpaceWire
References
Flow control (data)
Routing
Network on a chip
|
35493282
|
https://en.wikipedia.org/wiki/Travis%20Doering
|
Travis Doering
|
Travis Doering (born July 14, 1991 in Vancouver, British Columbia) is a Canadian systems analyst, writer and film producer.
Career
He is best known for his work as a security consultant and writer in both the film and news media. In 2018, Doering revealed one of the largest data breaches in Canadian history, affecting millions of customers of the defunct computer retailer NCIX. In 2015, via the now-defunct website Hacker Film Blog, Doering revealed vulnerabilities in Apple's iCloud platform and the breach and subsequent theft of customer data from the internet security software company Bitdefender. In the film industry, Doering has served as a technical consultant providing hacking and information technology dialogue on several film and television productions, including the Canadian science fiction series "Continuum", the police procedural "Motive" and the American zombie film "Dead Rising: Endgame". In addition to his work in media, Doering has provided information security consultancy services as a systems analyst for high-risk individuals and businesses since 2006. Before writing, Doering worked in the casting and production departments on many Canadian and American film and television productions.
Security Research
In September 2018, Doering posted an editorial titled NCIX Data Breach on the blog of his cyber security company Privacy Fly. It outlined a severe data breach at the bankrupt Canadian retailer NCIX, in which millions of business records detailing 15 years' worth of transactions were sold in a series of backroom deals. The editorial prompted an investigation into the sale by the RCMP and the Office of the Information and Privacy Commissioner of British Columbia, as well as a civil lawsuit. In July 2015, Doering created the Hacker Film Blog, where he co-authored an article about a security breach at antivirus maker BitDefender. The story was later picked up by Forbes, The Washington Times, and PC World. Two months later, in September 2015, Doering posted a documentary titled "Vulnerability: The Secrets Behind iCloud Hacking". The documentary exposed vulnerabilities being exploited by an underground hacking collective known as RipSec, whose members breached over eleven thousand iCloud accounts, a significant portion of which belonged to Hollywood celebrities like Amanda Seyfried, Kate Mara, and Jamie Foxx.
Edward Snowden Movie
In September 2013, Doering and film director Jason Bourque set out to crowdfund a feature film titled "Classified: The Edward Snowden Story", a biographical feature film based on the life of NSA leaker Edward Snowden. In January 2014, production was shut down and the project was cancelled after losing several key donors and failing to reach its total 1.7-million-dollar funding goal. When the production was shut down, Bourque and Doering announced that "Classified" would be split into two separate projects: the first, "Vulnerability", a documentary focusing on IT security and the internet; the second, a feature film based on Snowden's life that would be produced in cooperation with like-minded production companies and film distributors in the near future. In January 2014, existing backers of "Classified" had the option to transfer their donations to "Vulnerability" or have the funds fully refunded. "Vulnerability" was released on September 25, 2015.
Filmography
Feature credits
Television credits
References
External links
Doering's Website
Twitter
1991 births
Canadian film producers
Canadian television producers
Living people
People from Vancouver
|
66943379
|
https://en.wikipedia.org/wiki/The%20Preparation%20of%20Programs%20for%20an%20Electronic%20Digital%20Computer
|
The Preparation of Programs for an Electronic Digital Computer
|
The Preparation of Programs for an Electronic Digital Computer (sometimes called WWG, after its authors' initials) was the first book on computer programming. Published in 1951, it was written by Maurice Wilkes, David Wheeler, and Stanley Gill of Cambridge University. The book was based on the authors' experiences constructing and using EDSAC, one of the first practical computers in the world.
Contents
Overview
It was the first book to describe a number of important concepts in programming, including:
the first account of a library of reusable code
the first API
the first explanation of using a memory dump for debugging a program, which the book called a "post-mortem routine"
the first use of the term "assembly" in programming, though with a somewhat different meaning than the modern use of the term
Much of the book is dedicated to explaining the library. This consisted of eighty-eight subroutines implementing mathematical operations like the calculation of trigonometric functions and arithmetic operations on complex numbers. The library was a physical collection stored in a filing cabinet containing punched paper tape encoding the subroutines. This included a "library catalog" describing how a programmer could use each subroutine; today this is called API documentation.
Part one
Chapter 6 - Debugging
This chapter extensively investigates "proofreading" and the location of mistakes in programs. It also advises against frequent refactoring, as it introduces more mistakes as the programmer tries to improve the program.
Chapter 7 - Examples of programs for EDSAC
Includes examples of the calculation of formulae and definite integrals, the integration of ordinary differential equations, and the evaluation of the Fourier transform using EDSAC programs.
Chapter 8 - Automatic programming
Discusses the assembly (compilation) and interpretation of a program. It also discusses the motivation behind "floating addresses", which are, in modern terms, variable references (akin to C++ references) that the compiler replaces with real memory addresses on the fly every time the subroutine is invoked.
Part two
This part mostly contains specifications of the EDSAC standard library's subroutines. Included are subroutines for floating-point arithmetic, complex numbers, debugging, exponential calculations, integration, differential equations, logarithms, quadrature, and trigonometric functions.
Publication history
The 1951 book was a mass-printed version of a report titled Report on the Preparation of Programmes for the EDSAC and the Use of the Library of Subroutines written in September 1950 for private circulation and distributed to no more than 100 people. Though written in England, the book was published by Addison-Wesley in the United States.
At the time WWG was published there were very few digital computers in the world. EDSAC, on which the book was based, was the first computer in the world to provide a practical computing service for researchers. Demand for the book was so limited initially that it took six years to sell out the first edition. As computers became more common in the 1950s, the book became the standard textbook on programming for a time. The second edition was printed in 1957. By that time, technology had advanced to the point that WWG was somewhat outdated.
Though WWG was the first published, book-length treatment of computer programming, it was not the first writing on the topic. The subject of programming had been pioneered by Ada Lovelace more than a century prior. It had also been written about more recently by John von Neumann, whose EDVAC Report of 1945 initially inspired Wilkes to create EDSAC.
See also
The C Programming Language
References
External links
The Preparation of Programs for an Electronic Digital Computer second edition (1957) text at The Internet Archive
Handbooks and manuals
Computer programming books
Addison-Wesley books
1951 non-fiction books
|
3125191
|
https://en.wikipedia.org/wiki/Pharmacosiderite
|
Pharmacosiderite
|
Pharmacosiderite is a hydrated basic ferric arsenate, with chemical formula KFe4(AsO4)3(OH)4·(6-7)H2O and a molecular weight of 873.38 g/mol. It consists of the elements arsenic, iron, hydrogen, potassium, sodium and oxygen. It has a Mohs hardness of 2 to 3, about that of a fingernail. Its specific gravity is about 2.7 to 2.9; it has indistinct cleavage and is usually transparent or translucent. It has a yellow or white streak and a yellow, green, brown or red color. Its lustre is adamantine, vitreous or resinous, and its fracture is conchoidal, brittle and sectile.
Pharmacosiderite has an isometric crystal system, with yellowish-green, sharply defined cube crystals. Its crystals are doubly refracting, and exhibit a banded structure in polarized light. When placed in ammonium solution, a crystal changes color to a distinguishing red. Upon placing it into dilute hydrochloric acid the original color is restored.
This secondary origin mineral is normally formed in the oxidation zones of ore deposits. The alteration of arsenopyrite, tennantite and other primary arsenates can form pharmacosiderite. It can also form from precipitation of hydrothermal solutions, but only rarely. It can be found in abundance in Cornwall, Hungary and the U.S. state of Utah.
When it was first discovered, pharmacosiderite was known as cube ore. The present name, given by J. F. L. Hausmann in 1813, is made up of the Greek words for arsenic and iron, its two most significant constituent elements: pharmakos means poison, which is related to arsenic, and sideros means iron.
Pharmacolite and picropharmacolite, though also arsenates, are associated with pharmacosiderite only through nomenclature. Siderite, a carbonate mineral, shares only the element iron with pharmacosiderite.
References
WebMineral
Mineral Galleries
MinDat
Attribution:
Potassium minerals
Iron(III) minerals
Arsenate minerals
Cubic minerals
Minerals in space group 215
|
484241
|
https://en.wikipedia.org/wiki/Strategic%20information%20system
|
Strategic information system
|
Strategic information systems (SIS) are information systems that are developed in response to corporate business initiatives. They are intended to give a competitive advantage to the organization. They may deliver a product or service that is at a lower cost, that is differentiated, that focuses on a particular market segment, or that is innovative.
Strategic information management (SIM) is a salient feature in the world of information technology (IT). In a nutshell, SIM helps businesses and organizations categorize, store, process and transfer the information they create and receive. It also offers tools for helping companies apply metrics and analytical tools to their information repositories, allowing them to recognize opportunities for growth and pinpoint ways to improve operational efficiency.
History
The concept of SIS was first introduced into the field of information systems in 1982-83 by Dr. Charles Wiseman, President of a newly formed consultancy called "Competitive Applications," (cf. NY State records for consultancies formed in 1982) who gave a series of public lectures on SIS in NYC sponsored by the Datamation Institute, a subsidiary of Datamation Magazine.
The following quotations from the preface of the first book establishes the basic idea behind the notion of SIS:
"I began collecting instances of information systems used for strategic purposes five years ago, dubbing them "strategic information systems" (Internal Memo, American Can Company (Headquarters), Greenwich, CT, 1980). But from the start I was puzzled by their occurrence. At least theoretically I was unprepared to admit the existence of a new variety of computer application. The conventional view at the time recognized only management information systems, and management support systems, the former used to satisfy the information needs and the latter to automate basic business processes of decision makers. (Cf. articles by Richard L. Nolan, Jack Rockart, Michael Scott Morton, et al. at that time)...But as my file of cases grew, I realized that the conventional perspective on information systems was incomplete, unable to account for SIS. The examples belied the theory, and the theory in general blinded believers from seeing SIS. Indeed, some conventional information systems planning methodologies, which act like theories in guiding the systematic search for computer application opportunities, exclude certain SIS possibilities from what might be found. (ibid.)"
"This growing awareness of the inadequacy of the dominant dogma of the day led me to investigate the conceptual foundations, so to speak, of information systems. At first, I believed that the conventional gospel could be enlarged to accommodate SIS. But as my research progressed, I abandoned this position and concluded that to explain SIS and facilitate their discovery, one needed to view uses of computer (information) technology from a radically different perspective."
"I call this the strategic perspective on information systems (technology). The chapters to follow present my conception of it. Written for top executives and line managers, they show how computers (information technology) can be used to support or shape competitive strategy."
Information systems
Alice Corp. v. CLS Bank International
Alice Corp. v. CLS Bank International, 573 U.S. 208 (2014), was a United States Supreme Court decision about patent eligibility. The issue in the case was whether certain patent claims for a computer-implemented, electronic escrow service covered abstract ideas, which would make the claims ineligible for patent protection. The patents were held to be invalid because the claims were drawn to an abstract idea, and implementing those claims on a computer was not enough to transform that abstract idea into patentable subject matter.
Although the Alice opinion did not mention software as such, the case was widely considered a decision on software patents, particularly patents on software for business methods. Alice and the 2010 Supreme Court decision Bilski v. Kappos, another case involving software for a business method (which likewise did not opine on software as such), were the most recent Supreme Court rulings on the patent eligibility of software-related inventions since Diamond v. Diehr in 1981.
Background
Alice Corporation ("Alice") owned four patents on electronic methods and computer programs for financial-trading systems. These financial-trading systems described how two parties could settle their exchange through a third party to reduce "settlement risk"—the risk that one party will perform while the other will not. Alice alleged that CLS Bank International and CLS Services Ltd. (collectively "CLS Bank") began to use similar technology in 2002. Alice accused CLS Bank of infringement of Alice's patents, and when the parties did not resolve the issue, CLS Bank filed suit against Alice in 2007, seeking a declaratory judgment that the claims at issue were invalid. Alice counterclaimed, alleging infringement.
The relevant claims are in these patents:
US patent 5,970,479, filed 1992, issued 1999 (available at the USPTO site)
US patent 6,912,510, filed 2000, issued 2005 (available at the USPTO site)
US patent 7,149,720, filed 2002, issued 2006 (available at the USPTO site)
US patent 7,725,375, filed 2005, issued 2010 (available at the USPTO site)
Rulings in lower courts
District court
In 2007, CLS Bank sued Alice in the United States District Court for the District of Columbia seeking a declaratory judgment that Alice's patents were invalid and unenforceable and that CLS Bank had not infringed them. Alice countersued CLS Bank for infringement of the patents. After the court had allowed initial, limited discovery on the questions of CLS Bank's operations and its relationship to the allegedly infringing CLS Bank system, the court ruled on the parties' cross-motions for summary judgment. It declared each of Alice's patents invalid because the claims concerned abstract ideas, which are not eligible for patent protection under 35 U.S.C. § 101.
The court stated that a method "directed to an abstract idea of employing an intermediary to facilitate simultaneous exchange of obligations in order to minimize risk" is a "basic business or financial concept," and that a "computer system merely 'configured' to implement an abstract method is no more patentable than an abstract method that is simply 'electronically' implemented." In so holding, the district court relied on Bilski v. Kappos, in which the Supreme Court had held that Bilski's claims to business methods for hedging against the risk of price fluctuations in commodities markets were not patent-eligible because they claimed, and thereby preempted (i.e., monopolized), the abstract idea of hedging against risk.
Federal Circuit
Alice appealed the decision to the United States Court of Appeals for the Federal Circuit. A panel of the appeals court decided 2–1 in July 2012 to reverse the lower court's decision, but the Federal Circuit vacated that ruling and set the case for reargument en banc. It ordered the parties (and any amici curiae who cared to brief the matter) to address the following questions:
what test should the court adopt to determine whether a computer-implemented invention is a patent-ineligible abstract idea;
whether the presence of a computer in a claim could ever make patent-ineligible subject matter patentable; and
whether method, system, and media claims should be considered equivalent under § 101.
A deeply fractured en banc court of ten Federal Circuit judges issued seven different opinions, with no single opinion supported by a majority on all points. Seven of the ten judges upheld the district court's ruling that Alice's method claims and computer-readable-medium claims were not patent-eligible, though they did so for differing reasons. The judges split five to five on the district court's ruling that Alice's computer-system claims were not patent-eligible. The court as a whole could not agree on a single standard for determining whether a computer-implemented invention is a patent-ineligible abstract idea.
Plurality opinion
In the leading, five-member plurality opinion, written by Judge Lourie and joined by Judges Dyk, Prost, Reyna, and Wallach, the court articulated a test that focused on first identifying the abstract idea or fundamental concept applied by the claim and then determining whether the claim would preempt that idea. The analysis proceeded in four steps:
determine whether the claimed invention fits within one of the four classes in the statute: process, machine, manufacture, or composition of matter;
determine whether the claim poses a risk of "preempting an abstract idea";
identify the idea supposedly at risk of preemption by defining "whatever fundamental concept appears wrapped up in the claim";
in a final step called "inventive concept" analysis, determine whether there is genuine, human contribution to the claimed subject matter. The "balance of the claim," or the human contribution, must "contain[] additional substantive limitations that narrow, confine, or otherwise tie down the claim so that, in practical terms, it does not cover the full abstract idea itself."
The last part of the Federal Circuit plurality analysis "considers whether steps combined with a natural law or abstract idea are so insignificant, conventional, or routine as to yield a claim that effectively covers the natural law or abstract idea itself." The Supreme Court later adopted a similar principle, but combined the first three steps into a single identification step, producing a two-step analysis.
Four-judge opinion
Chief Judge Rader and Circuit Judges Linn, Moore, and O'Malley filed an opinion concurring in part and dissenting in part. Their patent-eligibility analysis focused on whether the claim, as a whole, was limited to an application of an abstract idea, or was merely a recitation of the abstract idea. They would have held Alice's system claims patent eligible because they were limited to a computer-implemented application.
Judge Rader's "reflections"
Judge Rader also appended "additional reflections" to the ruling (joined by no other judge), expressing his view that the patent statute allows very broad patentability under § 101 and his understanding that natural laws are restricted to "universal constants created, if at all, only by God, Vishnu, or Allah." Referencing Einstein, he stated that "even gravity is not a natural law."
Opinions supporting patent eligibility of all claims
Judge Newman concurred in part and dissented in part, calling for the Federal Circuit to clarify the interpretation of § 101. She would have held all of Alice's claims patent eligible.
Judges Linn and O'Malley dissented, arguing that all claims were patent eligible. They called for legislative, rather than judicial, action to address the "proliferation and aggressive enforcement of low quality software patents" cited in the many amicus curiae briefs and suggested new laws to limit the term of software patents or limit the scope of such patents.
Supreme Court
Amicus curiae participation
The keen interest of the software industry and patent professionals in the issue was illustrated by the 52 amicus curiae briefs that companies and groups filed urging the Supreme Court to decide the question of software patent eligibility. The amici included the Electronic Frontier Foundation, the Software Freedom Law Center, the Institute of Electrical and Electronics Engineers, the Intellectual Property Law Association of Chicago, Accenture Global Services, and the USPTO itself on behalf of the United States. Nearly all of the briefs argued that the patents should be invalidated, but they disagreed as to the proper reasoning.
A brief prepared by Google, Amazon, and other companies argued that the patent covered an abstract idea, that such patents actually harm innovation, and that the real innovation lies in working out the details of a functioning system.
Microsoft, Adobe, and Hewlett-Packard argued that the claimed method was nothing more than an unpatentable business method (per Bilski v. Kappos), and that merely saying to perform it with a computer did not change that fact.
The Free Software Foundation and others argued that no software should be patentable unless it passes the machine-or-transformation test, because software patents otherwise block both innovation and scientific collaboration.
IBM disagreed with the "abstract ideas" reasoning and argued that the patent should instead be struck down as obvious.
Finally, a consortium of retailers and manufacturers, including Dillard's and Hasbro, simply asked for a clear rule.
Supreme Court opinions
The Court unanimously held the claims invalid, in an opinion by Justice Clarence Thomas.
Majority opinion
Relying on Mayo v. Prometheus, the Court held that an abstract idea cannot be patented merely because it is implemented on a computer. In Alice, a software implementation of an escrow arrangement was not patent-eligible because it was an implementation of an abstract idea: escrow is not a patentable invention, and merely using a computer system to manage escrow debts does not rise to the level needed for a patent. Under Alice, the "Mayo framework" is to be used in all cases in which a court must decide whether a claim is patent-eligible.
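As the Court summarized the representative method claim, a computer creates "shadow" records mirroring the parties' real-world accounts, updates them as transactions are entered, permits only those exchanges that the shadow balances can cover, and at the end of the day instructs the financial institutions to settle accordingly. The following sketch (a minimal illustration in Python; the names and data structures are assumptions for illustration, not drawn from the patents) shows the kind of generic, conventional record-keeping the claims involved:

class ShadowLedger:
    """Third-party intermediary keeping 'shadow' records of real accounts."""

    def __init__(self, opening_balances):
        # Shadow records begin at the institutions' start-of-day balances.
        self.shadow = dict(opening_balances)
        self.permitted = []  # exchanges the intermediary has allowed

    def submit(self, payer, payee, amount):
        """Permit an exchange only if the payer's shadow balance covers it,
        so that neither party bears settlement risk."""
        if self.shadow.get(payer, 0) >= amount:
            self.shadow[payer] -= amount
            self.shadow[payee] = self.shadow.get(payee, 0) + amount
            self.permitted.append((payer, payee, amount))
            return True
        return False  # would overdraw the shadow record: exchange refused

    def end_of_day_instructions(self):
        """Instructions telling the institutions to settle exactly the
        exchanges the shadow ledger permitted."""
        return list(self.permitted)

ledger = ShadowLedger({"partyA": 100, "partyB": 50})
ledger.submit("partyA", "partyB", 70)   # permitted
ledger.submit("partyA", "partyB", 70)   # refused: shadow balance is now 30
print(ledger.end_of_day_instructions())  # [('partyA', 'partyB', 70)]

Every operation in the sketch is routine bookkeeping on a general-purpose computer, which is why, under the second step described below, the Court found no "inventive concept" beyond the abstract idea of intermediated settlement.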
The Court began by recognizing that the patents cover what amounts to a computerized escrow arrangement. The Court held that Mayo explained how to address the problem of determining whether a patent claimed a patent-ineligible abstract idea or instead a potentially patentable practical implementation of an idea. This requires using a "two-step" analysis.
In the first Mayo step, the court must determine whether the patent claim under examination is directed to an abstract idea, such as an algorithm, method of computation, or other general principle. If not, the claim is potentially patentable, subject to the other requirements of the patent code. If the answer is affirmative, the court must proceed to the next step.
In the second step of the analysis, the court must determine whether the patent adds to the idea "something extra" that embodies an "inventive concept" (134 S. Ct. at 2355: "We have described step two of this analysis as a search for an 'inventive concept'—i.e., an element or combination of elements that is 'sufficient to ensure that the patent in practice amounts to significantly more than a patent upon the [ineligible concept] itself.'").
If no inventive element is added to the underlying abstract idea, the court should find the patent invalid under § 101. This means that the implementation of the idea must not be generic, conventional, or obvious if it is to qualify for a patent. Ordinary and customary use of a general-purpose digital computer is insufficient, the Court said: "merely requir[ing] generic computer implementation fail[s] to transform [an] abstract idea into a patent-eligible invention."
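Taken together, the two steps behave like a short decision procedure. The following sketch is a hedged outline of that structure, not a statement of legal doctrine; the Claim fields stand in for findings that a court would actually have to make on the facts of each case.

from dataclasses import dataclass

@dataclass
class Claim:
    directed_to_abstract_idea: bool  # the court's step-one finding
    adds_inventive_concept: bool     # the court's step-two finding

def eligible_under_alice(claim: Claim) -> bool:
    # Step 1: a claim not directed to an abstract idea (or another judicial
    # exception) passes, subject to the patent code's other requirements.
    if not claim.directed_to_abstract_idea:
        return True
    # Step 2: an abstract idea survives only if the remaining elements add
    # "something extra" beyond generic, conventional implementation.
    return claim.adds_inventive_concept

# Alice's claims: intermediated settlement (abstract) on a generic computer.
print(eligible_under_alice(Claim(True, False)))  # False: ineligible under § 101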
The ruling continued with these points:
A mere instruction to implement an abstract idea on a computer "cannot impart patent eligibility."
"[T]he mere recitation of a generic computer cannot transform a patent-ineligible abstract idea into a patent-eligible invention."
"Stating an abstract idea 'while adding the words "apply it"' is not enough for patent eligibility."
"Nor is limiting the use of an abstract idea to a particular technological environment."
Concurring opinion
Three justices joined a concurring opinion, written by Justice Sotomayor, that essentially reiterated now-retired Justice Stevens's argument from Bilski that, on historical grounds, business methods are categorically outside the patent system. Because they also agreed that the claimed subject matter was an abstract idea, they joined the main opinion as well.
Reception
According to The Washington Post:
[W]hile the court struck down what was universally said to be a bad patent, it didn't do much to say what kinds of software should be patentable. In other words, the court decided the most basic conflict in the case, but more or less declined to offer guidance for other, future cases.
The Electronic Frontier Foundation said that the Supreme Court:
reaffirmed that merely adding "a generic computer to perform generic computer functions" does not make an otherwise abstract idea patentable. This statement (and the opinion itself) makes clear that an abstract idea along with a computer doing what a computer normally does is not something our patent system was designed to protect. Admittedly, the Supreme Court did not offer the clearest guidance on when a patent claims merely an abstract idea, but it did offer guidance that should help to invalidate some of the more egregious software patents out there.
The Software Freedom Law Center said the Supreme Court:
took one more step towards the abolition of patents on software inventions. Upholding its previous positions, the Court held that abstract ideas and algorithms are unpatentable. It also emphasized that one cannot patent "an instruction to apply [an] abstract idea ... using some un-specified, generic computer."
The Coalition for Patent Fairness, which advocates for patent reform legislation, said:
[N]either the ruling—nor any single act by the court or the executive branch—can do what is needed to make the business model of being a patent troll unprofitable and unattractive.
Some commentators expressed disappointment with the opinion because it did not define more comprehensively the boundaries between abstract ideas and patent-eligible implementations of ideas. They were particularly critical of Justice Thomas's statement—
In any event, we need not labor to delimit the precise contours of the "abstract ideas" category in this case. It is enough to recognize that there is no meaningful distinction between the concept of risk hedging in Bilski and the concept of intermediated settlement at issue here. Both are squarely within the realm of 'abstract ideas' as we have used that term.
For example, Robert Merges said, "To say we did not get an answer is to miss the depth of the non-answer we did get." John Duffy remarked, "[T]he Supreme Court has been remarkably resistant to providing clear guidance in this area, and this case continues that trend."
Richard H. Stern defended the opinion as "the expectable price of unanimity in a nine-member tribunal," arguing that the "greater sensed legitimacy and precedential stability" of a unanimous opinion "outbalanced" the shortcomings of a lack of clear guidance as to details. This commentator also asserted that "it is sensible to make narrow, incremental rulings as to software patent eligibility, because at present we are not so well informed that we can speak with confidence in very broad terms."
Gene Quinn, a patent lawyer and advocate of patenting software, opined: "In what can only be described as an intellectually bankrupt opinion, the Supreme Court never once used the word 'software' in its decision. This is breathtaking given that the Supreme Court decision in Alice will render many hundreds of thousands of software patents completely useless." He added: "In years to come this decision will be ridiculed for many legitimate reasons."
Subsequent developments
Despite the Court's avoidance of the word "software" in the opinion, the Alice decision has had a dramatic effect on the validity of so-called software patents and business-method patents. Since Alice, these patents have suffered a very high mortalityity rate: hundreds have been invalidated under § 101 of the U.S. patent laws in federal district courts, whose judges, applying Alice, have found many such claims to be patent-ineligible abstract ideas.
Federal Circuit Judge William Curtis Bryson, sitting by designation as a trial judge, explained the high mortality rate in Loyalty Conversion Systems Corp. v. American Airlines:
In short, such patents, although frequently dressed up in the argot of invention, simply describe a problem, announce purely functional steps that purport to solve the problem, and recite standard computer operations to perform some of those steps. The principal flaw in these patents is that they do not contain an "inventive concept" that solves practical problems and ensures that the patent is directed to something "significantly more than" the ineligible abstract idea itself. See CLS Bank, 134 S. Ct. at 2355, 2357; Mayo, 132 S. Ct. at 1294. As such, they represent little more than functional descriptions of objectives, rather than inventive solutions. In addition, because they describe the claimed methods in functional terms, they preempt any subsequent specific solutions to the problem at issue. See CLS Bank, 134 S. Ct. at 2354; Mayo, 132 S. Ct. at 1301-02. It is for those reasons that the Supreme Court has characterized such patents as claiming "abstract ideas" and has held that they are not directed to patentable subject matter.
Patent issuance statistics from the PTO show a significant drop in the number of business-method patents (PTO class 705) issued in the months following the Alice decision: the PTO issued fewer than half as many per month as it had in the period before Alice. At the same time, issuance of other types of software patents rose. Before Alice, business-method patents made up roughly 10% of issued software patents; afterwards the share dropped by half, to about 5%.
See also
List of United States Supreme Court cases, volume 573
Software patents under United States patent law
Mayo Collaborative Servs. v. Prometheus Labs., Inc., a 2012 Supreme Court decision related to health care patent law
DDR Holdings v. Hotels.com, 773 F.3d 1245 (Fed. Cir. 2014), the first post-Alice Federal Circuit decision to uphold the validity of computer-implemented patent claims (applying the two-step framework)
Enfish, LLC v. Microsoft Corp., 822 F.3d 1327 (Fed. Cir. 2016), post-Alice Federal Circuit decision upholding patent claims to a logical model for a computer database.
Amdocs (Israel) Ltd. v. Openet Telecom, Inc., 841 F.3d 1288 (Fed. Cir. 2016), post-Alice Federal Circuit decision holding computer software-based patent claims eligible.
External links
Alice Corporation patent page, including links to judicial orders and opinions, and amicus and party briefs
United States Supreme Court cases
United States patent case law
Software patent case law
2014 in United States case law
United States Supreme Court cases of the Roberts Court