For years, Apple fans have claimed that Macs are invulnerable to attack, while belittling Windows as being full of security holes. Now the tables are turned -- not only has a Trojan infected Macs and created a botnet, but several well-known researchers warn that Mac OS X is less secure than either Windows or Linux.

In the last few days, there's been a great deal of publicity about the discovery of the world's first Mac botnet. When Mac users downloaded a pirated copy of iLife, their machines were taken over by a Trojan. At that point, according to Symantec experts Andy Cianciotto and Angela Thigpen:

When the Trojanized installer is executed, it also runs the malicious program iworkservices. The Trojan, OSX.Iservice, targets the Mac OS and is compiled as a Mach-O multi-architecture binary. This allows the Trojan to run natively on both PowerPC and x86 architectures. The Trojan acts as a back door and opens a port on the local host for connections. It then attempts to connect to the following remote hosts:

Symantec notes, in its description of the Trojan, that the threat of being infected by this malware is low. Still, the mere fact of its existence, and of a botnet run by it, shows that the claims of Mac folks that Macs are invulnerable to attack are simply false.

If that news wasn't bad enough for Mac fans, the New York Times reports that security researcher Dino A. Dai Zovi claims that Macs are less secure than either Windows or Linux machines. It's no idle claim; Dai Zovi won the Pwn2Own hacking contest in 2007, and is an author of The Mac Hacker's Handbook. He told the Times that "I have found that Macs are less secure than their current Windows and Linux counterparts. At least for the last several years, Apple has lagged behind in security, largely because the threat hasn't been there."

He's not alone. Mac security expert Rich Mogull told the Times the same thing. The most recent versions of Mac OS X are "inherently less secure than the latest versions of Windows," he told the newspaper. Mogull says that several years ago Mac OS X was more secure than Windows because of its UNIX foundation, but new kinds of attacks mean that "all the Unix protections can be circumvented."

So for those in the Mac community who believe the Mac is invulnerable, there's this simple message: You're living in the past.
In 2009, the Canadian Space Agency committed $110 million to the development of advanced robotics and space exploration technologies. The funding was designed to be spread over three years -- and now, right on schedule, the CSA has unveiled the results of its efforts: a fleet of rovers that will one day explore the moon and Mars. "One day" being sometime around the year 2020.

The vehicles range in size and function; some are familiarly Spirit- and Opportunity- and Curiosity-like, while others -- like my personal favorite of the group, the golf cart-esque SL-Commander -- are more distinct in form. Some of the vehicles will work independently, Space Safety Magazine reports, while others are designed to function as aides for astronauts. (The Commander, for example, could ostensibly transport astronauts across lunar or Martian landscapes. Or, because it is awesome, it could also drive autonomously.)

Canada's development of this particular fleet, CSA officials point out, makes particular sense. Not only has the country had space-engineering success with its celebrated Canadarm, but it also has a long history of developing technologies for its mining industry. Those are technologies, says Gilles Leclerc, CSA's director-general of space exploration, that will also prove useful in space exploration.

Despite their Canadian origin, though, the vehicles could find use in NASA missions as well. "In fact, we have an invitation right now from NASA to start working on advancing these technologies and taking them to flight for eventually a mission," Leclerc said. And the Canadian rovers should blend right in: All in all, they are quite similar to their American counterparts -- except that, in addition to English and Martian, they also likely speak French.
Many Small to Medium Enterprises (SMEs) across the UK and beyond are unaware that supercomputing can help them become more energy efficient.

With major energy suppliers recently announcing yet more price hikes, companies across all industries are feeling the pinch. Three of the 'big six' British energy firms have increased their prices between eight and ten per cent on last year's costs, and this rise could have a profound effect on the performance of SMEs across the country and beyond. In addition, in our digital age, it is only a natural progression for businesses to increasingly boost their performance by using advanced software for data-intensive tasks such as advanced modeling and simulation, analysis of Big Data and the rendering of high-definition 3D graphics. However, with the additional computational power, time and energy required to complete these demanding tasks, many businesses require further support to meet their customers' needs, and to help reduce their carbon footprint.

Many SMEs are unaware that this support could arrive in the form of supercomputing, also known as high performance computing. Often the belief is that access to supercomputing technology is limited to only the largest companies with the biggest spending power. Although traditionally the preserve of blue chip companies and academia, SMEs can now benefit from access to supercomputing technology, training and support. The technology can significantly boost their output, while increasing their in-house energy efficiency. So how can supercomputing help businesses achieve this?

Significant time savings

With the power of supercomputing, businesses can vastly reduce the time they spend on demanding data-intensive tasks, significantly reducing their power consumption. One SME that has benefited from supercomputing support is Wales-based ThinkPlay.TV, an animation company founded in 2006. To date, its virtual animation work and media sets have featured on the likes of PlayStation, Wii and Xbox games consoles and national UK television channels. Typically, before the company used supercomputing, scenery would take eight hours for them to render in-house, giving them a maximum of two weeks to meet client deadlines. Furthermore, with only two desktops carrying out rendering for large periods of time, the co-founders were both unable to work on anything else. Now, with the use of supercomputing technology, projects that would normally have taken days to complete are finished in hours. As work can be completed so much faster, it has helped the firm win bids for more projects, as well as significantly reducing the energy that it spends on in-house computing processes.

Accessing supercomputing technology remotely can save businesses energy in a number of ways. Firstly, they do not have to travel to access a supercomputing hub. This in itself presents energy savings to businesses. In addition, with remote access technology, businesses can benefit from high performance computing on their own desktop and laptop computers, without the need to continually update in-house technology. As supercomputing dramatically increases the speed of computing processes, this will then free up other on-site IT resources. Therefore, using supercomputing boosts companies' overall IT capacity because all data- and power-intensive processing is taken off site. Doing so will also help to reduce businesses' overall onsite energy consumption.
Remote access to high performance computing can also be provided 'on demand', so companies' use is efficient and timely, and therefore energy and resources are not wasted on 'idling time' from machines. A company directly benefiting from this remote access to supercomputing technology is Calon Cardio-Technology Ltd. The company is designing and developing the next generation of affordable, implantable micro blood pumps for the treatment of chronic heart failure. Calon Cardio uses supercomputing to simulate the flow of the blood inside the pump. Prior to using supercomputing, running just one case could take up to a week, whereas now that process can be shrunk to less than a day, or even a few hours.

Green data centre management

When considering a potential supercomputing provider, businesses should ensure that the supplier has invested in dedicated Data Centre Infrastructure Management (DCIM) software. This allows providers to balance computing capacity, or power drawn, with the current IT load at the time. This means that when the load is low, providers can switch hardware into 'low activity' or 'standby' modes to save energy and its associated costs. Companies should also be sure to confirm whether a provider's servers are designed to automatically go into 'idle' or 'deep sleep' modes during longer periods of inactivity. Ensuring that suppliers have engaged with these critical energy-saving facilities will help to ensure that savings are passed on to businesses, but also ensures that their carbon footprints remain as small as possible.

It's clear to see that supercomputing has a valuable role to play in boosting the competitive capability and energy efficiency of SMEs in a wide range of sectors. The UK Government's current investment is testament to its perceived value to the future of British business. This is down to the fact that supercomputing can reduce the time taken to complete data-intensive tasks, freeing up businesses' IT systems to allow them to complete other tasks more easily. This significant reduction in the time taken to complete tasks also results in increased energy efficiency for SMEs, allowing them to reinvest this time and money into other aspects of service delivery for their customers. Furthermore, by using a supercomputing provider that is already committed to saving energy, companies can also feel safe in the knowledge that their own commitments to green business practice remain intact.

How can businesses find out more about using supercomputing technology?

Whilst purchasing a dedicated supercomputer is clearly out of most small companies' reach, there are now a number of providers in the UK offering companies access to supercomputing technology, training and support. Supercomputing providers across the UK are increasing the level of support available, as they recognise that many businesses have no experience of using this technology. This means businesses don't need any previous experience of supercomputing to enjoy its benefits.

About the Author

David Craddock is chief executive officer of HPC Wales. Prior to his appointment, David Craddock was Director of Enterprise and Collaborative Projects at Aberystwyth University, responsible for leading a team of over thirty and developing the enterprise strategy for the University. Working with the senior management team, David also led a number of change management programmes including business planning for the Aberystwyth/Bangor Partnership, and the merger of the BBSRC research funded institute IGER into the University.
A BA (Hons) graduate from Middlesex University, David previously worked for two Unilever companies over a 23-year period, mainly in international marketing, product development and business development roles in the detergent and speciality chemicals markets. In addition, he has been Director of two SMEs in the technical textile and electrical engineering markets.
Graphene is a one-atom-thick layer of carbon that has been hailed as a potential silicon replacement capable of extending the exponential computing advances that modern society has come to depend on. Despite the material's profile of being strong, flexible, light-weight and a good conductor, there are still a number of challenges that must be addressed before it is suitable for use in microprocessors and other electronics devices. Researchers the world over are pushing hard to advance the status of this potential wonder material.

This week two teams of scientists, one hailing from the University of Texas at Austin and the other a combined team from Rice University and the Georgia Institute of Technology, released different findings relating to graphene. The first sheds light on a particularly thorny challenge regarding how graphene is used in real-world devices, and the second concerns the brittle nature of graphene sheets, which, it has been found, are only as strong as their weakest link. Computational modeling was integral to both projects.

The University of Texas at Austin team, led by Li Shi, a professor of mechanical engineering, along with graduate research assistant Mir Mohammad Sadeghi and post-doctoral fellow Insun Jo, structured an experiment in order to study thermal conductivity when the thickness of graphene on a substrate was increased. Thermal conductivity is a critical property as electronics components head to the nanoscale. It enables heat to spread out so that hot spots are prevented. When graphene is in its ideal state, i.e. freely suspended in a vacuum, it has excellent thermal conductivity. Alas, real-world conditions are not so ideal. As Li Shi explains: "When you fabricate devices using graphene, you have to support the graphene on a substrate and doing so actually suppresses the high thermal conductivity of graphene."

The team observed that thermal conductivity increased as the number of layers grew from a single one-atom layer up to 34 layers, but not to the point where it was as high as so-called bulk graphite, which is an excellent heat conductor. The findings, which appear in the September 2013 issue of the Proceedings of the National Academy of Sciences, have prompted the team to explore new ways of supporting or connecting graphene with the macroscopic world. Among the techniques they are considering are three-dimensional interconnected foam structures of graphene and ultrathin graphite, as well as hexagonal boron nitride, which has a crystal structure very similar to graphene. Germanane is another material that shows promise for use in electronics or thermoelectric energy conversion devices.

The theoretical calculations that underpinned much of this work were performed on the 10-petaflop (peak) Stampede supercomputer. The NSF-funded system, one of the most powerful in the world, is housed at the Texas Advanced Computing Center (TACC) at The University of Texas at Austin. "In order to really understand the physics, you need to include additional theoretical calculations. That's why we use the supercomputers at TACC," said Shi. "When you do an experiment, you see a trend, but without doing the calculations you don't really know what it means. The combination of the two is very powerful. If you just do one without doing the other, you might not develop the understanding needed."

In a separate study, researchers Jun Lou at Rice and Ting Zhu at Georgia Tech also look at the limitations of graphene in real-world settings.
The bonds between carbon atoms are known to be the strongest in nature, and it follows that a perfect sheet of graphene would share this property, but in actual applications graphene sheets do not live up to their theoretical promise. In a first-of-its-kind experiment, the two researchers measured the fracture toughness of graphene that was marred with minor imperfections to simulate real-world conditions and found it to be "substantially lower" than the intrinsic strength of graphene.

"Everybody thinks the carbon-carbon bond is the strongest bond in nature, so the material must be very good," Lou said. "But that's not true anymore, once you have those defects. The larger the sheet, the higher the probability of defects. That's well known in the ceramic community."

The Rice team did the experiments and the Georgia Tech team ran computer simulations of the entire fracture process. The modeling was tightly coupled with the experiments, said Zhu. Because most graphene has defects, its actual strength is likely to be significantly lower than the intrinsic strength of a perfect sheet of the atom-thick carbon material. The findings provide a deeper understanding of how defects will affect the handling, processing and manufacture of the materials, said Lou. It also demonstrates the importance of manufacturing graphene sheets that are made to exacting standards, as free from errors as is feasible.
Imagine you are building a house, and have a list of things you want to have in your house, but you can't afford everything on your list because you are constrained by a budget. What you really want to work out is the combination of items which gives you the best value for your money. This is an example of an optimization problem, where you are trying to find the best combination of things given some constraints. Typically, these are very hard problems to solve because of the huge number of possible combinations. With just 270 on/off switches, there are more possible combinations than atoms in the universe!

These types of optimization problems exist in many different domains, such as systems design, mission planning, airline scheduling, financial analysis, web search, and cancer radiotherapy. They are some of the most complex problems in the world, with potentially enormous benefits to businesses, people and science if optimal solutions can be readily computed.

There are many examples of problems where a quantum computer can complement an HPC (high performance computing) system. While a quantum computer is well suited to discrete optimization, the HPC system is much better at large scale numerical simulations. Problems like optimizing cancer radiotherapy, where a patient is treated by directing several radiation beams into the patient so that they intersect at the tumor, illustrate how the two systems can work together. The goal when devising a radiation plan is to minimize the collateral damage to the surrounding tissue and body parts, a very complicated optimization problem with thousands of variables. To arrive at the optimal radiation plan requires many simulations until an optimal solution is determined. With a quantum computer, the horizon of possibilities that can be considered between each simulation is much broader. But HPC is still the more powerful computation tool for running simulations. Using the quantum computer with an HPC system will allow faster convergence on an optimal design than is attainable by using HPC alone.

Simulating the folding of proteins could lead to a radical transformation of our understanding of complex biological systems and our ability to design powerful new drugs. This application looks into how to use a quantum computer to explore the possible folding configurations of these interesting molecules. With an astronomical number of possible structural arrangements, protein folding is an enormously complex computational problem. Scientific research indicates that nature optimizes the amino acid sequences to create the most stable protein, which correlates well to the search for the lowest energy solutions. With researchers at Harvard, we designed a system for predicting the folding patterns for lattice protein folding models and successfully ran small protein folding problems in hardware. This is an example of using a quantum computer with a conventional or HPC system.

EPANET is public domain numerical software that simulates water movement and water quality within pressurized pipe networks. It can model the flow of water in each pipe, the pressure at each node, the height of the water in each tank, the type of chemical concentration throughout the network during a simulation period, water age, source, and tracing. EPANET can compute properties of a water network given discrete choices for the design of the network.
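The combinatorial structure running through all of these examples -- a set of on/off choices, a score to optimize, and constraints to respect -- is easy to state in a few lines of ordinary code. The sketch below is plain Python with invented items, costs and values; it has nothing to do with D-Wave's hardware or with EPANET. It brute-forces the budget-constrained selection problem from the opening paragraph, and makes plain why exhaustive search stops scaling: the loop visits all 2**n combinations.

```python
from itertools import product

# Hypothetical house features: (name, cost, value) -- all numbers are made up.
ITEMS = [("solar panels", 12_000, 9),
         ("heat pump", 8_000, 7),
         ("triple glazing", 6_000, 5),
         ("home battery", 10_000, 6)]
BUDGET = 20_000

def best_combination():
    """Brute-force search over every on/off choice. With n items there are 2**n
    combinations, which is exactly why this approach stops scaling quickly."""
    best_value, best_choice = -1, None
    for choice in product([0, 1], repeat=len(ITEMS)):
        cost = sum(c * item[1] for c, item in zip(choice, ITEMS))
        value = sum(c * item[2] for c, item in zip(choice, ITEMS))
        if cost <= BUDGET and value > best_value:
            best_value, best_choice = value, choice
    return [item[0] for c, item in zip(best_choice, ITEMS) if c], best_value

if __name__ == "__main__":
    print(best_combination())
```

Folding the budget constraint into the score as a penalty, rather than filtering on it, is the same reformulation described next for the water-network design problem.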
The quantum computer gives us a tool for designing the optimal network, by penalizing undesirable outcomes in the network such as low pressure or the presence of chemical contaminant levels, while rewarding desirable outcomes such as low cost, low risk, safety, etc. This quantum-classical hybrid solution quickly homes in on good solutions by asking the conventional system to evaluate far fewer possibilities.

When you look at a photograph it is very easy for you to pick out the different objects in the image: trees, mountains, velociraptors etc. This task is almost effortless for humans, but is in fact a hugely difficult task for computers to achieve. This is because programmers don't know how to define the essence of a 'tree' in computer code. Machine learning is the most successful approach to solving this problem, by which programmers write algorithms that automatically learn to recognize the 'essences' of objects by detecting recurring patterns in huge amounts of data. Because of the amount of data involved in this process, and the immense number of potential combinations of data elements, this is a very computationally expensive optimization problem. As with other optimization problems, these can be mapped to the native capability of the D-Wave QPU.

Together with researchers at Google, we built software for determining whether or not there is a car in an image using a binary classification algorithm run in hardware. In excess of 500,000 discrete optimization problems were solved during the learning phase, with Google developers accessing the D-Wave system remotely.

We built software for automatically applying category labels to news stories and images. We found that our approach provided better labeling accuracy than a state of the art conventional approach. The labeling of news stories can be difficult for computers as they can see the keywords but don't understand the meaning of the words when combined. For labeling news stories the corpus we used for training and testing performance was the REUTERS corpus, a well-known data set for testing multiple label assignment algorithms. We took a similar approach to labeling images and used the SCENE corpus for training and testing performance, another well-known data set for testing multiple label assignment algorithms. We found that our approach worked extremely well on these problems, demonstrating the quantum computer's ability to do multiple label assignment and to label images.

Using unsupervised machine learning approaches, one can automate the discovery of a very sparse way to represent objects. This technique can be used for incredibly efficient compression. The algorithm works by finding a concise representation of the objects being fed into the computer. The techniques involved are closely related to those in compressive sensing. As a test of the unsupervised feature learning algorithm, we discovered an extremely sparse representation of the 'Frey faces' data set, and demonstrated the ability to perform video compression on the quantum computer.

Many things in the world are uncertain, and governed by the rules of probability. We have in our heads a model of how things will turn out in the future, and the better our model is, the better we are at predicting the future. We can also build computer models to try and capture the statistics of reality.
These tend to be very complicated, involving a huge number of variables. In order to check whether a computer's statistical model represents reality we need to be able to draw samples from it, and check that the statistics of our model match the statistics of real world data. Monte Carlo simulation, which relies on repeated random sampling to approximate the probability of certain outcomes, is an approach used in many industries such as finance, energy, manufacturing, engineering, oil and gas, and the environment. For a complex model, with many different variables, this is a difficult task to do quickly.
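The sampling idea itself is easy to see in miniature. The sketch below is ordinary Python with nothing quantum about it, and the two "factors", their distributions and the threshold are all invented for illustration; it approximates the probability of an outcome by drawing repeated random samples, which is the core move in any Monte Carlo workflow.

```python
import random

def estimate_exceedance_probability(n_samples=100_000, threshold=1.5):
    """Estimate P(X + Y > threshold) for two illustrative random factors
    by repeated random sampling (the basic Monte Carlo idea)."""
    hits = 0
    for _ in range(n_samples):
        x = random.gauss(1.0, 0.2)    # hypothetical factor 1
        y = random.uniform(0.0, 1.0)  # hypothetical factor 2
        if x + y > threshold:
            hits += 1
    return hits / n_samples

if __name__ == "__main__":
    print(f"Estimated probability: {estimate_exceedance_probability():.3f}")
```

The difficulty the passage points to is that realistic models have far more variables and far less convenient distributions, which is where drawing good samples quickly becomes expensive.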
A previous article dealing with online security prompted many readers with home PCs to ask for advice on how to stay safe and still get the most out of the "information highway".

The first and fundamental step is to install software on the computer that will protect you from any nasty surprises. This software should include a program capable of detecting potentially dangerous activity on the computer as well as warning when a dangerous file tries to enter the system. It should also be capable of detecting any suspicious file or activity, even if the software has no prior record of this activity. An old enemy is easy to spot, but what about new threats? Reliable software should be able to detect dangerous behavior without needing a previous description of the culprit.

This software should also be able to operate at low levels, i.e. when data enters the system, the program should be the first to know it and alert the user when necessary. For example, in the event of an e-mail-borne virus that activates simply when viewed in the Preview Pane (without having to open the message or any attached files), the user should be alerted even before the e-mail program notifies that a new message has been received.

Another essential feature of a reliable program is that it should be independent from the rest of the software on the computer. This means that the protection should be the same regardless of whether browsing the Internet with Microsoft Internet Explorer, Netscape Navigator or Opera, or whether mail is processed through Eudora, Outlook Express or Pegasus.

So what kind of software are we looking at here? We're talking about antivirus programs. Although you might think that, as the name suggests, these programs only protect against viruses, they are also a fundamental part of protection against all attacks. By preventing even the smallest piece of malicious software from entering your computer, no one will be able to take control of your system to carry out malicious acts, like hackers. An antivirus doesn't just simply deal with viruses. As the software is fundamentally designed to root out viruses, it is quite simple to add information to the database to detect Trojans, backdoors, etc. So when an attacker tries to insert a program on a computer, it will be filtered by the antivirus, which will sound the alarm.

Flaws or vulnerabilities in software applications are another cause for concern, as they can make an attacker's job easier, without them even needing to insert programs or code on the victim's computer. In these cases there is usually a solution or "patch" available before most users have even realized that a problem exists. All software manufacturers are constantly updating products to protect against possible errors, so it is well worth being aware of these issues and applying the updates where necessary. If in doubt, the manufacturer's website is usually a reliable source for the latest information and all license holders are entitled to download these updates. So there are good reasons for avoiding pirated software after all!

Speaking of updates, remember that all antiviruses should be updated regularly. How often? Well, this really depends on the manufacturer. Each vendor will no doubt recommend updating as soon as a new update is available (logically), but bear in mind that 15 or so new viruses appear every day, so waiting two or three days before updating an antivirus can prove to be an increased risk.

Another practical security measure is the installation of a personal firewall.
Firewalls constantly monitor activity across all ports on a computer. It is possible to configure the firewall to keep a specific port closed or to warn when someone is scanning open ports. These tools are generally not simple to configure, which is why they are not recommended for basic users. Reading the user guide carefully before installing and configuring your personal settings can save you valuable time.

To stay on top of security when connected to the Internet, there are several systems for finding out exactly what is happening to your PC at any moment. If you have a personal firewall and an up-to-date antivirus installed, much of this monitoring is carried out automatically by these applications. That said, it is still worth checking the security levels in your browsers. In most cases, security settings in a browser can be configured according to a wide range of security criteria, from accepting almost everything to rejecting all but the most trustworthy information. A balance between security and practicality is normally the most advisable.

A powerful tool, called NETSTAT, exists which lets users monitor open connections on their PCs. Simply using the parameter "-a", you will be able to see all active connections on your computer. For example, type "NETSTAT -A" and you will see something like the following:

TCP FCUADRA:1588 WWW.PANDASOFTWARE.COM:80 ESTABLISHED

The first column (TCP) is the protocol in use. The next entry gives the name of the local machine, followed by the port in use. This is followed by the website and port to which you're connected, and finally the connection status.

The most frequently used ports are those used for http connections (80), e-mail (110 and 25), FTP transfers (21), accessing NNTP newsgroups (119) or IRC chats (194). All ports between 0 and 1,023 are registered for standard services, and those between 1,024 and 49,151 are assigned to non-standard, but recognized functions. Ports from 49,152 to 65,535, however, are dynamic and can be used for a variety of functions, which can unfortunately include the notorious activities of Trojans. If you notice that one of these ports is open, it is time to start worrying, as someone else may be accessing your system. The web page http://www.iana.org/assignments/port-numbers has a complete list of these ports.

The ports used by Trojans and other malware tend to vary greatly. In fact, there are so many that it would be virtually impossible to list them all here. Your antivirus vendor should be able to help you determine whether a connection has been made by an e-mail program or by a Trojan trying to enter your machine. If you suspect that someone or something is connected to your PC without your consent, you should immediately disconnect. Another solution, although not without risks, is to try to enter the machine that is trying to attack you. However, a safer option is just to disconnect, scan your entire system with your antivirus and then reconnect.
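For readers who would rather not eyeball the NETSTAT output by hand, the same check can be scripted. The sketch below is a minimal illustration in Python; it assumes the Windows-style "netstat -an" column layout shown above (protocol, local host:port, remote host:port, state), flags only sockets listening on the dynamic port range, and is in no way a substitute for a personal firewall or an up-to-date antivirus.

```python
import subprocess

DYNAMIC_PORT_START = 49152  # start of the dynamic range discussed above

def listening_dynamic_ports():
    """Run 'netstat -an' and return TCP sockets listening on a dynamic port.
    A hit is not proof of a Trojan -- plenty of legitimate software opens
    high ports -- it is simply a prompt to investigate further."""
    output = subprocess.run(["netstat", "-an"],
                            capture_output=True, text=True).stdout
    flagged = []
    for line in output.splitlines():
        parts = line.split()
        # Expect lines shaped like: TCP  <local host:port>  <remote host:port>  <state>
        if (len(parts) >= 4 and parts[0].upper().startswith("TCP")
                and parts[-1].upper().startswith("LISTEN")):
            try:
                port = int(parts[1].rsplit(":", 1)[1])
            except (IndexError, ValueError):
                continue  # line does not use the host:port form this sketch assumes
            if port >= DYNAMIC_PORT_START:
                flagged.append(line.strip())
    return flagged

if __name__ == "__main__":
    for entry in listening_dynamic_ports():
        print(entry)
```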
The Conficker Conundrum

More than a year has passed since Conficker first appeared, yet it is still making the news. The patch for the vulnerability exploited by Conficker was published by Microsoft in October 2008. Yet more than one year later, Conficker continues to infect computers using many advanced malware techniques and exploiting the Windows MS08-067 service vulnerability.

The spread of Conficker impacted all types of institutions and organisations. Victims included the British military; Ealing Council, whose entire IT network was disabled for four days; and the Sheffield NHS Trust, where 800 computers were infected, as well as numerous other companies and organisations worldwide. Microsoft even offered a reward of $250,000 to anyone providing information that led to the arrest and conviction of the creators of this malware.

The Conficker worm, which by nature is a particularly damaging strain of malware, appears to be launching brute force attacks to extract passwords from computers and corporate networks. The easier the password, the easier it is for Conficker to decipher it. Once the passwords are detected, cyber criminals can then access computers and use them for their own ends.

So why is this still happening? Principally, because of its ability to propagate through USB devices. Removable drives have become a major channel for the spread of malicious code, due to the increasing use of memory sticks and portable hard drives to share information in households and businesses. After inserting an infected USB device into an unpatched machine, Conficker will be able to bypass the computer security and, by impersonating an administration account, drop a file on the computer system. It will also try to add a scheduled task to run those files. Another reason for the longevity of this worm is that many people are using pirated copies of Windows and, for fear of being detected, they avoid applying the security patches published periodically by Microsoft. In fact, Microsoft allows unrestricted application of critical updates, even on non-legitimate copies of its operating system.

Nowadays, most companies have perimeter protection (firewall, etc.), but this does not prevent employees from taking their memory sticks to work, connecting them to the workstation and spreading the malicious code across the network. As this worm can affect all types of USB devices, MP3 players, mobile phones, cameras, and other removable devices are also at risk.

So what can users do to mitigate this threat? Users should firstly apply the patch to solve the security issue that lets the Conficker worm spread through the Internet (MS08-067); they then need other solutions such as a USB vaccine protecting not just the computer but also the USB device itself. A security solution which is regularly updated and active should be enough to protect against Conficker and its variants, but organisations should also habitually scan for vulnerable machines, disinfect infected machines using updated and active antivirus both on networks and stand-alone PCs, and make sure their antivirus and security solutions are up to date on the latest version and signature database.

It is important to note that by just asking people to use a security solution, we should not expect to put a halt to the problem. Making users aware of the threats, teaching children at school how to use technology safely and responsibly, and ensuring they have privacy in mind are equally important.
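Part of that awareness is understanding just how quickly the weak passwords Conficker targets can fall. The back-of-the-envelope sketch below is purely illustrative: the guess rate is an invented figure, and real attacks typically start from a dictionary of common passwords rather than raw brute force. It simply compares how long an exhaustive search takes as passwords get longer and draw on a larger character set.

```python
def worst_case_days(charset_size, length, guesses_per_second=1_000_000):
    """Worst-case time (in days) to try every possible password.
    The guess rate is an assumed, illustrative figure."""
    keyspace = charset_size ** length
    return keyspace / guesses_per_second / 86_400

examples = [
    ("6 lowercase letters", 26, 6),
    ("8 lowercase letters", 26, 8),
    ("10 characters: letters, digits and symbols", 94, 10),
]

for label, charset_size, length in examples:
    print(f"{label}: about {worst_case_days(charset_size, length):,.1f} days to exhaust")
```

The point is not the exact numbers but the growth rate: each extra character multiplies the attacker's work, which is why weak administrator passwords are such an easy target for the worm.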
Many users are unaware of the dangers and live under the perception that the digital world is secure; as we know, that is not the case. Preventative measures must also come from the top down: legislating, chasing and punishing those that benefit from cybercrime and protecting critical infrastructure.

Panda Security is exhibiting at Infosecurity Europe 2010, the No. 1 industry event in Europe, held on 27th-29th April in its new venue Earl's Court, London. The event provides an unrivalled free education programme, exhibitors showcasing new and emerging technologies and offering practical and professional expertise. For further information please visit www.infosec.co.uk
So we have moved on from desktop PCs and have now entered a new era of computing via connected devices such as smartphones, tablets, and phablets, supported by the evolution of technology. In this tech-savvy era, consumers are very much technology-driven: we now have devices which monitor and respond to users based on gestures and context. So what is next, what is the future for mobile computing? We believe it is next-gen wearable devices -- next-gen, because the concept is not new.

The first documented wearable computer came in 1961; Ed Thorp and Claude Shannon, both mathematics professors at MIT, created a small device which could predict the octant where a roulette ball would drop and notify the bettor, via a tone played into a speaker, which octant to bet on. The device was designed to be used by two people, one wearing the input device and one wearing the output device, to avoid detection. Other devices were created to help gamers cheat, particularly at blackjack and roulette. These devices were ultimately banned in Las Vegas in 1985. (Whether or not this ban served to delay the deployment of wearable devices is a debate beyond the scope of this particular blog...)

While not wearable devices as we would think of them today, these are nevertheless the forerunners of today's wearables -- devices that were designed to be hidden on the person and process data. The use of wearable devices connected to the smartphone in the fitness and sports environment has grown rapidly in the last two years with applications such as Nike+ and Fitbit Tracker allowing data from training sessions to be uploaded and analysed.

But what really excites us (measure that excitement rising, you wearable tracker!) is the Google Glass Project: a research and development program announced earlier this year by Google to develop a multi-function augmented reality display device, which we believe will be the next form factor for mobile computing, displaying and performing functionalities similar to most smartphones. Even though this is not the first prototype device to be introduced into the market, the development from Google is expected to enhance consumer awareness and interest.

Classified as a 'future form factor' for computing devices, next generation wearables, including smart glasses and other head-mounted displays, will provide a multitude of functions either independently or in conjunction with a third party platform. Also, it is not just Google eyeing this space -- other key influential players such as Apple and Sony have already made key strategic moves in this sector.

So how much will it be worth? Juniper anticipates that the next-gen wearable devices market, including smart glasses, will be worth more than $1.5 billion by 2014, up from $800 million this year. These revenues will be largely driven by consumer spending on fitness, multi-functional devices, and healthcare.
Originally published May 15, 2012

Hadoop is one of the up-and-coming technologies for performing analysis on big data, but to date very few universities have included it in their undergraduate or graduate curriculums. In a February 2012 article from InfoWorld, those already using the technology issued the warning that "Hadoop requires extensive training along with analytics expertise not seen in many IT shops today." A ComputerWorld article singled out MIT and UC Berkeley as having already added Hadoop training and experience to their curriculums. Other educational institutions need to seek out practitioners in their area or poll alumni to determine if individuals who can impart their knowledge to college students are available and, if so, prepare a curriculum to start training the next generation of IT employees and imbue them with the skills they will require to meet the challenges of the 21st century.

Hadoop is one of the newest technology solutions for performing analysis and deriving business intelligence from big data. On the TechTarget website, it is defined as "... a free, Java-based programming framework that supports the processing of large data sets in a distributed computing environment. It is part of the Apache project sponsored by the Apache Software Foundation." Hadoop is a combination of many tools and software products, of which the primary two are HDFS (Hadoop Distributed File System) and MapReduce. In its current form, these components run primarily on the Linux operating system. Both of these components are Free Open Source Software (FOSS) and are licensed under the Apache License, Version 2.0.

HDFS is a file system that distributes the data to be analyzed across all the servers, which are typically inexpensive commodity hardware with internal or direct attached storage, available in a server farm. The data is replicated across several nodes so the failure of any one node does not disrupt the currently executing process. The HDFS file system maintains copies of the master catalog across many of these nodes so it always knows where specific chunks of the data reside. Support for very large datasets is provided by this mechanism of distributed storage.

As defined on the Apache Hadoop website, "... MapReduce is a software framework for easily writing applications which process vast amounts of data (multi-terabyte data-sets) in-parallel on large clusters (thousands of nodes) of commodity hardware in a reliable, fault-tolerant manner." This software, typically written in Java, maps the input data and defines how it can be broken into chunks and how it is to be processed in parallel on multiple nodes of the cluster. The output of these map tasks is then used as input to the reduce tasks. The data and processing usually reside on the same nodes of the cluster to provide the scalability to handle the very large datasets that are typically processed using MapReduce. The basis of this processing is the mapping of the input data into key-value pairs. This is very similar to XML, where each combination of start and end tags contains a specific value within a group of elements (e.g., <FirstName>Alex</FirstName>). The reduce tasks combine the outputs of the map tasks into smaller sets of values that can then be used in additional analysis tasks (a minimal sketch of this map/reduce pattern appears below).

To manage the processing, the Hadoop framework provides a job control mechanism that passes the required data to each of the nodes in the cluster, then starts and monitors the jobs on each of the processing nodes.
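The key-value flow described above is easiest to see in the classic word-count example. The sketch below is plain, single-process Python, not Hadoop's Java API or a Hadoop Streaming job, so it only illustrates the shape of the two phases; on a real cluster the framework would run the map calls on the nodes holding the data blocks and shuffle the emitted pairs to the reducers.

```python
from collections import defaultdict

def map_phase(document):
    """Map step: emit (key, value) pairs -- here, (word, 1) for each word."""
    for word in document.split():
        yield (word.lower(), 1)

def reduce_phase(pairs):
    """Reduce step: combine all values that share a key into a smaller result."""
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

if __name__ == "__main__":
    documents = ["Hadoop splits data across nodes",
                 "MapReduce processes data in parallel across nodes"]
    # In a real cluster each document (or block) would be mapped on a different node;
    # here the phases simply run in-process to show the key-value flow.
    all_pairs = (pair for doc in documents for pair in map_phase(doc))
    print(reduce_phase(all_pairs))
```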
If a particular node fails, the data and processing are automatically switched to a different node in the cluster, preventing the failure of a process due to a node becoming unavailable.

According to an October 2010 article in InfoWorld, the initial use of the Hadoop framework was to index web pages, but it is now being viewed as an alternative to other business intelligence (BI) products that rely on data residing in databases (structured data), since it can work with unstructured data from disparate data sources that database-oriented tools are unable to handle as effectively. The article goes on to state that corporations "... are dropping data on the floor because they don't have a place to put it" and Hadoop clusters can provide a data storage and processing mechanism for this data so it is not wasted.

A very recent InfoWorld article examines the issues involved in the detection of cyber criminals by combining big data with traditional structured data residing in a data warehouse. The article mentions that the biggest problem will be identifying the network activity and behavior of individuals that are accessing the system for legitimate reasons as opposed to those out to steal sensitive information for nefarious purposes. It also mentions the inability of security information and event management (SIEM) and other intrusion detection systems (IDS) currently used for this purpose to correctly detect and report these types of events. They generate mounds of information that cannot be adequately analyzed to help identify the good users versus the bad users on their systems to protect the enterprise and its data.

A November 2011 article in ComputerWorld mentions that JPMorgan Chase is using Hadoop "... to improve fraud detection, IT risk management, and self service applications" and that eBay is using it to "build a new search engine for its auction site." The article goes on to warn that anyone using Hadoop and its associated technologies needs to consider the security implications of using this data in that environment, because the currently provided security mechanisms of access control lists and Kerberos authentication are inadequate for most high security applications. It was noted that most government agencies utilizing Hadoop clusters are firewalling them into "... separate 'enclaves' ..." to protect the data and ensure that only those with proper security clearance can see the data. One of the individuals interviewed for the article suggested that all sensitive data in transit or stored in a Hadoop cluster be encrypted at the record level. Given all these security concerns, many executives do not view Hadoop as being ready for enterprise consumption.

An article in ComputerWorld states that IT training in these skills can be obtained from organizations such as Cloudera, Hortonworks, IBM, MapR and Informatica. Cloudera has been offering this training for three years and also offers a certification at the end of its four-day training program. According to the education director at Cloudera, their certification is deemed valuable by enterprises, and organizations are starting to require the Cloudera Hadoop certification in their job postings. Hortonworks just started offering training and certification classes in February 2012 while IBM has been doing this since October 2011; the big difference between the two is that Hortonworks is targeting IT professionals with Java and Linux experience and IBM is targeting undergraduate and graduate students taking online classes in Hadoop.
Upon completion of these classes, they are qualified to take a certification test; however, when the article was written, approximately 1% of students had taken the certification exam.

A recent ComputerWorld article mentions that the terms "Hadoop" and "data scientist" are starting to show up in job postings and that some of the most well-known organizations are posting these job requirements. The article mentions that Google has reported that the search term "data scientist" is 20 times higher -- so far -- in the first quarter of 2012 than it was in the last quarter of 2011 and that there were 195 job listings on Dice.com that mentioned this term. This indicates that the market for technical skills in IT and statistics is growing very quickly as businesses are realizing that this new technology can provide real value to their organizations. They will require a new IT specialty called "data science" to analyze the data extracted and processed using Hadoop and using statistical analysis to derive beneficial insights.

To address the shortage of individuals entering the workforce with the skills necessary to effectively utilize technologies like Hadoop, educational institutions need to offer courses in data analysis and data mining using statistical modeling methods as well as more specialized courses in Hadoop technologies like HDFS and MapReduce. These courses should have heavy emphasis on setting up Hadoop, HDFS, Java and any other software required for the environment to operate correctly. Since most students will be performing these tasks on a laptop or in a virtual machine (VM) environment, it may be more desirable to provide a preloaded VM to the students so they can see the end-state they need to achieve. This VM could also be used for the initial programming courses in Java so the students are not burdened with setting up the environment until they are in more advanced courses in operating system technologies.
In the Earth's past there was powerful volcanic activity which could have easily spewed dirt and rocks containing microbes into outer space, which not only could have eventually reached Mars but also ended up traveling in orbit through space as what we now know as meteors, comets, and asteroids. A Newsweek article of September 21, 1998, p.12, mentions the high possibility of Earth life on Mars. "We think there's about 7 million tons of earth soil sitting on Mars", says scientist and evolutionist Kenneth Nealson. "You have to consider the possibility that if we find life on Mars, it could have come from the Earth" [Weingarten, T., Newsweek, September 21, 1998, p.12].

HAVING THE RIGHT CONDITIONS AND RAW MATERIALS FOR LIFE doesn't mean that life can originate by chance. Proteins can't come into existence unless there's life first! Miller, in his famous experiment in 1953, showed that individual amino acids (the building blocks of life) could come into existence by chance. But it's not enough just to have amino acids. The various amino acids that make up life must link together in a precise sequence, just like the letters in a sentence, to form functioning protein molecules. If they're not in the right sequence the protein molecules won't work. It has never been shown that various amino acids can bind together into a sequence by chance to form protein molecules. Even the simplest cell is made up of many millions of various protein molecules.

The probability of just an average size protein molecule arising by chance is 1 in 10 to the 65th power. Mathematicians have said any event in the universe with odds of 1 in 10 to the 50th power or less is impossible! The late British scientist Sir Fred Hoyle calculated that the odds of even the simplest cell coming into existence by chance are 1 in 10 to the 40,000th power! How large is this? Consider that the total number of atoms in our universe is estimated at roughly 10 to the 80th power.

Also, what many don't realize is that Miller had a laboratory apparatus that shielded and protected the individual amino acids the moment they were formed, otherwise the amino acids would have quickly disintegrated and been destroyed in the mix of random energy and forces involved in Miller's experiment.

There is no innate chemical tendency for the various amino acids to bond with one another in a sequence. Any one amino acid can just as easily bond with any other. The only reason the various amino acids bond with one another in a precise sequence in the cells of our bodies is because they're directed to do so by an already existing sequence of molecules found in our genetic code. Of course, once you have a complete and living cell then the genetic code and biological machinery exist to direct the formation of more cells, but how could life or the cell have naturally originated when no directing code and mechanisms existed in nature? Read my Internet article: HOW FORENSIC SCIENCE REFUTES ATHEISM.

A partially evolved cell would quickly disintegrate under the effects of random forces of the environment, especially without the protection of a complete and fully functioning cell membrane. A partially evolved cell cannot wait millions of years for chance to make it complete and living! In fact, it couldn't have even reached the partially evolved state.
Please read my popular Internet articles listed below: ANY LIFE ON MARS CAME FROM EARTH, SCIENCE AND THE ORIGIN OF LIFE, NATURAL LIMITS OF EVOLUTION, HOW FORENSIC SCIENCE REFUTES ATHEISM, WAR AMONG EVOLUTIONISTS (2nd Edition), NO HALF-EVOLVED DINOSAURS, HOW DID MY DNA MAKE ME? DOES GOD PARTICLE EXPLAIN UNIVERSE'S ORIGIN?

Visit my newest Internet site: THE SCIENCE SUPPORTING CREATION

Babu G. Ranganathan*
Author of popular Internet article, TRADITIONAL DOCTRINE OF HELL EVOLVED FROM GREEK ROOTS

* I have had the privilege of being recognized in the 24th edition of Marquis "Who's Who In The East" for my writings on religion and science, and I have given successful lectures (with question and answer time afterwards) defending creation from science before evolutionist science faculty and students at various colleges and universities.
A seemingly constant stream of data breaches and this week's news that Russian hackers have amassed a database of 1.2 billion Internet credentials has many people asking: Isn't it time we dumped the user name and password?

A lot of the best technology of today exploits biometric factors such as retina patterns, fingerprints and voice analysis, but beyond that a number of researchers are looking to tap into the way we think, walk and breathe to differentiate between us and an intruder. Helping to lead the research is DARPA, the U.S. military's Defense Advanced Research Projects Agency. Its active authentication project is funding research at a number of institutions working on desktop and mobile technologies that work not just for the initial login but continuously while the user is accessing a device. The array of sensors already found in mobile phones makes some of the ideas particularly interesting.

The technologies exploit data that's already available inside devices, but utilize it in new ways, said Richard Guidorizzi, program manager of the project at DARPA. "Except during lab testing, we did not need to create new devices to attach to your phone and drain your battery. They were able to use what was already there with a great deal of success," he said.

So, when might they be available? The project is still going on, but it seems to be attracting interest. "Some of my [teams] are already being approached by some of the largest companies in the world to incorporate their technology into their products, including smartphones and Web-based technologies," said Guidorizzi.

Micro Hand Movements

A project underway at the New York Institute of Technology aims to analyze micro movements and oscillations in your hand as you hold a smartphone to determine the identity of the user. It is looking at touch-burst activity, which happens when a user performs a series of touch strokes and gestures, and the pause between those touches and gestures while the user is consuming content.

SRI International in Silicon Valley is trying to exploit the accelerometers and gyro sensors already inside smartphones to extract unique and distinguishing characteristics of the way a user walks and stands. Your stride length, the way you balance your body, the speed you walk: all are individual to you. Additional sensors can help to determine physical characteristics, such as arm length, and the user's physical situation, such as proximity to others and whether the user is sitting, standing, picking something up, texting or talking on the phone.

The differences in how we use language could be enough to tell us apart. Drexel University is trying to extract author fingerprints from the large volumes of text we typically enter into our PCs and smartphones and then use that to spot when someone else might be at the keyboard. This could be the words used, individual grammar quirks, sentence construction and even the errors individuals are prone to making again and again. The technology can be tied together with another keyboard-based authentication method -- the analysis of the way a user types, such as their keyboard speed and pauses between letters -- to make an even more secure authentication system.

NASA's Jet Propulsion Laboratory is trying to detect the individual features of your heartbeat from a phone. Microwave signals emitted by the phone are reflected back by your body, collected by sensors in the phone and amplified to detect your heart rhythm.
This might have the added bonus of being able to alert you to see a doctor should a subtle change in your heartbeat happen. The last thing anyone wants to see on a PC is an error message, but this particular type of annoyance might turn out to have a role to play in security. By throwing up random error messages and analyzing how users respond to them, the Southwest Research Institute is hoping to identify individuals and spot intruders. So next time your PC tells you it’s out of memory and asks if you want to report the issue, think carefully. It could be testing you. Perhaps most familiar to people through fingerprint sensors, biometric analysis seeks to exploit a wide range of personal characteristics. Li Creative Technologies is developing a voice-based system that can be used to unlock a mobile device. You’ll be prompted to say a passphrase, and the software doesn’t just monitor if the phrase was correct but whether you were the one saying it. A second function continuously monitors what’s being said around the device to detect if another user has picked up the phone and is attempting to access it. The University of Maryland is using visual streams to make sure you’re the one using your PC or phone. On the desktop it looks at things like the way you organize windows and resize them, your work patterns and limitations in mouse movements. On the phone the system pulls in three video streams: an image of you from the front-facing camera, an image of your surroundings (or shoes or pants) captured with the rear-facing camera, and your screen activity from the display. Researchers hope that taken together, these three streams will be distinct enough to authenticate an individual user and keep them authenticated while using the device.
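Most of these projects reduce to the same pattern: turn a raw sensor or input stream into a handful of numeric features, then compare those features against a profile enrolled for the legitimate user. The sketch below illustrates that idea for keystroke timing only; the event format, the feature set and the threshold test are illustrative assumptions, not details of any of the DARPA-funded systems described above.

```python
# Toy keystroke-dynamics feature extractor; purely illustrative, not any real system.
import statistics

def keystroke_features(events):
    """events: list of (timestamp_in_seconds, key) tuples captured while the user types."""
    if len(events) < 3:
        raise ValueError("need a few keystrokes to build a profile")
    gaps = [t2 - t1 for (t1, _), (t2, _) in zip(events, events[1:])]
    return {
        "mean_gap": statistics.mean(gaps),      # average pause between keystrokes
        "stdev_gap": statistics.pstdev(gaps),   # rhythm variability
        "max_gap": max(gaps),                   # longest hesitation
        "keys_per_sec": len(events) / (events[-1][0] - events[0][0]),
    }

def looks_like_owner(profile, sample, tolerance=0.35):
    """Crude check: each feature of the new sample must sit within a tolerance band of
    the enrolled profile. A real system would use a trained classifier instead."""
    return all(abs(sample[k] - profile[k]) <= tolerance * abs(profile[k]) for k in profile)
```

In a continuous-authentication setting this kind of comparison would run quietly in the background, re-scoring the user every few seconds rather than only at login.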
<urn:uuid:395138c4-b6af-444c-b7bc-d2eb3cfefa04>
CC-MAIN-2017-09
http://www.networkworld.com/article/2463341/seven-ways-darpa-is-trying-to-kill-the-password.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170521.30/warc/CC-MAIN-20170219104610-00533-ip-10-171-10-108.ec2.internal.warc.gz
en
0.946719
1,046
2.765625
3
The space agency reported that Curiosity is now running on its backup computer system, known as its B-side. It's been taken out of its minimal-activity safe mode and ready to return to full operation. "We are making good progress in the recovery," said Mars Science Laboratory Project Manager Richard Cook in a statement. "One path of progress is evaluating the A-side with intent to recover it as a backup. Also, we need to go through a series of steps with the B-side, such as informing the computer about the state of the rover -- the position of the arm, the position of the mast, that kind of information," he said. Jim Erickson, Curiosity's deputy project manager, told Computerworld on Monday that engineers watching the rover's telemetry last week noticed certain applications would terminate mid-sequence. The cause, he noted, appears to be a file corruption. "We are doing multiple things at the same time," said Erickson. "All we know is the vehicle is telling us that there are multiple errors in the memory. We think it's a hardware error of one type or another but the software did not handle it gracefully. We'd like to have our vehicles withstand hardware trouble and continue to function." Now that NASA's computer specialists have fully switched the rover over onto its redundant, onboard computer system, they are trying to repair the problem on the main system. They also are attempting to shore up the rover's software so it can better withstand hardware glitches. At this point, NASA engineers are looking to keep Curiosity running on the B-side system, while repairing the A-side so it can be on stand-by as the new backup. NASA is on a deadline to get the rover fully functional before April 4, when communication with all Mars rovers and orbiters will end for about a month. A solar conjunction -- when the Sun will be in the path between the Earth and Mars -- is fast approaching and will keep NASA engineers from sending daily instructions to the rover, or from receiving data and images in return. NASA will have to send all operational instructions for that month-long span to Curiosity before the solar conjunction begins. The rover will remain stationary in order to keep it safe while out of contact with Earth. Curiosity, which landed on the Red Planet last August, is on a two-year mission to find out if Mars has ever had an environment that could support life, even in a microbial form. Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, or subscribe to Sharon's RSS feed . Her e-mail address is [email protected]. Read more about emerging technologies in Computerworld's Emerging Technologies Topic Center. This story, "NASA Mars rover Curiosity on road to recovery" was originally published by Computerworld.
<urn:uuid:19eb46cd-e88b-4900-8a1d-4a7b85a16c2a>
CC-MAIN-2017-09
http://www.itworld.com/article/2713031/hardware/nasa-mars-rover-curiosity-on-road-to-recovery.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171251.27/warc/CC-MAIN-20170219104611-00409-ip-10-171-10-108.ec2.internal.warc.gz
en
0.95271
603
2.625
3
“Getting Smarter and Smarter” – Why Digital Data is Key to Urbanisation by François Mero, Senior VP of Sales EMEA at Talend The phenomenon known as ‘urban transition’, which can be defined as the shift from rural to urban and from agricultural employment to industrial, commercial, or service employment, is now not just a first world characteristic but rather a global reality. In certain countries, the rate of urbanisation could even reach 80%. Back in 2000, there were 213 cities with more than a million inhabitants and 23 metropolises with more than 10 million inhabitants. 2.5 billion more people will soon join the ranks of city dwellers, which is certain to have a significant impact on transportation, housing, health, work, safety, etc. Although they currently only occupy around 3% of the planet’s land mass, cities are already home to half of the world’s population, consume 75% of the energy produced and generate 80% of global CO2 emissions. Although it is still a gradual process, this urban transition seems to be accelerating. Combined with other phenomena (the inevitable ‘fade out’ of fossil fuels, scarcity of water resources, climate change, etc.), this evolutionary process has a very significant impact on citizens’ quality of life. For example, the constant increase in car numbers (associated with the increase in population and the increase in GDP) is saturating the roads, mainly in cities. A 2014 study shows that the total cost of traffic jams will reach €350 billion globally over the period 2013-2030, including €22 billion for France (i.e. an increase of 31%). This phenomenon is also likely to contribute to increasing CO2 emissions (by 24% in the United Kingdom over the same 17-year period and by 13% in France). Against this backdrop, preparing for the future by striving to make cities more intelligent and sustainable requires not only decreasing the environmental impact of human activities, but also redefining, among other things, the conditions for accessing resources, waste management, transportation, building insulation and energy management (from production to delivery). The success of this kind of transition is thus based on how well decisions are made by cities themselves. Their awareness and involvement are crucial when it comes to improving the quality of life of their inhabitants and preventing urban life from becoming a wholly unpleasant experience. Big Data + IoT: detecting trends and anticipating the future While Big Data is already revolutionising the commercial approaches of companies – with particularly high potential for transformation in tourism and distribution – an Atelier BNP Paribas study reveals that “Smart Cities will be the veritable El Dorado of Big Data”. The concept of a Smart City – “a city using information and communication technologies to improve the quality of urban services and reduce costs,” according to Wikipedia – is a project-based approach aimed at optimising transportation, energy distribution and services provided to residents, by installing sensors in parking lots, public transportation stations, garbage dumpsters, the urban lighting system, etc., in order to collect data which will assist cities in decision-making. Digital technologies – at the heart of the Smart City – are not an end in and of themselves, but offer an important potential for transformation.
The increased popularity of the concept of “smart cities,” as well as the variety of projects implemented, expresses a new way of thinking about cities and their future, made possible by digital technologies. Faced with the new hazards brought about by urban transition, the real-time collection and analysis of massive volumes of data continuously generated by the sensors – operated by municipal services, urban service operators, businesses and even citizens – become essential. Indeed, beyond the technological aspects, the Smart City is also based on a collaborative and participatory vision. Masdar City in Abu Dhabi and Songdo in South Korea are prime examples of connected cities that, using a local energy optimisation system, materialise the promises of a zero-emission, zero-waste model. All of the data from the sensors, spread throughout the city, are analysed in real time to optimise a number of aspects of inhabitants’ lives. Due to the variety of types and sources of data, the multiplicity of players (with citizens first and foremost) and the almost unlimited nature of the volumes of data available, the city itself must implement and pilot a Big Data strategy to become sustainably “smart”. Ultimately, Big Data contributes not only to a better understanding of how cities work and how their inhabitants behave, but also to removing the barriers between the various players and operators and creating new services which are better suited to new uses. Faced with the potentially problematic consequences of urban transition, we urgently need to make data available for the benefit of citizens. The “IoT–data–sensors” trio could prevent a predictable – and widely forewarned – catastrophe… Thus, the question of data governance becomes a central one for municipalities in search of urban renewal.
<urn:uuid:5168e72c-8628-4b27-80bb-bcc943b0ed45>
CC-MAIN-2017-09
https://data-economy.com/getting-smarter-smarter-digital-data-key-urbanisation/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171670.52/warc/CC-MAIN-20170219104611-00282-ip-10-171-10-108.ec2.internal.warc.gz
en
0.941715
1,034
2.65625
3
Stopping Forged Email 2: DKIM to the Rescue We have recently looked at how hackers and spammers can send forged email and then seen how these forged messages can be almost identical to legitimate messages from the purported senders. In fact, we learned that generally all you can trust in an inbound email message is the internet IP address of the server talking to your inbound email server — as this cannot realistically be forged in any way that would still enable you to receive the message. In our last post in this series, we examined how SPF can be used to help weed out forged email messages based on validating if a message was sent by an approved server by looking at the IP address delivering the email message to you. We found that while SPF can work, it has many significant limitations that cause it to fall far short of being a panacea. So — besides looking at the sending server IP address — what else can we do to determine if a message was forged? It turns out that there is another way — through the use of encryption techniques and digital signatures — to have the sender’s servers transparently “sign” a message in a way that you can verify upon receipt. This is called DKIM. DKIM – Domain Keys Identified Mail: A Simple Explanation DKIM stands for “Domain Keys Identified Mail” … or, re-writing this more verbosely, “Domain-wide validation of Mail Identity through use of cryptographic Keys”. To understand DKIM, we need to back up for a second and look at what we mean by “cryptographic keys” and how they can be used. In security, there is a concept called symmetric encryption that everyone is familiar with: you pick a password and use some “cipher” to convert a regular (plaintext) message into an encrypted (ciphertext) message. Someone else who knows the password and cipher can reverse the process to get the regular message back. Another extremely common (e.g. it is the basis for SSL, TLS, S/MIME, PGP and other security technologies), but more complex method is asymmetric encryption. In asymmetric encryption, one can create a “key pair” … a combination of 2 keys. A message encrypted using Key 1 can only be decrypted with Key 2 and vice versa. We typically call Key 1 our “private key” because we keep that safe and secret. We are happy to publish “Key 2” to the world. What does that buy you? - Signatures: Anything that you encrypt using Key 1 can be decrypted by anyone. But if they can decrypt it, that proves that you sent it (as only you have your secret key and thus “only you” could have encrypted it). - Encryption: Anyone can use your public key to encrypt a message that can only be opened by you (using your secret key). DKIM uses this signature feature of asymmetric encryption to sign messages. Here is how it works (a hand-waving overview that leaves out many details for the sake of clarity): - Make a Key Pair: The folks in charge of the sender’s servers create a cryptographic key pair - Publish the Public Key: These same folks publish the public key in the DNS for their domain - Sign Messages: Using the private key, the sender’s servers look at selected message headers (e.g. the sender name and address, the subject, the message ID) and the message body; they use a cryptographic “hash” function to make a unique “fingerprint” of this info (i.e. so that any change to that info would change the fingerprint). This fingerprint hash is encrypted using the private key, and this encrypted “fingerprint” is added to the message as a new header called “DKIM-Signature”.
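As a rough illustration of that signing step — not a spec-compliant DKIM implementation (real DKIM canonicalizes the headers and body, hashes the body separately, and encodes everything into a structured DKIM-Signature header) — here is a minimal sketch using the third-party Python "cryptography" package; the domain, selector, headers and body are placeholders:

```python
# Minimal illustration of the DKIM signing idea; names and values are placeholders.
import base64
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# 1. Make a key pair. The public half is what gets published in DNS, typically as a
#    TXT record at <selector>._domainkey.<domain> of the form "v=DKIM1; k=rsa; p=<base64 key>".
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_pem = private_key.public_key().public_bytes(
    serialization.Encoding.PEM, serialization.PublicFormat.SubjectPublicKeyInfo)

# 2. Sign: fingerprint the selected headers plus the body, then sign that fingerprint.
signed_headers = b"from:[email protected]\r\nsubject:Hello\r\n"
body = b"This is the message body.\r\n"
signature = private_key.sign(signed_headers + body, padding.PKCS1v15(), hashes.SHA256())
dkim_sig_value = base64.b64encode(signature)  # roughly what ends up in the header's b= tag

# 3. Verify (what the recipient does after fetching the public key from DNS):
public_key = serialization.load_pem_public_key(public_pem)
public_key.verify(signature, signed_headers + body, padding.PKCS1v15(), hashes.SHA256())
print("signature verifies; the signed headers and body were not modified in transit")
```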
Now, when you receive a message that is signed using DKIM, you know the purported sender, the IP address the message came from and you have this additional “DKIM-Signature.” However, you cannot trust that this signature header is real or has not been tampered with. Fortunately, you do not have to trust it blindly; DKIM allows you to verify it. Here is what happens on the recipient’s side: - Receipt: The recipient’s inbound email server receives the message - Get the Signature: The encrypted DKIM fingerprint is detected and extracted from the message headers - Get the Key: The sender’s purported domain is known; the recipient’s server looks in DNS to get the sender domain’s public DKIM encryption key - Decryption: The fingerprint is decrypted using the public key. - Fingerprint Check: The recipient then uses the message body and the same headers as the sender to make another fingerprint. If the fingerprints match … then the message has not been altered since it was sent. So, this buys you sender identity verification: - We know that the message was not modified since it was sent — so the name and the address of the sender (among other things) is the same as it was when it was sent. - We know that the message was sent by a server authorized for sending email for the sender’s domain — as that server used the DKIM private key for that domain. So, through encryption, we have a way to verify that the message was sent by a server authorized to send email for the sender’s domain … and thus we have a “solid” reason to believe the sender’s identity. Furthermore, this validation does not rely on server IP addresses at all, and thus does not share the weaknesses of SPF. Setting up DKIM With DKIM (as with all anti-fraud solutions for email), it is up to the owner of a domain (e.g. the owner of bankofamerica.com) to set up the DNS settings required for DKIM to be checked by the recipients. If they do not do this, then there is no way to verify DKIM and any DKIM signatures on messages will be ignored. DKIM is set up by adding special entries to the published DNS settings for the domain. You can use a tool, such as this DKIM Generator, to create your DKIM cryptographic keys and tell you what you should enter into DNS. Your email provider may have their own tools that assist with this process — as the private key needs to be installed on their mail servers and use of DKIM has to be enabled; we recommend asking your email provider for assistance. We are not going to spend time on the details of the configuration or setup here; instead we will look at the actual utility of DKIM, where it falls on its face, and how attackers can get around it. DKIM – The Good Parts Once DKIM has been set up and is used by your sending mail servers, it does an amazing job with anti-fraud … generally much better than SPF. It also helps ensure that messages have not been modified at all since they were sent … so we can be sure of who sent the message and of what they said; SPF does not provide any kind of assurance that messages were not modified. Use of DKIM is highly recommended for every domain owner and for every email filtering system. However, as we shall see next, it’s not time to throw a party celebrating the end of fraudulent email.
DKIM – Its Limitations Domain Keys Identified Mail has some significant limitations in the battle against fraud: It can be hard to identify and set up all authorized servers For proper use of DKIM, all servers that send email for your domain must be able to use DKIM and have keys for your domain. This can be difficult if you have vendors or partners that send email for you using your domain or if you otherwise cannot be sure that all messages sent will be signed. In such cases, if you cannot get them to use DKIM, you should have them send email for you using a different domain or a subdomain, so that your main domain can be fully DKIM-enabled and its DNS can tell everyone that DKIM signatures must be present on all messages. E.g. you want to be “strict” with DKIM usage in a way that is hard to do with SPF. If you cannot be strict, then DKIM allows you to be soft … indicating that signatures may or may not be present. In such cases (like with SPF), the absence of a DKIM signature does not make a message invalid; the presence of a valid signature just makes the message certainly valid. If your DKIM setup is “soft”, forgery is simple. DKIM checks only the domain name and the server. If there are two different people in the same organization, [email protected] and [email protected] — either of them can send email legitimately from their @domain.com address using the servers they are authorized to use for domain.com email. However, if [email protected] uses his account to send a message forged to be from [email protected] — DKIM will check out as “OK” … even if DKIM is set as strict. DKIM does not protect against this kind of intra-domain forgery at all. Note: using separate DKIM selectors and keys for each unique sender would resolve this problem (and the next one); but this is rarely done. Same Email Provider: Shared Servers Forgery? This is a generalization of the intra-domain forgery case. If [email protected] and [email protected] were using the same email service provider and the same servers, Jane’s goodguy.com domain is set up with DKIM, and the email provider’s servers are also set up to sign messages from @goodguy.com with appropriate DKIM signatures, what happens when [email protected] logs in to his account and sends a message pretending to be from Jane? The answer depends on the email provider! - The provider could prevent Fred from sending email purporting to be from anyone except himself. This would solve the problem right away but is very restrictive and many providers do not do this. - The provider could associate DKIM keys with specific users or accounts (this is what LuxSci does) … so Fred’s messages would never be signed by the valid “goodguy.com” DKIM keys, no matter what. This also solves the problem. However, if the provider’s servers are not restrictive in one of these ways (or a similar way), then Fred’s forged email messages will be DKIM-signed with the goodguy.com signature and will look DKIM-valid. Legitimate Message Modification DKIM is very sensitive to message modification; DKIM signature checks will come back “invalid” if even 1 character has been changed. This is generally good, but it is possible that some filtering systems read and “re-write” messages in transit where the “real” message content is unchanged but certain (MIME) “metadata” is replaced with new data (e.g. the unique strings that separate message parts). This breaks DKIM, and it can happen more frequently than you might expect.
Good spam filters check DKIM before modifying messages; but if you have multiple filtering systems scanning messages, then the DKIM checks of later filters may be broken by the actions of earlier filters. DKIM does not really protect against Spam This is not a limitation of DKIM, but worth noting anyway. All DKIM does is help you identify if a message is forged or not (and if it has been altered or not). Most Spammers are savvy. They use their own legitimate domain names and create valid DKIM (and SPF and DMARC) records so that their email messages look more legitimate. In truth this does not make them look less spammy; it just says that the messages are not forged. Of course, if the spammer is trying to get by your filters by forging the sender address so that the sender is “you” or someone you know, then DKIM can absolutely help. For further DKIM issues and misconceptions, see: 7 common misconceptions about DKIM in the fight against Spam. How Attackers Subvert DKIM So, in the war of escalation where an attacker is trying to get a forged email message into your INBOX, what tricks do they use to get around sender identity validation by DKIM? The protections afforded by DKIM are more significant than those provided by SPF. From an attacker’s perspective, it all comes down to what sender email address (and domain) they are forging. Can they pick an address to construct an email that you will trust that will make it past DKIM? - If the sender address does not support DKIM at all — the attacker is “all set”. - If your DKIM is set up as “weak”, then the attacker can send a forged message with a missing DKIM signature and it will look legitimate. - If the attacker can send you a message from one of the servers authorized by the DKIM for the domain and if that server does not care who initiated the message … but will sign any messages going through it with the proper DKIM keys, then the message will look legitimate. E.g. If the attacker signs up with the same email provider as that used by the forged domain and that provider’s servers do not restrict DKIM key usage, then s/he can send an email from those same servers as the legitimate account and have his/her messages properly signed. This makes the attacker’s email look “Good” even if the forged domain’s DKIM records are “strict”. An attacker’s options are much more limited with DKIM. S/he can only send fraudulent messages from domains with no or weak DKIM support, or send through non-restrictive shared email servers, or steal the private key used for the sender’s DKIM, or s/he must actually compromise the email account of someone using the same email domain as the address that is to be faked. The situation is better, but not perfect … especially as many organizations leave their DKIM configuration “weak” because they would rather take a chance on forged email than have legitimate messages be missed or discarded due to inadvertent message modification or because they were sent from a server without DKIM. We will see in our next post how one can use DMARC to combine the best features of DKIM and SPF to further enhance forged email detection … and where the gaps that attackers use still remain.
<urn:uuid:f755d531-f18b-4c52-9202-9ca04d1e541b>
CC-MAIN-2017-09
https://luxsci.com/blog/stopping-forged-email-2-dkim-rescue.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174215.11/warc/CC-MAIN-20170219104614-00158-ip-10-171-10-108.ec2.internal.warc.gz
en
0.92845
3,097
2.59375
3
OAK RIDGE, Tenn. – At first glance, the data hall within Oak Ridge National Laboratory resembles many raised floor environments. But a stroll past the dozens of storage cabinets reveals three of the world’s most powerful supercomputers, including a machine that looms as the once and future king of the supercomputing realm. The Oak Ridge Leadership Computing Facility (OLCF) is on the frontier of supercomputing, forging a path toward “exascale” computing. The data center features an unusual concentration of computing horsepower, focusing 18 megawatts of electric capacity on a 20,000 square foot raised-floor area. “The power demands are about what you would see for a small town,” says Rick Griffin, Senior Electrical Engineer at Oak Ridge National Laboratory (ORNL). That power sustains three Cray systems that rank among the top supercomputers in the latest Top 500 list: NOAA’s Gaea (33rd), the University of Tennessee’s Kraken system (21st) and ORNL’s Jaguar, which is currently ranked sixth at 2.37 petaflops, but topped the list when it made its debut in November, 2009. (See our photo feature, Inside the Oak Ridge Supercomputing Facility, for more). Jaguar is currently undergoing a metamorphosis into Titan, an upgraded Cray XE6 system. When it goes live late this year, Titan will be capable of a peak performance of up to 20 petaflops – or 20 million billion calculations per second. Titan will be accelerated by a hybrid computing architecture teaming traditional central processing units (CPUs) from AMD with the latest high-speed graphics processing units (GPUs) from NVIDIA to create a faster and more efficient machine. The Road to Exascale At 20 petaflops, Titan would be significantly more powerful than the current Top 500 champ, the Sequoia supercomputer at Lawrence Livermore National Labs, which clocks in at 16.3 petaflops. The data center team at Oak Ridge expects that Titan will debut as the fastest machine within the Department of Energy, which operates the most powerful research supercomputers in the U.S. But Titan is just a first step toward the goal of creating an exascale supercomputer—one able to deliver 1 million trillion calculations each second – by 2018. Jaguar is being upgraded in several phases. The dual 6 core AMD Opteron chips have been upgraded to a single 16-core Opteron CPU, while Jaguar’s Seastar interconnect has been updated with Cray’s ground-breaking new Gemini interconnect. In the current phase, NVIDIA Tesla 20-series GPUs are being added to the system, which will be upgraded to NVIDIA’s brand new Kepler architecture. Upon completion, Titan will feature 18,688 compute nodes loaded with 299,008 CPUs, with at least 960 of those nodes also housing GPUs to add more parallel computing power. Cooling 54 kilowatts per Cabinet Each of Titan’s 200 cabinets will require up to 54 kilowatts of power, an intense high-density load. The system is cooled with an advanced cooling system developed by Cray, which uses both water and refrigerants. The ECOPhlex (short for PHase-change Liquid Exchange) cooling system uses two cooling loops, one filled with a refrigerant (R-134a ), and the other with chilled water. Cool air flows vertically through the cabinet from bottom to top. As it reaches the top of the cabinet, the server waste heat boils the R-134a, absorbing the heat through a change of phase from a liquid to a gas. It is then returned to the heat exchanger inside a Liebert XDP pumping unit, where it interacts with a chilled water loop and is converted from gas back to liquid. 
ORNL estimates that the efficiency of ECOPhlex allowed it to save at least $1 million in annual cooling costs on Jaguar. The advanced nature of the ECOPhlex design will allow the existing cooling system for Jaguar to handle the upgrade to Titan, accommodating a 10-fold increase in computing power within the same 200-cabinet footprint. Upon completion, Titan will require between 10 and 11 megawatts of power. Oak Ridge has 140 additional cabinets for the other systems within its facility, and currently has 14 megawatts of total power for its IT. Another 4.2 megawatts of power is dedicated to Oak Ridge's chiller plant.
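As a quick back-of-envelope check on those figures (my arithmetic, not ORNL's), the quoted per-cabinet peak lines up with the quoted total:

```python
# Rough consistency check of the published numbers; peak per-cabinet draw is an upper bound,
# so the sustained total naturally lands just below it.
cabinets = 200
peak_kw_per_cabinet = 54
print(cabinets * peak_kw_per_cabinet / 1000, "MW at peak")  # 10.8 MW, in line with the quoted 10-11 MW
```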
<urn:uuid:914b18dd-fce2-45a9-831f-33f303644151>
CC-MAIN-2017-09
http://www.datacenterknowledge.com/archives/2012/09/10/oak-ridge-the-frontier-of-supercomputing/?utm-source=feedburner&utm-medium=feed&utm-campaign=Feed%3A+DataCenterKnowledge+(Data+Center+Knowledge)
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170864.16/warc/CC-MAIN-20170219104610-00102-ip-10-171-10-108.ec2.internal.warc.gz
en
0.915599
938
2.734375
3
Not sure I am understanding the question. TCP does not stop anything, but it is polite and shakes hands to establish the connection. Before an HTTP client can get data, it needs to request it with a GET command. Then the server will send the data (and expect ACKs for the data sent). Here is a packet capture. Look at the info column to see the handshake - SYN, SYN/ACK, ACK ... then the HTTP client requests the data with an HTTP GET command ... and data begins flowing, while the client ACKs the received, sequenced packets. When the server is done sending the data, we can see it sends a FIN, ACK followed by the client's ACK. (END of conversation) Here is a really good explanation with pictures. Good reference for review.
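If you want to watch this yourself, run a capture tool (tcpdump or Wireshark) while something like the sketch below runs — the OS performs the SYN / SYN-ACK / ACK handshake inside the connect call and the FIN/ACK teardown at close. Host and port here are just examples:

```python
# Minimal client for watching the handshake and teardown in a packet capture.
import socket

sock = socket.create_connection(("example.com", 80))   # kernel does SYN, SYN/ACK, ACK here
sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")

response = b""
while True:
    chunk = sock.recv(4096)      # each segment the server sends is ACKed by our TCP stack
    if not chunk:                # empty read: the server finished and sent its FIN
        break
    response += chunk

sock.close()                     # our side completes the FIN/ACK teardown
print(response.split(b"\r\n", 1)[0].decode())   # e.g. "HTTP/1.1 200 OK"
```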
<urn:uuid:eeea25e0-cb66-4eae-9d13-ee678fd2dae1>
CC-MAIN-2017-09
http://forum.internetworkexpert.com/forums/t/35200.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173405.40/warc/CC-MAIN-20170219104613-00330-ip-10-171-10-108.ec2.internal.warc.gz
en
0.907085
165
3
3
Southern California’s Mount Wilson is a lonesome, hostile peak — prone to sudden rock falls, sometimes ringed by wildfire — that nevertheless has attracted some of the greatest minds in modern science. George Ellery Hale, one of the godfathers of astrophysics, founded the Mount Wilson Observatory in 1904 and divined that sunspots were magnetic. His acolyte Edwin Hubble used a huge telescope, dragged up by mule train, to prove the universe was expanding. Even Albert Einstein made a pilgrimage in the 1930s to hobnob with the astronomers (and suffered a terrible hair day, a photo shows). Today, Mount Wilson is the site of a more terrestrial but no less ambitious endeavor. Scientists from NASA’s Jet Propulsion Laboratory in Pasadena, Calif., and elsewhere are turning the entire Los Angeles metro region into a state-of-the-art climate laboratory. From the ridgeline, they deploy a mechanical lung that senses airborne chemicals and a unique sunbeam analyzer that scans the skies over the Los Angeles Basin. At a sister site at the California Institute of Technology (Caltech), researchers slice the clouds with a shimmering green laser, trap air samples in glass flasks, and stare at the sun with a massive mirrored contraption that looks like God’s own microscope. These folks are the foot soldiers in an ambitious, interagency initiative called the Megacities Carbon Project. They’ve been probing L.A.’s airspace for more than a year, with the help of big-name sponsors like the National Institute of Standards and Technology, the Keck Institute for Space Studies, and the California Air Resources Board. If all goes well, by 2015 the Megacities crew and colleagues working on smaller cities such as Indianapolis and Boston will have pinned down a slippery piece of climate science: an empirical measurement of a city’s carbon footprint.
<urn:uuid:2dfa6d1b-3311-44be-9aef-268126c1b16f>
CC-MAIN-2017-09
http://www.nextgov.com/emerging-tech/2013/02/how-nasa-scientists-are-turning-l-one-big-climate-change-lab/61582/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173405.40/warc/CC-MAIN-20170219104613-00330-ip-10-171-10-108.ec2.internal.warc.gz
en
0.895259
388
3.40625
3
Conservation research is not being done in the countries where it is most needed - a situation which is likely to undermine efforts to preserve global biodiversity. That's the conclusion of a new study published in the Open Access journal PLOS Biology on 29th March, led by Associate Professor Kerrie Wilson from The University of Queensland and the Australian Research Council Centre of Excellence for Environmental Decisions (CEED). "Our analysis revealed that comparatively less conservation research is undertaken in the world's most biodiverse countries such as Indonesia and Ecuador," says Kerrie Wilson. The study analysed over 10,000 conservation science papers from over 1,000 journals published in 2014. The researchers then compared the countries where these studies were done (and by whom) with the world's most important countries for biodiversity conservation. What they found suggested a massive mismatch in terms of need and effort. "If you dig a little deeper, it gets worse. The science conducted in these countries is often not led by scientists based in those countries and these scientists are also underrepresented in important international forums." What this adds up to, says Wilson, is a widespread bias in the field of conservation science. "If research is biased away from the most important areas for biodiversity conservation then this will accentuate the impacts of the global biodiversity crisis and reduce our capacity to protect and manage the natural ecosystems that underpin human well-being," says Wilson. Biases in conservation science will also undermine our ability to meet Target 19 of the Convention on Biological Diversity (CBD). Target 19 states that "By 2020, knowledge, the science base and technologies relating to biodiversity, its values, functioning, status and trends, and the consequences of its loss, are improved, widely shared and transferred, and applied." "Our comprehensive analysis of publishing trends in conservation science literature suggest we won't meet this target if these biases aren't addressed," says Wilson. The researchers believe that a range of solutions is needed. These include reforming open access publishing policies, enhancing science communication strategies, changing author attribution practices, improving representation in international processes, and strengthening infrastructure and human capacity for research in countries where it is most needed. "We won't change the situation by simply ignoring it," says Wilson. "Researchers need to examine their own agendas and focus on areas with the greatest need." More information: Wilson KA, Auerbach NA, Sam K, Magini AG, Moss ASL, Langhans SD, et al. (2016) Conservation Research Is Not Happening Where It Is Most Needed. PLoS Biol 14(3): e1002413. DOI: 10.1371/journal.pbio.1002413 News Article | April 1, 2016 Researchers have identified a common, universal facial expression, referred to as the "Not Face," that signals negation or a negative statement regardless of the speaker's cultural background or nationality. This particular look — creased brows, pressed lips and a raised chin — was identical across people, whether they spoke English, Chinese or Spanish. Further, the expression was consistent even among people who use American Sign Language (ASL) to convey negative sentiments.
The researchers made a hypothesis that the universal "not face" would comprise three basic facial expressions representing negative emotions such as anger, disgust and contempt. The study, carried out at The Ohio State University, reveals that as we speak or communicate negatively, our facial muscles inadvertently flex to form the "not face" expression. It's been observed as an instinctive reaction. "To our knowledge, this is the first evidence that the facial expressions we use to communicate negative moral judgment have been compounded into a unique, universal part of language," said Aleix Martinez, cognitive scientist and professor of electrical and computer engineering at the Ohio State University. For the purpose of the study, 158 Ohio State students were selected and divided into four groups. Each group represented one of the languages — English, Mandarin Chinese, Spanish or ASL — and comprised students who natively used that language. The students sat opposite a digital camera and were recorded while they carried on a generic conversation in their native language. The researchers carried out a technical analysis of the photographic data, frame by frame, and kept track of the distinguishing movements of the facial muscles. Astonishingly, the investigation found clear, identical grammatical markers of negation across the four different groups. The distinct negative facial expressions — the furrowed brows of "anger" combined with the raised chin of "disgust" and the pressed-together lips of "contempt" — were surprisingly common and recurrent across the groups. Regardless of whether the participants were speaking or signing, they all made the same "not face" while conveying negative statements. The details of the study have been published in the journal Cognition. A facial expression that implies disagreement is the same in several cultures, scientists have found. The facial expression indicating disagreement is universal, researchers say. A furrowed brow, lifted chin and pressed-together lips — a mix of anger, disgust and contempt — are used to show negative moral judgment among speakers of English, Spanish, Mandarin and American Sign Language (ASL), according to a new study published in the May issue of the journal Cognition. In ASL, speakers sometimes use this "not face" alone, without any other negative sign, to indicate disagreement in a sentence. "Sometimes, the only way you can tell that the meaning of the sentence is negative is, that person made the 'not face' when they signed it," Aleix Martinez, a cognitive scientist and professor of electrical and computer engineering at The Ohio State University, said in a statement. Martinez and his colleagues previously identified 21 distinct facial emotions, including six basic emotions (happiness, sadness, fear, anger, surprise and disgust), plus combinations of those (happy surprise, for example, or the kind of happy disgust someone might express after hearing a joke about poop). The researchers wondered if there might be a basic expression that indicates disapproval across cultures. Disapproval, disgust and disagreement should be foundational emotions to communicate, they reasoned, so a universal facial expression marking these emotions might have evolved early in human history.
[Smile Secrets: 5 Things Your Grin Reveals About You] The researchers recruited 158 university students and filmed them in casual conversation in their first language. Some of these students spoke English as a native tongue, while others were native Spanish, Mandarin Chinese or ASL speakers. These languages have different roots and different grammatical structures. English is Germanic, Spanish is in the Latin family and Mandarin developed independently from both. ASL developed in the 1800s from a mix of French and local sign language systems, and has a grammatical structure distinct from English. But despite their differences, all of the groups used the "not face," the researchers found. The scientists elicited the expression by asking the students to read negative sentences or asking them to answer questions that they'd likely answer in the negative, such as, "A study shows that tuition should increase 30 percent. What do you think?" As the students responded with phrases like, "They should not do that," their facial expressions changed. By analyzing the video of the conversations frame by frame and using an algorithm to track facial muscle movement, Martinez and his colleagues were able to show that a combination of anger, disgust and contempt danced across the speakers' faces, regardless of their native tongue. A furrowed brow indicates anger, a raised chin shows disgust and tight lips denote contempt. The "not face" was particularly important in ASL, where speakers can indicate the word "not" either with a sign or by shaking their head as they get to the point of the sentence with the negation. The researchers found, for the first time, that sometimes, ASL speakers do neither — they simply make the "not face" alone. "This facial expression not only exists, but in some instances, it is the only marker of negation in a signed sentence,” Martinez said. The researchers are now building an algorithm to handle massive amounts of video data, and hope to analyze at least 10,000 hours of data from YouTube videos to understand other basic facial expressions and how people use expressions to communicate alongside language. Follow Stephanie Pappas on Twitter and Google+. Follow us @livescience, Facebook & Google+. Original article on Live Science. Copyright 2016 LiveScience, a Purch company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed. Initially known to many for their military use, drones have evolved quickly into tools for creating and enjoying new experiences. They have become flying extensions of the human desire to innovate, help people and have fun. Nearly four million commercial drones are expected to sell this year, rising to 16 million a year by 2020, according to a new report by Juniper Research. "Three years ago, this technology was so expensive, so unattainable, that only the professional cinematographer could afford it," said International Drone Racing Association CEO Charles Zablan in an interview with The New York Times. Zablan said that now a full drone racing kit with flying google can be bought for about $1,000. Like many new technologies that become affordable and widely available, these flying robots are proving to be useful as well as entertaining. In war-torn Syria, drones are delivering food to starving villages. Drones carry cargo so frequently in Rwanda that they have their own airport. 
While drones bring stunning aerial video perspectives on life, they're also inspiring people to create art and invent games that never existed before. Here are five innovative ways drones are being used: Using drones to capture footage that would normally require expensive helicopters or cranes is more common not just in major Hollywood productions but also in videos created by small production houses and even amateurs. In "First Flight of the Phantom," viewers see the oft-filmed grandeur of NYC from a totally new perspective, with the DJI Phantom moving from street level to building-top in one continuous shot. It's not just for smaller indie films, either—Chappie, Neil Blomkamp's latest venture into South African sci-fi, filmed several of its action shots with aerial drones. This November, the Flying Robot International Film Festival will even make history as the first to feature films exclusively made by, and about, these autonomous flying devices. On the beaches of the Ottawa River, geese reign over wide swaths of land as tyrants, proving resistant to all efforts to dislodge them and rendering most of the watery real estate uninhabitable. Ottawa, however, has a new trick up its sleeve. The GooseBuster is a drone fitted with speakers blaring the howl of a grey wolf as it zooms through the air (geese hate flying wolves). Unsurprisingly, it's done wonders, scaring off the winged bullies at lightning speed. Aside from terrifying geese, drones can also be used to protect endangered animals. Lian Pin Koh and Serge Wich, two scientists spearheading conservation efforts for the Sumatran orangutan, developed an inexpensive, lightweight drone that maps large swaths of land, a process that was once costly and time-consuming. They've even used their drone to take aerial photographs of orangutan treetop nests, something that's been impossible to do in the past. Drones Capture the Eye of the Storm Because drones are unmanned and cheap, scientists can send them into all kinds of dangerous situations. One explorer, Sam Cossman, even sacrificed a camera-mounted drone to capture mind-blowing images and footage of an active volcano in Vanuatu. For those more interested in academics, drones can venture inside a tornado. Right now, scientists have a lot of questions about how tornadoes are formed, and although the movie Twister showed otherwise, humans can't safely collect data from the center. Engineering students at Oklahoma State University could be changing that in the future. They are working to develop drones capable of flying into dangerous storms and collecting data. NASA is also developing a drone for monitoring dangerous weather: The Hurricane and Severe Storm Sentinel, or HS3, studies the storms from above, closer than any piloted aircraft could ever safely attempt. "Our hope is to be able to make better predictions about the impacts of hurricanes," meteorologist Sharan Majumdar told Discover Magazine. That's certainly a crucial task, considering how hard coastlines around the world have been pummeled by severe weather in the last few years. When these flying machines are used for surveillance and military combat they invoke authoritarian symbolism, so it was shocking for many to see rebellious drones defacing a colossal Calvin Klein outdoor advertisement in New York under the dark of night. Last April, KATSU, a well-known graffiti artist, vandal and ne'er-do-well, used a drone armed with a can of spray paint to draw horizontal slash marks across the gargantuan billboarded face of Kendall Jenner.
While the art itself wasn't terribly impressive, it's the kind of performance that could never have been accomplished by mere human hands. As drone technology improves, so, too, will the displays of public tagging. "Seventy percent of the concentration is in maintaining this equilibrium with the two dimensional surface while you are painting," KATSU explained to Wired. But he seemed optimistic about future careers of drones as graffiti artists. "It's exciting to see its first potential use as a device for vandalism." The dream of battling robots to the death has been around ever since robots were first imagined. Something about unmanned machinery summons the inner toddler in everyone who used to mash action figures together until a limb popped off. So it seems only natural that the most exciting use of this high-tech gadgetry is making them fight each other for human amusement. Robot Combat League, anyone? But fighting while flying takes the amusement to new levels. As this video from Intel's Meet the Makers series shows, it takes more than cutting-edge technology to win. Pilots are constantly fixing their fighters on the fly, which requires them to become skilled engineers in order to best their opponents. The Aerial Sports League (ASL) currently leads the pack in flying robot death matches, featuring races and obstacle courses for pilots to navigate. But the real draw is the cage matches (or net matches, more accurately) where two drones try to get the drop on each other by jamming their frames on top of the other's propellers, sending the lesser drone into a crash landing. But ASL founder Maqrue Cornblatt points out that the heavy-duty drones used in these sports are great for reasons other than aiding destruction. "They're really ideal for STEM and educational outreach," he explained. "It's a drone little kids can build and smash and take apart and rebuild over and over again." Cornblatt and his team specifically incorporated "pit stops" that allowed pilots to fix their fighters on the spot, making drones sports a fantastic hobby for burgeoning engineers. "We have an intrinsic human desire to see violence," Cornblatt said. "But to put it in this context, where it's safe and actually educational, is extremely rewarding." Presumably, though, it's only a matter of time before flamethrowers and buzzsaws are added to the fray. As drone technology continues to improve, philanthropists, makers and rebels will find new and interesting ways to entertain, inform and accomplish their goals. Explore further: NYC police see risks with drones' popularity News Article | December 4, 2015 Email, text messaging, and chat apps might seem the perfect tools for deaf people to communicate. But those with little or no hearing are a visual bunch, and many prefer sign language, says Claude Stout, executive director of TDI, an organization that promotes equal-access technology for the deaf and hard of hearing. "We show the nuances of communication," says Stout, through a sign interpreter. "And we use our expressions to show our feelings, and show that we are happy or sad or concerned or upset, just like you can hear those nuances in a person's voice." And signers can talk fast, says Stout, at up to 200 words per minute. Furthermore, signing is often the native language for those who use it. Moving to the keyboard means switching to a second language. 
That's why Stout and his colleagues at TDI were excited to find Glide, an Israeli startup founded in 2012 that makes a free video-chat app of the same name for Android and iOS. Glide told me that it now has "at least several hundred thousand deaf users." (The app has been installed on more than 20 million devices and Glide claims "millions" of active users.) "They were a community that we found accidentally," says Sarah Snow, Glide's community manager. Snow was making YouTube videos about Glide when she started getting comments from app users asking her to add subtitles. "When I first saw those messages, I didn't know what to think; I didn't know how many deaf users we had," she says. "But I knew that they were a community that always responds to my videos." Finding a trove of unexpected fans, Snow went all-out to cultivate them. That meant not only adding captions to her YouTube clips, but starting to learn American Sign Language (ASL) so she could make videos specifically for this community. Snow has also done meetups for users and institutions for the deaf and hard of hearing, including Gallaudet University in Washington, D.C., and the schools for the deaf in Austin, Chicago, New York City, and Fremont, California. Glide users even proposed their own sign for the app, with the most popular being a fist with thumb and index finger extended out. Snow describes Glide's relationship with its deaf users in a video promoting a South by Southwest panel on the experience she's presenting in March 2016. Glide isn't the first tech company to discover fans it never anticipated. Igniter.com, for instance, was founded in 2008 as a group-dating service focused on New York City, but it soon became the fastest-growing dating site in India. "In January 2010, we made the decision that we are an Indian dating site," Igniter cofounder Adam Sachs told the New York Times in an interview. Glide is far from the first video-chat service: Skype was founded a dozen years ago, and FaceTime debuted on the iPhone 4 in 2010. And of course Snapchat has video. But Glide has one killer feature for deaf people: the ability to leave a video message rather than having to prearrange a live call. "With Glide, they can send a message whenever they want," says Snow. "They don't have to wait for someone to answer a call." With that asynchronous messaging capability, sign language users get the same flexibility everyone else has with tools such as email or Facebook messages. The app could do more, though, Snow soon learned. Glide uses a feature called optimize video frame rate, in which it skips some video frames when bandwidth is limited. A hearing person might appreciate trading quality for speed, but dropped frames could garble sign communication. At the request of deaf users, Glide added a setting to turn off the feature. "If you send a video and you have a poor connection, then it might take a second or two longer to send," says Snow, "but it will play smoothly." In August 2015, TDI chose Glide for its biennial Andrew Saks Engineering Award for enabling technologies. "We wanted to show the world that Glide's video technology is starting out as a mainstream technology for anyone who wants to use it," says Stout. "But Glide [is] proceeding with advertising and promoting this technology as a benefit in the deaf community." The previous two recipients of the award were Microsoft in 2013 for its overall commitment to accessibility, and Google in 2011 for its technology to automatically create captions on YouTube videos. 
(The feature was introduced in November 2009.) Even the companies that are leading in this field could be doing more, some say. Rikki Poynter is a deaf YouTube videographer who began with instructional videos on makeup (though she can't hear, she can speak pretty clearly) and has since broadened her focus to other issues in the deaf community. The quality of YouTube's automated caption creation, she says, is still poor—she calls the feature "automatic craptions." (We chat over Skype, with me typing questions and her speaking answers.) "People always laugh about it," she says, "but it's not really funny, because that is all that's given to us." People viewing college lectures, for example, could miss key information, she says. Poynter says she has spoken with people at YouTube, who tell her that the technology isn’t far enough along for better quality. But she remains skeptical, noting her experience with Apple's speech recognition. "My boyfriend will talk to his iPhone," she says. "It will come out spot-on." Poynter was featured in a February 2015 BBC article that quotes YouTube product manager Matthew Glotzbach saying, "Although I think having auto caption is better than nothing, I fully admit and I fully recognize that it is by no means good enough yet." A bigger problem, though, is that most YouTubers don't even think about using captions, says Poynter. According to Glotzbach, in the same article, only 25% of YouTube content has captioning. Sarah Snow has taken up the cause with a campaign encouraging users to contact their favorite YouTube creators and post about it using the hashtag #withcaptions. One selling point is that captions are useful in many situations beyond the deaf community, such as in noisy settings like bars, where closed-captioned TV broadcasts are a staple. Stout calls this an example of universal design policies that benefit everyone, not only the deaf or hard of hearing. Much as he likes Glide, Stout looks forward to a day when such video messaging capability is a universal design across devices and doesn’t require a special app. "You could use instant messaging . . . regardless of what technology you use," he says. "In the future, it would be nice to be able to send a video message or to make a video call with anyone, regardless of what technology device they are using."
<urn:uuid:255a57d2-bd6f-4bd3-9a10-4ca9c38b456c>
CC-MAIN-2017-09
https://www.linknovate.com/affiliation/asl-1-1278838/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174276.22/warc/CC-MAIN-20170219104614-00506-ip-10-171-10-108.ec2.internal.warc.gz
en
0.957794
4,590
3.125
3
Zero to ECDH (Elliptic-Curve Diffie-Hellman) in 30 Minutes This is a quick primer on the elliptic-curve Diffie-Hellman (ECDH) key-agreement scheme. It provides a simple illustration of how the properties of elliptic-curve cryptography (ECC) can be used to build a useful security scheme. A key agreement scheme is a procedure by which two or more parties agree upon a value from which they can subsequently derive one or more keys for use in a symmetric encryption and/or data authentication scheme. Neither party completely determines the key value on their own. Instead, they both contribute to the final key value. And, most important, anyone who observes the exchanges between the two parties cannot tell what the final result will be. It is important to remember that, in their basic form, key-agreement schemes are anonymous. In other words, they don’t tell either party the identity of the other party (the one with whom they have agreed a key), nor whether that party is the one they believe it to be.
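As a concrete (if simplified) illustration of the agreement itself, here is a minimal sketch using the Python "cryptography" package; the curve and the KDF parameters are just reasonable example choices, and — as noted above — nothing in this exchange authenticates either party:

```python
# Minimal ECDH key-agreement sketch; unauthenticated, for illustration only.
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# Each party generates its own key pair and shares only the public half.
alice_private = ec.generate_private_key(ec.SECP256R1())
bob_private = ec.generate_private_key(ec.SECP256R1())

# Each side combines its own private key with the other's public key...
alice_shared = alice_private.exchange(ec.ECDH(), bob_private.public_key())
bob_shared = bob_private.exchange(ec.ECDH(), alice_private.public_key())
assert alice_shared == bob_shared   # ...and both arrive at the same shared secret

# The raw shared secret is then run through a KDF to derive symmetric key material.
key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
           info=b"example handshake").derive(alice_shared)
print(len(key), "byte symmetric key derived")
```

An eavesdropper who sees both public keys still cannot compute the shared secret, which is exactly the property the scheme relies on — but pairing it with some form of identity verification (certificates, signatures, or a pre-shared secret) is what turns it into an authenticated key exchange.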
<urn:uuid:f7f6face-73a0-464c-9822-116c1fadcd9c>
CC-MAIN-2017-09
https://www.entrust.com/resource/zero-ecdh-elliptic-curve-diffie-hellman-30-minutes/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170249.75/warc/CC-MAIN-20170219104610-00098-ip-10-171-10-108.ec2.internal.warc.gz
en
0.909982
242
2.96875
3
Math Comprehension Made Easy June 16, 2012 Want to dive into analytics as a data scientist? Get started with Stonehill College‘s “How to Read Mathematics.” The well-structured article by Shai Simonson and Fernando Gouvea details the reading protocol that will allow anyone to get the most out of reading mathematical explanations as opposed to, say, reading poetry or fiction. The authors explain: “Students need to learn how to read mathematics, in the same way they learn how to read a novel or a poem, listen to music, or view a painting. . . . Mathematical ideas are by nature precise and well defined, so that a precise description is possible in a very short space. Both a mathematics article and a novel are telling a story and developing complex ideas, but a math article does the job with a tiny fraction of the words and symbols of those used in a novel. “ The article goes on to explain common mistakes math readers make, such as missing the big picture for the details, reading passively, and reading too fast. A wealth of tips for understanding math texts follows, including examples. Much of this is information I knew, but had trouble articulating when my son was in pre-calc. How I wish I had had this piece then! For anyone looking at a math-heavy field like data analytics, this article is a must-read. Cynthia Murrell, June 16, 2012 Sponsored by PolySpot
<urn:uuid:2db1b3ad-6fff-4134-a7ca-36b0cc13cc44>
CC-MAIN-2017-09
http://arnoldit.com/wordpress/2012/06/16/math-comprehension-made-easy/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170613.8/warc/CC-MAIN-20170219104610-00274-ip-10-171-10-108.ec2.internal.warc.gz
en
0.955154
308
3.359375
3
Analog. Digital. What’s the Difference? by Paul Wotel Analog phone lines. Analog signals. Digital security. Digital PBX. Analog-to-digital adapters. What does it all mean? In the telecom world, understanding analog versus digital isn't as simple as comparing one technology to another. It depends on what product (and in some cases, which product feature) you happen to be talking about. Analog at a glance As a technology, analog is the process of taking an audio or video signal (in most cases, the human voice) and translating it into electronic pulses. Digital, on the other hand, breaks the signal into a binary format where the audio or video data is represented by a series of "1"s and "0"s. Simple enough when it's the device (analog or digital phone, fax, modem, or the like) that does all the converting for you. Is one technology better than the other? Analog technology has been around for decades. It's not that complicated a concept and it's fairly inexpensive to use. That's why we can buy a $20 telephone or watch a few TV stations with the use of a well-placed antenna. The trouble is, analog signals have size limitations as to how much data they can carry. So with our $20 phones and inexpensive TVs, we only get so much. The newer of the two, digital technology breaks your voice (or television) signal into binary code (a series of 1s and 0s) and transfers it to the other end, where another device (phone, modem or TV) takes all the numbers and reassembles them into the original signal. The beauty of digital is that it knows what it should be when it reaches the end of the transmission. That way, it can correct any errors that may have occurred in the data transfer. What does all that mean to you? Clarity. In most cases, you'll get distortion-free conversations and clearer TV pictures. You'll get more, too. The nature of digital technology allows it to cram lots of those 1s and 0s together into the same space an analog signal uses. That means more features can be crammed into the digital signal, like your button-rich phone at work or your 200-plus-channel digital cable service. Compare your simple home phone with the one you may have at the office. At home you have mute, redial, and maybe a few speed-dial buttons. Your phone at work is loaded with function keys, call transfer buttons, and even voice mail. Now, before audiophiles start yelling at me through their PC screens, yes, analog can deliver better sound quality than digital…for now. Digital offers better clarity, but analog gives you richer quality. But like any new technology, digital has a few shortcomings. Since devices are constantly translating, coding, and reassembling your voice, you won't get the same rich sound quality as you do with analog. And for now, digital is still relatively expensive. But slowly, digital (like the VCR or the CD) is coming down in cost and coming out in everything from cell phones to satellite dishes. When you're shopping in the telecom world, you often see products touted as "all digital." Or warnings such as "analog lines only." What does it mean? The basic analog and digital technologies vary a bit in definition depending on how they're implemented. Analog lines, also referred to as POTS (Plain Old Telephone Service), support standard phones, fax machines, and modems. These are the lines typically found in your home or small office. Digital lines are found in large, corporate phone systems. How do you tell if the phone line is analog or digital? Look at the back of the telephone connected to it.
If you see "complies with part 68, FCC Rules" and a Ringer Equivalence Number (REN), then the phone and the line are analog. Also, look at the phone's dialpad. Are there multiple function keys? Do you need to dial "9" for an outside line? These are indicators that the phone and the line are digital. A word of caution. Though digital lines carry lower voltages than analog lines, they still pose a threat to your analog equipment. If you're thinking of connecting your phone, modem, or fax machine to your office's digital phone system, DON'T! At the very least, your equipment may not function properly. In the worst case, you could zap your communications tools into oblivion. How? Let's say you connect your home analog phone to your office's digital line. When you lift the receiver, the phone tries to draw an electrical current to operate. Typically this is regulated by the phone company's central office. Since the typical proprietary digital phone system has no facilities to regulate the current being drawn through it, your analog phone can draw too much current, so much that it either fries itself or, in rare cases, damages the phone system's line card. What to do? There are digital-to-analog adapters that not only let you use analog equipment in a digital environment, but also safeguard against frying the internal circuitry of your phone, fax, modem, or laptop. Some adapters manufactured by Konexx come designed to work with one specific piece of office equipment: phone, modem, laptop, or teleconferencer. Simply connect the adapter in between your digital line and your analog device. That's it. Or you can try a universal digital-to-analog adapter such as Hello Direct's LineStein®. It works with any analog communications device. Plus, it's battery powered, so you're not running extra cords all over your office. The very nature of digital technology (breaking a signal into binary code and recreating it on the receiving end) gives you clear, distortion-free cordless calls. Cordless phones with digital technology are also able to encrypt all those 1s and 0s during transmission, so your conversation is safe from eavesdroppers. Plus, more power can be applied to digital signals, and thus you'll enjoy longer range on your cordless phone conversations. The advantage to analog cordless products? Well, they're a bit cheaper. And the sound quality is richer. So unless you need digital security, why not save a few bucks and go with an analog phone? After all, in home or small office environments where you may be the only cordless user, you won't have any interference issues. Keep in mind, when talking about digital and analog cordless phones, you're talking about the signals being transferred between the handset and its base. The phones themselves are still analog devices that can only be used on analog lines. Also, the range of your cordless phone, analog or digital, will always depend on the environment. Perhaps the most effective use of digital versus analog technology is in the booming cellular market. With new phone activations increasing exponentially, the limits of analog are quickly being realized. Digital cellular lets significantly more people use their phones within a single coverage area. More data can be sent and received simultaneously by each phone user. Plus, transmissions are more resistant to static and signal fading. And with the all-in-one phones out now (phone, pager, voice mail, internet access), digital phones offer more features than their analog predecessors.
Analog's sound quality is still superior (some users with dual-transmission phones will manually switch to analog for better sound when they're not concerned with a crowded coverage area), but digital is quickly becoming the norm in the cellular market. What to buy? The first thing to consider when buying analog or digital equipment is where you'll be using it. If you're buying for a proprietary PBX phone system, you'll need to get the digital phone designed for that particular system. Need to connect a conferencer on your digital system? Opt for a digital-to-analog adapter. Shopping for home office equipment? Most everything you'll consider is analog. Want an all-in-one cellular phone (paging, voice mail, web)? A digital cellular phone will deliver it all. In fact, the only head-scratcher may be your cordless phone purchase. Looking for security and distortion-free conversations in your small office? Go with a digital 900 MHz or 2.4 GHz cordless phone. Using a cordless at home? An analog phone will give you the richest sound quality and, usually, a lower price.
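Going back to the basic definition at the top of this piece (digitizing means sampling a signal and expressing each sample as bits), here is a toy Python sketch. The 8 kHz rate and 8-bit depth are arbitrary illustrative choices, and real telephone codecs add companding and other steps that this ignores.

# Toy illustration of analog-to-digital conversion: sample a continuous
# signal at regular intervals and quantize each sample to a small number
# of bits.
import math

SAMPLE_RATE = 8000   # samples per second, in the ballpark of telephone audio
BITS = 8             # quantization depth
LEVELS = 2 ** BITS

def analog_signal(t):
    """A stand-in for the continuous voltage on an analog line."""
    return math.sin(2 * math.pi * 440 * t)  # a 440 Hz tone

def digitize(duration_s):
    samples = []
    for n in range(int(duration_s * SAMPLE_RATE)):
        v = analog_signal(n / SAMPLE_RATE)          # sample the "voltage"
        code = round((v + 1) / 2 * (LEVELS - 1))    # quantize to 0..255
        samples.append(code)                        # these are the 1s and 0s
    return samples

print(digitize(0.001)[:8])  # the first few 8-bit codes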
<urn:uuid:636974d0-393e-4d70-83f6-30c065573f9e>
CC-MAIN-2017-09
http://telecom.hellodirect.com/docs/Tutorials/AnalogVsDigital.1.051501-P.asp
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172018.95/warc/CC-MAIN-20170219104612-00502-ip-10-171-10-108.ec2.internal.warc.gz
en
0.921744
1,842
2.78125
3
Memory management can be challenging enough on traditional data sets, but when big data enters the picture, things can slow way, way down. A new programming language announced by MIT this week aims to remedy that problem, and so far it's been found to deliver fourfold speed boosts on common algorithms. The principle of locality is what governs memory management in most computer chips today, meaning that if a program needs a chunk of data stored at some memory location, it's generally assumed to need the neighboring chunks as well. In big data, however, that's not always the case. Instead, programs often must act on just a few data items scattered across huge data sets. Fetching data from main memory is the major performance bottleneck in today’s chips, so having to fetch it more frequently can slow execution considerably. “It’s as if, every time you want a spoonful of cereal, you open the fridge, open the milk carton, pour a spoonful of milk, close the carton, and put it back in the fridge,” explained Vladimir Kiriansky, a doctoral student in electrical engineering and computer science at MIT. With that challenge in mind, Kiriansky and other researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have created Milk, a new language that lets application developers manage memory more efficiently in programs that deal with scattered data points in large data sets. Essentially, Milk adds a few commands to OpenMP, an API for languages such as C and Fortran that makes it easier to write code for multicore processors. Using it, the programmer inserts a few additional lines of code around any instruction that iterates through a large data collection looking for a comparatively small number of items. Milk’s compiler then figures out how to manage memory accordingly. With a program written in Milk, when a core discovers that it needs a piece of data, it doesn’t request it -- and the attendant adjacent data -- from main memory. Instead, it adds the data item’s address to a list of locally stored addresses. When the list gets long enough, all the chip’s cores pool their lists, group together those addresses that are near each other, and redistribute them to the cores. That way, each core requests only data items that it knows it needs and that can be retrieved efficiently. In tests on several common algorithms, programs written in the new language were four times as fast as those written in existing languages, MIT says. That could get even better, too, as the researchers work to improve the technology further. They're presenting a paper on the project this week at the International Conference on Parallel Architectures and Compilation Techniques.
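Milk itself extends OpenMP for C and Fortran, but the batching idea it applies can be sketched in plain Python. The toy below is our own illustration, not Milk syntax: it contrasts touching scattered indices one by one with collecting the addresses first and visiting them grouped by neighborhood, which is where the locality win comes from on real hardware (Python itself will not show the speedup).

# Conceptual sketch of address batching; the grouping granularity is an
# arbitrary illustrative choice.
from collections import defaultdict

CACHE_LINE = 64  # pretend granularity at which memory is fetched

def naive_gather(data, indices):
    # One "fetch" per scattered access, like opening the fridge per spoonful.
    return [data[i] for i in indices]

def batched_gather(data, indices):
    # Phase 1: record the addresses instead of dereferencing immediately.
    buckets = defaultdict(list)
    for i in indices:
        buckets[i // CACHE_LINE].append(i)
    # Phase 2: visit buckets in address order, so neighboring items are
    # handled together while their "cache line" is hot.
    out = {}
    for line in sorted(buckets):
        for i in buckets[line]:
            out[i] = data[i]
    return [out[i] for i in indices]  # restore the caller's ordering

data = list(range(1_000_000))
idx = [987_123, 5, 987_124, 6, 500_000]
assert naive_gather(data, idx) == batched_gather(data, idx)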
<urn:uuid:a8fd36d2-6e96-47da-a633-325abc9fa7f7>
CC-MAIN-2017-09
http://www.networkworld.com/article/3120506/this-new-programming-language-promises-a-4x-speed-boost-on-big-data.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170614.88/warc/CC-MAIN-20170219104610-00622-ip-10-171-10-108.ec2.internal.warc.gz
en
0.947457
559
3.390625
3
Black Box Explains...SCSI Ultra2 and LVD (Low-Voltage Differential) Small Computer System Interface (SCSI), pronounced “scuzzy,” has been the dominant technology used to connect computers and high-speed peripherals since the 1980s. SCSI technology is constantly evolving to accommodate increased bandwidth needs. One of the more recent developments is Ultra2 SCSI. Because Ultra2 SCSI is backward compatible, it works with all legacy equipment. Ultra2 doubles the possible bandwidth on the bus from 40 to 80 MBps! Just as importantly, Ultra2 supports distances up to 12 meters (39.3 ft.) for a multiple-device configuration. Ultra2 uses Low-Voltage Differential (LVD) techniques to transfer data at faster rates with fewer errors. Don’t confuse Ultra2 with LVD. Ultra2 is a data-transfer method; LVD is the signaling technique used to transfer the data. Cables are very important when designing or upgrading a system to take advantage of Ultra2 SCSI. Cables and connectors must be of high quality and they should come from a reputable manufacturer to prevent crosstalk and minimize signal radiation. BLACK BOX® Ultra2 LVD cables are constructed of the finest-quality components to provide your system with the maximum protection and highest possible data-transfer rates.
<urn:uuid:461eed9b-caae-42bf-80e7-fe9020f95ad8>
CC-MAIN-2017-09
https://www.blackbox.com/en-pr/products/black-box-explains/black-box-explains-scsi-ultra2-and-lvd-(low-voltage-differential)
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170914.10/warc/CC-MAIN-20170219104610-00146-ip-10-171-10-108.ec2.internal.warc.gz
en
0.906387
278
3.09375
3
If you can shop, make appointments or register your car via the Internet from the convenience of your home -- or from halfway around the world, for that matter -- why can't you get your education in the same manner? These days, many students can. Not only are continuing education and college students completing courses at times and places convenient for them, but in recent years, more and more high-school, middle-school and even grade-school students do so as well. At least 15 states have distance education programs for public school students, according to the U.S. Department of Education's 2004 report, Toward a New Golden Age in American Education -- How the Internet, the Law and Today's Students Are Revolutionizing Expectations. A deluge of regional and districtwide programs exist as well, and new programs are cropping up all the time. Within the next decade, the report predicted, every state and most schools will offer some form of virtual schooling. Virtual schools offer students and their families scheduling flexibility, course options, varied learning formats and experience that will benefit them beyond their schooling. Each program, however, is not for everybody, and a wide variety of virtual schools have emerged to cater to the varying needs of both students and the educational systems in which they learn. Though this next evolution in education shows no sign of slowing, some kinks must be worked out -- such as questions about funding and oversight in a school system that no longer fits within previously delineated geographic boundaries. "It's relatively new even at this point," said David Griffith, spokesman for the National Association of State Boards of Education (NASBE), comparing virtual schools -- in existence for only about 10 years -- to charter schools. "Even [with] charter schools, there's a physical location, and the geographic area where students -- the attendance -- would be pulled from is somehow limited and finite. With virtual schools, that's not the case. "You could have students living in one part of the state attending a virtual school in another," he continued. "And one of the things we've seen is it's a question of who pays, and determining which is the student's location for the purposes of making funding calculations and that sort of thing." In many cases, this situation has created friction among virtual and traditional schools, as per-pupil funds are drawn away from traditional schools along with students choosing to attend a virtual or charter school. In Ohio, many public school entities have begun offering online courses to compete with charter schools. Virtual schools have also raised questions about oversight, said Griffith. "If it's removed from the local community, then who is able to guarantee the services are being delivered and students are making the educational progress they should? The local district? The virtual school on the other side of the state? The local district in which the school is being operated? Is it the state, because it does cross district boundaries? These are all issues as we move forward that are going to have to be figured out." Virtual Learning Put to Use For years, Florida has used virtual schools to offer students and families more options. The state has several statewide programs -- including two full-time K-8 pilots aimed at reducing class sizes and the Florida Virtual School (FLVS), a supplemental program that caters to middle- and high-school students. 
Florida's Legislature has taken virtual schooling in hand and created mechanisms to fund its three statewide programs, as well as a handful of district-level programs. "What we're providing for students is really a choice of how they receive their education," said Bruce Friend, chief administrative officer for the FLVS. "The traditional classroom environment model is not the best learning environment
<urn:uuid:1687ed2b-89d4-4c42-9636-eec54720ed74>
CC-MAIN-2017-09
http://www.govtech.com/e-government/Education-Options.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171608.86/warc/CC-MAIN-20170219104611-00498-ip-10-171-10-108.ec2.internal.warc.gz
en
0.974603
756
2.859375
3
This client came to Gillware’s data recovery lab after an accidental reformat cut them off from their collection of family photos and documents. They had wanted to free up space on their drive, and thought they had their files backed up on their computer. Only once they’d continued to use the drive for a few hours after their successful hard drive reformat did they realize they did not have as much data backed up as they’d assumed. Fortunately for our client, situations like these are bread and butter for our data recovery technicians.
Hard Drive Reformat Case Study: A Modern Palimpsest
Drive Model: Western Digital WD10SPCX-60KHST0
Drive Capacity: 1 TB
Operating System: Windows
Situation: Hard drive was accidentally reformatted and briefly used
Type of Data Recovered: Photos and documents
Binary Read: 100%
Gillware Data Recovery Case Rating: 9
The More Things Change… Here is a little riddle: How is a reformatted hard drive like the Codex Nitriensis? Believe it or not, the concept of reformatting a data storage device predates hard disk drive technology—and by a lot longer than you’d expect. For thousands of years, humans have stored data by writing it down. (Of course, we still do this today, but not as often as our ancestors did.) In antiquity, material to write on was often scarce or expensive to produce, and had to be rationed carefully and reused. A palimpsest is a manuscript that has been erased so it could be reused. Throughout history, writers have taken used scrolls or manuscripts, scraped or washed the existing text off of them, and written new text over them. Archaeologists and historians have, thanks to technological advancement over the past centuries, developed ever-better tools for deciphering the erased text from these palimpsests. A reformatted hard drive is simply a modern palimpsest. You take a hard drive full of data, make it appear blank, and start reusing it. But the old data lives just beneath the surface, out of sight. And just as historians and archaeologists recover data from ancient manuscripts, data recovery experts can examine reformatted hard drives and decipher the data that used to live within them.
When you need an expert to recover data from a formatted hard drive, Gillware has your back. Our reformatted hard drive recovery specialists know the ins and outs of hard drive data storage better than anyone else. Gillware’s logical hard drive recovery technicians know where your data goes when you accidentally reformat a hard drive, as well as how to retrieve that data. How Does Gillware Recover Files From a Formatted Hard Drive? Your hard drive has filesystem metadata that defines the size and boundaries of its partitions and points to the locations of all of your files. When you reformat your hard drive, you write new metadata to the drive to define a new filesystem. But instead of erasing everything that used to exist on the drive, you just cover it up and close the paths leading to where your data lives. Immediately after reformatting your drive, most (if not all) of your data is still perfectly intact. When you write new data to the drive, though, this data falls on top of some of the old data. Keep using the reformatted hard drive and old data will gradually vanish, bit by bit. To recover data from a formatted hard drive, our technicians have to be prepared to deal with all of the ugly possibilities of heavy corruption and loss of the old filesystem metadata that can result from an accidental reformat. Many kinds of readily-available file recovery software tools can’t deal with slightly complicated file recovery situations, let alone worst-case scenarios. Our hard drive reformat recovery specialists use our own proprietary imaging and analysis tools to recover files from formatted hard drives. Hard Drive Reformat Recovery Results Our hard drive reformat specialists successfully recovered the vast majority of the client’s data from the client’s modern palimpsest. With the help of our imaging and analytical tools, our logical data recovery technicians could uncover the old filesystem prior to the user’s accidental reformat. The filesystem metadata, including file definitions, pointed to all of the client’s critical data. The vast majority of the data was fully functional. Only a small fraction of the data had been overwritten by the new files the client had made. We rated this hard drive reformat case a 9 on our ten-point rating scale. This client was fortunate that the vast majority of their data was intact. But the actions you take after an accidental reformat can have a big impact on how much of your data we can recover. And which actions you take often depend on why you reformatted the drive in the first place. Our engineers see all too frequently clients who reformatted their drives, then kept using them until they realized they were missing important files, unaware they were compromising the integrity of their old data. This can have adverse effects on data recovery. When you’ve reformatted your hard drive and lost data, you should bring it to a data recovery professional as soon as you notice that you’ve erased critical data.
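As a toy model of why recovery after a quick format is possible at all, the following Python sketch "formats" a pretend disk by wiping only its tiny allocation table and then finds the old file again by scanning the untouched data blocks. The block size, the fake table, and the signature scan are all illustrative inventions, not how NTFS or Gillware's own metadata-based tools actually behave.

# Toy model: a quick reformat rewrites only the allocation structures,
# not the data blocks, so the old bytes are still there to be found.
BLOCK = 512
disk = bytearray(BLOCK * 64)

# "Write" a file: record it in a tiny allocation table (block 0) and put
# its contents in a data block further out.
photo = b"\xff\xd8\xff\xe0" + b"holiday pixels" * 20 + b"\xff\xd9"
disk[10 * BLOCK:10 * BLOCK + len(photo)] = photo
disk[0:32] = b"TABLE: photo.jpg @ block 10".ljust(32)

# Quick reformat: only the table block is wiped.
disk[0:BLOCK] = bytes(BLOCK)

# The file no longer "exists", yet scanning the raw blocks for a known
# signature still locates it -- until new writes land on top of block 10.
start = disk.find(b"\xff\xd8\xff\xe0")
end = disk.find(b"\xff\xd9", start) + 2
recovered = bytes(disk[start:end])
assert b"holiday pixels" in recovered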
<urn:uuid:a8e9be7e-5ead-4ad5-aa04-1db89d671677>
CC-MAIN-2017-09
https://www.gillware.com/blog/data-recovery-case/hard-drive-reformat-modern-palimpsest/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171807.25/warc/CC-MAIN-20170219104611-00022-ip-10-171-10-108.ec2.internal.warc.gz
en
0.941366
1,386
2.984375
3
802.16: A Look Under the Hood By enabling quick and relatively inexpensive deployment of broadband services infrastructure, the IEEE 802.16 standards for wireless broadband access have the potential to finally address the long-standing “last mile” problem that has plagued the data and telecom carrier industries. In Part 1 of our in-depth look at 802.16, we discussed how the new technology could be utilized and what was happening in the industry. Now let's delve into the nitty-gritty details of how the standards work and what data networking services they enable. Wireless Support for Data Networking Services In addition to the physical layer discussed in Part 1, 802.16 defines a Media Access Control (MAC) layer. The capabilities of this layer allow 802.16 to support a wide array of data networking services, including many services that are already familiar to corporate and residential users using copper or fiber networks. Because they provide the basis for these services, support for both ATM and packet operations was a requirement in the 802.16 design. ATM is important because of its role in telecom carrier infrastructure. For example, ATM is often used to support DSL services. ATM is also widely used to support voice transmissions. When it comes to packet operation, 802.16 supports all of the “usual suspects,” including IPv4, IPv6, Ethernet, and VLAN services. 802.16 accomplishes all of this by dividing its MAC layer into separate sublayers that handle different services, provide common core functions, and implement wireless privacy. Overall, this design gives 802.16 both flexibility and efficiency at the same time. The convergence sublayers map the different services into the core MAC common part sublayer. In addition to relating service data units to MAC connections, the convergence sublayers are responsible for decisions about bandwidth allocation and QoS. They also embody functions to get the most efficient use (maximum effective bits transmitted and received) out of the radio frequencies themselves. The common part sublayer is connection-oriented. All services, even connectionless services such as Ethernet and IP, are mapped into a MAC connection. The common part sublayer includes mechanisms for requesting bandwidth, including bandwidth on demand — a very attractive option for many carriers. Security and More Security Authentication and registration are part of the 802.16 MAC common part sublayer. Authentication is based on the use of PKI technology-based X.509 digital certificates. Just as every Ethernet interface comes with its own unique Ethernet MAC address, every 802.16 customer transceiver will include one built-in certificate for itself and another for its manufacturer. These certificates allow the customer transceiver to uniquely authenticate itself back to the base station. The base station can then check to see if the customer transceiver is authorized to receive service. If the database lookup succeeds, the base station sends the customer transceiver an encrypted authorization key, using the customer transceiver’s public key. This authorization key is used to encrypt and protect any transmissions that follow. Link privacy is implemented as part of another MAC sublayer, called the Privacy sublayer. It operates below the common part sublayer. It is based on the Privacy Key Management protocol that is part of the DOCSIS BPI+ specification. The changes to the DOCSIS design are aimed at integration with the 802.16 MAC. 
They also enable 802.16 to take advantage of recent advances in cryptographic techniques. Features and Goodies 802.16 supports a wide variety of QoS (Quality of Service) options, based on mechanisms used in DOCSIS. Bandwidth can be allocated to a customer transceiver and managed on that basis, or it can be allocated to individual connections between the base station and the customer transceiver. Some customer transceivers will manage their own allocations, even to the extent of stealing bandwidth from one connection to help another. Customer transceivers are permitted to negotiate with the base station for changes in allocations. These design choices enable services as diverse as connection-oriented, constant-bandwidth ATM and connectionless, bursty IP traffic to co-exist in the same box. 802.16 is flexible enough to permit a single customer transceiver to simultaneously employ one set of 802.16 MAC connections for individual ATM connections and another set for sharing among numerous IP end users. 802.16 uses scheduling services to implement bandwidth allocation and QoS. Unsolicited grant services provide a fixed, regular allocation. This mechanism is well suited for ATM or T1/E1 over ATM. There is relatively low overhead because there is no need to support requests for changes to the allocation. At the same time, delivery delay and jitter are minimized. For flexibility, 802.16 also specifies a wide variety of mechanisms to request bandwidth allocation changes, including MAC protocol requests and various types of polling. The same mechanisms also can be applied to deliver best effort service, which makes no guarantees for throughput or delay. In addition to extending 802.16 operations to the 2-11 GHz range, 802.16a also extends the reach of 802.16 beyond the limits of communication between a base station and a customer transceiver. It does this by enhancing the base standard to support mesh deployment. In mesh deployments, a customer transceiver can act as an intermediary between another customer transceiver and the base station. In other words, the customer transceiver is acting as a switch between locations. 802.16 vs. 802.11 It is natural to ask whether 802.16 will replace or compete with 802.11. This question will become even more pertinent once the 802.16 working group completes its work on mobility. Assuming the ratification of a standard for 802.16 mobility and good non-line-of-sight operation inside buildings, which standard should you use, or both? 802.11 is rapidly becoming established. It is cheap and easy to install, and its well-publicized problems with security are being addressed. 802.11 is normally deployed using a hotspot approach. Hotspots are chosen to provide the desired campus coverage. The access points are then attached to the corporate LAN backbone. In comparison, there are a variety of enterprise network architectures that can be implemented using 802.16. The technology could simply connect campuses to each other or could also work directly with end-user laptops and desktop systems, perhaps replacing all or part of the wired campus backbone. While there will certainly be some overlap, the two standards have some important differences. 802.11 has wide 20 MHz channels and a MAC that is designed to support tens of users over a relatively small radius of 100-300 meters. (MACs that use more power to attain the 300m limit may be non-standard.) 
On the other hand, 802.16a allows the operator to control channel bandwidth, and its MAC is designed to support thousands of simultaneous users over a 50 km radius. (This reach has not been demonstrated yet; working products may have a somewhat smaller range.). The maximum data rate for 802.16 is higher than that of 802.11, partially because it gets nearly twice the number of bits per second from a single Hertz of frequency. In addition, 802.16 offers a variety of QoS choices, while 802.11 supports only best-effort service (with the possible addition of priorities, as in 802.11e). Because of these options, 802.16a requires more configuration in order to manage the users and the services they receive. The fact that 802.16a supports mesh network topology while 802.11 does not may be more significant to carriers than to enterprise IT managers, given the wide radius of coverage offered by a single 802.16 base station. Even more important than any of these technical differences are the issues of when the 802.16 standards will be completed and when 802.16 products will become available. Millions of 802.11 NIC cards are being installed today, while 802.16 products will not be available for another twelve to eighteen months. By that time, there will be a very large and significant installed base of 802.11 interfaces in offices and homes. This will provide considerable inertia against any change from 802.11 to 802.16. For the pendulum to swing in the 802.16 direction, there must be significant and compelling benefits for enterprises and individual users to make the switch. What Does the Future Hold? As you can see, much care and work have gone into the design of 802.16. Becoming an expert will mean learning many details. However, it is worth understanding at least the rudiments of this technology because it has the potential to revolutionize how companies and carriers design and evolve their networks. At the end of the day, everyone would like to be able to do more while spending less money, and obviating the need for wires can result in a considerable reduction in infrastructure costs, which means wireless data networking — in its many forms — is clearly here to stay. Beth Cohen is president of Luth Computer Specialists, Inc., a consulting practice specializing in IT infrastructure for smaller companies. She has been in the trenches supporting company IT infrastructure for over 20 years in a number of different fields, including architecture, construction, engineering, software, telecommunications, and research. She is currently consulting, teaching college IT courses, and writing a book about IT for the small enterprise. Debbie Deutsch is a principal of Beech Tree Associates, a data networking and information assurance consultancy. She is a data networking industry veteran with 25 years experience as a technologist, product manager, and consultant, including contributing to the development of the X.500 series of standards and managing certificate-signing and certificate management system products. Her expertise spans wired and wireless technologies for Enterprise, Carrier, and DoD markets.
<urn:uuid:954a1e2e-1a18-442d-a7a8-c4a7eb3f8b68>
CC-MAIN-2017-09
http://www.enterprisenetworkingplanet.com/print/netsp/article.php/3068551/80216-A-Look-Under-the-Hood.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172077.66/warc/CC-MAIN-20170219104612-00198-ip-10-171-10-108.ec2.internal.warc.gz
en
0.937309
2,002
3.03125
3
Wednesday, Mayor Gavin Newsom announced the development of a first-of-its-kind solar mapping Web site. The Web site enables users to visualize the potential environmental benefits and monetary savings as a result of installing solar energy panels on their property. The San Francisco Solar Map solution is a Web map portal that enables San Francisco city residents and business owners to enter their home or building address and see an aerial view of their individual structure. By clicking on the image of the structure, users are provided with the following information: - The estimated amount of solar energy paneling that could be installed on the structure's roof - The estimated amount of solar energy that could be generated based on the structure's location - Potential electricity cost reduction resulting from installation of solar energy panels - Potential carbon dioxide/greenhouse gas (CO2) reduction as a result of panel installation - Potential rebate amount from the State of California as a result of solar energy panel investment - Case studies of other San Franciscans who have already installed solar panels and their stories - Contact information for local solar panel system installers Using Google Maps as a visualization platform, the San Francisco Solar Map solution enables users to view the results of an analysis of the solar potential for their home or commercial building. The resulting map is developed from detailed aerial photographs and other data. It bases its calculations on structure roof square-footage and considers factors such as a pitched roof and obstructions including tree shading and nearby tall buildings. "San Francisco is committed to clean energy, and to making sure resources are available to make it easier for San Franciscans to go solar," said Mayor Gavin Newsom. "By using our solar mapping program, residents and businesses can quickly determine whether specific photovoltaic projects will pencil out, which is the first step to getting more renewable energy in the city." The San Francisco Solar Map solution is available at no charge via the Internet. The solution is adaptable for use in other cities and counties throughout the world.
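The arithmetic behind such an estimate can be sketched roughly as follows. Every number here (usable roof fraction, panel power density, local sun hours, electricity price, grid emissions factor) is an illustrative assumption, not a value the San Francisco Solar Map actually uses.

# Back-of-the-envelope solar estimate; all constants are placeholders.
def solar_estimate(roof_sq_ft):
    usable_fraction = 0.6        # assumed share of roof suitable for panels
    watts_per_sq_ft = 15         # assumed panel power density
    sun_hours_per_day = 5.0      # assumed average for the site
    price_per_kwh = 0.25         # assumed electricity price, $/kWh
    kg_co2_per_kwh = 0.4         # assumed grid emissions factor

    system_kw = roof_sq_ft * usable_fraction * watts_per_sq_ft / 1000
    annual_kwh = system_kw * sun_hours_per_day * 365
    return {
        "system_size_kw": round(system_kw, 1),
        "annual_kwh": round(annual_kwh),
        "annual_savings_usd": round(annual_kwh * price_per_kwh),
        "annual_co2_kg": round(annual_kwh * kg_co2_per_kwh),
    }

print(solar_estimate(roof_sq_ft=800))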
<urn:uuid:74973d51-3253-4f19-9d28-96eae2961657>
CC-MAIN-2017-09
http://www.govtech.com/e-government/San-Francisco-Unveils-Innovative-Solar-Mapping.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173872.97/warc/CC-MAIN-20170219104613-00374-ip-10-171-10-108.ec2.internal.warc.gz
en
0.91979
407
2.828125
3
The world will be introduced to the new Internet Protocol (IP) standard this summer as web giants such as Google, Microsoft Bing, Yahoo, Facebook and over 1,000 more sites around the world agreed to trial IPv6. World IPv6 Day will take place on June 8th this year and all associated websites will enable the new Internet Protocol v6 (IPv6) addressing standards on their services. Confusion still exists over what IPv6 is and how it will affect people. Each time you go online you are assigned an IP address to identify your device, whether that be a PC, mobile or tablet. This allows you to connect with other websites, with websites currently using the old IPv4 standard. This standard allows only a certain number of addresses, and these are about to run out – therefore the adoption of IPv6 is a race against time. Internet users do not need to worry, as they can test their internet connection compatibility thanks to Google at http://ipv6test.google.com/. IPv4 is still, for now, being catered for, so even if broadband providers in the UK lack IPv6 support, users shouldn’t have any trouble using their connection as normal. Google admits that it could still take “years for the Internet to transition fully to IPv6”, which is in large part due to the slow pace of progress by large ISPs and hardware manufacturers (such as broadband routers). For more information, visit the ispreview website.
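The scale of the problem is easy to check: IPv4 addresses are 32 bits and IPv6 addresses are 128 bits, so a two-line calculation shows why one pool is running out while the other is, for practical purposes, inexhaustible.

# Address-space arithmetic behind the IPv4-to-IPv6 transition.
print(f"IPv4 addresses: {2**32:,}")   # 4,294,967,296 -- fewer than people on Earth today
print(f"IPv6 addresses: {2**128:,}")  # roughly 3.4 x 10^38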
<urn:uuid:1a549fd3-c8cc-4601-9e40-deb9684cb0e3>
CC-MAIN-2017-09
https://www.gradwell.com/2012/01/19/world-ipv6-launch-day-has-been-announced/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170925.44/warc/CC-MAIN-20170219104610-00494-ip-10-171-10-108.ec2.internal.warc.gz
en
0.930703
303
2.546875
3
/ October 1, 2013 The Turanor PlanetSolar, which means “power of the sun” in J.R.R. Tolkien mythology, is the world’s largest solar-powered boat. The 102-foot catamaran recently docked in France for the winter after a trip around the world in May 2012. The boat traveled more than 37,282 miles in 584 days, with scientists collecting air and water data along the way. The boat contains 29,124 photovoltaic cells, which power 8.5 tons of lithium-ion batteries.
<urn:uuid:7f5da3ad-d8a4-488e-ad4c-8e600a5938ed>
CC-MAIN-2017-09
http://www.govtech.com/photos/Photo-of-the-Week-Sunshine-Power.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171171.24/warc/CC-MAIN-20170219104611-00018-ip-10-171-10-108.ec2.internal.warc.gz
en
0.935621
119
2.953125
3
How To Use Web Search Engines Page 4 -- How Search Engines Work What follows is a basic explanation of how search engines work. For more detailed and technical information about current methods used by search engines like Google, check out our discussion of Search Engine Ranking Algorithms. Search engines use automated software programs known as spiders or bots to survey the Web and build their databases. Web documents are retrieved by these programs and analyzed. Data collected from each web page are then added to the search engine index. When you enter a query at a search engine site, your input is checked against the search engine's index of all the web pages it has analyzed. The best URLs are then returned to you as hits, ranked in order with the best results at the top. This is the most common form of text search on the Web. Most search engines do their text query and retrieval using keywords. What is a keyword, exactly? It can simply be any word on a webpage. For example, I used the word "simply" in the previous sentence, making it one of the keywords for this particular webpage in some search engine's index. However, since the word "simply" has nothing to do with the subject of this webpage (i.e., how search engines work), it is not a very useful keyword. Useful keywords and key phrases for this page would be "search," "search engines," "search engine methods," "how search engines work," "ranking," "relevancy," "search engine tutorials," etc. Those keywords would actually tell a user something about the subject and content of this page. Unless the author of the Web document specifies the keywords for her document (this is possible by using meta tags), it's up to the search engine to determine them. Essentially, this means that search engines pull out and index words that appear to be significant. Since search engines are software programs, not rational human beings, they work according to rules established by their creators for what words are usually important in a broad range of documents. The title of a page, for example, usually gives useful information about the subject of the page (if it doesn't, it should!). Words that are mentioned towards the beginning of a document (think of the "topic sentence" in a high school essay, where you lay out the subject you intend to discuss) are given more weight by most search engines. The same goes for words that are repeated several times throughout the document. Some search engines index every word on every page. Others index only part of the document. Full-text indexing systems generally pick up every word in the text except commonly occurring stop words such as "a," "an," "the," "is," "and," "or," and "www." Some of the search engines discriminate upper case from lower case; others store all words without reference to capitalization. The Problem With Keyword Searching Keyword searches have a tough time distinguishing between words that are spelled the same way, but mean something different (i.e. hard cider, a hard stone, a hard exam, and the hard drive on your computer). This often results in hits that are completely irrelevant to your query. Some search engines also have trouble with so-called stemming -- i.e., if you enter the word "big," should they return a hit on the word, "bigger?" What about singular and plural words? What about verb tenses that differ from the word you entered by only an "s," or an "ed"? Search engines also cannot return hits on keywords that mean the same, but are not actually entered in your query.
A query on heart disease would not return a document that used the word "cardiac" instead of "heart." Refining Your Search Most sites offer two different types of searches--"basic" and "refined" or "advanced." In a "basic" search, you just enter a keyword without sifting through any pulldown menus of additional options. Depending on the engine, though, "basic" searches can be quite complex. Advanced search refining options differ from one search engine to another, but some of the possibilities include the ability to search on more than one word, to give more weight to one search term than you give to another, and to exclude words that might be likely to muddy the results. You might also be able to search on proper names, on phrases, and on words that are found within a certain proximity to other search terms. Some search engines also allow you to specify what form you'd like your results to appear in, and whether you wish to restrict your search to certain fields on the internet (i.e., usenet or the Web) or to specific parts of Web documents (i.e., the title or URL). Many, but not all, search engines allow you to use so-called Boolean operators to refine your search. These are the logical terms AND, OR, NOT, and the so-called proximal locators, NEAR and FOLLOWED BY. Boolean AND means that all the terms you specify must appear in the documents, i.e., "heart" AND "attack." You might use this if you wanted to exclude common hits that would be irrelevant to your query. Boolean OR means that at least one of the terms you specify must appear in the documents, i.e., bronchitis, acute OR chronic. You might use this if you didn't want to rule out too much. Boolean NOT means that at least one of the terms you specify must not appear in the documents. You might use this if you anticipated results that would be totally off-base, i.e., nirvana AND Buddhism, NOT Cobain. Not quite Boolean + and - Some search engines use the characters + and - instead of Boolean operators to include and exclude terms. NEAR means that the terms you enter should be within a certain number of words of each other. FOLLOWED BY means that one term must directly follow the other. ADJ, for adjacent, serves the same function. A search engine that will allow you to search on phrases uses, essentially, the same method (i.e., determining adjacency of keywords). Phrases: The ability to query on phrases is very important in a search engine. Those that allow it usually require that you enclose the phrase in quotation marks, i.e., "space the final frontier." Capitalization: This is essential for searching on proper names of people, companies or products. Unfortunately, many words in English are used both as proper and common nouns--Bill, bill, Gates, gates, Oracle, oracle, Lotus, lotus, Digital, digital--the list is endless. All the search engines have different methods of refining queries. The best way to learn them is to read the help files on the search engine sites and practice! Most of the search engines return results with confidence or relevancy rankings. In other words, they list the hits according to how closely they think the results match the query. However, these lists often leave users shaking their heads in confusion, since, to the user, the results may seem completely irrelevant. Why does this happen? Basically it's because search engine technology has not yet reached the point where humans and computers understand each other well enough to communicate clearly.
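Before looking at how relevancy is scored, here is a toy Python sketch of the keyword indexing and frequency-based ranking described in this article. The stop-word list, documents, and scoring are deliberately simplistic stand-ins, not any real engine's method, and the output illustrates the "hard cider versus hard drive" and "heart versus cardiac" problems noted above.

# Toy inverted index with naive term-frequency ranking.
import re
from collections import Counter, defaultdict

STOP_WORDS = {"a", "an", "the", "is", "and", "or", "www"}

docs = {
    "doc1": "Angioplasty and bypass surgery for cardiac patients.",
    "doc2": "The rock was hard. A hard stone is hard to break.",
    "doc3": "Hard drives: how a hard disk stores data.",
}

index = defaultdict(Counter)          # keyword -> {document: occurrences}
for name, text in docs.items():
    for word in re.findall(r"[a-z]+", text.lower()):
        if word not in STOP_WORDS:
            index[word][name] += 1

def search(query):
    scores = Counter()
    for word in re.findall(r"[a-z]+", query.lower()):
        for doc, freq in index[word].items():
            scores[doc] += freq       # naive relevancy: raw term frequency
    return scores.most_common()

print(search("hard"))    # the stone page outranks the disk page
print(search("heart"))   # nothing, even though doc1 is about the heart ("cardiac")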
Most search engines use search term frequency as a primary way of determining whether a document is relevant. If you're researching diabetes and the word "diabetes" appears multiple times in a Web document, it's reasonable to assume that the document will contain useful information. Therefore, a document that repeats the word "diabetes" over and over is likely to turn up near the top of your list. If your keyword is a common one, or if it has multiple other meanings, you could end up with a lot of irrelevant hits. And if your keyword is a subject about which you desire information, you don't need to see it repeated over and over--it's the information about that word that you're interested in, not the word itself. Some search engines consider both the frequency and the positioning of keywords to determine relevancy, reasoning that if the keywords appear early in the document, or in the headers, this increases the likelihood that the document is on target. For example, one method is to rank hits according to how many times your keywords appear and in which fields they appear (i.e., in headers, titles or plain text). Another method is to determine which documents are most frequently linked to other documents on the Web. The reasoning here is that if other folks consider certain pages important, you should, too. If you use the advanced query form on AltaVista, you can assign relevance weights to your query terms before conducting a search. Although this takes some practice, it essentially allows you to have a stronger say in what results you will get back. As far as the user is concerned, relevancy ranking is critical, and becomes more so as the sheer volume of information on the Web grows. Most of us don't have the time to sift through scores of hits to determine which hyperlinks we should actually explore. The more clearly relevant the results are, the more we're likely to value the search engine. Some search engines are now indexing Web documents by the meta tags in the documents' HTML (at the beginning of the document in the so-called "head" tag). What this means is that the Web page author can have some influence over which keywords are used to index the document, and even in the description of the document that appears when it comes up as a search engine hit. This is obviously very important if you are trying to draw people to your website based on how your site ranks in search engine hit lists. There is no perfect way to ensure that you'll receive a high ranking. Even if you do get a great ranking, there's no assurance that you'll keep it for long. For example, at one period a page from the Spider's Apprentice was the number-one-ranked result on Altavista for the phrase "how search engines work." A few months later, however, it had dropped lower in the listings. There is a lot of conflicting information out there on meta-tagging. If you're confused it may be because different search engines look at meta tags in different ways. Some rely heavily on meta tags, others don't use them at all. The general opinion seems to be that meta tags are less useful than they were a few years ago, largely because of the high rate of spamdexing (web authors using false and misleading keywords in the meta tags). Note: Google, currently the most popular search engine, does not index the keyword metatags. Be aware of this if you are optimizing your webpages for the Google engine.
It seems to be generally agreed that the "title" and the "description" meta tags are important to write effectively, since several major search engines use them in their indices. Use relevant keywords in your title, and vary the titles on the different pages that make up your website, in order to target as many keywords as possible. As for the "description" meta tag, some search engines will use it as their short summary of your url, so make sure your description is one that will entice surfers to your site. Note: The "description" meta tag is generally held to be the most valuable, and the most likely to be indexed, so pay special attention to this one. In the keyword tag, list a few synonyms for keywords, or foreign translations of keywords (if you anticipate traffic from foreign surfers). Make sure the keywords refer to, or are directly related to, the subject or material on the page. Do NOT use false or misleading keywords in an attempt to gain a higher ranking for your pages. The "keyword" meta tag has been abused by some webmasters. For example, a recent ploy has been to put such words as "sex" or "mp3" into keyword meta tags, in hopes of luring searchers to one's website by using popular keywords. The search engines are aware of such deceptive tactics, and have devised various methods to circumvent them, so be careful. Use keywords that are appropriate to your subject, and make sure they appear in the top paragraphs of actual text on your webpage. Many search engine algorithms score the words that appear towards the top of your document more highly than the words that appear towards the bottom. Words that appear in HTML header tags (H1, H2, H3, etc) are also given more weight by some search engines. It sometimes helps to give your page a file name that makes use of one of your prime keywords, and to include keywords in the "alt" image tags. One thing you should not do is use some other company's trademarks in your meta tags. Some website owners have been sued for trademark violations because they've used other company names in the meta tags. I have, in fact, testified as an expert witness in such cases. You do not want the expense of being sued! Remember that all the major search engines have slightly different policies. If you're designing a website and meta-tagging your documents, we recommend that you take the time to check out what the major search engines say in their help files about how they each use meta tags. You might want to optimize your meta tags for the search engines you believe are sending the most traffic to your site. Concept-based searching (The following information is out-dated, but might have historical interest for researchers) Excite used to be the best-known general-purpose search engine site on the Web that relied on concept-based searching. It is now effectively extinct. Unlike keyword search systems, concept-based search systems try to determine what you mean, not just what you say. In the best circumstances, a concept-based search returns hits on documents that are "about" the subject/theme you're exploring, even if the words in the document don't precisely match the words you enter into the query. How did this method work? There are various methods of building clustering systems, some of which are highly complex, relying on sophisticated linguistic and artificial intelligence theory that we won't even attempt to go into here. Excite used a numerical approach.
Excite's software determines meaning by calculating the frequency with which certain important words appear. When several words or phrases that are tagged to signal a particular concept appear close to each other in a text, the search engine concludes, by statistical analysis, that the piece is "about" a certain subject. For example, the word heart, when used in the medical/health context, would be likely to appear with such words as coronary, artery, lung, stroke, cholesterol, pump, blood, attack, and arteriosclerosis. If the word heart appears in a document with other words such as flowers, candy, love, passion, and valentine, a very different context is established, and a concept-oriented search engine returns hits on the subject of romance. This ends the outdated "concept-based" information section. What does it all mean? You now know more than you probably ever wanted to know about indexing, query refining and relevancy ranking. How do we put it all together to make Web searching easier and more efficient than it currently is? Let's try some practical applications.
<urn:uuid:7789a31d-ed8b-44c7-9a82-f63ab0348e38>
CC-MAIN-2017-09
http://www.monash.com/spidap4.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.92/warc/CC-MAIN-20170219104611-00542-ip-10-171-10-108.ec2.internal.warc.gz
en
0.93938
3,202
3.390625
3
How big is the Web? One way to look at it is this: There are roughly 4,348 pages out there for each of the 6.9 billion people on the planet, or, to put it another way, about 30 trillion pages. And there will be a lot more tomorrow. Given the immensity of the Web, it seems nothing short of magic that Google's search, imperfect as it is, still indexes it as well as it does. The Google search blog is often very interesting, and a recent post gives us some insight into how the engine works and how it identifies and kills spam. As you may know, Google sends out "robots," little programs that crawl across the Web, following links from page to page while sorting them by content and other factors, and adding information to an index. That index is immense, taking up over 100 million gigabytes. Even so, not every page on the Web is indexed. When the robots reach a site, they look for a file called robots.txt at its root; if it's there and contains instructions placed by an authorized Web master, the Google robot will stay out of the pages it lists, and those pages won't be indexed. When you type something in a search box, formulas called algorithms evaluate your query and pull relevant pages from the index. Exactly how those pages are ranked is a closely guarded secret, but Google does say that it uses over 200 factors to do so. Results are typically served up in one-eighth of a second. Humans, of course, do not enter the picture when results are served, but Google uses a corps of trained people to evaluate the accuracy of searches by testing. In a typical year, the company says, it will run over 40,000 evaluations. Most spam removal is automatic, but some questionable pages are examined by hand. Google looks for quite a few factors that indicate spam. Hidden text and "keyword stuffing" are clues that a page is bogus, as is user-generated spam that appears on forum or guestbook pages or user profiles. Last year, Google launched an update to its anti-spam algorithm called Penguin which decreases the rankings of sites that are using what it calls Webspam tactics. When Google is going to take action against a site, it attempts to find and notify the owners and gives them a chance to fix the problem. The number of those requests varies quite a bit, but in one particularly busy month last year, more than 650,000 notices to Web sites were sent out. As important as search results are to users, they can be life and death to a commercial Web site. They have a huge impact on how much traffic a site gets, and that, in turn, affects ad revenue. Anyone who runs a commercial site (including this one) spends a good deal of time trying to figure out ways to rank high in searches, or in the case of news sites (like this one) how to be included in results on Google News.
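For reference, a robots.txt file is just a short text file placed at a site's root. A minimal example looks like the following, which asks all well-behaved crawlers to stay out of one directory while leaving the rest of the site crawlable (the domain and path are placeholders).

# Example robots.txt served from example.com/robots.txt
User-agent: *
Disallow: /private/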
<urn:uuid:002358fa-0120-4a18-8099-ac1a8e4b287d>
CC-MAIN-2017-09
http://www.cio.com/article/2370648/internet/inside-info-on-how-google-searches-the-web.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172447.23/warc/CC-MAIN-20170219104612-00242-ip-10-171-10-108.ec2.internal.warc.gz
en
0.960046
600
3.0625
3
In preparation for our CCNA exam, we want to make sure we cover the various concepts that we could see on our Cisco CCNA exam. So to assist you, below we will discuss UDP. UDP: User Datagram Protocol UDP is a connectionless transport layer (layer 4) protocol in the OSI model, which provides a simple and unreliable message service for transaction-oriented services. UDP is basically an interface between IP and upper-layer processes. UDP protocol ports distinguish multiple applications running on a single device from one another. Since many network applications may be running on the same machine, computers need something to make sure the correct software application on the destination computer gets the data packets from the source machine, and some way to make sure replies get routed to the correct application on the source computer. This is accomplished through the use of the UDP "port numbers". For example, if a station wished to use the Domain Name System (DNS) service on the station 22.214.171.124, it would address the packet to that station and insert destination port number 53 in the UDP header. The source port number identifies the application on the local station that requested the domain name service, and all response packets generated by the destination station should be addressed to that port number on the source station. Details of UDP port numbers can be found in the TCP/UDP Port Number document and in the reference. Unlike TCP, UDP adds no reliability, flow-control, or error-recovery functions to IP. Because of UDP's simplicity, UDP headers contain fewer bytes and consume less network overhead than TCP. UDP is useful in situations where the reliability mechanisms of TCP are not necessary, such as in cases where a higher-layer protocol might provide error and flow control, or where real-time data transport is required. UDP is the transport protocol for several well-known application-layer protocols, including Network File System (NFS), Simple Network Management Protocol (SNMP), Domain Name System (DNS), and Trivial File Transfer Protocol (TFTP). Protocol Structure – UDP User Datagram Protocol Header - Source port – Source port is an optional field. When used, it indicates the port of the sending process and may be assumed to be the port to which a reply should be addressed in the absence of any other information. If not used, a value of zero is inserted. - Destination port – Destination port has a meaning within the context of a particular Internet destination address. - Length – It is the length in octets of this user datagram, including this header and the data. The minimum value of the length is eight. - Checksum – The sum of a pseudo header of information from the IP header, the UDP header and the data, padded with zero octets at the end, if necessary, to make a multiple of two octets. - Data – Contains upper-level data information. I hope you found this article to be of use and it helps you prepare for your Cisco CCNA certification. Achieving your CCNA certification is much more than just memorizing Cisco exam material. It is having the real-world knowledge to configure your Cisco equipment and be able to methodically troubleshoot Cisco issues. So I encourage you to continue in your studies for your CCNA exam certification.
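As a rough illustration of the four header fields described above, here is a short Python sketch. The ports, address and payload are placeholders; note that RFC 768 allows a transmitted checksum of zero over IPv4, meaning "no checksum computed."

import socket
import struct

# Build a raw UDP header: source port, destination port, length, checksum.
# Each field is 16 bits, packed in network (big-endian) byte order.
payload = b"placeholder datagram payload"
src_port, dst_port = 50000, 53
length = 8 + len(payload)              # 8-byte header plus data
checksum = 0                           # 0 = "no checksum" for UDP over IPv4
header = struct.pack("!HHHH", src_port, dst_port, length, checksum)
print(header.hex())

# In everyday code the operating system builds the header for you;
# a simple UDP send looks like this (192.0.2.10 is a documentation address):
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(payload, ("192.0.2.10", dst_port))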
<urn:uuid:dd8d2d93-a7a4-4078-9c68-cd9daf6e3cbb>
CC-MAIN-2017-09
https://www.certificationkits.com/udp/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174154.34/warc/CC-MAIN-20170219104614-00418-ip-10-171-10-108.ec2.internal.warc.gz
en
0.864312
690
4.09375
4
About Policy-Based Routing Policy-Based Routing (PBR) gives you a very simple way of controlling where packets will be forwarded before they enter the destination-based routing process of the router. It's a technology that gives you more control over network traffic flow, because you will not always want to send certain packets by the obvious shortest path – that is the job of the routing protocol. If you want to send some traffic to a destination using some other path, you need a method that catches the packets as soon as they enter the router and decides where to send them before they enter the destination-based routing process. That's what Policy-Based Routing is all about.
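As an illustrative sketch only (the access-list number, subnet, next-hop address and interface name below are invented for the example, not taken from the article), a basic Cisco IOS PBR configuration ties a route-map to the interface where the traffic enters:

! Match traffic from one subnet and force it toward a specific next hop
access-list 101 permit ip 10.1.1.0 0.0.0.255 any

route-map SEND-VIA-BACKUP permit 10
 match ip address 101
 set ip next-hop 192.168.50.2

interface GigabitEthernet0/1
 ip policy route-map SEND-VIA-BACKUP

Packets arriving on GigabitEthernet0/1 that match access-list 101 are handed to the route-map and sent to 192.168.50.2; everything else falls through to the normal destination-based routing table.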
<urn:uuid:a638ef04-5f63-4bb5-8a2a-95f3a7df1bd1>
CC-MAIN-2017-09
https://howdoesinternetwork.com/tag/route-map
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171632.91/warc/CC-MAIN-20170219104611-00238-ip-10-171-10-108.ec2.internal.warc.gz
en
0.926763
141
3.046875
3
21st century skills have been a hot topic in the world of education, and there is an overwhelming amount of 21st century skill information on the web. However, it’s not easy for every education professional to absorb what it means to them and their district. - The world is more connected, flatter, and moving faster. Technology evolution, a maturing world economy, dynamic teaming and collaboration. Windows of opportunity are getting smaller as news flows faster. Reaction time is a critical differentiator. - Information is growing rapidly – and all can contribute. Information is exploding – but some is accurate, some is not, some are opinions, some are lies, some are personal expressions. Information in the new world is not static – it is interactive and dynamic. So based on these changes, what are the new and growing skills required in the 21st century? For the benefit of my own school district – and anyone trying to get their arms around the fundamentals – I’ve narrowed the list to seven key skills: - Information Literacy: Navigating, interpreting and effectively using the explosion of information available to us is critical in the 21st century. - Media Literacy: IM streams, blogs, streaming video, web conferences – information is being channeled through ever-changing media. The ability to navigate and interpret those media in context, as well as the ability to use those media effectively to communicate are critical skills. - Information Technology Literacy: The tools that we use to create or access media that contain information are constantly evolving. Understanding exactly which tools to use, and when, in a constantly evolving tools environment is a critical skill. - Global Literacy: The world is more connected, and insularity is not an option. Awareness, social and cross-cultural skills are valuable. - Flexibility & Adaptability: The world has always been changing, but change happens – and is communicated – faster. Agility is critical in the 21st century. - High-Level Knowledge Skills: In a flat world, lower-level skills are a commodity. Critical thinking, problem-solving, creativity and innovation are valuable. - Communication & Collaboration: A connected world requires better communication skills, and the ability to dynamically team to accomplish tasks. Want to dive deeper? I’d recommend the Partnership for 21st Century Skills. And my colleague Daryl Plummer’s post on 20th century thinking. And, of course, my own thoughts on the impact of the web, social software and cloud computing on education. Good luck, and I’d love comments!
<urn:uuid:57febea4-2146-4609-9926-eb5205679a96>
CC-MAIN-2017-09
http://blogs.gartner.com/thomas_bittman/2009/01/30/21st-century-skills-for-dummies/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171933.81/warc/CC-MAIN-20170219104611-00414-ip-10-171-10-108.ec2.internal.warc.gz
en
0.913405
653
3.078125
3
LONG BEACH, CALIF. -- There's a major supply chain issue in space. It costs $10,000 per kilogram to blast something into orbit; a liter of water in space costs $10,000. In fact, only 2% of what NASA sends into space on its rockets is actually used by astronauts for experiments. The rest of the weight is made up by the space vehicle itself and its fuel. So when it comes to experimentation on the International Space Station, transporting even small items is costly. This year, however, NASA and partner Made In Space Inc. are planning to solve part of that problem by installing a 3D printing station on the International Space Station (ISS). The 3D printing station will be able to receive .stl (stereolithography) files with CAD designs transmitted from Earth to print a variety of tools astronauts may need, according to Jason Dunn, CTO of Made In Space. The new printing station will be made available to astronauts regardless of their nation status. Dunn, who spoke at the RAPID Conference here, said that the company's first fused filament fabrication (FFF) 3D printer was sent up last year via a SpaceX Dragon cargo craft. The small desktop 3D printer was used to print 20 objects preloaded on an SD card; a 21st object was printed based on an .stl file transmitted to the printer from Made In Space. The object, a wrench head made from ABS thermoplastic, took about two hours to print, and when it was finished, American astronaut Barry Wilmore removed it from the machine and commented that he wished he had "his ratchet." Seven days later, after Made In Space designed the printable ratchet and had NASA run it through their qualifications, the file was transmitted to the surprised astronaut, who then put the new ratchet and head to work. "We're using the space station as a test bed to prove out this technology," Dunn said. "The space station today has limited volume; it's filled with redundancy, it's filled with spare parts." It's also filled with trash, trash that Dunn said may be recyclable into polymer filament for printing new tools and test equipment. NASA is working with Made In Space now to create a polymer recycling center on the ISS for that very purpose. Dunn believes 3D printing (also known as additive manufacturing) will be a pathway to humans being able to populate other planets, printing supplies from locally harvested materials or recycling material that's no longer needed. Additive manufacturing will also enable more fragile objects to be created in space than could ever be rocketed up because, unlike space equipment today, they won't need to withstand the two Gs of pressure that a blastoff places on them. In fact, printed space objects may someday be so fragile that they'd fall to pieces under Earth's gravity. Now, if a specific tool is needed on the fly, astronauts must sometimes spend time creating makeshift items to fit the bill. Considering an astronaut's time costs about $40,000 an hour, using that time to build tools out of parts is more than a little wasteful. "Imagine if you could remove the strain of all that added mass [in a spacecraft]... and time. You just build what you need when you need it," Dunn said. This story, "How astronauts 3D printed a wrench they needed in space" was originally published by Computerworld.
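For readers unfamiliar with the .stl format mentioned above, the file simply describes an object's surface as a list of triangles. The fragment below is a generic, hand-written ASCII STL example, not one of the files actually sent to the station:

solid example_part
  facet normal 0.0 0.0 1.0
    outer loop
      vertex 0.0 0.0 0.0
      vertex 10.0 0.0 0.0
      vertex 0.0 10.0 0.0
    endloop
  endfacet
endsolid example_part

A real part contains thousands of such facets, which is why the files are small enough to transmit to orbit and why any slicer on the printer side can turn them into toolpaths.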
<urn:uuid:ee3b4c4e-388e-440d-837a-9dfbb448d066>
CC-MAIN-2017-09
http://www.itnews.com/article/2923841/3d-printing/how-astronauts-3d-printed-a-wrench-they-needed-in-space.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170521.30/warc/CC-MAIN-20170219104610-00534-ip-10-171-10-108.ec2.internal.warc.gz
en
0.960531
707
3.390625
3
A couple of weeks ago I wrote about some of things that I found most frustrating about being a programmer, back when I was a programmer. One of those things was the size of indentation in code. Somewhere along the line, I was trained (or just started on my own; can’t recall) to set the tab key in my editor to four spaces. That’s what I stuck with for years and got used to seeing. When I read someone else’s code that used two or eight character tabs, I found it annoying. It wasn’t a dogmatic thing for me; I didn’t care that much. Four spaces was just my personal preference. In the programmer community, though, discussions of coding style such as the size of code indentation can quickly turn into a holy war. Some people tend to have very strong opinions on it. It flared up a little bit in the comments some people made on the article; but there are plenty of lengthier (and more heated) discussions on it elsewhere on the web. Spacing and indentation in code are important, of course, to help organize things and to make it readable. Code is often read by different people, and the preferences (or training) can differ from programmer to programmer. Also, depending on what type of indentation is used, the same code opened on different operating systems can look different (and less legible). It’s not as trivial an issue as it may sound to the non-programmer, and it’s also an argument that will likely never end. Among the choices are things like: hard tabs (that is, the tab key inserting the actual ASCII tab character) versus soft tabs (replacing the tab character with multiple spaces), how many columns wide shoud the indentation be, how to deal with indentation or spacing within a line (some prefer hard tabs at the start of lines, but soft tabs in the middle of lines), etc. There are pros and cons to each choice; hard tabs use less space in the file, but soft tabs will lead to consistent spacing across operating systems. Two spaces were better than four or eight when trying fit things on 80 character lines. On and on (and on) it can go.
<urn:uuid:7cef4e68-4c09-43b4-a5f3-6050e018513c>
CC-MAIN-2017-09
http://www.itworld.com/article/2713010/it-management/religion--politics-and-coding-indentation-style--the-three-great-debates.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171251.27/warc/CC-MAIN-20170219104611-00410-ip-10-171-10-108.ec2.internal.warc.gz
en
0.952479
465
2.5625
3
Members of the BYU Supercomputing team recently posted a tutorial for getting started with SLURM, the scalable resource manager that has been designed for Linux clusters. SLURM is currently the resource manager of choice for NUDT’s Tianhe-1A, the Anton Machine built by D.E. Shaw Research, and other clusters, including the Cray “Rosa” system at the Swiss National Supercomputer Centre and Tera100 at CEA. In essence, SLURM’s functions as an allocation mechanism to divvy up resources on both an exclusive and non-exclusive basis, as well as a framework for starting, executing and monitoring jobs on a set of designated nodes. It also manages scheduling conflicts by handling the queue of jobs. As Dona Crawford from Lawrence Livermore noted about their use of SLURM for their BlueGene/L and Purple systems, using SLURM reduced “large job launch times from tens of minutes to seconds.” She went on to note that “This effectively provides us with millions of dollars with of additional compute resources without additional cost. It also allows our computational scientists to use their time more effectively. SLURM is scalable to very large numbers of processors, another essential ingredient for use at LLNL. This means larger computer systems can be used than otherwise possible with a commensurate increase in the scale of problems that can be solved. SLURM’s scalability has eliminated resource management from being a concern for computers of any foreseeable size. It is one of the best things to happen to massively parallel computing.” One of the advantages that SLURM users point out is that it’s relatively simple to get started and there are a wide array of modular elements that help to extend the core functionality. For those who want a bare-bones setup (as the one described in the accompanying video), it takes well under an hour to get it up and running.
<urn:uuid:6738cb24-ff82-4bd5-8bdd-86add501abbb>
CC-MAIN-2017-09
https://www.hpcwire.com/2013/09/04/up_and_running_with_slurm/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171936.32/warc/CC-MAIN-20170219104611-00110-ip-10-171-10-108.ec2.internal.warc.gz
en
0.952574
403
2.828125
3
Another concern of the power grid comes from the lack of security. Just recently it was revealed that the energy grid has become vulnerable to attack. Coming from the recent declassification of a 2007 report from the National Academy of Sciences, the lack of a physical security has experts worried. Although the United States Federal Energy Regulatory Commission (FERC) has been assigned with creating a new security strategy through its nearly created office of Energy Infrastructure Security, the threat still exists. If attacked, experts warn that the power grid could suffer more damage than it did during Superstorm Sandy with the possibility of massive blackouts lasting weeks or even months at a time. While this knowledge has likely cause operators additional stress, one way to help alleviate the burden is to look at renewable resources— such as hydro, geothermal, solar and wind—for power. By using renewable resources, operators can take extra precaution to protect their campuses from future security risks regarding power sources. If hackers attack the power grid, operators will have a peace of mind knowing they can continue operations thanks to the innovative power supply. New Power Options To help alleviate the strain on the power grid, data center operators are finding new ways to gather power. As the risks increase, no longer can they rely solely on the power of their host country and supplemental power is becoming vital. With demand for energy at an all-time high it’s crucial to ensure the power stays on even as the grid stretches to capacity. As a result of the pressure from the weakening grid, data centers have begun to utilize renewable resources harvested from their surroundings. According to the National Renewable Energy Laboratory, data centers across the country can utilize renewable energy technologies, but some technology solutions are better suited for select geographical locations. Although the United States offers suitable locations, some companies have started venturing outside their home countries for stronger solutions. Large enterprises, such as BMW, Facebook and Google have begun to move data center operations abroad to Iceland, Sweden and Finland, respectively. Attracted by the cool climates and relatively low pricing, these artic campuses are allowing operators to harvest renewable resources from their host countries for both power and cooling. With that, site selection plays the ultimate role in determining whether alternative technology can be accessed. As an added benefit, by gathering energy from the host country via renewables, data centers can control pricing and lower customer’s carbon footprint. Facebook’s facility in Sweden will require 70 percent less power than traditional data centers, while BMW’s move to Icelandic facility will save it around 3,600 metric tons of carbon emissions per year. Furthermore, the campuses will no longer be restricted to only utilizing their host countries power grid. Instead, their ability to gather power with renewable resources will lessen the unease and anxiety suffered by data center operators. Without being bound solely to the host countries power, data centers can remain online even if disaster strikes. While no one expects the power grid to fail completely, high-power users can and should expect to make lasting changes to how they collect their power. By utilizing alternative technology, data center operators can rest easy knowing their systems will remain online at all times, even during storms as severe as Sandy. 
Though the aging infrastructure and lack of security will continue to plague the grid, operators can begin to change their responses by taking action and thinking outside the box. Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library. Pages: 1 2
<urn:uuid:f36704d3-0feb-4b9c-96f7-11e9c5ebffbc>
CC-MAIN-2017-09
http://www.datacenterknowledge.com/archives/2013/01/16/us-power-grid-has-issues-with-reliability/2/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171281.53/warc/CC-MAIN-20170219104611-00107-ip-10-171-10-108.ec2.internal.warc.gz
en
0.942782
722
3.03125
3
The presence of a firewall, whether it’s hardware or software, provides a significant security boost for any computer user. This is especially true if you manage a business. Companies need to ensure that their systems are sufficiently protected in order to prevent the infiltration of viruses, bugs, or hacking. Firewalls are an indispensable security measure that dramatically reduces risks to your business network. As the internet has grown and progressed over the years, websites have become more interactive and functional. Many company websites are used to collect customer information, collect online orders, complete transactions, host chat conversations, and much more. With so much information being stored electronically, it is critical for businesses to do everything in their power to protect it. Losing company or customer data can have a significant impact on a company, such as hindered productivity or a tarnished reputation. In some cases, it can even result in the closure of a business. Additionally, if your company utilizes cloud computing for storing important information, it is important to minimize risk by employing a firewall. Although many cloud computing hosts offer higher levels of security, it is important for businesses to do what is in their power to minimize risk. Cloud computing is a great service for businesses because of its ease of use, but storing data electronically can pose a risk if it is not properly protected by users on both ends; the cloud host and a company using their services. In some cases, hackers are infiltrating business networks in order to piggyback on their internet service. This can result in hefty overage fees, costing the company a considerable amount of money. Additionally, if a hacker is stealing a significant amount of bandwidth, employees may experience a slow internet connection. Again, this can impact productivity, which then results in a loss of revenue. A firewall can keep track of suspicious logins, filter our SPAM emails, and block suspicious applications. It can also monitor network activity and create a log, which can assist in identifying where and when a breach may have occurred. This feature can help your company contain and fix the problem faster, instead of trying to solve the issue after the damage has already taken place. Many hardware and software companies are now providing firewalls that are specifically designed for businesses. As more businesses and customers continue to migrate online, there is an increased necessity for reliable security measures. If the proper defenses are not put in place, a business may be at risk of losing everything. To learn more about obtaining a secure connection, click here. Blog Posted by Vanessa Hartung
<urn:uuid:f1894bb1-0afc-4696-a3ec-263812e14057>
CC-MAIN-2017-09
http://blog.terago.ca/2012/12/13/why-your-company-needs-a-firewall/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169776.21/warc/CC-MAIN-20170219104609-00403-ip-10-171-10-108.ec2.internal.warc.gz
en
0.948347
512
2.5625
3
Millennials have a healthy skepticism toward information they view on Twitter, according ot a new Michigan State University report. The National Science Foundation research, aimed at examining social media and false memory, showed 74 undergrads images on a computer showing a man stealing a car. The images were followed by false information shown in a Twitter-like stream or in a more traditional news approach, and sure enough, the study showed that students were much less likely to form false memories based on the Twitter-like info. "Our findings suggest young people are somewhat wary of information that comes from Twitter," said Kimberly Fenn, assistant professor of psychology at Michigan State. "We propose young adults are taking into account the medium of the message when integrating information into memory," she added. Some Twitter followers, like Peoria, Ill., Mayor Jim Ardis, might be wise to more carefully weigh what they come across on Twitter, too (he's in hot water for overreacting in a big way to a spoof Twitter account recently). Twitter boasts some 230 million users, with the service being most popular among teens and people in their 20s.
<urn:uuid:bda23aff-a76c-4781-ae40-26d1c3c5c38e>
CC-MAIN-2017-09
http://www.networkworld.com/article/2226914/data-center/study--don-t-believe-everything-you-read-on-twitter.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169776.21/warc/CC-MAIN-20170219104609-00403-ip-10-171-10-108.ec2.internal.warc.gz
en
0.954749
230
2.9375
3
Twitter. Netflix. Spotify. If you are one of the hundreds of million users on these sites, you may have noticed a disruption in their services last Friday, October 21st. They, among many other high-traffic sites, were the targets of a massive DDoS cyber attack. How were these sites attacked? Through a company called Dyn, an Internet performance company that helps to route internet traffic. With Dyn as their conduit, the hackers caused a significant Internet outage all throughout the United States. It is becoming increasingly clear that these types of DDoS attacks are on the rise and that unsecured Internet of Things (IoT) infrastructure is the hacker’s weapon of choice that deserves immediate attention from anyone who can take action. A DDoS attack involves overwhelming a web server with so much illegitimate traffic that the server under attack is crippled and unable to respond to legitimate requests. The malware used in the Dyn attack was able to exploit unsecured IoT devices such as DVRs, printers, surveillance cameras, and routers. Once a device is infected, the malware is able to coordinate multiple other devices and use their processing power to form a “botnet” – a network of infected computers that execute the DDoS attack. According to the Dyn security team, they observed tens of millions of individual devices associated with the botnet that carried out the attack, There is no easy way to determine if a particular device on your network is infected with malware or if it has been recruited for malicious botnet activity. There are, however, a number of things you can do right away to help secure your devices and help to prevent future attacks designed to cripple the Internet.
<urn:uuid:48c5129d-ba02-4d62-bfdc-f43ce4f77888>
CC-MAIN-2017-09
https://www.convergint.com/how-to-protect-your-iot-devices-from-malware/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169776.21/warc/CC-MAIN-20170219104609-00403-ip-10-171-10-108.ec2.internal.warc.gz
en
0.960068
342
2.515625
3
Yan W.-D.,Wuhan University | Yan W.-D.,Engineering and Environmental Geology Survey Institute of Qinghai Province | Wang Y.-X.,Wuhan University | Gao X.-Z.,Engineering and Environmental Geology Survey Institute of Qinghai Province | And 4 more authors. Northwestern Geology | Year: 2013 Gonghe Basin of Qinghai Province, which locates in northern Tibetan Plateau, is a faulted basin formed since mesozoic era. Around the basin is controlled by large active faults; inside the basin is the accumulation of large thickness of Quaternary and Neogene strata, the exposed thickness is up to 900-1440m, and the substrate is formed by Indosinian granite. Exploration results display that the value of heat flow is high inside the basin. Besides, the basement granite geothermal gradient is greater than 5°C , thermal anomaly is obvious. According to the broad band seismic observation data on the Qinghai-Tibet Plateau, there is a 150km lowvelocity zone on the east Kunlun orogen where Gonghe Basin lies in. It is associated with the mantle plume in deep mantle of the Bayan Har orogen, which is characterized by huge lowvelocity abnormity. The low-velocity zone extends to the earth crust and forms an abnormal area of heat flux at different parts with 1-40km below the earth's surface in Gonghe Basin and its surroundings, accounting for the forming of geothermal energy based on abundant hot-dry rocks and geothermal water in shallow part of the basin. It is significant not only for the city's heating system but also for electricity generation. Source
<urn:uuid:5faa6eac-1110-474f-aaed-e9a188ba0fc3>
CC-MAIN-2017-09
https://www.linknovate.com/affiliation/engineering-and-environmental-geology-survey-institute-of-qinghai-province-2591527/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169776.21/warc/CC-MAIN-20170219104609-00403-ip-10-171-10-108.ec2.internal.warc.gz
en
0.913382
351
2.65625
3
Ramirez-Villegas J.,International Center for Tropical Agriculture | Ramirez-Villegas J.,CGIAR Research Program on Climate Change | Ramirez-Villegas J.,University of Leeds | Boote K.J.,Agronomy Dep. | And 2 more authors. Agricultural and Forest Meteorology | Year: 2016 Common bean production in Goiás, Brazil is concentrated in the same geographic area, but spread across three distinct growing seasons, namely, wet, dry and winter. In the wet and dry seasons, common beans are grown under rainfed conditions, whereas the winter sowing is fully irrigated. The conventional breeding program performs all varietal selection stages solely in the winter season, with rainfed environments being incorporated in the breeding scheme only through the multi environment trials (METs) where basically only yield is recorded. As yield is the result of many interacting processes, it is challenging to determine the events (abiotic or biotic) associated with yield reduction in the rainfed environments (wet and dry seasons). To improve our understanding of rainfed dry bean production so as to produce information that can assist breeders in their efforts to develop stress-tolerant, high-yielding germplasm, we characterized environments by integrating weather, soil, crop and management factors using crop simulation models. Crop simulations based on two commonly grown cultivars (Pérola and BRS Radiante) and statistical analyses of simulated yield suggest that both rainfed seasons, wet and dry, can be divided in two groups of environments: highly favorable environment and favorable environment. For the wet and dry seasons, the highly favorable environment represents 44% and 58% of production area, respectively. Across all rainfed environment groups, terminal and/or reproductive drought stress occurs in roughly one fourth of the seasons (23.9% for Pérola and 24.7% for Radiante), with drought being most limiting in the favorable environment group in the dry TPE. Based on our results, we argue that even though drought-tailoring might not be warranted, the common bean breeding program should adapt their selection practices to the range of stresses occurring in the rainfed TPEs to select genotypes more suitable for these environments. © 2016 Elsevier B.V. Source Aina O.,Agronomy Dep. | Quesenberry K.,Agronomy Dep. | Gallo M.,Agronomy Dep. Crop Science | Year: 2012 Arachis paraguariensis Chodat & Hassl. is a potential source of novel genes for the genetic improvement of cultivated peanut (Arachis hypogaea L.) because some of its accessions show high levels of resistance to early leaf spot caused by Cercospora arachidicola Hori. In this study, induction of high frequency shoot regeneration from quartered-seed explants was accomplished for six plant introductions of A. paraguariensis under continuous light on Murashige and Skoog (MS) medium containing 4.4 mg L -1 thidiazuron (TDZ) in combination with 2.2 mg L -1 6-γ-γ-(dimethylallylamino)-purine (2ip). Recovery of a moderately high number of plantlets per quarter seed cultured was also achieved on medium containing 4.4 mg L -1 thidiazuron in combination with 1.1 to 4.4 mg L -1 6-benzylaminopurine (BAP) with bud formation occurring as early as 1 wk after culture initiation. There were no differences in seed production or in early leaf spot incidence between plants of two genotypes of A. paraguariensis derived from seeds vs. in vitro tissue culture derived plants; however, cultivated peanut cv. Florunner had a higher incidence of early leaf spot. 
© Crop Science Society of America. Source
<urn:uuid:a187084b-d098-49f1-a442-5287ca8bb0fe>
CC-MAIN-2017-09
https://www.linknovate.com/affiliation/agronomy-dep-2028859/all/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170864.16/warc/CC-MAIN-20170219104610-00103-ip-10-171-10-108.ec2.internal.warc.gz
en
0.919263
810
2.890625
3
Get a glimpse inside Paul Cooke's e-book "The definitive guide to Windows 2000 security" with this series of book excerpts, courtesy of Realtimepublishers.com. This excerpt is from Chapter 5, "Configuring access control." Click for the book excerpt series or get the full e-book. By submitting your personal information, you agree that TechTarget and its partners may contact you regarding relevant content, products and special offers. I've talked a bit about the inheritance of ACEs, but not about how this process actually occurs. ACE inheritance is the process by which the ACEs in the ACL of a parent object are propagated to the ACL of a child object. Inherited ACEs aren't always recomputed but are instead propagated from the parent object to the child object in the following circumstances: - When a new child object is created - When the DACL of the parent object is modified. - When the SACL of the parent object is modified. Whenever one of these events happens, Windows 2000 must re-evaluate the inheritance of ACEs. One of the things that you need to be aware of when this re-evaluation occurs is that a container object may carry an ACL that isn't effective on the container itself but is there only to be inherited down the chain. When this configuration occurs, the ACEs are inherited down the object hierarchy until they can be applied to a non-container object. They then become effective ACEs for that object. Because an object can inherit ACEs and have explicit ACEs applied directly to it, the specific order that ACEs are processed becomes important. Windows 2000 manages the order of ACEs by putting them into what is known as canonical order, as shown in Figure 5.13. Figure 5.13: The canonical order of ACEs. Although this figure gives you an idea of what canonical order is, it's best described by these three simple rules: 1. Explicit ACEs are grouped before any inherited ACEs. 2. Within a group of explicit ACEs, access-denied ACEs are placed before access-allowed ACEs. 3. Inherited ACEs are ordered in the same way in which they're inherited. Inherited parent ACEs come first, followed by grandparent ACEs, and so on. By placing ACEs in canonical order, you can be assured of two things. - Explicit ACEs are evaluated before inherited ACEs -- The owner of a child object really has control over the object's access, rather the object's parent having control. Thus, you can define permissions on an object that modify the effects of inherited permissions. - Access-denied ACEs are evaluated before access-allowed ACEs -- You can allow access to a large number of users while denying access to a subset of the group. Click for the next excerpt in this series: Security descriptors Click for the book excerpt series or get the full e-book.
<urn:uuid:3a49812c-900d-47ef-93e0-86fe1294a851>
CC-MAIN-2017-09
http://www.computerweekly.com/news/1280096138/ACE-inheritance
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171706.94/warc/CC-MAIN-20170219104611-00631-ip-10-171-10-108.ec2.internal.warc.gz
en
0.925281
611
2.578125
3
The Internet gives you access to the world. But it also gives the world access to you. The borderless, dynamic nature of the web makes it more likely that you’ll be a victim of a crime online than in your real life. In real life, we take basic safety precautions. So why wouldn’t you do the same online. To get you started here are a few basic principles or prompt commandments, so to speak, to prepare for Safer Internet Day on the seventh of February. Now go forth can connect safely. [Image by Tony Alter | Flickr] Finland is not part of Scandinavia. It's not on the Scandinavian peninsula, nor do Finns speak… March 1, 2017 More than 99 percent of all malware designed for mobile devices targets Android devices, Olaf… February 15, 2017
<urn:uuid:6ab00e92-2f87-4d42-9268-4fcf0c8c3439>
CC-MAIN-2017-09
https://safeandsavvy.f-secure.com/2017/01/27/10-things-we-can-all-do-to-make-a-safer-internet/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174276.22/warc/CC-MAIN-20170219104614-00507-ip-10-171-10-108.ec2.internal.warc.gz
en
0.902908
174
2.78125
3
At present, SQL Server 2008 continues to support two modes for validating connections and authenticating access to database resources: (Windows Authentication Mode) and (SQL Server and Windows Authentication Mode) also known as "Mixed Mode". Both of these authentication methods provide access to SQL Server 2008 and its resources. Lets first examine the differences between the two authentication modes. Windows Authentication Mode Windows Authentication Mode is the default and recommended authentication mode. It tactfully leverages Active Directory user accounts or groups when granting access to SQL Server. In this mode, Database Administrators are given the opportunity to grant domain or local server users access to the database server without creating and managing a separate SQL Server account. Also worth mentioning, when using Windows Authentication mode, user accounts are subject to enterprise wide policies enforced by the Active Directory domain such as complex passwords, password history, account lockouts, minimum password length, maximum password length and the Kerberos protocol. These enhanced and well defined policies are always a plus to have in place. SQL Server and Windows Authentication (Mixed) Mode SQL Server and Windows Authentication Mode uses either Active Directory user accounts or SQL Server accounts when validating access to SQL Server. SQL Server 2005 introduced a means to enforce password and lockout policies for SQL Server login accounts when using SQL Server Authentication. SQL Server 2008 continues to do so. The SQL Server polices that can be enforced include password complexity, password expiration, and account lockouts. This functionality was not available in SQL Server 2000 and was a major security concern for most organizations and Database Administrators. Essentially, this security concern played a role in helping define Windows Authentication as the recommended practice for managing authentication in the past. Today, SQL Server and Windows Authentication Mode may be able to successfully compete with Windows Authentication mode. Which Mode should be Used to Harden Authentication? Once the Database Administers are aware of the authentication methods, the next step is choosing one to manage SQL Server security. Although, SQL Server 2008 now has the ability to enforce policies, Windows Authentication Mode is still the recommended alternative for controlling access to SQL Server because this mode carries added advantages; Active Directory provides an additional level of protection with the Kerberos protocol. As a result, the authentication mechanism is more mature, robust and administration can be reduced by leveraging Active Directory groups for role based access to SQL Server. Nonetheless, this mode is not practical for everything out there. Mixed Authentication is still required if there is a need to support legacy applications or clients coming in from a platform other than windows and there exist a need for separation of duties. To summarize it is common to find organizations where the SQL Server and Windows team do not trust one another. Therefore, a clear separation of duties are required as SQL Server accounts are not managed via Active Directory. Using Windows authentication is a more secure choice, however, if Mixed Mode authentication is required then make sure to leverage complex passwords and the SQL Server 2008 password and lockout policies to further bolster security.
<urn:uuid:fff113b6-fd60-49d1-a407-76ffc94dcfd6>
CC-MAIN-2017-09
http://www.networkworld.com/article/2350774/microsoft-subnet/which-sql-server-2008-authentication-mechanism-should-i-choose-.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170613.8/warc/CC-MAIN-20170219104610-00275-ip-10-171-10-108.ec2.internal.warc.gz
en
0.876523
598
2.53125
3
Remember that road surface being tested in the Netherlands that acted as a giant solar panel converting solar energy into electricity? Well, guess what? It actually worked. Six months into the test, the engineers say they've generated 3,000kwH of power from the 70-meter bike path test track. That's enough power to run a one-person household for a year, and more than expected of the project, according to SolaRoad, the company behind the experiment. Data centers are heavy users of electricity, and SolaRoad's better-than-expected electricity generation will be interesting news for those designing data centers. SolaRoad's road surface acts as a huge photovoltaic panel. Practical applications thought of thus far include street lighting, traffic systems, and electric vehicles. Designers are keen on the idea of developing a system where electricity could be passed onto vehicles as they drive down the road, for example. Glass and concrete construction The project uses standard, off-the-shelf solar panels that the engineers have placed between layers of glass, silicon rubber, and concrete. Those concrete modules consist of 2.5-by-3.5-meter slabs capped with 10-millimeter thick tempered glass. Crystalline silicon solar panels are located between the glass and concrete. The researchers are delighted that the project worked, in part because of the technical challenges. The top layer had to absorb sunlight, unlike normal blacktop. But it also had to be long-term skid-resistant for the bicycle tires, unlike what you'd get with shiny glass. It had to repel dirt in order to keep the sun shining in, but could not break even if a service truck drove on it. Glass is obviously dangerous and could injure someone if it broke. The skid resistance was addressed with a coating for the glass In a 2,543-comment Reddit debate over the news of the successful test, Reddit user Imposterpill sarcastically comments: "I have an idea…why don't we put solar cells on our roofs?" Good point. Why roads, one might ask? What's wrong with roofs? Well, the engineers have an answer for that comedian: Total electricity consumption in the Netherlands is around 110,000 GWh, and that keeps going up. That number, taking into account the small size of the country and the limited number of roofs available, means that even if all suitable roofs were equipped with solar panels, they would only supply a quarter of Dutch power consumption. The same limits might apply in a data center. One day data center designers may want to look at surrounding infrastructure for panel placement. In other words, the roadway. Surprisingly, the wise-crackers at Reddit haven't posed the question - what happens when there's a traffic jam? The cars on the road will surely block the sunlight and reduce yield. Well, the engineers do acknowledge that as a potential problem, and they say that they are looking into it as part of the pilot study. Another Reddit user suggests placing the solar panels over the road instead of on it. However, in true Reddit-user logic, BloodBride disagrees and says: "Solar panels OVER the road increase the amount of drunk people throwing traffic cones up there. Traffic cones ON a road invariably just get stolen, worn as hats and taken home." And that's problem solving. This article is published as part of the IDG Contributor Network. Want to Join?
<urn:uuid:704d3cef-3629-4bfb-b969-63beda898e70>
CC-MAIN-2017-09
http://www.networkworld.com/article/2921244/data-center/solar-power-road-surface-actually-works.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170875.20/warc/CC-MAIN-20170219104610-00451-ip-10-171-10-108.ec2.internal.warc.gz
en
0.959529
716
3.234375
3
Ideological Origins of the Open-Source Movement “Coders of the world, unite! You have nothing to lose but your proprietary agreements.” OK, so this wasn’t really the rallying cry of the open-source community’s revolt against the idea of intellectual property. But it might as well have been, as the Copyleft ideal served as a kind of ideological cornerstone for the open-source movement. A concept born out of a yearning to be liberated from legal restrictions, Copyleft specifies that all software that adheres to this standard can be duplicated, changed to any extent desired and redistributed to anyone anywhere as long as the subsequent solution also follows that standard. Copyleft is a play on words that kind of pokes fun at its opposite, copyright. However, the suffix is somewhat misleading. In the traditional parlance, the “left” side of the political spectrum indicates socialism, equality, and government controls and regulations as favored solutions to societal challenges, whereas the “right” denotes support for the free market, competition and the private sector. Whether this is true in practice is irrelevant to this discussion—take those issues up in your blogs, if you like—it’s what these words represent that matters most. And Copyleft, with its oh-so-clever title, might be something of a misnomer. The History of Copyleft In the early 1970s, a Harvard physics student by the name of Richard Stallman, who also worked in Massachusetts Institute of Technology’s (MIT) artificial intelligence lab, observed the process by which he and other top tech students worked on a wide range of software projects. (Incidentally, Stallman graduated from Harvard the same year that Microsoft founder Bill Gates enrolled there.) In these assignments, the software would evolve as it was passed back and forth between these students, who would change it to meet their particular needs or enhance functionality. Fast-forward about a decade: Now a staff programmer in MIT’s artificial intelligence lab, Stallman was frustrated by what he perceived to be freedom-stifling proprietary policies of the rising IT companies. In 1984, he left his position at the school to fully devote himself to the GNU project, which he’d launched just a few months prior. GNU, which stands for GNU’s Not UNIX (a recursive acronym, and another example of the droll semantics of this bunch), was a UNIX-ish operating system that was more or less available for anyone to tweak and tune. Additionally, in 1985 Stallman established the Free Software Foundation (FSF), an organization created to advocate—you guessed it—free software. It was around this time that the term Copyleft was introduced—not by Stallman, but rather by his colleague Don Hopkins, who would go on to contribute to popular computer games like Sim City and The Sims. Hopkins had written a letter that included the phrase “Copyleft: All Rights Reversed.” However, clearly Stallman was the main driver behind the idea, and efforts like his GNU Manifesto played the biggest role in shaping and promoting the concept. Linus ’n’ Linux A few years later—in Helsinki, Finland, of all places—a then-unknown college student named Linus Torvalds was studying computer science and, in his spare time, devoting his tech talents to making things like Pac Man videogame knock-offs. In 1991, Torvalds unveiled a “hobby” he had been working on in a USENET newsgroup, an operating system kernel based on the Minix OS previously invented by Andrew Tanenbaum. 
Linux, or Linus’ Minix, caught on like wildfire on a dry prairie, and part of its success was due to the GNU components that Torvalds used in its development. Also, the accessibility and cost of the open-source OS contributed to its rapid adoption. Today, Linux occupies the second spot in terms of operating system market share, behind the slightly older Microsoft Windows. Torvalds himself still promotes the open-source ideal generally, although he has been criticized by Stallman and other GNU luminaries for what they believe to be a lack of zeal on his part. Here Comes the Internet Although software developers and designers had used open-source methodologies as far back as the 1960s, the idea of open source as a philosophy really picked up once the Internet came into its own as a medium for the masses. This allowed for the free exchange of ideas (and software) among people who had formerly been severely limited by space and time constraints. It also allowed open-source solutions to progress to dizzying new heights by frequent changes. In fact, only about 2 percent of the current Linux kernel was actually authored by Torvalds. Although open source still applies largely to IT, it’s becoming something of a way of life for people as well. On the Web, interested parties can download recipes for open-source soda and beer (now we’re talking), and refine either to suit their own tastes. There have even been some discussions online around open-source pharmaceuticals. (But I think I’ll be sticking to the corner drug store for my medicine, thanks.) What Are the Implications? Because of the communal nature and language of Copyleft and open source, there is a tendency among those who might not know better (or those with an agenda) to think of the concept as being something akin to the unholy spawn of Marx and Engels. It is sometimes frowned upon as being an idea that goes against capitalism, entrepreneurship and other widely valued Western traditions. But is it? Not exactly. Although Copyleft heavily promotes the rights of the user—much more so than the profit margins of the software manufacturers—it’s not really anti-business. The “free” in free software refers to what users are allowed to do with it, which is whatever they want, provided they follow Copyleft guidelines. It does not indicate that something is free of charge. In fact, the FSF has an initial distribution price for its GNU OS. And even if it is no cost, like Linux, there are other ways to make money off of it, as companies like Red Hat and IBM have found. Even Microsoft employs open-source development techniques in-house, though once its products hit the shelves, you’d better have a license if you want to use any of them. –Brian Summerfield, [email protected]
<urn:uuid:1a6d2924-8041-4efd-a62f-810086d2489f>
CC-MAIN-2017-09
http://certmag.com/copyleft-ideological-origins-of-the-open-source-movement/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171775.73/warc/CC-MAIN-20170219104611-00327-ip-10-171-10-108.ec2.internal.warc.gz
en
0.969182
1,356
2.53125
3
New NASA program furthers space exploration Less than a month after the Space Shuttle made its final landing, NASA announced that operations of the International Space Station, as well as human exploration, would be getting some extra focus. The new program, named the Human Exploration and Operations Mission Directorate, will focus specifically on areas beyond low-Earth orbit and will manage the Space Station's commercial crew and cargo development programs, building a spacecraft made to travel beyond low-Earth orbit, developing a heavy-lift rocket and more, a NASA news release said. "America is opening a bold new chapter in human space exploration," NASA Administrator Charles Bolden said.
<urn:uuid:ea30d384-ea21-4657-85b9-eb5e8d1e10a3>
CC-MAIN-2017-09
https://fcw.com/articles/2011/08/15/agg-new-nasa-program-furthers-space-exploration.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173866.98/warc/CC-MAIN-20170219104613-00027-ip-10-171-10-108.ec2.internal.warc.gz
en
0.919889
139
2.734375
3
Mindjet has just announced the fourth generation of mind-mapping technology. Each generation has added new functionality and widened the scope of use; as a by-product, each generation has included extra support for people with disabilities. The first generation was pen on paper, with linear notes replaced by text-filled ovals connected by text-labelled lines. They were an excellent way for an individual to organise information and served as a visual aide-memoire. The second generation provided an editing tool for creating mind-maps on a PC. This solved the problem that first-generation maps got very messy very quickly as more information was added or new links created. It also enabled the handling of bigger maps, as whole branches could be closed down or opened up, or the viewer could zoom in or out. The improvement in the overall quality of the presentation, the ease of navigation and the ability to send maps electronically meant that maps could be effectively shared. The mind-map moved from an aide-memoire to a communication medium. People with a variety of disabilities began to use mind-maps. People with limited or no use of their hands, who could not draw generation-one maps, could now use the technique. People with dyslexia found mind-maps easier to understand and create than linear text, especially when they could include colours and images, so they began to use the technique as a communication medium. People with limited vision who could see the overall structure of the map found the electronic mind-map easier to navigate than linear text, as they could pick up on the visual clues of colour, image and structure. The third generation extended the electronic functionality by enabling connections between the map and other artefacts. For example, a node on the map could be connected to all the documents, presentations, project plans, etc. related to the node. Any of the artefacts could then be opened from within the map. This moved maps from a communication medium to an organisation method. Mind-maps became the first thing people opened in the morning, as they could now organise their work around them. The better integration with other tools on the PC meant that speech recognition and text-to-speech technologies could be used with mind-maps, hence increasing their usability by people with disabilities. The fourth generation moves mind-maps from an organisation method to a collaboration tool. With Mindjet Connect, multiple users in multiple locations can work on the same map at the same time. The package includes instant messaging (IM) and voice over IP (VoIP) so participants can discuss and modify the map interactively. Users can also work on their own part of the map, and the changes are immediately available to all the users. Users can view the map either by having Mindjet installed on their own machine or through a zero-footprint browser solution. The browser solution makes it possible for ad-hoc users to be invited in to collaborate on the map. Collaboration brings great benefits to users who find it difficult to travel. Many people with disabilities find travel difficult, either because the travelling itself is a challenge or because specialised technologies such as large screens or speech recognition are not available at the destination. Collaboration systems make it easy for them to work from their own location and fully participate in the interactive collaboration. 
Mindjet Connect allows multiple users to work off the same map whilst each having their own view open; this means that a blind user can open up a text hierarchy view of the map whilst other users have a pictorial view. Mindjet has always been useful to people with disabilities, but with this jump to interactive collaboration, it is opening up opportunities for them to exploit their full potential in projects and the workplace.
<urn:uuid:8a45cb8f-459c-4d2f-ae58-9059b6047a7b>
CC-MAIN-2017-09
http://www.bloorresearch.com/analysis/mindjet-connect-boosts-accessibility/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172077.66/warc/CC-MAIN-20170219104612-00199-ip-10-171-10-108.ec2.internal.warc.gz
en
0.962565
734
2.546875
3
As microprocessors push up against the limits of miniaturization, many are reflecting on what the post-silicon era has in store. Recently Sandia National Laboratories published an article describing the steps it is taking to extend the pace of computational progress over the coming decades. Some of the forward-leaning technologies include self-learning supercomputers and systems that greatly outperform today’s best crop while using less energy. As history tells us, many of today’s established technologies would have seemed impossible at one time. Think about explaining Internet-connected smart phones to a pre-mobile, pre-Web generation – and that wasn’t that long ago. “We think that by combining capabilities in microelectronics and computer architecture, Sandia can help initiate the jump to the next technology curve sooner and with less risk,” said Rob Leland, head of Sandia’s Computing Research Center. Leland is leading a new initiative focused on next-generation computing called Beyond Moore Computing that encompasses Sandia’s efforts to advance computing technology beyond the exponential trend that was observed by Gordon Moore in 1965. Moore’s law can be extended for a few more process shrinks but the cost is no longer feasible from an energy perspective. The industry needs technology that uses less energy at the transistor device-level, expressed Leland. Scientists at Sandia anticipate that multiple computing device-level technologies will evolve to fill this gap, as opposed to one dominant architecture. So far, there exist about a dozen candidates, including tunnel FETs (field effect transistors), carbon nanotubes, superconductors, and paradigm-changing approaches like quantum computing and brain-inspired computing. Leland makes the case that Sandia Labs, a multi-program laboratory operated by Sandia Corporation, a subsidiary of Lockheed Martin Corp., is well positioned to shape future computing technology. The lab has decades of supercomputing experience, both on the hardware and software side, extending to capability computing and capacity computing. Leland references two key facilities in particular that will contribute to next-gen computing: the Microsystems and Engineering Sciences Applications (MESA) complex, which carries out chip-level R&D; and the Center for Integrated Nanotechnology (CINT), a Department of Energy Office of Science national user facility operated by Sandia and Los Alamos national laboratories. This is really an inflection point, where it is difficult to predict what tomorrow’s computers will look like. “We have some ideas, of course, and we have different camps of opinion about what it might look like, but we’re really right in the midst of figuring that out,” Leland said. One way that computing’s progress has been limited is the mandate for backwards software compatibility. Many computers are running code that was optimized to run on a different architecture. “To break out of that, we have to find different architectures that are more energy efficient at running old code and are more easily programmed for new code, or architectures that can learn some behaviors that once required programming,” notes Erik DeBenedictis of Sandia’s Advanced Device Technologies department. He expects that computers are about a decade away from being able to manage both old and new code in an efficient manner. DeBenedictis is pushing for breakthroughs beyond the transistor level. 
He cites cognitive computers and technologies that move data more efficiently as being crucial for the kinds of big data problems that are becoming so prominent. This new generation of cognitive computers would be self-learning and able to share some of the programming burden. DeBenedictis makes the point that "while computers have gotten millions of times faster, programmers and analysts are pretty much as efficient as they've always been." Smarter computers hold the promise of ameliorating this bottleneck. As for a timeline, Advanced Device Technologies department manager John Aidun says that post-silicon technology is coming sooner than one might think. Looking through the lens of national security, Sandia thinks this new tech will be needed sooner than industry would develop it on its own. Hence, the concerted efforts in this direction. Aidun estimates Sandia could have a prototype within a decade. The lab is working to accelerate the process by identifying computer designs that leverage new device technologies and demonstrating fabrication steps that would lower the risk for industry. Mobile computing is an area that's getting a lot of attention. Mobile technology meets many of the requirements of UAVs and satellites, and on-board processing for satellites and other sensors would mean less need for data transfer. Again, with history as a guide, the next big thing in computing may be an extension of a current technology, a mix of technologies (as in heterogeneous computing), or it might be something entirely different and new.
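To give a feel for the exponential curve the Beyond Moore effort is trying to move past, a few lines of Python show how quickly transistor density compounds; the two-year doubling period is the common textbook formulation, assumed here for illustration rather than taken from the Sandia article.

# Moore's-law style compounding: density doubles roughly every two years (assumed).
def density_growth(years, doubling_period_years=2.0):
    return 2 ** (years / doubling_period_years)

for span in (10, 20, 49):  # 49 years separate Moore's 1965 observation from this article
    print(f"{span} years -> roughly {density_growth(span):,.0f}x more transistors per chip")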
<urn:uuid:e15ad8c7-cdd0-4514-97cf-0406e829c346>
CC-MAIN-2017-09
https://www.hpcwire.com/2014/05/28/sandia-launches-post-silicon-development-effort/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170651.78/warc/CC-MAIN-20170219104610-00319-ip-10-171-10-108.ec2.internal.warc.gz
en
0.934173
1,000
3.125
3
Most Internet users know about the existence of software Trojans, but hardware Trojans are far less well known. They consist of integrated circuits that have been modified by malicious individuals so that, when triggered, they try to disable or bypass the system's security, or even destroy the entire chip on which they are located. As hardware devices are almost exclusively produced in countries where controls over who has access to the manufacturing process are non-existent or, at best, pretty lax, government agencies, military organizations and businesses that operate systems critical to a country's infrastructure can never be too careful when checking whether the devices they are planning to use have been tampered with. There are a number of techniques for detecting hardware Trojans, but they are time- and effort-consuming. So a team of researchers from the Polytechnic Institute of New York University (NYU-Poly) and the University of Connecticut has decided to search for an easier solution, and came up with the idea of "designing for trust." "The 'design for trust' techniques build on existing design and testing methods," explains Ramesh Karri, NYU-Poly professor of electrical and computer engineering. Among those is the use of ring oscillators on circuits – devices composed of an odd number of inverting logic gates whose voltage output can reveal whether the circuit has or has not been tampered with. Non-tampered circuits would always produce the same frequency, but altered ones would "sound" different. Of course, sophisticated criminals could find a way to modify the circuits so that the output is the same, so the researchers suggest creating a number of variants of ring oscillator arrangements so that hardware hackers can't keep track of them. While the theory does sound good, the researchers have encountered some difficulty when it comes to testing it in the real world. Companies and governments are disinclined to share what hardware Trojan samples they may have, since that would require sharing actual modified hardware that could tip off the researchers to their proprietary technology or endanger national security. Luckily for them, NYU-Poly organizes an annual Cyber Security Awareness Week (CSAW) white-hat hacking competition called the Embedded Systems Challenge (this year's edition is currently underway), for and during which students from around the country construct and detect hardware Trojans, and these samples are readily available to them and to the public.
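A minimal sketch of the frequency-fingerprint idea is shown below: compare each chip's ring-oscillator readings against a trusted "golden" reference and flag anything that drifts too far. The reference values and tolerance are illustrative assumptions, not figures from the NYU-Poly and University of Connecticut work.

# Compare measured ring-oscillator frequencies (MHz) against a trusted golden reference.
GOLDEN = {"ro_1": 412.0, "ro_2": 398.5, "ro_3": 405.2}   # reference fingerprint (assumed values)
TOLERANCE = 0.02                                          # allow 2% process/temperature drift (assumed)

def flag_tampering(measured):
    suspects = []
    for name, reference in GOLDEN.items():
        if abs(measured[name] - reference) / reference > TOLERANCE:
            suspects.append(name)
    return suspects

sample = {"ro_1": 411.7, "ro_2": 371.0, "ro_3": 405.0}    # ro_2 deviates by roughly 7%
print(flag_tampering(sample))                             # prints ['ro_2']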
<urn:uuid:708497e8-6526-4322-9ffd-af54b9c9f429>
CC-MAIN-2017-09
https://www.helpnetsecurity.com/2011/11/10/new-techniques-for-detecting-hardware-trojans/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171834.68/warc/CC-MAIN-20170219104611-00371-ip-10-171-10-108.ec2.internal.warc.gz
en
0.949206
504
3.09375
3
In a rare confluence of events, all three branches of the federal government are weighing changes that would affect when and how personal data is accessed. The approaches are somewhat contradictory: Some moves would protect citizen privacy, while others could result in more access by government agencies to personal information kept by businesses and smartphone users. Encryption technology is usually at the center of the discussions, with intelligence officials eager to find ways to detect communications on smartphones used by criminals and terrorists. Various actions are taking place in the federal judiciary, before Congress and within the executive branch.
FCC and ISP privacy
In the latest proposal, made last week, the Federal Communications Commission wants Internet service providers to receive customer permission before their personal data is shared for marketing and other purposes. The FCC will debate the proposal at its March 31 meeting. The proposal quickly won an endorsement from nearly 60 privacy and digital rights groups, including Free Press. Meanwhile, opponents also have emerged, including the Information Technology & Innovation Foundation, which said the U.S. Federal Trade Commission's oversight of broadband providers already protects broadband customer privacy while balancing privacy with industry costs and innovations. Both FCC Chairman Tom Wheeler and FTC Chairwoman Edith Ramirez appeared together at the CES trade show in January to urge tech companies to expand their efforts to protect consumer privacy.
Apple and the FBI
Also receiving big headlines is the FBI's attempt to force Apple to write new software that would override password protections on the iPhone of a mass shooter in last year's deadly San Bernardino, Calif., attacks. Magistrate Judge Sheri Pym on Feb. 16 ordered Apple to comply, but the company is appealing the decision on constitutional and other legal grounds. A hearing on the appeal is set for Tuesday. Many experts expect the case will end up at the U.S. Supreme Court. A series of affidavits by both parties in the case have been filed almost daily. Last week, the FBI described how it tried to access content on the work-issued iPhone 5c used by San Bernardino shooter Syed Rizwan Farook. Legislation expected to pass in Congress calls for creating a 16-member commission on security and technology challenges. The commission, drawn from a broad base of security and privacy advocates, would have a year to issue a final report. Sen. Mark Warner (D-Va.), one of the commission's co-sponsors, said the group can "strike an appropriate balance that protects Americans' privacy, American security and American competitiveness." A big issue for backers of the commission is ensuring that Congress not pass knee-jerk legislation seeking backdoors or other workarounds of encryption used on smartphones and other devices. The concern is that since many encryption apps are available from foreign companies out of reach of U.S. laws, any U.S. regulation would only hurt U.S.-based companies. Furthermore, terrorists could resort to using apps developed in other countries, or build their own. While much of the concern over encryption and privacy arose from the mass shooting attacks in Paris and San Bernardino, the recent deliberations before all the major branches of government can also be tied to the election calendar, analysts noted.
FCC commissioners and officials at the Department of Justice, the FBI and other security agencies are appointed by the president, and Barack Obama's term ends in January. The same goes for 435 members of the House and one third of the members of the Senate. In the judiciary branch, the Apple-FBI case -- and others -- could drag on well past January. If the case heads to the Supreme Court, the appointment and confirmation of a ninth justice to replace recently deceased conservative Antonin Scalia could have a bearing on the outcome. In June 2014, the high court ruled unanimously in favor of civil liberties and personal privacy in the landmark Riley v. California case. That ruling held that police may not search digital information on a cell phone without a warrant, even if the phone was seized from an arrested person. Some legal scholars see that case as having a bearing on smartphone privacy cases, since there is so much personal data, such as financial and health information, contained on a smartphone. While the FBI and other agencies are pushing for access to a smartphone that was specifically designed by Apple to protect personal information, other government actions, like the one before the FCC, are heading in the other direction. "The FCC plan is right on the mark for protecting consumer privacy … but it is also in direct contradiction in spirit to what the FBI is asking for from technology companies," said Avivah Litan, a longtime security analyst at Gartner, in an email. "There is a ton of rich data at ISPs that can be used to identify and track terrorists and criminals. In fact, this data is more fertile than what is on a personal smartphone because it reveals networks and connections that involve crime or terrorist rings," Litan added. Given that a terrorist or a criminal could easily resort to using a prepaid burner phone (a phone briefly used, then disposed of and replaced) — as happened with two other phones in the San Bernardino attack — Litan suggested that going after smartphones protected with encryption might not be the most effective course for the FBI. "The government should accept that encryption advances are well underway, so trying to force Apple and Google to open backdoors for them is a futile exercise," Litan said. "The cat is already out of the bag," she wrote in a recent blog. "The government needs to master new techniques for gathering intelligence and finding perpetrators instead of bullying technology and phone companies to open backdoors for them," she said in another blog. "The government should stick to principles and stay away from technology details," she said. "And they need to accept that we are no longer living in the 20th century. The world has moved forward, as has technology. They should make the best of it and take advantage of all the information that is indeed out there, instead of sticking to old ways of doing business and blaming others for their ineptitude."
<urn:uuid:53675846-a281-4738-9962-eb9f53d2dea4>
CC-MAIN-2017-09
http://www.computerworld.com/article/3044497/data-privacy/privacy-issues-hit-all-branches-of-government-at-once.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174135.70/warc/CC-MAIN-20170219104614-00071-ip-10-171-10-108.ec2.internal.warc.gz
en
0.959196
1,232
2.625
3
By Matt Champion
Recently, I attended the IoT Evolution Expo in Las Vegas. While I was there, I sat in on a panel called "The Farmer's IoT Markets: From Drones to Prawns", and what really struck home was how much an industry as important as agriculture can benefit from IoT technologies. Agriculture is one of the largest industries in the world and is also one of the most vital. Everything we consume comes from a field; it may be highly processed or it may be fresh picked, but it all originates from the same place: a farmer's fields. Unfortunately, in some areas such as the state of California, strict regulation on water consumption has driven out many farmers or caused significant changes in how they do business. Efficiency, one of the main takeaways from the panel, is what will transform agriculture. For instance, if we can precisely monitor the soil moisture level in the field and dial it in to exactly what the crop needs, we will be much more efficient than just watering for time and gallons. Along with monitoring the soil, another process that benefits greatly from IoT is field irrigation. Companies like Net Irrigate and Observant monitor automated irrigation, allowing the system to precisely fit the needs of the crop being grown. Additionally, these companies are producing affordable solutions that not only monitor the location and amount of water placed on the field, but can send alerts if there is damage or theft of assets at the site. If fields have sensors monitoring soil moisture, water pumps and valves, and all of these sensors report to one location, the resulting automation allows farmers to achieve better-yielding crops, lower water consumption, larger profit margins and higher quality food. The second big takeaway centered on tracking product and food safety. New regulations around produce monitoring and tracking have been implemented to help identify any areas that could possibly risk the safety of the food supply. Small IoT devices that can track crates, pallets, or truckloads of produce from the field to the production facility create a bread crumb trail right back to the source of the product and even the crew that harvested it. By monitoring movement and temperature, through companies like Locus Traxx, the producers of our perishable food supply can be better protected against loss and damages associated with the transportation and movement of product. This technology is not limited to field crops either. Sensors can be attached to livestock to track their movement, temperature, and other vital signs. Keeping livestock healthy allows for a better life for the animal, as well as a higher quality product for the consumer, with lower costs to the farmer. Visual monitoring of fields by drones can save farmers countless hours and allow them to get critical feedback in order to make decisions on a daily basis. Even simple security is being addressed by devices remotely placed around the operations. If more farmers took the time to adapt their operations with IoT solutions, those operations would increase in efficiency, and the agriculture industry's ability to become more environmentally conscious and profitable would be greatly advanced. Furthermore, the most important thing to remember is that in this increasingly overpopulated world, we can make better use of our land, time, and resources to better our own lives and those of future generations.
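As a concrete illustration of "dialing in" irrigation to what the crop needs, the sketch below shows the simplest possible control step driven by a soil-moisture reading. The target values are made up for illustration, and the sensor and valve would be whatever hardware a given farm actually uses.

# Toy irrigation decision: open the valve only while soil moisture sits below target.
TARGET_MOISTURE = 0.35   # volumetric water content the crop needs (assumed)
DEADBAND = 0.03          # hysteresis so the valve does not chatter (assumed)

def control_step(moisture, valve_open):
    if moisture < TARGET_MOISTURE - DEADBAND:
        return True      # too dry: irrigate
    if moisture > TARGET_MOISTURE + DEADBAND:
        return False     # wet enough: stop
    return valve_open    # inside the deadband: keep the current state

print(control_step(0.28, valve_open=False))   # True  -> start watering
print(control_step(0.40, valve_open=True))    # False -> stop watering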
<urn:uuid:1f839f39-9191-4a56-bbb4-abd56dbcf885>
CC-MAIN-2017-09
http://www.jbrehm.com/blog/2016/8/10/what-iot-means-for-farmers
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170404.1/warc/CC-MAIN-20170219104610-00491-ip-10-171-10-108.ec2.internal.warc.gz
en
0.958607
658
2.9375
3
1. Control Services and Daemons
Review how to manage services and the boot-up process using systemctl.
2. Manage IPv6 Networking
Configure and troubleshoot basic IPv6 networking on Red Hat Enterprise Linux.
3. Configure Link Aggregation and Bridging
Configure and troubleshoot advanced network interface functionality, including bonding, teaming, and local software bridges.
4. Control Network Port Security
Permit and reject access to network services using advanced SELinux and firewalld filtering techniques.
5. Manage DNS for Servers
Set and verify correct DNS records for systems and configure secure DNS.
6. Configure Email Delivery
Relay all email sent by the system to an SMTP gateway for central delivery.
7. Provide Block-Based Storage
Provide and use networked iSCSI block devices as remote disks.
8. Provide File-Based Storage
Provide NFS exports and SMB file shares to specific systems and users.
9. Configure MariaDB Databases
Provide a MariaDB SQL database for use by programs and database administrators.
10. Provide Apache HTTPD Web Service
Configure Apache HTTPD to provide Transport Layer Security (TLS)-enabled websites and virtual hosts.
11. Write Bash Scripts
Write simple shell scripts using Bash.
12. Bash Conditionals and Control Structures
Use Bash conditionals and other control structures to write more sophisticated shell commands and scripts.
13. Configure the Shell Environment
Customize Bash startup and use environment variables, Bash aliases, and Bash functions.
14. Linux Containers Preview
Preview the capabilities of Linux containers, Docker, and other related technologies in Red Hat Enterprise Linux 7.
15. Comprehensive Review
Practice and demonstrate knowledge and skills learned in Red Hat System Administration III.
<urn:uuid:4cb2586e-9eff-4e4a-9d46-fa594352c065>
CC-MAIN-2017-09
https://www.globalknowledge.com/ca-en/course/116202/red-hat-system-administration-iii-rh254/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170404.1/warc/CC-MAIN-20170219104610-00491-ip-10-171-10-108.ec2.internal.warc.gz
en
0.687335
386
2.578125
3
Korean engineers have broken a record by transmitting enough power wirelessly over a distance of about 16 feet to charge up to 40 smartphones at the same time. The researchers, from the Korean Advanced Institute of Science and Technology (KAIST), created a "Dipole Coil Resonant System" (DCRS) made specifically for an extended range of inductive power transfer between transmitter and receiver coils. The development of long-distance wireless power transfer has attracted a lot of attention from researchers in recent years. The Massachusetts Institute of Technology (MIT) first introduced a Coupled Magnetic Resonance System (CMRS) in 2007. It used a magnetic field to transfer energy for a distance of 2.1 meters (about seven feet). According to the Korean researchers, CMRS has unsolved technical limitations that make commercialization difficult. For one, CMRS has a rather complicated coil structure (it's composed of four coils for input, transmission, reception, and load); bulky resonant coils; and a high frequency (in the range of 10MHz). The KAIST team uses a lower, 20kHz frequency. While that may seem like a unique move, the KAIST engineers are using the same technology as WiTricity, a company in Watertown, Mass. WiTricity has been developing magnetic resonance charging over distance for sale to manufacturers since 2009. What the KAIST researchers did was build a bigger system. Overall configuration of KAIST's DCRS system, showing primary and secondary coils (Image: KAIST). WiTricity's wireless charging technology is designed for "mid-range" distances, which it considers to be anywhere from a centimeter to several meters, according to Kaynam Hedayat, WiTricity's product manager. Magnetic resonance wireless charging works by creating a magnetic field between two copper coils. The larger the copper coils and the greater the power being pushed through them, the bigger the size of the magnetic field. What KAIST researchers did was build a 10-foot-long, pole-like transmitter and receiver that was able to create a magnetic field large enough to transmit 209 watts of power over a distance of five meters (or about 16 feet). Over that distance, the wireless transmitter still emitted enough power to charge up to 40 smartphones, if plugged into an outlet powered by the wireless transmitter. But, as the distance increased, the power dropped off significantly. The Korean engineering team conducted several experiments and achieved "promising results." For example, at 20kHz, the maximum output power was 1,403 watts at a three-meter distance; 471 watts at four meters; and 209 watts at five meters. "For 100 [watts] of electric power transfer, the overall system power efficiency was 36.9% at three meters, 18.7% at four meters, and 9.2% at five meters," Chun Rim, a professor of Nuclear & Quantum Engineering at KAIST, said in a statement. "A large LED TV as well as three 40 [watt]-fans can be powered from a five-meter distance." KAIST's DCRS magnetic resonance system. Note the two coils on either side of the room (Image: KAIST). The Korean researchers believe that wireless charging will eventually be as common as Wi-Fi in homes and public places. WiTricity, a creator of wireless charging systems, has an intellectual property (IP) license agreement with Toyota Motor Corp. Under the agreement, Toyota is expected to offer wireless charging on future rechargeable plug-in hybrid electric and fully electric vehicles.
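To put the reported efficiency figures in perspective, a few lines of Python turn them into the transmitter power needed to deliver 100 watts at each distance, assuming efficiency simply means delivered power divided by input power.

# Transmitter input power needed to deliver 100 W, using the published efficiencies.
efficiency_by_distance_m = {3: 0.369, 4: 0.187, 5: 0.092}

for distance, efficiency in efficiency_by_distance_m.items():
    required_input_w = 100 / efficiency
    print(f"{distance} m: about {required_input_w:.0f} W in for 100 W out")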
David Schatz, director of business development at WiTricity, demonstrates to Computerworld how a new prototype wireless charger called "Prodigy" can power a device from about 10 inches away. The size of the coils in WiTricity's system is dependent on the application and the application environment (i.e., a vehicle, a smartphone, a wearable computing device, etc.), Hedayat said. The size of the transmitting coil is limited by the deployment environment and user expectations, while the size of the receiver coil is limited by the physical size of the device receiving power, Hedayat said. For example, WiTricity's wireless power transmitters for vehicles are about 19-in. square by 2-in. thick. The receiver coils that would be installed in a car or truck are about one foot square by .4-in. thick. WiTricity has been able to stretch the distance of its magnetic resonance charging field by using a "repeater," a small disk-like object that retransmits the magnetic signal. WiTricity is by no means alone in developing magnetic resonance charging devices, though it does claim it is the first based on the MIT technology. The company is a member of the Alliance for Wireless Power (A4WP). There are three major alliances backing various forms of wireless charging, including inductive magnetic charging. To date, products on the market have been built around magnetic inductive charging techniques, which require that a mobile device be in contact with a charging surface, such as a charging pad. The leading charging pad supplier has been Duracell's Powermat technology, a member of the Power Matters Alliance (PMA). Two of the three major wireless power consortiums have agreed to establish interoperability standards for wireless power. The partnership, announced earlier this year, pits the A4WP and the PMA against the largest of the industry groups -- the Wireless Power Consortium (WPC), which touts the Qi (pronounced "chee") wireless charging specification. Lucas Mearian covers consumer data storage, consumerization of IT, mobile device management, renewable energy, telematics/car tech and entertainment tech for Computerworld. Follow Lucas on Twitter at @lucasmearian or subscribe to Lucas's RSS feed. His e-mail address is [email protected]. Read more about emerging technologies in Computerworld's Emerging Technologies Topic Center. This story, "It's Now Possible to Wirelessly Charge 40 Smartphones From 16 Feet Away" was originally published by Computerworld.
<urn:uuid:7d8c974b-8bc6-4c35-9e7d-79c5607045c6>
CC-MAIN-2017-09
http://www.cio.com/article/2376848/mobile/it-s-now-possible-to-wirelessly-charge-40-smartphones-from-16-feet-away.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170992.17/warc/CC-MAIN-20170219104610-00191-ip-10-171-10-108.ec2.internal.warc.gz
en
0.946463
1,276
3.265625
3
Programming for multicore systems can be complex, so an industry consortium led by Advanced Micro Devices has taken a step forward in its goal to eliminate development challenges so applications are portable across devices, architectures and operating systems. The HSA (Heterogeneous System Architecture) Foundation on Tuesday is expected to introduce a new uniform memory architecture called HUMA that makes different memory types in a system accessible to all processors. By breaking down barriers that separate different memory types, developers have access to a larger pool of shared memory in which code could be executed. The specification is part of HSA's open-hardware standard so program execution can be easily distributed to processing resources in servers, PCs and mobile devices. HSA's goal is to create a basic interface around industry-standard parallel programming tools so code can be written and compiled once for multiple devices. Computers and mobile devices today combine CPUs with many co-processors to speed up computing tasks. Some of the co-processors include GPUs (graphics processing units), DSPs (digital signal processors), network processors, FPGAs (field programmable gate arrays) and specialized ASICs (application-specific integrated circuits). Some of the world's fastest computers harness the joint computing power of GPUs and CPUs for complex math calculations, while mobile devices have multiple processors for graphics and security. Efficient processing leads to better smartphone and tablet performance, and also longer battery life, said Phil Rogers, corporate fellow at AMD, during a conference call to discuss the new specification. AMD later this year is expected to release laptop and desktop processors code-named Kaveri in which CPUs and graphics processors will be able to share memory. The HSA Foundation's goals are loosely tied to AMD's chip strategy in which the company integrates third-party intellectual property so chips can be customized to customer needs. For example, AMD is making a customized chip for Sony's upcoming PlayStation 4 gaming console. HSA also wants to lower development costs and reduce the need to recompile code for different devices or chip architectures. Some of the features of HUMA include dynamic memory allocation and fast GPU access to system memory. "Every compute unit ... is going to have the same priority and going to all be able to look at the same memory," said Jim McGregor, principal analyst at Tirias Research. HUMA ensures every hardware unit has access to the same data, so the information doesn't need to be copied into different memory types. GPUs and CPUs today have access to different cache and memory types, and the specification would break the traditional mold in which CPUs allocate memory for code execution but the information is copied into GPU memory for execution by the graphics processor. "The other part is it is unifying the hardware and also software architecture. If you are writing in C++, you can say I want the GPU to execute it," McGregor said. The specification also reduces the need to transfer data between memory types, and that eases bottleneck issues, McGregor said. AMD's Rogers said the specification recognizes multiple storage and networking interconnects, but did not say whether it would address nonvolatile storage units mimicking memory. Many server installations have solid-state drives as a form of cache in which data is copied and stored for a temporary period as a task is being executed.
Facebook has floated the idea of using SSDs as a replacement for DRAM. HSA Foundation backers also include ARM, Sony, MediaTek, Qualcomm, Samsung, Texas Instruments, LG Electronics, Imagination Technologies and ST Ericsson. Intel is not a member of the HSA Foundation and is using its own co-processors, compilers and programming tools to accompany its chips. The idea of shared memory resources is also being chased by AMD rival Nvidia, which is not a member of the HSA Foundation. Nvidia next year plans to release a graphics processor based on the Maxwell architecture, which will unify GPU and CPU memory. The GPUs will be able to address CPU memory and vice versa, and applications will be easier to write with unified memory resources. Smartphones and tablets could get unified memory with Nvidia's upcoming Tegra 5 processor code-named Logan, which will have a graphics processor built on the Maxwell architecture and also support CUDA, which is Nvidia's proprietary set of tools for parallel programming. HUMA is compatible with popular programming languages such as C, C++ and Python, and multiple operating systems, AMD said.
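Since Python is among the languages mentioned as compatible, the practical difference HUMA is after, letting another compute unit work on data in place rather than copying it into a separate memory pool, can be loosely illustrated with Python's buffer protocol. This is only an analogy for the copy-versus-share distinction, not HSA or HUMA code.

# Analogy only: bytes(buf) models the traditional copy into a separate memory pool,
# while memoryview(buf) gives a second consumer zero-copy access to the same bytes.
import time

buf = bytearray(50_000_000)            # ~50 MB of data produced by the "CPU" side

t0 = time.perf_counter()
copied = bytes(buf)                    # copy-based model: duplicate the data
t1 = time.perf_counter()
shared = memoryview(buf)               # shared model: a view over the same memory
t2 = time.perf_counter()

print(f"copy:  {t1 - t0:.4f} s")
print(f"share: {t2 - t1:.6f} s")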
<urn:uuid:94e919e6-3a93-49b3-992c-0b7b0d11a9af>
CC-MAIN-2017-09
http://www.cio.com/article/2386313/data-center/consortium-takes-steps-to-break-multicore-programming-barriers.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171932.64/warc/CC-MAIN-20170219104611-00067-ip-10-171-10-108.ec2.internal.warc.gz
en
0.942426
901
2.625
3
Scientists in Germany and The Netherlands have determined that by using tungsten and silicon nitride as a storage medium, they can store data that will last from a million to a billion years. They propose using the medium to retain data on human physiology that could be seen long after the human race is extinct. The nearly indestructible disks were created by the MESA Institute for Nanotechnology at the University of Twente in The Netherlands and The Freiburg Institute for Advanced Studies in Freiburg, Germany. The disk consists of a 338-nanometer (nm) thick layer of silicon nitride on top of a silicon wafer. A 50 nm layer of tungsten is patterned into QR (Quick Response) codes using optical etching lithography on top of the silicon nitride wafer. The QR codes, which had line widths of about 100nm, were etched using an argon ion laser beam. A 224nm layer of nitride was then deposited on top of the tungsten patterns to help protect them. To give some idea of how thin the materials used for the storage medium are, an 8.5-by-11-in. sheet of paper is about 100,000 nanometers thick. The features are so small that the wafers can only be read with the use of a microscope. The researchers chose QR codes, which are commonly used in consumer advertising today, because they can be easily read by machines using simple scanners. For example, QR codes can be read by today's camera-enabled smartphones. The materials used are highly resistant to heat. Silicon nitride has a melting point of 3,452 degrees Fahrenheit, and tungsten has a melting point of 6,170 degrees Fahrenheit. What the researchers created was a write-once, read many (WORM) disk platter that was nearly indestructible. WORM technology was used because the project's intent was to create an indelible record that would stand the test of time over millions of years, even after the human race may have become extinct. The storage medium was made for the Human Document Project, an initiative started in 2002 by a consortium of European institutions. The project's purpose was to create a digital library that would stand the test of time after human beings either migrated away from Earth or became extinct. To prove the disk could weather the test of time, the scientists etched the microscopic QR codes into the tungsten surface of the disk and then heat tested them to simulate age and what might happen to the material if, for example, it were in a building fire. Initial calculations showed it is possible to store data for more than a million years and up to a billion years, the researchers stated in a published paper. The researchers first age-tested the disk by heating it to 445 Kelvin (341 Fahrenheit) for one hour. "We observe no visible degradation of the sample, which indicates that this sample would still be error free after 1 million years," the researchers stated. The disk was then tested at up to 763 Kelvin (913F) for two hours, which caused some degradation. As the temperature was increased, the top layer of silicon nitride cracked and reduced the number of readable QR codes. While not readable by the QR code algorithm, the QR codes themselves were not "visibly" damaged and the tungsten was still present in the material. Overall, the QR codes lost about 7% of their readable data at higher temperature tests, the researchers said. "The misreading of the information is caused by the readout using an optical microscope without a monochromatic light source.
The images are taken using a top mounted camera and contain a multitude of colors, caused by the variation in [the silicon's] thickness due to the cracking," the researchers wrote. "The very simple detection software was unable to correctly assign a black or white color to the multitude of colors caused by the cracking of the top silicon-nitride layer." This article, Researchers create indelible record on mankind for aliens to someday find, was originally published at Computerworld.com. Lucas Mearian covers storage, disaster recovery and business continuity, financial services infrastructure and health care IT for Computerworld. Follow Lucas on Twitter at @lucasmearian or subscribe to Lucas's RSS feed. His e-mail address is [email protected].
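For readers curious how a one-hour oven test can translate into a million-year claim, the arithmetic behind such statements is typically an Arrhenius-style accelerated-aging extrapolation. The sketch below shows the form of that calculation; the activation energy is an assumed placeholder, not a value taken from the researchers' paper.

# Arrhenius-style accelerated aging: illustrative only, with an assumed activation energy.
import math

K_B = 8.617e-5                       # Boltzmann constant in eV/K
E_A = 2.0                            # assumed activation energy in eV (placeholder)
T_TEST, T_STORE = 445.0, 298.0       # oven test temperature vs. room temperature, in K

acceleration = math.exp((E_A / K_B) * (1.0 / T_STORE - 1.0 / T_TEST))
equivalent_years = 1.0 * acceleration / (24 * 365)   # 1 test hour scaled to years
print(f"acceleration factor ~{acceleration:.1e}, i.e. ~{equivalent_years:.1e} years")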
<urn:uuid:82411b78-04bd-4757-a5f7-068c429fcc16>
CC-MAIN-2017-09
http://www.computerworld.com/article/2485999/data-center/researchers-create-indelible-record-on-mankind-for-aliens-to-someday-find.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172447.23/warc/CC-MAIN-20170219104612-00243-ip-10-171-10-108.ec2.internal.warc.gz
en
0.9441
900
4.09375
4
Drones don't normally need wheels, but they can come in handy on upside-down roads. That's just what the underside of a bridge is, and Fujitsu wants to use drones with large wheels to scoot along them while checking for cracks and other wear and tear. At a technology expo in Tokyo on Tuesday, the electronics maker showed off a prototype quadcopter that could help reduce bridge inspection and maintenance costs. The drone looks a bit like a space station from science fiction. It consists of a central unit with four rotors, a high-definition camera and sensors including gyroscopes. On either side of this central unit are two large fiberglass wheels. While it's aimed at streamlining maintenance, the drone's unique design could inspire new applications for unmanned aerial vehicles. The wheels serve to protect the drone from pipes and other obstacles when it flies up the walls of bridge supports and along the undersides of bridges. They also help keep the drone a fixed distance from the surfaces it is video recording. The fisheye camera can capture close-up views of bridges and resolve cracks as small as 2mm wide. The imagery is relayed to tablets or remote servers and used to create detailed 3D models of bridge pillars and undersides for remote inspection by engineers. If they spot a worrisome crack, they can then check out the structure in person. "This could be useful because Japan has 700,000 bridges over 2 meters high and the government stipulated that they must be inspected every five years," said Hiroshi Haruyama, a Fujitsu senior manager. Weighing between 2 and 3 kilograms, the prototype was developed with the state-backed New Energy and Industrial Technology Development Organization and Nagoya Institute of Technology. The group plans to improve its 15-minute battery life and add more user-friendly control methods such as video game controllers. Fujitsu plans to continue trials of the drone before commercialization, which would initially target expressway operators in Japan.
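Whether a camera can actually resolve a 2mm crack comes down to how much surface each pixel covers at the drone's stand-off distance. The numbers in this sketch (pixel pitch, focal length, stand-off) are assumptions chosen for illustration, not Fujitsu's specifications.

# Ground-sample distance: surface distance covered by one pixel at a given stand-off.
# All camera parameters below are illustrative assumptions, not Fujitsu's specs.
PIXEL_PITCH_M = 1.55e-6     # sensor pixel size
FOCAL_LENGTH_M = 3.0e-3     # short fisheye focal length
STANDOFF_M = 0.5            # distance the wheels hold the camera from the surface

gsd_m = STANDOFF_M * PIXEL_PITCH_M / FOCAL_LENGTH_M
pixels_across_crack = 0.002 / gsd_m
print(f"~{gsd_m * 1000:.2f} mm per pixel, so a 2 mm crack spans ~{pixels_across_crack:.1f} pixels")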
<urn:uuid:bf8d42b0-6a0e-4587-b6b3-44942b7760e0>
CC-MAIN-2017-09
http://www.itnews.com/article/2921455/fujitsu-drone-uses-wheels-to-roll-along-bridges-walls.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170993.54/warc/CC-MAIN-20170219104610-00539-ip-10-171-10-108.ec2.internal.warc.gz
en
0.942934
419
2.9375
3
By: Tina Rose
Microsoft Underwater Data Center
"The sea is everything" -Jules Verne
Microsoft, believing that the sea holds the key to its future, has tested a self-contained data center that operates far below the surface of the ocean. The key to this study is the millions it could save on the industry's most expensive problem: air-conditioning. Thousands of computer servers generate a lot of heat, and keeping them running effectively and efficiently is the reason for considering water as a cooling medium. Too much heat causes servers to crash; running servers underwater could not only cool them, but potentially let them run even faster. Code-named Project Natick, the effort might lead to giant steel tubes running fiber optic cables along the ocean floor. Another option would be to capture ocean currents with smaller turbines, encapsulated in small jellybean-shaped housings, to generate the electricity needed for cooling. With the exponential growth of technologies including the Internet of Things, demand for centralized computing will only grow in the future. With more than 100 data centers currently, Microsoft is spending more than $15 billion to add more to its global data systems. While Microsoft is looking to underwater locations to meet its growing computing needs, other companies have found their own unusual locations and ways to build data centers while taking advantage of differing resources. The SuperNap Data Center, a $5 billion, 2 million square foot facility in Michigan, is located in the former Steelcase office building. Switch built the SuperNap Data Center in Grand Rapids within the 7 story pyramid shaped building that features a glass and granite exterior. It will be one of the largest data centers found in the eastern U.S. Nautilus Data Technologies has also turned to the sea, developing floating data centers, and recently announced its first project, the Waterborne Data Center. The company believes its approach to cooling will cut into the more than $13 billion Americans currently spend each year. According to Arnold Magcale, CEO and co-founder, Nautilus Data Technologies, "The Nautilus proof of concept prototype exceeded all expectations – validating how our waterborne approach will provide the most cost effective, energy efficient and environmentally sustainable data center on the market." At a more clandestine location that also uses water as a cooling mechanism, Academica designed a hidden underground data center that uses pumped seawater to cool the servers. An added bonus is that the heat generated from the cooling process provides heat to over 500 local homes before the water is returned to the sea.
"The sea is only the embodiment of a supernatural and wonderful existence." -Jules Verne
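A common way to frame the cooling burden driving these projects is power usage effectiveness (PUE), the ratio of total facility power to the power that actually reaches the IT equipment. The figures below are illustrative only, not numbers from Microsoft, Switch, Nautilus or Academica.

# Power Usage Effectiveness: total facility power divided by IT power (illustrative figures).
def pue(it_kw, cooling_kw, other_overhead_kw):
    return (it_kw + cooling_kw + other_overhead_kw) / it_kw

print(round(pue(1000, 600, 100), 2))   # conventionally air-cooled facility -> 1.7
print(round(pue(1000, 150, 100), 2))   # water-assisted cooling             -> 1.25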
<urn:uuid:0142a579-93da-4978-a737-6d2ec417701d>
CC-MAIN-2017-09
http://nautilusdt.com/2016/02/01/microsoft-underwater-data-center-to-be-tested/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171646.15/warc/CC-MAIN-20170219104611-00587-ip-10-171-10-108.ec2.internal.warc.gz
en
0.934735
562
3.109375
3
American IT departments' decisions could inadvertently put organizations at risk of an information security breach if they don't have sufficient protocols for the disposal of old electronic devices. Even those with established processes could unwittingly initiate a security leak if they rely on wiping or degaussing hard drives, or handing over their e-waste to an outsourced recycler. Worse yet, some organizations might be stockpiling old technology with no plan at all. Despite the many public wake-up calls, most American organizations continue to be complacent about securing their electronic media and hard drives. Processes and protocols surrounding the destruction of electronic devices have been slow to adapt to the new reality: that businesses large and small are increasingly dependent on digital information. Congress is hoping to hold businesses accountable for the protection of confidential information with the introduction of the Data Security and Breach Notification Act of 2013, which will require organizations that acquire, maintain, store or utilize personal information to protect and secure this data. However, legislation only goes so far, and American organizations of all sizes must be more vigilant to protect themselves from a data breach that could damage their bottom line, with the prospect of losing revenue, reputation or clients. To mitigate the risk of fraud, businesses should consider the following tips:
Think prevention, not reaction. There is no one-size-fits-all data protection strategy. Develop preventative approaches that are strategic, integrated and long-term, such as eliminating security risks at the source and permanently securing the entire document lifecycle in every part of your organization;
Be security savvy. Put portable policies in place for employees with a laptop, tablet or smartphone to minimize the risk of a security compromise while travelling;
Protect electronic data. Ensure that obsolete electronic records are protected as well. Simply erasing or degaussing a hard drive or photocopier memory does not remove information completely—physically crushing the device is the only way to ensure that data cannot be retrieved;
Create a culture of security. Train all employees on information security best practices to reduce human error. Explain why it's important, and conduct regular security audits of your office to assess security performance.
"For every desktop computer, printer or mobile device purchased, there should be a secure disposal plan for outgoing technology," said Michael Collins, Shred-it Regional Vice President. "More often than not, those devices are loaded with sensitive company or customer information that is recoverable if the hard drives aren't physically destroyed."
<urn:uuid:f0c538ef-8731-4a0a-9c0d-6d6e53fb0a2e>
CC-MAIN-2017-09
https://www.helpnetsecurity.com/2013/12/10/inadequate-electronic-disposal-protocols-can-lead-to-security-leaks/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171646.15/warc/CC-MAIN-20170219104611-00587-ip-10-171-10-108.ec2.internal.warc.gz
en
0.917358
511
2.59375
3
Formatted copying allows the type, number, and order of columns in the data file to differ from the table. By specifying a list of columns and their types in the COPY statement, you instruct Ingres to perform a formatted copy. The COPY statement list specifies the order and type of columns in the data file. Ingres uses the column names in the list to match up file data with the corresponding columns in the table. For human readable text data files, the COPY list formats will almost always be a character type: char, c, text, or less commonly varchar or byte. The COPY statement converts (character) file data into table data types for COPY FROM, or the reverse for COPY INTO. The COPY list may contain other types as well, such as integer or decimal, but these are binary types for special programming situations; they are not human readable types. COPY also supports a "dummy" type, used to skip input data (FROM) or insert fixed output text (INTO). If some table columns are not listed in the COPY list for a COPY FROM, those columns are defaulted. (If they are defined in the table as NOT DEFAULT, an error occurs.) If some table columns are not listed for a COPY INTO, those table columns simply do not appear in the output data file. The order of columns in the table need not match the order in the data file. Remember that the order of columns in the COPY list reflects the order in the data file, not the order in the table. Additionally, a table column may be named more than once. (For COPY FROM, if a column is named multiple times, the last occurrence in the COPY list is the one that is stored into the table. Earlier occurrences undergo format conversion, but the result is discarded.) Special restriction: If the table includes one or more LONG columns (such as long varchar or long byte), columns cannot be reordered across any LONG column. For example, if the table contains (int a, int b, long varchar c), a COPY statement could use the order (b,a,c); but a COPY statement asking for (a,c,b) would be illegal (you cannot move column b to occur after the LONG column c). The values in the data file can be fixed-length or variable-length. Values can optionally be ended with a delimiter; the delimiter is specified in the COPY list. COPY can also process a special case of delimited values, the comma separated values (CSV) delimiting form. Note: If II_DECIMAL is set to comma, be sure that when SQL syntax requires a comma (such as a fixed-length COPY type), the comma is followed by a space. For example:
COPY TABLE t (col1=c20, col2=c30, d0=nl) INTO 't.out':
<urn:uuid:c98f30b4-25d4-4d6b-aaaf-3f03933e15f0>
CC-MAIN-2017-09
http://docs.actian.com/Ing_QUELRef/06_QUEL-EQUELStatements.6.035.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170569.99/warc/CC-MAIN-20170219104610-00232-ip-10-171-10-108.ec2.internal.warc.gz
en
0.801306
624
3.25
3
Ford is enlisting top U.S. universities to make self-driving cars a reality, announcing Wednesday that it hopes researchers at the Massachusetts Institute of Technology can come up with advanced algorithms to help vehicles learn where pedestrians and other automobiles will be located. "We're using data from the sensors both on board and off board," said Jonathan How, director of the MIT-Ford Alliance and a professor of aeronautics at MIT. He said that the system isn't just using the car's Lidar system, which captures a 3D view of its surroundings using spinning lasers, but crosswalk signs and traffic lights as well. See a demonstration of self-driving cars in a video on YouTube. If the car knows whether a traffic light is red or green or whether a crosswalk sign is illuminated, it will have even more information than what is collected by the car's sensors. It might also be able to avoid an accident with a car that runs a red light. "Having the sensors work in all conditions are issues that are fundamental to the problem," he said. How said the sensors need to work in daylight, darkness, snow, rain and other weather conditions with the same reliability. While the ultimate goal is to have an automated, driverless car, MIT is taking smaller steps to get toward that goal. How and his team of students are working to bring autonomous shuttles to the MIT campus in Cambridge. He said he hopes to have golf-cart size prototypes on campus later this year, moving to something more permanent in the future. He said that the vehicles would have "safety drivers" just in case. "The goal is to basically have a mobility on demand system," he said. Within two years, How imagines that all of the university's campus would be covered and students could order shuttles using an application on their smartphones. On the west coast, Stanford University researchers are tasked with helping cars see around obstacles. For example, when a vehicle is blocked by a large truck, it would be able to maneuver inside the lane to see what is beyond the obstruction. It would be able to take actions based on what it learns. Maybe it would wait for the obstruction to clear or it might determine that it is safe to pass the truck. Automated driving is part of what Ford calls its Blueprint for Mobility, which attempts to imagine the roads in 2025 and beyond. The company said that it is exploring all of the different components of automated driving, including both the technological and legal hurdles.
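The simplest possible baseline for "where will the pedestrian be" is a constant-velocity prediction. The MIT work is far more sophisticated, but the toy sketch below shows the shape of the problem.

# Toy constant-velocity predictor: not the MIT algorithms, just the baseline idea.
def predict(position, velocity, seconds):
    x, y = position
    vx, vy = velocity
    return (x + vx * seconds, y + vy * seconds)

# A pedestrian 10 m ahead and 3 m to the side, drifting toward the lane at 1.2 m/s.
print(predict((10.0, 3.0), (0.0, -1.2), 2.0))   # -> (10.0, 0.6): near the lane within 2 s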
<urn:uuid:18606dab-f45d-4e73-bbb5-742d89e75c9a>
CC-MAIN-2017-09
http://www.cio.com/article/2379373/automotive/ford-enlists-mit--stanford-to-drive-automated-cars.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170569.99/warc/CC-MAIN-20170219104610-00232-ip-10-171-10-108.ec2.internal.warc.gz
en
0.971606
508
2.9375
3
To support you in managing your network, establishing security levels, and controlling system access, we've put this mini tutorial together for you. The network administrator tutorial has been designed to support you with your network security and network management needs. Let this tutorial be your one-stop resource centre, containing information and advice for network administrators. This network administrators' tutorial covers network security and network management.
Table of contents:
Network security refers to the procedures that a network administrator adopts to prevent and monitor any unauthorised access to a computer network. This also includes any misuse of network accessible resources. Network security measures can deny access to, and prevent unauthorised modification of, the network.
Data breach at York University highlights urgency of network security checks, says ICO
A data breach at York University brought to light the urgent need for network security checks, according to the ICO. Once the University had completed its IT project, the IT department did not test the security of its IT system. As a network administrator you will want to find out how this cost them.
Network security case study: College's NAC virtual appliance makes grade
This network security case study reveals how a NAC virtualisation appliance blocks malware for Wellington College. The network administrator's chosen solution also provides increased capacity on demand.
With UTM system, Blackpool Council trims network security costs
Blackpool Council installed a UTM system in a bid to cut costs and strengthen its network security measures. The Council was faced with Conficker on its network, and a dwindling budget. Find out how the network administrator managed to turn things around and stay within budget.
Network intrusion detection and prevention systems guide
Network attacks are evolving, so your network security detection systems must evolve too. Read this mini guide to find out how network intrusion prevention should include anomaly detection and application awareness.
When to update wide area network security architecture
As a network administrator you need to bear in mind that not all traffic is destined for the data centre alone. Learn about updating your network's security architecture.
Network disaster recovery plan template
If you're lacking a network disaster recovery plan, you can download our free template to help you get started. Find out how to ensure your network's security with a network disaster recovery plan.
Network management refers to the processes the network administrator uses in maintaining the operation and administration of an IT department's network systems. Network management covers the methods and tools that relate to the operation, maintenance and provisioning of networked systems.
Upgrade Ethernet to fabric for cloud computing
Previously, network administrators have been able to optimise network management by tweaking performance at the core of the network. However, Marcus Jewell, Brocade's regional sales director, believes virtualisation means network traffic becomes unpredictable.
Networking basics: The importance of strong networking support
It is vital that the data centre is up to the challenge of virtual networking when enhancing server and storage systems. Read why virtual network support shouldn't be overlooked in your network management strategy.
Unified network management: Wired and wireless together for good?
Read how these two organisations managed to improve reliability and security through their integrated network management vision. Both organisations have put steps in place to reach a fully integrated network. But will the network administrators complete their missions?
Using network management and monitoring systems across IT and buildings
Bolton College decided to link up its campus network to its building facilities. Find out the reasons behind the move and read why the network administrator is using a monitoring and network management system to see both IT resources and buildings.
Network DR plan basics
If you're not sure what to include in your network disaster recovery (DR) plan, these DR plan basics may give you an idea as to what to document. In the event of a disaster, a good set of network management documentation can be valuable to a network administrator.
More on network administration and network security
Brocade's network hardware price model: Pay-as-you-go
Can your security strategy handle networked facilities management?
Bring-your-own-device programs gain traction, vendors respond
Xsigo evolves I/O virtualisation into data centre fabric
HP upgrades data centre network architecture, ships in the UK
<urn:uuid:e63ebe36-d7b1-40fc-b857-fabdaaaafb26>
CC-MAIN-2017-09
http://www.computerweekly.com/guides/The-network-administrators-guide
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170569.99/warc/CC-MAIN-20170219104610-00232-ip-10-171-10-108.ec2.internal.warc.gz
en
0.902665
873
2.609375
3
The Netflix-backed Encrypted Media Extensions (EME) proposal, and recent revelations that requirements for DRM in HTML5 are confidential, have generated furor among advocates of the Open Web. Let's cut through the hyperbole. Digital Rights Management (DRM) describes a class of technologies designed to prevent unauthorized copying and playback of digital media. Content providers that favor DRM claim that it's necessary to prevent copyright infringement. The Electronic Frontier Foundation (EFF) and the Free Software Foundation (FSF) claim that DRM is an anti-competitive practice that more often inconveniences legitimate users of such media. The fight over openness vs. protection of copyright holders has been a contentious issue for as long as digital media has been around. Owing largely to the Digital Millennium Copyright Act, powerful content creators such as movie studios and television networks currently hold the upper hand. The DMCA, which became law in the U.S. in 1998, made it illegal to disable DRM systems or to spread information about how to disable them. Groups opposed to DRM claim that it doesn't work and only gives corporations control over how people can use content, playback devices and computers after they've been legitimately purchased. DRM's main problem: It makes it impossible to let users play digital media and also prevent its reproduction. Even something as crude as pointing a camera at a video playing on a computer screen defeats DRM, and there's really no way to prevent that. More sophisticated tools for defeating DRM exist, of course - but, thanks to the DMCA, it's probably illegal for us to tell you about them. Proponents Say EME Protects Embedded Web Content in HTML5 Era The latest development in the battle over DRM is a draft W3C standard put forwarded by Netflix, Microsoft and Google called Encrypted Media Extensions. EME describes an application programming interface (API) that lets Web applications interact with content delivery modules. These modules may be built into Web browsers, operating systems, computer firmware or hardware, or they may be distributed separately. Content delivery modules work like plug-ins, such as Adobe Flash and Microsoft Silverlight, to enable specific capabilities in Web browsers. They are distributed and then must be downloaded to render certain content formats for display. In that they are external to the browser and interact with the browser in a standardized way, content delivery modules actually aren't that different from browser extensions, or plug-ins. The difference is that they interact with the browser in a specific way for a specific purpose - namely, playback of media that may or may not be encrypted. According to its sponsors, EME is necessary because the HTML5 video and audio elements, which enable progressive playback in Web browsers without plug-ins, currently lack means to prevent users from downloading, editing, inspecting or copying embedded Web content. Opponents of DRM argue that, except for content provided through plug-ins, the Web has always been and must always be open. The Web succeeds because users and developers are able to link, share, download, view source and even mash up content. The Web is fundamentally free and open by design to - as the W3C says in its mission statement - make the benefits of the Web "available to all people, whatever their hardware, software, network infrastructure, native language, culture, geographical location or physical or mental ability." 
[ Related: Mozilla CEO Stumps for Openness on Mobile Web ] Given this mission, it surprised many when Tim Berners-Lee, the director of the W3C, announced in October 2013 that playback of protected media was "in scope" for the HTML Working Group. Even more surprising, and infuriating to many, was the revelation from Netflix in January 2014 that its requirements for an acceptable DRM module were confidential - leaving the W3C in the position of creating a standard to satisfy requirements that it can't know. Recognizing that DRM is a highly contentious issue, the EME standard attempts to distance itself as much as possible from any particular DRM system. Supporters of EME emphasize that it doesn't actually implement DRM in browsers; it just lets browsers interact with DRM systems without the use of plug-ins. DRM, they argue, is already widely used on the Web. EME makes DRM more seamless and enables the playback of protected content on devices, such as smartphones and tablets, which don't support Flash or Silverlight. With Silverlight no longer under active development and Flash no longer available for mobile devices, a more universal Web video playback method is necessary, according to major content providers. Microsoft Internet Explorer 11 and Google Chrome already support DRM through EME. Chrome supports the Widevine Content Decryption Module, and Microsoft supports its own PlayReady DRM. The Apple Safari browser will likely support EME in the near future. Mozilla has stated its opposition to EME but has recently conceded there will be support for DRM in future Firefox versions anyway, for fear of losing users to other browsers if it doesn't play along. Opponents Say EME Will Further Restrict Web Use As if to demonstrate that the idea of anti-features is no longer off the table, a W3C community group formed last year around the idea of hiding Web application source code. The group serves as a demonstration of just how unpopular the idea is: There's been close to no discussion within the group, and most (if not all) members are known to be opposed to the founding idea of the group. Another popular argument in favor of DRM is that Hollywood will pack up and leave the Web if EME (or something similar) isn't implemented in browsers. However, it's also been pointed out that Hollywood needs the Web more than the Web needs Hollywood. The fact that the music and software industries have largely given up on DRM points to the possibility (and some would say inevitability) of a solution to the problem of piracy that avoids the use of plugins as well as the "baking in" of DRM into Web browsers. In the music industry, for example, record labels initially pushed for (and got) DRM in Apple iTunes. However, after Amazon started releasing DRM-free MP3s, every record label eventually dropped DRM on iTunes in 2009. The inconvenience of recording streaming audio, compared with the relatively low price of purchasing music, may be largely what makes DRM-free music viable. One example that indicates that people will pay for DRM-free video content is comedian Louis C.K.'s tremendous success with selling high-quality, DRM-free video downloads directly on his website. His " Live at the Beacon Theater" video netted more than $1 million from individual $5 downloads. Despite this success, the idea of DRM-free content for higher-value content that has been traditionally distributed using DVDs still worries motion picture studios. 
One thing is for certain: The gears of the W3C turn slowly, and they seem to be turning exceptionally slowly for EME. Perhaps a lengthy standardization process can give both sides a chance to work out their differences and come to an agreement. That said, it's far more likely that a completely different solution will arise in the meantime. Chris Minnick runs a Web design and development company and regularly teaches HTML5 classes for Ed2Go. Ed Tittel is a full-time freelance writer and consultant who specializes in Web markup languages, information security and Windows OSes. Together, Minnick and Tittel are the authors of the forthcoming book Beginning Programming with HTML5 and CSS3 For Dummies, as well as numerous other books. Read more about the internet in CIO's Internet Drilldown. This story, "Can Digital Rights Management and the Open Web Coexist?" was originally published by CIO.
<urn:uuid:f4559398-debf-47a3-be86-3f91fcaf9ea9>
CC-MAIN-2017-09
http://www.networkworld.com/article/2358134/data-center/can-digital-rights-management-and-the-open-web-coexist-.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171670.52/warc/CC-MAIN-20170219104611-00284-ip-10-171-10-108.ec2.internal.warc.gz
en
0.940892
1,568
2.65625
3
For those of you following the D-Wave story, the designers of “the world’s first commercial quantum computer” have published a revealing blog entry detailing the company’s latest achievements. When Canadian startup D-Wave announced back in 2007 that it had developed a prototype quantum computing machine suitable for running commercial applications, the technical community paid attention, yet many were skeptical of the claim. Much of the debate has centered on the semantics of the term “quantum computing” and exactly what that means, but the folks at D-Wave were not easily discouraged. Within a few years, they had sold systems to Lockheed Martin and a NASA-Google collaboration. In May 2013, the Quantum Artificial Intelligence Laboratory, shared by NASA, Google, and Universities Space Research Association (USRA), took delivery of the D-Wave Two Computer backed by the 509-qubit Vesuvius 6 (V6) processor. Since the system went online, it has operated around-the-clock at nearly 100 percent usage – with the majority of time being spent on benchmarking. According to a recent blog post from D-Wave founder and chief technology officer Geordie Rose, six “interesting findings” have arisen as a result of this extensive benchmarking period. Rose notes that while some of these results have been published, he wants to provide his own take on what it all means. The six findings are as follows: - Interesting finding #1: V6 is the first superconducting processor competitive with state of the art semiconducting processors. - Interesting finding #2: V6 is the first computing system using ideas from quantum information science competitive with the best classical computing systems. - Interesting finding #3: The problem type chosen for the benchmarking was wrong. - Interesting finding #4: Google seems to love their machine. - Interesting finding #5: The system has been running 24/7 with not even a second of downtime for about six months. - Interesting finding #6: The technology has come a long way in a short period of time. Rose provides further thoughts on each of these, but #1 and #4 are especially telling. With regard to the first point, Rose states that a recently published paper “shows that V6 is competitive with what’s arguably the most highly optimized semiconductor based solution possible today, even on a problem type that in hindsight was a bad choice. A fact that has not gotten as much coverage as it probably should is that V6 beats this competitor both in wallclock time and scaling for certain problem types.” Finding four is backed by a blog post that the Google team published last week. “In an early test we dialed up random instances and pitted the machine against popular off-the-shelf solvers — Tabu Search, Akmaxsat and CPLEX. At 509 qubits, the machine is about 35,500 times (!) faster than the best of these solvers,” writes the Google team. There was earlier discussion of a 3,600-fold speedup, but the Google rep explains that was on an older chip with only 439 qubits. “This is an important result,” Rose adds. “Beating a trillion dollars worth of investment with only the second generation of an entirely new computing paradigm by 35,500 times is a pretty damn awesome achievement.” As for the final point – the fast pace of the D-Wave technology – the CTO notes that all of these advances have been completed in the last year. 
In closing, he says “the discussion is now about whether we can beat any possible computer – even though it’s really only the second generation of an entirely new computing paradigm, built on a shoestring budget.” Rose expects that within the next few generations, the D-Wave processor will do just that.
<urn:uuid:61b3af79-a238-4842-bc46-b49389ad9726>
CC-MAIN-2017-09
https://www.hpcwire.com/2014/01/23/d-wave-aims-beat-classical-computer/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171971.82/warc/CC-MAIN-20170219104611-00460-ip-10-171-10-108.ec2.internal.warc.gz
en
0.958705
810
2.515625
3
If you use Wi-Fi on your laptop, there's an excellent chance you're using Atheros chipsets for your wireless networking. Atheros' silicon is in gear from Linksys, D-Link and Netgear, to name but a few vendors. However, although Atheros has been popular, the company hasn't always been friendly to open-source and Linux developers. That has been changing over the years, and now, thanks to Sam Leffler, noted open-source developer, the HAL (hardware abstraction layer) for Atheros' ath5k and ath9k chip families has been opened up. This is another major step in opening up hardware for Linux, FreeBSD, and the other open-source operating systems. Earlier this year, Atheros released an open-source driver for its latest 802.11n chipsets. While Atheros had long offered some support for Linux, it had always insisted on keeping its HAL code proprietary. Last year, an open-source alternative, OpenHAL, became available, but it wasn't completely compatible with the newer chipsets. Now, Leffler's efforts have led to an open HAL. Looking ahead, Leffler wrote, "This means that in the future all fixes, updates for new chips, etc. will need to be a community effort." According to Leffler, Atheros also stated, "the Linux platform will be the reference public code base so folks wanting to add support for other platforms will have to scrape the information from there." What it all boils down to for desktop Linux users is that you can look forward to being able to wirelessly network any Linux laptop or desktop without a second thought. It's another great day for Linux users.
<urn:uuid:f1670f4a-fa30-4e8f-a80b-d54c5288c7cd>
CC-MAIN-2017-09
http://www.computerworld.com/article/2479832/internet/atheros-wi-fi-goes-open-source--linux-friendly.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172831.37/warc/CC-MAIN-20170219104612-00636-ip-10-171-10-108.ec2.internal.warc.gz
en
0.95354
351
2.640625
3
Install the GNU ARM toolchain under Linux Embedded development tools for a popular processor If you are interested in embedded systems development on a widely used microprocessor, the Advanced RISC Machines (ARM) core fits the bill. This article provides a starting point for understanding the software side of embedded systems development by describing one set of commonly used tools: the GNU ARM toolchain. The ARM family Paramount among the concerns of embedded systems developers is how to get the most processing power from the least amount of electricity. The ARM processor family has one of the better designs for balancing processor power and battery power consumption. The ARM core has gained technology advances through several versions over the last 20 years. Recent system on a chip (SoC) processors running on mobile phones such as the T-Mobile G1 Android combine dual-core (ARM9 and ARM11) processors to improve the performance of multimedia applications on low-powered platforms. The more recent ARM cores support two operational states: ARM state, in which the core executes 32-bit, word-aligned instructions, and THUMB state, which executes 16-bit, halfword-aligned instructions. ARM mode provides the maximum power and addressing range capability of the processor, whereas THUMB mode allows portions of a program to be written in a very tight, memory-conserving way that keeps memory costs low. Switching between the modes is a trivial exercise, and for many algorithms, the size of the code required can be reduced significantly. ARM processors improve performance by taking advantage of a modified Harvard architecture. In this architecture, the processor employs separate data and instruction caches, but they "feed" off of the same bus to access external memory. Furthermore, the instructions are put into a five-stage "pipeline" so that parallel processing occurs on the five most recent instructions loaded into the pipeline. In other words, each of the five separate actions (fetch, decode, ALU, memory, write) involving instructions occur in parallel. So long as the pipeline is flowing steadily, your code enjoys the speed advantages of parallelism, but the moment a branch occurs to code outside the pipeline, the whole pipeline must be reset, incurring a performance penalty. The moral is to be careful in designing your code so that you use branching only minimally. A unique feature that the ARM architecture provides—forcing programmers to think in new and unique ways—is that every instruction can be optionally executed based on the current state of the system flags. This feature can eliminate the need for branching in some algorithms and thus keep the advantages of a pipeline-based instruction- and data-caching mechanism running at best performance (as branching can force the caches to be unnecessarily cleared). The GNU ARM toolchain Most ARM systems programming occurs on non-ARM workstations using tools that cross-compile, targeting the ARM platform. The GNU ARM toolchain is such a programming environment, allowing you to use your favorite workstation environments and tools for designing, developing, and even testing on ARM simulators. Hosted by CodeSourcery (see Related topics for a link), the GNU toolchain described in this article—also known as Sourcery G++ Lite—is available for download and use at no cost. All the tools but the GNU C Library are licensed under the standard GNU Public License version 3 (GPL3). The GNU C Library is licensed under the GPL version 2.1. 
Included in the GNU toolchain are the binary utilities (binutils), the GNU Compiler Collection (GCC), the GNU Remote Debugger (GDB), GNU make, and the GNU core utilities. Also included in the Sourcery G++ Lite package is extensive documentation of the GNU toolchain tools, including the GNU Coding Standards document—good reading all around! Under the documentation for the GNU assembler, as, you will find a lot of ARM-specific information: opcodes, syntax, directives, and so on.

Downloading and installing the GNU toolchain

To download the GNU toolchain, visit the CodeSourcery download site (see Related topics) and choose the IA32 GNU/Linux TAR file. Versions of the GNU toolchain are available for all the major client operating systems, but this article focuses on installing and using the Lite version of the toolchain under Linux®. Because the ARM design has progressed through different versions throughout its history, different C libraries for three of the most common versions of the processor design—ARM v4T, ARM v5T, and ARM 7—are included in the Lite package.

Next, use the bunzip2 command to extract the file into your home directory.

Listing 1. Extracting the downloaded GNU toolchain
$ bunzip2 arm-2008q3-72-arm-none-linux-gnueabi-i686-pc-linux-gnu.tar.bz2
$ tar -xvf arm-2008q3-72-arm-none-linux-gnueabi-i686-pc-linux-gnu.tar
. . .
(Files listed while being extracted from the archive.)
. . .
$

Now, modify your PATH environment variable to access the bin directory of the toolchain, and the tools are ready to use.

Configuring Linux to use the GNU toolchain

On the toolchain download page, you'll also find several useful PDF files—as well as a detailed Getting Started guide—that document the included tools. The instructions in this article are a condensed version that should get your toolchain up and running quickly. If you do a lot more ARM programming than Intel® processor programming, an alternative method is to add symbolic links to your /usr/local/bin directory that reference the tools in the toolchain bin directory. In this way, you can use shortcuts like as to run the tools interactively. Listing 2 shows an example of how to set up these symbolic links.

Listing 2. Setting up symbolic links to ARM tools
# cd /usr/local/bin
# which arm-none-linux-gnueabi-as
/home/bzimmerly/Sourcery_G++_Lite/bin/arm-none-linux-gnueabi-as
# ln -s /home/bzimmerly/Sourcery_G++_Lite/bin/arm-none-linux-gnueabi-as as
# ls -l as
lrwxrwxrwx 1 bzimmerly bzimmerly 76 2009-03-13 02:48 as -> /home/bzimmerly/Sourcery_G++_Lite/bin/arm-none-linux-gnueabi-as
# ./as --version
GNU assembler (Sourcery G++ Lite 2008q1-126) 2.18.50.20080215
Copyright 2007 Free Software Foundation, Inc.
This program is free software; you may redistribute it under the terms of
the GNU General Public License version 3 or later.
This program has absolutely no warranty.
This assembler was configured for a target of `arm-none-linux-gnueabi'.
#

Use the which command to locate the full path name of the tool to make it easier to copy and paste it into the ln command. Then, use the ln -s command to create the symbolic link. After that, verify that the link was successfully created by listing it, then running it. By creating a simple symbolic link to each of the tools in the GNU toolchain's bin directory, you won't have to type out the long tool names; as is much easier to type than arm-none-linux-gnueabi-as every time you want to run the assembler!
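Whichever route you take, PATH or symbolic links, a quick check confirms the cross tools are reachable before moving on. The lines below are a minimal sketch that assumes the toolchain lives in $HOME/Sourcery_G++_Lite, as in Listing 2; adjust the path to match wherever you actually extracted the archive.

$ export PATH=$HOME/Sourcery_G++_Lite/bin:$PATH
$ arm-none-linux-gnueabi-as --version
$ arm-none-linux-gnueabi-ld --version

If both commands print their version banners, the assembler and linker are on your path and ready for the example program that follows.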
Writing a program for the ARM architecture

Many guides are available for writing ARM programs in the ever-popular C programming language, but few are written that address pure assembler. I'm going to break tradition for the purpose of this article and write an example program in pure assembler—something usually considered the "black art" of programming. It is a simple "Hello World" type of program targeted to a unique version of ARM Linux. The example program described in this article is designed to run on the T-Mobile G1 mobile phone running Android Linux. It is generically written so that it should run on other ARM-based Linux platforms as well (making only standard system calls). However, the minimal version of the Linux kernel that you'll need to use is version 2.6.16, which allows you to use the new Embedded Application Binary Interface (EABI) kernel system calls. Note: To read a good article on programming the ARM for bare metal instead of under Linux, see the link to the Embedded.com article in Related topics.

Using your favorite editor, create a script called build using the code in Listing 3. This script runs the GNU ARM assembler, followed by the linker. The goal in creating this program was to make it into a very tiny executable, so I stripped it of all debugging information with the --strip-all linker switch. After creating the script, make it executable by issuing the chmod +x build command.

Listing 3. Build the ARM Linux Hello World application
#!/bin/bash
echo Building the ARM Linux Hello World...
arm-none-linux-gnueabi-as -alh -o hw.o hw.S > hw.lst
arm-none-linux-gnueabi-ld --strip-all -o hw hw.o

After this, create the source module, called hw.S, and put the code in Listing 4 into it.

Listing 4. ARM Linux Hello World
@filename: hw.S
.text
.align 2
.global _start

@ ssize_t sys_write(unsigned int fd, const char * buf, size_t count)
@            r7                 r0               r1          r2
_start:
    adr r1, msg        @ Address
    mov r0, #1         @ STDOUT
    mov r2, #16        @ Length
    mov r7, #4         @ sys_write
    svc 0x00000000

@ int sys_exit(int status)
@        r7        r0
    mov r0, #0         @ Return code
    mov r7, #1         @ sys_exit
    svc 0x00000000

.align 2
msg: .asciz "Hello Android!\n\n"

In GNU assembler parlance, the "at" sign (@) is used to designate line comments. The assembler ignores everything after @ up to the end of the line. The program uses two standard Linux system calls: sys_write and sys_exit. Above the assembler code for each of these calls is the C language equivalent of it in commented form. This makes it easier to see how the ARM registers map into the calling parameters that the system calls use. The rule to remember is that parameters go left to right into r0, r1 and r2, while r7 is special in that this is where you place the number of the system call being made. The svc 0x00000000 instruction tells the ARM processor to call the "supervisor," which in this case is the Linux kernel.

Testing the ARM program

The toolchain provides the ever-popular GDB for debugging low-level programs. When the program is targeted for a single-board computer with a JTAG or ICE unit attached, you can use the Sourcery G++ Lite debugger (gdb) to debug the ARM code remotely. If you wish to test the code as I did—on the Android Linux system running on a mobile phone—you need to attach the phone to the workstation using the USB cable that came with it, then use the Android software development kit's (SDK's) adb push command to transfer the program to the phone.
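To make the transfer step concrete, it might look like the following; adb is the Android SDK's debug bridge tool, and the destination matches the directory used in the next step (create it on the device first if it doesn't already exist).

$ adb push hw /data/local/bin/hw

The instructions that follow pick up from here, making the file executable on the phone and then running it.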
Once on the phone, in a directory that can contain executable code (/data/local/bin), make the program executable by issuing the chmod 555 hw command (the chmod command on Android doesn't accept symbolic modes such as +x, so use the octal form). Finally, use the adb shell command to connect to the phone, use cd to change to the correct directory, and run it with ./hw. If all goes according to plan, the program should respond as it did on my phone, by greeting you with "Hello Android!"

If this small taste of ARM assembly programming interests you, feel free to learn more about this processor design from the links provided in Related topics. The best resource for in-depth study of the ARM core is the Bible of ARM development, the ARM ARM, which is short for ARM Architecture Reference Manual.

For the career-minded, consider this: There are millions upon millions of mobile phones in the world today, with more being created each year. The state of the art has gotten to the point that we are actually able to carry around in our pockets a dual-core processor machine with multiple gigabytes of storage, fully networked into the Internet for instant information and entertainment. There is a crying demand among mobile phone vendors for talented programmers. With ARM as prevalent as it is, there's plenty of work to be done—and it's fun work, too! As always, feel free to have fun with the tools and enjoy yourself. Programming is part art, part science, and can be one of the most enjoyable careers in the business of technology.

- Visit the ARM Web site.
- Visit the GNU ARM Web site.
- From Embedded.com, check out the series on "Building Bare-Metal ARM Systems with GNU."
- Read about Qualcomm's dual-core SoC.
- Get the Sourcery G++ Lite package from the CodeSourcery download site.
- Download and read ARM ARM, the Bible of ARM programming.
- If you're more interested in embedded development for PowerPC, the nine-part developerWorks series "Migrating from x86 to PowerPC" takes you through the process of developing a vehicle-control device using the Kuro Box development board.
- In the developerWorks Linux zone, find more resources for Linux developers, and scan our most popular articles and tutorials.
- See all Linux tips and Linux tutorials on developerWorks.
<urn:uuid:afb72008-3bd4-4054-840b-4053b5aba209>
CC-MAIN-2017-09
http://www.ibm.com/developerworks/linux/library/l-arm-toolchain/index.html?ca=dgr-lnxw97ARM-Toolchain&S_TACT=105AGX59&S_CMP=grlnxw97
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169776.21/warc/CC-MAIN-20170219104609-00404-ip-10-171-10-108.ec2.internal.warc.gz
en
0.865105
2,959
3.34375
3
Offering convenience and ease of use that have revolutionized the way people use computers and networks, wireless networks have also complicated endpoint management and security. Wireless networks have earned a reputation for being difficult to monitor and administer, exposing organizations to a higher rate of infection from Trojans and viruses and incurring greater support costs than anticipated. Today, the term “wireless security” usually means technologies that prevent unauthorized or malicious users from connecting to a wireless network. Wireless security technologies inspire heated discussions about key negotiation and data encryption, as well as user and host authentication. While these mechanisms are vital components of a secure wireless architecture, they do little to guarantee the configuration and patch levels of the machines joining the wireless network, and little to reduce the likelihood of a legitimate user’s infected machine using the wireless connection to spread chaos throughout the production infrastructure. The real world limitations of “traditional” wireless security have been made abundantly clear during the past two years by the Blaster and the associated Windows RPC attacks, Sasser, the Agobot/Phatbot family of Trojans and other notorious Windows security incidents. As organizations quickly learned, neither encryption nor strong authentication defends an organization against Blaster and its ilk. In fact, relying solely on these mechanisms may actually make the organizational exposure worse because once these machines are authenticated, they typically have access to file shares and other network resources which can be leveraged by malicious code to spread infections. And if VPNs are used to provide access to remote users across public, insecure networks, they often unwittingly become the channel these mindless destructive exploits usurp to bypass firewalls and other perimeter defenses. New challenges also bring new opportunities. Many security architects and network administrators are using the rapid adoption of wireless connectivity to reduce these mobile computing risks, by supplementing their native wireless security mechanisms with endpoint configuration management and enforcement tools. These systems secure wireless networks by blocking access to the production environment until an endpoint has passed a security audit which validates the endpoint’s patch level, the presence and state of security tools and a variety of system configuration details. The endpoints gain access to production systems only after their compliance to security policy requirements has been verified by the audit. A number of commercial endpoint policy management and enforcement systems manage network access control levels dynamically, using the results of scans or agent-based audits, allowing administrators to easily apply the same endpoint security requirements across many different types of network access methods including wireless, VPN/remote access and traditional LAN switches. Administrators can use these systems to display and verify many details about the endpoint configuration, including the registry settings, operating system and application versions, anti-virus signatures and running network services and processes. In addition to access control, these offerings typically support a variety of configurable endpoint remediation options, ranging from message pop-ups on the endpoint system to redirecting the user to a Web server to automated patching without any user intervention. 
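As a concrete, if greatly simplified, illustration of the kind of audit these systems run, the following Python sketch checks whether an approved anti-virus process is running and reports the operating system version. The process names and the pass/fail policy here are hypothetical placeholders rather than any particular vendor's product, and a real agent would report its findings to a policy server instead of just printing them.

import platform
import psutil  # third-party library; assumed to be installed on the audited endpoint

# Hypothetical policy: at least one of these anti-virus processes must be running.
APPROVED_AV_PROCESSES = {"avp.exe", "mcshield.exe", "msmpeng.exe"}

def endpoint_passes_audit() -> bool:
    running = {p.info["name"].lower()
               for p in psutil.process_iter(["name"])
               if p.info["name"]}
    av_running = bool(APPROVED_AV_PROCESSES & running)
    print("Operating system:", platform.platform())
    print("Approved anti-virus running:", av_running)
    return av_running

if __name__ == "__main__":
    # A real NAC agent would hand this verdict to the enforcement point, which would
    # then grant access, quarantine the host, or trigger remediation.
    print("COMPLIANT" if endpoint_passes_audit() else "QUARANTINE")

Checking for a running process is only a starting point; production policies also verify signature freshness, patch levels, registry settings and the like, exactly as described above.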
This powerful combination of endpoint visibility and audit mechanisms, dynamic access enforcement and transparent remediation significantly reduces the chances that a rogue or infected PC will be able to compromise a production network through wireless (and other) links. While all network topologies will benefit from policy enforcement technologies, wireless networks gain some of the most significant advantages. Even relatively simple checks – like verifying that the anti-virus process is up to date and running – can greatly reduce the chances of a virus or Trojan penetrating the wireless infrastructure. And an enforcement mechanism that requires all laptops to have critical patches, up-to-date and running anti-virus programs, no file sharing and encrypted storage for corporate documents will greatly reduce the chances of a laptop leaking sensitive data when connecting to the corporate network wirelessly from hotspots at an airport lounge or coffee shop. Thus, the new wireless security paradigm starts at the endpoint, combining inspection and remediation tools with network-based dynamic access controls to let colleagues take full advantage of wireless network ease and convenience, while keeping competitors and other digital vermin out.
<urn:uuid:15a3a9a5-6253-4033-ae06-5510d6275d7e>
CC-MAIN-2017-09
https://www.helpnetsecurity.com/2005/04/25/wireless-security-starts-at-the-endpoint/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171070.80/warc/CC-MAIN-20170219104611-00280-ip-10-171-10-108.ec2.internal.warc.gz
en
0.923772
810
2.515625
3
Fujitsu Works to Find New Ways to Cool Small Mobile Devices An innovative, thin heat pipe that's less than 1 mm thick is being developed by Fujitsu to improve internal cooling in tomorrow's smartphones, tablets, laptops and other compact electronic devices. The low profile heat pipe, which fits inside a device and wicks heat away from heat-generating components that are inside is being developed by Fujitsu Laboratories, according to an April 14 report on Phys.org. Inside the heat pipe is a liquid that, when passing over the heat sources, turns into a vapor, which then turns back into a liquid as it is cooled, similar to the process used in air conditioning systems. "Smartphones, tablets, and other similar mobile devices are increasingly multifunctional and fast," the article stated. "These spec improvements, however, have increased heat generated from internal components, and the overheating of localized parts in devices has become problematic." To battle the worsening heat problems, Fujitsu's thin heat pipe is capable of transferring approximately five times more heat than current thin heat pipes, the story reported, making it possible for CPUs and other heat-generating components to run cooler and to avoid concentrated hot-spots inside devices. The heat pipe technology was detailed by Fujitsu at the Semiconductor Thermal Measurement, Modeling and Management Symposium 31 (SEMI-THERM 31) in March in San Jose, Calif. The idea of heat pipes are not new, but they continue to find new places where they can be featured, including this recent research by Fujitsu. So what's this mean for future smartphone, laptop, tablet and other mobile device owners? Well, it could mean that the devices we buy in the future could run cooler, a feature that is important for reliability, battery life and longevity. And ultimately it could also mean increased comfort when holding a very warm mobile device in one's hand. Research like this again shows the amazing nature of innovation among scientists and researchers who are seemingly always finding ways to solve some of the continuing challenges that affect devices we use every day. You certainly can't put a big cooling fan inside a thin device like a smartphone or tablet, so new fixes have to use creative thought processes. Fans won't fit? So what about a thin tube that circulates fluid which changes from liquid to vapor in a constant cycle, helping to remove heat and keep the device cooler? Very cool. To me, this idea for thin loop heat pipe innovations is very fitting this week during the 45th anniversary of the Apollo 13 spaceflight, when NASA mission specialists helped bring the crew home safely after a critical oxygen tank exploded on the way to the moon in April 1970. Incredible thinking by NASA during that amazing mission brought astronauts James Lovell, Fred Haise and Jack Swigert back to Earth after a huge mishap. Similar smart thinking around Fujitsu's new heat pipe could help keep tomorrow's mobile devices a lot cooler. And like the Apollo 13 wizardry that found a way to battle every obstacle during the mission, this is innovation at its finest.
<urn:uuid:80937afe-7323-410e-ae15-de9d77ec97b0>
CC-MAIN-2017-09
http://www.eweek.com/blogs/first-read/fujitsu-works-to-find-new-ways-to-cool-small-mobile-devices.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171706.94/warc/CC-MAIN-20170219104611-00632-ip-10-171-10-108.ec2.internal.warc.gz
en
0.947047
632
2.875
3
A Caterham F1 car at the Hungarian Grand Prix. How is Dell helping the team keep up with the competetion? Time is of the essence for all businesses, particularly when milliseconds can mark success. In Formula 1, the pressure for saving time is not just confined to the race track. IT also plays a vital role in a team’s performance and is instrumental to help shave tenths of a second off lap times. Understanding aerodynamics is the key to unlocking speed, and that’s where Caterham F1 Team’s Dell HPC comes in. Any team who tried to make a car without a HPC solution would be miles off the pace. The driver’s sheer speed and skill is not the only key element that comes into play for on-track success, and HPC technology is fundamental to climbing up the F1 grid. Sitting in a room which is strictly temperature controlled, the Dell HPC, powered by Intel, is the powerful beast behind the Caterham F1 Team in Leafield Technology Centre- but it seems even more significant in the context of how this type of technology influences Formula 1 racing today. F1 teams have long relied on wind tunnels for testing parts to find out if they are going to add performance to the car or not, yet the relentless advance of technology has seen those teams make use of Computational Fluid Dynamics (CFD) in tandem with wind tunnel work. CFD is often called ‘a wind tunnel in a computer’ and F1 teams rate this technology highly as it allows them to virtually test how modifications to a car could have an impact on the car’s aerodynamic performance and ultimately its speed. CFD starts with a process called meshing, which means virtually segmenting the car into millions of tiny cells. These squares are in turn halved into thousands and thousands of tiny triangles. At Caterham F1 Team, the teams’ engineers use CFD to draw detailed information on what is happening with the temperature, pressure, turbulence and velocity inside each triangle that covers the surface of the virtual car. William Morrison, Caterham F1 Team’s IT Infrastructure Manager, explains why this is so important: "CFD means you can look at the airflow over and through various components of the car, and this simulation allows us to filter so that we can test theoretical developments and match them up to wind tunnel results, before coordinating the two sets of results between each other." "By trying out a number of different ideas and testing whether they will work or not, it means we are able to carry out development work without actually having to make parts," Morrison explained. "To purely do development work in a wind tunnel would be very costly and time-consuming because of how long it takes to produce the wind tunnel models; the HPC allows a very quick turnaround of theoretical models before you decide whether to physically make them or not. It’s vital for a modern F1 team to have HPC simulation capability." The primary function of the supercomputer at Caterham F1 Team is to handle a huge number of of operations at once. "The HPC typically has about six ‘jobs’ going through it at any one time, which could last anything from six hours to two days each," says Morrison. "For an average 17-hour job the HPC will do approximately ten billion calculations." To put that in context, the average PC you might have at home to surf the internet would take between four and five months to get through that amount of maths. The calculations the HPC does are not simple either. 
Partial differential equations are the norm and they reveal everything about how air is flowing in and around the part being tested. Once the HPC has performed its ten billion calculations, it will spend the final two hours of a 17-hour job streamlining these to about 800 million pieces of individual data, which will then be presented to the CFD analysts in video, graph and picture form. "What we get from this is a different insight into the aerodynamics, because you’re getting a full 3D flow simulation," explains Morrison. "You’re seeing the complete flow structures of air off the body of the car." "It used to be that a typical job may be 17 hours," explains Morrison, "but this has recently come down to about 12. This has been done through enhancements to the model set-up, and optimising the way the model solves. There is constant work being done to improve the performance of the calculations so it’s quicker and more reliable. We have a little group of about three people dedicated to improving this all the time. We’re always looking to reduce the solve time because that means we can get more jobs through it per day." So given the importance of the work constantly going through the HPC, what happens in the event of an unwanted event like a power cut? "We have battery conditioning units so that there’s always conditioned power coming into the HPC, and if there’s a disruption to the external power supply we’ve got a generator outside which kicks in automatically," says Morrison. "If the HPC was running at full power and the electricity suddenly went off, the generator would be able to keep it going for a few hours." Saving time is a constant battle for any team in the F1 circuit and for Caterham F1 Team the Dell HPC is the nerve centre of the entire operation. Without it their car simply could not be designed or developed, which is why the HPC runs 24 hours a day, 365 days a year – always with a queue of between ten and twenty jobs waiting to be solved by powerful calculating prowess. "December is one of the busiest times of the year in Formula 1, so the HPC is still working flat out even on Christmas Day," laughs Morrison. "It’s not allowed any time off."
<urn:uuid:d9ad1d86-eefa-4af2-a988-212f3ba132e1>
CC-MAIN-2017-09
http://www.cbronline.com/news/enterprise-it/server/case-study-how-a-dell-supercomputer-is-helping-caterham-f1-team-maximise-performance
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173405.40/warc/CC-MAIN-20170219104613-00332-ip-10-171-10-108.ec2.internal.warc.gz
en
0.960739
1,232
2.734375
3
Programming Function Keys

Questions derived from the 1Z0-141 – Oracle Forms: Build Internet Applications Oracle Self-Test Software Practice Test.

Objective: Programming Function Keys
SubObjective: Define key triggers and their uses
Item Number: 1Z0-126.96.36.199
Multiple Answer, Multiple Choice

For which purposes would key triggers be used? (Choose two.)

A. Performing inserts
B. Navigation control
C. Field-level validation
D. Disabling function keys
E. Replacement of default functionality of function keys

Answer:
D. Disabling function keys
E. Replacement of default functionality of function keys

Key triggers would be used to disable function keys or replace the default functionality of function keys. Key triggers allow you to modify, replace or disable the standard functionality of a function key. A key trigger is a subprogram that executes when the function key associated with the trigger is pressed. For example, a Key-Delrec trigger could be created to replace the functionality of the Delete Record key, or a Key-Listval trigger could be created to execute additional PL/SQL code after an LOV is displayed. You should note that not all of the function keys defined within Forms can be redefined.

All of the other options are incorrect. Key triggers are not used to perform inserts, control navigation or perform validation. Validation triggers are used to perform validation, navigational triggers are used for navigation and transactional triggers are used to manipulate data.

1. Oracle Forms Online Help Designing Forms Applications – Triggers – Interface Event Triggers
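To make the explanation concrete, here is a minimal sketch of what the Key-Delrec trigger mentioned above might contain. It replaces the Delete Record key's default behavior with a confirmation prompt; the CONFIRM_DELETE alert is an assumed object that would need to be defined elsewhere in the form, while SHOW_ALERT, ALERT_BUTTON1 and DELETE_RECORD are standard Forms built-ins.

-- Key-Delrec trigger: ask for confirmation before performing the default delete.
DECLARE
  v_choice NUMBER;
BEGIN
  v_choice := SHOW_ALERT('CONFIRM_DELETE');  -- alert assumed to exist in the form
  IF v_choice = ALERT_BUTTON1 THEN
    DELETE_RECORD;  -- invoke the built-in the key would normally fire
  END IF;
END;

Pressing the Delete Record key now runs this block instead of the default action, which is exactly the replacement behavior the correct answers describe.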
<urn:uuid:4d67b8b2-9258-4d4c-a6cb-531e3efb7f3c>
CC-MAIN-2017-09
http://certmag.com/programming-function-keys/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171078.90/warc/CC-MAIN-20170219104611-00628-ip-10-171-10-108.ec2.internal.warc.gz
en
0.784762
322
2.671875
3
Hamburg, the second largest city in Germany, has just unveiled its Green Network Plan, a compilation of strategies designed to eliminate the need for cars in the city in the next two decades. Hamburg is made up of 40 percent green areas, gardens, parks and squares, and the new plan is designed to unite these areas in a way that will be completely accessible by foot or bike. “Other cities, including London, have green rings, but the green network will be unique in covering an area from the outskirts to the city center,” city spokeswoman Angelika Fritsch told The Guardian. “In 15 to 20 years, you’ll be able to explore the city exclusively on bike and foot.” The city also plans to utilize the green areas both to help absorb carbon dioxide and prevent flooding. Hamburg’s average temperature has increased about 34 degrees in the past 60 years and the sea level has risen about 7 inches. The city will work to unite each of the seven municipalities of the metropolitan region to ensure that all residents receive access to green pathways. Another area that has been taking initiative toward greener transportation is San Francisco with its Connecting the City project, which launched in 2011. The San Francisco Bicycle Coalition spurred the project, which aims to create 100 miles of cross-town bikeways by 2020, with three roadways receiving primary focus: the Bay to Beach, North-South and Bay Trail routes. The coalition aims for these three roadways to be bike friendly by 2015, with additional busy areas soon following suit. The goal is to continue substantially increasing the amount of people who choose to bike every day.
<urn:uuid:e032bd07-4079-49bd-928e-7d33f08020f8>
CC-MAIN-2017-09
http://www.govtech.com/health/Hamburg-Plans-Eliminate-Cars.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173866.98/warc/CC-MAIN-20170219104613-00028-ip-10-171-10-108.ec2.internal.warc.gz
en
0.95675
340
2.890625
3
Technology Trends and its Impact on Higher Education Instead of just one or two prominent technology trends, higher education has been transformed by the blending of the best of a number of technology trends. After more than 900 years of a traditional learning environment in which the teacher imparted knowledge to the student via face-to-face teacher-led discourse or lecture, the face of higher education has now been significantly altered in the last dozen years. Social media, mobile, and cloud have already impacted higher education in many significant ways and continue to change the landscape. Teaching modalities have already been changed and educators are continuing to push the limits of integrating newer technology in their respective fields, morphing traditional methods to newer flipped classes, adaptive learning, online learning, blended learning, and on-demand learning, including MOOCs. The emergence of Learning Management Platforms that leverage managed content and other educational resources, cloud, social media, and mobile technology has helped to accelerate the changes in the higher education landscape. Big data is expected to have a significant impact in the higher education research environment in the next few years. While commercial vendors such as Amazon and Walmart have been successfully using big data for their daily operations for many years, the use of big data in higher education is still in its infancy. Higher education institutions replete with faculty and staff skilled in high performance computing, data sciences, computer sciences, and advanced analytics, are well-positioned to leverage these talents to help lead researchers to discoveries in areas such as the sciences, medicine, business, and engineering fields. Teaching modalities have already been changed and educators are continuing to push the limits of integrating newer technology in their respective fields Additionally, while not quite big data by its strictest definition, advanced data analytics are being increasingly used by higher education institutions to assess the effectiveness of the emerging teaching and learning methods on the success of students in terms of classroom performance and graduation. As Learning Management Platforms mature, these new data points from the advanced data analysis can be integrated back into the system, thus allowing a more effective learning environment for students. Advanced data analytics also helps faculty, staff, and the institutions become more efficient and effective in the use of resources. Depending on the technology, certain disciplines are more impacted than others. For instance, 3D printing allows students and teachers from fields such as architecture, engineering, and design to create innovative new solutions with this new 3D modelling capability, and these technologies can then be incorporated into their instructional curriculum. Not surprisingly, some of the touted “new” technologies have fallen by the wayside. While a few of them have turned out to be much less than promised for higher education (e.g. Second Life), others such as netbooks, podcasts and flip cams, have since been superseded by newer technology trends. On the other hand, emerging technologies such as the Internet of Things (IoT) haven’t even scratched the surface in higher education. Higher education institutions have their fair share of the early IoT devices such as Internet connected cameras, sensors, HVAC controls, electronic locks, etc. 
From researchers with sensor devices in the field to students with gaming consoles in residence halls, the proliferation of IoT devices in higher education has barely begun. According to Gartner, “IoT, which excludes PCs, tablets and smartphones, will grow to 26 billion units installed in 2020 representing an almost 30-fold increase from 0.9 billion in 2009.” Cisco estimates that we will have about 50 billion IoT devices by 2020. The change in network addressing from IPv4 to IPv6, and an increasingly connected world have set the stage for an explosion of IoT devices. As technologies have matured, IT professionals in higher education institutions, and their vendor and consultant partners, have learned to integrate these newer technologies with existing infrastructures, and to secure them as well as possible. With this growth in staff experience and skills, adoption of BYOD is becoming the accepted norm in higher education. Newer mobile devices such as smartphones (especially iPhones) and tablets such as iPads, Chromebooks and Surface Pros, are becoming easier to integrate. However, IT staff still have to continue to plan and enhance their infrastructures, including the expansion of external connectivity speeds, backbone speeds, building distribution speeds, WIFI, and Digital Antenna Systems (DAS). With the expected flood of IoT devices, higher education IT professionals need to constantly enhance their security skills and security infrastructures as well since the average user is often more focused on the functionality and convenience aspects of the IoT devices rather than the secure use of them. Recently, I had the privilege of listening to an excellent presentation on IoT devices by a Director of Research from the Institute for the Future, titled “From an Internet of Things to Systems of Networked Matter: Exploring the future of IoT.” She recommended that IoT designers adopt Five Design Principles for IoT devices: 1. Design beyond Efficiency 2. Design for Collective Benefit 3. Design for Human and Machine Systems 4. Design for Equity 5. Design for Failures I asked the Research Director if the Institute would consider adopting a 6th Principle—“Design for Security.” In response to my question, she explained that the 5th Principle could include Security. While it made sense, I would have been more comforted with an explicit, separate 6th Principle—“Design for Security and Privacy.” In a higher education institution, IT security professionals are constantly kept busy with protecting data that ranges from Personally Identifiable Information (PII), FERPA data, PCI data, PHI data, research data, and other sensitive data. Each of the new technology trends has introduced its own unique challenges for a higher education institution and its community in protecting data, both to meet compliance mandates and for protecting the institutional community’s privacy and security. The emerging IoT devices will continue to bring with them a whole new range of functionalities and challenges!
<urn:uuid:f1543616-a30b-432d-9cef-742f87b44289>
CC-MAIN-2017-09
http://k12.cioreview.com/cioviewpoint/technology-trends-and-its-impact-on-higher-education-nid-23398-cid-143.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170253.67/warc/CC-MAIN-20170219104610-00448-ip-10-171-10-108.ec2.internal.warc.gz
en
0.954176
1,223
2.890625
3
Cisco Telepresence Zones

There are many zones in Cisco TelePresence products; so many, in fact, that there is a lot of confusion about the different zones and their uses. The concept of zones originally came from the H.323 RAS protocol; zones are utilized by gatekeepers to resolve phone-number-to-IP-address mapping and to manage device bandwidth using Call Admission Control (CAC). The concept of zones in Cisco TelePresence products is confusing to many people, but this white paper should clear up some of the confusion.
<urn:uuid:806f5919-72db-4f67-a497-283641656aef>
CC-MAIN-2017-09
http://www.bitpipe.com/detail/RES/1363983866_172.html?asrc=RSS_BP_TERM
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170253.67/warc/CC-MAIN-20170219104610-00448-ip-10-171-10-108.ec2.internal.warc.gz
en
0.938629
117
2.515625
3
Wireless networks have certainly brought a lot of convenience to our lives, allowing us to work and surf from almost anywhere—home, cafes, airports and hotels around the globe. But unfortunately, wireless connectivity has also brought convenience to hackers because it gives them the opportunity to capture all data we type into our connected computers and devices through the air, and even take control of them. While it may sound odd to worry about bad guys snatching our personal information from what seems to be thin air, it's more common than we'd like to believe. In fact, there are hackers who drive around searching for unsecured wireless connections (networks) using a wireless laptop and portable global positioning system (GPS) with the sole purpose of stealing your information or using your network to perform bad deeds.

We call the act of cruising for unsecured wireless networks "war driving," and it can cause some serious trouble for you if you haven't taken steps to safeguard your home or small office networks. Hackers use this technique to access data from your computer—banking and personal information—which could lead to identity theft, financial loss, or even a criminal record (if they use your network for nefarious purposes). Any computer or mobile device that is connected to your unprotected network could be accessible to the hacker.

While these are scary scenarios, the good news is that there are ways to prevent "war drivers" from gaining access to your wireless network. Be sure to check your wireless router owner's manual for instructions on how to properly enable and configure these tips.

- Turn off your wireless network when you're not home: This will minimize the chance of a hacker accessing your network.
- Change the administrator's password on your router: Router manufacturers usually assign a default user name and password allowing you to set up and configure the router. However, hackers often know these default logins, so it's important to change the password to something more difficult to crack.
- Enable encryption: You can set your router to allow access only to those users who enter the correct password. These passwords are encrypted (scrambled) when they are transmitted so that hackers who try to intercept your connection can't read the information.
- Use a firewall: Firewalls can greatly reduce the chance of outsiders penetrating your network since they monitor attempts to access your system and block communications from unapproved sources. So, make sure to use the firewall that comes with your security software to provide an extra layer of defense.

Although war driving is a real security threat, it doesn't have to be a hazard to your home wireless network. With a few precautions, or "defensive driving" measures, you can keep your network and your data locked down.
<urn:uuid:d655c2a4-2b2c-4efb-bc55-db900057ea4c>
CC-MAIN-2017-09
https://securingtomorrow.mcafee.com/consumer/identity-protection/wardriving/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170253.67/warc/CC-MAIN-20170219104610-00448-ip-10-171-10-108.ec2.internal.warc.gz
en
0.924976
577
2.765625
3
Optical amplifiers are devices that can amplify an optical signal directly, without the need to first convert it to an electrical signal. They are a key enabling technology for optical communication networks. Together with wavelength-division multiplexing (WDM) technology, which allows the transmission of multiple channels over the same fiber, optical amplifiers have made it possible to transmit many terabits of data over distances from a few hundred kilometers up to transoceanic distances, providing the data capacity required for current and future communication networks. Optical amplifiers are important in optical communication and laser physics. Today's commonly used optical amplifiers include the Erbium-Doped Fiber Amplifier (EDFA), the Raman Amplifier, and the Semiconductor Optical Amplifier (SOA).

Erbium-Doped Fiber Amplifier (EDFA)

An EDFA works through a trace impurity in the form of a trivalent erbium ion that is inserted into the optical fiber's silica core to alter its optical properties and permit signal amplification. The trace impurity is known as a dopant, and the process of inserting the impurity is known as doping. Pump lasers operating at a 980 or 1480 nanometer (nm) wavelength (the so-called pumping bands) excite the erbium dopant, resulting in a gain, or amplification, in the 1550 nm range, which is the optical C-band. The 1480 nm band is usually used in amplifiers with greater power. Pump lasers can operate bidirectionally. This action amplifies a weak optical signal to a higher power, effecting a boost in the signal strength. However, EDFAs are usually limited to no more than 10 spans covering a maximum distance of approximately 800 kilometers (km) and also cannot amplify wavelengths shorter than 1525 nanometers (nm). The EDFA was the first successful optical amplifier and a significant factor in the rapid deployment of fiber optic networks during the 1990s.

Raman Amplifier

In a Raman amplifier, the signal is amplified due to stimulated Raman scattering (SRS). Raman scattering is a process in which light is scattered by molecules from a lower wavelength to a higher wavelength. When sufficiently high pump power is present at a lower wavelength, stimulated scattering can occur in which a signal with a higher wavelength is amplified by Raman scattering from the pump light. SRS is a nonlinear interaction between the signal (higher wavelength; e.g. 1550 nm) and the pump (lower wavelength; e.g. 1450 nm) and can take place within any optical fiber. In most fibers, however, the efficiency of the SRS process is low, meaning that high pump power (typically over 1 W) is required to obtain useful signal gain. Thus, in most cases Raman amplifiers cannot compete effectively with EDFAs. Raman amplification provides two unique advantages over other amplification technologies. The first is that the amplification wavelength band of the Raman amplifier can be tailored by changing the pump wavelengths, and thus amplification can be achieved at wavelengths not supported by competing technologies. The other, more important, advantage is that amplification can be achieved within the transmission fiber itself, enabling what is known as distributed Raman amplification (DRA). Raman amplifiers are most often used together with EDFAs to provide combined amplifiers with an ultra-low noise figure (NF), which are useful in applications such as long links with no inline amplifiers, ultra-long links spanning thousands of kilometers, or very high bit-rate (40/100 Gb/s) links. 
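Two standard relations help quantify what an ultra-low noise figure buys. These are general amplifier formulas (with the Friis cascade written in linear units), not values tied to any product mentioned here:

G_{\mathrm{dB}} = 10 \log_{10}\!\left(\frac{P_{\mathrm{out}}}{P_{\mathrm{in}}}\right), \qquad F_{\mathrm{total}} = F_1 + \frac{F_2 - 1}{G_1}

The second (Friis) formula shows why the pairing works: the excess noise of the second stage is divided by the gain of the first stage, so placing a low-noise distributed Raman stage ahead of an EDFA pulls down the noise figure of the whole chain.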
Semiconductor Optical Amplifier (SOA)

SOAs are amplifiers which use a semiconductor to provide the gain medium. They operate in a similar manner to standard semiconductor lasers (without the optical feedback which causes lasing), and are packaged in small semiconductor "butterfly" packages. Compared to other optical amplifiers, SOAs are pumped electronically (i.e. directly via an applied current), and a separate pump laser is not required. However, despite their small size and potentially low cost due to mass production, SOAs suffer from a number of drawbacks which make them unsuitable for most applications. In particular, they provide relatively low gain.
<urn:uuid:f629368f-2ffc-4bad-84e2-0c51ec9dc15b>
CC-MAIN-2017-09
http://www.fs.com/blog/optical-amplifiers.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170614.88/warc/CC-MAIN-20170219104610-00624-ip-10-171-10-108.ec2.internal.warc.gz
en
0.931524
851
4.03125
4
Networking 101: Understanding the Data Link Layer What's more important than IP and routing? Well, Layer 2 is much more important when it's broken. Many people don't have the Spanning Tree Protocol (STP) (define) knowledge necessary to implement a layer 2 network that's resilient. A switch going down shouldn't prevent anyone from having connectivity, excluding the hosts that are directly attached to it. Before we can dive into Spanning Tree, you must understand the inner workings of layer 2. Layer 2, the Data Link layer, is where Ethernet lives. We'll be talking about bridges, switching and VLANs with the goal of discovering how they interact in this part of Networking 101. You don't really need to study the internals of Ethernet to make a production network operate, so if you're inclined, do that on your own time. Ethernet switches, as they're called now, began life as a "bridge." Traditional bridges would read all Ethernet frames, and then forward them out every port, except the ones they came in on. They had the ability to allow redundancy via STP, and they also began learning which MAC addresses were on which port. At this point, a bridge then became a learning device, which means they would store a table of all MAC addresses seen on a port. When a frame needed to be sent, the bridge could look up the destination MAC address in the bridge table, and know which port is should be sent out. The ability to send data to only the correct host was a huge advance in switching because collisions became much less likely. If the destination MAC address wasn't found in the bridge table, the switch would simply flood it out all ports. That's the only way to find where a host actually lives for the first time, so as you can see, flooding is an important concept in switching. It turns out to be quite necessary in routing too. Important terminology in this layer includes: Unicast segmentation: Bridges can limit which hosts hear unicast frames (frames sent to only one MAC address). Hubs would simply forward everything to everyone, so this alone is a huge bandwidth-saver. Collision Domain : The segment over which collisions can occur. Collisions don't happen any more, since switches use cut-through forwarding and NICs are full-duplex. If you see collisions on a port, that means someone negotiated half-duplex accidentally, or something else is very wrong. Broadcast Domain : The segment over which broadcast frames are sent and can be heard. A few years later, the old store-and-forward method of bridge operation was modified. New switches started only looking at the destination MAC address of the frame, and then sending it instantly. Dubbed "cut-through forwarding," presumably because frames cut through the switch much more quickly and with less processing. This implies a few important things: A switch can't check the CRC to see if the packet was damaged, and that implies that collisions needed to be made impossible. Now, to address broadcast segmentation, VLANs were introduced. If you can't send a broadcast frame to another machine, they're not on your local network, and you will instead send the entire packet to a router for forwarding. That's what a Virtual LAN (VLAN) does, in essence: It makes more networks. On a switch, you can configure VLANs, and then assign a port to a VLAN. If host A is in VLAN 1, it can't talk to anyone in VLAN 2, just as if they lived on totally disconnected devices. 
Well, almost; if the bridge table is flooded and the switch is having trouble keeping up, all data will be flooded out every port. This has to happen in order for communication to continue in these situations. This needs to be pointed out because many people believe VLANs are a security mechanism. They are not even close. Anyone with half a clue about networks (or with the right cracking tool in their arsenal) can quickly overcome the VLAN broadcast segmentation. In fact, a switch will basically turn into a hub when it floods frames, spewing everyone's data to everyone else.

If you can't ARP for a machine, you have to use a router, as we already know. But does that mean you have to physically connect wires from a router into each VLAN? Not anymore; we have layer 3 switches now! Imagine, if you will, a switch that contains 48 ports. It also has VLAN 1 and VLAN 2, and ports 1-24 are in VLAN 1, while ports 25-48 are part of VLAN 2. To route between the two VLANs, you have basically three options.

First, you can connect a port in each VLAN to a router, and assign the hosts the correct default route. In the new-fangled world of today, you can also simply bring up a virtual interface in each VLAN. In Cisco-land, the router interfaces would be called vlan1 and vlan2. They get IP addresses, and the hosts use the router interface as their router.

The third way brings us to the final topic of our layer 2 overview. If you have multiple switches that need to contain the same VLANs, you can connect them together so that VLAN 1 on switch A is the same as VLAN 1 on switch B. This is accomplished with 802.1Q, which tags the frames with a VLAN identifier as they leave the first switch. Cisco calls these links "trunk ports," and you can have as many VLANs on them as the switch allows (currently 4096 on most hardware). So, the third and final way to route between VLANs is to connect a trunk to a router, and bring up the appropriate interfaces for each VLAN. The hosts on VLAN 1, on both switch A and switch B, will have access to the router interface (which happens to be on another device) since they are all "trunked" together and share a broadcast domain.

We've saved you from the standard "this is layer 2, memorize the Ethernet header" teaching method. To become a true guru you must know it, but to be a useful operator (something the cert classes don't teach you), you simply need to understand how it all works. Join us next time for an exploration of the most interesting protocol in the world, Spanning Tree.
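Before then, here is a minimal sketch of the learning, flooding and VLAN-segmentation behavior described above, written as runnable pseudocode rather than anything resembling a real switch ASIC; the class name, port numbering and VLAN layout are invented purely for illustration.

```python
# Toy learning switch: one MAC table keyed by (vlan, mac), flood on unknown destination.

class LearningSwitch:
    def __init__(self, vlan_ports):
        self.vlan_ports = {v: set(p) for v, p in vlan_ports.items()}  # vlan -> member ports
        self.mac_table = {}                                           # (vlan, mac) -> port

    def handle_frame(self, in_port, vlan, src_mac, dst_mac):
        # Learn: remember which port (per VLAN) the source MAC was last seen on.
        self.mac_table[(vlan, src_mac)] = in_port

        # Known unicast goes out exactly one port; broadcasts and unknown
        # destinations are flooded out every other port in the same VLAN.
        out_port = self.mac_table.get((vlan, dst_mac))
        if dst_mac == "ff:ff:ff:ff:ff:ff" or out_port is None:
            return sorted(self.vlan_ports[vlan] - {in_port})
        return [out_port]


sw = LearningSwitch({10: [1, 2, 3], 20: [4, 5, 6]})
print(sw.handle_frame(1, 10, "aa:aa:aa:aa:aa:aa", "bb:bb:bb:bb:bb:bb"))  # unknown -> flood [2, 3]
print(sw.handle_frame(2, 10, "bb:bb:bb:bb:bb:bb", "aa:aa:aa:aa:aa:aa"))  # learned -> [1]
```

The first frame floods to the other VLAN 10 ports, the reply goes out a single learned port, and the VLAN 20 ports never see either frame, which is the broadcast segmentation VLANs exist to provide.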
<urn:uuid:c7a14300-861c-44fa-a3bd-50432a3c1a70>
CC-MAIN-2017-09
http://www.enterprisenetworkingplanet.com/print/netsp/article.php/3577261/Networking-101-Understanding-the-Data-Link-Layer.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170914.10/warc/CC-MAIN-20170219104610-00148-ip-10-171-10-108.ec2.internal.warc.gz
en
0.961821
1,326
3.546875
4
Downloaders and droppers are helper programs for various types of malware such as Trojans and rootkits. Usually they are implemented as scripts (VB, batch) or small applications. They don't carry out any malicious activity themselves; they just open the way for an attack by downloading, decompressing and installing the core malicious modules. To avoid detection, a dropper may also create noise around the malicious module by downloading or decompressing some harmless files. Very often, they delete themselves automatically after the goal has been achieved.

Downloaders and droppers emerged from the idea of malware files that were able to download additional modules (e.g., Agobot, released in 2002). An interesting example of a modern downloader is OnionDuke (discovered in 2014), carried by infected Tor nodes. It is a wrapper over legitimate software. When a user downloads software via an infected Tor proxy, OnionDuke packs the original file and adds a malicious stub to it. When the downloaded file is run, the stub first downloads malware and installs it on the computer, and then unpacks the legitimate file and removes itself in order to go unnoticed.

Common infection method

Most of the time, the user gets infected by using untrusted online resources. Infections are often the consequence of activities like:
- Clicking malicious links or visiting shady websites
- Downloading unknown free programs
- Opening attachments sent with spam
- Plugging in infected drives
- Using an infected proxy (as in the case of OnionDuke)

They may also be installed without user interaction, carried by various exploit kits.

Downloaders are usually tiny and rarely get meaningful, unique names. Usually they are named after the architecture and platform they are dedicated to. Some examples:
- TrojanDownloader: MSIL/Prardrukat

They can be used to download various malware of different families. Sometimes they are distributed by bigger campaigns, as in the case of OnionDuke.

Downloaders often appear in non-persistent form. They install the malicious module and remove themselves automatically. In such a case, after a single deployment they are no longer a threat. If for some reason they haven't removed themselves, they can be deleted manually.

More dangerous variants are persistent. They copy themselves to some random, hidden file and create registry keys to run after the system is restarted, attempting to download the malicious modules again. In such cases, to get rid of the downloader it is necessary to find and remove the created keys and the hidden file. What remains is to take appropriate steps to neutralize the real weapon delivered by the dropper. The difficulty of cleaning the system varies, as the payload may be of different types. The most universal way is to use good quality, automated anti-malware tools and run a full system scan.

A successfully deployed downloader results in a system infected by the core malicious module. Keeping good security habits, such as being careful about the websites you visit and not opening unknown attachments, minimizes the risk of being affected by malicious downloaders. However, in some cases that is not enough: exploit kits can still install malicious software on a vulnerable machine, even without any interaction. That's why it is important to have good quality anti-malware software.
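As a rough illustration of the persistence mechanism described above, the sketch below lists the values under the current user's Run key on Windows, one of the most common auto-start locations a persistent downloader writes to. It only reads and prints entries for manual review; it is not a detection tool, and the key shown is just one of several possible auto-run locations.

```python
# List auto-run entries in the current user's Run key (Windows only).
# Reviewing these entries is one way to spot a persistent downloader.
import winreg

RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

def list_run_entries():
    entries = []
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY) as key:
        value_count = winreg.QueryInfoKey(key)[1]   # number of values under the key
        for i in range(value_count):
            name, command, _type = winreg.EnumValue(key, i)
            entries.append((name, command))
    return entries

if __name__ == "__main__":
    for name, command in list_run_entries():
        print(f"{name}: {command}")
```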
<urn:uuid:6c49b151-c7c4-4e12-8a06-67950e118f77>
CC-MAIN-2017-09
https://blog.malwarebytes.com/threats/trojan-dropper/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170914.10/warc/CC-MAIN-20170219104610-00148-ip-10-171-10-108.ec2.internal.warc.gz
en
0.92183
685
2.53125
3
Cloud Computing Basic Cheat Sheet Cloud computing models vary: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). Manage your cloud computing service level via the surrounding management layer. - Infrastructure as a Service (IaaS). The IaaS layer offers storage and compute resources that developers and IT organizations can use to deliver business solutions. - Platform as a Service (PaaS). The PaaS layer offers black-box services with which developers can build applications on top of the compute infrastructure. This might include developer tools that are offered as a service to build services, or data access and database services, or billing services. - Software as a Service (SaaS). In the SaaS layer, the service provider hosts the software so you don’t need to install it, manage it, or buy hardware for it. All you have to do is connect and use it. SaaS Examples include customer relationship management as a service. Deploying Public, Private, or Hybrid Clouds Cloud Computing happens on a public cloud, private cloud, or hybrid cloud. Governance and security are crucial to computing on the cloud, whether the cloud is in your company’s firewall or not. - Public clouds are virtualized data centers outside of your company’s firewall. Generally, a service provider makes resources available to companies, on demand, over the public Internet. - Private clouds are virtualized cloud data centers inside your company’s firewall. It may also be a private space dedicated to your company within a cloud provider’s data center. - Hybrid clouds combine aspects of both public and private clouds. Cloud Computing Characteristics Cloud computing requires searching for a cloud provider. Whether your cloud is public, private, or hybrid, look for elasticity, scalability, provisioning, standardization, and billed usage: - Elasticity and scalability. The cloud is elastic, meaning that resource allocation can get bigger or smaller depending on demand. Elasticity enables scalability, which means that the cloud can scale upward for peak demand and downward for lighter demand. Scalability also means that an application can scale when adding users and when application requirements change. - Self-service provisioning. Cloud customers can provision cloud services without going through a lengthy process. You request an amount of computing, storage, software, process, or more from the service provider. After you use these resources, they can be automatically deprovisioned. - Standardized interfaces. Cloud services should have standardized APIs, which provide instructions on how two application or data sources can communicate with each other. A standardized interface lets the customer more easily link cloud services together. - Billing and service usage metering. You can be billed for resources as you use them. This pay-as-you-go model means usage is metered and you pay only for what you consume. Cloud Computing Issues Cloud computing issues span models (IaaS, PaaS, or SaaS) and types (public, private, or hybrid). Computing on the cloud requires vigilance about security, manageability, standards, governance, and compliance: - Cloud security. The same security principles that apply to on-site computing apply to cloud computing security. - Identity management. Managing personal identity information so that access to computer resources, applications, data, and services is controlled properly. - Detection and forensics. Separating legitimate from illegitimate activity. - Encryption. 
Coding to protect your information assets. - Cloud manageability. You need a consistent view across both on-premises and cloud-based environments. This includes managing the assets provisioning as well as the quality of service (QOS) you’re receiving from your service provider. - Cloud standards. A standard is an agreed-upon approach for doing something. Cloud standards ensure interoperability, so you can take tools, applications, virtual images, and more, and use them in another cloud environment without having to do any rework. Portability lets you take one application or instance running on one vendor’s implementation and deploy it on another vendor’s implementation. - Cloud governance and compliance. Governance defines who’s responsible for what and the policies and procedures that your people or groups need to follow. Cloud governance requires governing your own infrastructure as well as infrastructure that you don’t totally control. Cloud governance has two key components: understanding compliance and risk and business performance goals. - Data in the cloud. Managing data in the cloud requires data security and privacy, including controls for moving data from point A to point B. It also includes managing data storage and the resources for large-scale data processing.
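To make the pay-as-you-go metering idea above concrete, here is a toy calculation of a monthly bill from metered usage; the resource names and rates are invented for illustration and do not correspond to any real provider's pricing.

```python
# Toy pay-as-you-go metering: charge only for what was actually consumed.
# Rates and usage figures are made up for illustration.

RATES = {
    "vm_hours":   0.05,   # dollars per VM-hour
    "storage_gb": 0.02,   # dollars per GB-month
    "egress_gb":  0.09,   # dollars per GB transferred out
}

def monthly_bill(usage):
    """usage maps each metered resource to the quantity consumed this month."""
    return sum(RATES[resource] * quantity for resource, quantity in usage.items())

print(monthly_bill({"vm_hours": 720, "storage_gb": 250, "egress_gb": 40}))
# 720*0.05 + 250*0.02 + 40*0.09 = 44.6
```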
<urn:uuid:6c3b3971-07c9-4179-9ee1-1930e9022386>
CC-MAIN-2017-09
https://cloudtweaks.com/2010/06/cloud-computing-for-dummies-basic-cheat-sheet/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171163.39/warc/CC-MAIN-20170219104611-00324-ip-10-171-10-108.ec2.internal.warc.gz
en
0.906948
980
2.984375
3
Today's post will cover Dynamic Memory in Hyper-V 2012/R2. This is Part 1 of two parts: Understanding Dynamic Memory.

As you know, physical computers have a static amount of memory, which does not change until you shut down the computer and add more physical RAM. The experience with virtual machines is the same when you do not configure them to use dynamic memory: they are assigned a fixed amount of memory while they are running. With Hyper-V, however, you can configure virtual machines to use dynamic memory, which enables more efficient use of the available physical memory. If you enable dynamic memory, the memory is treated as a shared resource, which can be reallocated automatically between running virtual machines. Dynamic memory adjusts the amount of memory available to a virtual machine based on memory demand, available memory on the Hyper-V host, and the virtual machine's memory configuration. This can make it possible to run more virtual machines simultaneously on the same Hyper-V host, which is especially beneficial in environments that have many idle or low-load virtual machines, such as pooled VDI environments.

For virtual machines we have two options:

1. Static memory: we set an amount that is allocated to the virtual machine when it starts, and that amount is effectively reserved on the host.
2. Dynamic memory: the value set as Startup RAM becomes the amount of memory the virtual machine is allocated when it first starts up, but we can also specify a minimum and a maximum amount of RAM.

- Minimum RAM: the amount the virtual machine's physical memory assignment can shrink to, which can be less than what it starts with. The logic here is that a workload or operating system may require a certain amount of memory to start, but once it is running it doesn't need as much. You can decrease this value while the virtual machine is running.
- Maximum RAM: the amount of memory that can be added to the virtual machine if Hyper-V deems it necessary. The default is 1 TB, which we normally don't leave in place. You can increase this value while the virtual machine is running.
- Memory Buffer: the buffer is 20% by default, but we can change it. It tells Hyper-V to keep that much extra memory assigned to the VM above its working set (above the amount actually being used by its processes).
- Memory Weight: in times of contention, this sets the priority of this VM's memory relative to other virtual machines (low or high priority).

If we compare this with Dynamic Memory in the previous Hyper-V version (2008 R2), you can see that there we could not decrease or increase the minimum/maximum amount of memory while the VM was running.

In Part 2 we will see Dynamic Memory in action! So stay tuned, until next time.
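As a simplified illustration of how these settings interact, the sketch below computes the amount of memory a host might aim to give a VM: the current demand plus the configured buffer, clamped between Minimum and Maximum RAM. This is only a conceptual model for the article's terms, not Hyper-V's actual balancing algorithm.

```python
# Simplified model of a dynamic-memory target:
# demand plus buffer headroom, kept within the configured minimum/maximum.

def memory_target_mb(demand_mb, minimum_mb, maximum_mb, buffer_percent=20):
    wanted = demand_mb * (1 + buffer_percent / 100)   # keep headroom above the working set
    return int(min(max(wanted, minimum_mb), maximum_mb))

# A VM configured with 512 MB minimum, 4096 MB maximum and the default 20% buffer:
print(memory_target_mb(demand_mb=1500, minimum_mb=512, maximum_mb=4096))  # 1800
print(memory_target_mb(demand_mb=200,  minimum_mb=512, maximum_mb=4096))  # 512 (floor)
print(memory_target_mb(demand_mb=6000, minimum_mb=512, maximum_mb=4096))  # 4096 (ceiling)
```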
<urn:uuid:200b6f1e-fff7-42a7-b725-3adef3e92d84>
CC-MAIN-2017-09
https://charbelnemnom.com/2014/01/understanding-dynamic-memory-in-hyper-v-2012r2-part-1/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171807.25/warc/CC-MAIN-20170219104611-00024-ip-10-171-10-108.ec2.internal.warc.gz
en
0.90972
610
3.296875
3
USB keys were famously used as part of the Stuxnet attack on the Iranian nuclear program, and for good reason: they have a high rate of effectiveness, according to a researcher at Black Hat 2016. Of 297 keys spread around the University of Illinois Urbana-Champaign, 45% were not only plugged into victims' computers, but the victims then clicked on links in files that connected them to more malware, says Elie Bursztein, a Google researcher who presented the results. That rate was pretty constant regardless of where the keys were dropped and what they looked like, he says. Keys were left in parking lots, common rooms, hallways, lecture halls and on lawns. Some had no labels, while others had labels that said "confidential" or "exam answers." Some had metal door keys attached on a ring, and some had door keys plus a tag with an address and phone number. More than half of those that were opened were opened within the first 10 hours.

Twenty-one percent of those who plugged in the devices then took a survey to say why they did. Sixty-eight percent said they wanted to return the drives, 18% said they were just curious, and the rest had various other reasons. What were they curious about? Pictures were consistently popular at 33% to 45%, depending on the type of key that was picked up. Resumes were about as popular as photos, with a spike in interest to 53% for keys that were unlabeled. Other documents, not so much.

Bursztein says building the keys was not trivial. The team he worked with had to figure out how to make a device small enough to fit into a key case, create a mold for the case, pour the resin, figure out how to unmold it so it had a smooth look, and trim it to appear professional. It took weeks to perfect the techniques. Each one cost about $40. The team also spent a lot of time writing code for the keys that could figure out what operating system was running on the machine they were plugged into. One test was a shell script that tried to lock the scroll lock key. If it worked, it was a Windows machine. It was difficult to test the timing between commands and know they were successfully executed, so they used caps-lock toggling as an indicator. When a command was successful, it would toggle the switch as a feedback bit, he says. The code used a reverse shell to get through firewalls, a scripting language and obfuscation to avoid antivirus detection, a payload that delivered a maximum of 62.5 keystrokes per second, and Metasploit to act as a command-and-control server.

Preventing USB key attacks isn't easy. The best methods are to educate users not to plug them in, to block their use on machines altogether, or to restrict their use. This story, "Black Hat: How to make and deploy malicious USB keys" was originally published by Network World.
<urn:uuid:85da2056-a7eb-49a5-ae11-5f6eeeba4051>
CC-MAIN-2017-09
http://www.csoonline.com/article/3104112/security/black-hat-how-to-make-and-deploy-malicious-usb-keys.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173872.97/warc/CC-MAIN-20170219104613-00376-ip-10-171-10-108.ec2.internal.warc.gz
en
0.987382
607
2.59375
3
In the last week of June, as wildfires raged in the dry forests of Colorado, journalists at The Denver Post covering the disaster received an invitation from Google: Display on your website a map we have created using data feeds from federal and state authorities, the American Red Cross and other sources. Readers will get information and updates on wildfire locations, weather forecasts, public shelters sites, satellite images and more. Editors at The Post confirmed the data collected was reliable and then posted the map, Chuck Murphy, an editor at the newspaper, said in an email. The visualization was created by the Google Crisis Response team, a group set up in 2010 after the Haiti earthquake to make information available during disasters. (The group also set up projects in 2011 focusing on the earthquake and tsunami in Japan and floods in Vermont.) While this information resource was just one element in a mix of stories, photographs and video reports published at DenverPost.com during the wildfires in Waldo Canyon and other areas, the Google project represents a growing effort by industry and government to create data visualizations of rapidly-changing conditions. The trend makes sense. Researchers at the American Red Cross and Google have found that people affected by a natural disaster or other crisis quickly head online to search for information that can help them. The 2010 Red Cross survey found that 69 percent of web users expect disaster responders to monitor social media such as Facebook and Twitter to learn where to send help. Utilities serving Washington D.C., Maryland and Virginia posted data visualizations about power outages and restoration estimates for people wondering about their service after a June 29 storm and subsequent heat wave. For utilities, bringing outage updates to the forefront can be part of a two-way conversation with customers that not only keeps customers informed about power restoration progress. It also can save money because data presented in a visual format like a map means fewer call center inquiries to answer. ComEd, an electric utility serving Illinois, made this point in May when it unveiled an interactive map to show the location of power outages in the region. The online map also has a form to fill out when the power goes out and it debuted on the web and as part of a new smartphone app designed to let users check and pay their electric bill, report a meter reading and view their account history. In a presentation for Intelligentutility.net about the project, Frank Scumacci, general manager of eChannels at Comed, said his team justified the cost of implementing the app by citing the money saved when customers reported outages through it. The June 29 storm led to at least 17 deaths and massive power outages in Maryland, Virginia and Washington, D.C., The Washington Post reported. On July 1, close to 1 million customers were without power during a post-storm heat wave and newspaper’s website kept a running tally of outages complete with links to utility websites including Baltimore Gas and Electric, Dominion, Northern Virginia Electric Cooperative, Potomac Electric Power Co. (Pepco) and Potomac Edison. Government agencies also posted visualizations to show data collected and services available. In the east, for example, the city of Washington posted a map showing public libraries operating as cooling centers for heat-drenched residents. 
Out west, InciWeb, a web portal for wildfires incorporating data from 11 state and regional agencies, posts maps that show the locations of wildfires and delineate firefighters' efforts to contain them. Such maps tell data-driven stories with high utility. That doesn't mean they attract the most viewers, however. The Denver Post, which has seen web traffic rise during its coverage of the wildfires, found that its pieces showing aerial photos of destroyed neighborhoods were particular draws for online readers, said Murphy. Michael Goldberg is editor of Data Informed. Email him at [email protected].
<urn:uuid:4b0bd529-eb7c-4b33-b352-342a8e08178b>
CC-MAIN-2017-09
http://data-informed.com/data-visualizations-on-display-wake-wildfires-power-outages/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171834.68/warc/CC-MAIN-20170219104611-00372-ip-10-171-10-108.ec2.internal.warc.gz
en
0.945566
786
2.796875
3
Two web browsers developed by Chinese search giant Baidu have been insecurely transmitting sensitive data across the Internet, putting users' privacy at risk, according to a new study. Baidu responded by releasing software fixes, but researchers say not all the issues have been resolved. It focused on the Windows and Android versions of Baidu's browser, which are free products. It also found that sensitive data was leaked by thousands of apps that use a Baidu SDK (software development kit). With the browsers, Citizen Lab found that a user's search terms, GPS coordinates, the addresses of websites visited and device's MAC (Media Access Control) address were sent to Baidu's servers without using SSL/TLS encryption. "The transmission of personal data without properly implemented encryption can expose a user's data to surveillance," the report noted. Other sensitive information, such as IMEI (International Mobile Station Equipment Identity) numbers, nearby Wi-Fi networks and their MAC addresses, and hard-drive serial numbers were transmitted with weak encryption that could be broken. Neither web browser used digital code signatures for updates, meaning attackers could try to slip in their own code instead. The government of China strictly controls Internet use, and Baidu can be required to hand over user data to intelligence agencies and law enforcement. The data collection raises questions about whether it could be used against those who oppose government policies. "While Internet companies often collect personal user data for the normal and efficient provision of services, it is unclear why Baidu Browser collects and transmits such an extensive range of sensitive user data points," the report said. Citizen Lab also found that thousands of mobile apps that use Baidu's mobile analytics SDK transmit the same sensitive information back to the company. "Any app that uses this SDK for statistics and event tracking sends messages to Baidu’s servers," the report said. Baidu officials could not be immediately reached for comment, but Citizen Lab published a document with questions it posed and answers from the company. Baidu didn't answer a question about what user data is it required to retain under Chinese law. It also said it was unable to comment on why requests using its browsers to visit websites outside of China went through a proxy server. But Baidu said it has improved security based on the researchers' findings. For example, it said data transmitted by the Android browser would be fully encrypted by the end of this month, and for the Windows browser by early May.
<urn:uuid:0b55588f-2b04-4b3f-b328-3a0f77defc2f>
CC-MAIN-2017-09
http://www.computerworld.com/article/3037029/security/baidu-web-browsers-leaked-sensitive-information-researchers-say.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171834.68/warc/CC-MAIN-20170219104611-00372-ip-10-171-10-108.ec2.internal.warc.gz
en
0.955225
517
2.703125
3
SANTA CLARA, Calif. - Human brains will someday extend into the cloud, futurist and computer pioneer Ray Kurzweil predicted at the DEMO conference here on Tuesday. Moreover, he said, it will become possible to selectively erase pieces of our memories, while retaining some portions of them, to be able to learn new things no matter how old the person is. "The brain doesn't grow much from a very young age," he said. Humans have, more or less, 300 million pattern-recognizing modules in the neocortex, the portion of the brain where thought occurs. "One of the reasons kids can learn new languages, or pretty much anything, so quickly is because they haven't filled up those pattern recognizers," he said. "It's virgin territory." "You can learn new material at any age, but there is a limited capacity. That's one of the things we will overcome by basically expanding the brain into the cloud," he said. "We need to be able to repurpose our neocortex to learn something new. People who have a rigid process and hold onto old information; they will have a hard time doing that. You need to be able to move on." While Kurzweil did not give a timetable for these predictions, he said the notion of "brain extenders" has already begun thanks to technology including IBM's Watson supercomputer and augmented reality. "I think we'll be in augmented reality all the time," Kurzweil said. "Just to be told what people's names are [via augmented reality popups] -- that would be most helpful," he said, to laughter from the crowd. "That's a killer app." As for Watson and its ilk, Kurzweil said, there has been "discernible progress" in artificial intelligence. But it's one thing to recall, say, anything from Wikipedia or other encyclopedias. "But understanding is way below human levels," and that's the next frontier, to help technical assistants understand the real meaning of human speech, a breakthrough he's already predicted will happen in 2029. Advances in natural language processing will help with that, Kurzweil said. It's an area he knows about, having created the industry's first multi-font optical character recognition system, as well as text-to-speech synthesizers and other breakthroughs. His company, Kurzweil Computer Products, through a series of acquisitions and mergers ultimately became part of Nuance Communications Inc., which developed technology that was much of the basis for Apple's Siri. "The natural language understanding in Siri is fairly weak and needs a lot of improvement," Kurzweil said. "For a version 1 product it's pretty good; usually version 1 doesn't work at all." Other DEMO news Of around 90 companies and products launching at DEMO Fall 2012, a good number of these are related to big data, cloud computing, infrastructure and new ways to recruit and hire candidates. Others are social-networking and consumer plays, or other types of corporate tools. Some of the highlights in various categories include: - Bella Dati: A reporting tool that turns sales, marketing, production or financial data into visualized reports and dashboards. Analyze, share data, or embed it into your apps. Works on a Web browser or mobile device. - Talksum Data Stream: For processing data streams in real time. Features include ingest, filtering, monitoring and routing. 
- Opal Brainstorms: Allows employees to offer new ideas, enables collaboration across various regions and business units, and creates a dedicated space to refine, merge and synthesize ideas into "actionable outcomes." - Solstice: Multiple people using a range of devices can collaborate and share content on one or more displays. Can move content from one screen to another. - Prezentarium: Helps deliver presentations and allows the presenter to interact with members of the audience. People can share feedback and ask questions, and share the presentation via their social networks. - Lifebeat: Unified communications that combine under one interface on your smartphone all your communication channels (voice, text, email) with your contacts. - Learn27: Allows companies to create a virtual academy and start delivering courses. - LFE.com Allows experts to share expertise and get paid. To really work like a human brain, artificial intelligence (AI) tools need to be built hierarchically, like a brain. "And then you have to educate the synthetic neocortex" as we do with newborn babies, he explained. That's what Watson's achievement was -- an educated brain extender, up to a point. As with many other things, AI has the ability to be misused in the wrong hands, Kurzweil acknowledged. "Is fire a good thing? It keeps us warm and cooks our food, but it's also used to burn down our villages. "If it turns on you, you need to get even smarter AI." But, he said, "I think we could take comfort with how we've done with software viruses." Although these viruses have become increasingly sophisticated over time, "we have an evolving technological immune system that's kept up with it, more or less. You can have arguments about it, but I think it's working." Read more about emerging technologies in Computerworld's Emerging Technologies Topic Center. This story, "Kurzweil: Brains Will Extend to the Cloud" was originally published by Computerworld.
<urn:uuid:fba4b6fa-6c4a-432f-9976-fd3a10396f90>
CC-MAIN-2017-09
http://www.cio.com/article/2391631/cloud-computing/kurzweil--brains-will-extend-to-the-cloud.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170700.71/warc/CC-MAIN-20170219104610-00016-ip-10-171-10-108.ec2.internal.warc.gz
en
0.953377
1,164
2.921875
3
The sun emitted two solar flares today, including one in the most intense class the sun can unleash. One flare peaked at 4:01 a.m. ET and the second, a more significant flare, peaked at 11:03 a.m. ET, causing a temporary radio blackout, according to the National Weather Service's Space Weather Prediction Center. Despite their intensity, the flares are unlikely to cause geomagnetic storm activity on Earth because of their launch position on the sun. The flares were the second and third the sun has emitted in three days. However, the National Weather Service is still waiting to see the impact of another solar flare emitted earlier this week. It could cause minor geomagnetic storms, which are temporary disturbances in the Earth's magnetosphere that can disrupt radios, navigation systems and radar. They also can cause intense auroral displays. Both of today's solar flares were produced from a region of the sun designated Active Region 1882. The earlier flare was classified as X1.7, while the second was X2.1. An X-class flare denotes the most intense flares, while the number adds more information about its strength. An X2, for instance, is twice as intense as an X1. NASA noted that in the past, X-class flares of this intensity have caused degradation or blackouts of radio communications for about an hour. It's not unusual to see an increase in solar flares since the sun's normal 11-year activity cycle is near its peak. The largest X-class flare in this cycle was an X6.9 that erupted on Aug. 9, 2011, according to NASA.

NASA caught this image of the second solar flare that the sun emitted today. The flare appears as the bright flash on the left. (Photo: NASA)

Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed. Her email address is [email protected]. Read more about government/industries in Computerworld's Government/Industries Topic Center. This story, "Sun emits super-intense flare today" was originally published by Computerworld.
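For readers curious what the classification means numerically, the sketch below converts a GOES flare class such as X2.1 into an approximate peak X-ray flux, using the standard mapping in which an X1 flare corresponds to 10^-4 watts per square meter; the linear multiplier is why an X2 is twice as intense as an X1.

```python
# Convert a GOES flare class (e.g. "X2.1") to approximate peak X-ray flux in W/m^2.
# The letter sets the order of magnitude; the number is a linear multiplier.

BASE_FLUX = {"A": 1e-8, "B": 1e-7, "C": 1e-6, "M": 1e-5, "X": 1e-4}

def peak_flux(flare_class):
    letter, multiplier = flare_class[0].upper(), float(flare_class[1:])
    return BASE_FLUX[letter] * multiplier

print(peak_flux("X1.7"))                      # 1.7e-04 W/m^2
print(peak_flux("X2.1"))                      # 2.1e-04 W/m^2
print(peak_flux("X2.1") / peak_flux("X1.7"))  # ~1.24, so the second flare was ~24% stronger
```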
<urn:uuid:f551aea8-115b-461f-827c-e97acaab75f3>
CC-MAIN-2017-09
http://www.networkworld.com/article/2171226/smb/sun-emits-super-intense-flare-today.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174154.34/warc/CC-MAIN-20170219104614-00420-ip-10-171-10-108.ec2.internal.warc.gz
en
0.955276
476
3.609375
4
What are big data techniques and why do you need them? The increase in data volumes threatens to overwhelm most government agencies, and big data techniques can help ease the burden.

Big data is a new term but not a wholly new area of IT expertise. Many of the research-oriented agencies — such as NASA, the National Institutes of Health and Energy Department laboratories — along with the various intelligence agencies have been engaged with aspects of big data for years, though they probably never called it that. It's the recent explosion of digital data in the past few years and its expected acceleration that are now making the term relevant for the broader government enterprise.

At their most basic, big data strategies seek a process for managing and getting value out of the volumes of data that agencies have to grapple with, which are much greater than in the past. In particular, they aim to process the kind of unstructured data that's produced by digital sensors, such as surveillance cameras, and the smartphones and other mobile devices that are now ubiquitous.

That increase in data won't slow any time soon. Market analyst IDC, in its "2011 Digital Universe Study," predicted that data volumes would expand 50-fold by 2020, with the number of servers needed to manage that data increasing by a factor of 10. Unstructured data such as computer files, e-mail messages and videos will account for 90 percent of all the data created in the next decade, IDC said.

Breaking things down into clearly defined constituent parts nets four areas that describe big data issues:

- Variety: The data is in both structured and unstructured forms; ranges across the spectrum of e-mail messages, document files, tweets, text messages, audio and video; and is produced from a wide number of sources, such as social media feeds, document management feeds and, particularly in government, sensors.
- Velocity: The data is coming at ever increasing speeds — in the case of some agencies, such as components of the Defense Department and the intelligence community, at millisecond rates from the various sensors they deploy.
- Volume: The data has to be collected, stored and distributed at levels that would quickly overwhelm traditional management techniques. A database of 10 terabytes, for example, is an order of magnitude or two less than would be considered normal for a big data project.
- Value: The data can be used to address a specific problem or a particular mission objective that the agency has defined.

It's not only the much bigger quantities of data, it's also the rate at which it's coming in and the fact that it's mostly unstructured that are outstripping the ability of people to use it to run their organizations with existing methods, said Dale Wickizer, chief technology officer for the U.S. Public Sector at NetApp Inc. "The IDC study also says that the number of files are expected to grow 75 times over the next decade," he said, "and if that's true, then a lot of the traditional approaches to file systems break because you run out of pointers to file system blocks."

That massive increase in the quantity and velocity of data threatens to overwhelm even those government agencies that are the most experienced in handling problems of big data. A report in 2011 by the President's Council of Advisors on Science and Technology concluded that the government was underinvesting in technologies related to big data.
In response, six departments and agencies — the National Science Foundation, NIH, the U.S. Geological Survey, DOD, DOE and the Defense Advanced Research Projects Agency — announced a joint research and development initiative on March 29 that will invest more than $200 million to develop new big data tools and techniques. The initiative "promises to transform our ability to use big data for scientific discovery, environmental and biomedical research, education and national security," said John Holdren, director of the White House Office of Science and Technology Policy.

Big data is definitely not just about getting a bigger database to hold the data, said Bob Gourley, founder and chief technology officer of technology research and advisory firm Crucial Point and former CTO at the Defense Intelligence Agency. "We use the term to describe the new approaches that must be put in place to deal with this overwhelming amount of data that people want to do analysis on," he said. "So it's about a new architecture and a new way of using software and that architecture to do analysis on the data."

Those efforts almost always involve the use of Hadoop, he said, which is open-source software specifically developed to do analysis and transformation of both structured and unstructured data. Hadoop is central to the search capabilities of Google and Yahoo and to the kinds of services that social media companies such as Facebook and LinkedIn provide. From its beginning, Google has had to index the entire Internet and be able to search across it with almost instant results, Gourley said, and that's something the company just couldn't do with old architectures and the traditional ways of querying and searching databases.

"Today's enterprise computer systems are frequently limited by how fast you write data to the disk and how fast you can read data from it," Gourley said. "You also have to move them around in the data center, and they have to go through a switch, which is another place where things really slow down." If you have a data center larger than 1 terabyte, searches could take days for complex business or intelligence applications, he said. "If you were to sequence the entire human genome the old-fashioned way, it could take weeks or months, whereas with big data techniques, it would take minutes," he added.

However, those techniques are not a panacea for all government's data problems. Even though they typically use cheap commercial processors and storage, big data solutions still require maintenance and upkeep, so they're not a zero-cost proposition.

Oracle, as the biggest provider of traditional relational database solutions to government, also provides big data solutions but is careful to make sure its customers actually need them, said Peter Doolan, group vice president and chief technologist for Oracle Public Sector. "We have to be very careful when talking to our customers because most of them will think we're trying to recast big data in the context of relational databases," he said. "Obviously, we like to speak about that, but to fully respect big data, we do cast it as a very different conversation, with different products and a different architecture."

Doolan begins by asking clients about the four Vs listed above and determines whether an agency's problems fall into those categories. Frequently, he said, clients discover that their problems can be solved with their existing infrastructure. Many times, the problems are related to content management rather than big data. It's also a matter of explaining just how complex big data solutions can be.
“Lots of people seem to think that big data is like putting Google search into their data and then surfing through that data just as they would a desktop Google search,” Doolan said. “But a Hadoop script is definitely not a trivial piece of software.” Along those lines, then, big data solutions should be applied to specific mission needs rather than viewed as a panacea for all of an agency’s data needs. It’s a complementary process rather than a replacement for current database searches. “We’re at the beginning of the hype cycle on big data,” Wickizer said. “The danger is that more and more enterprises will rush to adopt it, and then the trough of disillusionment will hit as they realize that no one told them of the custom coding and other things they needed to do to make their big data solutions work. This is important as agencies make partner decisions; they need to be critical about who they work with. However, he also sees big data as an inevitable trend that all organizations will eventually need to be involved in. In light of the mass of unstructured data that’s starting to affect them, “the writing is on the wall for the old approaches,” Wickizer said. “We’re at the beginning of the next 10-year wave [in these technologies], and over that time, we’ll end up doing decision support in the enterprise much differently than we have in the past.”
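To give a feel for the programming model behind Hadoop that Gourley and Doolan describe, here is a tiny, in-memory imitation of the classic MapReduce word count. A real Hadoop job expresses the same map and reduce steps in Java or through higher-level tools and distributes them across many machines, so treat this purely as a conceptual sketch.

```python
# In-memory imitation of MapReduce word count: map emits (word, 1) pairs,
# the shuffle groups them by key, and reduce sums each group.
from collections import defaultdict

def map_phase(document):
    return [(word.lower(), 1) for word in document.split()]

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

documents = ["big data is not just a bigger database",
             "big data is a new architecture"]
pairs = [pair for doc in documents for pair in map_phase(doc)]
print(reduce_phase(shuffle(pairs)))
# e.g. {'big': 2, 'data': 2, 'is': 2, 'not': 1, ...}
```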
<urn:uuid:46d87a3e-e999-48a7-8115-ceecf2f6190d>
CC-MAIN-2017-09
https://fcw.com/microsites/2012/snapshot-managing-big-data/01-big-data-techniques.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170434.7/warc/CC-MAIN-20170219104610-00188-ip-10-171-10-108.ec2.internal.warc.gz
en
0.957949
1,791
3.28125
3
Wally Vahlstrom brings more than 40 years of electrical engineering experience to his position as the director of technical services for Emerson Network Power’s Electrical Reliability Services group, where he is responsible for failure investigation work, conformity assessment services, power system studies and reliability analysis. A service professional is working on a piece of electrical equipment. He’s done it hundreds of times before, but in a moment of carelessness or by total accident, an arc flash occurs. Most of us think this couldn’t happen to us. The truth is that electrocutions are the fourth leading cause of traumatic occupational fatalities, and according to the American Society of Safety Engineers, more than 3,600 workers suffer disabling electrical contact injuries annually. To help prevent electrical injuries and deaths, the new 2012 version of the National Fire Protection Association (NFPA) 70E: Standard for Electrical Safety in the Workplace makes important changes in the areas of safety, maintenance and training. For professionals who build and manage data centers, the following articles will have the most impact: Work Involving Arc Flash Hazards Employees working on electrical equipment without adequate Personal Protective Equipment (PPE) risk serious injury or death when an electrical arc occurs. Even someone standing more than 10 feet from the fault source can be fatally burned. Arc flash labeling is key to preventing injuries. Data centers that have not been vigilant in their labeling may need to create new arc flash hazard labels to meet the expanded requirements of NFPA 70E. Labels now must include nominal system voltage, arc flash boundary and at least one of the following: - Incident energy and corresponding working distance - Minimum arc rating of clothing - Required level of PPE - Highest Hazard/Risk Category (HRC) for the equipment Additionally, the previous 4-foot default value for arc flash boundary has been changed to require that the arc flash boundary distance be calculated for all locations where the voltage is greater than 50 volts. The new standard makes it clear that label information should be based on calculated values rather than using the table values provided in NFPA 70E. While there previously was no industry standard, DC equipment labeling is now required. Equations used by state-of-the-art software for calculating DC arc flash values are shown in Section D.8. Using such software is recommended for determining incident energy and HRC. Safety-Related Work Practices Overall, 2012 NFPA 70E requires more documentation and more training than in the past. This includes documenting the required meeting between employers and contractors to communicate known hazards and installation information the contractor needs to make assessments. The new standard also strengthens its provision that employees who work around (not just on) energized electrical equipment must be safety trained. New requirements include: - Performing annual inspections to ensure each employee is complying with all safety-related work practices - Retraining employees at intervals not to exceed three years - Documenting training content - Auditing your safety training program and field work at least every three years Regarding testing and maintenance practices, NFPA 70E now specifies that only qualified persons may work within the limited approach boundary. The definition of qualified includes required training, demonstrated skills, and knowledge of installation and hazards. 
Additionally, ground fault circuit interrupters must be used where required by local, state and federal codes or standards (OSHA 1910). General Maintenance Requirements Before the 2012 edition, NFPA 70E requirements for conducting maintenance on electrical equipment were specified only for overcurrent protective devices. Now, your organization must maintain in a legible condition and keep current a single-line diagram for the electrical system. Additionally, electrical equipment must be maintained according to manufacturers’ instructions or industry consensus standards to reduce the risk of failure and the subsequent exposure of employees to electrical hazards. The standard also now requires showing that overcurrent protective devices have been maintained, tested and inspected according to manufacturers’ instructions or industry consensus standards such as ANSI/NETA 2011 Standard for Maintenance Testing Specifications and NFPA 70B Recommended Practice for Electrical Equipment Maintenance. Benefits of Change Understanding the 2012 NFPA 70E changes that most affect data centers will help you bring your electrical safety programs into compliance for the protection of workers and visitors to your facility. Implementing a comprehensive electrical safety program will: - Reduce injuries and fatalities - Reduce lost worker productivity - Avoid higher insurance prices and costly fines from OSHA, which looks to NFPA as a national consensus standard for electrical safety (and references its provisions in federal citations) - Promote optimum system performance and efficiency To fully understand its implications for your data center, Emerson Network Power encourages you to purchase a copy of 2012 NFPA 70E from the NFPA website. Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
<urn:uuid:760cce8f-5106-4c57-84d6-d14f1173e66a>
CC-MAIN-2017-09
http://www.datacenterknowledge.com/archives/2012/09/21/new-nfpa-workplace-electrical-safety-provisions-require-changes-in-data-center-practices/?utm-source=feedburner
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171632.91/warc/CC-MAIN-20170219104611-00240-ip-10-171-10-108.ec2.internal.warc.gz
en
0.913734
1,011
2.640625
3
How Smartphones Can Help Disaster Relief Efforts

Smartphones have become an essential piece of technology in today's organizations. These versatile devices not only enhance communication, collaboration, and productivity, they can also be key to getting help to the people who need it most. In this brief video, discover why International Medical Corps uses BlackBerry smartphones and applications to get relief to vulnerable populations quickly and effectively. Learn how these devices help them share information quickly and efficiently, enhancing their relief efforts.
<urn:uuid:e48e8a59-582c-446d-91e4-73610a2f3f93>
CC-MAIN-2017-09
http://www.bitpipe.com/detail/RES/1346437990_510.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170521.30/warc/CC-MAIN-20170219104610-00536-ip-10-171-10-108.ec2.internal.warc.gz
en
0.910754
105
2.75
3
Tainted Tomato Cases Reach 552 read the headline of a June 20, 2008, MSNBC News Services story. The Associated Press reported on Aug. 29, 2007, Spinach Recalled Over Salmonella Fears. This was preceded on May 10, 2006, by another MSNBC News Services story with the headline, Toxic Pet Food Kills Dozens of Dogs. Though some of these stories were later shown inaccurate, these and other high-profile, food-safety crises vividly demonstrate how difficult it is to provide clear and consistent information to affected individuals, the professionals who come to their aid and the public at large. The risk communications surrounding these events were often inaccurate, inconsistent, inadequate or late. Communication missteps hampered community, state and national responses to the threats; in some cases, they resulted in widespread public confusion. The headlines above speak to the challenges and perils of communicating to a public and media who often demand immediate answers, scientific certainty and reassurances. Organizations that routinely monitor food-safety communications -- such as Booz Allen Hamilton's Food Safety Workgroup -- note that in the case of the bagged-spinach recall, federal public health authorities were faced with a communications dilemma. Although a single manufacturer was the target, that one source had packaged spinach under multiple brands, which led to confusion about and resistance to the recall. The result: Many consumers believed that no spinach was safe, and more broadly, many avoided buying or eating not only spinach, but also a wide variety of green leafy produce. In the case of the pet-food recall, initially federal authorities didn't correctly identify the problem's scope, and that impacted the credibility of their later communication to the public and media. In fact, some authorities primarily reposted the manufacturers' press releases, while consumers complained loudly that they lacked correct or sufficient information about which products or manufacturers used the toxic ingredient. The steady stream of images and sound bites in the media, as well as an online outcry from pet owners who lost their beloved pets, only intensified the crisis. These unintended consequences of food recalls and warnings illuminate the need for advanced planning, clear strategies, enhanced coordination and risk tools to effectively communicate with the public before, during and after food-safety defense crises. Research and experience show that the key to successful risk communication is for emergency management and public health systems to respond to public perceptions and to establish, maintain and increase their own credibility and therefore the public's trust. The public must believe that government and commercial entities are working together, doing everything they can and staying on top of the situation. But most importantly, the public wants to see evidence that these entities have viable plans for early detection, rapid assessment, timely communications and that they issue clear, accurate guidance on what the public should or shouldn't do -- e.g., "Do not eat food product X, which might include specific identification guidance, or find a substitute." Salmonella is a group of bacteria that causes food poisoning. Every year, 40,000 salmonella cases are reported in the United States, according to the Centers for Disease Control and Prevention. 
In her 2004 New York Times article, Jennifer Wilkins suggested the United States is particularly vulnerable to both unintentional food safety and intentional food-defense crises, and pointed to several factors that contribute to this vulnerability. For one, the United States is importing increasing amounts of food. Also in 2004, James Zirin noted in The Washington Times that while half of the food consumed in this country is imported, the U.S. Department of Health and Human Services estimates that less than 10 percent of it is ever inspected. Moreover, contamination by food-borne diseases or a terrorist act doesn't have to occur within the United States to have a devastating affect on its food supply. These factors may have been top-of-mind for then departing Secretary of Health and Human Services Tommy Thompson when he publicly acknowledged the vulnerability of the U.S. food supply. In an Associated Press report, Thompson said, "For the life of me, I cannot understand why the terrorists have not attacked our food supply because it is so easy to do." Thompson was quoted similarly in 2001, two months after the World Trade Center attacks. At that time, and according to a story by Frederick Golden in Time, Thompson told Congress, "I am more fearful about this [organized attack on food and crops in North America] than anything else." Whether the causes are intentional or unintentional, communicating about food-safety defense crises falls within the risk-communications parameters. Numerous terms are used in the subject area of risk, which can be confusing. The President's Commission on Risk Assessment and Risk Management final report refers to risk assessment as the process of organizing and evaluating information about the nature, strength of evidence and likelihood of adverse health or ecological effects from one or a combination of threats. Risk management is the process of analyzing, selecting, implementing and evaluating actions to reduce risk. In this article, the term risk communication means the interactive exchange of information and opinion among individuals, groups and institutions about food-safety defense risk. Risk communication is a science-based approach for communicating effectively in emotionally charged, high-stress or controversial situations. Also worth noting is the substantive growth in the risk-communications body of knowledge over the last 20 years. Still, risk communication continues to be a powerful, albeit neglected, tool for policymakers and emergency management professionals. Although salmonella is primarily transmitted through meat products, there has been an influx of vegetable-related cases in the last year. A thorough understanding of the principles in risk-communication research and practice can inform and guide communication decision-makers in managing message content, messenger characteristics and channel effectiveness. The three key concepts are: perceptions of risk; mental noise; and anticipation, preparation and practice. In his research, leading risk-communication expert Peter Sandman has identified at least 20 risk-perception or fear factors that can affect how people in a high-concern state of mind view the magnitude of a food-safety defense risk, and therefore its acceptability. 
For example, risks are generally more worrisome, feared and less acceptable if they are perceived to be: o under the control of others, especially if they aren't trusted; o unfair or inequitable; o man-made as opposed to natural in origin; o unfamiliar or exotic; o uncertain; and o a threat to children. Further, risk-perception theory counters the conventional notion that facts speak for themselves. People commonly accept high risks, yet at the same time, they fear or become outraged over less-likely risks. For example, once a risk becomes familiar or is no longer new, it's often less of a concern. In their perceptions research, noted risk-communication experts David Ropeik and Paul Slovic, show that at the time bovine spongiform encephalopathy (BSE), commonly known as mad cow disease, first appeared in Germany, a public opinion survey found that more than four in five Germans (85 percent) thought it was a serious threat to public health. The same poll was conducted concurrently in the United Kingdom, where BSE had been around for years and had killed numerous animals and more than 100 people. The UK poll found that only four in 10 British citizens (40 percent) considered mad cow disease a serious threat. More recently in 2003, Steve Raabe of The Denver Post and Matthew Walter of the Arkansas Democrat-Gazette, reported on how the U.S. Department of Agriculture (USDA) announced a presumptive diagnosis of BSE in an adult Holstein cow located in Washington state. Officials traced the animal's origin to Canada using an ear tag identification number. The immediate fallout was dramatic: Fearful consumers in the United States and Canada stopped buying beef, and exports from both countries were stalled for months. Sales in related industries similarly declined. Tyson Foods Inc., the largest beef producer in the United States, estimated that BSE cut its beef segment operating income by $61 million in 2003. The USDA's response in this case was fast and dramatic, and it averted a much larger and more devastating crisis. First, the USDA was proactive in communications by announcing the cow's presumptive diagnosis before labs in England had verified the BSE. Second, despite a relatively low risk level to other livestock and human health, the agency publicly announced proposals to cut the risk even further, including more testing, additional monitoring and tighter controls for imported cattle. Sources said these proactive communications in 2003 helped avoid a UK-style outbreak and more damaging impacts. According to the UK's Department for Environment, Food and Rural Affairs, BSE affected 180,625 British cattle and a virtual worldwide ban on British beef cost farmers billions of dollars. When individuals are severely stressed and/or otherwise highly concerned about a risk, such as food-safety defense, their ability to process information is typically reduced by up to 80 percent. These concerns -- or mental noise - serve to distract individuals (consumers) and diminish their ability to effectively hear, understand and remember messages. Constructing and delivering information to a stressed population during a food-safety defense crisis is therefore radically different from normal communication. Noted social scientist Dr. Elaine Vaughan, points to a number of possible negative consequences when risk-communication techniques and approaches aren't appropriately applied, including: o The audience is confused by the message. o Strong reassurances are issued prematurely. 
o Fears are raised without a simultaneous increase in self-efficacy or confidence in risk-reduction steps.
o Contradictory messages are sent.
o Public perceptions are ignored and concerns are not addressed.
o The public refuses to follow recommendations.
o The public's confidence declines in the assessment of risk by experts.
o There's unnecessary social and economic disruption.

Planning is essential for successful risk communication about food-safety defense. Numerous communications experts -- such as Sheldon Krimsky, Alonzo Plough, Caron Chess and R.C. Brownson -- advocate for risk-communication planning that employs specific techniques and approaches rather than generic program goals; is based on a working knowledge of the audiences; provides a framework for addressing audience concerns; and, most of all, is flexible and allows for the unexpected. In addition, emergency management professionals must have a clear understanding of the purpose of the communication, the audience and the fundamental message before engaging in risk communication.

Emergency management professionals first must know the purpose of the risk communication. The initial impetus behind communicating food-safety defense risks is usually reactive (risk communication activities in response to a public health and safety concern can be either reactive or proactive, depending on the situation). If a risk communication is to take place, is the goal to inform or persuade audiences? Each situation is unique, but with few exceptions, risk communication is used to assist individuals, communities and society at large to prevent, reduce or mitigate their risk.

For risk communication to be effective, knowledge about the intended audience is also essential. The same risk may have to be communicated to multiple audiences, including scientists, the general public, mass media, health-care professionals, private organizations, administrators or elected officials, and therefore must be tailored to the needs of each. It may also be necessary to use different channels to reach these various audiences. Moreover, the complexity and uncertainty of the scientific issues can mean that literacy and numeracy of audiences are especially important considerations if they are to understand and act on the messages.

The third element of risk communication is creating the message and preparing the messenger. Based on the science, purpose, audience and situation, emergency management professionals must decide on the main message to communicate. In risk communication, it could be that there's little reason for concern, a great need for concern or that the potential risk is unknown. Planning is essential in developing and using consistent messages. It's important to recognize, however, that the risk-communication message may have to change over time because of the situation's uncertainty and the possibility that new information will be uncovered.

Much of the success of effective risk communication about food-safety defense is predicated on the amount of work and detailed thinking that goes into planning and preparation before the crisis occurs. The more questions that can be asked and answered during this stage, the better the outcome will be. This is especially true regarding high-visibility issues, such as food-safety defense. Planning questions can be framed as elements to make a food-safety defense risk communication plan easy to use, flexible and easily adaptable for evolving situations.
The following are some sample questions developed by Booz Allen Hamilton's Food Safety Workgroup:

What do we want our risk communication to accomplish?
o Increase collaboration with industry/growers to create a "cascading" communication effect with suppliers (grocery, wholesalers, food service) to increase message consistency and accuracy.
o Proactively engage print, broadcast and electronic media by providing stories and amplifying messages through effective partnerships.
o Harness and integrate the power of online media such as blogs, webcasts and other electronic media into one risk-communication plan.
o Train representatives in the delivery of key messages.
o Increase communication and information flow with manufacturers, distributors and retailers about sourcing, containing and limiting distribution of the product (food).
o Communicate at the point of sale to get the prevention and mitigation message to consumers at the time of the purchase decision.

What's an optimal combination of strategies?
o Establish a public-facing Web site as a portal for accessing information, messages, materials, etc.
o Assess which communication channels are most viable based on geography, type of product, type of consumer and so forth.
o Gather and use historic data from previous recall efforts. When was it done well? Are there examples of when, how and why the public and food-supply chain reacted favorably or unfavorably (following guidance) to a recall?

How can we assess progress and impact?
o Conduct formative research and benchmark against best practice in risk communication.
o Communicate lessons learned by monitoring media response; evaluating effectiveness of messenger, message and means; assessing risk-communication gaps and strengths; and making real-time adjustments when needed.
o Monitor blogs and other electronic media: Which has the highest traffic and the most chatter (both positive and negative)?
o Analyze media placement and coverage and its implications for the overall risk-communication strategy.

What's the real or perceived benefit if we execute our plan?
o Improve communication to strengthen reputation/credibility with consumers, industry, retailers, Congress, partners and other key stakeholder groups.
o Establish a communications presence in the market as the go-to source for information, updates and ongoing guidance.
o Use short- and long-term metrics to show impacts on building ownership of the process with the target partner groups and progress on the food-protection plan.

Communicating food-safety defense risk is a complex endeavor with multiple perspectives, approaches and components. There's no single, standard food-safety defense situation or plan. The affected individuals may live in geographic proximity or be scattered throughout the country. The type of exposure and its extent and potential risks, possible actions that can be taken and so forth are highly variable, and each situation is unique. The best practices outlined in this article present a flexible, multicomponent approach for addressing the public's concerns, establishing trust and producing an informed public that's involved, interested, thoughtful, solution oriented and collaborative. The unintended consequences of a food crisis illuminate the need for advanced planning, clear strategies, enhanced coordination and risk tools to effectively communicate with the public before, during and after food-safety defense crises.
Research and experience show that the key to successful risk communication is for emergency management and public health systems to respond to public perceptions and to establish, maintain and increase their own credibility and therefore the public's trust. This can be accomplished with advanced risk and crisis communication planning, clearly developed strategies and the development and implementation of communication tools and tactics to effectively reach the public before, during and after a food safety or food defense crisis.
<urn:uuid:c219a64a-d199-462e-8047-4fed368633c9>
CC-MAIN-2017-09
http://www.govtech.com/featured/Food-Safety-Principles-Before-and-After.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174163.72/warc/CC-MAIN-20170219104614-00464-ip-10-171-10-108.ec2.internal.warc.gz
en
0.943931
3,288
2.578125
3
Los Alamos builds time machine to the way the Web was - By Henry Kenyon - Aug 08, 2011

The Internet is constantly shifting, with many websites changing their content and appearance almost daily. Such a protean environment poses a problem for archivists or anyone interested in seeing what a home page looked like at a specific date and year. But a new application and standard could potentially turn Web browsers and other software into time machines. A team of computer scientists at the Los Alamos Research Library in New Mexico and Old Dominion University (ODU) in Virginia has written a technical specification that embeds the concept of time into Internet applications. The specification is part of the team’s proposed information framework, known as Memento, which creates an application that allows the version control of Web pages, databases and other online information sources. Work on Memento originated from collaboration between Los Alamos and ODU on digital preservation methods to create long-term data repositories, said Herbert Van de Sompel, leader of the Los Alamos team. The interest in archiving Web pages grew out of this initial work. There are actually many Web archives, often part of a national museum or archive such as the Library of Congress or the British Museum, Van de Sompel said. But to find old Web pages, users must know where to look. Memento uses a protocol that allows searches across all of these archives, he said. The Internet Archive, for example, began in the 1990s. Much of the archiving technology currently in use relies on Web crawling software to take snapshots of Web pages for storage. Most Internet archives typically only archive/search sites in their own countries, Van de Sompel said. Content management programs such as Wikipedia also record and store versions of a page over time. Memento allows users to access a page as it appeared on a specific date. There are also other ways now available to collect and record Web pages as they appeared on a specific date, but there is no simple, one-step process to look up old pages. “There is a sizable portion of the Web of the past that is available. But accessing it is cumbersome and that’s where Memento comes in,” he said. Memento does not search at the archive level. Instead, it works by time-stamping a page version so that it can be referenced at a later date. Van de Sompel said that one application could be in content management and version control systems such as Wikipedia by using a uniform resource identifier, or URI. Uniform resource locators, or URLs, are a subset of URIs and include familiar schemes such as http:// and ftp://. Memento can also search through dates using a slider graphic. For example, a user selects a newspaper Web site and moves the slider back to a specific date, and Memento will call up the archived page. By entering a site’s URI into Memento, users can search multiple archives via an HTTP-based search tool. The technology automatically searches all of the Internet’s Web archives and directs the user to the archived copy, no matter its location. This is a considerable advantage over current searches, in which users must know the locations of the archives to access their data. “You have to consult each of [the archives] through search,” Van de Sompel said. One of the Memento team’s goals is to make the time-searching capability a standard Internet protocol. However, before it becomes widely accepted, the researchers have developed a plug-in that can be used on the Firefox Web browser.
There is also a mobile version for the Android operating system under development, Van de Sompel said. Memento has many possible government applications for archiving and storage. For example, in the Netherlands, Van de Sompel said many municipalities are actively archiving their Web presences because they anticipate legislation will soon require them to do so. This process has attracted a number of companies that are helping Dutch town and city governments archive their sites. In the United Kingdom, a law requires government Web pages to have active hyperlinks. Often, old links go dead as a site is updated and data changes. Memento could provide a solution to this by allowing administrators to track back to a time when a link was last active.

Link data also changes over time, Los Alamos computer scientist Robert Sanderson said. Sites such as data.gov.uk, which list statistical and economic data such as national gross domestic product, change their information over time. So the links for a nation's GDP would change over the years. A tool such as Memento would allow researchers to locate that specific data, Sanderson said. In the United States, the Citability group promotes accessibility to public Web sites and archived data. "This is a perfect playing field for Memento," Van de Sompel said. On the commercial side, he said a large company in the Washington, D.C., metro area is talking to the Los Alamos team because it is very interested in archiving its internal Web sites, which include classified and nonclassified data.

To ensure that the capability gets more attention outside the world of archivists, Los Alamos is working on a large-scale collaborative effort with the International Internet Preservation Consortium on a project to integrate all of the consortium's archives. The Los Alamos team is one year into the process, Van de Sompel said. However, he cautioned that even if it is adopted as a standard, that does not mean it will be widely used or accepted. Despite these challenges, ultimately Van de Sompel would like to see Memento or a similar archival software tool automatically accessible through popular Web browsers or websites such as Twitter or Wikipedia.
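In practice, the time dimension Memento adds rides on ordinary HTTP content negotiation: a client asks a "TimeGate" service for a page and names the date it wants in an Accept-Datetime request header, and the gate redirects to the archived snapshot closest to that date, which identifies itself with a Memento-Datetime response header. The sketch below is only an illustration of that exchange from the client side; it assumes the Python requests library, and the TimeGate URL is a placeholder rather than one of the Los Alamos team's actual services.

```python
# Illustrative sketch of Memento-style datetime negotiation. The TimeGate URL
# below is a placeholder; any server implementing the protocol should behave
# similarly. This is not the team's own tooling.
import requests

TIMEGATE = "https://example-timegate.org/timegate/"  # hypothetical endpoint

def get_archived_copy(url, when):
    """Ask a Memento TimeGate for the copy of `url` closest to `when`.

    `when` must be an HTTP-date string, e.g. "Tue, 08 Aug 2006 00:00:00 GMT".
    """
    resp = requests.get(
        TIMEGATE + url,
        headers={"Accept-Datetime": when},  # the header Memento adds to HTTP
        allow_redirects=True,               # the TimeGate redirects to a snapshot
        timeout=30,
    )
    resp.raise_for_status()
    return {
        "archived_url": resp.url,
        # The archive reports the snapshot's own timestamp in this header.
        "memento_datetime": resp.headers.get("Memento-Datetime"),
        "content": resp.text,
    }

if __name__ == "__main__":
    copy = get_archived_copy("http://www.example.com/",
                             "Tue, 08 Aug 2006 00:00:00 GMT")
    print(copy["memento_datetime"], copy["archived_url"])
```

Because the negotiation happens in standard HTTP headers, a browser plug-in or crawler can layer this behavior on top of existing archives without requiring the archives themselves to change.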
<urn:uuid:f63a47fc-c86d-4493-91e6-62381e67c109>
CC-MAIN-2017-09
https://gcn.com/articles/2011/08/08/memento-los-alamos-internet-time-machine.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171053.19/warc/CC-MAIN-20170219104611-00585-ip-10-171-10-108.ec2.internal.warc.gz
en
0.929883
1,213
3.296875
3
This past spring, Google began feeding its natural language algorithm thousands of romance novels in an effort to humanize its “conversational tone.” The move did so much to fire the collective comic imagination that the ensuing hilarity muffled any serious commentary on its symbolic importance. The jokes, as they say, practically wrote themselves. But, after several decades devoted to task-specific “smart” technologies (GPS, search engine optimization, data mining), Google’s decision points to a recovered interest among the titans of technology in a fully anthropic “general” intelligence, the kind dramatized in recent films such as Her (2013) and Ex Machina (2015). Amusing though it may be, the appeal to romance novels suggests that Silicon Valley is daring to dream big once again. The desire to automate solutions to human problems, from locomotion (the wheel) to mnemonics (the stylus), is as old as society itself. Aristotle, for example, sought to describe the mysteries of human cognition so precisely that it could be codified as a set of syllogisms, or building blocks of knowledge, bound together by algorithms to form the high-level insights and judgments that we ordinarily associate with intelligence. Two millennia later, the German polymath Gottfried Wilhelm Leibniz dreamed of a machine called the calculus ratiocinator that would be programmed according to these syllogisms in the hope that, thereafter, all of the remaining problems in philosophy could be resolved with a turn of the crank. But there is more to intelligence than logic. Logic, after all, can only operate on already categorized signs and symbols. Even if we very generously grant that we are, as Descartes claimed in 1637, essentially thinking machines divided into body and mind, and even if we grant that the mind is a tabula rasa, as Locke argued a half-century later, the question remains: How do categories and content—the basic tools and materials of logic—come to mind in the first place? How, in other words, do humans comprehend and act upon the novel and the unknown? Such questions demand a fully contoured account of the brain—how it responds to its environment, how it makes connections, and how it encodes memories.
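The Aristotelian program described above -- fixed categories, syllogisms as building blocks, an algorithm to chain them into conclusions -- is simple enough to mimic in a few lines, which is also what makes its limits visible: the program can only ever rearrange the categories it is handed. The sketch below is a toy illustration, not anyone's actual reasoning engine; the facts and rules are invented for the example.

```python
# Toy forward-chaining over syllogism-like rules: derive new facts from old.
# Facts and rules are invented for illustration. Note that the program can
# only manipulate categories it is given; it cannot invent new ones.
facts = {("Socrates", "is", "man")}
rules = [
    # "All men are mortal": if X is a man, then X is mortal.
    (("?x", "is", "man"), ("?x", "is", "mortal")),
    (("?x", "is", "mortal"), ("?x", "is", "bound_to_die")),
]

def chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:                                   # apply rules to a fixpoint
        changed = False
        for (_, verb, obj), (csubj, cverb, cobj) in rules:
            for (fsubj, fverb, fobj) in list(derived):
                if fverb == verb and fobj == obj:    # premise matches this fact
                    new = (fsubj if csubj == "?x" else csubj, cverb, cobj)
                    if new not in derived:
                        derived.add(new)
                        changed = True
    return derived

print(chain(facts, rules))   # includes ("Socrates", "is", "mortal"), and so on
```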
<urn:uuid:7bc0b6ca-5c6d-40cd-97e3-45349719d284>
CC-MAIN-2017-09
http://www.digitalreasoning.com/buzz/will-reading-romance-novels-make-artificial-intelligence-more-human.2466048
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171670.52/warc/CC-MAIN-20170219104611-00285-ip-10-171-10-108.ec2.internal.warc.gz
en
0.948544
478
2.546875
3
This story was originally published by Data-Smart City Solutions. What could you do with access to the complete — and massive — collection of every public tweet ever tweeted? Earlier this year, Twitter posed this question to academic and research institutions across the globe, ultimately planning to reward some of the best ideas with the first Twitter Data Grants, a pilot initiative that would provide the winning research projects access to that data. One of those winners is a study of foodborne gastrointestinal illness, taking place at Harvard Medical School under the leadership of Dr. John Brownstein. The group works within the budding field of “digital epidemiology.” As Brownstein described it, “We know that people use technology to communicate illness. … Our goal is to take that information, organize it and provide a new view of public health.” Twitter will provide the researchers with data associated with tweets, culled from a set of keywords they have identified, including “diarrhea,” “nausea” and a range of other words related to food and feeling sick. While all these tweets are public — in other words, not protected by users — Twitter does not typically allow for such extensive access. Results from keyword searches, for example, are limited to a certain number of tweets, and the real-time streaming function Twitter offers also yields only a subset of the tweets that meet the searcher’s criteria. The information will enable the HMS team to search through the tweets to identify trends and patterns that are potentially associated with known incidents of foodborne illness, in the hopes of suggesting a method for predicting outbreaks — “a new way of monitoring for any issues at restaurants [and] potentially contaminated products,” according to Brownstein. Because a tweet referencing post-dinner queasiness is not a perfect indication of whether someone indeed experienced a bout of food poisoning, for instance, the group plans to reach out to the authors of relevant tweets, in order to gather more information about their gastrointestinal experience. This more precise data will ideally facilitate a more accurate predictive model. The researchers are considering partnering with local public health departments for purposes of this outreach. Dr. Brownstein indicated that the primary challenge to working with the data in this way is “a ‘signal and the noise’ issue.” “We’re talking about smaller clusters of food related events,” he said. “The question is if that’s enough data to identify an event that’s taking place and to be a useful public health signal.” While other public health researchers and officials may not be able to take advantage of as complete a body of tweets as this study, the digital epidemiologists at Harvard Medical School were able to do similar work previously with the more limited set of tweets from Twitter’s streaming feature. And Brownstein noted that Twitter is becoming more open with its data, so this opportunity may open up to others in the future. In fact, in its initial announcement of the Data Grants, Twitter acknowledged “it has been challenging for researchers outside the company who are tackling big questions to collaborate with us to access our public, historical data.” As Twitter moves in the direction of “connecting research institutions and academics with the data they need,” other powerful and perhaps surprising ways this data can be used remain to be seen.
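Mechanically, the first pass Brownstein describes -- sift the tweet stream for symptom language, then watch for unusual clusters -- is a filtering and counting problem. The sketch below illustrates only that step; the keyword list, the record format and the three-standard-deviation threshold are invented for the example and are not the HMS team's actual criteria or pipeline.

```python
# Toy illustration of keyword filtering and day-by-day counting for tweets.
# Keywords and the 3-sigma threshold are arbitrary choices for the example.
from collections import Counter
from statistics import mean, stdev

KEYWORDS = {"diarrhea", "nausea", "vomiting", "food poisoning", "stomach ache"}

def mentions_symptom(text):
    lowered = text.lower()
    return any(word in lowered for word in KEYWORDS)

def daily_symptom_counts(tweets):
    """tweets: iterable of (date_string, text) pairs, e.g. ("2014-05-01", "...")."""
    counts = Counter()
    for day, text in tweets:
        if mentions_symptom(text):
            counts[day] += 1
    return counts

def flag_unusual_days(counts, sigmas=3.0):
    """Return days whose symptom-tweet count sits well above the baseline."""
    values = list(counts.values())
    if len(values) < 2:
        return []
    baseline, spread = mean(values), stdev(values)
    return [day for day, n in counts.items() if n > baseline + sigmas * spread]
```

The hard part, as the article notes, is not the counting but the follow-up: deciding which of the flagged spikes reflect real outbreaks rather than noise.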
<urn:uuid:c2f57076-36f6-43ab-bb28-f14b9e41da40>
CC-MAIN-2017-09
http://www.govtech.com/internet/Can-Twitter-Help-Predict-Foodborne-Illnesses.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172831.37/warc/CC-MAIN-20170219104612-00637-ip-10-171-10-108.ec2.internal.warc.gz
en
0.942359
700
2.78125
3
For anyone bearing the brunt of an Internet hoax that just won't die, there's little more to hope for in terms of potential relief than a story on Snopes.com stating unequivocally that the hoax is indeed a hoax. After all, Snopes is the gold standard when it comes to debunking nonsense. Computer scientist Bjarne Stroustrup recently received that maximum measure of support. Yet the man who designed C++ -- first released commercially in 1985 -- remains resigned to the fact that a fictitious interview he did not give in 1998 about nefarious motivations he did not have for developing the programming language will nonetheless continue to provide him genuine irritation. He's right, of course, and we'll get to his reasoning in a moment. Here's the background for those unfamiliar with the tale: On June 6, Snopes published its findings about the claim that "C++ designer Bjarne Stroustrup admitted in an interview that he developed the language solely to create high-paying jobs for programmers." The Snopes verdict: false. Here's an example of the charge Snopes unearthed from an August, 2009 email: "I just ran across a 'leaked' interview with programming language C++ author Bjarne Stroustrup where he clames (sic) to have developed C++ for the express intent of creating a demand for programmers after IBM swamped the market with C programmers in the 90's. Basically I am asking if there is any truth to this. If so this guy duped an entire industry." There isn't and he didn't. As the bogus legend goes, Stroustrup was interviewed in 1998 by the IEEE's Computer magazine, which opted not to publish the most incendiary passages "for the good of the industry." If you'd like to read those fictitious passages, there are excerpts available in this Buzzblog post -- http://tinyurl.com/86t2hwg -- and the entire work can be found within the Snopes article that brands it a fake: http://tinyurl.com/23tq9d3. I found it interesting reading primarily because the "interview" was so over-the-top as to call into question why anyone would possibly think it real. Yet they do ... to this day. So back to what Stroustrup thinks about Snopes taking up his cause. His website includes a couple of warnings to would-be correspondents that they should not hold their breath awaiting a prompt reply - "I get a lot of email ... sometimes I get overwhelmed" -- yet he needed under an hour to answer my inquiry. It's clear this indignity has gotten under his skin. "This hoax has been doing the rounds for about a decade. It seems that several times every year someone (usually someone who dislikes C++ for some reason) finds it funny and re-posts it somewhere," Stroustrup says. "By now, I'm a bit tired of it." I told him I found it difficult to fathom that people actually believe he said the things he didn't say. "Yes, it is amazing. But every year I get several emails asking me to confirm it or say that it is wrong. It's a FAQ :-)" He means that literally, too, as in it's listed among the FAQs on his website. So does he anticipate that the Snopes debunking will do any good? "Not really. The main effect will probably be to add one more source of the 'interview.' " Know of another similarly frustrating tale? The address is [email protected].
<urn:uuid:1f25f7bc-1cb4-4b59-acb8-81981e1f5484>
CC-MAIN-2017-09
http://www.networkworld.com/article/2189385/data-center/snopes-com-debunks-old-c---hoax--but----.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172831.37/warc/CC-MAIN-20170219104612-00637-ip-10-171-10-108.ec2.internal.warc.gz
en
0.969125
743
2.578125
3
Microsoft Internet Explorer comes with a Java virtual machine and accompanying class packages. Multiple security vulnerabilities have been found in the Java environment. Some of these allow an attacker to deliver and run arbitrary code on the Internet Explorer or Outlook user’s system when a hostile web site or mail message is viewed. The latest versions of the software are affected by the flaws, but Outlook (Express) users aren’t vulnerable to the mail-based attack if the security zone of mail is set to Restricted. This is the default case with Outlook Express 6 and Outlook with the latest security updates. In this case Java Applets aren’t shown at all in mail messages; if Applets are shown, then the user is vulnerable.

Java Applets are small Java programs that can be embedded inside HTML documents. Applets are generally secure because the Java environment enforces strict security policies for them. Applets are enabled by default in most web browsers today. As opposed to normal executable programs, Java Applets don’t contain machine language code but special “bytecode” which is interpreted by the Java virtual machine, a kind of simulated processor. Bytecode doesn’t have direct means of controlling the processor or operating system’s resources. Java applications in general can do file or network operations just like any program. Applets are treated differently; because Applets contain untrusted code supplied by web sites (or anyone sending you mail), they are run within a strictly bound “sandbox”. They can’t access local files and their allowed network operations are very limited. When the Java environment is implemented correctly, untrusted Applets can’t do anything dangerous.

The flaws discussed here aren’t related to the Java or Applet concepts, but individual implementations of them. There were more than ten (10) different Java vulnerabilities found and reported to Microsoft. Some of these allow file access on the viewer’s system, some allow access to other resources, and some allow delivery and execution of arbitrary program code on the victim system. These attacks can be carried out when a web page or mail message containing a hostile Applet is viewed with Internet Explorer or Outlook. In this case the Applet may upload any program code and start it. The code can do any operations the user can do – read or modify files, install or remove programs, etc. The vulnerabilities are mostly related to native methods and their improper or missing parameter checking. There are also some logical mistakes and some problems in package, field, or method visibility (i.e., public/protected/private). Some of the vulnerabilities deal with system-dependent memory addresses, which makes exploiting them more difficult; some of the more serious ones don’t require such information.

Native methods are pieces of ordinary machine language code contained by Java classes. Technically their code comes from DLLs, but within Java they look like ordinary Java methods. An Applet can’t contain native methods for obvious reasons, but many of the core Java classes contain them. For instance, all file operations are eventually done by native methods. They are used to do operations that aren’t possible or practical to do in pure Java. They may also be used for speed-critical parts of the code. Native methods aren’t bound by the Java security policies and can access the processor, operating system, memory, and file system. Security-wise, native methods are a weak link.
Unlike ordinary Java code, they can contain traditional programming flaws like buffer overflows. If an untrusted Java Applet can invoke a native method containing a security flaw, it may be able to escape its sandbox and compromise the system. In most Java implementations there are a lot of native methods scattered in the core Java classes. Many of them are declared private so that an Applet can’t directly invoke them. In some of these cases a hostile Applet may still call another method which in turn may pass some of the parameters to a private native method. If the parameters aren’t checked adequately by the Java code passing them, an Applet might be able to do unwanted operations even if the native method doesn’t have flaws.

Most of these vulnerabilities do not seem to originate from the original Sun Microsystems code, but from the modifications or additions made by Microsoft. Sun’s Java Plug-in was tested against them, but no known exploitable vulnerabilities seem to exist. Any detailed technical information has been left out of this advisory in order to prevent exploitation of the vulnerabilities. Because of its educational value, it may be published later.

Microsoft was first contacted in July 2002 and started its investigation of potential Java vulnerabilities. More of them were found during August and reported to the vendor. Microsoft has acknowledged most of the vulnerabilities and is currently working on a patch to correct them. To protect themselves, Internet Explorer and Outlook (Express) users can disable Java Applets until the patch is released. This can be done in Internet Options -> Security -> Internet -> Custom Level -> Microsoft VM, and selecting “Disable Java”. If you want to use an Applet on a certain web site you trust, you can add the site to the Trusted Sites zone and enable Applets in that zone.
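The recurring pattern behind these bugs -- a privileged low-level routine that assumes its callers have already vetted every parameter -- is not specific to Java, and a short sketch makes it concrete. The Python below is only a stand-in for the Java/native-method boundary described above: the function names, the sandbox directory and the path-based example are invented for illustration, and none of this is Microsoft's code.

```python
# Language-agnostic illustration of the flaw class described above: a
# privileged routine that trusts whatever parameters reach it, wrapped by
# higher-level code that may or may not validate caller input.
import os

SANDBOX_DIR = "/tmp/applet-sandbox"   # pretend this is all the applet may touch
os.makedirs(SANDBOX_DIR, exist_ok=True)

def _privileged_write(path, data):
    """Stand-in for a private 'native' method: it performs the raw operation
    and does no checking of its own."""
    with open(path, "w") as fh:
        fh.write(data)

def save_note_unsafe(filename, data):
    # Flawed wrapper: passes caller-controlled input straight through, so a
    # value like "../../somewhere/else" escapes the intended sandbox.
    _privileged_write(os.path.join(SANDBOX_DIR, filename), data)

def save_note_safe(filename, data):
    # Correct wrapper: validate before the privileged call ever runs.
    full = os.path.realpath(os.path.join(SANDBOX_DIR, filename))
    if not full.startswith(os.path.realpath(SANDBOX_DIR) + os.sep):
        raise PermissionError("path escapes the sandbox")
    _privileged_write(full, data)
```

The fix in each case is the same idea as the safe wrapper: every path from untrusted code to a privileged operation must validate its parameters, because the privileged code itself will not.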
<urn:uuid:4091adc7-e589-4d40-ad87-e8802ac34ec7>
CC-MAIN-2017-09
https://www.helpnetsecurity.com/2002/11/11/vulnerabilities-in-microsofts-java-implementation/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171070.80/warc/CC-MAIN-20170219104611-00281-ip-10-171-10-108.ec2.internal.warc.gz
en
0.915752
1,082
3.296875
3
A Switzerland-based standards group wants to stop the madness of having a different charger for every laptop, and will publish a technical specification for universal laptop chargers early next year. The specification from the International Electrotechnical Commission will cover "a wide range of notebook computers and laptops," and will define elements such as the connector plug, safety and environmental considerations, according to a press release. IEC already deserves some credit for its role in defining a standard for mobile phones two years ago, but that effort was largely assisted by the prevalence of Micro-USB connectors in smartphones. Creating a standard for laptop chargers could be trickier. Today's laptop chargers come in all shapes and sizes, and cater to a wide range of power requirements. Case in point: HP's Chromebook 11 uses a Micro-USB charger, but this solution would be impractical for more powerful laptops with bigger batteries. The USB Implementers Forum is working on its own specification, called USB Power Delivery, that could provide up to 100 watts from a power outlet to a laptop's USB port. But IEC makes no mention of this effort. Laptop makers have good reason to embrace a universal charging standard. In addition to being better for the environment while reducing headaches for users, it could also cut manufacturing costs if companies no longer need to include a charger with every laptop. But on the other hand, proprietary power cord replacements are a source of revenue for many PC manufacturers. Hopefully groups can work together on a speedy solution, one that doesn't just end up in more conflicting standards.

This story, "One Power Cord to Rule Them All: Standards Group Pushes for Universal Laptop Charger" was originally published by PCWorld.
<urn:uuid:cb4e74f3-6fee-483b-af97-712343be1675>
CC-MAIN-2017-09
http://www.cio.com/article/2380075/laptop-computers/one-power-cord-to-rule-them-all--standards-group-pushes-for-universal-laptop-charge.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171706.94/warc/CC-MAIN-20170219104611-00633-ip-10-171-10-108.ec2.internal.warc.gz
en
0.943535
360
2.640625
3
DARPA Focuses On War Injuries In Brain Research
Department of Defense research agency will foot half the bill for the White House’s new R&D initiative into the human brain.

The Defense Advanced Research Projects Agency will be a key participant in a new federal research initiative to better understand and map the human brain. The Brain Research through Advancing Innovative Neurotechnologies (BRAIN) initiative, announced by President Obama on Tuesday, aims to develop new data processing and imaging technologies to help scientists improve their understanding of how brain function is linked to human behavior and learning. It will also address the treatment and prevention of Alzheimer's, schizophrenia, autism, epilepsy, traumatic brain injury and other neurological disorders.

The White House plans to provide $100 million in funding in the first year of the program, half of which will come from DARPA. The agency, which is part of the Department of Defense, is interested in brain research as it relates to post-traumatic stress disorder, as well as brain injuries and recovery. DARPA said new tools are needed to measure and analyze electrical signals and the biomolecular dynamics that support brain function. Given its mission as a defense research agency, DARPA positioned its involvement in the scientific initiative as supporting national security.

"This kind of knowledge of brain function could inspire the design of a new generation of information processing systems; lead to insights into brain injury and recovery mechanisms; and enable new diagnostics, therapies and devices to repair traumatic injury," DARPA director Arati Prabhakar said in a written statement.

DARPA has seven programs underway that tie into the BRAIN initiative. They include a project called Revolutionizing Prosthetics that seeks to advance upper-limb prosthetic technology. So far, the program has resulted in the development of two prototype prosthetic arm systems, which will get added functionality as research progresses. DARPA's Restorative Encoding Memory Integration Neural Device (REMIND) program is geared to determining how short-term memory is encoded to help soldiers recover from memory loss, while its Reorganization and Plasticity to Accelerate Injury Recovery (REPAIR) program aims to create a neuroscience community that uses brain models to treat injuries.

Other areas of investment and research by DARPA include the impact of stress on the brain; helping soldiers with amputations, spinal cord injuries, and neurological diseases; understanding traumatic brain injury by measuring the severity of blasts to which soldiers have been exposed; and developing analytical tools to assess soldiers' psychological state. DARPA acknowledged that there are "societal questions" that could arise about research surrounding these technologies. The agency said it plans to work with experts to address these issues.
<urn:uuid:fb6652eb-c2a7-4417-8f92-6f0f709261c2>
CC-MAIN-2017-09
http://www.darkreading.com/risk-management/darpa-focuses-on-war-injuries-in-brain-research/d/d-id/1109380?piddl_msgorder=thrd
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174276.22/warc/CC-MAIN-20170219104614-00509-ip-10-171-10-108.ec2.internal.warc.gz
en
0.933543
684
2.53125
3
“Anyone know what this is? Class? Anyone? Anyone? Anyone seen this before? The Laffer Curve. Anyone know what this says?” “Bueller? Bueller? Bueller?” While Ferris Bueller famously skipped economics (and many other classes) in his eponymous movie, many of us have endured similar torture in our science classes. Undergraduate courses at large universities are often prone to such dry, lecture-driven debacles. Institutional realities are often at fault: The real teaching in introductory biology classes—500 strong but 200 in attendance—does not take place in the lecture hall, but rather in smaller sections and labs taught by graduate students. Yet these teaching assistants are seldom properly prepared. Undergraduates suffer as a result, turning into parrots adept at recitation but inept at critical thinking.

The University of Wisconsin-Madison is one of many poster children for teaching woes at large universities. With nearly 30,000 undergraduates, introductory classes can be staggeringly large, a situation that does little to facilitate valuable student-instructor interaction. Hoping to chip away at the problem, four professors at Wisconsin started the Teaching Fellows Program in 2003 to prepare graduate students for their teaching appointments in the sciences. The program, though small, has been successful enough to warrant publication in the Friday edition of the journal Science.

For years, experts have encouraged universities to provide teaching assistants with pedagogy courses. But for nearly as many years, universities have been hesitant to either provide or require such preparation. Perhaps, the paper’s authors seem to suggest, universities are unwilling to pony up the cash to fund the courses until their efficacy is proven. The four Wisconsin professors placed the Teaching Fellows Program under the microscope, examining 44 of the 63 fellows who have participated in the program. The paper’s authors pored over the curricula, videos, teaching statements, and educational materials that the participants developed as a part of the course.

Their findings give hope to high school seniors dreading the collegiate transition. Fellows in the program developed more engaging, interactive curricula. Over two-thirds of the units involved what the authors call “active learning”–small group discussions, case study analysis, and so on. And even more heartening, 76% of the units were geared toward scientific discovery. Come test time, recitation just wouldn’t cut it.

Yet college-level science education is not out of the woods. Since the Teaching Fellows Program began five years ago, the 63 participants have only taught 1,900 undergraduates. Thousands more remain tethered to the old system. And while fellows’ curricula stressed involvement in the scientific process, none included accommodations for disabled students. The largest hurdle for undergraduate science education, though, may still lie at the administrative level. Experts have pushed for required pedagogy courses, but little headway has been made. While the paper does not point fingers, I think that as long as funds for improving teaching remain tight, undergraduate education at large universities may lag accordingly.

Science, 2008. DOI: 10.1126/science.1166032
<urn:uuid:8e892b42-d2a6-4bc5-bcad-df5c40bc2fcc>
CC-MAIN-2017-09
https://arstechnica.com/uncategorized/2008/11/asleep-at-the-desk-undergrad-education-gets-a-boost/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174276.22/warc/CC-MAIN-20170219104614-00509-ip-10-171-10-108.ec2.internal.warc.gz
en
0.946197
647
2.859375
3
Uh oh. Did you notice what today is? That’s right: Friday the 13th. According to Wikipedia, the superstition surrounding this day comes simply from the fact that Fridays and the number thirteen are both independently considered unlucky, and so in conjunction they are like a Big Mac of ill fortune. Maybe you believe in luck, maybe not. But no matter where you fall on the superstition spectrum, we’ve got some advice for you: better safe than sorry. It’s easy to feel that hacking and identity theft are things that happen to other people. But unfortunately, those “other people” numbered almost 12 million last year. Smartphones and mobile devices have made personal data that much more vulnerable, and so breaches are on the rise. According to Javelin Strategy and Research, the number of reports of identity fraud increased last year by one million. It’s not luck–it’s statistics. Protect yourself with Keeper.
<urn:uuid:2fa5001d-cfd7-4cc6-a566-434a55c250e3>
CC-MAIN-2017-09
https://blog.keepersecurity.com/2013/12/14/avoiding-bad-luck-in-cyberspace/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171775.73/warc/CC-MAIN-20170219104611-00329-ip-10-171-10-108.ec2.internal.warc.gz
en
0.941155
204
2.546875
3
State and local governments are climbing aboard the green bandwagon, although "green building" innovations such as energy-efficient light bulbs dominate the ride. Emphasis on lighting is understandable given that light bulbs devour 22 percent of all electricity produced nationwide, according to the U.S. Department of Energy (DOE). But green government involves more than the usual suspects. Emerging projects in state and local government show that IT workers have a role to play via data center consolidations, GIS, inventory databases and telework. Consolidations typically get the most attention from CIOs because data centers account for 1.5 percent of all electricity consumption in the country, according to the U.S. Environmental Protection Agency (EPA). Data center consolidation slashes power consumption by eliminating redundant processing and unnecessary cooling, as well as by conserving floor space. It may seem lofty to implement green projects for environmental concerns alone. However, the trick of a successful green IT project may be pursuing something the government needs to accomplish anyway. For example, agencies typically consolidate data centers to reduce costs, simplify IT management and improve business continuity. But these initiatives also lower electricity bills, which automatically reduces carbon emissions. An inventory management system cuts costs by eliminating unnecessary purchases; it obviously reduces consumption, which ultimately lowers emissions. GIS and mobile applications eliminate unnecessary driving, which improves productivity and reduces fuel costs - once again, a carbon emissions reduction comes built in to that investment. California law requires state government agencies to cut energy consumption by 20 percent by 2015. The California Department of General Services (DGS) intends to meet the mandate by collecting information that gives the department new insight into state operations. That insight will be used boost efficiency and conserve resources, said Will Semmes, chief deputy director of the DGS. For example, the department - which negotiates all statewide vehicle procurement contracts - is implementing fleet management changes that will provide better data about the use of state vehicles. The 119 California agencies that own at least one state vehicle will upload fleet information into a centralized DGS database. The solution will offer the DGS an unprecedented breadth of information for calculating carbon emissions and other factors, said Semmes. "If you multiply by 50,000 vehicles, we're going to get information that we've never had before about the use of our fleet in different aspects: the actual disposition of the vehicle, whether it was used appropriately and whether it was used enough to warrant having that vehicle," Semmes said. "How are the emissions? How much fuel did we use? Are we buying the right fuel? Where did we buy it from, and therefore what kind of alternative fuel infrastructure can we get in those places where we seem to be buying the most?" Semmes said the database will give the DGS a comprehensive view of all state vehicles, making it easier to see if unused vehicles could be shared among agencies. The project could potentially reduce the number of cars the state owns and maintains, thereby cutting costs and reducing consumption. The DGS also aims to reduce carbon emissions by purchasing more alternative fuel. 
The agency has many vehicles that can operate on traditional or alternative fuel, but state workers often don't fill them with alternative fuel because it's usually unavailable on their routes. "I think it's unconscionable for us to demand of a child support worker, who just happened to get the E85 car [that is fueled by a mixture of ethanol and gasoline] that day, to drive 20 miles out of her way to fill it up with E85 when her job, and the only thing she's being judged by, is whether she picked up this foster kid and took him to the right place on time," Semmes said. State employees who work out of the office pay for fuel with government credit cards. The DGS is considering using government purchase card data to determine where employees buy fuel for state vehicles. The agency could then work with the California Air Resources Board (ARB) - which recently received legislative funding to establish alternative energy infrastructure in the state - to suggest appropriate locations for alternative fuel sites. "You're going to see the adoption of these alternative fuels at a much greater rate than you would if you were to force employees to do all this other stuff. It would cost you money. It would slow things down. It would interrupt that child support services worker, and if you multiply that by 215,000 employees, you've got yourself a productivity problem," Semmes said. The state also could track vehicle usage efficiency by installing GPS systems in them. However, state leaders have resisted GPS vehicle trackers. "It's not part of this project, but it's something we'd like to try. A lot of folks have wanted to go down this road over the years, but when they found out that either the modules cost $800 a car or whatever the issue was, everybody got cold feet," Semmes said. He said collaboration between the California Department of Transportation and the EPA to secure funding for GPS trackers is a possibility. If the DGS can't find enough money to install them in every car, it could install them in some, but in a way drivers couldn't detect which vehicles had them, said Semmes. Even with that compromise, GPS systems would be a tough sell with many state employees. He said the best way to implement GPS systems would be with the intent to use the data for overall analysis of all vehicle activity - not individual vehicle travel. "You can look at the driving habits over a period of time and say, 'Look at that. This department is driving with a sedan with one person in it from this location to that location. At the same time, another department is driving with a sedan with one person in it from the same location to the same location. Let's combine the trips and ditch one of those cars,'" Semmes said. California state government must track exactly how much energy it uses - and where it uses that energy the most - in order to achieve the mandated 20 percent cut in energy consumption. Rather than collecting massive stores of data on energy usage and building a new database to establish benchmarks, Semmes is taking advantage of existing resources outside state government. The federal Energy Star program already has a database built for analyzing power consumption. And the power utilities already have information on where and how much electricity agencies use because electricity is metered. Instead of asking agencies to assemble the data, Semmes will have utilities download the information directly into the Energy Star database. 
"We can use that benchmarking data to determine what makes sense economically and which green things we should be doing," Semmes said. But there are limits on the level of detail those benchmarks would provide at various facilities. In many cases, just one meter measures electricity for an entire campus. Semmes said the limited benchmarks would at least give the DGS a head start as it prioritizes energy efficiency efforts. Mapping the Wind Property tax revenue is pouring into a once-anemic government budget in Cascade County, Mont., thanks to a wind speed GIS tool the county provides to wind power developers. Normally when a new business moves to Cascade County, it relocates to the city of Great Falls for access to sewer and water infrastructure. Consequently outlying county areas often get no additional tax revenue from new businesses. Wind turbines, though, are ideally suited for rural land, said Peggy Beltrone, Cascade County commissioner. Strong, predictable wind patterns that are attractive to wind power providers can be found in many parts of Montana. Cascade is competing for attention by removing some of the preliminary work that providers would typically do before constructing turbines. Providers can attain maps of the area's prevailing winds from the EPA, but the Cascade wind power GIS tool offers additional information the EPA doesn't include. "We stick the topography layer in so they know if the wind source is sitting on top of a mountain - that wouldn't be as good [a location to build] because you need roads to get up to it," Beltrone said. "We have the land database, so if they're interested, they can know exactly who owns the property. You can't get that on a federal map. We've also plotted the power lines on the map." The county started the project in 2002, and two years later successfully attracted Horseshoe Bend, the first wind farm facility Cascade County brought in by using the GIS tool. That facility began generating power in 2006. The facility features six 1.5-megawatt turbines, with a combined value of $10 million. "The taxes off a project like that are considerable. It goes directly to the funds that desperately need money to serve the rural parts of the county. Per turbine, the project brings in about $25,000 per year in tax revenue. A project of six turbines might be $150,000 in taxes, although it's going to decrease every year because of depreciation," Beltrone said. "That's real money, and it's going to schools, libraries, health departments and road departments." That revenue came from a GIS tool that cost almost no extra money, because the county already had an active GIS team in place. The Horseshoe Bend facility is locally owned, but most wind providers will lease land from farmers for the turbines, said Beltrone. That will bring extra income to landowners. "The large industrial-size wind projects are owned by multinational corporations that lease the land. A farmer who puts a turbine on his property might have a lease that would pay between $3,500 and $4,500 per year, per machine," Beltrone said. Four other wind power companies are exploring Cascade County as a potential site, she said, and they were attracted to Cascade through the GIS tool. Beltrone said a private Cascade County citizen even used the GIS tool to build a wind turbine to produce his own onsite generation. The project also made Beltrone a national player in the wind energy industry. She now holds a seat on the DOE's Wind Powering America Committee, which promotes wind energy nationwide. 
"Now I'm really at a point where I can talk about Cascade County because I'm at national meetings and making speeches on behalf of what the federal government's been doing," Beltrone said. The Telework Option Many vendors pitch telework products as a way for governments to attract and retain a skilled IT work force. Telework also offers green benefits - fewer polluting cars in the fleet and fewer cubicles in the office mean less consumption. Virginia already utilizes telework for IT workers. Roughly 40 percent of the Virginia Information Technologies Agency's (VITA) work force telecommutes at least one day per week, said Aneesh Chopra, VITA's secretary of technology. "It's a priority for our governor, and the security issues can be addressed if you pay for the appropriate security standards and you have the right tools in place," Chopra said. He said adjusting management styles to accommodate telework is more challenging than solving security issues. Chopra contends agency managers must learn to manage based on workers' production rather than the duration they see them at their desks. Chopra said he thinks part of the reason the VITA's telework participation is so high is that Virginia moved all of its IT workers last year to a facility 20 miles south of the old Richmond, Va., building. The move created a longer commute for many workers, which made telework more attractive. Chopra said the process hasn't required dramatic changes to employee and manager interactions. Some telework advocates recommend equipping remote employees with Web cameras and other communication devices. But Virginia's current IT workers adjusted well to telework using BlackBerrys and e-mail, Chopra said. Beyond the green benefits, telework and flexible work hours will play a critical role in attracting young IT workers, according to Mitzi Higashidani, chief deputy director of the California Department of Technology Services. "They may work at 3 a.m. because that's when they're awake. Perhaps their highest performance and creativity peak is at that time," Higashidani said. She said California's purchasing habits are already reflecting the move to telework. "We're retooling. We are no longer buying desktops. Everybody who shows up to work is going to get a laptop and a phone. Work is going to happen through Web conferencing. Some managers expect you to show up to meetings, but you don't have to see the person for it to be a personal experience," Higashidani said. "When the work force moves out and becomes mobile and flexible, you have to stay in touch more through phones and other tools. Blogs, wikis and similar technologies will be the way this work force works. Baby boomers are slow in adopting that." She added that California is developing a security strategy to make telework possible. Currently telework happens on a limited basis, said Semmes. "There is no overall telework policy. It's done department by department, which is common sense because it gives managers flexibility," Semmes said. Almighty Data Center Government data center consolidation is now commonplace across the country. For instance, Texas is midway through a massive outsourced consolidation effort, reducing 5,500 physical servers down to 1,300. The project also will cut 15 mainframes to six. Much of the consolidation is powered by virtualization technology, which has become increasingly popular. Texas pursued consolidation to cut costs and reduce the staff it needed to maintain IT. 
The resultant green benefit was not a primary motivation, but it came in handy when selling the project to the Texas Legislature, said Lara Coffer, technology center operations division director for the Texas Department of Information Resources (DIR). IBM won the outsourcing contract. The consolidation will centralize IT management and eliminate redundant functions. "We have duplicate systems. For example, everybody's running their own e-mail systems. Everybody's running their own domain name systems," Coffer said. Coffer said she prefers to advocate those green initiatives that are side benefits to projects that serve other state needs. The DIR used the green component of its consolidation to sell it to the Legislature. "There are so many people who think of green as building out of materials that are recyclable. What I've really seen that can give you the most cost-effective benefits are simple green initiatives, like energy-efficient equipment and consolidation," Coffer said. Servers in the new data center use direct current (DC) technology, which is about 12 percent more efficient than the more common alternating current (AC) server technology, according to DC Power for Improved Data Center Efficiency, a report from the Lawrence Berkeley National Laboratory. The new data center also uses sophisticated cooling technology that enables the facility to cool servers while using less power, said Coffer.
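The scale of savings behind a cut like Texas' 5,500-to-1,300 servers can be approximated with back-of-the-envelope arithmetic: multiply the servers removed by a typical per-server draw, gross it up for cooling and power-distribution overhead, and convert to annual kilowatt-hours. The figures in the sketch below -- roughly 300 watts per server and an overhead factor of 1.8 -- are illustrative assumptions, not numbers from the Texas DIR or IBM.

```python
# Back-of-the-envelope estimate of energy saved by a server consolidation.
# Wattage and overhead figures are illustrative assumptions, not DIR data.
HOURS_PER_YEAR = 24 * 365

def annual_kwh_saved(servers_before, servers_after,
                     watts_per_server=300, cooling_overhead=1.8):
    removed = servers_before - servers_after
    it_load_kw = removed * watts_per_server / 1000.0
    total_kw = it_load_kw * cooling_overhead   # cooling plus power distribution
    return total_kw * HOURS_PER_YEAR

if __name__ == "__main__":
    kwh = annual_kwh_saved(5500, 1300)
    print(f"Roughly {kwh:,.0f} kWh per year")  # about 19.9 million kWh
```

Even with conservative inputs, the arithmetic makes clear why the electricity bill, and the carbon reduction that rides along with it, tends to follow automatically from a consolidation undertaken for cost and management reasons.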
<urn:uuid:f20fb50a-7c26-46cc-9424-8bcf371f0e25>
CC-MAIN-2017-09
http://www.govtech.com/policy-management/Fleet-Management-GIS-and-Telework-Power.html?page=2
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170253.67/warc/CC-MAIN-20170219104610-00449-ip-10-171-10-108.ec2.internal.warc.gz
en
0.95943
3,056
2.640625
3