Cutting the cord: Using ambient Wi-Fi for power and communications

Whether it’s an office network or the broader expanse of the Internet of Things, one of the biggest challenges – if not THE biggest challenge – in deployment is getting power to the devices. Temperature sensors placed throughout buildings can cut utility bills. Light sensors can trigger streetlights to come on when needed. But the cost of wiring up all those sensors can be prohibitive. Using batteries to power the devices is in many cases not viable, either. Batteries add to the size of the devices they are powering. And replacing batteries requires expensive manpower on a regular basis.

Looking at the problem, researchers at the University of Washington have developed one solution – devices that harvest power from Wi-Fi signals and use that power to communicate with standard Wi-Fi routers. The trick, said Bryce Kellogg, a doctoral student in electrical engineering and a member of the team developing what they call “Wi-Fi backscatter,” is that the devices can communicate with Wi-Fi routers in a passive way that consumes very little power, so little that the power can be harvested from the received Wi-Fi signal itself.

“The way backscatter works is that you reflect wireless signals instead of transmitting them,” said Kellogg. “You send a bit by either reflecting or not reflecting a signal. Because we’re not broadcasting a signal, it is really low-powered. It takes just a few microwatts of power to switch between states with the antenna.”

Even better, the UW team’s backscatter tags are designed to communicate directly with the hardware that already exists in Wi-Fi routers. For each packet a Wi-Fi router receives from a device, it calculates a set of numbers that represent the signal strength between the two devices. More recent routers – those that use the 802.11n and 802.11ac standards – track additional “channel state information” about the status of the signal between the router and each device.

“What we do is we sort of piggyback on top of this,” Kellogg said. The team’s backscatter tags have a tiny switch and a small antenna that allow them to modulate the signal bounced back to the router to convey data. Once software is installed on the router so that it knows what the patterns of bouncebacks mean, the tag can send data to the router. If the tag is attached to a temperature sensor, for example, the current temperature can be transmitted to the router and, from there, to any device on the network.

“Because existing Wi-Fi routers can pick up the signal strength with Wi-Fi backscatter, all these smart devices could communicate with an existing Wi-Fi router with just a software update,” Kellogg said.

The technology doesn’t support large-scale data transfers, such as streaming movies or talking on a cell phone, which require significantly more power. But it is, according to Kellogg, ideally suited for transmitting sensor data and even limited text messages. A cell phone equipped with a backscatter tag could, for example, send a short text message even if the phone’s battery were dead.

The team’s current version of its backscatter tag has communicated with a Wi-Fi device at rates of 1 kilobit per second with about 2 meters between the devices. They expect to be able to extend that range to about 20 meters. According to Kellogg, that would be done primarily by developing new encoding techniques. “The way in which you encode information makes it easier to pick up at long ranges so you don’t need a really nice, clear signal,” explained Kellogg.

The biggest advantage, of course, is that when those building the Internet of Things deploy hundreds of thousands of sensors in a city, if they equip the sensors with UW’s backscatter tags, they won’t have to send thousands of workers out to replace batteries in a couple of years. According to Kellogg, the team is exploring plans to form a company to commercialize the devices. At that point, they might even give them a name.

Posted by Patrick Marshall on Aug 12, 2014 at 12:00 PM
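To make the piggybacking idea concrete, here is a minimal, hypothetical sketch of how router-side software might recover a tag's bits from per-packet signal-strength readings. The threshold, bit period, and sample values are illustrative assumptions and are not part of the UW design.

```python
# Hypothetical sketch: recover backscatter bits from per-packet RSSI readings.
# Assumes the tag toggles its antenna once per "bit period" and that reflecting
# vs. not reflecting shifts the router's measured signal strength slightly.

def recover_bits(rssi_samples, samples_per_bit=8):
    """Group RSSI samples into bit periods and threshold each group."""
    baseline = sum(rssi_samples) / len(rssi_samples)   # rough mid-point
    bits = []
    for start in range(0, len(rssi_samples) - samples_per_bit + 1, samples_per_bit):
        window = rssi_samples[start:start + samples_per_bit]
        avg = sum(window) / len(window)
        bits.append(1 if avg > baseline else 0)        # above baseline -> "reflecting"
    return bits

# Illustrative readings (dBm): small swings around -40 dBm encode 1, 0, 1, 1
readings = [-39.1] * 8 + [-41.2] * 8 + [-39.3] * 8 + [-38.9] * 8
print(recover_bits(readings))   # -> [1, 0, 1, 1]
```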
The Android smartphones NASA blasted into orbit snapped pictures of Earth before burning up in the atmosphere.

The trio of Android smartphones NASA blasted into orbit recently has ended its journey by burning up in the atmosphere, but not before snapping shots of Earth -- and the pictures don't look too bad.

The "PhoneSats" were a NASA experiment to develop super-cheap satellites and to determine whether a consumer-grade smartphone can be used as the main flight avionics of a capable satellite, NASA said. NASA says the three miniature satellites used their smartphone cameras to take pictures of Earth and transmitted these "image-data packets" to multiple ground stations. As part of their preparation for space, the smartphones were outfitted with a low-powered transmitter operating in the amateur radio band.

Every packet held a small piece of the big picture. As the data became available, the PhoneSat Team and multiple amateur radio operators around the world collaborated to piece together photographs from the tiny data packets. Piecing together the photos was a very successful collaboration between NASA's PhoneSat team and volunteer amateur ham radio operators around the world. NASA researchers and hams working together was an excellent example of citizen science, or crowd-sourced science, which is scientific research conducted, in whole or in part, by amateur or nonprofessional scientists.

On the second day of the mission, the Ames team had received more than 200 packets from amateur radio operators. "Three days into the mission we already had received more than 300 data packets," said Alberto Guillen Salas, an engineer at Ames and a member of the PhoneSat team. "About 200 of the data packets were contributed by the global community and the remaining packets were received from members of our team with the help of the Ames Amateur Radio Club station, NA6MF."

The mission successfully ended Saturday, April 27, after predicted atmospheric drag caused the PhoneSats to re-enter Earth's atmosphere and burn up, NASA said.
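As a rough illustration of the reassembly step described above, the sketch below orders received packets by a sequence number and concatenates their payloads into one image file. The packet format (sequence number plus raw image bytes) is an assumption for illustration; it is not NASA's actual PhoneSat telemetry format.

```python
# Hypothetical sketch: rebuild an image from out-of-order data packets.
# Each packet is assumed to carry (sequence_number, payload_bytes); the real
# PhoneSat packet layout is not documented here.

def reassemble(packets):
    """Sort packets by sequence number and join their payloads."""
    ordered = sorted(packets, key=lambda p: p[0])
    received = {seq for seq, _ in ordered}
    missing = [i for i in range(ordered[-1][0] + 1) if i not in received]
    if missing:
        print(f"warning: {len(missing)} packets still missing: {missing[:5]}")
    return b"".join(payload for _, payload in ordered)

# Toy example: three chunks received out of order
packets = [(2, b"-end"), (0, b"IMG"), (1, b"-data")]
print(reassemble(packets))   # b'IMG-data-end'
```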
Zach Harris, a Florida-based mathematician, discovered that Google and many other big Internet companies use weak cryptographic keys for certifying the emails sent from their corporate domains -- a weakness that can easily be exploited by spammers and phishers to deliver emails that, for all intents and purposes, look like they were sent by the companies in question.

According to Wired, he discovered this almost by chance after receiving an email from a Google job recruiter. Doubting its authenticity, he checked the e-mail's header information, and it seemed legitimate. But he also noticed that the DomainKeys Identified Mail (DKIM) key the company uses for google.com e-mails was only 512 bits long and, therefore, crackable within days with the help of cloud computing.

Believing this to be a recruiting test that all who want to be considered for the job must pass, he decided to crack the key and use it to send emails to Google's two founders, making it seem to each of them like it was coming from the other, but also including his email address in the return path.

Having received no reply in the next few days from either of them or from the recruiter who first contacted him, he began to doubt his initial impression and went to check Google's cryptographic key. He then discovered that it had been changed to the standard length, leading him to conclude that Google had obviously not been aware of this vulnerability until the moment it received those emails.

His interest piqued, he poked around to check whether other popular firms, online services and social networks were vulnerable to the same attack, and discovered that PayPal, eBay, Apple, Amazon, Twitter, and many other companies -- including several banks -- were using 384-bit, 512-bit, or 768-bit keys.

The good news is that since his discovery he has contacted all of those companies with this information, and most have already issued new keys for the domains and revoked the old ones, in order to thwart potential phishers. The bad news is that he found a similar flaw in the receiving domains, which often accept test DKIM keys, verifying the emails as legitimate instead of treating them as unsigned.

US-CERT has issued a warning about the issue, advising sysadmins to check the length of DKIM keys, to replace them (if needed) with 1024-bit or longer keys (particularly for long-lived keys), and to configure their systems to not use or allow testing mode on production servers.
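For administrators who want to follow the US-CERT advice above, the sketch below shows one way to check a domain's DKIM key length by fetching the selector's DNS TXT record and decoding the public key. The selector and domain names are placeholders, and the sketch assumes the third-party dnspython and cryptography packages are installed.

```python
# Sketch: check the RSA key length of a DKIM selector (selector._domainkey.domain).
# Requires: pip install dnspython cryptography
import base64
import dns.resolver
from cryptography.hazmat.primitives.serialization import load_der_public_key

def dkim_key_bits(selector, domain):
    name = f"{selector}._domainkey.{domain}"
    answers = dns.resolver.resolve(name, "TXT")
    # TXT records may be split into multiple character strings; join them.
    record = "".join(
        part.decode() for rdata in answers for part in rdata.strings
    )
    # Pull out the p= tag, which holds the base64-encoded public key.
    tags = dict(
        item.strip().split("=", 1) for item in record.split(";") if "=" in item
    )
    key = load_der_public_key(base64.b64decode(tags["p"]))
    return key.key_size

# Placeholder values -- substitute a real selector and domain.
print(dkim_key_bits("selector1", "example.com"), "bits")
```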
On Mars, it's as if the shutdown never happened

By Frank Konkel - Oct 18, 2013

Curiosity chugs toward Mars' Mt. Sharp. (NASA photo)

On Mars, it's as if the government shutdown never happened. NASA's Mars rover, Curiosity, continued chugging toward a mountain on the Red Planet while Congress fought over conditions for a spending deal. NASA staffed a skeleton crew of 550 out of its total 18,000 workforce during the shutdown, keeping Curiosity on its eight-kilometer trek to Mount Sharp. Moving at a maximum speed of 1.5 inches per second, Curiosity is sometimes able to cover 40 meters of ground per day.

While it continued sending images back to NASA as it journeyed to its ultimate Martian destination, the rover's team was happy to reintroduce itself via its favorite means of public communication: Twitter. Eager to be back after a nearly three-week social media hiatus, the Curiosity team linked to a picture of the 5.5-km Mount Sharp, which Curiosity could reach by the end of 2013. "Allow me to reintroduce myself," the Curiosity team announced on Twitter to its 1.5 million followers. "I'm back on Twitter & even closer to Mars' Mount Sharp."

Fortunately for NASA, the shutdown does not appear to have delayed the launch of its next Mars probe, the Mars Atmosphere and Volatile Evolution (MAVEN) orbiter. According to NASA officials, the $650 million mission remains on pace to launch Nov. 18 because NASA granted it a shutdown exemption. In addition to studying the Martian atmosphere, MAVEN is to act as a communications relay between NASA and the two rovers cruising around on the Red Planet: Curiosity and Opportunity. The orbiter NASA currently uses as a communications relay is more than a decade old. Had it not been exempted, the shutdown could have caused MAVEN to miss its window, which closes Dec. 7. If that had happened, the next possible launch date for MAVEN would have been 2016 due to the positioning of Mars and Earth.

NOAA assessing shutdown impacts

Now back to full staff, the National Oceanic and Atmospheric Administration is sorting out whether the shutdown affected the development of its two largest satellite programs, the Joint Polar Satellite System (JPSS) and the Geostationary Operational Environmental Satellite-R (GOES-R) program. Worth a collective $22 billion in estimated lifecycle costs, the satellite programs are vital to NOAA's mission of providing weather forecast data to scientists on the ground. "Currently, NOAA is assessing the short and long-term impacts of the government shutdown to the development of, and launch schedules for, all the spacecraft in its satellite acquisition portfolio, particularly, GOES-R and JPSS," a NOAA spokesperson told FCW.

Both programs have experienced cost setbacks and launch delays in the past, and both received congressional attention in September after critical reports were released by the Government Accountability Office. A team of experts from NOAA, NASA, the Department of Defense and international partners and contractors will complete an analysis of the impact of the shutdown on their costs and schedules over the next several weeks.

Frank Konkel is a former staff writer for FCW.
Moore's Law created a stable era for technology, and now that era is nearing its end. But it may be a blessing to say goodbye to a rule that's driven the semiconductor industry since the 1960s.

Imagine if farmers could go year to year knowing in advance the amount of rainfall for their area. They could plant their crops based on expected water availability. That's been the world that device makers, who are gathering this week in Las Vegas for the Consumer Electronics Show (CES), have long been living in, except that every year has been a good one. Droughts haven't been part of the forecast, yet. The tech industry has been able to develop products knowing the future of processing power, giving them the ability to map microprocessor performance gains to their devices accordingly.

In sum, the technology industry has been coasting along on steady, predictable performance gains. But stability and predictability are also the ingredients of complacency and inertia. At this stage, Moore's Law may be more analogous to golden handcuffs than to innovation. Technology innovation, particularly in the last decade, has been "a succession of entertainment and communication devices that do the same things as we could do before, but now in smaller and more convenient packages," wrote Robert Gordon, an economist, in a recent paper for the National Bureau of Economic Research on whether U.S. economic growth is over.

Moore's Law, first described by Intel co-founder Gordon Moore in 1965, outlines how the number of transistors on a chip would increase at a regular pace. But the law was never indefinite, and today microprocessors are reaching a point where they can shrink no more. The 14 nanometer silicon chips that are now heading to mobile phones and elsewhere may eventually reach 7 or even 5 nanometers, but that may be it.

When the European Commission looked at the changing landscape in high performance computing and the coming end of Moore's Law, it saw opportunity. No longer will "mere extrapolation" of existing technologies provide what is needed; instead, there will be a need for "radical innovation in many computing technologies," it said in a report this year. Radical innovation beyond Moore's Law will require "new scientific, mathematical, engineering, and conceptual frameworks," said the U.S. National Science Foundation in a recent budget request. The NSF sees a need for new materials that can work in quantum states, or even "molecular-based approaches including biologically inspired systems."

That new technology could be carbon nanotube digital circuits, which can deliver a 10x benefit in a metric that considers both performance and energy. A nanotube is a rolled-up sheet of graphene. Another innovation that may replace or, more likely, augment microprocessors is quantum computing, something both NASA and the NSA are working on, as are most other major nations.

The end of Moore's Law was a topic of discussion at the recent SC13 supercomputing conference. Experts see instability and much uncertainty ahead because of its demise. Marc Snir, director of the Mathematics and Computer Science Division at the Argonne National Laboratory and a computer science professor at the University of Illinois at Urbana-Champaign, told attendees (see slides) that alternate technologies are not yet ready. Christopher Willard, chief research officer at Intersect360 Research, said that the era of buying commercial off-the-shelf products to assemble a high performance system is coming to an end. "The market should then be entering a new phase of experimentation, and computer architecture innovations," he said.

The demise of Moore's Law is already evident in the high performance computing world. If Moore's Law were still functioning as it did in the past, the U.S. would have an exascale system in 2018, instead of the early 2020s, as now predicted. A 1 gigaflop system was developed in 1988, and nine years later work was completed on a 1 teraflop system. In 2008, work on a 1 petaflop system was finished. A petaflop is a thousand teraflops, or one quadrillion floating-point operations per second.

The problem with Moore's Law isn't as urgent for the device makers at CES as it is for HPC scientists. But there is a shift in themes at CES away from smaller, faster, better to the Internet of Things. The underlying message is: true computing power is measured by the ability of a mobile platform to control and track a multitude of physical and virtual objects over a network. But that message might work for only so long. The problem that high performance computing faces in reaching exascale will eventually catch up with the device makers at CES, which was launched in 1967, two years after Gordon Moore delivered the paper outlining Moore's Law. The problem the device makers at CES face is that Moore's Law ends for everyone.

Patrick Thibodeau covers SaaS and enterprise applications, outsourcing, government IT policies, data centers and IT workforce issues for Computerworld. Follow Patrick on Twitter at @DCgov, or subscribe to Patrick's RSS feed. His email address is [email protected]. Read more about hardware in Computerworld's Hardware Topic Center.

This story, "Welcome to the era of radical innovation" was originally published by Computerworld.
The FAA this week took a step closer to setting up a central hub for the development of key commercial space transportation technologies, such as space launch and traffic management applications, and for setting orbital safety standards. The hub, known as the Center of Excellence for Commercial Space Transportation, would have a $1 million yearly budget and tie together universities, industry players and the government for cost-sharing research and development. The FAA expects the center to be up and running this year.

The new center would be an offshoot of other FAA Centers of Excellence that, through myriad partnerships, develop and set all manner of aviation standards, from aircraft noise and emissions to airport systems. According to the FAA, the center's purpose is to create a world-class consortium that will identify solutions for existing and anticipated commercial space transportation problems. The FAA expects the center to perform basic and applied research through a variety of analyses, development and prototyping activities.

The FAA said the center would have five central areas of work:

1. Space Launch Operations and Traffic Management: Research would include engineering, operations, management, and safety areas of study related to the overall commercial space traffic management system and its interactions with the civil aviation traffic management system. The center would look at on-orbit operations, emergency response, ground safety, spaceports, space traffic control and the space environment.

2. Launch Vehicle Systems, Payloads, Technologies, and Operations: Here the center would look at launch vehicles, systems and payloads. Specific areas of research include safety management and engineering, flight safety analyses and computation, avionics, propulsion systems, sensors, software, vehicle design and payloads.

3. Commercial Human Space Flight: Research here can provide critical information needed to allow the ordinary citizen -- a person without the benefit of the physical, physiological, and psychological training and exposure to the space environment that the traditional astronaut has -- to travel to space safely, to withstand the extremes of the space environment, and to readjust normally after returning to Earth, the FAA stated.

4. Space Commerce: This category of research encompasses the subcategories of space business and economics, space law, space insurance, space policy, and space regulation. Research will include developing innovative and practical commercial uses of space; innovative business and marketing strategies for companies involved in commercial launch operations and related components and services; support of the U.S. commercial space transportation industry's international perspective and competitiveness; and developing innovative financing for commercial launch activities.

5. Cross-Cutting Research Areas: The idea here is to look for ways to cut the costs of developing the four research topics mentioned above, focusing on safety, testing and training, the FAA stated.

As the commercial space industry slowly ramps up, there will be a need for such centers, experts say. And there does seem to be growth for the industry. A study last year showed the total investment in that industry has risen by 20% since January 2008, reaching a total of $1.46 billion. The study, done by researchers at the Tauri Group and commissioned by the Commercial Spaceflight Federation, said revenues and deposits for commercial human spaceflight services, hardware, and support services have also grown, reaching a total of $261 million for the year 2008.

The Federation says that when you combine NASA, other government agencies, and commercial customers, the commercial orbital spaceflight industry is planning over 40 flights to orbit between now and 2014. The study was based on a survey of 22 companies engaged in commercial human spaceflight activities, including Armadillo Aerospace, Masten Space Systems, Scaled Composites, Space Adventures and SpaceX.

The FAA last November streamlined the environmental review part of permit applications for the launch and/or reentry of reusable suborbital rockets to help bolster a fledgling commercial space market.

Follow Michael Cooney on Twitter: nwwlayer8
NASA engineers are waiting to see if they can pull a long-running Mars rover out of stand-by mode. The Mars rover Opportunity, which has been working on the Red Planet for more than nine years, put itself into stand-by mode this month during a period when communications with its handlers on Earth were cut off.

Earlier this month, communication with all of NASA's machines working on Mars became spotty and then stopped altogether because the sun was almost directly in the path between Earth and Mars. The solar conjunction is just ending, and as communications began to be restored, NASA on Saturday learned of Opportunity's troubled status. On Monday, NASA programmers sent new commands to Opportunity to try to get the robotic rover to resume operations.

"Our current suspicion is that Opportunity rebooted its flight software, possibly while the cameras on the mast were imaging the sun," said Mars Exploration Rover Project Manager John Callas, in a statement. "We found the rover in a standby state called automode, in which it maintains power balance and communication schedules, but waits for instructions from the ground. We crafted our solar conjunction plan to be resilient to this kind of rover reset, if it were to occur."

NASA engineers believe Curiosity, the super rover that landed on Mars last August and is Opportunity's successor, came through the solar conjunction without a problem. Curiosity's controllers plan to send it a new set of commands on Wednesday to get the super rover working again.

During the solar conjunction, all the machines working on Mars were given special instructions. NASA scientists gave the two working robotic rovers, Curiosity and Opportunity, as well as the orbiters, Odyssey and the Mars Reconnaissance Orbiter, instructions to do minimal work during the time they were out of contact. The two rovers were commanded to remain stationary for the month and to not use their robotic arms.

Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed. Her email address is [email protected]. Read more about government/industries in Computerworld's Government/Industries Topic Center.

This story, "NASA's Mars rover may be in trouble after month of silence" was originally published by Computerworld.
The use of wireless systems for network and Web access is exploding, from wireless LANs to e-mail-capable cell phones. It makes network access completely portable, and with systems starting at under $300, it's very affordable. But widespread wireless use has raised serious new security challenges. How can you be certain the person connecting to your wireless network is a legitimate user and not a hacker sitting in your parking lot?

In a recent study, over half of the companies surveyed didn't even use the most basic encryption and security features of their wireless LAN systems. For those companies that choose to implement security features, solutions include access control appliances or VPNs to protect their wireless systems instead of (or in addition to) wireless encryption to establish a secure session. Once a secure session is initiated, however, security cannot stop there. The organization must be confident that users entering the secure sessions are who they say they are: they must be positively identified.

Most organizations, unfortunately, choose fixed passwords to identify users, and fixed passwords are inherently weak. Many password attacks exist today, from hacking dictionaries to sniffers, from social engineering to personal information attacks. These tools, like the L0phtCrack brute-force password cracker, are readily available on dozens of Web sites, free to anyone with Internet access. These attacks can easily compromise fixed passwords, no matter how stringent the organization's password policy. In fact, organizations lose millions of dollars every year due to password breaches. In late 2002, an identity theft ring was exposed in the U.S. that had victimized over 30,000 people; the suspects allegedly stole passwords from credit agencies and banks, accessing credit reports and information, and costing customers over US$2.7 million.

Best defense: strong authentication

Passwords are easily compromised because there's only one factor to possess: the password. With it, an attacker can access network systems again and again. With strong authentication, there are usually at least two factors. Often, these two factors are something you know, like a personal identification number, and something you have, which can be a hardware token, a digital certificate, a smart card, or other device. An ATM card is an excellent example of this: you must have the ATM card and know the PIN to access your accounts.

The user experience: logging on

When a strong authentication system with hardware tokens is put in place to protect a wireless LAN, a user requests access and is presented with an onscreen dialog box or prompt to enter a username and a one-time password. The user activates the hardware token in order to get the one-time password on the token screen, which must be typed in at the prompt in order to gain access to the requested resource. Once a one-time password is used, it can't be re-used to gain access. This eliminates many of the vulnerabilities of fixed passwords, making sniffing, hacker dictionaries, personal information attacks, and other common password attacks useless to hackers.

The administrator experience: adding strong authentication

There are generally three ways to use strong authentication systems to protect wireless LANs:

1. EAP and 802.1X. Specialized authentication servers can interoperate with strong authentication services using the Extensible Authentication Protocol (EAP) and 802.1X infrastructure. This combination is much more secure than competing standards. EAP and 802.1X are used to control access to network devices, including wireless LANs. These standards have been embraced by a number of leading hardware and software vendors, including Cisco, Microsoft, and Hewlett-Packard, and many products, designed for both wireless and wired networks, already implement these standards. Only certain types of EAP protocols, like TTLS (Tunneled Transport Layer Security) and PEAP (Protected EAP), support strong authentication.

2. Wireless access control appliances. Because Web-based authentication is easy to deploy, it is becoming more pervasive. One solution for using Web-based authentication with wireless LANs is called an access control appliance, a firewall-like device that sits between the wireless access point and the rest of the network. These appliances force wireless users to authenticate at the application level (typically from their Web browsers over HTTPS) before receiving access to the rest of the network. In this setup, anybody can connect to the wireless access point without authentication, but users must authenticate in order to get beyond the local subnet to the organization's trusted networks. Interoperability with strong authentication systems is often accomplished using the RADIUS protocol.

3. Virtual private networks. VPNs are traditionally used to link internal networks across an insecure network, or for remote access. More and more organizations are using VPNs to secure wireless LAN connections by connecting the wireless client to an internal network via the VPN gateway through an encrypted tunnel. This ensures the authenticity and secrecy of the information as it passes across insecure networks. To securely authenticate VPN users (whether wireless or not), strong authentication can be added (often using the common RADIUS protocol) to provide the high level of security recommended by experts.

The device experience: embedded in the BIOS

Some organizations prefer an extra layer of security: allowing authorized users to access protected information only from certain computers or workstations. While some authentication systems can recognize and authenticate IP addresses, there is now another way to identify devices. Many BIOS chips can utilize security software that embeds security information directly in the BIOS. This information, similar to digital certificates, is recognized by some authentication systems, and ensures that users must access protected networks only from their assigned laptops or other BIOS-based devices.

The bottom line

Creating a secure tunnel for wireless access is a vital element of corporate security, but it can be easily undermined by weak user passwords. Without strong authentication, attackers can often access back-end information just as easily as your authorized users.
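The article does not specify how the hardware tokens generate their one-time passwords; one widely used construction is the HMAC-based one-time password (HOTP) algorithm of RFC 4226, sketched below as an illustration. The secret and counter values are placeholders (the secret shown is the RFC's published test key).

```python
# Illustrative HOTP (RFC 4226) one-time password generator using only the
# standard library. The secret and counter here are placeholders; real tokens
# share a provisioned secret with the authentication server.
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Return an RFC 4226 one-time password for the given counter value."""
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

secret = b"12345678901234567890"   # RFC 4226 test secret
for counter in range(3):
    print(counter, hotp(secret, counter))
# Because each counter value is used once, a replayed password fails --
# the property the article describes under "The user experience."
```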
BGP is the Internet's routing protocol; it is what makes the Internet work. BGP does its job by maintaining a table of IP networks for the biggest network in the world: the Internet. The BGP protocol supports the core routing decisions made across the Internet. Instead of using traditional IGP (Interior Gateway Protocol) metrics, BGP relies on available paths, network policies and rule-sets to make its routing decisions. Because of this, it is sometimes described as a reachability protocol. The main idea behind the creation of BGP was to replace EGP (Exterior Gateway Protocol) and permit fully decentralized routing; the NSFNET backbone is the best example of such a decentralized system.
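To illustrate what policy-based, rather than metric-based, selection looks like, here is a heavily simplified sketch of the first two steps of BGP-style best-path selection: prefer the highest local preference, then the shortest AS path. Real BGP implementations apply many more tie-breakers, and the routes shown are made up.

```python
# Simplified sketch of BGP-style best-path selection: policy first (local
# preference), then AS-path length. Real routers evaluate many more attributes
# (origin, MED, eBGP vs. iBGP, router ID, ...).

routes_to_prefix = [
    # (advertising neighbor, local_preference, AS path)
    ("isp-a", 200, [64501, 64510, 64520]),
    ("isp-b", 100, [64502, 64520]),          # shorter path, but lower preference
    ("isp-c", 200, [64503, 64520]),          # same preference, shorter path
]

def best_path(routes):
    """Pick the route with the highest local-pref, breaking ties on AS-path length."""
    return max(routes, key=lambda r: (r[1], -len(r[2])))

neighbor, local_pref, as_path = best_path(routes_to_prefix)
print(f"best path via {neighbor}: local-pref={local_pref}, as-path={as_path}")
# -> best path via isp-c: local-pref=200, as-path=[64503, 64520]
```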
Ah, Google, your Street-View publicity stunts never fail to entertain. This time, the Mountain View pranksters have stuck their 360-degree cameras up an actual mountain -- Mont Blanc, to be precise. In fact, Google claims to also be helping scientists track the effects of global climate change. This comes as the ice atop the massif is said to be shrinking. In IT Blogwatch, bloggers feel cold just looking at it. Your humble blogwatcher curated these bloggy bits for your entertainment.

We need news. Trevor Hughes has views on the views -- Google Street View cameras top Mont Blanc for 360-degree views: Google has made it possible to visit one of Europe's most storied peaks from the comfort of your computer...Mont Blanc, the highest of the Alps. The peak on the French-Italian border is about 15,777 feet high [and] covered in a perpetual snow-and-ice field. Viewers also can virtually "walk" across the Mer de Glace -- a river of ice that experts...say is endangered by global warming. Viewers also can [get] the perspective of a paraglider, a speeding trail-runner and a skier who shows off some dazzling aerial flips.

Far out, man. Frederic Lardinois writes Google Hauls Its Street View Cameras Up The Mont Blanc: For this project, Google partnered with a number of photographers, skiers, mountaineers, climbers and runners to build up this library of...imagery. You get images of some of the most iconic trails around the mountain in the summer and winter [and] see what Mont Blanc looks like from the perspective of an ice climber, for example.

Now here's an appropriate name. It's Cam Bunton, with Google's latest breathtaking addition to Street View is a virtual tour of Mont Blanc: Google's new virtual exploration is simply breathtaking, and easily worth a few minutes of your time. Google has littered the mountain with a bunch of awesome 360-degree Photo Spheres. You'll get a look around the summit. ... You'll be able to climb up a serac. Google wants to preserve what it can...so that future generations can see what it used to look like, and so that scientists can see how it's changing.

But why does Google do these things? Tom Dawson thinks he knows -- Google Brings Street View to Majestic Mont Blanc: Street View has become an interesting side-project for the Google Maps team, and it's arguably become part of pop culture. Street View now offers users one more piece of culture to explore from the comfort of their tablets, phones or computers. We can't help but think Google is doing these sorts of things...to help with their Cardboard project. Regardless, there's more fun and inspiring content to ogle over.

We could do with a local voice, too. Bonjour, Oliver Gee. Ça va? Climb Mont Blanc from your armchair thanks to Google: The American search engine sent a team of climbers, skiers, and photographers to the summit. ... The result is spectacular. For the ascent...it was elite guide Korra Pesce who carried the Street View Trekker up...the Goûter Route of the mountain. See a summary...in the video below.

You have been reading IT Blogwatch by Richi Jennings, who curates the best bloggy bits, finest forums, and weirdest websites… so you don't have to. Catch the key commentary from around the Web every morning. Hatemail may be directed to @RiCHi or [email protected]. Opinions expressed may not represent those of Computerworld. Ask your doctor before reading. Your mileage may vary. E&OE.
As of December 1, 2016, US law enforcement has gained new hacking powers thanks to changes to Rule 41 of the Federal Rules of Criminal Procedure that now simplify the process of getting warrants to hack into devices of US citizens and the citizens of other countries.

The Rule 41 amendments had been proposed in 2014 by an advisory committee on criminal rules for the Judicial Conference of the United States. In April 2016, the United States Supreme Court, and not Congress, approved the proposed procedural changes. According to standard US government procedures, the Supreme Court then forwarded the amendment to Rule 41 to US Congress, which had until today to disavow the proposed changes. The technical procedure through which this could have been accomplished included passing a law that shot down the proposed amendment. There were several attempts to prevent the changes to Rule 41. Senators Ron Wyden (Oregon) and Rand Paul (Kentucky) came the closest to stopping the change, or at least delaying its deadline by three months, but they eventually failed.

According to the "new" Rule 41, the FBI and other US law enforcement agencies now have at their disposal a simplified procedure for requesting warrants that allow them to hack the computers and devices of people they have probable cause to believe have committed a crime. Previously, law enforcement had to request a warrant from a judge from the same jurisdiction where the possible subject resided. If it needed to hack into devices belonging to a group of individuals, it needed to obtain different warrants, in all states, which was a time-consuming operation. According to the revised Rule 41, law enforcement can now request one warrant for hacking anyone in the US, even multiple targets, from one single judge. Furthermore, if the target is using Tor, I2P, VPNs, or other technologies that mask his IP address, the FBI has the legal power (in its eyes) to hack anyone across the globe.

The FBI is no stranger to such scenarios, and it didn't wait for the new Rule 41 amendment to pass. In 2015, the FBI obtained one warrant, which it used to hack over 8,000 computers in 120 countries.

Also included in Rule 41 is a clause that allows judges to issue warrants that allow law enforcement to hack or seize devices that are part of a botnet. Nowadays we have botnets of IoT smart devices, botnets of infected home WiFi routers, botnets of infected PCs, botnets of infected mobile devices, and so on. Any malware that infects a device and uses an online command-and-control server forms a botnet, even annoying adware families. Almost all malware families today use C&C servers, and so indirectly form botnets. Technically, the FBI and US law enforcement can hack anything they want on the suspicion that a device has been infected with malware.

In a statement published in June, the US Department of Justice tried to reassure the US population that protections provided by the Fourth Amendment are still in play and that law enforcement must establish probable cause before requesting such warrants. Nevertheless, judges are still the ones ruling on these warrants. Just this spring, the media blasted a clueless judge who oversaw the copyright battle between Oracle and Google. The judge had a very hard time understanding basic principles such as APIs and programming languages. Throwing around words like botnets and malware at such a judge would likely result in approval of any warrant the FBI requested.

While the FBI and other law enforcement agencies try to push the agenda for new laws that fight new "cyber" threats, nobody's talking about educating members of the judicial system.

There's a trend across the world of countries passing privacy-intrusive and sweeping surveillance laws. Just two weeks back, the UK approved the most extreme surveillance law ever passed in the history of a Western democracy, as Edward Snowden characterized the new Investigatory Powers Bill (IP Bill), which was passed into law this week. Similarly, also this month, China passed a new cyber-security law that allows it to restrict Internet access in the country in the case of a "national security" issue. This week, Russia and China signed a pact that would allow the Kremlin government access to China's famous Great Firewall technology. Russia is already running its own "blocklist," but now hopes to gather know-how on running a proper Internet censorship tool from the world's best, which is without doubt the Chinese administration.
Efficiency Through Unification

Sometimes called unified computing or dynamic infrastructure, the convergence of computing resources describes the "bringing together of compute, storage, and networking in order to facilitate virtual computing and application deployment flexibility," said Jim Frey, managing research director for networks at Enterprise Management Associates. Although the details of implementing converged infrastructure can be complex, the basic premise is straightforward.

In the old days, businesses divided their computing resources into silos to perform independent or dedicated tasks. For example, a business might own one physical server which behaved as a mail server, another machine as a storage server, and another as a Windows domain controller. Depending on the size of the organization, each of these may actually have consisted of multiple machines and associated hardware, all dedicated to their assigned task. The problem is that siloed computing can be a very inefficient way to allocate resources. Consider:

- Computing power - In a siloed configuration, each CPU may only be utilized a portion of the time. The business is essentially paying to provision, power, and maintain a machine that is only leveraging a fraction of its capacity.
- Storage capacity - Likewise, if the mail server is nearly running out of disk space but there is ample space on the Windows domain controller, this excess capacity is not easily accessed by the task that needs it most.
- Heterogeneous platforms - An organization may need to support software for multiple OSes; dedicating separate hardware to each requires increasing physical assets.

The promise of convergence

At its most extreme expression, converged infrastructure makes a simple proposition: all of your data center assets -- computing power, storage space, network connectivity, power and cooling -- are one common utility pool from which you draw for whatever tasks you need at that moment. In the perfectly converged environment, there are no silos at all. Computing assets are totally elastic, adapting to changing demands on the fly. Of course, in the real world life is more complicated. Many organizations already have investments in existing infrastructure and cannot justify tossing it all overboard wholesale. "In rare cases," noted Frey, "there are true Greenfield build-outs where a clean slate exists." Still, the opportunity window for converged infrastructure is increasing for three important reasons: depreciation of existing infrastructure, the maturing of virtualization, and the appeal of the cloud.

Driven by virtualization

Converged infrastructure owes its very existence to the rise of virtualization technology. Virtualization essentially introduces an abstraction layer between the OS and the underlying hardware. Software running on top of the virtualization layer is agnostic about the physical hardware. Through virtualization, multiple OS instances can run simultaneously on the same physical machine. These OS instances can be the same or different from one another, allowing one set of hardware assets to host multiple Windows instances, or mixed Windows and Unix-like instances, and so on. Further, as performance needs increase, the hardware assets can be improved "under" the OS, so that there is no need for the virtualized OS to know about changes in hardware infrastructure or to experience interruptions to reconfigure against changes in underlying technology.
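As a back-of-the-envelope illustration of the utilization argument above (the figures are assumptions, not from the article), the sketch below compares a set of lightly loaded siloed servers with the number of consolidated hosts needed to run the same workloads as virtual machines.

```python
# Illustrative consolidation arithmetic: siloed servers vs. virtualized hosts.
# All numbers are assumptions for the sake of the example.
import math

silo_cpu_utilization = [0.12, 0.08, 0.20, 0.15, 0.10, 0.25]   # six dedicated servers
target_host_utilization = 0.70                                 # leave headroom per host

total_demand = sum(silo_cpu_utilization)            # work actually being done
hosts_needed = math.ceil(total_demand / target_host_utilization)

print(f"siloed servers: {len(silo_cpu_utilization)}")
print(f"aggregate CPU demand: {total_demand:.2f} server-equivalents")
print(f"virtualized hosts needed at {target_host_utilization:.0%} target: {hosts_needed}")
# -> the same work fits on 2 hosts instead of 6, the efficiency gain the
#    article attributes to pooling resources instead of dedicating them.
```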
Virtualization offers a high degree of flexibility and maximizes hardware utilization. Unlike the inefficient silo model, virtualization allows physical assets to be fully used with minimal idle time, lowering costs in acquisition, power and cooling, and maintenance. But another reason that virtualization is leading the way toward converged infrastructure, said Frey, is that "Most folks looking at production deployment of server virtualization have realized that they need to refresh their data center infrastructure." In other words, updating the data center for virtualization is already a big step toward a fully fledged converged infrastructure platform.

A growing range of vendors are now in the business of selling converged infrastructure platforms. At their most unified, these essentially amount to "cloud in a box" solutions, which combine a diverse set of technologies within one set of hardware, managed by a unified set of tools. HP offers its CloudSystem, a blade-based converged infrastructure platform that combines computing, networking, and storage technologies. It is designed to scale flexibly, so that the virtualized technologies running on the hardware can draw from the whole available hardware pool. VCE, a combined venture between Cisco, Intel, VMware, and EMC, markets its Vblock converged infrastructure turnkey platform. Unlike the HP device, which combines HP technologies, the Vblock merges technologies from its parent companies. Dell weighs in with a different take on converged infrastructure, one that acknowledges the existing hardware investments a business may already have. The Dell Virtual Integrated System is a software layer that merges the assets of existing hardware. And now IBM has entered the fray with its recently announced PureSystem: basically, a cloud-enabled data center in a box with integrated management tools and provisioning, so customers can expand capacity and get up and running within hours of the boxes hitting the loading dock instead of days or months.

Technology moves quickly, and the promise of unified platforms under the converged infrastructure moniker can't escape this reality. However adaptable they may be, the only sure truth is that a converged system represents a "snapshot in time" of current technologies. As Frey points out, with regard to turnkey unified platforms, it "remains to be seen" whether "their lifetime will be longer than many of those coming before." For the business case, though, Frey explains that converged infrastructure solutions "must only prove their worth for the time it takes for them to be fully depreciated, which depending on the organization is typically three to five years. If they can survive and continue to provide value for that length of time, then they have arguably done their job."

Aaron Weiss is a technology writer, screenwriter and Web development consultant who spends his free time stacking wood for the winter in Upstate New York. His Web site is bordella.com.
In the 1940s, kids could stand up in the bed of a pickup truck as their parents drove down the road; babies rode in their mothers' laps; and there were no seat belts, padded dashboards, crumple zones, rumble strips, air bags, Botts' dots, anti-lock brakes or traction control. Perhaps it's because of all this modern safety equipment that traffic deaths have fallen to the lowest rate since the 1940s in most states, even though speeds are faster and traffic much more tangled.

But automobiles and their makers have gone even further. They want to protect themselves from drivers who speed, drink, text, follow too close, and swerve in and out of lanes. So they've come up with the idea of a car that drives itself and keeps the owner away from the controls -- cars that will dominate our roadways by 2050, according to a study by IHS Automotive. The path to domination is clear, according to the study: self-driving auto sales will jump from 230,000 in 2025 to 11.8 million per year in 2035, and by 2050 nearly all cars will drive themselves.

These cars gently tell the occupant to sit back, relax, watch a movie and "leave the driving to us" -- that slogan from the yesteryear of Greyhound buses and mass transit. "Us," in this case, is a variety of technologies that keep the car on the road, under the speed limit, and away from the bumper of the car in front. "Us" includes GPS satellite navigation, sensors, radar, blind spot alerts and so on.

Google tried the idea first. Then, the company got permission to self-drive in California, Nevada, Florida and Michigan. In most legislation, humans must still be behind the wheel, but that's just window dressing: technology is in the driver's seat.

And now the plot to turn drivers into passengers has reached Washington, which only this month began deliberations that may require new cars to install anti-collision technology -- vehicle-to-vehicle (V2V) technology that's a sort of social network for cars so they can chat about road conditions and their positioning, and avoid collisions. According to ABC News, the Government Accountability Office said recently that V2V could prevent as many as 76 percent of potential multi-vehicle collisions, and that in 2011 there were 5.3 million car crashes, injuring 2.2 million people and killing 32,000. The National Highway Traffic Safety Administration says it will announce a decision within a few weeks.
NASA seems hamstrung on this one. Tasked with watching out for huge chunks of space rock that could smash into the Earth, it has been denied the money to actually do the job. The problem is that while Congress mandated four years ago that NASA detect and track 90% of space rocks known as near-Earth objects (NEOs) 140 meters in diameter or larger, it has not authorized any funds to build additional observatories, either in space or on the ground, to help NASA achieve its goals, according to a wide-ranging interim report on the topic released by the National Academy of Sciences this week.

The report notes that NASA has managed to accomplish some of the killer-asteroid mandate with existing telescopes, but with over 6,000 known objects and countless others, the task is relentless. NASA does carry out the "Spaceguard Survey" to find NEOs greater than 1 kilometer in diameter, and this program is currently budgeted at $4.1 million per year for FY 2006 through FY 2012.

The National Academy of Sciences report contains five key observations about the task at hand for NASA:

1. Congress has mandated that NASA discover 90% of all near-Earth objects 140 meters in diameter or greater by 2020. The administration has not requested and Congress has not appropriated new funds to meet this objective. Only limited facilities are currently involved in this survey/discovery effort, funded by NASA's existing budget.

2. The current near-Earth object surveys cannot meet the goals of the 2005 NASA Authorization Act directing NASA to discover 90% of all near-Earth objects 140 meters in diameter or greater by 2020.

3. The orbit-fitting capabilities of the Minor Planet Center are more than capable of handling the observations of the congressionally mandated survey as long as staffing needs are met.

4. The Arecibo Observatory telescope continues to play a unique role in characterization of NEOs, providing unmatched precision and accuracy in orbit determination and insight into size, shape, surface structure, multiplicity, and other physical properties for objects within its declination coverage and detection range.

5. The United States is the only country that currently has an operating survey/detection program for discovering near-Earth objects; Canada and Germany are both building spacecraft that may contribute to the discovery of near-Earth objects. However, neither mission will detect fainter or smaller objects than ground-based telescopes.

The report goes on to state: Imminent impacts (such as those with very short warning times of hours or weeks) may require an improvement in current discovery capabilities. Existing surveys are not designed for this purpose; they are designed to discover more-distant NEOs and to provide years of advance notice for possible impacts. In the past, objects with short warning times have been discovered serendipitously as part of surveys having different objectives. Search strategies for discovering imminent impacts need to be considered, and current surveys may need to be changed.

Although the threat posed to human life by near-Earth objects has received much attention in the media and popular culture, with numerous movies and television documentaries devoted to the subject, to date there has been relatively little effort by the U.S. government to survey, discover, characterize, and mitigate the threat, the report states. Requirements have been imposed on NASA in this area without the provision of funds to address them. Despite this problem, the United States is still the most significant actor in this field with few exceptions, if only because other countries have devoted negligible resources to it. If the threat of NEOs and solutions to deal with that threat are to be further explored, additional resources will be required, such as for completion of dedicated telescopes and increased funding for existing key facilities and research and analysis programs, the report concludes.
The number of bridges considered to be in the worst shape has declined in the vast majority of states — all but nine — in the years since a Minneapolis bridge collapse brought national attention to the America’s decaying bridges. All told, the number of “structurally deficient” bridges dipped by 14 percent in the last six years. But even with the improvement, one in 10 bridges in the country is still considered structurally deficient. Structurally deficient bridges are not necessarily unsafe but they are the ones in the most need of maintenance, rehabilitation or outright replacement based on a numerical score developed by the federal government. Such bridges must be inspected every year, because one or several of their load-carrying parts are in poor condition. Officials often divert heavy vehicles from crossing them. “There has been a renewed focus and increased attention paid to bridges,” said Robert Victor, who heads an American Society of Civil Engineers’ group that evaluates the state of the country’s infrastructure. The public seizes on stories of bridge failures, particularly when they involve motorists’ deaths, Victor said. That was the case in the 2007 collapse of the Minneapolis bridge carrying Interstate 35W over the Mississippi River. Thirteen people died in the collapse. “Minneapolis grabbed people’s attention,” Victor said, much in the same way as flooding in New Orleans after Hurricane Katrina focused the country’s attention on the levee system. King Gee, director for engineering and technical services for the American Association of State Highway and Transportation Officers, indicated the real impact of the Minneapolis bridge collapse was not on the public — because the news quickly faded — but on state engineers assessing the weaknesses of their own systems. “It caused every state to look at their own bridges,” he said. Federal policy shifts also prompted states to better manage their inventory of bridges and roads, Gee adds. The Federal Highway Administration over the last 12 years has pressed states to make decisions on what would be the most cost-effective ways to improve the system as a whole, rather than just address individual roads or bridges. Congress codified that approach in its latest surface transportation law, called MAP-21. The approach is easily applied to bridges and to pavement, Gee said, because states have so much data on their condition. States have been inspecting bridges since the 1960s. The federal surface transportation law expires this fall, and federal transportation money to states may dry up even before that. If Congress does not find the money to continue or increase current funding levels, the number of troubled bridges could start climbing again, Gee said. Another factor that could make future bridge upkeep difficult is their age. The average age of bridges in the country is 43 years old, and most of them were only built to last for 50 years In recent years, a handful of states have accounted for most of the national improvement in reducing structurally deficient bridges. Oklahoma, Missouri, Texas, Mississippi, Pennsylvania and Ohio posted the biggest improvements in the last six years and together were responsible for 57 percent of the decrease in structurally deficient bridges nationwide. The states started in very different situations at the time of the Minneapolis bridge collapse. Take Pennsylvania and Texas. Pennsylvania had the highest percentage of structurally deficient bridges of any state in 2007. 
Despite major repair efforts, it remained in that spot through 2013. Texas, on the other hand, began with a lower percentage of structurally deficient bridges than all but four other states. By 2013, only Florida and Nevada had a smaller share of deficient bridges. Making state-to-state comparisons can be tricky, national experts warn. Sun Belt states tend to have fewer deteriorating bridges than those in the Northeast or Midwest, where the infrastructure tends to be older and the winter climate harsher on steel and concrete structures. Since the Minneapolis bridge collapse, Oklahoma has reduced the number of its structurally deficient bridges by more than 1,700. But the rallying cry for the improvement, officials there say, came before the Minneapolis tragedy. Fourteen people died in a 2002 Oklahoma bridge collapse, when a barge collided with a bridge carrying Interstate 40 over the Arkansas River. The accident also shut down that span of the highway for two months during reconstruction, and officials had to reroute traffic to other roads. But the 60 bridges on the two detour routes were in bad shape, too, said Terri Angier, a spokeswoman for the Oklahoma Department of Transportation. “At the same time traffic was going over bridges, we were under them, fixing them up and jacking them up,” she said. The state paid $15 million to replace the span of Interstate 40 that collapsed and another $15 million to fix the bridges that were used in the detours. State funding for roads and bridge repair had stagnated since the 1980s, and the transportation agency did not have enough money to repair or sometimes even just paint bridges, Angier said. A major Oklahoma wheat grower reported that it had to send its trucks through Kansas because so many Oklahoma bridges could not handle heavy trucks. And then, in 2004, a chunk of concrete fell off a bridge 50 miles south of Oklahoma City and killed a Texas woman. Legislators agreed to the first in a series of boosts for transportation funding in 2005. They devoted general funds, in addition to money from fuel taxes, to improving the road network. Lawmakers passed several more increases, the most recent in 2012, and largely spared the new transportation money in budget cuts this year, Angier said. The new laws will increase Oklahoma’s annual road budget from $200 million a year in 2005 to $775 million by 2018. This year’s budget is roughly $550 million. Angier is excited about the state’s progress. “We’re going after the Number 1 ranking in good bridges,” she said. Other states “no longer have Oklahoma to point to as worse than them.” Federal data on the condition of bridges also show other changes since the Minneapolis bridge collapse.
This story was originally published by Governing.
The National Electronic Disease Surveillance System (NEDSS) will "revolutionize public health by gathering and analyzing information quickly and accurately," according to the Centers for Disease Control and Prevention (CDC). This may be the case, but not without considerable adjustment and some aggravation, states have found. The NEDSS initiative's goal is to promote the use of data and information standards to develop a more efficient way for the CDC to collect and disseminate information about diseases. The CDC is responsible for prevention and control of chronic diseases, environmental diseases and communicable diseases, and collects data to get a national picture of public health practices in the country. Each state's health department is responsible for collecting data from its counties and passing it to the CDC. The problem, however, is that state and municipal governments use more than 100 different data systems, and data from different jurisdictions are described with different vocabularies and codes. The CDC hopes the initiative -- which sets forth guidelines not only for data collection, reporting and common vocabularies, but also standard architecture -- will generate a more standard case report that can be sent to state health departments. The NEDSS base system, which has not yet been released, is the computer application the CDC will offer. The Java-based application, developed by Computer Sciences Corp., will create a standard NEDSS-compliant interface for jurisdictions that have yet to develop their own. "That's basically a re-engineering of current systems and getting it over the Web," said Dr. Claire Broome, a senior adviser with the CDC. "It's for use at the state level, and that will be useful to our state partners. The thing that is really different about NEDSS is the capacity to receive and transmit standard electronic messages." The NEDSS initiative also provides a single user interface with the core common data for the different systems. As diseases are diagnosed in laboratories around the country, an information system generates a Health Level 7 (HL7) standard message and sends it to a NEDSS "inbox." HL7 is a standards-developing organization geared toward interoperable clinical data exchange. Its information exchange model is widely used in the United States and internationally. The system is designed to replace current CDC surveillance systems, including the National Electronic Telecommunications System for Surveillance (NETSS), the HIV/AIDS reporting system, and systems for vaccine-preventable diseases, tuberculosis and infectious diseases. The CDC said NEDSS pilot projects have resulted in more than double the number of cases reported, and in some instances, the information provided was more complete. Despite the CDC's confidence in these initial pilot projects, states are still unsure about key parts of the new system. Some states expressed frustration over the NEDSS initiative, which was characterized as a "moving target." "[The CDC] is close to being able to say, 'This is our final data model,' but they've gone through more than a dozen versions," said an anonymous source from a state department of health. "It's hard for us to say, 'OK, there's the model we're going to use, and we're going to set up this big database [based on that model].'" The fact that the CDC has not set mandates to implement any of the standards has made it difficult to convince counties that NEDSS is the way to go, the source said. 
"[Counties] keep coming back saying, 'Where is the department letter that says all of our systems should be NEDSS compliant?'" the source added. One early problem with the NEDSS plan was that it didn't give the NEDSS initiative a "friendly face," said the source; this is crucial to the ability of state departments of health to sell the NEDSS initiative to counties. The CDC's new approach to the initiative focuses more on functionality, calling for a Web-based standard with one interface. "What we've realized is that to increase the functionality of a system, we have to offer some slick stuff through an Internet-based front end," the source said. "If I hit the submit button, I can get myself a nice little GIS report on where diseases have been occurring in my county." Congress has provided federal grants for state and local governments to take advantage of NEDSS, and many jurisdictions are using the funds to modify existing systems to make them NEDSS compatible. California considered adopting the base system, but chose to develop its existing system for compliance. The state will use the federal funds to "take care of infrastructure items that usually don't get covered through government categorical funding," said Ed Eriksson, the state's NEDSS project manager. "We want to be able to own the kinds of functionality that would be rolled out to counties. "Usually, they're not going to fund a project manager or an integration-minded person who will think about how it's going to integrate with other applications and business processes," Eriksson continued. "We're using NEDSS to address some of these infrastructure items, such as security data modeling and coming up with a vocabulary that's based out of HL7 so a case or specimen has the same definition across all applications." Other states, such as Alabama, are using NETSS and waiting for the base system to develop further. "Why reinvent the wheel?" said Richard Holmes, director of surveillance for Alabama's Department of Health, adding that Alabama is "essentially sitting in a holding pattern waiting for [the CDC] to put us in the queue for release of the base system." That type of delay is one reason other states chose to develop their own systems. "The CDC tends to take its time developing things," said Carol Hoban, project manager for the Georgia Department of Health. Georgia developed its own Web-based system that is NEDSS compatible, also making sure its system would be tailored to the state's needs. "The reporting needs vary from state to state, although there are national reporting standards," Hoban said. National standards, like HL7, are driving NEDSS, the CDC said. Without them, results of disease outbreaks across multiple counties or states would be difficult to interpret because of the different vocabulary used by different jurisdictions. "Total flexibility would not be useful," said the CDC's Broome. Maine pondered before deciding to go with the base system, but with some modifications. "We talked with the CDC and asked about porting it over from a Wintel platform to a UNIX platform," said Mike Wenzel, health program manager for the Maine Immunization Program. "We're also looking into using iHUB [middleware] as a translation methodology between different entities." The NEDSS guidelines will become the backbone of Maine's public health infrastructure, Wenzel said. "It will become a unifying set of standards for how we perform all our public health operations," he said. 
"In essence, we're going to de-silo our Bureau of Health applications into an Oracle-NEDSS kind of situation and call it the Maine Public Health Infrastructure."
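To make the HL7 messaging described in this article more concrete, here is a minimal sketch of the kind of pipe-delimited HL7 v2-style lab report a laboratory information system might drop into a NEDSS-style inbox. The facility names, patient details, and test codes are invented for illustration; they are not taken from the NEDSS specification or from any state's implementation.

```python
# Minimal sketch: composing a pipe-delimited HL7 v2-style lab report (ORU^R01).
# All identifiers, names, and codes below are hypothetical examples.
from datetime import datetime

def build_oru_message(patient_id: str, test_code: str, test_name: str, result: str) -> str:
    timestamp = datetime.now().strftime("%Y%m%d%H%M")
    segments = [
        # MSH: message header (sending app, receiving app, timestamp, message type)
        f"MSH|^~\\&|LAB_SYSTEM|COUNTY_LAB|NEDSS_INBOX|STATE_DOH|{timestamp}||ORU^R01|MSG0001|P|2.3.1",
        # PID: patient identification (ID only; demographics trimmed for the sketch)
        f"PID|1||{patient_id}^^^COUNTY_LAB||DOE^JANE",
        # OBR: observation request (which test was ordered)
        f"OBR|1|||{test_code}^{test_name}^L",
        # OBX: observation result (the finding reported to public health)
        f"OBX|1|ST|{test_code}^{test_name}^L||{result}||||||F",
    ]
    return "\r".join(segments)  # HL7 v2 separates segments with carriage returns

if __name__ == "__main__":
    message = build_oru_message("123456", "PERT", "Bordetella pertussis culture", "Positive")
    print(message.replace("\r", "\n"))  # print with newlines for readability
```

Because every reporting system emits the same segment layout and coded vocabulary, a receiving health department can parse reports without per-county custom logic, which is essentially the interoperability argument the article describes.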
Cybersecurity action must catch up to interest: A lesson from “Snowden”
Across the country this fall, movie-goers settled into their theater seats, popcorn tubs balanced on their laps, and immersed themselves in one of the best-known data breach stories of our time. “Snowden” was no documentary in terms of its veracity, and critics had mixed feelings about the film overall. But the public spent $28 million on tickets to see it, and that says something about the level of interest in security these days. What does it say exactly? Perhaps not what most experts would like to hear. As a society, we seem to be more interested in cyberfiction than actual cybersecurity. Too many companies are still waiting for a disaster to land at their doorsteps before they take action to protect their data. Businesses are aware of security risks – the threat is so clear that Hollywood is making bank on it – but organizations have yet to get adequately proactive about protecting their most important asset: their sensitive data. This is baffling, especially when a single malware attack can cost a small or medium-sized business as much as $100,000. That’s a painfully expensive lesson, and yet a Kaspersky Lab report recently found that 67 percent of survey respondents had learned it the hard way, through complete or partial data loss due to a cryptomalware attack. Half of organizations said in the same survey that malware attacks are the greatest threat they face today, which makes sense – a single attack could wipe out their whole security budget for the year, never mind the cost in reputation damage and data loss. Ransomware variants are constantly changing, so it’s hard for any security team to guarantee it can withstand an attack. However, there are plenty of steps companies can take before they’re faced with the no-win choice of whether to pay a ransom. Reducing risk has to start with data awareness. What sensitive information do you have? Where is it? Who can access it? You can’t lower your risk until you know how exposed you are. By assessing your environment, you can take informed action to shrink your potential attack surface. Data awareness also involves monitoring data activity levels and automating alerts for suspicious activity. When you see rapid data change rates, you need to be ready to coordinate with the underlying storage to automatically create snapshot copies of your files for easy recovery in the event of a ransomware attack. You should also have forensic tools at the ready to help accelerate response and recovery after an attack. Among the facts you’ll need to ascertain quickly: when did the attack occur, where was the root cause, what was the total impact, and which files, file shares and virtual machines were affected? When you have the analysis capabilities to answer these questions immediately after spotting an attack, you’ll be in a much better position to quickly recover lost data and restore full operations. Entertainment comes first in “Snowden,” and Hollywood can keep audiences engrossed with dialogue like, “Most Americans don’t want freedom – they want security.” Reality is more complex, of course. Businesses want the freedom that comes from security – but they need to take steps to get it. Get informed about your company’s data security risks with a free audit.
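As a rough illustration of the "data awareness" step described above, the sketch below watches a directory tree and flags an unusually high file-change rate, the kind of signal that might prompt an automatic snapshot or an alert. The watched path, the threshold, and the snapshot hook are placeholders, not a reference to any particular vendor's product.

```python
# Minimal sketch: flag suspiciously rapid file changes in a directory tree.
# The path, threshold, and snapshot hook are illustrative placeholders.
import os
import time

WATCH_DIR = "/data/shares"        # hypothetical file share to watch
CHANGES_PER_MINUTE_LIMIT = 200    # tune to the share's normal activity level

def snapshot_of_mtimes(root: str) -> dict:
    """Record the last-modified time of every file under root."""
    mtimes = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mtimes[path] = os.path.getmtime(path)
            except OSError:
                continue  # file vanished mid-scan; ignore it
    return mtimes

def count_changes(before: dict, after: dict) -> int:
    """Count files that were added, removed, or rewritten between two scans."""
    changed = sum(1 for path, mtime in after.items()
                  if path not in before or before[path] != mtime)
    changed += sum(1 for path in before if path not in after)
    return changed

def trigger_snapshot_and_alert(changes: int) -> None:
    # Placeholder: a real deployment would call the storage layer's snapshot
    # API here and notify the security team.
    print(f"ALERT: {changes} files changed in the last minute - taking snapshot")

if __name__ == "__main__":
    previous = snapshot_of_mtimes(WATCH_DIR)
    while True:
        time.sleep(60)
        current = snapshot_of_mtimes(WATCH_DIR)
        if count_changes(previous, current) > CHANGES_PER_MINUTE_LIMIT:
            trigger_snapshot_and_alert(count_changes(previous, current))
        previous = current
```

The threshold has to be tuned to the environment; the point is simply that the change-rate signal exists before the ransom note does.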
Attacking Asthma with Advanced Telehealth Monitoring
AT&T’s prototype infrastructure provides a complete, end-to-end solution for delivering sensor data from patient to end users.
Asthma is on the rise, affecting people of all ages, particularly the young. This rise is a result of biological and chemical factors, with a main culprit thought to be airborne pollutants, smoke, fragrances and perfumes, and cleaning solvents and other volatile organic compounds (VOCs) that evaporate into the air from fabrics and carpets. With homes sealed to save energy and cut heating and cooling bills, VOCs build up in homes in high enough concentrations to trigger asthma attacks in some people. AT&T is currently testing a portable room-monitoring device capable of detecting VOCs and then issuing alerts so those susceptible to VOC-induced asthma attacks can take preventative action. Healthcare is becoming increasingly data-driven as a new generation of lightweight, low-power sensors is being incorporated into a variety of new and old medical devices. Sensor-equipped heart monitors, pill dispensers, pulse-oximeters, glucometers, and other devices such as VOC detectors will soon be sending continuous, real-time data. This data will enable people to monitor their own health parameters and the environment around them, allowing those with chronic conditions to more carefully manage their symptoms and giving those in good health early indications of developing health problems. But it’s the transmission of medical sensor data—to doctors, specialists, and researchers—that will do the most to transform healthcare.
Communications as a healthcare tool
Data transmitted to physicians’ offices will give physicians something they rarely have today: a long-term, comprehensive, multidimensional view of health and wellness, as well as a baseline by which to evaluate changes and anomalies. Today, physicians typically get only a snapshot of a patient’s health taken during an office visit, usually when the patient is sick. This “data” is hardly representative, and says little about what is happening in the months, or even years, between office visits. With so little data about a patient, physicians tend to focus on the specific problem or complaint, while disregarding other health indicators that provide more context for a patient’s symptoms. But sensor data, captured continuously over a long period, will give physicians a bigger picture, allowing them to see the interactions among multiple factors, such as how glucose readings change relative to weight or blood pressure. In the same fashion, data from a VOC detector may enable a physician to correlate a patient’s asthma attacks with a specific VOC trigger. With more data, physicians have more information to treat the illness rather than just alleviate symptoms. When transmitted to medical researchers, sensor data aggregated from large numbers, perhaps millions, of people can be mined to better understand the links between diseases and their underlying causes. This is especially important for asthma and other complex diseases that are caused by a combination of environmental, biological, genetic and other factors. 
In the case of asthma, researchers don’t yet have a good understanding of why some people suffer asthma attacks and some do not, or why some people are susceptible to chemical triggers, and others to biological ones. Data is key to answering these questions, but it can only be provided by individuals. Sensors make it easier to collect data. Getting the data from individuals to those who need it is the current focus.
Tying together technologies for an end-to-end infrastructure
The difficulty in transmitting medical data has always been collecting and delivering the data in a format that healthcare professionals and researchers can immediately begin using and analyzing without being concerned with the technical, low-level details of data transmission. AT&T is now close to providing that capability. Anticipating the direction healthcare was taking, AT&T started years ago looking at what would be required for relaying health data over AT&T’s existing IP network. One key decision was choosing an appropriate protocol for transmitting sensor data from the analog device to the IP network. AT&T researchers, after evaluating different protocols, chose IEEE ZigBee wireless technology for several reasons. It has low power demands and powerful machine-to-machine (M2M) local networking capability, and, like Wi-Fi, ZigBee allows devices to relay data by passing it off to nearby devices to reach more distant ones. (This contrasts with peer-to-peer Bluetooth, which does not richly network devices or provide mesh connections.) Wi-Fi and ZigBee belong to the same IEEE family of standards, 802.11 and 802.15.4, respectively. To transmit ZigBee-based data from sensors, AT&T Research helped create the Actuarius™ gateway, which uses a fixed broadband connection or a ZigBee-enabled smartphone to collect and securely forward measurements from Personal Health Devices standardized by IEEE 11073 and the Continua Health Alliance. Data transmission is just one part of a complete, end-to-end communications infrastructure, which must also encompass cloud computing and services, big-data analytics and management, and emerging sensor-equipped devices—all areas in which AT&T maintains active and far-ranging research efforts. AT&T has one other critical advantage when it comes to transmitting sensitive medical data: AT&T is a trusted entity, and does not use the model of “free” services where the hidden cost is the service provider’s access to and use of data.
Proving the concept
The asthma VOC detector is the first device of its kind being tested in conjunction with this infrastructure. Last year, researchers demonstrated in the laboratory that the VOC detector can detect a range of airborne VOCs and then alert when concentrations might be high enough to trigger attacks. The current step is to demonstrate meaningful use: to show that the VOC detector can be used by real people in real circumstances to avoid asthma attacks. Preliminary trials are now under way in conjunction with major healthcare partners. Though limited in scope now, the trials will expand next year as the number of participants increases substantially. Even while meaningful use is being tested, AT&T is looking to make the device more useful by creating improved software and analytics so the device can discriminate among the various types of VOCs. 
Inserted into the VOC detector, this code will allow the detector to correlate occurrences of specific elevated VOC concentrations with other measurements (e.g., heart rate, blood oxygen, and other asthma indications) to try to zero in on the specific compound that triggers an attack in a specific individual. This ability to personalize healthcare for the individual, to know the specific VOC triggers or know which medical indicators are anomalous for a specific person, further points to the almost limitless potential of data-driven healthcare. As more data is collected and analyzed, physicians and medical researchers will understand better how to not only help those already sick, but to maintain wellness in those who are healthy. This is something healthcare has needed for a long time. Proving that a VOC detector can prevent asthma attacks is a step toward a healthcare system centered on health maintenance. Results of the trial are expected in 2013.
AT&T's history with medical devices
AT&T may seem a nontraditional healthcare participant, but the company’s technology has long been incorporated into medical devices.
Metal detector (1881). Alexander Graham Bell invents a device to locate bullets lodged in Civil War survivors. Coils generate small currents of electricity, producing a signal when near a metal bullet. The detector was used in an unsuccessful attempt to locate the bullet lodged in President Garfield’s body. The metal coils in the spring mattress, an innovation of the day, interfered with the ability to find the bullet.
AT&T wireless heart monitor (1974). A miniaturized FM transmitter sends analog heart data to a nearby FM radio using early FCC unlicensed spectrum rules; the monitor’s “tunnel diode” device (used for the low-power oscillator) is now being revisited for exploration of Terahertz spectrum use.
CCD imaging technology (1970s). Developed originally as a new type of computer memory, the CCD (for charge-coupled device) incorporated light-sensitive silicon 100 times more sensitive than film or camera tubes. The technology was quickly incorporated into all camera types, including (in the 1990s) endoscopes and other medical cameras that could be used to look inside the body. The inventors of the CCD, Willard Boyle and George Smith of Bell Labs at Murray Hill, shared half a Nobel Prize for the invention.
Smart Slipper (2008). AT&T researchers embed pressure sensors, accelerometers, and a ZigBee radio into a slipper's cushioned insoles to continuously gather data about a patient’s gait and weight distribution. This data is transmitted over AT&T's network to physicians who can evaluate who is at risk of falling or to see early warning signs of Alzheimer’s or other health problems. The Smart Slipper, produced by the company 24Eight LLC (now ACM Systems), is now in clinical trials.
Actuarius Gateway (2009). Named for the honorific bestowed on physicians during the middle ages (after the physician Joannes Zacharias Actuarius), this medical gateway converts ZigBee-protocol data to IP for transmission to healthcare professionals. Invented at AT&T Research, Actuarius can use a fixed broadband connection or a ZigBee-enabled smartphone to collect and securely forward measurements from Personal Health Devices and the Continua Health Alliance. 
Working with the AT&T Network, VitalSpan (a cloud-based data system), and a Health Information Exchange such as AT&T’s Healthcare Community Online, the system can automatically retrieve health data from devices and make it available to medical professionals.
About the author
Bob Miller heads the Communications Technology Research Department at AT&T Labs - Research. His department develops new concepts and technologies for next-generation AT&T wired and wireless broadband packet access systems and services. He holds a variety of patents covering wireless transmission, advanced speakerphones and acoustics, digital telephones, and advanced networking applications using IP technologies.
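To illustrate the sensor-to-network flow the article describes, a gateway collecting a reading locally and forwarding it over IP, here is a minimal sketch. The endpoint URL, device ID, and alert threshold are hypothetical placeholders; this stands in for whatever protocol conversion a real ZigBee-to-IP gateway performs and does not claim to reproduce the Actuarius design.

```python
# Minimal sketch: a gateway loop that forwards a VOC reading as JSON over HTTP.
# The endpoint, device ID, and threshold are hypothetical placeholders.
import json
import time
import urllib.request

ENDPOINT = "https://example.invalid/telehealth/voc"   # placeholder collection service
DEVICE_ID = "voc-monitor-001"                         # placeholder device identifier
ALERT_PPB = 500                                       # placeholder alert threshold (parts per billion)

def read_voc_ppb() -> float:
    """Stand-in for reading the VOC sensor over the local ZigBee link."""
    return 412.0  # fixed sample value for the sketch

def forward_reading(ppb: float) -> None:
    payload = {
        "device": DEVICE_ID,
        "timestamp": int(time.time()),
        "voc_ppb": ppb,
        "alert": ppb >= ALERT_PPB,   # flag readings that might warrant an asthma warning
    }
    request = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(request, timeout=10) as response:
            print("server replied:", response.status)
    except OSError as err:
        # The placeholder URL is unreachable by design; a real gateway would retry.
        print("could not reach collection service:", err)

if __name__ == "__main__":
    while True:
        forward_reading(read_voc_ppb())
        time.sleep(300)  # report every five minutes
```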
Panasonic's Eco Solutions division today announced what it is calling the most efficient photovoltaic panel in the world. The new prototype solar panel has a solar energy conversion efficiency of 22.5% on a commercial-sized module. The prototype was built using solar cells based on mass-production technology, Panasonic said. Last year, Panasonic announced it had achieved a photovoltaic cell efficiency rating of 25.6%. "This new record on module-level efficiency adds to the 25.6% efficiency record achieved last year at cell-level. The new panel efficiency record demonstrates once again Panasonic's...ongoing commitment to move the needle in advanced solar technology," Daniel Roca, senior business developer at Panasonic Eco Solutions Europe, said in a statement. The latest advance, in theory, allows Panasonic to squeak past SolarCity as having the most efficient solar panel in the world. SolarCity announced last week that it had achieved an efficiency rating of 22.04% in panels that it will begin manufacturing this month. However, Panasonic's solar panels are based on "thin-layer" solar cells, which are more expensive to produce than the standard solar cells being used by SolarCity. Panasonic was also unable to give a firm date for when the technology would appear in commercial solar panels, though it said it is planning for a mid- to late-2016 release. The new solar panel test results were confirmed by the Japanese National Institute of Advanced Industrial Science and Technology (AIST), Panasonic said. The 72-cell, 270-watt prototype solar module incorporates "newly developed enhanced technology" that will eventually be scaled to volume production, Panasonic said. This story, "Panasonic surpasses SolarCity with world's most efficient solar panel" was originally published by Computerworld.
The use of laptop encryption in higher education, especially by faculty and staff, seems like a no-brainer to me. After all, such computers are full of personal information, not only of the devices' owners themselves but also of the student body (they still use SSNs as student IDs, don't they?). While the Department of Education may not openly require the use of encryption software, it's always a good idea. Even if you think that your computer is properly protected behind locked doors. Why? As the University of South Carolina shows us, doors can be busted. According to databreaches.net and a breach notification letter posted at abccolumbia.com, the physics and astronomy departments at the University of South Carolina experienced a data breach when a laptop was stolen from a locked room. The breach affected 6,000 students who were enrolled in physics and astronomy classes at SC between January 2010 and today. The breached data involved full names, SSNs, and other personally identifiable information. While disk encryption for student data was not employed, password protection was used (which is tantamount to applying leeches to a massive melanoma – in other words, less than useless) and the laptop was stored in a locked room. Considering the type of information that was being stored in that room, however, it surprises me (well, maybe it doesn't. I've heard of worse, actually) that these were the only things standing between sensitive data and a burglar. One wonders: if the Department of Education also had a policy of issuing monetary fines for preventable data breaches – like the Department of Health and Human Services, which can impose a penalty of up to $1 million – would the University of South Carolina have relied only on a door for its security needs? You know what's really surprising, though? That in the past three years, 6,000 students were enrolled in physics and astronomy courses. (And, personally, this is music to my ears.) Many universities and small colleges have undergone the process of replacing student ID numbers with something other than SSNs. This is a great first step toward data security. After all, you can't have a data breach on what you don't collect. However, personal information encompasses more than SSNs alone. A student's grades, for example, are also subject to protection. Naturally, these scores have to be linked to some form of identifier, be it a first and last name, a student ID number, or whatever. In fact, that such information has to be linked to an identifier means that the potential for a data breach is always there. Not using proper protection, then, is an invitation for future data breaches.
Expert analysis finds the bad guys increasingly use stronger encryption to protect their malware and botnets. When a new software threat is discovered, reverse engineers dig into the code to find ways to detect the attack, identify the code and its authors, and discover the purpose behind the malware. Such investigations pit the digital detectives who reverse engineer malicious programs against the developers who created the malware. In the cat-and-mouse game, reverse engineers can easily find copies of the software to crack open and analyze, and attackers respond by throwing up a number of hurdles to slow down analysts' efforts. Chief among the roadblocks are encryption and obfuscation. In the not-too-distant past, encryption in malware was a sign of an ambitious effort on the part of the program's author. Today, nearly all malware uses some encryption, and perhaps two-thirds of botnets use encrypted communications to obfuscate their activities, says Jeff Edwards, a research analyst with network security firm Arbor Networks. "There is a gradual trend toward improving their encryption," Edwards says. "It all comes down to whether the botnet operators and authors feel pressure to evolve." With the takedown of the Rustock and Kelihos botnets, which counted tens of thousands of compromised computers among their nodes, the underground operators controlling the botnets are likely feeling pressure to hide their activities to an even greater extent. In addition, as malicious software developers grow more experienced, they frequently add more complex and better implemented encryption to their products. The Black Energy bot software, for example, originally used a basic encryptor to scramble its executable to avoid detection by antivirus software and used Base64 encoding to scramble its communications. Both were easily reverse engineered. The latest version of Black Energy, however, uses a variant--somewhat flawed, it turns out--of the more robust RC4 stream cipher to encode its communications. In a recent set of blog posts, Arbor Networks analyzed the encryption of four major bot programs used for denial-of-service attacks. The analyses have found a wide variety of encryption methods, from custom substitution algorithms to the RC4 stream cipher, a popular encryption method used in Secure Sockets Layer, among other protocols. In one analysis, Arbor researched the Dark Comet remote access trojan, which uses RC4 to encrypt its communications and uses other interesting techniques to obfuscate the encryption keys. "It's all over the map--you get everything from no encryption to really solid encryption," Edwards says. "RC4 is the most popular one right now, or some variation of RC4. It's a standard, it's well understood, and it's reasonably secure." Encryption in botnets has evolved slowly. Five years ago, the Sinowal, or Torpig, trojan used a modified version of the XTEA block cipher to encrypt its configuration data, according to Kurt Baumgartner, senior security researcher with Kaspersky Lab. Since late 2008, the Waledac and Kelihos, or Hlux, botnets have used custom implementations of the Advanced Encryption Standard (AES) mixed with other encoding and compression to obfuscate their code and communications, he says.
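Part of the reason analysts can unwind this obfuscation is that RC4 itself is short and well understood. The sketch below is a textbook RC4 keystream generator in Python, the kind of routine a reverse engineer reimplements to decode a bot's configuration once the key has been recovered. It shows the standard algorithm only, not any particular malware family's (often flawed) variant.

```python
# Textbook RC4: key-scheduling (KSA) followed by the keystream generator (PRGA).
# Shown to illustrate why reversers can readily reimplement it; real bot
# families often tweak or misuse the algorithm, which is not reflected here.
def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm: permute the values 0..255 using the key.
    s = list(range(256))
    j = 0
    for i in range(256):
        j = (j + s[i] + key[i % len(key)]) % 256
        s[i], s[j] = s[j], s[i]

    # Pseudo-random generation algorithm: XOR the keystream with the data.
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + s[i]) % 256
        s[i], s[j] = s[j], s[i]
        out.append(byte ^ s[(s[i] + s[j]) % 256])
    return bytes(out)

if __name__ == "__main__":
    key = b"example-key"                       # hypothetical key, not from any real sample
    ciphertext = rc4(key, b"bot configuration goes here")
    print(rc4(key, ciphertext))                # RC4 is symmetric, so this round-trips
```

Running the script round-trips a sample string, which is all it takes to see why a recovered key plus a known algorithm makes a bot's configuration readable.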
Cloud Networked Manufacturing
Cloud computing provides a new way to do business by offering scalable, flexible services over the Internet. Many organizations, such as educational institutions and business enterprises, have adopted cloud computing as a means to boost both employee and business productivity. Similarly, manufacturing companies have found that they may not survive in a competitive market without the support of information technology (IT) and computer-aided capabilities. The advent of new technologies has changed the traditional manufacturing business model. Nowadays, quick, real-time, and effective collaboration between dispersed factories, different suppliers, and distributed stakeholders is essential. Cloud manufacturing, as a new form of networked manufacturing, encourages collaboration in any phase of manufacturing and product management. It provides secure, reliable, on-demand services across the manufacturing lifecycle at low prices through networked systems. In the literature, there are various definitions of cloud manufacturing (CM). For example, Li, Zhang and Chai (2010) defined cloud manufacturing as “a service-oriented, knowledge-based smart manufacturing system with high efficiency and low energy consumption”. In addition, Xu (2012) described cloud manufacturing as “a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable manufacturing resources (e.g., manufacturing software tools, manufacturing equipment, and manufacturing capabilities) that can be rapidly provisioned and released with minimal management effort or service provider interaction“. According to Tao and his colleagues (2011), one of the key characteristics of cloud manufacturing is that it is service-oriented. Manufacturing resources and abilities can be virtualized and encapsulated into different manufacturing cloud services such as Design as a Service (DaaS), Manufacturing as a Service (MFGaaS), Experimentation as a Service (EaaS), Simulation as a Service (SIMaaS), Management as a Service (MaaS), Maintenance as a Service (MAaaS), and Integration as a Service (INTaaS). Cloud users can use these services based on their requirements via the Internet. Furthermore, cloud manufacturing can provide varied and dynamic resources, services, and solutions for addressing a manufacturing task. Like Wikipedia, cloud manufacturing is a group-innovation-based manufacturing model. Any person or company can participate and contribute manufacturing resources, abilities, and knowledge to a cloud manufacturing service platform. In turn, any company can use these resources, abilities, and knowledge to carry out its manufacturing activities. It would seem that within a cloud manufacturing environment, an enterprise does not need to possess the entire hardware manufacturing environment (such as workshops, equipment, IT infrastructure, and personnel) or the software manufacturing ability (such as design, manufacturing, management, and sales ability). An enterprise can obtain resources, abilities, and services from a cloud manufacturing platform according to its requirements, paying for what it uses. Cloud manufacturing offers four kinds of service platform:
- Public CM service platform: manufacturing resources and abilities are shared with the general public in a multi-tenant environment.
- Private CM service platform: manufacturing resources and abilities are shared within one company or its subsidiaries. It is managed by an organization or enterprise to provide greater control over its resources and services.
- Community CM service platform: manufacturing resources and abilities are controlled and used by a group of organizations with common concerns.
- Hybrid CM service platform: a composition of public and private clouds. Services and information that are not critical are stored in the public CM, while critical information and services are kept within the private CM.
Cloud manufacturing draws on technologies such as networked manufacturing, manufacturing grid (MGrid), virtual manufacturing, agile manufacturing, the Internet of Things, and cloud computing. It can reduce production costs and improve production efficiency, the distribution of integrated resources, and resource utilization.
By Mojgan Afshari
PNRP name resolution uses these two steps:
- Endpoint determination – In this step the peer determines the IPv6 address of the network interface on which the service with the desired PNRP ID is published.
- PNRP ID resolution – After locating and testing the reachability of the peer that published the desired PNRP ID, the requesting computer sends a PNRP Request message to that peer for the PNRP ID of the desired service. The other side sends a reply confirming the PNRP ID of the requested service, along with a comment and up to 4 kilobytes of additional information. The comment and the additional data can carry custom information back to the requestor, for example about the status of the server or its services.
To find the needed peer, PNRP performs an iterative lookup across the nodes that have published their PNRP IDs. The node performing the resolution is responsible for contacting the nodes that are closer to the target PNRP ID. The peer first examines all entries in its own cache. If an entry that matches the target PNRP ID is found, it sends a PNRP Request message to that peer and waits for a response. This confirms that the node is actually available, rather than cached but unreachable. If an entry for the PNRP ID is not found, the peer sends a PNRP Request message to the peer whose PNRP ID most closely matches the target PNRP ID. The node that receives the PNRP Request looks at its own cache and then:
- If the PNRP ID is found, it sends a positive reply with the answer.
- If the PNRP ID is not found and there is no PNRP ID in its cache that is closer to the target PNRP ID, the requested peer sends back a response saying that it cannot help. The requesting peer then chooses the next-closest PNRP ID from its own cache.
- If the PNRP ID is not found but a PNRP ID in its cache is closer to the target PNRP ID, the peer replies with the IPv6 address of the peer whose PNRP ID most closely matches the target. Using that IPv6 address, the requestor then asks the referred peer whether it knows where the node with the specified PNRP ID is.
Through this iterative process, the requesting peer eventually locates the node that has the target PNRP ID registered.
Here is an example of a simple PNRP ID search. Suppose PC1 has cache entries for the PNRP IDs 200, 450, and 500. In the accompanying figure, every blue arrow from one PC to another indicates that the PC at the arrow's start has a cache entry for the node at the arrow's end. When PC1 wants to find the PC whose PNRP ID is 800, the following takes place:
- Because 500 is closer to 800 than the other cached values, PC1 sends a PNRP Request message to PC3.
- PC3 does not have an entry for PNRP ID 800, nor any entries close to it, so PC3 sends PC1 a negative response: it cannot help.
- 450 is the next-closest PNRP ID to 800, so PC1 continues by sending a request to PC2.
- PC2 has an entry for PNRP ID 800 in its cache, so it responds to PC1 with the IPv6 address of PC5.
- PC1 then sends a PNRP Request to PC5.
- PC5 sends back a positive response, completing the name resolution for PC1.
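The walkthrough above can be condensed into a toy model of the iterative lookup. The sketch below uses a made-up set of caches matching the PC1-PC5 example (only the entries relevant to the walkthrough are modeled); it ignores the real protocol's message formats, IPv6 addressing, and reachability checks, and shows only the "ask the closest peer you know about" loop.

```python
# Toy model of PNRP-style iterative resolution over the PC1..PC5 example.
# Caches and IDs are the example's made-up values; real PNRP exchanges
# Request/Response messages over IPv6, which is not modeled here.

REGISTERED = {"PC2": 450, "PC3": 500, "PC5": 800}   # node -> the PNRP ID it published
CACHES = {
    "PC1": {450: "PC2", 500: "PC3"},   # PC1 only knows about PC2 and PC3
    "PC2": {800: "PC5"},               # PC2 has cached the target's registration
    "PC3": {},                         # PC3 knows nothing useful
    "PC5": {},
}

def ask(peer, target_id):
    """Simulate one PNRP Request sent to `peer` for `target_id`."""
    if REGISTERED.get(peer) == target_id:
        return ("positive", peer)                      # the peer itself registered the ID
    if target_id in CACHES.get(peer, {}):
        return ("referral", CACHES[peer][target_id])   # "try this closer peer"
    return ("negative", None)                          # "I cannot help"

def resolve(requester, target_id):
    # Start with the requester's own cache, closest ID first.
    by_closeness = sorted(CACHES[requester].items(), key=lambda kv: abs(kv[0] - target_id))
    queue = [peer for _, peer in by_closeness]
    asked = set()
    while queue:
        peer = queue.pop(0)
        if peer in asked:
            continue
        asked.add(peer)
        verdict, info = ask(peer, target_id)
        print(f"{requester} -> {peer}: {verdict}")
        if verdict == "positive":
            return peer
        if verdict == "referral":
            queue.insert(0, info)   # query the referred peer next
    return None

if __name__ == "__main__":
    print("resolved by:", resolve("PC1", 800))
```

Running it prints the same sequence as the walkthrough: a negative reply from PC3, a referral from PC2, and a positive reply from PC5.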
Every time a major security breach makes the headlines, a common reaction happens. Even before the details of the breach are known, the infosec world gets into a frenzy of speculation as to how the attack happened, who conducted it, and whether the attackers were skilled or not. Invariably the conversation focuses on the company that is the victim of the attack, and it often tends to highlight how stupid, negligent or weak its security defenses were. In effect, we blame the victim for being attacked. While the organization may have been negligent, or their security not up to scratch, we should not forget they are still the victim. How good, or not, the victim’s security was is a separate issue for a separate conversation. Foisting blame on the victim on top of having to deal with the incident does not bring much value to the conversation. The blame for the attack should lie squarely on the shoulders of those who conducted it. Our tendency to blame others for security failings does not stop at the victims of security breaches. Security professionals often berate developers for writing insecure code, when in fact those developers are coding in the way they have been trained. Users are derided, mocked, and blamed for clicking on links, falling for phishing scams, or not following policies, when all they were trying to do was their work. Management gets blamed for not investing enough money or resources into security. Vendors are blamed for producing and selling products that do not meet our expectations when it comes to protecting our systems. We blame governments for not giving security the attention it should get or not giving the necessary resources to law enforcement to deal with the rise in cybercrime. It is interesting to note that in all the assigning of blame we very rarely blame ourselves. There is an appropriate saying: “When pointing a finger at someone there are always three of your fingers pointing back at you.” This is something that we in information security need to think about. Instead of concentrating on the weaknesses of others we should look at our own shortcomings. We never seem to ask why developers have not been trained or made aware of how to code securely. How come users don’t understand the risks of clicking on links and attachments or realize that security policies are in place for a reason? Why does senior management not appreciate the risk poor information security poses to the business? We criticise and berate others for not understanding information security as well as we do and then wonder why no one will talk to us. We fail to engage with developers, users, and management to proactively understand their requirements. We rarely look at ways to support them so that they can do their jobs in a secure manner. Blame shames people and makes them less willing to share in the future. Users will be afraid to report potential security breaches as a result of clicking on a link in an email, which will lead to our networks being potentially exposed. Companies will not be willing to share how they suffered a security breach as they fear the ridicule and negative impact on their image from those who may focus on the inadequacies of their defenses rather than the fact they are a victim. When we don’t share our experiences we cannot as an industry learn, and by not learning we will find it more difficult to protect ourselves. 
So next time you are dealing with users who do not know how to work in a secure manner, don’t blame the users but rather take a step back and try to understand where and how we have failed to enable them to work securely. When management does not provide the necessary resources to improve information security, let’s not blame them for not understanding the issue. Instead let’s try to learn how to better present the business case that will get management to approve the investment. The next time a company’s network security is breached, remind yourself that it is the victim of a crime. Instead of shaming and blaming the victim, our focus should be on how to stop those responsible for the attacks creating more victims. In the blame game nobody wins, yet everybody loses. As the famous American novelist John Burroughs said: “You can get discouraged many times, but you are not a failure until you begin to blame somebody else and stop trying.” We have too much at stake in ensuring our systems and networks are secure to fail at what we do. We will be discouraged many times but let’s not become failures – let’s stop playing the blame game. Brian Honan is an independent security consultant based in Dublin, Ireland, and is the founder and head of IRISSCERT, Ireland’s first CERT. He is a Special Advisor to the Europol Cybercrime Centre, an adjunct lecturer on Information Security at University College Dublin, and he sits on the Technical Advisory Board for a number of innovative information security companies. He has addressed a number of major conferences, wrote the book ISO 27001 in a Windows Environment, and co-authored The Cloud Security Rules. He regularly contributes to a number of industry recognized publications and serves as the European Editor for the SANS Institute’s weekly SANS NewsBites.
HTML forms are one of the features that allow users to send data to HTTP servers. An often overlooked consequence of the way HTTP works is that the web browser has no way of distinguishing between an HTTP server and one that is not an HTTP server. Therefore web browsers may send this data to any open port, regardless of whether the open port belongs to an HTTP server or not. Apart from that, many web browsers will simply render any data that is returned from the server. One thing to keep in mind is that HTML forms can be hosted on one website (the attacker’s website) and send data to an open port on a victim server. When an attacker can control what is returned by the server, the victim becomes vulnerable to security issues such as cross-site scripting. In the case of HTTP servers, this is a well-known issue, and modern web servers therefore do not exhibit this behavior by default. However, this is not the case with other kinds of servers, such as SMTP (Simple Mail Transfer Protocol) or FTP (File Transfer Protocol) servers; these servers will often echo back error messages containing user input. When this user input can be controlled by the attacker, bad things can happen. Download the paper in PDF format here.
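A minimal way to see the cross-protocol behavior described above is to hand-deliver a browser-style form POST to a non-HTTP service and watch what comes back. The sketch below does that with a raw socket; the target host is a placeholder, and it should only ever be pointed at a server you control.

```python
# Minimal sketch: send a browser-style form POST to an SMTP port and print the
# replies. Many mail servers echo unrecognized lines back in their error
# messages, which is the behavior this class of attack abuses.
# Target host/port are placeholders; only run this against your own test server.
import socket

TARGET_HOST = "mail.example.invalid"   # placeholder - use your own test server
TARGET_PORT = 25

form_body = "comment=<script>alert(1)</script>"
request = (
    "POST / HTTP/1.1\r\n"
    f"Host: {TARGET_HOST}\r\n"
    "Content-Type: application/x-www-form-urlencoded\r\n"
    f"Content-Length: {len(form_body)}\r\n"
    "\r\n"
    f"{form_body}\r\n"
)

with socket.create_connection((TARGET_HOST, TARGET_PORT), timeout=10) as sock:
    sock.sendall(request.encode("ascii"))
    sock.settimeout(5)
    try:
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            # Replies such as "500 Command unrecognized: 'comment=<script>...'"
            # show the server reflecting attacker-controlled input.
            print(chunk.decode("ascii", errors="replace"), end="")
    except socket.timeout:
        pass
```

Against a permissive mail server, the replies typically quote the submitted lines back verbatim, which is the reflection behavior the paper is concerned with.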
Vulkan is a new low-level API that developers can use to access the GPU, in place of OpenGL or Direct3D. It is essentially the successor to OpenGL, and the standard comes from the Khronos Group, the same standards organization that manages OpenGL. Khronos designed Vulkan to be an open, royalty-free standard. Developers are able to take advantage of Vulkan's reduced CPU overhead and efficient performance in games, applications, and mobile software. Version 1.0 of the specification was released today, and the first Vulkan SDK, from LunarG, was also released for Windows and Linux. Vulkan is available on multiple versions of Microsoft Windows from Windows 7 to Windows 10, and has been adopted as a native rendering and compute API by platforms including Linux, SteamOS, Tizen and Android. AMD, ARM, Intel, NVIDIA, and other industry pillars have been quick to adopt the standard. NVIDIA offers beta support for Vulkan in Windows driver version 356.39 and Linux driver version 355.00.26. AMD similarly offers beta support for Vulkan with beta drivers for Windows 7 – Windows 10.
"My computer is acting up. It must be a virus." You've undoubtedly heard comments like this or even thought this yourself. In actuality, most computer glitches are caused by software conflicts or user error. Viruses aren't as common as other computer problems. They're found in about 0.15 percent of e-mails, according to the latest figures from MessageLabs, a provider of Internet security products that each day analyzes more than 180 million e-mails worldwide for its business clients. That makes viruses less prevalent than phishing attacks that try to trick you into revealing your credit card, banking, or other personal information, which make up about 0.45 percent of e-mails. The most common e-mail problem, however, is spam, those unsolicited, untargeted commercial messages sent in bulk, with such messages comprising a whopping 44.96 percent of all e-mails. But viruses do get a lot of attention, and it's easy to see why. They have an ominous and mysterious aura. And they can do serious damage, including wiping out all the data on your hard drive. Some may not do overt harm but instead scare you with a pop-up text message such as "Gotcha," a photo of a raised middle finger, or a sinister audio or video file. Computer viruses are simply small computer programs. Like human viruses, computer viruses can replicate, spreading like a disease from one computer to another through e-mail or, less commonly, through infected CD-ROM discs, USB drives, music and other file-sharing networks, and Web sites. All indications are that viruses are typically written by pranksters in their teens and twenties, according to virus experts. Some are written by truly disturbed individuals, the kind of sociopaths who indiscriminately slash tires. Some may be written in a more formalized way, by members of organized crime families or foreign terrorist groups. And some are intended as "good viruses" to delete other viruses but may inadvertently cause harm, for instance, by deleting a vital system file by mistake. Viruses may be written from scratch by programmers. Or they may be created with virus-writing kits, requiring no programming knowledge. Some virus writers write viruses for the intellectual challenge, never intending to release them. Some of these viruses get released accidentally. Web sites and online chat rooms exist where virus writers ask questions, trade tricks, and boast of their exploits. The first line of defense against viruses, as with every potential computer disaster, is to make regular backups of the vital data stored on your hard drive. Ideally you should periodically do this to a medium that's not continuously connected and accessible, to prevent a virus from infecting it too. The next safety step is to use antivirus software. Top programs include Symantec's Norton AntiVirus, available separately or as part of other Symantec products, and McAfee VirusScan, also available separately or as part of a larger suite of other products. Antivirus programs scan relevant files looking for the specific programming code, or signatures, of known viruses. They also look for common behaviors of viruses. To avoid conflicts, you should use only one antivirus program at a time. Another excellent program, which can be used in conjunction with antivirus and other security programs, is Spybot Search & Destroy, a program that removes spyware and other malware. It's a superb example of international altruistic entrepreneurship. 
The program is written and supported by German software engineer Patrick Kolla and the volunteers who work with him, and it's distributed by Kolla's Irish company Safer Networking Ltd. The program, which has won many awards for its effectiveness, is free for noncommercial use, supported by donations. Fees for corporate use depend on the size of your network. Along with protecting yourself with the above software shields, you should be careful about e-mail attachments. Don't open any from people you don't know. If you receive an attachment from someone you do know but weren't expecting it, it can be good practice to contact the sender to verify that the person actually intended to send it. It's also important to keep your operating system up to date, ideally directing it to download bug fixes and other updates automatically. It's equally important to keep your antivirus and other security software up to date by doing the same. Users of Microsoft Windows and Windows programs are most vulnerable to viruses, in part because of their market share and in part because there's a hostility in the virus underground toward the big business that Microsoft represents. But Apple Macintosh and Linux users also need to be careful. Reid Goldsborough is a syndicated columnist and author of the book Straight Talk About the Information Superhighway. He can be reached at [email protected] or http://www.netaxs.com/~reidgold/column.
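To make the signature-scanning idea described above concrete, here is a toy illustration of the pattern-matching step: it looks for fixed byte sequences in files. The "signatures" are made-up strings, and real antivirus engines add far more, including unpacking, heuristics, and behavioral monitoring, so this is a sketch of the concept only.

```python
# Toy signature scanner: report files containing known byte patterns.
# The signature list is entirely made up; real products ship millions of
# signatures plus heuristics and behavior monitoring.
import os
import sys

SIGNATURES = {
    "Example.TestPattern.A": b"THIS-IS-A-FAKE-SIGNATURE-FOR-ILLUSTRATION",
    "Example.TestPattern.B": b"ANOTHER-MADE-UP-BYTE-PATTERN",
}

def scan_file(path: str):
    """Return the names of any signatures found in the file."""
    try:
        with open(path, "rb") as handle:
            contents = handle.read()
    except OSError:
        return []  # unreadable file; skip it
    return [name for name, pattern in SIGNATURES.items() if pattern in contents]

def scan_tree(root: str) -> None:
    for dirpath, _, filenames in os.walk(root):
        for filename in filenames:
            path = os.path.join(dirpath, filename)
            for match in scan_file(path):
                print(f"{path}: matched {match}")

if __name__ == "__main__":
    scan_tree(sys.argv[1] if len(sys.argv) > 1 else ".")
```

Real engines pair this kind of matching with behavioral detection, which is why the article notes that they also look for common virus behaviors.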
<urn:uuid:886604c0-22dd-4300-aa07-02e157d84176>
CC-MAIN-2017-09
http://www.govtech.com/security/Personal-Computing-How-Serious-a-Threat.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171078.90/warc/CC-MAIN-20170219104611-00634-ip-10-171-10-108.ec2.internal.warc.gz
en
0.957124
1,006
3.1875
3
Who doesn't love Pandora Radio? But listening to it on my Android phone is the fastest way to kill the battery, and what good is a mobile phone if it has to be constantly plugged in? New research shows that Android phones are the most data-hungry smartphones out there. A group of researchers at AT&T Labs are calling on app makers to fix this problem by building more energy-aware apps. Not surprisingly, Pandora is one of their test subjects. (Facebook is another.) It's about time!

All smartphones could use more energy-efficient apps, but Android users suffer the most. New research from Nielsen found that although iPhone users engage in as many or more data-intensive activities (downloading apps, streaming music or video) as Android users, Android phones gobble up more monthly data. Each month, Android phones consume 90MB more data than iPhones. As every smartphone user knows, the more data transferred, the faster the battery drains. But apparently it's not the OS that's the issue ... it's the underlying apps, researchers say.

Enter new research being done by AT&T to create energy-efficient apps that recognize they are on a cell network and limit both the number of times an app connects to the network and the time needed to connect. They have developed a tool that helps app developers figure out when their apps really need full-power connections (download speeds of around 7.1Mbits/sec) or when the app can get by on a proposed "intermediate state" which consumes half the power and transmits less data at a slower speed, typically by sharing a low-speed channel (often 16kbps).

For instance, the researchers found that when they ran Pandora for 12 minutes, the app conducted "a series of short bursts once every 62.5 seconds ... While the music itself was sent simply and efficiently as a single file, the periodic audience measurements — each constituting only 2KBs or so — were being transmitted at regular 62.5-second intervals. The constant cycle of ramping up to full power (2 seconds to ramp up, 1 second to download 2KB) and back to idle (17 seconds for the two tail times) ... was extremely wasteful," they wrote.

After reading the paper, I had many questions. I contacted the researcher Alexandre Gerber, a principal member of the technical staff at AT&T Labs Research, and asked them.

How do the different platforms compare when it comes to energy efficiency already? Apple/iPhone/iOS vs. Android vs. BlackBerry vs. Windows Phone 7. Does the development platform influence this? Are some better than others?

Different OSs may have different energy efficiency in terms of some system components such as CPU and memory. But we are looking at the efficiency of accessing the network (3G power consumption contributes about 50% of overall handset power consumption). That is mostly determined by the application instead of the OS.

How do different network speeds/types influence this … 3G vs. 4G vs. WiFi?

It is the resource control policy of different networks that influences the energy efficiency. Cellular networks usually have similar resource control mechanisms, but cellular network technology is also getting better over time. WiFi has a different approach and it is more energy efficient than cellular.

The paper mentions, “One popular app was found to be using 40% of its power consumption to transmit 0.2% of its data.” … was this a typical finding? Or was it an extreme finding?

This is a common observation for applications with periodic data transfers (e.g. 
ads, keep alive, pull instead of the more efficient push, audience measurements), although the numbers may not always be that high.

In terms of hours of battery life, how much power overall would you guess is wasted by apps that do a poor job of managing state? (What I mean is, if you have a battery that is supposed to give you six hours of talk time, but dies in three hours of app usage, how much battery life would you get back if all of your apps were energy efficient? A few minutes? A few hours?)

Clearly that depends on the application you are using. For a large Internet radio, for instance, if 40% of its radio power, which contributes to 50% of total device power, is wasted, then you can save about 40% * 50% = 20% of overall battery life. So this could end up being a significant amount of time.

How much is app battery usage influenced/dependent on the handset? Do the same apps consume different amounts of energy on different handsets (an HTC Android phone vs. a Motorola one? An iPhone 3 vs. an iPhone 4)?

Yes, they differ. The table below compares power consumption of three radio states (IDLE, FACH, and DCH) of two phones: HTC TyTn II and Google NexusOne. These are measurements made as part of our study in our Research group; it is independent of measurements made by our official device testing group:

Radio State | TyTn  | NexusOne
P(IDLE)     | 0     | 0
P(FACH)     | 460mW | 450mW
P(DCH)      | 800mW | 600mW

In your paper, you detailed the results of analyzing the Pandora app. What smartphone platform did you use to analyze it? Generally speaking, did you discover the Facebook app was more (or less) energy efficient than Pandora?

This is an apples-to-oranges comparison. These are applications that are difficult to compare; the content is completely different. A comparison would only make sense between the same type of applications. For instance, we noticed that Pandora is more efficient than other Internet radios because it is sending data in bursts followed by long periods of inactivity, as opposed to continuously streaming content like some other Internet radios.
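To make these figures concrete, here is a small back-of-the-envelope sketch in Python. The timing values come from the Pandora excerpt above and the power values from the TyTn II column of the table; treating the ramp-up and download as DCH time and the tail as FACH time is a simplifying assumption on our part, not something stated in the paper.

```python
# Rough energy cost of Pandora's periodic 2 KB audience-measurement transfer,
# using the timing quoted from the AT&T paper and the TyTn II radio-state powers.

CYCLE_S = 62.5      # one measurement cycle (seconds)
RAMP_S = 2.0        # ramp up to full power
TRANSFER_S = 1.0    # time to download the ~2 KB payload
TAIL_S = 17.0       # the two tail timers before the radio goes idle

P_DCH_W = 0.800     # full-power (DCH) state
P_FACH_W = 0.460    # shared low-speed channel (FACH) state

# Assumption: ramp-up and transfer happen at DCH power, the tail at FACH power.
energy_per_cycle_j = (RAMP_S + TRANSFER_S) * P_DCH_W + TAIL_S * P_FACH_W
active_fraction = (RAMP_S + TRANSFER_S + TAIL_S) / CYCLE_S

print(f"radio energy per cycle: {energy_per_cycle_j:.2f} J")                 # ~10.2 J
print(f"radio active {active_fraction:.0%} of every cycle to move ~2 KB")    # 32%

# Gerber's estimate: 40% of radio energy wasted, radio at ~50% of device power,
# so roughly 20% of overall battery life could be reclaimed.
print(f"potential battery saving: {0.40 * 0.50:.0%}")
```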
<urn:uuid:02767e2e-d641-4281-9e29-6664400d0602>
CC-MAIN-2017-09
http://www.networkworld.com/article/2229434/opensource-subnet/at-t-researchers-call-for-smartphone-apps-that-won-t-suck-your-battery-dry.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173866.98/warc/CC-MAIN-20170219104613-00034-ip-10-171-10-108.ec2.internal.warc.gz
en
0.954814
1,196
2.609375
3
Providing greater insight and control over elements in our increasingly connected lives, the Internet of Things (IoT) emerges at a time when threats to our data and systems have never been greater. There is an average of thirteen enterprise security breaches every day, resulting in roughly 10 million records lost a day—or 420,000 every hour. As new connected devices come to market, security researchers have taken up the cause to expose their vulnerabilities and make the world aware of the potential harm of connecting devices without properly securing the Internet of Things. Gartner's Market Guide for IoT Security provides recommendations to meet IoT security challenges.

Security experts Chris Valasek and Charlie Miller grabbed headlines with their research on the vulnerability of connected cars when they hacked into a Toyota Prius and a Ford Escape using a laptop plugged into the vehicle’s diagnostic port. This allowed the team to manipulate the cars' headlights, steering, and braking.

In April 2014, Scott Erven and his team of security researchers released the results of a two-year study on the vulnerability of medical devices. The study revealed major security flaws that could pose serious threats to the health and safety of patients. They found that they could remotely manipulate devices, including those that controlled dosage levels for drug infusion pumps and connected defibrillators.

In 2012, the Department of Homeland Security discovered a flaw in hardened grid and router provider RuggedCom’s devices. By decrypting the traffic between an end user and the RuggedCom device, an attacker could launch attacks to compromise the energy grid.

We can sort potential attacks against the Internet of Things into three primary categories based on the target of the attack—attacks against a device, attacks against the communication between devices and masters, and attacks against the masters. To protect end users and their connected devices, we need to address all three of these attack categories.

To a potential attacker, a device presents an interesting target for several reasons. First, many of the devices will have an inherent value by the simple nature of their function. A connected security camera, for example, could provide valuable information about the security posture of a given location when compromised.

A common method of attack involves monitoring and altering messages as they are communicated. The volume and sensitivity of data traversing the IoT environment makes these types of attacks especially dangerous, as messages and data could be intercepted, captured, or manipulated while in transit. All of these threats jeopardize the trust in the information and data being transmitted, and the ultimate confidence in the overall infrastructure.

For every device or service in the Internet of Things, there must be a master. The master’s role is to issue and manage devices, as well as facilitate data analysis. Attacks against the masters – including manufacturers, cloud service providers, and IoT solution providers – have the potential to inflict the most amount of harm. These parties will be entrusted with large amounts of data, some of it highly sensitive in nature. This data also has value to the IoT providers because of the analytics, which represent a core, strategic business asset—and a significant competitive vulnerability if exposed. 
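One common mitigation for tampering with messages in transit between a device and its master is to authenticate each message with a keyed message authentication code that the master verifies before trusting the payload. The sketch below is a minimal illustration using Python's standard hmac module with a placeholder key; it is not a description of any particular vendor's protocol.

```python
import hmac
import hashlib

# Shared secret provisioned to the device and its master out of band.
# In a real deployment this would come from a secure element or a key
# management service, not a hard-coded constant.
DEVICE_KEY = b"example-shared-secret"

def seal(payload: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so the master can check integrity and origin."""
    tag = hmac.new(DEVICE_KEY, payload, hashlib.sha256).digest()
    return payload + tag

def verify(message: bytes) -> bytes:
    """Split payload and tag; reject anything that was modified in transit."""
    payload, tag = message[:-32], message[-32:]
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("message failed integrity check")
    return payload

sealed = seal(b'{"sensor":"cam-7","status":"armed"}')
assert verify(sealed) == b'{"sensor":"cam-7","status":"armed"}'
```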
<urn:uuid:c0894ca6-bd34-4ae2-8659-89fc58539005>
CC-MAIN-2017-09
https://safenet.gemalto.com/data-protection/securing-internet-of-things-iot/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170614.88/warc/CC-MAIN-20170219104610-00630-ip-10-171-10-108.ec2.internal.warc.gz
en
0.942431
692
2.765625
3
This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.

In major metropolitan areas and smaller cities alike, governments are adopting software-defined networking (SDN) and network function virtualization (NFV) to deliver the agility and flexibility needed to support adoption of “smart” technologies that enhance the livability, workability and sustainability of their towns. Today there are billions of devices and sensors being deployed that can automatically collect data on everything from traffic to weather, to energy usage, water consumption, carbon dioxide levels and more. Once collected, the data has to be aggregated and transported to stakeholders where it is stored, organized and analyzed to understand what’s happening and what’s likely to happen in the future.

There’s a seemingly endless list of potential benefits. Transportation departments can make informed decisions to alleviate traffic jams. Sources of water leaks can be pinpointed and proactive repairs scheduled. Smart payments can be made across city agencies, allowing citizens to complete official payments quickly and reducing the government employee time needed to facilitate such transactions. And even public safety can be improved by using automated surveillance to help police watch high-crime hotspots.

Of particular interest is how healthcare services can be improved. There is already a push to adopt more efficient and effective digital technology management systems to better store, secure and retrieve huge amounts of patient data. Going a step further, a smart city is better equipped to support telemedicine innovations that require the highest quality, uninterrupted network service. Telesurgery, for example, could allow specialized surgeons to help local surgeons perform emergency procedures from remote locations — the reduction of wait time before surgery can save numerous lives in emergency situations, and can help cities and their hospital systems attract the brightest minds in medical research and practice.

The smart city of today

While the smart city is expected to become the norm, examples exist today. Barcelona is recognized for environmental initiatives (such as electric vehicles and bus networks), city-wide free Wi-Fi, smart parking, and many more programs, all of which benefit from smart city initiatives. With a population of 1.6 million citizens, Barcelona shows that smart city technologies can be implemented regardless of city size. But even smaller cities are benefitting from going “smart.” In 2013 Cherry Hill, New Jersey, with a population of only 71,000, began using a web-based data management tool along with smart sensors to track the way electricity, water, fuel and consumables are being utilized, then compared usage between municipal facilities to identify ways to be more efficient. Chattanooga, Tennessee, population 170,000, along with its investment to provide the fastest Internet service in the U.S., has recently begun developing smart city solutions for education, healthcare and public safety.

How do cities become smart?

The most immediate need is to converge disparate communications networks run by various agencies to ensure seamless connectivity. To achieve this, packet optical based connectivity is proving critical, thanks largely to the flexibility and cost advantages it provides. 
Then atop the packet optical foundation sits technology that enables NFV and the applications running on COTS (commercial off-the-shelf) equipment in some form of virtualized environment. SDN and NFV allow for the quick and virtual deployment of services to support multiple data traffic and priority types, as well as the increasingly unpredictable data flows of IoT. Decoupling network functions from the hardware means that architectures can be more easily tweaked as IoT requirements change.

Also, SDN and NFV can yield a more agile service provision process by dynamically defining the network that connects the IoT end devices to back-end data centers or cloud services. The dynamic nature of monitoring end-points, location, and scale will require SDN so that networks can be programmable and reconfigured to accommodate the moving workloads. Take, for example, allocating bandwidth to a stadium for better streaming performance of an event as the number of users watching remotely on-demand goes up—this sort of dynamic network-on-demand capability is enabled by SDN. Additionally, NFV can play a key role where many of the monitoring points that make the city "smart" are actually not purpose-built hardware-centric solutions, but rather software-based solutions that can be running on-demand. With virtual network functions (VNF), the network can react in a more agile manner as the municipality requires.

This is particularly important because the network underlying the smart city must be able to extract high levels of contextual insight through real-time analytics conducted on extremely large datasets if systems are to be able to problem-solve in real-time; for example, automatically diverting traffic away from a street where a traffic incident has taken place. SDN and NFV may enable the load balancing, service chaining and bandwidth calendaring needed to manage networks that are unprecedented in scale. In addition, SDN and NFV can ensure network-level data security and protection against intrusions – which is critical given the near-impossible task of securing the numerous sensor and device end points in smart city environments.

Smart city business models

In their smart city initiatives, cities large and small are addressing issues regarding planning, infrastructure, systems operations, citizen engagement, data sharing, and more. The scale might vary, but all are trying to converge networks in order to provide better services to citizens in an era of shrinking budgets. As such, the decision on how to go about making this a reality is important. There are four major smart city business models to consider, as defined by analysts at Frost & Sullivan (“Global Smart City Market a $1.5T Growth Opportunity In 2020”):

- Build Own Operate (BOO): In a BOO model, municipalities own, control, and independently build the city infrastructure needed, and deliver the smart city services themselves. Both operation and maintenance of these services is under the municipality’s control, often headed up by their city planner.

- Build Operate Transfer (BOT): Whereas in a BOO model the municipality is always in charge of the operation and management of smart city services, in a BOT model that is only the case after an initial period – the smart city infrastructure building and initial service operation is first handled by a trusted partner appointed by the city planner. Then, once all is built and in motion, operation is handed back over to the city. 
- Open Business Model (OBM): In an OBM model, the city planner is open to any qualified company building city infrastructure and providing smart city services, so long as they stay within set guidelines and regulations.

- Build Operate Manage (BOM): Finally, there is the BOM model, under which the majority of smart city projects are likely to fall. In this model, the smart city planner appoints a trusted partner to develop the city infrastructure and services. The city planner then has no further role beyond appointment – the partner is in charge of operating and managing smart city services.

SDN and NFV: The keys to the (smart) city

With the appropriate business model in place and the network foundation laid out, the technology needs to be implemented to enable virtualization. Virtualized applications allow for the flexibility to handle numerous data types, and the scalability to transport the huge amounts of data the city aims to use in its analysis. SDN and NFV reduce the hardware, power, and space requirements to deploy network functions through the use of industry-standard high-volume servers, switches and storage; they make network applications portable and upgradeable with software; and they allow cities of all sizes the agility and scalability to tackle the needs and trends of the future as they arise. Like the brain’s neural pathways throughout a body, SDN and NFV are essential in making the smart city and its networks connect and talk to each other in a meaningful way.

This story, "SDN and NFV: The brains behind the “smart” city" was originally published by Network World.
<urn:uuid:3744fcb2-5b66-4dd5-964b-41acf86e953b>
CC-MAIN-2017-09
http://www.itnews.com/article/3000819/software-defined-networking/sdn-and-nfv-the-brains-behind-the-smart-city.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171163.39/warc/CC-MAIN-20170219104611-00330-ip-10-171-10-108.ec2.internal.warc.gz
en
0.932565
1,656
2.796875
3
SAS vs. SATA - Page 5

ZFS does verify checksums on reads. When it writes data to the storage, it computes checksums of each block and writes them along with the data to the storage devices. The checksums are written in the pointer to the block. A checksum of the block pointer itself is also computed and stored in its pointer. This continues all the way up the tree to the root node, which also has a checksum. When a data block is read, the checksum is computed and compared to the checksum stored in the block pointer. If the checksums match, the data is passed from the file system to the calling function. If the checksums do not match, then the data is corrected using either mirroring or RAID (depending upon how ZFS is configured).

Remember that the checksums are made on the blocks and not on the entire file, allowing the bad block(s) to be reconstructed if the checksums don't match and if the information is available for reconstructing the block(s). If the blocks are mirrored, then the mirror of the block is used and checked for integrity. If the blocks are stored using RAID, then the data is reconstructed just as you would reconstruct any RAID data - from the remaining blocks and the parity blocks. However, a key point to remember is that in the case of multiple checksum failures the file is considered corrupt and it must be restored from a backup.

ZFS can help data integrity in some regards. ZFS computes the checksum information in memory prior to the data being passed to the drives. It is very unlikely that the checksum information will be corrupt in memory. After computing the checksums, ZFS writes the data to the drives via the channel as well as writes the checksums into the block pointers. Since the data has come through the channel, it is possible that the data can become corrupted by an SDC. In that case ZFS will write corrupted data (either the data or the checksum, possibly both). When the data is read, ZFS is capable of recovering the correct data because it will either detect a corrupted checksum for the data (stored in the block pointer) or it will detect corrupted data. In either case, it will restore the data from a mirror or RAID. The key point is that the only way to discover if the data is bad is to read it again.

ZFS has a feature called "scrubbing" that walks the data tree and checks both the checksums in the block pointers as well as the data itself. If it detects problems, then the data is corrected. But scrubbing will consume CPU and memory resources, and storage performance will be reduced to some degree (scrubbing is done in the background). If a drive suffers a hard error (see the first section) before ZFS has scrubbed data that was corrupted by an SDC in the SATA channel, then it's very possible that you can't recover the data. The data was corrupted and the checksums could have been used to correct it, but now a drive holding the block and block pointer is dead, making life very difficult. Given the drive error rate of Consumer SATA drives in the first section and the size of the RAID groups, plus the SATA Channel SDC, this combination of events can be a distinct possibility (unless you start scrubbing data at a very high rate so that newly landed data is scrubbed immediately, which limits the performance of the file system).

Therefore ZFS can "help" the SATA channel in terms of reducing the effective SDC because it can recover data corrupted by the SATA channel, but to do this, all of the data that is written must be read as well (to correct the data). 
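As a simplified illustration of the read-verify-repair cycle described above (a toy model, not ZFS's actual on-disk format or code), the sketch below keeps a checksum in the "block pointer," verifies it on every read, and falls back to the mirror copy when the primary copy fails verification.

```python
import hashlib

def checksum(data: bytes) -> bytes:
    # ZFS supports several checksum algorithms; SHA-256 stands in here.
    return hashlib.sha256(data).digest()

class MirroredBlock:
    """Toy stand-in for a block pointer over a two-way mirror."""

    def __init__(self, data: bytes):
        self.copies = [bytearray(data), bytearray(data)]  # the "mirror"
        self.stored_checksum = checksum(data)             # lives in the pointer

    def read(self) -> bytes:
        for i, copy in enumerate(self.copies):
            if checksum(bytes(copy)) == self.stored_checksum:
                # Self-heal any copy that failed verification.
                for j, other in enumerate(self.copies):
                    if j != i and checksum(bytes(other)) != self.stored_checksum:
                        self.copies[j] = bytearray(copy)
                return bytes(copy)
        raise IOError("all copies corrupt: restore from backup")

blk = MirroredBlock(b"important data")
blk.copies[0][0] ^= 0xFF                  # simulate silent corruption on one side
assert blk.read() == b"important data"    # the read repairs from the good mirror
```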
This means that to write a chunk of data you have to compute the checksum in memory, write it with the data to the storage system, re-read the data and checksum, compare the stored checksum to the computed checksum, and possibly recover the corrupted data, compute a new checksum and write it to disk. This is a great deal of work just to write a chunk of data.

Another consideration for SAS vs. SATA is performance. Right now SATA has a 6 Gbps interface. Instead of doubling the interface to go to 12 Gbps, the decision was made to switch to something called SATA Express. This is a new interface that supports either SATA or PCI Express storage devices. SATA Express should start to appear in consumer systems in 2014, but the peak performance can vary widely, from as low as 6 Gbps for legacy SATA devices to 8-16 Gbps for PCI Express devices (e.g. PCIe SSDs). However, there are companies currently selling SAS drives with a 12 Gbps interface. Moreover, in a few years, there will be 24 Gbps SAS drives.

SATA vs. SAS: Summary and Observations

Let's recap. To begin with, SATA drives have a much higher hard error rate than SAS drives. Consumer SATA drives are 100 times more likely to encounter a hard error than Enterprise SAS drives. The SATA/SAS Nearline Enterprise drives have a hard error rate that is only 10 times worse than Enterprise SAS drives. Because of this, RAID group sizes are limited when Consumer SATA drives are used, or you run the risk of a multi-disk failure that even something like RAID-6 cannot protect against. There are plenty of stories of people who have used Consumer SATA drives in larger RAID groups where the array is constantly in the middle of a rebuild. Performance suffers accordingly.

The SATA channel has a much higher incidence rate of silent data corruption (SDC) than the SAS channel. In fact, the SATA channel is four orders of magnitude worse than the SAS channel for SDC rates. For the data rates of today's larger systems, you are likely to encounter a few silent data corruptions per year even running at 0.5 GiB/s with a SATA channel (about 1.4 per year). On the other hand, the SAS channel allows you to use a much higher data rate without encountering an SDC. You need to run the SAS channel at about 1 TiB/s for a year before you might encounter an SDC (theoretically 0.3 per year). Using T10-DIF, the SAS channel's protection against SDC can be improved to the point where we are likely never to encounter an SDC in a year until we start pushing above the 100 TiB/s data rate range. Adding in T10-DIX is even better because we start to address the data integrity issues from the application to the HBA (T10-DIF fixes the data integrity from the HBA to the drive). But changes in POSIX are required to allow T10-DIX to happen.

But T10-DIF and T10-DIX cannot be used with the SATA channel, so we are stuck with a fairly high rate of SDC when using the SATA channel. This is fine for home systems that have a couple of SATA drives or so, but for the enterprise world or for systems that have a reasonable amount of capacity, SATA drives and the SATA channel are a bad combination (lots of drive rebuilds and lots of silent data corruption). File systems that do proper checksums, such as ZFS, can help with data integrity issues because they write the checksum with the data blocks, but they are not perfect. In the case of ZFS, to check for data corruption you have to read the data again. This really cuts into performance and increases CPU usage (remember that ZFS uses software RAID). 
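As a rough illustration of where per-year figures like these come from, the sketch below converts a sustained data rate and a per-bit undetected-error probability into an expected number of silent corruptions per year. The probabilities used are back-solved placeholders chosen only to land near the 1.4 and 0.3 events-per-year figures quoted above; they are not the article's published channel error rates.

```python
SECONDS_PER_YEAR = 365 * 24 * 3600
GiB = 2 ** 30
TiB = 2 ** 40

def expected_sdc_per_year(rate_bytes_per_s: float, p_undetected_per_bit: float) -> float:
    """Expected silent corruptions/year = bits moved per year * P(undetected error per bit)."""
    bits_per_year = rate_bytes_per_s * 8 * SECONDS_PER_YEAR
    return bits_per_year * p_undetected_per_bit

# Placeholder per-bit probabilities (assumptions), roughly four orders of
# magnitude apart, as the article states for the SATA vs. SAS channels.
P_SATA = 1e-17
P_SAS = 1e-21

print(expected_sdc_per_year(0.5 * GiB, P_SATA))  # ~1.4 events/year at 0.5 GiB/s
print(expected_sdc_per_year(1.0 * TiB, P_SAS))   # ~0.3 events/year at 1 TiB/s
```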
We don't know the ultimate impact on the SDC rate, but it can help. Unfortunately, I don't have any estimates of the improvement in the SDC rate when ZFS is used.

Increasingly, there are storage solutions that use a smaller caching tier in front of a larger capacity but slower tier. The classic example is using SSDs in front of spinning disks. The goal of this configuration is to effectively utilize much faster but typically costlier SSDs in front of slower but much larger capacity spinning drives. Conceptually, writes are first done to the SSDs and then migrated to the slower disks per some policy. Data that is to be read is also pulled into the SSDs as needed so that read speed is much faster than if it was read from the disks. But in this configuration the overall data integrity of the solution is limited by the weakest link, as previously discussed. If you are wondering about using PCI Express SSDs instead of SATA SSDs, you can do that, but unfortunately, I don't know the SDC rate for PCIe drives and I can't find anything that has been published. Moreover, I don't believe there is a way to dual-port these drives so that you can use them between two servers for data resiliency (in many cases if the cache goes down, the entire storage solution goes down).

If you have made it to the end of the article, congratulations; it is a little longer than I hoped, but I wanted to present some technical facts rather than hand waving and arguing. It's pretty obvious that for reasonably large storage solutions where data integrity is important, SATA is not the way to go. But that doesn't mean SATA is pointless. I use SATA drives in my home desktop very successfully, but I don't have a great deal of data and I don't push that much data through the SATA channel. Take the time to understand your data integrity needs and what kind of solution you need.
<urn:uuid:8d001a59-5d59-4146-b482-5986a8968df1>
CC-MAIN-2017-09
http://www.enterprisestorageforum.com/storage-technology/sas-vs.-sata-5.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173872.97/warc/CC-MAIN-20170219104613-00382-ip-10-171-10-108.ec2.internal.warc.gz
en
0.952807
1,956
3.15625
3
NASA's quantum lab could be giant leap for computing

- By Frank Konkel - Oct 28, 2013

Research at NASA's Quantum Artificial Intelligence Laboratory -- home to one of the few quantum computers in the world -- could help unlock the potential of quantum approaches to optimizing air traffic control, robotics, pattern recognition, and mission planning and scheduling.

The machine that has the interest of NASA and partners Google and the Universities Space Research Association (USRA) is the D-Wave 2, a 512-qubit quantum computer built by D-Wave Systems at NASA's Ames Research Center. In traditional computing, data is encoded in the binary bit – either as a 1 or a 0 – but a quantum computer's use of quantum mechanical phenomena allows a quantum bit, or "qubit," to represent a one, zero or both values simultaneously. There is no classical computing approach equivalent to the way the qubit allows information to be processed, so advanced quantum computers may one day quickly solve problems that would take even today's fastest supercomputers millennia to decode.

NASA will focus first on getting acclimated to the quantum computer, developing artificial intelligence and quantum-classical hybrid algorithms, quantum computer acceptance tests and problem decomposition and hardware embedding techniques.

"The quantum computer isn't designed to be a replacement for traditional computers and supercomputers. It's designed to be used in conjunction with them," said Vern Brownell, CEO of D-Wave Systems. The company sold its first D-Wave system to Lockheed Martin in 2011, garnering significant attention -- and some criticism from skeptics, who questioned whether D-Wave's quantum claims were for real. In many ways, the NASA-Google-USRA partnership over D-Wave 2 serves notice to the scientific community that it is legitimate.

One of NASA's initial applications with the quantum computer is exploring new computational techniques in the search for other Earth-like planets outside the solar system. NASA's now-inoperable Kepler telescope, for example, produced a slew of transit data that traditional computers try to make sense of with the use of heuristic algorithms that find approximate solutions when classic methods can't find exact solutions. NASA said this computational limitation implies that some planets – smaller ones that are close to Earth's size – go undiscovered, but suggests quantum approaches could find planets that were otherwise undetectable. Brownell said NASA scientists have already found planets by running some data from Kepler in the quantum computer. They were not new planets, he said, but the find "verified the machine works."

Interest in quantum computing is likely to increase in the scientific community as advancements are made with D-Wave 2, but the intelligence and defense communities have also taken note of supercomputing alternatives. The intelligence community's research arm, the Office of Intelligence Advanced Research Projects Activity, recently developed a program that aims to create superconducting supercomputers that use less energy and are far faster than today's supercomputers. Perhaps not coincidentally, D-Wave 2's Vesuvius processor operates at 20 millikelvin, or just above absolute zero.

Frank Konkel is a former staff writer for FCW.
<urn:uuid:629ed80a-c2ba-4253-8561-560c675e850f>
CC-MAIN-2017-09
https://fcw.com/articles/2013/10/28/nasa-quantum-lab-giant-leap.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170925.44/warc/CC-MAIN-20170219104610-00502-ip-10-171-10-108.ec2.internal.warc.gz
en
0.921383
646
3.421875
3
The intention to adopt cloud computing has increased rapidly in many organizations. Cloud computing offers many potential benefits to small and medium enterprises such as fast deployment, pay-for-use, lower costs, scalability, rapid provisioning, rapid elasticity, ubiquitous network access, greater resiliency, and on-demand security controls. Despite these extraordinary benefits of cloud computing, studies indicate that organizations are slow in adopting it due to the security issues and challenges associated with it. In other words, security is one of the major issues that reduce cloud computing adoption. Hence, cloud service providers should address privacy and security issues as an urgent priority and develop efficient and effective solutions.

Cloud computing utilizes three delivery models (SaaS, PaaS, and IaaS) to provide infrastructure resources, application platform and software as services to the consumer. These service models need different levels of security in the cloud environment. According to Takabi et al. (2010), cloud service providers and customers are responsible for security and privacy in cloud computing environments, but their level of responsibility will differ for different delivery models. Infrastructure as a Service (IaaS) serves as the foundation layer for the other delivery models, and a lack of security in this layer affects the other delivery models. In IaaS, although customers are responsible for protecting operating systems, applications, and content, the security of customer data is a significant responsibility for cloud providers. In Platform as a Service (PaaS), users are responsible for protecting the applications that developers build and run on the platforms, while providers are responsible for isolating users’ applications and workspaces from one another. In SaaS, cloud providers, particularly public cloud providers, have more responsibility than clients for enhancing the security of applications and achieving a successful data migration. In the SaaS model, data breaches, application vulnerabilities and availability are important issues that can lead to financial and legal liabilities.

Bhadauria and his colleagues (2011) conducted a study on cloud computing security and found that security should be provided at different levels, such as the network level, host level, application level, and data level.

Network Level Security: All data on the network needs to be secured. Strong network traffic encryption techniques such as Secure Socket Layer (SSL) and Transport Layer Security (TLS) can be used to prevent leakage of sensitive information. Several key security elements such as data security, data integrity, authentication and authorization, data confidentiality, web application security, virtualization vulnerability, availability, backup, and data breaches should be carefully considered to keep the cloud up and running continuously.

Application level security

Studies indicate that most websites are secured at the network level while there may be security loopholes at the application level which may allow information access to unauthorized users. Software and hardware resources can be used to provide security to applications. In this way, attackers will not be able to get control over these applications and change them. 
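Before turning to specific application-level threats, here is a minimal sketch of the transport-encryption step mentioned under network-level security. It uses Python's standard ssl module with its strict defaults (certificate validation and hostname checking); the host name is a placeholder, not a real service.

```python
import socket
import ssl

# create_default_context() turns on certificate validation and hostname
# checking, so the connection below is both encrypted and authenticated.
context = ssl.create_default_context()
host = "storage.example.com"   # placeholder cloud service endpoint

with socket.create_connection((host, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
        print("negotiated", tls_sock.version())                       # e.g. 'TLSv1.3'
        print("peer subject:", tls_sock.getpeercert().get("subject"))
```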
XSS attacks, Cookie Poisoning, Hidden field manipulation, SQL injection attacks, DoS attacks, and Google Hacking are some examples of threats to application level security which result from unauthorized usage of the applications.

The majority of cloud service providers store customers’ data in large data centres. Although cloud service providers say that data stored is secure and safe in the cloud, customers’ data may be damaged during transfer operations from or to the cloud storage provider. In fact, when multiple clients use cloud storage or when multiple devices are synchronized by one user, data corruption may happen. Cachin and his colleagues (2009) proposed a solution, Byzantine Protocols, to avoid data corruption. In cloud computing, arbitrary faults in software or hardware (usually related to inappropriate behavior and intrusions) are called Byzantine faults, and tolerating them is known as Byzantine fault tolerance (BFT). Scholars use BFT replication to store data on several cloud servers, so if one of the cloud providers is damaged, they are still able to retrieve data correctly. In addition, different encryption techniques, like public and private key encryption, can be used to control access to data and protect it.

Service availability is also an important issue in cloud services. Some cloud providers, such as Amazon, mention in their licensing agreements that it is possible that their service will not be available from time to time. Backups or the use of multiple providers can help companies to protect services from such failures and ensure data integrity in cloud storage.

By Mojgan Afshari
<urn:uuid:70a2c556-0533-4a15-a4bc-e5a8d3cf0c13>
CC-MAIN-2017-09
https://cloudtweaks.com/2014/07/computing-security-network-application-levels/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.84/warc/CC-MAIN-20170219104611-00202-ip-10-171-10-108.ec2.internal.warc.gz
en
0.933116
894
2.71875
3
Using ESP to Prevent Replay Attacks

The tighter your network's security is, the more difficult it is for a hacker to break in. However, hackers tend to be clever and have lots of methods of getting into a network. Prior to Windows 2000, hackers could use a method called a replay attack to break into even some of the most secure networks. Replay attacks are seldom used because of their complexity--often, less-complicated methods work just as well. The problem is that before Windows 2000, there were lots of ways to protect against the less sophisticated attacks, but few (if any) ways to protect against replay attacks.

In a replay attack, a hacker uses a protocol analyzer to monitor and copy packets as they flow across the network. Once the hacker has captured the necessary packets, he can filter them and extract the packets that contain things like digital signatures and various authentication codes. After these packets have been extracted, they can be put back on the network (or replayed), thus giving the hacker the desired access. Replay attacks have existed for a long time. Years ago, replay attacks were simply aimed at stealing passwords. However, given the encryption strength of passwords these days, it's often easier to steal digital signatures and keys.

Repelling Attacks with IPSec

Windows 2000 provides a way to protect against a replay attack: the IPSec subcomponent called Encapsulating Security Payload (ESP). IPSec is a security protocol designed to run on IP networks. IPSec runs at the network level and is responsible for establishing secure communications between PCs. The actual method of providing these secure communications depends on the individual network. However, the method often involves a key exchange.

ESP is the portion of IPSec that encrypts the data contained within the packet. This encryption is governed by an ESP parameter called the Security Parameters Index (SPI), which identifies the security association in use. In addition to the encryption, ESP can protect against replay attacks by using a monotonically increasing sequence number. When a packet is sent to a recipient, the recipient extracts the sequence number and records the sequence number in a table. Now, suppose a hacker captured and replayed a packet. The recipient would extract the sequence number and compare it against the table that it has been recording. But the packet's sequence number will already exist in the table, so the packet is assumed to be fraudulent and is therefore discarded.

Brien M. Posey is an MCSE who works as a freelance writer. His past experience includes working as the director of information systems for a national chain of health care facilities and as a network engineer for the Department of Defense. Because of the extremely high volume of e-mail that Brien receives, it's impossible for him to respond to every message, although he does read them all.
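A minimal sketch of the receiver-side bookkeeping described above: each arriving sequence number is checked against a record of what has already been accepted, and duplicates are discarded. Real ESP implementations use a fixed-size sliding window inside the IPSec stack rather than a Python set; the window size and data structure here are our simplification.

```python
class AntiReplayWindow:
    """Track ESP-style sequence numbers and reject packets already seen."""

    def __init__(self, window_size: int = 64):
        self.window_size = window_size
        self.highest = 0
        self.seen = set()

    def accept(self, seq: int) -> bool:
        if seq <= self.highest - self.window_size:
            return False            # too old to judge -> drop
        if seq in self.seen:
            return False            # replayed packet -> drop
        self.seen.add(seq)
        if seq > self.highest:
            self.highest = seq
            # Forget numbers that have fallen out of the window.
            self.seen = {s for s in self.seen if s > self.highest - self.window_size}
        return True

win = AntiReplayWindow()
assert win.accept(1) and win.accept(2)
assert not win.accept(2)   # a captured-and-replayed packet is discarded
```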
<urn:uuid:125da2d4-d077-405d-982b-ffb0a5b586d4>
CC-MAIN-2017-09
http://www.enterprisenetworkingplanet.com/print/netsecur/article.php/624871/Using-ESP-to-Prevent-Replay-Attacks.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172156.69/warc/CC-MAIN-20170219104612-00554-ip-10-171-10-108.ec2.internal.warc.gz
en
0.956634
570
2.703125
3
Super Bowl LI was a historic day for the sport of football. It was also a historic day for technology. Picture this: wireless and cellular data consumed by fans at Super Bowl games has doubled every year for the last five years. According to Mobile Sports Report, Super Bowl LI broke the single-day wireless data use mark, with at least 37.6 terabytes used. It took almost $100 million in investment from telecommunications providers, thousands of man-hours, and dozens of months to provision the connectivity for the four-hour “data feast!”

Such growth in data is not limited to sports arenas; it is happening in all walks of life. In cloud computing — the de-facto computing paradigm today — data must be transported to the cloud for computation to occur. With data growth outpacing bandwidth growth by a factor of two, this paradigm is not sustainable. These fundamental and long-lasting trends are giving rise to the new paradigm of edge computing, an exciting trend in the Internet of Things.

The powerful idea behind edge computing is to process the data right where it is generated. A stadium has potentially more raw capacity than some of the most powerful supercomputers in the world when it’s full of twenty thousand people carrying smartphones that sport a powerful CPU, an array of sensors, storage and multiple radios for communications – and can be connected to one another. Add tablets, drones, connected vehicles, and other smart devices into the mix, and you have the beginnings of the next paradigm of computing. However, for edge computing to come alive, the devices must be able to communicate without relying on internet or cellular connectivity.

A team of IBM Research scientists is creating new peer-to-peer mesh networking technology that allows any modern mobile device to communicate directly with another without needing any Wi-Fi or cellular connectivity. As a first step in piloting this technology, IBM and The Weather Company have announced Mesh Network Alerts for alerting billions of people with potentially life-saving weather information, even when they have no internet connectivity.

So how does this technology work? We asked Dr. Nirmit Desai, one of the IBM scientists behind this technology, to provide some details.

How do Mesh Network Alerts work? Does this work on any regular mobile phone?

Nirmit Desai: Yes, this works on most modern mobile phones, Android or iOS, without needing any special jailbreaking or hardware extensions. All modern mobile devices come with Bluetooth and Wi-Fi radios, and our software uses these radios in innovative ways to discover and connect with devices around you.

How close do these devices need to be?

ND: Interestingly, this varies a lot across device models, but typically a pair of devices within a few hundred feet of each other can communicate. However, messages can hop over multiple intermediate devices to reach devices much farther away from the original sender. Of course, unlike the current internet, the messages will take longer to arrive and the amount of data we can transport is still limited. This is why the weather alerts are a great first application for this.

What are some of the challenges you faced?

ND: Mesh networking is a widely understood concept in the scientific community. There are routing algorithms, performance analysis techniques, and some small-scale field tests to learn from. However, what is new here, and probably still the most challenging, is enabling this on millions of regular mobile devices without any special software or hardware. 
Battery is a scarce resource and device radios consume a lot of power. Deciding when and how often a device should use the radios for discovery has a major impact on battery life. We had to come up with computationally simple yet effective algorithms to bring down battery consumption to less than one percent per hour on most devices.

Another major challenge has been to ensure that a mesh network can be formed even when the devices are in our pockets. This is essential, because an alert may arrive at any time and, on receiving it, each device must forward the alert to neighboring devices to ensure that everyone gets the message in time. In this way, mesh networking exemplifies the power of communities. However, this is very difficult to achieve, as mobile operating systems such as iOS and Android are designed to support interactive apps where the users are actively engaged.

Can anyone send information on such networks?

ND: We can protect both the confidentiality and the integrity of the messages. In doing so, we are bringing together IBM’s security expertise and The Weather Company’s experience in delivering highly usable apps to millions of users. For the weather alerts scenario, it is essential that hackers cannot spread misinformation, e.g., fake alerts, but the content of the messages is not confidential. We employ state-of-the-art cryptography techniques to ensure that users are only shown alerts that verifiably come from The Weather Company.

Are there applications of this technology beyond weather alerts?

ND: Most certainly, we are just getting started! Networks become congested or unavailable in crowded areas, e.g., sports arenas, music concerts, theme parks, cruise ships, disaster zones. On the other hand, mesh networks thrive with such high density of devices. We may see many of the social-sharing apps leverage mesh networking to carry data. Gaming apps have already applied this among a restricted group of players.
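The multi-hop relaying Desai describes can be sketched as a controlled flood: each device re-broadcasts an alert it has not seen before, up to a hop limit, so a message placed on one phone can spread through the mesh without any infrastructure. This is an illustrative model only, not IBM's actual protocol; the hop limit and the use of a hash digest for duplicate suppression are our assumptions.

```python
import hashlib

class MeshNode:
    def __init__(self, name):
        self.name = name
        self.neighbors = []        # nodes currently within radio range
        self.seen = set()          # digests of alerts already forwarded
        self.inbox = []

    def receive(self, alert: str, hops_left: int):
        digest = hashlib.sha256(alert.encode()).hexdigest()
        if digest in self.seen:
            return                 # duplicate -> do not re-broadcast
        self.seen.add(digest)
        self.inbox.append(alert)
        if hops_left > 0:
            for peer in self.neighbors:
                peer.receive(alert, hops_left - 1)

# A small chain: only `a` hears the originator, yet `c` still gets the alert.
a, b, c = MeshNode("a"), MeshNode("b"), MeshNode("c")
a.neighbors, b.neighbors = [b], [a, c]
a.receive("Flash flood warning until 9 PM", hops_left=3)
assert "Flash flood warning until 9 PM" in c.inbox
```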
<urn:uuid:fccb0268-f6d8-4f15-943f-7c277a876816>
CC-MAIN-2017-09
https://www.ibm.com/blogs/research/2017/02/bringing-edge-computing-to-life/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172156.69/warc/CC-MAIN-20170219104612-00554-ip-10-171-10-108.ec2.internal.warc.gz
en
0.930113
1,078
2.921875
3
Rootkit malware has not been viewed as a formidable security threat for quite some time — the malware reached its peak levels in early 2011. Since then, rootkits have been on the decline. During the fourth quarter of 2013, McAfee Labs researchers found the rate of rootkits had fallen below the amount present in 2008. McAfee Labs has long credited 64-bit processors with preventing rootkit attacks by making the operating system kernel more difficult to attack. However, the first quarter of 2014 was hit with a spike in rootkit malware.

The stealthy nature of rootkit malware is what makes its resurgence dangerous. Once a rootkit gains access to a system, it is able to remain undetected while it steals information for an extended period. The longer it goes unnoticed, the greater are the chances for attackers to steal and destroy data on both corporate and individual scales. The main culprit in early 2014 was a single 32-bit family attack, which is a possible anomaly. Newer and smarter forms of this malware have learned how to circumvent the 64-bit systems, hijack digital certificates, exploit kernel vulnerabilities, digitally sign malware, and attack built-in security systems. McAfee Labs believes these methods will result in a resurgence of rootkit-based attacks.

The drastic decline in rootkit samples is depicted in the chart below. As Windows adopted the 64-bit platform, the microprocessor and OS design brought heightened security thanks to digital signature checking and kernel patch protection. Sample counts declined along with rootkit techniques used to gain kernel access. Efforts to access the kernel or install malicious device drivers were blocked with the increased protection of the 64-bit systems. The heightened security subsequently spiked the cost of building and deploying rootkits on the protected platforms.

From Roadblocks to Speed Bumps

Security measures and increased rootkit costs aside, attackers seem to have finally found ways to gain kernel-level access on 64-bit systems. The most recent malicious rootkit to penetrate the kernel, Uroburos, remained undetected for three years. By exploiting a known vulnerability in an old VirtualBox kernel driver, Uroburos was able to load its unsigned malware and override PatchGuard — a protection within 64-bit Windows meant to thwart attackers.

Stolen private keys also offer attackers access to 64-bit systems. Valid digital signatures also assist in circumventing security measures. McAfee Labs has seen a rise in all types of malicious binaries like these that carry digital signatures. The McAfee Labs team examined the past two years of data to find out how many 64-bit rootkits have used stolen digital certificates and discovered:

While 64-bit processors and 64-bit Windows have implemented new security measures to safeguard against rootkits, it’s important to realize that no security is completely bulletproof. A more comprehensive security system that integrates hardware, software, network, and endpoint protection is the best rootkit defense.
<urn:uuid:b2dea3d3-74e6-47c0-bf7c-44e7973f1479>
CC-MAIN-2017-09
https://www.mcafee.com/cf/security-awareness/articles/rise-of-rootkits.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174135.70/warc/CC-MAIN-20170219104614-00078-ip-10-171-10-108.ec2.internal.warc.gz
en
0.934075
598
2.890625
3
There are two classifications for wireless power: non-radiative (near-field) and radiative (far-field). Near-field is employed by products that can be charged over short distances, like an electric toothbrush, RFID tags, implanted medical devices and electric vehicles. You might even have a device with Near Field Communication (NFC) that lets your device communicate with other NFC-compatible devices; it's most commonly used as a quick way to transfer data.

Near-field, as its name suggests, requires the two objects to be close, and probably touching. You might have a wireless charger for your smartphone, but you need to leave it sitting on the base in order for it to work. You can't walk around the room and expect your device to charge in your pocket, or while you're browsing Twitter next to the charger.

Far-field, or radiative wireless power, uses electromagnetic radiation -- in the form of microwaves or laser beams -- directed at the device that needs to be powered. Problems arise with far-field wireless power because people generally need to stay out of the way of the electromagnetic fields, making it a less ideal solution for charging a device you carry in your pocket.

Wireless charging also requires both devices to have an antenna of sorts, so that the antenna (or coil, or laser emitter) in the base can transform the power into an electromagnetic field to send to a receiver, which then converts it back to electric power to charge the battery in your device.
<urn:uuid:dbf63718-f013-40ee-9b2d-f0dc4f41dd00>
CC-MAIN-2017-09
http://www.itnews.com/article/2952403/consumer-technology/inside-the-evolving-world-of-wireless-charging-technology.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170404.1/warc/CC-MAIN-20170219104610-00498-ip-10-171-10-108.ec2.internal.warc.gz
en
0.952538
314
3.34375
3
This method combines at least two signals at different wavelengths for transmission along one fiber-optic cable. CWDM uses fewer channels than DWDM, but more than standard WDM. In 2002, a 20-nm channel spacing grid was standardized by the ITU (ITU-T G.694.2), ranging from 1270 to 1610 nm. However, since wideband optical amplification is not available, this limits the optical spans to several tens of kilometers.

CWDM is a cost-effective option for increasing mobile backhaul bandwidth, but it does come with some network characterization and deployment challenges, including a limited maximum distance. However, a major advantage is that it can be easily overlaid on existing infrastructure. The most basic configuration is based on a single fiber pair, where one fiber is used to transmit and the other to receive. This configuration often delivers eight wavelengths from 1471 nm to 1611 nm. However, networks are now deploying in the O-band, which doubles the capacity to 16 wavelengths (1271 nm to 1451 nm), excluding the 1371 nm and 1391 nm water peak wavelengths.

CWDM architecture is composed only of passive components, namely multiplexers and demultiplexers; no amplifiers are used. This means there is no amplification, and therefore, no noise. The main advantage of this is that there is no need to measure the optical signal-to-noise ratio. Upon activation, barring improper fiber characterization, only the following elements can prevent proper transmission:

- Transmitter failure
- Sudden change in the loss created in an OADM
- Human error (e.g., connection to the wrong port or splicing to the wrong filter port)

Links can be tested end-to-end and fully characterized with a specialized CWDM OTDR. Thanks to its CWDM-tuned wavelengths, this tool will drop each test wavelength at the corresponding point on the network (e.g., customer premises, cell tower, etc.). This means that each part of the network can be characterized at the head-end, which will save time and avoid travelling to hard-to-get-to sites.

Once the wavelength is active, a channel analyzer must be used at the customer premises or cell tower to validate that it is present and that the power level received is within budget. This OTDR and channel analyzer combo is also useful when a single customer is experiencing issues. If the channel analyzer cannot confirm that the channel is present and within power budget, the CWDM OTDR can be used to test at either a specific or out-of-band wavelength to detect issues. The advantage of using an out-of-band wavelength (1650 nm) is that the OADM will filter it out.
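As a quick sanity check on the grid arithmetic, the short sketch below generates 20 nm-spaced channel centers over the 1271-1611 nm range cited above and drops the two water-peak wavelengths. Using 1271-1611 nm as the grid endpoints (rather than 1270-1610 nm) simply matches the deployment wavelengths quoted in the text; it is an illustrative choice, not a statement about the standard.

```python
# CWDM channel centers on a 20 nm grid, 1271-1611 nm.
WATER_PEAKS_NM = {1371, 1391}   # commonly skipped, as noted above

grid = list(range(1271, 1611 + 1, 20))
usable = [wl for wl in grid if wl not in WATER_PEAKS_NM]

print(len(grid), "channels on the full grid")          # 18
print(len(usable), "channels excluding water peaks")   # 16
print("lower band (1271-1451 nm):", [wl for wl in usable if wl <= 1451])
print("upper band (1471-1611 nm):", [wl for wl in usable if wl >= 1471])
```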
<urn:uuid:2bb95fd8-9d29-4831-a642-a19f69fae62e>
CC-MAIN-2017-09
http://exfo.com/glossary/coarse-wavelength-division-multiplexing-cwdm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170992.17/warc/CC-MAIN-20170219104610-00198-ip-10-171-10-108.ec2.internal.warc.gz
en
0.9361
581
3.390625
3
Brute-Force SSH Server Attacks Surge

If such an attack succeeds, the attacker may be able to view, copy, or delete important files on the accessed server or execute malicious code.

The number of brute-force SSH attacks is rising, the SANS Internet Storm Center warned on Monday. There "has been a significant amount of brute force scanning reported by some of our readers and on other mailing lists," said Internet Storm Center handler Scott Fendley in a blog post. "... From the most recent reports I have seen, the attackers have been using either 'low and slow' style attacks to avoid locking out accounts and/or being detected by IDS/IPS systems. Some attackers seem to be using botnets to do a distributed-style attack, which also is not likely to exceed thresholds common on the network."

Data gathered by DenyHosts.org, a site that tracks SSH hacking attempts, appears to confirm Fendley's claim. A graph of the site's data shows SSH hacking attempts rising sharply over the past weekend.

SSH stands for Secure Shell. It is a network protocol for creating a secure communications channel between two computers using public key cryptography. A brute-force SSH attack, a kind of dictionary attack, is simply a repeating, typically automated, attempt to guess SSH user names and/or passwords. If such an attack succeeds, the attacker may be able to view, copy, or delete important files on the accessed server or execute malicious code.

The SANS Institute last year said that brute-force password-guessing attacks against SSH, FTP, and Telnet servers were "the most common form of attack to compromise servers facing the Internet." A paper published earlier this year by Jim Owens and Jeanna Matthews of Clarkson University, "A Study of Passwords and Methods Used in Brute-Force SSH Attacks," found, based on an analysis of network traffic, that even "strong" passwords may not be enough to foil password-guessing attacks. ("Strong" passwords are typically a combination of letters and numbers, both upper and lower case, that don't form recognizable words.)

The paper focuses on the vulnerability of Linux systems to brute-force SSH attacks. "While it is true that computers running Linux are not subject to the many worms, viruses, and other malware that target Windows platforms, the Linux platform is known to be vulnerable to other forms of exploitation," the paper states. "A 2004 study conducted by the London-based security analysis and consulting firm mi2g found that Linux systems accounted for 65% of 'digital breaches' recorded during the 12-month period ending in October 2004."

The paper points to remarks by Dave Cullinane, CISO at eBay, and Alfred Huger, VP at Symantec Security Response, to the effect that Linux machines make up a large portion of the command and control networks of botnets. It also notes that "Linux systems face a unique threat of compromise from brute-force attacks against SSH servers that may be running without the knowledge of system owners/operators. Many Linux distributions install the SSH service by default, some without the benefit of an effective firewall." Thus, all it takes to compromise such systems is to guess the password, and attackers have machines trying to do just that at all hours of the day. To make matters worse, attackers are sharing dictionaries of username/password pairs that include a significant number of "strong" passwords.

Fendley recommends that IT administrators consider defenses advocated by Owens and Matthews in their paper. 
These include: using host-based security tools to block access to servers; disabling direct access to root accounts; avoiding easily guessed usernames, such as a person's first or last name; enforcing the use of strong passwords, public key authentication, or multifactor authentication, depending on the security posture of the organization in question; and limiting publicly accessible network services through iptables or other host-based security measures.
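A minimal sketch of the detection side of these defenses — tallying failed logins per source address from the SSH authentication log, in the spirit of DenyHosts — is shown below. The log path, message format, and threshold are assumptions that vary by distribution; this is an illustration, not a hardened tool.

```python
import re
from collections import Counter

# Hypothetical log path; many distributions use /var/log/secure instead.
AUTH_LOG = "/var/log/auth.log"
FAILED_RE = re.compile(r"Failed password for (?:invalid user )?\S+ from (\d+\.\d+\.\d+\.\d+)")
THRESHOLD = 10  # low value on purpose, to catch "low and slow" attackers

def suspicious_sources(path=AUTH_LOG, threshold=THRESHOLD):
    """Return source IPs with at least `threshold` failed SSH logins."""
    failures = Counter()
    with open(path) as log:
        for line in log:
            match = FAILED_RE.search(line)
            if match:
                failures[match.group(1)] += 1
    return {ip: count for ip, count in failures.items() if count >= threshold}

if __name__ == "__main__":
    for ip, count in sorted(suspicious_sources().items(), key=lambda item: -item[1]):
        print(f"{ip}\t{count} failed SSH logins")
```

Output like this could feed a host firewall rule or an alert, complementing the account and authentication controls listed above.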
<urn:uuid:84519a41-6f6f-4c53-a91d-51d449ae0094>
CC-MAIN-2017-09
http://www.darkreading.com/attacks-and-breaches/brute-force-ssh-server-attacks-surge/d/d-id/1067816
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171176.3/warc/CC-MAIN-20170219104611-00374-ip-10-171-10-108.ec2.internal.warc.gz
en
0.934087
804
2.625
3
What happens when you Google your name, and old, embarrassing information appears in the results? You could ask the site in question to take it down, but if they say no, there's not much you can do--unless you live in Europe. An EU court recently issued a ruling on the "right to be forgotten," which says that, by request, Google and other search engines should hide links to web pages with "inadequate, irrelevant or no longer relevant" information. You can read more about this in our FAQ. Following the controversial ruling, Google established an "advisory board" to advise them on how to implement the decision. Wikipedia founder Jimmy Wales is on this board. My Twitter discussion with Jimmy Wales is available via Storify, and he was kind enough to answer a few questions by email as well. These excerpts have been edited for length. TechHive: Search links can be removed, but not the originating source documents. How then is this censorship? Jimmy Wales: It absolutely is censorship and no one serious has any doubts about that at all. As a simple test, ask yourself whether this would be possible in the U.S. without a repeal or modification of the First Amendment--it would not. Google goes to an enormous amount of effort to write software to express editorial judgment about which links are displayed--that's freedom of expression. The links do not come up in random order. We may criticize the results, just as we may criticize the writings of a movie reviewer. What we may not do, if we are respecting freedom of expression, is use the force of law to suppress a link to content that is legally published and true. It is not appropriate for any government to attempt to force its judgment of what is "relevant" on anyone. And it is a censorship not just of Google but of the newspapers affected by it. Newspapers print what they think is relevant. In the case the ECJ [Court of Justice of the EU] considered, the newspaper declined to remove the information from their archive. The punishment they are suffering is that it's now very difficult to find the page. For anyone to claim this is not censorship of the newspaper is ludicrous. The primary means by which newspapers distribute their content online is via search engines. This ruling denies them access to those readers--that's censorship plain and simple. TH: Should an independent agency be tasked with making decisions on which links to remove, or should this be fully tasked to the respective search engines? JW: The editorial judgment about what links to legally published and truthful content to show in response to a query should be left with absolute discretion to the search engines. Search engines are not an exception to freedom of speech. Remember, we are not talking about libel here, and we are not talking about illegally posted copyrighted material and we are not talking about child pornography--these are not legal to publish in the first place. This decision does not touch on those issues at all. TH: Is there a possibility, thanks to this ruling, that wealthier people could unduly influence search results and poor people not? JW: Of course they can. They can now hire a lawyer to threaten Google with legal action in Europe. Before this, everyone has a similar playing field. Anyone can start a personal blog to rebut or respond to false claims. Things are now much better for the rich (and for con artists) and much worse for everyone else. TH: Should adults ever have the right to have a search engine remove links to stories that mention them? 
JW: Absolutely not, under no circumstances ever is it OK to use the force of law to suppress truthful speech. The very idea is disgusting and philosophically bankrupt.
<urn:uuid:b3243414-0614-4516-b760-8c0ecef1debc>
CC-MAIN-2017-09
http://www.cio.com/article/2375676/security-privacy/wikipedia-s-jimmy-wales---right-to-be-forgotten--is-censorship.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171932.64/warc/CC-MAIN-20170219104611-00074-ip-10-171-10-108.ec2.internal.warc.gz
en
0.961151
763
2.53125
3
Substantial efforts have been made under the Federal Highway Administration Roads and Weather Management Program. For instance, DSRC (Dedicated Short Range Communications) technology, which uses IEEE 802.11p in the 5.9 GHz band, is now being used in connection with various federal and state connected vehicle programs; these programs pinpoint opportunities to obtain data from vehicles acting as mobile device platforms. DSRC provides one- or two-way, short- to mid-range wireless channels that enable very high data rates. This is critical for communication-based active safety applications intended to prevent accidents. The DSRC wireless spectrum is specifically designed for automotive use. There are two types of Dedicated Short Range Communications: Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I). Both need a protected wireless interface with consistently short time delays, even in extreme weather conditions, all of which is enabled by DSRC. Potential Applications of DSRC on Traffic Management and Public Safety - Warnings on blind spots - Do not pass warnings - Traffic data collection - Parking payment - Rail intersection warning - Safety inspection - Electronic toll collection - Unexpected deceleration warning - Synchronization of adaptive cruise control - Rollover warning - Collision warning - Incoming vehicle warning - Clearing commercial vehicles - In-vehicle display of road signs and billboards Dedicated Short Range Communications (DSRC) was developed with the main goal of enabling technologies that support safety applications and communications between vehicle-based devices and infrastructure, in order to reduce the number of accidents. DSRC is currently the only short-range wireless alternative that delivers high reliability when required. Safety applications demand a high level of link consistency; DSRC provides it across the full range of vehicle speeds and delivers dependable performance even in extreme weather conditions. The main advantages of DSRC include improved flexibility and collision avoidance. Pedestrian and Bike Safety A car equipped with DSRC technology can detect a pedestrian carrying a DSRC-enabled smartphone. The pedestrian's position, direction, and speed are determined by a smartphone application, and the car can use DSRC to locate all surrounding vehicles. If the smartphone application detects an impending collision, the system alerts the pedestrian through repeated, high-volume beeping and a warning on the smartphone's screen. At the same time, the driver is alerted to the possible collision with a loud alarm and visual warnings on the vehicle's head-up display and navigation screen. The driver can also receive information on whether the pedestrian is texting, listening to music, or on a phone call. Approaching Emergency Vehicle Warning Vehicle-to-Vehicle DSRC allows information about an approaching emergency vehicle to be passed from one vehicle to another and forwarded through traffic. In this way, DSRC helps avoid collisions and save lives. The Cooperative Adaptive Cruise Control DSRC-based cooperative adaptive cruise control is more effective in situations where radar-based cruise control falls short, such as when radar signals are unreliable. 
When a car nears a sharp bend, the DSRC system sends a warning to the cruise control systems of approaching vehicles. Dedicated Short Range Communications (DSRC) is necessary for the safety of both drivers and pedestrians; however, its effectiveness is highly dependent on cooperative standards for interoperability.
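To make the V2V exchange concrete, below is a rough sketch of the kind of periodic position/speed/heading beacon such systems broadcast. It uses a plain UDP broadcast as a stand-in, since real DSRC runs over dedicated 5.9 GHz 802.11p radios; the port number, field names, and 10 Hz interval are illustrative assumptions, not any standard's actual message format.

```python
import json
import socket
import time

# Illustrative only: a UDP broadcast stands in for the periodic V2V safety beacon.
BROADCAST_ADDR = ("255.255.255.255", 47001)  # hypothetical port

def broadcast_safety_beacon(vehicle_id, get_state, interval=0.1):
    """Broadcast position/speed/heading roughly ten times a second."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    while True:
        lat, lon, speed_mps, heading_deg = get_state()  # supplied by the vehicle's GPS stack
        message = {
            "id": vehicle_id,
            "lat": lat,
            "lon": lon,
            "speed_mps": speed_mps,
            "heading_deg": heading_deg,
            "ts": time.time(),
        }
        sock.sendto(json.dumps(message).encode(), BROADCAST_ADDR)
        time.sleep(interval)
```

Receivers would listen for these beacons, compare trajectories with their own, and raise the collision or emergency-vehicle warnings described above.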
<urn:uuid:d06125ff-22de-43d8-976e-763a6c12ef28>
CC-MAIN-2017-09
https://www.fluidmesh.com/dsrc-dedicated-short-range-communications/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172447.23/warc/CC-MAIN-20170219104612-00250-ip-10-171-10-108.ec2.internal.warc.gz
en
0.923085
742
3.15625
3
…or How to Put the Internet in a Van… One of the foremost issues facing science and industry today is storing the ever-increasing amounts of data that are created globally. Google’s Eric Schmidt claims that every two days humanity generates as much information as it did from the dawn of civilization up until 2003. There may be a way to address this challenge thanks to the world’s oldest storage medium – DNA. Studies conducted by pioneering researchers George Church, a professor of genetics at Harvard Medical School, and Ewan Birney, associate director of the European Bioinformatics Institute (EBI), show DNA-based storage to be remarkably effective and efficient. Delivering the keynote talk for the EUDAT 2nd Conference in October of last year, Ewan Birney discussed the exciting activity that is occurring at the intersection of biology and big data science. In an interview with ISGTW, Birney shares additional details about his DNA encoding projects, working with source material like Shakespeare’s sonnets, an excerpt from Martin Luther King’s ‘I have a dream’ speech, a PDF of Watson and Crick’s famous paper describing the double helix structure of DNA, a picture of the EBI, and a piece of code that explains the encoding procedure. The beauty of DNA as a storage mechanism, according to Birney, is that it’s electricity-free, incredibly dense, and stable. DNA that’s over 700,000 years old has been recovered. “You’ve just got to keep it cold, dry and in the dark,” Birney told ISGTW. Birney goes on to explain that the technology to read and write DNA has existed since bacteria were first genetically engineered in 1973. A 2003 project, led by Pak Chung Wong from the Pacific Northwest National Laboratory, transferred encrypted text into DNA by converting each character into a base-4 sequence of numbers, each corresponding to one of the four DNA bases (Adenine, Cytosine, Thymine, and Guanine – also known by the abbreviations A, C, T, and G). Bacteria were considered to be an optimal host because they replicate quickly, generating multiple copies of the data in the process, and if a mutation occurs within an individual bacterium, the remaining bacteria will still retain the original information. Live DNA is not without problems, though. Fast replication rates threaten to compromise data over long periods of time. There is also a risk that the inserted DNA could interfere with the host bacteria’s normal cellular processes, destabilizing the bacterial genome. As Geoff Baldwin, a reader in biochemistry at Imperial College London, UK, explains, “This does not bode well for the use of bacteria as a mass data storage device.” Researchers proposed using ‘naked’ DNA instead since living cells are not necessary for DNA to remain intact. Unlike bacteria, naked DNA doesn’t require genetic manipulations to safely insert it into a host. Birney and his team encoded computer files totaling 739 kilobytes of unique data – including all 154 of Shakespeare’s sonnets – into naked DNA code, synthesized the DNA, sequenced it and reconstructed the files with over 99 percent accuracy. With the current high costs of reading and writing DNA, this technology is not yet suitable for mass storage. It is, however, already economically viable for very long term (1,000 years or more) applications, such as nuclear site location data, and other governmental, legal and scientific archives that need to be kept long-term but are infrequently accessed. 
Furthermore, the researchers note that current trends are reducing DNA synthesis costs at a pace that should make DNA-based storage cost-effective for long-term archiving (~50 year periods) within a decade. “DNA is remarkable,” observes Birney, “just one gram of DNA can store about a petabyte’s worth of data, and that’s with the redundancy required to ensure that it’s fully error tolerant. It’s estimated that you could put the whole internet into the size of a van! You can also copy trivially. The only problem at the moment is cost: it’s prohibitively expensive to write DNA. Nevertheless, this technology is expected to come down in price dramatically over the coming years. The only question is: how quickly will it come down in price?”
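The simple base-4 mapping described for the 2003 Pacific Northwest project can be sketched in a few lines: each byte is split into four 2-bit digits, and each digit is mapped onto one of the bases A, C, G, T. Note this only illustrates the basic idea — Birney's team used a more elaborate encoding with redundancy and error tolerance — and the base ordering here is an arbitrary assumption.

```python
# Minimal sketch of the base-4 idea: 2 bits per base, 4 bases per byte.
BASES = "ACGT"

def bytes_to_dna(data: bytes) -> str:
    """Encode bytes as a DNA string, most significant 2-bit digit first."""
    return "".join(BASES[(byte >> shift) & 0b11]
                   for byte in data
                   for shift in (6, 4, 2, 0))

def dna_to_bytes(seq: str) -> bytes:
    """Invert bytes_to_dna: read four bases at a time back into one byte."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return bytes(out)

# Round-trip check on a small sample of text.
assert dna_to_bytes(bytes_to_dna(b"To be, or not to be")) == b"To be, or not to be"
```

Even this toy mapping shows the density argument: every byte of data becomes just four bases, before any of the redundancy the real scheme adds for error tolerance.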
<urn:uuid:df2350dd-e64f-4d5b-bfaa-5a205bfb51f1>
CC-MAIN-2017-09
https://www.hpcwire.com/2014/01/16/old-world-technology-meets-new-world-problem/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174154.34/warc/CC-MAIN-20170219104614-00426-ip-10-171-10-108.ec2.internal.warc.gz
en
0.939402
920
3.359375
3
For many people, the term "hacking" means that a criminal has broken through a firewall to get access to a network. The firewall is one of the easiest security concepts for people to understand, and often is thought of as the guard at the gate who provides entry based on a list of authorized visitors or other criteria. It helps that the term "firewall" originated outside of IT as a literal physical wall that was meant to prevent a fire from spreading, so the word itself was already in the public vernacular before the Internet was popular. 'Firewall' is also one of the oldest internet security terms, having been formally introduced by academia in the 1980s. Because of the history and context of the term, it makes sense that people tend to think that the firewall is what gets "broken" in a hack. Modern firewalls are much more than a gate that allows traffic in and out based on simple rules. The latest firewalls provide several other functions, such as DHCP, secure VPNs, link balancing, and more. As business needs have evolved with the rise of branch offices, remote workers, and SaaS applications, the network firewall has evolved to keep pace and aggressively protect the network perimeter and provide the necessary services to enable the business it protects.
<urn:uuid:f3d1c68a-104b-4edb-ae35-d70e1c049e0c>
CC-MAIN-2017-09
https://blog.barracuda.com/category/security/barracuda-web-security-gateway/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171232.43/warc/CC-MAIN-20170219104611-00070-ip-10-171-10-108.ec2.internal.warc.gz
en
0.972559
270
3.453125
3
Steve Jobs And Tech SecurityApple's products continue to highlight what relatively secure operating system environments look like. The death of Steve Jobs is triggering not a little reflection over the impact that one man could have over the form and function of the technology we use every day. The Apple II and Apple Macintosh were systems on which many of today's technology professionals first cut their teeth. In more recent years, the iPod, iPhone, and iPad came to reshape or outright define our notions of what constituted an MP3 player, smartphone, or tablet. At least for the past 10 years, however, what's also been notable with Apple products is the degree to which their users don't have to worry about security. Not that Apple's operating systems are 100% secure; they're not. But with the large quantity of malware targeting Windows operating systems increasing this year by 21%, and the quantity of malware targeting Google's Android smartphone and tablet operating system increasing by 400%, Apple's products are notable for the exploits they're not experiencing. Why don't Apple products experience the same levels of malware as Windows or Android, and how much of that can be traced to Jobs' legacy? The leading explanation has long been that attacking Apple desktops and laptops offers insufficient benefits to merit the time and cost required to develop the necessary malware. According to Net Applications, Windows controls 86% of the PC operating system market, while Apple accounts for only 6%, and Linux, 1%. Why attack Apple, when there are still so many people still using Windows XP? Furthermore, the Windows security situation has created a vicious security circle. For example, one of the leading scams--free AV--doesn't even exploit Windows. Instead, it makes people think that their Windows systems have been compromised, getting them to pony up $49.95 for bogus antivirus software, or more for the equally bogus "premium edition." In other words, criminals are launching social engineering attacks predicated on the legacy of poor Windows security. On the Apple front, Jobs notably chose to base the Apple OS X operating system, introduced for desktops in 2001, on Unix, which arguably made it more secure than its Windows rival. But Apple OS X isn't invulnerable--far from it. In fact, at the Black Hat conference in Las Vegas this summer, security researchers demonstrated that Mac OS X was vulnerable to the advanced attacks plaguing such businesses as RSA. Still, few people--if any--appear to be launching such attacks against Apple users. Another Jobs decision that's had strong security upsides has been Apple's walled-garden approach to distributing applications for its iOS (iPhone, iPod Touch, iPad) devices. Namely, only applications from the Apple App Store can be installed on said devices (at least without jailbreaking them). But before developers can place their applications in the App Store, first they get vetted, and then each version of their application gets vetted, to ensure that it meets security standards, including doing what it says it does. The result has been a smartphone ecosystem in which users are at very low risk of being exploited via malicious applications. Meanwhile, for users of Apple's biggest competitor, Android, instead of seeing a walled garden, it's arguably more of a jungle, as they can install any application from any source. Google also doesn't screen applications before offering them for sale. 
As a result, Trojan applications sometimes even end up in the official Android Market, requiring Google to expunge them, and occasionally even use its "kill switch" to remove especially malicious ones from devices. Google makes the operating system, but doesn't control the Android smartphone ecosystem. Apple, of course, has taken the opposite approach, and it's notable that third parties are stepping forward to provide Android users with a more Apple-like application vetting and distribution system, such as the Amazon Appstore for Android. Of course, Apple and Google's respective approaches have their own pros, cons, and tradeoffs--not least of which is cost. But Apple iOS, for people willing to buy into its walled-garden approach, sees few exploits. Attackers seem to favor the anything-goes approach of Android, not least because it makes distributing malicious applications much easier. How much of that iOS versus Android--or OS X versus Windows--security equation has had to do with Apple's design, ecosystem, market share, or luck? Regardless of the answer, as people celebrate what was Jobs' rare genius at bringing highly usable and desirable products to market, don't forget the security aspect of that equation, and the positive contribution to form and function offered by not having to deal with the latest malware outbreak or targeted exploit. Security professionals often view compliance as a burden, but it doesn't have to be that way. In this report, we show the security team how to partner with the compliance pros. Download the report here. (Free registration required.)
<urn:uuid:3940b8ee-825a-4f18-bacf-e8d85fa450be>
CC-MAIN-2017-09
http://www.darkreading.com/risk-management/steve-jobs-and-tech-security/d/d-id/1100574?cid=sbx_bigdata_related_commentary_intrusion_prevention_big_data&itc=sbx_bigdata_related_commentary_intrusion_prevention_big_data
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170521.30/warc/CC-MAIN-20170219104610-00542-ip-10-171-10-108.ec2.internal.warc.gz
en
0.957309
997
2.6875
3
Ever wish you could get your hands on the software that NASA used to launch its Apollo lunar missions or to get robots working on Mars? Think this code could help you build software for your own project or company? If so, NASA has something for you. The space agency announced that on Thursday it will make a catalog of NASA-developed code available to the public. More than 1,000 coding projects will be released in 15 categories, including project management systems, design tools, data handling and image processing. Code also will be released for life support systems, and robotic and autonomous systems. "Software is an increasingly important element of the agency's intellectual asset portfolio, making up about a third of our reported inventions every year," said Jim Adams, NASA's deputy chief technologist, in a statement. "We are excited to be able to make that software widely available to the public with the release of our software catalog." The software code is being released because NASA developers know that it would have uses beyond the missions for which it was originally developed. What NASA calls technology transfer gives U.S. taxpayers more bang for their buck. "NASA is committed to the principles of open government," said Adams. "By making NASA resources more accessible and usable by the public, we are encouraging innovation and entrepreneurship. Our technology transfer program is an important part of bringing the benefit of space exploration back to Earth for the benefit of all people." The codes are being made available for free. Some are available for all U.S. citizens and others are restricted to other federal agencies. This article, Building your own rocket or robot? NASA has code for you, was originally published at Computerworld.com.
<urn:uuid:07a44dfb-e3f1-44fb-9f65-a6886733b7ca>
CC-MAIN-2017-09
http://www.computerworld.com/article/2489643/app-development/building-your-own-rocket-or-robot-nasa-has-code-for-you.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172783.57/warc/CC-MAIN-20170219104612-00294-ip-10-171-10-108.ec2.internal.warc.gz
en
0.947024
408
2.640625
3
Distributed Denial of Service (DDoS) attacks aren't infamous for their sophistication; however, their increasing adaptability warrants a fresh look at the evolving anatomy of these dynamic threats. What is a DDoS attack? A DDoS attack is an attempt to consume one or more finite resources on a target computer or network of computers. These attack vectors block genuine users from accessing the network, application, or service and exploit detectable vulnerabilities. What are the types of DDoS attacks? Although there is a broad spectrum of types of DDoS attacks, they can typically be categorized into one of the following: - Volumetric The intent of these attacks is to cause congestion by consuming the bandwidth either within or between the target network/service and the rest of the Internet. - TCP state-exhaustion These attacks target web servers, firewalls, and load balancers in an attempt to disrupt their connections, consuming the finite number of concurrent connections the device can support. - Application layer Also known as Layer-7 attacks, these threats target vulnerabilities in an application or server with the intent of establishing a connection and exhausting it by monopolizing processes and transactions. They are difficult to detect because it takes very few machines to carry out the attack, generating deceptively low traffic rates, making them the most serious type of DDoS attacks. The sophisticated attackers of today blend all three DDoS attack types, creating a formidable threat that is even more challenging for businesses to combat. How could DDoS attacks impact my business? The modern business landscape all but requires websites and applications with uninterrupted performance. DDoS attacks pose a serious threat to maintaining business continuity in today's web-based world. From "Mafiaboy's" notorious "Project Rivolta" that brought down the websites of Amazon, CNN, Dell, E*Trade, eBay, and Yahoo! in 2000, to the recent server attack on game developer and publisher Blizzard, the storied history of DDoS attacks speaks for itself. DDoS: Next Gen Defense When it comes to addressing DDoS threats, prevention is key (see Cybersecurity: 5-Step Plan to Address Threats & Prevent Liability for detailed tips). Here are the most beneficial DDoS prevention tools: - Training staff to recognize the signs of an attack is essential, vendors advise. They should know what DDoS patterns look like, as well as how to respond if they're alerted to the website or application being down. - Regularly update and proactively patch servers and other network elements to mitigate potential threats. - Choose a provider with 24/7 DDoS prevention at the network connection layer – this can be detection and human mitigation, network defense systems (advanced, enterprise-level threat protection solutions), or often both. - A website firewall (or web application firewall) secures connections and maintains data integrity. - Content Delivery Network (CDN)-based DDoS protection adds another layer of critical defense for websites at the point of contact. Stay tuned for upcoming DDoS and security features on the Codero Blog, and subscribe to the Codero blog for other helpful tips from our team of hosting experts.
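As a rough illustration of the detection side, the sketch below flags sources whose request rate exceeds a threshold within a sliding window — the kind of check that surfaces volumetric floods. The window and threshold values are assumptions, and, as noted above, low-rate Layer-7 attacks would slip past a check like this, which is why provider-level scrubbing, WAFs, and CDNs remain the primary defenses.

```python
import time
from collections import defaultdict, deque

class RateMonitor:
    """Toy per-source rate check; real mitigation happens upstream of the server."""

    def __init__(self, max_requests=100, window_seconds=10):
        self.max_requests = max_requests
        self.window = window_seconds
        self.history = defaultdict(deque)  # source IP -> recent request timestamps

    def record(self, source_ip, now=None):
        """Record one request; return True if the source exceeds the allowed rate."""
        now = time.time() if now is None else now
        timestamps = self.history[source_ip]
        timestamps.append(now)
        # Drop timestamps that have fallen out of the sliding window.
        while timestamps and now - timestamps[0] > self.window:
            timestamps.popleft()
        return len(timestamps) > self.max_requests

# Usage: call monitor.record(ip) on each request; a True result is a candidate for alerting or blocking.
```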
<urn:uuid:5968f34d-9e73-442e-988a-4cc5c75d0561>
CC-MAIN-2017-09
http://www.codero.com/blog/adaptive-ddos-attacks-demand-next-gen-defense/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169769.33/warc/CC-MAIN-20170219104609-00063-ip-10-171-10-108.ec2.internal.warc.gz
en
0.904761
665
3.0625
3
OpenSSH is a common tool for most of network and system administrators. It is used daily to open remote sessions on hosts to perform administrative tasks. But, it is also used to automate tasks between trusted hosts. Based on public/private key pairs, hosts can exchange data or execute commands via a safe (encrypted) pipe. When you ssh to a remote server, your ssh client records the hostname, IP address and public key of the remote server in a flat file called “known_hosts“. The next time you start a ssh session, the ssh client compares the server information with the one saved in the “known_hosts” file. If they differ, an error message is displayed. The primary goal of this mechanism is to block MITM (“Man-In-The-Middle“) attacks. But, this file (stored by default in your “$HOME/.ssh” directory) introduces security risks. If an attacker has access to your home directory, he will have access to the file which may contains hundreds of hosts on which you also have an access. Did you ever eared about “Island Hopping” attack? Wikipedia defines this attack as following: “In computer security, for example in intrusion detection and penetration testing, island hopping is the act of entering a secured system through a weak link and then “hopping” around on the computer nodes within the internal systems. In this field, island hopping is also known as pivoting.“ A potential worm could take advantage of the information stored in the file to spread across multiple hosts. OpenSSH introduced a countermeasure against this attack since the version 4.0. The ssh client is able to store the host information in a hash format. The old format was: host.rootshell.be,10.0.0.2 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA0ei6KvTUHnmCjdsEwpCCaOHZWvjS \ jytm/5/Vv1Dc6ToaxTnqJ7ocBb7NI/HUQEc23eUYjFrZQDS0JRml3RnsG0UzvtIfAPDP1x7h6HHy4ixjAP7slXgqj3c \ fOV5ThNjYI0mEbIh1ezGWovwoy0IxRK9Lq29CacqQH8407b1jEj/zfOzUi3FgRlsKZTsc3UIoWSY0KPSSPlcSTInviG \ oNi+9gC8eqXHURsvOWyQMH5K5isvc/Wp1DiMxXSQ+uchBl6AoqSj6FTkRAQ9oAe8p1GekxuLh2PJ+dMDIuhGeZ60fIh \ eq15kzZGsDWkNF6hc/HmkJDSPn3bRmo3xmFP02sNw== With the version 4.0, hosts are stored in this new format: |1|U8gOHG/S5rH9uRH3cXgdUNF13F4=|cNimv6148Swl6QcwqBOjgRnHnKs= ssh-rsa AAAAB3NzaC1yc2EAAAABIw \ AAAQEAvAtd04lhxzzqW57464mhkubDixZpy+qxvXBVodNmbM8culkfYtmq0Ynd+1G1s3hcBSEa8XHhNdcxTx51MbIjO \ dCbFyx6rbvTIU/5T2z0/TMjeQyL3SZttbYWM2U0agKp/86FdaQF6V87loNcDq/26JLBSaZgViZS4gKZbflZCdD6aB2s \ 2sqEV4k7zU2OMHPy7W6ghNQzEu+Ep/44w4RCdI5OYFfids9B0JSUefR9eiumjRwyI0dCPyq9jrQZy47AI7oiQJqSjvu \ eMIwZrrlmECYSvOru0MiyeKwsm7m8dyzAE+f2CkdUh6tQleLRLnEMH+25EAB56AhkpWSuMPJX1w== As you can see, the hostname is not readable anymore. To achieve this result, a new configuration directive has been added in version 4.0 and above: “HashKnownHosts [Yes|No]“. Note that this feature is not enabled by default. Some Linux (or other UNIX flavors) enable it by default. Check your configuration. If you switch the hashing feature on, do not forget to hash your existing known_hosts file: $ ssh-keygen -H -f $HOME/.ssh/known_hosts Hashing ssh keys is definitively the right way to go but introduce problems. First, the good guys cannot easily manage their SSH hosts! How to perform a cleanup? (My “known_hosts” file has 239 entries!). In case of security incident management or forensics investigations, it can be useful to know the list of hosts where the user connected. It’s also an issue for pentesters. 
If you have access to a file containing hashed SSH hosts, it can be interesting to discover the hostnames or IP addresses and use the server to “jump” to another target. Remember: people are weak and re-use the same passwords on multiple servers. By looking into the OpenSSH client source code (more precisely in “hostfile.c“), I found how are hashed the hostnames. Here is an example: “|1|” is the HASH_MAGIC. The first part between the separators “|” is the salt encoded in Base64. When a new host is added, the salt is generated randomly. The second one is the hostname HMAC (“Hash-based Message Authentication Code“) generated via SHA1 using the decoded salt and then encoded in Base64. Once the hashing performed, it’s not possible to decode it. Like UNIX passwords, the only way to find back a hostname is to apply the same hash function and compare the results. I wrote a Perl script to bruteforce the “known_hosts” file. It generates hostnames or IP addresses, hash them and compare the results with the information stored in the SSH file. The script syntax is: $ ./known_hosts_bruteforcer.pl -h Usage: known_hosts_bruteforcer.pl [options] -d <domain> Specify a domain name to append to hostnames (default: none) -f <file> Specify the known_hosts file to bruteforce (default: /.ssh/known_hosts) -i Bruteforce IP addresses (default: hostnames) -l <integer> Specify the hostname maximum length (default: 8 ) -s <string> Specify an initial IP address or password (default: none) -v Verbose output -h Print this help, then exit Without arguments, the script will bruteforce your $HOME/.ssh/known_hosts by generating hostnames with a maximum length of 8 characters. If a match is found, the hostname is displayed with the corresponding line in the file. If your hosts are FQDN, a domain can be specify using the flag “-d“. It will be automatically appended to all generated hostnames. By using the “-i” flag, the script generates IP addresses instead of hostnames. To spread the log across multiple computers or if you know the first letters of the used hostnames or the first bytes of the IP addresses, you can specify an initial value with the “-s” flag. Examples: If your server names are based on the template “srvxxx” and belongs to the rootshell.be domain, use the following syntax: $ ./known_hosts_bruteforcer.pl -d rootshell.be -s srv000 If your DMZ uses IP addresses in the range 192.168.0.0/24, use the following syntax: $ ./known_hosts_bruteforcer.pl -i -s 192.168.0.0 When hosts are found, there are displayed as below: $ ./known_hosts_bruteforcer.pl -i -s 10.255.0.0 *** Found host: 10.255.1.17 (line 31) *** *** Found host: 10.255.1.74 (line 165) *** *** Found host: 10.255.1.75 (line 69) *** *** Found host: 10.255.1.76 (line 28) *** *** Found host: 10.255.1.78 (line 56) *** *** Found host: 10.255.1.91 (line 51) *** ^C My first idea was to bruteforce using a dictionary. Unfortunately, hostnames are sometimes based on templates like “svr000” or “dmzsrv-000” which make the dictionary unreliable. And about the performance? I’m not a developer and my code could for sure be optimized. The performance is directly related to the size of your “known_hosts” file. Be patient! The script is available here. Comments are always welcome. Usual disclaimer: this code is provided “as is” without any warranty or support. It is provided for educational or personal use only. I’ll not be held responsible for any illegal activity performed with this code.
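The hashing scheme described above is straightforward to reproduce: decode the Base64 salt from the "|1|salt|digest" field, compute an HMAC-SHA1 of the candidate hostname keyed with that salt, and compare the result against the stored Base64 digest. A minimal Python sketch of that per-candidate check — the core of what the Perl script does in its inner loop — might look like this:

```python
import base64
import hashlib
import hmac

def matches_hashed_entry(candidate: str, hashed_host_field: str) -> bool:
    """Test one candidate hostname/IP against a hashed known_hosts host field."""
    try:
        magic, salt_b64, digest_b64 = hashed_host_field.split("|")[1:4]
    except ValueError:
        return False          # not a hashed entry
    if magic != "1":          # "|1|" is the HASH_MAGIC described above
        return False
    salt = base64.b64decode(salt_b64)
    digest = hmac.new(salt, candidate.encode(), hashlib.sha1).digest()
    return hmac.compare_digest(digest, base64.b64decode(digest_b64))

# Brute-forcing is then just generating candidates and testing each one, e.g.:
# if matches_hashed_entry("10.255.1.17", line.split()[0]): print("found")
```

Because each line carries its own random salt, every candidate has to be re-hashed for every entry, which is exactly why the brute-force approach is slow on large known_hosts files.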
<urn:uuid:dd194ce7-480c-4160-88cb-048d9e2c4c97>
CC-MAIN-2017-09
https://blog.rootshell.be/2010/11/03/bruteforcing-ssh-known_hosts-files/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171670.52/warc/CC-MAIN-20170219104611-00291-ip-10-171-10-108.ec2.internal.warc.gz
en
0.790821
2,208
3.421875
3
With energy costs higher in recent years, many are looking at their IT systems' power usage and asking if there is any way to cut it down and reduce costs. With regard to servers and data centers, Andreas Antonopoulos has addressed this issue in his New Data Center Strategies newsletter. But the IEEE is also taking a look at what can be done on the networking side. Earlier this month there was a "call for interest" on energy-efficient Ethernet within the standards body. The slides presented at the call for interest raise good points. They say the total energy consumption of network equipment is 13 terawatt hours per year, and that's not including servers, PCs and other devices that are on a network. The presentations note that desktop-to-switch links are mostly idle, with bursts that are seconds or even hours apart. LAN link utilization, they say, is generally between 1% and 5%. Reducing link rates can save energy, but it would be good if there were a way to advertise the desire to change up to a higher speed when needed and change speed quickly, according to a presentation by Hugh Barrass of Cisco. In other words, use a high-data-rate PHY when utilization is high, and a low-data-rate PHY when utilization is low, as Broadcom's Howard Frazier points out. Again, this is in the very early stages, and it will be interesting what comes out of it. But if the industry can help reduce energy consumption by recognizing that not every network link needs Gigabit Ethernet (or some other high data rate) all the time, that's all to the good.
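As a toy sketch of the rate-selection policy the presentations describe — drop to a low-power PHY rate when the link sits mostly idle, and switch back up quickly when bursts arrive — consider the following. The rates, thresholds, and hysteresis values are illustrative assumptions, and real rate negotiation would happen in the PHY/MAC rather than in software like this.

```python
# Illustrative policy only; values are assumptions, not anything from the IEEE work.
RATES_MBPS = [100, 1000]  # low-power rate vs. full rate

def choose_rate(current_rate, utilization, up_threshold=0.30, down_threshold=0.05):
    """Return the PHY rate to use given recent link utilization (0.0-1.0).

    Separate up/down thresholds provide hysteresis so the link does not flap
    between rates when utilization hovers near a single cut-off.
    """
    if current_rate == RATES_MBPS[0] and utilization > up_threshold:
        return RATES_MBPS[1]   # traffic burst arriving: switch up quickly
    if current_rate == RATES_MBPS[1] and utilization < down_threshold:
        return RATES_MBPS[0]   # link mostly idle: save energy at the lower rate
    return current_rate
```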
<urn:uuid:051bd239-d2ed-4587-a209-3c063ac80a4b>
CC-MAIN-2017-09
http://www.networkworld.com/article/2301547/lan-wan/energy-efficient-ethernet-.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169776.21/warc/CC-MAIN-20170219104609-00411-ip-10-171-10-108.ec2.internal.warc.gz
en
0.964224
335
2.78125
3
As much as there is hype about the Internet of Things (IoT) and protecting it, there is no such thing as "IoT Security" per se. There is just the usual security engineering that is applied to IoT. Security engineering is about determining assets, threats to assets, and cost-effective means of mitigation. There are many models and ways for carrying out such analysis, but for the most part they all boil down to those key elements. Such security analysis applies to networks, it applies to servers, it applies to cars, and it also applies to IoT. That said, security engineering in IoT does pose a few unique challenges, which I would like to discuss now. There are many different types of IoT devices with different properties. IoT is a tiny sensor running an 8-bit processor with a battery for ten years, and IoT is a multi-core control hub or a smart television. Those devices are different from one another in almost everything: their hardware, their operating system, their type of software, their level of assurance, and their ability to support different security protocols. The analysis of IoT security for a heterogeneous network consisting of a dozen types of components coming from different vendors, and yet that must rely on each other, is very complex. Since IoT security is a relatively new domain, there are fewer standards; and more importantly -- fewer industry practices for securing IoT networks. When an engineer wants to deploy a new web application, he has many checklists and do's and dont's he could follow. Same for the designer of a new mobile application. However, when an IoT vendor launches on the design of a new IoT device, wishing to make it secure, he is largely on his own. Securing unfamiliar platforms where no standard practices are yet available requires more know-how and leaves more room for errors by each vendor designing and implementing his own wheel. IoT is all about connectivity. If it's not for increased connectivity, we have already got IoT for decades. More connectivity, in security engineering, implies a wider attack surface, as it implies more ports for the attacker to possibly hook into the network through. One of the asymmetries of security engineering is that the attacker gets to choose his preferred method and point of entry into the system, while the defender has to protect all entry points against all methods. More connectivity implies more entry points that require attention. Security mechanisms shall be deployed under the assumption that they need to last for a decade, and renewability mechanisms need to be deployed under the assumption that the security mechanisms that were designed for a decade actually break in a week. This is a theory that is easy to understand but is very difficult to implement in practice. Some IoT devices are actually designed to live for many years, and yet some IoT devices are limited in their ability to be patched, or need to be certified and thus patching can only be done in long complex cycles. The sense of security we feel as humans is a function of more parameters than just how secure we really are. The human brain reacts to risks in ways that involve a lot of pre-historic programming, causing some well known biases. For example: we tend to promote risks caused by malicious people over risks caused by nature, and we have stronger reaction to rare risks than to everyday risks. 
We also relate more to risks that involve tangible physical consequences than to those that are completely digital, even if the monetary damage they cause is similar. Compromise of an IoT system often has physical artifacts. As such, we perceive it as a bigger deal than credit cards being stolen from some database somewhere. It follows that the security engineer has less room for error. We do not mind using operating systems that get us infected with malware once or twice a year, but a few publicized cases of burglary through a faulty lock system could be enough to bring the lock vendor out of business. Since IoT is just in its infancy in terms of deployment, we still are not sure, as a society, what we should really be afraid of and what not. In more mature industries we more or less know what risks we need to focus on and what risks "just never happen". We know of all types of adversaries, what capabilities they have and how much they are willing to invest in getting to any of the assets of our system. We do not have this knowledge in IoT. Every new domain that emerges spurs a new parallel domain of adversaries attempting to exploit the system for gain. For IoT, a domain that is both new and heterogeneous, we do not yet have a firm grasp of the entire wealth of adversaries and their motives. Some of those adversaries do not even know they are adversaries as of yet. What follows is that the security engineer can afford fewer shortcuts. When we protect digital video content against copyright infringement, we know that the analog signal gives degraded output that the adversary barely cares about, and that low-resolution content can be protected by cheaper means because the value for the attacker is significantly lower. In IoT we are still at the point of having to protect against an abstract adversary.
<urn:uuid:95d2c984-262d-4fe5-af03-10c3b6add097>
CC-MAIN-2017-09
https://www.hbarel.com/analysis/eng/top-challenges-of-securing-iot
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171416.74/warc/CC-MAIN-20170219104611-00463-ip-10-171-10-108.ec2.internal.warc.gz
en
0.972702
1,033
2.703125
3
Everything has security problems, even Linux. An old and obscure problem with the gcc compiler was recently discovered to have left a security hole in essentially every version of Linux that anyone is likely to be running. Here's what you need to know about fixing it. The problem itself was discovered by Brad Spengler, the hacker behind the open-source network and server security program, grsecurity. What he found was that in some network code, there was a procedure that included a variable that could be set to NULL (no value at all). Now, this didn't appear to be a problem because the programmer also included a test which would return an error-message if the variable turned out to have a NULL value. So far, so good. Unfortunately, the gcc code optimizer on finding that a variable has been assigned a NULL value removed the test! This left a hole, that didn't exist in the original program. Using this hole, and code provided by Spengler, any cracker with sufficient access to a Linux computer could get into the computer's memory and, from there, get into all kinds of mischief. For more on the down and dirty technical details, turn to Jonathan Corbet's story, "Fun with NULL Pointers." That was bad. But, then Google Security Team members Tavis Ormandy and Julien Tiennes discovered that this kind of problem existed in numerous, network protocol programs. To be exact, the problem exists in implementations of AppleTalk, IPX, IRDA, X.25, AX.25, Bluetooth, IUCV, INET6 (aka IPV6), PPP over X and ISDN. Except for IPV6 (Internet Protocol Version 6) and Bluetooth, many of you may never even have heard of most of those protocols, never mind used them. That said, if the code for those protocols is in your Linux kernel, your Linux is vulnerable. Most of you, whether you know it or now, have one or more of those protocols active on your system, so this is not a small problem. Fortunately, there are fixes. Instead of trying to clean up the protocol implementations one by one--I mean seriously does anyone actually use IUCV (Inter-User Communications Vehicle), an old IBM VM networking protocol?--Linus Torvalds elected to force all these protocols to use kernel_sendpage(), which does the right thing with code having this problem. As Torvalds wrote on the LKML (Linux Kernel Mailing List), "Now, arguably this might be something that we could instead solve by just specifying that all protocols should do it themselves at the protocol level, but we really only care about the common protocols. Does anybody really care about sendpage on something like Appletalk? Not likely." So, the latest versions of the Linux kernel, 126.96.36.199 and 188.8.131.52, and for those using old Linux versions, Linux kernel 184.108.40.206, include this universal fix. Of course, you have to get that fix into what you're actually running. Most of the Linux vendors are rapidly pushing out the patch. Ubuntu has released it for its entire family from Ubuntu 6.06 to Ubuntu 9.04. This will also eventually cover Ubuntu downstream distributions like Mint. Tiennes has also written a grsecurity package to protect Debian and Ubuntu users . For Red Hat Linux users, there is a work-around, but you should know that there are reports that it doesn't cover all the bases. CentOS, which is based on Red Hat Linux, is recommending a similar fix, but it probably has similar problems. There is, however, a fix that's now available for Fedora 11. Novell's SLE (SUSE Linux Enterprise) and openSUSE just released patches today for all currently supported versions. 
If I haven't listed your distribution, check with your vendor or community. Within a few days, at most, you should have a fixed, and secure, Linux system.
<urn:uuid:121fe4ed-092a-4f69-af66-7896e03747ed>
CC-MAIN-2017-09
http://www.computerworld.com/article/2467470/open-source-tools/fixing-linux.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173405.40/warc/CC-MAIN-20170219104613-00339-ip-10-171-10-108.ec2.internal.warc.gz
en
0.950521
829
2.875
3
Predicted IoT Economic Impact The Internet of Things (IoT) is already a multi-billion dollar industry with global reach, and it’s expected that over the next five years businesses will spend close to $6 trillion in the IoT arena. With the technology driven by data, enabling organizations to analyze and optimize operations swiftly and efficiently, IoT tech is recognized as a vital part of digital transformation changing not only how data is collected and stored but revolutionizing information analysis that allows for better decision-making and strategy modification. For those with access to cutting-edge IoT solutions providing insights around risk management, customer behavior prediction, accurate market forecasting, and more, it seems the sky’s the limit. However, IoT is still a new technology with its own set of risks and challenges to manage. Security and Privacy Fears Top of mind when IoT risks and threats are considered is security and privacy. In both our business and personal lives we’re inundated with IoT tech, whether we realize it or not, and these devices collect and distribute information about us incessantly. The aim, of course, is to benefit us, but without the necessary security in place and defenses against interference, the effects could be disastrous. Most recently, the distributed denial-of-service (DDoS) attack on Dyn manipulated a known DNS vulnerability with a flood of hijacked webcams and video recorders sending countless information requests and flooding Dyn with incoming connections. We’ve already heard reports of remote automotive hackings, with further vulnerabilities exposed all too often, and more and more we’re discovering that devices such as drones, baby monitors, smart appliances, and even medical devices are vulnerable to attack. Often the only real ‘security’ provided with IoT devices is the lack of hack motivation; typically cyber-attacks follow the money and breaking into many of today’s IoT devices provides no such monetary compensation. This, however, is likely to change as IoT advances and further integrates into our lives. An additional concern of IoT relates to the environment and our ever-increasing energy needs. With innumerable wearables, sensors and other IoT devices promising improved quality of life, few contemplate any environmental effects of such networks. Experts, however, are paying more attention to the energy drain and resource usage of IoT. Says Kerry Hinton, former director of the Centre for Energy Efficient Telecommunications at the University of Melbourne, “The internet of things will be the biggest, most sophisticated piece of equipment that we’ve deployed across the planet – ever. That means that we’ve got to think about the potential limitations on it due to power consumption, the use of rare earth elements – all of that – from day one.” Currently, many IoT devices are low-power and low-data transmitting devices with long-lasting batteries, some even using sunlight, heat or vibrations to power themselves. This trend, however, may not continue as IoT develops and high-functioning devices demand access to mains power for optimal performance. IoT and the Developing World Though IoT can be seen globally, as with any tech it is less obvious in the developing world. 
We have seen a few examples of IoT and wearable devices making a real difference in this setting, from water pump monitoring, to environmental sensors providing necessary data to smallholder farmers, to disease and disaster tracking, but by and large IoT affects the advantaged developed world and skips over poorer in-need countries. Hopefully, McKinsey’s predicted IoT economic impacts of between $3.9 trillion and $11.1 trillion per year by 2025 will include solutions not limited only to the world’s more affluent nations. For IoT enthusiasts, the challenges above aren’t anything to shy away from, but instead provide new avenues for investigation and development; we’ve watched the industry boom in recent years, but it’s unlikely we can even imagine what’s still to come. By Jennifer Klostermann
<urn:uuid:cc0e85cd-5039-4624-b7f8-3a08a2e3ec82>
CC-MAIN-2017-09
https://cloudtweaks.com/2016/12/predicted-iot-economic-impact/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172018.95/warc/CC-MAIN-20170219104612-00511-ip-10-171-10-108.ec2.internal.warc.gz
en
0.939912
817
2.640625
3
This summer Iowa State University took delivery of its most powerful supercomputer yet. “Cyence,” at a cost of $2.6 million, is capable of making 183.043 trillion calculations per second and has a total memory of 38.4 trillion bytes. To put that in perspective, one second of calculations by Cyence would take a single human five to six million years to complete, while the entire population of earth could perform the same calculation in 12 hours. |“Cyence” installed at Iowa State University. Photo by Bob Elbert.| Cyence has a rather unique configuration. The 4,768 core QDR InfiniBand cluster is comprised of 16-core SMPs and accelerated by GPUs and Phis. The bulk of the system employs 248 SuperMicro servers each with 16 cores, 128 GB of memory, Gigabit Ethernet and QDR (40Gb) InfiniBand interconnects. Two additional sets of 24 nodes are similarly outfitted, with the notable addition of NVIDIA K20 Kepler GPUs in one instance and Intel Phi Accelerator cards in the second. A large memory node contains 32 cores and 1 TB of main memory. The system runs Red Hat Enterprise Linux 6.4 and uses TORQUE (PBS) for resource and job management. Like other computers of this league, Cyence is being used to design and generate models to solve challenging problems. Operational since early July, the machine has already begun to produce data for 17 research projects from eight Iowa State departments in a broad range of disciplines, including bioscience, ecology, fluid dynamics, earth and atmospheric science, materials science, and energy systems. “The larger amount of computing power gives you better performance and makes the models you are using more realistic,” said Jim Davis, Iowa State’s vice provost for information technology and chief information officer. The more powerful machine also enables a faster pace of research. The parameters of a model can be changed with greater ease and multiple results can be generated quicker, plus it allows allows multiple research groups to run models on the computer at the same time instead of just one group. “This is very important to the research enterprise to have [Cyence] to carry out large scale research models,” Davis said. “This really shortens the time to discovery.” Cyence is a source of pride for the entire campus. Installed at Iowa State in June, the 183-teraflop (peak) system barely missed the TOP500 mark, but that fact does not detract from its value to its new users, the majority of whom are ISU researchers and graduate students. “It will make an impact on science,” proclaimed Davis. “We are providing facilities that faculty can use to accelerate their research work.” Arun Somani, associate dean for electrical and computer engineering, led the team that applied for the National Science Foundation (NSF) grant in 2011. Recognizing the benefits of their proposal, the NSF allocated more than $1.8 million for the HPC system and Iowa State came up with a matching grant of $800,000. On the university side, the investment was shared among the colleges of Engineering, Liberal Arts and Sciences, and Agriculture and Life Sciences, and vice president of research and economic development office. “This was a joint venture between the three colleges, which was very unique because you do not see this type of partnership at most universities,” said Somani, remarking on the project’s collaborative appeal. 
Chief Information Officer Davis agreed: “The work leading up to the award has been a productive partnership of faculty from many disciplines working together and with university administration and information technology specialists.” The university is already planning for its next HPC system, a so-called “condo cluster,” to be deployed next summer. The shared multi-departmental machine will further cement the collaborative element of HPC at Iowa State. “The idea behind this is that faculty develop common requirements, pool their funds and buy a much larger system than they could individually,” Davis explained. “Costs are kept low by sharing support and infrastructure, and by pooling unused capacity from all stakeholders. Researchers can run jobs and simulations that are much larger than they would be able to otherwise.”
<urn:uuid:fa1d617a-aaaf-4d2f-afaa-ba0b0d690553>
CC-MAIN-2017-09
https://www.hpcwire.com/2013/09/06/iowa_state_accelerates_science_with_gpu-phi_supercomputer/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172018.95/warc/CC-MAIN-20170219104612-00511-ip-10-171-10-108.ec2.internal.warc.gz
en
0.94782
901
2.640625
3
Traffic congestion could soon be a distant memory for fire and rescue personnel in Palm Beach County, Fla. An intelligent traffic signal system is being installed that will adjust traffic light cycles to favor the routes being driven by first responders. Called Emergency.now, the technology connects Palm Beach County Fire Rescue’s computer-aided dispatch (CAD) system with the county’s traffic control system. When an emergency call is placed, a route is generated and transmitted to each responding vehicle. The software then adjusts the traffic lights along the route to stay green for longer stretches of time. The major difference between Emergency.now and other emergency traffic signal systems is that it doesn’t pre-empt traffic flow by setting up blinking yellow lights or completely stop one direction of traffic. The software adjusts the existing traffic cycle to meet an emergency responder’s needs and goes back to its normal timing quickly thereafter. The extended green lights help “flush” traffic through a particular road and intersection, delaying additional congestion that may come from motorists on side streets. For example, if a normal green light lasts for 30 seconds, the priority system can be set so a green light is extended for a longer period of time. It still cycles the lights so that motorists on side streets are able to go through, but gives a longer green light to the corridor that will be taken by a responder. According to Evan Bestland, division chief of Palm Beach County Fire Rescue, the county plans to add the technology to approximately 600 of its 1,000 intersections. Currently only about 50 intersections are connected to the priority traffic signal system. For the system to work efficiently, however, firefighters and other responders need to stick to the main roadways. Although drivers likely know all the shortcuts through side streets, now they don’t need to cut through neighborhoods to reach their destinations quickly. The CAD system was also programmed so that stop signs, speed bumps, guard gates, etc., are all assigned a negative time rating when the shortest route to an emergency is being plotted. So by default, the route sent to a responder likely won’t include cuts through heavy residential areas unless the emergency is located in one of them. “When the call gets generated and the major thoroughfares don’t have the penalties and the side streets do, it should route them to take advantage of that,” Bestland said. “I can’t help it if someone wants to take a shortcut. But they’re not going to get the full advantage of the system by doing that.” The technology is completely automated. Responders simply follow the route they’re given and the green lights automatically adjust their timing according to the ETA of the emergency vehicle. Speed is initially calculated using the speed limit of the road the responder is first on, or the road the vehicle is parked on. The full rollout of Emergency.now in Palm Beach County was approved earlier this year after years of testing and development. Initial work on the technology began in approximately 2005 after a new CAD system was purchased. Trafficware — then under the name Naztec — was approached with the concept and agreed that they could design software to sync the CAD system with the county’s traffic control system for $2.1 million. The catch, however, was that if Trafficware couldn’t get the software to work, Palm Beach County Fire Rescue wouldn’t have to pay. 
Bestland explained that since it was bleeding-edge technology, the county didn't want to sacrifice taxpayer dollars for something that hadn't been fully vetted. It took eight years to perfect the system, but the county paid up and is satisfied with the results. There were a few hiccups along the way, however. When Palm Beach County Fire Rescue first tested the system, the first few intersections weren't working accurately. They discovered the software was basing the vehicle's ETA at an intersection on the actual speed of the truck or ambulance, which at the start would be around 5 mph. The system was quickly adjusted to instead use the speed limit of the first road the vehicle would be on, which corrected the problem. Another discovery was that the system works better when emergency vehicles travel in packs. The extended green lights eventually time out and go red. So if one responder gets out the door right away and another is behind by a couple of minutes, the latter vehicle might get stuck with red lights. There's no real fix to that issue, other than to travel in a group, or to have a responder who is running later fall further back to catch the next extended green light cycle. "Because we're flushing traffic out, if the traffic is lighter to begin with, that in itself is a big benefit," Bestland said. "So even if we don't have the green light, just because we moved traffic out of the way, that's going to improve our response times." Additional intersections will be added to the priority system network over the next several years. But while Palm Beach County Fire Rescue believes the system will help save lives, there are a few different upgrades in the works. One of the major enhancements the agency would like to see is the ability to reconfigure traffic light cycles once a victim has been picked up at a scene and is being transported to a hospital. The technology isn't dependent on a vehicle starting at a fire station, so adjusting the system to work from different starting locations is on the agency's agenda. Palm Beach County also has a number of drawbridges that emergency personnel would like to see added to Emergency.now and the county's traffic control computer. Right now the priority traffic signal software and CAD establish a route using the shortest physical distance. But if a bridge is up, a fire station farther away might be the right one to send to a fire or accident scene. By adding the bridges and whether they are up or down, the CAD system could do the calculations to determine the right station to assign to a particular call, while the traffic signal technology does its job by extending green lights and flushing congested traffic out of the area. Right now, however, the priority for Palm Beach County Fire Rescue is getting the system fully deployed. "My philosophy on technology is crawl, walk, run and fly," Bestland said. "You really need to get the foundation right."
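The routing logic described above (shortest travel time, with penalties attached to stop signs, speed bumps and guard gates so side-street shortcuts lose out to major corridors) amounts to a weighted shortest-path search. The sketch below is a minimal, hypothetical illustration of that idea in Python; the road names, travel times and penalty values are invented, and the actual Emergency.now implementation is not public.

# Hypothetical sketch of penalty-weighted routing, loosely modeled on the
# article's description of the CAD routing. All names and numbers are invented.
import heapq

# Each edge: (neighbor, base_travel_seconds, penalty_seconds).
# Penalties stand in for stop signs, speed bumps, guard gates, etc.
ROAD_GRAPH = {
    "station":  [("main_st", 60, 0), ("side_st", 45, 40)],
    "main_st":  [("arterial", 90, 0)],
    "side_st":  [("arterial", 70, 55)],   # shortcut, but heavily penalized
    "arterial": [("incident", 120, 0)],
    "incident": [],
}

def fastest_route(graph, start, goal):
    """Dijkstra's algorithm over travel time plus penalties."""
    queue = [(0, start, [start])]
    best = {start: 0}
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        for neighbor, travel, penalty in graph[node]:
            new_cost = cost + travel + penalty
            if new_cost < best.get(neighbor, float("inf")):
                best[neighbor] = new_cost
                heapq.heappush(queue, (new_cost, neighbor, path + [neighbor]))
    return float("inf"), []

if __name__ == "__main__":
    cost, path = fastest_route(ROAD_GRAPH, "station", "incident")
    print(f"Route: {' -> '.join(path)} ({cost} s estimated)")
    # The penalized side-street shortcut loses to the main corridor,
    # which is also the corridor whose signals get the extended greens.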
<urn:uuid:2c9733a4-d076-4a5b-b3eb-d525a6c335ea>
CC-MAIN-2017-09
http://www.govtech.com/public-safety/Traffic-Signal-Tech-Improves-Rescue-Response-Time.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170253.67/warc/CC-MAIN-20170219104610-00455-ip-10-171-10-108.ec2.internal.warc.gz
en
0.955149
1,357
2.71875
3
USDA research efforts need management improvements: GAO - By Frank Konkel - May 14, 2013 The United States Department of Agriculture's two principal research agencies need to strengthen collaborative planning and reduce duplication in the face of increasingly tight budgets, according to a Government Accountability Office study released in April. The study was the culmination of a lengthy investigation into USDA's internal and external research agencies – the Agricultural Research Service (ARS) and the National Institute of Food and Agriculture (NIFA), respectively – which both play critical roles in supporting agricultural science and human nutrition. The agencies collectively spend close to $1.2 billion annually. USDA economists estimate that each dollar spent on public agricultural research returns $10 in benefits to the economy, with even greater returns predicted by university researchers. In the past, the agencies have relied on existing safeguards to avoid potentially duplicative projects, including panels of independent external scientists who review projects, professional norms like peer review and USDA's Current Research Information System (CRIS), a database containing project-level information about all ongoing and completed research projects. Yet GAO found several shortcomings that limit the effectiveness of existing safeguards against duplicative efforts. Chief among them is outdated information: information in CRIS on ARS projects is typically a minimum of six months out of date, undermining the effectiveness of the database. In addition, GAO said one-third of NIFA's competitive grants are not subject to CRIS duplication checks. NIFA does not conduct its own research; instead, it supports research by awarding grants to individuals, institutions and organizations, and advances scientific and agricultural literacy through education and its extension networks. Finally, GAO found that high-level collaborative planning processes could be "more systematic to make the best use of limited agricultural research resources," suggesting joint meetings between department heads on at least an annual basis. "The nation's increasingly tight budget environment underscores the need for federal research agencies to set priorities carefully and make effective use of limited research funding," the report states. "At both ARS and NIFA, national program leaders are responsible for setting the agencies' research priorities; obtaining input from stakeholders in industry, academia, and elsewhere; and identifying gaps in agricultural research. As agency budgets continue to tighten, pressure increases on program leaders to ensure that dollars go to the highest-priority activities and that research projects are complementary, rather than unnecessarily duplicative." USDA generally agreed with the recommendations GAO made in the report, citing overall agency benefits for three of the four recommendations. GAO recommended that ARS issue formal written guidance to update research project data "at least quarterly" to ensure new projects aren't duplications, and that NIFA instruct its national program leaders to ensure staff check the entirety of its competitive grant awards against CRIS for duplicative efforts.
GAO also instructed USDA to investigate whether other systems might be more effective and efficient than CRIS and revise its internal methods for identifying duplicative research projects. Frank Konkel is a former staff writer for FCW.
<urn:uuid:3aa19e68-f54f-4f6f-a13e-dd17ce547ac6>
CC-MAIN-2017-09
https://fcw.com/articles/2013/05/14/usda-research-gao.aspx?admgarea=TC_Management
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170614.88/warc/CC-MAIN-20170219104610-00631-ip-10-171-10-108.ec2.internal.warc.gz
en
0.940873
641
2.515625
3
Can plastic materials morph into computers? A research breakthrough published this week brings such a possibility closer to reality. Researchers are looking at the possibility of making low-power, flexible and inexpensive computers out of plastic materials. Plastic is not normally a good conductive material. However, researchers said this week that they have solved a problem related to reading data. The research, which involved converting electricity from magnetic film to optics so data could be read through plastic material, was conducted by researchers at the University of Iowa and New York University. A paper on the research was published in this week's Nature Communications journal. More research is needed before plastic computers become practical, acknowledged Michael Flatte, professor of physics and astronomy at the University of Iowa. Problems related to writing and processing data need to be solved before plastic computers can be commercially viable. Plastic computers, however, could conceivably be used in smartphones, sensors, wearable products, small electronics or solar cells, Flatte said. The computers would have basic processing, data gathering and transmission capabilities but won't replace silicon used in the fastest computers today. However, the plastic material could be cheaper to produce as it wouldn't require silicon fab plants, and possibly could supplement faster silicon components in mobile devices or sensors. "The initial types of inexpensive computers envisioned are things like RFID, but with much more computing power and information storage, or distributed sensors," Flatte said. One such implementation might be a large agricultural field with independent temperature sensors made from these devices, distributed at hundreds of places around the field, he said. The research breakthrough this week is an important step in giving plastic computers the sensor-like ability to store data, locally process the information and report data back to a central computer. Mobile phones, which demand more computing power than sensors, will require more advances because communication requires microwave emissions usually produced by higher-speed transistors than have been made with plastic. It's difficult for plastic to compete in the electronics area because silicon is such an effective technology, Flatte acknowledged. But there are applications where the flexibility of plastic could be advantageous, he said, raising the possibility of plastic computers being information processors in refrigerators or other common home electronics. "This won't be faster or smaller, but it will be cheaper and lower power, we hope," Flatte said. In the new research, Flatte and his colleagues were able to convert data encoded in a magnetic film from an electric flow into optics for an organic light-emitting diode (OLED). The LED was made out of the plastic, and connected to the magnetic film through a substrate. Plastics can't handle electricity; the data had to be converted into optics for communication. "The plastic devices are very important in certain areas of light emission but have tended not to be important in communication," Flatte said. The researchers were more concerned about making the technology happen -- environmental concerns related to plastic are a completely different discussion, Flatte said. 
To be sure, there are plastic devices with silicon computers in them already on the market, like a baby garment from Rest Devices, which has electronics to measure a baby's motion, temperature, breathing patterns and pulse. And before this week, basic transistors made out of plastic had been demonstrated. Now, this latest research establishes a method for plastic devices to read data from storage. "The writing problem would have to be solved. But I think [reading] is an important step forward," Flatte said.
<urn:uuid:dbdbfc0f-d49f-4495-85aa-e70b5baec854>
CC-MAIN-2017-09
http://www.cio.com/article/2376703/hardware/plastic-computers-taking-shape--but-won-t-replace-silicon.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170380.12/warc/CC-MAIN-20170219104610-00151-ip-10-171-10-108.ec2.internal.warc.gz
en
0.963292
709
3.703125
4
FileZilla, Other Open-Source Software From 'Right' Sources Is Safe A basic tenet of open-source software security has long been the idea that since the code is open, anyone can look inside to see if there is something that shouldn't be there. It's a principle that does work, and many of us who use open-source software daily accept it as such. That's why some recent news about a Trojan in a popular File Transfer Protocol (FTP) program is a potential cause for concern. The malicious versions of FileZilla are tampered copies of the software that steal users' credentials and could also be employed for spreading more malware. "Attackers also can download whole Web page source code containing database log-in, payment system, customer private information, etc.," security firm Avast stated in a blog post. What's important to note here, though, is that it is not the official version of FileZilla that poses the risk; the danger comes from bogus copies. Do a simple search on Google for FileZilla, and you'll find several sites with downloads for the program. Open-source software, by definition, is freely redistributable, so having FileZilla available from multiple locations is not a surprise or anything new. It's a situation that the FileZilla project is also well aware of at this point. "While this instance is one of the largest to date, there have been many cases of modified versions spreading malware hosted on third-party Websites for over a decade," the FileZilla site states. "We do not condone these actions and are taking measures to get the known offenders removed. Note that we cannot, in general, prevent tainted versions on third-party Websites or prove their authenticity, especially since the FileZilla Project promotes beneficial redistribution and modifications of FileZilla in the spirit of free open-source software and the GNU General Public License." The lesson and the message here are simple, but very, very important. When consuming or downloading open-source software, make sure that you're getting it from the legitimate source. For FileZilla, that means getting the FTP program directly from the project page itself. The larger question here is whether the same type of issue could potentially exist with other open-source software. It can, and that is why it's important that users only download software from the "right" place. In my opinion, the "right" place is the actual project page of a given open-source application. Linux users should also generally feel safe getting applications from their respective Linux distributions' software repositories as well, since those have generally just been packaged for specific distributions from the upstream project. So, the next time you look to download an open-source app, make sure you're getting it from a source that you can trust. Sean Michael Kerner is a senior editor at eWEEK and InternetNews.com. Follow him on Twitter @TechJournalist.
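One practical way to act on that advice is to verify a download against the checksum (or, better, the GPG signature) that the project itself publishes. The snippet below is a generic, hypothetical example of a SHA-256 check in Python; the file name and expected hash are placeholders, not real FileZilla release values.

# Hypothetical integrity check for a downloaded installer. The file name and
# expected hash are placeholders; real projects publish their own checksums
# (or GPG signatures, which are stronger) on their official sites.
import hashlib
import sys

EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def sha256_of(path, chunk_size=1 << 20):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "FileZilla_setup.exe"  # placeholder
    actual = sha256_of(path)
    if actual == EXPECTED_SHA256:
        print("Checksum matches the published value.")
    else:
        print(f"WARNING: checksum mismatch ({actual}); do not run this installer.")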
<urn:uuid:7caa2f58-f2da-450c-be77-a26c2ed9545a>
CC-MAIN-2017-09
http://www.eweek.com/blogs/security-watch/filezilla-other-open-source-software-from-right-sources-is-safe.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172156.69/warc/CC-MAIN-20170219104612-00555-ip-10-171-10-108.ec2.internal.warc.gz
en
0.939298
616
2.6875
3
The race to build the world's fastest supercomputer – and possibly even the first exascale-class system – just got more interesting. Russia and India are considering an alliance that would enable them to more effectively compete with rival supercomputing powers, in particular China. Last month, Boris Shabanov of the Russian Academy of Sciences extended an invitation to the Indian Institute of Science and the Karnataka government to explore the possibility of setting up a joint supercomputing center in Bangalore, according to a report in the Economic Times. "India has many skills for building supercomputers. It is very strong in software," Alexey Shmelev, cofounder and chief operations officer of RSC group and delegate to the Russian Academy of Sciences, told the paper. "I am ready to share technology with India. I guess there would not be many players who are willing to do so." By uniting forces, the two nations would be in a better position to take on elite supercomputing powers like the United States and Japan, and most notably China – which is home to the fastest supercomputer in the world by a significant margin. China rose to the top of the supercomputing charts in June 2013 with its Tianhe-2 system, operated by the National University of Defense Technology. With 33.86 petaflops as measured by the LINPACK benchmark, the Chinese system beat out second place finisher Titan by nearly a 2-to-1 margin, and has retained its top spot ever since. Titan, the 17.59 petaflop supercomputer installed at Oak Ridge National Laboratory in Tennessee, was the list champ from November 2012 until China knocked it off its perch. The US, EU, Japan, India, Russia and China have all expressed their intentions to reach exascale sometime around the year 2020. Many experts believe the odds are in China's favor, but the outcome is far from decided. Most of these nations have the talent to get the job done, but the ultimate winner will be the nation that backs up its expressed intentions with an unwavering commitment to funding. India and Russia should not be discounted. India made a run at supercomputing glory in 2007 with its Eka system ("eka" means number one in Sanskrit). When Eka debuted it was the fourth fastest supercomputer in the world and the fastest in Asia. Since then, China and Japan have pulled ahead. India's current top number-cruncher, the Indian Institute of Tropical Meteorology's iDataPlex, has a benchmarked performance of 719 teraflops, earning it a No. 44 ranking on the TOP500 list. Ranked second in India with a speed of 386.7 teraflops is PARAM Yuva – II, unveiled by the Centre for Development of Advanced Computing in early 2013. Russia's most powerful system, the Lomonosov supercomputer, holds a 37th place ranking with 902 teraflops. Says Vipin Chaudhary, former chief executive of Computational Research Laboratories, a subsidiary of Tata Sons that built the Eka supercomputer: "We need to catch up first before trying to leapfrog US and China. A lot of training and research needs to be supported for sustained period of time." To this end, India has committed about $2 billion (Rs 12,000 crore) to the Indian Space Research Organisation and the Indian Institute of Science to develop a high-performance supercomputer by 2018. India's government-backed computing agency, C-DAC, also announced a $750 million (Rs 4,500 crore) blueprint to set up 70 supercomputers over the next five years.
<urn:uuid:e10b050d-b455-4d82-8383-494d9f50bed8>
CC-MAIN-2017-09
https://www.hpcwire.com/2014/04/09/russia-india-explore-joint-supercomputing-project/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171932.64/warc/CC-MAIN-20170219104611-00075-ip-10-171-10-108.ec2.internal.warc.gz
en
0.941288
775
2.578125
3
This lecture is based on the Peripheral Component Interconnect Express (PCIe), a standard for computer expansion cards. Specifically, it is a standard for the communication link by which a PCIe device communicates with the CPU. According to Wikipedia, PCIe 3.0 (August 2007) is the latest standard. The standard is an outgrowth of the original PCI standard, but is not compatible with PCI at the hardware level; it is based on a new protocol for electrical signaling. This protocol is built on the concept of a lane, which we must define. Here are some capacity figures quoted from Wikipedia:

             Per Lane    16-Lane Slot
Version 1    250 MB/s    4 GB/s
Version 2    500 MB/s    8 GB/s
Version 3    1 GB/s      16 GB/s

What is a Lane? A lane is a pair of point-to-point serial links. It is a full-duplex link, able to communicate in two directions simultaneously; each of the serial links in the pair handles one of the two directions. By definition, a serial link transmits one bit at a time. By extension, a lane may transmit two bits at any one time, one bit in each direction. One may view a parallel link, which transmits multiple bits in one direction at any given time, as a collection of serial links. The difference is that a parallel link must provide for synchronization of the bits sent by the individual links.

Data Transmission Codes. The PCIe standard is byte oriented, in that it should be viewed logically as a full-duplex byte stream. What is actually transmitted? The association of transmitted (or received) bits with bytes is handled at the Data Link layer. Suppose a byte is to be transmitted serially. Conversion from byte data to bit-oriented data for serial transmission is done by a shift register, which takes in eight bits at a time and shifts out one bit at a time. The bits, as shifted out, are still represented in standard logic levels. The serial transmit unit takes the standard logic levels as input and converts them to voltage levels appropriate for serial transmission.

Three Possible Transmission Codes. The serial transmit unit sends data by asserting a voltage on the serial link. One simple method would be as follows: to transmit a logic 1, assert +5 volts on the transmission line; to transmit a logic 0, assert 0 volts. This method has so many difficulties in practice that it cannot be used. Two of the most obvious are transmission of power and lack of data framing. The codes actually used for link management avoid these problems. Two of the more common methods are NRZ and NRZI. Non-Return-to-Zero (NRZ) coding transmits by asserting the following voltages: for a logic 1, it asserts a positive voltage (3.0 to 5.0 volts) on the link; for a logic 0, it asserts a negative voltage (-3.0 to -5.0 volts). Non-Return-to-Zero-Invert (NRZI) is a modification of NRZ that uses the same voltage levels but encodes data in transitions: a change of level for one bit value and no change for the other, rather than a fixed voltage for each value.

The Problem of Noise. One problem with these serial links is that they function as antennas: they will pick up any stray electromagnetic radiation in the radio range. In other words, the signal received at the destination might not be what was actually transmitted; it might be the original signal, corrupted by noise. The solution to the problem of noise is based on the observation that two links placed in close proximity will receive noise signals that are almost identical. To make use of this observation, we use differential transmitters to send the signals and differential receivers to reconstruct them.

Differential Transmitters and Receivers. In differential transmission, rather than asserting a voltage on a single output line, the transmitter asserts two voltages: +V/2 and -V/2.
A +6 volt signal would be asserted as the pair +3 volts and -3 volts; a -8 volt signal would be asserted as -4 volts and +4 volts. The standard differential receiver is an analog subtractor. For a 6 volt transmitted signal, we have A = 3, B = -3; A - B = 3 - (-3) = 6.

Noise in a Differential Link. We now assume that the lines used to transmit the differential signals are physically close together, so that each line is subject to the same noise signal N. The received signal is the difference of the two voltages input to the differential receiver: (+V/2 + N) - (-V/2 + N) = V, the desired value.

Ground Offsets in Standard Links. All voltages are measured relative to a standard value, called "ground". In the complete version of the simple single-ended circuit, there is an assumed second connection between the two devices; this second connection fixes the zero level for the voltage. There is no necessity for the two devices to have the same ground. Suppose that the ground for the receiver is offset from the ground of the transmitter by VO. The signal sent out as +V(t) will be received as V(t) - VO.

Ground Offsets in Differential Links. Here again, the subtractor in the differential receiver handles this problem. The signal originates as a given voltage, which can be positive, negative, or 0, and is transmitted as the pair (+V/2, -V/2). Due to the ground offset at the receiver, the signal is taken in as (+V/2 - VO, -V/2 - VO). It is interpreted as (+V/2 - VO) - (-V/2 - VO) = +V/2 - VO + V/2 + VO = V. The differential link will correct for both ground offset and line noise at the same time.

1. Wikipedia http://en.wikipedia.org/wiki/PCI_Express
4. Wikipedia http://en.wikipedia.org/wiki/RS-422
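A small numeric sketch (not part of the original lecture) makes the subtraction argument concrete. The signal, noise and offset values below are arbitrary examples chosen for illustration.

# Illustrative sketch: differential signaling recovers the original voltage
# even when both lines pick up the same noise and the receiver's ground is
# offset from the transmitter's. All numbers are arbitrary examples.
def differential_transmit(v):
    """Split a signal voltage into the pair (+V/2, -V/2)."""
    return (+v / 2.0, -v / 2.0)

def differential_receive(a, b):
    """The receiver is an analog subtractor: output = A - B."""
    return a - b

signal = 6.0          # volts to send
noise = 1.7           # common-mode noise picked up by BOTH lines
ground_offset = -0.9  # receiver ground differs from transmitter ground

a, b = differential_transmit(signal)
# Both corruptions affect the two lines equally...
a_seen = a + noise - ground_offset
b_seen = b + noise - ground_offset
# ...so the subtraction cancels them out.
recovered = differential_receive(a_seen, b_seen)
print(recovered)  # 6.0, the original signal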
<urn:uuid:aa926d59-3615-4ad1-8a55-166af0485d67>
CC-MAIN-2017-09
http://edwardbosworth.com/My5155_Slides/Chapter12/PCI_Express.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170434.7/warc/CC-MAIN-20170219104610-00195-ip-10-171-10-108.ec2.internal.warc.gz
en
0.901615
1,312
3.828125
4
WASHINGTON, D.C. -- The Centers for Disease Control and Prevention's Injury Center launched Injury Maps, an interactive Web-based mapping application of injury-related mortality in the United States. Injury Maps includes mortality rates for 1989 through 1998, the CDC said. The data includes injuries from drowning, poisoning, motor vehicle accidents, homicide and five other injury causes of death. Injury Maps' location-based technology will give users the capability to create customized, interactive online maps that can be used by a wide range of organizations and individuals, such as other federal agencies, state and local health departments, policy makers, research institutions and students. For example, a health worker can use Injury Maps to create state or county maps that depict death rates from several broad categories of injuries, as well as create a map of a specific injury cause and compare it to state and national death rates. The CDC worked with MapInfo Corporation to create the mapping application. "A Web-based mapping application such as Injury Maps is an effective tool for public health officials to sort and analyze information when making important decisions about education and prevention programs," said Sabby Nayar, strategic industry manager, government, MapInfo.
<urn:uuid:b1f75bb9-be14-4a87-9fc5-e2b5f8436218>
CC-MAIN-2017-09
http://www.govtech.com/e-government/CDC-Rolls-Out-Web-Based-Injury-Mapping.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170434.7/warc/CC-MAIN-20170219104610-00195-ip-10-171-10-108.ec2.internal.warc.gz
en
0.917659
243
2.796875
3
Updated: Software created by researchers at Rensselaer Polytechnic Institute uses a pattern-recognition process called kernel learning to more quickly assess molecules' properties. Researchers at Rensselaer Polytechnic Institute this month added a software program that uses adaptive learning to the roster of programs available for assessing molecules' properties. While pharmaceutical companies already have software that searches through databases to screen for drugs for a given therapy, the new software works much faster by using neural networks and adaptive-learning methods to model compounds and predict their behavior. Drug-discovery companies all employ computational tools to aid in finding leads for drug development. But scientists at Rensselaer in Troy, N.Y., say the move into predictive modeling marks a shift away from laboratory assays toward mathematical, computer-run models. Laboratories with the most high-throughput techniques can test a few hundred thousand molecules a day; existing computer programs can process just fewer than a million. But the Rensselaer software can crunch more than 10 million molecules a day, according to High Performance Computing. The software looks for similarities between molecules in a given database and those with known therapeutic potential. The advantage is chiefly the amount and type of chemical information that is available through this method; for a method that produces this much chemical information, the speed is quite fast. The software comes from a National Science Foundation-funded project called Drug Discovery and Semi-Supervised Learning (DDASSL, pronounced "dazzle"). Curt Breneman, a chemistry professor; Kristin Bennett, a mathematics associate professor; senior research associate N. Sukumar; and Mark Embrechts, an associate professor in decision sciences and engineering systems, worked together to develop the software. Computer testing is less expensive and faster than testing actual molecules, and allows workers to pare down the number of tests that need to be performed. Dr. Breneman says, "That approach helps to focus more attention on molecules with the highest probability of success, and also allows dead-ends to be identified before many resources are expended on them. The ultimate pay-off of this methodology may be that it can help to speed up the development of new drugs." Though several software programs already exist to assess compounds in silico, they can be slow, not particularly predictive or both. The Rensselaer software uses two shortcuts to search large molecular databases rapidly. First, the software renders a description of both a molecule's shape and the electrical properties on its surface as a set of numbers. These number sets can be processed rapidly by a computer. Then, the software searches for common chemical properties associated with molecules for a particular therapy. It does not use the method of so-called docking software, which looks at the interaction of a molecule with a particular protein. Instead, it uses a pattern-recognition process called kernel learning. The software is presented with a small set of molecules with the right features, which are analyzed as described above. Then, the software churns through a molecular database, looking for promising compounds. "Conventional techniques are not truly predictive and don't work," Bennett said. "So, we borrowed pattern-recognition techniques already used in the pharmaceutical industry and added algorithms based on support vector machines.
That gives us a technique to predict which molecules are promising." Projects are under way to further evaluate how predictive the new software is. Pattern-recognition techniques are rapidly becoming more sophisticated and more capable of using data from laboratory experiments. In unrelated work, researchers at the Harbor-UCLA Medical Center used computational methods and proteomics to find a structure that is common to otherwise diverse and distinct antimicrobial peptides. In a recent review in Science magazine, Yale University chemistry professor William Jorgensen stressed that no single computer program will be sufficient to find drug candidates and that some of the slower processes yield absolutely crucial information. "There is not going to be a voilà moment at the computer terminal," he wrote. "Instead, there is systematic use of wide-ranging computational tools to facilitate and enhance the drug-discovery process." Editor's Note: This story was updated to include additional information and comments from a discussion with Curt Breneman.
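To make the kernel-learning idea concrete, here is a toy sketch (not the DDASSL software itself) of kernel-based virtual screening using scikit-learn: each molecule is reduced to a vector of numeric descriptors, a support vector machine with an RBF kernel is trained on a small labeled set of actives and inactives, and a larger library is then ranked by the learned decision function. All descriptors and labels below are randomly generated for illustration only.

# Toy illustration of kernel-based virtual screening (NOT the DDASSL code).
# Each "molecule" is a fixed-length vector of numeric descriptors; a kernel
# SVM is trained on a small labeled set, and a library is ranked by score.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

n_train, n_library, n_features = 40, 1000, 16
X_train = rng.normal(size=(n_train, n_features))
# Pretend activity depends on a hidden combination of two descriptors.
y_train = (X_train[:, 0] + 0.7 * X_train[:, 3] > 0).astype(int)

model = SVC(kernel="rbf", gamma="scale")
model.fit(X_train, y_train)

# Score a large descriptor library and keep the most promising compounds.
X_library = rng.normal(size=(n_library, n_features))
scores = model.decision_function(X_library)
top = np.argsort(scores)[::-1][:10]
print("Indices of the 10 highest-scoring library compounds:", top)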
<urn:uuid:cbb66e80-9cde-47f6-b99e-42ba15ff4b30>
CC-MAIN-2017-09
http://www.eweek.com/c/a/Enterprise-Applications/Adaptive-Learning-Speeds-New-DrugScreening-Software
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172649.58/warc/CC-MAIN-20170219104612-00599-ip-10-171-10-108.ec2.internal.warc.gz
en
0.915212
919
3.296875
3
GCN LAB IMPRESSIONS Solar-powered processor raises new possibilities - By Greg Crowe - Sep 23, 2011 At Intel's Developer Forum, Chief Technology Officer Justin Rattner demonstrated one of Intel's latest research items — a microprocessor that consumes insanely low levels of power. The circuits on this processor, code-named Claremont, operate very close to their "threshold voltage," which is the minimum voltage at which the circuit can change states and pass a current. This allows the entire processor to run on less than 10 milliwatts at its minimum, which is a significant improvement. "So where does the solar power come in?" I hear you ask. "Your headline promised solar power!" OK, stop yelling, I'll tell you. In order to demonstrate how little power this chip needs to run, the demonstration model was powered by a small photovoltaic cell that was about the size of the processor itself. That is pretty impressive. Claremont might not ever be in any publicly available products, so don't start standing in line for one just yet. However, the data they've accumulated from it in the lab will probably enable Intel to integrate aspects of this technology with a wide variety of platforms. Among the possibilities Intel suggests are longer battery lives, energy-efficient multicore processors in everything from handhelds to servers to supercomputers, and generally greener computing. Who knows, maybe one day they can use offshoots of this technology to make digital devices that are entirely powered off of solar energy. Wouldn't that be cool? If you ever needed more power, you could simply shine more light at your computer. At the very least, the folks at Intel have figured out how to keep their demo going even if the building's power goes out. Greg Crowe is a former GCN staff writer who covered mobile technology.
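A quick back-of-envelope calculation (my own assumptions, not Intel's published figures) suggests why a processor-sized photovoltaic cell is a plausible power source for a sub-10-milliwatt chip: bright sunlight delivers roughly 1,000 watts per square meter, so even a small cell at modest efficiency clears that bar. The cell area and efficiency below are illustrative guesses.

# Back-of-envelope check (assumed values, not Intel's figures): can a
# photovoltaic cell roughly the size of the processor supply ~10 mW?
irradiance_w_per_m2 = 1000.0   # bright sunlight, standard test condition
cell_area_cm2 = 2.0            # assumed "about the size of the processor"
efficiency = 0.15              # assumed mid-range cell efficiency

area_m2 = cell_area_cm2 / 10_000.0
power_w = irradiance_w_per_m2 * area_m2 * efficiency
print(f"Available power: {power_w * 1000:.0f} mW")  # ~30 mW, comfortably above 10 mW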
<urn:uuid:f675a189-2e6b-4325-9e78-ec0834a48703>
CC-MAIN-2017-09
https://gcn.com/articles/2011/09/23/intel-solar-powered-processor.aspx?admgarea=TC_EMERGINGTECH
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171646.15/warc/CC-MAIN-20170219104611-00595-ip-10-171-10-108.ec2.internal.warc.gz
en
0.944307
407
3.0625
3
Name a tech company, any tech company, and they're investing in containers. Google, of course. IBM, yes. Microsoft, check. But, just because containers are extremely popular, doesn't mean virtual machines are out of date. They're not. Yes, containers can enable your company to pack a lot more applications into a single physical server than a virtual machine (VM) can. Container technologies, such as Docker, beat VMs at this part of the cloud or data-center game. VMs take up a lot of system resources. Each VM runs not just a full copy of an operating system, but a virtual copy of all the hardware that the operating system needs to run. This quickly adds up to a lot of RAM and CPU cycles. In contrast, all that a container requires is enough of an operating system, supporting programs and libraries, and system resources to run a specific program. What this means in practice is you can put two to three times as many applications on a single server with containers as you can with a VM. In addition, with containers you can create a portable, consistent operating environment for development, testing, and deployment. That's a winning trifecta. If that's all there was to containers vs. virtual machines then I'd be writing an obituary for VMs. But, there's a lot more to it than just how many apps you can put in a box. Container problem #1: Security The top problem, which often gets overlooked in today's excitement about containers, is security. As Daniel Walsh, a security engineer at Red Hat who works mainly on Docker and containers, puts it: Containers do not contain. Take Docker, for example, which uses libcontainers as its container technology. Libcontainers accesses five namespaces -- Process, Network, Mount, Hostname, and Shared Memory -- to work with Linux. That's great as far as it goes, but there are a lot of important Linux kernel subsystems outside the container. These include all devices, SELinux, Cgroups and all file systems under /sys. This means if a user or application has superuser privileges within the container, the underlying operating system could, in theory, be cracked. That's a bad thing. Now, there are many ways to secure Docker and other container technologies. For example, you can mount a /sys file system as read-only, force container processes to write only to container-specific file systems, and set up the network namespace so it only connects with a specified private intranet, and so on. But, none of this is built in by default. It takes sweat to secure containers. The basic rule is that you'll need to treat containers the same way you would any server application. That is, as Walsh spells out:
- Drop privileges as quickly as possible
- Run your services as non-root whenever possible
- Treat root within a container as if it is root outside of the container
Another security issue is that many people are releasing containerized applications. Now, some of those are worse than others. If, for example, you or your staff are inclined to be, shall we say, a little bit lazy, and install the first container that comes to hand, you may have brought a Trojan Horse into your server. You need to make your people understand they cannot simply download apps from the Internet like they do games for their smartphone. Mind you they shouldn't be downloading games willy-nilly either, but that's a different kind of security problem! Other container concerns OK, so if we can lick the security problem, containers will rule all, right? Well, no. You need to consider other container aspects.
Rob Hirschfeld, CEO of RackN and OpenStack Foundation board member, observed: "Packaging is still tricky: Creating a locked box helps solve part of [the] downstream problem (you know what you have) but not the upstream problem (you don't know what you depend on)." To this, I would add that while this is a security problem, it's also a quality assurance problem. Sure, X container can run the NGINX web server, but is it the version you want? Does it include the TCP Load Balancing update? It's easy to deploy an app in a container, but if you're installing the wrong one, you've still ended up wasting time. Hirschfeld also pointed out that container sprawl can be a real problem. By this he means you should be aware that "Breaking deployments into more functional discrete parts is smart, but that means we have MORE PARTS to manage. There's an inflection point between separation of concerns and sprawl." Remember, the whole point of a container is to run a single application. The more functionality you stick into a container, the more likely it is you should have been using a virtual machine in the first place. True, some container technologies, such as Linux Containers (LXC), can be used in lieu of a VM. For example, you could use LXC to run Red Hat Enterprise Linux (RHEL) 6 specific applications on a RHEL 7 instance. Generally speaking though you want to use containers to run a single application and VMs to run multiple applications. Deciding between containers and VMs So how do you go about deciding between VMs and containers anyway? Scott S. Lowe, a VMware engineering architect, suggests that you look at the "scope" of your work. In other words, if you want to run multiple copies of a single app, say MySQL, you use a container. If you want the flexibility of running multiple applications you use a virtual machine. In addition, containers tend to lock you into a particular operating system version. That can be a good thing: You don't have to worry about dependencies once you have the application running properly in a container. But it also limits you. With VMs, no matter what hypervisor you're using -- KVM, Hyper-V, vSphere, Xen, whatever -- you can pretty much run any operating system. Do you need to run an obscure app that only runs on QNX? That's easy with a VM; it's not so simple with the current generation of containers. So let me spell it out for you. Do you need to run the maximum number of particular applications on a minimum of servers? If that's you, then you want to use containers -- keeping in mind that you're going to need to keep a close eye on your systems running containers until container security is locked down. If you need to run multiple applications on servers and/or have a wide variety of operating systems you'll want to use VMs. And if security is close to job number one for your company, then you're also going to want to stay with VMs for now. In the real world, I expect most of us are going to be running both containers and VMs on our clouds and data-centers. The economy of containers at scale makes too much financial sense for anyone to ignore. At the same time, VMs still have their virtues. As container technology matures, what I really expect to happen, as Thorsten von Eicken, CTO of enterprise cloud management company RightScale, put it, is that VMs and containers will come together to form a cloud portability nirvana. We're not there yet, but we will get there. This story, "Containers vs.
virtual machines: How to tell which is the right choice for your enterprise" was originally published by ITworld.
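Returning to Walsh's hardening advice from earlier in the piece (drop privileges, run as non-root, treat root inside the container as root outside), here is a hedged sketch of what some of those measures can look like when launching a container with the Docker SDK for Python. The image name, user ID and network name are placeholders, and the options shown are a starting point rather than a complete security configuration; verify each one against the Docker documentation and your own requirements.

# Hedged sketch: launching a container with a few of the hardening measures
# discussed above, via the Docker SDK for Python (docker-py). The image name,
# UID and network name are placeholders.
import docker

client = docker.from_env()

container = client.containers.run(
    "example/web-app:latest",        # placeholder image
    detach=True,
    user="1000:1000",                # run as a non-root user
    cap_drop=["ALL"],                # drop all Linux capabilities
    read_only=True,                  # read-only root filesystem
    tmpfs={"/tmp": "rw,size=64m"},   # writable scratch space only where needed
    security_opt=["no-new-privileges"],
    network="internal-only",         # placeholder private network
)
print(container.id)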
<urn:uuid:c1df0b94-731b-4dad-963e-b96ceeaf27d8>
CC-MAIN-2017-09
http://www.networkworld.com/article/2916091/cloud-storage/containers-vs-virtual-machines-how-to-tell-which-is-the-right-choice-for-your-enterprise.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172783.57/warc/CC-MAIN-20170219104612-00295-ip-10-171-10-108.ec2.internal.warc.gz
en
0.945
1,571
2.578125
3
There's a new language forming around data. Nebulous "big data" has been on the fervent tongues of the forward thinking for a few years now. This ambiguous term is now being broken down into definable parts, and futurist Chris Dancy is our linguist. He makes the distinction between "soft data" and "hard data." Big data, as we know it, is mostly soft. Soft, as in, formable, not as solidly truthful as what's called hard data. Soft data includes social media activity, transaction-log data, and anything an individual can shape by attempting to appear a certain way. Basically, there is choice involved in soft data—what you post, what you buy, what apps you use. Soft data informs who we think we are. It contributes to the illusory working self, based as much on goals, desires, faulty memory, and the popular archetypes we map ourselves onto as it is on the internal identity we feel and try to make external. Hard data, though, doesn't lie. Hard data is made up of the constant day-to-day facts that are gathered from sensors in an individual's environment or on his or her body. The typical human doesn't access this data. Dancy, strapped with at least three sensors at all times, is able to track, store, and search data concerning his mood habits, health habits, work habits, social habits, etc. Someday it will be any quantifiable habit imaginable. He can search any previous day for every piece of information gathered that day. This kind of data defies the faulty memories or false impressions we construct about our past. Of course, all it can do is invoke memory with data as the retrieval cue—one step short of human anomalies, people who can remember almost everything, even the trivial minutiae of every day. In fact, it can provide a narrative over a specified amount of time, rather than being fundamentally episodic evidence. Basically, technology is helping humans reach a more accurate self-identification, what futurists are calling "the quantified self" (Wired). The accumulation of data, and the ability to form it into narratives, provides a portrait of the self that is based on solid data. We are nearing the ability to actually realize philosopher David Hume's "Bundle Theory," which posits that the idea of self is a collection of every single instance of a person's existence. Here's where a third type of data arises. Kind data. Dancy says, "I'm obsessed with data being kind." By kindness, he means the art of applying data to benefit human lives. He figures that data usage will go from where it is now, soft data, skip past personally collected hard data, and move to what he calls "core data" (Ogilvy Do). Core data is data already collected and interpreted by a third party, and applied to the individual. That's where it can become truly kind. When data is combined to become intuitive to customer needs, an individual can make decisions based on things she or he could not normally compute, or that would take too much time to compute. Kind data could be developed to inform a decision by using data on health, mood, activity, and even genetics. Dancy uses the simple example of a menu that only offers options based on health and mood decisions made beforehand, as well as that day's current calorie intake. Kind data frees up time by eliminating erroneous information and options from the environment (Ogilvy Do). How can metrics be developed to offer practical individualized information? Will IT management be able to provide the metrics businesses need to make data beneficial to consumers?
Or will data take a different path? We've heard of data access invading personal privacy, through hackers as well as corporations. The future of data calls to mind Kurt Vonnegut's criticism of science in Cat's Cradle: "There was no talk of morals." As data and its effect on our lives grow, it's up to individuals in technology to make sure it is as kind as possible.
<urn:uuid:103e6646-d71d-4c59-bcdd-a7ecd9f01ac2>
CC-MAIN-2017-09
https://www.cherwell.com/blog/kind-data
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174163.72/warc/CC-MAIN-20170219104614-00471-ip-10-171-10-108.ec2.internal.warc.gz
en
0.945083
890
2.609375
3
An analysis of a piece of Android spyware targeting a prominent Tibetan political figure suggests it may have been built to figure out the victim's exact location. The research, performed by the Citizen Lab at the University of Toronto's Munk School of Global Affairs, is part of an ongoing project that is looking at how the Tibetan community continues to be targeted by sophisticated cyberspying campaigns. Citizen Lab obtained a sample of an application called KaKaoTalk from a Tibetan source in January, according to its blog. KaKaoTalk, made by a South Korean company, is a messaging application that also lets users exchange photos, videos and contact information. The application was received on Jan. 16 through email by a "high profile political figure in the Tibetan community," Citizen Lab wrote. But the email was forged to look like it had come from an information security expert who had previous contact with the Tibetan figure in December. At that time, the security expert had sent the Tibetan activist a legitimate version of KaKaoTalk's Android Application Package File (APK) as an alternative to using WeChat, another chat client, due to security concerns that WeChat could be used to monitor communications. But the version of KaKaoTalk for Android had been modified to record a victim's contacts, SMSes and mobile phone network configuration and transmit them to a remote server, which had been created to mimic Baidu, the Chinese portal and search engine. The malware is capable of recording information such as the base station ID, tower ID, mobile network code and area code of the phone, Citizen Lab said. That information is typically not of much use to a scammer who is trying to pull off frauds or identity theft. But it is useful to an attacker that has access to a mobile communication provider's technical infrastructure. "It almost certainly represents the information that a cellular service provider requires to initiate eavesdropping, often referred to as 'trap and trace'," Citizen Lab wrote. "Actors at this level would also have access to the data required to perform radio frequency triangulation based on the signal data from multiple towers, placing the user within a small geographical area." Citizen Lab noted that its theory is speculative and that "it is possible that this data is being gathered opportunistically by an actor without access to such cellular network information." The tampered version of KaKaoTalk has many suspicious traits: it uses a forged certificate and asks for extra permissions to run on an Android device. Android devices typically forbid installing applications from outside Google's Play store, but that security precaution can be disabled. If users are tricked into granting extra permissions, the application will run. Citizen Lab notes that Tibetans may not have access to Google's Play store and must install applications hosted elsewhere, which puts them at a higher risk. Citizen Lab tested the tampered version of KaKaoTalk against three mobile antivirus scanners made by Lookout Mobile Security, Avast and Kaspersky Lab on Feb. 6 and March 27. None of the products detected the malware. Citizen Lab wrote that the finding shows those who are targeting the Tibetan community quickly change their tactics. As soon as discussions began to move away from WeChat, the attackers "leveraged this change, duplicating a legitimate message and producing a malicious version of an application being circulated as a possible alternative," Citizen Lab wrote.
Send news tips and comments to [email protected]. Follow me on Twitter: @jeremy_kirk
<urn:uuid:4d391545-7bf3-4685-a427-6698cec236e7>
CC-MAIN-2017-09
http://www.itworld.com/article/2708531/security/android-messaging-malware-targets-tibetan-activists.html?page=2
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171971.82/warc/CC-MAIN-20170219104611-00468-ip-10-171-10-108.ec2.internal.warc.gz
en
0.964633
724
2.546875
3
BlackBerry NFC Security Overview Near Field Communication (NFC) is a short-range wireless technology used in applications that transfer information over a distance of about 4 centimeters or less – allowing users to send and receive signals with a simple tapping motion. This technology is commonly seen in building access cards and in tap-to-pay credit card transactions at cash registers. Tune into this webcast to learn about the NFC capabilities of BlackBerry smartphones – giving users the ability to share and exchange information simply by tapping the devices together. Uncover what NFC allows the smartphones to do – from working with Bluetooth technologies to sharing documents, pictures, and web content. Learn more about this technology and the security, IT policies, and application controls that are available on your BlackBerry.
<urn:uuid:ae6d89da-7266-4992-9e47-dee749421834>
CC-MAIN-2017-09
http://www.bitpipe.com/detail/RES/1332189292_32.html?asrc=RSS_BP_TERM
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172831.37/warc/CC-MAIN-20170219104612-00644-ip-10-171-10-108.ec2.internal.warc.gz
en
0.89979
159
2.671875
3
Cyberattacks are top of mind today, with recent news about high profile incidents involving the Democratic National Committee and Yahoo dominating the headlines. While such incidents certainly lead to much handwringing, cyberattacks perpetrated on individuals, companies and countries can have significant fallout that outlasts the current news cycle. At best, cyberattacks can be a nuisance, and at worst, they can have devastating and long-lasting negative implications. The Individual Level On an individual level, cyberattacks can have various degrees of impact. One common cyberattack scenario – a hacker steals credit card information and uses the account to make fraudulent purchases. These types of incidents are certainly disruptive but are more of a nuisance if a card holder reports the charges in a timely fashion, typically within 60 to 90 days. This may be among the largest cybercrime issues, says Brad Gross, an attorney based in Weston, Fla., who has served as a prosecutor of Internet and technology-based crimes for many years. Indeed, according to the Javelin Strategy & Research 2015 Data Breach Fraud Impact Report, nearly 32 million U.S. consumers had their credit cards breached in 2014. Despite that figure, “the real issue is one for the credit card company and not so much for the individual who isn’t going to be responsible for fraudulent charges,” says Gross. For the credit card companies, fraud is a big problem – in 2014, 90 percent of consumers who were victimized received new cards at a cost of up to nearly $13 each, according to the Javelin report. In response, card issuers have turned to EMV chips for increased transactional security. However, one of the unintended consequences of this move, says Shaun Murphy, inventor, founder, and CEO of Sndr.com, a provider of an encrypted app for communications, is that cybercriminals have turned their attention to other kinds of attacks against individuals. “Credit card information is very valuable to hackers, but other things like your best friend’s name in high school, your first pet’s name – that type of information is extremely beneficial,” Murphy says. To perpetrate financial fraud, hackers need in-depth knowledge of individuals often gained through doxing – the process of capturing pieces of personal information from social media and other online sources and aggregating that information to create a virtual profile. “This concept of doxing is one step in the process of figuring out who you are online and how to get access to information,” Murphy says. One aim of doxing efforts: cybercriminals take over an individual’s social media and email accounts – freezing access to these accounts until individuals pay ransom. These doxing actions can also allow cybercriminals to gain access to bank accounts or incur debts in an individual’s name – financial losses that can fall solely on the wallets of individuals. In addition to monetary losses, such doxing incidents can lead to credit problems which can have long-term implications. In effect, says Murphy, such attacks are the equivalent of a digital mugging – the victim is left to deal with the loss if the breach isn’t discovered and reported promptly. What is more unsettling for individuals is that credit card and bank information usually are accompanied by personal information including address, medical data and social security numbers – ultimately the kind of data that leads to identity theft. 
In these situations, an individual's very identity is hijacked by thieves, potentially wreaking havoc on a person's ability to get a credit card, take out loans – even get a job. Identity theft is particularly nefarious, says Gross. "First there is the actual damage – such as accounts opened or debts incurred in an individual's name," he says. "Then there is the personal impact that can't be overlooked. The amount of time an individual has to invest in getting his or her identity back is staggering." That said, individuals are not likely to abandon social media nor keep their cash under their mattresses, since every aspect of personal life is increasingly digital and connected. Joseph Carrigan, a senior security engineer on the staff at the Johns Hopkins University Information Security Institute in Baltimore, says it behooves individuals to up their security hygiene, which currently is in a sorry state. This should begin with "establishing strong passwords that are diverse across sites," he explains. Individuals also shouldn't assume that websites where they have entered their passwords offer adequate protection – and hence they should change their passwords frequently. As for personal information, keep backups to ensure that important data is not held hostage. In short, says Carrigan, individuals need to be constantly vigilant and take a certain amount of ownership for securing their own data. The Company Level For companies, cyberattacks are a constant threat. Target, TJX and Home Depot are among the high-profile companies that have had their systems breached and consumer data exposed. In the short term, such breaches can negatively affect those businesses that are victims of large and well-publicized hacks. At Target, for example, both the CEO and CIO were replaced following the 2013 breach that exposed security vulnerabilities. And Yahoo's most recent revelation in December of a breach resulted in an immediate hit to its stock price and called into question whether a proposed merger with Verizon would take place. Yet security experts say that such negative fallout is often short term. "I haven't stopped shopping at Target, nor have most consumers as the company is doing well," says Carrigan. Consumers may be appeased by the credit monitoring and reporting that companies are typically required to provide in the event of a data breach. In effect, the backlash from consumers is temporary, particularly at those big box retailers that offer convenience and pricing advantages. Like individuals, companies have also been victims of ransomware – and have paid to have their systems unlocked. While Carrigan doesn't advocate paying a ransom (as it often serves to invite further such attacks), he understands the propensity for doing so; often the ransom is less than what it would cost to hire consultants to unlock data. Of more concern to companies is the theft of intellectual property, such as trade secrets, copyrighted material, product designs, customer lists, inventions and the like. Such information is valuable, both in terms of present revenue and future potential revenue. The loss of IP today can adversely affect a company's profits, stock price and very existence long-term, yet the depth of the threat is likely unknown. For one thing, IP can have intangible value – what is the cost of the inability to forge business contracts due to the loss of a trade secret? Then there's the bad publicity in the cases where insiders – employees or former employees – have stolen IP.
In these situations, companies often opt not to share embarrassing details – IP theft is not subject to the same disclosure requirements as the theft of patient or consumer data. In addition, companies themselves are often unaware that they have been victimized. "I suspect in the majority of situations, businesses don't know they've had their IP stolen," says Kevin Beaver, principal information security consultant at Principle Logic LLC in Atlanta. "If you don't have the proper security controls to detect it, then you won't know when it happens."

Yet on a larger scale, IP theft is a serious and far-reaching issue that costs U.S. businesses billions of dollars annually, according to 2014 statistics from the FBI. In response, the FBI and the Department of Justice have teamed up on a collaborative strategy enlisting online marketplaces, payment service providers and online advertising platforms aimed at combatting online theft of IP. As security experts see it, the growing incidence of IP theft may serve as a wake-up call for businesses to take a more thorough approach to securing important corporate data. "Various studies have shown that IT and security pros don't know where their sensitive information is located, nor do they know what services are being used and what information is being sent out to the cloud," Beaver says.

A recent case highlights the problem: Chinese nationals were charged with hacking into the email systems of two prestigious New York law firms to access confidential data about mergers and acquisitions, then profiting through trades. Certainly, the law firms' reputation for maintaining attorney-client privilege has been compromised – at least for the short term. However, such breaches serve to highlight vulnerabilities that often point to more serious system issues. "The migration to the cloud, storing all of our files and messages on some provider's systems and trusting that provider to take care of it, is not adequate," says Murphy. With potentially millions of dollars of their own money at risk, companies need to take a more proactive and comprehensive approach to information security, encompassing not only technology but people and processes as well.

The Country Level

On a country level, "the U.S. has been a target of cyberattacks for decades — 99 percent of which you will never know about nor ever know what the fallout or resolution will be," says Gross. The recent situation involving the Democratic National Committee and alleged Russian meddling may seem surprising to the general population, but such operations are fairly commonplace, says Gross.

If nation-states can perpetrate attacks on each other, it is not much of a theoretical stretch to think that terrorists can perpetrate cyberterrorist attacks as well. "For the foreseeable future, cyberattacks are the biggest risk for the country," says Murphy. While the threat of a conventional attack is ever-present, "there's a certain level of luck involved in pulling off a terrorist attack in terms of the timing and escaping detection," Murphy explains. "But with a cyberattack, you can have a room with three people in it and a $200 laptop and they can bring down a country's critical infrastructure overnight." A small cyberterrorist group can take down a power grid, disrupt financial transactions or hack into government systems to expose sensitive information. The goal of such attacks is often not to cripple the infrastructure for the long term.
Bringing down a power grid for even a few hours – "a quick hit, if you will" – is enough to undermine the confidence and security of the American public, says Gross. "And that in and of itself has far-reaching effects." Taking down a power grid for a bit longer – say, two or three days – can have a cascading effect on the food supply, water systems and economic activity.

One of the challenges with cyberterrorism incidents, as opposed to state-sponsored cyberattacks, is that they are asymmetrical, says Carrigan. When nations are involved in cyber warfare, both sides have infrastructure that can be targeted in retribution – hence they are symmetrical. That is not the case for terrorists: a group can take down a power grid, and there is nothing commensurate that the U.S. can target in response. That makes fighting cyberterrorism particularly challenging, says Carrigan. It is the vulnerabilities within critical infrastructure – particularly in the energy and financial sectors, and exacerbated by how interconnected today's systems are – that give him the most concern.

As with cyberattacks on individuals and companies, increased security measures can certainly mitigate some threats, but they must be combined with policies and vigilance to ensure the best safeguards.
<urn:uuid:ac388764-9fe8-465e-8adf-98f5eef0b26e>
CC-MAIN-2017-09
https://techdecisions.co/network-security/worst-case-scenario-cyber-attacks/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169776.21/warc/CC-MAIN-20170219104609-00412-ip-10-171-10-108.ec2.internal.warc.gz
en
0.960538
2,380
2.65625
3
A decompiler is commonly viewed as a tool to recover the source code of a program, the same way as a disassembler is a tool to convert a binary executable program to an assembler text. This is true in some cases but only in some. Most of the time the goal is not to get a compilable text. What would one do with such a text? Recompile and link it to get a copy of the input binary program? The only legitimate situation I can imagine is when the source text has been lost and the owner of the program tries to recover it. Even in this case I personally would rewrite everything from scratch because the disassembler/decompiler output is a nightmare to maintain: it lacks comments, inline functions, preprocessor symbols, abstractions, class inheritance, etc. In short, I wouldn’t consider obtaining a compilable text as a major goal of a binary reverse engineering tool (RET). One goal of a RET is to represent the program logic in the most precise and clear way. This representation should be easily understandable by a human being. This means that the fundamental blocks comprising the program logic should be easily visualized on request. Questions like “what is the function called from the current instruction”, “where is the current variable initialized or used”, “how many times is the current instruction executed” arise continuously during the analysis of almost any program. In simple cases a disassembler can answer these questions, a decompiler is more intelligent and can perform deeper analysis (e.g. pointer analysis and value propagation could help to find the answer even in the presence of indirect addressing modes and loops). In both cases the RET helps the analyst to grasp the meaning of the program and convert a sequence of instructions into very high level descriptions like “send a mail message to all contacts” or “get list of .doc files on the computer”. The clearest program representation is of a paramount importance to understand its purpose. Another important goal of a RET is to automate program analysis and instrumentation. In this case RET can be used as a basic building unit for another program. We could analyze the input program in order to: - find logically equivalent blocks (copyright breaches; recognition of library functions) - find logical flaws (automatic vulnerability search; code quality assessment) - deobfuscate it (malware analysis) - add profiling instrumentation (optimization; code coverage) - prepare various inputs (testing) Obviously we do not need to have a nice and readable program representation to achieve the second goal. Everything can stay in the binary form because the result of the analysis will be used by another computer program. One may even ask himself, “is it a decompilation, after all?” The input program is not converted into a high level language (HLL). More than that, the program text may even not be generated. However, both cases use the same fundamental algorithms to analyze the input program. These algorithms are similar to the algorithms used in HLL compilers. The core of a RET builds an intermediate internal representation (usually a tree or graph-like data structure) which can then be either converted to an HLL text or used to discover interesting properties of the input program. In the same way a compiler builds an intermediate representation which is used to generate machine code. Maybe we should stop using the word “decompiler” so widely to avoid confusion and to stop mixing concepts.
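The "tree or graph-like data structure" mentioned above is easier to picture with a toy example. The sketch below is not IDA's or any real decompiler's internal format; it is a minimal, hypothetical intermediate representation in Python, showing how the same structure can serve both goals discussed here: rendering readable pseudocode for a human, and answering an automated query such as "which functions are called from this statement?"

from dataclasses import dataclass, field
from typing import List

# A deliberately tiny IR: statements and expressions are trees.
@dataclass
class Num:
    value: int

@dataclass
class Var:
    name: str

@dataclass
class Call:
    target: str
    args: List[object] = field(default_factory=list)

@dataclass
class Assign:
    dest: Var
    src: object

def to_pseudocode(node):
    """Render the IR as human-readable pseudocode (goal 1: a clear representation)."""
    if isinstance(node, Num):
        return str(node.value)
    if isinstance(node, Var):
        return node.name
    if isinstance(node, Call):
        return f"{node.target}({', '.join(to_pseudocode(a) for a in node.args)})"
    if isinstance(node, Assign):
        return f"{to_pseudocode(node.dest)} = {to_pseudocode(node.src)}"
    raise TypeError(node)

def called_functions(node):
    """Walk the same IR to answer an analysis query (goal 2: automated analysis)."""
    found = []
    if isinstance(node, Call):
        found.append(node.target)
        for a in node.args:
            found.extend(called_functions(a))
    elif isinstance(node, Assign):
        found.extend(called_functions(node.src))
    return found

stmt = Assign(Var("total"), Call("send_mail", [Var("contacts"), Num(1)]))
print(to_pseudocode(stmt))      # total = send_mail(contacts, 1)
print(called_functions(stmt))   # ['send_mail']

The point of the sketch is that the pseudocode printer is just one consumer of the representation; a copyright-breach detector or a vulnerability scanner could walk the very same tree without ever producing text.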
<urn:uuid:79ec1c51-1c9f-45d4-acb8-d59e8e880572>
CC-MAIN-2017-09
http://www.hexblog.com/?p=25
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171070.80/warc/CC-MAIN-20170219104611-00288-ip-10-171-10-108.ec2.internal.warc.gz
en
0.918706
724
3.125
3
Data warehousing is a technology that aggregates structured data from one or more sources so that it can be compared and analyzed for greater business intelligence. Data warehouses are typically used to correlate broad business data to provide greater executive insight into corporate performance. Data warehouses use a different design from standard operational databases. The latter are optimized to maintain strict accuracy of data in the moment by rapidly updating real-time data. Data warehouses, by contrast, are designed to give a long-range view of data over time. They trade off transaction volume and instead specialize in data aggregation. Many types of business data are analyzed via data warehouses. The need for a data warehouse often becomes evident when analytic requirements run afoul of the ongoing performance of operational databases. Running a complex query on a database requires the database to enter a temporary fixed state. This is often untenable for transactional databases. A data warehouse is employed to do the analytic work, leaving the transactional database free to focus on transactions. The other benefits of a data warehouse are the ability to analyze data from multiple sources and to negotiate differences in storage schema using the ETL process.
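As a rough illustration of the ETL idea (not a reference to any particular warehouse product), the sketch below uses Python's built-in sqlite3 module to stand in for both an operational database and a warehouse table: rows are extracted from a transactional table, transformed by aggregating them to a daily grain, and loaded into a summary table built for analysis. The table and column names are made up for the example.

import sqlite3

# One in-memory database stands in for both systems to keep the sketch short.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, order_day TEXT, amount REAL);   -- operational
    CREATE TABLE daily_sales (order_day TEXT, total REAL);           -- warehouse
    INSERT INTO orders VALUES (1, '2017-02-01', 19.99), (2, '2017-02-01', 5.00),
                              (3, '2017-02-02', 42.50);
""")

# Extract: pull raw transactional rows.
rows = conn.execute("SELECT order_day, amount FROM orders").fetchall()

# Transform: aggregate to the grain the analysts want (one row per day).
totals = {}
for day, amount in rows:
    totals[day] = totals.get(day, 0.0) + amount

# Load: write the aggregated view into the warehouse-style table.
conn.executemany("INSERT INTO daily_sales VALUES (?, ?)", sorted(totals.items()))

print(conn.execute("SELECT * FROM daily_sales ORDER BY order_day").fetchall())
# [('2017-02-01', 24.99), ('2017-02-02', 42.5)]

In a real deployment the extract step would run against the live transactional system on a schedule, which is exactly why the heavy aggregation work is pushed to the warehouse rather than the operational database.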
<urn:uuid:9662a0ec-d3b1-4feb-8b87-3e1181c4a98a>
CC-MAIN-2017-09
https://www.informatica.com/sg/services-and-training/glossary-of-terms/data-warehousing-definition.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173405.40/warc/CC-MAIN-20170219104613-00340-ip-10-171-10-108.ec2.internal.warc.gz
en
0.915964
228
2.90625
3
How Big Data can help us learn about the words we use and the world we live in Did you know that the majority of folks in Midland America use “uh” over “um”? That “gin and juice” has been in sharp decline since 1994? Or what about the fact that the American Dream didn’t exist until the late 1920s? That’s not to say that Americans haven’t dreamed of bigger and better prior to 1930, only that the phrase itself started gaining in popularity around that time. Not to get too much into the semiotic weeds, but the phrase “American Dream” is what’s called a signifier; a word or phrase, and nothing more. The other side of that equation, the signified, is the Rags-to-Riches story that so often comes to mind when we read or hear the phrase. It’s an interesting correlation that, as the country was steadily declining into what would become the Great Depression, the wish for riches began to manifest itself in our common language. The question then is, what does any of this have to do with Big Data? The short answer is: everything. Linguists have long studied trends in language but have found their data sets too small and incomplete to ever get a clear picture of the path of certain words or phrases. Now, thanks to sites like Google Books Ngram project, Rap Genius, and researchers like Jack Grieve (of the “uhs” and “ums”) we have the ability to not only access those massive amounts of data, but to begin searching across the breadth of them as well. And when we say massive, we’re talking Google’s repository of over 4 million books spanning a 200 year period, thousands upon thousands of crowdsourced rap lyrics, and some 6 billion words searched for all those “uhs” and “ums.” So why does any of this matter? It matters because it’s through the study of language and trends within language that we gain insights into much more than just words. We’re able to see the stories behind the trends, be they as big as the advent of the American Dream, or as little as the decline of a popular topic in rap music. We’re able to see all the things that are most important to a culture and for how long they stayed important. We’re able to see that the American Dream is as relevant now as it was in the 1930s, while our interest in Snoop Dogg’s drink of choice has waned a bit. The big question is what do we do with not only the information about the phrases that are trending, but the Zeitgeist associated with those phrases? The short answer is: a lot. Take a recently familiar term, binge watching. While the idea of marathon watching a season’s worth of TV shows has been around since the release of VHS boxsets, it’s only recently that the term binge watching has come into the mainstream, thanks in large part to Netflix. While the company released all of Lilyhammer in early 2012, it was the first season of House of Cards in 2013 that brought both the term and associated action to the forefront of a lot of people’s minds. The ability to recognize the trend of both the phrase and the action of binge watching has proven valuable to some businesses. Over the past two years Comcast has been allowing viewers unlimited access to entire seasons of shows during their annual Watchathon, an “epic week of binge viewing.” The week coincides with a lot of shows that are either ending their current seasons or on the verge of a season premiere. What they’ve found is that shows featured in the promotion saw as much as a 69% increase in the live viewing numbers for the premiere. Their success comes as a result of a handful of vital elements. 
The first is the identification of a trending phrase; that’s the signifier, the collection of letters and words. The second key piece is to understand what emotional concept exists in tandem with the phrase; the pleasure that comes from being able to catch up on an entire show’s history or burning through a just released series in a weekend. Companies such as Comcast that are able to bring those two pieces of the puzzle together are able to not only increase its viewership for live premieres, but ensure that its customers are happy and satisfied. Photo Credit: Matheus Almeida
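To make the mechanics of this kind of trend-spotting concrete, here is a small sketch of the core operation behind services like the Ngram viewer: counting how often a phrase appears in a corpus, year by year. The corpus here is a tiny hard-coded stand-in, not Google's actual dataset or any published API.

from collections import Counter

# Hypothetical miniature corpus: (year, text) pairs standing in for millions of books.
corpus = [
    (1920, "hard work and thrift were the values of the day"),
    (1931, "the american dream was within reach they said"),
    (1933, "chasing the american dream through the depression"),
    (1994, "gin and juice on the radio all summer"),
    (2013, "binge watching an entire season in one weekend"),
]

def phrase_counts_by_year(corpus, phrase):
    """Count occurrences of a phrase per year, the heart of an n-gram trend query."""
    counts = Counter()
    for year, text in corpus:
        counts[year] += text.lower().count(phrase.lower())
    return dict(counts)

print(phrase_counts_by_year(corpus, "american dream"))
# {1920: 0, 1931: 1, 1933: 1, 1994: 0, 2013: 0}

At Google's scale the counting is distributed across billions of pages of text, but the output is the same shape: a frequency per year that can be plotted, compared and correlated with events in the wider culture.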
<urn:uuid:b7762b84-56c3-4f96-9bdc-3d3d84dd6470>
CC-MAIN-2017-09
http://blog.nexidia.com/watching-language/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170249.75/warc/CC-MAIN-20170219104610-00108-ip-10-171-10-108.ec2.internal.warc.gz
en
0.968863
940
2.9375
3
An asteroid is flying relatively closely past Earth today, NASA reported. No need to duck or take to the root cellar, though. The asteroid, dubbed 2014 DX110, will pass by closer than the distance between the Earth and the moon. However, it will safely pass by, with its closest approach, at about 217,000 miles, at 4 p.m. ET. That is relatively close, considering that the average distance between Earth and its moon is 239,000 miles. NASA, in October, released a report that another asteroid was passing close by Earth. That asteroid, 2013 TV135, was the size of about four football fields and came within 4.2 million miles of Earth; its orbit carries it roughly three quarters of the way out to the orbit of Jupiter. The asteroid passing by today is much smaller, measuring about 100 feet across. Scientists are increasingly interested in studying asteroids to help protect the planet in the event of a possible devastating collision. They also want to learn whether the makeup of asteroids might offer clues to the birth of the solar system. NASA's asteroid-grabbing plan includes finding a near-Earth asteroid that weighs about 500 tons but may be only 25 or 30 feet long. The space agency is aiming to have astronauts visit an asteroid by 2025. The plan, which has been opposed by House Republican leaders, is being supported in NASA's proposed 2015 budget. This article, "Duck! Just kidding. Asteroid to pass safely past Earth today," by Sharon Gaudin, was originally published at Computerworld.com.
<urn:uuid:3a9b731e-f74c-4b5e-be2d-591c371b9a06>
CC-MAIN-2017-09
http://www.cio.com/article/2378145/government/asteroid-to-pass-safely-past-earth-today.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170249.75/warc/CC-MAIN-20170219104610-00108-ip-10-171-10-108.ec2.internal.warc.gz
en
0.959927
415
3.0625
3
Google Hacking involves an attacker submitting queries to Google’s search engine with the intention of finding sensitive information residing on Web pages that have been indexed by Google, or finding sensitive information with respect to vulnerabilities in applications indexed by Google. Google Hacking is by no means confined to searching through the Google search engine but can be applied to any of the major search engines. As search engines crawl their way through web applications with the intent of indexing their content they stumble upon sensitive information. The more robust and sophisticated these crawlers become the more coverage they get of a server exposed to the web. Thus any information, accidentally accessible through a web server or a web application will quickly be picked up by a search engine. Sensitive information may be on the personal level such as security numbers and credit card numbers and passwords, but it also encompasses technical and corporate sensitive information such as client files, the company’s human resources files, or secret formulas put accidentally on a server. Additionally the search engine picks up information that may expose application vulnerabilities such as error messages contained in the server’s reply to the search engine’s request, directory listings and so on. All this sensitive information is available for anyone to see through the appropriate search terms. Although the coined term highlights the giant search engine Google, we consider the domain of this attack to include all available search engines, including Yahoo!, Ask.com, LiveSearch and others. Real-life examples of data leaking onto the Web and found by Google include SUNY Stony Brook where the personal information of 90,000 people was jeopardized when the information was mistakenly put on the Web, Jax Federal Credit Union where information was picked up by Google from a Web site belonging to JFCU print service provider, and the compromise of the personal details of several thousands residents by the Newcastle-upon-Tyne city council. Different resources exist which provide effective terms to use for Google Hacking. Probably the most renowned source is Johnny’s I Hack Stuff Google Hacking Database which contains a comprehensive list of terms used to search the Web for files containing authentication credentials, error codes and vulnerable files and servers and even Web server detection. Furthermore, Google Hacking may also be used as a tool for fast proliferation of malicious code. The famous SantyWorm defaced Web sites by exploiting a certain PHP vulnerability. The SantyWorm spread to vulnerable machines by searching Google for such machines and infecting them. Search Engine Hacking Prevention: Unfortunately, once sensitive information is available on the Web, and thus available via a search engine, a professional information-digger will most probably get his or her hands on it. However, there are a few measures one can easily apply to prevent search engine related incidents. Prevention includes making sure that a search engine does not index sensitive information. An effective Web Application Firewall should have such a configurable feature – with the ability to correlate search engines’ user-agent or a range of search engines’ IP addresses with patterns on requests and replies that hint of sensitive information, such as non-public folder names like “/etc” and patterns that look like credit card numbers, and then blocking replies if there is a chance of leakage. 
Pattern lists may also be found at Johnny’s I Hack Stuff resources. Detection of sensitive data appearing in a web search includes periodically checking Google to see whether information has leaked. Available tools with just that task in mind may be found on the Internet, such as GooScan and the Goolag Scanner.
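As a rough sketch of the response-filtering idea described above (not the configuration syntax of any actual Web Application Firewall), the Python snippet below checks an outgoing reply against a couple of illustrative patterns (a credit-card-like digit run and a sensitive path such as /etc) and flags it for blocking when the requester looks like a search-engine crawler. A real deployment would use vetted pattern lists, such as those referenced above, and proper card-number validation rather than a bare regular expression.

import re

# Illustrative patterns only: a 13-16 digit run and references to non-public folders.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{13,16}\b"),
    re.compile(r"/etc/\w+"),
]
CRAWLER_AGENTS = ("googlebot", "bingbot", "slurp")   # partial, lower-cased matches

def should_block(user_agent: str, response_body: str) -> bool:
    """Return True if a crawler request would receive sensitive-looking content."""
    is_crawler = any(bot in user_agent.lower() for bot in CRAWLER_AGENTS)
    leaks = any(p.search(response_body) for p in SENSITIVE_PATTERNS)
    return is_crawler and leaks

print(should_block("Mozilla/5.0 (compatible; Googlebot/2.1)", "card: 4111111111111111"))  # True
print(should_block("Mozilla/5.0 (Windows NT 10.0)", "card: 4111111111111111"))            # False

The same correlation can be done on source IP ranges instead of user-agent strings, which is harder for an attacker to spoof.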
<urn:uuid:ff077514-d081-4731-964b-050577b30ded>
CC-MAIN-2017-09
https://www.imperva.com/Resources/Glossary?term=google_hacking
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170249.75/warc/CC-MAIN-20170219104610-00108-ip-10-171-10-108.ec2.internal.warc.gz
en
0.917126
723
2.9375
3
Manage Linux Storage with LVM and Smartctl

In our recent installments we learned a whole lot about Linux software RAID. Today we're going to learn about LVM, the Logical Volume Manager, and Smartctl, the excellent hard disk health-monitoring utility. LVM creates logical storage volumes, and allows you to increase the size of your volumes painlessly on live filesystems. Smartctl uses your hard disk's built-in Self-Monitoring, Analysis and Reporting Technology (SMART) to test its health, and to warn you of impending failures. SystemRescueCD and Knoppix CD both include LVM, mdadm, and smartmontools when you need an external repair or management disk.

LVM is actually LVM2, but most folks call it LVM and are happy. LVM1 should have long disappeared from distribution repositories, and it's easy enough to check:

$ lvm version
  LVM version: 2.02.26-RHEL5

Good enough. (Incidentally, you might be interested in this article about Red Hat's new official don't-say-RHEL policy.)

Creating LVM volumes overwrites everything; you can't create a new LVM setup over existing data. These days most Linux distributions include LVM options in their installers, which is a nice easy way to set it up. It is possible to put your root filesystem in an LVM volume, but I don't recommend it. It complicates booting, updates, and repairs, and your root filesystem shouldn't be growing at such a rate that you need LVM anyway.

The steps to setting up LVM are simple, and you can practice on a single hard disk with multiple partitions. First have at least two disk partitions available, then initialize your physical volumes, create a volume group, and then create your logical volumes:

# pvcreate -v /dev/sda1 /dev/sda2
# vgcreate -v -s 32 vg-testvm /dev/sda1 /dev/sda2
# lvcreate -v -L 4g -n lv-home vg-testvm
# lvcreate -v -L 2g -n lv-var vg-testvm

Use vgdisplay -v and lvdisplay -v to see your new creations and complete details. My own naming convention is to use "vg" to indicate a volume group, and "lv" for a logical volume. So you see the structure here: the volume group is your total LVM storage space, which is made up of several physical disk partitions, and then you have to divide your volume group into logical volumes, or even just one logical volume.

The -v switch turns on verbosity so you know what it's doing, and -s 32 creates physical extents that are 32 megabytes in size. Extents are often shrouded in mystery because no one bothers to explain them, but actually they're not mysterious at all. Physical extents are LVM's individual storage blocks, so the smallest possible size for a logical volume is a single extent. LVM1 limited a volume to a maximum of 65,536 extents; with the default extent size of 4 MB, that capped a volume at about 256 GB. LVM2 removes that hard limit, but the old rule of thumb is still a reasonable way to choose an extent size: divide the desired size of your volume by 65,536. Extent sizes must be a power of 2, so round up to the next one and leave room for growth. Extent size doesn't affect performance, just your storage allocations. Extents are fixed when you create your volume group, so you can't change them later. You have to increase or decrease the size of your volumes in multiples of your extent size, so here we're limited to 32 MB increments. The maximum possible size of a logical volume for 2.6 kernels is 16 terabytes on 32-bit systems, and 8 exabytes on 64-bit systems.

LVM filesystems and Mountpoints

Now it's time to put filesystems and mountpoints on your logical volumes.
Logical volumes are akin to physical disk partitions, so "lv-home" is going to be /home, and "lv-var" is /var:

# mkfs.xfs /dev/vg-testvm/lv-home
# mkfs.ext3 /dev/vg-testvm/lv-var

You may use any filesystem you want. Now create your mountpoints, adjust permissions and ownership, and then create your /etc/fstab entries. You can use either the /dev names or UUIDs (a UUID goes in the first field in place of the /dev name):

/dev/vg-testvm/lv-home /home xfs defaults 0 2
/dev/vg-testvm/lv-var /var ext3 defaults 0 2

UUID=8d566d0e /home xfs defaults 0 2
UUID=681919d5 /var ext3 defaults 0 2

The UUIDs are truncated to conserve pixels; vgdisplay -v shows your UUIDs. Now you can reboot or manually mount your new logical volumes, and you're ready to start using them just like physical disk partitions.

Increasing the Size of a Logical Volume

Follow these steps to add a physical disk partition to an existing volume group and then grow a logical volume into the new space:

# pvcreate -v /dev/sdb1
# vgextend vg-testvm /dev/sdb1
# lvextend -L +10G /dev/vg-testvm/lv-var

Then you must resize your filesystem using the resizing command specific to your filesystem. ReiserFS can be safely resized while mounted, and XFS must be mounted. Ext2/3 should be unmounted first:

# umount /var
# resize2fs -p /dev/vg-testvm/lv-var
# mount /var

The others look like this:

# resize_reiserfs /dev/volumegroup/logical-volume
# xfs_growfs /home

ReiserFS uses the /dev name, and XFS uses the name of the mountpoint. JFS is rather complicated, so I shall leave it to the reader to find and follow correct instructions.

Disk Failure Warning System

It's a lot nicer to replace a failing disk at leisure than to be surprised, so the smartmontools package is a great addition to your LVM and RAID setups. SMART can often be enabled in your BIOS, or with smartctl. First see if it is turned on:

# smartctl -i -d ata /dev/sda
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

# smartctl -H -d ata /dev/sda
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

If your drive keeps its own error log, and not all of them do, check this too:

# smartctl -l error -d ata /dev/sda

If there are no errors, or just a few old transient errors, it's OK. If the drive fails the health test and shows many errors, especially repeated errors, the disk is doomed. You can enable smartd to continually monitor disk health, and warn you of impending problems. On CentOS it's on by default, so all you have to do is edit /etc/smartd.conf to name the disks you want monitored, and enter your email for notifications. Fedora users have to turn it on in the Services control panel. On Debian and Ubuntu, enable it in /etc/default/smartmontools. This is a simple example /etc/smartd.conf:

/dev/hda -H -m [email protected]
/dev/sda -d ata -H -m [email protected]

And that's the works. This is very simple, and many disks still run even though they fail the health test, so you have time to replace them before they die completely. You can run more extensive tests and get even better warnings of impending troubles; see man smartctl and man smartd for details.

RAID 10 Update

There is little documentation on Linux's RAID 10, or how it differs from RAID 1+0. Fortunately, some readers passed on some good information on what RAID 10 is, and what sets Linux's implementation apart. First of all, never mind that everyone says RAID 10 and 1+0 are the same thing (even man md says so) – they're not the same.
RAID 10 is a distinct array type: it is a single array, it can have an odd number of drives, and the minimum required is two. RAID 1+0 is a nested combination of multiple arrays; it requires at least four disks and must always have an even number.
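Finally, to round out the disk failure warning system described above, here is a minimal sketch of how the same health check could be scripted, for example from cron, by parsing the output of the smartctl -H command already shown. The device list and the PASSED string come from the examples in this article; adjust both for your own drives, and note that smartctl generally needs root privileges.

import subprocess

DEVICES = ["/dev/sda", "/dev/hda"]   # the same disks named in the smartd.conf example

def smart_health_ok(device: str) -> bool:
    """Run 'smartctl -H' and look for the PASSED verdict shown earlier."""
    result = subprocess.run(
        ["smartctl", "-H", "-d", "ata", device],
        capture_output=True, text=True
    )
    return "PASSED" in result.stdout

for dev in DEVICES:
    status = "OK" if smart_health_ok(dev) else "WARNING: check this disk"
    print(f"{dev}: {status}")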
<urn:uuid:84c344c9-3d9c-4774-ad47-9ba4cb004f9d>
CC-MAIN-2017-09
http://www.enterprisenetworkingplanet.com/print/netsysm/article.php/3733141/Manage-Linux-Storage-with-LVM-and-Smartctl.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172018.95/warc/CC-MAIN-20170219104612-00512-ip-10-171-10-108.ec2.internal.warc.gz
en
0.860745
1,881
2.640625
3
Do you feel compelled to wear a Richard Nixon mask or a baseball hat equipped with infrared signal emitters on the brim when you leave the house? If so, you may be trying to prevent a passerby on the street from guessing your name, interests, Social Security number, or credit score using only a pair of face-scanning glasses and an iPhone. This is not science fiction—law enforcement has been using facial recognition technology for years. Through advances in facial recognition software and the convergence of the vast amount of personal information on social networks (especially photographs), smartphones, the power of cloud computing, and statistical re-identification, the use of this technology has the potential to become widespread. The potential ubiquitous use of facial recognition technology raises critical concerns regarding privacy, security, and basic freedom. Facial recognition technology traces its origin to government-funded research in the 1960s. The technology works by using an algorithm to create a unique numerical code from distinguishable landmarks on faces, sometimes called nodal points. The technology measures approximately 80 nodal points, such as the distance between eyes, nose width, eye socket depth, and jaw line length. The unique code or “biometric template” created by facial recognition software from a photograph can be stored in a database and later compared to other photographs to create a match. There are several applications of facial recognition technology in law enforcement that most would agree are useful. Police in Tampa, Florida have made over 500 arrests after identifying suspects by taking photographs at a traffic stop and comparing the images to a mugshot database. In 2010, the Massachusetts state police obtained over 100 arrest warrants for creating false identities and revoked 1,860 licenses using facial recognition software against the state’s driver’s license registry. In Britain, Scotland Yard is using facial recognition software to identify suspects from the recent riots in London. Facial recognition can also provide modern convenience. Since 2002, Australians have been able to use self-processing e-passports at airport customs checkpoints. Advertisers have generated more relevant billboard advertisements based on the age and gender of passers-by. Even Facebook uses facial recognition to suggest the identity of friends to tag in a photo, and programs like iPhoto and Picassa allow users to organize photographs by faces. The technology is not foolproof, and there are other applications that are outright alarming. The ability to successfully identify a person by matching two photographs is dependent on the quality of the images. If the person in the photograph is not directly facing the camera with open eyes and in front of a plain, light-colored background, the performance of the facial recognition software declines. Thus, while you can obtain a high-quality picture from a driver’s license database, pictures taken without the cooperation of the subject (e.g. through surveillance cameras) rarely meet the ideal standard. Although the technology has improved over the last ten years, there is an inherent error rate because it is reliant on statistics. Accordingly, either matches that should be made do not occur or false identifications happen. A driver in Boston recently had his license revoked because his picture closely matched the picture of another driver. Although his license was returned, it took days of wrangling for him to prove his identity. 
At least 34 other states are using similar technology. There are no current reported statistics on the number of false positives, but Massachusetts alone issues 1,500 suspension letters per day using the system. On August 4, 2011, researchers from Carnegie Mellon’s CyLab presented the results of three experiments from which they concluded that it is possible to use facial recognition software to identify strangers and then determine sensitive information about that person, including their Social Security number. In one experiment, the researchers were able to identify members of Match.com, who used pseudonyms on the dating site to protect their identities, by comparing their profile photograph to photographs on Facebook. In the second experiment, they took photographs of college students that they were able to successfully match one-third of the time to the student’s Facebook profile (in less than three seconds). In the third experiment, the researchers used a custom iPhone application to predict a stranger’s Social Security number (generally just the first five digits) by matching a photograph to a Facebook profile picture in conjunction with information about the stranger’s state and year of birth gathered online. The lead researcher, Alessandro Acquisiti, said: “A person’s face is the veritable link between their offline and online identities.” In addition to the obvious privacy concerns, there are security and personal liberty concerns. According to a report, one in 750 passengers scanned at an international airport in the United States is falsely identified, and some of the falsely identified individuals may have been temporarily detained by the FBI. In locations where biometric data like facial recognition is used to gain entry to a secured area or through customs, the failure of those institutions to safeguard that data in a computer system can lead to unauthorized persons gaining access. Although it is not yet possible to consistently and accurately identify all of the faces in a crowd, the technological limitations are likely to continue to fade. The billions of images tagged on social networking sites and associated data provide an easily accessible source of personal information to match with other offline data collected by data aggregators, which can be turned into detailed personal profiles and sold to companies for use in behavioral advertising targeted directly to you through your smartphone or cable box. It may become possible to search for a person online using an image of their face just as easily as it is now to enter a name in a search engine. On the law enforcement side, the FBI will begin testing its Next Generation Identification facial recognition system in January 2012 in four states. The system, which will also use biometric indicators (e.g. iris scans and voice recordings) to identify suspects, will match a photo of an unknown person against mug shots. Facial recognition technology has not gone unnoticed by lawmakers and regulators. The FTC is hosting a workshop to explore beneficial uses of the technology and the associated privacy and security concerns on December 8, 2011. And U.S. Senator John Rockefeller has asked the FTC to provide a report on the findings from its workshop to his Commerce Committee. This article, which was published in the December 2011 CBA Report, is republished with permission.
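None of the vendors discussed here publish their matching algorithms, but the statistical nature of the problem, and why false positives like the Boston driver's happen, can be sketched with made-up numbers. Below, each face is reduced to a short vector of measurements (a stand-in for the roughly 80 nodal points), and two faces "match" when the distance between their templates falls under a threshold; tightening the threshold trades missed matches for fewer false identifications.

import math

def distance(a, b):
    """Euclidean distance between two face templates (lists of measurements)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_match(a, b, threshold=0.1):
    """Declare a match when the templates are closer than the threshold."""
    return distance(a, b) < threshold

# Hypothetical templates: eye distance, nose width, jaw length (normalized units).
license_photo  = [0.42, 0.31, 0.77]
surveillance_1 = [0.43, 0.30, 0.78]   # probably the same person
surveillance_2 = [0.60, 0.25, 0.70]   # probably someone else

print(is_match(license_photo, surveillance_1))   # True
print(is_match(license_photo, surveillance_2))   # False

Because the decision is a threshold on a noisy measurement, some rate of false positives is built in; poor lighting or an off-angle photo pushes genuine matches apart and unrelated faces together.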
<urn:uuid:854fb650-46b7-4e97-ae20-b43171578498>
CC-MAIN-2017-09
https://www.dataprivacymonitor.com/federal-legislation/facial-recognition-the-end-of-privacy-or-a-precursor-for-new-laws/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171608.86/warc/CC-MAIN-20170219104611-00508-ip-10-171-10-108.ec2.internal.warc.gz
en
0.947497
1,293
3.140625
3
Latency is the amount of time it takes for a single packet to traverse a network. It is normally expressed as a round-trip-time (RTT) which is the amount of time it takes a packet to go from A to Z and back again. A “ping” (from submarine sonar) is a common tool used to measure latency. There are three primary sources of latency in a network: - Time-of-flight: the time is takes a packet to physically traverse the cables in the network. - Buffers: Temporary storage areas in routers and switches used to absorb bursts of data between links of different speeds. This is a variable number based on utilization and router/switch design but is usually measured in hundreds of milliseconds for loaded networks. - Play-out-time: The amount of time it takes to get a packet onto a link. It is equal to packet size/link speed. On a loaded network, it is buffer latency that dominates. On an unloaded network, time-of-flight dominates. Lag is the Quality of Experience (QoE) impact on different network issues, resulting from latency. It is the delay experienced by a user when they press a button or drag a window and have to wait for something to happen. Latency causes lag but so does packet loss. When a user experiences lag, it manifests itself as: slow mouse movement, characters being delayed from appearing on the screen after typing or poor user experience with the interface. An example is when a user clicks on an icon but nothing happens so they click again and then skip two screens past where they want to go. Also, a user dragging on a window with the mouse with no immediate movement resulting. When you fix packet loss you reduce lag. IPQ from LiveQoS helps to reduce latency by reducing the retransmissions required. The result to the end user will be an improved quality of experience that can be measured in the reduction of the lag that is seen. View our latest video, which demonstrates virtual desktop lag.
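The play-out-time formula above (packet size divided by link speed) is easy to check with a quick calculation. The numbers below are illustrative rather than measurements from any particular network: a 1,500-byte packet on a 10 Mbps link, plus assumed figures for time-of-flight and buffering.

# Illustrative one-way latency budget for a single packet.
packet_bits = 1500 * 8              # a full-size 1,500-byte Ethernet frame
link_bps = 10_000_000               # 10 Mbps access link
playout_ms = packet_bits / link_bps * 1000

time_of_flight_ms = 5               # assumed propagation delay over the cables
buffer_ms = 40                      # assumed queueing delay in a loaded router

one_way_ms = playout_ms + time_of_flight_ms + buffer_ms
print(f"play-out: {playout_ms:.2f} ms, one-way total: {one_way_ms:.2f} ms, "
      f"round trip ~{2 * one_way_ms:.0f} ms")
# play-out: 1.20 ms, one-way total: 46.20 ms, round trip ~92 ms

The split illustrates the point made above: on a loaded network the buffer term dominates, while on an idle long-haul path time-of-flight would be the largest contributor.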
<urn:uuid:49a10a1c-5c64-4b3b-a2e1-ddb1ed9bbd3b>
CC-MAIN-2017-09
http://liveqos.com/latency-vs-lag-whats-the-difference/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173872.97/warc/CC-MAIN-20170219104613-00384-ip-10-171-10-108.ec2.internal.warc.gz
en
0.942357
423
3.671875
4
Dozens of self-signed SSL certificates created to impersonate banking, e-commerce and social networking websites have been found on the Web. The certificates don't pose a big threat to browser users, but could be used to launch man-in-the-middle attacks against users of many mobile apps, according to researchers from Internet services firm Netcraft who found the certificates. "The fake certificates bear common names (CNs) which match the hostnames of their targets (e.g. www.facebook.com)," the Netcraft researchers said Wednesday in a blog post. "As the certificates are not signed by trusted certificate authorities, none will be regarded as valid by mainstream web browser software; however, an increasing amount of online banking traffic now originates from apps and other non-browser software which may fail to adequately check the validity of SSL certificates." Among the self-signed certificates found by Netcraft were certificates for domain names belonging to Facebook, Google, Apple, Russian bank Svyaznoy and large Russian payment services provider Kiwi.ru. If an application doesn't properly validate the authenticity of certificates it encounters, attackers can use self-signed certificates that are not issued by legitimate certificate authorities (CAs) to launch man-in-the-middle attacks against that application's users. Such attacks involve intercepting the connections between targeted users and SSL-enabled services and re-encrypting the traffic with fake or forged certificates. Unless victims manually check the certificate details, which is not easy to do in mobile apps, they would have no idea that they're not communicating directly with the intended site. In order to pull-off man-in-the-middle attacks, hackers need to gain a position that would allow them to intercept traffic. This is relatively easy to do on wireless networks using techniques like ARP spoofing, but can also be done by compromising a router or by hijacking the victim's DNS settings. Web browsers are generally safe against man-in-the-middle attacks if attackers don't use valid certificates obtained illegally by theft or by compromising certificate authorities. That's because over the years the SSL implementations in browsers have been thoroughly tested, patched and strengthened. If modern browsers encounter a self-signed certificate, they will prompt a hard-to-ignore warning that will force users to either stop or manually confirm that they want to proceed despite the security risks. However, that's not the case with many other desktop or mobile applications. In 2012 a team of researchers from Stanford University and the University of Texas at Austin investigated the SSL implementations in many non-browser applications, both desktop and mobile. "Our main conclusion is that SSL certificate validation is completely broken in many critical software applications and libraries," they said in their research paper at the time. "When presented with self-signed and third-party certificates -- including a certificate issued by a legitimate authority to a domain called AllYourSSLAreBelongTo.us -- they establish SSL connections and send their secrets to a man-in-the-middle attacker." 
That same year another team of researchers from the Leibniz University of Hannover and Philipps University of Marburg in Germany analyzed 13,500 popular Android applications from Google Play and found that 1,074 of those applications contained SSL code that either accepted all certificates – including self-signed ones – or accepted certificates issued by legitimate authorities for domain names different than the ones the apps connected to.

It doesn't seem things have changed too much over the past two years. Researchers from security firm IOActive recently analyzed mobile banking apps for iOS devices from 60 financial institutions from around the world and found that while most of them used SSL, 40 percent of them did not validate the authenticity of digital certificates they received from servers, making them vulnerable to man-in-the-middle attacks.

Some of the self-signed certificates found by Netcraft tried to mimic legitimate certificates down to the name of the certificate authorities that supposedly issued them, which suggests they were specifically created for malicious purposes. One rogue certificate for *.google.com was crafted to appear as if it were issued by America Online Root Certification Authority 42, closely mimicking a legitimate AOL CA trusted in all browsers, the Netcraft researchers said. A certificate for login.iqbank.ru claimed to have been issued by Thawte, another legitimate certificate authority, and the rogue certificate for qiwi.ru claimed it had been issued by SecureTrust, a CA operated by Trustwave. The certificate for *.itunes.apple.com was crafted to appear as if it was signed by VeriSign.

This is in clear contrast with some self-signed certificates for high-profile domain names that were also found, but don't appear to be serving a malicious purpose. One such certificate, issued for several Google-owned domains, was signed by a made-up certificate authority called Kyocast Root CA. KyoCast is a third-party modification for rooted Chromecast devices that allows them to access services outside of Google. It intentionally redirects connections that should normally go to Google's services to servers operated by KyoCast, and the self-signed certificate is used in that operation.

There have been signs lately of cybercriminals becoming interested in large-scale man-in-the-middle attacks. The Polish Computer Emergency Response Team (CERT Polska) reported last week that attackers were exploiting vulnerabilities in home routers in Poland to change their DNS settings and intercept online banking traffic from users. Those particular attacks used a technique called SSL stripping to overcome banks using HTTPS on their sites, but certificate chain validation weaknesses in mobile banking apps could easily be exploited to achieve the same result, and with fewer indications to victims that an attack is in progress.
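The validation failures described in these studies usually come down to client code that skips either certificate-chain checking or hostname checking. As a hedged illustration (standard-library Python rather than any of the mobile SDKs mentioned above), the sketch below shows the behavior a correctly written client should have: with default verification enabled, a connection presenting a self-signed or mismatched certificate fails instead of proceeding silently.

import socket
import ssl

def connect_strict(host: str, port: int = 443) -> str:
    """Open a TLS connection with certificate and hostname verification enabled."""
    context = ssl.create_default_context()   # verifies the CA chain by default
    context.check_hostname = True            # and that the cert matches the hostname
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()

try:
    print(connect_strict("www.example.com"))
except (ssl.SSLError, ssl.CertificateError) as err:
    # A self-signed or forged certificate lands here rather than being accepted.
    print("refusing to talk to this server:", err)

The vulnerable apps in the studies above effectively disabled one or both of those checks, which is exactly what lets an attacker's re-encrypted traffic pass as genuine.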
<urn:uuid:39db68bd-0606-4c4e-87aa-d27108b1f958>
CC-MAIN-2017-09
http://www.cio.com/article/2378741/encryption/dozens-of-rogue-self-signed-ssl-certificates-used-to-impersonate-high-profile-sites.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173872.97/warc/CC-MAIN-20170219104613-00384-ip-10-171-10-108.ec2.internal.warc.gz
en
0.964526
1,173
2.78125
3
The Global Distribution System (GDS) is a unique tool used by travel agents worldwide to book air travel – yet it has very few security features. Unless the system’s IT infrastructure is overhauled, cybercriminals will continue to be able to access passengers’ personal data and hack their plane tickets. When booking tickets, passengers provide the GDS with their last name, first name, telephone number, passport number and other confidential information. This data is collected in “Passenger Name Records” or PNRs which, for the past 60 years, have been used by customs and police authorities to monitor international flows of people and combat terrorism and organised crime. Data was originally entered manually, but has been collected automatically using computers since the 1970s. Major GDS weaknesses Two cybersecurity researchers, Karsten Nohl and Nemanja Nikodijevic, recently studied the GDS. Worryingly, they concluded that the system’s infrastructure had only minimal security features, which dated back to the 1970s. The system’s main weakness is that no password is required to access PNR files – just the passenger’s last name and PNR code. This code only contains six characters, meaning hackers can repeatedly attack airline websites until they find the correct combination. This kind of attack is called a brute force attack. Another major weakness is the fact that the passenger’s name and PNR code appear on all flight tickets and baggage labels. They are either printed directly or using a barcode, which can easily be read using barcode apps. Passengers who share photos of plane tickets on social media are therefore extremely vulnerable to having their accounts hacked. In the space of just a few weeks at the end of 2016, the two researchers observed that 75,000 such photos had been posted. After obtaining the passenger’s last name and PNR code, hackers are able to access passenger and PNR files. From there, they can: - Steal the passenger’s frequent flyer miles, or cancel the original flight and use the airline credit to book another trip. - Modify the passenger name on the ticket and use it themselves, depending on the airline. - Use the passenger’s ticket without changing the name, in regions where identity checks are not always performed (for example, in the Schengen area). This form of identity theft represents a clear terrorism risk. - Steal the passenger’s personal data (PNRs in passenger files and data entered on the plane ticket such as geolocation). Essential security measures To improve system security, several simple measures could be implemented: - Record connection data in the GDS in order to trace fraudulent activity. - Add a user password, in addition to the six-character PNR code. - Limit the number of connection requests from a single IP address to prevent brute force attacks. In reaction to this study, one of the three GDS systems (Amadeus, Galileo and Sabre) appears to have implemented the last security measure. However, it may only have been installed on a temporary or one-off basis. PNRs were originally created to tackle terrorism and organised crime. The surprising vulnerability of the GDS system is regrettable, because it leads to two major risks: financial fraud and identity theft.
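Some rough arithmetic shows why the missing controls matter. Assuming a six-character booking code drawn from an alphanumeric alphabet of about 36 characters (the article does not specify the exact character set airlines use), the sketch below estimates the average time needed to brute-force one passenger's code, and how a per-IP request limit (the third measure suggested above) changes the picture. All request rates are hypothetical.

ALPHABET = 36                 # assumed: letters A-Z plus digits 0-9
CODE_LENGTH = 6
code_space = ALPHABET ** CODE_LENGTH       # ~2.2 billion possible PNR codes
average_tries = code_space / 2             # expected guesses to hit one specific code

def days_to_crack(requests_per_second: float) -> float:
    """Average days needed to brute-force a single booking at the given rate."""
    return average_tries / requests_per_second / 86_400

print(f"code space: {code_space:,}")
print(f"no rate limit, 1,000 req/s: ~{days_to_crack(1000):.0f} days")
print(f"limited to 1 req/s per IP:  ~{days_to_crack(1) / 365:.0f} years")

Even under these generous assumptions the unthrottled attack is feasible for a patient, distributed attacker, while a per-IP cap pushes it out of reach; logging connection data, as recommended above, is what makes the remaining attempts visible.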
<urn:uuid:d5df36e3-87f6-4ab8-a32e-f640dd21c4a9>
CC-MAIN-2017-09
https://blog.cybelangel.com/cybercriminals-can-hack-plane-tickets/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171834.68/warc/CC-MAIN-20170219104611-00380-ip-10-171-10-108.ec2.internal.warc.gz
en
0.936801
687
2.5625
3
Primer: Wireless Mesh Networks
By David F. Carr | Posted 2005-09-07

What is it? In a mesh network, the wireless connection extends not only to client computers, such as wireless laptops, but between other network nodes. This is in contrast with a typical wireless local area network, where the client computers connect wirelessly to an access point but that device is in turn plugged into the wired corporate network. The connection between the local area network and a larger corporate network or the Internet is known as the "backhaul." In a mesh network, the access point is connected to a wireless router (or the two may be combined into one device). That router, in turn, hands off a network transmission to one or more other routers before the signal reaches a wired connection. In much the same way that a standard router chooses from many possible paths across the Internet, a wireless router scans radio connections to find the least congested path. Wireless backhaul makes it possible to cover large areas more economically than if each access point had to be provided with a wired connection.

Who are the vendors? Motorola became the biggest entrant with its acquisition of MeshNetworks last year. The MeshNetworks products were derived from military research and employ a proprietary radio networking protocol, rather than the Wi-Fi (802.11 protocol) standards that support familiar consumer applications such as coffee-shop hot spots. Motorola says its technology handles radio interference and connections with moving vehicles better than 802.11. However, this year Motorola is introducing a new product that can communicate by either its proprietary protocol, 802.11, or a new licensed 4.9 gigahertz public-safety band for data networking. Other startups, such as Tropos and Firetide, are adding their own wireless routing technology. For an open-source option that can be assembled from commodity PC and 802.11 networking hardware, look at LocustWorld (www.locustworld.com) or one of the products based on its software.

Who's using it? Much of the interest in this technology has come from municipalities wanting to provide citywide data networking to police, firefighters and other public employees. Some cities and grassroots organizations are offering Internet access via mesh technology; the city of Philadelphia is planning an ambitious project of this sort. Cities often can mount the required equipment on light poles throughout an area. Richard A. Bull, police chief of Ripon, Calif., says his city is using Motorola gear to eliminate its dependence on data networking from cellular carriers. In the process, city departments get access to higher-bandwidth applications such as interactive mapping and video streaming from surveillance cameras, he says. "If there's a disturbance at a truck stop in town, the officers can pull up a feed from the camera and see what's going on before they even get there," Bull says. Commercial applications are rarer, but Firetide cites as a customer a Holiday Inn in Bluefield, W.Va., that deployed mesh infrastructure because the franchisee needed to meet a mandate for providing guests with Internet access but wanted to avoid tearing up walls to run cables. Motorola has customers in the mining industry that use mesh networking to provide data services to the drivers of earthmoving vehicles and to personnel down in tunnels.
In general, mesh networking makes sense if the network must be deployed over a large area, the organization wants to avoid running wires to access points, or both. When doesn't it make sense? If you can accomplish your goals with 802.11 access points and wired backhaul, there may be no reason to look at wireless mesh technology. Within an office building, there's probably no reason to spend up to $2,000 on a wireless mesh router instead of plugging in a standard $100 wireless access point. A price difference of 10 to 20 times is typical, says Motorola mesh networks marketing director Rick Rotondo. "You can pull a lot of wire for that," he says. But where wiring is impractical, mesh can extend the reach of wireless.
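The "least congested path" selection described above can be illustrated with a generic shortest-path computation; this is not Motorola's, Tropos's or LocustWorld's actual routing protocol, just a sketch in which each radio link carries a congestion estimate and a router prefers the cheapest total path back to the wired gateway.

import heapq

# Hypothetical mesh: node -> {neighbor: congestion cost of that radio link}
mesh = {
    "pole-1": {"pole-2": 1, "pole-3": 4},
    "pole-2": {"pole-1": 1, "gateway": 5},
    "pole-3": {"pole-1": 4, "gateway": 1},
    "gateway": {"pole-2": 5, "pole-3": 1},
}

def least_congested_path(graph, start, goal):
    """Dijkstra's algorithm over congestion-weighted links."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, link_cost in graph[node].items():
            if neighbor not in seen:
                heapq.heappush(queue, (cost + link_cost, neighbor, path + [neighbor]))
    return None

print(least_congested_path(mesh, "pole-1", "gateway"))
# (5, ['pole-1', 'pole-3', 'gateway'])

Real mesh routers rerun this kind of calculation continuously as link quality changes, which is what lets traffic route around interference or a failed node without rewiring anything.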
<urn:uuid:f910f6e4-2a37-47b2-b270-bbc1b3a4707b>
CC-MAIN-2017-09
http://www.baselinemag.com/it-management/Primer-Wireless-Mesh-Networks
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170404.1/warc/CC-MAIN-20170219104610-00500-ip-10-171-10-108.ec2.internal.warc.gz
en
0.942741
849
2.90625
3
We all recently heard that Twitter fell victim to an anonymous hacker and over 55,000 login credentials were exposed. While to some, a hacked Twitter account may not seem as significant as a bank or credit card account, a security breach of any kind can cause damage, lost information and identity theft. Back in February of this year, UNC Charlotte had a security breach of personal account information where at least 350,000 social security numbers were compromised. We also know of the potential Cyber War mentioned in our last blog, and how attacks could shut down major infrastructures necessary to live in the United States. Regardless of the type of security breach, they all show a weakness in our systems. Unfortunately, this type of criminal behavior will probably never cease as we continue to input valuable information about our personal lives onto the web. Just how dangerous can a security breach be? The potential Cyber War is just the beginning. In order to combat the potential mayhem that could be, the Government wants to take steps that would help prevent and guard against these major cyber breaches and threats. Cyber Intelligence Sharing and Protection Act, also known as CISPA, was just recently passed through the House of Representatives on April 26. CISPA is a proposed law that would allow the sharing of private information from the internet between the U.S. Government and technology and manufacturing companies to help the U.S. investigate cyber attacks. CISPA is favored by major corporations such as Facebook and Microsoft, but many people and companies are concerned about how CISPA would allow the government to monitor an individual’s internet browsing usage. It’s scary to imagine that a potential law would allow the government to monitor internet usage, private emails, etc., and that our information could be handed over without our knowledge. Having your personal information readily available to others is a thought that no person dreams about which is why we take measures to prevent our information from being exposed–to anyone! The House of Representatives recently spent hours debating this difficult, proposed law known as CISPA. The law is meant to protect cyber security, but in many ways it also makes consumers vulnerable to data privacy attacks. No one likes the idea of private information being shared, regardless of whose hands it ends up in. Some have even snickered about the “Big Brother” effect this potential law could have on consumers. Whether or not CISPA ends up passing through the Senate, take the next step to keep your personal information protected NOW by using Keeper! Keeper is one of the world’s most downloaded security applications. It uses military grade, 128-bit AES security to keep your usernames and passwords safe and secure. Keeper will also help you manage the plethora of combinations used on a daily basis by auto-launching websites and auto-filling your passwords. Keeper can be synced to all of your devices including your mobile device, tablet, desktop and laptop! Download Keeper now at www.keepersecurity.com/download
<urn:uuid:92942f06-469a-49ca-9c5c-65967e54f560>
CC-MAIN-2017-09
https://blog.keepersecurity.com/2012/05/10/our-private-information-is-our-private-information-not-yours/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170700.71/warc/CC-MAIN-20170219104610-00024-ip-10-171-10-108.ec2.internal.warc.gz
en
0.964761
600
2.96875
3
Yes, an IPv6 address at the binary level is still used the same way an IPv4 address is used. Yes, the address bits are still divided between a network part that specifies the exact location of the link to which a device is attached and a host part that identifies a specific device on the link. Yes, you still use CIDR notation (a forward slash and a number) to specify an address prefix of some length. And, yes, if you want to represent just the prefix you set all the host bits to zero (a 24-bit IPv4 prefix might be written as 192.168.23.0/24; a 48-bit IPv6 prefix might be written as 2001:db8:9c5::/48).

Above those bit-level functional equivalences, it's a whole new ball game. A single design principle dominates all others in IPv4 address design: address conservation. Variable-Length Subnet Masking (VLSM) is an essential IPv4 design tactic in which the number of hosts required on individual subnets throughout the network is carefully balanced against the total number of subnets your IPv4 prefix can support. You wind up with several different subnet sizes in your network, each allowing for just enough known or forecast host addresses and no more. In fact, the very concept of subnetting in IPv4 is the idea of borrowing some of the host bits to use as part of the network prefix. This dates back to pre-CIDR days when unicast IPv4 prefixes belonged to one of three classes (/8, /16, or /24).

IPv6 prefix assignments, on the other hand, are treated differently. There is always an allowance for a 64-bit host portion (the Interface-ID); except for networks that are deemed to need only a single subnet, such as homes or small offices, your prefix assignment will be some length shorter than 64 bits, such as /40, /48 or /56. The bits of the network portion between the fixed prefix assignment and the fixed 64-bit Interface-ID are for subnetting. You don't have to borrow host bits.

IPv6 represents a mind-boggling number of addresses, and that mind-boggling scale extends right down into your own network. Think about this: a /32 prefix, a typical ISP allocation, supports as many /64 subnets (4.3 billion) as there are individual addresses in the entire IPv4 address space, and even a /40 assignment yields more than 16 million /64 subnets. If you are an enterprise network, you are more likely to get a /48; that's still 65,536 64-bit subnets. And each /64 supports roughly 1.8 x 10^19 individual addresses.
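The arithmetic behind those subnet counts is easy to reproduce with Python's standard ipaddress module. The prefixes below use the documentation range already quoted in this article; they are illustrative values, not anyone's real allocation.

import ipaddress

for prefix in ("2001:db8::/32", "2001:db8:9c00::/40", "2001:db8:9c5::/48"):
    net = ipaddress.ip_network(prefix)
    subnets_64 = 2 ** (64 - net.prefixlen)   # how many /64s fit in this assignment
    hosts_per_64 = 2 ** 64                   # interface IDs per /64 (~1.8 x 10^19)
    print(f"{prefix}: {subnets_64:,} /64 subnets, {hosts_per_64:,} addresses each")

# /32: 4,294,967,296 /64 subnets
# /40: 16,777,216 /64 subnets
# /48: 65,536 /64 subnets

The takeaway matches the design point above: within any realistic assignment you size subnets by drawing a line in the bits between your prefix and the fixed /64 boundary, not by rationing host addresses.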
<urn:uuid:7fb71343-c43d-40ce-9746-4ff78cf0e2e4>
CC-MAIN-2017-09
http://www.networkcomputing.com/networking/ipv6-design-forget-ipv4-rules/1511838043
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170992.17/warc/CC-MAIN-20170219104610-00200-ip-10-171-10-108.ec2.internal.warc.gz
en
0.933616
556
3.046875
3
Engagement is an important part of learning. Guidance, inspiration, and general interest lead to successful learning outcomes. It is crucial that district administrators and teachers create an interactive and collaborative learning environment for students to become engaged and involved in the curriculum. Below are some recommendations for how to use technologies to enhance learning in the digital classroom and create a successful and engaging learning environment.
- Use interactive learning tools that engage students by leveraging the rich Apple ecosystem of education apps, as well as the iBookstore for digital textbooks.
- Provide guidance in the classroom by leveraging technologies like Apple's Guided Access, so that teachers can focus student iPads on specific learning apps and websites.
- Create a personalized learning experience by providing self-service technologies for students that use attributes such as student performance, learning style, and demographics to deliver the content each student will engage with and benefit from the most. Also, leverage iBeacons to extend these capabilities and enable distribution of apps and content to students based on proximity to buildings, classrooms, or libraries.
- Create a transformative learning experience for students by leveraging the power of technologies like iPad to engage students' senses and provide adaptive learning programs, tactile learning, and interconnected education and social learning tools.
Learn more about ways the digital classroom is changing the way teachers are teaching and students are learning.
<urn:uuid:8670b9a6-a31c-4ca8-a117-aad7916b60cc>
CC-MAIN-2017-09
https://www.jamf.com/blog/four-ways-to-engage-students-in-the-digital-classroom/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.92/warc/CC-MAIN-20170219104611-00552-ip-10-171-10-108.ec2.internal.warc.gz
en
0.930461
272
3.890625
4
TORONTO, ON--(Marketwired - August 22, 2016) - More than two million kids across the province are preparing to head back to school soon, but everyone in the family can use a refresher course on how best to care for their mouths. The Ontario Dental Association (ODA) wants you to remember the "old school" basics of oral health care which includes brushing twice daily and not eating too much sugar. Here are the ABCs and Ds to follow for a healthy smile…and a healthy mouth. Awareness: Stress, medications, smoking, overconsumption of alcohol and sugar, as well as acidity levels in juice, soda and sports drinks, can negatively impact your oral health. They may also increase your risk of developing gum disease, tooth decay and oral cancer. Brushing: Brushing your teeth in the morning and evening not only gives you a fresh, sparkling smile, it's also a critical component in preventing tooth decay and gum disease. Cleaning: Other areas of your mouth need attention that brushing alone can't provide. Flossing removes particles of food from in between teeth and using mouthwash can reduce plaque, cavities and gingivitis. Dentist: Getting a regular dental exam is key to maintaining optimal oral health. Your dentist is trained to detect and diagnose problems before you see or feel them, which is also when they are much easier and less expensive to treat. Dr. Jack McLister, ODA President, says, "As we return to our usual routines, back-to-school time is also a perfect opportunity to reassess the oral health-care routine of the whole family to make sure everyone maintains a healthy smile." About the Ontario Dental Association The ODA has been the voluntary professional association for dentists in Ontario since 1867. Today, we represent more than 9,000, or nine in 10, dentists across the province. The ODA is Ontario's primary source of information on oral health and the dental profession. We advocate for accessible and sustainable optimal oral health for all Ontarians by working with health-care professionals, governments, the private sector and the public. For more information on this and other helpful dental care tips, visit www.youroralhealth.ca.
<urn:uuid:4446922a-9141-415d-9cb4-557b2582384c>
CC-MAIN-2017-09
http://www.marketwired.com/press-release/back-to-basics-re-educate-yourself-on-how-to-keep-a-healthy-mouth-2152432.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170434.7/warc/CC-MAIN-20170219104610-00196-ip-10-171-10-108.ec2.internal.warc.gz
en
0.949993
466
3.1875
3
A new algorithm that analyzes speech could one day enable smartphones to sense your mood. Who knows, maybe it could even prevent you from saying something you'll regret. Researchers from the University of Rochester will describe their work at the IEEE Workshop on Spoken Language Technology this week in Miami. "We actually used recordings of actors reading out the date of the month -- it really doesn't matter what they say, it's how they're saying it that we're interested in," said Wendi Heinzelman, professor of electrical and computer engineering, in a statement. The program designed by the engineers analyzes 12 features of speech, including pitch and volume, to identify emotions such as sad, happy, fearful and disgusted. Tests so far have shown the program to be 81% accurate. It has been developed into a prototype app that displays a happy or sad face after recording and analyzing a user's voice. It could someday be used for everything from adjusting the colors displayed on a phone to launching music that fits your mood. Heinzelman and her Bridge Project team are also working with psychologists at the university to explore issues such as parent-teenager relations. The program was built by Na Yang, one of Heinzelman's grad students, during a stint at Microsoft Research. Bob Brown tracks network research in his Alpha Doggs blog and Facebook page, as well as on Twitter and Google+. This story, "Smartphones Could Soon Double as Mood Rings," was originally published by Network World.
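The article doesn't include the Rochester team's code, but the general recipe (compute prosodic features such as pitch and energy, then train a classifier on labeled clips) can be sketched in Python. The feature set, file names and classifier below are illustrative stand-ins, not the team's actual 12-feature model.

```python
import numpy as np
import librosa
from sklearn.svm import SVC

def prosody_features(wav_path):
    """Return a small vector of pitch/energy statistics for one utterance."""
    y, sr = librosa.load(wav_path, sr=16000)
    f0, voiced, _ = librosa.pyin(y, fmin=65, fmax=400, sr=sr)  # pitch track
    rms = librosa.feature.rms(y=y)[0]                          # frame energy
    return np.array([
        np.nanmean(f0), np.nanstd(f0),   # pitch level and variability
        rms.mean(), rms.std(),           # loudness level and variability
    ])

# Hypothetical labeled clips: 0 = sad, 1 = happy (placeholder paths and labels).
train_files = ["sad_clip.wav", "happy_clip.wav"]
train_labels = [0, 1]

X = np.vstack([prosody_features(f) for f in train_files])
clf = SVC(kernel="rbf").fit(X, train_labels)
print(clf.predict([prosody_features("new_clip.wav")]))
```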
<urn:uuid:774d994a-b7ae-485f-958b-65b1c9a1268e>
CC-MAIN-2017-09
http://www.cio.com/article/2389930/mobile/smartphones-could-soon-double-as-mood-rings.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170993.54/warc/CC-MAIN-20170219104610-00548-ip-10-171-10-108.ec2.internal.warc.gz
en
0.967865
325
2.828125
3
Almost 20 years ago, Transmeta CEO David Ditzel and his colleague, David Patterson, collaborated to coauthor the famous article, The Case for the Reduced Instruction Set Computer. In this article they argued that microprocessors were too large and complex and that by moving much of that complexity from the silicon into software they could increase performance and lower die size, power consumption, and cost. Today, 20 years later, the RISC movement has changed the way that computer architecture is done...yet it seems that things are moving back the other way. Modern CPUs have hardware that performs on-the-fly optimization and reordering of code, branch prediction, and a host of other tricks in order to squeeze out every last ounce of performance from an application.
Sometime last year, Ditzel made a comment to the computing press that I've gotten a lot of mileage out of. I used it as an opener to my RISC vs. CISC piece, because it summed up how I felt about the current state of the microprocessor industry. After seeing yesterday's presentation on Transmeta's new Crusoe technology, I can see now that that quote was actually a little mini-preview of what Transmeta has been working on these past five years:
"Today [in RISC] we have large design teams and long design cycles," he said. "The performance story is also much less clear now. The die sizes are no longer small. It just doesn't seem to make as much sense." The result is the current crop of complex RISC chips. "Superscalar and out-of-order execution are the biggest problem areas that have impeded performance [leaps]," Ditzel said. "The MIPS R10,000 and HP PA-8000 seem much more complex to me than today's standard CISC architecture, which is the Pentium II. So where is the advantage of RISC, if the chips aren't as simple anymore?"
As Ditzel points out, modern CPUs are more complex, have more hardware, and perform more functions than their early RISC predecessors. All that hardware requires lots of power though, and the more power a CPU draws the hotter it gets.
Back to the drawing board: new problems, new solutions
Of course, power consumption and heat dissipation aren't really an issue if you're running a database server, a graphics workstation, a gaming rig, or the like. In those types of computing environments all that matters is raw performance. AMD's Athlon is a testament to what happens when you sacrifice transistors and wattage to appease the performance-hungry power-user's need for speed. The Athlon is way huge and smokin' fast. But where there's smoke, there's fire--you could cook pancakes on an Athlon's die. Nevertheless, the Athlon and its rival, the PIII, are good solutions to a particular set of problems. Specifically, both AMD's and Intel's design teams were dealing with the problems that arise when you try to get the most performance possible out of the aging x86 ISA. The ultimate goal was raw speed, and the majority of each team's design decisions were made with the following ends in mind:
- x86 compatibility
- the fastest possible application performance
Both of the above factors are essential to the success of a high-end x86 CPU that's aimed at the server and workstation market. What if, though, a team of CPU designers were to go back to the drawing board and start from scratch on a CPU design with a different market, and different set of questions, in mind? Transmeta's Crusoe team did just that.
They started over again, but this time instead of asking "how fast can we possibly make this," they asked "how efficient can we possibly make this, and still have it run x86 apps acceptably." Thus, Crusoe's designers were working towards three primary design goals that dictated the decisions and tradeoffs that they made. Transmeta wanted the Crusoe to have:
- full x86 compatibility
- the lowest possible power consumption
- a level of x86 application performance that provides for a reasonably good user experience
Notice that last bullet point there. Crusoe isn't about framerates (yet), and it isn't about 3DSMAX rendering or weather simulation. It's about doing low to mildly compute-intensive sorts of tasks like word processing, video and audio playback, web browsing, email, etc. And more specifically, it's about taking those tasks on the road. The Crusoe team's answer to the questions they asked is an impressive blend of software and hardware technology that should make anyone with even a passing interest in CPU architecture sit up and take notice. Throughout the rest of this article, I'll be talking about the actual technology behind the Crusoe in detail: the Code Morphing software, the VLIW core, the Long Run power management features, and more. I'll look at how those technologies work and what they offer. Finally, I'll talk about where Transmeta is going with all of this, and what kinds of products they're likely to bring out in the future.
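The Code Morphing software is covered later in the original article, but the core idea it relies on, translating blocks of x86 code once and caching the result for reuse, can be shown with a deliberately toy sketch. Everything here (the addresses, the translate_block helper, the looping behavior) is invented for illustration and is not Transmeta's implementation.

```python
# Toy model of the translate-and-cache idea behind dynamic binary translation.
translation_cache = {}  # guest block address -> "translated" host code (a callable)

def translate_block(guest_address):
    """Stand-in for translating one block of guest x86 code into host code.

    A real translator would decode, optimize and schedule VLIW instructions;
    this stub just counts executions and steps through a four-block loop.
    """
    def native_block(state):
        state["ticks"] += 1
        return 0x1000 + ((guest_address - 0x1000 + 4) % 16)  # next guest block
    return native_block

def run(entry_address, steps, state):
    address = entry_address
    for _ in range(steps):
        if address not in translation_cache:            # slow path: translate once
            translation_cache[address] = translate_block(address)
        address = translation_cache[address](state)     # fast path: reuse translation

state = {"ticks": 0}
run(0x1000, steps=12, state=state)
print(f"{state['ticks']} blocks executed, {len(translation_cache)} translated")
```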
<urn:uuid:0903a849-9bde-400e-b8ef-d9f2653feef7>
CC-MAIN-2017-09
https://arstechnica.com/features/2000/01/crusoe/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171632.91/warc/CC-MAIN-20170219104611-00248-ip-10-171-10-108.ec2.internal.warc.gz
en
0.965767
1,059
2.65625
3
Work that NASA's Marshall Space Flight Center did to create an inexpensive spacecraft with off-the-shelf parts has helped one company facilitate better and cheaper satellite communications. NASA's Fast, Affordable, Science and Technology Satellite, also known as FASTSAT, was built to demonstrate researchers' capability to build, deploy and operate a science and technology flight mission at lower costs than previously possible. In 2012, the satellite wrapped up a successful two-year, on-orbit demonstration mission. Part of building the satellite meant developing a low-cost telemetry unit, which is used to facilitate communications between the satellite and its receiving station. Alabama-based Orbital Telemetry Inc. licensed the NASA technology and is offering to install the cost-cutting units on other commercial satellites.
<urn:uuid:0e2abccd-c93b-4923-a931-1d2c7560f295>
CC-MAIN-2017-09
http://www.computerworld.com/article/2473447/emerging-technology/151319-nasas-spinoff-technologies-are-outta-this-world.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170794.46/warc/CC-MAIN-20170219104610-00068-ip-10-171-10-108.ec2.internal.warc.gz
en
0.948424
157
3.46875
3
What is it? Cascading Style Sheets (CSS) offers a way of adding styles, such as fonts and colours, to web documents. CSS enables presentation to be separated from content to cope with the different platforms on which web pages are displayed. According to website accessibility expert Jakob Nielsen, "Web style sheets are cascading, meaning that the site's style sheet is merged with the user's style sheet to create the ultimate presentation." "These differences make it important that web style sheets are designed by a specialist who understands the many ways the result may look different than what is on his or her own screen." Style sheets may be external, meaning they can be specified once and applied to all the documents on a website, or embedded within a particular document.
Where did it originate? CSS began life in 1994 at Cern, the cradle of the web, when Håkon Wium Lie published the first draft of cascading HTML style sheets. He had the backing of HTML3.0 architect Dave Raggett, who realised that HTML needed a purpose-built page description mechanism. In February 1997 CSS got its own World Wide Web Consortium (W3C) working group. The first commercial browser to support CSS was Microsoft's Internet Explorer 3.
What is it for? Different style sheets arrive in a series, or cascade, and any single document can end up with style sheets from multiple sources, including the browser, the designer and the user. Cascading order sorts out which set of rules is to influence the presentation.
What makes it special? CSS gives a greater level of control over how work is presented than with HTML.
How difficult is it to master? Style sheets can either be hand-written using a text editor or with one of the growing number of web design tools which support CSS. The W3C CSS home page has a list of these tools, which include Dreamweaver, Adobe Golive and Homesite. You do not need to know CSS syntax, but those who do can fine-tune their style sheets.
Where is it used? CSS is currently the most widely supported way of styling web documents.
Not to be compared with... Cascading system failures - the impact of the collapse of one part of an infrastructure on the next.
What systems does it run on? CSS is supported by most current browsers and web design tools.
Not many people know that... "CSS is now being taken up, but HTML is in danger again," said Bert Bos, W3C's style sheet activities co-ordinator.
What is coming up? CSS3, six years in the making, promises to be much simpler to use than CSS2/2.1.
You should not need to spend much money learning CSS. The World Wide Web Consortium has comprehensive links to tutorials, how-to articles and books by the likes of Håkon Wium Lie, Bert Bos, Dave Raggett and Jakob Nielsen. Most date back a few years, but remember that the current level, CSS2, has been around since 1997.
Rates of pay
CSS is used by web designers but is also sought in software developers, testers and technical authors. Rates vary accordingly.
<urn:uuid:90ce8028-8a7d-427a-b3bc-a478ee128b4f>
CC-MAIN-2017-09
http://www.computerweekly.com/news/2240056970/Cascading-Style-Sheets-separate-presentation-from-website-content
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171646.15/warc/CC-MAIN-20170219104611-00596-ip-10-171-10-108.ec2.internal.warc.gz
en
0.932556
696
3.25
3
Fiber optic sensing technology has grown in importance over the last few years as it has been applied to more areas of everyday life. It is now a fully mature and cost-effective technology which offers major advantages over conventional measurement methods. In particular, the use of Fiber Bragg Gratings (FBGs) for measuring strain and temperature is now widespread throughout many industries and in many demanding applications.
Sensing fiber optic cable is a sensor based on optical fiber cable. Physical changes in the outside environment, such as pressure, vibration and temperature changes, alter the intensity, phase and polarization of the light transmitted through the fiber. Because the cable is particularly sensitive to these physical effects, measuring those changes in the light makes it possible to detect the required physical quantities.
Applications of Sensing Fiber Optic Cable
Fiber optic sensor technology was developed to satisfy particular needs in specific applications. It was initially applied in the military. With the rapid development of the technology, it has become more generic and now has a broad range of applications in many fields, and unforeseen applications are expected to emerge. Sensing fiber optic cable, as one important achievement of this technology, is now widely used in the petrochemical, steel and mining industries for fire detection, structural health monitoring and temperature measurement.
Oil & Gas Industry
Used for optimized production and integrity monitoring in risers, umbilicals and oil wells, and for subsea, reservoir and seismic monitoring.
Used to monitor the temperature of energy production and distribution facilities, power cables, high-voltage switchgear, transformers and the like, contributing to the optimization of performance and operational safety.
Used to monitor soil movement, dams and construction areas, and to understand and monitor hydrological processes in agriculture, oceans, lakes, rivers and sewers.
Installed in transportation infrastructure, along highways and embedded in roads, bridges and rail tunnels, to achieve efficient, fast, flexible and cost-effective structural monitoring as well as fire, ice or water detection.
Used to protect borders and critical infrastructure, such as pipelines, power distribution, airports, construction sites or national borders. Sensing fiber optic cable can help prevent major damage in many cases by accurately and swiftly measuring any temperature increase caused by local fires or overheating in a specific area.
Sensing Fiber Optic Cable in Fiberstore
There are six kinds of sensing fiber optic cable on sale at Fiberstore. Their excellent mechanical performance makes them as easy to use as ordinary wire and able to adapt to various conditions.
PBT Tube Temperature Sensing Optical Cable
The cable consists of bare fiber, ointment, PBT tubing, Kevlar and an outer jacket. It offers good optical transmission, excellent resistance to electromagnetic interference and very good water resistance. It can be used for temperature and stress measurement, and its nonmetallic structure makes it well suited to high-voltage and strong electromagnetic environments.
Armored Temperature Detecting Sensor Cable
This cable is strengthened by both an SUS spring tube and SUS braiding and has very good tensile and crush resistance. It is widely used in fire detection, structural health monitoring, temperature measurement and similar applications.
Silica Gel Sensing Fiber Optical Cable
The structure of this cable is very simple, but the special silicone jacket and Teflon tube provide very good resistance to high temperature and high voltage, so it is well suited to such environments; it can work normally even at 250°C or at 6 kV.
Teflon Sheathed Sensor Cable
The Teflon cable is well suited to high-temperature environments and can work normally even at 150°C. It can be used in fiber temperature sensing systems.
Seamless Tube Temperature Sensing Optical Cable
The cable is made up of bare fiber, ointment, a stainless steel seamless tube and a sheath. The seamless tube provides high tensile and crush resistance. It is usually used in oil fields, mines and the chemical industry for temperature and pressure monitoring to help avoid accidents.
Copper Braid Armored Sensor Cable
Copper braid armored sensor cable can be used for outdoor optical fiber communication and optical fiber sensing. In power environments, the special cable structure reduces the impact of electromagnetic waves and fields and lowers optical signal loss.
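Since the article leads with Fiber Bragg Gratings, here is a small Python sketch of the relationship an FBG interrogator exploits: the reflected Bragg wavelength is lambda_B = 2 * n_eff * Lambda, and it shifts almost linearly with temperature and strain. The sensitivity coefficients are typical textbook values for a grating near 1550 nm, used purely for illustration.

```python
def bragg_wavelength(n_eff, grating_period_nm):
    """Reflected Bragg wavelength in nm: lambda_B = 2 * n_eff * period."""
    return 2.0 * n_eff * grating_period_nm

# Typical silica-fiber values (illustrative assumptions).
N_EFF = 1.468                 # effective refractive index
PERIOD_NM = 528.0             # grating period in nm, giving lambda_B near 1550 nm
TEMP_SENS_PM_PER_C = 10.0     # roughly 10 pm per degree C near 1550 nm (typical)
STRAIN_SENS_PM_PER_UE = 1.2   # roughly 1.2 pm per microstrain (typical)

lam0 = bragg_wavelength(N_EFF, PERIOD_NM)
print(f"Base Bragg wavelength: {lam0:.1f} nm")

# A 25 degree C warm-up plus 100 microstrain of stretch shifts the peak by:
shift_pm = 25 * TEMP_SENS_PM_PER_C + 100 * STRAIN_SENS_PM_PER_UE
print(f"Expected peak shift: {shift_pm:.0f} pm "
      f"(interrogator reads about {lam0 + shift_pm / 1000:.3f} nm)")
```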
<urn:uuid:7218d653-c3a2-47cf-a4a8-81a5e3323d22>
CC-MAIN-2017-09
http://www.fs.com/blog/sensing-fiber-optic-cables-the-fiber-also-the-sensor.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172783.57/warc/CC-MAIN-20170219104612-00296-ip-10-171-10-108.ec2.internal.warc.gz
en
0.917478
903
2.6875
3
Cloud computing technology is a way to increase information technology (IT) capacity or add capabilities on the fly without investing in new infrastructure, training new personnel, or licensing new software. Cloud computing encompasses any subscription-based or pay-per-use service that, in real time over the Internet, extends IT's existing capabilities. Cloud computing service providers offer storage and virtual servers that the IT staff of a small or medium size company can access on demand. Enterprises are increasingly using cloud computing technology for business-critical needs. Cloud computing technology is quickly replacing the local IT infrastructure managed and run by small and medium size businesses (SMBs) themselves. The cloud offers capabilities which enable SMBs to stitch together IT, storage, and computational capacity as a virtualized resource pool available over the network.
In today's competitive economic environment, as businesses are trying their best to balance and optimize their IT budgets, cloud computing can be an effective strategy to reduce IT operations and management costs, and to free up critical resources and budget for discretionary innovative projects. Typically, a business organization has an 80/20 split between regular, ongoing IT operations costs (hardware, software licensing, development, data center maintenance, etc.) and new investment for solving needs which are critical for a business to survive in these challenging times. Cloud computing technology can have a significant impact on this by reducing the footprint of IT operations and taking out the upfront capital investments needed for hardware and software. It enables a new model: use what is needed and pay for what is used. This allows businesses to invest in innovative solutions that will help them address key customer challenges instead of worrying about cumbersome operational details.
Cloud computing technology provides plenty of benefits to small and medium size businesses (SMBs). You get extremely high uptime, that is, almost one hundred percent. You are free to scale resources up or down. A cloud hosting service provider allows end users to access cloud-hosted software from wherever they are located. Multiple end users can instantly share the same data file at the same time. End users save much money on IT operations. SAS 70 Type II compliant data centers are used. Experts provide free 24/7 support service via chat, phone, email, etc. The cloud computing service provider uses rolling backup technology to back up data.
<urn:uuid:7fb9fa58-d418-4a0b-bce8-2e8724a7869b>
CC-MAIN-2017-09
http://www.myrealdata.com/blog/202_cloud-computing-%E2%80%93-a-transformative-phenomenon
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169769.33/warc/CC-MAIN-20170219104609-00065-ip-10-171-10-108.ec2.internal.warc.gz
en
0.906891
484
2.8125
3
Who's using wikis
By John Zyskowski - Jan 26, 2009
The list of government wikis and the variety of purposes for which they are used continue to expand. The versatility of wikis' freely editable Web pages makes them a great tool for the types of knowledge sharing and collaborative projects that government employees work on every day. Here is a sampling of government wikis that illustrate the wide range of uses.
SUMMARY: This internal wiki lets agents and analysts share lessons learned, best practices and subject-matter expertise. It is a collaborative effort between FBI's chief knowledge officer and chief technology officer.
NAME: Colab — Collaborative Work Environment
CREATOR: General Services Administration's Office of Intergovernmental Solutions
SUMMARY: Colab provides wiki space that several intergovernmental groups have used, including the CIO Council's Federal Data Architecture Subcommittee. The wikis can be public or private.
CREATOR: State Department
SUMMARY: U.S. foreign affairs agencies use this internal wiki to share nonclassified information about diplomacy and international relations.
NAME: DOD Techipedia
CREATOR: Defense Department
SUMMARY: Agency scientists, engineers, acquisition workers and military service members use this wiki to share information and improve collaboration.
CREATOR: Office of the Director of National Intelligence
SUMMARY: Analysts from 16 federal intelligence agencies share information via the wiki, which has three components defined by security clearance level.
NAME: Kiwi Wiki Service
CREATOR: National Institutes of Health's Center for Information Technology
SUMMARY: The service provides internal wikis for NIH researchers. It now supports more than 40 wikis for various projects, including scientific collaboration, project management and documentation development.
NAME: MAX Federal Community
CREATOR: Office of Management and Budget's Budget Formulation and Execution Line of Business
SUMMARY: Federal workers involved in acquisition, budget, e-government, grants, performance, planning, and other areas of interagency activity use the wiki to share information and collaborate.
NAME: Telecommunications and Electronic and IT Advisory Committee Wiki
CREATOR: U.S. Access Board
SUMMARY: The committee used the wiki to support its review of the Access Board's standards and guidelines related to electronic and information technology accessibility under Section 508 of the Rehabilitation Act.
John Zyskowski is a senior editor of Federal Computer Week. Follow him on Twitter: @ZyskowskiWriter.
<urn:uuid:0dcc5a0c-c5ba-4ffe-a78e-6afe146c2886>
CC-MAIN-2017-09
https://fcw.com/articles/2009/01/26/who-is-using-wikis.aspx
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171281.53/warc/CC-MAIN-20170219104611-00117-ip-10-171-10-108.ec2.internal.warc.gz
en
0.863869
525
2.703125
3
Despite its small size, your Android phone is an incredibly complicated and powerful piece of gear. It can get you online, take photos, make phone calls—it can even pay for your groceries. There’s a lot going on, which means a lot to learn, even if you’re otherwise savvy about technology. We’re here to help. Knowing these terms will help you get a better grasp on the tech that powers your phone. Bookmark this page for future reference, and if you have any other terms you think we should include, feel free to leave a comment below. ADB, otherwise known as Android Debug Bridge, is a developer tool that lets you send commands to an Android device you’ve connected to your computer. It’s a fairly advanced tool, and you run it through the command line on your PC or Mac, but if you ever want to install, say, a developer preview release of Android on your phone, you’ll need to delve into ADB. Android Studio is Google’s software development kit (a collection of programing apps and tools, abbreviated SDK) that developers use to build apps for Android devices. Android Studio includes a code editor, various code templates you can use as the basis of your app, device simulators for testing your apps, and numerous other development tools. Want to learn more about building apps for Android? Hop on over to Google’s Android Developers site. API: Short for Application Programming Interface. APIs are functions that developers can call on to access specific features by calling upon programs, code, and services that others have written. For example, if a developer wants to draw a button on the screen, she can insert a small bit of code that says “draw this kind of button, with this color and size and style, at this location” instead of dozens of lines of code that tells the graphics processor, in detail, exactly how to draw a button. If the application wants your location, it can use the location API to “get the device’s location” and let Google’s code handle the rest, instead of requiring the developer to build an entire location service from scratch just for her own app. There are thousands of APIs in Android, covering everything from drawing interface elements, to the cameras, to location access, to accessing storage, to 3D graphics (see: OpenGL ES) and much more. ARM usually refers to the processor architecture that is most commonly found in mobile devices like smartphones and tablets. While ARM-based processors can vary widely—you’ll find them in both the smallest mobile devices as well as high-end servers—the variants used inside smartphones and tablets are designed to be smaller and to consume less energy than the processors you’ll typically find in your PC. That doesn’t mean ARM processors are weak, though—the processors found inside mobile devices have grown significantly more powerful in recent years, and that trend shows no sign of slowing. The ARM architecture standard is maintained by the company ARM, which licenses their own designs for processors, processor cores, and processor architectures to other manufacturers. Samsung and Qualcomm are two major manufacturers of ARM-based processors; Apple uses its own ARM-based A-series processors in the iPhone and iPad, too. Baseband: Your phone’s baseband is, in short, the part of the phone that handles its radio connections (cellular and Wi-Fi, for example). The phone’s baseband consists of a processing chip and code that work their magic behind the scenes. 
You normally won't have to touch your phone's baseband system software, but if you need to know which baseband version your phone is running, go to Settings > About phone.
Bootloader: Before your phone's operating system even starts up, your phone runs a piece of software called a bootloader. Bootloaders generally run some initial startup tests and related tasks, then tell your operating system to start up. It's very similar to the BIOS on your PC in this regard. The bootloader does its work in the background, so you never see it unless you specifically choose to do so. If you know what you're doing, though, you can access the bootloader interface on your Android phone using the aforementioned Android Debug Bridge (ADB) tool.
Dalvik and ART (short for Android Runtime) are, simply put, what allows apps to run on Android. They are what's known as "managed runtime environments": Think of them as the "box" within which all apps for Android run. By running apps within a "box," developers don't have to worry as much about the device you're running the app on; Dalvik and ART also handle some under-the-hood tasks on behalf of the apps. Android apps are based primarily on Java, and ART (and Dalvik before that) are the parts of Android that compile the Java code to run on your device. The newer ART component is faster and takes better advantage of modern processor features than the older Dalvik component.
Doze: New in Android 6.0 Marshmallow, Doze is a power-saving feature that prevents your phone from carrying out certain tasks if your phone's been sitting idle for a while. It puts your phone into a deep sleep mode and only wakes it up sporadically to handle background tasks, which saves a lot of battery power. It's on by default, but you can easily turn it off for individual apps.
Fragmentation: In the Android world, variety is most certainly the spice of life: Android phones come in practically every shape or size, and you'll almost assuredly find at least one Android phone you like. While this variety can be a good thing, it can also cause issues, such as apps that don't run on all current devices or slow uptake on new releases of Android. Observers often refer to this potential problem as "fragmentation." Is fragmentation that big of an issue? That's certainly debatable. But it's probably less of a concern for "normal" users than it is for more technical folk. The wide variety of hardware and software in the Android world can be a pain for developers, whose apps have to be tested on a wide variety of devices to ensure they work right. For the rest of us, the slow rollout of new Android updates is one thing to be concerned about, if only for the security improvements that often accompany them.
HDR stands for High Dynamic Range, and it refers to a photography technique in which several photos taken at different exposures are combined into a single image. By snapping photos at different exposures, you can get proper exposure and detail in both the dark and bright areas of the photo. Some modern phones can take HDR photos in a single snap by carefully processing the data from an advanced sensor with a wide dynamic range.
IPS, short for In-Plane Switching, is a display technology used in LCD screens. IPS displays have a wider viewing angle compared to twisted nematic (TN) displays—the other prevalent LCD technology—largely eliminating the color-shifting issues that have traditionally affected LCDs. They also have a wider color gamut, and can accurately reproduce more colors.
Among Android devices, the other popular display technology is OLED.
Material Design: In Android Lollipop, Google introduced a fresh new look and feel for Android, which the company called Material Design. Material Design puts an emphasis on depth, bold graphical elements, and fluid motion to help you get a sense of place. Google currently employs the Material Design look and feel across Android, its various other mobile apps, and some of its online tools such as Google Maps and the redesigned Google+ social network. Material Design isn't just a set of interface elements from Google. It's a clearly codified set of interface conventions that describe all aspects of how an application interface should look and behave. Fonts, colors, animations, shadows, layers, patterns, the placement of interface elements; all this and more are described in detail in the Material Design guidelines set by Google. The idea is that third-party app developers should make Android apps that have a consistent look, and operate with the same interface conventions, so they all appear to be part of a cohesive ecosystem. That way users don't get confused by changing interfaces.
Milliampere-hour—abbreviated mAh—is a unit for measuring electrical charge. It's often used to describe the capacity of a battery, and you'll often find it in marketing materials for smartphones and tablets. The higher the mAh rating for a battery, the larger its capacity. However, since the tech specs and power management systems can vary wildly from one device to another, a battery's charge capacity isn't the only factor that determines battery life. A phone with a smaller mAh rating on its battery might last longer if the display and processor and radios are more energy-efficient, for instance.
Miracast is an industry-standard technology that lets you stream what's on your phone or tablet onto your TV or other display. Miracast-compatible devices work together on a peer-to-peer basis—you don't need a router to act as a middleman between your phone and TV. In order to use Miracast, you need to be running an OS that supports it, and you need a Miracast-compatible TV or receiver dongle. Check out PCWorld's Miracast setup tutorial to learn more.
Nearby: As its name suggests, Nearby lets you connect to and share data with other devices that are, well, nearby. According to Google, Nearby "uses a combination of Bluetooth, Wi-Fi, and inaudible sound" to identify other nearby devices. Nearby isn't a standalone feature in Android; instead, it's a set of technologies that developers can incorporate into their apps. For example, Google Play Games uses Nearby to find others near you who you can play a game with—a handy feature if you want to play with a family member or roommate without going through a network or server.
Nexus: This is Google's brand name for a line of phones and tablets that run pure, unadulterated "stock" Android, sold direct to consumers by Google, and with Google providing software updates with no carrier interference. Nexus devices don't come bundled with carrier-specific apps, nor do they include the customized interfaces and branded features that so many Android phones ship with. As an added bonus, Nexus devices get new releases of Android first, often weeks or months before other phones. It's important to note that Google doesn't actually manufacture Nexus devices: Instead, it partners with other hardware manufacturers, like Samsung, LG, and Huawei, which design and build a phone or tablet that meets Google's specifications.
Nexus phones started as a necessity—Google says it can't really develop Android as a competitive operating system without simultaneously developing phone hardware to test the latest features before release. And developers need a "baseline" phone that runs unaltered Android to test their applications on. Today, Nexus devices appeal to the mass market, with prominent TV ads and such.
NFC, which stands for Near Field Communication, is a technology that lets you wirelessly beam information between devices. Unlike Bluetooth and Wi-Fi—which have ranges that can span multiple rooms in your house—NFC has a range of only a few inches. It's built into many smartphones, and it's most commonly used to facilitate mobile payment systems like Google Pay and Apple Pay: Hold your phone up to an NFC-equipped credit card terminal, wait for the reader to recognize your phone and retrieve your payment info, and you're all done. On Android phones, NFC can also be used to facilitate the transfer of data from one phone to another by "bumping" them together.
OLED and AMOLED: OLED, which stands for Organic Light-Emitting Diode, refers to a specific kind of display technology. Unlike LCDs, OLEDs emit their own light, so they don't require a backlight. AMOLED (Active Matrix Organic Light-Emitting Diode) is a variant of OLED technology commonly used in smartphones. HowStuffWorks has an in-depth overview of how OLED screens work and the differences between the different kinds of OLED screens. OLED displays can have excellent color reproduction and very fast switching speeds (the ability of pixels to turn on and off). And since the pixels emit their own light, when they're off, they're completely black. Only recently have OLED displays been able to get as bright as LCDs, however, and they tend to be more expensive.
OpenGL ES: OpenGL is a set of programming APIs and technologies that developers can use as the basis of apps that employ 3D rendering. Many games are built atop OpenGL, as are many 3D modeling and design apps. OpenGL ES is a pared-down version of OpenGL built for devices like smartphones and tablets (the ES stands for "embedded systems"). The 3D games you play on your Android phone are likely built atop OpenGL ES.
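As a quick numeric illustration of the mAh entry above, the sketch below estimates runtime from rated capacity and an assumed average current draw; the capacities and the 350 mA figure are made-up examples, and real battery life also depends on the efficiency factors the glossary mentions.

```python
def estimated_runtime_hours(capacity_mah, avg_draw_ma):
    """Rough runtime estimate: capacity divided by average current draw."""
    return capacity_mah / avg_draw_ma

# Hypothetical phones with the same average draw but different batteries.
for name, capacity in [("Phone A", 2700), ("Phone B", 3500)]:
    hours = estimated_runtime_hours(capacity, avg_draw_ma=350)
    print(f"{name}: {capacity} mAh / 350 mA is roughly {hours:.1f} hours of use")
```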
<urn:uuid:29650bd0-c799-439d-806e-ad1bd13cf2f6>
CC-MAIN-2017-09
http://www.itnews.com/article/3012144/android/android-a-to-z-a-glossary-of-android-jargon-and-technical-terms.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171971.82/warc/CC-MAIN-20170219104611-00469-ip-10-171-10-108.ec2.internal.warc.gz
en
0.933834
2,807
3.125
3
Study Finds 'Alarming' Ignorance About Cybercrime
"Consumers' unsecured computers play a major role in helping cybercriminals conduct cybercrimes," the National Cyber Security Alliance warns.
At the RSA Conference on Wednesday, the National Cyber Security Alliance (NCSA) reported that U.S. consumers don't understand botnets, networks of compromised computers that have become one of the major methods for attacking computer systems. "Botnets continue to be an increasing threat to consumers and homeland security," said Ron Teixeira, executive director of the NCSA, in a statement. "Consumers' unsecured computers play a major role in helping cybercriminals conduct cybercrimes not only on the victim's computer, but also against others connected to the Internet."
The NCSA survey involved 2,249 online consumers between the ages of 18 and 65, polled by Harris Interactive. The NCSA said its study indicates that Americans don't fully understand that their computers can be subverted, thereby degrading security for others. Among the study's findings: 71% are not familiar with the term "botnet"; 59% believe it's unlikely that their computer could affect homeland security; 47% believe it's not possible for their computer to be commandeered by hackers; 51% have not changed their password in the past year; and 48% do not know how to protect themselves from cybercriminals.
Such findings should come as no surprise. Last October, a joint study conducted by McAfee and the NCSA found that almost half the consumers surveyed erroneously believed their computers were protected by antivirus software. Moreover, the ongoing success of social engineering attacks demonstrates that people are easily fooled. And really, given the frequency with which studies exposing people's ignorance about all manner of things appear, it should be assumed that more education about everything is needed.
Teixeira considers it "alarming" that people don't know how to keep their computers secure. That may well be cause for alarm, but it's worth noting that companies with highly paid IT professionals get hacked, too.
<urn:uuid:249dd3b5-e6e2-43e5-9a3a-ac80ae100231>
CC-MAIN-2017-09
http://www.darkreading.com/risk-management/study-finds-alarming-ignorance-about-cybercrime/d/d-id/1066719?piddl_msgorder=thrd
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501169776.21/warc/CC-MAIN-20170219104609-00413-ip-10-171-10-108.ec2.internal.warc.gz
en
0.963978
440
2.640625
3
The federal government already holds many of the tools it needs to effectively recruit, hire and retain many of our nation’s smartest and most highly-skilled workers – those in science, technology, engineering, mathematics and medical fields. Agencies just need to leverage those tools more effectively, according to a new report. The report, “The Biggest Bang Theory: How to Get the Most Out of the Competitive Search for STEMM Employees,” released Wednesday by the Partnership for Public Service and Booz Allen Hamilton, outlines several steps agencies can take to compete with other industries for the so-called “Sheldons” – a reference to the brilliant young physicist in the hit sitcom “The Big Bang Theory.” “It’s a good news-bad news story,” Dr. Ronald Sanders, a vice president and fellow at Booz Allen Hamilton, said Wednesday. “The bad news is that STEMM talent is in short supply, and in government, with citizenship and clearance requirements, they further shrink the candidate pool. . . The good news is that the government has a lot of tools as its disposal to compete in that market.” Roughly one-quarter of the federal government is composed of employees with STEMM skills, with 15.4 percent of those employees working in information technology fields. STEMM fields are even more top-heavy than other federal job fields, with 74 percent of federal STEMM workers over the age of 40, and just 7.6 percent under age 30, according to the report. So how do agencies effectively compete for STEMM workers while also meeting obligations to federal policies and objectives? The Partnership and Booz Allen Hamilton interviewed 13 federal agencies and five private sector companies to come up with some best practices that “can help agencies sharpen their game for STEMM hiring.” “We found many positive, proactive steps agencies are taking to bring top STEMM talent on board, practices that in many cases can be replicated across government,” the report states. “These agencies operate under most of the same constraints as any other, but they have worked within the system to maximize their hiring impact.” The report offers 10 recommendations for landing STEMM talent: Use the mission as the magnet: Market the unique opportunities only your agency offers. Recruit upstream: Recruit students as young as their mid-teens up through the college level by hosting workshops, lectures and internships. Send out the Sheldons: Enlist current STEMM workers to visit college campuses and career fairs to recruit talent. Keep their eyes on an XPRISE: Use competitions to open the door to talented people. Go virtual: Use the Internet, social media and other online tools to communicate the mission and opportunities with potential STEMM recruits. Offer quantum-leap career paths: Allow STEMM workers to take on varied assignments throughout their careers. Start a chain reaction: Share lists of pre-screened candidates throughout your agency. Beta-test your talent: Give potential candidates challenging assignments to prove their skills. Create a Parallel Universe: Offer dual career paths for STEMM workers that allow for opportunities to excel in areas other than management. Find the Prime Numbers: Use data and dashboards to measure hiring, performance and job satisfaction. While the list may seem daunting, Sanders recommends that agencies with high numbers of STEMM jobs begin by forming a close alliance between human resources professionals and the senior technical colleagues. 
“So many of the strategies outlined depend on a close partnership between the HR people and the people who actually speak the language and have been practicing in the STEMM disciplines,” he said. At the same time, the government’s employment brand has taken a bit of a hit in light of sequestration, budget cuts and pay freezes, Sanders added. “This is a pretty risky time for management in the federal government,” he said. “Agencies are going to have to try that much harder to say, ‘we are still here, and we are a great employer of STEMM talent.’”
<urn:uuid:c1f21fde-e709-4d8e-946f-10678b100d9b>
CC-MAIN-2017-09
http://www.nextgov.com/cio-briefing/wired-workplace/2013/05/ten-tips-attract-sheldons-your-agency/63182/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173405.40/warc/CC-MAIN-20170219104613-00341-ip-10-171-10-108.ec2.internal.warc.gz
en
0.956556
854
2.6875
3
Sophos has revealed new research into the use of other people's Wi-Fi networks to piggyback onto the internet without payment. The research shows that 54 percent of computer users have admitted breaking the law by using someone else's wireless internet access without permission. According to Sophos, many internet-enabled homes fail to secure their wireless connection properly with passwords and encryption, allowing freeloading passers-by and neighbours to steal internet access rather than paying an internet service provider (ISP) for their own. In addition, while businesses often have security measures in place to protect the Wi-Fi networks within their offices from attack, Sophos experts note that remote users working from home could prove to be a weak link in corporate defences.
Stealing Wi-Fi internet access may feel like a victimless crime, but it deprives ISPs of revenue. Furthermore, if you've hopped onto your next door neighbours' wireless broadband connection to illegally download movies and music from the net, chances are that you are also slowing down their internet access and impacting on their download limit. For this reason, most ISPs put a clause in their contracts ordering users not to share access with neighbours – but it's very hard for them to enforce this.
Survey question: "Have you ever used someone else's Wi-Fi connection without their permission?" (Sophos online survey, 560 respondents, 31 October – 6 November 2007.)
Sophos recommends that home owners and businesses alike set up their networks with security in mind, ensuring that strong encryption is in place to prevent hackers from eavesdropping on communications and potentially stealing usernames, passwords and other confidential information.
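The practical takeaway is to enable strong encryption (WPA2 or later) with a good passphrase. To illustrate why the passphrase matters, the sketch below derives a WPA2 pre-shared key the standard way, PBKDF2-HMAC-SHA1 over the passphrase with the SSID as salt and 4,096 iterations; the SSID and passphrase shown are placeholders.

```python
import hashlib

def wpa2_psk(passphrase: str, ssid: str) -> bytes:
    """Derive the 256-bit WPA2 pre-shared key from a passphrase and SSID."""
    return hashlib.pbkdf2_hmac(
        "sha1",
        passphrase.encode("utf-8"),
        ssid.encode("utf-8"),
        4096,       # iteration count fixed by the WPA2 standard
        dklen=32,   # 256-bit key
    )

# Placeholder credentials; a long, random passphrase resists dictionary attacks.
psk = wpa2_psk("correct horse battery staple 42!", "HomeNetwork")
print(psk.hex())
```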
<urn:uuid:b6f06c2e-18c8-46ed-8cc2-64f755757cb4>
CC-MAIN-2017-09
https://www.helpnetsecurity.com/2007/11/15/wi-fi-piggybacking-widespread/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173405.40/warc/CC-MAIN-20170219104613-00341-ip-10-171-10-108.ec2.internal.warc.gz
en
0.936982
342
2.625
3
Here’s an interesting dilemma: What if you were awarded millions of dollars to build a new, state-of-the-art, 1-petaflops supercomputer, but had no place to put it? That’s the situation that the US Department of Energy’s National Renewable Energy Laboratory (NREL) faced a few years ago. Congress appropriated money for NREL to order a new supercomputer system, but its existing datacenter was too small to hold it. NREL’s solution: It began working on a new energy-efficient datacenter, one designed to be cheaper to build and operate than comparable datacenters. At the same time, it pooled some money with Sandia National Laboratories in Albuquerque, NM, in order to jointly purchase a 500 teraflops system. The labs installed that system in the Sandia datacenter, and both organizations have access to it until NREL’s datacenter is fully equipped. NREL then requisitioned its new computer from HP. The whole process is now coming to completion. The new datacenter is largely done. The first phase of the new computer system has been installed and tested. Delivery of phase two – the 1-petaflops system – is about to begin. It’s just in time. The shared computer at Sandia, a Red Mesa system from Sun’s pre-Oracle days, is no longer sufficient to serve both labs’ needs. It is averaging 92 percent utilization, day in and day out. Despite the time it took, there were advantages to this approach. HP is sending NREL some new, still-unnamed servers that not only include some of the the latest Intel Xeon processors and Xeon Phi co-processors, but also a new warm-water liquid cooling system that HP has not yet unveiled to the public. NREL was also able to essentially design a datacenter around its new computer system in order to create an integrated whole. The cooling system, for example, makes compressor-based chillers unnecessary. The servers use 480 VAC power, which eliminates power converters. Less equipment means more space, enabling the servers to be packed into just 10,000 square feet of raised floor space. Warm-water cooling means most of the servers do not require hot and cold aisle containment. The hot water can be used to heat the building or melt snow. “Taking this integrated look at a datacenter from an energy efficient building perspective drove a lot of the decisions we made,” says NREL Computational Science Center Director Steve Hammond. “Otherwise you could make locally-optimized decisions that are not as efficient as they could be if you stepped back” to see the big picture. The first racks of the new system were delivered last November, right after SC12. More arrived in early January. The final four racks (out of 10 total) arrived on February 19. Most of the equipment consists of HP ProLiant SL230s and SL250s Gen8 servers powered by Intel Xeon E5-2670 8-core CPUs. This is the Sandy Bridge generation, using 32nm technology. However, those last four racks each contain something new. They hold prototypes of a next-generation server family that HP will be introducing to the rest of the world next year. This new server uses next-generation Intel Xeon Ivy Bridge processors and Intel Xeon Phi coprocessors, both built on 22nm technology. These servers also feature HP’s prototype direct-to-chip warm-water liquid cooling system. “The primary heat exchange is at the chip level, with heat going (directly) to liquid rather than going to air first and then liquid,” says NREL’s Hammond. Water will arrive at the servers at about 75 degrees Fahrenheit and leave at about 100 degrees F. 
The combination of the ProLiant servers and the new prototypes comprises phase one, consisting of about 11,500 cores in 10 compute racks. That system reached over 200 teraflops on LINPACK tests last month, meeting its intermediate performance milestone. The real show, however, comes with phase two. That's a 1-petaflops system made up entirely of HP's new servers, including the new cooling system. These are the first production versions HP is delivering to a customer. They should start arriving from Houston by early summer and will be standing in the datacenter before the end of August.
To say this is a showcase datacenter is an understatement. It has floor-to-ceiling glass windows to allow visitors to look in from the corridors. "People say it looks more like an aquarium than a datacenter," says Hammond. Part of the idea is to show off its energy efficiency for others interested in saving energy and money.
Hammond is hoping, however, that the datacenter-under-glass doesn't become too popular a display. He's already regularly guiding visitors past the aquarium, despite the fact that the main system is not yet installed. He needs to get some work done.
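To put the warm-water cooling figures in perspective, here is a short sketch of the underlying heat-balance arithmetic (Q = mass flow x specific heat x temperature rise). The 75 F inlet and 100 F outlet temperatures come from the article; the flow rates are assumed example values.

```python
def water_heat_removal_kw(flow_liters_per_min, t_in_c, t_out_c):
    """Heat carried away by the cooling loop in kW: Q = m_dot * c * delta_T."""
    mass_flow_kg_s = flow_liters_per_min / 60.0   # about 1 kg per liter of water
    specific_heat = 4.186                         # kJ/(kg*K) for water
    return mass_flow_kg_s * specific_heat * (t_out_c - t_in_c)

def f_to_c(f):
    return (f - 32) * 5.0 / 9.0

# Article figures: water in at about 75 F, out at about 100 F; flow is assumed.
t_in, t_out = f_to_c(75), f_to_c(100)     # roughly 23.9 C in, 37.8 C out
for flow in (200, 500, 1000):             # liters per minute (example values)
    kw = water_heat_removal_kw(flow, t_in, t_out)
    print(f"{flow:5d} L/min removes about {kw:,.0f} kW of heat")
```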
<urn:uuid:d27213ab-9122-46c7-b2fd-0af71f7f2e47>
CC-MAIN-2017-09
https://www.hpcwire.com/2013/03/13/nrel_s_energy_efficient_supercomputer_debuts_new_tech/
null
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174276.22/warc/CC-MAIN-20170219104614-00517-ip-10-171-10-108.ec2.internal.warc.gz
en
0.944158
1,071
3.09375
3
Ransomware is the devilish and extremely debilitating program designed to lock and encrypt files in order to extort money from consumers, business owners, and even government officials. It seems that no one is safe in the fight against ransomware. Most ransomware programs are targeted at the most popular operating system, Windows, but ransomware can and will target other systems such as Android applications, Mac OS X and possibly even smart TVs in the near future. Not only is this an unsettling forecast for consumers, but also a call to action for preventative measures to protect your most important data files.
What can be done?
Most users have learned the hard way that it is better to back up sensitive data to an external hard drive. However, this type of malware is tuned in to this. When a ransomware program infiltrates a computer, it infects all accessible drives and shared networks, encrypting all files found. This makes for a very irritating discovery of locked data across the board. Rather than rely on the external hard drive method for backups, it is suggested that consumers adopt a new best practice. Ensure at least three copies of sensitive data are made, and stored in two different formats. At least one of these copies should be stored off-site or offline. This way, if ransomware locks files away, consumers are not forced into a sticky situation of deciding whether to risk paying for the data retrieval or losing the data forever.
What to do when faced with ransomware?
Not much can be done once ransomware has attacked. Most security researchers advise not paying for files to be unlocked, as there is no guarantee that the hackers will provide the decryption key once paid. Security vendors also worry about the implications for fueling the fire. The more consumers give in and pay for the safe return of their data, the further encouraged ransomware criminals become to continue this practice of extortion.
If I haven't said it enough already, I will say it again. Prevention is key. Know how ransomware reaches your computer. Be especially careful of email attachments, Word documents with macro code, and malicious advertisements. Always keep the software on your computer up to date. It is especially important to ensure that the operating system, browsers, and programs such as Flash Player, Adobe Reader, and Java are always updated when updates are available. Unless you have verified the senders, never enable the execution of macros in documents. Finally and most importantly, perform daily activities from a limited user account rather than an administrative one. And always, always, utilize a well-running and up-to-date antivirus program.
If you would like to educate yourself in more detail about material presented in this blog post, please visit:
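The backup advice above (at least three copies, two formats, at least one offline or off-site) can be partly automated. Below is a minimal Python sketch that copies a folder to a second drive and writes a SHA-256 manifest so a later restore can be verified; the paths are placeholders, and a real setup would still need a copy that malware on the machine cannot reach.

```python
import hashlib
import json
import shutil
from pathlib import Path

SOURCE = Path("~/Documents").expanduser()   # placeholder source folder
BACKUP = Path("/mnt/backup/documents")      # placeholder second-drive target

def sha256(path: Path) -> str:
    """Checksum one file in 1 MB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Copy the tree, then write a checksum manifest next to the copy.
shutil.copytree(SOURCE, BACKUP, dirs_exist_ok=True)
manifest = {
    str(p.relative_to(BACKUP)): sha256(p)
    for p in BACKUP.rglob("*") if p.is_file()
}
(BACKUP.parent / "manifest.json").write_text(json.dumps(manifest, indent=2))
print(f"Backed up {len(manifest)} files; verify against manifest.json before restoring.")
```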
With vast amounts of sensitive data traversing the internet daily, one of the biggest challenges of modern-day protection is to properly encrypt and store this data so it cannot be compromised, whether by internal threats or by hackers. You can see the dilemma, though: if you use a key to protect your data, then whoever has the key can gain access to the data. Sure, you could encrypt the key or the machine on which it's stored, but that would require a new key stored somewhere, and on and on in an infinite loop. What is a company supposed to do?
Don't View the World Through an Outdated Lens
Cloud services, in particular, have been guilty of taking the legacy on-premise model and saying, "It worked here, so it should work there!" This is only a half-truth; it has been shown time and again that when the legacy approach is applied to the cloud, the advantages and cost efficiencies quickly evaporate.
One longstanding and well-understood encryption model is digital envelope encryption. The concept is straightforward: you generate a key, encrypt the file or message with it, and then encrypt that key itself using something from the recipient party. Unlocking the file or message now requires two distinct elements, which significantly raises the difficulty of getting at the underlying data.
In digital envelope encryption, the nuance is in how the key itself is encrypted using something only the recipient knows, for example a self- or company-generated password. This produces a token which can then be safely stored. The only person who can get to the key, and subsequently the data, is the person with the password that unlocks the token.
Although digital envelope encryption is often leveraged by the largest cloud infrastructure providers (such as Amazon Web Services) to protect sensitive data, its adoption in the cloud space is still limited. But can this well-known model be adapted to fit today's cloud-based reality? We believe it can.
The solution is to store only the aforementioned unique token, one per user, in the cloud. When a user provides the correct credentials, the token is challenged, unlocking the key into resident session memory and establishing secure cloud data storage and access for the user. When the user session ends, the memory-resident key is destroyed and only the original stored token remains. The critical thing to keep in mind is that the key never exists, and is never transferred, outside of that memory-resident session. In other words, there is no opportunity for the key to be put into a compromised situation. And if the service is built on a fully certified infrastructure such as Amazon or Azure, the infrastructure itself is highly secured as well, adding an additional layer of environmental protection.
Additional benefits of cloud-based digital envelope encryption include:
- Extended Services Accessibility: Because the key is only ever resident in session memory, encryption and decryption can occur cloud-side without putting the data at risk. This enables more services to be applied to the underlying data as needed.
- Data Lockout Prevention: Because individual per-user tokens act as proxies for the key, the core key can be an enterprise-wide key. This allows an admin, for example, to access the data if required, or to easily reset an end user's password to recover data.
- No Vendor Access: The vendor has no access to the key, which is stored only as encrypted tokens. Therefore, the vendor has no access to the data, and not even a court order, subpoena, or warrant can compel it to produce readable data.
- Key Durability: The core key can be enterprise-wide without fear of being compromised or corrupted, because it is contained within each token and exists only in memory. This removes the need for heavyweight key management and guards against data lockout issues.
For these reasons, Druva has standardized on digital envelope encryption for our cloud services, which are being used by some of the most security-conscious enterprises in the world. Cloud-based digital envelope encryption allows these customers both to secure their data and to do more with it through extended services around compliance, legal data management, analytics, and search, while taking advantage of the cost efficiencies that cloud services have to offer.
We invite you to download the full technical brief, 'State-of-the-Art Security In The Cloud Era', to learn more about the advantages of this encryption method. For a closer look at Druva's unique approach to cloud data protection, click here and see how to keep your data properly secured in the cloud.
Looking to increase your security posture? Learn about five unforeseen risks you should consider when it comes to file sharing. Read our white paper, 5 Unseen Risks in Enterprise File Sharing, to learn more.
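As a footnote to the model described above, here is a minimal sketch of digital envelope encryption in Python, using the widely available cryptography package. It illustrates the general technique only; it is not Druva's implementation, and the function names, key-derivation parameters, and storage layout are assumptions made for the example.
```python
# Sketch of digital envelope encryption: a random data key encrypts the payload,
# and a password-derived key wraps ("envelopes") the data key. Illustrative only.
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC


def _password_key(password: bytes, salt: bytes) -> bytes:
    # Derive a key-encryption key from the user's password (parameters are assumptions).
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=480_000)
    return base64.urlsafe_b64encode(kdf.derive(password))


def seal(data: bytes, password: bytes):
    data_key = Fernet.generate_key()              # random key that encrypts the actual data
    ciphertext = Fernet(data_key).encrypt(data)
    salt = os.urandom(16)
    token = Fernet(_password_key(password, salt)).encrypt(data_key)  # the wrapped key
    return ciphertext, token, salt                # only these three values are ever stored


def unseal(ciphertext: bytes, token: bytes, salt: bytes, password: bytes) -> bytes:
    data_key = Fernet(_password_key(password, salt)).decrypt(token)  # unwrap in memory only
    return Fernet(data_key).decrypt(ciphertext)


ct, tok, salt = seal(b"quarterly financials", b"correct horse battery staple")
assert unseal(ct, tok, salt, b"correct horse battery staple") == b"quarterly financials"
```
The point to notice is that the data key exists in unwrapped form only inside the running process; at rest, nothing but the ciphertext, the salt, and the wrapped token is stored, which mirrors the session-memory behaviour described above.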
The following code is a part of an Activity class to create a dialog. Which is the Activity class method used to display this dialog?
Which of these is the correct method to persist SharedPreferences?
Which of these is the incorrect explanation of the Java Native Interface (JNI)?
Which of these is called after the end of each test method of ActivityInstrumentationTestCase2, a class which provides unit tests of Activity functionality?
Which is the correct explanation of ListView?
Which of these Activity class methods must be overridden when creating a Menu that is displayed when the device's Menu button is pressed?
Which permission is necessary to execute Bluetooth actions such as connection requests, connection receipt, and data forwarding?
Which class is used when a sensor is accessed?
Sytek Inc. developed NetBIOS in 1983 as an API (a specification for software components to communicate through a common interface) for IBM's PC LAN networking technology. The Network Basic Input/Output System (NetBIOS) was first introduced by IBM to give applications access to LAN resources. Since its creation, NetBIOS has served as a starting point for many other networking APIs and applications.
NetBIOS is an interface specification for accessing networking services. As a software layer, it connects a network operating system to the specific LAN hardware underneath. An improved form of NetBIOS later allowed programs written to the NetBIOS interface to run on IBM's Token Ring architecture.
NetBIOS gives network applications a set of "hooks" for inter-application messaging and data transfer. In a basic sense, NetBIOS lets applications talk to the network. It defines the interface between network applications and the LAN's operating capabilities, decoupling application programs from dependence on particular hardware.
Computers within a NetBIOS LAN are known by name: every computer active on the network is automatically assigned its own name. These names identify network resources, and application programs use them to open and close sessions.
Computers in such a LAN communicate either by starting a session or by using NetBIOS datagram and broadcast methods. Sessions allow large messages to be sent over the network and provide error detection and correction; this communication is strictly one-to-one. Datagram and broadcast methods, by contrast, let one system send size-limited messages to many other computers simultaneously, without starting a session, but they provide no error detection or correction.
NetBIOS is commonly run over Ethernet, Token Ring, and IBM PC Network. It supports connection-oriented communication (as with TCP) and connectionless communication (as with UDP), and it can carry broadcasts and multicasts. NetBIOS provides three services: Name, Datagram, and Session.
The NetBIOS suffix (NetBIOS End Character, or endchar) is the 16th character of a NetBIOS name; it specifies the record or service type of the registered name record. The number of record types is limited to 255, though in practice the number of commonly used suffixes is substantially smaller. The most common NetBIOS suffixes (in hexadecimal), as used by Microsoft Windows NT, are:
- <computername> 00 U Workstation Service
- <computername> 06 U RAS Server Service
- <computername> 22 U Exchange Interchange
The value of a NetBIOS name's 16th character can be described as:
- 00: Workstation Service
- 20: File Service (Host Record)
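As a rough illustration of how that 16th-character suffix fits into a name, the sketch below pads a NetBIOS name to 15 characters, appends the suffix byte, and applies the first-level encoding defined in RFC 1001/1002. The host name and suffix values are invented for the example.
```python
# Build a 16-byte NetBIOS name (15-character name + 1-byte suffix) and apply
# RFC 1001/1002 first-level encoding: each half-byte is added to ord('A').
def encode_netbios_name(name: str, suffix: int = 0x00) -> bytes:
    raw = name.upper().ljust(15)[:15].encode("ascii") + bytes([suffix])
    encoded = bytearray()
    for byte in raw:
        encoded.append((byte >> 4) + ord("A"))      # high nibble
        encoded.append((byte & 0x0F) + ord("A"))    # low nibble
    return bytes(encoded)                           # always 32 bytes on the wire


# The same host registers different records by changing only the suffix.
print(encode_netbios_name("FILESRV01", 0x00))  # Workstation Service record
print(encode_netbios_name("FILESRV01", 0x20))  # File (Server) Service record
```
Running it shows that the two records differ only in their final two encoded bytes, which is how one machine can appear on the network both as a workstation and as a file server under the same name.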
How Virtualization Figures into Power Savings
How does virtualization figure into this power-saving equation? Does it save, or cost us, energy?
When you transfer a vast amount of data on a virtualized basis, you're going to be activating areas within the data center that have probably cooled down and not processed anything in a while, so the local building or energy management systems may have throttled those areas down to save energy. But you need to be able to go there, because virtualization, to be successful, has two components to an equation that most people don't realize: you virtualize IT, but you also have to have an equal virtualization of the facility. So, "VxIT," from a mathematical perspective, is equal to "VxFacility." You have to keep the two in harmony.
There is a reason for that. When you virtualize a process on the IT side in a data center that is not a greenfield build, the problem is that the data center was designed with upper and lower [power] limits. We always knew what happens when you exceed a power limit: the system shuts down. But we did not understand what would happen if you could actually drive a process below its design requirements.
Power and cooling are designed for a window of operation. When you go below the lower limit of that window, what happens from a cooling perspective? Systems will shut off. The root-cause analysis is done, the data center crashed, yet no one knows why; the system simply shut down. What happened was that virtualization saw there was a problem and transferred the workload someplace else, so the load went back above the design requirement again. The same thing happens with the power systems: the frequency among multiple UPSes [uninterruptible power supplies] can become unstable, and when that instability exceeds the threshold level, they'll take themselves offline. The safety circuits are operating; they're doing what they were designed to do.
So what's the answer? The answer is to understand that when you virtualize the IT, you have to review the facility part as well.
The last we had heard of Google's self-driving car project was in August 2012, when Google announced that the cars had driven 300,000 autonomous miles. Today, that number has more than doubled to 700,000, and Google's cars are now tackling driving in the big city.
Google's cars could probably perform well in an ideal driving environment, but the real world is full of one-off situations that a self-driving car will need to be able to navigate. By trying to figure out city navigation, Google's cars more frequently run into these strange traffic situations.
Getting self-driving cars on the road requires teaching them to deal with a myriad of abnormal traffic conditions, and Google says it has built software models for "thousands" of different situations. The company has taught its cars to detect construction zones by recognizing yellow signs and traffic cones, and the cars can even change lanes as indicated by the cone layout. The cars now move to keep a safe distance from obstacles, like vehicles stopped on the side of the road. They won't stop in the middle of a railroad crossing, but instead will wait on one side of it until there is enough room in the traffic ahead for the car to fully cross.
Often, the worst part of driving is other drivers and pedestrians, but Google says, "What looks chaotic and random on a city street to the human eye is actually fairly predictable to a computer." The cars now model things like the likeliness of a car running a red light, and they can detect cyclists, read their hand signals, and predict their movement. Google's cars have also learned to not run down pedestrians and cyclists at crosswalks, and they can even track objects behind them.
The company admits that it still has a lot of work to do, and it's sticking to its hometown of Mountain View for now. Google notes that over 30,000 people die in traffic accidents in the US every year. A self-driving car will never get tired, distracted, or drunk, and it can see farther than its human counterparts, see at night, and see in 360 degrees. This is one of the rare projects that could change the world, and Google says it's "optimistic" that a self-driving car is an achievable goal.
From Minecraft models to body parts and many things in between. Last week Yoshitomo Imura, 27, was arrested in Japan for printing five 3D guns, two of which were functional. It's not the first time scare stories have emerged about the potential capabilities of 3D printing, and in this piece we take you through some of the wackier suggestions for the budding technology.
1) Body parts
Even as one part of humanity is making weapons, another is trying its utmost to create body parts, and without resorting to human farming. So far scientists have printed ears, kidneys and bones, and although organ functionality is a work in progress, it's likely printing will be making major contributions to medicine over the next few decades.
2) Food
Printed food may not sound appetising, but this logical extension of food processing has some key advantages. Home cooks creating ornate chocolate designs will find Choc Edge's printer expands their culinary capacity, even if the £2,900 price is rather steep. NASA has also been looking into pizza printers for its hungry astronauts, and other printers dedicated to pasta, corn and candy hint at the possibilities for food fabrication.
3) Computer circuits
With the rise of the Raspberry Pi we have seen increasingly advanced forays into serious DIY electronics, and Fujifilm Dimatix's printers have continued down an obvious path, allowing you to design and print customised circuits and electronics.